
IBM Tivoli Netcool Performance Manager 1.3.2
Wireline Component
Document Revision R2E1
Installation Guide

Note
Before using this information and the product it supports, read the information in Notices on page 251.
Copyright IBM Corporation 2006, 2012.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Preface . . . . . . . . . . . . . . vii
Audience . . . . . . . . . . . . . . . vii
Tivoli Netcool Performance Manager - Wireline
Component . . . . . . . . . . . . . . vii
The Default UNIX Shell . . . . . . . . . . ix
Chapter 1. Introduction . . . . . . . . 1
Tivoli Netcool Performance Manager architecture . . 1
Co-location rules . . . . . . . . . . . . 2
Inheritance . . . . . . . . . . . . . . 4
Notable subcomponents and features . . . . . 5
Typical installation topology . . . . . . . . . 8
Basic topology scenario . . . . . . . . . . 8
Intermediate topology scenario . . . . . . . 9
Advanced topology scenario. . . . . . . . 10
Tivoli Netcool Performance Manager distribution. . 11
Chapter 2. Requirements . . . . . . . 13
Minimum requirements for installation . . . . . 13
Solaris hardware requirements . . . . . . . 13
AIX hardware requirements . . . . . . . . 14
Linux hardware requirements . . . . . . . 14
Oracle deployment space requirements . . . . 14
Tivoli Integrated Portal deployment space
requirements . . . . . . . . . . . . . 15
Screen resolution . . . . . . . . . . . 15
Minimum requirements for a proof of concept
installation . . . . . . . . . . . . . . 15
Solaris hardware requirements (POC). . . . . 15
AIX hardware requirements (POC) . . . . . 16
Linux hardware requirements (POC) . . . . . 16
Screen resolution . . . . . . . . . . . 16
Supported operating systems and modules . . . . 17
Solaris 10 for SPARC platforms . . . . . . . 17
AIX platforms . . . . . . . . . . . . 20
Linux platforms . . . . . . . . . . . . 24
Required user names . . . . . . . . . . . 27
pvuser . . . . . . . . . . . . . . . 27
oracle . . . . . . . . . . . . . . . 27
Ancillary software requirements . . . . . . . 28
FTP support . . . . . . . . . . . . . 28
OpenSSH and SFTP . . . . . . . . . . 28
File compression. . . . . . . . . . . . 29
DataView load balancing . . . . . . . . . 29
Oracle Database . . . . . . . . . . . . 30
Oracle server . . . . . . . . . . . . . 30
Tivoli Common Reporting client . . . . . . 30
Java Runtime Environment (JRE) . . . . . . 31
Web browsers and settings . . . . . . . . 31
X Emulation . . . . . . . . . . . . . 32
WebGUI integration . . . . . . . . . . 33
Microsoft Office Version . . . . . . . . . 33
Chapter 3. Installing and configuring
the prerequisite software . . . . . . . 35
Overview . . . . . . . . . . . . . . . 35
Supported platforms . . . . . . . . . . 36
Pre-Installation setup tasks . . . . . . . . . 36
Setting up a remote X Window display . . . . 36
Changing the ethernet characteristics . . . . . 37
Adding the pvuser login name . . . . . . . 40
Setting the resource limits (AIX only) . . . . . 42
Set the system parameters (Solaris only) . . . . 43
Enable FTP on Linux systems (Linux only) . . . 45
Disable SELinux (Linux only) . . . . . . . 45
Set the kernel parameters (Linux only) . . . . 45
Replace the native taring utility with gnu tar
(AIX 7.1 only) . . . . . . . . . . . . 46
Install a libcrypto.so . . . . . . . . . . 47
Deployer pre-requisites . . . . . . . . . . 47
Operating system check . . . . . . . . . 47
Mount points check . . . . . . . . . . 48
Authentication between distributed servers . . . 48
Downloading the Tivoli Netcool Performance
Manager distribution to disk . . . . . . . . 48
Downloading Tivoli Common Reporting to disk 49
General Oracle setup tasks . . . . . . . . . 49
Specifying a basename for DB_USER_ROOT . . 50
Specifying Oracle login passwords. . . . . . 51
Assumed values . . . . . . . . . . . . 52
Install Oracle 11.2.0.2 server (64-bit) . . . . . . 53
Download the Oracle distribution to disk . . . 53
Verify the required operating system packages. . 54
Run the Oracle server configuration script . . . 54
Set a password for the Oracle login name . . . 57
Run the preinstallation script . . . . . . . 57
Run the rootpre.sh script (AIX only) . . . . . 58
Verify PATH and Environment for the Oracle
login name . . . . . . . . . . . . . 58
Install Oracle using the menu-based script . . . 59
Run the root.sh script . . . . . . . . . . 61
Set the ORACLE_SID variable . . . . . . . 62
Set automatic startup of the database instance . . 63
Configure the Oracle listener . . . . . . . 63
Configure the Oracle net client . . . . . . . 65
Install Oracle 11.2.0.2 client (32-bit) . . . . . . 67
Download the Oracle distribution to disk . . . 67
Run the Oracle client configuration script . . . 68
Set a password for the Oracle login name . . . 70
Run the preinstallation script . . . . . . . 70
Verify PATH and Environment for the Oracle
login name . . . . . . . . . . . . . 71
Install the Oracle client (32-bit) . . . . . . . 71
Run the root.sh script . . . . . . . . . . 73
Configure the Oracle Net Client . . . . . . 74
Update the oracle user's .profile . . . . . . 74
Configure the Oracle Net client . . . . . . . 75
Next steps . . . . . . . . . . . . . . . 77
Chapter 4. Installing in a distributed
environment . . . . . . . . . . . . 79
Distributed installation process . . . . . . . . 79
Starting the Launchpad . . . . . . . . . . 81
Installing the Topology Editor . . . . . . . . 82
Starting the Topology Editor. . . . . . . . . 83
Creating a new topology . . . . . . . . . . 84
Adding and configuring the Tivoli Netcool
Performance Manager components . . . . . . 84
Add the hosts . . . . . . . . . . . . 84
Add a database configurations component . . . 86
Add a DataMart . . . . . . . . . . . . 87
Add a Discovery Server . . . . . . . . . 89
Add a Tivoli Integrated Portal . . . . . . . 90
Add a DataView. . . . . . . . . . . . 91
Add the DataChannel administrative components 92
Add a DataChannel . . . . . . . . . . 93
Add a Collector . . . . . . . . . . . . 95
Add a Cross Collector CME . . . . . . . . 98
Saving the topology . . . . . . . . . . . 99
Opening an existing topology file . . . . . 100
Starting the Deployer. . . . . . . . . . . 100
Primary Deployer . . . . . . . . . . . 100
Secondary Deployers . . . . . . . . . . 101
Pre-deployment check . . . . . . . . . 101
Deploying the topology . . . . . . . . . . 102
Reuse an existing Tivoli Integrated Portal and
Install DataView using a non root user on a
local host . . . . . . . . . . . . . . 104
Reuse an existing Tivoli Integrated Portal and
Install DataView using a non root user on a
remote host . . . . . . . . . . . . . 106
Next steps . . . . . . . . . . . . . . 108
Resuming a partially successful first-time
installation . . . . . . . . . . . . . . 109
Chapter 5. Installing as a minimal
deployment . . . . . . . . . . . . 111
Overview. . . . . . . . . . . . . . . 111
Before you begin . . . . . . . . . . . . 111
Special consideration . . . . . . . . . . 112
Overriding default values . . . . . . . . 112
Installing a minimal deployment . . . . . . . 113
Download the MIB-II files . . . . . . . . 113
Starting the Launchpad . . . . . . . . . 113
Start the installation . . . . . . . . . . 114
The post-installation script . . . . . . . . . 116
Next steps . . . . . . . . . . . . . . 116
Chapter 6. Modifying the current
deployment . . . . . . . . . . . . 117
Opening a deployed topology . . . . . . . . 117
Adding a new component . . . . . . . . . 118
Changing configuration parameters of existing
Tivoli Netcool Performance Manager components . 120
Moving components to a different host . . . . . 120
Moving a deployed collector to a different host . . 121
Moving a deployed SNMP collector . . . . . 121
Moving a deployed UBA bulk collector. . . . 123
Changing the port for a collector . . . . . . . 126
Modifying Tivoli Integrated Portal and Tivoli
Common Reporting ports . . . . . . . . . 126
Changing ports for the Tivoli Common
Reporting console . . . . . . . . . . . 127
Port assignments . . . . . . . . . . . 128
Viewing the application server profile . . . . 128
Chapter 7. Using the High Availability
Manager . . . . . . . . . . . . . 131
Overview. . . . . . . . . . . . . . . 131
HAM basics . . . . . . . . . . . . . . 131
The parts of a collector . . . . . . . . . 132
Clusters . . . . . . . . . . . . . . 133
HAM cluster configuration . . . . . . . . . 133
Types of spare hosts . . . . . . . . . . 133
Types of HAM clusters . . . . . . . . . 134
Example HAM clusters . . . . . . . . . 134
Resource pools . . . . . . . . . . . . . 140
How the SNMP collector works . . . . . . . 140
How failover works with the HAM and the
SNMP collector. . . . . . . . . . . . 141
Obtaining collector status . . . . . . . . 142
Creating a HAM environment . . . . . . . . 143
Topology prerequisites . . . . . . . . . 144
Procedures . . . . . . . . . . . . . 144
Create the HAM and a HAM cluster . . . . 144
Add the designated spare . . . . . . . . 145
Add the managed definitions . . . . . . . 146
Define the resource pools . . . . . . . . 147
Save and start the HAM. . . . . . . . . 149
Creating an additional HAM environment. . . 150
Modifying a HAM environment . . . . . . . 150
Removing HAM components . . . . . . . 150
Stopping and restarting modified components 151
Viewing the current configuration . . . . . . 151
Show Collector Process... dialog . . . . . . 152
Show Managed Definition... dialog . . . . . 152
Chapter 8. Enabling Common
Reporting on Tivoli Netcool
Performance Manager. . . . . . . . 155
Model Maker . . . . . . . . . . . . . 155
The Base Common Pack Suite . . . . . . . . 156
Installing the BCP package from the distribution 157
Chapter 9. Uninstalling components 159
Removing a component from the topology . . . 159
Restrictions and behavior . . . . . . . . 159
Removing a component . . . . . . . . . 160
Uninstalling the entire Tivoli Netcool Performance
Manager system . . . . . . . . . . . . 161
Order of uninstall . . . . . . . . . . . 161
Restrictions and behavior . . . . . . . . 162
Performing the uninstall. . . . . . . . . 162
Uninstalling the topology editor . . . . . . . 163
Residual files . . . . . . . . . . . . . 164
Appendix A. Remote installation
issues . . . . . . . . . . . . . . 167
When remote install is not possible . . . . . . 167
FTP is possible, but REXEC or RSH are not . . 167
Neither FTP nor REXEC/RSH are possible . . 168
Installing on a remote host using a secondary
deployer . . . . . . . . . . . . . . . 168
Appendix B. DataChannel architecture 171
Data collection . . . . . . . . . . . . . 171
Data aggregation . . . . . . . . . . . 171
Management programs and watchdog scripts 172
DataChannel application programs . . . . . 173
Starting the DataLoad SNMP collector . . . . . 175
DataChannel management components in a
distributed configuration . . . . . . . . . 176
Manually starting the Channel Manager
programs . . . . . . . . . . . . . . 176
Adding DataChannels to an existing system . . . 177
DataChannel terminology . . . . . . . . . 178
Appendix C. Aggregation sets . . . . 181
Overview. . . . . . . . . . . . . . . 181
Configuring aggregation sets . . . . . . . . 181
Installing aggregation sets . . . . . . . . . 185
Start the Tivoli Netcool Performance Manager
setup program . . . . . . . . . . . . 185
Set aggregation set installation parameters . . 185
Edit aggregation set parameters file . . . . . 188
Linking DataView groups to timezones. . . . . 189
Appendix D. Deployer CLI options 191
Using the -DTarget option . . . . . . . . . 192
Appendix E. Secure file transfer
installation . . . . . . . . . . . . 195
Overview. . . . . . . . . . . . . . . 195
Enabling SFTP . . . . . . . . . . . . . 195
Installing OpenSSH . . . . . . . . . . . 196
AIX systems. . . . . . . . . . . . . 196
Solaris systems . . . . . . . . . . . . 198
Linux systems . . . . . . . . . . . . 200
Configuring OpenSSH . . . . . . . . . . 200
Configuring the OpenSSH server . . . . . . 200
Configuring OpenSSH client . . . . . . . 201
Generating public and private keys . . . . . 201
Testing OpenSSH and SFTP . . . . . . . . 204
Troubleshooting . . . . . . . . . . . . 204
Netcool/Provisio SFTP errors . . . . . . . . 205
Appendix F. LDAP integration . . . . 207
Supported LDAP servers . . . . . . . . . 207
LDAP configuration . . . . . . . . . . . 207
Enable LDAP configuration . . . . . . . 207
Verifying the DataView installation . . . . . 208
Assigning Tivoli Netcool Performance Manager
roles to LDAP users . . . . . . . . . . 208
Appendix G. Using silent mode. . . . 211
Sample properties files . . . . . . . . . . 211
The Deployer . . . . . . . . . . . . . 211
Running the Deployer in silent mode . . . . 211
Confirming the status of a silent install . . . . 212
Restrictions . . . . . . . . . . . . . 213
The Topology Editor . . . . . . . . . . . 213
Appendix H. Installing an interim fix 215
Overview. . . . . . . . . . . . . . . 215
Installation rules . . . . . . . . . . . 215
Behavior and restrictions . . . . . . . . 215
Before you begin . . . . . . . . . . . . 216
Installing a patch . . . . . . . . . . . . 216
Appendix I. Error codes and log files 219
Error codes . . . . . . . . . . . . . . 219
Deployer messages . . . . . . . . . . 219
Topology Editor messages . . . . . . . . 232
InstallAnywhere messages . . . . . . . . 236
Log files . . . . . . . . . . . . . . . 237
COI log files. . . . . . . . . . . . . 237
Deployer log file . . . . . . . . . . . 238
Eclipse log file . . . . . . . . . . . . 238
Trace log file . . . . . . . . . . . . 238
Appendix J. Troubleshooting. . . . . 239
Deployment problems . . . . . . . . . . 239
Saving installation configuration files . . . . 241
Tivoli Netcool Performance Manager component
problems . . . . . . . . . . . . . . . 241
Topology Editor problems . . . . . . . . . 242
Telnet problems . . . . . . . . . . . . 242
Java problems . . . . . . . . . . . . . 243
Testing connectivity to the database . . . . . . 243
Testing external procedure call access . . . . . 244
Appendix K. Migrating DataView
content and users . . . . . . . . . 245
Moving DataView content between Tivoli
Integrated Portal servers. . . . . . . . . . 245
The synchronize command. . . . . . . . 245
Migrating SilverStream content to the Tivoli
Integrated Portal . . . . . . . . . . . . 246
SilverStream page conversion . . . . . . . 246
The migrate command . . . . . . . . . 248
Notices . . . . . . . . . . . . . . 251
Trademarks . . . . . . . . . . . . 255
Preface
The purpose of this manual
IBM Tivoli Netcool Performance Manager 1.3.2 is a bundled product consisting of a wireline component and a wireless component.
The purpose of this guide is to help you install the Tivoli Netcool Performance Manager product suite and the Oracle database management system.
This guide provides instructions for installing Tivoli Netcool Performance Manager
components, but not necessarily for configuring the installed components into a
finished system that produces management reports. After going through the steps
in this guide, you will have a set of running Tivoli Netcool Performance Manager
components ready to configure into a fully functional system.
The goal of this guide is to get each component installed and running in its barest form. A running component does not necessarily have network statistical data flowing into or out of it yet. In particular, at the end of this installation procedure, few, if any, management reports can be viewed in DataView.
Configuring installed components into a working system is the subject of other
manuals in the Tivoli Netcool Performance Manager documentation set.
Audience
The audience for this manual.
The audience for this manual is the network administrator or operations specialist
responsible for installing the Tivoli Netcool Performance Manager product suite on
an enterprise network. To install Tivoli Netcool Performance Manager successfully,
you should have a thorough understanding of the following subjects:
v Basic principles of TCP/IP networks and network management
v SNMP concepts
v Administration of the Linux, Solaris or AIX operating environment
v Administration of the Oracle database management system
v Tivoli Netcool Performance Manager
Tivoli Netcool Performance Manager - Wireline Component
IBM Tivoli Netcool Performance Manager consists of a wireline component (formerly Netcool/Proviso) and a wireless component (formerly Tivoli Netcool Performance Manager for Wireless).
Tivoli Netcool Performance Manager - Wireline Component consists of the
following subcomponents:
v DataMart is a set of management, configuration, and troubleshooting GUIs. The
Tivoli Netcool Performance Manager System Administrator uses the GUIs to
define policies and configuration, and to verify and troubleshoot operations.
v DataLoad provides flexible, distributed data collection and data import of SNMP
and non-SNMP data to a centralized database.
v DataChannel aggregates the data collected through Tivoli Netcool Performance
Manager DataLoad for use by the Tivoli Netcool Performance Manager
DataView reporting functions. It also processes online calculations and detects
real-time threshold violations.
v DataView is a reliable application server for on-demand, web-based network
reports.
v Technology Packs extend the Tivoli Netcool Performance Manager system with
service-ready reports for network operations, business development, and
customer viewing.
The following figure shows the different Tivoli Netcool Performance Manager
modules.
Tivoli Netcool Performance Manager documentation consists of the following:
v Release notes
v Configuration recommendations
v User guides
v Technical notes
v Online help
The documentation is available for viewing and downloading on the information
center at http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.tnpm.doc/welcome_tnpm.html.
The Default UNIX Shell
The installation scripts and procedures in this manual generally presume, but do not require, the use of the Korn or Bash shell; only Korn shell syntax is shown in the examples.
If you are a user of the C shell or Tcsh, make the necessary adjustments in the
commands shown as examples throughout this manual.
This guide uses the following shell prompts in the examples:
v # (pound sign) indicates commands you perform when logged in as root.
v $ (dollar sign) indicates commands you perform when logged in as oracle or
pvuser.
v SQL> indicates commands you perform at the SQL*Plus prompt.
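For example, a short hypothetical sequence that moves from a root shell to the oracle account and then into SQL*Plus, showing each prompt in turn (the commands are illustrative only, not steps from this guide):
# su - oracle
$ sqlplus /nolog
SQL> connect / as sysdba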
Chapter 1. Introduction
Introduction to Tivoli Netcool Performance Manager installation.
This chapter provides an overview of the Tivoli Netcool Performance Manager
product suite and provides important pre-installation setup information. In
addition, this chapter provides an overview of the installation interface introduced
in version 1.3.2.
Tivoli Netcool Performance Manager architecture
Tivoli Netcool Performance Manager system components.
The Tivoli Netcool Performance Manager components run on:
v SPARC-based servers from Sun Microsystems that run the Solaris operating
system
v AIX servers from IBM
v Linux servers
Exact, release-specific requirements, prerequisites, and recommendations for
hardware and software are described in detail in the IBM Tivoli Netcool
Performance Manager: Configuration Recommendations Guide.
You can work with Professional Services to plan and size the deployment of Tivoli
Netcool Performance Manager components in your environment.
The following diagram provides a high-level overview of the Tivoli Netcool
Performance Manager architecture.
The Tivoli Netcool Performance Manager system components are as follows:
v Tivoli Netcool Performance Manager database - The Tivoli Netcool
Performance Manager database is hosted on Oracle.
v Tivoli Netcool Performance Manager DataMart - Tivoli Netcool Performance
Manager DataMart is the user and administrative interface to the Tivoli Netcool
Performance Manager database and to other Tivoli Netcool Performance
Manager components.
v Tivoli Netcool Performance Manager DataLoad - Tivoli Netcool Performance
Manager DataLoad consists of one or more components that collect network
statistical raw data from network devices and from network management
systems.
v Tivoli Netcool Performance Manager DataChannel - Tivoli Netcool
Performance Manager DataChannel is a collection of components that collect
data from DataLoad collectors, aggregate and process the data, and load the
data into the Tivoli Netcool Performance Manager database. DataChannel
components also serve as the escalation point for collected data that is
determined to be over threshold limits.
v Tivoli Netcool Performance Manager DataView - Tivoli Netcool Performance
Manager DataView is the Web server hosting and analysis platform. This
platform is used to display Web-based management reports based on network
data aggregated and placed in the Tivoli Netcool Performance Manager
database.
v Tivoli Netcool Performance Manager Technology Packs - Each technology pack
is a set of components that describes the format and structure of network
statistical data generated by network devices. Each technology pack is specific
for a particular device, or class of devices; or for a particular company's devices;
or for a protocol (such as standard SNMP values) common to many devices.
v Tivoli Integrated Portal - The Tivoli Integrated Portal application provides a
database-aware Web server foundation for the Web-based management reports
displayed by Tivoli Netcool Performance Manager DataView. The Tivoli
Integrated Portal application server is an essential component of each DataView
installation.
Platform support
All subcomponents of the Tivoli Netcool Performance Manager - Wireline
Component can be installed on a mix of AIX, Linux, and Solaris operating systems;
that is, the operating systems on which the DataChannel, DataView, DataLoad and
DataMart subcomponents are installed do not have to be of the same type.
Co-location rules
Allowed component deployment numbers and co-location rules.
Table 1 lists how many of each component can be deployed per Tivoli Netcool
Performance Manager system and whether multiple instances can be installed on
the same server.
In this table:
v N - Depends on how many subchannels there are per channel, and how many
channels there are per system. For example, if there are 40 subchannels per
channel and 8 channels, theoretically N=320. However, the practical limit is
probably much lower.
v System - The entire Tivoli Netcool Performance Manager system.
v Per host - A single physical host can be partitioned using zones, which effectively
gives you multiple hosts.
Note: All CME, DLDR, FTE, and LDR components within a channel must share the
same filesystem.
Table 1. Co-location rules
Component | Number of instances allowed | Co-location constraints | Co-location constraints supported by deployer?
AMGR | One per host that supports DataChannel components | | Yes
BCOL | N per system; one per corresponding subchannel | | Yes
CME | One per subchannel | Filesystem | Yes
CMGR | One per system | | Yes
Database | One per system | | Yes
Database channel | One per DataChannel; maximum of 8 | | Yes
DataLoad (SNMP collector) | N per system; one per corresponding subchannel; one per host | | Yes
DataMart | N per system; one per host | | Yes
DataView | N per system; one per host | One per system |
Discovery Server | N per system; one per host | Co-locate with corresponding DataMart | Yes
DLDR | One per channel | Filesystem | Yes
FTE | One per subchannel | Filesystem | Yes
HAM | N+M per system, where N is the number of collectors that HAM is monitoring and M is the number of standby collectors | | Yes
LDR | One per channel | Filesystem | Yes
Log | One per system | | Yes
UBA (simple) | N per system; one per corresponding subchannel | | Yes
UBA (complex) | Pack-dependent | Pack-dependent | Pack-dependent
v In the Logical view of the Topology Editor, the DataChannel component contains
the subchannels, LDR, and DLDR components, with a maximum of 8 channels
per system. The subchannel contains the collector, FTE, and CME, with a
maximum of 40 subchannels per channel.
Inheritance
Inheritance is the method by which a parent object propagates its property values
to a child component.
The following rules should be kept in mind when dealing with these properties.
v A Child Property can be read-only, but is not always.
v If the Child Property is not read-only, it can be changed to a value different from the Parent Property.
v If the Parent Property changes, and the Child and Parent Properties were the same before the change, the Child Property is changed to reflect the new Parent Property value.
v If the Child Property changes, the Parent Property value is not updated.
v The Default Value of the Child Property is always the current Parent Property value.
Note: When performing an installation that uses non-default values, that is,
non-default usernames, passwords and locations, it is recommended that you
check both the Logical view and Physical view to ensure that they both contain the
correct values before proceeding with the installation.
Example
As an example of how a new component inherits property values:
The Disk Usage Server (DUS) is a child component of the Host object. The DUS
Remote User property inherits its value from the Host PV User Property on
creation of the DUS. The DUS property value will be taken from the Host property
value.
Child properties that have been inherited are marked as inherited.
As an example of what happens when you change inherited property values:
If we change the Host PV User Property value, it gets pushed down to the DUS
Remote User property value, updating it. The associated Default Value is also
updated.
If we change the DUS Remote User property value, that is the child value, it does
not propagate up to the host; the parent Host PV User Property value remains
unchanged.
Now the child and parent properties are out of sync, and if we change the parent
property value it is not reflected in the child property, though the default value
continues to be updated.
Notable subcomponents and features
The following sections describe a subset of the Tivoli Netcool Performance Manager components and features that should be considered before deciding on your topology configuration.
Collectors
Collectors description.
The DataLoad collector takes in the unrefined network data and stores it in a file
that Tivoli Netcool Performance Manager can read. This file is known as a binary
object format file (BOF).
The following processes are employed in the DataLoad module:
v SNMP Collector - The DataLoad SNMP Collector sends SNMP requests to
network objects. Only the data requested by the configuration that was defined
for those network objects is retrieved.
v Bulk Collector - The Bulk Collector uses a Bulk Adaptor, which is individually
written for specific network resources, to format the unrefined data into a file,
called a PVline file, which is passed to the Bulk Collector.
Installation or topology considerations:
Installation and topology considerations for collectors.
The DataLoad modules can be loaded on lightweight servers and placed as close to
the network as possible (often inside the network firewall). Because a DataLoad
module does not contain a database, the hardware can be relatively inexpensive
and can still reliably handle high volumes of data.
Up to 320 DataLoad modules can be supported per Tivoli Netcool Performance
Manager installation.
The number of collectors in your system will affect the topology configuration. You
can have multiple BULK collectors, UBA or BCOL, on a single host, but you can
only have one SNMP based collector per host. The number of collectors is in turn
driven by the number of required Technology Packs.
Technology packs
Technology packs description.
Tivoli Netcool Performance Manager Technology Packs are custom designed
collections of MIBs, discovery formulas, collection formulas, complex formulas,
grouping rules, reporters, and other functions. Technology packs provide everything that Tivoli Netcool Performance Manager needs to gather data for targeted devices.
Technology packs make it possible for Tivoli Netcool Performance Manager to
report on technology from multiple vendors.
Installation or topology considerations:
Installation and topology considerations for technology packs.
If you are creating a UBA collector, you must associate it with a specific technology
pack.
Note: General installation information for technology packs can be found in the IBM Tivoli Netcool Performance Manager: Pack Installation and Configuration Guide; pack-specific installation guides are also provided. Consult both sets of documentation for important installation or topology information.
High Availability
High Availability description.
High availability can be implemented for Tivoli Netcool Performance Manager in
two forms:
v High Availability Manager (HAM): This is a DataChannel component that can
be configured to handle availability of SNMP collectors.
v Veritas Cluster or Sun Cluster (referred to as HA within the documentation):
This method of implementing high availability has a much broader scope and
can cover all or a combination of the database, DataChannel, DataMart and
DataView components.
The following High Availability (HA) documents are available for download from
the Tivoli Open Process Automation Library (OPAL), http://www-01.ibm.com/
software/brandcatalog/opal/.
v TNPM High Availability Overview
Describes high availability solutions for the Tivoli Netcool Performance Manager
product.
v Sun Cluster TNPM Agent Guide
Describes how Sun Clusters can be used with TNPM to create a high availability Tivoli Netcool Performance Manager system.
v High Availability Operations and Deployment
Describes an example system that was configured to provide high availability.
v TNPM High Availability Installation and Configuration
Describes the steps necessary to install and configure components of Tivoli
Netcool Performance Manager in a highly available configuration.
For information covering the High Availability Manager, see Chapter 7, Using the
High Availability Manager, on page 131
Installation or topology considerations:
Installation and topology considerations for the High Availability Manager.
The HAM must be put on the same machine as the channel manager.
Disk Usage Server
The Disk Usage Server component is responsible for maintaining the properties necessary for quota management (flow control) of DataChannel.
The DataChannel component requires a Disk Usage Server. This component is
responsible for maintaining the properties necessary for quota management (flow
control) of DataChannel. DataChannel components can only be added to hosts that
include a Disk Usage Server.
Multiple Disk Usage Servers can be configured per host, allowing multiple DataChannel directories to exist on a single host. There are two major reasons why a user might want to configure multiple Disk Usage Servers:
Disk space is running low
Disk space may be impacted by the addition of a new DataChannel component. In this case, the user may want to add a new file system managed by a new Disk Usage Server.
Separate disk quota management
The user may want to separately manage the quotas assigned to discrete
DataChannel components. For more information, see Disk quota
management.
The user can assign the management of a new file system to a Disk Usage Server
by editing the local_root_directory property of that Disk Usage Server using the
Topology Editor. The user can then add DataChannel components to the host, and
can assign the component to a Disk Usage Server, either in the creation wizard or
by editing the DUS_NUMBER property inside the component.
Disk quota management:
Disk Quota Management description.
The Disk Usage Server makes the process of assigning space to a component much easier than it was previously. The user is no longer required to calculate the requirements of each component and assign that space individually; components now work together to use the space available under the Disk Usage Server more effectively. The user is also relieved of determining which component needs extra space and then changing the quota for that component. Instead, the user can change the quota of the Disk Usage Server, and all components on that Disk Usage Server receive the update and share the space on an as-needed basis.
Good judgment of space requirements is still needed. However, the estimate of space requirements is now made at a higher level; should an estimate be incorrect, only one number needs to be changed instead of potentially updating the quota for each component separately.
Flow control:
Flow Control description.
Optimized flow control further eliminates problems with component-level quotas. Each component holds on to only five hours of input and output; once it has reached this limit, it stops processing until the downstream component picks up some of the data. This avoids the cascading scenario in which one component stops processing and the components feeding it begin to stockpile files, which fills the quota and causes all components to shut down because they have run out of file space.
Installation or topology considerations:
Installation or Topology considerations for flow control.
DataChannel components can only be added to hosts that include a Disk Usage
Server.
Typical installation topology
Example topology scenarios.
Table 2 provides an example of where to install Tivoli Netcool Performance
Manager components, using four servers. Use this example as a guide to help you
determine where to install the Tivoli Netcool Performance Manager components in
your environment.
Basic topology scenario
A basic example topology.
Table 2. Tivoli Netcool Performance Manager basic topology scenario

delphi
Components hosted:
v Oracle server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

corinth
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

sparta
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataChannel
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

athens
Components hosted:
v Oracle client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.
Intermediate topology scenario
An intermediate example topology scenario.
Table 3. Tivoli Netcool Performance Manager intermediate topology scenario

delphi
Components hosted:
v Oracle server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

corinth
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

sparta
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataChannel
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

thessaloniki
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataChannel (also running the Channel Manager)
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
v High Availability Manager
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. This server contains a duplicate set of collectors to allow for high availability.

athens
Components hosted:
v Oracle client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.
This scenario adds a copy of both of corinth's collectors to a second machine, thessaloniki, for the purposes of failover. HAM only manages SNMP collectors; therefore, the HAM in this scenario will manage availability of the DataLoad SNMP collector and not the Bulk Load collector. The HAM must be put on the same machine as the Channel Manager.
Advanced topology scenario
An advanced example topology scenario.
Table 4. Tivoli Netcool Performance Manager advanced topology scenario

delphi
Components hosted:
v Oracle server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

corinth
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

sparta
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataChannel
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

thessaloniki
Components hosted:
v Oracle client
v Tivoli Netcool Performance Manager DataChannel (also running the Channel Manager)
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
v High Availability Manager
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

athens
Components hosted:
v Oracle client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.

rhodes
Components hosted:
v Oracle client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.
Tivoli Netcool Performance Manager distribution
How to obtain the product distribution.
The Tivoli Netcool Performance Manager distribution is available as a DVD/CD
and as an electronic image. The instructions in this guide assume that you are
installing from an electronic image.
If you install the product from an electronic image, be sure to keep a copy of the
distribution image in a well-known directory, because you will need this image in
the future to make any changes to the environment, including uninstalling Tivoli
Netcool Performance Manager.
The Tivoli Netcool Performance Manager distribution DVD/CD contains:
v Tivoli Netcool Performance Manager 1.3.2
v Model Maker IBM Cognos Edition version 1.2.0.
v Base Common Pack Suite (1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz).
Chapter 2. Requirements
Details of all Tivoli Netcool Performance Manager requirements.
This chapter provides the complete set of requirements for Tivoli Netcool
Performance Manager 1.3.2.
IBM Prerequisite Scanner
IBM Prerequisite Scanner is a stand-alone prerequisite checking tool that analyzes
system environments before the installation or upgrade of a Tivoli product or IBM
solution.
The IBM Prerequisite Scanner can be used to check for the presence of Tivoli
Netcool Performance Manager requirements.
Instructions describing how to use and install the IBM Prerequisite Scanner are
available at http://www-01.ibm.com/support/docview.wss?uid=swg24029221.
Minimum requirements for installation
The minimum required host specifications for a Tivoli Netcool Performance
Manager deployment.
Solaris hardware requirements
Tivoli Netcool Performance Manager hardware requirements for the Solaris
environment.
Tivoli Netcool Performance Manager has the following minimum hardware
requirements for the Solaris environment.
v 4 x SPARC64 VI (dual-core) 2.4GHz or 3 x SPARC64 VII (quad-core) 2.88GHz
processor
Note: DataLoad requires virtualization (Solaris zones). For more information,
please see DataLoad SNMP on multiple CPU servers.
v 16 GB Memory
v 2 x 146GB HDD
If deploying in a distributed environment, each server/zone requires:
1 x SPARC64 VI (dual-core) 2.4GHz or 1 x SPARC64 VII, (quad-core) 2.88GHz
processor; 4 GB RAM; 146GB disk space.
Note: For information about performing a minimal installation deployment, see
the IBM Tivoli Netcool Performance Manager: Installation Guide.
AIX hardware requirements
Tivoli Netcool Performance Manager hardware requirements for the AIX
environment.
Tivoli Netcool Performance Manager has the following minimum requirements for
the AIX environment:
v 4 x Power6 (Dual Core) 4.2GHz, or 3 x Power7 (Quad Core) 3.0GHz processor.
Note: DataLoad requires virtualization (LPARs). For more information, please
see DataLoad SNMP on multiple CPU servers.
v 16 GB Memory
v 2 x 146GB HDD
If deploying in a distributed environment, each server/LPAR requires:
v 1 x Power6 (Dual Core) 4.2GHz, or
v 1 x Power7 (Quad Core) 3.0GHz processor;
v 4 GB RAM; 146 GB disk space.
Other deployment configurations should be sized by IBM Professional Services.
Linux hardware requirements
Tivoli Netcool Performance Manager hardware requirements for the Linux environment.
Tivoli Netcool Performance Manager has the following minimum requirements for
the Linux environment:
v 3 x Intel Xeon 5500/5600 series processors (quad-core), 2.4 GHz or greater.
Note: DataLoad requires virtualization (Virtual Machine). For more information,
please see DataLoad SNMP on multiple CPU servers.
v 16 GB memory
v 2 x 146 GB HDD
If deploying in a distributed environment, each server/Virtual Machine requires: 1
x Intel Xeon 5500/5600 series processor (quad-core) 2.4 GHz, 4 GB RAM, 146 GB
disk space.
Other deployment configurations should be sized by IBM Professional Services.
Oracle deployment space requirements
Tivoli Netcool Performance Manager has a minimal deployment space requirement
for the Oracle server.
When you install Oracle, the host must conform to the previously stated hardware requirements for AIX, Solaris, and Linux. However, Oracle may experience problems if sufficient swap space is not provided.
v The same amount of Swap as RAM must be present on the Oracle server host in
a distributed Tivoli Netcool Performance Manager system.
v Twice as much Swap as RAM must be present on the Oracle server host for a
Tivoli Netcool Performance Manager proof of concept install.
Tivoli Integrated Portal deployment space requirements
Tivoli Netcool Performance Manager has a minimal deployment space requirement
for Tivoli Integrated Portal
When you install DataView, you also install Tivoli Common Reporting (TCR).
When performing a remote install, a local /tmp folder is required on the deployer to contain the TCR bundle. The space requirements are:
Local TIP install:
v /TIP install location - 2GB
Remote TIP install:
v local /tmp - 1.5GB
v remote /tmp - 1.5GB
v remote /TIP install location - 2GB
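For example, before a remote install you might confirm the available space with commands similar to the following, where <TIP install location> is a placeholder for the directory you plan to install into (on AIX, df -g reports sizes in gigabytes):
# df -h /tmp
# df -h <TIP install location>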
Note: If you deploy many technology packs (especially Alcatel-Lucent 5620 SAM,
Alcatel 5620 NM, and Nortel CS2000, which either have multiple UBAs or require
multiple DataChannel applications), you might require more hardware capacity
than is specified in the minimal configuration. In these situations, before moving to
a production environment, IBM strongly recommends that you have IBM
Professional Services size your deployment so that they can recommend additional
hardware, if necessary.
Screen resolution
Recommended screen resolution details.
A screen resolution of 1024 x 768 pixels or higher is recommended when running
the deployer.
Minimum requirements for a proof of concept installation
Tivoli Netcool Performance Manager has minimum required host specifications for
Proof of Concept (POC) deployments.
Note: The minimum requirements do not account for additional functionality such as WebGUI, Cognos, and MDE, each of which has additional memory and CPU impacts.
Solaris hardware requirements (POC)
The minimum system requirements for a proof of concept install on Solaris.
v 2 x SPARC64 VI (dual-core) 2.4GHz or 2 x SPARC64 VII (quad-core) 2.88GHz
processor.
v 8GB Memory
v 1 x 146 GB HDD
To support:
v SNMP data only.
v All Tivoli Netcool Performance Manager components deployed on a single
server.
v Number of resources supported up to 20,000.
v 3 SNMP Technology Packs based on MIB II, Cisco Device and IPSLA.
v 15 minute polling
v Number of DataView users limited to <3.
AIX hardware requirements (POC)
The minimum system requirements for a proof of concept install on AIX.
v 2 x Power6 (Dual Core) 4.2GHz, or 2 x Power7 (Quad Core) 3.0GHz processor
v 8 GB Memory
v 1 x 146 GB HDD
To support:
v SNMP data only.
v All Tivoli Netcool Performance Manager components deployed on a single
server.
v Number of resources supported up to 20,000.
v 3 SNMP Technology Packs based on MIB II, Cisco Device and IPSLA.
v 15 minute polling
v Number of DataView users limited to <3.
Note: Additional features such as WebGUI and MDE are not accounted for in this
spec. They will have additional memory and CPU impacts.
Linux hardware requirements (POC)
The minimum system requirements for a proof of concept install on Linux.
v 2 x Intel Xeon 5500/5600 series processors (quad-core), 2.4 GHz or greater.
v 8 GB memory
v 1 x 146 GB HDD
To support:
v SNMP data only.
v All Tivoli Netcool Performance Manager components deployed on a single
server.
v Number of resources supported up to 20,000.
v 3 SNMP Technology Packs based on MIB II, Cisco Device and IPSLA.
v 15 minute polling
v Number of DataView users limited to <3.
Screen resolution
Recommended screen resolution details.
A screen resolution of 1024 x 768 pixels or higher is recommended when running
the deployer.
Supported operating systems and modules
The supported operating systems, modules, and third-party applications for IBM
Tivoli Netcool Performance Manager, Version 1.3.2.
The following sections list the supported operating systems, modules, and
third-party applications for IBM Tivoli Netcool Performance Manager, Version
1.3.2.
All subcomponents of the Tivoli Netcool Performance Manager - Wireline
Component can be installed on a mix of AIX, Linux, and Solaris operating systems;
that is, the operating systems on which the DataChannel, DataView, DataLoad and
DataMart subcomponents are installed do not have to be of the same type. The
supported versions of AIX, Linux and Solaris operating systems are discussed in
this document.
For more information, see the IBM Tivoli Netcool Performance Manager, Version
1.3.2 Release Notes, which contains the version numbers for each Tivoli Netcool
Performance Manager module in Version 1.3.2.
Solaris 10 for SPARC platforms
Supported Solaris systems.
Operating systems and kernel
Supported operating system and kernel version.
Solaris 10 Update 6 (released in October 2008). The update level can be checked in the /etc/release file.
Applying a kernel patch or a Solaris patch bundle is not equivalent to installing the specific Solaris 10 Update 6 image. Oracle 11gR2 RDBMS software is certified only for a base install image of Solaris 10 Update 6 or greater. For additional detail, see Oracle Document 971464.1, FAQ - 11gR2 requires Solaris 10 update 6 +.
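For example, to check the update level (the output shown is illustrative of an Update 6 system):
# cat /etc/release
Solaris 10 10/08 s10s_u6wos_07b SPARC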
Kernel Parameters
Solaris 10 uses the resource control facility to implement the System V IPC.
However, Oracle recommends that you set both resource controls and /etc/system parameters. Operating system parameters not replaced by resource controls continue to affect performance and security on Solaris 10 systems.
Parameter | Replaced by resource control | Minimum value
noexec_user_stack | NA (can be set in /etc/system only) | 1
semsys:seminfo_semmni | project.max-sem-ids | 100
semsys:seminfo_semmsl | process.max-sem-nsems | 256
shmsys:shminfo_shmmax | project.max-shm-memory | 4294967295
shmsys:shminfo_shmmni | project.max-shm-ids | 100
Please note that "project.max-shm-memory" represent the maximum shared
memory available for a project, so the value for this parameter should be greater
than sum of all SGA size.
Please refer following document for checking/setting kernel parameter values
using resource control :- Note 429191.1 Kernel setup for Solaris 10 using project
files.
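For example, one way to check and raise project.max-shm-memory for an assumed user.oracle project (the project name and the 8 GB value are illustrative; size the value to exceed the sum of all SGA sizes):
# prctl -n project.max-shm-memory -i project user.oracle
# projmod -s -K "project.max-shm-memory=(privileged,8G,deny)" user.oracle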
v The 'umask' setting for the "oracle" user has to be 022.
v Hostname command should return the fully qualified hostname as shown
below:
% hostname
hostname.domainname
For further information refer to the following:
v NOTE:429191.1 - Kernel setup for Solaris 10 using project files.
v NOTE:971464.1 - FAQ - 11gR2 requires Solaris 10 update 6 or greater
Refer to "Verifying UDP and TCP Kernel Parameters" in the 11g Solaris Install
Guide. Part Number E17163-01 for other recommended kernel parameters.
http://download.oracle.com/docs/cd/E11882_01/install.112/e17163/
pre_install.htm#BABEJIGD
Solaris 10 requirements
Required Packages and Patches for the Solaris 10 system.
Tivoli Netcool Performance Manager requires at a minimum the End User
distribution of Solaris 10.
Required packages:
Required Packages for the Solaris 10 system.
Before installing the Oracle server, make sure the following Solaris packages are
installed on your system:
v SUNWarc
v SUNWbtool
v SUNWcsl
v SUNWhea
v SUNWi15cs
v SUNWi1cs
v SUNWi1of
v SUNWlibC
v SUNWlibm
v SUNWlibms
v SUNWsprot
v SUNWtoo
v SUNWxwfnt
To verify that the required Solaris packages are installed:
1. Enter the following command at the shell prompt:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWuiu8
The following output confirms that the specified Solaris packages are installed
correctly:
system SUNWarc Archive Libraries
system SUNWbtool CCS tools bundled with SunOS
system SUNWhea SunOS Header Files
system SUNWlibm Forte Developer Bundled libm
system SUNWlibms Forte Developer Bundled shared libm
system SUNWsprot Solaris Bundled tools
system SUNWtoo Programming Tools
...
2. If these packages are not on your system, see the Solaris Installation Guide for
instructions on installing supplementary package software.
Required patches:
Required Patches for the Solaris 10 system.
To determine the patch level on your system, enter the following command:
uname -v
Note: All Tivoli Netcool Performance Manager modules are tested to run on an
End User distribution of Solaris 10.
The following list of minimum Solaris patches is required:
v For all Installations :
120753-06: SunOS 5.10: Microtasking libraries (libmtsk) patch
139574-03: SunOS 5.10: file crle ldd stings elfdump patch
141444-09
141414-02
To determine whether an operating system patch is installed, enter a command similar to the following:
/usr/sbin/patchadd -p | grep patch_number
where patch_number is the patch number without the version suffix.
For example, to determine if any version of the 120753-06 patch is installed, use
the following command:
/usr/sbin/patchadd -p | grep 120753
Solaris 10 virtualized containers
If you are using Solaris 10 virtualized containers, you must create containers with
"whole root" partitions.
If using Solaris 10 virtualized containers, you must create containers with "whole
root" partitions. Tivoli Netcool Performance Manager will not work in containers
with "sparse root" partitions.
Note: The Tivoli Netcool Performance Manager Self Monitoring Pack and SSM
packs do not report data correctly in virtualized environments, due to
compatibility issues of the underlying SSM agents in these configurations. The MIB
II pack can also have difficulties discovering resources in virtualized server
environments.
DataMart
DataMart requirements if you are using Solaris 10.
Configure the default language of Solaris to English.
IBM Java Runtime Environment (JRE) 1.5.
DataLoad
DataLoad requirements if you are using Solaris 10.
Configure the default language of Solaris to English.
DataChannel
DataChannel requirements if you are using Solaris 10.
Configure the default language of Solaris to English.
The DataChannel components require X libraries to be installed. The graphical
tools in DataChannel also require a running X server, which can be on a different
host.
Technology packs
Technology pack requirements if you are using Solaris 10.
Installation of technology packs on Solaris 10 requires JRE 1.6 (32-bit). The correct
version is installed with the Topology Editor in the following default location:
/opt/IBM/proviso/jvm/jre/bin
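For example, to confirm which JRE the Topology Editor installed (assuming the default location above; the exact version string reported will vary):
# /opt/IBM/proviso/jvm/jre/bin/java -version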
DataView and Tivoli Integrated Portal
DataView and Tivoli Integrated Portal requirements if you are using Solaris 10.
Java Runtime Environment (JRE) 1.6 (32-bit)
AIX platforms
Supported AIX systems.
Tivoli Netcool Performance Manager can be installed and operated in a virtualized
AIX environment.
Note: Tivoli Netcool Performance Manager on AIX does not support connection to
a Solaris-based Oracle database.
Note: The Tivoli Netcool Performance Manager Self Monitoring Pack and SSM
packs do not report data correctly in virtualized environments, due to
compatibility issues of the underlying SSM agents in these configurations. The MIB
II pack can also have difficulties discovering resources in virtualized server
environments.
Operating system
Supported operating system and kernel version.
IBM Tivoli Netcool Performance Manager, Version 1.3.2 supports:
v AIX 6.1 TL 04 SP1 ("6100-04-01"), 64-bit kernel
v AIX 7.1 TL 0 SP3 ("7100-00-03"), 64-bit kernel
To verify your AIX release level:
As root, enter the following command at the shell prompt:
# oslevel -r
This command returns a string that represents the maintenance level for your AIX
system.
For AIX 6.1: If the operating system version is lower than AIX 6.1 Technology
Level 4 SP 1, then upgrade your operating system to this or a later level.
For AIX 7.1: If the operating system version is lower than AIX 7.1 Technology
Level 0 plus SP 3, then upgrade your operating system to this or a later level. AIX
maintenance packages are available from the following Web site:
http://www-933.ibm.com/support/fixcentral/
Required files and fixes
Files and fixes required by Oracle and Tivoli Netcool Performance Manager for the
AIX system.
AIX 6.1
The following operating system filesets are required for AIX 6.1:
v bos.adt.base
v bos.adt.lib
v bos.adt.libm
v bos.perf.libperfstat 6.1.2.1 or later
v bos.perf.perfstat
v bos.perf.proctools
v xlC.aix61.rte:10.1.0.0 or later
v gpfs.base 3.2.1.8 or later
If you are using the minimum operating system TL level for AIX 6L listed above,
then install all AIX 6L 6.1 Authorized Problem Analysis Reports (APARs) for AIX
6.1 TL 02 SP1, and the following AIX fixes:
v IZ41855
v IZ51456
v IZ52319
These 6.1 fixes are already present in the following TL levels:
v AIX 6.1 TL-02 SP-04 and later
v AIX 6.1 TL-03 SP-02 and later
v AIX 6.1 TL-04
The following AIX fixes are required also:
v IZ97457
v IZ89165
Note: See Oracle Metalink Note:1264074.1 and Note:1379753.1 for other AIX 6.1
patches that may be required.
AIX 7.1
The following operating system filesets are required for AIX 7.1:
v bos.adt.base
v bos.adt.lib
v bos.adt.libm
v bos.perf.libperfstat
v bos.perf.perfstat
v bos.perf.proctools
v xlC.aix61.rte:10.1.0.0 or later
v xlC.rte:10.1.0.0 or later
If you are using the minimum operating system TL level for AIX 7.1 listed above,
then install all AIX 7L 7.1 Authorized Problem Analysis Reports (APARs) for AIX
7.1 TL 0 SP1, and the following AIX fixes:
v IZ87216
v IZ87564
v IZ89165
v IZ97035
Note: See Oracle Metalink Note:1264074.1 and Note:1379753.1 for other AIX 7.1
patches that may be required.
Determining installed filesets
To determine if the required filesets are installed and committed, enter a command
similar to the following:
# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
bos.perf.libperfstat bos.perf.proctools
To determine the supported kernel mode, enter a command similar to the
following:
# getconf KERNEL_BITMODE
Determining installed APAR fixes
To determine if an APAR is installed, enter a command similar to the following:
# /usr/sbin/instfix -i -k "IZ42940 IZ49516 IZ52331 IZ41855 IZ52319"
If an APAR is not installed, then download it from the following Web site and
install it: http://www-933.ibm.com/support/fixcentral/
Set resource limits
The default user process limits are not adequate and must be reset.
On AIX systems, the default user process limits are not adequate for Tivoli Netcool
Performance Manager. For detailed information on setting the correct user process
limits, see Set Resource Limits in the IBM Tivoli Netcool Performance Manager:
Installation Guide.
DataMart
DataMart requirements if you are using AIX.
Java Runtime Environment (JRE) 1.5 or higher (for the Database Information
module).
DataLoad
DataLoad requirements if you are using AIX.
No special requirements.
DataChannel
DataChannel requirements if you are using AIX.
The following DataChannel components require X libraries to be installed:
v Bulk Collector
v CME
v DLDR
v FTE
v FED (also requires that an X Server is running)
v LDR
v The showVersion utility.
Technology packs
Technology Pack requirements if you are using AIX.
Installation of technology packs on AIX 6.1 requires JRE 1.6 (32-bit). The correct
version is installed with the Topology Editor in the following default location:
/opt/IBM/proviso/jvm/jre/bin
Viewing the documentation on AIX systems
Viewing the documentation requires that you can run Adobe Acrobat Reader.
To run on AIX systems, Adobe Acrobat Reader requires GIMP Toolkit (GTK+)
Version 2.2.2 or higher. You can download the toolkit from the following URL:
http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html
In addition, you must install all the dependent packages for GTK+. You can install
GTK+ and its dependent packages either before or after installing Acrobat Reader.
At the time of publication, the latest version of GTK+ is gtk2-2.8.3-9, and the
latest versions of the dependent packages are as follows:
v libpng-1.2.8-5
v libtiff-3.6.1-4
v libjpeg-6b-6
v gettext-0.10.40-6
v glib2-2.8.1-3
v atk-1.10.3-2
v freetype2-2.1.7-5
v xrender-0.8.4-7
v expat-1.95.7-4
v fontconfig-2.2.2-5
v xft-2.1.6-5
v cairo-1.0.2-6
v pango-1.10.0-2
v xcursor-1.0.2-3
v gtk2-2.8.3-9
To fulfill dependency requirements, you must install these Red Hat Package
Managers (RPMs) in the order specified.
Installing an RPM:
1. To install an RPM use the following syntax:
rpm -i rpm_filename
2. To see a list of the RPMs that have been installed, enter the following
command:
rpm -qa
Acrobat Reader and LDAP:
By default, AIX systems do not have LDAP installed.
By default, AIX systems do not have LDAP installed. If the AIX system does not
have LDAP installed and you run Acrobat Reader, a warning message is displayed.
Click OK to have Acrobat Reader proceed normally.
To remove the error message, delete or rename /opt/acrobat/Reader/rs6000aix/
plug_ins/PPKLite.api, where /opt/acrobat is the installation path.
Linux platforms
Supported Linux Systems
IBM Tivoli Netcool Performance Manager, Version 1.3.2 can be installed and
operated in an environment utilizing VMware partitions. This section details Linux
environment prerequisites for Tivoli Netcool Performance Manager.
Operating system
Supported operating system and kernel versions.
Tivoli Netcool Performance Manager supports the following Linux systems:
v Linux hosts running 64-bit Red Hat Enterprise Linux Version 5.5 or greater with
Oracle 11g.
To check the version of your operating system, enter:
# cat /etc/redhat-release
This command should return output similar to:
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
To verify the processor type, run the following command:
uname -p
To verify the machine type, run the following command:
uname -m
To verify the hardware platform, run the following command:
uname -i
All results should contain the output:
x86_64
Note: See Note 225710.1 for supported kernels and Note 265262.1 for "Things to
know about Linux" (https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=265262.1).
Database
Database requirements if you are using Linux.
It is recommended that the user consult Metalink Note 880989.1, Requirements For
Installing Oracle11g On RHEL5, which has a full list of Oracle requirements for
RHEL5. Oracle keeps this list up to date.
Required packages
Required packages if you are using Linux.
The following is a list of packages for Oracle Enterprise Linux 5.0 and Red Hat
Enterprise Linux 5.0:
v binutils-2.17.50.0.6-6.el5 (x86_64)
v compat-libstdc++-33-3.2.3-61 (x86_64) - both architectures are required. See
next line.
v compat-libstdc++-33-3.2.3-61 (i386) - both architectures are required. See
previous line.
v elfutils-libelf-0.125-3.el5 (x86_64)
v glibc-2.5-24 (x86_64) - both architectures are required. See next line.
v glibc-2.5-24 (i686) - both architectures are required. See previous line.
v glibc-common-2.5-24 (x86_64)
v ksh-20060214-1.7 (x86_64)
v libaio-0.3.106-3.2 (x86_64) - both architectures are required. See next line.
v libaio-0.3.106-3.2 (i386) - both architectures are required. See previous line.
v libgcc-4.1.2-42.el5 (i386) - both architectures are required. See next line.
v libgcc-4.1.2-42.el5 (x86_64) - both architectures are required. See previous
line.
v libstdc++-4.1.2-42.el5 (x86_64) - both architectures are required. See next
line.
v libstdc++-4.1.2-42.el5 (i386) - both architectures are required. See previous
line.
v make-3.81-3.el5 (x86_64)
Note: These are minimum required versions. Also, for some architectures both of
the i386 and x86_64 package versions must be verified. For example, both the i386
and the x86_64 architectures for glibc-2.5-24 must be installed.
Also required are:
v elfutils-libelf-devel-0.125-3.el5.x86_64.rpm
which in turn requires the following:
elfutils-libelf-devel and
elfutils-libelf-devel-static
These two packages depend on each other. Therefore, they must be installed
together, in a single "rpm -ivh" command, as follows:
rpm -ivh elfutils-libelf-devel-0.125-3.el5.x86_64.rpm elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm
v glibc-headers-2.5-24.x86_64.rpm
which in turn requires the following:
kernel-headers-2.6.18-92.el5.x86_64.rpm
glibc-devel-2.5-24.x86_64.rpm - both architectures are required. See next
item.
glibc-devel-2.5-24.i386.rpm - both architectures are required. See previous
item.
v gcc-4.1.2-42.el5.x86_64.rpm
which in turn requires the following:
libgomp-4.1.2-42.el5.x86_64.rpm
libstdc++-devel-4.1.2-42.el5.x86_64.rpm
gcc-c++-4.1.2-42.el5.x86_64.rpm
libaio-devel-0.3.106-3.2.x86_64.rpm - both architectures are required. See
next item
libaio-devel-0.3.106-3.2.i386.rpm - both architectures are required. See
previous item.
sysstat-7.0.2-1.el5.x86_64.rpm
unixODBC-2.2.11-7.1.x86_64.rpm - both architectures are required. See next item.
unixODBC-2.2.11-7.1.i386.rpm - both architectures are required. See previous item.
unixODBC-devel-2.2.11-7.1.x86_64.rpm - both architectures are required. See next item.
unixODBC-devel-2.2.11-7.1.i386.rpm - both architectures are required. See previous item.
The following packages are required and checked for by the check_os.ini
application:
v libXp-1.0.0-i386
v libXp-1.0.0-x86_64
v libXpm-3.5.5-x86_64
v libstdc++-devel-4.1.1-x86_64
v glibc-devel-2.5-i386
v glibc-devel-2.5-x86_64
v gcc-c++-4.1.2-x86_64
v openmotif-2.3.1-i386
v openmotif-2.3.1-x86_64
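If you want to confirm up front that these packages are present, one quick check (a
sketch; rpm -q is standard on RHEL) is to query them by name and architecture:
# rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' libXp libXpm openmotif libstdc++-devel glibc-devel gcc-c++
Any package that rpm reports as not installed must be installed before proceeding.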
DataMart
DataMart requirements if you are using Linux.
Java Runtime Environment (JRE) 1.5 or higher (for the Database Information
module).
DataLoad
DataLoad requirements if you are using Linux.
No special requirements.
DataChannel
DataChannel requirements if you are using Linux.
No special requirements.
Required user names
There are two user names that must be created when installing Tivoli Netcool
Performance Manager.
Two specific user names are required on any server hosting Tivoli Netcool
Performance Manager components:
v pvuser: A dedicated Tivoli Netcool Performance Manager Unix user.
v oracle: A dedicated Oracle user.
pvuser
The pvuser user name.
The Tivoli Netcool Performance Manager Unix user pvuser must be added to each
server hosting a Tivoli Netcool Performance Manager component. The Tivoli
Netcool Performance Manager Unix user, which is referred to as pvuser
throughout the documentation, can be named using any string as required by your
organization's naming standards.
For more information on how to add the Tivoli Netcool Performance Manager
Unix user pvuser, see the "Pre-Installation Setup Tasks > Adding the pvuser Login
Name" section of the IBM Tivoli Netcool Performance Manager: Installation Guide.
oracle
The oracle user name.
The Oracle user oracle is added to each server hosting a Tivoli Netcool
Performance Manager component. This user is added when installing either Oracle
client or server. The default username used is oracle; however, this Oracle
username can be named using any string as required by your organization's
naming standards.
Note: Should you choose a non-default Oracle username, you must use the same
name across all instances of Oracle Client and Server throughout your Tivoli
Netcool Performance Manager system.
For more information on creating the oracle user for the Oracle server, see the
"Installing the Oracle Server > Run the Oracle Server Configuration Script" section
of the IBM Tivoli Netcool Performance Manager: Installation Guide.
For more information on creating the oracle user for the Oracle client, see the
"Installing the Oracle Client > Run the Oracle Client Configuration Script" section
of the IBM Tivoli Netcool Performance Manager: Installation Guide.
Ancillary software requirements
Extra and third-party software requirements.
The following sections outline the extra software packages required by Tivoli
Netcool Performance Manager.
FTP support
Tivoli Netcool Performance Manager requires FTP support.
The FTP (File Transfer Protocol) version used to transfer files between Tivoli
Netcool Performance Manager components is delivered with Solaris 10. AIX also
uses FTP to transfer files between Tivoli Netcool Performance Manager
components.
Tivoli Netcool Performance Manager supports the following file transport protocols
between Tivoli Netcool Performance Manager components and third-party
equipment (for example, EMS):
v FTP (Solaris 10)
v Microsoft Internet Information Services (IIS) FTP server
OpenSSH and SFTP
Tivoli Netcool Performance Manager requires OpenSSH and SFTP support.
Tivoli Netcool Performance Manager supports using encrypted Secure File Transfer
(SFTP) and FTP to move data files from DataLoad to DataChannel, or from
DataChannel Remote to the DataChannel Loader.
Tivoli Netcool Performance Manager SFTP is compatible only with OpenSSH
server and client version 3.1p1 and above. OpenSSH is freely downloadable and
distributable (http://www.openssh.org). OpenSSH is supported on Solaris, AIX,
and Linux. Other SSH versions such as F-Secure, the SSH bundled in Solaris 10,
and the SSH bundled with AIX are not compatible.
If you use the SFTP capability, you must obtain, install, generate keys for, maintain,
and support OpenSSH and any packages required by OpenSSH.
See Tivoli Netcool Performance Manager Technical Note: DataChannel Secure File Transfer
Installation for more information about installing and configuring OpenSSH.
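The Technical Note above is the authoritative procedure. Purely as an illustration of
the usual OpenSSH pattern for the pvuser account (the remote host name is a
placeholder), key setup and a first SFTP test typically look like:
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub | ssh pvuser@remote_host 'cat >> ~/.ssh/authorized_keys'
$ sftp pvuser@remote_host
The first command generates a key pair, the second appends the public key to the
remote account's authorized keys, and the third verifies that an SFTP session can be
opened.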
AIX requirements
AIX requirements in order to use OpenSSH and SFTP.
The following table lists additional prerequisites that must be installed on an AIX
system, and where these packages can be found:
Table 2: Additional Prerequisites
v Package: openssl-0.9.7g-1.aix5.1.ppc.rpm
Location: https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixtbx
v Package: openssh-4.1p1_53.tar.Z
Location: https://sourceforge.net/projects/openssh-aix/
v Package: bos.adt.libm
Location: AIX installation CD.
Solaris requirements
Solaris requirements in order to use OpenSSH and SFTP.
The following are additional prerequisites that must be installed on Solaris
systems:
v openssh - SSH client (openssh-4.5p1-sol-sparc-local.gz or higher).
v zlib - zlib compression and decompression library (zlib-1.2.3-sol9-sparc-local.gz
or higher).
Linux requirements
Linux requirements in order to use OpenSSH and SFTP.
OpenSSH is required for VSFTP to work with Tivoli Netcool Performance
Manager. OpenSSH is installed by default on any RHEL system.
By default, FTP is not enabled on Linux systems. You must enable FTP on your
Linux host to carry out the installation of Tivoli Netcool Performance Manager.
To enable FTP on your Linux host, run the following command as root:
/etc/init.d/vsftpd start
File compression
File compression support.
Archives delivered as part of the IBM Tivoli Netcool Performance Manager
distribution were created using GNU Tar. This program must be used for the
decompression of archives.
DataView load balancing
Load balancing support.
IBM Tivoli Netcool Performance Manager supports the use of an external load
balancer to optimize the use of available DataView instances.
The load balancer must support the following basic features:
v Basic IP-based load balancing
v Sticky sessions based on incoming IP
v Up/down status based on checking for a listening port
The following is the link to the CSS Basic Configuration Guide:
http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/css_401/bsccfggd/index.htm
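As a simple illustration of the port-based up/down check that such a load balancer
performs, you can probe a DataView host from a shell where the netcat utility is
available (the host name and port are placeholders; use the port on which your
DataView or Tivoli Integrated Portal instance actually listens):
$ nc -z dataview_host 16311 && echo "port is up" || echo "port is down"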
Oracle Database
Oracle database license recommendations.
License recommendations
When purchasing a license from Oracle, IBM recommends a Processor license
rather than a Named User Plus license.
Oracle defines a Named User in such a way that it includes not only actual human
users, but also non-human-operated devices. In other words, you would require a
Named User Plus license for every resource that Tivoli Netcool Performance
Manager polls, which would be very expensive.
For more information, see http://www.oracle.com.
Oracle server
Oracle server support.
Oracle 11g Enterprise Edition for Solaris (SPARC), Linux and AIX are 64-bit only.
Oracle 11g Enterprise Edition must include the partitioning option. The database
no longer provides 32-bit libraries, so the Oracle client will need to be installed on
the Tivoli Netcool Performance Manager database server, should there be another
Tivoli Netcool Performance Manager component present.
Note: Tivoli Netcool Performance Manager should be installed and run as a
standalone database. It should not be placed on a server that already has a
database as the installation program will likely interfere. The co-hosting of Tivoli
Netcool Performance Manager will also affect performance in unknown ways. If a
co-host is required then the Customer should seek out Professional Services for
support.
Hidden Oracle parameters
Hidden parameters that must be set for Oracle.
This section lists all of the Oracle underscore ('_'), or hidden, parameters used by
Tivoli Netcool Performance Manager:
v _partition_view_enabled=TRUE
v _unnest_subquery=FALSE
v _gby_hash_aggregation_enabled=FALSE
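The Tivoli Netcool Performance Manager database installation is expected to set
these parameters; the following is only a reference sketch of how such parameters
can be applied manually with SQL*Plus, assuming OS authentication as the oracle
user and an spfile-based instance (an instance restart is required for the change to
take effect):
$ sqlplus / as sysdba <<EOF
ALTER SYSTEM SET "_partition_view_enabled"=TRUE SCOPE=SPFILE;
ALTER SYSTEM SET "_unnest_subquery"=FALSE SCOPE=SPFILE;
ALTER SYSTEM SET "_gby_hash_aggregation_enabled"=FALSE SCOPE=SPFILE;
EXIT;
EOF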
Tivoli Common Reporting client
Tivoli Common Reporting and Cognos client-side requirements.
Tivoli Netcool Performance Manager supports the use of:
v Tivoli Integrated Portal 2.1 and
v Tivoli Integrated Portal 2.2
You should install Tivoli Integrated Portal 2.1 if your system hosts software that is
incompatible with Tivoli Integrated Portal 2.2.
To use Cognos, you must download and install a Windows version of Tivoli
Common Reporting 2.1 from xTreme Leverage.
There are two prerequisites that must be in place to use TCR/Cognos on a
Microsoft Windows environment:
v Framework Manager
v Oracle Client
Java Runtime Environment (JRE)
Required Java support.
Java Runtime Environment (JRE) 1.6 (32-bit) is required for all servers hosting
Tivoli Netcool Performance Manager components. The exception to this is
DataMart, which requires IBM Java 1.5 R6.
The IBM JDK is not supplied and installed automatically with the DataMart,
DataChannel and DataLoad components. When installing those components on
servers that are remote from the server hosting the primary deployer (Topology
Editor and Deployer) or TIP, then the required JRE, as stated above, will need to be
deployed to those servers separately.
Web browsers and settings
Supported browsers.
The following browsers are required to support the Web client and provide access
to DataView reports:
Important: If you are using Tivoli Netcool Performance Manager with WebGUI, see
WebGUI integration on page 33 for the browsers supported with both.
Important: No other browser types are supported.
Table 5. Windows Clients
v Windows Vista: Microsoft Internet Explorer 7.0, 8.0; Mozilla Firefox 3.6
v Windows XP: Microsoft Internet Explorer 7.0, 8.0; Mozilla Firefox 3.6
Note: When using Windows Internet Explorer, IBM recommends that you have
available at least 1GB of memory.
Table 6. UNIX Clients
v AIX: Firefox 3.6
v Red Hat 5: Firefox 3.6
v Solaris 10: Firefox 3.6
The following are required browser settings:
v Enable JavaScript
v Enable cookies
Browser requirements for the Launchpad
Web browser requirements for the Launchpad.
The new Launchpad has been tested on the following browsers:
On Solaris:
v Firefox 3.6
On AIX:
v Firefox 3.6
On Linux:
v Firefox 3.6
For information about downloading and installing these browsers, see the
following web sites:
v http://www.mozilla.org/
v http://www-03.ibm.com/systems/p/os/aix/browsers/index.html
Note: You must be a registered user to use this site.
Screen resolution
Recommended screen resolution details.
A screen resolution of 1152 x 864 pixels or higher is recommended for the display
of DataView reports. Some reports may experience rendering issues at lower
resolutions.
Report Studio - Cognos
Report Studio support.
Report Studio is only supported by Microsoft Internet Explorer.
X Emulation
Remote desktop support.
For DataMart GUI access, Tivoli Netcool Performance Manager supports the
following:
v Native X Terminals
v Exceed V 6.2
The following libraries are required in order for Exceed to work with Eclipse:
v libgtk 2.10.1
v libglib 2.12.1
v libfreetype 2.1.10
v libatk 1.12.1
v libcairo 1.2.6
v libxft 2.1.6
v libpango 1.14.0
v Real VNC server 4.0
WebGUI integration
WebGUI version support.
The IBM Tivoli Netcool/OMNIbus Web GUI Integration Guide for Wireline describes
how to integrate IBM Tivoli Netcool/OMNIbus Web GUI with the wireline
component of Tivoli Netcool Performance Manager.
TNPM 1.3.2 supports:
v TIP 2.1 and WebGUI 7.3.1 + Fix Pack x (as per TNPM 1.3.1 support) - for
example, where a customer has multiple Tivoli products in TIP
v TIP 2.2 and WebGUI 7.3.1 + FP1 + FP2 (new support in 1.3.2).
The web browsers supported by WebGUI and Tivoli Netcool Performance Manager
are listed in the following table.
Table 7. Web client browsers supported by WebGUI
v Internet Explorer 7.0 and 8.0: Windows 2003, Windows XP, Windows Vista,
Windows 2008, and Windows 7.
v Mozilla Firefox 3.5 and 3.6: Windows 2003, Windows XP, Windows Vista,
Windows 2008, and Windows 7; SuSE Linux Enterprise Server (SLES) 10 and 11;
Red Hat Enterprise Linux (RHEL) 4, 5, and 6; Solaris 9 and 10.
Note: When using Windows Internet Explorer, IBM recommends that you have
available at least 1GB of memory.
Microsoft Office Version
Microsoft Office support.
The Tivoli Netcool Performance Manager DataView Scheduled Report option
generates files compatible with Microsoft Office Word 2002 or higher.
Chapter 3. Installing and configuring the prerequisite software
Installing and configuring the software required by Tivoli Netcool Performance
Manager.
This chapter describes how to install and configure the prerequisite software for
Tivoli Netcool Performance Manager.
Overview
Before beginning the Tivoli Netcool Performance Manager installation, you must
install the prerequisite software listed in the IBM Tivoli Netcool Performance
Manager: Configuration Recommendations Guide.
The required software includes:
v Oracle server: To use Oracle with Tivoli Netcool Performance Manager, you
must install Oracle as described in this chapter - do not use a separate Oracle
installation method provided by Oracle Corporation.
v Oracle client: You must install Oracle client software on each system where you
plan to install a Tivoli Netcool Performance Manager component, except for the
system where you installed the Oracle server.
When you complete the steps in this chapter, the Oracle server and client will be
installed and running, with tablespaces sized and ready to accept the installation
of a Tivoli Netcool Performance Manager DataMart database. You can
communicate with Oracle using the SQLPlus command-line utility.
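For example, using the default values assumed later in this chapter (DBA login
system, password manager, TNS name PV), a simple connectivity check looks like
the following; substitute your own values if you change any of the defaults:
$ sqlplus system/manager@PV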
The steps in this chapter use IBM-provided installation scripts to install and
configure the Oracle database from the Oracle distribution. For use with Tivoli
Netcool Performance Manager, you must install Oracle as described in this
chapter. Do not use a separate Oracle installation method provided by Oracle
Corporation. You should obtain the official Oracle distribution from your
edelivery site (after purchase of an Oracle license). See the IBM Tivoli Netcool
Performance Manager: Configuration Recommendations Guide for recommendations
when purchasing a license from Oracle.
Note: The Tivoli Netcool Performance Manager script used to install Oracle is
platform-independent and can be used to install on Solaris, AIX, or Linux,
regardless of the operating system distribution media.
v OpenSSH: You must install and configure OpenSSH before installing Tivoli
Netcool Performance Manager. For details, see Appendix E, Secure file transfer
installation, on page 195. Linux systems require the installation of VSFTP (Very
Secure FTP).
v Web browser: The launchpad requires a Web browser. IBM recommends using
Mozilla with the launchpad. For the complete list of supported browsers, see the
IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide
document.
v Java: Java is used by DataMart, DataLoad, and the technology packs. You must
ensure you are using the IBM JRE and not the RHEL JRE. The IBM JRE is
supplied with the Topology Editor or with TIP. To ensure you are using the right
JRE you can either:
Set the JRE path to conform to that used by the Topology Editor. Do this
using the following commands (using the default location for the primary
deployer):
PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
export PATH
For a remote server, that is, one that does not host the primary deployer, you
must download and install the required JRE, and set the correct JRE path. See
the IBM Tivoli Netcool Performance Manager: Configuration Recommendations
Guide document for JRE download details.
Note: See the IBM Tivoli Netcool Performance Manager: Configuration
Recommendations Guide document for the complete list of prerequisite software and
their supported versions.
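A quick way to confirm which JRE is first on the PATH is to check which java
binary resolves and inspect its version banner (an IBM JRE identifies itself as an
IBM build, whereas the Red Hat-supplied JRE does not):
$ which java
$ java -version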
Supported platforms
The platforms supported by Tivoli Netcool Performance Manager.
Refer to the following summary for platform requirement information.
All Tivoli Netcool Performance Manager components (Database, DataView,
DataChannel, DataLoad, and DataMart) are supported on the following platforms:
v Solaris 10 10/08 update 6, 64-bit
v AIX 6.1, 64-bit
v RHEL 5.5, 64-bit
Pre-Installation setup tasks
Before installing the prerequisite software, perform the tasks outlined in this
section.
Setting up a remote X Window display
Setting Up a Remote X Window Display
About this task
For most installations, it does not matter whether you use a Telnet, rlogin, Xterm,
or Terminal window to get to a shell prompt.
Some installation steps must be performed from a window that supports the X
Window server protocols. This means that the steps described in later chapters
must be run from an Xterm window on a remote system or from a terminal
window on the target system's graphical display.
Note: See the IBM Tivoli Netcool Performance Manager: Configuration
Recommendations Guide document for the list of supported X emulators.
Specifying the DISPLAY environment variable
If you use an X Window System shell window such as Xterm, you must set the
DISPLAY environment variable to point to the IP address and screen number of
the system you are using.
About this task
Command sequences in this manual do not remind you at every stage to set this
variable.
If you use the su command to become different users, be especially vigilant to set
DISPLAY before running X Window System-compliant programs.
Procedure
In general, set DISPLAY as follows:
$ DISPLAY=Host_IP_Address:0.0
$ export DISPLAY
To make sure the DISPLAY environment variable is set, use the echo command:
$ echo $DISPLAY
Disabling access control to the display
If you encounter error messages when trying to run X Window System-based
programs, you might need to temporarily disable X Window System access control
so an installation step can proceed.
About this task
To disable access control:
Procedure
1. Set the DISPLAY environment variable.
2. Enter the following command when logged in as root:
# /usr/openwin/bin/xhost +
Note: Disabling access control is what enables access to the current machine
from X clients on other machines.
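When the installation step that needed open access has completed, you can restore
access control with the corresponding command (use the xhost binary appropriate
to your platform if it is not under /usr/openwin/bin):
# /usr/openwin/bin/xhost -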
Changing the ethernet characteristics
Before installing Tivoli Netcool Performance Manager, you must force both the
ethernet adapter and the port on the switch to 100 full duplex mode -
autonegotiate settings are not enough.
AIX systems
Changing ethernet characteristics on AIX.
About this task
To change the setting to full duplex:
Note: If the AIX node is a virtual partition, you must perform these steps on the
virtual I/O server (including the reboot).
Procedure
1. Using the System Management Interface Tool (SMIT), navigate to Devices >
Communication > Ethernet Adapter > Adapter > Change/Show
Characteristics of an Ethernet Adapter.
2. Select your ethernet adapter (the default is ent0).
3. Change the setting Apply change to DATABASE only to yes.
4. Set the port on the switch or router that the AIX node is plugged into to
100_Full_Duplex.
5. Reboot your system.
Solaris systems
This section describes how to set a network interface card (NIC) and a BGE
network driver to full duplex mode.
NIC:
Change the NIC to full duplex mode on Solaris systems
About this task
To change the NIC to full duplex mode:
Procedure
1. Determine which type of adapter you have by running the following command:
ifconfig -a
2. To determine the current settings of the NIC, run the command ndd -get
/dev/hme with one of the following parameters:
Command Parameter Description
link_status Determines whether the link is up
v 1 - Up
v 0 - Down
link_speed Determines the link speed
v 0 - 10Mb/sec
v 1 - 100Mb/sec
link_mode Determines the duplex mode
v 0 - Half duplex
v 1 - Full duplex
adv_autoneg_cap Determines whether auto negotiation is on
v 0 - Off
v 1 - On
For example:
ndd -get /dev/hme link_status
In these commands, /dev/hme is your NIC; you might need to substitute your
own /dev/xxx.
3. To set your NIC to 100Mb/s with full duplex for the current session, run the
following commands:
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_autoneg_cap 0
However, these commands change the NIC settings for the current session only.
If you reboot, the settings will be lost. To make the settings permanent, edit the
/etc/system file and add the following entries:
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1
4. Verify that your NIC is functioning as required by rerunning the commands
listed in Step 2.
BGE network driver:
Change a BGE network driver to full duplex mode.
About this task
To change a BGE network driver to full duplex mode.
Procedure
1. To determine the link speed and current duplex setting, run the following
command:
% kstat bge:0 | egrep 'speed|duplex'
The output is similar to the following:
duplex full
ifspeed 100000000
link_duplex 2
link_speed 100
The parameters are as follows:
Parameter Description
link_duplex
Determines the duplex setting
v 1 - Half-duplex
v 2 - Full duplex
link_speed
Determines the link speed
v 10 - 10 Mb/sec
v 100 - 100 Mb/sec
v 1000 - 1 Gb/sec
2. Create a file named bge.conf in the /platform/`uname -i`/kernel/drv directory
(for example, /platform/SUNW,Sun-Fire-V210/kernel/drv/bge.conf).
3. Add the following lines to the file:
speed=100;
full duplex=1;
4. Reboot the machine to have your changes take effect.
Linux systems
Enabling 100 full duplex mode on Linux systems.
About this task
Use your primary network interface to enable 100 full duplex mode.
To check if full duplex is enabled:
Procedure
1. Enter the following command:
# dmesg | grep -i duplex
This should result in output similar to the following:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
2. Confirm the output contains the words:
Full Duplex
If this is not contained within the output, you must enable full duplex mode.
The example output resulting from the command executed in step 1:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
indicates that the primary network interface is eth0.
The actions specified in the following process presume that your primary
network interface is eth0.
Enabling full duplex mode on Linux:
To enable full duplex mode.
Procedure
1. Open the file ifcfg-eth0, which is contained in:
/etc/sysconfig/network-scripts/
2. Add the ETHTOOL_OPTS setting by adding the following text:
ETHTOOL_OPTS="speed 100 duplex full autoneg off"
Note: The ETHTOOL_OPTS speed setting can be set to either 100 or 1000
depending on speed of connection available 100Mbit/s or 1000Mbit/s
(1Gbit/s).
Adding the pvuser login name
pvuser is the default name used within this document to describe the required
Tivoli Netcool Performance Manager Unix user.
The required user can be given any name of your choosing. However, for the
remainder of this document this user will be referred to as "pvuser".
Decide in advance where to place the home directory of the pvuser login username.
Use a standard home directory mounted on /home or /export/home, as available.
Note: Do not place the home directory in the same location as the Tivoli Netcool
Performance Manager program files. That is, do not use /opt/proviso or any other
directory in /opt for the home directory.
Add the pvuser login name to every system on which you install a Tivoli Netcool
Performance Manager component, including the system hosting the Oracle server.
Adding pvuser to a Standalone Computer
Use the steps in this section to add the pvuser login name to each standalone
computer.
About this task
These steps add the login name only to the local system files on each computer
(that is, to the local /etc/passwd and /etc/shadow files). If your network uses a
network-wide database of login names such as Yellow Pages or Network
Information Services (NIS or NIS+), see Adding pvuser on an NIS-managed
network on page 42.
To add pvuser:
Procedure
1. Log in as root.
2. Set and export the DISPLAY environment variable (see Setting up a remote X
Window display on page 36).
3. If one does not already exist, create a group to which you can add pvuser. You
can create a group with the name of your choice using the following command:
groupadd <group>
where:
v <group> is the name of the new group, for example, staff.
4. At a shell prompt, run the following command:
# useradd -g <group> -m -d <home_dir>/<username> -k /etc/skel -s /bin/ksh <username>
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user, for example,
/export/home/ can be used as the example home directory.
v <username> is the name of the new user. This can be set to any string.
Note: For the remainder of this document this user will be referred to as
pvuser.
5. Set a password for pvuser:
# passwd pvuser
The system prompts you to specify a new password twice. The default pvuser
password assumed by the Tivoli Netcool Performance Manager installer is pv.
This can be set to a password conforming to your organization's standards.
6. Test logging in as pvuser, either by logging out and back in, or with the su
command, such as:
# su - pvuser
Confirm that you are logged in as pvuser with the id command:
$ id
These instructions create a pvuser login name with the following attributes:
v login name: pvuser
v member of group: staff
v home directory: /export/home/pvuser
v login shell: Korn shell (/bin/ksh)
v copy skeleton setup files (.profile, and so on) from this directory: /etc/skel
Note: The pvuser account must have write access to the /tmp directory.
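For example, using the default values shown in this section (group staff, home
directory under /export/home, and the Korn shell), the full sequence is:
# groupadd staff
# useradd -g staff -m -d /export/home/pvuser -k /etc/skel -s /bin/ksh pvuser
# passwd pvuser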
Multiple computer considerations
If you are creating the pvuser login name on more than one computer in your
network, avoid confusion by specifying the same user ID number for each pvuser
login name on each computer.
When you have created the first pvuser login name, log in as pvuser and run the id
command. The system responds with the user name and user ID number (and the
group name and group ID number). For example:
$ id
uid=1001(pvuser) gid=10(staff)
When you create the pvuser login name on the next computer, add the -u option to
the useradd command to specify the same user ID number:
# useradd -g <group> -m -d <home_dir>/pvuser -k /etc/skel -s /bin/ksh -u 1001 pvuser
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user, for example, /export/home/
can be used as the example home directory.
v <username> is the name of the new user. This can be set to any string.
Adding pvuser on an NIS-managed network
Adding pvuser on an NIS-Managed Network.
If your site's network uses NIS or NIS+ to manage a distributed set of login names,
see your network administrator to determine whether pvuser should be added to
each Tivoli Netcool Performance Manager computer's local setup files, or to the
network login name database.
Setting the resource limits (AIX only)
On AIX systems, it is possible that the default user process limits are not adequate
for Tivoli Netcool Performance Manager.
About this task
If the default user process limits are not adequate for Tivoli Netcool Performance
Manager, do the following.
To set the user process limits on AIX systems:
Procedure
1. Log in as root.
2. Change your working directory to /etc/security by entering the following
command:
# cd /etc/security
3. Make a backup copy of the limits file by entering the following command:
# cp limits limits.ORIG
4. Using a text editor, open the limits file and set the following values:
default:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = 65536
        stack = 65536
        nofiles = 2000
        totalProcesses = 800
Note: Apply these settings to every AIX system running a Tivoli Netcool
Performance Manager program: the database server, DataLoad servers,
DataChannel servers, and DataMart servers.
5. Write and quit the file.
6. After modifying the settings, log off every Tivoli Netcool Performance Manager
user and then log in again for the changes to take effect.
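After logging back in, you can confirm that the new limits are in effect for a given
user, for example:
$ ulimit -a
The values reported should reflect the settings made in the limits file.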
Set the system parameters (Solaris only)
Before you install the Oracle server, you must set the Solaris shared memory and
semaphore parameters.
About this task
If using Solaris 10 containers, typically the variable in /etc/system is set only in
the root container, and project variables are set for each container. Refer to Solaris
10 container documentation for further information.
When you install Tivoli Netcool Performance Manager, you specify the size of the
deployment - small, medium, or large. The value you select affects the Oracle
PROCESSES parameter. You must set the appropriate kernel parameter level in
order for the deployment to work properly.
Note: These entries are only for the system running the Oracle server, not the
Oracle client.
To set Solaris system parameters:
Procedure
1. Set the noexec_user_stack parameter in the system file:
a. Log in as root.
b. Change to the /etc directory:
# cd /etc
c. Create a backup of the file named system, and open the file with a text
editor.
d. Set the parameter noexec_user_stack to 1, by adding the following line at
the bottom of the file:
set noexec_user_stack=1
e. Save and exit the system file.
2. Set resource controls correctly.
The parameters affected by the deployment size are project.max-sem-ids,
process.max-sem-nsems, project.max-shm-memory, and project.max-shm-ids.
These parameters define the maximum size of a semaphore set and the
maximum number of semaphores in the system.
a. In Solaris 10, kernel parameters are replaced by resource controls (see
section 2.6, Configuring Kernel Parameters, in the Oracle installation
documentation). See also Oracle Metalink ID 169706.1, Oracle Database on
Unix AIX, HP-UX, Linux, Mac OS X, Solaris, Tru64 Unix Operating Systems
Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2),
which lists Solaris requirements.
b. Oracle recommends the following values, noting that they are guidelines
and should be tuned for production database systems. If you use a custom
configuration, you must change the values of the parameters to the
appropriate level.
Resource Control - Recommended Value
v project.max-sem-ids: 100
v process.max-sem-nsems: 256
v project.max-shm-memory: 429496725
v project.max-shm-ids: 100
c. Log in as the Oracle user (for example, oracle).
d. To find the current kernel parameter settings, check the project id, and then
check the resource control settings for that project id:
$ id -p
uid=4074(oracle) gid=9999(dba) projid=3(default)
$ prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 1.95GB - deny -
system 16.0EB max deny -
$ prctl -n project.max-sem-ids -i project 3
project: 1: user.root
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-sem-ids
privileged 128 - deny -
system 16.8M max deny -
$ prctl -n project.max-shm-ids -i project 3
project: 3: default
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-ids
privileged 128 - deny -
system 16.8M max deny -
$ prctl -n process.max-sem-nsems $$
process: 12134: bash
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
process.max-sem-nsems
privileged 600 - deny -
system 32.8M max deny -
e. To change values, check the Solaris documentation for complete information
on projects. Here is one example, which sets the value of
project.max-shm-memory to 4GB. Log in as root and add a project, attached
to the dba group (assuming the oracle user is part of the dba group), and
set the value:
# projadd -p 100 -G dba -c "Oracle Project" \
-K "project.max-shm-memory=(privileged,4G,deny)" group.dba
f. Check by logging back in as oracle, checking with id -p that the projid is
now the new project number 100, and run prctl again to check that the
max-shm-memory value has been updated.
$ id -p
uid=4074(oracle) gid=9999(dba) projid=100(group.dba)
bash-3.00$ prctl -n project.max-shm-memory -i project 100
project: 100: group.dba
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 4.00GB - deny -
system 16.0EB max deny -
3. Reboot your system before continuing to the next step.
Enable FTP on Linux systems (Linux only)
By default, FTP is not enabled on Linux systems.
About this task
To enable FTP on your Linux host:
Procedure
1. Log in as root:
2. Change to the following directory:
# cd /etc/init.d
3. Run the following command:
# ./vsftpd start
Disable SELinux (Linux only)
Tivoli Netcool Performance Manager will not install properly if the SELinux
security policy is set to "enforcing".
About this task
To change the SELinux security policy from being set to "enforcing", you must:
Procedure
1. Open the SELinux config file for editing, for example:
# vi /etc/selinux/config
2. Change the line in the file.
SELINUX=enforcing
To:
SELINUX=disabled
Note: You can also set the SELINUX setting to permissive. Setting SELINUX to
permissive will result in a number of warnings at install time, but it will allow
the installation code to run.
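After editing the file and rebooting, you can confirm the active SELinux mode;
getenforce reports Enforcing, Permissive, or Disabled:
# getenforce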
Set the kernel parameters (Linux only)
Required changes to Linux kernel parameters.
About this task
The following steps have been taken from the Metalink Note 421308, which is
available from the Oracle website.
Procedure
1. Add the following lines to the file /etc/sysctl.conf:
v kernel.shmall = physical RAM size / pagesize
For most systems, this will be the value 2097152.
See Note 301830.1, which is available from the Oracle website, for more
information.
v kernel.shmmax = 1/2 of physical RAM, but not greater than 4GB.
This would be the value 2147483648 for a system with 4GB of physical RAM.
v kernel.shmmni = 4096
v kernel.sem = 250 32000 100 128
v fs.file-max = 512 x processes (for example 65536 for 128 processes)
v net.ipv4.ip_local_port_range = 9000 65500
v net.core.rmem_default = 262144
v net.core.rmem_max = 2097152
v net.core.wmem_default = 262144
v net.core.wmem_max = 1048576
v fs.aio-max-nr = 1048576
2. To effect these changes, execute the command:
# sysctl -p
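You can spot-check individual values afterward, for example:
# sysctl kernel.shmmax kernel.sem fs.aio-max-nr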
Replace the native tar utility with GNU tar (AIX 7.1 only)
Replace the native AIX 7.1 tar utility with GNU tar.
Before you begin
Check the version of the tar command you have at present by entering the
following command:
# tar --version
For a GNU tar utility the output would conform to the following:
tar (GNU tar) 1.14
Copyright (C) 2004 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
If the resulting output does not indicate a GNU tar utility then perform the
following steps.
Procedure
1. Find the native AIX tar location.
# which tar
/usr/bin/tar
2. Move the native binary tar command:
# cd /usr/bin
# mv tar tar_
3. Install the GNU tar, which can be obtained from the AIX toolbox site:
http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/
download.html
Set the install location to be something similar to /opt/freeware/bin/tar
4. Create a gnu tar soft link:
ln -s /opt/freeware/bin/tar /usr/bin/tar
5. Validate that the tar command is gnu tar:
# tar --version
Resulting output should be as stated in prerequisite step.
Install a libcrypto.so
For full SNMPv3 support, SNMP DataLoad must have access to the libcrypto.so.
About this task
Note: As libcrypto.so is delivered as standard on Linux platforms, steps 1 and 2
are not required if you are running on Linux.
For each new and existing SNMP DataLoad, you must perform the following steps.
Procedure
1. Install the OpenSSL package. This package can be downloaded from
http://www.openssl.org/.
2. As root, extract and install the libcrypto.so file using the following code:
# cd /usr/lib
# ar -xv ./libcrypto.a
# ln -s libcrypto.so.0.9.8 libcrypto.so
3. Update the dataload.env file so that the LD_LIBRARY_PATH (on Solaris & Linux)
or LIBPATH (on AIX) environment variables include the path:
/usr/lib
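The exact contents of dataload.env depend on your installation, but the update
generally amounts to appending /usr/lib to the library path, along the following
lines (shown for Solaris and Linux; use LIBPATH instead of LD_LIBRARY_PATH on
AIX):
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib
export LD_LIBRARY_PATH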
What to do next
Check that the variable has been set by doing the following:
1. Open a fresh shell.
2. Check the dataload.env file.
3. Restart (bounce) the SNMP DataLoad collector.
Upon startup, with a valid library, the collector will log the following log
messages:
INFO:CRYPTOLIB_LOADED Library libcrypto.so (OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008, 0x90802f) has been loaded.
INFO:SNMPV3_SUPPORT_OK Full SNMPv3 support Auth(None,MD5,SHA-1) x Priv(None,DES,AES) is available.
Deployer pre-requisites
Minimum filesystem specification and pre-requisites for the Deployer script.
The Deployer checks for the items described under the following headings.
You should ensure that all elements are installed before running the deployer.
Operating system check
The Deployer will fail if the required patches listed in this file are not installed.
The Deployer checks the operating system version and verifies that the
minimum required packages are installed.
For more information on the complete set of requirements for installation on Linux,
AIX and Solaris, please consult the IBM Tivoli Netcool Performance Manager:
Configuration Recommendations Guide.
Mount points check
The Deployer assesses the available filesystem space for the defined mount point
locations.
The space requirements are calculated based on:
v The defined topology: The more components added to a single server the more
space is required on that server.
v The component install location: Any directory set as the install location for a
component will require sufficient space to store that component. The default
install directory is /opt. You do not have to use the default. This can be set to
any directory location that has sufficient space.
v Remote installation of components: If components are being installed remotely,
sufficient space must be assigned in the /tmp directory to store the software
before it can be transferred to the remote servers.
For a statement of minimum space requirements per server in a distributed install
or for a single server in a proof of concept install, please consult the IBM Tivoli
Netcool Performance Manager: Configuration Recommendations Guide documentation.
Authentication between distributed servers
Why you must authenticate between distributed servers.
If you are performing an installation that has a topology covering a set of
distributed servers, ensure that RSA keys have been cached between servers for
root and pvuser prior to installation. If there are new servers that form part of the
installation topology which have not been authenticated, the installation will fail.
Note: pvuser is the required Tivoli Netcool Performance Manager Unix user.
Adding this user to your system is described in Adding the pvuser login name
on page 40.
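As an illustration, one minimal way to cache a host key for a user is simply to make
an initial SSH connection and accept the fingerprint (the host name is a placeholder;
repeat for root and pvuser against every other server in the topology):
$ ssh pvuser@remote_server hostname
Answer yes when prompted so that the key is stored in ~/.ssh/known_hosts.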
Downloading the Tivoli Netcool Performance Manager distribution to
disk
To download the Tivoli Netcool Performance Manager distribution to a directory
on a target server's hard disk:
About this task
Whether you are installing the product from an electronic image or from DVD/CD,
you must copy the distribution to a writeable location on the local filesystem
before beginning the installation.
To download the Tivoli Netcool Performance Manager distribution to a directory
on the host from which you intend to run the Topology Editor:
Procedure
1. On the target host, log in as the Tivoli Netcool Performance Manager user, such
as pvuser.
2. Create a directory to hold the contents of your Tivoli Netcool Performance
Manager distribution. For example:
$ mkdir /var/tmp/cdproviso
Note: Any further references to this directory within the install will be made
using the token <DIST_DIR>.
You will run a variety of scripts and programs from directories residing in the
directory created on the hard drive, including:
v Oracle configuration script
v Pre-installation script
v Installation script
v Tivoli Netcool Performance Manager setup program
3. Download the Tivoli Netcool Performance Manager distribution to the host
directory created in the previous step and expand the contents of the
distribution package.
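For example, assuming a hypothetical archive name (the actual file name depends
on your platform and media), expanding the distribution with GNU tar looks like
this; on AIX, make sure GNU tar is first in the PATH, as described earlier in this
chapter:
$ cd /var/tmp/cdproviso
$ gunzip -c tnpm-1.3.2-wireline.tar.gz | tar -xf -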
Downloading Tivoli Common Reporting to disk
To download the TCR distribution to a directory on a target server's hard disk.
About this task
The TCR driver must be untarred so it can be used by the Tivoli Netcool
Performance Manager Common Installer. The following process ensures the user is
required to specify the TCR media location only once:
Procedure
1. Create a folder named TCR as a peer to the other Tivoli Netcool Performance
Manager Components, that is, DataView, DataChannel, etc. For example:
<DIST_DIR>/proviso/SOLARIS/TCR
2. Extract the TCR 2.1 inside this folder.
Should you decide not to extract the tar as a peer to the other components, a
TCR folder must still be created so that the path to the TCR install.sh remains:
./TCR/TCRInstaller/install.sh
Note: If the user extracts the tar directly into the same root location as the
Tivoli Netcool Performance Manager Components then the TCR launchpad.sh
will overwrite the Tivoli Netcool Performance Manager Installer launchpad.sh,
meaning the launchpad cannot be started for the installer.
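Assuming you extract the archive as a peer to the other components, the sequence
looks like this (the archive file name is hypothetical; <DIST_DIR> is the directory
created earlier):
$ mkdir <DIST_DIR>/proviso/SOLARIS/TCR
$ cd <DIST_DIR>/proviso/SOLARIS/TCR
$ gunzip -c /path/to/tcr-2.1-distribution.tar.gz | tar -xf -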
General Oracle setup tasks
How to install Oracle for use with Tivoli Netcool Performance Manager.
To install Oracle you will need:
v An appropriately sized server with the operating system installed and running
(for the Oracle server).
Note: For a basic overview of the minimum CPU speed, memory size, and disk
configuration requirements for your Tivoli Netcool Performance Manager
installation, see the IBM Tivoli Netcool Performance Manager: Configuration
Recommendations Guide. For more detailed information you can contact IBM
Professional Services.
v The current version of Tivoli Netcool Performance Manager software.
v The downloaded files for the Oracle installation.
v If you are installing Oracle on an AIX system, follow the instructions in
Asynchronous I/O Support (AIX Only) before installing Oracle.
Before installing Oracle, read the setup and password information.
Note: Tivoli Netcool Performance Manager should be installed and run as a
standalone database. It should not be placed on a server that already has a
database as the installation program will likely interfere. The co-hosting of TNPM
will also affect performance in unknown ways. If a co-host is required then the
Customer should seek out Professional Services for support.
Specifying a basename for DB_USER_ROOT
Tivoli Netcool Performance Manager components use distinct Oracle login names
so that database access can be controlled separately by component, and for
database troubleshooting clarity.
About this task
The Tivoli Netcool Performance Manager installation generates the appropriate
login names for each Tivoli Netcool Performance Manager subsystem.
Procedure
Provide a basename, which the installation retains as the variable
DB_USER_ROOT.
Note: This is not an operating system environment variable, but a variable used
internally by the installer.
The default DB_USER_ROOT value is PV. IBM strongly encourages you to retain
the default value.
Results
Oracle login names are generated from the DB_USER_ROOT basename by
appending a function or subsystem identifier to the basename, as in the following
examples:
v PV_ADMIN
v PV_INSTALL
v PV_LDR
v PV_CHANNEL
v PV_COLL
v PV_CHNL_MANAGER
v PV_GUI
In addition, separate Oracle login names are generated for each Tivoli Netcool
Performance Manager DataChannel and subsystem, identified by an appended
channel number, as in the following examples:
v PV_CHANNEL_01
v PV_CHANNEL_02
v PV_LDR_01
v PV_LDR_02
Specifying Oracle login passwords
For each component that requires an Oracle login name, you must provide a
password for that login name.
About this task
In every case, the installer uses the default Oracle password, PV.
Oracle passwords are not case-sensitive, so PV and pv are the same password. The
default password is usually shown in uppercase, but is sometimes shown in
lowercase. In both cases, the same default password is intended.
Procedure
You can retain the default password, or enter passwords of your own according to
your site password standards.
You should use the same password for all Tivoli Netcool Performance Manager
subsystem Oracle login names. If you use different passwords for each login name,
keep a record of the passwords you assign to each login name.
Results
The Tivoli Netcool Performance Manager installer uses PV for three default values,
as described in Table 5.
Table 5: Uses of PV as Default Values
v PV - Default value of the DB_USER_ROOT variable, the basename on which
Oracle login names are generated.
v PV or pv - Default password for all Oracle login names.
v PV - Default Oracle database name, also called the Oracle TNS name.
Recommendation: In all instances, use the default value PV, unless your site has an
explicit naming standard or an explicit password policy.
What to do next
Note: If you use a non-default value, you must remember to use the same value in
all installation stages. For example, if you set your Oracle TNS name to PROV
instead of PV, you must override the default PV entry in all subsequent steps that
call for the TNS name.
Assumed values
The steps in this chapter assume the following default values:
v Hostname of the Oracle server: delphi
v Oracle server program files installed in: /opt/oracle
v ORACLE_BASE = /opt/oracle
v Operating system login name for Oracle user: oracle
Note: The default name created is oracle. However, you can set another name
for the Oracle user.
v Password for Oracle user: oracle
v ORACLE_SID = PV
v TNS name for Tivoli Netcool Performance Manager database instance: PV
v Oracle installed in (ORACLE_HOME =): /opt/oracle/product/11.2.0/
Note: The value of ORACLE_HOME cannot contain soft links to other directories
or filesystems. Be sure to specify the entire absolute path to Oracle. Tivoli Netcool
Performance Manager expects an Optimal Flexible Architecture (OFA) structure
where ORACLE_HOME is a sub-directory to ORACLE_BASE.
v Oracle login name for database administrator (DBA): system
v Password for Oracle DBA login name: manager
v DB_USER_ROOT = PV
v Path for Oracle data, mount point 1: /raid_2/oradata
v Path for Oracle data, mount point 2: /raid_3/oradata
Note: If your site has established naming or password conventions, you can
substitute site-specific values for these settings. However, IBM strongly
recommends using the default values the first time you install Tivoli Netcool
Performance Manager. See Specifying a basename for DB_USER_ROOT on page
50 for more information.
Install Oracle 11.2.0.2 server (64-bit)
Instructions on how to install the Oracle 11.2.0.2 server (64-bit).
To install Oracle 11.2.0.2 server (64-bit), you can use the scripts provided as part of
Tivoli Netcool Performance Manager.
Download the Oracle distribution to disk
The Oracle installation files must be in place before you can begin the installation
of Oracle.
About this task
Note: Oracle 11.2.0.2 must be installed into a new ORACLE_HOME. Make sure
that you install Oracle into a different ORACLE_HOME than the one used for
Oracle 10.2.0.4.
Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the Oracle distribution. For example:
# mkdir /var/tmp/oracle11202
4. Download the Oracle files to the /var/tmp/oracle11202 directory.
5. Extract the Oracle distribution files that now exist in the /var/tmp/oracle11202
directory.
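For example, if the Oracle distribution was delivered as zip archives, you can extract
each archive in place (the archive names shown here are placeholders; substitute the
names of the files you actually downloaded):
# cd /var/tmp/oracle11202
# unzip <oracle_archive_1>.zip
# unzip <oracle_archive_2>.zip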
Results
The directory that you created and to which the Oracle 11.2.0.2 distribution was
downloaded is referred to from now on as <ORA_DIST_DIR>.
Before beginning this task, make sure that you have downloaded the Tivoli Netcool
Performance Manager distribution to disk. The directory that you created and to
which the Tivoli Netcool Performance Manager distribution was downloaded is
referred to from now on as <DIST_DIR>.
What to do next
Before you proceed to the next step, make sure that you obtain the upgrade
instructions provided by Oracle. The instructions contain information on
performing steps required for the upgrade that are not documented in this guide.
See your database administrator to determine whether there are any
company-specific requirements for installing Oracle in your environment.
Verify the required operating system packages
Before installing the Oracle server, make sure that all required packages are
installed on your system.
Procedure
1. Make sure all the required operating system packages and patches are installed on
your system. All required packages and patches are specified in the IBM Tivoli
Netcool Performance Manager: Configuration Recommendations Guide.
2. If these packages are not on your system, see the relevant operating system
Installation Guide for instructions on installing supplementary package
software.
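For example, once you know the required package names from the Configuration
Recommendations Guide, you can confirm that an individual package is present with
commands such as the following (the names shown are placeholders):
Solaris systems:
# pkginfo <package_name>
AIX systems:
# lslpp -l <fileset_name>
Linux systems:
# rpm -q <package_name>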
Run the Oracle server configuration script
In this step, you set up the Oracle environment using the script provided with the
Tivoli Netcool Performance Manager DataMart files on the Tivoli Netcool
Performance Manager distribution.
About this task
This script automatically creates the following configuration:
v Adds the dba and oinstall groups to /etc/group
v Adds the login name oracle, whose primary group membership is dba and
secondary group membership is oinstall.
Note: You must use the same Oracle username across all instances of Oracle
Client and Server throughout your Tivoli Netcool Performance Manager system.
v Creates the Oracle directory structure
v Creates startup and shutdown scripts for Oracle server processes
Note: If you have not created the oracle user, the script creates this user for you,
and ORACLE_BASE is set as the user home directory. If you would prefer to use a
different home directory for the oracle user, create the oracle user before running
the script. The script does not create an oracle user if one already exists.
Note: It is likely you have already created the dba and oinstall groups and the
oracle user. However, this script must still be run to create the required Oracle
directory structure.
To configure the Oracle installation environment using the script:
Procedure
1. As root, set the ORACLE_BASE environment variable to point to the top-level
directory where you want the Oracle server files installed. The default
installation directory is /opt/oracle.
For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: The script places this variable into the oracle login account's .profile
file.
To check that the variable is set correctly, enter the following command:
# env | grep ORA
2. Change to the following directory:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in Download the Oracle
distribution to disk on page 53.
3. Run the Oracle configuration script by entering the following command:
# ./configure_ora
The following screen is displayed:
--------------------------------------------------
Setting the Oracle environment
<Current Date>
--------------------------------------------------
OS ........... : [ SunOS 5.10 Generic ]
Host ......... : [ delphi ]
Logname ...... : [ root ]
ORACLE_BASE .. : [ /opt/oracle ]
DBA group ................. : [ dba ]
OUI Inventory group ....... : [ oinstall ]
Oracle Software owner ..... : [ oracle ]
Configure Oracle release .. : [ 11.2.0 ]
Menu :
1. Modify Oracle software owner.
2. Next supported release
3. Check environment.
0. Exit
Choice:
4. Type 3 at the Choice prompt and press Enter.
The script creates the dba and oinstall groups and the ORACLE_BASE directory,
unless they exist:
Checking environment...
Checking for group [ dba ] --> Created.
Checking for group [ oinstall ] --> Created.
Checking ORACLE_BASE
** WARNING
** ORACLE_BASE directory does not exist.
** [ /opt/oracle ]
**
** Create it ? (n/y) y
5. Type y and press Enter.
The script creates the /opt/oracle directory and continues as follows:
Checking for user [ oracle ]
** WARNING
** User [ oracle ] does not exist.
**
** Create it locally ? (n/y) y
6. Type y and press Enter.
The script creates the oracle user and continues as follows:
--> Created.
Checking for oracle directory tree :
[ /opt/oracle/product ] --> Created.
[ /opt/oracle/product/11.2.0 ] --> Created.
[ /opt/oracle/product/11.2.0/dbs ] --> Created.
[ /opt/oracle/admin ] --> Created.
[ /opt/oracle/admin/skeleton ] --> Created.
[ /opt/oracle/admin/skeleton/lib ] --> Ok.
[ /opt/oracle/admin/skeleton/lib/libpvmextc.so ] --> Created.
[ /opt/oracle/admin/skeleton/lib/libmultiTask.so ] --> Created.
[ /opt/oracle/admin/skeleton/lib/libcmu.so ] --> Created.
[ /opt/oracle/admin/skeleton/bin ] --> Ok.
[ /opt/oracle/admin/skeleton/bin/snmptrap ] --> Created.
[ /opt/oracle/local ] --> Created.
Checking for oracle .profile file --> Created.
Checking for dbora file --> Created.
/etc/rc0.d/K10dbora link --> Created.
/etc/rc1.d/K10dbora link --> Created.
/etc/rc2.d/S99dbora link --> Created.
Checking for dbora configuration files :
/var/opt/oracle/oratab --> Created.
/var/opt/oracle/lsnrtab --> Created.
Press Enter to continue...
7. Press the Enter key to continue. The main screen is refreshed.
8. Type 0 and press Enter to exit the script.
Note: You must set a password for the oracle login name.
Structure created by the configure_ora script
The script creates the Oracle directory structure.
The following example shows the directory structure created for Oracle, where
ORACLE_BASE was set to /opt/oracle:
/opt/oracle/product
/opt/oracle/product/11.2.0
/opt/oracle/product/11.2.0/dbs
/opt/oracle/admin
/opt/oracle/admin/skeleton
/opt/oracle/admin/skeleton/bin
/opt/oracle/local
The script creates the following setup files:
v /etc/init.d/dbora, which starts the Oracle Listener and database server
automatically on each system boot
v Symbolic links to /etc/init.d/dbora in /etc/rc0.d, /etc/rc1.d, and /etc/rc2.d
v Oracle configuration files /var/opt/oracle/oratab and lsnrtab
v A .profile file for the oracle user containing the following lines:
# -- Begin Oracle Settings --
umask 022
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin
EXTPROC_DLLS=ONLY:${LD_LIBRARY_PATH}/libpvmextc.so
export PATH ORACLE_BASE ORACLE_HOME NLS_LANG
export ORA_NLS33 LD_LIBRARY_PATH TNS_ADMIN
export EXTPROC_DLLS
# -- End Oracle Settings --
Note the following points:
v The value of ORACLE_HOME must not contain soft links to other directories or file
systems. Be sure to specify the entire absolute path to Oracle.
v The ORACLE_SID variable is added to this file at a later stage.
Set a password for the Oracle login name
You must assign a password for the oracle login name to maintain system security.
About this task
The configure_ora script you ran in the previous section creates the oracle login
name. You must assign a password for the oracle login name to maintain system
security, and because subsequent installation steps expect the password to be
already set.
To set a password:
Procedure
1. Log in as root.
2. Enter the following command:
# passwd oracle
3. Enter and re-enter the password (oracle, by default) as prompted. The password
is set.
Run the preinstallation script
Run the preinstallation script to verify your readiness to install Oracle.
Procedure
1. As root, change directory using the following command:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance/ora_installer
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in Download the Oracle
distribution to disk on page 53.
2. Set the ORACLE_BASE environment variable. For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: You must use the same ORACLE_BASE setting that you specified in
Run the Oracle server configuration script.
3. Enter the following command:
# ./pre_install_as_root
The following messages indicate success:
Checking that you are logged in as root --> Ok.
Checking ORACLE_BASE --> Ok.
Checking oraInst.loc file --> Ok.
If the script shows an error, correct the situation causing the error before
proceeding to the next step.
Run the rootpre.sh script (AIX only)
To run the rootpre.sh script.
Procedure
1. Log in as root or become superuser.
2. Set the DISPLAY environment variable.
3. Change to the directory <ORA_DIST_DIR>/database.
Note: The Oracle server (64-bit) distribution is downloaded to
<ORA_DIST_DIR> as per the instructions in the section Download the Oracle
distribution to disk on page 53.
4. Run the following command:
./rootpre.sh
rootpre.sh may return an error like the following:
Configuring Asynchronous I/O....
Asynchronous I/O is not installed on this system.
This error can safely be ignored.
Note: For more information on this Oracle error, see Oracle Metalink Article
282036.1.
Verify PATH and Environment for the Oracle login name
Before proceeding to install Oracle files, make sure the /usr/ccs/bin directory is in
the PATH environment variable for the oracle login name.
About this task
To verify the PATH and environment:
Procedure
1. Log in as oracle. Set and export the DISPLAY environment variable.
If you are using the su command to become oracle, use a hyphen as the second
argument so the oracle user login environment is loaded:
# su - oracle
2. Verify that the environment variable ORACLE_BASE has been set by entering
the following command:
$ env | grep ORA
If the response does not include ORACLE_BASE=/opt/oracle, stop and make sure
the .profile file was set for the oracle user, as described in Run the Oracle
server configuration script on page 54.
3. To verify the path, enter the following command:
$ echo $PATH
The output must show that /usr/ccs/bin is part of the search path. For
example:
/usr/bin:/opt/oracle/product/11.2.0/bin:/usr/ccs/bin
a. If this directory is not in the path, add it by entering the following commands:
$ PATH=$PATH:/usr/ccs/bin
$ export PATH
Install Oracle using the menu-based script
To install the Oracle database files using a menu-based system.
About this task
It is recommended that you use the menu-based script method.
Note: The Oracle installation script provided by IBM is used to install Oracle
server, Oracle client, and upgrade patches to an existing Oracle server or Client
installation.
To install the Oracle server (64-bit) using the menu-based script:
Procedure
1. As oracle, change directory using the following command:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance/ora_installer
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Download the
Oracle distribution to disk on page 53.
2. Enter the following command to start the installer:
$ ./perform_oracle_inst
The installation menu is displayed:
--------------------------------------------------
perform_oracle_inst
Installation of oracle binaries
<Current Date>
--------------------------------------------------
OS ........... : [ SunOS 5.10 Generic ]
Host ......... : [ delphi ]
Logname ...... : [ oracle ]
Install Oracle release .... : [ 11.2.0 ]
Installation type.......... : [ Server ]
Enter the appropriate letter to modify the entries below:
a) ORACLE_BASE .. : [ /opt/oracle/ ]
b) ORACLE_HOME .. : [ /opt/oracle/product/11.2.0/ ]
c) DBA group ..................... : [ dba ]
d) OUI Inventory group ........... : [ oinstall ]
e) Oracle Software owner ......... : [ oracle ]
f) Directory where CDs were copied:
[ ]
Menu :
1. Next supported release
2. Set install type to: Client
3. Perform install
0. Exit
Choice :
3. Verify the following settings:
v The Oracle release number is 11.2.0.
v The Installation type field is Server.
This field cycles between three settings: Server, Client, and Patch. Type 2 at
the Choice prompt and press Enter until Server is displayed.
4. Type f at the Choice prompt and press Enter.
5. At the Choice prompt, enter the full path to the directory containing the
installation files, <ORA_DIST_DIR>. For example:
Choice: f
Enter new value for CD directory: /var/tmp/oracle11202
6. Type a at the Choice prompt and press Enter.
7. At the Choice prompt, enter the full path to the oracle base directory created
for Oracle 11.2.0.2, the default is /opt/oracle.
8. Type b at the Choice prompt and press Enter.
9. At the Choice prompt, enter the full path to the oracle home directory created
for Oracle 11.2.0.2, the default is /opt/oracle/product/11.2.0/.
10. Edit other menu settings as required. For example, if you used non-default
values for ORACLE_BASE or ORACLE_HOME, enter your settings until the menu
shows they correctly point to the directories created for Oracle 11.2.0.2.
11. To begin the Oracle installation, type 3 at the Choice prompt and press Enter.
The installation script checks the environment, then asks if you want to
perform the installation.
12. Type Y at the Choice prompt and press Enter. The installation script starts
installing Oracle and displays a series of status messages.
Note: You can safely ignore any font.properties not found messages in the
output.
When the installation reaches the In Summary Page stage, the installation
slows down significantly while Oracle files are copied and linked.
13. When the installation is complete, messages like the following are displayed:
In End of Installation Page
The installation of Oracle11 Database was successful.
Please check /opt/oracle/oraInventory/logs/silentInstall2011-09-28_04-23-53PM.log
for more details.
The Oracle installation has completed. Please check the
messages above to determine if the install completed
successfully. If you do not see successful completion
messages, consult the install log at:
/opt/oracle/oraInventory/logs
Press C to continue...
Note: Write down the log file location to aid in troubleshooting if there is an
installation error.
14. Type C and press Enter to return to the installation menu.
15. Type 0 and press Enter to exit the installation menu.
What to do next
Note: Review the Oracle installation logs at $ORACLE_BASE/oraInventory/logs for
errors as some fatal errors may be reported in the log, but the native Oracle
installer will report success in standard output.
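For example, one quick way to scan the installation logs for reported problems is to
search them for error keywords (adjust the file name to match the log reported by the
installer):
$ egrep -i "error|fatal" $ORACLE_BASE/oraInventory/logs/*.log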
Run the root.sh script
After successfully running an Oracle client or server installation, you must run the
root.sh script.
About this task
This step is also required after an Oracle patch installation. See Install Oracle
patches.
To run the root.sh script:
Procedure
1. Log in as root or become superuser. Set the DISPLAY environment variable.
2. Change to the directory where Oracle server files were installed. (This is the
directory as set in the ORACLE_HOME environment variable.) For example:
# cd /opt/oracle/product/11.2.0-client32/
3. Run the following command:
./root.sh
Messages like the following are displayed:
Running Oracle11 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/oracle/product/11.2.0-client32/
Enter the full pathname of the local bin directory: [/usr/local/bin]:
4. If the default entry, /usr/local/bin, is writable by root, press Enter to accept
the default value. The default entry might be NFS-mounted at your site so it
can be shared among several workstations and therefore might be
write-protected. If so, enter the location of a machine-specific alternative bin
directory. (You might need to create this alternative directory at a shell prompt
first.) For example, enter /usr/delphi/bin.
5. The script continues as follows:
...
Adding entry to /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#
The script runs to completion with no further prompts.
Set the ORACLE_SID variable
A system identifier (SID) identifies each Oracle database instance for internal
connectivity on the Oracle server itself. (Connectivity from Oracle Clients to the
server is controlled by the TNS names system configured later.) The environment
variable for the system identifier is ORACLE_SID.
About this task
Decide on an SID to use for your Tivoli Netcool Performance Manager database
instance. The assumed default for the Tivoli Netcool Performance Manager
installation is PV. IBM recommends using this default SID unless your site has
established Oracle SID naming conventions.
To set the ORACLE_SID environment variable:
Procedure
1. Log in as oracle.
2. Open the .profile file with a text editor.
3. Add the following line anywhere between the Begin and End Oracle Settings
comment lines:
ORACLE_SID=PV; export ORACLE_SID
For example:
# -- Begin Oracle Settings --
umask 022
ORACLE_BASE=/opt/oracle/
ORACLE_HOME=/opt/oracle/product/11.2.0/
ORACLE_SID=PV; export ORACLE_SID
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/delphi/bin
EXTPROC_DLLS=ONLY:${LD_LIBRARY_PATH}/libpvmextc.so
export PATH ORACLE_BASE ORACLE_HOME NLS_LANG
export ORA_NLS33 LD_LIBRARY_PATH TNS_ADMIN
export EXTPROC_DLLS
# -- End Oracle Settings --
4. Save and exit the .profile file.
5. Enter the following shell command to activate the change to your profile:
$ . ./.profile
6. Make sure the variable was set by entering the following command:
$ env | grep ORACLE_SID
Set automatic startup of the database instance
You must configure your Oracle host to automatically start the Tivoli Netcool
Performance Manager database instance at system startup time.
About this task
To set up automatic startup:
Procedure
1. Log in as oracle.
2. Depending on your operating system, change to the following directory:
Solaris systems:
$ cd /var/opt/oracle
AIX systems:
$ cd /etc
Linux Systems:
$ cd /etc
3. Edit the oratab file with a text editor. The last line of this file looks like this
example:
*:/opt/oracle/product/11.2.0/:N
4. Make the following edits to this line:
v Replace * with the value of ORACLE_SID (PV by default).
v Replace N with Y.
The last line should now be:
PV:/opt/oracle/product/11.2.0/:Y
5. Save and close the oratab file.
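To confirm the change, you can display the line you edited. For example, on a Solaris
system (on AIX and Linux systems the file is /etc/oratab):
$ grep "^PV:" /var/opt/oracle/oratab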
Configure the Oracle listener
How to configure the Oracle Listener.
About this task
Note: Instead of creating the listener.ora file manually, as described in the steps
that follow, you can create it by running the Oracle Net Configuration Assistant
utility. See the Oracle Corporation documentation for information about Net
Configuration Assistant.
The Oracle Listener process manages database connection requests from Oracle
clients to an Oracle server.
To configure the Oracle Listener:
Procedure
1. Log in as oracle.
2. Change to one of the following directories:
$ cd $TNS_ADMIN
or
$ cd /opt/oracle/product/11.2.0/network/admin
3. Copy the sample listener.ora file contained in the /opt/oracle/admin/skeleton/
bin directory:
$ cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.listener.ora listener.ora
Note: By Oracle convention, the keywords in this file are in uppercase but
uppercase is not required.
# listener.ora network configuration file in directory
# /opt/oracle/product/11.2.0/network/admin
LISTENER=
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP) (HOST = yourhost) (PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC) (KEY = EXTPROC))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /opt/oracle/product/11.2.0/)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = PV.WORLD)
(SID_NAME = PV)
(ORACLE_HOME = /opt/oracle/product/11.2.0/)
)
)
4. Using a text editor, change the following:
a. Replace the string yourhost in the line (HOST = yourhost) with the name of
your Oracle server.
Note: Specify the host using the hostname only, do not use the IP address.
b. (optional) Replace the default port number 1521 in the line (PORT = 1521)
with your required port number.
c. Write and quit the file.
5. Depending on your operating system, change to the following directory:
Solaris systems:
$ cd /var/opt/oracle
AIX systems:
$ cd /etc
Linux Systems:
$ cd /etc
6. Edit the lsnrtab file and add a line in the following format to the end of the
file (after the initial comments):
LISTENER:value_of_ORACLE_HOME:Y
For example:
LISTENER:/opt/oracle/product/11.2.0/:Y
In this syntax, LISTENER is the name of the listener process.
7. Write and quit the file.
8. Test that the listener process works correctly by starting it manually using the
following command:
lsnrctl start
(The lsnrctl command also accepts the stop and status arguments.) Look for
a successful completion message.
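For example, after starting the listener you can confirm that it is running and
listening on the expected port by checking its status; the output varies by system,
but it should include the services defined in listener.ora:
$ lsnrctl status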
Configure the Oracle Net client
You must configure an Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance.
About this task
To set up the TNS service names:
Procedure
1. Log in as oracle.
2. Change to one of the following directories:
$ cd $TNS_ADMIN
or
$ cd /opt/oracle/product/11.2.0/network/admin
3. Create the sqlnet.ora file, which will manage Oracle network operations. You
must create an sqlnet.ora file for both Oracle server and Oracle client
installations. Follow these steps:
a. Copy the sample sqlnet.ora file, template.example_tnpm.sqlnet.ora,
contained in the /opt/oracle/admin/skeleton/bin/ directory:
$ cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.sqlnet.ora sqlnet.ora
b. Add the following lines to this file:
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
For example:
# sqlnet.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
Note: If you do not use WORLD as the DEFAULT_DOMAIN value, make
sure you enter the same value for DEFAULT_DOMAIN in both sqlnet.ora
and tnsnames.ora.
c. Write and quit the file.
4. Create the tnsnames.ora file, which maintains the relationships between logical
node names and physical locations of Oracle Servers in the network. You can
do this by copying the existing sample file:
cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.tnsnames.ora tnsnames.ora
Follow these steps:
a. Enter lines similar to the following example, using the actual name of your
Oracle server in the HOST=delphi line and replacing {SID} with PV or your
Oracle SID.
# tnsnames.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
#
# The EXTPROC entry only needs to exist in the
# tnsnames.ora file on the Oracle server.
# For Oracle client installations, tnsnames.ora
# only needs the PV.WORLD entry.
EXTPROC_CONNECTION_DATA.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = IPC)
(KEY = EXTPROC)
)
)
(CONNECT_DATA = (SID = PLSExtProc)
(PRESENTATION = RO)
)
)
PV.WORLD =
(DESCRIPTION =
(ENABLE=BROKEN)
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = delphi)
(PORT = 1521)
)
)
(CONNECT_DATA =
(SERVICE_NAME = PV.WORLD)
(INSTANCE_NAME = PV)
)
)
Note: Indents in this file must be preserved.
Note the following:
v You will use the value in the INSTANCE_NAME field as the TNS entry
when installing Tivoli Netcool Performance Manager DataMart.
v IBM strongly recommends that you include the line (ENABLE=BROKEN)
in the PV.WORLD entry, as shown in the example. This parameter setting
prevents CME processes from hanging in the event that the CME is
disconnected from the database before results are returned to the CME.
v If configuring tnsnames.ora for a server installation, be sure to append
to all entries, including EXTPROC_CONNECTION_DATA, the same
domain suffix that you specified for the NAMES.DEFAULT_DOMAIN
entry in the sqlnet.ora file. That is, append .WORLD to each entry.
b. Write and quit the file.
5. Test the Oracle Net configuration by entering a command with the following
syntax:
tnsping Net_service_name.domain 10
For example:
$ tnsping PV.WORLD 10
Look for successful completion messages (OK).
To test without using the domain suffix, enter a command with the following
syntax:
tnsping Net_service_name 10
For example:
$ tnsping PV 10
Note: If either test is not successful, check your configuration and retest.
Install Oracle 11.2.0.2 client (32-bit)
Instructions on how to install the Oracle 11.2.0.2 client (32-bit).
The Oracle client should be installed to all servers hosting Tivoli Netcool
Performance Manager components, except for the server hosting the Tivoli Netcool
Performance Manager database.
You must specify a different directory path for the Oracle 11.2.0.2 client (32-bit)
from the directory specified when installing the Oracle 10.2.0.4 client (32-bit). For
example, assuming that Oracle 10.2.0.4 is installed in /opt/oracle/product/10.2.0/
then you should install the Oracle 11.2.0.2 client (32-bit) in /opt/oracle/product/
11.2.0-client32/.
Download the Oracle distribution to disk
The Oracle installation files must be in place before you can begin the installation
of Oracle.
About this task
Note: Oracle 11.2.0.2 must be installed into a new ORACLE_HOME. Make sure
that you install Oracle into a different ORACLE_HOME than the one used for
Oracle 10.2.0.4.
Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the Oracle distribution. For example:
# mkdir /var/tmp/oracle11202
4. Download the Oracle files to the /var/tmp/oracle11202 directory.
5. Extract the Oracle distribution files that now exist in the /var/tmp/oracle11202
directory.
Results
The directory that you created and to which the Oracle 11.2.0.2 distribution was
downloaded is referred to from now on as <ORA_DIST_DIR>.
Before beginning this task, make sure that you have downloaded the Tivoli Netcool
Performance Manager distribution to disk. The directory that you created and to
which the Tivoli Netcool Performance Manager distribution was downloaded is
referred to from now on as <DIST_DIR>.
What to do next
Before you proceed to the next step, make sure that you obtain the upgrade
instructions provided by Oracle. The instructions contain information on
performing steps required for the upgrade that are not documented in this guide.
See your database administrator to determine whether there are any
company-specific requirements for installing Oracle in your environment.
Run the Oracle client configuration script
The Oracle client configuration shell script creates the environment for the client
software installation on the local system.
About this task
This script is named configure_client and is located with the Tivoli Netcool
Performance Manager files downloaded as part of the Tivoli Netcool Performance
Manager distribution. If you are performing this step as part of an upgrade
procedure, make sure that you run the configuration script provided with the
target version of Tivoli Netcool Performance Manager.
The client configuration script makes the following changes to the local system:
v Adds the dba and oinstall groups to /etc/group.
v Adds the operating system login name oracle, whose primary group membership is
dba, and secondary group membership is oinstall.
Note: You must use the same Oracle username across all instances of Oracle
Client and Server throughout your Tivoli Netcool Performance Manager system.
v Creates the Oracle client directory structure. When you create the environment
for Oracle 11.2.0.2, the default location for this directory structure is
/opt/oracle/product/11.2.0-client32/. You specify this directory as the target
location when you install the Oracle client.
Note: None of the above changes will be made if the changes are already in place.
To configure the Oracle installation environment:
Procedure
1. Log in as root.
2. Set the ORACLE_BASE environment variable to point to the top-level directory
where you want the Oracle client files installed. The default installation
directory is /opt/oracle.
For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: The configure_client script places this variable into the oracle login
name's .profile file.
To check that the variable is set correctly, enter the following command:
# env | grep ORA
3. Set the ORACLE_HOME environment variable. For example:
# ORACLE_HOME=/opt/oracle/product/11.2.0-client32/
# export ORACLE_HOME
Note: The value defined in the configure_client script for ORACLE_HOME is the
value needed in the Topology Editor for Oracle Home on the host level.
4. Change to the following directory:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in Download the Oracle
distribution to disk on page 53.
5. Run the Oracle configuration script using the following command:
# ./configure_client
The following screen is displayed:
--------------------------------------------------
configure_client
Setting the Oracle client environment
Tue Dec 6 14:51:23 CST 2011
--------------------------------------------------
OS ........... : [ SunOS 5.10 Generic_141444-09 ]
Host ......... : [ l3provsol1z3 ]
Logname ...... : [ root ]
ORACLE_BASE .. : [ /opt/oracle ]
ORACLE_HOME .. : [ /opt/oracle/product/11.2.0-client32/ ]
DBA group ................. : [ dba ]
OUI Inventory group ....... : [ oinstall ]
Oracle Software owner ..... : [ oracle ]
Configure Oracle release .. : [ 11.2.0 ]
Menu :
1. Modify Oracle software owner.
2. Next supported release.
3. Check environment.
0. Exit
Choice :
6. Type 3 at the Choice prompt and press Enter.
The script creates the dba and oinstall groups, and the ORACLE_BASE directory,
unless they already exist.
Checking environment...
Checking for group [ dba ] --> Created.
Checking for group [ oinstall ] --> Created.
Checking ORACLE_BASE
** WARNING
** ORACLE_BASE directory does not exist.
** [ /opt/oracle ]
**
** Create it ? (n/y) y
If prompted, type y and press Enter.
The script creates the /opt/oracle directory and continues as follows:
Checking for user [ oracle ]
** WARNING
** User [ oracle ] does not exist.
**
** Create it locally ? (n/y) y
If prompted, type y to create the oracle user and press Enter.
The script creates the oracle user and continues as follows:
--> Created.
Checking for oracle directory tree :
[ /opt/oracle/product ] --> Created.
[ /opt/oracle/product ] --> Created.
[ /opt/oracle/product/11.2.0 ] --> Created.
Checking for oracle .profile file --> Created.
Press Enter to continue...
7. Press Enter to continue. The configure_client main screen is displayed.
8. Type 0 and press Enter to exit the script.
Set a password for the Oracle login name
You must assign a password for the oracle login name to maintain system security.
About this task
The configure_client script you ran in the previous section creates the oracle login
name. You must assign a password for the oracle login name to maintain system
security, and because subsequent installation steps expect the password to be
already set.
To set a password:
Procedure
1. Log in as root.
2. Enter the following command:
# passwd oracle
3. Enter and re-enter the password (oracle, by default) as prompted. The password
is set.
Run the preinstallation script
Run the preinstallation script to verify your readiness to install Oracle.
Procedure
1. As root, change directory using the following command:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance/ora_installer
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in Download the Oracle
distribution to disk on page 53.
2. Set the ORACLE_BASE environment variable. For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: You must use the same ORACLE_BASE setting that you specified in
Run the Oracle client configuration script.
3. Enter the following command:
# ./pre_install_as_root
The following messages indicate success:
Checking that you are logged in as root --> Ok.
Checking ORACLE_BASE --> Ok.
Checking oraInst.loc file --> Ok.
If the script shows an error, correct the situation causing the error before
proceeding to the next step.
Verify PATH and Environment for the Oracle login name
Before proceeding to install Oracle files, make sure the /usr/ccs/bin directory is in
the PATH environment variable for the oracle login name.
About this task
To verify the PATH and environment:
Procedure
1. Log in as oracle. Set and export the DISPLAY environment variable.
If you are using the su command to become oracle, use a hyphen as the second
argument so the oracle user login environment is loaded:
# su - oracle
2. Verify that the environment variable ORACLE_BASE has been set by entering
the following command:
$ env | grep ORA
If the response does not include ORACLE_BASE=/opt/oracle, stop and make sure
the .profile file was set for the oracle user, as described in Run the Oracle
client configuration script on page 68.
3. To verify the path, enter the following command:
$ echo $PATH
The output must show that /usr/ccs/bin is part of the search path. For
example:
/usr/bin:/opt/oracle/product/11.2.0/bin:/usr/ccs/bin
a. If this directory is not in the path, add it by entering the following commands:
$ PATH=$PATH:/usr/ccs/bin
$ export PATH
Install the Oracle client (32-bit)
The Oracle installation script is a shell script that you can use to install the Oracle
server, Oracle client software, or patches to existing installations of the Oracle
server and client.
About this task
This script is named perform_oracle_inst and is located with the Tivoli Netcool
Performance Manager files that you obtained in Download the Oracle distribution
to disk on page 53. This script is provided by IBM as part of the Tivoli Netcool
Performance Manager installation package.
An Oracle client installation is not usable until the following Net configuration
files are configured and installed:
v tnsnames.ora
v sqlnet.ora
These files are configured in later steps.
To install the Oracle client:
Procedure
1. Log in as oracle.
2. Change to the following directory:
Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/oracle/instance/ora_installer
where:
v <DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Download the
Oracle distribution to disk on page 53.
3. Enter the following command to start the installer:
$ ./perform_oracle_inst
The installation menu is displayed:
--------------------------------------------------
perform_oracle_inst
Installation of oracle binaries
<Current Date>
--------------------------------------------------
OS ........... : [ SunOS 5.10 Generic ]
Host ......... : [ delphi ]
Logname ...... : [ oracle ]
Install Oracle release .... : [ 11.2.0 ]
Installation type.......... : [ Client ]
Enter the appropriate letter to modify the entries below:
a) ORACLE_BASE .. : [ /opt/oracle ]
b) ORACLE_HOME .. : [ /opt/oracle/product/11.2.0-client32/ ]
c) DBA group ..................... : [ dba ]
d) OUI Inventory group ........... : [ oinstall ]
e) Oracle Software owner ......... : [ oracle ]
f) Directory where CDs were copied:
[ ]
Menu :
1. Next supported release
2. Set install type to: Client
3. Perform install
0. Exit
Choice :
Note: The ORACLE_HOME should match the ORACLE_HOME set in the deployer for
the server on which the client is being installed.
4. Enter f at the Choice prompt and press Enter.
5. Enter the full path to the <ORA_DIST_DIR>, as created in Download the
Oracle distribution to disk on page 53. For example:
Choice: f
Enter new value for CD directory: /var/tmp/oracle11202
6. Edit any other menu settings as necessary. Make sure that the values for
ORACLE_BASE and ORACLE_HOME correspond to the locations you specified when
you ran the Oracle client configuration script.
7. To start the Oracle installation, type 3 at the Choice prompt and press Enter.
8. The installation script checks the environment, then asks whether you want to
perform the installation. Type Y at the Choice prompt and press Enter. The
installation script starts installing Oracle and displays a series of status
messages.
Note: You can safely ignore any font.properties not found messages in the
output.
When the installation reaches the In Summary Page stage, the installation
slows down significantly while Oracle files are copied and linked.
9. On AIX, you might be asked whether the script named rootpre.sh has been
run. To run this script:
a. Open another window.
b. Log in as root.
c. Change to the /var/tmp/oracle11202/database/rootpre directory.
d. Run ./rootpre.sh.
e. Answer Y to the installation script prompt.
10. When the installation process is finished, the installation displays a success
message. Write down the log file location to aid in troubleshooting if there is
an installation error.
11. Type C and press Enter to return to the installation menu.
12. Type 0 and press Enter to exit the installation menu.
13. Perform the steps in Run the root.sh script on page 104.
Run the root.sh script
After successfully running an Oracle client or server installation, you must run the
root.sh script.
About this task
This step is also required after an Oracle patch installation. See Install Oracle
patches.
To run the root.sh script:
Procedure
1. Log in as root or become superuser. Set the DISPLAY environment variable.
2. Change to the directory where Oracle server files were installed. (This is the
directory as set in the ORACLE_HOME environment variable.) For example:
# cd /opt/oracle/product/11.2.0-client32/
3. Run the following command:
./root.sh
Messages like the following are displayed:
Running Oracle11 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/oracle/product/11.2.0-client32/
Enter the full pathname of the local bin directory: [/usr/local/bin]:
4. If the default entry, /usr/local/bin, is writable by root, press Enter to accept
the default value. The default entry might be NFS-mounted at your site so it
can be shared among several workstations and therefore might be
write-protected. If so, enter the location of a machine-specific alternative bin
directory. (You might need to create this alternative directory at a shell prompt
first.) For example, enter /usr/delphi/bin.
5. The script continues as follows:
...
Adding entry to /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#
The script runs to completion with no further prompts.
Configure the Oracle Net Client
After the Oracle 11g Client is installed on the client machine, copy the sqlnet.ora
and tnsnames.ora files.
Procedure
1. Copy the sqlnet.ora and tnsnames.ora files from the Oracle 10g Net Client
directory, $ORACLE10g_HOME/network/admin.
2. Place the copied files in the Oracle 11g Net Client directory,
$ORACLE11g_HOME/network/admin.
Update the oracle user's .profile
Modify the oracle user's .profile file.
Procedure
1. Make sure that ORACLE_HOME points to $ORACLE_BASE/product/11.2.0.
2. If there is not already an entry for TNS_ADMIN, add one.
TNS_ADMIN=$ORACLE_HOME/network/admin
When complete, the .profile should look similar to:
# -- Begin Oracle Settings --
umask 022
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/local/bin
EXTPROC_DLLS=ONLY:${LD_LIBRARY_PATH}/libpvmextc.so
export PATH ORACLE_BASE ORACLE_HOME NLS_LANG
export ORA_NLS33 LD_LIBRARY_PATH TNS_ADMIN EXTPROC_DLLS
ORACLE_SID=PV
export ORACLE_SID
# -- End Oracle Settings --
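After saving the file, you can reload it and confirm that the new values are in
effect, mirroring the verification used earlier in this chapter:
$ . ./.profile
$ env | grep ORA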
Configure the Oracle Net client
You must configure the Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance.
About this task
Next, you configure the Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance. You must perform this step for each instance of the Oracle client
software that you installed on the system.
Procedure
v You must configure sqlnet.ora and tnsnames.ora files for both Oracle server
and Oracle client installations. However, the tnsnames.ora file for client
installations should not have the EXTPROC_CONNECTION_DATA section.
v If you are installing DataView and one or more other Tivoli Netcool
Performance Manager components on the same system, you must make sure
that the tnsnames.ora and sqlnet.ora files for each set of client software are
identical. The easiest way to do this is to create these files when you are
configuring the first client instance for Net and then to copy it to the
corresponding directory when you configure the second instance.
Create the sqlnet.ora file
The sqlnet.ora file manages Oracle network operations. You can create a new
sqlnet.ora file, or FTP the file from your Oracle server.
About this task
To set up the TNS service names:
Procedure
1. Log in as oracle.
2. Change to the following directory:
$ cd $ORACLE_HOME/network/admin
3. To create the sqlnet.ora file, FTP the following file from your Oracle server:
/opt/oracle/admin/skeleton/bin/template.example_tnpm.sqlnet.ora
4. Add the following lines to it:
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
For example:
# sqlnet.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
Note: If you do not use WORLD as the DEFAULT_DOMAIN value, make sure you
enter the same value for DEFAULT_DOMAIN in both sqlnet.ora and tnsnames.ora.
5. Write and quit the sqlnet.ora file.
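As an alternative to the FTP transfer in step 3, you can copy the template with sftp
if it is available on your system; a sequence like the following can be used, where
delphi is the example Oracle server hostname assumed in this chapter:
$ cd $ORACLE_HOME/network/admin
$ sftp oracle@delphi
sftp> get /opt/oracle/admin/skeleton/bin/template.example_tnpm.sqlnet.ora sqlnet.ora
sftp> quit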
Create the tnsnames.ora file
The tnsnames.ora file maintains the relationships between logical node names and
physical locations of Oracle servers in the network.
About this task
You can create a new tnsnames.ora file, or FTP the file from your Oracle server.
To create the tnsnames.ora file:
Procedure
1. FTP the following file from your Oracle server:
/opt/oracle/admin/skeleton/bin/template.example_tnpm.tnsnames.ora
2. Add the following lines:
# tnsnames.ora network configuration file in
# /opt/oracle/product/11.2.0/network/admin
#
# For Oracle client installations, tnsnames.ora
# only needs the PV.WORLD entry.
PV.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = yourhost)
(PORT = 1521)
)
)
(CONNECT_DATA =
(SERVICE_NAME = PV.WORLD)
(INSTANCE_NAME = PV)
)
)
Note: Indents in this file must be preserved.
3. Replace the string yourhost in the line (HOST = yourhost) with the name of
your Oracle server.
Note the following:
v You will use the value in the INSTANCE_NAME field as the TNS entry
when installing DataMart.
v If you reconfigure the Oracle client to connect to a different Oracle database
in another Tivoli Netcool Performance Manager installation, be sure you
update the HOST entry in the tnsnames.ora file, then restart the Oracle
client.
v Specify the host using the hostname only, do not use the IP address.
4. (optional) Replace the default port number 1521 in the line (PORT = 1521) with
your required port number.
5. Write and quit the file.
Test the Oracle net configuration
The steps required to test the Oracle Net configuration.
About this task
To test the Oracle Net configuration:
Procedure
1. Log in as oracle.
2. Enter a command with the following syntax:
tnsping Net_service_name 10
For example: tnsping PV.WORLD 10
3. Test again, using the same Net instance name without the domain suffix:
tnsping PV 10
Look for successful completion messages (OK).
Next steps
The steps that follow installation of the prerequisite software.
Once you have installed the prerequisite software, you are ready to begin the
actual installation of Tivoli Netcool Performance Manager. Depending on the type
of installation you require, follow the directions in the appropriate chapter:
v Chapter 4, Installing in a distributed environment, on page 79 - Describes how to
install Tivoli Netcool Performance Manager in a distributed production
environment.
v Chapter 5, Installing as a minimal deployment, on page 111 - Describes how to
install Tivoli Netcool Performance Manager as a minimal deployment, which is
used primarily for demonstration or evaluation purposes.
v If you are planning on installing Tivoli Netcool Performance Manager in a
distributed environment that uses clustering for high availability, review
the Tivoli Netcool Performance Manager HA (High Availability) documentation,
which is available for download by going to http://www-01.ibm.com/software/
brandcatalog/opal/ and searching for "Netcool Proviso HA Documentation".
Chapter 4. Installing in a distributed environment
This section describes how to install Tivoli Netcool Performance Manager for the
first time in a fresh, distributed environment.
For information about installing the Tivoli Netcool Performance Manager
components using a minimal deployment, see Chapter 5, Installing as a minimal
deployment, on page 111.
Distributed installation process
The main steps involved in a distributed installation.
A production Tivoli Netcool Performance Manager system that generates and
produces management reports for a real-world network is likely to be installed on
several servers. Tivoli Netcool Performance Manager components can be installed
to run on as few as two or three servers, up to dozens of servers.
Before installing Tivoli Netcool Performance Manager, you must have installed the
prerequisite software. For detailed information, see Chapter 3, Installing and
configuring the prerequisite software, on page 35.
In addition, you must have decided how you want to configure your system. Refer
to the following sections:
v Co-location rules on page 2
v Typical installation topology on page 8
v Appendix A, Remote installation issues, on page 167
The general steps used to install Tivoli Netcool Performance Manager are as
follows:
v Start the launchpad.
v Install the Topology Editor.
v Start the Topology Editor.
v Create the topology.
v Add the Tivoli Netcool Performance Manager components.
v Save the topology to an XML file.
v Start the deployer.
v Install Tivoli Netcool Performance Manager using the deployer.
The following sections describe each of these steps in detail.
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, Installing and
configuring the prerequisite software, on page 35 for information about tnsping.
Starting the Launchpad
The steps required to start the launchpad.
About this task
To start the launchpad:
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See Setting up a remote X Window display on page 36.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/opt/mozilla/mozilla
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER variable assignment cannot include any spaces around the equal
sign.
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
5. Enter the following command to start the launchpad:
# ./launchpad.sh
Installing the Topology Editor
The steps required to install the Topology Editor.
About this task
Only one instance of the Topology Editor can exist in the Tivoli Netcool Performance
Manager environment. Install the Topology Editor on the same system that will host
the database server.
You can install the Topology Editor from the launchpad or from the command line.
To install the Topology Editor:
Procedure
1. You can begin the Topology Editor installation procedure from the command
line or from the Launchpad. From the launchpad:
a. On the launchpad, click the Install Topology Editor option in the list of
tasks.
b. On the Install Topology Editor page, click the Install Topology Editor link.
From the command line:
a. Log in as root.
b. Change directory to the directory that contains the Topology Editor
installation script:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/topologyEditor/Disk1/InstData/VM
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/topologyEditor/Disk1/InstData/VM
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
c. Enter the following command:
# ./installer.bin
2. The installation wizard opens in a separate window, displaying a welcome
page. Click Next.
3. Review and accept the license agreement, then click Next.
4. Confirm the wizard is pointing to the correct directory. The default is
/opt/IBM/proviso. If you have previously installed the Topology Editor on this
system, the installer does not prompt you for an installation directory and
instead uses the directory where you last installed the application.
5. Click Next to continue.
6. Confirm the wizard is pointing to the correct base installation directory of the
Oracle JDBC driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click
Choose to navigate to another directory.
7. Click Next to continue.
8. Review the installation information, then click Run.
9. When the installation is complete, click Done to close the wizard.
The installation wizard installs the Topology Editor and an instance of the
deployer in the following directories:
Interface Directory
Topology Editor
install_dir/topologyEditor
For example:
/opt/IBM/proviso/topologyEditor
Deployer
install_dir/deployer
For example:
/opt/IBM/proviso/deployer
Results
The combination of the Topology Editor and the deployer is referred to as the
primary deployer.
For more information, see Resuming a partially successful first-time installation
on page 109.
Note: To uninstall the Topology Editor, follow the instructions in Uninstalling the
topology editor on page 163. Do not delete the /opt/IBM directory. Doing so will
cause problems when you try to reinstall the Topology Editor.
If the /opt/IBM directory is accidentally deleted, perform the following steps:
1. Change to the /var directory.
2. Rename the hidden file .com.zerog.registry.xml (for example, rename it to
.com.zerog.registry.xml.backup).
3. Reinstall the Topology Editor.
4. Rename the backup file to the original name (.com.zerog.registry.xml).
Starting the Topology Editor
After you have installed the Topology Editor, you can invoke it from either the
launchpad or from the command line.
Procedure
v To start the Topology Editor from the launchpad:
1. If the Install Topology Editor page is not already open, click the Install
Topology Editor option in the list of tasks to open it.
2. On the Install Topology Editor page, click the Start Topology Editor link.
v To start the Topology Editor from the command line:
1. Log in as root.
2. Change directory to the directory in which you installed the Topology Editor.
For example:
# cd /opt/IBM/proviso/topologyEditor
3. Enter the following command:
# ./topologyEditor
Note: If your DISPLAY environment variable is not set, the Topology Editor
will fail with a Java assertion message (core dump).
If you are running the Topology Editor for an AIX 6.1 or AIX 7.1
environment, use the command:
# ./topologyEditor -vm /opt/IBM/proviso/topologyEditor/jre/bin/java
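If you need to set DISPLAY before starting the Topology Editor, a typical
setting looks like the following (a minimal sketch; the workstation name
myworkstation is an assumption and should be replaced with the host that runs
your X server):
DISPLAY=myworkstation:0.0
export DISPLAY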
Creating a new topology
The steps required to create a new topology.
Procedure
1. In the Topology Editor, select Topology > Create new topology.
The New Topology window is displayed.
2. Enter the Number of resources to be managed by Tivoli Netcool Performance
Manager.
The default value is 10000. The size of your deployment affects the database
sizing.
3. Click Finish.
The Topology Editor creates the following entities:
v In the Logical view, five items are listed: Tivoli Netcool Performance
Manager Topology, Cross Collector CMEs, DataChannels, DataMarts and
Tivoli Integrated Portals.
v In the Physical view, there is a new Hosts folder.
Adding and configuring the Tivoli Netcool Performance Manager
components
Your next step is to add and configure the individual Tivoli Netcool Performance
Manager components.
Note: When performing an installation that uses non-default values, that is,
non-default usernames, passwords and locations, it is recommended that you
check both the Logical view and Physical view to ensure that they both contain the
correct values before proceeding with the installation.
Note: The value defined in the configure_client script for ORACLE_HOME is the
value needed in the Topology Editor for Oracle Home on the host level.
Add the hosts
The first step is to specify all the servers that will host Tivoli Netcool Performance
Manager components.
About this task
Each host that you define has an associated property named PV User. PV User is
the default operating system user for all Tivoli Netcool Performance Manager
components.
You can override this setting in the Advanced Properties tab when you set the
deployment properties for individual components (for example, DataMart and
DataView). This allows you to install and run different components on the same
system as different users.
Note: DataChannel components always use the default user associated with the
host.
The user account used to transfer files using FTP or SCP/SFTP during installation
is always the PV User defined at the host level, rather than component level.
To add a single host to the topology:
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. The Add Host window opens.
2. Specify the details for the host machine.
The fields are as follows:
v Host name - Enter the name of the host (for example, delphi).
v Operating system - Specifies the operating system (for example, SOLARIS).
This field is filled in for you.
v Oracle home - Specifies the default ORACLE_HOME directory for all Tivoli
Netcool Performance Manager components installed on the system (by
default /opt/oracle/product/11.2.0-client32/).
v PV User - Specifies the default Tivoli Netcool Performance Manager Unix
user (for example, pvuser) for all Tivoli Netcool Performance Manager
components installed on the system.
v PV user password - Specifies the password for the default Tivoli Netcool
Performance Manager user (for example, PV).
v Create Disk Usage Server for this Host? - Selecting this check box creates a
DataChannel subcomponent to handle disk quota and flow control.
If you have not chosen to create a Disk Usage Server, click Finish to create the
host. The Topology Editor adds the host under the Hosts folder in the Physical
view. If you have chosen to create a Disk Usage Server, click Next; the Add
Host window then allows you to add details for your Disk Usage Server.
3. Specify the details for the Disk Usage Server.
The fields are as follows:
v Local Root Directory - The local DataChannel root directory. This property
allows you to differentiate between a local directory and a remote directory
mounted to allow for FTP access.
v Remote Root Directory - Remote directory mounted for FTP access. This
property allows you to differentiate between a local directory and a remote
directory mounted to allow for FTP access.
v FC FSLL - This is the Flow Control Free Space Low Limit property. When
this limit is reached, the Disk Usage Server contacts all components that
reside in this root directory and tells them to free as much space as possible.
v FC QUOTA - This is the Flow Control Quota property. This property allows
you to set the amount of disk space in bytes available to Tivoli Netcool
Performance Manager components on this file system.
v Remote User - User account used when attempting to access this Disk Usage
Server remotely.
v Remote User Password - User account password used when attempting to
access this Disk Usage Server remotely.
v Secure file transfer to be used - Boolean indicator identifying if ssh should
be used when attempting to access this directory remotely.
v Port Number - Port number to use for remote access (SFTP) if it is a
non-default port.
Click Finish to create the host. The Topology Editor adds the host under the
Hosts folder in the Physical view.
Note: The DataChannel properties will be filled in automatically at a later
stage.
Adding multiple hosts
You may wish to add multiple hosts at one time.
About this task
To add multiple hosts to the topology:
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Multiple Hosts
from the menu. The Add Multiple Hosts window opens.
2. Add new hosts by typing their names into the Host Name field as a comma
separated list.
3. Click Next.
4. Configure all added hosts.
The Configure hosts dialog allows you to enter configuration settings and
apply these settings to one or more of the specified host set.
To apply configuration settings to one or more of the specified host set:
a. Enter the appropriate host configuration values. All configuration options
are described in Steps 2, and 3 of the previous process, Add the hosts on
page 84.
b. Select the check box opposite each of the hosts to which you want to apply
the entered values.
c. Click Next. The hosts for which all configuration settings have been
specified disappear from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
5. Click Finish.
Add a database configurations component
The Database Configurations component hosts all the database-specific parameters.
About this task
You define the parameters once, and their values are propagated as needed to the
underlying installation scripts.
To add a Database Configurations component:
Procedure
1. In the Logical view, right-click the Tivoli Netcool Performance Manager
Topology component and select Add Database Configurations from the menu.
The host selection window opens.
2. You must add the Database Configuration component to the same server that
hosts the Oracle server (for example, delphi). Select the appropriate host using
the drop-down list.
3. Click Next to configure the mount points for the database.
4. Add the correct number of mount points.
To add a new mount point, click Add Mount Point. A new, blank row is added
to the window. Fill in the fields as appropriate for the new mount point.
5. Enter the required configuration information for each mount point.
a. Enter the mount point location:
v Mount Point Directory Name (for example, /raid_2/oradata)
Note: The mount point directories can be named using any string as
required by your organization's naming standards.
v Used for Metadata Tablespaces? (A check mark indicates True.)
v Used for Temporary Tablespaces? (A check mark indicates True.)
v Used for Metric Tablespaces? (A check mark indicates True.)
v Used for System Tablespaces and Redo? (A check mark indicates True.)
b. Click Back to return to the original page.
c. Click Finish to create the component.
The Topology Editor adds the new Database Configurations component to the
Logical view.
6. Highlight the Database Configurations component to display its properties.
Review the property values to make sure they are valid. For the complete list
of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide.
The Database Configurations component has the following subelements:
v Channel tablespace configurations
v Database Channels
v Database Clients configurations
v Tablespace configurations
v Temporary tablespace configurations
Note: Before you actually install Tivoli Netcool Performance Manager, verify
that both the /raid_2/oradata and /raid_3/oradata directory structures have been
created, and that the oradata subdirectories are owned by oracle:dba.
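For example, the mount point directories might be created and verified as follows
(a minimal sketch, run as root; the directory names are illustrative and should
match the mount points that you defined in the topology):
mkdir -p /raid_2/oradata /raid_3/oradata
chown oracle:dba /raid_2/oradata /raid_3/oradata
ls -ld /raid_2/oradata /raid_3/oradata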
Add a DataMart
The steps required to add a DataMart component to your topology.
About this task
Tivoli Netcool Performance Manager DataMart is normally installed on the same
server on which you installed Oracle server and the Tivoli Netcool Performance
Manager database configuration. However, there is no requirement that forces
DataMart to reside on the database server.
Note the following:
v If you are installing DataMart on an AIX system or any remote AIX, Linux or
Solaris system, you must add the IBM JRE to the PATH environment variable for
the Tivoli Netcool Performance Manager Unix user, pvuser.
v You must ensure you are using the IBM JRE and not the RHEL JRE. The IBM
JRE is supplied with the Topology Editor or with Tivoli Integrated Portal. To
ensure you are using the right JRE you can either:
Set the JRE path to conform to that used by the Topology Editor. Do this
using the following commands (using the default location for the primary
deployer); a verification example is shown after this list:
PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
export PATH
For a remote server, that is one that does not host the primary deployer, you
must download and install the required JRE, and set the correct JRE path. See
the IBM Tivoli Netcool Performance Manager: Configuration
Recommendations Guide document for JRE download details.
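After setting the PATH, you can confirm that the IBM JRE is the one being used
(a minimal check; the exact version string varies by release):
which java
java -version
The output should identify an IBM runtime (for example, an IBM J9 virtual
machine) rather than the operating system's default JRE.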
To add a DataMart component:
Procedure
1. In the Logical view, right-click the DataMarts folder and select Add DataMart
from the menu. The host selection host window is displayed.
2. Using the drop-down list of available hosts, select the machine on which
DataMart should be installed (for example, delphi).
3. Click Finish.
The Topology Editor adds the new DataMart x component (for example,
DataMart 1) under the DataMarts folder in the Logical view.
4. Highlight the DataMart x component to display its properties. Review the
property values to make sure they are valid. You can specify an alternate
installation user for the DataMart component by changing the values of the
USER_LOGIN and USER_PASSWORD properties in the Advanced Properties
tab. For the complete list of properties for this component, see the IBM Tivoli
Netcool Performance Manager: Property Reference Guide.
Event notification scripts
When you install the DataMart component, two event notification scripts are
installed.
The scripts are called as needed by tablespace size checking routines in Oracle and
in Tivoli Netcool Performance Manager, if either routine detects low disk space
conditions on a disk partition hosting a portion of the Tivoli Netcool Performance
Manager database. Both scripts by default send their notifications by e-mail to a
local login name.
The two files and their installation locations are as follows:
v The script installed as $ORACLE_BASE/admin/$ORACLE_SID/bin/notifyDBSpace
notifies the login name oracle by e-mail of impending database space problems.
This script is called as needed by an Oracle routine that periodically checks for
available disk space.
v The script installed as /opt/datamart/bin/notifyDBSpace notifies the login name
pvuser of the same condition. This script is called as needed by the Hourly
Loader component of Tivoli Netcool Performance Manager DataChannel. The
loader checks for available disk space before attempting its hourly upload of
data to the database.
Either file can be customized to send its warnings to a different e-mail address on
the local machine, to an SMTP server for transmission to a remote machine, or to
send the notices to the local network's SNMP fault management system (that is, to
an SNMP trap manager). You can modify either script to send notifications to an
SNMP trap, instead of, or in addition to, its default e-mail notification.
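For illustration only, the following hypothetical commands show the kind of calls
that such a script might make to send the same warning by e-mail and as an SNMP
trap. The recipient address, trap destination, OID, and message text are
assumptions, and the snmptrap command requires the Net-SNMP utilities to be
installed on the host:
echo "Tivoli Netcool Performance Manager database space warning" | mailx -s "Database space warning" [email protected]
snmptrap -v 2c -c public nms.example.com '' 1.3.6.1.4.1.2.6.1 1.3.6.1.4.1.2.6.1.1 s "Database space warning"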
Add a Discovery Server
The Discovery Server is the Tivoli Netcool Performance Manager component
responsible for SNMP discovery.
About this task
You can add a discovery server for each DataMart defined in the topology.
To add a Discovery Server:
Procedure
In the Logical view, right-click the DataMart x folder and select Add Discovery
server from the menu.
The Topology Editor displays the new Discovery Server under the DataMart n
folder in the Logical view.
Adding multiple Discovery Servers
The steps required to add multiple Discovery servers.
About this task
If you want to run multiple Discovery servers on multiple hosts in your
environment, you must perform additional steps at deployment to make sure that
each host system contains identical inventory files and identical copies of the
inventory hook script. IBM recommends that you only use identically-configured
instances of the Discovery Server.
The inventory files used by the Discovery Server are configuration files named
inventory_elements.txt and inventory_subelements.txt. These files are located in
the $PVMHOME/conf directory of the system where you install the DataMart
component. Some technology packs provide custom sub-elements inventory files
with names different from inventory_subelements.txt that are also used by the
Discovery Server.
To add multiple Discovery Servers, do the following:
Procedure
v Install the primary instance of DataMart and the Discovery Server on one target
host system.
v Install and configure any required technology packs on the primary host. You
modify the contents of the inventory files during this step.
v Install secondary instances of DataMart and the Discovery Server on
corresponding target host systems.
v Replicate the inventory files from the system where the primary instance of
DataMart is running to the $PVMHOME/conf directory on the secondary hosts. You
must also replicate the InventoryHook.sh script that is located in the
$PVMHOME/bin directory and any other files that this script requires.
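For example, the inventory files and hook script could be copied from the primary
host to a secondary host as follows (a minimal sketch run as pvuser; the host
name secondary-host and the use of scp are assumptions, and the same $PVMHOME
path is assumed on both systems):
scp $PVMHOME/conf/inventory_elements.txt $PVMHOME/conf/inventory_subelements.txt pvuser@secondary-host:$PVMHOME/conf/
scp $PVMHOME/bin/InventoryHook.sh pvuser@secondary-host:$PVMHOME/bin/
Copy any pack-specific sub-element inventory files and any other files that
InventoryHook.sh requires in the same way.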
Add a Tivoli Integrated Portal
The Tivoli Integrated Portal (TIP) provides an integrated console for users to log
on and view information contained on the DataView server.
About this task
To add a Tivoli Integrated Portal:
Procedure
1. In the Logical view, right-click on the Tivoli Integrated Portals folder and select
Add TIP from the menu.
The Configure TIP Wizard is displayed.
2. The Topology Editor gives you the choice of adding an already existing Tivoli
Integrated Portal to the topology or creating a new Tivoli Integrated Portal. To
create a new TIP, select the Create a new TIP radio button. To import an
already existing Tivoli Integrated Portal into the topology, select the Import
existing TIPs from host radio button.
3. Using the drop-down list of available hosts, select the host on which Tivoli
Integrated Portal should be installed (for example, delphi).
Note: The hostname of the host selected for the TCR install must not contain
underscores. Underscores in the hostname will cause the installation of TCR to
fail.
4. Click Finish.
The Topology Editor adds the new Tivoli Integrated Portal component to the
Logical view.
5. Highlight the Tivoli Integrated Portal component to display its properties.
6. Review the other property values to make sure they are valid. For the complete
list of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide.
Discovering existing Tivoli Integrated Portals
How to update your topology so it sees existing Tivoli Integrated Portal (TIP)
instances on your system.
About this task
To discover existing Tivoli Integrated Portals:
This step runs an asynchronous check for existing Tivoli Integrated Portals on each
selected DataView host. If a Tivoli Integrated Portal is discovered on a host, its
details are added to the topology.
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. Add the host that has an existing Tivoli Integrated Portal you wish to
discover.
2. Go to the Logical view, right-click on the Tivoli Integrated Portals folder and
select Import existing TIPs from host from the menu. The Run TIP Discovery
Wizard Page is displayed.
3. Select the check box for each host on which you would like to perform Tivoli
Integrated Portal discovery.
4. Click Import TIP.
If the discovered Tivoli Integrated Portal is an old version, it is flagged within
the topology for upgrade.
Any DataView without a Tivoli Integrated Portal is flagged within the topology
for Tivoli Integrated Portal installation on that host.
The deployer will take the appropriate action when run. The status of the
discovered TIP is displayed as "[TCR Found: <TIP Location>]".
5. Click Next.
6. Configure Tivoli Integrated Portal properties.
a. Enter the appropriate host configuration values.
v TCR_INSTALLATION_DIRECTORY: This is the directory in which Tivoli
Common Reporting is installed.
v TIP_INSTALLATION_DIRECTORY: This is the directory in which Tivoli
Integrated Portal is installed.
v WAS_USER_NAME: This is the WAS user name.
v WAS_PASSWORD: This is the WAS password.
If you would like to configure LDAP for Tivoli Integrated Portal, please see
Appendix F, LDAP integration, on page 207.
b. Select the check box opposite each of the Tivoli Integrated Portal hosts to
which you want to apply the entered values.
c. Click Next.
The hosts for which all configuration settings have been specified disappear
from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
7. Click Next to add the discovered Tivoli Integrated Portals to the topology.
Note: If you discover a Tivoli Common Reporting/Tivoli Integrated Portal of
version 2.1 that was installed using the Tivoli Common Reporting installer and
not the Tivoli Netcool Performance Manager installer, the port will not align
with a Technology Pack automatically. To align the port numbers you must
specify the Tivoli Integrated Portal port when performing the Technology Pack
installation.
Add a DataView
How to add a DataView.
About this task
Note: To display DataView real-time charts, you must have the Java runtime
environment (JRE) installed on the browser where the charts are to be displayed.
You can download the JRE from the Sun download page at http://www.sun.com.
Note: If you are reusing an existing Tivoli Integrated Portal that was installed by a
user other than root, the default deployment of DataView will encounter problems.
To avoid these problems you must remove the offending Tivoli Integrated Portal
from your topology and add both the Tivoli Integrated Portal and DataView as
a separate post deployment step. The steps you must follow to install DataView
reusing an existing Tivoli Integrated Portal are outlined in the sections:
v Reuse an existing Tivoli Integrated Portal and Install DataView using a non
root user on a local host on page 104
v Reuse an existing Tivoli Integrated Portal and Install DataView using a non
root user on a remote host on page 106
To add a DataView component:
Procedure
In the Logical view, right-click on a Tivoli Integrated Portal and select Add
DataView from the menu. The DataView is automatically added inheriting its
properties from the Tivoli Integrated Portal instance.
Migration and synchronization of DataView
Tivoli Netcool Performance Manager supplies various options to migrate and
synchronize DataView component data.
Migrate
Migration of DataView data, that is, content and users, is solely for the purpose of
moving DataView content and users from SilverStream to Tivoli Integrated Portal.
The migrate option can be executed by using the migrate option presented in the
Topology Editor when you right click on a DataView listed in the topology. You
can also use the migrate command line option to move DataView content and
users from SilverStream to Tivoli Integrated Portal.
Note: The DataView migrate option available in the Topology Editor and through
the command line, can only be used to move DataView content and users from
SilverStream to Tivoli Integrated Portal.
Note: The migrate command is discussed in detail in The migrate command
Synchronize
Synchronization of DataView data, that is, content and users, is solely for the
purpose of moving DataView content and users from one Tivoli Integrated Portal
server to another Tivoli Integrated Portal server. Typically this is performed to
move DataView to a new platform, such as moving from Solaris to AIX.
Note: The synchronize command is discussed in detail in The synchronize
command
Add the DataChannel administrative components
The steps required to add DataChannel Administrative components.
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
Administrative Components from the menu. The host selection window opens.
2. Using the drop-down list of available hosts, select the machine that you want
to be the Channel Manager host for your DataChannel configuration (for
example, corinth).
3. Click Finish.
The Topology Editor adds a set of new components to the Logical view:
v Channel Manager - Enables you to start and stop individual DataChannels
and monitor the state of various DataChannel programs. There is one
Channel Manager for the entire DataChannel configuration. The Channel
Manager components are installed on the first host you specify
v Corba Naming Server - Provides near real-time data to DataView.
v High Availability Managers - This is mainly used for large installations that
want to use redundant SNMP collection paths. The HAM constantly
monitors the availability of one or more SNMP collection hosts, and switches
collection to a backup host (called a spare) if a primary host becomes
unavailable.
v Log Server - Used to store user, debug, and error information.
v Plan Builder - Creates the metric data routing and processing plan for the
other components in the DataChannel.
v Custom DataChannel properties - These are the custom property values that
apply to all DataChannel components.
v Global DataChannel properties - These are the global property values that
apply to all DataChannel components.
Add a DataChannel
A DataChannel is a software module that receives and processes network statistical
information from both SNMP and non-SNMP (BULK) sources.
About this task
This statistical information is then loaded into a database where it can be queried
by SQL applications and captured as raw data or displayed on a portal in a variety
of reports.
Typically, collectors are associated with technology packs, a suite of Tivoli Netcool
Performance Manager programs specific to a particular network device or
technology. A technology pack tells the collector what kind of data to collect on
target devices and how to process that data. See the Technology Pack Installation
Guide for detailed information about technology packs.
To add a DataChannel:
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
2. Using the drop-down list of available hosts, select the machine that will host
the DataChannel (for example, corinth).
3. Accept the default channel number (for example, 1).
4. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel 1)
to the Logical view.
5. Highlight the DataChannel to display its properties. Note that the DataChannel
always installs and runs as the default user for the host (the Tivoli Netcool
Performance Manager Unix username, pvuser). Review the other property
values to make sure they are valid. For the complete list of properties for this
component, see the IBM Tivoli Netcool Performance Manager: Property
Reference Guide.
The DataChannel has the following subelements:
v Daily Loader x - Processes 24 hours of raw data every day, merges it
together, then loads it into the database. The loader process provides
statistics on metric channel tables and metric tablespaces.
v Hourly Loader x - Reads files output by the Complex Metric Engine (CME)
and loads the data into the database every hour. The loader process provides
statistics on metric channel tables and metric tablespaces.
The Topology Editor includes the channel number in the element names. For
example, DataChannel 1 would have Daily Loader 1 and Hourly Loader 1.
Note: When you add DataChannel x, the Problems view shows that the
Input_Components property for the Hourly Loader is blank. This missing value
will automatically be filled in when you add a DataLoad collector (as described
in the next section) and the error will be resolved.
Separating the data and executable directories
You may wish to separate the data and executable directories for your
DataChannel.
About this task
Note: Separating the data and executable directories is only possible during the
first install activity. After the installation, you cannot modify the topology to
separate the data and the executable directories.
If you wish to separate the data and executable directories for your DataChannel,
perform the following steps:
Procedure
1. Create two directories on the DataChannel host, for example, DATA_DIR to
hold the data and EXE_DIR to hold the executable.
2. Change the LOCAL_ROOT_DIRECTORY value on that host's Disk Usage
Server to the data root folder DATA_DIR.
In the Host advanced properties you will see the DATA_DIR value propagated
to all DC folder values for the host.
3. Change DC_ROOT_EXE_DIRECTORY to the executable directory EXE_DIR.
This change will propagate to the DC conf directory, the DataChannel Bin
Directory, and the DataChannel executable file name.
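For example, the two directories might be created on the DataChannel host as
follows (a minimal sketch run as root; the paths and the pvuser ownership are
illustrative assumptions):
mkdir -p /opt/datachannel/data /opt/datachannel/exe
chown -R pvuser /opt/datachannel/data /opt/datachannel/exe
You would then set LOCAL_ROOT_DIRECTORY to /opt/datachannel/data and
DC_ROOT_EXE_DIRECTORY to /opt/datachannel/exe in the Topology Editor.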
Note: For advanced information about DataChannels, see Appendix B,
DataChannel architecture, on page 171.
Add a DataChannel Remote (DCR)
A DataChannel is a software module that receives and processes network statistical
information from both SNMP and non-SNMP (BULK) sources. A DataChannel
Remote is a DataChannel installation configuration in which the subchannel, CME
and FTE components are installed and run on one host, while the Loader
components are installed and run on another host.
About this task
In a DataChannel remote configuration, the subchannel hosts can continue
processing data and detecting threshold violations, even while disconnected from
the Channel Manager server.
The following task assumes that you are placing the LDR and DLDR on the current
host (called, for example, hostname1), and the subchannel, CME, and FTE on
another host (called, for example, hostname2).
To add a remote DataChannel:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, select From database and click Next.
4. In the Physical view, add a new host, hostname2, to the downloaded topology.
5. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
6. Using the drop-down list of available hosts, select the machine that will host
the DataChannel, hostname1.
7. Accept the default channel number (for example, 1).
8. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel2)
to the Logical view.
9. Right-click on the new DataChannel, DataChannel2, and select Add SNMP
Collector.
10. Select server hostname2 as the host.
The Collector 2.2 is added.
11. Right-click on the Complex Metric Engine.2.2 and choose Change Host.
12. Select server hostname2 as the host.
The File Transfer Engine 2.2 is added to hostname2.
Results
The setup should look as follows:
v DataChannel 2 - hostname1
v Collector 2.2 - hostname1
v Collector SNMP - hostname2
v Complex Metric Engine.2.2 - hostname2
v File Transfer Engine 2.2 - hostname2
v Daily Loader 2 - hostname1
v Hourly Loader 2 - hostname1
This places the FTE and CME on one server and the LDR and DLDR on another
server.
Add a Collector
Collectors collect and process raw statistical data about network devices obtained
from various network resources.
The collectors send the received data through a DataChannel for loading into the
Tivoli Netcool Performance Manager database. Note that collectors do not need to
be on the same machine as the Oracle server and DataMart.
Collector types
Collector types and their description, plus the steps required to associate a
collector with a Technology Pack.
About this task
There are two basic types of collectors:
v SNMP collector - Collects data using SNMP polling directly to network services.
Specify this collector type if you plan to install a Tivoli Netcool Performance
Manager SNMP technology pack. These technology packs operate in networking
environments where the associated devices on which they operate use an SNMP
protocol.
v Bulk DataLoad collector - Imports data from files. The files can have multiple
origins, including log files generated by network devices, files generated by
SNMP collectors on remote networks, or files generated by a non-Tivoli Netcool
Performance Manager network management database.
There are two types of bulk collectors:
v UBA. A Universal Bulk Adapter (UBA) Collector that handles bulk input files
generated by non-SNMP devices. Specify this collector type if you plan to install
a Tivoli Netcool Performance Manager UBA technology pack, including Alcatel
5620 NM, Alcatel 5620 SAM, and Cisco CWM.
v BCOL. A bulk Collector that retrieves and interprets the flat file output of
network devices or network management systems. This collector type is not
recommended for Tivoli Netcool Performance Manager UBA technology packs,
and is used in custom technology packs.
If you are creating a UBA collector, you must associate it with a specific technology
pack. For this reason, IBM recommends that you install the relevant technology
pack before creating the UBA collector. Therefore, you would perform the following
sequence of steps:
Procedure
1. Install Tivoli Netcool Performance Manager, without creating the UBA collector.
2. Download and install the technology pack.
3. Open the deployed topology file to load the technology pack and add the UBA
collector for it.
Note: For detailed information about UBA technology packs and the
installation process, see the Technology Pack Installation Guide. Configure the
installed pack by following the instructions in the pack-specific user's guide.
Restrictions
There are a number of collector restrictions that must be noted.
Note the following restrictions:
v The maximum collector identification number is 999.
v There is no relationship between the channel number and the collector number
(that is, there is no predefined range for collector numbers based on channel
number). Therefore, collector 555 could be attached to DataChannel 7.
v Each database channel can have a maximum of 40 subchannels (and therefore,
40 collectors).
Creating an SNMP collector
How to create an SNMP collector.
About this task
To add an SNMP collector:
Procedure
1. In the Logical view, right-click the DataChannel x folder.
The pop-up menu lists the following options:
v Add Collector SNMP - Creates an SNMP collector.
v Add Collector UBA - Creates a UBA collector.
v Add Collector BCOL - Creates a BCOL collector. This collector type is used
in custom technology packs. DataMart must be added to the topology before
a BCOL collector can be added.
Select Add Collector SNMP. The Configure Collector window opens.
2. Using the drop-down list of available hosts on the Configure Collector window,
select the machine that will host the collector (for example, corinth).
3. Accept the default collector number (for example, 1).
4. Click Finish.
The Topology Editor displays the new collector under the DataChannel x folder
in the Logical view.
5. Highlight the collector to view its properties. The Topology Editor displays
both the SNMP collector core parameters and the SNMP technology
pack-specific parameters. The core parameters are configured with all SNMP
technology packs. You can specify an alternate installation user for the SNMP
collector by changing the values of the pv_user, pv_user_group and
pv_user_password properties in the Advanced Properties tab. Review the
values for the parameters to make sure they are valid.
Note: For information about the core parameters, see the IBM Tivoli Netcool
Performance Manager: Property Reference Guide.
Results
The collector has two components:
v Complex Metric Engine x - Performs calculations on the collected data.
v File Transfer Engine (FTE) x - Transfers files from the collector's output
directories and places them in the input directory of the CME.
The FTE writes data to the file /var/adm/wtmpx on each system that hosts a
collector. As part of routine maintenance, check the size of this file to prevent it
from growing too large.
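For example, the size of the file can be checked, and the file truncated if it
has grown too large, as follows (a minimal sketch run as root; truncating wtmpx
also discards login accounting records, so confirm that this is acceptable at
your site first):
ls -l /var/adm/wtmpx
cp /dev/null /var/adm/wtmpx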
Note: Your Solaris version can be configured with strict access default settings for
secure environments. Strict FTP access settings might interfere with automatic
transfers between a DataChannel subchannel and the DataLoad server. Check for
FTP lockouts in /etc/ftpd/ftpusers, and check for strict FTP rules in
/etc/ftpd/ftpaccess.
Note: The Topology Editor includes the channel and collector numbers in the
element names. For example, DataChannel 1 could have Collector SNMP 1.1, with
Complex Metric Engine 1.1 and File Transfer Engine 1.1.
Add a Cross Collector CME
The steps required to add a Cross Collector CME.
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
Cross Collector CME from the menu. The Specify the Cross Collector CME
details window is displayed.
2. Using the drop-down list of available hosts, select the machine that will host
the Cross-Collector CME (for example, corinth).
3. Select the desired Disk Usage Server on the selected host.
4. Select the desired channel number (for example, 1).
5. Click Finish.
The Topology Editor adds the new Cross-Collector CME (for example,
Cross-Collector CME 2000) to the Logical view.
6. Highlight the Cross-Collector CME to display its properties.
Note: The Cross-Collector CME always installs and runs as the default user for
the host (the Tivoli Netcool Performance Manager Unix username, pvuser).
7. Review the other property values to make sure they are valid. For the complete
list of properties for this component, see the IBM Tivoli Netcool Performance
Manager: Property Reference Guide
8. After running the deployer to install the Cross-Collector CME, you will need to
restart the CMGR process.
Note: You will notice that dccmd start all will not start the Cross-Collector
CME at this point.
9. You must first deploy a formula against the Cross-Collector CME using the
DataChannel frmi tool.
Run the frmi tool. The following is an example command:
frmi ecma_formula.js -labels formula_labels.txt
Where:
v The format of formula_labels.txt is two columns separated by an "=" sign.
v The first column is the full path to the formula.
v The second column is the number of the Cross-Collector CME.
v The file formula_labels.txt is of the format:
Path_to_ECMA_formulas~Formula1Name=2000
Path_to_ECMA_formulas~Formula2Name=2001
Note: When a Cross-Collector CME (CC-CME) is installed on the system and
formulas are applied against it, the removal of collectors that the CC-CME
depends on is not supported. This is an exceptional case, that is, if you have
not installed a CC-CME, collectors can be removed.
Adding multiple Cross Collectors
About this task
To add multiple Cross Collectors:
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
multiple Cross Collectors from the menu. The Add Cross Collector CME
window is displayed.
2. (Optional) Click Add Hosts to add to the set of Cross Collector hosts. Only
hosts that have a DUS can be added.
Note: It is recommended that you have 20 Cross Collector CMEs spread across
the set of topology hosts.
3. Set the number of Cross Collector CMEs for the set of hosts. There are two ways
you can do this:
v Click Calculate Defaults to use the wizard to calculate the recommended
spread across the added hosts. This will set the number of Cross Collector
CMEs to the default value.
v To manually set the number of Cross Collector CMEs for each host, use the
drop-down menu opposite each host name.
4. Click Finish.
Saving the topology
When you are satisfied with the infrastructure, verify that all the property values
are correct and that any problems have been resolved, then save the topology to an
XML file.
About this task
To save the topology as an XML file:
Procedure
1. In the Topology Editor, select Topology then either Save Topology As or Save
Topology.
Click Browse to navigate to the directory in which to save the file. By default,
the topology is saved as topology.xml file in the topologyEditor directory.
2. Accept the default value or choose another name or location, then click OK to
close the file browser window.
3. The file name and path is displayed in the original window. Click Finish to
save the file and close the window.
You are now ready to deploy the topology file (see Starting the Deployer on
page 100).
Note: Until you actually deploy the topology file, you can continue making
changes to it as needed by following the directions in Opening an existing
topology file on page 100.
See Chapter 6, Modifying the current deployment, on page 117 for more
information about making changes to a deployed topology file.
Note: Only when you begin the process of deploying a topology is it saved to
the database. For more information, see the section Deploying the Topology.
Opening an existing topology file
As you create the topology, you can save the file and update it as needed.
About this task
To open a topology file that exists but that has not yet been deployed:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, click local then use Browse to navigate to the correct
directory and file. Once you have selected the file, click OK. The selected file is
displayed in the Open Topology window.
Click Finish.
The topology is displayed in the Topology Editor.
4. Change the topology as needed.
Starting the Deployer
The primary deployer is installed on the same machine as the Topology Editor. You
first run the topology file on the primary deployer, and then run secondary
deployers on the other machines in the distributed environment.
See Resuming a partially successful first-time installation on page 109 for more
information about the difference between primary and secondary deployers.
Note: Before you start the deployer, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, Installing and
configuring the prerequisite software, on page 35 for more information.
Primary Deployer
The steps required to run the primary deployer from the Topology Editor
Procedure
Click Run > Run Deployer for Installation.
Note: When you use the Run menu options (install or uninstall), the deployer uses
the last saved topology file, not the current one. Be sure to save the topology file
before using a Run command.
Secondary Deployers
A secondary deployer is only required if remote installation using the primary
deployer is not possible.
About this task
For more information on why you may need to use a secondary deployer, see
Appendix A, Remote installation issues, on page 167.
To run a secondary deployer:
Procedure
v To run a secondary deployer from the launchpad:
1. On the launchpad, click Start the Deployer.
2. On the Start Deployer page, click the Start Deployer link.
v To run a secondary deployer from the command line:
1. Log in as root.
2. Change to the directory containing the deployer within the downloaded
Tivoli Netcool Performance Manager distribution:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer/
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer/
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer/
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading
the Tivoli Netcool Performance Manager distribution to disk on page 48.
3. Enter the following command:
# ./deployer.bin
Note: See Appendix D, Deployer CLI options, on page 191 for the list of
supported command-line options.
Pre-deployment check
The Deployer will fail if the required patches and packages listed in the relevant check_os.ini file are not installed.
About this task
The Deployer performs a check on the operating system versions and that the
minimum required packages are installed. The Deployer checks for the files as
listed in the relevant check_os.ini file.
The check_os.ini can be found at:
v The check_os.ini file detailing Solaris requirements can be found at:
/SOLARIS/Install/SOL10/deployer/proviso/bin/Check/check_os.ini
v The check_os.ini file detailing AIX requirements can be found at:
/AIX/Install/deployer/proviso/bin/Check/check_os.ini
v The check_os.ini file detailing Linux requirements can be found at:
/RHEL/Install/deployer/proviso/bin/Check/check_os.ini
Procedure
v To check if the required packages are installed:
1. Click Run > Run Deployer for Installation to start the Deployer.
2. Select the Check prerequisites check box.
3. Click Next.
The check will return a failure if any of the required files are missing.
v To repair a failure:
1. Log in as root.
2. Install the packages listed as missing.
3. (Linux only) If any openmotif package is listed as missing:
Install the missing openmotif package and update the package DB using the
command:
# updatedb
4. Rerun the check prerequisites step.
Deploying the topology
How to deploy your defined topology.
About this task
The deployer displays a series of pages to guide you through the Tivoli Netcool
Performance Manager installation. The installation steps are displayed in a table,
which enables you to run each step individually or to run all the steps at once. For
more information about the deployer interface, see Primary Deployer on page
100.
Important: By default, Tivoli Netcool Performance Manager uses Monday to
determine when a new week begins. If you wish to specify a different day, you
must change the FIRST_WEEK_DAY parameter in the Database Registry using the
dbRegEdit utility. This parameter can only be changed when you first deploy the
topology that installs your Tivoli Netcool Performance Manager environment, and
it must be changed BEFORE the Database Channel is installed. For more
information, see the Tivoli Netcool Performance Manager Registry and Space
Management Tech Note.
If you need to stop the installation, you can resume it at a later time. For more
information, see Resuming a partially successful first-time installation on page
109.
To deploy the Tivoli Netcool Performance Manager topology:
Procedure
1. The deployer opens, displaying a welcome page. Click Next to continue.
2. If you started the deployer from the launchpad or from the command line,
enter the full path to your topology file, or click Choose to navigate to the
correct location. Click Next to continue.
Note: If you start the deployer from within the Topology Editor, this step is
skipped.
The database access window prompts for the security credentials.
3. Enter the host name (for example, delphi) and database administrator
password (for example, PV), and verify the other values (port number, SID,
and user name). Note that if the database does not yet exist, these parameters
must match the values you specified when you created the database
configuration component (see Add a database configurations component on
page 86). Click Next to continue.
4. The node selection window shows the target systems and how the files will be
transferred (see Secondary Deployers on page 101 for an explanation of this
window). The table has one row for each machine where at least one Tivoli
Netcool Performance Manager component will be installed.
The default settings are as follows:
v The Enable checkbox is selected. If this option is not selected, no actions
will be performed on that machine.
v The Check prerequisites check box is not selected. If it is selected, scripts are
run to verify that the prerequisite software has been installed.
v Remote execution is enabled, using both RSH and SSH.
If remote execution cannot be enabled, perhaps due to a particular
customer's security protocols, see Appendix A, Remote installation issues,
on page 167 and Resuming a partially successful first-time installation on
page 109.
v File transfer using FTP is enabled.
If desired, reset the values as appropriate for your deployment.
Click Next to continue.
5. Provide media location details.
The Tivoli Netcool Performance Manager Media Location for components
window is displayed, listing component and component platform.
a. Click on the Choose the Proviso Media button. You will be asked to
provide the location of the media for each component.
b. Enter the base directory in which your media is located. If any of the
component media is not within the directory specified, you will be asked
to provide media location detail for that component.
6. The deployer displays summary information about the installation. Review the
information, then click Next.
The deployer displays the table of installation steps (see Pre-deployment
check on page 101 for an overview of the steps table). Note the following:
v Regardless of whether the steps are run, or if they pass or fail, closing the
wizard will result in the topology being posted to the Tivoli Netcool
Performance Manager Database, assuming it exists.
v If an installation step fails, see Resuming a partially successful first-time
installation on page 109 for debugging information. Continue the
installation by following the instructions in Resuming a partially successful
first-time installation on page 109.
v If the TCR installation step fails, which can happen when there is not
enough space available in /usr and /tmp or directory cleanup has not been
carried out, run the tcrClean.sh script. To run this script:
a. Copy the tcrClean.sh script from the Primary Deployer (host where the
Topology Editor is installed) to the server where the TCR installation step
fails.
The tcrClean.sh script can be found on the Primary Deployer in the
directory:
/opt/IBM/proviso/deployer/proviso/bin/Util/
b. Run tcrClean.sh.
c. When prompted, enter the install location of TCR.
d. Continue the installation by following the instructions in Resuming a
partially successful first-time installation on page 109
7. Click Run All to run all the steps in sequence.
8. The deployer prompts you for the location of the setup files. Use the file
selection window to navigate to the top-level directory for your operating
system to avoid further prompts.
For example:
<DIST_DIR>/RHEL/
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading the
Tivoli Netcool Performance Manager distribution to disk on page 48.
Note: This assumes that the Tivoli Netcool Performance Manager distribution
was downloaded to the folder /var/tmp/cdproviso as per the instructions in
Downloading the Tivoli Netcool Performance Manager distribution to disk
on page 48.
If Tivoli Integrated Portal is configured to install on a remote host, the Run
Remote TIP Install step is included. This step will prompt the user to enter the
root password. The deployer requires this information in order to run as root
on the remote host and perform the Tivoli Integrated Portal installation.
9. When all the steps have completed successfully, click Done to close the
wizard.
10. Stop and start TCR:
a. Navigate to the /tip_install_dir/products/tcr/bin/ directory.
b. Set the ORACLE_HOME environment variable. For example:
ORACLE_HOME=/opt/oracle/product/11.2.0/
export ORACLE_HOME
c. Execute the following:
LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
d. Run the following scripts:
v stopTCRserver.sh <username> <password>
v startTCRserver.sh
Note: These scripts must be run every time Tivoli Integrated Portal is
restarted.
Note: The Topology Editor must be closed after every deployment.
Reuse an existing Tivoli Integrated Portal and Install DataView
using a non root user on a local host
If you are reusing an existing Tivoli Integrated Portal that was installed by a user
other than root the default deployment of DataView will encounter problems.
About this task
This procedure describes how you install DataView to a local host should you
decide to reuse an existing Tivoli Integrated Portal that was installed by a user
other than root.
Only pursue these steps if you are installing DataView to an existing Tivoli
Integrated Portal that was installed by a user other than root.
Procedure
1. Install Topology Editor on the local host.
2. Execute the steps required to discover existing Tivoli Integrated Portals, as
described in Discovering existing Tivoli Integrated Portals on page 90
3. Configure Tivoli Integrated Portal and DataView in the Topology Editor on
the local host.
The values of the parameters for Tivoli Integrated Portal in the Topology
Editor need to be the same as the Tivoli Integrated Portal previously installed
by OMNIbus. For example, check that the values of USER_INSTALL_DIR and
IAGLOBAL_WC_adminhost in the Topology Editor correspond to the Tivoli
Integrated Portal installed by OMNIbus.
4. Run the Deployer for installation from the Topology Editor.
5. Go through the screens as per usual to the last run steps screen.
6. Mark the Install DataView and Register DataView steps as held.
Note: If any Tivoli Integrated Portal Install steps are present, mark those steps
as Success.
7. Mark all other steps as ready.
8. Run the deployer so that all steps except Install DataView and Register
DataView have run and have a status of success.
9. Change to the Installer directory in the proviso media.
For example, ./proviso/RHEL/Install
10. Change to the DataView directory that contains the sample DataView file,
dvinstall.cfg.
For example, ./deployer/proviso/data/DeploymentPackage/DeploymentSteps/
DataView/templateDV
11. Manually configure the dvinstall.cfg for your environment.
You can right click on the Install DataView step in the deployer and open the
properties tab so that the values for DataView install are displayed. Use these
values to populate the dvinstall.cfg. See sample dvinstall.cfg provided
below.
12. Change to the DataView directory that contains the install.sh in the TNPM
media.
For example, on a Linux environment this would be: ../proviso/RHEL/
DataView/RHEL5
13. In a command terminal, as the same non root user that installed OMNIbus,
set the PATH variable using the command:
export PATH=/opt/IBM/tivoli/tip/java/bin:${PATH}
Note: Check that the Java path is correct (that is, that /opt/IBM/tivoli/tip/java/
bin exists).
14. In the same terminal that you set the PATH variable in, run the command to
silently install DataView as the non default user pointing to the dvinstall.cfg
file.
For example, use the command:
./install.sh -i silent -f dvinstall.cfg
15. After the previous step has completed, open the Tivoli Integrated Portal URL
and check that DataView exists.
For example, log in at https://hostTipIsInstalledOn:16316/ibm/console; DataView
should be visible under Performance > Network Resources > Defined Resource Views.
16. Mark the Install DataView step as success.
17. Mark the Register DataView step as ready.
18. Run the Register DataView step.
19. Click Done on the deployer.
Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2008. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# TIP installation directory
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# User name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Password for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the Oracle driver
ORACLE_CLIENT_DIR=/opt/oracle/product/11.2.0-client32
# connection url to the database
TNPM_DATABASE_URL=jdbc:oracle:thin:@VOIPDEV3:1521:PV
# name of a valid database user (metric account: PV_LOIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true, TIP is restarted after DataView is installed. Default is TRUE if RESTART_TIP is not set
RESTART_TIP=yes
Reuse an existing Tivoli Integrated Portal and Install DataView
using a non root user on a remote host
If you are reusing an existing Tivoli Integrated Portal that was installed by a user
other than root, the default deployment of DataView will encounter problems.
About this task
This procedure describes how you install DataView to a remote host should you
decide to reuse an existing Tivoli Integrated Portal that was installed by a user
other than root.
Only pursue these steps if you are installing DataView to an existing Tivoli
Integrated Portal that was installed by a user other than root.
Procedure
1. Install Topology Editor on the local host.
2. Execute the steps required to discover existing Tivoli Integrated Portals, as
described in Discovering existing Tivoli Integrated Portals on page 90
3. Add DataView to the discovered Tivoli Integrated Portal on the remote host
using the Topology Editor.
4. Run the Deployer for installation from the Topology Editor.
5. Go through the screens as per usual to the last run steps screen.
6. Mark the Run Remote DataView Install and Register Remote DataView steps
as held.
Note: If any Tivoli Integrated Portal Install steps are present, mark those steps
as Success.
7. Mark all other steps as ready including the Register DataView and Prepare
Remote DataView step.
8. Run the deployer so that all steps except Run Remote DataView Install and
Register Remote DataView have run and have a status of success. The
Prepare Remote DataView Install step places the DataView files and configuration
files in the /tmp directory of the remote host.
9. Change to the runtime folder in the DataView_step folder on the remote host
in the tmp directory.
cd /tmp/ProvisoConsumer/Plan/MachinePlan_MachName/0000X_DataView_step/runtime
For example, /tmp/ProvisoConsumer/Plan/MachinePlan_voipdev4/
00003_DataView_step/runtime
10. As the root user, change the permissions of all files and folders in the
0000X_DataView_step folder.
Use the command:
chmod -R 777 *
11. As the non-root user that was used to install OMNIbus, run the run.sh file.
Use the command:
./run.sh
12. After the ./run.sh step has completed, open the Tivoli Integrated Portal URL
and check that DataView exists.
For example, log in at https://hostTipIsInstalledOn:16316/ibm/console; DataView
should be visible under Performance > Network Resources > Defined Resource Views.
13. Mark the Register DataView step as ready.
14. Run the Register DataView step.
15. Click Done on the deployer.
Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2008. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# Unix user name for the TIP installation location
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the Oracle driver
ORACLE_CLIENT_DIR=/opt/oracle/product/11.2.0-client32
# connection url to the database
TNPM_DATABASE_URL=jdbc:oracle:thin:@VOIPDEV3:1521:PV
# name of a valid database user (metric account: PV_LOIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true then TIP is restarted after DataView is installed. Default is TRUE
# if RESTART_TIP is not set.
RESTART_TIP=yes
Next steps
The steps to perform after deployment.
The next step is to install the technology packs, as described in Technology Pack
Installation Guide.
Once you have created the topology and installed Tivoli Netcool Performance
Manager, it is very easy to make changes to the environment. Simply open the
deployed topology file (loading it from the database), make your changes, and run
the deployer with the updated topology file as input. For more information about
performing incremental installations, see Chapter 6, Modifying the current
deployment, on page 117.
Note: After your initial deployment, always load the topology file from the
database to make any additional changes (such as adding or removing a
component), because it reflects the current status of your environment. Once you
have made your changes, you must deploy the updated topology so that it is
propagated to the database. To make any subsequent changes following this
deployment, you must load the topology file from the database again.
To improve performance, IBM recommends that you regularly compute the
statistics on metadata tables. You can compute these statistics by creating a cron
entry that executes the dbMgr (Database Manager Utility) analyzeMetaDataTables
command at intervals.
The following example shows a cron entry that checks statistics every hour, at 30
minutes past the hour. Note that the ForceCollection option is set to N, so that
statistics are calculated only when the internal calendar determines that it is
necessary, and not every hour:
30 * * * * [ -f /opt/DM/dataMart.env ] && [ -x /opt/DM/bin/dbMgr ] && . /opt/DM/dataMart.env && dbMgr analyzeMetaDataTables A N
For more information on dbMgr and the analyzeMetaDataTables command, see the
Tivoli Netcool Performance Manager dbMgr Reference Guide.
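You can also run the same analysis once, on demand, rather than waiting for the
cron schedule. This is a minimal sketch that assumes DataMart is installed in
/opt/DM, as in the cron example above:
. /opt/DM/dataMart.env
dbMgr analyzeMetaDataTables A N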
For each new SNMP DataLoad, change the env file of the TNPM user to add the
directory with the openssh libcrypto.so to the LD_LIBRARY_PATH (or LIBPATH).
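For example, if the OpenSSH libraries were installed under /opt/openssh/lib (a
hypothetical location; use the directory that actually contains libcrypto.so on
your system), the entries added to the env file of the TNPM user would look like
this on Solaris and Linux systems:
LD_LIBRARY_PATH=/opt/openssh/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
On AIX systems, set and export LIBPATH in the same way.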
Resuming a partially successful first-time installation
Should you quit during an installation, this section describes how you can resume
the installation process.
About this task
In this scenario, you try deploying a Tivoli Netcool Performance Manager topology
for the first time. You define the topology and start the installation. Although some
of the components of the Tivoli Netcool Performance Manager topology are
installed successfully, the overall installation does not complete successfully.
In addition, it is possible to skip a section of the installation. For example, a remote
node might not be accessible for some reason. After skipping this portion of the
installation, resume the installation to continue with the remaining steps. The
deployer will list only those steps needed to complete the installation on the
missing node.
For example, suppose that during the first installation, Oracle wasn't running, so
the database check failed. Stop the installation, start Oracle, then resume the
installation.
To resume a partial installation:
Procedure
1. After correcting the problem, restart the deployer from the command line using
the following command:
./deployer.bin -Daction=resume
Using the resume switch enables you to resume the installation exactly where
you left off.
Note: If you are asked to select a topology file in order to resume your
installation, select the topology file you saved before beginning the install.
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the default location of the base installation directory of the Oracle JDBC
driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click Choose to
navigate to another directory. Click Next to continue.
4. The steps page shows the installation steps in the very same state they were in
when you stopped the installation (with the completed steps marked Success,
the failed step marked Error, and the remaining steps marked Held).
5. Select the step that previously failed, reset it to Ready, then click Run Next.
Verify that this installation step now completes successfully.
6. Run any remaining installation steps, verifying that they complete successfully.
7. At the end of the installation, the deployer loads the updated topology
information into the database.
Chapter 5. Installing as a minimal deployment
This chapter describes how to install Tivoli Netcool Performance Manager as a
minimal deployment.
Overview
A minimal deployment installation is used primarily for demonstration or evaluation
purposes, and installs the product on the smallest number of machines possible,
with minimal user input.
This installation type installs all the Tivoli Netcool Performance Manager
components on the local host using a predefined topology file to define the
infrastructure. The minimal deployment installation also installs the MIB-II SNMP
technology pack.
When you perform a minimal deployment installation, the Tivoli Netcool
Performance Manager components are installed on the server you are running the
deployer from.
Before you begin
Before installing Tivoli Netcool Performance Manager, you must have installed the
prerequisite software.
For detailed information, see Chapter 3, Installing and configuring the prerequisite
software, on page 35.
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See
Chapter 3, Installing and configuring the prerequisite software, on page 35
for information about tnsping.
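For example, you can confirm that the Oracle listener answers for the database
alias used by this guide (PV by default) with the tnsping utility, run as the
oracle user:
tnsping PV
The utility reports OK with a response time when the listener is reachable; resolve
any other result before you start the deployer.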
Minimal Installation Process:
If you are setting up a demonstration or evaluation system, it is possible to install
all Tivoli Netcool Performance Manager components on a single server for Linux,
Solaris, or AIX systems. In this case, your installation process proceeds as
described in the sections that follow.
Special consideration
By default, Tivoli Netcool Performance Manager uses Monday to determine when
a new week begins.
If you wish to specify a different day, you must change the FIRST_WEEK_DAY
parameter in the Database Registry using the dbRegEdit utility. This parameter can
only be changed when you first deploy the topology that installs your Tivoli
Netcool Performance Manager environment, and it must be changed BEFORE the
Database Channel is installed. For more information, see the Tivoli Netcool
Performance Manager Database Administration Guide.
No resume of partial install available
There is no resume functionality available for a minimal deployment installation.
As a result, a minimal deployment installation must be carried out in full.
Overriding default values
When performing a minimal deployment installation, you must accept all default
values, with the exceptions listed in this section.
The exceptions are:
v The location of the Oracle JDBC driver.
The default is /opt/oracle/product/11.2.0/jdbc/lib
v The Tivoli Netcool Performance Manager installation destination folder.
The default is /opt/proviso
v Oracle server parameters. The defaults are:
Oracle Base: /opt/oracle/
Oracle home: /opt/oracle/product/11.2.0/
Oracle Port: 1521
Installing a minimal deployment
This section provides step-by-step instructions for installing Tivoli Netcool
Performance Manager on a single Solaris, AIX or Linux server.
Download the MIB-II files
The minimal deployment version installs the MIB-II Technology Pack.
About this task
Before beginning the installation, you must download both the Technology Pack
Installer and the MIB-II jar files.
To download these files, access either of the following distributions:
Procedure
v The product distribution site: https://www-112.ibm.com/software/howtobuy/
softwareandservices
Located on the product distribution site are the ProvisoPackInstaller.jar file,
the bundled jar file, and individual stand-alone technology pack jar files.
v (Optional) The Tivoli Netcool Performance Manager CD distribution, which
contains the ProvisoPackInstaller.jar file and the jar files for the Starter Kit
components.
See your IBM customer representative for more information about obtaining
software.
Note: The Technology Pack Installer and the MIB-II jar files must be in the same
directory (for example, AP), and no other application jar files should be present.
If any other jar files are in that folder, the installation step fails because there
are too many jars in the specified folder. In addition, you must add the AP
directory to the Tivoli Netcool Performance Manager distribution's directory structure.
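As an illustration only, the preparation might look like the following, where
mibii-technology-pack.jar is a hypothetical file name standing in for the MIB-II
jar file that you downloaded:
mkdir <DIST_DIR>/proviso/AP
cp ProvisoPackInstaller.jar mibii-technology-pack.jar <DIST_DIR>/proviso/AP/
After copying, confirm that the AP directory contains only these two jar files.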
Starting the Launchpad
The steps required to start the launchpad.
About this task
To start the launchpad:
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See Setting up a remote X Window display on page 36.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/opt/mozilla/mozilla
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER command cannot include any spaces around the equal
sign.
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution.
For more information, see Downloading the Tivoli Netcool Performance
Manager distribution to disk on page 48.
5. Enter the following command to start the launchpad:
# ./launchpad.sh
Start the installation
Steps required to install.
About this task
A minimal deployment installation uses a predefined topology file.
To start the installation:
Procedure
1. On the launchpad, click the Install Tivoli Netcool Performance Manager 1.3.2
for Minimal Deployment option in the list of tasks, then click the Install
Tivoli Netcool Performance Manager 1.3.2 for Minimal Deployment link to
start the deployer.
Alternatively, you can start the deployer from the command line, as follows:
a. Log in as root.
b. Set and export your DISPLAY variable (see Setting up a remote X
Window display on page 36).
c. Change directory to the directory that contains the deployer:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer
d. Enter the following command:
# ./deployer.bin -Daction=poc -DPrimary=true
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the terms of the license agreement, then click Next.
4. Accept the default location of the base installation directory of the Oracle
JDBC driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click
Choose to navigate to another directory. Click Next to continue.
5. The deployer prompts for the directory in which to install Tivoli Netcool
Performance Manager. Accept the default value (/opt/proviso) or click
Choose to navigate to another directory. Click Next to continue.
6. Verify the following additional information about the Oracle database:
v Oracle Base. The base directory for the Oracle installation (for example,
/opt/oracle/). Accept the provided path or click Choose to navigate to
another directory.
v Oracle Home. The root directory of the Oracle database (for example,
/opt/oracle/product/11.2.0-client32/). Accept the provided path or click
Choose to navigate to another directory.
v Oracle Port. The port used for Oracle communications. The default value is
1521.
Click Next to continue.
7. The node selection window shows the target system and how the files will be
transferred. These settings are ignored for a minimal deployment installation
because all the components are installed on a single server.
Click Next to continue.
8. Provide media location details. The Tivoli Netcool Performance Manager
Media Location for components window is displayed, listing component and
component platform.
a. Click the Choose the Proviso Media button. You are asked to provide the
location of the media for each component.
b. Enter the base directory in which your media is located. If any of the
component media is not within the specified directory, you are asked to
provide the media location details for that component.
9. The deployer displays summary information about the installation. Review the
information, then click Next to begin the installation.
The deployer displays the table of installation steps (see Pre-deployment
check on page 101 for an overview of the steps table). Note the following:
v If an installation step fails, see Appendix I, Error codes and log files, on
page 219 for debugging information. Continue the installation by following
the instructions in Resuming a partially successful first-time installation
on page 109
v Some of the installation steps can take a long time to complete. However, if
an installation step fails, it will fail in a short amount of time.
10. Click Run All to run all the steps in sequence.
11. When all the steps have completed successfully, click Done to close the
wizard.
12. Run chmod -R 777 on /opt/IBM/tivoli in order to make all files in the TIP
directory structure accessible.
Your installation is complete. See The post-installation script on page 116 for
information about the post-installation script, or Next steps on page 116 for
what to do next.
The post-installation script
The post-installation script is run automatically when installation is complete.
About this task
For a minimal deployment the script performs four actions:
Procedure
1. Starts the DataChannel.
2. Starts the DataLoad SNMP Collector, if it is not already running.
3. Creates a DataView user named tnpm.
4. Gives this user permission to view reports under the NOC Reporting group,
with the default password of tnpm.
Results
The script writes a detailed log to the file /var/tmp/poc-post-install.${TIMESTAMP}.log.
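For example, to find the most recent post-installation log after the script has run
(a simple sketch that assumes the default log location shown above):
ls -t /var/tmp/poc-post-install.*.log | head -1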
Next steps
The steps to be performed following the deployment of your system.
When the installation is complete, you are ready to perform the final configuration
tasks that enable you to view reports on the health of your network. These steps
are documented in detail in the Tivoli Netcool Performance Manager
documentation set.
For information about the MIB-II Technology Pack, see the MIB-II Technology Pack
User's Guide.
For information about installing additional technology packs, see the Technology
Pack Installation Guide.
For each new SNMP DataLoad, change the env file of the TNPM user to add the
directory with the openssh libcrypto.so to the LD_LIBRARY_PATH (or LIBPATH).
Chapter 6. Modifying the current deployment
This chapter describes how to modify an installation of Tivoli Netcool Performance
Manager.
It is possible to modify Tivoli Netcool Performance Manager after it has been
installed. To add, delete or upgrade components, load the deployed topology from
the database, make your changes, and run the deployer with the updated topology
as input.
Note: You must run the updated topology through the deployer in order for your
changes to take effect.
Note the following:
v After your initial deployment, always load the topology from the database to
make any additional changes (such as adding or removing a component),
because it reflects the current status of your environment. Once you have made
your changes, you must deploy the updated topology so that it is propagated to
the database. To make any subsequent changes following this deployment, you
must load the topology from the database again.
v You might have a situation where you have modified a topology by both adding
new components and removing components (marking them "To Be Removed").
However, the deployer can work in only one mode at a time - installation mode
or uninstallation mode. In this situation, first run the deployer in uninstallation
mode, then run it again in installation mode.
For information about deleting components from an existing topology, see
Removing a component from the topology on page 159.
Migrating between platforms
It is possible to migrate components between platforms when performing the
Tivoli Netcool Performance Manager upgrade.
The only supported migration path is from Solaris to AIX. There is no support for
any other platform migration scenario.
Opening a deployed topology
Once you have installed Tivoli Netcool Performance Manager, you can perform
incremental installations by modifying the topology that is stored in the database.
About this task
You retrieve the topology, modify it, then pass the updated data to the deployer.
When the installation is complete, the deployer stores the revised topology data in
the database.
To open a deployed topology:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, select the database option, then click Next.
4. Verify that all of the fields for the database connection are filled in with the
correct values:
v Database hostname - The name of the database host. The default value is
localhost.
v Port - The port number used for communication with the database. The
default value is 1521.
v Database user - The user name used to access the database. The default
value is PV_INSTALL.
v Database Password - The password for the database user account. For
example, PV.
v SID - The SID for the database. The default value is PV.
If desired, click Save as defaults to save these values for future incremental
installations.
Click Finish.
Results
The topology is retrieved from the database and is displayed in the Topology
Editor.
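If the Topology Editor cannot connect, you can check the same credentials from
the command line before retrying. This is a minimal sketch, assuming that sqlplus
is available in your Oracle client installation and that the default values listed
above are in use:
sqlplus PV_INSTALL/PV@PV
A successful login confirms the user name, password, and database alias; resolve
any ORA- error that is returned before reopening the topology.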
Adding a new component
After you have deployed your topology, you might need to make changes to it.
About this task
For example, you might want to add another SNMP collector.
To add a new component to the topology:
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the folder for the
component you want to add.
4. Select Add XXX from the pop-up menu, where XXX is the name of the
component you want to add.
5. The Topology Editor prompts for whatever information is needed to create the
component. See the appropriate section for the component you want to add:
v Add the hosts on page 84
v Add a database configurations component on page 86
v Add a DataMart on page 87
v Add a Discovery Server on page 89
v Add a Tivoli Integrated Portal on page 90
v Add a DataView on page 91
v Add the DataChannel administrative components on page 92
v Add a DataChannel on page 93
v Add a Collector on page 95
Note: If you add a collector to a topology that has already been deployed,
you must manually bounce the DataChannel management components (cnsw,
logw, cmgrw, amgrw). For more information, see Manually starting the Channel
Manager programs on page 176.
v Add a Discovery Server on page 89
6. The new component is displayed in the Logical view of the Topology Editor.
7. Save the updated topology. You must save the topology after you add the
component and before you run the deployer. This step is not optional.
8. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology as input.
The deployer can determine that most of the components described in the
topology are already installed, and installs only the new component.
9. When the installation ends successfully, the deployer uploads the updated
topology into the database.
For information about removing a component from the Tivoli Netcool
Performance Manager environment, see Removing a component from the
topology on page 159.
Example
In this example, you update the installed version of Tivoli Netcool Performance
Manager to add a new DataChannel and two SNMP DataLoaders to the existing
system.
To update the Tivoli Netcool Performance Manager installation:
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the DataChannels folder.
4. Select Add Data Channel from the pop-up menu. Following the directions in
Add a DataChannel on page 93, add the following components:
a. Add a new DataChannel (Data Channel 2) with two different SNMP
DataLoaders to the topology. The Topology Editor creates the new
DataChannel.
b. Add two SNMP collectors to the channel structure created by the Topology
Editor. The editor automatically creates a Daily Loader component, an
Hourly Loader component, and two Sub Channels with an FTE component
and a CME component.
5. Save the updated topology.
6. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology as input.
The deployer can determine that most of the components described in the
topology are already installed, and installs only the new components (in this
example, DataChannel 2 with its two new subchannels and DataLoaders).
7. When the installation ends, successful or not, the deployer uploads the updated
topology into the database.
Changing configuration parameters of existing Tivoli Netcool
Performance Manager components
Configuration information is stored in the database. This enables the
DataChannel-related components to retrieve the configuration from the database at
run time.
You set the configuration information using the Topology Editor. As with the other
components, if you make changes to the configuration values, you must pass the
updated topology data to the deployer to have the changes propagated to both the
environment and the database.
Note: After the updated configuration has been stored in the database, you must
manually start, stop, or bounce the affected DataChannel component to have your
changes take effect.
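For example, after changing a configuration value you might bounce the affected
component with the dccmd utility. The following is a minimal sketch, assuming
DataChannel is installed in /opt/datachannel and that the changed component is
the CME for subchannel 1.1 (a hypothetical target; substitute the component that
you modified):
cd /opt/datachannel
. dataChannel.env
dccmd bounce CME.1.1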
Moving components to a different host
You can use the Topology Editor to move components between hosts.
About this task
You can move all components between hosts when they have not yet been
installed and are in the configured state. You can move SNMP and UBA collectors
when they are in the configured state or after they have been deployed and are in
the installed state.
If the component in the topology has not yet been deployed and is in the configured
state, the Topology Editor provides a Change Host option in the pop-up menu
when you click the component name in the Logical view. This option allows you to
change the host associated with the component prior to deployment.
If the component is an SNMP or UBA collector that was previously deployed and
is in the installed state, the Topology Editor provides a Migrate option in the
pop-up menu. This option instructs the deployer to uninstall the component from
the previous host and re-install it on the new system.
For instructions on moving deployed SNMP and UBA collectors after deployment,
see Moving a deployed collector to a different host on page 121. For instructions
on moving components that have not yet been deployed, see the information
below.
Note: Movement of installed DataChannel Remote components is not supported.
All other components can be moved.
To change the host associated with a component before deployment:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the component's current host (see Starting the Topology Editor
on page 83 and Opening a deployed topology on page 117).
2. In the Logical view, navigate to the name of the component to move.
3. Right-click the component name, then click Change Host from the pop-up
menu.
The Migrate Component dialog appears, containing a drop-down list of hosts
where you can move the component.
4. Select the name of the new host from the list, then click Finish.
The name of the new host appears in the Properties tab.
Moving a deployed collector to a different host
You can move a deployed SNMP or UBA collector to a different host. The
instructions for doing so differ for SNMP collectors and UBA collectors.
After you move a collector to a new host, it may take up to an hour for the change
to be registered in the database.
Moving a deployed SNMP collector
The steps required to move a deployed SNMP collector to a different host
About this task
Note: To avoid the loss of collected data, leave the collector running on the
original host until you complete Step 7 of this procedure.
To move a deployed SNMP collector to a different host:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the collector's current host (see Starting the Topology Editor on
page 83 and Opening a deployed topology on page 117).
2. In the Logical view, navigate to the name of the collector to move. For example
if moving SNMP 1.1, navigate as follows:
DataChannels > DataChannel 1 > Collector 1.1 > Collector SNMP.1.1
3. Right-click the collector name (for example, Collector SNMP 1.1), then click
Migrate from the pop-up menu.
The Migrate Collector dialog appears, containing a drop-down list of hosts
where you can move the collector.
Note: If you are moving a collector that has not been deployed, select Change
host from the pop-up menu (Migrate is grayed out). After the Migrate Collector
dialog appears, continue with the steps below.
4. Select the name of the new host from the list, then click Finish.
In the Physical view, the status of the collector on the new host is Configured.
The status of the collector on the original host is To be uninstalled. You will
remove the collector from the original host in Step 9.
Note: If you are migrating a collector that has not been deployed, the name of
the original host is automatically removed from the Physical view.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
Starting the Deployer on page 100.
The deployer installs the collector on the new host and starts it.
Note: Both collectors are now collecting data - the original collector on the
original host, and the new collector on the new host.
7. Before continuing with the steps below, note the current time, and wait until a
time period equivalent to two of the collector's collection periods elapses. Doing
so guards against data loss between collections on the original host and the
start of collections on the new host.
Because data collection on the new host is likely to begin sometime after the
first collection period begins, the data collected during the first collection
period will likely be incomplete. By waiting for two collection time periods to
elapse, you can be confident that data for one full collection period will be
collected.
The default collection period is 15 minutes. You can find the collection period
for the sub-element, sub-element group, or collection formula associated with
the collector in the DataMart Request Editor. For information on viewing and
setting a collection period, see the Tivoli Netcool Performance Manager DataMart
Configuration and Operation Guide.
8. Bounce the FTE for the collector on the collector's new host, as in the following
example:
./dccmd bounce FTE.1.1
The FTE now recognizes the collector's configuration on the new host, and will
begin retrieving data from the collector's output directory on the new host.
9. In the current Topology Editor session, click Run > Run Deployer for
Uninstallation to remove the collector from the original host, passing the
updated topology as input. For more information, see Removing a component
from the topology on page 159.
Note: This step is not necessary if you are moving a collector that has not been
deployed.
Moving a deployed SNMP collector to or from a HAM
environment
If you move a deployed SNMP collector into or out of a High Availability Manager
(HAM) environment, you must perform the steps in this section.
About this task
To move a deployed SNMP collector to or from a HAM environment:
Procedure
1. Move the collector as described in Moving a deployed SNMP collector on
page 121.
Note: If you are moving a spare collector out of the HAM environment, the
navigation path is different than the path shown in Step 2 of the above
instructions. For example, suppose you have a single HAM environment with a
cluster MyCluster on host MyHost, and you are moving the second SNMP
spare out of the HAM. The navigation path to the spare would be as follows:
DataChannels > Administrative Components > High Availability Managers >
HAM MyServer.1 > MyCluster > Collector Processes > Collection Process
SNMP Spare 2.
2. Log in as Tivoli Netcool Performance Manager Unix user, pvuser, on the
collector's new host.
3. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
4. Source the DataLoad environment:
. ./dataLoad.env
5. Stop the SNMP collector:
pvmdmgr stop
6. Edit the file dataLoad.env and set the field DL_HA_MODE as follows:
v Set DL_HA_MODE=true if you moved the collector onto a HAM host.
v Set DL_HA_MODE=false if you moved the collector off of a HAM host.
7. Source the DataLoad environment again:
. ./dataLoad.env
8. Start the SNMP collector:
pvmdmgr start
Note: If you move an SNMP collector to or from a HAM host, you
must bounce the HAM. For information, see Stopping and restarting modified
components on page 151.
Moving a deployed UBA bulk collector
The steps required to move a deployed UBA collector to a different host.
About this task
Note: You cannot move BCOL collectors, or UBA collectors that have a BLB or
QCIF subcomponent. If you want to move a UBA collector that has these
subcomponents, you must manually remove it from the old host in the topology
and then add it to the new host.
To move a deployed UBA collector to a different host:
Procedure
1. Log in as pvuser to the DataChannel host where the UBA collector is running.
2. Change to the directory where DataChannel is installed. For example:
cd /opt/datachannel
3. Source the DataChannel environment:
. dataChannel.env
4. Stop the collector's UBA and FTE components. For example, to stop these
components for UBA collector 1.1, run the following commands:
dccmd stop UBA.1.1
and...
dccmd stop FTE.1.1
For information on the dccmd command, see the Tivoli Netcool Performance
Manager Command Line Interface Guide.
Note: Some technology packs have additional pack-specific components that
must be shut down - namely, BLB (bulk load balancer) and IF (inventory file)
components. IF component names have the format xxxIF, where xxx is a
pack-specific name. For example, Cisco CWM packs have a CWMIF
component, Alcatel 5620 SAM packs have a SAMIF component, and Alcatel
5620 NM packs have a QCIF component. Other packs do not use these
technology-specific components.
5. Tar up the UBA collector's UBA directory. You will copy this directory to the
collector's new host later in the procedure (Step 13).
Note: This step is not necessary if the collector's current host and the new
host share a file system.
For example, to tar up the UBA directory for UBA collector 1.1, run the
following command:
tar -cvf UBA_1_1.tar ./UBA.1.1/*
Note: Some technology packs have additional pack-specific directories that
need to be moved. These directories have the same names as the
corresponding pack-specific components described in Step 4.
6. Start the Topology Editor (if it is not already running) and open the topology
that includes the collector's current host (see Starting the Topology Editor on
page 83 and Opening a deployed topology on page 117).
7. In the Logical view, navigate to the name of the collector to move - for
example, Collector UBA.1.1.
8. Right-click the collector name and select Migrate from the pop-up menu.
The Migrate Collector dialog appears, containing a drop-down list of hosts
where you can move the collector.
9. Select the name of the new host from the list, then click Finish.
In the Physical view, the status of the collector on the new host is Configured.
The collector is no longer listed under the original host.
Note: If the UBA collector was the only DataChannel component on the
original host, the collector will be listed under that host, and its status will be
"To be uninstalled." You can remove the DataChannel installation from the
original host after you finish the steps below. For information on removing
DataChannel from the host, see Removing a component from the topology
on page 159.
10. Click Topology > Save Topology to save the topology.
11. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
Starting the Deployer on page 100.
If DataChannel is not already installed on the new host, this step installs it.
12. Click Run > Run Deployer for Uninstallation to remove the collector from
the original host, passing the updated topology as input. For more
information, see Removing a component from the topology on page 159.
13. Copy any directory you tarred in Step 5 and the associated JavaScript files to
the new host.
Note: This step is not necessary if the collector's original host and the new
host share a file system.
For example, to copy UBA_1_1.tar and the JavaScript files from the collector's
original host:
a. Log in as pvuser to the UBA collector's new host.
b. Change to the directory where DataChannel is installed. For example:
cd /opt/datachannel
c. FTP to the collector's original host.
d. Run the following commands to copy the tar file to the new host. For
example:
cd /opt/datachannel
get UBA_1_1.tar
bye
tar -xvf UBA_1_1.tar
e. Change to the directory where the JavaScript files for the technology pack
associated with the collector are located:
cd /opt/datachannel/scripts
f. FTP the JavaScript files from the /opt/datachannel/scripts directory on the
original host to the /opt/datachannel/scripts directory on the new host.
14. Log in as pvuser to the Channel Manager host where the Administrator
Components (including CMGR) are running.
15. Stop and restart the Channel Manager by performing the following steps:
a. Change to the $DC_HOME directory (typically, /opt/datachannel).
b. Source the DataChannel environment:
. dataChannel.env
c. Get the CMGR process ID by running the following command:
ps -ef | grep CMGR
The process ID appears in the output immediately after the user ID (6561 in the
following example):
pvuser 6561 6560 0 Aug 21 ? 3:04 /opt/datachannel/bin/CMGR_visual -nologo /opt/datachannel/bin/dc.im -a CMGR
pvuser 25976 24244 0 11:39:38 pts/7 0:00 grep CMGR
d. Stop the CMGR process. For example, if 6561 is the CMGR process ID:
kill -9 6561
e. Change to the $DC_HOME/bin directory (typically, /opt/datachannel/bin).
f. Restart CMGR by running the following command:
./cmgrw
16. Log in as pvuser to the UBA collector's new host and change to the
$DC_HOME/bin directory (typically, /opt/datachannel/bin).
17. Run the following command to verify that Application Manager (AMGR) is
running on the new host:
./findvisual
If the AMGR process is running, you will see output that includes an entry
like the following:
pvuser 6684 6683 0 Aug 21 ? 3:43 /opt/datachannel/bin/AMGR_visual -nologo /opt/datachannel/bin/dc.im -a AMGR -lo
Note: If AMGR is not running on the new host, do not continue. Verify that
you have performed the preceding steps correctly.
18. Start the collector's UBA and FTE components on the new host. For example,
to start these components for collector 1.1, run the following commands:
./dccmd start UBA.1.1
and...
./dccmd start FTE.1.1
Note: If any pack-specific components were shut down on the old host (see
Step 4), you must also start those components on the new host.
Changing the port for a collector
You can use the Topology Editor to change the port associated with a collector.
About this task
To change the port associated with a collector after deployment:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
(see Starting the Topology Editor on page 83 and Opening a deployed
topology on page 117).
2. In the Logical view, navigate to the collector.
3. Highlight the collector to view its properties.
The Topology Editor displays both the collector core parameters and the
technology pack-specific parameters.
4. Edit the SERVICE_PORT parameter within the list, then click Finish.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input.
7. When deployment is complete, log onto the server hosting the collector.
8. Log in as Tivoli Netcool Performance Manager Unix user, pvuser, on the
collector's host.
9. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
10. Source the DataLoad environment:
. ./dataLoad.env
11. Stop the SNMP collector:
pvmdmgr stop
12. Edit the file dataLoad.env and set the field DL_ADMIN_TCP_PORT.
For example:
DL_ADMIN_TCP_PORT=8800
13. Source the DataLoad environment again:
. ./dataLoad.env
14. Start the SNMP collector:
pvmdmgr start
Modifying Tivoli Integrated Portal and Tivoli Common Reporting ports
You can update the ports used by Tivoli Integrated Portal and Tivoli Common
Reporting.
The Tivoli Integrated Portal-specific ports that are defined and used to build the
topology.xml file are as follows:
v WAS_WC_defaulthost: 16710
v COGNOS_CONTENT_DATABASE_PORT: 1557
v IAGLOBAL_LDAP_PORT: 389
Changing ports for the Tivoli Common Reporting console
You can assign new ports to an installed Tivoli Common Reporting console.
Procedure
1. Create a properties file containing values, such as host name, that match your
environment. The example properties file below uses default values. Modify
the values to match your environment. Save the file in any location.
WAS_HOME=C:/ibm/tivoli/tip22
was.install.root=C:/ibm/tivoli/tip22
profileName=TIPProfile
profilePath=C:/ibm/tivoli/tipv2/profiles/TIPProfile
templatePath=C:/ibm/tivoli/tipv2/profileTemplates/default
nodeName=TIPNode
cellName=TIPCell
hostName=your_TCR_host
portsFile=C:/ibm/tivoli/tipv2/properties/TIPPortDef.properties
2. Edit the TCR_install_dir\properties\TIPPortDef.properties file to contain the
desired port numbers.
3. Stop the Tivoli Common Reporting server by navigating to the following
directory in the command-line interface:
v Windows: TCR_component_dir\bin, and running the stopTCRserver.bat command.
v UNIX and Linux: TCR_component_dir/bin, and running the stopTCRserver.sh command.
Important: To stop the server, you must log in with the same user that you
used to install Tivoli Common Reporting.
4. In the command-line interface, navigate to the TCR_install_dir\bin directory.
5. Run the following command:
ws_ant.bat -propertyfile C:\temp\tcrwas.props -file "C:\IBM\tivoli\tipv2\profileTemplates\default\actions\updatePorts.ant"
C:\temp\tcrwas.props is the path to the properties file created in Step 1.
6. Change the port numbers in IBM Cognos Configuration:
a. Open IBM Cognos Configuration by running TCR_component_dir\cognos\
bin\tcr_cogconfig.bat for Windows operating systems and
TCR_install_dir/cognos/bin/tcr_cogconfig.sh for Linux and UNIX.
b. In the Environment section, change the port numbers to the desired values,
as in Step 2.
c. Save your settings and close IBM Cognos Configuration.
7. Start the Tivoli Common Reporting server by navigating to the following
directory in the command-line interface:
v Windows: TCR_component_dir\bin, and running the startTCRserver.bat command.
v UNIX and Linux: TCR_component_dir/bin, and running the startTCRserver.sh command.
Important: To start the server, you must log in with the same user that you
used to install Tivoli Common Reporting.
Port assignments
The application server requires a set of sequentially numbered ports.
The sequence of ports is supplied during installation in the response file. The
installer checks that the number of required ports (starting with the initial port
value) are available before assigning them. If one of the ports in the sequence is
already in use, the installer automatically terminates the installation process and
you must specify a different range of ports in the response file.
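Before you install, you can check whether a port in the intended range is already
in use on the host. This is a minimal sketch, assuming the range starts at the
default port 16310 shown later in this chapter and that netstat is available:
netstat -an | grep 16310
If the command returns a line showing the port in a LISTEN state, specify a
different starting port in the response file.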
Viewing the application server profile
Open the application server profile to review the port number assignments and
other information.
About this task
The profile of the application server is available as a text file on the computer
where it is installed.
Procedure
1. Locate the /opt/IBM/tivoli/tipv2/profiles/TIPProfile/logs directory.
2. Open AboutThisProfile.txt in a text editor.
Example
This is the profile for an example installation, as it appears in
/opt/IBM/tivoli/tipv2/profiles/TIPProfile/logs/AboutThisProfile.txt:
Application server environment to create: Application server
Location: /opt/IBM/tivoli/tcr/profiles/TIPProfile
Disk space required: 200 MB
Profile name: TIPProfile
Make this profile the default: True
Node name: TIPNode
Host name: tivoliadmin.usca.ibm.com
Enable administrative security (recommended): True
Administrative console port: 16315
Administrative console secure port: 16316
HTTP transport port: 16310
HTTPS transport port: 16311
Bootstrap port: 16312
SOAP connector port: 16313
Run application server as a service: False
Create a Web server definition: False
What to do next
If you want to see the complete list of defined ports on the application server, you
can open /opt/IBM/tivoli/tipv2/properties/TIPPortDef.properties in a text
editor:
#Create the required WAS port properties for TIP
#Mon Oct 06 09:26:30 PDT 2008
CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS=16323
WC_adminhost=16315
DCS_UNICAST_ADDRESS=16318
BOOTSTRAP_ADDRESS=16312
SAS_SSL_SERVERAUTH_LISTENER_ADDRESS=16321
SOAP_CONNECTOR_ADDRESS=16313
ORB_LISTENER_ADDRESS=16320
WC_defaulthost_secure=16311
CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS=16322
WC_defaulthost=16310
WC_adminhost_secure=16316
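For example, to check only the administrative console ports from the command
line (a simple sketch that assumes the default file location above):
grep WC_adminhost /opt/IBM/tivoli/tipv2/properties/TIPPortDef.properties
In the listing above, this returns the WC_adminhost and WC_adminhost_secure
entries, 16315 and 16316.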
Chapter 7. Using the High Availability Manager
This chapter describes the optional Tivoli Netcool Performance Manager High
Availability Manager (HAM), including how to set up a HAM environment.
Overview
The High Availability Manager (HAM) is an optional component for large
installations that want to use redundant SNMP collection paths.
The HAM constantly monitors the availability of one or more SNMP collection
hosts, and switches collection to a backup host (called a spare) if a primary host
becomes unavailable.
The following figure shows a simple HAM configuration with one primary host
and one spare. In the panel on the left, the primary host is operating normally.
SNMP data is being collected from the network and channeled to the primary host.
In the panel on the right, the HAM has detected that the primary host is
unavailable, so it dynamically unbinds the collection path from the primary host
and binds it to the spare.
HAM basics
An SNMP collector collects data from a specific set of network resources according
to a set of configuration properties.
A collector has two basic parts: the collector process running on the host computer,
and the collector profile that defines the collector's properties.
Note: Do not confuse a "collector profile" with an "inventory profile." A collector
profile contains properties used in the collection of data from network resources -
properties such as collector number, polling interval, and output directory for the
collected data. An inventory profile contains information used to discover network
resources - properties such as the addresses of the resources to look for and the
mode of discovery.
A collector that is not part of a HAM environment is static - that is, the collector
process and the collector profile are inseparable. But in a HAM environment, the
collector process and collector profile are managed as separate entities. This means
that if a collector process is unavailable (due to a collector process crash or a host
machine outage), the HAM can dynamically reconfigure the collector, allowing
data collection to continue. The HAM does so by unbinding the collector profile
from the unavailable collector process on the primary host, and then binding the
collector profile to a collector process on a backup (spare) host.
Note: It may take several minutes for the HAM to reconfigure a collector,
depending on the amount of data being collected.
The parts of a collector
Collector parts and their description.
When you set up a HAM configuration in the Topology Editor, you manage the
two parts of a collector - the collector process and the collector profile - through
the following folders in the Logical view:
v Collector Processes. A collector process is a Unix process representing a runtime
instance of a collector. A collector process is identified by the name of the host
where the process is running and by the collector process port (typically 3002).
A host can have just one SNMP collector process.
v Managed Definitions. A managed definition identifies a collector profile through
the unique collector number defined in the profile.
Every managed definition has a default binding to a host and to the collector
process on that host. The default host and collector process are called the
managed definition's primary host and collector process.
A host that you designate as a spare host has a collector process but no default
managed definition.
The following figure shows the parts of a collector that you manage through the
Collector Process and Managed Definition folders. In the figure, the HAM
dynamically unbinds the collector profile from the collector process on the primary
host, and then binds the profile to the collector process on the spare. This dynamic
re-binding of the collector is accomplished when the HAM binds the managed
definition - in this case, represented by the unique collector ID, Collector 1 - to the
collector process on the spare.
Clusters
A HAM environment can consist of a single set of hosts or multiple sets of hosts.
Each set of hosts in a HAM environment is called a cluster.
A cluster is a logical grouping of hosts and collector processes that are managed by
a HAM.
The use of multiple clusters is optional. Whether you use multiple clusters or just
one has no effect on the operation of the HAM. Clusters simply give you a way to
separate one group of collectors from another, so that you can better deploy and
manage your primary and spare collectors in a way that is appropriate for your
needs.
Multiple clusters may be useful if you have a large number of SNMP collector
hosts to manage, or if the hosts are located in various geographic areas.
The clusters in a given HAM environment are distinct from one another. In other
words, the HAM cannot bind a managed definition in one cluster to a collector
process in another.
HAM cluster configuration
For host failover to occur, a HAM cluster must have at least one available spare
host.
The cluster can have as few as two hosts - one primary and one spare. Or, it can
have multiple primary hosts with one or more spares ready to replace primary
hosts that become unavailable.
The ratio of primary hosts to spare hosts is expressed as p + s. For example, a
HAM cluster with four primary hosts and two spares is referred to as a 4+2
cluster.
Types of spare hosts
There are two types of spare hosts:
v Designated spare. The sole purpose of this type of spare in a HAM cluster is to
act as a backup host.
A designated spare has a collector process, but no default managed definition.
Its collector process remains idle until the HAM detects an outage on one of the
active hosts, and binds that host's managed definition to the spare's collector
process.
A HAM cluster must have at least one designated spare.
v Floating spare. This type of spare is a primary host that can also act as a backup
host for one or more managed definitions.
Types of HAM clusters
The types of HAM clusters that can be created.
When the HAM binds a managed definition to a spare (either a designated spare
or a floating spare), the spare becomes an active component of the collector. It
remains so unless you explicitly reassign the managed definition back to its
primary host or to another available host in the HAM cluster. This is an important
fact to consider when you plan the hosts to include in a HAM cluster.
There are two types of HAM clusters:
v Fixed spare cluster. In this type of cluster, failover can occur only to designated
spares. There are no floating spares in this type of cluster.
When the HAM binds a managed definition to the spare, the spare temporarily
takes the place of the primary that has become unavailable. When the primary
becomes available again, you must reassign the managed definition back to the
primary (or to another available host). The primary then resumes its data
collection operations, and the spare resumes its role as backup host.
If you do not reassign the managed definition back to the primary, the primary
cannot participate in further collection operations. Since the primary is not
configured as a floating spare, it also cannot act as a spare now that its collector
process is idle. As a result, the HAM cluster loses its failover capabilities if no
other spare is available.
Note: A primary host cannot act as a spare unless it is configured as a floating
spare.
v Floating spare cluster. This type of cluster has one or more primary hosts that
can also act as a spare. Failover can occur to a floating spare or to a designated
spare.
You do not need to reassign the managed definition back to this type of primary,
as you do with primaries in a fixed spare cluster. When a floating spare primary
becomes available again, it assumes the role of a spare.
You can designate some or all of the primaries in a HAM cluster as floating
spares. If all the primaries in a HAM cluster are floating spares, you should
never have to reassign a managed definition to another available host in order to
maintain failover capability.
Note: IBM recommends that all the primaries in a cluster be of the same type -
either all floating spares or no floating spares.
Example HAM clusters
Examples of HAM cluster options.
The Tivoli Netcool Performance Manager High Availability Manager feature is
designed to provide great flexibility in setting up a HAM cluster. The following
illustrations show just a few of the possible variations.
1 + 1, fixed spare
A fixed spare cluster with one primary host and one designated spare.
The figure below shows a fixed spare cluster with one primary host and one
designated spare:
v In the panel on the left, Primary1 is functioning normally. The designated spare
is idle.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds
the collector from Primary1 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary1 returns to service. For failover to be possible
again, you must reassign Collector 1 to Primary1. This idles the collector process
on the spare, making it available for the next failover operation if Primary 1 fails
again.
Note: When a designated spare serves as the only spare for a single primary, as in
a 1+1 fixed spare cluster, the HAM pre-loads the primary's collector definition on
the spare. This results in a fast failover with a likely loss of no more than one
collection cycle.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector 1
Possible host bindings: Primary1 (default binding), Designated spare
Host bindings not possible: none
2 + 1, fixed spare
A fixed spare cluster with two primary hosts and one designated spare
The figure below shows a fixed spare cluster with two primary hosts and one
designated spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary2 returns to service. For failover to be possible
again, you must reassign Collector 2 to Primary2. This idles the collector process
on the spare, making it available for the next failover operation.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector 1
Possible host bindings: Primary1 (default binding), Designated spare
Host bindings not possible: Primary2
Collector 2
Possible host bindings: Primary2 (default binding), Designated spare
Host bindings not possible: Primary1
2 + 1, both primaries are floating spares
Both primaries are floating spares.
The figure below shows a floating spare cluster with two primary hosts and one
designated spare, with each primary configured as a floating spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v When Primary2 returns to service, it will assume the role of spare, meaning its
collector process remains idle. The host originally defined as the dedicated spare
continues as the active platform for Collector 2.
v The following figure shows the same cluster after Primary2 has returned to
service. In the panel on the left, Primary2 is idle, prepared to act as backup if
needed.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds
the collector from Primary1 and binds it to the floating spare, Primary2.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector 1
Possible host bindings: Primary1 (default binding), Primary2, Designated spare
Host bindings not possible: none
Collector 2
Possible host bindings: Primary1, Primary2 (default binding), Designated spare
Host bindings not possible: none
3 + 2, fixed spares
A fixed spare cluster with three primary hosts and two designated spares.
The figure below shows a fixed spare cluster with three primary hosts and two
designated spares:
v In the panel on the left, all three primaries are functioning normally. The
designated spares are idle.
v In the panel on the right, Primary3 experiences an outage. The HAM unbinds
the collector from Primary3 and binds it to Designated Spare 2. The HAM chose
Designated Spare 2 over Designated Spare 1 because the managed definition for
Collector 3 set the failover priority in that order.
Note: Each managed definition sets its own failover priority. Failover priority
can be defined differently in different managed definitions.
v With one spare in use and one other spare available (Designated Spare 1),
failover is now limited to the one available spare - even after Primary3 returns
to service. For dual failover to be possible again, you must reassign Collector 3
to Primary3.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector     Possible Host Bindings                                               Host Bindings Not Possible
Collector 1   Primary1 (default binding), Designated Spare 1, Designated Spare 2   Primary2, Primary3
Collector 2   Primary2 (default binding), Designated Spare 1, Designated Spare 2   Primary1, Primary3
Collector 3   Primary3 (default binding), Designated Spare 1, Designated Spare 2   Primary1, Primary2
3 + 2, all primaries are floating spares
A floating spare cluster with three primary hosts and two designated spares, with
each primary configured as a floating spare.
The figure below shows a floating spare cluster with three primary hosts and two
designated spares, with each primary configured as a floating spare:
v In the panel on the left, Primary3 had previously experienced an outage. The
HAM unbound its default collector (Collector 3) from Primary3, and bound the
collector to the first available spare in the managed definition's priority list,
which happened to be Designated Spare 2. Now that Primary3 is available
again, it is acting as a spare, while Designated Spare 2 remains the active
collector process for Collector 3.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
Collector 2 from Primary2, and binds it to the first available spare in the
managed definition's priority list. This happens to be the floating spare
Primary3.
v When Primary2 becomes available again, there will once more be two spares
available - Primary2 and Designated Spare 1.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Collector     Possible Host Bindings                             Host Bindings Not Possible
Collector 1   Primary1 (default binding), Primary2, Primary3,    -
              Designated Spare 1, Designated Spare 2
Collector 2   Primary1, Primary2 (default binding), Primary3,    -
              Designated Spare 1, Designated Spare 2
Collector 3   Primary1, Primary2, Primary3 (default binding),    -
              Designated Spare 1, Designated Spare 2
Resource pools
When you configure a managed definition in the Topology Editor, you specify the
hosts that the HAM can bind to the managed definition, and also the priority order
in which the hosts are to be bound. This list of hosts is called the resource pool for
the managed definition.
A resource pool includes:
v The managed definition's primary host and collector process (that is, the host
and collector process that are bound to the managed definition by default).
v Zero or more other primary hosts in the cluster.
If you add a primary host to a managed definition's resource pool, that primary
host becomes a floating spare for the managed definition.
v Zero or more designated spares in the cluster.
Typically, each managed definition includes one or more designated spares in its
resource pool.
Note: If no managed definitions include a designated spare in their resource
pools, there will be no available spares in the cluster, and therefore failover
cannot occur in the cluster.
How the SNMP collector works
The SNMP collector capability and behaviour.
The SNMP collector is state-based and designed both to perform initialization and
termination actions, and to "change state" in response to events generated by the
HAM or as a result of internally-generated events (like a timeout, for example).
The following table lists the events that the SNMP collector understands and
indicates whether they can be generated by the HAM.
Event     HAM-Generated   Description
Load      Yes             Load the collection profile; do not begin scheduling collections.
Pause     Yes             Stop scheduling collections; do not unload the profile.
Reset     Yes             Reset the expiration timer.
Start     Yes             Start scheduling collections.
Stop      Yes             Stop scheduling collections; unload the profile.
Timeout   No              The expiration timer expires; start scheduling collections.
The SNMP collector can reside in one of the following states, as shown in the
following table:
SNMP Collector State   Event        Description
Idle                   N/A          Initial state; a collector number may or may not be
                                    assigned; the collection profile has not been loaded.
Loading                Load         Intermediate state between Idle and Ready; occurs after
                                    a Load event. The collector number is assigned, and the
                                    collection profile is being loaded.
Ready                  N/A          Collector number assigned and profile loaded, but not
                                    scheduling requests or performing collections.
Starting               Start        Intermediate state between Idle and Running; occurs
                                    after a Start event. The collector number is assigned,
                                    and the profile is being loaded.
Running                N/A          Actively performing requests and collections.
Stopping               Stop/Pause   Intermediate state between Running and Idle.
The SNMP collector transitions through these states in response to the events and
time-outs described in the preceding tables.
How failover works with the HAM and the SNMP collector
How the HAM and the SNMP collectors interact during failover.
The following tables illustrate how the HAM communicates with the SNMP
collectors during failover for a 1+1 cluster and a 2+1 cluster.
Table 8. HAM and SNMP Collector in a 1+1 Cluster
State of Primary   State of Spare   Events and Actions
Running            Idle             The HAM sends the spare the Load event for the
                                    specified collection profile.
Running            Ready            The HAM sends a Pause event to the spare to extend the
                                    timeout. Note: If the timeout expires, the spare performs
                                    start actions and transitions to a Running state.
Running            Running          The HAM sends a Pause event to the collector process
                                    that has been in a Running state for the shorter amount
                                    of time.
No response        Ready            The HAM sends a Start event to the spare.
Table 9. HAM and SNMP Collector in a 2+1 Cluster
State of Primary   State of Spare   Events and Actions
Running            Idle             No action
Running            Ready            No action
Running            Running          The HAM sends a Stop event to the collector process
                                    that has been in a Running state for the shorter amount
                                    of time.
No Response        Idle             The HAM sends a Start event to the spare.
No Response        Ready            The HAM sends a Start event to the spare.
Because more than one physical system may produce SNMP collections, the File
Transfer Engine (FTE) must check every capable system for a specific profile. The
FTE retrieves all output for the specific profile. Any duplicated collections are
reconciled by the Complex Metric Engine (CME).
Obtaining collector status
How to get the status of a collector.
To obtain status on the SNMP collectors managed by the HAM, enter the following
command on the command line:
$ dccmd status HAM.<hostname>.1
The dccmd command returns output similar to the following:
COMPONENT APPLICATION HOST STATUS ES DURATION EXTENDED STATUS
HAM.DCAIX2.1 HAM DCAIX2 running 10010 1.1 Ok: (box1:3012 ->
Running 1.1 for 5h2m26s); No avail spare; Check: dcaix2:3002, birdnestb:3002 1.2 Ok: (box2:3002 ->
Running 1.2 for 5h9m36s); No avail spare; Check: box4:3002, box5:3002 1.3 Not
Running; No avail spare;
Check: box4:3002, box5:3002
The following list describes the EXTENDED STATUS information:
v 1.1 - The load number; in this example, collection profile 1.1.
v Ok: - The status of the load. Ok means the profile is being collected properly;
Not Running indicates a severe problem (data loss).
v (box1:3012 -> Running 1.1 for 5h2m26s) - The collector that is currently
performing the load, with its status and uptime.
v No avail spare - The list of possible spares to use if something happens to the
collector that is currently working. In this example there is no spare available,
so a failover would fail. A list of host:port entries would indicate the possible
spare machines.
v Check: box4:3002, box5:3002 - Indicates what is currently wrong with the
system or configuration. Machines box4:3002 and box5:3002 should be spares but
are either not running or not reachable. The user is instructed to check these
machines.
For a 1-to-1 failover configuration, the dccmd command might return output like
the following:
$ dccmd status HAM.SERVER.1
COMPONENT APPLICATION HOST STATUS ES DURATION EXTENDED STATUS
HAM.SERVER.1 HAM SERVER running 10010 1.1 Ok: (box1:3002 ->
Running 1.1 for 5h2m26s); 1 avail spare: (box2:3002 -> Ready 1.1)
The preceding output shows that Collector 1.1 is in a Running state on Box1, and
that the Collector on Box2 is in a Ready state, with the profile for Collector 1.1
loaded.
Creating a HAM environment
This section describes the steps required to create a 3+1 HAM environment with a
single cluster, and with all three primaries configured as floating spares.
About this task
This is just one of the many variations a HAM environment can have. The
procedures described in the following sections indicate the specific steps where
you can vary the configuration.
Note: If you are setting up a new Tivoli Netcool Performance Manager
environment and plan to use a HAM in that environment, perform the following
tasks in the following order:
Procedure
1. Install all collectors.
2. Configure and start the HAM.
3. Install all technology packs.
4. Perform the discovery.
Topology prerequisites
The minimum component prerequisite.
A 3+1 HAM cluster requires that you have a topology with the following
minimum components:
v Three hosts, each bound to an SNMP collector. These will act as the primary
hosts. You will create a managed definition for each of the primary hosts.
v One additional host that is not bound to an SNMP collector. This will act as the
designated spare.
For information on installing these components, see Adding a new component
on page 118.
Procedures
The general procedures for creating a single-cluster HAM with one designated
spare and three floating spares.
Create the HAM and a HAM cluster
To create a High Availability Manager with a single cluster
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
where you want to add the HAM (see Starting the Topology Editor on page
83 and Opening a deployed topology on page 117).
2. In the Logical view, right-click High Availability Managers, located at
DataChannels > Administrative Components.
3. Select Add High Availability Manager from the pop-up menu.
The Add High Availability Manager Wizard appears.
4. In the Available hosts field, select the host where you want to add the HAM.
Note: You can install the HAM on a host where a collector process is installed,
but you cannot install more than one HAM on a host.
5. In the Identifier field, accept the default identifier.
The identifier has the following format:
HAM.<HostName>.<n>
where HostName is the name of the host you selected in Step 4, and n is a
HAM-assigned sequential number, beginning with 1, that uniquely identifies
this HAM from others that may be defined on other hosts.
6. Click Finish.
The HAM identifier appears under the High Availability Managers folder.
7. Right-click the identifier of the HAM you just created.
8. Select Add Cluster from the pop-up menu.
The Add Cluster Monitor Wizard appears.
9. In the Identifier field, type a name for the cluster and click Finish.
The cluster name appears under the HAM identifier folder you added in
Step 6. The following folders appear under the cluster name:
v Collector Processes
v Managed Definitions
Note: To add additional clusters to the environment, repeat Step 7 through
Step 9.
Add the designated spare
How to create and add a designated spare.
About this task
To create a designated spare, you must have a host defined in the Physical view
with no SNMP collector assigned to it. For information on adding a host to a
topology, see Add the hosts on page 84
To add a designated spare to a cluster:
Procedure
1. In the Logical view, right-click the Collector Processes folder that you created
in Step 9 of the previous section, Create the HAM and a HAM cluster on
page 144.
2. Select Add Collection Process SNMP Spare from the pop-up menu.
The Add Collection Process SNMP Spare - Configure Collector Process SNMP
Spare dialog appears.
3. In the Available hosts field, select the host that you want to make the
designated spare.
This field contains the names of hosts in the Physical view that do not have
SNMP collectors assigned to them.
4. In the Port field, specify the default port number, 3002, for the spare's collector
process, then click Finish.
Under the cluster's Collector Processes folder, the entry Collection Process
SNMP Spare <n> appears, where n is a HAM-assigned sequential number,
beginning with 1, that uniquely identifies this designated spare from others
that may be defined in this cluster.
Note: Repeat Step 1 through Step 4 to add an additional designated spare to
the cluster.
What to do next
If you are making changes to an existing configuration, make sure that the
dataLoad.env file contains the correct settings:
1. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
2. Source the DataLoad environment:
. ./dataLoad.env
3. Make sure that the DL_HA_MODE field in the dataLoad.env file is set to
DL_HA_MODE=true (a quick check is shown after these steps).
4. Source the DataLoad environment again:
. ./dataLoad.env
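As a quick check after sourcing the environment, you can confirm the setting
directly in the file; this example assumes the default /opt/dataload installation
directory, and the line that is returned should set DL_HA_MODE to true:
grep DL_HA_MODE /opt/dataload/dataLoad.env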
Add the managed definitions
A managed definition allows the HAM to bind a collector profile to a collector
process.
About this task
Note: When you add a managed definition to a HAM cluster, the associated
collector process is automatically added to the cluster's Collector Processes folder.
To add a managed definition to a HAM cluster:
Procedure
1. In the Logical view, right-click the Managed Definitions folder that you
created in Create the HAM and a HAM cluster on page 144.
2. Select Add Managed Definition from the pop-up menu.
The Add Managed Definition - Choose Managed Definition dialog appears.
3. In the Collector number field, select the unique collector number to associate
with this managed definition.
4. Click Finish.
The following entries now appear for the cluster:
v Under the cluster's Managed Definitions folder, the entry Managed
Definition <n> appears, where n is the collector number you selected in Step
3.
v Under the cluster's Collector Processes folder, the entry Collector Process
[HostName] appears, where HostName is the host that will be bound to the
SNMP collector you selected in Step 3. This host is the managed definition's
primary host.
Note: Repeat Step 1 through Step 4 to add another managed definition to the
cluster.
Example
When you finish adding managed definitions for a 3+1 HAM cluster, the Logical
and Physical views might look like the following:
In this example, the hosts dcsol1a, dcsol1b, and docserver1 are the primaries, and
docserver2 is the designated spare.
Define the resource pools
A resource pool is a list of the spares, in priority order, that the HAM can bind to a
particular managed definition.
About this task
When you create a managed definition, the managed definition's primary host is
the only host in its resource pool. To enable the HAM to bind a managed
definition to other hosts, you must add more hosts to the managed definition's
resource pool.
To add hosts to a managed definition's resource pool:
Procedure
1. Right-click a managed definition in the cluster's Managed Definitions folder.
2. Select Configure Managed Definition from the pop-up menu.
The Configure Managed Definition - Collector Process Selection dialog appears,
as shown below. In this example, the resource pool being configured is for
Managed Definition 1 (that is, the managed definition associated with Collector
1).
3. In the Additional Collector Processes list, check the box next to each host to
add to the managed definition's resource pool.
Typically, you will add at least the designated spare (in this example,
docserver2) to the resource pool. If you add a primary host to the resource
pool, that host becomes a floating spare for the managed definition.
Note: You must add at least one of the hosts in the Additional Collector
Processes list to the resource pool.
Since the goal in this example is to configure all primaries as floating spares,
the designated spare and the two primaries (docserver1 and dcsol1a) will be
added to the resource pool.
4. When finished checking the hosts to add to the resource pool, click Next.
Note: If you add just one host to the resource pool, the Next button is not
enabled. Click Finish to complete the definition of this resource pool. Return to
Step 1 to define a resource pool for the next managed definition in the cluster,
or skip to Save and start the HAM on page 149 if you are finished defining
resource pools.
The Configure Managed Definition - Collector Process Order dialog appears, as
shown below:
5. Specify the failover priority order for this managed definition. To do so:
a. Select a host to move up or down in the priority list, then click the Up or
Down button until the host is positioned where you want.
b. Continue moving hosts until the priority list is ordered as you want.
c. Click Finish.
In this example, if the primary associated with Managed Definition 1 fails, the
HAM will attempt to bind the managed definition to the floating spare dcsol1a.
If dcsol1a is in use or otherwise unavailable, the HAM attempts to bind the
managed definition to docserver1. The designated spare docserver2 is last in
priority.
6. Return to Step 1 to define a resource pool for the next managed definition in
the cluster, or continue with the next section if you are finished defining
resource pools.
Save and start the HAM
When you finish configuring the HAM as described in the previous sections, you
are ready to save the configuration and start the HAM.
About this task
To save and start the HAM:
Procedure
1. Click Topology > Save Topology to save the topology file containing the HAM
configuration.
2. Run the deployer (see Starting the Deployer on page 100), passing the
updated topology file as input.
3. Open a terminal window on the DataChannel host.
4. Log in as pvuser.
5. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
6. Bounce (stop and restart) the Channel Manager. For instructions, see Step 15 on
page 111.
7. Run the following command:
dccmd start ham
Monitoring of the HAM environment begins.
For information on using dccmd, see the Tivoli Netcool Performance Manager
Command Line Interface Guide.
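After the HAM starts, you can verify that it is monitoring the cluster by querying
its status, as described in Obtaining collector status on page 142. For example,
where <hostname> is the host on which the HAM is installed:
dccmd status HAM.<hostname>.1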
Creating an additional HAM environment
How to create an additional HAM environment.
Typically, one HAM is sufficient to manage all the collectors you require in your
HAM environment. But for performance reasons, very large Tivoli Netcool
Performance Manager deployments involving dozens or hundreds of collector
processes might benefit from more than one HAM environment.
HAM environments are completely separate from one another. A host in one HAM
environment cannot fail over to a host in another HAM environment.
To create an additional HAM environment, perform all of the procedures described
in Creating a HAM environment on page 143.
Modifying a HAM environment
How to modify a HAM environment.
You can modify a HAM environment by performing any of the procedures in
Creating a HAM environment on page 143. For example, you can add collectors,
add clusters, configure a primary host as a floating spare, change the failover
priority order of a resource pool, and make a number of other changes to the
environment, including moving collectors into or out of a HAM environment.
For information on moving a deployed SNMP collector into or out of a HAM
environment, see Moving a deployed SNMP collector to or from a HAM
environment on page 122.
You can also modify the configuration parameters of the HAM components that
are writable. For information on modifying configuration parameters, see Changing
configuration parameters of existing Tivoli Netcool Performance Manager
components.
Removing HAM components
How to remove HAM components.
You can remove HAM components from the environment by right-clicking the
component name and selecting Remove from the pop-up menu. The selected
component and any subcomponents will be removed.
Before you can remove a designated spare (Collection Process SNMP Spare), you
must remove the spare from any resource pools it may belong to. To remove a
designated spare from a resource pool, open the managed definition that contains
the resource pool, and clear the check box next to the name of the designated spare
to remove. For information about managing resource pools, see Define the
resource pools on page 147.
Stopping and restarting modified components
How to stop and restart modified components.
About this task
If you change the configuration of a HAM or any HAM components, or if you add
or remove an existing collector to or from a HAM environment, you must bounce
(stop and restart) the Tivoli Netcool Performance Manager components you
changed. This is generally true for all Tivoli Netcool Performance Manager
components that you change, not just the HAM.
To bounce a component:
Procedure
1. Open a terminal window on the DataChannel host.
2. Log in as pvuser.
3. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
4. Run the bounce command in the following format:
dccmd bounce <component>
For example:
v To bounce the HAM with the identifier HAM.dcsol1b.1, run:
dccmd bounce ham.dcsol1b.1
v To bounce all HAMs in the topology, run:
dccmd bounce ham.*.*
v To bounce the FTE for collector 1.1 that is managed by a HAM, run:
dccmd bounce fte.1.1
You do not need to bounce the HAM that the FTE and collector are in.
For information on using dccmd, see the Tivoli Netcool Performance Manager
Command Line Interface Guide.
5. Bounce the Channel Manager. For instructions, see Step 15.
Viewing the current configuration
During the process of creating or modifying a HAM cluster, you may find it useful
to check how the individual collector processes and managed definitions are
currently configured.
About this task
To view the current configuration of a collector process or managed definition:
Procedure
1. Right-click the collector process or managed definition to view.
2. Select Show from the pop-up menu.
The Show Collector Process... or Show Managed Definition... dialog appears.
The following sections describe the contents of these dialogs.
Show Collector Process... dialog
Dialog box description.
The following figure shows a collector process configured with three managed
definitions.
The configuration values are described as follows:
v dcsol1a. The primary host where this collector process runs.
v 3002. The port through which the collector process receives SNMP data.
v 3 2 (Primary) 1. The managed definitions that the HAM can bind to this
collector process. The values have the following meanings:
   - 3. The managed definition for Collector 3.
   - 2 (Primary). The managed definition for Collector 2. This is the default
     managed definition for the collector process.
   - 1. The managed definition for Collector 1.
Show Managed Definition... dialog
Dialog box description.
The Show Managed Definition... dialog contains the resource pool for a particular
managed definition.
This dialog contains the same information that appears in the Show Collector
Process... dialog, but for multiple hosts instead of just one. As such, this dialog
gives you a broader view of the cluster's configuration than the Show Collector
Process... dialog.
The following figure shows a managed definition's resource pool configured with
four hosts:
Note the following about this managed definition's resource pool:
v The priority order of the hosts is from top to bottom - therefore, the first
collector process that the HAM will attempt to bind to this managed definition
is the one on host dcsol1a. The collector process on host docserver2 is last in the
priority list.
v The first three hosts are floating spares. They are flagged as such by each having
a primary managed definition.
v The host docserver2 is the only designated spare in the resource pool. It is
flagged as such by not having a primary managed definition.
Chapter 8. Enabling Common Reporting on Tivoli Netcool
Performance Manager
Additional software is required if you want to use the Tivoli Common Reporting
server installed with Tivoli Netcool Performance Manager.
IBM Tivoli Common Reporting is an enterprise reporting solution that delivers a
common reporting platform across the Tivoli portfolio. To make full use of
Common Reporting, you must install Model Maker IBM Cognos Edition on your
system.
The Model Maker IBM Cognos Edition tooling greatly simplifies the task of
reporting on network performance by using Tivoli Common Reporting. Through
the creation, deployment, and management of common packs, Model Maker
simplifies the process of publishing report packages for Network Performance
Management on Tivoli Common Reporting.
Note: Before you set up Model Maker, ensure that you have installed the Oracle
11g Instant Client, as described in Installing Oracle 11g Instant Client
(Standalone Server or Database Server combined with Tivoli Common Reporting).
For more information about Model Maker IBM Cognos Edition, see
http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.tnpm.doc/welcome_tnpm.html.
Model Maker
The Model Maker tooling greatly simplifies the task of reporting on the
performance of wireless and wireline networks. Through the creation, deployment,
and management of common packs, Model Maker simplifies the process of
publishing report packages for network Performance Management on Tivoli
Common Reporting.
Common packs extend technology pack data models and, for example, Summary
and Busy Hour data models to Tivoli Common Reporting on Tivoli Netcool
Performance Manager. After you install a technology pack, you install the
corresponding common pack to enable Tivoli Common Reporting. Common packs
can be delivered in the COTS program with the corresponding technology packs,
or created and customized by customers and business partners.
You must install the Model Maker components on a Tivoli Netcool Performance
Manager 1.3.2 system in the following sequence:
1. Install the Common Pack Service on the Tivoli Common Reporting server.
2. Install Model Maker on a Windows computer.
3. Install a technology pack and the corresponding common packs for each
technology you require.
System requirements
For full details of prerequisite software and the system requirements, see the
following guide:
v IBM Tivoli Netcool Performance Manager: Model Maker: Installation and User Guide
This guide is available at http://publib.boulder.ibm.com/infocenter/tivihelp/
v8r1/index.jsp?topic=/com.ibm.tnpm.doc/welcome_tnpm.html
The following platforms are supported.
The Common Pack Service
The following server operating systems are supported:
v IBM AIX
v Oracle Solaris (SPARC)
v Red Hat Linux
Model Maker
The following client operating systems are supported:
v Microsoft Windows XP Professional, x86-32 or later.
The Base Common Pack Suite
The Base Common Pack Suite is a set of generic Base Common Packs (BCPs), some
of which are mandatory requirements for working with common packs. All
common packs have a dependency on at least one BCP, the TCR Time BCP. For
Wireless users, BCPs provide the cross-vendor technology support provided by
GOMs and GOMlets for technology packs.
The Base Common Pack Suite is updated periodically with the latest versions of
the BCPs. Before you install or create common packs, you must download the
latest version of the Base Common Pack Suite, and install the BCPs you require.
The Base Common Pack Suite consists of an archive file containing a set of
common pack JAR files, which you must download and extract before installing.
v The TCR Time BCP is a mandatory dependent pack for all wireless and wireline
common packs, and provides a common time dimension for reporting. It must
be installed before you can work with any common packs.
v The Wireline Common BCP is a mandatory dependent pack for all wireline
common packs. It must be installed before you can work with wireline common
packs.
v Typically, a number of Wireless BCPs are dependent packs for a wireless
common pack. Wireless BCPs support a number of Global Object Model (GOM)
and GOMlet technology packs. Refer to individual wireless common pack
release notes to see the list of dependent packs for a particular pack.
For current version information about the Base Common Pack Suite, see the Known
issues with Tivoli Netcool Performance Manager 1.3.2 technote in the Support
knowledge base.
Installing the BCP package from the distribution
The Tivoli Netcool Performance Manager 1.3.2 distribution CD also contains the
BCP Suite (1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz).
About this task
Install a number of Base Common Packs (BCPs) from the Base Common Pack
Suite. Base Common Packs (BCPs) are themselves common packs and you install
them exactly as you install any other common packs.
Procedure
1. Download and extract the Base Common Pack Suite to a location of your
choice (a sample extraction is shown after this procedure). See The Base
Common Pack Suite on page 156.
2. Install the Base Common Pack Suite.
For instructions on installing Base Common Packs, see Installing common packs
in IBM Tivoli Netcool Performance Manager: Model Maker 1.2.0 Installation and User
Guide.
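The following is a minimal extraction sketch for step 1; the working directory is
an arbitrary choice, and <distribution_path> is a placeholder for the location of
the distribution media:
mkdir /tmp/bcpsuite
cd /tmp/bcpsuite
cp <distribution_path>/1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz .
gunzip 1.0.0.3-TIV-TNPM-BCPSUITE.tar.gz
tar -xf 1.0.0.3-TIV-TNPM-BCPSUITE.tar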
Chapter 9. Uninstalling components
This section provides information on how to uninstall components.
When you perform an uninstall, the "uninstaller" is the same deployer used to
install Tivoli Netcool Performance Manager.
Removing a component from the topology
How to remove an installed component from the topology.
You might have a situation where you have modified a topology by both adding
new components and removing components (marking them "To Be Removed").
However, the deployer can work in only one mode at a time - installation mode or
uninstallation mode. In this situation, first run the deployer in uninstallation mode,
then run it again in installation mode.
Note: After the deployer has completed an uninstall, you must open the topology
(loaded from the database) in the Topology Editor before performing any
additional operations.
Restrictions and behavior
Restrictions and behaviour to observe when performing an uninstall.
Before you remove a component, note the following:
v You can only remove a component from the topology if you have uninstalled the
components that depend on it. To uninstall a component, you must remove it
from the topology and then run the deployer in uninstall mode and execute the
action(s) related to uninstalling the component.
v You can remove a host only if no components are configured or installed on it.
v If you remove a component and redeploy the file, the Topology Editor view is
not refreshed automatically. Reload the topology file from the database to view
the updated topology.
Note: Once components are marked for deletion, the topology must be consumed
by the deployer to propagate the required changes and load the updated file in the
database. When you open the database version of the topology, the "removed"
component will disappear from the topology.
To remove one or more components from the topology where the host system no
longer exists or is unreachable on the network, do the following:
1. Open the Topology Editor and remove all components related to the host
system.
2. Remove the host system from the topology.
3. Redeploy the topology, ignoring any messages related to the non-existent or
unreachable host.
4. At deployment, the modified topology is saved to the database without the
components that were previously installed on the host system.
DataChannel restrictions:
v You can remove the DataChannel Administrative Component only after all the
DataChannels have been removed.
v If you are uninstalling a DataChannel component, the component should first be
stopped. If you are uninstalling all DataChannel components on a host, then you
should remove the DataChannel entries from the crontab.
v If you delete a DataChannel or collector, the working directories (such as the
FTE and CME) are not removed; you must delete these directories manually.
v When a Cross-Collector CME (CC-CME) is installed on the system and formulas
are applied against it, the removal of collectors that the CC-CME depends on is
not supported. This is an exceptional case, that is, if you have not installed a
CC-CME, collectors can be removed.
DataView restrictions:
Uninstall DataView manually if other products are installed in the same Tivoli
Integrated Portal instance. Use the following procedure to uninstall a DataView
component:
1. Run the uninstall command:
<tip_location>/products/tnpm/dataview/bin/uninstall.sh <tip_location>
<tip_administrator_username> <tip_administrator_password>
2. Remove the DataView directory:
rm -rf <tip_location>/products/tnpm/dataview
3. In the Topology Editor:
   a. Remove the DataView component.
   b. Save the topology.
   c. Run the deployer for uninstallation.
   d. Mark the DataView step successful.
   e. Run the unregister DataView step.
Note: After this manual uninstall is completed, the DataView instance remains in
the topology; this is not usually the case for uninstalled components.
Removing a component
To remove a component from the topology.
Procedure
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click the component you want
to delete and select Remove from the pop-up menu.
4. The editor marks the component as "To Be Removed" and removes it from the
display.
5. Save the updated topology.
6. Run the deployer (see Starting the Deployer on page 100).
Note: If you forgot to save the modified topology, the deployer will prompt
you to save it first.
The deployer can determine that most of the components described in the
topology file are already installed, and removes the component that is no
longer part of the topology.
7. The deployer displays the installation steps page, which lists the steps required
to remove the component. Note that the name of the component to be removed
includes the suffix "R" (for "Remove"). For example, if you are deleting a
DataChannel, the listed component is DCR.
8. Click Run All to run the steps needed to delete the component.
9. When the installation ends successfully, the deployer uploads the updated
topology file into the database. Click Done to close the wizard.
Note: If you remove a component and redeploy the file, the Topology Editor
view is not refreshed automatically. Reload the topology file from the database
to view the updated topology.
What to do next
If you have uninstalled DataChannel components, you must bounce CMGR after
you have run the deployer, so that it picks up the updated configuration and
recognizes that the components have been removed. If you do not bounce CMGR
after the deployer runs, you may get errors when the components are restarted.
Uninstalling the entire Tivoli Netcool Performance Manager system
How to uninstall the entire system.
To uninstall Tivoli Netcool Performance Manager, you must have the CD or the
original electronic image. The uninstaller will prompt you for the location of the
image.
Order of uninstall
The order in which you must uninstall components.
About this task
For all deployments, you must use the Topology Editor to uninstall the Tivoli
Netcool Performance Manager components in the following order:
Procedure
1. DataLoad and DataChannel
When uninstalling DataChannel from a host, you must run ./dccmd stop all,
disable or delete the dataload cron processes and manually stop (kill -9) any
running channel processes (identified by running findvisual). See Appendix B,
DataChannel architecture, on page 171 for more information about the
findvisual command. A command sketch illustrating these shutdown actions
follows this list.
2. DataMart
3. DataChannel Administrative Components and any remaining DataChannel
components.
4. DataView. Also remove Tivoli Integrated Portal/Tivoli Common Reporting.
5. Tivoli Netcool Performance Manager Database (remove only after all the other
components have been removed). The database determines the operating
platform of the Tivoli Netcool Performance Manager environment.
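The following sketch illustrates the DataChannel shutdown actions from step 1 on
a single host. It assumes the default /opt/datachannel installation directory and
the pvuser crontab; <pid> is a placeholder for each process ID reported by
findvisual:
cd /opt/datachannel/bin
./dccmd stop all
crontab -e
./findvisual
kill -9 <pid>
In the crontab editing session, remove or comment out the DataChannel watchdog
entries (cnsw, logw, cmgrw, amgrw) before saving.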
Restrictions and behavior
Before you uninstall Tivoli Netcool Performance Manager you must note the
following restrictions and behavior:
v If you need to stop the uninstallation before it is complete, you can resume it
(a resume command sketch follows this list).
The uninstaller relies on the /tmp/ProvisoConsumer directory to store the
information needed to resume an uninstall. However, if the ProvisoConsumer
directory is removed for any reason, the -Daction=resume command will not
work.
Note: When you reboot your server, the contents of /tmp might get cleaned out.
v When you run the uninstaller, it finds the components that are marked as
"Installed", marks them as "To Be Removed", then deletes them in order. The
deployer is able to determine the correct steps to be performed. However, if the
component is not in the Installed state (for example, the component was not
started), the Topology Editor deletes the component from the topology - not the
uninstaller.
v When the uninstallation is complete, some data files still remain on the disk. You
must remove these files manually. See Residual files on page 164 for the list of
files that must be deleted manually.
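For example, if the uninstallation was interrupted and /tmp/ProvisoConsumer is
still intact, it can be resumed from the deployer directory; the path shown is the
default used elsewhere in this guide:
# cd /opt/IBM/proviso/deployer
# ./deployer.bin -Daction=resume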
Performing the uninstall
How to uninstall the Tivoli Netcool Performance Manager installation.
Before you begin
The uninstall procedure assumes that you have updated the topology file:
1. If it is not already open, open the Topology Editor (see Starting the Topology
Editor on page 83).
2. Open the existing topology (see Opening a deployed topology on page 117).
3. In the Logical view of the Topology Editor, right-click each component you
want to delete and select Remove from the pop-up menu.
4. The editor marks all components you selected for removal as "To Be Removed"
and removes them from the display.
5. Save the updated topology.
About this task
To remove a Tivoli Netcool Performance Manager installation:
Procedure
1. You can start the uninstaller from within the Topology Editor or from the
command line.
To start the uninstaller from the Topology Editor:
v Select Run > Run Deployer for Uninstallation.
To start the uninstaller from the command line:
a. Log in as root.
b. Set and export your DISPLAY variable (see Setting up a remote X Window
display on page 36).
c. Change directory to the directory that contains the deployer. For example:
# cd /opt/IBM/proviso/deployer
d. Enter the following command:
# ./deployer.bin -Daction=uninstall
2. The uninstaller opens, displaying a welcome page. Click Next to continue.
3. Accept the default location of the base installation directory of the Oracle JDBC
driver (/opt/oracle/product/11.2.0-client32/jdbc/lib), or click Choose to
navigate to another directory. Click Next to continue. A dialog opens, asking
whether you want to load a topology from disk.
4. Click No.
A dialog box opens, asking you to enter the details of the updated topology
file.
5. Enter the name of the topology file you updated as described in the "Before
you begin" section.
6. The database access window prompts for the security credentials. Enter the
host name (for example, delphi) and database administrator password (for
example, PV), and verify the other values (port number, SID, and user name).
Click Next to continue. The topology as stored in the database is then
compared with the topology loaded from the file.
7. The uninstaller displays several status messages, then displays a message
stating that the environment status was successfully downloaded and saved to
the file /tmp/ProvisoConsumer/Discovery.xml. Click Next to continue.
8. Repeat the process on each machine in the deployment.
Note: After the removal of each component using the Topology Editor, you
must reload the topology from the database.
Uninstalling the Topology Editor
How to uninstall the Topology Editor.
About this task
To uninstall the Topology Editor, follow the instructions in this section. Do not
simply delete the /opt/IBM directory. Doing so will cause problems when you try
to reinstall the Topology Editor. If the /opt/IBM directory is accidentally deleted,
perform the workaround documented in Installing the Topology Editor on page
82.
Note: Uninstall Tivoli Netcool Performance Manager before uninstalling the
Topology Editor.
To uninstall the Topology Editor:
Procedure
1. Log in as root.
2. Set and export your DISPLAY variable (see Setting up a remote X Window
display on page 36).
3. Change directory to the install_dir/uninstall directory. For example:
# cd /opt/IBM/proviso/uninstall
4. Enter the following command:
# ./Uninstall_Topology_Editor
5. The Uninstall wizard opens. Click Uninstall to uninstall the Topology Editor.
6. When the script is finished, click Done.
Residual files
How to remove the possible residual files that may exist after the uninstall process.
About this task
When you uninstall Tivoli Netcool Performance Manager, some of the files remain
on the disk and must be removed manually. After you exit from the deployer (in
uninstall mode), you must delete these residual files and directories manually.
Perform the following steps:
Procedure
1. Log in as oracle.
2. Enter the following commands to stop Oracle:
sqlplus "/ as sysdba"
shutdown abort
exit
lsnrctl stop
3. As root, enter the following commands to delete these files and directories:
rm -fR /tmp/PvInstall
rm -fR /var/tmp/PvInstall
rm -fR /opt/Proviso
rm -fR /opt/proviso
rm -fR $ORACLE_BASE/admin/PV
rm -fR $ORACLE_BASE/admin/skeleton
rm -fR $ORACLE_HOME/dbs/initPV.ora
rm -fR $ORACLE_HOME/dbs/lkPV
rm -fR $ORACLE_HOME/dbs/orapwPV
rm -fR $ORACLE_HOME/lib/libpvmextc.so
rm -fR $ORACLE_HOME/lib/libmultiTask.so
rm -fR $ORACLE_HOME/lib/libcmu.so
rm -fR $ORACLE_HOME/bin/snmptrap
rm -fR $ORACLE_HOME/bin/notifyDBSpace
rm -fR $ORACLE_HOME/bin/notifyConnection
where $ORACLE_BASE is /opt/oracle/ and $ORACLE_HOME is
/opt/oracle/product/11.2.0/.
4. Enter the following commands to clear your Oracle mount points and remove
any files in those directories:
rm -r /raid_2/oradata/*
rm -r /raid_3/oradata/*
5. Enter the following command to delete the temporary area used by the
deployer:
rm -fr /tmp/ProvisoConsumer
6. Delete the installer file using the following command:
rm /var/.com*
7. Delete the startup file, netpvmd.
v For Solaris, use the command:
rm /etc/init.d/netpvmd
v For AIX, use the command:
rm /etc/rc.d/init.d/netpvmd
v For Linux, use the command:
rm /etc/init.d/netpvmd
Note: The netpvmd startup and stop files are also present in /etc/rc2.d and
/etc/rc3.d as S99netpvmd and K99netpvmd. These files must also be removed.
What to do next
Following TCR uninstallation:
To prevent any possible system instability caused by residual processes
post-uninstall of TCR, run the tcrClean.sh script on all systems where TCR has
been uninstalled:
1. On the host where TCR was uninstalled, change to the directory
containing tcrClean.sh:
cd /opt/IBM/proviso/deployer/proviso/bin/Util/
2. Run tcrClean.sh
3. When prompted, enter the location where TCR was installed.
Note: If you have uninstalled TCR on a remote host, the tcrClean.sh file must be
sent to the remote host (for example, by using FTP) and run there.
Appendix A. Remote installation issues
Remote installation of all Tivoli Netcool Performance Manager components is
supported.
A remote installation refers to installation on any host that is not the primary
deployer, that is, the host running the Topology Editor. For some systems, security
settings may not allow for components to be installed remotely. Before deploying
on such a system, you must be familiar with the information in this appendix.
When remote install is not possible
What to do when remote installation is not possible.
A remote host might not support FTP or the remote execution of files.
Your topology may include hosts on which:
v FTP is possible, but REXEC/RSH are not.
v Neither FTP nor REXEC/RSH is possible.
This section describes how to deploy in these situations.
FTP is possible, but REXEC or RSH are not
In situations where FTP is possible but remote execution or remote shell are not.
About this task
For any remote host where FTP is possible, but REXEC or RSH are not,
deployment of the required component or components must be carried out using
the following steps.
There are two options for deployment.
Procedure
v Option 1:
1. Unselect the Remote Command Execution option during the installation. The
deployer creates and transfers the directory with the required component
package in it.
2. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in Installing on a remote host using a secondary
deployer on page 168.
Neither FTP nor REXEC/RSH are possible
In situations where FTP, remote execution and remote shell are not possible.
About this task
For any remote host where neither FTP nor REXEC/RSH is possible, deployment
of the required component or components must be carried out using the
following steps.
There are two options for deployment.
Procedure
v Option 1:
1. Unselect the FTP option during the installation. The deployer creates a
directory containing the required component package.
2. Copy the required component directory to the target system.
3. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in Installing on a remote host using a secondary
deployer.
Installing on a remote host using a secondary deployer
The general procedure for installing a component on a remote host using a
secondary deployer.
About this task
A secondary deployer is used when the host you wish to install on does not
support remote installation.
The following steps describe how to install a Tivoli Netcool Performance Manager
component using a secondary deployer. For clarity, the primary deployer host is
named delphi, and the host on which the component is installed using the
secondary deployer is named corinth.
Procedure
1. Copy the Tivoli Netcool Performance Manager distribution to the server on
which you would like to set up the secondary deployer, that is, copy the
distribution to corinth. For more information on copying the Tivoli Netcool
Performance Manager distribution to a server, see Downloading the Tivoli
Netcool Performance Manager distribution to disk on page 48.
2. Open the Topology Editor on the primary deployer host, that is, on delphi, and
add the remote component to the topology definition.
You may have completed this task already when creating your original
topology definition. If you have already added the remote component to your
topology definition, skip to the next step.
3. Deploy the new topology containing the added component using the Topology
Editor. This is done by clicking Run > Run Deployer for Installation. This will
push the edited topology to the database.
4. Open the Deployer on corinth by doing the following:
a. Connect to corinth, change to the directory where you have downloaded the
product distribution, and launch the deployer either in graphical mode (by
starting the Launchpad and clicking Start Deployer) or CLI mode (by
navigating to the directory containing the deployer and entering the
command ./deployer.bin).
b. Enter the database credentials when prompted. The deployer connects to
the database.
For more information on how to run a secondary deployer, see Secondary
Deployers on page 101.
Note: Due to Step 3, the secondary deployer sees the topology data and
knows that the required component is still to be installed on corinth.
5. Follow the on screen instructions to install the desired component.
Note: You cannot launch the deployer simultaneously from two different hosts.
Only one deployer can be active at any given time.
Appendix B. DataChannel architecture
This section provides detailed information about the DataChannel architecture.
Data collection
DataChannel data collection.
A Tivoli Netcool Performance Manager DataChannel consists of a number of
components, including the following:
v File Transfer Engine (FTE)
v Complex Metric Engine (CME)
v Daily Database Loader (DLDR)
v Hourly Database Loader (LDR)
v Plan Builder (PBL)
v Channel Manager
The FTE, DLDR, LDR, and PBL components are assigned to each configured
DataChannel. The FTE and CME components are assigned to one or more
Collector subchannels.
Data is produced by Tivoli Netcool Performance Manager DataLoad Collectors.
Data from both SNMP and BULK Collectors is fed into a subchannel's channel
processor.
Data moves through the CME and is synchronized in the Hourly Loader. The
Hourly Loader computes group aggregations from resource aggregation records.
The Daily Loader provides statistics on metric channel tables and metric
tablespaces and inserts data into the database.
Data is moved from one channel component to another as files. These files are
written to and read from staging directories between each component. Within each
staging directory there are subdirectories named do, output, and done. The do
subdirectory contains files that are waiting to be processed by a channel
component. The output subdirectory stores data for the next channel component to
work on. After files are processed, they are moved to the done directory. All file
movement is accomplished by the FTE component.
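As an illustration, listing one component's staging directory shows the three
subdirectories; <staging_directory> is a placeholder, because the actual staging
locations depend on your DataChannel configuration:
ls /opt/datachannel/<staging_directory>
do    done    output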
Data aggregation
A DataChannel aggregates data collected by collectors for eventual use by
DataView reports.
The DataChannel provides online statistical calculations of raw collected data, and
detects real-time threshold violations.
Aggregations include:
v Resource aggregation for every metric and resource
v Group aggregation for every group
v User-defined aggregation computed from raw data
Threshold detections in real time include:
v Raw data violating configured thresholds
v Raw data violating configured thresholds and exceeding the threshold during a
specific duration of time
v Averaged data violating configured thresholds
Management programs and watchdog scripts
The DataChannel management programs and their corresponding watchdog scripts.
The following table lists the names and corresponding watchdog scripts for the
DataChannel management programs running on different DataChannel hosts.
Table 10: Programs and Scripts
Component             Program       Corresponding     Notes
                      Executable*   Watchdog Script
Channel Name Server   CNS           cnsw              Runs on the host running the
                                                      Channel Manager.
Log Server            LOG           logw
Channel Manager       CMGR          cmgrw
Application Manager   AMGR          amgrw             One per subchannel host and one
                                                      on the Channel Manager host.
* The actual component's executable file seen in the output of ps -ef is named
XXX_visual, where XXX is an entry in this column. For example, the file running
for CMGR is seen as CMGR_visual.
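For example, to see which of these programs are currently running on a host, you
can search the process list for the _visual naming convention (a generic check, not
a replacement for the findvisual script described later in this appendix):
ps -ef | grep _visual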
The watchdog scripts run every few minutes from cron. Their function is to
monitor their corresponding management component, and to restart it if necessary.
You can add watchdog scripts for the Channel Manager programs to the crontab
for the pvuser on each host on which you installed a DataChannel component.
To add watchdog scripts to the crontab:
1. Log in as pvuser. Make sure this login occurs on the server running the Channel
Manager components.
2. At a shell prompt, go to the DataChannel conf subdirectory. For example:
$ cd /opt/datachannel/conf
3. Open the file dc.cron with a text editor. (The dc.cron files differ for different
hosts running different DataChannel programs. The following example shows
the dc.cron file for the host running the Channel Manager programs.)
0,5,10,15,20,25,30,35,40,45,50,55 * 1-31 1-12 0-6 /opt/datachannel/bin/cnsw >
/dev/null 2>&1
1,6,11,16,21,26,31,36,41,46,51,56 * 1-31 1-12 0-6 /opt/datachannel/bin/logw >
/dev/null 2>&1
2,7,12,17,22,27,32,37,42,47,52,57 * 1-31 1-12 0-6 /opt/datachannel/bin/cmgrw >
/dev/null 2>&1
3,8,13,18,23,28,33,38,43,48,53,58 * 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
4. Copy the lines in the dc.cron file to the clipboard.
5. At another shell prompt, edit the crontab for the current user.
export EDITOR=vi
crontab -e
A text editor session opens, showing the current crontab settings.
6. Paste the lines from the dc.cron file into the crontab file. For example:
0 * * * * [ -f /opt/datamart/dataMart.env ] && [ -x /opt/datamart/bin/pollinv ]
&& ....
0,5,10,15,20,25,30,35,40,45,50,55 * 1-31 1-12 0-6 /opt/datachannel/bin/cnsw >
/dev/null 2>&1
1,6,11,16,21,26,31,36,41,46,51,56 * 1-31 1-12 0-6 /opt/datachannel/bin/logw >
/dev/null 2>&1
2,7,12,17,22,27,32,37,42,47,52,57 * 1-31 1-12 0-6 /opt/datachannel/bin/cmgrw >
/dev/null 2>&1
3,8,13,18,23,28,33,38,43,48,53,58 * 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
7. Save and exit the crontab file.
8. Repeat steps 1 through 7 on each DataChannel host, with this difference:
The dc.cron file on collector and loader hosts will have only one line, like this
example:
0,5,10,15,20,25,30,35,40,45,50,55 * 1-31 1-12 0-6 /opt/datachannel/bin/amgrw >
/dev/null 2>&1
On such hosts, this is the only line you need to add to the pvuser crontab.
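As a quick check after saving, you can list the crontab and confirm that the
watchdog entries are present, for example:
crontab -l | grep /opt/datachannel/bin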
DataChannel application programs
DataChannel Application Program names and descriptions.
The DataChannel subchannel application programs are listed in Table 11.
Table 11: DataChannel Subchannel Application Program Names
DataChannel Program*   Description                                       Example
BCOL.n.c               Bulk Collector process for channel n, with        BCOL.1.2
                       Collector number c
UBA.n.c                UBA Bulk Collector process for channel n, with    UBA.1.100
                       Collector number c
CME.n.s                Complex Metric Engine for channel n, Collector    CME.2.1
                       number s
DLDR.n                 Daily Loader for channel n                        DLDR.1
LDR.n                  Hourly Loader for channel n                       LDR.2
FTE.n                  File Transfer Engine for channel n                FTE.1.1
PBL.n                  Plan Builder for channel n                        PBL.1
* The actual application's executable file visible in the output of ps -ef is named
XXX_visual, where XXX is an entry in this column.
Note: For historical reasons, the SNMP DataLoad collector is managed by Tivoli
Netcool Performance Manager DataMart, and does not appear in Table 11.
Starting the DataChannel management programs
How to check if DataChannel management programs are running and then start
application programs.
Procedure
v Verify that the DataChannel management programs are running:
1. Log in as pvuser on each DataChannel host.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the findvisual command:
$ ./findvisual
In the resulting output, look for:
– The AMGR process on every DataChannel host
– The CNS, CMGR, LOG, and AMGR processes on the Channel Manager host
v If the DataChannel management programs are running on all DataChannel
hosts, start the application programs on all DataChannel hosts by following
these steps:
1. Log in as pvuser. Make sure this login occurs on the host running the
Channel Manager programs.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the following command to start all DataChannel applications on all
configured DataChannel hosts:
./dccmd start all
The command shows a success message like the following example.
Done: 12 components started, 0 components already running
See the IBM Tivoli Netcool Performance Manager: Command Line Interface Guide
for information about the dccmd command.
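You can also confirm the result by querying the status of the DataChannel applications, for example:
$ ./dccmd status all
The output lists each configured component and its current state.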
There is a Java process associated with the LOG server that must be stopped
should the proviso.log need to be re-created:
1. To find this Java process, enter the following:
ps -eaf | grep LOG
The output should be similar to the following:
pvuser 15774 15773 0 Nov 29 ?
7:59 java -Xms256m -Xmx384m com.ibm.tivoli.analytics.Main -a LOG
2. Kill this process using the command:
kill -9 15774
where 15774 is the PID of the Java process, as shown in the output of the grep command.
3. Restart the LOGW process.
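For example, the complete sequence might look like the following; the PID 15774 is taken from the sample output above, and ./logw is the Log Server watchdog script that is described in "Manually starting the Channel Manager programs" later in this appendix:
$ ps -eaf | grep LOG
$ kill -9 15774
$ cd /opt/datachannel/bin
$ ./logw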
Starting the DataLoad SNMP collector
How to start the DataLoad SNMP Collector
About this task
Once you have started the DataChannel components, check every server that hosts
a DataLoad SNMP collector to make sure the collectors are running. To check
whether a collector is running, run the following command:
ps -ef | grep -i pvmd
If the collector is running, you will see output similar to the following:
pvuser 27118 1 15 10:03:27 pts/4 0:06 /opt/dataload/bin/pvmd -nologo
-noherald /opt/dataload/bin/dc.im -headless -a S
If a collector is not running, perform the following steps:
Procedure
1. Log into the server that is running Tivoli Netcool Performance Manager SNMP
DataLoad by entering the username and password you specified when
installing SNMP DataLoad.
2. Source the DataLoad environment file by entering the following command:
. $DLHOME/dataLoad.env
where $DLHOME is the location where SNMP DataLoad is installed on the system
(/opt/dataload, by default).
Note: If DataLoad shares the same server as DataMart, make sure you unset
the environment variable by issuing the following command from a BASH shell
command line:
unset PV_PRODUCT
3. Change to the DataLoad bin directory by entering the following command:
cd $PVMHOME/bin
4. Start the DataLoad SNMP collector using the following command:
pvmdmgr start
The command displays the following message when the SNMP collector has
been successfully started:
PVM Collecting Daemon is running.
Results
The script controlling the starting and stopping of SNMP collectors, pvmdmgr,
prevents the possibility that multiple collector instances can be running
simultaneously.
If a user starts a second instance, that second instance will die by itself in under
two minutes without ever contacting or confusing the relevant watchdog script.
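If you need to restart a collector that is already running, the same pvmdmgr script is used to stop it first. The stop argument shown here is an assumption based on the description of pvmdmgr above; check the script usage on your system before relying on it:
$PVMHOME/bin/pvmdmgr stop
$PVMHOME/bin/pvmdmgr start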
DataChannel management components in a distributed configuration
A description of the DataChannel management components.
Two channels running on the same system share a common Application Manager
(AMGR) that has a watchdog script, amgrw. The AMGR is responsible for starting,
monitoring through watchdog scripts, and gathering status for each application
server process for the system it runs on. Application programs include the FTE,
CME, LDR, and DLDR programs.
An example of multiple processes running on the same host is:
v Application Manager (AMGR)
v Complex Metric Engines (CME)
v File Transfer Engines (FTE)
v Hourly Data Loaders (LDR)
v Daily Data Loaders (DLDR)
Each program has its own set of program and staging directories.
Manually starting the Channel Manager programs
If you need to manually start the Channel Manager programs, you must do so in a
certain order.
About this task
After a manual start, the program's watchdog script restarts the program as
required.
Procedure
v To start the Channel Manager programs manually:
1. Log in as pvuser on the host running the Channel Manager programs.
2. At a shell prompt, change to the DataChannel bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Enter the following commands at a shell prompt, in this order:
For the Channel Name Server, enter:
./cnsw
For the Log Server, enter:
./logw
For the Channel Manager, enter:
./cmgrw
For the Application Manager, enter:
./amgrw
v To manually start the DataChannel programs on all hosts in your DataChannel
configuration:
1. Start the Channel Manager programs, as described in the previous section.
2. On each DataChannel host, start the amgrw script.
3. On the Channel Manager host, start the application programs as described in
Starting the DataChannel management programs on page 174.
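After a manual start, you can confirm that the management processes are up by running findvisual from the DataChannel bin directory, as described earlier in this appendix:
$ cd /opt/datachannel/bin
$ ./findvisual
On the Channel Manager host, the output should include the CNS, LOG, CMGR, and AMGR processes.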
Adding DataChannels to an existing system
How to add DataChannels to an existing system.
About this task
If you add and configure a new remote DataChannel using the Topology Editor
after the initial deployment of your topology, the system will not pick up these
changes unless you manually stop and restart the relevant processes, as explained
in Chapter 6, Modifying the current deployment, on page 117.
To shut down the DataChannel:
Note: The DataChannel CMGR, CNS, AMGR, and LOG visual processes must
remain running until you have gathered the DataChannel parameters from your
environment.
Procedure
1. On the DataChannel host, log in as the component user, such as pvuser.
2. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default) using the following command: $ cd
/opt/datachannel/bin
3. Shut down the DataChannel FTE.
Prior to shutting down all DataChannel components, some DataChannel work
queues must be emptied.
To shut down the DataChannel FTE and empty the work queues:
$ ./dccmd stop FTE.*
4. Let all DataChannel components continue to process until the .../do directories
for the FTE and CME components contain no files (see the verification sketch
after this procedure).
The .../do directories are located in the subdirectories of $DCHOME (typically,
/opt/datachannel) that contain the DataChannel components - for example,
FTE.1.1, CME.1.1.
5. Shut down all CMEs on the same hour, so that the operator state files remain in
sync with each other. To accomplish this:
a. Identify the leading CME, either by looking at the do and done directories
in each CME and the DAT files they contain, or by using dccmd status all to see
which CME is reporting the latest hour in its processing status.
b. Stop all CMEs that have reached that hour. Then, using the same approach
to find the hour being processed, stop each remaining CME as it reaches
that hour, until all CMEs are stopped. CMEs are stopped using the command:
$ ./dccmd stop CME
6. Use the following dccmd commands to stop the DataChannel applications:
$ ./dccmd stop DLDR
$ ./dccmd stop LDR
$ ./dccmd stop FTE
$ ./dccmd stop DISC
$ ./dccmd stop UBA (if required)
Note: For details on how to restart a DataChannel, see Manually starting the
Channel Manager programs on page 176.
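The following is a minimal sketch of the check described in step 4, assuming the default $DCHOME of /opt/datachannel and the example component directories FTE.1.1 and CME.1.1; adjust the paths to match your own channel and collector numbers:
$ find /opt/datachannel/FTE.1.1/do /opt/datachannel/CME.1.1/do -type f | wc -l
When the count reaches 0, the work queues are empty and you can continue stopping the remaining components.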
DataChannel terminology
Terms used throughout the Tivoli Netcool Performance Manager DataChannel
installation procedure.
v Collector Subchannel: A subdivision of a DataChannel, with each Collector
subchannel associated with a single Collector and CME. The division into
Collector subchannels helps eliminate latency or loss of data caused by delayed
Collectors. If a Collector subchannel disconnects for a period of time, only that
Collector is affected, and all other Collector subchannels continue processing.
The number of Collector subchannels per DataChannel differs according to the
needs of a particular deployment. See the IBM Tivoli Netcool Performance Manager:
Configuration Recommendations Guide for information related to system
configuration requirements. The terms Collector and Collectors are used to refer
to Collector subchannel and Collector subchannels.
v Complex Metric Engine (CME): A DataChannel program that performs
on-the-fly calculations on raw metrics data for a DataChannel. These calculations
include time aggregations for resources, as well as statistical calculations using
raw data, thresholds, properties, and constants as inputs. If CME formulas are
defined for the incoming metrics data, the values are processed by those
formulas. The CME synchronizes metadata at the beginning of each hour, and
only processes the metadata that exists for the hour.
v CORBA (Common Object Request Broker Architecture): An industry-standard
programming architecture that enables pieces of programs, called objects, to
communicate with one another regardless of the programming language that
they were written in or the operating system they are running on.
v Daily Database Loader (DLDR): A DataChannel program that gathers statistical
data processed by a DataChannel's CME and inserts it into the Tivoli Netcool
Performance Manager database. There is one Daily Loader for each
DataChannel; it is part of the channel processor component of the DataChannel.
v DataChannel Remote (DCR): A DataChannel installation configuration in which
the subchannel, CME and FTE components are installed and run on one host,
while the Loader components are installed and run on another host. In this
configuration, the subchannel hosts can continue processing data and detecting
threshold violations, even while disconnected from the Channel Manager server.
v DataChannel Standard: A DataChannel installation configuration in which all
component programs of each subchannel are installed and run on the same
server. DataChannel Standard installation is described in this chapter.
v DataLoad Bulk Collector: A DataLoad program that processes different data
formats and resource files supplied by network devices, network management
platforms, network probes, and other types of resources such as BMC Patrol.
The Bulk Collector translates bulk statistics provided in flat files into Tivoli
Netcool Performance Manager metadata and data. If operating in synchronized
inventory mode, the Bulk Collector passes the resources and properties to the
Tivoli Netcool Performance Manager DataMart Inventory application.
v DataLoad SNMP Collector: A DataLoad program that collects data from
network resources via SNMP polling. The SNMP Collector provides raw data
files to the DataChannel for processing by the CME.
v DataLoad UBA Bulk Collector: A DataLoad program that imports data from
files (referred to as Bulk input files) generated by non-SNMP devices, including
Alcatel 5620 NM, Alcatel SAM, and Cisco CWM. These Bulk input files contain
formats that the Bulk Collector is unable to handle.
v Discovery Server (DISC): A DataChannel program that runs as a daemon to
perform an inventory of SNMP network devices from which to gather statistical
data.
v Hourly Database Loader (LDR): A DataChannel program that serves as the
point of data synchronization and merging, and of late data processing, for each
DataChannel. The Hourly Loader gathers files generated by the CME, computes
group aggregations from the individual resource aggregation records, and loads
the data into the Tivoli Netcool Performance Manager database.
v File Transfer Engine (FTE): A DataChannel program that periodically scans the
Collector output directories, examines the global execution plan to determine
which computation requests require the data, then sorts the incoming data for
storage.
v Next-Hour Policy: Specifies the number of seconds to wait past the hour for files
to arrive before the next hourly directory is created. The default value causes the
DataChannel to wait until 15 minutes after the hour before it starts processing
data for the next hour. To avoid losing data, you need to set a percentage and a
time-out period during the configuration of the CME.
v Plan Builder (PBL): A DataChannel program that creates the metric data routing
and processing plan for the other components in the DataChannel.
Appendix C. Aggregation sets
This appendix describes how to configure and install aggregation sets.
Overview
An aggregation set is a grouping of network management raw data and computed
statistical information stored in the Tivoli Netcool Performance Manager database
for a single timezone.
For example, if your company provides network services to customers in both the
Eastern and Central US timezones, you must configure two aggregation sets.
Because each aggregation set is closely linked with a timezone, aggregation sets are
sometimes referred to as timezones in the Tivoli Netcool Performance Manager
documentation. However, the two concepts are separate.
Note: "Aggregation set" is abbreviated to "Aggset" in some setup program menus.
Configuring aggregation sets
How to configure aggregation sets.
About this task
When you configure an aggregation set, the following information is stored in the
database:
v The timezone ID number associated with this aggregation set.
v The timezone offset from GMT, in seconds.
v Optionally, the dates that Daylight Saving Time (DST) begins and ends in the
associated timezone for each year from the present through 2010. (Or you can
configure an aggregation set to ignore DST transitions.)
You configure an aggregation set either by creating a new set or by modifying an
existing set. The first aggregation set is installed by default when you install Tivoli
Netcool Performance Manager Datamart, so if your network will monitor only one
timezone, you need only to configure the existing set.
To configure an aggregation set:
Procedure
1. Log in as root. (Remain logged in as root for the remainder of this appendix.)
2. At a shell prompt, change to the directory where Tivoli Netcool Performance
Manager DataMart program files are installed. For example:
# cd /opt/datamart
3. Load the DataMart environment variables into your current shell's
environment using the following command:
# . ./dataMart.env
4. Change to the bin directory:
# cd bin
5. Enter the following command:
# ./create_modify_aggset_def
The following menu is displayed:
--------------------------------------------------
Tivoli Netcool Performance Manager Database
Date: <Current Date> <Current Time>
Script name: create_modify_aggset_def
Script revision: <revision_number>
- Aggregation set creation
- Aggregation set modification
- DST configuration for an aggregation set
--------------------------------------------------
Database user................. : [ PV_ADMIN ]
Database user password........ : [ ]
Menu :
1. Input password for PV_ADMIN.
2. Configure an aggset.
0. Exit
Choice : 1
6. Type 1 at the Choice prompt and press Enter to enter the password for
PV_ADMIN. The script prompts twice for the password you set up for
PV_ADMIN.
==> Enter password for PV_ADMIN : PV
==> Re-enter password : PV
Note: The script obtains the DB_USER_ROOT setting from the Tivoli Netcool
Performance Manager database configured in previous chapters, and
constructs the name of the Tivoli Netcool Performance Manager database
administrative login name, PV_ADMIN, from that base. If you set a different
DB_USER_ROOT setting, the "Database user" entry reflects your settings. For
example, if you previously set DB_USER_ROOT=PROV, this script would
generate the administrative login name PROV_ADMIN.
7. To configure the first aggregation set, type 2 at the Choice prompt and press
Enter twice.
The script shows the current settings for the aggregation set with ID 0
(configured by default):
The following Time Zones are defined into the Database :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
0 | 1970/01/01 00:00:00 | 0 | GMT | Aggset created
==> Press <Enter> to continue ....
You can use this aggregation set as-is, or modify it to create a new timezone.
8. Press Enter. A list of predefined timezones and their timezone numbers is
displayed:
Num | OffSet | Time zone Name | Short | Long
| Hours | | Description | Description
___________________________________________________________________________________
[ 1] : -10:00 | America/Adak | HAST | Hawaii-Aleutian Standard Time
[ 2] : -10:00 | Pacific/Rarotonga | CKT | Cook Is. Time
[ 3] : -09:00 | America/Anchorage | AKST | Alaska Standard Time
[ 4] : -09:00 | AST | AKST | Alaska Standard Time
[ 5] : -08:00 | PST | PST | Pacific Standard Time
[ 6] : -07:00 | MST | MST | Mountain Standard Time
[ 7] : -06:00 | America/Mexico_City| CST | Central Standard Time
[ 8] : -06:00 | CST | CST | Central Standard Time
[ 9] : -05:00 | EST | EST | Eastern Standard Time
[10] : -04:00 | America/Santiago | CLT | Chile Time
[11] : -03:00 | America/Sao_Paulo | BRT | Brazil Time
[12] : -01:00 | Atlantic/Azores | AZOT | Azores Time
[13] : 000:00 | Europe/London | GMT | Greenwich Mean Time
[14] : +01:00 | Europe/Paris | CET | Central European Time
[15] : +01:00 | ECT | CET | Central European Time
[16] : +02:00 | Africa/Cairo | EET | Eastern European Time
[17] : +02:00 | Europe/Helsinki | EET | Eastern European Time
[18] : +02:00 | Europe/Bucharest | EET | Eastern European Time
[19] : +03:00 | Asia/Baghdad | AST | Arabia Standard Time
[20] : +03:00 | Europe/Moscow | MSK | Moscow Standard Time
[21] : +04:00 | Asia/Baku | AZT | Azerbaijan Time
[22] : +05:00 | Asia/Yekaterinburg | YEKT | Yekaterinburg Time
[23] : +06:00 | Asia/Novosibirsk | NOVT | Novosibirsk Time
[24] : +07:00 | Asia/Krasnoyarsk | KRAT | Krasnoyarsk Time
[25] : +08:00 | Asia/Irkutsk | IRKT | Irkutsk Time
[26] : +09:00 | Asia/Yakutsk | YAKT | Yakutsk Time
[27] : +10:00 | Australia/Sydney | EST | Eastern Standard Time (New
South Wales)
[28] : +11:00 | Pacific/Noumea | NCT | New Caledonia Time
[29] : +12:00 | Pacific/Auckland | NZST | New Zealand Standard Time
[30] : +12:00 | Asia/Anadyr | ANAT | Anadyr Time
==> Select Time Zone number [1-30 ] (E : Exit) : 9
9. Type the number of the timezone you want to associate with aggregation set
0. For example, type 9 for Eastern Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 0
To associate the specified timezone, EST, with the database's default
aggregation set, type 0.
10. The script asks whether you want your aggregation set to include Daylight
Saving Time (DST) transition dates:
Does your Time Zone manage DST [Y/N] : Y
For most time zones, type Y and press Enter.
11. The script displays the results:
Complete with Success ...
The following Time Zone has been modified:
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
0 | 1970/01/01 00:00:00 | 0 | GMT | Aggset created
0 | 2004/09/29 22:00:00 | -14400 | EST_2004_DST | Aggset created
0 | 2004/10/31 06:00:00 | -18000 | EST_2004 | Aggset created
0 | 2005/04/03 07:00:00 | -14400 | EST_2005_DST | Aggset created
0 | 2005/10/30 06:00:00 | -18000 | EST_2005 | Aggset created
0 | 2006/04/02 07:00:00 | -14400 | EST_2006_DST | Aggset created
0 | 2006/10/29 06:00:00 | -18000 | EST_2006 | Aggset created
0 | 2007/04/01 07:00:00 | -14400 | EST_2007_DST | Aggset created
0 | 2007/10/28 06:00:00 | -18000 | EST_2007 | Aggset created
0 | 2008/04/06 07:00:00 | -14400 | EST_2008_DST | Aggset created
0 | 2008/10/26 06:00:00 | -18000 | EST_2008 | Aggset created
0 | 2009/04/05 07:00:00 | -14400 | EST_2009_DST | Aggset created
0 | 2009/10/25 06:00:00 | -18000 | EST_2009 | Aggset created
0 | 2010/04/04 07:00:00 | -14400 | EST_2010_DST | Aggset created
0 | 2010/10/31 06:00:00 | -18000 | EST_2010 | Aggset created
==> Press <Enter> to continue ....
Note: The dates that appear in your output will most likely be different from
the dates that appear in the example.
12. Press Enter to return to the script's main menu.
13. To configure a second aggregation set, type 2 at the Choice prompt and press
Enter three times.
14. Specify the timezone number of your second timezone. For example, type 8 to
specify Central Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 1
If you enter a set number that does not exist in the database, the script creates
a new aggregation set with that number. Type the next available set number,
1.
15. Respond Y to the timezone management query.
The script shows the results of creating the second aggregation set:
The following Time Zone has been modified :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
1 | 2004/09/29 23:00:00 | -18000 | CST_2004_DST | Aggset created
1 | 2004/10/31 07:00:00 | -21600 | CST_2004 | Aggset created
1 | 2005/04/03 08:00:00 | -18000 | CST_2005_DST | Aggset created
1 | 2005/10/30 07:00:00 | -21600 | CST_2005 | Aggset created
1 | 2006/04/02 08:00:00 | -18000 | CST_2006_DST | Aggset created
1 | 2006/10/29 07:00:00 | -21600 | CST_2006 | Aggset created
1 | 2007/04/01 08:00:00 | -18000 | CST_2007_DST | Aggset created
1 | 2007/10/28 07:00:00 | -21600 | CST_2007 | Aggset created
1 | 2008/04/06 08:00:00 | -18000 | CST_2008_DST | Aggset created
1 | 2008/10/26 07:00:00 | -21600 | CST_2008 | Aggset created
1 | 2009/04/05 08:00:00 | -18000 | CST_2009_DST | Aggset created
1 | 2009/10/25 07:00:00 | -21600 | CST_2009 | Aggset created
1 | 2010/04/04 08:00:00 | -18000 | CST_2010_DST | Aggset created
1 | 2010/10/31 07:00:00 | -21600 | CST_2010 | Aggset created
==> Press <Enter> to continue ....
16. Press Enter to return to the main menu, where you can add more aggregation
sets, or type 0 to exit.
The next step is to install the aggregation sets on the server on which you
installed Tivoli Netcool Performance Manager DataMart.
Installing aggregation sets
How to install aggregation sets.
When you install DataMart, aggregation set 0 is automatically installed. If you
configured only the default aggregation set (in Configuring aggregation sets on
page 181), you can skip this section. However, if you configured timezones for
additional aggregation sets, you must install the non-zero sets using the steps in
this section.
Start the Tivoli Netcool Performance Manager setup program
How to start the Tivoli Netcool Performance Manager Setup Program.
About this task
Start the setup program by following these steps:
Procedure
1. Make sure your EDITOR environment variable is set.
2. Change to the /opt/Proviso directory:
cd /opt/Proviso
3. Start the setup program:
./setup
The setup program's main menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Main Menu]
1. Install
2. Upgrade
3. Uninstall
0. Exit
Choice [1]> 1
4. Type 1 at the Choice prompt and press Enter. The Install menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Install]
1. Tivoli Netcool Performance Manager Database Configuration
0. Previous Menu
Choice [1]> 1
Set aggregation set installation parameters
How to set the aggregation set installation parameters.
Procedure
1. Type 1 at the Choice prompt and press Enter. Setup displays the installation
environment menu:
Tivoli Netcool Performance Manager Database Configuration <version number> - [installation environment]
1. PROVISO_HOME : /opt/Proviso
2. DATABASE_DEF_HOME : -
3. CHANNELS_DEF_HOME : -
4. AGGRSETS_DEF_HOME : -
5. Continue
0. Exit
Choice [5]> 5
Note: Menu options 2, 3, and 4 are used later in the installation process.
2. Make sure the value for PROVISO_HOME is the same one you used when
you installed the database configuration. If it is not, type 1 at the Choice
prompt and correct the directory location.
3. The script displays the component installation menu:
Tivoli Netcool Performance Manager Database Configuration <version number> - [component installation]
1. Database
2. Channel
3. Aggregation set
0. Exit
Choice [1]> 3
4. Type 3 at the Choice prompt and press Enter. The script displays the
installation environment menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> - [installation environment]
1. PROVISO_HOME : /opt/Proviso
2. ORACLE_HOME : /opt/oracle/product/11.2.0/
3. ORACLE_SID : PV
4. DB_USER_ROOT : -
5. Continue
0. Previous Menu
Choice [5]> 4
5. Type 4 at the Choice prompt and press Enter to specify the same value for
DB_USER_ROOT that you specified in previous chapters. This manual's
default value is PV.
Enter value for DB_USER_ROOT [] : PV
6. Make sure that the values for PROVISO_HOME, ORACLE_HOME, and
ORACLE_SID are the same ones you entered in previous chapters. Correct the
values if necessary.
7. Type 5 at the Choice prompt and press Enter. Setup displays the Aggregation
Set installation options menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> - [installation options]
1. List of configured aggregation sets
2. List of installed aggregation sets
3. Number of the aggregation set to install : -
4. Channel where to install aggregation set : (all)
5. Start date of aggregation set : <Current Date>
6. Continue
0. Back to options menu
Choice [6]>
Note: Do not change the value for option 4. Retain the default value, "all."
8. The first time you use any menu option, the script prompts for the password
for PV_ADMIN:
Enter password for PV_ADMIN : PV
9. Use menu option 1 to list the aggregation sets you configured in Configuring
aggregation sets on page 181. The script displays a list similar to the
following:
============= LIST OF CONFIGURED AGGREGATION SETS ============
Num Effect Time Name Time lag
---- --------------------- ------------------------------------------- --------
0 01-01-1970 00:00:00 GMT +0h
04-01-2007 07:00:00 EST_2007_DST -4h
04-02-2006 07:00:00 EST_2006_DST -4h
04-03-2005 07:00:00 EST_2005_DST -4h
04-04-2010 07:00:00 EST_2010_DST -4h
04-05-2009 07:00:00 EST_2009_DST -4h
04-06-2008 07:00:00 EST_2008_DST -4h
09-29-2004 22:00:00 EST_2004_DST -4h
10-25-2009 06:00:00 EST_2009 -5h
10-26-2008 06:00:00 EST_2008 -5h
10-28-2007 06:00:00 EST_2007 -5h
10-29-2006 06:00:00 EST_2006 -5h
10-30-2005 06:00:00 EST_2005 -5h
10-31-2004 06:00:00 EST_2004 -5h
10-31-2010 06:00:00 EST_2010 -5h
1 04-01-2007 08:00:00 CST_2007_DST -5h
04-02-2006 08:00:00 CST_2006_DST -5h
04-03-2005 08:00:00 CST_2005_DST -5h
04-04-2010 08:00:00 CST_2010_DST -5h
04-05-2009 08:00:00 CST_2009_DST -5h
04-06-2008 08:00:00 CST_2008_DST -5h
09-29-2004 23:00:00 CST_2004_DST -5h
10-25-2009 07:00:00 CST_2009 -6h
10-26-2008 07:00:00 CST_2008 -6h
10-28-2007 07:00:00 CST_2007 -6h
10-29-2006 07:00:00 CST_2006 -6h
10-30-2005 07:00:00 CST_2005 -6h
10-31-2004 07:00:00 CST_2004 -6h
10-31-2010 07:00:00 CST_2010 -6h
2 aggregation sets configured
Press enter...
10. Select option 2 to list the aggregation sets already installed. The output is
similar to the following:
============== LIST OF CREATED AGGREGATION SETS ==============
============ X: created ==== #: partially created ============
Channels 0
| 1
AggSets -----------------------------------------------------------------------
| 0 X
Press enter...
Remember that aggregation set 0 is automatically installed when you install
the database channel, and remains installed even if you modified set 0
by assigning a different timezone.
11. Select option 3 to designate the aggregation set to install. In the examples
above, set 0 is already installed, but set 1 is waiting to be installed. Thus, enter
1 at the prompt:
Enter Aggregation Set number between 1 and 998 : 1
12. By default, the date to start collecting data on the designated aggregation set
is today's date. You can instead use menu option 5 to designate a future date
to start collecting data. Set an appropriate future date for your installation.
Enter start date (GMT) using Oracle format yyyy.mm.dd-hh24 : 2009.08.31-00
WARNING! Start date is set in the future.
No loading is allowed until start date (GMT) is reached.
Do you confirm the start date (Y/N) [N] ? y
13. When all menu parameters are set, type 6 at the Choice prompt and press
Enter.
Edit aggregation set parameters file
How to edit the aggregation set parameters file
About this task
Procedure
1. The script prompts that it will start the editor specified in the EDITOR
environment variable and open the aggregation set parameters file. Press Enter.
An editing session opens containing the aggsetreg.udef configuration file, as
shown in this example:
#
# Tivoli Netcool Performance Manager Datamart
# <Current Date>
#
#
# Channel C01: GROUPS DAILY aggregates storage
#
[AGGSETREG/C01/1DGA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLE/HISTORIC]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLESPACE/CURRENT]
CREATION_PATH=/raid_2/oradata
EXTENT_SIZE=64K
SIZE=10M
#
[AGGSETREG/C01/1DGA/TABLESPACE/HISTORIC]
CREATION_PATH=/raid_3/oradata
EXTENT_SIZE=64K
SIZE=10M
#
# Channel C01: RESOURCES DAILY aggregates storage
#
[AGGSETREG/C01/1DRA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
...
2. Do not make changes to this file unless you have explicit instructions from
Professional Services. If Professional Services has provided guidelines for
advanced configuration of your aggregation sets, make only the suggested edits.
Save and close the file.
3. When you close the configuration file, the script checks the file parameters and
starts installing the aggregation set. The installation takes three to ten minutes,
depending on the speed of your server.
A message like the following is displayed when the installation completes:
P R O V I S O A g g r e g a t i o n S e t <version number>
||||||||||||||||||||||||
AggregationSet installed
Tivoli Netcool Performance Manager Aggregation Set 1 on Channel 1 successfully installed !
Press Enter...
Linking DataView groups to timezones
Once you have configured and installed the aggregation sets, you must link
DataView groups to a timezone.
About this task
You can link a defined timezone to a calendar you create in the DataView GUI, or
the CME Permanent calendar (a 24-hour calendar).
When you link a group to a specific timezone and calendar, all subgroups inherit
the same timezone and calendar.
Procedure
v Best practice:
Use a separate calendar for each timezone. If you link multiple timezones to the
same calendar, a change to one timezone calendar setting will affect all the
timezones linked to that calendar.
v To link a group to a timezone:
1. Create a calendar with the DataView GUI, or use the default CME
Permanent calendar.
2. Create a text file (for example, linkGroupTZ.txt) with the following format:
Each line has three fields separated by |_|.
The first field is a DataView group name.
The second field is a timezone name from the Tivoli Netcool Performance
Manager internal timezone list. See Configuring aggregation sets on
page 181 for a list of timezone names.
The third field is the name of the calendar you create, or CME Permanent.
The following example line demonstrates the file format:
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
Enter as many lines as you have timezone entries in your aggregation set
configuration (a two-line sketch appears at the end of this section).
3. At a shell prompt, enter a command similar to the following, which uses the
Resource Manager's CLI to link the group to the timezone:
resmgr -import segp -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To unlink a timezone:
Use the resmgr command. For example:
resmgr -delete linkGroupSE_TZC -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To review timezone to group associations:
Use the resmgr command. For example:
resmgr -export segp -colNames "name tz.name cal.name" -file link.txt
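For example, a linkGroupTZ.txt file that covers the EST and CST aggregation sets configured in this appendix might contain the following two lines; the group names are hypothetical, so use the group paths defined in your own DataView configuration:
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
~Group~USCentral|_|CST_2005_DST|_|CME Permanent|_|
You would then import the file with the resmgr -import command shown in the linking procedure above.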
Appendix D. Deployer CLI options
Deployer CLI options and their descriptions.
To run the deployer from the command line, enter the following command:
# ./deployer.bin [options]
For example, the following command performs a minimal deployment installation:
# ./deployer.bin -Daction=poc
The deployer.bin command accepts the following options:
Option Description
-Daction=mib
Used with -Daction=poc to complete a
minimal deployment installation on an AIX
system. For example:
./deployer.bin -Daction=mib -Daction=poc
-Daction=patch
Performs a patch installation of Tivoli
Netcool Performance Manager. See
Appendix H, Installing an interim fix, on
page 215 for more information.
-Daction=poc
Performs a minimal deployment installation.
See Chapter 5, Installing as a minimal
deployment, on page 111 for more
information.
-Daction=resume
Resumes an interrupted installation at the
current step. Note that this option is possible
only when the /tmp/ProvisoConsumer
directory is available. See Resuming a
partially successful first-time installation on
page 109 for more information.
-Daction=uninstall
Uninstalls all components marked "To Be
Removed" in the current topology file. See
Uninstalling the entire Tivoli Netcool
Performance Manager system on page 161
for more information.
-DCheckUser=true
Specifies whether the deployer checks to see
if it is running as root before performing
install operations. Possible values are true
and false. For most install scenarios,
running the deployer as the operating
system root is required. You can use this
option to override root user checking.
Default is true.
-DOracleClient=oracle_client_home
Enables you to specify the Oracle client
home, so the wizard screen that prompts
you for that information is skipped.
-DOracleServerHost=hostname
Specifies the hostname or IP address where
the Oracle server resides.
-DOracleServerPort=port
Specifies the communication port used by
the Oracle server. Default is 1521.
-DOracleSID=sid
Specifies the Oracle server ID. Default is PV.
-DOracleAdminUser=admin_user
Specifies the administrator username for the
Oracle server. Default is PV_INSTALL.
-DOracleAdminPassword=admin_password
Specifies the administrator password for the
Oracle server. Default is PV.
-DPrimary=true
Indicates that the deployer is running on the
primary server. This option is used by the
Topology Editor to invoke the deployer. Use
this option to force a channel configuration
update in the database.
-DTarget=id
Instructs the deployer to install or uninstall
the component specified using the id
parameter, regardless of the current status of
the component in the topology. Use this
option to force an install or uninstall of a
component in a high-availability (HA)
environment, or when fixing an incomplete
or damaged installation. Table 12 contains a
list of possible values for the id parameter.
-DTopologyFile=topology_file_path
Tells the deployer to use the specified
topology file instead of prompting for the
file.
-DTrace=true
Causes the deployer to log additional
diagnostic information.
-DUsehostname=hostname
Enables you to override the hostname that
the deployer uses to define where it is
running. This option is useful when
hostname aliasing is used and none of the
hostnames listed in the topology.xml file
match the hostname of the machine where
the deployer is running.
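For example, the following command supplies the topology file and Oracle connection details on the command line and enables additional diagnostic logging; the file path and host name are illustrative only:
# ./deployer.bin -DTopologyFile=/opt/topology.xml -DOracleServerHost=dbhost.example.com -DOracleServerPort=1521 -DOracleSID=PV -DTrace=true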
Using the -DTarget option
How to use the -DTarget option to force an install or uninstall of a component or
damaged installation.
You can use the -DTarget option to force an install or uninstall of a component in a
high-availability (HA) environment, or when fixing an incomplete or damaged
installation. The -DTarget option uses the following syntax:
deployer.bin -DTarget=id
where id is a supported target identifier code.
If you are using the -DTarget option to force the uninstall of a component, you
must also specify the -Daction=uninstall option when you run the deployer
application. The following example shows how to force the uninstallation of
DataMart on the local system:
deployer.bin -Daction=uninstall -DTarget=DMR
Table 12 shows the possible values for the id parameter.
Table 12: Target Identifier Codes
Value Description
DB Instructs the deployer to install the database
setup components on the local machine.
DM Instructs the deployer to install the
DataMart component on the local machine.
DV Instructs the deployer to install the
DataView component on the local machine.
DC Instructs the deployer to install the
DataChannel component on the local
machine.
DL Instructs the deployer to install the
DataLoad component on the local machine.
TIP Instructs the deployer to install TIP on the
local machine.
DBR Instructs the deployer to remove the
database setup components from the local
machine. Requires the -Daction=uninstall
option.
DMR Instructs the deployer to remove the
DataMart component from the local
machine. Requires the -Daction=uninstall
option.
DVR Instructs the deployer to remove the
DataView component from the local
machine. Requires the -Daction=uninstall
option.
DCR Instructs the deployer to remove the
DataChannel component from the local
machine. Requires the -Daction=uninstall
option.
DLR Instructs the deployer to remove the
DataLoad component from the local
machine. Requires the -Daction=uninstall
option.
TIPR Instructs the deployer to remove TIP
from the local machine. Requires the
-Daction=uninstall option.
DBU Instructs the deployer to upgrade the
database setup components on the local
machine.
DMU Instructs the deployer to upgrade the
DataMart component on the local machine.
DVU Instructs the deployer to upgrade the
DataView component on the local machine.
DCU Instructs the deployer to upgrade the
DataChannel component on the local
machine.
DLU Instructs the deployer to upgrade the
DataLoad component on the local machine.
When you run the deployer using the -DTarget option, note the following:
v The deployer does not perform component registration in the versioning tables
of the database.
v The deployer does not upload modified topology information to the database.
v The deployer does not allow you to select other nodes besides the local
node in the Node Selection panel.
v In the case of an uninstall, the deployer does not remove the component from
the topology.
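For example, to force an installation of the DataView component on the local machine only, you might run the following command; the topology file path is illustrative:
# ./deployer.bin -DTopologyFile=/opt/topology.xml -DTarget=DV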
Appendix E. Secure file transfer installation
This section describes the OpenSSH installation, configuration, and testing process
in detail for each platform.
Overview
How to install OpenSSH for Secure File Transfer (SFTP) among Tivoli Netcool
Performance Manager components.
This document explains how to install OpenSSH for Secure File Transfer (SFTP)
among Tivoli Netcool Performance Manager components. You must be proficient in
your operating system and have a basic understanding of public/private key
encryption when working with SFTP. For the purposes of this document, an SFTP
"client" is the node that initiates the SFTP connection and login attempt, while the
SFTP "server" is the node that accepts the connection and permits the login
attempt. This distinction is important for generating public/private keys and
authorization, as the SFTP server should have the public key of the SFTP client in
its authorized keys (authorized_keys) file. This process is described in more detail later.
For Tivoli Netcool Performance Manager to use SFTP for the remote execution of
components and file transfer, OpenSSH must be configured for key-based
authentication when connecting from the Tivoli Netcool Performance Manager
account on the client (the host running the Tivoli Netcool Performance Manager
process that needs to use SFTP) to the account on the server. In addition, the host
keys must be established such that the host key confirmation prompt is not
displayed during the connection.
Enabling SFTP
The use of SFTP is supported in Tivoli Netcool Performance Manager.
Tivoli Netcool Performance Manager SFTP can be enabled for a single component,
set of components, or all components as needed. This table shows the Tivoli
Netcool Performance Manager components that support SFTP:
Client                              Server                        Description
Node on which DataChannel resides   All other DataChannel nodes   Installer can use SFTP to install
                                    to be installed               Tivoli Netcool Performance Manager
                                                                  software to remote locations.
Bulk Collector                      DataMart Inventory            Transfer of inventory files.
FTE                                 Bulk Collector                FTE transfers files from BCOL to CME.
FTE                                 DataLoad SNMP collector       Transfer of SNMP data.
CME/LDR                             Remote CME                    Downstream CME and LDR both
                                                                  transfer files from remote CME.
Note: This document is intended only as a guideline to installing OpenSSH. Tivoli
Netcool Performance Manager calls the ssh binary file directly and uses the SFTP
protocol to transfer files, so the essential Tivoli Netcool Performance Manager
requirement is that OpenSSH is used and public key authentication is enabled. The
following procedures are examples of one method of installing and configuring
OpenSSH. The precise method and final configuration for your site should be
decided by your local operating system security administrator.
For detailed information about OpenSSH and its command syntax, visit the
following URL:
http://www.openssh.com/manual.html
Installing OpenSSH
This section describes the steps necessary to install OpenSSH on AIX, Solaris and
Linux.
Note: The following sections refer to the earliest supported version of the required
packages. Refer to the OpenSSH documentation for information about updated
versions.
AIX systems
To install OpenSSH on AIX systems you must follow all steps described in this
section.
Download the required software packages
How to download the required packages.
Procedure
1. In your browser, enter the following URL:
http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html
2. From the AIX Toolbox for Linux Applications page, download the following
files according to the instructions to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
v prngd - Pseudo Random Number Generation Daemon (prngd-0.9.29-
1.aix5.1.ppc.rpm or later).
v zlib - zlib compression and decompression library (zlib-1.2.2-
4.aix5.1.ppc.rpm or later).
3. From the AIX Toolbox for Linux Applications page, click the AIX Toolbox
Cryptographic Content link.
4. Download the following files to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
openssl-0.9.7g-1.aix5.1.ppc.rpm or later
5. In your browser, enter the following URL:
http://sourceforge.net/projects/openssh-aix
6. From the OpenSSH on AIX page, search for and download the following files
according to the instructions to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
openssh-4.1p1_53.tar.Z or later
Install the required packages
How to install the required packages on each Tivoli Netcool Performance Manager
system where SFTP is to be used.
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded by using the following command:
# cd /download/location
3. Run the RPM Packaging Manager for each package, in the specified order,
using the following commands:
# rpm -i zlib
# rpm -i prngd
# rpm -i openssl
4. Uncompress and untar the openssh tar file by entering the following
commands:
$ uncompress openssh-4.1p1_53.tar.Z
$ tar xvf openssh-4.1p1_53.tar
5. Using the System Management Interface Tool (SMIT), install the openssh
package.
6. Exit from SMIT.
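For example, assuming the installp images were extracted into the download directory, one common way to install them is through the SMIT fastpath for installing software:
# smitty install_latest
When prompted for the INPUT device / directory for software, specify the directory that contains the extracted openssh images (for example, /download/location).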
Configure OpenSSH server to start up on system boot
After installing the OpenSSH server and client, you must configure the OpenSSH
server to start up on system boot.
Procedure
To configure the server to start on system boot, modify the /etc/rc.d/rc2.d/Ssshd
init script as follows:
#! /usr/bin/sh
#
# start/stop the secure shell daemon
case "$1" in
start)
    # Start the ssh daemon
    if [ -x /usr/local/sbin/sshd ]; then
        echo "starting SSHD daemon"
        /usr/local/sbin/sshd &
    fi
    ;;
stop)
    # Stop the ssh daemon
    kill -9 `ps -eaf | grep /usr/local/sbin/sshd | grep -v grep | awk '{print $2}' | xargs`
    ;;
*)
    echo "usage: sshd {start|stop}"
    ;;
esac
Solaris systems
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager
on Solaris systems.
The version of SSH installed with the Solaris 10 operating system is not supported.
Note: The following sections refer to the current version of the required packages.
Refer to the OpenSSH documentation for information about updated versions.
To install OpenSSH on Solaris systems, follow all steps described in this section.
Download the required software packages
How to download the required packages.
Procedure
1. In your browser, enter the following URL: http://www.sunfreeware.com
2. From the Freeware for Solaris page, follow the instructions to download the
following files to each Tivoli Netcool Performance Manager system where SFTP
is to be used. Ensure that you download the correct files for your version of
Solaris.
v gcc - Compiler. Ensure that you download the full Solaris package and not
just the source code (gcc-3.2.3-sol9-sparc-local.gz or later).
v openssh - SSH client (openssh-4.5p1-sol-sparc-local.gz or later).
v openssl - SSL executable files and libraries (openssl-0.9.8d-sol9-sparc-local.gz
or later).
v zlib - zlib compression and decompression library (zlib-1.2.3-sol9-sparc-
local.gz or later).
What to do next
Note: To use OpenSSH on Solaris, ensure that libcrypto.so.0.9.8 is available rather
than libcrypto.so.1.0.0.
Install the required software packages
How to install the required software packages.
About this task
To install the required packages, do the following on each Tivoli Netcool
Performance Manager system where SFTP is to be used:
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded using the following command:
# cd /download/location
3. Copy the downloaded software packages to /usr/local/src, or a similar
location, using the following commands:
# cp gcc-version-sparc-local.gz /usr/local/src
# cp zlib-version-sparc-local.gz /usr/local/src
# cp openssl-version-sparc-local.gz /usr/local/src
# cp openssh-version-sparc-local.gz /usr/local/src
4. Change your working directory to /usr/local/src using the following
command:
# cd /usr/local/src
5. Install the gcc compiler:
a. Uncompress gcc using the following command:
gunzip gcc-version-sparc-local.gz
b. Add the gcc package using the following command:
pkgadd -d gcc-version-sparc-local
6. Install the zlib compression library:
a. Uncompress zlib using the following command:
gunzip zlib-version-sparc-local.gz
b. Add the zlib package using the following command:
pkgadd -d zlib-version-sparc-local
7. Install the OpenSSL executable and binary files:
a. Uncompress OpenSSL using the following command:
gunzip openssl-version-sparc-local.gz
b. Add the OpenSSL package using the following command:
pkgadd -d openssl-version-sparc-local
8. Install the OpenSSH client:
a. Uncompress OpenSSH using the following command:
gunzip openssh-version-sparc-local.gz
b. Add the OpenSSH package using the following command:
pkgadd -d openssh-version-sparc-local
c. Create a group and user for sshd using the following commands:
groupadd sshd
useradd -g sshd sshd
9. Optional: Remove Sun SSH from the path and link OpenSSH:
a. Change your working directory to /usr/bin using the following command:
cd /usr/bin
b. Move the Sun SSH files and link the OpenSSH files using the following
commands:
# mv ssh ssh.sun
# mv ssh-add ssh-add.sun
# mv ssh-agent ssh-agent.sun
# mv ssh-keygen ssh-keygen.sun
# mv ssh-keyscan ssh-keyscan.sun
# ln -s /usr/local/bin/ssh ssh
# ln -s /usr/local/bin/ssh-add ssh-add
# ln -s /usr/local/bin/ssh-agent ssh-agent
# ln -s /usr/local/bin/ssh-keygen ssh-keygen
# ln -s /usr/local/bin/ssh-keyscan ssh-keyscan
Configure OpenSSH server to start up on system boot
After installing the OpenSSH server and client, you must configure the OpenSSH
server to start up on system boot.
About this task
To configure the server to start on system boot:
Procedure
1. Create or modify the /etc/init.d/sshd init script as follows:
#! /bin/sh
#
# start/stop the secure shell daemon
case "$1" in
start)
    # Start the ssh daemon
    if [ -x /usr/sbin/sshd ]; then
        echo "starting SSHD daemon"
        /usr/sbin/sshd &
    fi
    ;;
stop)
    # Stop the ssh daemon
    /usr/bin/pkill -x sshd
    ;;
*)
    echo "usage: /etc/init.d/sshd {start|stop}"
    ;;
esac
2. Check that /etc/rc3.d/S89sshd exists (or any sshd startup script exists) and is
a soft link to /etc/init.d/sshd.
If not, create it using the following command:
ln -s /etc/init.d/sshd /etc/rc3.d/S89sshd
Linux systems
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager.
OpenSSH is installed by default on any RHEL system.
Configuring OpenSSH
This section describes how to configure the OpenSSH server and client.
Configuring the OpenSSH server
How to configure the OpenSSH server.
About this task
To configure the OpenSSH Server, follow these steps on each Tivoli Netcool
Performance Manager system where SFTP is to be used:
Procedure
1. Log in to the system as root.
2. Change your working directory to the location of the OpenSSH server
configuration file, sshd_config (/usr/local/etc by default), using the following
command:
# cd /usr/local/etc
3. Using the text editor of your choice, open the sshd_config file. This is an
example of a sshd_config file:
#***************************************************************************
# sshd_config
# This is the sshd server system-wide configuration file. See sshd(8)
# for more information.
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
Port 22
Protocol 2
ListenAddress 0.0.0.0
HostKey /usr/local/etc/ssh_host_dsa_key
SyslogFacility AUTH
LogLevel INFO
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
RhostsAuthentication no
RhostsRSAAuthentication no
HostbasedAuthentication no
PasswordAuthentication yes
ChallengeResponseAuthentication no
Subsystem sftp /usr/local/libexec/sftp-server
#****************************************************************
4. Locate the Protocol parameter. For security purposes, it is recommended that
this parameter is set to protocol version 2, as follows:
Protocol 2
5. Locate the HostKeys for protocol version 2 parameter and ensure that it is
set as follows:
HostKey /usr/local/etc/ssh_host_dsa_key
6. Locate the PubkeyAuthentication parameter and ensure that it is set as follows:
PubkeyAuthentication yes
7. Locate the PasswordAuthentication parameter and ensure that it is set as
follows:
PasswordAuthentication yes
8. Locate the Subsystem parameter and ensure that the SFTP subsystem and path
are correct. Using defaults, the Subsystem parameter appears as follows:
Subsystem sftp /usr/local/libexec/sftp-server
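If you change sshd_config on a system where the OpenSSH server is already running, restart the daemon so that the new settings take effect. A minimal sketch, assuming sshd is installed in /usr/local/sbin as in the examples in this appendix:
# /usr/bin/pkill -x sshd
# /usr/local/sbin/sshd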
Configuring OpenSSH client
How to configure OpenSSH client.
The OpenSSH client requires no configuration if it is used in its default form. The
default location for the OpenSSH client configuration file is /usr/local/etc/ssh_config.
Generating public and private keys
By default, OpenSSH generates public and private keys for the root user. You must
generate public and private keys with the Tivoli Netcool Performance Manager
user for the SFTP functions to work in Tivoli Netcool Performance Manager.
About this task
To generate public and private keys:
Procedure
1. Log in as pvuser on the node that will be the SFTP client. This node is
referred to as SFTPclient in these instructions, but you must replace
SFTPclient with the name of your node.
2. Create an .ssh directory in the home directory of the Tivoli Netcool
Performance Manager user, set its permissions to read/write/execute for the
owner only (700), then change to the directory using the following commands:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cd ~/.ssh
3. Generate a DSA public and private key with no passphrase (DSA encryption
is used as an example). The following example shows a UNIX server called
SFTPclient:
$ /usr/local/bin/ssh-keygen -t dsa -f SFTPclient -P ""
Generating public/private dsa key pair.
Your identification has been saved in SFTPclient.
Your public key has been saved in SFTPclient.pub.
The key fingerprint is: 77:67:2f:34:d4:2c:66:db:9b:1f:9a:36:fe:c7:07:c6 pvuser@SFTPclient
4. The previous command generates two files, SFTPclient (the private key) and
SFTPclient.pub (the public key). Copy the private key to id_dsa in the ~/.ssh
directory by entering the following command:
$ cp -p ~/.ssh/SFTPclient ~/.ssh/id_dsa
id_dsa identifies the node when it contacts other nodes.
5. To permit Tivoli Netcool Performance Manager components on SFTPclient to
communicate, you must append the contents of the SFTPclient.pub key file to
the file authorized_keys in the ~/.ssh directory by using the following
commands:
cd ~/.ssh
cat SFTPclient.pub >> authorized_keys
6. Log on to the other node that will be the SFTP server. This node is referred to
as SFTPserver in these instructions, but you must replace SFTPserver with the
name of your node.
7. Repeat Step 1 through Step 5 on the SFTPserver node, replacing SFTPclient
with SFTPserver.
8. Copy (with FTP, scp, or a similar utility) the public key from SFTPclient to
SFTPserver and append the contents of the key file to the file authorized_keys
in the ~/.ssh directory. If you cut and paste lines, be careful not to introduce
carriage returns.
Use the following FTP session as an example:
SFTPserver:~/.ssh> ftp SFTPclient
Connected to SFTPclient.
220 SFTPclient FTP server (SunOS 5.8) ready.
Name (SFTPclient:pvuser): pvuser
331 Password required for pvuser.
Password:
230 User pvuser logged in.
ftp> bin
200 Type set to I.
ftp> get .ssh/SFTPclient.pub
200 PORT command successful.
150 Binary data connection for .ssh/SFTPclient.pub
226 Binary Transfer complete.
local: .ssh/SFTPclient.pub remote: .ssh/SFTPclient.pub
ftp> quit
221 Goodbye.
SFTPserver:~/.ssh> cat SFTPclient.pub >> authorized_keys
9. Optional: If you want to set up bidirectional SFTP, repeat Step 8, but from the
SFTPserver node to the SFTPclient node.
Note: This step is not needed for Tivoli Netcool Performance Manager.
Use the following FTP session as an example:
SFTPclient:~/.ssh> ftp SFTPserver
Connected to SFTPserver.
220 SFTPserver FTP server (SunOS 5.8) ready.
Name (SFTPserver:pvuser): pvuser
331 Password required for pvuser.
Password:
230 User pvuser logged in.
ftp> bin
200 Type set to I.
ftp> get .ssh/SFTPserver.pub
200 PORT command successful.
150 Binary data connection for .ssh/SFTPserver.pub
226 Binary Transfer complete.
local: .ssh/SFTPserver.pub remote: .ssh/SFTPserver.pub
ftp> quit
221 Goodbye.
SFTPclient:~/.ssh> cat SFTPserver.pub >> authorized_keys
10. When finished, the SFTPclient and SFTPserver should look similar to the
following:
SFTPclient:~/.ssh> ls -al ~/.ssh
total 10
drwx------ 2 pvuser pvuser 512 Nov 25 16:56 .
drwxr-xr-x 28 pvuser pvuser 1024 Nov 25 15:25 ..
-rw------- 1 pvuser pvuser 883 Nov 25 15:21 id_dsa
-rw-r--r-- 1 pvuser pvuser 836 Nov 25 16:33 known_hosts
SFTPserver:~/.ssh> ls -al ~/.ssh
total 10
drwx------ 2 pvuser pvuser 512 Nov 25 16:56 .
drwxr-xr-x 28 pvuser pvuser 1024 Nov 25 15:25 ..
-rw------- 1 pvuser pvuser 883 Nov 25 15:21 id_dsa
-rw-r--r-- 1 pvuser pvuser 836 Nov 25 16:33 known_hosts
The important files are:
v authorized_keys, which contains the public keys of the nodes that are
authorized to connect to this node
v id_dsa, which contains the private key of the node it is on
v known_hosts, which contains the public keys of the node that you want to
connect to
For security, the private key (id_dsa) should be -rw------- (600). Likewise, the
public key files (SFTPclient.pub and SFTPserver.pub), authorized_keys, and known_hosts should be
-rw-r--r-- (644).
The .ssh directory itself should be -rwx------ (700).
Note: The directory that contains the .ssh directory (the user's home directory) might also need to be
writable only by the owner.
11. The first time you connect using SSH or SFTP to the other node, it will ask if
the public key fingerprint is correct, and then save that fingerprint in
known_hosts. Optionally, you can manually populate the client's known_hosts
file with the server's public host key (by default, /usr/local/etc/
ssh_host_dsa_key.pub).
For large-scale deployments, a more efficient and reliable procedure is:
a. From one host, ssh to each SFTP server and accept the fingerprint. This
builds a master known_hosts file with all the necessary hosts.
b. Copy that master file to every other SFTP client (a command sketch is shown after the note that follows).
Note: If the known_hosts file has not been populated and secure file
transfer (SFTP) is attempted through Tivoli Netcool Performance Manager,
SFTP fails with vague errors.
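For example, the following commands are a minimal sketch of steps 11a and 11b. They assume that the ssh-keyscan utility is available from the same OpenSSH installation; the server and client host names are placeholders for your own nodes:
$ /usr/local/bin/ssh-keyscan -t dsa SFTPserver1 SFTPserver2 >> ~/.ssh/known_hosts
$ scp -p ~/.ssh/known_hosts pvuser@SFTPclient2:~/.ssh/known_hosts
The first command collects the DSA host keys of the listed SFTP servers into the master known_hosts file; the second command copies that file to another SFTP client node.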
Testing OpenSSH and SFTP
How to test OpenSSH and SFTP.
About this task
For the following tests, the commands normally work without asking for a
password. If you are prompted for a password, public/private key encryption is
not working.
Ensure that you specify the full path to the ssh and sshd binary files. Otherwise,
you might use another previously installed SSH client or server.
To test OpenSSH and SFTP:
Procedure
1. On both nodes, kill any existing sshd processes and start the sshd process from
the packages you installed, by entering the following commands:
pkill -9 sshd
/usr/local/sbin/sshd &
The path can be different depending on the installation.
2. From SFTPclient, run the following command:
/usr/local/bin/ssh SFTPserver
3. From SFTPclient, run the following command:
/usr/local/bin/sftp SFTPserver
4. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/ssh SFTPclient
5. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/sftp SFTPclient
6. If all tests allow you to log in without specifying a password, follow the Tivoli
Netcool Performance Manager instructions on how to enable SFTP in each
Tivoli Netcool Performance Manager component. Make sure to specify the full
path to SSH in the Tivoli Netcool Performance Manager configuration files. In
addition, make sure the user that Tivoli Netcool Performance Manager is run as
is the same as the user that you used to generate keys.
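For example, the following commands are a quick non-interactive check of steps 2 and 3 (a sketch; adjust the paths to match your installation):
$ /usr/local/bin/ssh SFTPserver "echo ok"
$ echo ls | /usr/local/bin/sftp SFTPserver
Both commands should complete without prompting for a password; the first prints ok from the remote node, and the second lists the remote home directory over SFTP.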
Troubleshooting
How to troubleshoot OpenSSH and its public keys.
About this task
If you find that OpenSSH is not working properly with public keys:
Procedure
1. Check the ~/.ssh/known_hosts file on the node acting as the SSH client and make
sure that the server's host name, IP address, and public key are present and correct.
2. Check the ~/.ssh/authorized_keys file on the node acting as the SSH server and
make sure that the client public key is present and correct. Ensure that the
permissions are -rw-r--r--.
3. Check the ~/.ssh/id_dsa file on the node acting as the SSH client and make sure
that the client's private key is present and correct. Ensure that the permissions
are -rw-------.
4. Check the ~/.ssh directory on both nodes to ensure that the permissions on
the directories are -rwx------.
5. Check for syntax errors (common ones are misspelling authorized_keys and
known_hosts without the "s" at the end). In addition, if you copied and pasted
keys into the known_hosts or authorized_keys files, you might have introduced
carriage returns in the middle of a single, very long line.
6. Check the permissions on the home directory (~) to ensure that it is writable
only by the owner (example commands for correcting permissions are shown after this procedure).
7. If the permissions are correct, kill the sshd process and restart in debug mode
as follows:
pkill -9 sshd
/usr/local/sbin/sshd -d
8. Test SSH again in verbose mode on the other node by entering the following
command:
/usr/local/bin/ssh -v SFTPserver
9. Read the debugging information about both client and server and
troubleshoot from there.
10. Check the log file /var/adm/messages for additional troubleshooting
information.
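If any of the permission checks in steps 2 through 6 fail, the following commands are a minimal sketch of how to correct them; run them as the Tivoli Netcool Performance Manager user on the affected node:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/id_dsa
$ chmod 644 ~/.ssh/authorized_keys ~/.ssh/known_hosts
$ chmod go-w ~
These settings match the permissions listed earlier in this appendix: the .ssh directory and private key are accessible by the owner only, the public key and host files are world-readable but owner-writable, and the home directory is not writable by group or others.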
Netcool/Proviso SFTP errors
Errors that you might encounter.
In the Tivoli Netcool Performance Manager log files, you might see the following
errors:
v [DC10120] FTPERR error: incompatible version, result: sftp status:
SSH2_FX_FAILURE:: incompatible version, log:
This error indicates that the SSH server (sshd) is SSH2 rather than OpenSSH.
OpenSSH is required for Tivoli Netcool Performance Manager to function
correctly.
v [DC10120] FTPERR error: bad version msg, result: sftp status:
SSH2_FX_NO_CONNECTION:: connection not established - check ssh
configuration, log:
This error indicates that the SSH configuration is incorrect or the wrong version
of the SSH server (sshd) is running. OpenSSH is required for Tivoli Netcool
Performance Manager to function correctly.
Appendix F. LDAP integration
Detailed information on how to configure LDAP (Lightweight Directory Access
Protocol) as a default authentication/authorization mechanism for Tivoli Netcool
Performance Manager.
Supported LDAP servers
A list of LDAP servers supported by Tivoli Netcool Performance Manager.
v Domino 6.5.4, 7.0
v IBM Tivoli Directory Server 6.3
v IBM z/OS Security Server 1.6, 1.7
v IBM z/OS.e Security Server 1.6, 1.7
v Microsoft Active Directory 2000, 2003
v Novell eDirectory 8.7.3, 8.8
LDAP configuration
The configuration of LDAP as a default authentication and authorization
mechanism for Tivoli Netcool Performance Manager is achieved using the
Topology Editor.
Enable LDAP configuration
The LDAP configuration option becomes available after adding a Tivoli Integrated
Portal to your topology.
About this task
Note: The process of adding a Tivoli Integrated Portal to the topology using the
Topology Editor is described in Add a Tivoli Integrated Portal on page 90.
To enable LDAP configuration:
Procedure
1. In the Logical View of the Topology Editor, select the Tivoli Integrated Portal that you
added.
2. Open the Advanced Properties tab.
3. Select the check box next to the
IAGLOBAL_USER_REGISTRY_LDAP_SELECTED property. This enables LDAP.
4. In the Advanced Properties tab, enter the LDAP connection details by populating
the following fields (example values are shown after this list):
v WAS_USER_NAME: This is the name you have registered as the Tivoli
Integrated Portal user. For example, "tipadmin".
v IAGLOBAL_LDAP_BIND_DN: The user name specified must have read and
write permissions in the LDAPv3 directory. Typically, this is an LDAP administrator
user name. For example, "cn=Directory Manager".
v IALOCAL_LDAP_BIND_PASSWORD: This is the password for the Bind
Distinguished Name specified.
v IAGLOBAL_LDAP_NAME: This is the LDAP server host name. If the LDAP
server is behind a firewall, make sure that this host has been
authenticated.
v IAGLOBAL_LDAP_PORT: For example, "1389".
v IAGLOBAL_LDAP_REPOSITORY_ID: This is a string used to identify the
LDAP repository, which can be set to the string of your choice.
v IAGLOBAL_LDAP_BASE_ENTRY: The distinguished name of a base entry in
LDAP.
For example, for IBM the base entry is o=IBM, c=US.
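For example, a hypothetical set of values (the host name, password, repository ID, and base entry are placeholders; substitute the details of your own directory):
WAS_USER_NAME: tipadmin
IAGLOBAL_LDAP_BIND_DN: cn=Directory Manager
IALOCAL_LDAP_BIND_PASSWORD: <bind_password>
IAGLOBAL_LDAP_NAME: ldapserver.example.com
IAGLOBAL_LDAP_PORT: 1389
IAGLOBAL_LDAP_REPOSITORY_ID: TNPMLDAP
IAGLOBAL_LDAP_BASE_ENTRY: o=IBM, c=US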
Verifying the DataView installation
Verify that the correct users are within the correct groups.
About this task
When the DataView installation is complete, it should have created two users and
two groups in LDAP:
Users:
v tnpm
v tnpmScheduler
Groups:
v tnpmUsers
v tnpmAdministrators
Procedure
Verify from the UI that the users tnpm and tnpmScheduler are members of the
tnpmAdministrators group.
Assigning Tivoli Netcool Performance Manager roles to LDAP
users
To successfully authenticate your LDAP user, you need to assign the user to one of
the appropriate roles.
Procedure
To successfully authenticate your LDAP user, you need to assign the user to one of the
following roles:
v tnpmUser
v tnpmAdministrator
This can be done by the tipadmin user, by navigating to Users and Groups > User
Roles and assigning the correct roles.
Alternatively, tipcli.sh can be used to assign roles to the user:
<tip_location>/profiles/TIPProfile/bin/tipcli.sh MapRolesToUser --username
<tip_admin_user> --password <tip_admin_password> --userID <userUniqueId> --rolesList <roleName>
where <userUniqueId> is the concatenation of username and realm in which user
information is stored.
For example:
<tip_location>/profiles/TIPProfile/bin/tipcli.sh MapRolesToUser --username
<tip_admin_user> --password <tip_admin_password> --userID
uid=<user_name>,dc=<server>,dc=<server>,dc=<company>,dc=com --rolesList
tnpmUser
Roles specific to an application, such as tnpmUser for Tivoli Netcool Performance
Manager, are not stored in LDAP. Roles are stored in a flat XML file in the TIP
directory. For example, if you assign a Tivoli Netcool Performance Manager role to
an LDAP user on tip_instance1, you must also assign the same role to the user on
tip_instance2. Otherwise, the user cannot authenticate on tip_instance2.
Alternatively, you can assign the tnpmUser role to the tnpmUsers group on tip_instance1.
If the user is a member of this group, the user can authenticate on tip_instance2.
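For example, a sketch of repeating the role assignment for a user on tip_instance2, using the MapRolesToUser syntax shown above (the user, server, and company values are placeholders for your own LDAP entries):
<tip_location>/profiles/TIPProfile/bin/tipcli.sh MapRolesToUser --username tipadmin
--password <tip_admin_password> --userID uid=jsmith,dc=ldapserver,dc=example,dc=com --rolesList tnpmUser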
Appendix G. Using silent mode
This appendix describes how to use silent mode to run the deployer or to install
the Topology Editor.
Sample properties files
The location and contents of the sample properties files.
The Silent subdirectory under the directory that contains the deployer.bin file
(for example, /opt/IBM/proviso/deployer/proviso/data/Silent) contains the
following sample properties files:
v Fresh.properties runs the deployer in standard mode.
v POC.properties runs the deployer in minimal deployment mode.
v topologyEditor.properties runs the Topology Editor installation in silent mode.
The Deployer
How to run the deployer in silent mode.
Running the Deployer in silent mode
How to use the Fresh.properties file to run the deployer.
About this task
Use the Fresh.properties file to run the deployer in standard mode, or the
POC.properties file to run the deployer in minimal deployment mode.
For example, to perform a silent fresh installation:
Procedure
1. Log in as root.
2. Log in to the machine on which you want to run the silent installation.
3. In a text editor, open the Fresh.properties file and make the following edits:
a. Set and verify that the Oracle client path is correct.
b. Set the DownloadTopology flag to True (1) or False (0).
c. If you set the DownloadTopology flag to False (0), set the TopologyFilePath to the
location of your topology.xml file.
d. If you are running the deployer application on the same system where the
Topology Editor is installed, set the Primary flag to true.
e. Set and verify that the Database Access Information is correct.
f. Set and verify the PACKAGE_PATH variable for the relevant system:
On Solaris systems:
<DIST_DIR>/proviso/SOLARIS
On AIX systems:
<DIST_DIR>/proviso/AIX
On Linux systems:
<DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in Downloading
the Tivoli Netcool Performance Manager distribution to disk on page 48.
Your edited file will look similar to the following:
#Oracle client JDBC driver path
#------------------------------
OracleClient=/opt/oracle/product/11.2.0-client32/jdbc/lib
#Download Topology from Proviso database
# 1 is true
# 0 is false
#-------------
DownloadTopology=0
#Primary
# Specify if the configuration has to be updated
# Specify true if running the deployer on the same
# system where the Topology Editor is installed.
# true or false
#-------------
Primary=false
#Topology file
# If DownloadTopology=1 this parameter is ignored
#-------------
TopologyFilePath=/tmp/ProvisoConsumer/Topology.xml
#Database access information
#---------------------------
OracleServerHost=lab238053
OracleServerPort=1521
OracleSID=PV
OracleAdminUser=PV_INSTALL
OracleAdminPassword=PV
#Check Prerequisites Flag(true/false)
#Use true only for first time install
#-------------------------------------
CHECK_PREREQ=true
# Tivoli Netcool Performance Manager installation packages path
#---------------------------
PACKAGE_PATH=/cdrom/SOLARIS
#Silver Stream installation packages path
#-------------------------------------
SS_BUNDLE=/cdrom/exteNd40k
g. Write and quit the file.
4. Change to the /opt/IBM/proviso/deployer directory.
5. Run the following command:
./deployer.bin -i silent -f propertyFileWithPath
For example:
./deployer.bin -i silent -f /opt/IBM/proviso/deployer/proviso/data/Silent/Fresh.properties
Confirming the status of a silent install
How to verify a successful installation from the log status messages.
To verify a successful installation, you must analyze the /tmp/ProvisoConsumer/log.txt file.
For a successful silent install, /tmp/ProvisoConsumer/log.txt will have the
following as the second last line:
ConsumerSilent null CMW3019I Silent installation completed
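For example, a quick way to confirm this from the command line (a sketch; the path is the default log location named above):
$ grep CMW3019I /tmp/ProvisoConsumer/log.txt
If the command prints the CMW3019I line, the silent installation completed successfully.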
If the installation has not worked and one of the steps fails, you will see one of the
following errors:
v Product images not found during silent installation
v An installation step has failed during silent installation
v Silent Installation Suspended because a reboot is needed
v Engine main loop internal error
Restrictions
Deployer restrictions.
Note the following restrictions:
v The silent deployer does not support remote installations. You must manually
invoke the script on each machine.
v Silent resume is not supported. If you need to resume a partial silent installation,
use the -Daction=resume option to complete the installation in graphical
mode (the steps table), as shown in the sketch after this list. The step that originally failed might have been in the
middle of a step sequence that cannot be re-created by a subsequent -i silent
invocation.
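For example, a minimal sketch of resuming in graphical mode, assuming the deployer was installed in the default directory used elsewhere in this guide:
# cd /opt/IBM/proviso/deployer
# ./deployer.bin -Daction=resume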
The Topology Editor
You can also install the Topology Editor in silent mode.
About this task
The Topology Editor is installed with the installer named installer.bin, located
in:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/topologyEditor/Disk1/InstData/VM
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/topologyEditor/Disk1/InstData/VM
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM
<DIST_DIR> is the directory on the hard drive where you copied the contents of the
Tivoli Netcool Performance Manager distribution in Downloading the Tivoli
Netcool Performance Manager distribution to disk on page 48.
To install the Topology Editor in silent mode:
Procedure
1. Log in as root to the server on which you want to run the silent installation.
2. Change to the directory that contains the deployer.bin file (for example,
/opt/IBM/proviso/deployer), then change to the /proviso/data/Silent
subdirectory.
3. Using a text editor, open the topologyEditor.properties file and make the
following edits:
a. Set and verify that the Oracle client path is correct.
b. Set the DownloadTopology flag to True (1) or False (0).
c. If you set the DownloadTopology flag to False (0), set the TopologyFilePath to the
location of your topology.xml file.
d. Set and verify that the Database Access Information is correct.
e. Set and verify the PACKAGE_PATH variable.
f. Write and quit the file.
4. Run the following command:
./installer.bin -i silent -f ..../silent/topologyEditor.properties
Appendix H. Installing an interim fix
This appendix describes how to install an interim fix (or patch) release of Tivoli
Netcool Performance Manager.
Overview
Interim fix installation overview.
Unlike major, minor, and maintenance releases, which are planned, patch releases
(interim fixes and fix packs) are unscheduled and are delivered under the
following circumstances:
v A customer is experiencing a "blocking" problem and cannot wait for a
scheduled release for the fix.
v The customer's support contract specifies a timeframe for delivering a fix for a
blocking problem and that timeframe does not correspond with a scheduled
release.
v Development determines that a patch is necessary.
Note: Patches are designed to be incorporated into the next scheduled release,
assuming there is adequate time to integrate the code.
Installation rules
Rules that apply to the installation of patches.
Note the following installation rules for patch installations:
v Apply the fix to the Database before any other components.
v Fixes for the Database and DataMart must be installed on that host.
v Fixes for the DataChannel, DataLoad, DataMart and DataView can be installed
remotely from the local host in a distributed system.
v Fix packs are installed on general availability (GA) products.
v Sequentially numbered fix packs can be installed on any fix pack with a lower
number.
v Interim fixes must be installed on the absolute fix pack.
The patch installer verifies that your installation conforms to these rules.
Behavior and restrictions
Behaviour and restrictions that apply to the installation of patches.
If remote installation of a component is not possible, the deployer grays out
any remote component host on the node selection page.
The maintenance deployer must run locally on each DataMart host to apply a
patch.
Before you begin
What you must do before you begin a patch installation.
A patch release updates the file system for the component that the patch is
intended for and updates the versioning information in the database.
To verify that the versioning was updated correctly for the components in the
database, you can run several queries both before and after the installation and
compare the results. For detailed information, see the Tivoli Netcool Performance
Manager Technical Note: Tools for Version Reporting document.
Installing a patch
How to install a patch.
About this task
To install a patch:
Procedure
1. You must have received or downloaded the maintenance package from IBM
Support. The maintenance package contains the Maintenance Descriptor File,
an XML file that describes the contents of the fix pack. Follow the instructions
in the README for the fix pack release to obtain the maintenance package
and unzip the files.
Note: For each tar.gz file, you must unzip it and then untar it. For
example:
gunzip filename.tar.gz
tar -xvf filename.tar
2. Log in as root.
3. Set and export your DISPLAY environment variable (see Setting up a remote
X Window display on page 36).
4. Start the patch deployer using one of the following methods:
From the launchpad:
a. Click the Start Tivoli Netcool Performance Manager Maintenance
Deployer option in the list of tasks.
b. Click the Start Tivoli Netcool Performance Manager Maintenance
Deployer link.
From the command line:
v Run the following command:
# ./deployer.bin -Daction=patch
5. The deployer displays a welcome page. Click Next to continue.
6. Accept the default location of the base installation directory of the Oracle
JDBC driver (/opt/oracle/product/11.2.0-client32), or click Choose to
navigate to another directory. Click Next to continue.
7. On the patch folder page, click Choose to select the patch you want to install.
8. Navigate to the directory that contains the files for the fix pack, and click into
the appropriate directory. Click Select to select that directory, then click Next
to continue.
9. A pop-up window asks whether you want to download the topology file.
Click Yes.
10. Verify that all of the fields for the database connection are filled in with the
correct values:
v Database hostname - Enter the name of the database host.
v Port - Specifies the port number used for communication with the database.
The default value is 1521.
v Database user - Specifies the username used to access the database. The
default value is PV_INSTALL.
v Database Password - Enter the password for the database user account (for
example, PV).
v SID - Specifies the SID for the database. The default value is PV.
Click Next.
11. When the topology has been downloaded from the database, click Next.
12. The node selection window shows the target systems and how the files will be
transferred. The table has one row for each machine where at least one Tivoli
Netcool Performance Manager component will be installed. Verify the settings,
then click Next to continue.
13. The deployer displays summary information about the installation. Review the
information, then click Next.
The deployer displays the table of installation steps.
14. Run through each installation step just as you would for a normal installation.
15. When all the steps have completed successfully, click Done to close the
wizard.
Appendix I. Error codes and log files
This appendix lists the Tivoli Netcool Performance Manager error messages and
log files.
See Appendix J, Troubleshooting, on page 239 for information about
troubleshooting problems with the Tivoli Netcool Performance Manager
installation.
Error codes
The following sections describe the error messages generated by the Deployer, the
Topology Editor, and InstallAnywhere.
Deployer messages
The Deployer messages.
Table 13 lists the error messages returned by the Tivoli Netcool Performance
Manager deployer.
Table 13: Deployer Messages
Error Code Description User Action
DataView Messages
GYMCI5000E A system command failed. A
standard UNIX system
command failed. These
commands are used for
standard system operations,
such as creating directories,
changing file permissions,
and removing files.
See the installation log for
more details.
GYMCI5002E The operating system is not
at the prerequisite patch
level. Some required
operating system patches are
not installed.
See the installation log for
details. Install the required
patches, then try the
installation again.
GYMCI5003E The Oracle configuration file,
tnsnames.ora, does not
include an entry for
SilverMaster.
Add an entry for
SilverMaster to the
tnsnames.ora file, then try
the installation again.
GYMCI5004E The Oracle configuration file,
tnsnames.ora, was not found.
The tnsnames.ora file must
be created and stored in the
$TNS_ADMIN directory.
Ensure that the file exists in
the correct location.
GYMCI5005E Unable to connect to the
Oracle database. It is possible
that a specified connection
parameter is incorrect, or the
Oracle server might not be
available.
See the installation log for
more details. Ensure that the
connection parameters you
are using are correct and that
the Oracle server is up and
running.
GYMCI5006E An error occurred while
running the
DVOptimizerToRule.sql
script to initialize the
database. It is possible that
the Oracle database and
listener are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5007E An error occurred while
trying to remove entries for a
resource from a database
table. It is possible that the
Oracle database and listener
are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5008E An error occurred while
trying to remove version
information from a database
table. It is possible that the
Oracle database and listener
are not running.
See the installation log for
more details. Check that the
database and listener are
running.
GYMCI5009E An error occurred while
reading the configuration
file. The name of a parameter
or the format of the file is
incorrect.
Contact IBM Software
Support.
GYMCI5010E The file system does not
have sufficient free space to
complete the installation.
See the installation log for
more details. Ensure that you
have sufficient space on the
file system before retrying
the installation.
GYMCI5011E The DataView license file is
missing. The license file was
not found, but this file
should not be required. The
installation log will contain
more details of the error.
Contact IBM Software
Support.
GYMCI5012E A configuration file or
directory is missing.
See the installation log for
more details.
GYMCI5013E An error occurred while
creating a configuration file.
The file could not be created.
The installer failed to create
one of the required
configuration files.
See the installation log for
more details.
GYMCI5014E An error occurred while
updating a configuration file.
The file could not be
modified. The installer failed
to make a required
modification to one of the
configuration files.
See the installation log for
more details.
DataMart Messages
GYMCI5101E The DataMart installation
failed.
See the DataMart installer
logs for details.
Database Configuration Messages
GYMCI5201E The database installation
failed. See the installation log
for details.
See the root_install_dir/
database/install/log/
Oracle_SID/install.log file.
GYMCI5202E The database uninstallation
script failed because of a
syntax error. This script must
be run as oracle. For
example: ./uninstall_db
/var/tmp/PvInstall/
install.cfg.silent
Check the syntax and run
the script again.
GYMCI5204E The database could not be
removed because some
Oracle environment variables
are not correctly set. Some or
all of the Oracle environment
variables are not set (for
example, ORACLE_HOME,
ORACLE_SID, or
ORACLE_BASE).
Check that all the required
Oracle variables are set and
try again.
GYMCI5205E An error occurred when
trying to start the Oracle
database.
See the Oracle alert file for
possible startup errors.
Resolve any problems
reported in the log and try
again.
GYMCI5206E An error occurred when
trying to shut down the
Oracle database.
See the Oracle alert file for
possible shutdown errors.
Resolve any problems
reported in the log and try
again.
GYMCI5207E An error occurred while
querying the database to
determine the data files that
are owned by the database.
See the Oracle alert file for
details of errors. You might
need to manually delete
Oracle data files using
operating system commands.
DataChannel Messages
GYMCI5301E The database channel
installation failed. See the
installation log for details.
See the file
root_install_dir/channel/
install/log/Oracle_SID/
install.log.
GYMCI5401E An error occurred while
running a script.
See the message produced
with the error code for more
details.
GYMCI5402E Unable to find an expected
file.
See the message produced
with the error code for more
details.
GYMCI5403E The data in one of the files is
not valid.
See the message produced
with the error code for more
details.
GYMCI5404E Unable to find an expected
file or expected data.
See the message produced
with the error code for more
details.
GYMCI5405E Scripts cannot function
correctly because the
LD_ASSUME_KERNEL
variable is set.
Unset the variable and try
again.
GYMCI5406E An action parameter is
missing.
See the message produced
with the error code for more
details.
GYMCI5407E An error occurred while
processing the tar command.
See the message produced
with the error code for more
details.
GYMCI5408E The product version you are
trying to install seems to be
for a different operating
system.
See the message produced
with the error code for more
details.
GYMCI5409E Unable to locate installed
package information for the
operating system.
See the message produced
with the error code for more
details.
GYMCI5410E A file has an unexpected
owner, group, or
permissions.
See the message produced
with the error code for more
details.
GYMCI5411E A problem was found by the
PvCheck module when
checking the environment.
See the message produced
with the error code for more
details.
GYMCI5412E The installation module
failed.
See the messages in standard
error for more details.
GYMCI5413E The patch installation failed. See the messages in standard
error for more details.
GYMCI5414E The remove action failed. See the messages in standard
error for more details.
GYMCI5415E An unrecoverable error
occurred while running the
script.
See the message produced
with the error code for more
details.
DataLoad Messages
GYMCI5501E An error occurred when
running the script.
See the message produced
with the error code for more
details.
GYMCI5502E Unable to find an expected
file.
See the message produced
with the error code for more
details.
GYMCI5503E The data in one of the files is
not valid.
See the message produced
with the error code for more
details.
GYMCI5504E Unable to find an expected
file or expected data.
See the message produced
with the error code for more
details.
GYMCI5505E Scripts cannot function
correctly because the
LD_ASSUME_KERNEL
variable is set.
Unset the variable and try
again.
GYMCI5506E An action parameter is
missing.
See the message produced
with the error code for more
details.
GYMCI5507E An error occurred while
processing the tar command.
See the message produced
with the error code for more
details.
GYMCI5508E The product version you are
trying to install seems to be
for a different operating
system.
See the message produced
with the error code for more
details.
GYMCI5509E Unable to locate installed
package information for the
operating system.
See the message produced
with the error code for more
details.
GYMCI5510E A file has an unexpected
owner, group or permissions.
See the message produced
with the error code for more
details.
GYMCI5511E A problem was found by the
PvCheck module when
checking the environment.
See the message produced
with the error code for more
details.
GYMCI5512E The installation module
failed.
See the message produced
with the error code for more
details.
GYMCI5513E The patch installation failed. See the messages in standard
error for more details.
GYMCI5514E The remove action failed. See the messages in standard
error for more details.
GYMCI5515E An unrecoverable error
occurred while running the
script.
See the message produced
with the error code for more
details.
Prerequisite Checkers: Operating System
GYMCI6001E The syntax of the check_os
script is not correct. The
specified component does
not exist. The syntax is:
check_os
PROVISO_COMPONENT
where
PROVISO_COMPONENT is
DL, DC, DM, DB, or DV.
Correct the syntax and try
again.
GYMCI6002E This version of IBM Tivoli
Netcool Performance
Manager is not supported on
the host operating system.
See the check_os.ini file for a
list of supported operating
systems.
GYMCI6003E The specified component
does not exist or is not
supported on this operating
system.
Ensure that you have
specified the correct
component. If you have, the
operating system must be
upgraded before the
component can be installed.
GYMCI6004E The operating system is not
at the prerequisite patch
level. Some required
operating system patches are
not installed.
Check the product
documentation for a list of
required patches. Apply any
missing patches and try
again.
GYMCI6005E The host operating system is
not supported for this
installation.
Perform the installation on a
supported operating system.
GYMCI6006E In the /etc/security/limits
file, some values are missing
or incorrect. Values must not
be lower than specified in
the check_os.ini file.
Check the values in the
check_os.ini and edit the
default stanza in the
/etc/security/limits file so
that valid values are
specified for all required
limits.
Prerequisite Checkers: Database
GYMCI6101E The syntax of the check_db
script is not correct. The
syntax is: check_db [client |
server] [new | upgrade]
[ORACLE_SID or
tnsnames.ora entry]
Correct the syntax and try
again.
GYMCI6102E The host operating system is
not supported for this
installation.
Perform the installation on a
supported operating system.
GYMCI6103E This version of the IBM
Tivoli Netcool
Performance Manager
database is not supported on
the current version of the
host operating system.
See the check_os.ini file for a
list of supported operating
system versions.
GYMCI6104E Some required Oracle
variables are missing or
undefined.
Check the Oracle user's
environment files (for
example, .profile and
.bash_profile).
GYMCI6105E An Oracle binary is missing
or not valid.
Ensure that Oracle is
correctly installed.
GYMCI6106E The instance of Oracle
installed on the host is not at
a supported version.
Check the list of supported
Oracle versions in the
check_db.ini file.
GYMCI6107E Unable to contact the Oracle
server using the tnsping
utility with the specified
ORACLE_SID.
Check that your Oracle
listener is running on the
database server. Start the
listener if it is not running.
GYMCI6108E An Oracle instance is
running on the host where
you have requested a new
server installation.
Check whether you have
selected the correct host for a
new Oracle server
installation. If the selected
host is correct, remove the
existing Oracle instance first.
GYMCI6109E The number of bits (32 or 64)
for the Oracle binary does
not match the values defined
in the check_db.ini file.
Check the list of supported
Oracle versions in the
check_db.ini file.
GYMCI6110E The installation method
passed to the script is not
valid. Valid installation
methods are New and
Upgrade.
Pass the New or Upgrade
option to the script.
GYMCI6111E The installation type passed
to the script is not valid.
Valid installation methods
are Client and Server.
Pass the Client or Server
option to the script.
GYMCI6112E The script was run with
options set for a new server
installation, but an Oracle
instance configuration file
(init.ora) already exists for
the specified SID. The
presence of the init.ora file
indicates the presence of an
Oracle instance.
Check that a new server
installation is the correct
action for this SID. If it is,
remove the existing Oracle
instance configuration files.
GYMCI6113E A symbolic link was found
in the Oracle home path. The
Oracle home path cannot
contain any symbolic links.
Remove any symbolic links.
Specify the Oracle home path
using only real directories.
GYMCI6114W Cannot contact the Oracle
Listener. The tnsping utility
was run to check the Oracle
Listener status, but the
Listener could not be
contacted.
Check that the Oracle
Listener is running. Start it if
necessary.
GYMCI6115E The Solaris semaphore and
shared memory check failed.
The sysdef command was
used to check the values for
semaphores and shared
memory. The command did
not report the minimum
value for a particular
semaphore or shared
memory.
Check that the required
/etc/system parameters are
set up for Oracle. Check that
the values of these
parameters meet the
minimum values listed in the
check_db.ini file.
GYMCI6116E Could not find the bos.adt.lib
package in the COMMITTED
state. The package might not
be installed. The package is
either not installed or not in
a COMMITTED state.
Ensure that the bos.adt.lib
package is installed and
committed and then try
again.
GYMCI6117E Could not log in to the
database. The verify base
option was used. The option
attempts to log into the
database to ensure it is
running. However, the script
could not log in to the
database.
Check that the database and
Oracle Listener are up and
running. If not, start them.
GYMCI6118E The checkextc script failed.
The verify base option was
used. The option runs the
checkextc script to ensure
external procedure calls can
be performed.
Check that the Tivoli Netcool
Performance Manager
database was created
properly.
GYMCI6119E The tnsnames.ora file is
missing. A tnsnames.ora file
should exist in the
ORACLE_HOME/network/
admin directory.
Check that the tnsnames.ora
file exists in the
ORACLE_HOME/network/
admin directory. If it does
not, create it.
Minimal Deployment: Post-Installation Messages
GYMCI7500E An internal processing error
occurred in the script.
Check the logs and the
output from the script. Look
for incorrect configuration or
improper invocation.
GYMCI7501E The required configuration
or messages files for the
poc-post-install script are not
in the same directory as the
script. These files should be
unpacked by the installer
together with the script.
Check for errors that
occurred during the
installation steps.
GYMCI7502E An environment file is
missing or is in the wrong
location.
Check the poc-post-install
configuration file. The
missing environment file and
expected path will be
identified in the log file.
GYMCI7503E The SNMP DataLoad did not
start. The SNMP DataLoad
process (pvmd) failed to
start.
Check the SNMP DataLoad
log for errors during startup.
GYMCI7504E The network inventory
failed. New devices cannot
be discovered unless the
inventory runs successfully.
Check the inventory log for
errors. Ensure the DISC
server and SNMP DataLoad
(Collector) processes are
running.
GYMCI7505E The Report Grouping
operation failed. This action
does not depend on any
external application
processes. The database must
be running, and correct
DataMart grouping rule
definitions are required.
Check the inventory log file
for more details of the
Report Grouping failure.
GYMCI7506E The DataChannel command
line failed. It is possible that
the CNS, CMGR, and AMGR
processes are not running.
Ensure that the required
processes are running. Check
the proviso.log for details of
the failure.
GYMCI7507E The Report User was not
created. The Web user will
not be able to view reports.
The DataMart resmgr utility
is used to add this
configuration to the
database. It is possible that
the database is not running.
Ensure that the database is
running, and check for error
logs in the DataMart logs
directory.
GYMCI7508E Failed to associate a Report
User to a group. The report
user is associated with a
group to allow the user to
view reports. The DataMart
resmgr utility is used to add
this configuration to the
database. It is possible that
the database is not running.
Ensure that the database is
running, and check for error
logs in the DataMart logs
directory. Ensure that the
specified report group exists.
GYMCI7509E A report user could not be
deleted from the database.
Check for error and trace
logs in the DataMart logs
directory.
GYMCI7510E Failed to create a Web User.
The user will not be able to
authenticate with the
Web/application server.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running.
GYMCI7511E The Web group could not be
created, and the Web user
might not be properly
configured to view reports.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running.
GYMCI7512E Failed to associate the Web
User with a group. The Web
user might not be properly
configured to view reports
unless successfully associated
with a group.
Check the Web/application
server log file for errors.
Ensure that the
Web/application server is
running. This step relies on
the database component
only.
GYMCI7513E Failed to delete Web Users.
Web user authentication was
not removed.
Check the Web/application
server logs.
GYMCI7514E The Channel Naming Service
failed to start.
Cross-application
communication cannot
function.
Check for walkback or error
files in the DataChannel log
or state directory.
GYMCI7515E The central LOG server
failed to start. Logging for
DataChannel will be
unavailable.
Check for walkback or error
files in the DataChannel log
or state directory.
GYMCI7516E The Channel Manager failed
to start. DataChannel
applications cannot be
started or stopped.
Application status will be
unavailable.
Check the proviso.log file for
errors. Check for walkback
or error files in the
DataChannel log or state
directory.
GYMCI7517E The Application Manager
failed to start. DataChannel
applications cannot be
started or stopped.
Application status will be
unavailable.
Check the proviso.log file for
errors. Check for walkback
or error files in the
DataChannel log or state
directory.
GYMCI7518E Failed to create the DV user
group. The DV user will
remain in the Orphans
group.
Check the poc-post-install
log in /var/tmp for more
details on the error
condition.
GYMCI7519E Failed to associate the DV
user to the DV group. The
DV user will remain in the
Orphans group.
Check the poc-post-install
log in /var/tmp for more
details on the error
condition.
GYMCI7520E The Web Application server
is not running or took too
long to start up.
Start up the Web Application
server as documented.
GYMCI7597E The MIB-II Technology Pack
jar file was not found in the
specified directory.
Add the MIB2 Technology
Pack jar to the directory.
Remove other jar files and
try again.
GYMCI7598E Too many jar files are present
in the specified directory.
Only two jar files can be
present in the directory: the
ProvisoPackInstaller.jar and
the MIB-II Technology Pack
jar.
Remove the other jar files
and try again.
GYMCI7599E The Technology Pack
installer failed. Check the
Technology Pack installer
logs for details.
Installer Action Messages and IA Flow Messages
GYMCI9998E Unable to find a message for
the key. The message was
not retrieved from the
message catalog.
See the installation log for
more details.
GYMCI9999E An unknown error occurred
for the component name
with the error code code. The
message could not be
retrieved from the catalog.
See the installation log for
more details.
GYMCI9001E An error occurred during
installation. An exception has
been generated during an
installation step.
See the installation log for
more details.
GYMCI9002E An unrecoverable error
occurred when running the
command command.
See the installation log for
more details.
GYMCI9003E An unrecoverable error
occurred while running a
command.
See the installation log for
more details.
GYMCI9004E An error occurred while
connecting to the database.
See the installation log for
more details.
GYMCI9005E An error occurred while
performing a database
operation.
See the installation log for
more details.
GYMCI9006E Remote File Transfer has
been disabled.
To continue, change the step
property to Allow Remote
Execution and run the step
again, or manually transfer
the directory to the host.
When the transfer is
completed, change the step
status to Success and
continue the installation.
GYMCI9007E An error occurred while
remotely connecting to
target. There are connection
problems with the host.
See the installation log for
more details.
GYMCI9008E An error occurred while
connecting to target. There
are connection problems with
the host.
See the installation log for
more details.
GYMCI9009E An error occurred while
copying install_dir.
See the installation log for
more details.
GYMCI9010E Remote Command Execution
has been disabled.
To continue: 1. Change the
step property to Set Allow
Remote Execution. 2. Run the
step again. Or, manually
transfer the directory to the
host. When the transfer is
completed, change the step
status to Success and
continue the installation.
GYMCI9011E An error occurred during file
creation.
See the installation log for
more details.
GYMCI9012E An error occurred while
loading the discovered
topology file.
See the installation log for
more details.
GYMCI9013E An error occurred while
loading the topology file.
See the installation log for
more details.
GYMCI9014E The installation engine
encountered an
unrecoverable error.
See the installation log for
more details.
GYMCI9015E An error occurred while
saving the topology file.
See the installation log for
more details.
GYMCI9016E The installer cannot proceed
with the installation because
there is insufficient disk
space on the local host.
See the installation log for
more details.
GYMCI9017E The installer cannot
download the topology from
the specified database. Verify
that the Tivoli Netcool
Performance Manager
database exists and that it
has been started. If it does
not exist, launch the installer,
providing a topology file.
Ensure that the correct host
name, port, and SID were
specified and that the
database has been started.
GYMCI9018E The installer cannot connect
to the specified database
indicated because of
incorrect credentials.
Ensure that you provide the
correct user name and
password.
GYMCI9019W The installer could not
establish a connection to the
specified database. Check
that the Tivoli Netcool
Performance Manager
database can be contacted.
Click Next to proceed
without checking the current
environment status.
Check that the Tivoli Netcool
Performance Manager
database can be contacted.
GYMCI9020E The database connection
parameters do not match
those in the topology file.
Ensure that you provide the
correct parameters.
GYMCI9021E An error occurred while
loading the Oracle client jar.
See the installation log for
more details.
GYMCI9022E The configuration file name
was not found. The step
cannot run.
See the installation log for
more details.
GYMCI9023W There appear to be no
differences between the
desired topology state and
the current state of the Tivoli
Netcool Performance
Manager installation. The
installer shows this message
when it determines there is
no work that it can do.
Normally, this occurs when
the Tivoli Netcool
Performance Manager system
is already at the desired
state. However, it can also
occur when there are
component dependencies
that are not satisfied.
See the installation log for
more details.
GYMCI9024E The operating system
specified for this node in the
topology file is not correct.
Correct the topology file.
GYMCI9025E The path is not valid or you
do not have permissions to
write to it.
Correct the parameter and
try again.
GYMCI9026E The path is not a valid
Oracle path. The sqlplus
command could not be
found.
Correct the parameter and
try again.
GYMCI9027E The specified port is not
valid.
Correct the parameter and
try again.
GYMCI9028E At least one parameter is
null.
Specify values for the
required parameters.
GYMCI9029E The specified host name
contains unsupported
characters.
Ensure that host names
include only supported
characters.
GYMCI9030E The specified host cannot be
contacted.
Ensure that the host name is
correct and check that the
host is available.
GYMCI9031E The path does not exist on the
local system.
Correct the path and try
again.
GYMCI9032E An error occurred while
saving the topology. It has
not been uploaded to the
Tivoli Netcool Performance
Manager database. This error
occurs when there is a
database connection error or
when the Tivoli Netcool
Performance Manager
database has not yet been
created
See the log file for further
details.
GYMCI9033E One of the following
parameters must be set to 1:
param1 param2
Check the log file for further
details. Redefine the
parameters and try again.
GYMCI9034E An error occurred while
creating mount point
directories.
See the log file for further
details.
GYMCI9035E An error occurred while
changing the ownership or
the group of mount point
directories.
See the log file for further
details.
GYMCI9036E The machine hostname was
not found in the Tivoli
Netcool Performance
Manager model
(topology.xml file). The
machine where the installer
is running is not part of the
Tivoli Netcool Performance
Manager topology.
If a host name alias is used,
make the machine host name
match the host name in the
model. Alternatively, use the
option
-DUsehostname=hostname to
override the machine host
name used by the installer.
GYMCI9037E The Deployer version you
are using is not compatible
with the component that you
are trying to install.
Use a Deployer at a version
that supports the
deployment of the
component you are trying to
install.
GYMCI9038E The XML file cannot be read
or cannot be parsed.
Ensure the file is not
corrupted. See the log file for
more details.
GYMCI9039E The deployment cannot
proceed, because an error
occurred while the deployment
plan was being generated.
See the log file for more
details. Check that there is
sufficient disk space and that
the Deployer images are not
corrupted.
GYMCI9040E The Deployer cannot manage
the indicated component on
the specified node.
See the log file for more
details about the condition
that was detected.
GYMCI9041E The user ID you specified is
not defined on the target
system.
Check that you have
specified the correct user ID.
GYMCI9042E You specified a host that is
running on an unsupported
platform.
Check that you have
specified the correct host
name.
GYMCI9043E The value you specified is
not supported.
Specify one of the supported
values.
Topology Editor messages
The Topology Editor messages.
Table 14 lists the error messages returned by the Topology Editor.
Table 14: Topology Editor Messages
Error Code Description User Action
GYM0001E A connection error was
caused by an SQL failure
when running the report.
Details are logged in the
trace file. There is a
connection problem with the
database. Possible problems
include: The database is not
running. The database
password provided when the
engine was created is wrong
or has been changed.
Check the error log and trace
files for the possible cause of
the problem. Check that the
database is up and that the
connection credentials are
correct. Correct the problem
and try the operation again.
GYMCI0000E Folder name containing
technology pack metadata
files was not found. The
specified folder does not
exist.
Ensure that you have the
correct location for the
technology pack metadata
files and try the operation
again.
GYMCI0001E An internal error, associated
with the XML parser
configuration, occurred.
Contact IBM Software
Support.
GYMCI0002I No item has been found that
satisfies the filtering criteria.
Ensure that you enter the
correct filtering criteria and
try the operation again.
GYMCI0003E An error occurred when
reading XML file name. The
XML file might be corrupt or
in an incorrect format.
Ensure that you have
selected the correct file and
try the operation again.
GYMCI0004E The input value must be an
integer.
Correct the input value and
try the operation again.
GYMCI0005E An unexpected element was
found when reading the
XML file.
Ensure that you have
selected the correct file and
try the operation again.
GYMCI0006E A value must be specified. Correct the input value and
retry the operation.
GYMCI0007E The value must represent a
log filter matching regular
expression expression.
Correct the input value and
try the operation again.
GYMCI0008E Metadata file name was not
found. The specified file does
not exist.
Ensure that you have the
correct file name and path
and retry the operation.
GYMCI0009E Metadata file name is
corrupted.
Contact IBM Software
Support.
GYMCI0010E Metadata file name was
already imported. Do you
want to replace it?
Click Yes to replace the file
or No to cancel the
operation.
GYMCI0011E Object name was not found
in the repository. The
specified object does not
exist.
Ensure that you have the
correct object name and try
the operation again.
GYMCI0012E The specified value must
identify an existing directory.
The specified directory does
not exist.
Ensure that you have the
correct directory name and
try the operation again.
GYMCI0013E Removing object from host in
Physical View.
No user action required.
GYMCI0014E File name does not exist. Ensure that you have the
correct file name and try the
operation again.
GYMCI0015E An unexpected error
occurred writing file name.
See the trace file for details.
Ensure that there is sufficient
space to write the file in the
file system where the
Topology Editor is running.
GYMCI0016E The user or password that
you specified is wrong.
Correct the login credentials
and try the operation again.
GYMCI0017E The value specified for at
least one of the following
fields is not valid: host name,
port, or SID.
Correct the input value or
values and try the operation
again.
GYMCI0018E The file name is corrupted. Select a valid XML file.
GYMCI0019E An unexpected error
occurred when retrieving
data from the database. See
the trace file for details.
Ensure that the database is
up and running and that you
can connect to it.
GYMCI0020E An unexpected error
occurred when parsing file
name. See the trace file for
details.
Select a valid XML file.
GYMCI0021E An unexpected error
occurred. See the trace file
for details.
Contact IBM Software
Support.
GYMCI0022E The input value must be a
boolean.
Correct the input value and
try the operation again.
GYMCI0023E The specified value must be
one of the following
operating systems: AIX,
SOLARIS, or Linux.
Correct the input value and
try the operation again.
GYMCI0024E The value must be a software
version number in the format
n.n.n or n.n.n.n. For example
7.1.2, or 7.1.2.1.
Correct the input value and
try the operation again.
GYMCI0025E The value must be an integer
in the range minValue to
maxValue, inclusive.
Correct the input value and
try the operation again.
GYMCI0026E The value must be a
comma-separated list of
strings.
Correct the input value and
try the operation again.
GYMCI0027E The value must be a file size
expressed in kilobytes. For
example, 1024K.
Correct the input value and
try the operation again.
GYMCI0028E The value must be a file size
expressed in megabytes. For
example, 512M.
Correct the input value and
try the operation again.
GYMCI0029E The value must be a file size
expressed in kilobytes or
megabytes. For example
1024K or 512M.
Correct the input value and
try the operation again.
GYMCI0030E The value must be an FTP or
SFTP connection string. For
example,
ftp://
username:password@hostname/directory.
Correct the input value and
try the operation again.
GYMCI0031E The value must be a
comma-separated list of
directories. For example,
/opt, /var/tmp, /home.
Correct the input value and
try the operation again.
GYMCI0032E Value cannot be a
fully-qualified domain name,
IP address, or a name
containing a hyphen or a period.
Supply the unqualified host
name without the domain.
Do not use the IP address or
a name that contains
hyphens.
GYMCI0033E Metadata file name contains
a technology pack with a
wrong structure.
Contact IBM Software
Support.
GYMCI0034E The value should be in the
format YYYY-MM-DD and
cannot be a date earlier than
1970-01-01 or later than the
current date.
Specify a date that is within
the range and in the correct
format.
GYMCI0035E The meta-data file contains
a technology pack with the
wrong structure.
Obtain a valid meta-data file
and try again.
GYMCI0036E The value should be in the
format YYYY-MM-DD and
cannot be a date earlier than
1970-01-01 or later than the
current date.
Correct the input value and
retry the operation.
GYMCI0037E
Description: The operation failed because the specified file does not exist.
User action: Ensure that the file name and path you specified are correct and retry the operation.
GYMCI0038E
Description: The operation failed because of an error while validating the host name mappings file. See the trace file for more details.
GYMCI0039E
Description: The host name retrieved by the upgrade process is not valid. Fully qualified host names, IP addresses, and names containing hyphens or periods are not supported.
User action: Correct the entry for the specified host name in the topology definition.
GYMCI0040E
Description: The upgrade process retrieved two entries for the specified host name. The fully qualified host name is not supported.
User action: Remove the entry for the fully qualified host name.
GYMCI0040W
Description: The upgrade process did not retrieve a valid value for the specified property. A default value has been used.
User action: Check that the default assigned is appropriate and change it if necessary.
GYMCI0041E
Description: No component is present on the specified host.
User action: Specify a host where at least one component is present.
GYMCI0042E
Description: The operation failed because the input value is not the correct data type. The correct data type is Long.
User action: Correct the input value and retry the operation.
GYMCI0043E
Description: The operation failed because the input value is not valid.
User action: Correct the input value and retry the operation.
GYMCI0044W
Description: The upgrade process did not retrieve a valid value for the specified property. A default value has been used.
User action: Check that the default assigned is appropriate and change it if necessary.
InstallAnywhere messages
A description of the InstallAnywhere messages.
Table 15 lists the InstallAnywhere error messages. These messages could be returned by either the deployer or the Topology Editor. See the InstallAnywhere documentation for more information about these error codes and how to resolve them.
Table 15: InstallAnywhere Messages
Error Code Description
0      Success: The installation completed successfully without any warnings or errors.
1      The installation completed successfully, but one or more of the actions from the installation sequence caused a warning or a non-fatal error.
8      The silent installation failed because of step errors.
-1     One or more of the actions from the installation sequence caused a fatal error.
1000   The installation was cancelled by the user.
1001   The installation includes an invalid command-line option.
2000   Unhandled error.
2001   The installation failed the authorization check, which may indicate an expired version.
2002   The installation failed a rules check. A rule placed on the installer itself failed.
2003   An unresolved dependency in silent mode caused the installer to exit.
2004   The installation failed because not enough disk space was detected during the execution of the Install action.
2005   The installation failed while trying to install on a Windows 64-bit system, but the installation did not include support for Windows 64-bit systems.
2006   The installation failed because it was launched in a UI mode that is not supported by this installer.
3000   Unhandled error specific to a launcher.
3001   The installation failed due to an error specific to the lax.main.class property.
3002   The installation failed due to an error specific to the lax.main.method property.
3003   The installation was unable to access the method specified in the lax.main.method property.
3004   The installation failed due to an exception error caused by the lax.main.method property.
3005   The installation failed because no value was assigned to the lax.application.name property.
3006   The installation was unable to access the value assigned to the lax.nl.java.launcher.main.class property.
3007   The installation failed due to an error specific to the lax.nl.java.launcher.main.class property.
3008   The installation failed due to an error specific to the lax.nl.java.launcher.main.method property.
3009   The installation was unable to access the method specified in the lax.nl.launcher.java.main.method property.
4000   A Java executable could not be found at the directory specified by the java.home system property.
4001   An incorrect path to the installer jar caused the relauncher to launch incorrectly.
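If you run a silent installation, you can capture the InstallAnywhere exit status as soon as the installer returns and compare it against the codes in Table 15. The following sketch assumes a POSIX shell; the installer command shown is only a placeholder for the actual command you use to start the silent installation:
$ ./run_silent_install.sh        # placeholder for your silent installation command
$ rc=$?
$ echo "Installer exit status: $rc"
An exit status of 0 indicates success, 1 indicates success with warnings or non-fatal errors, and any other value maps to one of the error conditions listed in Table 15.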
Log files
A description of the files that are used to log errors for the Tivoli Netcool Performance Manager components and their underlying framework.
Several files are used to log errors for the Tivoli Netcool Performance Manager components and their underlying framework. These log files include:
v COI log files
v Deployer log file on page 238
v Eclipse log file on page 238
v Trace log file on page 238
See the Technology Pack Installation Guide for information about the technology pack
log files.
COI log files
COI log files description.
The Composite Offering Installer (COI) adds a layer called the COI Plan to the
Tivoli Netcool Performance Manager installation. The COI Plan consists of a set of
COI Machine Plans, one for each machine where Tivoli Netcool Performance
Manager components should be installed. A COI Machine Plan is a collection of
COI Steps to be run on the corresponding machine.
The COI Plan is created in the directory /tmp/ProvisoConsumer/Plan.
The COI provides the following log files:
Table 16: COI Log Files

Log file: MachinePlan_machinename_[INSTALL_mmdd_hh.mm].log
For example: MachinePlan_delphi_[INSTALL_0610_10.37].log
Description: Contains detailed information about the tasks executed by the COI steps on the specified machine.
Location: /tmp/ProvisoConsumer/Plan/MachinePlan_machinename/logs/

Log file: DeploymentPlan.log
Description: Contains high-level information about the COI Plan execution.
Location: /tmp/ProvisoConsumer/Plan/logs/INSTALL_mmdd_hh.mm
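For example, assuming the default plan directory and a machine named delphi (the host name is illustrative only), you could list and inspect the machine plan logs as follows:
$ ls /tmp/ProvisoConsumer/Plan/MachinePlan_delphi/logs/
$ tail -100 "/tmp/ProvisoConsumer/Plan/MachinePlan_delphi/logs/MachinePlan_delphi_[INSTALL_0610_10.37].log"
The quotation marks are needed because the log file names contain square brackets.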
Deployer log file
Deployer log file description.
Installation errors and messages are written to the file /tmp/ProvisoConsumer/log.txt. The log file supports a single level:
v High (FINEST) - This is the default and only setting.
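For example, to follow the deployer log while an installation is in progress (a sketch that assumes the default log location):
$ tail -f /tmp/ProvisoConsumer/log.txt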
Eclipse log file
Eclipse log file description.
The Eclipse framework logs severe problems in a file under the Topology Editor installation directory (for example, /opt/IBM/Proviso/topologyEditor/workspace/.metadata). By default, the Eclipse log file is named .log. You should not need to look there unless there is a problem with the underlying Eclipse framework.
Trace log file
Trace Log File description.
About this task
The trace log file is located in the Topology Editor installation directory (for
example, /opt/IBM/Proviso/topologyEditor). By default, this file is named
topologyEditorTrace and the default trace level is FINE.
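For example, to watch the trace log while you reproduce a problem (assuming the default installation directory and file name):
$ tail -f /opt/IBM/Proviso/topologyEditor/topologyEditorTrace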
To change the trace level:
Procedure
1. In the Topology Editor, select Window > Preferences. The Log Preferences
window opens.
2. Select the new trace level. If desired, change the name of the log file.
3. Click Apply to apply your changes. To revert back to the default values, click
Restore Defaults.
4. Click OK to close the window.
Appendix J. Troubleshooting
This section lists problems that might occur during an installation and how to
resolve them. The problems are grouped by the interface or component exhibiting
the problem.
Deployment problems
A list of deployment problem descriptions and solutions.
Problem: The deployer window does not automatically become the focus window after launching it from the Topology Editor.
Cause: In some cases (for example, when you export the display on a VNC session on Linux systems), the deployer window does not get the focus.
User action: Click on the deployer window or move other windows to make the deployer window the focus window.

Problem: When the user tries to launch the Firefox browser, an error is displayed regarding the Cairo 1.4.10 package.
Cause: Cairo 1.4.10 may not support the requested image format.
User action: Start the VNC server using the following command:
/usr/bin/X11/vncserver -depth 24 -geometry 1280x1024

Problem: In a fresh installation, the database installation step fails.
Cause: You did not perform the necessary preparatory steps.
User action: This step verifies that the Oracle Listener is working properly before actually creating the Tivoli Netcool Performance Manager database. If the step fails:
1. Complete the necessary manual steps (see Configure the Oracle listener).
2. Change the status of the step to Ready.
3. Resume the installation.
The step should complete successfully.

Problem: An installation step hangs.
Cause: There are many possible causes.
User action:
1. Make sure the installation step is really in a hung state. For example, the Tivoli Netcool Performance Manager database-related steps might take more than an hour to complete; other steps complete in far less time.
2. Determine which child process is causing the hang. First, find the installer process by entering the following command:
ps -ef
The installer process has an entry similar to this one:
root 12899 7290 10 13:43:31 pts/7 0:10 /tmp/install.dir.12899/Solaris/resource/jre/jre/bin/java -Djava.compiler=NONE -
3. Find the process that has that process number (for example, 12899) as its parent. Continue until you find the last process in the chain (a sample process-tree walk is shown at the end of this list of problems).
4. Kill the last process using the following command:
kill -9 pid
where pid is the process ID of the last process. At this point, the status of the hung step changes to Error.
5. If you can determine the cause of the hang, fix the problem and resume the installation. Otherwise, collect the log files and contact IBM for support.

Problem: The deployer hangs when displaying the Preview page. (This step normally takes only a few seconds.)
Cause: The NFS file system is not working properly.
User action: Run the df -k command and make sure that all NFS mounted file systems are working properly. When the problem has been corrected, restart the deployer.
Problem: There is a problem with remote command execution.
Cause: The deployer uses either RSH or OpenSSH to perform remote command execution. You must configure OpenSSH to make this connection possible.
User action: After configuring OpenSSH, run the test program provided in deployer_root/proviso/data/Utils/testremote.sh to test your configuration, where deployer_root is the root directory for the deployer. For example:
/export/home/pvuser/443/SOLARIS/Install/SOL9/deployer

Problem: Installation messages report success, but might include messages similar to the following:
[Fatal Error] :4:1: An invalid XML character (Unicode: 0x1b) was found in the element content of the document.
Solution: This is screen noise and can safely be ignored.

Problem: When you click the Done button to complete a fresh installation, the deployer displays database access error messages.
Cause: You stopped a fresh installation before installing and configuring the Tivoli Netcool Performance Manager database.
User action: If the Tivoli Netcool Performance Manager database has not been installed, complete the installation using the -Daction=resume option (see Resuming a partially successful first-time installation on page 109). If the database has been installed, there is another problem. Contact IBM Software Support.

Problem: Data does not appear in real-time reports, and right-clicking on a real-time report does not display the option menu. This problem can occur with a silent installation or a minimal deployment installation on a Solaris system.
Cause: When it starts, the channel manager (CMGR) places information in the database that is needed for real-time reports to start correctly. During installation, a cron job is created that starts CMGR. A silent installation might run fast enough that the cron job does not run before DataView is started. In this case, CMGR does not add the required information to the database, and real-time reports do not start up correctly.
User action:
1. Make sure that the CMGR process is running (see Management programs and watchdog scripts on page 172 and Starting the DataChannel management programs on page 174).
2. Restart DataView.

Problem: During the Tivoli Integrated Portal installation, the Deployment Engine failed to find a pre-installed Tivoli Integrated Portal.
User action:
1. Log in as root.
2. Enter the following commands to restart the Deployment Engine:
# cd /usr/ibm/common/acsi/bin
# ./acsisrv.sh -start
3. Check that the Deployment Engine is running with the following command:
# ./listIU.sh
This command lists all IUs in the system.
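The following sketch illustrates the process-tree walk described in the "An installation step hangs" entry above. It assumes a standard UNIX ps command and uses 12899 only as an illustrative installer process ID:
$ ps -ef | grep install.dir | grep -v grep     # find the installer process and note its process ID
$ ps -ef | awk '$3 == 12899'                   # list the children of process 12899; repeat for each child PID
$ kill -9 <last_child_pid>                     # kill the last process found in the chain
Replace <last_child_pid> with the process ID of the last process in the chain.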
Saving installation configuration files
Installation configuration files can be used to troubleshoot a Tivoli Netcool
Performance Manager installation.
When you install Tivoli Netcool Performance Manager components, the deployer
creates a set of temporary configuration files that are used during the installation
process. These files specify the components that are to be installed on a target
system and the deployment information required to install them. You can use these
configuration files to troubleshoot a Tivoli Netcool Performance Manager
installation.
The temporary configuration files are normally removed from the target system
when the deployer completes the installation process. You can prevent the
deployer from removing the files by editing the installer XML file associated with a
component. This file is named n_comp_name.xml, where n is an index number
generated by the deployer and comp is a string that identifies the component.
Possible values for the comp string are DataMart, DataView, DBChannel, and
DBSetup. Installer XML files are located by default in the /tmp/ProvisoConsumer/
Plan/MachinePlan_hostname directory, where hostname is the host name of the
target system.
To prevent the deployer from removing the temporary files associated with a
component install, open the corresponding install XML file and modify the
following element so that the value of the arg2 property is false:
<equals arg1="${remove.temporary.files}" arg2="true"/>
The following excerpt from the file shows the resulting XML element:
<equals arg1="${remove.temporary.files}" arg2="false"/>
When you contact IBM support about a Tivoli Netcool Performance Manager
installation problem, the support staff might ask you for these files. You can create
a tar file or zip archive that contains the entire contents of the
/tmp/ProvisoConsumer directory and send it to the IBM support staff for assistance.
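For example, assuming the default /tmp/ProvisoConsumer location, the following commands package the directory for IBM Software Support (the archive name is illustrative):
$ cd /tmp
$ tar -cf proviso_install_config.tar ProvisoConsumer
$ gzip proviso_install_config.tar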
Tivoli Netcool Performance Manager component problems
A list of Tivoli Netcool Performance Manager component problems and solutions.
Problem: A Tivoli Netcool Performance Manager component is still listed as Configured in the Topology Editor even though it has been installed.
Cause: The component is installed, but has not been started.
User action: Start the component. Its status changes to Installed.

Problem: A new channel component was deployed, or the channel configuration was changed, but the change has no effect.
Cause: The channel components need to be bounced.
User action: Bounce the components, as described in Appendix B, DataChannel architecture, on page 171.
Topology Editor problems
A list of Topology Editor problems and solutions.
Problem: The Topology Editor won't open and the application window shows a Java exception (core dump).
Cause: You forgot to set and export your DISPLAY variable.
User action:
1. Enter the following commands:
$ DISPLAY=Host_IP_Address:0.0
$ export DISPLAY
2. Restart the Topology Editor.

Problem: The splash screen for the Topology Editor is displayed, but the Topology Editor doesn't start and no explanatory message is displayed.
Cause: You did not log in as root.
User action:
1. Log in as root.
2. Restart the Topology Editor.

Problem: The Topology Editor reports the following error when you attempt to add a UBA collector:
GYMCI0504E An internal error occurred while processing file pack
where pack is the name of the application jar file. In addition, the Topology Editor log file contains the following error:
YYYY-MM-DD HH:MM:SS SEVERE FileHelper Manifest of file pack is corrupted. It was not possible to determine if its install type is bundle or standalone.
Cause: You tried to add a UBA collector for an SNMP technology pack.
User action: Make sure that you read the Tivoli Netcool Performance Manager technology packs release notes before you install and configure a pack and before you add any collectors. The release notes contain information on whether a specific technology pack is a UBA or SNMP pack. UBA and SNMP packs require you to perform different configuration steps. Before you install and configure a technology pack, you must also read the information in "Before You Begin" on page 129 and follow the steps listed in that section.
Telnet problems
A list of Telnet problems and solutions.
Problem: The Telnet client fails at the initial connection and reports the following error:
Not enough room in buffer for display location option reply
This can occur when you start Tivoli Netcool Performance Manager components from a Solaris 10 system where the user interface is displayed remotely on a Windows desktop using an X Window tool like Exceed.
Cause: The length of the DISPLAY variable passed via the Telnet client is too long (for example, XYZ-DA03430B70B-009034197130.example.com:0.0).
User action: Set the value of the DISPLAY variable using the IP address of the local system, or the host name only without the domain name. Then, reconnect to the Solaris 10 machine using the Telnet client.
Java problems
A list of Java problems and solutions.
Problem: The installer reports a Java Not Found error during installation of technology packs.
Cause: The installer expected, but did not find, Java executables in the path reported in the error message. The technology pack installation requires the correct path in order to function.
User action: Create a symbolic link from the reported directory to the directory on the system where the Java executables are installed, for example:
ln -s bin_path $JAVA_HOME/bin/java
where bin_path is the directory where the binaries are located. After you create the symbolic link, you must restart the technology pack installation.
Testing connectivity to the database
How to test the connectivity to the Oracle database.
About this task
To test client connectivity to the Oracle database:
Procedure
1. Make sure you are logged in as oracle and that the DISPLAY environment
variable is set.
2. Enter the following command:
$ sqlplus system/password@PV
In this syntax:
v password is the password you set for the Oracle system login name. (The default password is manager.)
v PV is the TNS name for your Tivoli Netcool Performance Manager database defined in your Oracle Net configuration.
For example:
$ sqlplus system/manager@PV
3. Output like the following example indicates a successful connection:
SQL*Plus: Release 11.2.0.2.0 - Production on <Current Date>
Copyright (c) 1982, 2010, Oracle Corporation. All rights reserved.
Connected to:
Oracle11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning option
JServer Release 11.2.0.2.0 - Production
SQL>
4. Type exit at the SQL> prompt.
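If the connection fails, you can first verify that the TNS name resolves and that the Oracle listener responds by using the tnsping utility, assuming PV is the TNS name defined in your Oracle Net configuration:
$ tnsping PV
A successful reply reports the connect descriptor and an OK status. If tnsping fails, review the Oracle listener configuration before you retry sqlplus.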
Testing external procedure call access
In the Oracle Net configuration, you set up an Oracle listener to wait for
connections using external procedure calls.
About this task
The shared library libpvmextc.so executes system commands from stored Oracle
procedures. This file is installed in the $ORACLE_BASE/admin/PV/lib directory
(where PV is the ORACLE_SID). A symbolic link to this library file is created by the
configure_db script in the $ORACLE_HOME/lib directory.
To test external procedure call access:
Procedure
1. Make sure you are logged in as oracle and that the DISPLAY environment
variable is set.
2. At a shell prompt, change to the following directory path:
$ cd $ORACLE_BASE/admin/skeleton/bin
3. Run the checkextc script, using the system database login name and password
as a parameter:
$ ./checkextc system/password
For example:
$ ./checkextc system/manager
4. Output like the following example indicates a successful test.
checkextc - Checking the installation of the library libpvmextc.so
This program try to execute the following unix commands
from a PL/SQL stored procedure.
1- Check ExternalCall : echo "UNIX : Check libpvmextc.so configuration."
2- Check Version
3- Check ExternalPipe : pwd
ORACLE : Connecting to Oracle ...
ORACLE : Creating library LibExtCall ...
ORACLE : Creating function ExternalCall ...
ORACLE : Calling function ExternalCall ...
UNIX : Check libpvmextc.so configuration succeeded.
ORACLE : Creating function Version ...
ORACLE : Calling function Version ...
UNIX : Check Version libpvmextc.so - Revision: 1.0.1.1
ORACLE : Creating function ExternalPipe ...
ORACLE : Calling function ExternalPipe ...
UNIX : Check ExternalPipe - /var/opt/oracle
ORACLE : Dropping function Version ...
ORACLE : Dropping function ExternalCall ...
ORACLE : Dropping function ExternalPipe ...
ORACLE : Dropping library LibExtCall ...
Appendix K. Migrating DataView content and users
When modifying your system to use Tivoli Integrated Portal or changing your
Tivoli Integrated Portal, you must migrate existing DataView content and users.
Moving DataView content between Tivoli Integrated Portal servers
You can use the synchronize command to move custom DataView content between
Tivoli Integrated Portal servers.
You can copy your custom DataView content, such as JSP pages, CSS and images,
from a remote Tivoli Integrated Portal server to a local Tivoli Integrated Portal
server. All remote content is copied to the local content directory at
<tip_location>/products/tnpm/dataview/legacy/content.
The synchronize command
Copies custom DataView content, such as JSP pages, CSS and images, from a
remote Tivoli Integrated Portal to a local Tivoli Integrated Portal. All remote
content is copied to the local content directory at <tip_location>/products/tnpm/
dataview/legacy/content.
Location
<tip_location>/products/tnpm/dataview/legacy/bin
Where <tip_location> is the Tivoli Integrated Portal installation directory, by default
/opt/IBM/tivoli/tipv2.
Required privileges
Adequate privileges are required to read and write files to the file system. You
must run this command from the UNIX command line as the Tivoli Netcool
Performance Manager UNIX user (by default, pvuser), or a user with similar or
greater privileges.
Syntax
synchronize.sh -tipuser <tip_username> -tippassword <tip_password> -sourceuser
<source_username> -sourcepassword <source_password> -sourceurl <source_url>
[-pattern <pattern>]
Parameters
<tip_username>
A Tivoli Integrated Portal user name for the local Tivoli Integrated Portal.
<tip_password>
The Tivoli Integrated Portal user password for the local Tivoli Integrated
Portal.
<source_username>
A Tivoli Integrated Portal user name for the remote Tivoli Integrated Portal.
<source_password>
The Tivoli Integrated Portal user password for the remote Tivoli Integrated
Portal.
<source_url>
The URL of the remote server, including the port and DataView context.
Optional parameter
<pattern>
The name pattern that identifies the types of files to filter for the
synchronization. Wildcards * and ? are supported. To synchronize all files, omit
the pattern; do not use * on its own to synchronize all files.
Example
The following command synchronizes the DataView custom .jsp file content from
a remote Tivoli Integrated Portal to a local Tivoli Integrated Portal:
synchronize.sh -tipuser <tip_username> -tippassword <tip_password>
-sourceuser <source_username> -sourcepassword <source_password>
-sourceurl https://server.ibm.com:16711/PV -pattern *.jsp
Migrating SilverStream content to the Tivoli Integrated Portal
Use the migrate command to move SilverStream content, users, or both to the
Tivoli Integrated Portal.
The migrate command connects remotely to a SilverStream server and migrates all
the specified data to a Tivoli Integrated Portal installation.
The migrate command requires:
Source access credentials
SilverStream administrator user name and password
Destination access credentials
Tivoli Integrated Portal administrator user name and password
You can run the migrate command from Solaris, AIX, or Linux computers. If there
are multiple SilverMasters, you can run the migrate tool and create multiple new
DataView servers from a single SilverStream computer. Refer to The migrate
command on page 248 for more information.
SilverStream page conversion
The migrate command performs the following processing when converting
SilverStream pages to JavaServer Pages (JSPs).
Each SilverStream page is made up of three parts:
v HTML (including HTML markup, inline CSS, JavaScript, and images)
v Java code
v SilverStream page controls
The migrate command does not modify inline CSS, JavaScript, or images. This type
of page content is extracted from the SilverStream server and imported into Tivoli
Integrated Portal unchanged.
The HTML and Java code sections are read from the page, processed and
combined to generate a new JSP.
HTML
When processing HTML sections, SilverStream page controls are replaced with
custom DataView JSP tags. For example, consider the following HTML from a
SilverStream page:
<div>
<h1><AGCONTROL name="stylesheetNameLabel"></h1>
</div>
That HTML is converted to the following JSP. The className attribute in the new
JSP tag comes from the SilverStream page control metadata section of the
SilverStream page.
<div>
<h1><proviso:pageControl className="com.sssw.shr.page.AgpLabel"
name="stylesheetNameLabel"></h1>
</div>
Java code
When processing Java code sections, the code is embedded into the JSP directly
using the standard JSP statement tag. For example, consider the following Java
code from a SilverStream page:
class SomePage extends AgpPage {
private AgpLabel label;
public SomePage() {
this.label.setText("foo");
}
}
That Java code is converted to the following JSP:
<%!
//class SomePage extends AgpPage {
private AgpLabel label;
public SomePage() {
this.label.setText("foo");
}
}
%>
All JSP fragments generated from the HTML and Java code conversion are
combined into one JSP page. When executed, this JSP page generates the same
HTML as the SilverStream page.
The resulting JSP can only be run by using the SilverStream emulation layer. If you
are using unsupported SilverStream page controls (AgpTabPane, AgpImageHotSpot
and AgpButtonRadio) then the resulting JSP will not work correctly.
The migrate command
Migrates all custom DataView content, or DataView users, from the SilverStream
server on a previous DataView installation.
The migrate command connects remotely to a SilverStream server and migrates all
the specified data to a Tivoli Integrated Portal installation. Ensure that both the
SilverStream and Tivoli Integrated Portal servers are running before running the
command.
Location
<tip_location>/products/tnpm/dataview/legacy/bin
Where <tip_location> is the Tivoli Integrated Portal installation directory, by default
/opt/IBM/tivoli/tipv2.
Required privileges
Adequate privileges are required to read and write files to the file system. You
must run this command from the UNIX command line as the Tivoli Netcool
Performance Manager UNIX user (by default, pvuser), or a user with similar or
greater privileges.
Syntax
migrate.sh -tipuser <tip_username> -tippassword <tip_password> -ssurl
<silverstream_URL> -ssuser <silverstream_username> -sspassword
<silverstream_password> -target <content|users|all> [-import ] [-verbose ]
Parameters
<tip_username>
A Tivoli Integrated Portal user name for the local Tivoli Integrated Portal.
<tip_password>
The Tivoli Integrated Portal user password for the local Tivoli Integrated
Portal.
<silverstream_URL>
The URL of the DataView SilverStream server.
<silverstream_username>
The SilverStream administrator user name.
<silverstream_password>
The SilverStream administrator password.
<content|users|all>
Indicates the type of data to be migrated:
content
Migrates all SilverStream content. All content is copied to the content
directory in the Tivoli Integrated Portal installation
<tip_location>/products/tnpm/dataview/legacy/content.
Depending on the type of content, it is copied to the following
directories:
v <tip_location>/products/tnpm/dataview/legacy/content/SilverStream/Pages
v <tip_location>/products/tnpm/dataview/legacy/content/SilverStream/Objectstore/Images
v <tip_location>/products/tnpm/dataview/legacy/content/SilverStream/Objectstore/General
users
Migrates all SilverStream users. Each user in SilverStream is either an
administrator or user. In the Tivoli Integrated Portal, the
administrator role is mapped to tnpmAdministrator, and the user role
is mapped to tnpmUser. The migration tool creates two user groups,
tnpmAdministrators and tnpmUsers, that contain all the users with the
corresponding roles.
Password information is not migrated. Under Tivoli Integrated Portal,
the password is set to be the same as the user name. For example, the
SilverStream user pvuser with password pv becomes user pvuser with
password pvuser when migrated to the Tivoli Integrated Portal.
all Exports all SilverStream content and users.
Optional parameters
-import
Indicates data should be imported into the Tivoli Integrated Portal.
-verbose
Indicates additional migration messages should be displayed.
Example
Run the following command as user root. Assuming a default installation, this
command copies all SilverStream content and users in the SilverStream server
silverstreamserver to the /opt/IBM/tivoli/tipv2/products/tnpm/dataview/
legacy/content directory in a Tivoli Integrated Portal installation:
migrate.sh -tipuser <tip_username> -tippassword <tip_password> -ssurl
http://127.0.0.1:8080/PV -ssuser <silverstream_username> -sspassword
<silverstream_password> -target all -import -verbose
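After the migration completes, you can confirm that the content was imported by listing the local content directories (a sketch assuming the default Tivoli Integrated Portal installation path):
$ ls /opt/IBM/tivoli/tipv2/products/tnpm/dataview/legacy/content/SilverStream/Pages
$ ls /opt/IBM/tivoli/tipv2/products/tnpm/dataview/legacy/content/SilverStream/Objectstore/Images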
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain
transactions, therefore, this statement might not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Such information may be available, subject to appropriate terms and conditions,
including in some cases payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurement may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM for the purposes of developing, using, marketing, or distributing application
programs conforming to IBM's application programming interfaces.
If you are viewing this information in softcopy form, the photographs and color
illustrations might not be displayed.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at Copyright and
trademark information at http://www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
other countries, or both.
Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer
Entertainment, Inc., in the United States, other countries, or both and is used under
license therefrom.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo,
Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States
and other countries.
IT Infrastructure Library is a registered trademark of the Central Computer and
Telecommunications Agency which is now part of the Office of Government
Commerce.
ITIL is a registered trademark, and a registered community trademark of the Office
of Government Commerce, and is registered in the U.S. Patent and Trademark
Office.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries,
or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Printed in USA