HPOM 9.22 Installation Guide Linux
Installation Guide
Legal Notices
Warranty
The only warranties for Hewlett Packard Enterprise products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or
editorial errors or omissions contained herein.
Copyright Notice
Copyright 1993-2016 Hewlett-Packard Development Company, L.P.
Trademark Notices
Adobe and Acrobat are trademarks of Adobe Systems Incorporated.
Intel, Itanium, and Pentium are trademarks of Intel Corporation in the U.S. and other countries.
Java is a registered trademark of Oracle and/or its affiliates.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Oracle is a registered trademark of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group.
Documentation Updates
The title page of this document contains the following identifying information:
l Software Version number, which indicates the software version.
l Document Release Date, which changes each time the document is updated.
l Software Release Date, which indicates the release date of this version of the software.
To check for recent updates or to verify that you are using the most recent edition of a document, go to:
https://softwaresupport.hpe.com
This site requires that you register for an HP Passport and sign in. To register for an HP Passport ID, go to:
https://cf.passport.hpe.com/hppcf/createuser.do
Or click the Register link at the top of the HP Software Support page.
You will also receive updated or new editions if you subscribe to the appropriate product support service.
Contact your HP sales representative for details.
Support
Visit the HP Software Support Online web site at: https://softwaresupport.hpe.com
This web site provides contact information and details about the products, services, and support that HP
Software offers.
HP Software online support provides customer self-solve capabilities. It provides a fast and efficient way to
access interactive technical support tools needed to manage your business. As a valued support customer,
you can benefit by using the support web site to:
l Search for knowledge documents of interest
l Submit and track support cases and enhancement requests
l Download software patches
l Manage support contracts
l Look up HP support contacts
l Review information about available services
l Enter into discussions with other software customers
l Research and register for software training
Most of the support areas require that you register as an HP Passport user and sign in. Many also require a
support contract. To register for an HP Passport ID, go to:
https://cf.passport.hpe.com/hppcf/createuser.do
To find more information about access levels, go to:
https://softwaresupport.hpe.com/web/softwaresupport/access-levels
HP Software Solutions Now accesses the HPSW Solution and Integration Portal web site. This site enables
you to explore HP product solutions to meet your business needs, and it includes a full list of integrations
between HP products as well as a listing of ITIL processes. The URL for this web site is:
https://softwaresupport.hpe.com
Contents
Chapter 1: Installation Requirements for the Management Server 13
In This Chapter 13
Preparation Steps 32
Installing Oracle Database 11g Release 1 32
Installing Oracle Database 11g Release 2 34
Installing Oracle Database 12c 36
Installing and Configuring the HPOM Software on the Management Server System 41
Usage of the ovoinstall and ovoconfigure Scripts 42
Before Running ovoinstall 43
Installing and Configuring the HPOM Software on the Management Server 43
Configuring an Oracle Database 50
Configuring a PostgreSQL Database 52
Managed PostgreSQL Database 53
PostgreSQL Cluster Directory Is Empty or Non-Existent 53
PostgreSQL Cluster Directory Belongs To an HPOM-created Cluster 55
Independent PostgreSQL Database 55
Configuring HPOM for Non-Root Operation 56
Viewing the Installation Log Files 57
Administration UI Installation Log File 58
Supported Platforms 68
Supported Languages 69
Installation Requirements 70
Hardware Requirements 70
Software Requirements 71
Supported Web Browsers 71
Upgrade of the Systems in a MoM Setup by Reusing the IP Addresses and Hostnames 151
Upgrading Systems in a MoM Setup by Reusing IP Addresses and Hostnames 152
Stopping the HP Operations Management Server in a Cluster Environment for Maintenance 180
Installing and Configuring the HP Operations Management Server in a Cluster Environment 184
Before You Install the HP Operations Management Server on the First Cluster Node 184
Preparation Steps for the First Cluster Node in a Basic Environment 185
Preparation Steps for the First Cluster Node in a Decoupled Environment 187
Preparation Steps for the First Cluster Node in a Cluster Environment Using an Independent Database Server 190
Before You Install the HP Operations Management Server on Additional Cluster Nodes 191
Preparation Steps for Additional Cluster Nodes 191
Installing and Configuring the HP Operations Management Server on Cluster Nodes 198
Installing and Configuring the HP Operations Management Server on the First Cluster Node 198
Chapter 11: Installing HPOM in a Red Hat Cluster Suite Environment 204
In This Chapter 204
Installing and Configuring the HP Operations Management Server in a Cluster Environment 205
Before You Install the HP Operations Management Server on the First Cluster Node 205
Preparation Steps for the First Cluster Node in a Basic Environment 206
Preparation Steps for the First Cluster Node in a Decoupled Environment 208
Preparation Steps for the First Cluster Node in a Cluster Environment Using an Independent Database Server 211
Before You Install the HP Operations Management Server on Additional Cluster Nodes 213
Preparation Steps for Additional Cluster Nodes 213
Installing and Configuring the HP Operations Management Server on Cluster Nodes 220
Installing and Configuring the HP Operations Management Server on the First Cluster Node 220
Installing and Configuring the HP Operations Management Server on an Additional Cluster Node 223
Installing and Configuring the HP Operations Management Server in a Cluster Environment 226
Before You Install the HP Operations Management Server on the First Cluster Node 227
Preparation Steps for the First Cluster Node in a Basic Environment 228
Preparation Steps for the First Cluster Node in a Decoupled Environment 230
Preparation Steps for the First Cluster Node in a Cluster Environment Using an Independent Database Server 234
Before You Install the HP Operations Management Server on Additional Cluster Nodes 235
Preparation Steps for Additional Cluster Nodes 235
Installing and Configuring the HP Operations Management Server on Cluster Nodes 242
Installing and Configuring the HP Operations Management Server on the First Cluster Node 242
Installing and Configuring the HP Operations Management Server on an Additional Cluster Node 245
Check your system parameters before running the HPOM installation script. This chapter helps you to
set the system parameters.
2. Install the Java GUI.
l HPOM must be installed on the management server.
l Install the Java GUI software on the systems where the Java GUI will be running.
For details, see "Installing the Java GUI" on page 68.
When selecting a system to serve as the management server, consider the following sizing factors:
l Concurrent operators
l Messages processed
l Monitored nodes
Migrating the management server to a larger system at a later date requires considerable effort,
particularly if your configuration is large and includes hundreds or thousands of managed nodes.
Hardware Requirements
The system you select as the management server must meet the following hardware requirements:
l x86_64
l Additional disk space
l Additional RAM
l Swap space (see Table 3)
Note: It is strongly recommended that you use a multiple-CPU system for the HP Operations
management server, with the option to add CPUs, RAM, and disk space to the system at a later
time if needed.
You can install an Oracle database or a PostgreSQL database on a dedicated system. For further
information, see "Setting Up HPOM with a Remote/Manual Oracle Database" on page 87 or "Setting
Up HPOM with a Remote/Manual PostgreSQL Database" on page 99.
File System    Required Disk Space (GB)
/etc/opt/OV    2
/var/opt/OV    5
/opt/OV        3.5
/tmp           1.2
Review the disk requirements of any other applications, such as HP Performance Manager, that
you want to install on the management server in the future.
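A quick way to compare the table above against the free space currently available is sketched below. It assumes GNU coreutils df; the HPOM directories may not exist yet, so the script falls back to the nearest existing parent directory:

```shell
# Sketch: report free space for the file systems from the table above, so you
# can compare against the required values (GNU df assumed).
for d in /etc/opt/OV /var/opt/OV /opt/OV /tmp; do
    # fall back to the nearest existing parent for systems without HPOM installed
    p=$d
    while [ ! -d "$p" ]; do p=$(dirname "$p"); done
    printf '%-12s %s free\n' "$d" "$(df -BG --output=avail "$p" | tail -1 | tr -d ' ')"
done
```

The exact mount layout differs per system; what matters is that each directory's backing file system meets the minimum in the table.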
If you do not have enough disk space in the file tree, you can use one of the following methods to
solve the problem:
l Mount a dedicated volume for the directory.
l Make the directory a symbolic link to a file system with enough disk space.
For details about the HPOM directory structure, see "Directory Structure on the Management
Server" on page 125.
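The symbolic-link method described above can be sketched as follows. The paths here are placeholders created in a scratch area so the sequence can be tried safely; in practice you would relocate the real HPOM directory (for example, /var/opt/OV) to the larger volume:

```shell
# Sketch: move a directory to a file system with enough disk space and leave a
# symbolic link behind. Temporary directories stand in for the real paths.
set -e
BASE=$(mktemp -d)            # stands in for the root file system
BIG=$(mktemp -d)             # stands in for the large volume
mkdir -p "$BASE/var/opt/OV"
echo data > "$BASE/var/opt/OV/file"
mv "$BASE/var/opt/OV" "$BIG/OV_var"       # relocate the contents
ln -s "$BIG/OV_var" "$BASE/var/opt/OV"    # leave a symlink in place
cat "$BASE/var/opt/OV/file"               # data is reachable via the old path
```

Perform the move while HPOM processes are stopped so that no files change during the copy.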
2. How fast is the average disk I/O time?
The disk I/O time affects the application start-up time and the swapping activities. It is
recommended that you distribute the database, the HPOM binaries, and the runtime data over
several disks. To maintain optimum performance, do not locate swap space on the same disks as
the HPOM binaries and the database.
Before selecting a system to serve as your management server, review the following questions:
The actual RAM requirements depend heavily on your production environment and mode of use.
The factors that affect the RAM requirements include: the number and frequency of HPOM
messages, the number of operators working in parallel, and the number of managed nodes.
The memory that the Java GUI consumes on the server and on the display stations can only be
estimated approximately.
2. Does the system provide enough swap space?
In most cases, you need a total of 4 GB of swap space on the management server system.
Note: Use device swap space rather than file system swap space for improved system
performance.
To check your currently available swap space, run the following command:
/usr/bin/free
To achieve the best performance and to avoid a disk access bottleneck, do not locate the
database and the swap space on the same physical disk.
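Besides /usr/bin/free, a quick check of the configured swap against the 4 GB guideline can be sketched as follows (reads /proc/meminfo, so it applies to Linux only):

```shell
# Sketch: compare configured swap space against the 4 GB guideline from this guide.
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
if [ "$swap_kb" -ge $((4 * 1024 * 1024)) ]; then
    echo "swap OK: ${swap_kb} kB"
else
    echo "swap below the 4 GB guideline: ${swap_kb} kB"
fi
```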
3. How many HPOM users will work at the same time?
The number of users influences the number of parallel GUIs running on the management server.
For each additional operating Java GUI and Service Navigator, about 16-20 MB of RAM or swap
space is required, plus 6 MB per 1000 active messages.
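Using the figures above, a rough upper-bound calculation for a hypothetical site with 25 concurrent Java GUI sessions and 50,000 active messages (both numbers are examples, not recommendations):

```shell
# Sketch: 20 MB per Java GUI session (upper bound of the 16-20 MB range)
# plus 6 MB per 1000 active messages.
guis=25
active_msgs=50000
echo "$(( guis * 20 + active_msgs / 1000 * 6 )) MB extra RAM/swap"   # prints "800 MB extra RAM/swap"
```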
4. How many background graphics are integrated into Service Navigator?
Background graphics can also slow down the system by using excessive amounts of RAM.
a. The value recommended by Oracle is equal to the system physical memory (RAM) or 2 GB, whichever is
greater. For the PostgreSQL database, swap space is not required.
b. This value depends on the number of GUIs running in parallel, and on the number of active and
acknowledged messages. For each additional operating Java GUI and Service Navigator, about 16-20 MB
of RAM or swap space is required, plus 6 MB per 1000 active messages.
Reserve enough physical memory to accommodate all the virtual memory needs of HPOM. This extra
memory will eliminate the need for process swapping, and will result in the best possible performance.
The performance of HPOM can decrease if swapping becomes necessary.
Performance Requirements
The speed with which HPOM processes messages and the Java GUI performance both depend on the
available CPU time as well as the overall CPU power. Therefore, consider the demands of other
installed applications on CPU time, disk access, and RAM or swap space usage.
Note: It is strongly recommended that you use a multiple-CPU system for the management server
system, especially if you plan to run multiple Java GUIs.
Because the throughput of LAN packets can affect the management server performance, you should
not use the management server system for other purposes, such as NFS, NIS (YP), DNS, and so on.
However, configuring the HP Operations management server system as a secondary Domain Name
Server (DNS) can help increase the speed of name lookups.
Table 4: Processor Requirements
Before setting up the connection between the managed nodes and the HPOperations management
server, review the following questions:
1. Is the system accessible all the time (at least while HPOM operators are working)?
The management server should be accessible at least while the managed nodes are operating.
If it is not, the following inconveniences can occur:
l Automatic actions that do not run directly on the local managed node cannot be performed
while the management server is down.
l When the management server is restarted, the managed nodes forward all locally buffered
HPOM messages to the management server. If hundreds or thousands of messages need to
be processed, this has a significant effect on the performance of HPOM.
2. Is the system located centrally as regards network connectivity and network speed?
To minimize the HPOM response time, a fast network (LAN) should be available between the
management server system and its managed nodes. For example, the management server should
not be connected by a serial line or X.25 with all the other systems networked in a LAN.
3. Are the display stations of the HPOM operators and the management server connected by
fast lines?
Having fast lines between the management server and the operator workstations is strongly
recommended.
Software Requirements
Before you install HPOM, the following software must be correctly installed on the management server.
Operating System
Table 5 shows the operating system versions on which the HP Operations management server is
supported.
Table 5: Supported Operating System Versions for the Management Server
Operating System     Platform     Supported Operating System Version
For the most up-to-date list of supported operating system versions, see the support matrix at the
following location:
https://softwaresupport.hpe.com/km/KM323488
HPOM on Red Hat Enterprise Linux is a 64-bit application. It supports integrations with 64-bit
applications on the API level.
Kernel Parameters
Several kernel parameters must be increased on the HP Operations management server
because the operating system default values are too small. The ovoinstall script checks your current
settings.
a. Both Red Hat Compatible Kernel and Unbreakable Enterprise Kernel are supported.
Table 6: Minimum Kernel Settings Required for HPOM Installation on the Management Server
Parameter                        Minimum Value
kernel.sem.1                     250
kernel.sem.2                     32000
kernel.sem.3                     100
kernel.sem.4                     128
net.ipv4.ip_local_port_range.1   9000
net.ipv4.ip_local_port_range.2   65500
net.core.rmem_max                4194304
net.core.rmem_default            262144
net.core.wmem_max                1048576
net.core.wmem_default            262144
fs.aio-max-nr                    1048576
fs.file-max                      6815744
kernel.shmall                    2097152
kernel.shmmax                    4294967295
kernel.shmmni                    4096
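The Table 6 minimums can be expressed as a sysctl configuration fragment, as sketched below. The kernel.sem.1 through kernel.sem.4 entries correspond to the four fields of the single kernel.sem sysctl (semmsl, semmns, semopm, semmni), and the two ip_local_port_range entries are the two fields of that range. Review this against your current values before copying it to /etc/sysctl.d/, and keep any existing values that are already larger:

```shell
# Sketch: Table 6 minimums as sysctl settings, written to a temporary file for
# review. Apply later as root with: sysctl -p <file> (after copying to /etc/sysctl.d/).
conf=$(mktemp)
cat > "$conf" <<'EOF'
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_max = 4194304
net.core.rmem_default = 262144
net.core.wmem_max = 1048576
net.core.wmem_default = 262144
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
EOF
cat "$conf"
```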
Caution: On an HP Operations management server with a high number of Reverse Channel
Proxy (RCP) nodes, the ovbbccb process opens many connections and may therefore run out of
available file descriptors. As a result, the agents start buffering.
To avoid this problem, increase the number of file descriptors to 4096 on the management server.
Increase the maximum number of open files by using the limits.conf file:
tail /etc/security/limits.conf
* soft nofile 4096
* hard nofile 4096
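After editing limits.conf as shown above, the effective limits can be checked from a new login session (limits.conf changes do not affect already running shells):

```shell
# Sketch: display the current soft and hard open-file limits for this shell;
# after the limits.conf change and a fresh login, both should be at least 4096.
echo "soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
```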
Caution: Before you install any of the required operating system patches, read the README file
supplied with the patch.
For latest information about the required patches, see the HPOM Software Release Notes. This
document is available at the following location:
https://softwaresupport.hpe.com/group/softwaresupport/search-result?keyword=
At the time of installation, the documented patches may be superseded. Use the latest patches from
the following location:
https://softwaresupport.hpe.com/group/softwaresupport/patches
Administration UI Requirements
The Administration UI is installed during the installation and configuration of HPOM, so make sure that
you also perform all the checks described in this section.
Caution: Make sure that you have at least 1.2 GB of free disk space in the /tmp directory.
Otherwise, the installation of the Administration UI may fail.
Passwords
Make sure that you have access to the HPOM database user password.
Any database user with read access to the HPOM database objects can be used. Both the opc_op and
opc_report users, which are created during the HP Operations management server installation, fulfill
this requirement.
Note: Oracle only: Oracle 11g or higher has password aging enabled by default, which means that
passwords expire after 6 months. If the password of the Oracle user that HPOM uses to connect
to the database expires, HPOM can no longer connect to the database. For detailed information, see:
https://softwaresupport.hpe.com/km/KM323488
Caution: Make sure to review and verify all connection parameters. The majority of configuration
problems appear because of incorrect connection settings (for example, when non-standard ports
or incorrect hostnames are used).
Oracle RAC environments only: The correct configuration setup must be performed after the
Administration UI is installed.
HA cluster only: If you use the Oracle database that runs as an HA cluster package, provide the virtual
cluster hostname of that HA cluster package.
Database Passwords
The passwords for the database users are stored in an encrypted form inside the Administration UI
configuration files.
If you need to change the passwords after installing the Administration UI, follow these steps:
https://softwaresupport.hpe.com/km/KM323488
l Install and configure HPOM for the first time on the management server.
l Set up a database for use with HPOM.
l Start HPOM and verify the installation.
l Create additional database users.
l Reconfigure HPOM.
Note: The HPE Operations Agent software is automatically installed during the installation of the
HPOM software on the HP Operations management server.
l The Red Hat Enterprise Linux, Oracle Linux, or CentOS Linux operating system must be installed.
l Kernel parameters on the management server must be adapted.
For more information, see "Verifying Installation Requirements" on page 15.
l Red Hat Enterprise Linux, Oracle Linux, or CentOS Linux operating system patches must be
installed.
l Sufficient disk space must be available in the right partitions of the file system.
For more information, see "Required Disk Space" on page 16.
l xinetd must be installed.
Note: On RHEL 7.x, the xinetd service must be enabled and running during HPOM configuration.
l Shared memory subsystem must be available. You can check whether this subsystem is available
by mounting the tmpfs file system (/dev/shm).
l Input and output data for multiple language support must be configured if you use any non-ASCII
character.
For more information, see "Configuring Input/Output for Multiple Language Support" below.
Note: The LANG variable determines the language of HPOM messages, templates,
and uploaded configuration. If some of the contents are not available for the chosen
locale, HPOM defaults to the English contents instead.
l Linux:
Make sure that you set locales to a UTF-8 version in the same way as for the management
server. To find the appropriate UTF-8 suffix, use locale -a.
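The locale requirement above can be checked with a short sketch. Locale names vary by distribution, so use the exact spelling that locale -a prints when setting LANG:

```shell
# Sketch: list available UTF-8 locales and show the current LANG setting.
locale -a 2>/dev/null | grep -i 'utf' | head -3 || true
echo "LANG=${LANG:-unset}"
```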
l Task 1: "Installing an Oracle Database" on the next page or "Installing a PostgreSQL Database" on
page 38
Caution: Before installing a database, you should consider which database you want to use
with HPOM, namely an Oracle database or a PostgreSQL database.
l Task 2: "Installing and Configuring the HPOM Software on the Management Server System" on
page 41
Caution: Keep in mind the following changes introduced with HPOM 9.20:
l HPE Operations Agent software is no longer shipped together with HPOM. To obtain the
supported agent version, request the agent media 11.1x from HP.
Note: The Red Hat Enterprise Linux installation procedure provides an option to enable a basic
firewall. The default settings of this firewall block all HPOM communications to other systems. If
you choose to enable this firewall, you must configure it to allow HPOM communications. For
detailed information about the ports that you need to open, see the HP Operations Manager
Firewall Concepts and Configuration Guide.
l Oracle Database 11g Release 1 Enterprise Edition, Standard Edition, or Standard Edition One (with
the 11.1.0.7 patch set)
l Oracle Database 11g Release 2 Enterprise Edition, Standard Edition, or Standard Edition One
(versions 11.2.0.1-11.2.0.4)
l Oracle Database 12c Release 1 Enterprise Edition, Standard Edition, or Standard Edition One
(12.1.0.1 or 12.1.0.2)
For the latest Oracle system requirements (for example, system patches), more detailed instructions
than those provided in this section, or non-standard installations, see the documentation supplied with
the Oracle Database product.
For information about the support of later versions of Oracle, see the latest edition of the HPOM
Software Release Notes.
Note: Oracle 11g and Oracle 12c are Oracle Corporation products and cannot be purchased
directly from Hewlett-Packard.
If you have an existing Oracle database and want to verify which Oracle products are installed, use the
Oracle Universal Installer to view the installed Oracle products:
1. See the Oracle product documentation to make sure that the database is compatible with Oracle
database version 11g (11.1 or 11.2) or 12c (12.1).
2. Make sure Oracle environment variables are set as described in "Before Installing an Oracle
Database" below.
3. Continue with "Installing and Configuring the HPOM Software on the Management Server
System" on page 41.
1. Make sure that your system meets the hardware and software requirements listed in "Installation
Requirements for the Management Server" on page 13.
2. Run User Manager as the root user, and then create the oracle user with the following
attributes:
a. Create UNIX groups named oinstall, dba, and oper (the ID of each group should be greater
than 100).
b. Create a UNIX user named oracle (the user ID should be greater than 100).
Caution: In a cluster environment, you must use the same IDs on all cluster nodes.
Otherwise, the startup of the HA resource group on the second node fails.
c. Make the oracle user a member of oinstall as the primary group and dba and oper as the
secondary groups.
d. As the home directory of the oracle user, use the following:
/home/oracle
e. Make sure that a POSIX shell (for example, sh) is assigned as the default shell for the oracle
user.
3. As the root user, set umask to allow users to access Oracle binaries by running the following
command:
umask 022
4. Create the directories required by the Oracle installation, and then change the ownership and set
correct permissions as follows:
a. Create the ORACLE_HOME directory by running the following command:
mkdir -p /opt/oracle/product/<version>
In this instance, <version> is the Oracle database version, 11.1.0, 11.2.0, or 12.1.0.
You can also choose a different directory, but you must use it consistently in all
subsequent steps.
b. Create a base directory for the Oracle installation files by running the following command:
mkdir -p /opt/oracle/oraInventory
Note: You can also choose a different directory, but you must use it consistently in all
subsequent steps.
c. Change the ownership and set correct permissions by running the following commands:
chown -R oracle:oinstall /opt/oracle/oraInventory
chmod -R 770 /opt/oracle/oraInventory
5. Change the ownership of the directories to oracle:oinstall by typing the following command:
chown -R oracle:oinstall /opt/oracle \
/opt/oracle/product /opt/oracle/product/<version>
In this instance, <version> is the Oracle database version, 11.1.0, 11.2.0, or 12.1.0.
6. Set the following Oracle environment variables in /home/oracle/.profile or
/home/oracle/.bash_profile of the oracle user:
l ORACLE_BASE=/opt/oracle
export ORACLE_BASE
This variable determines the location of the Oracle installation. The default recommended
setting is /opt/oracle, but you can use a different installation prefix if needed.
l ORACLE_HOME=$ORACLE_BASE/product/<version>
export ORACLE_HOME
In this instance, <version> is the Oracle database version, 11.1.0, 11.2.0, or 12.1.0.
This variable determines the location and the version of the Oracle installation. This is the
recommended setting, but you can use a different setting if needed.
Note: The ORACLE_BASE and ORACLE_HOME Oracle environment variables are not
mandatory for the operation with HPOM.
l ORACLE_SID=openview
export ORACLE_SID
This variable defines the name of the database you will create. The default setting is openview,
but you can use a different setting if needed.
When using an existing database, use the name of this database for setting ORACLE_SID.
When configuring the database, the ovoconfigure script detects that a database of this name
exists and asks whether you want to use it for the HPOM database objects. If you choose this
approach, the HPOM database objects are created within the existing database.
l ORACLE_TERM=<terminal_type>
export ORACLE_TERM
This variable defines the type of terminal (for example, xterm, hp, ansi) to be used with the
Oracle installer and other Oracle tools.
Make sure to set this variable to the type of your terminal.
l PATH=$PATH:$ORACLE_HOME/bin
export PATH
This variable sets the directories through which the system searches to find and execute
commands.
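The environment variables from step 6 can be collected into a single profile fragment, as sketched below. The version 12.1.0 and terminal type xterm are example values only; in practice the lines belong in /home/oracle/.profile or /home/oracle/.bash_profile, while the sketch writes to a temporary file so the result can be reviewed first:

```shell
# Sketch: profile fragment with the Oracle environment variables from step 6
# (example values; adjust version and terminal type to your installation).
profile=$(mktemp)
cat > "$profile" <<'EOF'
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.1.0
ORACLE_SID=openview
ORACLE_TERM=xterm
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID ORACLE_TERM PATH
EOF
cat "$profile"
```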
7. If you want to use port 1521 for Oracle listener communication: Make sure that the ncube port is
commented out in /etc/services (if this file exists on your system):
#ncube-lm 1521/tcp # nCube License Manager
#ncube-lm 1521/udp # nCube License Manager
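Commenting out the ncube-lm entries can be done with sed, as sketched below. The sketch runs against a sample copy so the expression can be verified before editing the real /etc/services:

```shell
# Sketch: comment out the ncube-lm entries; demonstrated on a sample copy.
# For the real file, run the same sed command on /etc/services as root.
svc=$(mktemp)
printf 'ncube-lm\t1521/tcp\t# nCube License Manager\nncube-lm\t1521/udp\t# nCube License Manager\n' > "$svc"
sed -i 's/^ncube-lm/#ncube-lm/' "$svc"
grep '^#ncube-lm' "$svc"
```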
Preparation Steps
Note: Oracle Database 11g Release 1 and Release 2 as well as Oracle Database 12c Release 1
for Red Hat Enterprise Linux or Oracle Linux are available on DVD-ROMs. These products, as well
as all required patch sets, can be downloaded from the Oracle web site.
Note that due to Oracle restrictions, HPOM does not support an Oracle database running on
CentOS Linux.
1. Open two terminal windows, and then log on as the root user in the first terminal window and as
the oracle user in the second one.
2. As the oracle user, make sure that the ORACLE_TERM environment variable is set correctly.
To check the setting, type the following:
echo $ORACLE_TERM
3. Verify, and if necessary, set the ORACLE_HOME and ORACLE_SID variables.
4. Set the DISPLAY environment variable by typing the following:
DISPLAY=<nodename>:0.0
export DISPLAY
In this instance, <nodename> is the name of your system.
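Step 4 can be verified immediately after setting the variable, as in this small sketch ("mynode" is a placeholder for your system name):

```shell
# Sketch: set and verify DISPLAY for the Oracle installer session.
DISPLAY=mynode:0.0
export DISPLAY
echo "DISPLAY=$DISPLAY"
```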
5. On most systems, the disk is mounted automatically when you insert it into the disk drive.
However, if the disk is not mounted automatically, create a mount point, and then, as the root
user, run the following command to mount it:
/bin/mount -o ro -t iso9660 /dev/cdrom <mount_point>
In this instance, <mount_point> is the disk mount point directory.
Note: Before proceeding with the installation of the Oracle database, it is recommended that
you copy the contents of installation media to a hard disk.
1. As the oracle user, start the Oracle Universal Installer by running the following command:
<path>/runInstaller
In this instance, <path> is the full path of the database directory on the installation media.
The Select Installation Method window opens.
2. In the Select Installation Method window, click Advanced Installation, and then Next.
The Specify Inventory directory and credentials window opens.
Note: If an error message appears indicating that the inventory location could not be created,
you can safely ignore it.
3. Make sure that the /opt/oracle/oraInventory path is given in the Specify Inventory directory
and credentials window, and then click Next.
The Select Installation Type window appears.
4. In the Select Installation Type window, click either Enterprise Edition or Standard Edition
(according to your needs or your Oracle license agreement), and then click Next.
The Install Location window opens.
Note: If you plan to run an HP Operations management server in a language other than
English, which is the default language, you can add languages by clicking the Product
Languages... button and selecting languages from the list.
5. In the Install Location window, check that the Oracle variables are set correctly, and then click
Next.
The Product-Specific Prerequisite Checks window appears.
6. In the Product-Specific Prerequisite Checks window, the result of checking requirements appears.
If no problems are reported, click Next.
The Select Configuration Option window opens.
Note: If a problem report message appears, check all requirements and set them accordingly.
7. In the Select Configuration Option window, click Install Software Only, and then Next.
The Privileged Operating System Groups window appears.
8. In the Privileged Operating System Groups window, click Next.
The Summary window opens.
9. Review the information displayed in the Summary window, and then click Install to start the
installation.
10. When the Execute Configuration scripts window appears, follow these steps:
a. Open a terminal window, and then log on as the root user.
b. Run the following two scripts:
${ORACLE_HOME}/root.sh
/opt/oracle/oraInventory/orainstRoot.sh
c. Return to the Execute Configuration scripts window, and then click OK to continue.
l Direct upgrades from previous releases to the most recent patch set are supported.
l Out-of-place patch set upgrades, in which you install the patch set into a new and separate
Oracle home, are the best practice recommendation. In-place upgrades are supported but are
not recommended.
l New installations consist of installing the most recent patch set, rather than installing a base
release and then upgrading to a patch release.
To install Oracle Database 11g Release 2 from the DVD-ROM, follow these steps:
1. As the oracle user, start the Oracle Universal Installer by running the following command:
<path>/runInstaller
In this instance, <path> is the full path of the database directory on the installation media.
Depending on the version of the Oracle database you are installing, one of the following two
windows opens:
l 11.2.0.1: Select Installation Option window
2. Oracle database version 11.2.0.2-11.2.0.4: Use the Software Updates feature to dynamically
download and apply the latest updates.
To dynamically download and apply latest updates, in the Configure Security Updates window, do
one of the following:
l If you want to receive information about security issues, follow these steps:
i. Either type your email address or select the I wish to receive security updates via My
Oracle Support check box and type your Oracle support password. Click Next.
l If you do not want to receive information about security issues, follow these steps:
i. Clear the I wish to receive security updates via My Oracle Support check box, and
then click Next.
You are asked whether you are sure you do not want to receive information about security
issues.
ii. Click Yes.
The Download Software Updates window opens.
iii. In the Download Software Updates window, skip applying updates to the downloaded
Oracle software by clicking Skip software updates followed by Next.
The Select Installation Option window opens.
3. In the Select Installation Option window, click the Install database software only radio button,
and then click Next.
The Grid Installation Options window opens.
4. In the Grid Installation Options window, click Single instance database installation, and then
click Next.
The Select Product Languages window opens.
5. In the Select Product Languages window, you can find a list of available languages that you can
select according to your preferences (for example, if you plan to run an HP Operations
management server in a language other than English, which is the default language, or if you want
to receive Oracle messages in a different language).
After you select the languages you want, click Next.
The Select Database Edition window opens.
6. In the Select Database Edition window, click Enterprise Edition or Standard Edition (according
to your needs or your Oracle license agreement), and then click Next.
The Specify Installation Location window opens.
7. In the Specify Installation Location window, check that the Oracle base and software location
values correspond to the ORACLE_BASE and ORACLE_HOME values you created, and then click Next.
The Privileged Operating System Groups window opens.
8. In the Privileged Operating System Groups window, specify the group names for the Database
Administrator group (OSDBA), for example, dba, and optionally, the Database Operator group
(OSOPER), for example, oper. Click Next.
The Perform Prerequisite Checks window opens.
9. In the Perform Prerequisite Checks window, the result of checking requirements appears. If no
problems are reported, click Next.
The Summary window opens.
Note: If a problem report message appears, check all requirements and set them accordingly.
10. Review the information displayed in the Summary window, and then click Install to start the
installation.
The Install Product window opens.
11. When the Execute Configuration scripts window appears, follow these steps:
a. Open a terminal window, and then log on as the root user.
b. You are requested to run one or both of the following scripts:
${ORACLE_HOME}/root.sh
/opt/oracle/oraInventory/orainstRoot.sh
c. Return to the Execute Configuration scripts window, and then click OK to continue.
The Finish window opens.
12. In the Finish window, click Close to finish the Oracle database installation.
To install Oracle Database 11g Release 2 versions 11.2.0.2 to 11.2.0.4, follow these steps:
1. As the oracle user, start the Oracle Universal Installer by running the following command:
<path>/runInstaller
In this instance, <path> is the full path of the database directory on the installation media.
The Configure Security Updates window opens.
2. Use the Software Updates feature to dynamically download and apply latest updates.
To dynamically download and apply latest updates, in the Configure Security Updates window, do
one of the following:
l If you want to receive information about security issues, follow these steps:
i. Either type your email address or select the I wish to receive security updates via My
Oracle Support check box and type your Oracle support password. Click Next.
The Download Software Updates window opens.
ii. In the Download Software Updates window, do one of the following:
l Apply updates to the downloaded Oracle software, and then click Next.
l Skip applying updates to the downloaded Oracle software by clicking Skip software
updates followed by Next.
In both cases, the Select Installation Option window opens.
l If you do not want to receive information about security issues, follow these steps:
i. Clear the I wish to receive security updates via My Oracle Support check box, and
then click Next.
You are asked whether you are sure you do not want to receive information about security
issues.
ii. Click Yes.
The Download Software Updates window opens.
iii. In the Download Software Updates window, skip applying updates to the downloaded
Oracle software by clicking Skip software updates followed by Next.
The Select Installation Option window opens.
3. In the Select Installation Option window, click the Install database software only radio button,
and then click Next.
The Grid Installation Options window opens.
4. In the Grid Installation Options window, click Single instance database installation, and then
click Next.
The Select Product Languages window opens.
5. In the Select Product Languages window, you can find a list of available languages that you can
select according to your preferences (for example, if you plan to run an HP Operations
management server in a language other than English, which is the default language, or if you want
to receive Oracle messages in a different language).
After you select the languages you want, click Next.
The Select Database Edition window opens.
6. In the Select Database Edition window, click Enterprise Edition or Standard Edition (according
to your needs or your Oracle license agreement), and then click Next.
Note: If you install the Oracle database on the system for the first time, the Create Inventory
window appears before the Specify Installation Location window. In the Create Inventory
window, specify the path to the Oracle inventory directory.
You may get a message warning you that the central inventory is located inside the ORACLE_
BASE directory. In this case, continue by clicking Yes.
7. In the Specify Installation Location window, check that the Oracle base and software location
values correspond to the ORACLE_BASE and ORACLE_HOME values you created, and then click Next.
The Privileged Operating System groups window opens.
8. In the Privileged Operating System Groups window, specify the group names for the Database
Administrator group (OSDBA), for example, dba, and optionally, the Database Operator group
(OSOPER), for example, oper. Click Next.
The Perform Prerequisite Checks window opens.
9. In the Perform Prerequisite Checks window, the result of checking requirements appears. If no
problems are reported, click Next.
The Summary window opens.
Note: If a problem report message appears, check all requirements and set them accordingly.
10. Review the information displayed in the Summary window, and then click Install to start the
installation.
The Install Product window opens.
11. When the Execute Configuration scripts window appears, follow these steps:
a. Open a terminal window, and then log on as the root user.
b. You are requested to run one or both of the following scripts:
${ORACLE_HOME}/root.sh
/opt/oracle/oraInventory/orainstRoot.sh
c. Return to the Execute Configuration scripts window, and then click OK to continue.
The Finish window opens.
12. In the Finish window, click Close to finish the Oracle database installation.
For detailed information about installing a PostgreSQL database, see "Installing a PostgreSQL
Database" below. For the latest PostgreSQL system requirements or more detailed instructions than
those provided in this section, see the PostgreSQL documentation that is available at the following
location:
http://www.postgresql.org/docs
Caution: The set of PostgreSQL server binaries you choose (for example, Open Source,
EnterpriseDB, or compiled from source) must contain the server binaries that are built with enabled
thread safety.
Note: The PostgreSQL object-relational database management system can be downloaded from
the PostgreSQL web site, at one of the following locations:
l http://www.postgresql.org/download
l http://enterprisedb.com/downloads/postgres-postgresql-downloads
After you choose the PostgreSQL database version that you want to install, complete the following
tasks:
When installing the PostgreSQL server binaries, keep in mind the following:
l The installation package may have dependencies. You can find the links to these dependencies on
the same page as the PostgreSQL binary package. For details, see the corresponding package
documentation.
l For the HP Operations management server, a 64-bit version of PostgreSQL is required.
l If you use server binaries provided by OpenSGC, make sure that you always set the LD_LIBRARY_
PATH variable to the directory where the PostgreSQL client libraries are stored. You must do this
because the server binaries provided by OpenSGC are not built with the correct runtime path.
Note: It is strongly recommended that you do not use server binaries provided by OpenSGC.
l When building from source, thread safety must be enabled. Follow the instructions provided for each
package. Depending on the package you choose, one or more sub-packages for the server, the
client, or the libraries are available. In a local scenario, all of them are needed for the HP Operations
management server. In a remote scenario, you must install the packages for the server, the client,
and the libraries on the PostgreSQL server, while the packages for the client and the libraries must
be installed on the HP Operations management server.
l Add the directory where the PostgreSQL binaries are stored into PATH. Otherwise, the operating
system may include a different and usually older version of PostgreSQL, which may create
conflicts.
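The PATH and library-path advice above can be sketched as a shell fragment; the PostgreSQL installation directory shown is a placeholder, not a value from this guide:

```shell
# Put the PostgreSQL binaries first in PATH so an older system-wide copy
# cannot shadow them, and point LD_LIBRARY_PATH at the client libraries
# (required, for example, when the server binaries are not built with the
# correct runtime path). Both paths are placeholders.
PSQL_HOME=/opt/postgresql                 # hypothetical install location
PATH="$PSQL_HOME/bin:$PATH"
LD_LIBRARY_PATH="$PSQL_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH LD_LIBRARY_PATH
```

Add the same two assignments to the profile of the user that runs the HPOM processes so they survive a new login.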
Caution: In a cluster environment, you must use the same user and group IDs on all cluster
nodes. Otherwise, the startup of the HA resource group on the second node fails.
You can create a PostgreSQL database cluster in one of the following ways:
l Automatically:
During ovoconfigure. In this case, a database cluster is created locally on the HP Operations
management server.
For details, see "Configuring a PostgreSQL Database" on page 52.
l Semi-automatically:
By running the psqlcluster tool on the database server system. In this case, a local or remote
database cluster is created.
For details, see "Creating and Configuring a PostgreSQL Database Cluster by Using the psqlcluster
Tool" on page 101.
l Manually:
This method enables additional customization of cluster parameters and a file location.
For details, see "Creating and Configuring a PostgreSQL Database Cluster Manually" on page 102.
To ensure that the HPOM installation runs smoothly, your system must meet all the prerequisites
detailed in "Installation Requirements for the Management Server" on page 13.
Note: SELinux running in the enforcing mode, which is the default mode that executes the
SELinux security policy in a RHEL environment, is supported with HPOM.
Before running the ovoinstall script, decide whether you want to set the database to start
automatically every time you restart your system.
After you install the HPOM software on the management server, the ovoinstall script asks you if you
want to continue with the server software configuration. If you answer in the affirmative, the
ovoconfigure script is started automatically.
Caution: Do not install HPOM product packages directly by using rpm. Use ovoinstall for the
administration of the HPOM software on the HP Operations management server.
In addition, it is not possible to install HPOM from the software depot server.
The syntax of the ovoinstall and ovoconfigure scripts is the same and is as follows:
ovoinstall|ovoconfigure
[-pkgdir <package_dir>] [-agtdir <software_dir>]
[-adminUIdir <software_dir>]
[-defaults <defaults_file>]
[-no_clear_display] [-u|-unattended] [-check]
You can use the following options with the ovoinstall and ovoconfigure scripts:
-agtdir <software_dir>    Enables you to specify the HPE Operations Agent software location.
-defaults <defaults_file> Enables you to specify the file containing the default answers to the
                          ovoconfigure questions.
-no_clear_display         Specifying this option stops the ovoconfigure script from clearing the
                          screen contents after each successfully finished step.
l If the version of your RHEL operating system is 7.x, you must install HPE Operations Agent before
starting the HPOM installation and configuration. Otherwise, the HPOM installation and
configuration will fail.
Caution: RHEL 7.x is supported with HPE Operations Agent 11.14 or higher.
l Verify whether you use Network Information Services (NIS) for user or group management. This
information is available from the passwd and group entries in the /etc/nsswitch.conf file.
If you use NIS, keep the following in mind before running the ovoinstall script:
l If the opc_op user already exists in the NIS environment, it must belong to the opcgrp group. If it
does not exist, the opc_op user will be created by the ovoinstall script.
l Home directories of the opc_op and oracle or postgres users must be accessible on the
HP Operations management server and must be the same as on the NIS server.
l If you plan to use PostgreSQL as the database server, both the PostgreSQL OS DBA user and its
group must be created.
If you do not use NIS for user or group management, ovoinstall automatically sets up both groups
and users.
l If you do not want your user account and group configuration to be modified during the installation
and configuration of the HPOM software on the management server, make sure to configure the
opc_op user and the opcgrp group before starting the installation.
Note: If at any point either ovoinstall or ovoconfigure returns a value with an error, type back
to repeat the step, exit to cancel the procedure, or ? to get more information.
To install and configure the HPOM software on the management server, follow these steps:
l If your operating system is RHEL 7.x or Oracle Linux 7.x, follow these steps:
o Install the following versions of HPE Operations Agent:
4. Press ENTER to confirm that you want the installation procedure to start.
You are prompted to enter the HPOM software package repository location where all server
packages are located.
5. Press ENTER to accept the default repository location, or enter the desired location followed by
ENTER.
To correct any value, type back, and then set the value to match the required value.
The ovoinstall script checks and installs the server setup package that contains the server
installation infrastructure.
6. Press ENTER to continue with checking the system.
The following is checked:
l root user
l LANG
l NLS_LANG
l umask
l Language
l Kernel parameters
l Installed software
l Running processes
l Required files
Note: If the system check returns a failed value, type back to repeat the step, exit to
cancel the procedure, or ? to get more information.
In case of a minimum deviation from the requirements, you can safely continue with the
installation.
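Part of this system check can be reproduced manually before starting ovoinstall. This is an informal sketch for orientation, not the actual check the script runs:

```shell
# A manual pre-flight mirroring some of the items ovoinstall checks:
# the root user, the LANG setting, and the umask. Informational only.
euid=$(id -u)
if [ "$euid" -eq 0 ]; then
    echo "running as root"
else
    echo "not root (uid $euid)"
fi
echo "LANG=${LANG:-unset}"
um=$(umask)
echo "umask=$um"
```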
7. After the system check is done, press ENTER to continue with the installation.
You are prompted to enter the HPE Operations Agent software location. This location refers to the
path to the mounted HPE Operations Agent media, where the file oainstall.sh is located.
8. Enter the HPE Operations Agent software location (for example, /tmp/HPOperationsAgent, the
location where the untarred HPE Operations Agent software file is located), and then press
ENTER.
You are prompted to enter the HPOM Administration UI software location.
When the detection procedure finishes, you are prompted to enter the certificate backup
password, that is, the password used for a certificate backup and restore (for example, cert_bkp).
17. Accept the default value by pressing ENTER, or type the desired value followed by ENTER.
The ovoconfigure script asks you if you want to configure HP Performance Manager (OVPM).
18. Press ENTER to accept the default value and not to configure OVPM, or press y followed by
ENTER to configure OVPM during the server configuration. In that case, specify OVPM's network
node and port.
Caution: At this point, you must decide which database you want to configure, an Oracle
database or a PostgreSQL database.
Note: When choosing a password for an HPOM database, avoid using a straight quotation
mark ("), a single quotation mark ('), a dollar sign ($), and a backslash (\). However, if you
want your password to contain ', $, or \, you can change it later by using the opcdbpwd
command.
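The character restrictions in the note above can be expressed as a small shell sketch; the function name is made up for illustration:

```shell
# Sketch: reject a candidate HPOM database password that contains any of
# the characters the guide disallows at configuration time: " ' $ \
check_hpom_passwd() {
  case "$1" in
    *'"'* | *"'"* | *'$'* | *'\'* ) echo invalid ;;
    * ) echo ok ;;
  esac
}
check_hpom_passwd 'Secret123'    # prints: ok
check_hpom_passwd 'pa$$word'     # prints: invalid
```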
After you answer all the Oracle or PostgreSQL database-related questions, the ovoconfigure
script checks the database configuration data and the summary of all provided answers appears.
23. Check the summary data, and then press ENTER to perform the database configuration.
Caution: If the database configuration fails, you can reconfigure the database by typing one
of the following:
l back: All the questions related to the database configuration must be answered once
again.
l repeat: Answers that you provided for the database configuration are reused.
The entire database configuration procedure is written in the log files, which you can view at any
time during the database configuration. For more information, see "Viewing the Installation Log
Files" on page 57.
24. Press ENTER to continue with the server initialization.
During the server initialization, the ovoconfigure script performs the integration into the start/stop
sequence.
25. Press ENTER to continue with the server final configuration that consists of the following:
l Assigning the management server policy group
l Configuring subagents
l Backing up certificates
26. If you want to enable the Event Storm Filter component, press ENTER. Otherwise, press n
followed by ENTER.
For detailed information about the Event Storm Filter component, see the HPOM Administrator's
Reference.
27. If you want to enable the Health Check component, press ENTER. Otherwise, press n followed
by ENTER.
For detailed information about the Health Check component, see the HPOM Administrator's
Reference.
The ovoconfigure script continues with installing server add-on packages.
28. Press ENTER to confirm that you want to install the server add-on packages.
After the server add-on packages are installed, the ovoconfigure script starts the Administration
UI installation and you are prompted to answer the questions listed in Table 8 by either accepting
the default value and pressing ENTER, or typing the desired value followed by ENTER.
Table 8: Administration UI-related Questions
Administration UI port: The web application port to which you connect with the web browser. The
default value is 9662.
Administration UI secure port: The web application secure port to which you connect with the web
browser. The default value is 9663. It is not possible to disable either port. If you enter a
non-default port number, you must also specify the alternate port number in the URL, which is
used to invoke the Administration UI Web Application from the web browser.
Administration UI XML DB password: The password for the XML database, which stores the
Administration UI users, user groups, user roles, and so on.
Database opc_op password: The password for the opc_op database user. The default value is
opc_op.
After the Administration UI is successfully installed, the ovoconfigure script asks you if you
want to switch HPOM to non-root operation.
29. Optional: Open a second window and install the latest Administration UI patch.
Close the second window after you have successfully installed the patch. Return to the original
window to continue with the configuration.
30. If you do not want to switch HPOM to non-root operation, press ENTER to accept the default
value n.
If you want to switch HPOM to non-root operation, follow the steps listed in "Configuring HPOM
for Non-Root Operation" on page 56.
For detailed information about non-root operation, see the HPOM Concepts Guide. For details
about how to configure the Administration UI for non-root operation, see the HPOM
Administrator's Reference.
31. Optional: Check if the installation of the HPOM software on the management server was
successful.
For more information, see "Starting HPOM and Verifying the Installation" on page 58.
32. Make the HPOM manual pages available for users by adding the /opt/OV/man directory to the
MANPATH environment variable. To do so, run the following commands:
MANPATH=$MANPATH:/opt/OV/man
export MANPATH
The MANPATH environment variable must be set either for a particular user in the .profile or
.bash_profile file, or for all users in the /etc/profile file.
Note: It is recommended to set the PATH variable to include the following HPOM directories
on the management server: /opt/OV/bin, /opt/OV/bin/OpC, /opt/OV/nonOV/perl/a/bin,
and /opt/OV/bin/OpC/utils.
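Combining this step with the note above, a profile fragment (for example, in /etc/profile) might look as follows; the paths are the ones listed in this guide:

```shell
# Make the HPOM manual pages and command directories available to users.
# Directories are taken from the guide: man pages plus the recommended
# PATH additions for the management server.
MANPATH=${MANPATH:-}:/opt/OV/man
PATH=$PATH:/opt/OV/bin:/opt/OV/bin/OpC:/opt/OV/nonOV/perl/a/bin:/opt/OV/bin/OpC/utils
export MANPATH PATH
```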
Caution: After you answer all the Oracle database-related questions, continue with the HPOM
installation and configuration steps on page 48.
Table 9 shows which questions you must answer if you use the Oracle database with HPOM.
Table 9: Oracle Database-related Questions
Enable automatic database startup? The default answer is y.
Set up the database manually (local/remote)? This question allows you to choose how to create
the database, manually or automatically.
l If you want to create the database manually, press y followed by ENTER. In this case, the
ovoconfigure script pauses instead of creating the database, allowing you to manually create
the database. After you create the database manually as described in "Setting Up HPOM with
a Remote/Manual Oracle Database" on page 87, the ovoconfigure script configures HPOM to
use the created database.
l If you want the ovoconfigure script to create the database automatically, press ENTER to
accept the default answer.
Oracle Base: The Oracle database base directory, which is usually the same as the ORACLE_BASE
variable. The default is /opt/oracle.
Oracle Home: The Oracle database home directory, which is usually the same as the ORACLE_HOME
variable. The default is /opt/oracle/product/11.1.0.
Oracle User: The Oracle user for the HP Operations management server database. The default is
oracle.
Oracle Data Directory: The directory where the HP Operations management server database files
are stored (for example, /opt/oracle/oradata).
Oracle User opc_op Password: The password for the opc_op database user. The default is
opc_op.
Oracle User opc_report Password: The password for the opc_report database user. The default is
opc_report.
Oracle User system Password: The password for the system database user. The default is
manager.
Caution: After you answer all the PostgreSQL database-related questions, continue with the
HPOM installation and configuration steps on page 48.
Depending on whether you want to have a managed database (HPOM manages a local database that
will be created from the beginning or was created by using the psqlcluster tool) or an independent
database (HPOM connects to an independent local or remote database, but does not manage it), press
one of the following two keys, and then carefully follow the instructions:
Note: If the independent database is not created, the ovoconfigure script pauses, allowing
you to manually create the database.
Regardless of whether you choose y (the default answer) or n, the following question is displayed:
After you choose the PostgreSQL cluster directory, the ovoconfigure script checks it. Depending on
whether this directory is empty or non-existent, or it belongs to an HPOM-created cluster, you must
answer the questions described in either Table 10 or Table 11.
Question Description
PSQL binary directory: The directory where the PostgreSQL binaries are stored. Keep in mind that
the location of this directory varies depending on the distribution or the version.
PSQL library directory: The directory where the PostgreSQL client libraries are stored. Keep in
mind that the location of this directory varies depending on the distribution or the version.
PSQL data directory: The directory where the data tablespaces are stored. This directory must be
empty or non-existent. If you do not provide an answer to this question, <cluster_dir>/HPOM is
used.
PSQL index directory: The directory where the index tablespaces are stored. This directory must
be empty or non-existent. If you do not provide an answer to this question, <cluster_dir>/HPOM is
used.
Do you wish to start the PSQL cluster automatically at boot time? Press y if you want the
database cluster to be started automatically each time the system is started. Otherwise, you must
start the database cluster manually before you can start HPOM.
PSQL port: Make sure no other process uses this port at any time, including after a system restart.
Database name: The name of the HPOM database. The default is openview.
OS DBA user: The operating system user that controls database processes and has access to all
PostgreSQL binaries and HPOM database directories. This user is usually set to postgres.
DB DBA user: The name of the administrator user inside the database cluster or server, which is
usually set to postgres.
DB DBA user password: The password of the administrator user inside the database cluster or
server, which is usually set to postgres.
Database opc_op password: The password for the opc_op database user. The default is opc_op.
Database opc_report password: The password for the opc_report database user. The default is
opc_report.
Question Description
PSQL library directory: The directory where the PostgreSQL client libraries are stored. Keep in
mind that the location of this directory varies depending on the distribution or the version.
Database name: The name of the HPOM database. The default is openview.
Database opc_op password: The password for the opc_op database user. The default is opc_op.
Database opc_report password: The password for the opc_report database user. The default is
opc_report.
Question Description
PSQL binary directory: The directory where the PostgreSQL binaries are stored. Keep in mind that
the location of this directory varies depending on the distribution or the version.
PSQL library directory: The directory where the PostgreSQL client libraries are stored. Keep in
mind that the location of this directory varies depending on the distribution or the version.
PSQL port: Make sure no other process uses this port at any time, including after a system restart.
Question Description
Database name: The name of the HPOM database. The default is openview.
DB DBA user: The name of the administrator user inside the database cluster or the server. It is
usually set to postgres.
DB DBA user password: The password of the administrator user inside the database cluster or the
server. It is usually set to postgres.
Database opc_op password: The password for the opc_op database user. The default is opc_op.
Database opc_report password: The password for the opc_report database user. The default is
opc_report.
Caution: You cannot switch back to the root mode if you have configured HPOM for non-root
operation.
1. When the ovoconfigure script asks you if you want to switch HPOM to non-root operation, leave
the ovoconfigure window open, and then open a second window.
2. In the new window, verify whether the ovoinstall script has created the non-root (opc_op) user
by running the following command:
cat /etc/passwd | grep opc_op
Note the home directory in which the non-root user has been created.
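A sketch of this verification, including extracting the home directory noted above. The fallback passwd entry is made up for illustration and is only used when opc_op does not exist locally; getent also resolves NIS-managed accounts:

```shell
# Look up the opc_op entry (field 6 of a passwd entry is the home
# directory). The echoed entry is a hypothetical example, used here
# only so the sketch still prints something when opc_op is absent.
entry=$(getent passwd opc_op || echo 'opc_op:x:1501:1501:HPOM operator:/home/opc_op:/bin/sh')
home=$(printf '%s\n' "$entry" | awk -F: '{print $6}')
echo "$home"
```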
3. Log on as a non-root user:
su - opc_op
The operating system logs you on to the home directory.
4. Verify the current directory by running the pwd command.
The current directory must be the same as the home directory in which the ovoinstall script
created the non-root user. If the current directory is different from the non-root user home directory,
perform the following steps:
more /var/opt/OV/log/OpC/mgmt_sv/installation.log
more /var/opt/OV/log/OpC/mgmt_sv/installation.log.verbose
HPOM_Administration_UI_Install_<date_time>.log
For example:
HPOM_Administration_UI_Install_04_01_2014_16_00_23.log
1. As the root user, verify that all HP Operations server services are running by entering the
following:
/opt/OV/bin/OpC/opcsv
An output similar to the following one should appear:
If the HP Operations management server services are not running, you can start them with the
following command:
/opt/OV/bin/OpC/opcsv -start
Caution: You must have a local agent installed to perform steps 2 and 3.
2. Verify that all the HPE Operations Agent services are running on the management server system
by running the following command:
/opt/OV/bin/OpC/opcagt -status
An output similar to the following one should appear:
If the HPE Operations Agent services are not running, you can start them with the following
command:
/opt/OV/bin/OpC/opcagt -start
3. Submit test messages by typing the following:
/opt/OV/bin/OpC/utils/submit.sh
This program sends simulated messages to the message browser. The number of messages
received depends on the configuration of your system. Under normal conditions, you will usually
receive at least two messages.
4. Complete one of the following tasks to be able to test and use an application configured as
Window (Input/Output) from the HPOM User's Assigned Applications window:
l As the root user, set the UNIX password for opc_op for each managed node where you want
to use Input/Output applications.
To do this, type the following:
passwd opc_op
Note: By default, the opc_op user is not allowed to log on to the system (* entry in the
password field of /etc/passwd).
l Make sure the $HOME/.rhosts file exists on the managed node ($HOME is the home directory of
opc_op on the managed node). If it does not exist, create it.
Make an entry in .rhosts for the opc_op user on the managed node. For example:
<management_server>.<domain> opc_op
It is not recommended to keep the .rhosts entry in a production environment because it can
represent a security risk.
l Make sure the /etc/hosts.equiv file exists on the managed node. If it does not exist, create
it.
Add the hostname of your management server to this file. For example:
<management_server>.<domain>.com
It is not recommended to keep the /etc/hosts.equiv entry in a production environment
because it can represent a security risk.
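The .rhosts variant above can be sketched as follows; the management server hostname is a placeholder:

```shell
# Add the management server entry to opc_op's .rhosts on the managed
# node, avoiding a duplicate line. mgmtsrv.example.com is a placeholder.
# Remove the entry again in production; it can represent a security risk.
RHOSTS="$HOME/.rhosts"
touch "$RHOSTS"
grep -q '^mgmtsrv\.example\.com opc_op$' "$RHOSTS" ||
    echo 'mgmtsrv.example.com opc_op' >> "$RHOSTS"
chmod 600 "$RHOSTS"
```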
You can change the passwords of these Oracle users with the Oracle tool, SQL*Plus.
For example:
su - oracle
sqlplus /nolog
SQL> connect / as sysdba
SQL> alter user system identified by <new_password>;
SQL> exit
exit
You can set the passwords for all the database users during the database creation or configuration, or
you can change these passwords later on by running the following commands:
su - postgres
psql -U <DB_DBA_user> -h <hostname> -p <port>
postgres=# alter user <user> with password '<password>';
postgres=# alter user <user> valid until 'infinity';
postgres=# \q
exit
In this instance, <DB_DBA_user> is the name of the administrator user inside the database cluster or
server, <hostname> is the system on which the database cluster or server is installed, and <port> is
the port on which the database cluster or server listens.
Caution: Make sure that you change the password in the ~/.pgpass file of the operating system
user. Otherwise, the HPOM scripts and programs may stop working.
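A sketch of updating the password file; it writes to an example file rather than the real ~/.pgpass, and all values except the default HPOM database name openview are placeholders:

```shell
# Record the changed password so HPOM tools keep connecting
# non-interactively. The line format is defined by the PostgreSQL
# documentation: hostname:port:database:username:password.
# On the real system, edit $HOME/.pgpass itself.
PGPASSFILE="$HOME/.pgpass.example"    # placeholder path for this sketch
printf '%s\n' 'dbhost.example.com:5432:openview:opc_op:NewPassword1' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless it is private
```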
If you want to use a separate system as the database server, first configure the database server
system as described in "Setting Up HPOM with a Remote/Manual Oracle Database" on page 87 or
"Setting Up HPOM with a Remote/Manual PostgreSQL Database" on page 99.
1. Make sure that the LANG environment variable is set to a UTF-8 locale.
For more information, see "Configuring Input/Output for Multiple Language Support" on page 26.
To check the setting, type the following command:
echo $LANG
2. For an Oracle database only: Export all Oracle environment variables including NLS_LANG.
For instructions, see "Before Installing an Oracle Database" on page 29.
Note: Make sure that you set the same ORACLE_SID value as the one you specified before
running the ovoinstall script.
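The two steps above can be sketched as a shell fragment. The directory values are the defaults from Table 9; the SID openview and the NLS_LANG value are assumptions and must match your installation:

```shell
# Step 1: confirm that LANG selects a UTF-8 locale.
case "${LANG:-}" in
  *.UTF-8 | *.utf8 ) echo "LANG is a UTF-8 locale: $LANG" ;;
  * ) echo "Warning: LANG is not a UTF-8 locale (${LANG:-unset})" ;;
esac
# Step 2 (Oracle only): export the Oracle environment variables using the
# defaults from Table 9. The SID and NLS_LANG below are assumptions.
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.1.0
ORACLE_SID=openview
NLS_LANG=american_america.AL32UTF8
export ORACLE_BASE ORACLE_HOME ORACLE_SID NLS_LANG
```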
This section describes how to change your database password as well as how to reconfigure HPOM to
work with a new database name.
# su - oracle
$ sqlplus /nolog
SQL> conn / as SYSDBA
SQL> alter user <user_name> identified by <new_password>;
SQL> commit;
3. If you changed the password of the opc_op or opc_report user: Make sure that you also update the
Administration UI configuration.
4. If you changed the password of the SYSTEM (RMAN) user: Update the HPOM configuration by
running the following commands:
RMAN_PASSWD=<new_password>
export RMAN_PASSWD
/opt/OV/bin/OpC/opcdbpwd -rpr
unset RMAN_PASSWD
Note: If you do not have access to the admin user, see the PostgreSQL documentation
describing the pg_hba.conf file and how to temporarily disable authentication.
4. Connect to the database, and then change the PostgreSQL password by running the following
commands:
psql -U <DB_DBA_user> -h <hostname> -p <port>
postgres=# ALTER USER <user> WITH ENCRYPTED PASSWORD '<password>';
5. If you changed the opc_op user password: Update the HPOM configuration by running the
following commands:
OPC_OP_PASSWD=<new_password>
export OPC_OP_PASSWD
/opt/OV/bin/OpC/opcdbpwd -pre
unset OPC_OP_PASSWD
6. If you changed the DB DBA user password: Update the HPOM configuration by running the
following commands:
RMAN_PASSWD=<new_password>
export RMAN_PASSWD
/opt/OV/bin/OpC/opcdbpwd -rpr
unset RMAN_PASSWD
7. Edit the .pgpass file in the home directory by replacing the old password with the new one, so that
HPOM connects to the database with the new password. For details, see the PostgreSQL
documentation.
8. If you changed the password of the opc_op or opc_report user: Make sure that you also update the
Administration UI configuration.
/opt/OV/OMU/adminUI/adminui clean
/opt/OV/OMU/adminUI/adminui start
5. Make sure that all database-specific configuration files are also updated (for example, the listener
files for the Oracle database or the .pgpass file for the PostgreSQL database).
6. Start all database and HPOM processes.
Complete the following tasks to configure the components in your HPOM environment to use TLS/SSL
secure connection to the Oracle database:
Prerequisite
Make sure that your Oracle server and client are configured for TLS/SSL support. It is recommended
that you use an Oracle 12c database with Oracle patch set 12.1.0.2.0 or later. However, you can also
configure Oracle database version 11g Release 2 to use TLS/SSL by installing a bundle patch. For
more information, see the 11.2.0.4 Connections Fail With ORA-12560 When Using TLS 1.1 or 1.2 (By
Setting SSL_VERSION In Sqlnet.ora) Oracle knowledge document (Document ID 2026419.1) at
https://support.oracle.com.
https://docs.oracle.com/database/121/DBSEG/
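As a sketch only, the client-side sqlnet.ora entries for a TLS connection with a file-based Oracle wallet might look like the following; the wallet directory and TLS version here are placeholder assumptions, so consult the Oracle security guide referenced above for the authoritative parameter list:

```shell
# Sketch (assumption): write client-side sqlnet.ora entries for TLS with a
# file-based Oracle wallet. The wallet directory is a placeholder.
cat > sqlnet.ora <<'EOF'
SSL_VERSION = 1.2
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /placeholder/wallet/dir)))
EOF
grep 'SSL_VERSION' sqlnet.ora
```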
/opt/OV/OMU/adminUI/conf/opccfg.properties
/opt/OV/OMU/adminUI/conf/ovoappl.properties
/opt/OV/OMU/adminUI/conf/ovoconfig.properties
/opt/OV/OMU/adminUI/conf/ovoinstall.properties
For example, if you are using the Oracle Net alias ov_net_ssl, change the following line:
ovodb.url=jdbc\:oracle\:thin\:@<server hostname>\:1521\:openview
To:
ovodb.url=jdbc\:oracle\:oci\:@ov_net_ssl
Note: The Oracle JDBC OCI driver requires an Oracle client installation of the same version
as the driver. HPOM 9.22 is shipped with the Oracle 12c JDBC OCI driver. If you are using an
Oracle 11g database, you must use the Oracle 11g JDBC OCI driver.
Note: The Administration UI takes longer to start if a secure Oracle connection is used.
Supported Platforms
The Java GUI is tested only on the operating system platforms listed in Table 13, and is therefore
supported only on these operating system platforms.
Caution: On all operating system platforms not listed in Table 13, you run the Java GUI at your
own risk. Running the Java GUI on a UNIX platform is not recommended because it can lead to
performance problems.
aFor the list of supported web browsers, see "Supported Web Browsers" on page 71.
Windows 2003
Windows 2003 Server (64-bit)
Windows Vista
Windows 2008 R2 (64-bit)
Windows 7
Windows 8
Windows 8.1
Windows 10
For the most up-to-date list of supported platforms, see the support matrix at the following location:
https://softwaresupport.hpe.com/km/KM323488
Supported Languages
Table 14 shows a list of languages into which the Java GUI is translated.
Table 14: Supported Languages of the Java GUI Client
Platforms                                      Languages
Mac OS X,
Mac OS X running on Intel processors           Japanese, Korean, Simplified Chinese, Spanish
Solaris 10                                     Japanese, Korean, Simplified Chinese, Spanish
Windows XP, Windows 2003,
Windows 2003 Server (64-bit), Windows Vista,
Windows 2008 R2 (64-bit), Windows 7,
Windows 8, Windows 8.1                         Japanese, Korean, Simplified Chinese, Spanish

a. For the list of supported web browsers, see "Supported Web Browsers" on page 71.
For the most up-to-date list of supported platforms, see the support matrix at the following location:
https://softwaresupport.hpe.com/km/KM323488
When starting the Java GUI, select the correct locale. The locale influences the sorting, the text
display, and the representation of date and time. It also selects the localized files for your installation.
For example, to start the Spanish Java GUI, select Spain (Spanish) in the log-on window.
Installation Requirements
This section describes the hardware and software requirements for installing the Java GUI, as well as
web browsers supported by the product.
Hardware Requirements
l UNIX or Linux
For more information, see "Installation Requirements for the Management Server" on page 13.
l Windows
The best performance is achieved with an x86-based PC with a processor of at least 1 GHz, a
minimum of 256 MB RAM, and an additional 30 MB RAM per GUI session.
Software Requirements
Make sure the following requirements are met:
Note: The kernel parameter that defines the maximum number of file descriptors per process
must be adjusted to ensure good performance.
For the most up-to-date list of supported JRE versions, see the support matrix at the following
location:
https://softwaresupport.hpe.com/km/KM323488
Note: The HPOM installation automatically installs and configures Tomcat web server version 7
on the management server.
Note: If you want to use the cockpit view client, make sure that your browser has Adobe Flash
Player 10 or higher with ActiveX installed as a plug-in.
For the most up-to-date list of supported web browser versions and architectures, see the support
matrix at the following location:
https://softwaresupport.hpe.com/km/KM323488
Note: Valid browsers are browsers with ActiveX support and external browsers. On UNIX, only an
external browser can be used. On Windows, a browser with ActiveX is the default browser.
The HP Operations management server installation automatically installs the Java GUI binaries into
the /opt/OV/www/htdocs/ito_op/ directory on the management server.
To install the Java GUI on Linux systems by using the rpm tool, follow these steps:
l Spanish:
rpm -i --nodeps /<dir>/packages/HPOvOUWwwSpa.rpm
l Japanese:
rpm -i --nodeps /<dir>/packages/HPOvOUWwwJpn.rpm
l Korean:
rpm -i --nodeps /<dir>/packages/HPOvOUWwwKor.rpm
l Simplified Chinese:
rpm -i --nodeps /<dir>/packages/HPOvOUWwwSch.rpm
In these instances, <dir> is the location where the HPOM installation tar file is extracted.
Note: To log on to the Java GUI for the first time, use default users and passwords. The default
log-on passwords are as follows:
l For administrators: OpC_adm
l For operators: OpC_op
The next time you log on, you should change your default password for security reasons. You
can change your password again later, but you will not be allowed to set the password back to
the default one.
If you want to access web pages that start Java applets in a workspace, the Java GUI must be running
as an applet. For more information about starting the Java GUI as an applet, see "Starting the Java GUI
from a Web Browser" on page 76.
Make sure you use the proper LANG variable when starting the Java GUI in languages other than
English. Starting the Java GUI by using the English locale C and then switching to the other language
may result in incorrectly displayed accented characters in some dialog boxes and in garbage
characters in the window title.
For more information about the ito_op script, see the ito_op(1M) manual page (UNIX), the ito_op.bat
script (Windows), and the HPOM Administrator's Reference.
To start the Java GUI from a web browser, follow these steps:
1. Make sure that all the prerequisites are met as described in "Installation Requirements" on page
70.
2. On the system where the Java GUI will be running, open one of the following URLs in a web
browser:
http://<management_server>:8081/ITO_OP
https://<management_server>:8444/ITO_OP
In these URLs, <management_server> is the fully qualified hostname of your management
server.
3. Follow the instructions given on the web page for downloading the Java applet.
If you want to install and access the Java GUI, you must configure your HTTP server. The
configuration varies depending on the type of HTTP server.
l Apache Tomcat (automatically installed and configured with the HPOM installation)
l Netscape
For details about configuring a Netscape web server, see "Configuring a Netscape Web Server" on
the next page.
l W3C Jigsaw
For details about configuring a W3C Jigsaw web server, see "Configuring a W3C Jigsaw Web
Server" below.
l Set up startup and shutdown operations for the HP Operations management server services.
l Start and stop a database automatically.
l Start and stop a database manually.
l Replace an HPOM database.
l Set up HPOM with a remote/manual database.
l Set up HPOM in an Oracle Real Application Clusters (RAC) environment.
You can, however, start the HP Operations management server services by using the opcsv -start
command. Similarly, you can stop the HP Operations management server services by using the opcsv
-stop command.
The opcsv command is located in the /opt/OV/bin/OpC directory and has the following functions:
The opcsv command does not start and stop the subagent processes. The subagent communication
processes are managed by the ovc command, which is located at /opt/OV/bin. If you want to stop the
HPE Operations Agent processes, use ovc -stop AGENT. If you want to start the HPE Operations
Agent processes, use ovc -start AGENT.
For more information about the opcsv and ovc commands, see the opcsv(1) and ovc(1M) manual
pages.
Tip: If you experience communication problems between the HP Operations server and agents, or
if the server processes are not correctly informed about configuration changes, restart both the
HP Operations management server and HPE Operations Agent processes:
/opt/OV/bin/OpC/opcsv -stop
/opt/OV/bin/ovc -stop AGENT
/opt/OV/bin/ovc -start AGENT
/opt/OV/bin/OpC/opcsv -start
The option for the automatic startup and shutdown of the database is set in the following file:
/etc/sysconfig/ovoracle
Change the OVORACLE and OVORALISTENER variables to 1, as shown in the following extract from the
file:
OVORACLE=1
OVORALISTENER=1
The ovopsql script is configured to run at startup, and it reads the /etc/ovopsql configuration file,
which contains a list of the database clusters that are started automatically. The configuration file is
updated automatically when you create a database cluster by using psqlcluster -ar, but you can
also customize it by editing it manually.
Note: If you want to start and stop a remote PostgreSQL database automatically, you must install
the HPOvOUPSQLConf package. For details, see "Installing and Configuring HPOM with a
Remote/Manual PostgreSQL Database" on page 105.
Caution: Start the database before starting HPOM and stop the database after stopping HPOM.
-m Shutdown mode.
-f Fast shutdown mode that rolls back all transactions and disconnects.
Note: If you do not specify any mode, the smart shutdown mode is used.
Note: To avoid unnecessary conversions taking place in the database, use the same character set
for both the database and the environment of the HPOM user interface and server processes. After
you install a database, you can no longer change the character set.
The NLS parameters are controlled by the Oracle environment variable NLS_LANG that has the following
format:
<language>_<territory>.<character_set>
For example, HPOM uses the following NLS_LANG setting for the English language:
american_america.AL32UTF8
By default, HPOM uses the value of NLS_LANG set in the environment. If NLS_LANG is not set in the
environment, HPOM uses the value specified in the following file:
/etc/opt/OV/share/conf/ovdbconf
If NLS_LANG is not present there, HPOM uses the LANG value to determine the value of NLS_LANG.
HPOM checks the character set of the Oracle database and stores this information as part of its
configuration. Oracle provides the v$nls_parameters database table that contains the settings for the
language and character set parameters.
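The NLS_LANG format described above can be illustrated with plain shell string handling; the value below is the English setting HPOM uses:

```shell
# Illustrate the <language>_<territory>.<character_set> format of NLS_LANG
# by splitting the English value used by HPOM.
NLS_LANG=american_america.AL32UTF8
language=${NLS_LANG%%_*}        # text before the first "_"  -> american
territory=${NLS_LANG#*_}        # drop the language and "_"
territory=${territory%%.*}      # text before the "."        -> america
charset=${NLS_LANG#*.}          # text after the "."         -> AL32UTF8
echo "$language $territory $charset"
```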
Local and managed database: The following processes run on the management server:
l Database processes
l HP Operations management server processes
l GUI processes
These processes connect to the database server.
l opcdbsetup tool for the Oracle database or psqlcluster and psqlsetup tools for the
PostgreSQL database
For details, see "Installing an Oracle Database" on page 28 or "Creating and Configuring a
PostgreSQL Database Cluster" on page 100.
Caution: The new database must be created with new database server binaries.
Make sure that the previously created Oracle database for HPOM is used and that it is accessible
through Oracle Net Services.
l Removing the database or dropping the tablespaces by using opcdbsetup is not supported. You can
remove the database or drop the tablespaces manually.
When removing the database manually, make sure to remove the following files from the
HP Operations management server:
l /etc/opt/OV/share/conf/ovdbconf
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbpwd.sec
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbrem.sec
l The mondbfile policy can run only on the database server. Unassign the mondbfile policy from the
HP Operations management server policy group and, if an HPE Operations Agent is running on the
database server system, assign the mondbfile policy there.
l The opcadddbf tool is not supported.
Note: For the previously created Oracle database setup, the same limitations apply as for a
remote/manual Oracle database setup.
Preparation Steps
Before installing and configuring HPOM with a remote/manual Oracle database, you must complete the
following tasks:
Note: Verify that your system meets the following Oracle requirements:
Caution: In the process of creating the database by using the Oracle Database Creation
Assistant, follow the wizard. Not all steps in the wizard are described in this procedure. In all the
steps that are not described, leave default values or make custom selections that suit your needs.
The steps for creating and configuring the HPOM database differ depending on which Oracle
database version you use:
l If you use Oracle 11g, see "Creating and Configuring Oracle Database 11g" below.
l If you use Oracle 12c, see "Creating and Configuring Oracle Database 12c" on page 92.
1. In the Database Templates window, select Custom Database, and then click Next.
Note: During the database creation, a window may pop up with the following error displayed:
2. In the Database Identification window, enter the global database name and the Oracle System
Identifier (for example, enter openview for the global database name). Click Next.
3. In the Management Options window, clear the Configure Enterprise Manager check box, and
then click Next.
Note: If you leave the default value, a warning message appears, informing you that you
must either configure a listener before you can proceed or choose to continue without the
Database Control configuration. In the latter case, which is recommended, you must clear the
Configure Enterprise Manager check box.
4. In the Database Components tab of the Database Content window, do the following:
a. Clear all the components.
b. Click Standard Database Components, and then clear all the features.
c. Click OK.
5. In the Initialization Parameters window, do the following:
a. In the Connection Mode tab, select Dedicated Server Mode.
b. In the Character Sets tab, select Use Unicode (AL32UTF8).
Note: For more information about supported character sets and NLS_LANG values, see the
HPOM Administrator's Reference.
c. Click All Initialization Parameters, and then set initialization parameters using the
recommended values listed in Table 16.
Caution: Make sure that db_block_size is at least 16384 bytes. Otherwise, the HPOM
database creation fails and you must recreate the database from the beginning.
Parameter                        Value
db_block_size                    16384
diagnostic_dest                  <ORACLE_BASE>
db_files                         80
db_file_multiblock_read_count    16
memory_target (a)                600M
log_checkpoint_interval          99999
processes                        200
dml_locks                        100
log_buffer                       1572864
max_dump_file_size               10240
open_cursors                     1024
sort_area_size                   262144
compatible                       11.1.0.0.0
nls_length_semantics             BYTE
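For reference, the recommended values above can also be collected into a pfile-style fragment. This is an illustration only (DBCA normally writes these parameters for you), and the file name is arbitrary:

```shell
# Illustration only: the Table 16 values as an init.ora-style fragment.
# diagnostic_dest is omitted because it takes your own <ORACLE_BASE> path.
cat > hpom_init_params.ora <<'EOF'
db_block_size=16384
db_files=80
db_file_multiblock_read_count=16
memory_target=600M
log_checkpoint_interval=99999
processes=200
dml_locks=100
log_buffer=1572864
max_dump_file_size=10240
open_cursors=1024
sort_area_size=262144
compatible=11.1.0.0.0
nls_length_semantics=BYTE
EOF
grep -c '=' hpom_init_params.ora
```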
6. In the Database Storage window, create tablespaces and their datafiles using the recommended
initial sizes listed in Table 17. Make sure to set OPC_TEMP as a default temporary tablespace.
Caution: Create the datafiles as autoextend files, so that the datafiles can grow as needed.
The autoextend option can be enabled in the Datafiles list under the Storage tab.
a. The variable that controls the global memory usage of the HPOM instance. The other variable,
memory_max_target, allows you to dynamically increase the value of memory_target. By default, the
memory_max_target parameter takes the same value as memory_target. If you want to adjust the
memory_target value without restarting the instance, manually specify a greater value for memory_max_target.
Note: HPOM requires at least three redo logs with a size of 20 MB each. Having more and
larger redo logs may improve performance. It is recommended that you create mirrored
copies of the redo logs on another disk. For more information, see the HPOM Administrator's
Reference.
7. In the Creation Options window, select Create Database, and then click Finish.
Caution: When the database is created, define the passwords for the SYSTEM and SYS users.
Do not forget the passwords you defined. You will need these passwords for HPOM configuration
and database administration.
1. In the Creation Mode window, select Advanced Mode, and then click Next.
Note: During the database creation, a window may pop up with the following error displayed:
2. In the Database Template window, select Custom Database, and then click Next.
3. In the Database Identification window, enter the global database name and the SID (for example,
enter openview for the global database name). Click Next.
4. In the Management Options window, clear the Configure Enterprise Manager (EM) Database
Express check box, and then click Next.
5. In the Database Credentials window, select Use the Same Administrative Password for All
Accounts, and then specify the password for the SYS and SYSTEM users. Click Next.
Caution: Do not forget the password you specified. You will need it for the HPOM
configuration and database administration.
6. In the Network Configuration window, specify the listener name and port. Click Next.
7. In the Storage Locations window, do the following:
a. Under Database Files, select the File System storage type, and then select Use Database
File Locations from Template.
b. Under Recovery Related Files, select the File System storage type, and then select Specify
Fast Recovery Area.
c. Click Next.
8. In the Database Components tab of the Database Options window, clear all the components, and
then click Next.
9. In the Initialization Parameters window, do the following:
a. In the Memory tab, set the memory size to 600 MB.
b. In the Sizing tab, set the block size to 16384 bytes and the number of operating system user
processes to 200.
Note: For more information about supported character sets and NLS_LANG values, see the
HPOM Administrator's Reference.
Caution: Make sure that db_block_size is at least 16384 bytes. Otherwise, the HPOM
database creation fails and you must recreate the database from the beginning.
f. Click Next.
10. In the Creation Options window, select the Create Database check box, and then click
Customize Storage Locations.
The Customize Storage window opens. Create tablespaces and their datafiles using the
recommended initial sizes listed in Table 17.
Create the datafiles as autoextend files, so that the datafiles can grow as needed. The autoextend
option can be enabled in the Datafiles list.
Note: HPOM requires at least three redo logs with a size of 20 MB each. Having more and
larger redo logs may improve performance.
1. Connect as sysdba:
a. Depending on your system, choose one of the following:
o UNIX and Linux:
Log on as the oracle user by running the following command:
su - oracle
o Windows:
Move to the <ORACLE_HOME>\bin directory as the Oracle owner.
grant connect,
resource,
create public synonym,
drop public synonym,
alter tablespace
to opc_op;
6. Oracle 12c only: Remove the default disk space restrictions for the opc_op user by running the
following command:
grant unlimited tablespace to opc_op;
7. Prevent the opc_op password from expiring by running the following command:
SQL> alter profile default limit password_life_time unlimited;
8. Optional: Configure additional user rights on the database server.
If you want to use the mondbfile policy, the opc_odc tool, and the HPOM data backup on the
management server, type the following:
create role opc_monitorer;
Caution: The mondbfile policy can run only on the database server. If the HPE Operations
Agent is running on the database server, you can assign the mondbfile policy there.
Note: The example files described in "Syntax Examples for the .ora Files" below must be
followed exactly, including new lines, spaces, and tabs. In all example files, change the
hostname and directory path information according to your system settings.
2. Depending on your system, choose one of the following to start the listener:
l UNIX and Linux systems:
As the oracle user, run the following command:
lsnrctl start
l Windows systems:
Move to the <ORACLE_HOME>\bin directory as the Oracle owner, and then run the following
command:
lsnrctl start
On Windows systems, the example contents of the sqlnet.ora file also include the following line:
SQLNET.AUTHENTICATION_SERVICES = (NTS)
Software on the Management Server" on page 43 with regard to the following steps:
1. When the ovoinstall script asks you if you want to continue with the server configuration, leave
the ovoinstall window open, and then open a new window.
2. In the new window, as the root user, install the latest HP Operations management server patch,
and then type y followed by ENTER to continue with the server configuration.
The ovoconfigure script asks you if you want to configure the database.
3. Type y followed by ENTER.
When the ovoconfigure script asks you if you want to set up the database manually (local or
remote), leave the ovoconfigure window open.
4. Open a new window (a terminal to the database server, either local or remote) and, as the root
user, follow these steps:
a. Export ORACLE_HOME, ORACLE_SID, LANG, and LC_ALL (for an appropriate LANG value, see the
HPOM Administrator's Reference).
Note: Make sure that you use ORACLE_HOME of the database client installation, and not
ORACLE_HOME of the database server.
b. Copy the following Net files from the Oracle database server to the HP Operations
management server:
o $ORACLE_HOME/network/admin/sqlnet.ora
o $ORACLE_HOME/network/admin/tnsnames.ora
o $ORACLE_HOME/network/admin/tnsnav.ora
These files are required on the database server and the HP Operations management server.
When you copy the files to the HP Operations management server, check that the directory
paths point to the correct locations and modify them if necessary.
Note: The tnsnav.ora and sqlnet.ora files are optional. If you configured these files on
the database server, you should also configure them on the HP Operations management
server.
If you copy the sqlnet.ora file from the Windows system, remove the following line from
it on the HP Operations management server:
SQLNET.AUTHENTICATION_SERVICES = (NTS)
5. Log on as the oracle user and verify that you can connect to the database. Run the following
commands:
su - oracle
sqlplus opc_op@ov_net
6. Return to the ovoconfigure window. Type y followed by ENTER to configure the database.
Note: If the database configuration fails, you can perform the database configuration step
manually by using opcdbsetup -p.
If you rerun ovoconfigure after successfully configuring the database with opcdbsetup -p,
type n when the following question appears:
Configure the database?
7. Optional: If you configured additional user rights on the database server during the process of
configuring users, passwords, and rights manually, you can run /opt/OV/contrib/OpC/opc_odc
to verify the database setup (the log file is in /tmp/opc_odc.log).
8. Configure the Administration UI database connection parameters:
a. Add the major Oracle database release number to the ovodb.DBMajorVersion property in the
ovoappl.properties, opccfg.properties, and ovoconfig.properties files. For example:
ovodb.DBMajorVersion=11
Make sure that you do not use blank spaces.
b. Edit the ovodb.url property in the ovoinstall.properties, ovoconfig.properties,
opccfg.properties, and ovoappl.properties files as follows:
ovodb.url=jdbc:oracle:thin:@<db_server_hostname>:<db_port>:<db_name>
In this instance, <db_server_hostname> is the hostname of the system where the remote
database is located, <db_port> is the database port, and <db_name> is the name of the
database.
c. Restart the Administration UI by running the following commands:
/opt/OV/OMU/adminUI/adminui clean
/opt/OV/OMU/adminUI/adminui start
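The property edits in steps 8a and 8b can be scripted with sed. The following sketch runs on a scratch copy so it can be tried anywhere; the hostname, port, and database name are placeholders, and in a real installation you would apply the same substitution to the four files under /opt/OV/OMU/adminUI/conf/:

```shell
# Sketch: rewrite ovodb.url in a properties file with sed. A scratch copy
# is used here; the URL values are placeholders.
conf=$(mktemp)
printf 'ovodb.DBMajorVersion=11\novodb.url=jdbc:oracle:thin:@old:1521:openview\n' > "$conf"
sed -i 's|^ovodb\.url=.*|ovodb.url=jdbc:oracle:thin:@dbhost.example.com:1521:openview|' "$conf"
grep '^ovodb.url=' "$conf"
```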
In a remote database scenario, you must make sure that the previously created PostgreSQL database
for HPOM is accessible through the network.
l For an independent PostgreSQL database setup, removing the database cluster or dropping the
database by using opcdbsetup is not supported. You can remove the database cluster or drop the
database manually.
When removing the database cluster manually, make sure to remove the following files from the
HP Operations management server:
l /etc/opt/OV/share/conf/ovdbconf
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbpwd.sec
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbrem.sec
l The mondbfile policy is not supported with PostgreSQL. The mondbfile policy can run only on the
database server. Unassign the mondbfile policy from the HP Operations management server policy
group and, if an HPE Operations Agent is running on the database server system, assign the
mondbfile policy there.
l The opcadddbf tool is not used with PostgreSQL.
Note: Before proceeding, verify that the PostgreSQL version is 9.1, 9.2, 9.3, 9.4, or 9.5.
Open-source versions and commercial offerings from EnterpriseDB are supported.
Depending on whether you want to create and configure a PostgreSQL database cluster by using the
psqlcluster tool or manually, follow the instructions described in one of the following sections:
l "Creating and Configuring a PostgreSQL Database Cluster by Using the psqlcluster Tool" below
l "Creating and Configuring a PostgreSQL Database Cluster Manually" on the next page
/opt/OV/bin/OpC
psqlcluster -d <cluster_dir>
-b <path_to_psql_binaries>
[-o <OS_DBA_user>]
[-dt <data_tablespace_dir>]
[-it <index_tablespace_dir>]
-p <db_port>
[-dbu <DB_DBA_user>]
[-dbp <DB_DBA_password>]
-ar
[-u]
-h
You can use the following options with the psqlcluster tool:
-dt <data_tablespace_dir>     Specifies the directory where the data tablespaces are stored.
-it <index_tablespace_dir>    Specifies the directory where the index tablespaces are stored.
-dbu <DB_DBA_user>            Specifies the name of the administrator user inside the
                              database cluster or server.
-dbp <DB_DBA_password>        Specifies the password of the administrator user inside the
                              database cluster or server.
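Putting the options together, an invocation might look like the following sketch. All values are examples, and because psqlcluster exists only on an HPOM management server, the command line is only assembled and printed here rather than executed:

```shell
# Example psqlcluster invocation (paths, port, and users are placeholders).
# The command is assembled and printed, not executed, since the tool is
# present only on an HPOM management server.
cmd="/opt/OV/bin/OpC/psqlcluster -d /var/opt/psql/hpom -b /usr/pgsql/bin -o postgres -p 5432 -dbu postgres -ar"
echo "$cmd"
```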
1. Verify that the operating system user (OS DBA user) is already created by the installation program
or packages. If the operating system user is not created or you want to use another user, make
sure to create it at this point.
2. Create a cluster directory where the main PostgreSQL cluster files will be stored. To do this, run
the following command:
mkdir -p <cluster_directory>
3. Apply proper permissions to the cluster directory by running the following commands:
chown <OS_DBA_user> <cluster_directory>
chmod 700 <cluster_directory>
4. Create a file containing the password of the administrator user inside the database cluster or
server by running the following command:
echo <DB_DBA_user_password> > <password_file>
5. Create a database cluster by using the initdb script provided by PostgreSQL. To do this, run the
following commands:
su - <OS_DBA_user>
<PSQL_bin_directory>/initdb -D <cluster_directory> \
-A md5 -E UTF8 --locale=en_US.utf8 -U <DB_DBA_user> --pwfile=<password_file>
By running the initdb script, the basic structure of the database cluster is created and initialized.
6. Configure the database cluster by following these steps:
a. Open the <cluster_directory>/postgresql.conf file, and then change the port, listen_
addresses, and max_locks_per_transaction parameters according to your needs.
For example:
port = 5432
listen_addresses = '*'
max_locks_per_transaction = 256 # min 10, default 64
Note: You can also customize other parameters to adapt the database to the
environment needs (for example, shared_buffers and work_mem). For details, see the
PostgreSQL documentation.
Caution: Make sure that the HP Operations management server can access the
PostgreSQL port on the database system by checking the configuration of firewalls,
proxies, and Network Address Translation (NAT).
c. Edit the .pgpass file under the <OS_DBA_user> home directory to add local access to the
administrator user inside the database cluster or the server.
For example:
localhost:<Port>:*:<DB_DBA_user>:<DB_DBA_user_password>
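PostgreSQL ignores a .pgpass file unless it is readable only by its owner, so a sketch of creating the entry (with placeholder values for the port, user, and password) is:

```shell
# Sketch: append a local-access .pgpass entry and restrict permissions.
# PostgreSQL ignores the file unless its mode is 0600. All field values
# below are placeholders.
PGPASSFILE="${HOME}/.pgpass"
echo 'localhost:5432:*:postgres:secret' >> "$PGPASSFILE"
chmod 600 "$PGPASSFILE"
```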
7. Start the database by running the following commands:
su - <OS_DBA_user>
<PSQL_bin_directory>/pg_ctl -D <cluster_directory> \
start -l <cluster_directory>/logfile
8. Create the data tablespace and index tablespace directories. For each directory, perform as
follows:
a. Create a directory:
mkdir -p <directory>
b. Apply proper permissions to the directory:
chown <OS_DBA_user> <directory>
chmod 700 <directory>
e. Quit the PostgreSQL session, and then go back to the terminal window:
\q
1. When the ovoinstall script asks you if you want to continue with the server configuration, leave
the ovoinstall window open, and then open a new window.
2. In the new window, as the root user, install the latest HP Operations management server patch (if
needed), and then type y followed by ENTER to continue with the server configuration.
The ovoconfigure script asks you if you want to configure the database.
3. Type y followed by ENTER.
The following question appears:
Will HPOM run on an Oracle instance (n for PostgreSQL)?
4. Type n followed by ENTER.
The ovoconfigure script asks you if you want HPOM to manage the PostgreSQL database
cluster.
5. Type n followed by ENTER.
You are asked a series of questions about the database configuration. For detailed information
about these questions, see "Configuring a PostgreSQL Database" on page 52.
After you answer all the database-related questions, the summary of all provided answers
appears.
6. After you check the data, type y followed by ENTER.
When the ovoconfigure script asks you to perform the remote/manual database configuration,
leave the ovoconfigure window open.
7. Open a new window (a terminal to the database server, either local or remote) and, as the root
user, choose how to create a database cluster, manually or automatically.
Caution: Before choosing the way of creating the database cluster, make sure that you
performed all the steps described in "Installing PostgreSQL Server Binaries" on page 40 and
"Preparing HPOM to Use the PostgreSQL Database" on page 40.
l Creating a database cluster manually: To create a database cluster manually, follow the
instructions described in "Creating and Configuring a PostgreSQL Database Cluster Manually"
on page 102.
l Creating a database cluster automatically: To create a database cluster automatically, use the
psqlcluster tool as described in "Creating and Configuring a PostgreSQL Database Cluster
by Using the psqlcluster Tool" on page 101.
If you have a database on a local system (that is, a manual PostgreSQL database), the
psqlcluster tool is already on the system. On the other hand, if you have a database on a
remote system (that is, a remote PostgreSQL database), you must obtain the HPOvOUPSQLConf
package that is appropriate for the architecture of the database system, copy it to the database
system, and then install it there according to the procedure indicated for your operating system.
You can find the latest version of the HPOvOUPSQLConf package that installs a copy of the
psqlcluster tool in the database system at the following location:
/var/opt/OV/packages/PSQL
8. Log on as the operating system user (OS DBA user), and then verify that you can connect to the
database.
Run the following commands:
su - <OS_DBA_user>
<PSQL_bin_directory>/psql -p <Port> -U <DB_DBA_user> -h localhost
psql > \q
9. Return to the ovoconfigure window. Type y followed by ENTER to configure the database.
Note: If the database configuration fails, you can perform the database configuration step
manually by using psqlsetup.
10. Optional (use only if the database configuration fails): Set up the PostgreSQL database cluster to
be used with HPOM by using the psqlsetup tool that you can find at the following location:
/opt/OV/bin/OpC
The syntax of the psqlsetup tool is as follows:
psqlsetup -b <path_to_psql_binaries>
-l <path_to_psql_libs>
-o <OS_DBA_user>
-h <hostname>
-p <db_port>
[-d <database_name>]
-dba_user <DB_DBA_user>
-dba_pass <DB_DBA_password>
[-dbop_pass <DB_opc_op_password>]
[-dbrep_pass <DB_opc_report_password>]
[-u]
[-ni]
[-help]
You can use the following options with the psqlsetup tool:
-dba_user <DB_DBA_user>               Specifies the name of the administrator user inside the
                                      database cluster or server.
-dba_pass <DB_DBA_password>           Specifies the password of the administrator user inside the
                                      database cluster or server.
-dbop_pass <DB_opc_op_password>       Specifies the password for the opc_op database user.
-dbrep_pass <DB_opc_report_password>  Specifies the password for the opc_report database user.
Note: If you rerun ovoconfigure after successfully configuring the database with
psqlsetup, make sure that you type n when the following question appears:
11. Optional: If you configured additional user rights on the database server during the process of
configuring users, passwords, and rights manually, you can run /opt/OV/contrib/OpC/opc_odc
to verify the database setup (the log file is in /tmp/opc_odc.log).
12. Configure the Administration UI database connection parameters:
a. Add the first digit group of the major PostgreSQL version to the ovodb.DBMajorVersion
property in the ovoappl.properties, opccfg.properties, and ovoconfig.properties
files. For example, for PostgreSQL version 9.1, add 9:
ovodb.DBMajorVersion=9
Make sure that you do not use blank spaces.
b. Edit the ovodb.url property in the ovoinstall.properties, ovoconfig.properties,
opccfg.properties, and ovoappl.properties files as follows:
ovodb.url=jdbc:postgresql://<db_server_hostname>:<db_port>/<db_name>
In this instance, <db_server_hostname> is the hostname of the system where the remote
database is located, <db_port> is the database port, and <db_name> is the name of the
database.
c. Restart the Administration UI by running the following commands:
/opt/OV/OMU/adminUI/adminui clean
/opt/OV/OMU/adminUI/adminui start
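The property edits in step 12 can also be scripted. The following sketch performs both substitutions on a scratch copy of one properties file so the effect can be inspected; the real files live under /opt/OV/OMU/adminUI/conf, and the hostname, port, and database name are example values:

```shell
# Demonstrate the step-12 edits (ovodb.url and ovodb.DBMajorVersion) on a
# scratch copy; dbserver.example.com, 5432, and openview are example values.
conf=$(mktemp -d)
printf 'ovodb.url=jdbc:postgresql://oldhost:5432/openview\novodb.DBMajorVersion=8\n' \
    > "$conf/ovoconfig.properties"
sed -i \
    -e 's|^ovodb\.url=.*|ovodb.url=jdbc:postgresql://dbserver.example.com:5432/openview|' \
    -e 's|^ovodb\.DBMajorVersion=.*|ovodb.DBMajorVersion=9|' \
    "$conf/ovoconfig.properties"
cat "$conf/ovoconfig.properties"
```

Applied to the real files, the url substitution covers the four files from step 12b and the version substitution the three files from step 12a.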
For detailed information about Oracle RAC server requirements, see the Oracle RAC documentation at
the following URL:
http://www.oracle.com/us/products/database/options/real-application-clusters/overview/index.html
Make sure that the previously created Oracle database for HPOM is used and that it is accessible
through Oracle Net Services.
Caution: HPOM supports Oracle 11g Release 1 RAC (11.1.0.7), Oracle 11g Release 2 RAC
(11.2.0.1 through 11.2.0.4), and Oracle 12c Release 1 RAC (12.1.0.1 and 12.1.0.2).
l Removing the database or dropping the tablespaces by using opcdbsetup is not supported. You can
remove the database or drop the tablespaces manually.
When removing the database manually, make sure to remove the following files from the
HP Operations management server:
l /etc/opt/OV/share/conf/ovdbconf
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbpwd.sec
l /etc/opt/OV/share/conf/OpC/mgmt_sv/.opcdbrem.sec
Note: For the previously created Oracle database setup, the same limitations apply as for a
remote/manual Oracle database setup.
Preparation Steps
Before installing and configuring HPOM in an Oracle RAC environment, you must complete the
following tasks:
l Task 1: "Creating and Configuring the HPOM Database on Cluster Nodes" below
l Task 2: "Configuring Users, Passwords, and User Rights Manually" on page 116
l Task 3: "Configuring Access to the HPOM Database" on page 118
Note: Verify that your system meets the following Oracle requirements:
$ORACLE_HOME/bin/dbca &
Note: In the process of creating the database by using the Oracle Database Creation Assistant,
follow the wizard. Not all steps in the wizard are described in this procedure. In all the steps that
are not described, leave default values or make custom selections that suit your needs.
The steps for creating and configuring the HPOM database on cluster nodes differ depending on which
Oracle database version you use:
l If you use Oracle 11g, see "Creating and Configuring Oracle Database 11g" below.
l If you use Oracle 12c, see "Creating and Configuring Oracle Database 12c" on page 115.
1. In the Welcome window, select Oracle Real Application Clusters database, and then click
Next.
Note: The Welcome window is used for creating the Oracle RAC database and it is displayed
only if the Oracle home from which it is invoked is on the cluster system. Otherwise, the
generic Welcome window opens and only the Oracle single instance database option is
available.
2. In the Operations window, select Create a Database, and then click Next.
Note: During the database creation, a window may pop up with the following error displayed:
3. If you are using Oracle Database 11g Release 2, skip this step: In the Node Selection window,
select all the cluster nodes on which you want to create the cluster database, and then click Next.
4. In the Database Templates window, select Custom Database, and then click Next.
5. In the Database Identification window, type the global database name (for example, openview)
and the Oracle system identifier prefix (for example, GRID) for your cluster database. Click Next.
6. In the Management Options window, select Configure Enterprise Manager and Configure
Database Control for local management, and then click Next.
7. In the Database Credentials window, define the passwords for the SYSTEM and SYS users, and
then click Next.
Caution: Do not forget the passwords you defined. You will need these passwords for the
HPOM configuration and database administration.
8. In the Storage Options window, select Automatic Storage Management (ASM), and then click
Next.
At this point, you may be asked to provide the ASMSNMP password. If you do not remember this
password, you can do one of the following:
Note: If specifying an incorrect password or changing the ASMSNMP password does not
solve the issue, check it with your database administrator or see the Oracle product
documentation.
9. If you are using Oracle Database 11g Release 2, skip this step: Enter the SYS password for the
ASM instance, and then click Next.
10. In the Database Components tab of the Database Content window, first clear all the components,
and then click Standard Database Components....
The Standard Database Components window opens.
11. In the Standard Database Components window, clear all the features, and then click OK.
The Database Content window opens again. Click Next to continue.
12. In the Character Sets tab of the Initialization Parameters window, select Choose from the list of
character sets.
Note: For more information on supported character sets and NLS_LANG values, see the
HPOM Administrator's Reference.
13. In the Connection Mode tab of the Initialization Parameters window, select Dedicated Server
Mode.
14. In the Initialization Parameters window, click All Initialization Parameters, and then set
initialization parameters using the recommended values (see Table 19).
Caution: Make sure that db_block_size is at least 16384 bytes. Otherwise, the HPOM
database creation fails and you must recreate the database from the beginning.
Parameter                      Value
db_block_size                  16384
diagnostic_dest                <ORACLE_BASE>
db_files                       80
db_file_multiblock_read_count  16
memory_target (a)              600M
log_checkpoint_interval        99999
processes                      200
dml_locks                      100
log_buffer                     1572864
max_dump_file_size             10240
open_cursors                   1024
sort_area_size                 262144
compatible                     11.1.0.0.0
nls_length_semantics           BYTE
15. In the Database Storage window, create tablespaces and their datafiles using the recommended
initial sizes (see Table 20). Make sure to set OPC_TEMP as a default temporary tablespace.
(a) The variable that controls the global memory usage of the HPOM instance. The other variable,
memory_max_target, allows you to dynamically increase the value of memory_target. By default, the
memory_max_target parameter takes the same value as memory_target. If you want to adjust the
memory_target value without restarting the instance, manually specify a greater value for memory_max_target.
Additional tablespaces are required depending on whether you plan to use Undo Tablespace
Management or Rollback Segments.
Caution: Create the datafiles as autoextend files, so that the datafiles can grow as needed.
The autoextend option can be enabled in the Datafiles list under the Storage tab.
Tablespace Name    Tablespace Type    Datafile Size    Next
Note: HPOM requires at least 3 redo logs with a size of 20M each. Having more and bigger
redo logs may increase the performance. It is recommended that you create mirrored copies
of the redo logs on another disk. For more information, see the HPOM Administrator's
Reference.
16. In the Creation Options window, select the Create Database option, and then click Finish.
1. In the Database Operation window, select Create Database, and then click Next.
Note: During the database creation, a window may pop up with the following error displayed:
2. In the Creation Mode window, select Advanced Mode, and then click Next.
3. In the Database Template window, select the type of database you want to configure and a
template for your database:
a. From the Database Type drop-down list, select Oracle Real Application Clusters (RAC)
database.
b. From the Configuration Type drop-down list, select Admin-Managed.
c. Select the Custom Database template.
4. In the Database Identification window, enter the global database name and the SID (for example,
enter openview for the global database name). Click Next.
5. In the Database Placement window, select all the nodes on which you want to create the cluster
database, and then click Next.
6. In the Management Options window, select the Run Cluster Verification Utility (CVU) Checks
Periodically and Configure Enterprise Manager (EM) Database Express check boxes.
7. In the Database Credentials window, select Use the Same Administrative Password for All
Accounts, and then specify the password for the SYS and SYSTEM users. Click Next.
Caution: Do not forget the password you specified. You will need it for the HPOM
configuration and database administration.
9. In the Database Components tab of the Database Options window, clear all the components, and
then click Next.
10. In the Initialization Parameters window, do the following:
a. In the Memory tab, set the memory size to 600 MB.
b. In the Sizing tab, set the block size to 16384 bytes and the number of operating system user
processes to 200.
c. In the Character Sets tab, select Use Unicode (AL32UTF8).
Note: For more information about supported character sets and NLS_LANG values, see the
HPOM Administrator's Reference.
Caution: Make sure that db_block_size is at least 16384 bytes. Otherwise, the HPOM
database creation fails and you must recreate the database from the beginning.
11. In the Creation Options window, select Create Database, and then click Customize Storage
Locations....
The Customize Storage window opens. Create tablespaces and their datafiles using the
recommended initial sizes listed in Table 20.
Note: HPOM requires at least three redo logs with the size of 20M each. Having more and
bigger redo logs may increase the performance.
12. In the Summary window, review the selected options, and then click Finish.
1. From one of the nodes, log on as the oracle user, and connect as sysdba.
Type the following commands:
su - oracle
sqlplus system as sysdba
grant connect,
resource,
create public synonym,
create table,
create view,
drop public synonym,
alter tablespace
to opc_op;
6. Oracle 12c only: Remove the default disk space restrictions for the opc_op user by running the
following command:
grant unlimited tablespace to opc_op;
7. To prevent the opc_op password from expiring, type the following:
su - oracle
sqlplus /nolog
SQL> conn / as sysdba;
SQL> alter profile default limit password_life_time unlimited;
8. Optional: Configure additional user rights on the database server.
If you want to use the opc_odc tool, type the following:
create role opc_monitorer;
grant select on v_$datafile to opc_monitorer;
grant select on v_$log to opc_monitorer;
grant select on v_$logfile to opc_monitorer;
grant select on v_$database to opc_monitorer;
To enable the connection from the HP Operations management server to the database instances on all
Oracle RAC nodes, specify your configuration preferences in the following file:
$ORACLE_HOME/network/admin/tnsnames.ora
Figure 2: Example of RAC Configuration
Figure 2 shows an example of the Oracle RAC configuration for the following managed nodes:
l node1.hp.com
With IP address 192.168.1.101, virtual node name node1-vip, and configured database instance
GRID1
l node2.hp.com
With IP address 192.168.1.100, virtual node name node2-vip, and configured database instance
GRID2
During the Oracle RAC configuration, the database name is specified (for example, openview). The
database consists of both database instances, GRID1 and GRID2.
Caution: Make sure that the ORACLE_SID variable is always properly set. In this example,
the ORACLE_SID variable is GRID1 on the first node and GRID2 on the second node.
The HP Operations management server uses the ov_net alias to connect to the HPOM database
(service name openview in Figure 2). The Oracle RAC server handles the database connections as
specified in the tnsnames.ora file by using load balancing and failover. For detailed information, see
the Oracle RAC documentation.
1. Configure Net Services that are needed on all Oracle RAC cluster nodes.
The tnsnames.ora and listener.ora files are required. Optionally, you can also configure the
tnsnav.ora and sqlnet.ora files. These files are located in the $ORACLE_HOME/network/admin
directory. You can find syntax examples for the .ora files in "Syntax Examples for the .ora Files"
on the next page.
Note: The example files described in "Syntax Examples for the .ora Files" on the next page
must be followed exactly, including new lines, spaces, and tabs. In all example files, change
hostnames, IP addresses, and directory paths according to your system settings.
2. Start the listener as the oracle user on each node by typing the following:
su - oracle
lsnrctl start <listener_name>
Note: With some installations, it is possible that Oracle already created its own listener files.
To stop the listeners, follow these steps:
a. Log on as root.
b. Export the ORACLE_HOME, ORACLE_BASE, and ORACLE_SID variables, and then add
$ORACLE_HOME/bin to PATH.
c. Stop the listener by running the following command:
lsnrctl stop <listener_name>
d. Log on as the oracle user and start the correct listener.
ov_net =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = openview)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
GRID1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = openview)
(INSTANCE_NAME = GRID1)
)
)
GRID2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = openview)
(INSTANCE_NAME = GRID2)
)
)
LISTENERS_OPENVIEW =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
NODE_1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(CONNECT_DATA =
(SID = GRID1)
)
)
NODE_2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(CONNECT_DATA =
(SID = GRID2)
)
)
CONNECT_TIMEOUT_LISTENER_NODE1 = 10
LOG_DIRECTORY_LISTENER_NODE1 = /opt/oracle/product/11.1.0/network/log
LOG_FILE_LISTENER_NODE1 = LISTENER_NODE1
SID_LIST_LISTENER_NODE1 =
(SID_LIST =
(SID_DESC =
(SID_NAME=GRID1)
(GLOBAL_DBNAME = openview)
(ORACLE_HOME=/opt/oracle/product/11.1.0/)
)
)
TRACE_LEVEL_LISTENER_NODE1 = OFF
1. When the ovoinstall script asks you if you want to continue with the server configuration, leave
the ovoinstall window open, and then open a new window.
2. In the new window, as the root user, install the latest HP Operations management server patch,
and then type y followed by ENTER to continue with the server configuration.
Caution: If you install HPOM in a cluster environment, install the latest HP Operations
management server patch for all cluster nodes.
The ovoconfigure script asks you if you want to configure the database.
3. Type y followed by ENTER.
When the ovoconfigure script asks you if you want to set up the database manually, leave the
ovoconfigure window open.
4. Open a new window, and, as the root user, follow these steps:
Caution: If you are installing HPOM in a cluster environment, perform these steps only for
the first cluster node.
a. Export ORACLE_HOME, ORACLE_SID, and LANG (for an appropriate LANG value, see the HPOM
Administrator's Reference).
b. Copy the following Net files from the Oracle database server to the HP Operations
management server:
o $ORACLE_HOME/network/admin/sqlnet.ora
o $ORACLE_HOME/network/admin/tnsnames.ora
o $ORACLE_HOME/network/admin/tnsnav.ora
These files are required on the database server and the HP Operations management server.
When you copy the files to the HP Operations management server, check that the directory
paths point to the correct locations, and modify them if necessary.
Note: The tnsnav.ora and sqlnet.ora files are optional. If you configured these files on
the RAC cluster, you must also configure them on the HP Operations management
server.
c. If you are installing HPOM in a cluster environment, export the OPC_HA and OPC_MGMT_SERVER
variables by running the following commands:
/opt/OV/bin/ovconfchg -ovrg server -ns opc -set OPC_HA TRUE
/opt/OV/bin/ovconfchg -ovrg server -ns opc -set \
OPC_MGMT_SERVER <valid_virtual_host>
In this instance, <valid_virtual_host> is the long hostname of the virtual host that was
previously selected during the installation procedure.
5. Return to the ovoconfigure window. Type y followed by ENTER to configure the database.
Note: The database configuration step can be done manually by using opcdbsetup -p.
6. Optional: If you configured additional user rights on the database server during the process of
configuring users, passwords, and rights manually, you can run /opt/OV/contrib/OpC/opc_odc
to verify the database setup (the log file is in /tmp/opc_odc.log).
Assume that your Oracle RAC environment consists of the servers with physical hostnames
astrid14 and astrid15, and virtual hostnames astrid14-vip and astrid15-vip. The port is
1521 and the SID is openview.
/opt/OV/OMU/adminUI/conf/opccfg.properties
/opt/OV/OMU/adminUI/conf/ovoappl.properties
/opt/OV/OMU/adminUI/conf/ovoconfig.properties
/opt/OV/OMU/adminUI/conf/ovoinstall.properties
Each of these configuration files contains a JDBC connection string that looks as follows:
ovodb.url=jdbc:oracle:thin:@astrid15:1521:openview
Use the virtual hostnames in the process of modifying the configuration files. In addition, use the
proper port and SID data when required.
1. Modify the configuration files so that each of them contains the correct Oracle RAC JDBC
connection string. To do this, choose one of the following ways:
l By replacing the default JDBC connection string:
In this example, the string in each configuration file should look as follows:
ovodb.url=jdbc:oracle:thin:@(DESCRIPTION=(FAILOVER=ON)(ADDRESS_LIST=(LOAD_
BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=astrid14-vip)(PORT=1521))(ADDRESS=
(PROTOCOL=TCP)(HOST=astrid15-vip)(PORT=1521)))(CONNECT_DATA=(SERVICE_
NAME=openview)))
2. After the JDBC connection string is modified, add the following line to the
/opt/OV/OMU/adminUI/conf/servicemix/wrapper.conf file:
wrapper.java.additional.17=-Duser.timezone=<time_zone>
For example, <time_zone> can be UTC.
3. Make sure that the value of the ovodb.DBMajorVersion variable in the opccfg.properties,
ovoappl.properties, and ovoconfig.properties files is set to the database major version (that
is, 11 or 12).
4. Restart the Administration UI by running the following commands:
/opt/OV/OMU/adminUI/adminui stop
/opt/OV/OMU/adminUI/adminui clean
/opt/OV/OMU/adminUI/adminui start
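The RAC connection string from step 1 is easy to get wrong because of the nested parentheses. The following sketch assembles it from its parts so the result can be compared against the configured value; the hostnames, port, and service name are the example values used above:

```shell
# Assemble the Oracle RAC JDBC URL for the astrid14/astrid15 example.
H1=astrid14-vip
H2=astrid15-vip
PORT=1521
SVC=openview
URL="jdbc:oracle:thin:@(DESCRIPTION=(FAILOVER=ON)(ADDRESS_LIST=(LOAD_BALANCE=ON)"
URL="${URL}(ADDRESS=(PROTOCOL=TCP)(HOST=${H1})(PORT=${PORT}))"
URL="${URL}(ADDRESS=(PROTOCOL=TCP)(HOST=${H2})(PORT=${PORT})))"   # closes ADDRESS_LIST
URL="${URL}(CONNECT_DATA=(SERVICE_NAME=${SVC})))"                 # closes DESCRIPTION
echo "ovodb.url=${URL}"
```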
Note: The file tree can include additional subdirectories if HPE Operations Agent software or other
HP Operations software is installed.
<HPOM version> is the version of HPOM that supports a particular agent platform. HPOM can manage
several different HPOM versions for each agent platform.
The customer subtree is similar to the vendor subtree, without the HPOM version. You can integrate
your additional scripts, including individual scripts and binaries in the monitor, cmds, and actions
subdirectories. These files are automatically distributed to the managed node by HPOM.
l /etc/passwd
Contains the entry for the default HPOM operator.
l /etc/group
Contains the entry for the default HPOM operator.
l /etc/services
Adds the ito-e-gui service for the Java GUI.
l /etc/xinetd.d/ito-e-gui
Starts the /opt/OV/bin/OpC/opcuiwww process when requested.
Deinstalling HPOM
To deinstall HPOM, follow these steps:
Caution: Before removing server packages, remove all server patches and applications that
have a dependency on server packages. Otherwise, the removal of the HP Operations
management server might fail.
Note: For the information about the additional steps that you might need to perform, see the
latest edition of the HPOM Software Release Notes.
The ovoremove script checks the current directory and whether any Java GUIs are open.
The following text is displayed:
Welcome to the HP Operations Manager for UNIX removal
6. Press ENTER to verify that you want the removal procedure to start.
Note: The whole removal procedure is written in the following log files that you can view at
any time during the removal procedure:
l /var/opt/OV/log/OpC/mgmt_sv/installation.log
l /var/opt/OV/log/OpC/mgmt_sv/installation.log.error
l /var/opt/OV/log/OpC/mgmt_sv/installation.log.verbose
The ovoremove script detects if the management server runs as a HARG and removes the
Administration UI. It also detects which server add-on packages are installed, and then it asks you
if you want to continue with the removal of server add-ons.
7. Press ENTER to accept y as the default answer.
The ovoremove script continues with the following:
l Server add-on packages removal
l Database removal
Caution: If the opc_op user and the opcgrp group are still present on the system after the
HPOM deinstallation, you can remove them manually.
The ovoremove script detects the installed software and informs you about the packages and
components that will be removed:
l ECS Composer packages
l Localization packages
l Server packages
l Core components
Note: Do not remove the /opt/OV, /etc/opt/OV, and /var/opt/OV directories unless all HP
products are removed.
If the /var/opt/midas directory (containing the configuration file created when the Administration
UI is installed) is still present on the system after the HPOM deinstallation, it is recommended to
remove it by running the following command:
rm -r /var/opt/midas
To deinstall the database, see the documentation supplied by the database vendor.
l Spanish
/bin/rpm -e HPOvOUWwwSpa
l Japanese
/bin/rpm -e HPOvOUWwwJpn
l Korean
/bin/rpm -e HPOvOUWwwKor
l Simplified Chinese
/bin/rpm -e HPOvOUWwwSch
Reinstalling HPOM
To reinstall HPOM, follow these steps:
1. Make a copy of old management server certificates by running the following command:
/opt/OV/bin/OpC/opcsvcertbackup -backup \
-passwd <passwd> -file <old_certs>
In this instance, <passwd> is the user password and <old_certs> is the file with the old
management server certificates.
If you omit the -file option, a .tar archive file is created at the following default location:
/tmp/opcsvcertbackup.<date_time>.tar
2. Make a copy of the old management server OvCoreId by running the following command:
/opt/OV/bin/ovcoreid -ovrg server > /tmp/mgmtsv_coreid
Note: Make sure that the old management server OvCoreId and certificates are reused after
the HP Operations management server installation. If they are not reused, managed nodes
cannot communicate with the management server.
Note: After you run the ovoremove script, make sure that you check the latest edition of the
HPOM Software Release Notes for the information about the additional steps that you might
need to perform.
6. Install HPOM as described in "Installing and Configuring the HPOM Software on the Management
Server" on page 43.
7. Update the OvCoreId and certificates on the new management server by following these steps:
a. Stop all HPOM processes:
/opt/OV/bin/ovc -kill
b. Install the certificate backup from the old HP Operations management server:
/opt/OV/bin/OpC/opcsvcertbackup -restore \
-passwd <passwd> -file <old_certs> -force
In this instance, <passwd> is the user password and <old_certs> is the file with the old
management server certificates.
Caution: Do not forget to use the -force option when installing the certificate backup
from the old HP Operations management server.
1. If required, deinstall HPOM from all managed nodes as described in the HPOM Administrator's
Reference.
Caution: After you reinitialize the HPOM database, all the node configuration is lost. You
must reconfigure the nodes.
2. Only if you use the Oracle database with HPOM: As the root user, export the Oracle variables as
follows:
export ORACLE_HOME=/opt/oracle/product/<version>
export ORACLE_BASE=/opt/oracle
3. Only if HPOM was deinstalled: Reinstall HPOM as described in "Reinstalling HPOM" on page
133.
4. Stop the HP Operations management server and agent processes by running the following
commands:
/opt/OV/bin/OpC/opcsv -stop
/opt/OV/bin/ovc -stop AGENT
5. Clean the database, including the configuration for operators and nodes, as well as all active and
history messages. To do so, run the following commands:
su - root
/opt/OV/bin/OpC/opcdbinit -c [-v]
exit
The opcdbinit command uses the following modes:
6. Restart the HP Operations management server and agent processes by running the following
commands:
/opt/OV/bin/OpC/opcsv -start
/opt/OV/bin/ovc -start AGENT
it:
CONFIGURE.INSTALL_ADMINUI:DONE
Caution: If you do not edit the checkpoints.conf file, the ovoconfigure script skips the
Administration UI installation.
Note: After you reinstall the Administration UI, keep in mind that all the Administration UI patches
and hotfixes are removed from the system.
Before starting with the migration, consider the following terms used in this chapter:
Old server: The source management server from which you migrate HPOM.
New server: The target management server to which you migrate HPOM.
1. Hardware
2. Operating system (including operating system patches)
3. Database
4. HPOM software
HPOM places no restrictions on the number of managed nodes with the 60-day Instant-On license.
Make sure that you acquire the correct license for your requirements before the Instant-On license
expires.
If you have a product installed that is integrated with the old server (for example, HP Performance
Manager), make sure that this product is compatible with a newer version of HPOM before starting the
HPOM migration process. For information about how to perform the HPOM migration in this situation,
see the documentation of the integrated product.
Migration Scenarios
When migrating from one system to another, you can choose one of the following scenarios:
Note: Unlike the upgrade procedure described in "Upgrading HPOM to Version 9.2x" on page 154,
these scenarios require almost no operational downtime.
l New hardware with a new IP address and a new hostname is used for the new server.
l Depending on the setup, you can switch to the stand-alone server setup after the migration process
is finished and shut down the old server.
l If the old server is to be switched off after the migration, you can request a new server permanent
license in exchange for the old license (contact the HP Password Delivery Center). In the meantime,
you can work on the new server with the 60-day Instant-On license.
l After setting up the new server, you can also upgrade the old server to HPOM 9.2x (for example, if a
hardware cluster will be reused).
l Migration can be performed almost without operational downtime.
l Messages can be synchronized in both directions, from the old server to the new server and vice
versa, whereas the configuration data exchange is only possible from the old server to the new
server.
l All managed nodes must be updated with the root certificate of the new server.
l Can also be used for hardware upgrades of the same HPOM version.
To migrate to a system with a different IP address and hostname, complete these tasks:
l Task 3 (only if you have HP-UX managed nodes): "Uploading 32-bit HPE Operations Agent" on the
next page
l Task 4: "Uploading the Saved HPOM Configuration" on page 142
l Task 5: "Establishing a Trust Relationship Between the Two Management Servers" on page 144
l Task 6: "Setting Up Message Forwarding Between the Management Servers" on page 146
l Task 7 (optional): "Decommissioning the Old Management Server" on page 148
Note: For detailed information about setting up a backup server, see the HPOM Concepts Guide.
For detailed information about the HPOM policies, see the HPOM Administrator's Reference and
the HPOM Concepts Guide.
1. Include service data into the download by running the following command:
/opt/OV/bin/ovconfchg -ovrg server -ns opc -set \
OPC_OPCCFGDWN_ALL_INCLUDE_SELDIST_SERVICES TRUE
2. Create a download specification file for all configuration data:
echo "* ;" > /tmp/cfgdwn/download.dsf
3. Download the configuration:
/opt/OV/bin/OpC/opccfgdwn -force -backup /tmp/cfgdwn/download.dsf /tmp/cfgdwn
4. If you want to migrate your server configuration settings, store the output of ovconfget -ovrg
server into a file, and then transfer it to the new server.
The process of downloading the old server configuration may be repeated several times during the
migration. This is because the configuration changes (for example, adding new managed nodes) take
place in the old production server environment, and must, therefore, be synchronized to the new server
occasionally.
Note: The audit records cannot be migrated from HPOM 8.xx to HPOM 9.xx. Download the audit
data before migrating HPOM if you want to keep a copy of all audit entries. Enter the following
command:
For detailed information about the message transfer from the old server to the new server, see "Setting
Up Message Forwarding Between the Management Servers" on page 146.
1. Install the HP Operations management server as described in "Installing and Configuring HPOM
on the Management Server" on page 25.
Caution: Make sure your system meets hardware and software requirements for the HPOM
software installation. For information about the installation requirements, see "Installation
Requirements for the Management Server" on page 13.
Note: You can verify that the OvCoreId was correctly updated in the database of the old server
by running the following command:
From HPE Operations Agent version 12.x onward, 64-bit agent packages are supported on HP-UX IA.
MACH_BBC_HPUX_IPF64 is the new machine type introduced with the 64-bit packages. If you upload a saved
configuration from an old HP Operations management server to a new server on which you have
installed HPOM with the HPE Operations Agent 12.x package, the HP-UX managed nodes are
uploaded with the machine type IP/Other/Other.
If you have HP-UX managed nodes, you must copy the HPE Operations Agent 11.x packages from the
old server to the new server so that the managed nodes are uploaded with the correct machine type.
Follow these steps to copy the HPE Operations Agent 11.x packages to the new server:
1. Check the server machine type on the new server by running the following command:
echo "select platform_selector,pltf_abs_name from opc_net_machine where
machine_type in (43,95);"|opcdbpwd -e sqlplus -s
Example outputs for 64-bit and 32-bit machine types:
platform_selector | pltf_abs_name
hp/ipf64/hpux1131 | MACH_BBC_HPUX_IPF64
hp/ipf32/hpux1122 | MACH_BBC_HPUX_IPF32
2. If the 32-bit machine type (MACH_BBC_HPUX_IPF32) is not present, follow these steps to copy the
HPE Operations Agent 11.x packages from the old server:
a. Compress the HP-UX IPF32 platform directory into a tar file by running the following
commands:
tar cvf /tmp/hpux-ipf32.tar /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/ipf32/hpux1122
gzip /tmp/hpux-ipf32.tar
b. Copy /tmp/hpux-ipf32.tar.gz to the new server.
c. Extract the packages by running the following commands:
cd /
gunzip /tmp/hpux-ipf32.tar.gz
tar xvf /tmp/hpux-ipf32.tar
d. Register the HPE Operations Agent platform by running the following command:
/opt/OV/bin/OpC/opcagtdbcfg -p hp/ipf32/hpux1122 -d -f
e. Register the new platform with the Administration UI:
/opt/OV/OMU/adminUI/adminui machtypes
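Steps a through e can be combined into a small script. The following is a minimal dry-run sketch under the assumptions stated in the comments: every command is prefixed with echo so that it only prints what would run; remove the DRYRUN prefix to execute on real servers.

```shell
#!/bin/sh
# Dry-run sketch of steps a-e above. Paths are taken from the text;
# DRYRUN=echo prints each command instead of executing it.
DRYRUN=echo
PKG_DIR=/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/ipf32/hpux1122

copy_ipf32_packages() {
    $DRYRUN tar cvf /tmp/hpux-ipf32.tar "$PKG_DIR"           # a. archive
    $DRYRUN gzip /tmp/hpux-ipf32.tar                         # a. compress
    # b. transfer /tmp/hpux-ipf32.tar.gz to the new server, then there:
    $DRYRUN gunzip /tmp/hpux-ipf32.tar.gz                    # c. extract
    $DRYRUN tar xvf /tmp/hpux-ipf32.tar -C /
    $DRYRUN /opt/OV/bin/OpC/opcagtdbcfg -p hp/ipf32/hpux1122 -d -f   # d. register
    $DRYRUN /opt/OV/OMU/adminUI/adminui machtypes            # e. Administration UI
}

copy_ipf32_packages
```

Note that steps a and c-e run on different machines; the transfer in step b is intentionally left as a comment.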
Caution: In a cluster environment, first disable the HA resource group monitoring by running
the following command:
3. If you stored the output of ovconfget -ovrg server into a file and transferred it to the new
server, follow these steps:
a. Edit the file by running the following command:
/opt/OV/bin/ovconfchg -ovrg server -edit
b. In the editor, merge the configuration.
Make sure that you add only the variables that you modified, and not all internal HPOM
variables. Failing to do so may cause problems with the HP Operations management server
installation.
To determine which variables were modified, run the following command on the old server:
/opt/OV/bin/ovconfchg -ovrg server -edit
Then compare this output with the output from the new server.
4. Upload the configuration on the new server by running the following command:
/opt/OV/bin/OpC/opccfgupld -replace -subentity -configured <download_directory>
For example:
/opt/OV/bin/OpC/opccfgupld -replace -subentity -configured /tmp/cfgdwn
5. Verify that the old server node is configured on the new server by running the following command:
/opt/OV/bin/OpC/utils/opcnode -list_nodes
If the old server is not listed, run the following command:
/opt/OV/bin/OpC/utils/opcnode -add_node \
node_name=<old_server> group_name=<nodegrp_name> \
net_type=<network_type> mach_type=<machine_type> id=<old_server_OvCoreId>
To get the OvCoreId, run the following command on the old server:
/opt/OV/bin/OpC/utils/opcnode -list_id node_list=<old_server_hostname>
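Filled in with concrete values, the opcnode -add_node call from step 5 might look like the following dry-run sketch. All values (hostname, node group, net_type, mach_type, and OvCoreId) are hypothetical placeholders; echo keeps the command from executing.

```shell
#!/bin/sh
# All values below are hypothetical placeholders for illustration;
# echo makes this a dry run - drop it to execute on a real server.
OLD_SERVER=oldsrv.example.com
NODEGRP=net_devices
NET_TYPE=NETWORK_IP
MACH_TYPE=MACH_BBC_LX26RPM_X64
CORE_ID=11111111-2222-3333-4444-555555555555

add_old_server_node() {
    echo /opt/OV/bin/OpC/utils/opcnode -add_node \
        node_name=$OLD_SERVER group_name=$NODEGRP \
        net_type=$NET_TYPE mach_type=$MACH_TYPE id=$CORE_ID
}
add_old_server_node
```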
6. If the old server is running in an HA cluster and the new server is a stand-alone server, run the
following command on the new server:
/opt/OV/bin/OpC/utils/opcnode -list_virtual node_name=<new_server>
If a line similar to cluster_package=ov-server appears, run the following commands:
/opt/OV/bin/OpC/utils/opcnode -set_physical node_name=<new_server>
/opt/OV/bin/OpC/utils/opcnode -list_virtual
An output similar to the following one should appear:
node <new_server> is not a virtual one
1. If the old server has a Certification Authority (default): Share the server certificates by exporting
the local CA trusted certificates:
/opt/OV/bin/ovcert -exporttrusted -file /tmp/<hostname>.cert -ovrg server
For detailed information, see the HPOM Administrator's Reference.
2. Copy the certificate file to the new server, and then follow these steps:
a. Import the certificates from the old server to the new server by running the following command
on the new server:
/opt/OV/bin/ovcert -importtrusted -file /tmp/<hostname>.cert -ovrg server
Note: To view the current certificates before importing the certificates from the old server
to the new server, run the following command on the new server:
/opt/OV/bin/ovcert -list
b. On the new server, propagate the trusted certificates of the old server to the local agent by
running the following command:
/opt/OV/bin/ovcert -updatetrusted
To check whether an additional CA trusted certificate is installed, list the installed certificates
by running the following command:
/opt/OV/bin/ovcert -list
3. Import the CA trusted certificate of the new server to the old server. To do so, follow these steps:
a. On the new server, run the following command:
/opt/OV/bin/ovcert -exporttrusted -file /tmp/<hostname>.cert -ovrg server
b. Copy the file to the old server, and then import the certificates there:
/opt/OV/bin/ovcert -importtrusted -file /tmp/<hostname>.cert -ovrg server
Note: Because the file contains all trusted certificates from the old server, you will
receive a warning that the certificate is already installed.
c. On the old server, propagate the new server's trusted certificates to the local agent by running
the following command:
/opt/OV/bin/ovcert -updatetrusted
To check whether the additional CA trusted certificate is installed, list the installed certificates
by using the following command:
/opt/OV/bin/ovcert -list
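The trust exchange in steps 1 through 3 can be sketched end to end as follows. This is a dry run: the hostnames are placeholders, the commands span two machines (as noted in the comments), and echo prevents execution.

```shell
#!/bin/sh
# Dry-run sketch of the certificate trust exchange in steps 1-3 above.
# Hostnames are placeholders; echo prevents execution.
OLD=oldsrv.example.com
NEW=newsrv.example.com

exchange_trusted_certs() {
    # On the old server: export its CA trusted certificates.
    echo /opt/OV/bin/ovcert -exporttrusted -file /tmp/$OLD.cert -ovrg server
    # Copy the file, then on the new server: import and propagate.
    echo /opt/OV/bin/ovcert -importtrusted -file /tmp/$OLD.cert -ovrg server
    echo /opt/OV/bin/ovcert -updatetrusted
    # Repeat in the opposite direction for the new server's certificate.
    echo /opt/OV/bin/ovcert -exporttrusted -file /tmp/$NEW.cert -ovrg server
    echo /opt/OV/bin/ovcert -importtrusted -file /tmp/$NEW.cert -ovrg server
    echo /opt/OV/bin/ovcert -updatetrusted
}
exchange_trusted_certs
```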
4. Configure the flexible management policy on the old server:
l If you have the MoM setup: Add the new server to the /etc/opt/OV/share/conf/OpC/mgmt_
sv/respmgrs/allnodes file, and then verify the syntax:
/opt/OV/bin/OpC/opcmomchk allnodes
l If you do not have the MoM setup: The system contains several example files that are located
in the following directory:
/etc/opt/OV/share/conf/OpC/mgmt_sv/tmpl_respmgrs
Create a copy of the backup server example policy, and then modify it to reflect your own
configuration. To confirm that the file syntax is configured correctly in the new policy file, run
the following command:
/opt/OV/bin/OpC/opcmomchk <policy_filename>
Name the file allnodes and copy it to the following directory:
/etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs
5. Deploy the flexible management policy to all nodes. On the old server, run the following command:
/opt/OV/bin/OpC/opcragt -distrib -policies -all
Make sure that you update the trusted certificates on the remote agents. In the Java GUI, mark all
the managed nodes, and then start the Update Trusts application in the Certificate Tools
application group.
6. On the new server, check if the agents can be contacted:
/opt/OV/bin/OpC/opcragt -status -all
Run the command on the old server as well, and then compare its output with the output of the
new server.
Note: On the old server, you can use multiple threads for the opcragt command by running
the following command:
Note: The agents that could not be contacted are listed in the following file:
/var/opt/OV/share/tmp/OpC/mgmt_sv/opcragt-status-failed
7. Copy the allnodes file from the old server to the new server. The file location is the following:
/etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs/allnodes
If you want to decommission the old server when the new server is up and running, you must
update the configuration settings on the managed nodes. For more information, see
"Decommissioning the Old Management Server" on page 148.
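The copy in step 7 can be sketched as a dry run. The new-server hostname is a placeholder, and scp over a root account is only one possible transfer method; echo prevents execution.

```shell
#!/bin/sh
# Hypothetical new-server hostname; echo makes this a dry run.
NEW_SERVER=newsrv.example.com
RESPMGRS=/etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs

copy_allnodes() {
    echo scp $RESPMGRS/allnodes "root@$NEW_SERVER:$RESPMGRS/allnodes"
}
copy_allnodes
```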
8. Optional: You can upgrade the managed nodes to the latest version at any time later on. For more
information, see "Upgrading the HPE Operations Agent Software" on page 174.
Note: Only new incoming messages are synchronized by using message forwarding. All the
messages that had arrived before the shadow period began must be handled on the old server.
l If you do not have the MoM with message forwarding setup: The system contains an example
file that is located in the following directory:
/etc/opt/OV/share/conf/OpC/mgmt_sv/tmpl_respmgrs
Create a copy of the msgforw example policy and modify it to reflect your own configuration.
The following is an excerpt for a two server setup:
...
MSGTARGETRULE
DESCRIPTION "forward all messages"
MSGTARGETRULECONDS
MSGTARGETMANAGERS
MSGTARGETMANAGER
TIMETEMPLATE "$OPC_ALWAYS"
OPCMGR IP 0.0.0.0 "<new_server_hostname>"
MSGCONTROLLINGMGR
MSGTARGETMANAGER
TIMETEMPLATE "$OPC_ALWAYS"
OPCMGR IP 0.0.0.0 "<old_server_hostname>"
MSGCONTROLLINGMGR
...
Caution: Both servers must be mentioned in the message target rule and the
MSGCONTROLLINGMGR keyword must be used.
To confirm that the file syntax is configured correctly in the new policy file, run the following
command:
/opt/OV/bin/OpC/opcmomchk <policy_filename>
Name the file msgforw, and copy it to the following directory:
/etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs
1. Make sure that the operators start using the new server.
For detailed information, see the HPOM Administrator's Reference.
2. Optional: Download and upload the history messages from the old server to the new server as
follows:
a. On the old server, run the following command:
/opt/OV/bin/OpC/opchistdwn -until <start_of_shadow_period> \
-file /tmp/history
In this instance, <start_of_shadow_period> is a timestamp in the mm/dd/yy format.
b. Copy the file to the new server, and then run the following command:
/opt/OV/bin/OpC/opchistupl /tmp/history
Note: If the HPOM 8.xx installation has non-ASCII characters in the messages, use the
-upgrade option to convert the messages from the HPOM 8.xx character set to the
HPOM 9.xx character set. For example:
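The history download and upload in step 2 can be sketched as a dry run. The shadow-period start date is a placeholder in the mm/dd/yy format the command expects; echo prevents execution.

```shell
#!/bin/sh
# Dry-run sketch of step 2 above: opchistdwn takes the shadow-period
# start as an mm/dd/yy timestamp. The date is a placeholder.
SHADOW_START=03/01/16

migrate_history() {
    # On the old server:
    echo /opt/OV/bin/OpC/opchistdwn -until "$SHADOW_START" -file /tmp/history
    # Copy /tmp/history to the new server, then there:
    echo /opt/OV/bin/OpC/opchistupl /tmp/history
}
migrate_history
```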
1. Change the following configuration settings on the managed nodes to set the hostname of the new
server:
2. Change the owner attribute for all policies, by running the following command:
/opt/OV/bin/ovpolicy -setowner OVO:<new_server_hostname> -all -host $NODE
Note: Do not run this command if you are using the OPC_POLICY_OWNER configuration
variable, as the node is already configured to that owner, regardless of the management
server. For more information, see the "Distributing Configuration and Policies in the Flexible
Management Environment" section of the HPOM Administrator's Reference.
3. Run the following command to inform the HPOM server processes that HPE Operations Agent
software is installed on the node, and to start heartbeat polling for the node:
/opt/OV/bin/OpC/opcsw -i $NODE
4. Distribute the new HPOM configuration (including mgrconf and nodeinfo policies) by running the
following command:
/opt/OV/bin/OpC/opcragt -distrib -force $NODE
5. If the new and old servers are not in a server pooling environment, configure all managed nodes to
use the new server as the primary manager, by running the following command on the new server:
/opt/OV/bin/OpC/opcragt -primmgr -all
Note: In a server pooling environment, OPC_PRIMARY_MGR on the managed node is already set
to the virtual node name for the virtual IP that is now active on the new server.
1. Remove the old server from the msgforw file of the new server, if applicable.
2. Remove the old server from the allnodes file and node-specific mgrconf policies.
3. Distribute templates to all managed nodes, for the new allnodes file to be distributed to the
nodes.
4. Delete the old server from the node bank.
If you remove the trusted certificate of the old server, all the certificates that were created by the old
server on the managed nodes become invalid and communication between the new server and the
managed nodes is no longer possible. You must replace the certificate on each managed node with a
certificate from the new server, and remove the trusted certificate of the old server.
To replace certificates and remove the trusted certificate of the old server:
cd /var/opt/OV/datafiles/policies/
rm -rf configsettings le mgrconf monitor msgi trapi sched svcdisc configfile
b. Remove the trusted certificate of the old server by running the following commands:
ovcert -remove <CA_old-server-ovcoreid> -f
ovcert -remove <CA_old-server-ovcoreid> -ovrg server -f
3. On each managed node, update the trusted certificate of the new server to remove the trusted
certificate of the old server:
ovcert -updatetrusted
Note: Because this scenario basically represents a subcase of the upgrade procedure described in
"Upgrading HPOM to Version 9.2x" on page 154, only the specifics of the MoM upgrade are
described in this section.
1. Ignore this step in a server pooling environment: Switch all agents to report to server B. On server
B, run the following command:
/opt/OV/bin/OpC/opcragt -primmgr -all
2. Make sure that message forwarding between server A and server B is switched to HTTPS
communication.
If required, perform the following steps on both servers:
a. Enable HTTPS-based message forwarding by running the following command:
/opt/OV/bin/ovconfchg -ovrg server -ns opc -set OPC_HTTPS_MSG_FORWARD TRUE
b. Restart processes on both servers:
/opt/OV/bin/ovc -stop
/opt/OV/bin/ovc -start
c. Verify that HTTPS-based message forwarding works correctly by sending several test
messages and acknowledging them. In addition, check that message synchronization works
correctly.
3. Stop server A.
From the moment you stop server A, server B starts buffering all messages and message
operations. Run the following command:
/opt/OV/bin/ovc -stop
Note: During the upcoming upgrade installation of server A, server B may send buffered
messages as soon as server A is up.
4. If server A is to be replaced by new hardware, back up its certificates and the OvCoreId:
/opt/OV/bin/OpC/opcsvcertbackup -backup \
-passwd <password> -file <my_cert_backup>
In this instance, <my_cert_backup> is the file where you backed up the certificates.
5. To upgrade the management server, see "Upgrading HPOM to Version 9.2x" on page 154.
6. Ignore this step if the old hardware of server A was reused: If server A was replaced by new
hardware, the initial installation generated a new OvCoreId and new certificates. Server B cannot
forward messages to server A at this point. Therefore, you must reinstall the saved OvCoreId and
certificates. Run the following commands:
/opt/OV/bin/ovc -kill
/opt/OV/bin/OpC/opcsvcertbackup -restore \
-passwd <password> -file <my_cert_backup> -force
In this instance, <password> is the same password that you used for backing up the certificates
and the OvCoreId of server A, and <my_cert_backup> is the file where you backed up the
certificates.
In the meantime, certain configuration changes may have been made on server B.
7. Make sure that you synchronize the servers:
a. On server B, run the following commands:
echo "* ;" >/tmp/all.dsf
mkdir /tmp/all
/opt/OV/bin/OpC/opccfgdwn -backup /tmp/all.dsf /tmp/all
b. On server A, run the following command:
/opt/OV/bin/OpC/opccfgupld -replace -subentity <data_from_B>
In this instance, <data_from_B> is the data downloaded from server B.
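The synchronization in step 7 can be sketched as a dry run. The commands span two machines (as noted in the comments), /tmp/all stands in for <data_from_B>, and echo prevents execution.

```shell
#!/bin/sh
# Dry-run sketch of step 7 above: full configuration download on
# server B, upload on server A. echo prevents execution.
sync_servers() {
    # On server B:
    echo 'echo "* ;" > /tmp/all.dsf'
    echo mkdir /tmp/all
    echo /opt/OV/bin/OpC/opccfgdwn -backup /tmp/all.dsf /tmp/all
    # Copy /tmp/all to server A, then on server A:
    echo /opt/OV/bin/OpC/opccfgupld -replace -subentity /tmp/all
}
sync_servers
```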
8. Start server processes on server A by running the following command:
/opt/OV/bin/ovc -start
Note: At this point, server B can forward all messages and message operations that were
buffered.
Note: You can either upgrade the software on the same system, as described in the following
sections, or migrate your data to a new HPOM 9.2x installation on a different system. For detailed
information about migrating HPOM, see "Migrating HPOM from One System to Another" on page
138.
In this chapter, you can also find information about the following topics:
Caution: The HPE Operations Agent software is no longer shipped with HPOM. To obtain the
supported agent version, request the agent media from HPE.
l Make sure that the new management server meets at least the minimum system requirements as
described in "Installation Requirements for the Management Server" on page 13.
l Make sure that HPOM 9.1x is installed and configured on the system on which the upgrade is
performed.
l If the upgrade is performed in a cluster environment, make sure that HPOM 9.1x is installed and
configured in the cluster environment.
1. Back up server certificates and the OvCoreId by running the following command:
/opt/OV/bin/OpC/opcsvcertbackup -backup
2. Download all configuration data:
a. Create an empty download specification file:
mkdir /tmp/cfgdwn
echo "* ;" > /tmp/cfgdwn/download.dsf
b. Download the server configuration:
/opt/OV/bin/OpC/opccfgdwn -force -backup /tmp/cfgdwn/download.dsf
/tmp/cfgdwn
3. Optional: Download all messages by following these steps:
a. Perform a history download:
/opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/history
b. Acknowledge all active messages:
/opt/OV/bin/OpC/opcack -u <user_for_all_msg_grps> -a -f
c. Perform a second history download:
/opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/active
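The pre-upgrade backup in steps 1 through 3 can be combined into one script. This is a dry-run sketch: opc_adm is a placeholder for a user with responsibility for all message groups, and echo prevents execution on a real server.

```shell
#!/bin/sh
# Dry-run sketch of steps 1-3 above. opc_adm is a placeholder user;
# echo prints each command instead of executing it.
pre_upgrade_backup() {
    echo /opt/OV/bin/OpC/opcsvcertbackup -backup
    echo mkdir /tmp/cfgdwn
    echo 'echo "* ;" > /tmp/cfgdwn/download.dsf'
    echo /opt/OV/bin/OpC/opccfgdwn -force -backup /tmp/cfgdwn/download.dsf /tmp/cfgdwn
    echo /opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/history
    echo /opt/OV/bin/OpC/opcack -u opc_adm -a -f
    echo /opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/active
}
pre_upgrade_backup
```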
4. Start the HPOM upgrade procedure:
Type the following:
/<master_directory>/HPOMInstallationMediaDirectory/ovoupgrade
For example, if you created the /tmp directory as the master directory, you can start ovoupgrade
by typing the following:
/tmp/HPOMInstallationMediaDirectory/ovoupgrade
The following text appears:
Welcome to the HP Operations Manager for UNIX upgrade
5. Press ENTER to verify that you want the upgrade procedure to start.
The ovoupgrade script continues with detecting special environments and creating a file
permission snapshot.
Caution: In a cluster environment: You must first perform the upgrade procedure on the
active cluster node, and then on all passive cluster nodes. During the upgrade procedure on
the passive cluster nodes, make sure not to perform a server switchover.
You are prompted to enter the HPOM software package repository location where all server
packages are located.
6. Press ENTER to accept the default repository location, or enter the desired location followed by
ENTER.
You are prompted to enter the HPE Operations Agent software location.
7. After you enter the HPE Operations Agent software location, press ENTER.
You are prompted to enter the HPOM Administration UI software location.
8. After you enter the HPOM Administration UI software location, press ENTER.
The ovoupgrade script checks which patches are installed and removes them. After deleting the
patches, it checks and installs the server setup package that contains the server installation
infrastructure.
9. Press ENTER to continue with detecting installed software.
The ovoupgrade script informs you about the software that will be removed.
10. Press ENTER to continue with the software removal.
The ovoupgrade script continues with installing the local agent. After the process of installing the
local agent returns the OK value, it checks core component packages, server packages,
localization packages, and ECS Composer packages.
11. Press ENTER to continue with installing the packages.
After all the packages are installed, the following note is displayed:
Before continuing with the server configuration, you can manually install
available server patches.
12. Optional: Install the patches.
a. Open a second window and install the latest versions of the following patches:
o Consolidated Server and Java GUI
o Core and Accessories
b. Close the second window after you have successfully installed the patches. Return to the
original window to continue with the upgrade.
13. Press ENTER to continue.
The ovoupgrade script performs the integration into the start/stop sequence and installs agent
deployment packages.
14. If you want to enable the Event Storm Filter component, press ENTER. Otherwise, press n
followed by ENTER.
For detailed information about the Event Storm Filter component, see the HPOM Administrator's
Reference.
15. If you want to enable the Health Check component, press ENTER. Otherwise, press n followed
by ENTER.
For detailed information about the Health Check component, see the HPOM Administrator's
Reference.
16. Press ENTER to confirm that you want to install the server add-on packages.
After the server add-on packages are installed, the ovoupgrade script asks you if you want to
migrate your database from Oracle to PostgreSQL.
17. Press ENTER to accept the default value n and not to migrate from the Oracle database to the
PostgreSQL database, or press y followed by ENTER and start the migration from the Oracle
database to the PostgreSQL database. For more information, see "Migrating from Oracle to
PostgreSQL " on the next page.
The ovoupgrade script displays messages indicating that the Administration UI is upgraded and the
server is started. The Administration UI is upgraded to the HPOM 9.20 version.
18. If you want to switch HPOM to non-root operation, press y followed by ENTER. Otherwise,
accept the default value n by pressing ENTER.
Caution: You cannot switch back to the root mode if you have configured HPOM to non-root
operation.
For detailed information about non-root operation, see the HPOM Concepts Guide.
Before the ovoupgrade script completes the upgrade procedure, it informs you about the
commands that you must run if you want to revert file permission changes made during the
upgrade. An output similar to the following one appears:
You can revert file permission changes made during the upgrade by running the
following commands:
/opt/OV/bin/OpC/install/ovoconfigure -revertPermissions
/opt/OV/bin/OpC/install/file_permissions.09.10.240.conf
Caution: If you decided to switch HPOM to non-root operation, make sure not to revert file
permission changes.
1. Answer the questions listed in Table 21 by either accepting the default value and pressing
ENTER, or typing the desired value followed by ENTER.
Table 21: Oracle to PostgreSQL-related Migration Questions
PSQL binary directory: The directory where the PostgreSQL binaries are stored. Keep in mind
that the location of this directory varies depending on the distribution or the version.
PSQL library directory: The directory where the PostgreSQL client libraries are stored. Keep in
mind that the location of this directory varies depending on the distribution or the version.
PSQL data directory: The directory where the data tablespaces are stored. This directory must
be empty or non-existent. If you do not provide an answer to this question, <cluster_dir>/HPOM
is used.
PSQL index directory: The directory where the index tablespaces are stored. This directory must
be empty or non-existent. If you do not provide an answer to this question, <cluster_dir>/HPOM
is used.
Do you wish to start the PSQL cluster automatically at boot time?: Press y if you want the
database cluster to be started automatically each time the system is started. Otherwise, you
must start the database cluster manually before you can start HPOM.
DB DBA user: The name of the administrator user inside the database cluster or server, which is
usually set to postgres.
DB DBA user password: The password of the administrator user inside the database cluster or
server, which is usually set to postgres.
Database opc_op password: The password for the opc_op database user. The default password
is opc_op.
Database opc_report password: The password for the opc_report database user. The default is
opc_report.
After you answer all the questions, the ovoupgrade script checks the database configuration data
and the summary of all provided answers appears.
2. Check the summary data, and then press ENTER to perform the database configuration.
3. Press ENTER to continue. The ovoupgrade script continues as follows:
l Uploads the configuration, history messages, and active messages to the PostgreSQL
database.
At this point, the Administration UI is either installed (if you do not have it installed yet) or upgraded (if it
is already installed). In the first case, you must answer the Administration UI-related questions
described in Table 8.
l Make sure that the new management server meets at least the minimum system requirements as
described in "Installation Requirements for the Management Server" on page 13.
l Make sure that HPOM 9.20 or 9.21 is installed and configured on the system on which the upgrade is
performed.
l If the upgrade is performed in a cluster environment, make sure that HPOM 9.20 or 9.21 is installed
and configured in the cluster environment.
To upgrade HPOM from version 9.20 or 9.21 to version 9.22, follow these steps:
1. Back up server certificates and the OvCoreId by running the following command:
/opt/OV/bin/OpC/opcsvcertbackup -backup
2. Download all configuration data:
a. Create an empty download specification file:
mkdir /tmp/cfgdwn
echo "* ;" > /tmp/cfgdwn/download.dsf
b. Download the server configuration:
/opt/OV/bin/OpC/opccfgdwn -force -backup /tmp/cfgdwn/download.dsf
/tmp/cfgdwn
3. Optional: Download all messages by following these steps:
a. Perform a history download:
/opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/history
b. Acknowledge all active messages:
/opt/OV/bin/OpC/opcack -u <user_for_all_msg_grps> -a -f
c. Perform a second history download:
/opt/OV/bin/OpC/opchistdwn -older 0s -file /tmp/active
4. Start the HPOM upgrade procedure:
Type the following:
/<master_directory>/HPOMInstallationMediaDirectory/ovoupgrade
For example, if you created the /tmp directory as the master directory, you can start ovoupgrade
by typing the following:
/tmp/HPOMInstallationMediaDirectory/ovoupgrade
The following text appears:
Welcome to the HP Operations Manager for UNIX upgrade
5. The ovoupgrade script continues with detecting special environments and creating a file
permission snapshot.
Caution: In a cluster environment: You must first perform the upgrade procedure on the
active cluster node, and then on all passive cluster nodes. During the upgrade procedure on
the passive cluster nodes, make sure not to perform a server switchover.
You are prompted to enter the HPOM software package repository location where all server
packages are located.
6. Press ENTER to verify that you want the upgrade procedure to start.
7. Press ENTER to accept the default repository location, or enter the desired location followed by
ENTER.
You are prompted to enter the HPE Operations Agent software location.
8. After you enter the HPE Operations Agent software location, press ENTER.
You are prompted to enter the HPOM Administration UI software location.
9. After you enter the HPOM Administration UI software location, press ENTER.
The ovoupgrade script checks which patches are installed and removes them. After deleting the
patches, it checks and installs the server setup package that contains the server installation
infrastructure.
10. Press ENTER to continue with detecting installed software.
The ovoupgrade script informs you about the software that will be removed.
11. Press ENTER to continue with the software removal.
The ovoupgrade script continues with installing the local agent. After the process of installing the
local agent returns the OK value, it checks core component packages, server packages,
localization packages, and ECS Composer packages.
12. Press ENTER to continue with installing the packages.
After all the packages are installed, the following note is displayed:
Before continuing with the server configuration, you can manually install
available server patches.
13. Optional: Install the patches.
a. Open a second window and install the latest versions of the following patches:
o Consolidated Server and Java GUI
o Core and Accessories
b. Close the second window after you have successfully installed the patches. Return to the
original window to continue with the upgrade.
14. Press ENTER to continue.
The ovoupgrade script performs the integration into the start/stop sequence and installs agent
deployment packages.
15. If you want to enable the Event Storm Filter component, press ENTER. Otherwise, press n
followed by ENTER.
For detailed information about the Event Storm Filter component, see the HPOM Administrator's
Reference.
16. If you want to enable the Health Check component, press ENTER. Otherwise, press n followed
by ENTER.
For detailed information about the Health Check component, see the HPOM Administrator's
Reference.
17. Press ENTER to confirm that you want to install the server add-on packages.
After the server add-on packages are installed, the ovoupgrade script asks you if you want to
migrate your database from Oracle to PostgreSQL.
18. Press ENTER to accept the default value n and not to migrate your database, or press y followed
by ENTER and start the database migration.
For information about migrating from the Oracle database to the PostgreSQL database, see
"Migrating from Oracle to PostgreSQL" on page 158.
For information about migrating from the PostgreSQL database to the Oracle database, see
"Migrating from PostgreSQL to Oracle" below.
The ovoupgrade script displays messages indicating that the Administration UI is upgraded and the
server is started. The Administration UI is upgraded to the HPOM 9.20 version.
19. If you want to switch HPOM to non-root operation, press y followed by ENTER. Otherwise,
accept the default value n by pressing ENTER.
Caution: You cannot switch back to the root mode if you have configured HPOM to non-root
operation.
For detailed information about non-root operation, see the HPOM Concepts Guide.
20. Optional: Install the latest Administration UI patch.
1. Answer the questions listed in the following table by either accepting the default value and
pressing ENTER, or typing the desired value followed by ENTER.
Table 22: PostgreSQL to Oracle-related Migration Questions
Enable automatic database startup?: y
if already exist?
Set up the database manually (local/remote)?: This question allows you to choose how to create
the database, manually or automatically.
l If you want to create the database manually, press y followed
by ENTER. In this case, the ovoconfigure script pauses
instead of creating the database, allowing you to manually
create the database. After you create the database manually
as described in "Setting Up HPOM with a Remote/Manual
Oracle Database" on page 87, the ovoconfigure script
configures HPOM to use the created database.
Oracle Base: The Oracle database base directory, which is usually the same as the ORACLE_BASE
variable. The default is /opt/oracle.
Oracle Home: The Oracle database home directory, which is usually the same as the ORACLE_HOME
variable. The default is /opt/oracle/product/11.1.0.
Oracle User: The Oracle user for the HP Operations management server database. The default is
oracle.
Oracle Data Directory: The directory where the HP Operations management server database files
are stored (for example, /opt/oracle/oradata).
Oracle User opc_op Password: The password for the opc_op database user. The default is
opc_op.
Oracle User opc_report Password: The password for the opc_report database user. The default is
opc_report.
Oracle User system Password: The password for the system database user. The default is
manager.
After you answer all the questions, the ovoupgrade script checks the database configuration data
and the summary of all provided answers appears.
2. Check the summary data, and then press ENTER to perform the database configuration.
3. Press ENTER to continue. The ovoupgrade script continues as follows:
l Stops the PostgreSQL database.
l Uploads the configuration, history messages, and active messages to the Oracle database.
For detailed information about installing an Oracle database or a PostgreSQL database, see "Installing
an Oracle Database" on page 28 or "Installing a PostgreSQL Database" on page 38.
To upgrade an Oracle database (for example, version 11.1 to version 11.2) by using the out-of-place
upgrade method, follow these steps:
5. Recommended: Back up the old Oracle home directories, data directories, and configuration files.
6. Remove the old Oracle database instance by running the following command:
/opt/OV/bin/OpC/opcdbsetup -d
7. Optional: Remove the old Oracle installation. For detailed information, see the Oracle
documentation.
Note: Depending on the HPOM environment, removing the old Oracle installation may
include removing the Oracle server as well as the client and instant client products.
8. Install the new Oracle database version as described in "Installing an Oracle Database" on page
28.
When installing the new Oracle database version, keep in mind the following:
l Because there might be a difference in required operating system versions, patches, and kernel
parameters for different Oracle versions, make sure that your system meets the requirements
stated in the Oracle documentation.
l The .profile file for the Oracle user or other configuration files (for example, /etc/oratab,
listener configuration files, and so on) may contain one or more of the following Oracle
configuration variables: ORACLE_HOME, ORACLE_SID, and ORACLE_BASE. If this is the case, it is
important to update them to the new values before proceeding with the upgrade.
9. Run the Oracle database setup tool (that is, opcdbsetup) and make sure to use the appropriate
values for the new database version.
Note: The links from the HPOM library directory to the Oracle client libraries are updated and
point to the new location. If this is not the case, you can recreate them either manually or by
running the /opt/OV/bin/OpC/opcdblink oracle command.
10. Make sure that the new Oracle database is up and running. Depending on whether you have a
local database or a remote/independent database, choose one of the following procedures to
restart the Oracle database:
l Local database:
/sbin/init.d/ovoracle start
Because minor PostgreSQL database versions are always compatible with earlier and later minor
PostgreSQL database versions of the same major PostgreSQL database version, the upgrade
procedure is simple and consists of replacing the executables while the management server is down
and restarting the management server. In this case, the data directory remains unchanged. For details,
see "Upgrading a Minor PostgreSQL Database Version" on the next page.
When upgrading the major PostgreSQL database version, the contents of the data directory change,
which makes this method more complicated than upgrading the minor PostgreSQL database version.
For details, see "Upgrading a Major PostgreSQL Database Version" on page 171.
Note: Because there might be a difference in required operating system versions, patches, and
kernel parameters for different PostgreSQL versions, make sure that your system meets the
requirements stated in the PostgreSQL documentation before you start the upgrade procedure.
It is also recommended that you back up your system before upgrading the PostgreSQL database.
Note: The new PostgreSQL server binaries may be installed at the same location as the old
ones. If you have another PostgreSQL database cluster running on the old PostgreSQL server
binaries, it is highly recommended that you temporarily stop them during the installation of the
new PostgreSQL server binaries.
Note: Make sure that /opt/OV/lib64/PSQL points to the correct location (that is, to the
PostgreSQL library directory). If not, recreate the link manually.
6. Depending on whether you have a managed database or a remote/manual database, choose one
of the following two commands to restart the PostgreSQL database:
l For a managed database:
/etc/init.d/ovopsql start current
l For a remote/manual database:
su - <OS_DBA_user>
<PostgreSQL_binary_directory>/pg_ctl \
-D <PostgreSQL_cluster_directory> start -l logfile
6. Create and configure a PostgreSQL database cluster as described in "Creating and Configuring a
PostgreSQL Database Cluster" on page 100.
Caution: The PostgreSQL database cluster must be created with new PostgreSQL server
binaries.
7. Recommended: Back up the old PostgreSQL database cluster directory and configuration files.
8. Remove the old PostgreSQL database cluster installation by choosing one of the following two
methods:
l Automatically:
As the root user, run the following command:
/opt/OV/bin/OpC/psqlsetup remove
l Manually:
As the root user, follow these steps:
i. Delete the old PostgreSQL database cluster directory:
rm -rf <old_cluster_directory>
ii. Delete the HPOM database configuration file:
rm -f /etc/opt/OV/share/conf/ovdbconf
iii. If the PostgreSQL database cluster is set to autostart, edit the /etc/ovopsql
configuration file, and then delete the old PostgreSQL database cluster directory within
the configuration file.
9. Make sure that the new PostgreSQL database cluster is up and running. Depending on whether
you have a managed database or a remote/manual database, choose one of the following two
commands to restart the PostgreSQL database:
l For a managed database:
/etc/init.d/ovopsql start current
10. Run the PostgreSQL database setup tool (that is, psqlsetup) according to the new installation
and configuration.
11. Upload the configuration data by running the following command:
/opt/OV/bin/OpC/opccfgupld -replace /tmp/cfgdwn
12. If you downloaded all the messages, upload them by following these steps:
Note: To obtain the correct Oracle JDBC connection string, check the $ORACLE_
HOME/network/admin/tnsnames.ora file.
l PostgreSQL:
ovodb.url=jdbc:postgresql://<PostgreSQL_host>:<port>/<DB_name>
For example:
ovodb.url=jdbc:postgresql://avocado.hp.com:5433/openview
1. Deinstall any previous version of the Java GUI from the client system.
For detailed information about deinstalling the Java GUI, see "Deinstalling the Java GUI" on page
132.
2. Install the new version of the Java GUI on the client system.
For details, see "Installing the Java GUI" on page 68.
12.01 is required.
To upgrade the HPE Operations Agent software to a newer version, run the inst.sh script:
/opt/OV/bin/OpC/agtinstall/inst.sh
For detailed information about how to upgrade the HPE Operations Agent software automatically by
using the installation script, see the HPOM Administrator's Reference and the inst.sh(1M) manual
page.
HPOM 9.2x license passwords are interchangeable among the HP-UX on HP Integrity, Sun Solaris,
and Linux operating systems.
Note: It is possible to install license passwords from systems with a different IP address.
However, this does not mean that they are valid on the target system. Validity is checked at
runtime, and license passwords without matching IP addresses are ignored.
Migrating Licenses
To migrate license passwords from an HPOM 9.1x source system to an HPOM 9.2x target system,
follow these steps:
1. Copy the license passwords from the source system to a safe place on the target system.
The license passwords are located in the following file:
/var/opt/OV/shared/server/OprEl/AutoPass/LicFile.txt
For example, to copy the file, run the following command:
scp /var/opt/OV/shared/server/OprEl/AutoPass/LicFile.txt \
<target_sys>:<directory>/HPOM9-LicFile-Backup.txt
2. On the target system, install the license passwords selectively by using the AutoPass GUI
(recommended) or nonselectively by using the ovolicense tool.
Caution: Make sure that you never copy the license passwords directly into the AutoPass
license password file or overwrite the AutoPass license password file with another license
password file, as this could result in license locks.
Caution: If you have installed HPOM on a CentOS Linux platform, you must install the
libXtst package before the ovolicense tool can open the licensing GUI.
b. In the GUI, select Install License Key, and then Install/Restore License Key from file.
c. Click Browse to select the license file copied from the source system.
l Choose a configuration scenario for installing the HP Operations management server and the
database server in a cluster environment.
l Upgrade HPOM in a cluster environment.
l Stop the HP Operations management server in a cluster environment for maintenance.
l Deinstall HPOM from cluster nodes.
For detailed information about the high availability terms, see the HPOM Concepts Guide.
For more information about the administration of the HP Operations management server in a cluster
environment, see the HPOM Administrator's Reference.
Configuration Scenarios
When installing an HP Operations management server and an Oracle database server or a PostgreSQL
database server in a cluster environment, you can choose one of the following configuration scenarios:
group (HARG).
same node.
independent cluster.
When upgrading HPOM from version 9.1x to version 9.2x, follow the procedure described in "Upgrading
from HPOM 9.1x to HPOM 9.2x" on page 154.
Caution: When stopping and starting the HP Operations management server, do not use
cluster-related commands. Use only HPOM commands such as ovc and opcsv.
Caution: Before you run the opcsv -stop, ovc -stop, or ovc -kill command, you must
disable HARG monitoring. Failing to do so results in a failover.
3. Perform the intended action (patch installation, upgrade, maintenance, and so on).
4. Start the HP Operations management server.
5. Enable HARG monitoring by running the following command:
/opt/OV/lbin/ovharg -monitor ov-server enable
Note: Before enabling HARG monitoring, make sure that the HP Operations management
server is running.
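The maintenance sequence described in this section (disable HARG monitoring, stop the server, perform the work, start the server, enable monitoring) can be sketched as a small wrapper. This is a hedged sketch: the enable command path is taken from this section, the matching `disable` form is an assumption that mirrors it, and `do_maintenance` is a hypothetical hook standing in for the actual patch, upgrade, or maintenance work.

```shell
# Hedged sketch of the HARG maintenance sequence. OVHARG and OVC default
# to the paths this guide uses and can be overridden for testing.
OVHARG="${OVHARG:-/opt/OV/lbin/ovharg}"
OVC="${OVC:-/opt/OV/bin/ovc}"

harg_maintenance() {
    harg="${1:-ov-server}"
    "$OVHARG" -monitor "$harg" disable &&   # avoid an unwanted failover
    "$OVC" -stop &&                         # stop the management server
    do_maintenance &&                       # patch, upgrade, or maintenance
    "$OVC" -start &&                        # restart the management server
    "$OVHARG" -monitor "$harg" enable       # re-enable HARG monitoring
}
```

Chaining the steps with `&&` makes the sequence stop at the first failed step, so monitoring is never re-enabled against a server that failed to restart.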
l The HP Operations server HARG ov-server must not be active on this node.
l The virtual host must not be active.
l Shared file systems must not be mounted.
After ensuring that all these requirements are met, deinstall the HP Operations management server as
described in "Deinstalling HPOM" on page 130.
Deinstall the HP Operations management server from this node as described in "Deinstalling HPOM"
on page 130.
For more information about managing HPOM in a cluster environment, see "Managing HPOM in a
Cluster Environment" on page 177.
Installation Requirements
To run HPOM in an HP ServiceGuard environment, your system must meet the following requirements:
l RHEL 6.x
For the most up-to-date list of supported RHEL versions, see the support matrix at the following
location:
https://softwaresupport.hpe.com/km/KM323488
l HP ServiceGuard 11.20
l Make sure to use LVM as the logical volume manager.
Caution: Make sure that the HPE Operations Agent version is 11.04.016 or higher.
For additional requirements about installing HPOM, see "Installation Requirements for the
Management Server" on page 13.
l Task 2: "Installing a Database Server for HPOM in a Cluster Environment" on page 192
l Task 3: "Installing and Configuring the HP Operations Management Server on Cluster Nodes" on
page 198
Caution: You cannot install HPOM simultaneously on all cluster nodes. When the installation
process is completed on one cluster node, begin the installation on the next node, until HPOM is
installed on all the nodes in a cluster environment.
For detailed information about configuration scenarios, see "Configuration Scenarios" on page 177.
Depending on the configuration scenario you choose, see one of the following sections:
l Basic environment: "Preparation Steps for the First Cluster Node in a Basic Environment" below
l Decoupled environment: "Preparation Steps for the First Cluster Node in a Decoupled Environment"
on page 187
l Independent database server: "Preparation Steps for the First Cluster Node in a Cluster
Environment Using an Independent Database Server" on page 190
Caution: When defining a volume group or any of the volumes within it, you can specify a
custom name.
Do not use the minus sign (-) in the volume group and the logical volume names.
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
Caution: When choosing a file system type for the shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share 2 GB
/var/opt/OV/shared/server 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov_vg volume group by running the following command:
/sbin/vgchange -a y ov_vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_db_data \
<database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_db_index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_db_tables \
<postgres_table_data_mount_point>
g. Optional: If you choose to install Oracle database server binaries on a shared disk:
/bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_db_core \
<oracle_binaries_mount_point>
6. Activate the HP Operations management server virtual network IP:
/usr/sbin/cmmodnet -a -i <IP> <subnet>
In this instance, <IP> is the IP address of the virtual host that you previously selected and
<subnet> is the subnet address of the virtual host you previously selected.
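The mount-point preparation in step 3 above can be sketched as a small helper. This is a sketch only: the mount points come from the file systems listed in this section, while the optional prefix argument is an illustration aid (not part of the documented procedure) that lets the helper be exercised outside a real cluster node.

```shell
# Sketch of step 3 above: create the mount points for the shared file
# systems before mounting them. The prefix argument is illustrative.
prepare_mount_points() {
    prefix="${1:-}"
    for mp in /etc/opt/OV/share /var/opt/OV/share /var/opt/OV/shared/server
    do
        mkdir -p "${prefix}${mp}"   # create each mount point if missing
    done
}
```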
c. Define the ov_db_vg volume group consisting of at least one shared disk for the HARG.
d. Define the following volumes within the ov_db_vg volume group:
o ov_volume_db_data
o If the database index directory is on a different volume than the main data directory: ov_
volume_db_index
o If the PostgreSQL database table data directory is on a different volume than the main
cluster directory: ov_volume_db_tables
o If you choose to install Oracle database server binaries on a shared disk: ov_volume_db_
core
Caution: When defining a volume group or any of the volumes within it, you can specify a
custom name.
Do not use the minus sign (-) in the volume group and the logical volume names.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
Caution: When choosing a file system type for the shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share 2 GB
/var/opt/OV/shared/server 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov_vg and ov_db_vg volume groups by running the following commands:
/sbin/vgchange -a y ov_vg
/sbin/vgchange -a y ov_db_vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/ov_db_vg/ov_volume_db_data \
<database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/ov_db_vg/ov_volume_db_index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_db_tables \
<postgres_table_data_mount_point>
g. Optional: If you choose to install Oracle database server binaries on a shared disk:
/bin/mount [-t <FSType>] /dev/ov_db_vg/ov_volume_db_core \
<oracle_binaries_mount_point>
Caution: When defining a volume group or any of the volumes within it, you can specify a
custom name.
Do not use the minus sign (-) in the volume group and the logical volume names.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Caution: When choosing a file system type for the shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share 2 GB
/var/opt/OV/shared/server 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov_vg volume group by running the following command:
/sbin/vgchange -a y ov_vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov_vg/ov_volume_lcore \
/var/opt/OV/shared/server
6. Activate the HP Operations management server virtual network IP:
/usr/sbin/cmmodnet -a -i <IP> <subnet>
In this instance, <IP> is the IP address of the virtual host that you previously selected and
<subnet> is the subnet address of the virtual host you previously selected.
l The HP Operations management server must already be installed and running on one of the cluster
nodes. This enables you to add a local node to the HP Operations management server configuration,
and install and start the HPE Operations Agent software on the local node.
l On the node where HPOM is running, enable the remote shell connection for the root user to the
node where you plan to install the HP Operations management server. You can do this by adding the
following line to the <home_directory>/.rhosts file:
<node> root
You can check if the remote shell is enabled by running the following command:
rsh <active_node> -l root -n ls
A list of the files in the root directory of the node where the HP Operations management server
is running should be displayed.
In more secure environments, you can set up a secure shell (SSH) connection between the node
where you plan to install an HP Operations management server and the node where the
HP Operations management server is running.
For the HP Operations management server installation, you must enable passwordless SSH access
for the root user between these two nodes. During the installation, the ssh and scp
commands are used. Therefore, both commands must be accessible through the system path.
You can check if the secure remote shell is enabled by running the following command:
ssh <active_node> -l root -n ls
The type of connection is detected automatically. A secure connection has a higher priority if both
types of connection are enabled.
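The preference described above (a secure connection wins when both connection types are enabled) can be sketched as a small helper. The function name and the y/n flag values are illustrative assumptions; the priority logic itself follows the text.

```shell
# Sketch of the connection-type priority described above: SSH is
# preferred when both SSH and REMSH are enabled.
pick_remote_shell() {
    ssh_enabled="$1"     # y or n
    remsh_enabled="$2"   # y or n
    if [ "$ssh_enabled" = y ]; then
        echo ssh         # secure connection has higher priority
    elif [ "$remsh_enabled" = y ]; then
        echo remsh
    else
        echo none
    fi
}
```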
l Shared file systems must not be mounted on this cluster node. They are already mounted on the
cluster node where the HP Operations management server is running.
l The virtual IP must not be activated on this node because it is already in use on the node where the
HP Operations management server is running.
In exceptional cases, you may want to install the Oracle database server binaries on a shared disk.
This way only one set of Oracle database server binaries is installed but there is a greater risk of
losing Oracle availability. If you choose the decoupled scenario for installing HPOM, a separate
Oracle client installation is also needed.
l If you use the PostgreSQL database:
The PostgreSQL database server binaries must be installed locally on all nodes. The installation
path must be the same on all cluster nodes.
Table 27 shows which procedure to follow depending on the configuration scenario you choose.
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
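For PostgreSQL, the check described in the note can be sketched as follows. The exit-code convention (0 when the database server runs) and the postmaster.pid test come from the note; the function wrapper and its argument handling are illustrative.

```shell
# Hedged sketch of a PostgreSQL ha_check_db: succeeds (exit status 0)
# when <cluster_dir>/postmaster.pid exists, fails otherwise.
check_psql_db() {
    cluster_dir="${1:?usage: check_psql_db <cluster_dir>}"
    [ -f "$cluster_dir/postmaster.pid" ]
}
```

Installed as /opt/OV/bin/OpC/utils/ha/ha_check_db, such a script would call the check with the actual PostgreSQL cluster directory and exit with its status.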
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Note: When installing and configuring the HP Operations management server, the ORACLE_
BASE and ORACLE_HOME variables must be set to the Oracle database server location.
a. Copy the following configuration files from the Oracle database server location on the shared
disk (<Oracle_server_home>/network/admin/) to the Oracle client location on the local disk
(<Oracle_client_home>/network/admin/):
o listener.ora
o sqlnet.ora
o tnsnames.ora
o tnsnav.ora
b. Modify the ORACLE_HOME variable in the following file so that it contains the location of the
Oracle client software:
/etc/opt/OV/share/conf/ovdbconf
c. Stop the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server stop <local_hostname>
d. Add the following lines to the /etc/sysconfig/ovoracle file:
ORACLE_HOME=<Oracle_Server_Home>
ORACLE_SID=<ORACLE_SID>
export ORACLE_HOME ORACLE_SID
The /etc/sysconfig/ovoracle file is used as a configuration file by the
/etc/init.d/ovoracle script, which is used by the Oracle HARG to start the Oracle database.
Note: Make sure that you use the latest version of the /etc/init.d/ovoracle script.
Copy the file from newconfig by running the following command:
cp /opt/OV/newconfig/OpC/etc/init.d/ovoracle /etc/init.d/ovoracle
e. Remove the existing Oracle client library links from the /opt/OV/lib64 directory and replace
them with the following ones:
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so.11.1
ln -sf <ORACLE_HOME>/lib/libnnz11.so /opt/OV/lib64/libnnz11.so
ln -sf <ORACLE_HOME>/lib/libnnz12.so /opt/OV/lib64/libnnz12.so
f. Start the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server start <local_hostname>
The HP Operations management server will now connect to the Oracle database server through the
Oracle client.
l Additional cluster node
Install the Oracle client on the local disk. All other database configuration steps are performed by the
HP Operations management server installation script.
Note: When installing and configuring the HP Operations management server, the ORACLE_
HOME variable must be set to the Oracle client location.
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Caution: Make sure that cluster node names are the same as hostnames. Otherwise, the
configuration fails.
1. After the ovoconfigure script detects a special environment, provide answers to the following
cluster-specific questions:
Question Instruction
Would you prefer to use REMSH even though SSH is enabled?
Press ENTER to accept the default answer (that is, n).
HA Resource Group name
Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name for
the HARG, and then press ENTER.
HARGs are created during the installation of HPOM. The
ovoinstall script builds the package or the service control file,
and the configuration file automatically. Do not create these files
manually and do not use your own configuration files. If you have
already created them, remove them before starting the installation of HPOM.
The entered HARG name must not be one of the already existing
names.
Server virtual hostname
Enter the short name of the virtual host (for example, virtip1).
Will HPOM run on an Oracle instance (n for PostgreSQL)?
Choose the appropriate option depending on the database type HPOM will run on.
Oracle only: Oracle Base
Choose the Oracle database base directory (the default is /opt/oracle).
PostgreSQL only: PSQL cluster directory
Choose the directory where you want the cluster to be created (it
Database Table Data Mount Point
Choose the mount point where database table data files are stored.
Database Index Mount Point
Choose the mount point where database index files are stored (by default, it is the same as the
database table data mount point).
Cluster preconfiguration . . . . . . . . . . . OK
5. Press ENTER to continue with the database configuration and the server initialization.
Make sure to answer all the questions related to the database configuration and the server
initialization.
6. Press ENTER to continue with the cluster configuration.
An output similar to the following one should appear:
l Subagents configuration
l Certificates backup
Note: To limit the server communication to the virtual IP only, run the following command:
1. After the ovoconfigure script detects a special environment, you are asked if you want to run the
HP Operations management server as an HARG.
Press y followed by ENTER.
The script checks the remote shell connection and the secure remote shell connection, and then
the following question appears:
Would you prefer to use REMSH even though SSH is enabled?
2. Press ENTER to accept the default answer (that is, n).
You are prompted to enter the HARG name.
3. Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name
for the HARG.
Caution: The entered HARG must be configured and running on the first cluster node.
Cluster preconfiguration . . . . . . . . . . . . OK
6. Press ENTER to continue with the server final configuration that consists of the following:
l Management server policy group assignment
Log Files
For details about the cluster-specific installation, check the following log files:
l /var/opt/OV/log/OpC/mgmt_sv/installation.log.verbose
This log file contains information about the success of the installation and any problems
encountered during the installation.
l /var/opt/OV/hacluster/<HARG_name>/trace.log, /var/opt/OV/hacluster/<HARG_
name>/error.log, /var/log/messages, and /usr/local/cmcluster/run/log/<HARG>.log.
These log files contain information about managing the HARG.
Note: The size of the HARG trace.log file is limited. When the maximum file size is reached,
trace.log is moved into trace.log.old and the new information is written into a new trace.log
file.
You can change the maximum size of the trace.log file by adding the following line to the
/var/opt/OV/hacluster/<HARG_name>/settings file:
TRACING_FILE_MAX_SIZE=<maximum_size_in_kBytes>
For example:
TRACING_FILE_MAX_SIZE=7000
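The rotation behavior described in the note above can be sketched as a small helper. The file names and the kilobyte threshold follow the note; the helper function itself is illustrative, not the actual HPOM implementation.

```shell
# Sketch of the trace.log rotation described above: when the file
# reaches the maximum size (in kilobytes), it is moved to
# trace.log.old and a new, empty trace.log is started.
rotate_trace_log() {
    file="$1"
    max_kb="$2"
    size_kb=$(( $(wc -c < "$file") / 1024 ))
    if [ "$size_kb" -ge "$max_kb" ]; then
        mv "$file" "$file.old"   # keep the old trace as trace.log.old
        : > "$file"              # start a fresh, empty trace.log
    fi
}
```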
For more information about managing HPOM in a cluster environment, see "Managing HPOM in a
Cluster Environment" on page 177.
Installation Requirements
To run HPOM in a Red Hat Cluster Suite environment, your system must meet the following
requirements:
l Task 2: "Installing a Database Server for HPOM in a Cluster Environment" on page 214
l Task 3: "Installing and Configuring the HP Operations Management Server on Cluster Nodes" on
page 220
Caution: You cannot install HPOM simultaneously on all cluster nodes. When the installation
process is completed on one cluster node, begin the installation on the next node, until HPOM is
installed on all the nodes in a cluster environment.
For detailed information about configuration scenarios, see "Configuration Scenarios" on page 177.
Depending on the configuration scenario you choose, see one of the following sections:
l Basic environment: "Preparation Steps for the First Cluster Node in a Basic Environment" below
l Decoupled environment: "Preparation Steps for the First Cluster Node in a Decoupled Environment"
on page 208
l Independent database server: "Preparation Steps for the First Cluster Node in a Cluster
Environment Using an Independent Database Server" on page 211
Note: When defining a volume group or any of the volumes within the volume group, you can
specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
Caution: When choosing a file system type for the shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share 2 GB
/var/opt/OV/shared/server 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov-vg volume group by running the following command:
l For RHEL 5.x:
/usr/sbin/vgchange -a y ov-vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-db-data <database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-db-index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-db-tables \
<postgres_table_data_mount_point>
g. Optional: If you choose to install Oracle database server binaries on a shared disk:
/bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-db-core \
<oracle_binaries_mount_point>
6. Activate the HP Operations management server virtual network IP:
ip addr add <IP/subnet> dev <iface>
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
c. Define the ov-db-vg volume group consisting of at least one shared disk for the HARG.
Note: When defining a volume group or any of the volumes within the volume group, you can
specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
Caution: When choosing a file system type of shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share: 2 GB
/var/opt/OV/shared/server: 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov-vg and ov-db-vg volume groups by running the following commands:
l For RHEL 5.x:
/usr/sbin/vgchange -a y ov-vg
/usr/sbin/vgchange -a y ov-db-vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/ov-db-vg/ov-volume-db-data \
<database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/ov-db-vg/ov-volume-db-index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/ov-db-vg/ov-volume-db-tables \
<postgres_table_data_mount_point>
Note: When defining a volume group or any of the volumes within the volume group, you can
specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Caution: When choosing a file system type of shared file systems, keep in mind that GFS
and GFS2 are not supported with HPOM.
/etc/opt/OV/share: 2 GB
/var/opt/OV/shared/server: 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Start the ov-vg volume group by running the following command:
l For RHEL 5.x:
/usr/sbin/vgchange -a y ov-vg
5. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-var /var/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-etc /etc/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/ov-vg/ov-volume-lcore \
/var/opt/OV/shared/server
6. Activate the HP Operations management server virtual network IP:
ip addr add <IP/subnet> dev <iface>
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
l The HP Operations management server must already be installed and running on one of the cluster
nodes. This enables you to add a local node to the HP Operations management server configuration,
and install and start the HP Operations agent software on the local node.
l On the node where HPOM is running, enable the remote shell connection for the root user to the
node where you plan to install the HP Operations management server. You can do this by adding the
following line to the <home_directory>/.rhosts file:
<node> root
You can check if the remote shell is enabled by running the following command:
rsh <active_node> -l root -n ls
A list of the files in the root directory of the node where the HP Operations management server is
running should be displayed.
In more secure environments, you can set up a secure shell (SSH) connection between the node
where you plan to install an HP Operations management server and the node where the
HP Operations management server is running.
For the HP Operations management server installation, you must enable passwordless SSH access
for the root user between these two nodes. During the installation, the ssh and scp commands are
used. Therefore, both commands must be accessible through the system search path.
You can check if the secure remote shell is enabled by running the following command:
ssh <active_node> -l root -n ls
The type of connection is detected automatically. A secure connection has a higher priority if both
types of connection are enabled.
l Shared file systems must not be mounted on this cluster node. They are already mounted on the
cluster node where the HP Operations management server is running.
l The virtual IP must not be activated on this node because it is already used on the node where the
HP Operations management server is running.
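The automatic connection-type detection described above could be approximated by a probe like the following (a sketch only, not HPOM's actual logic; the function name is hypothetical):

```shell
# Sketch: prefer SSH, fall back to REMSH, report "none" if neither works.
# probe_remote is a hypothetical helper, not part of HPOM.
probe_remote() {
    node=$1
    if ssh -n -o BatchMode=yes -o ConnectTimeout=5 -l root "$node" ls >/dev/null 2>&1; then
        echo ssh      # secure connection is preferred when both are enabled
    elif rsh -l root "$node" ls >/dev/null 2>&1; then
        echo remsh
    else
        echo none
    fi
}
```

A result of none would mean that neither connection type is enabled for the root user between the two nodes.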
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
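For PostgreSQL, the check described in this note can be sketched as follows (a minimal sketch; the function name is hypothetical, and the cluster directory is passed in by the caller):

```shell
# Sketch of the database status check for PostgreSQL: the server is
# considered running while <cluster_dir>/postmaster.pid exists.
check_db() {
    [ -f "$1/postmaster.pid" ]   # exit status 0 = running, 1 = not running
}

# In a ha_check_db script, the script's exit code would be the check's result:
#   check_db "<cluster_dir>"
#   exit $?
```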
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Note: When installing and configuring the HP Operations management server, the ORACLE_
BASE and ORACLE_HOME variables must be set to the Oracle database server location.
b. Modify the ORACLE_HOME variable so that it contains the location of the Oracle client
software. The variable is set in the following file:
/etc/opt/OV/share/conf/ovdbconf
c. Stop the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server stop <local_hostname>
d. Add the following lines to the /etc/sysconfig/ovoracle file:
ORACLE_HOME=<Oracle_Server_Home>
ORACLE_SID=<ORACLE_SID>
export ORACLE_HOME ORACLE_SID
The /etc/sysconfig/ovoracle file is used as a configuration file by the
/etc/init.d/ovoracle script, which is used by the Oracle HARG to start the Oracle database.
Note: Make sure that you use the latest version of the /etc/init.d/ovoracle script.
Copy the file from newconfig by running the following command:
cp /opt/OV/newconfig/OpC/etc/init.d/ovoracle /etc/init.d/ovoracle
e. Remove the existing Oracle client library links from the /opt/OV/lib64 directory and replace
them with the following ones:
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so.11.1
ln -sf <ORACLE_HOME>/lib/libnnz11.so /opt/OV/lib64/libnnz11.so
ln -sf <ORACLE_HOME>/lib/libnnz12.so /opt/OV/lib64/libnnz12.so
f. Start the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server start <local_hostname>
The HP Operations management server will now connect to the Oracle database server through the
Oracle client.
l Additional cluster node
Install the Oracle client on the local disk. All other database configuration steps are performed by the
HP Operations management server installation script.
Note: When installing and configuring the HP Operations management server, the ORACLE_
HOME variable must be set to the Oracle client location.
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Caution: Make sure that cluster node names are the same as hostnames. Otherwise, the
configuration fails.
1. After the ovoconfigure script detects a special environment, provide answers to the following
cluster-specific questions:
Would you prefer to use REMSH even though SSH is enabled?
Press ENTER to accept the default answer (that is, n).
HA Resource Group name
Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name
for the HARG, and then press ENTER.
HARGs are created during the installation of HPOM. The ovoinstall script builds the package
or the service control file, and the configuration file automatically. Do not create these files
manually and do not use your own configuration files. If you already did so, remove them
before starting the installation of HPOM.
The entered HARG name must not be one of the already existing names.
Server virtual hostname
Enter the short name of the virtual host (for example, virtip1).
Will HPOM run on an Oracle instance (n for PostgreSQL)?
Choose the appropriate option depending on the database type HPOM will run on.
Oracle only: Oracle Base
Choose the Oracle database base directory (the default is /opt/oracle).
PostgreSQL only: PSQL cluster directory
Choose the directory where you want the cluster to be created (it must be empty) or where the
cluster was created by using the psqlcluster tool.
Database Table Data Mount Point
Choose the mount point where database table data files are stored.
Database Index Mount Point
Choose the mount point where database index files are stored (by default, it is the same as the
database table data mount point).
b. Type the desired shared file system mount point, and then press ENTER.
Otherwise, accept the default value n by pressing ENTER.
The ovoconfigure script continues with checking virtual hosts.
3. If you want to add a new virtual host, follow these steps:
a. Press y followed by ENTER.
You are prompted to add the virtual hostname.
b. Type the desired virtual hostname (for example, virtip3), and then press ENTER.
Otherwise, accept the default value n by pressing ENTER.
The summary of all shared file systems and virtual hosts is displayed, after which the
ovoconfigure script asks you if you want to continue.
4. Press ENTER.
An output similar to the following one should appear:
Cluster preconfiguration . . . . . . . . . . . OK
5. Press ENTER to continue with the database configuration and the server initialization.
Make sure to answer all the questions related to the database configuration and the server
initialization.
6. Press ENTER to continue with the cluster configuration.
An output similar to the following one should appear:
l Subagents configuration
l Certificates backup
Note: To limit the server communication to the virtual IP only, run the following command:
1. After the ovoconfigure script detects a special environment, you are asked if you want to run the
HP Operations management server as an HARG.
Press y followed by ENTER.
The script checks the remote shell connection and the secure remote shell connection, and then
the following question appears:
Would you prefer to use REMSH even though SSH is enabled?
2. Press ENTER to accept the default answer (that is, n).
You are prompted to enter the HARG name.
3. Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name
for the HARG, and then press ENTER.
Caution: The entered HARG must be configured and running on the first cluster node.
Cluster preconfiguration . . . . . . . . . . . . OK
6. Press ENTER to continue with the server final configuration that consists of the following:
l Management server policy group assignment
Log Files
For details about the cluster-specific installation, check the following log files:
l /var/opt/OV/log/OpC/mgmt_sv/installation.log.verbose
This log file contains information about the success of the installation and any problems
encountered during the installation.
l /var/opt/OV/hacluster/<HARG_name>/trace.log, /var/opt/OV/hacluster/<HARG_
name>/error.log, and /var/log/messages
These files contain the information about managing the HARG.
Note: The size of the HARG trace.log file is limited. When the maximum file size is reached,
trace.log is moved into trace.log.old and the new information is written into a new trace.log
file.
You can change the maximum size of the trace.log file by adding the following line to the
/var/opt/OV/hacluster/<HARG_name>/settings file:
TRACING_FILE_MAX_SIZE=<maximum_size_in_kBytes>
For example:
TRACING_FILE_MAX_SIZE=7000
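The setting above can be applied non-interactively with a small helper like this (a sketch; the function name is hypothetical, and the HARG name in the usage line is an example):

```shell
# Append TRACING_FILE_MAX_SIZE to a HARG settings file unless already present.
set_trace_limit() {
    settings=$1   # path to the HARG settings file
    max_kb=$2     # maximum trace.log size in kBytes
    grep -q '^TRACING_FILE_MAX_SIZE=' "$settings" 2>/dev/null ||
        echo "TRACING_FILE_MAX_SIZE=$max_kb" >> "$settings"
}

# Usage (HARG name is an example):
#   set_trace_limit /var/opt/OV/hacluster/ov-server/settings 7000
```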
For more information about managing HPOM in a cluster environment, see "Managing HPOM in a
Cluster Environment" on page 177.
Installation Requirements
To run HPOM in a Veritas cluster environment, your system must meet the following requirements:
l Task 2: "Installing a Database Server for HPOM in a Cluster Environment" on page 236
l Task 3: "Installing and Configuring the HP Operations Management Server on Cluster Nodes" on
page 242
Caution: You cannot install HPOM simultaneously on all cluster nodes. When the installation
process is completed on one cluster node, begin the installation on the next node, until HPOM is
installed on all the nodes in a cluster environment.
For detailed information about configuration scenarios, see "Configuration Scenarios" on page 177.
Depending on the configuration scenario you choose, see one of the following sections:
l Basic environment: "Preparation Steps for the First Cluster Node in a Basic Environment" on the
next page
l Decoupled environment: "Preparation Steps for the First Cluster Node in a Decoupled Environment"
on page 230
l Independent database server: "Preparation Steps for the First Cluster Node in a Cluster
Environment Using an Independent Database Server" on page 234
Note: When defining a disk device group or any of the volumes within the disk device group,
you can specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
/etc/opt/OV/share: 2 GB
/var/opt/OV/shared/server: 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Import the ov-dg disk device group on the current node by running the following command:
/usr/sbin/vxdg import ov-dg
5. Start the volumes by running the following command:
/usr/sbin/vxvol -g ov-dg startall
6. Check if all the volumes of the ov-dg disk device group are started by running the following
command:
/usr/sbin/vxinfo -g ov-dg
If the volumes are started, an output similar to the following one appears:
ov-volume-var Started
ov-volume-etc Started
ov-volume-lcore Started
ov-volume-db-data Started
ov-volume-db-index Started
ov-volume-db-core Started
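The check above can be automated with a small helper that scans the vxinfo output (a sketch; the function name is hypothetical and the two-column output format is assumed to match the sample shown):

```shell
# Succeed only if every non-empty line of the vxinfo output ends in "Started".
all_started() {
    echo "$1" | awk 'NF && $NF != "Started" { bad = 1 } END { exit bad }'
}

# Usage (sketch):
#   out=$(/usr/sbin/vxinfo -g ov-dg)
#   all_started "$out" || echo "ov-dg: not all volumes are started" >&2
```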
7. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-etc \
/etc/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-var \
/var/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-db-data \
<database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-db-index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-db-tables \
<postgres_table_data_mount_point>
g. Optional: If you choose to install Oracle database server binaries on a shared disk:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-db-core \
<oracle_binaries_mount_point>
8. Activate the HP Operations management server virtual network IP:
ip addr add <IP/subnet> dev <iface>
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
c. Define the ov-db-dg disk device group consisting of at least one shared disk for the HARG.
d. Define the following volumes within the ov-db-dg disk device group:
o ov-volume-db-data
o If the database index directory is on a different volume than the main data directory:
ov-volume-db-index
o If the PostgreSQL database table data directory is on a different volume than the main
cluster directory: ov-volume-db-tables
o If you choose to install Oracle database server binaries on a shared disk:
ov-volume-db-core
Note: When defining a disk device group or any of the volumes within the disk device
group, you can specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
Note: Oracle only: You may select an alternative mount point. The default is the following:
/u01/oradata/<ORACLE_SID>
In this instance, <ORACLE_SID> is the value of the ORACLE_SID variable used for the
configuration of the HP Operations management server database. It is usually set to
openview.
l If the database index directory is on a different volume than the main data directory:
HP Operations management server database index files
l If the PostgreSQL database table data directory is on a different volume than the main cluster
directory: PostgreSQL database table data files
l If you choose to install Oracle database server binaries on a shared disk: Oracle database
server binaries (equal to the value of the ORACLE_BASE variable)
/etc/opt/OV/share: 2 GB
/var/opt/OV/shared/server: 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Import the ov-dg and ov-db-dg disk device groups on the current node by running the following
commands:
/usr/sbin/vxdg import ov-dg
/usr/sbin/vxdg import ov-db-dg
5. Start the volumes by running the following commands:
/usr/sbin/vxvol -g ov-dg startall
/usr/sbin/vxvol -g ov-db-dg startall
6. Check the following:
a. Check if all the volumes of the ov-dg disk device group are started by running the following
command:
/usr/sbin/vxinfo -g ov-dg
If the volumes are started, an output similar to the following one appears:
ov-volume-var Started
ov-volume-etc Started
ov-volume-lcore Started
b. Check if all the volumes of the ov-db-dg disk device group are started by running the
following command:
/usr/sbin/vxinfo -g ov-db-dg
If the volumes are started, an output similar to the following one appears:
ov-volume-db-data Started
ov-volume-db-index Started
ov-volume-db-core Started
7. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-etc \
/etc/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-var \
/var/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-lcore \
/var/opt/OV/shared/server
d. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-db-dg/ov-volume-db-data \
<database_mount_point>
e. Optional: If the database index directory is on a different volume than the main data directory:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-db-dg/ov-volume-db-index \
<database_index_mount_point>
f. Optional: If the PostgreSQL database table data directory is on a different volume than the
main cluster directory:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-db-dg/ov-volume-db-tables \
<postgres_table_data_mount_point>
g. Optional: If you choose to install Oracle database server binaries on a shared disk:
/bin/mount [-t <FSType>] /dev/vx/dsk/ov-db-dg/ov-volume-db-core \
<oracle_binaries_mount_point>
8. Activate the HP Operations management server virtual network IP:
ip addr add <IP/subnet> dev <iface> label <iface>:1
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
9. Activate the database virtual network IP:
ip addr add <IP/subnet> dev <iface> label <iface>:2
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
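As an illustration only, with hypothetical values (substitute your own addresses and interface), steps 8 and 9 could look like this; the labels keep the two aliases distinguishable in ip addr output:

```shell
# Hypothetical values for illustration; replace with your own.
SERVER_VIP=192.168.10.50/24   # management server virtual IP with prefix
DB_VIP=192.168.10.51/24       # database virtual IP with prefix
IFACE=eth0                    # interface hosting both addresses

# Run as root:
#   ip addr add "$SERVER_VIP" dev "$IFACE" label "${IFACE}:1"
#   ip addr add "$DB_VIP"     dev "$IFACE" label "${IFACE}:2"
```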
Note: When defining a disk device group or any of the volumes within the disk device group,
you can specify an optional name.
2. Make sure that the following shared file systems are available:
l /etc/opt/OV/share
l /var/opt/OV/share
l /var/opt/OV/shared/server
/etc/opt/OV/share: 2 GB
/var/opt/OV/shared/server: 2.5 GB
3. Prepare mount points for the shared file systems listed in the previous step.
4. Import the ov-dg disk device group on the current node by running the following command:
/usr/sbin/vxdg import ov-dg
5. Start the volumes by running the following command:
/usr/sbin/vxvol -g ov-dg startall
6. Check if all the volumes of the ov-dg disk device group are started by running the following
command:
/usr/sbin/vxinfo -g ov-dg
If the volumes are started, an output similar to the following one appears:
ov-volume-var Started
ov-volume-etc Started
ov-volume-lcore Started
7. Mount the shared file systems on the prepared mount points as follows:
a. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-etc \
/etc/opt/OV/share
b. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-var \
/var/opt/OV/share
c. /bin/mount [-t <FSType>] /dev/vx/dsk/ov-dg/ov-volume-lcore \
/var/opt/OV/shared/server
8. Activate the HP Operations management server virtual network IP:
ip addr add <IP/subnet> dev <iface>
In this instance, <IP> is the IP address of the virtual host that you previously selected, <subnet>
is the subnet address of the virtual host you previously selected, and <iface> is the interface that
hosts the new IP address.
l The HP Operations management server must already be installed and running on one of the cluster
nodes. This enables you to add a local node to the HP Operations management server configuration,
and install and start the HP Operations agent software on the local node.
l On the node where HPOM is running, enable the remote shell connection for the root user to the
node where you plan to install the HP Operations management server. You can do this by adding the
following line to the <home_directory>/.rhosts file:
<node> root
You can check if the remote shell is enabled by running the following command:
rsh <active_node> -l root -n ls
A list of the files in the root directory of the node where the HP Operations management server is
running should be displayed.
In more secure environments, you can set up a secure shell (SSH) connection between the node
where you plan to install an HP Operations management server and the node where the
HP Operations management server is running.
For the HP Operations management server installation, you must enable passwordless SSH access
for the root user between these two nodes. During the installation, the ssh and scp commands are
used. Therefore, both commands must be accessible through the system search path.
You can check if the secure remote shell is enabled by running the following command:
ssh <active_node> -l root -n ls
The type of connection is detected automatically. A secure connection has a higher priority if both
types of connection are enabled.
l Shared file systems must not be mounted on this cluster node. They are already mounted on the
cluster node where the HP Operations management server is running.
l The virtual IP must not be activated on this node because it is already used on the node where the
HP Operations management server is running.
losing Oracle availability. If you choose the decoupled scenario for installing HPOM, a separate
Oracle client installation is also needed.
l If you use the PostgreSQL database:
The PostgreSQL database server binaries must be installed locally on all nodes. The installation
path must be the same on all cluster nodes.
Table 35 shows which procedure to follow depending on the configuration scenario you choose.
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Note: When installing and configuring the HP Operations management server, the ORACLE_
BASE and ORACLE_HOME variables must be set to the Oracle database server location.
a. Copy the following configuration files from the Oracle database server location on the shared
disk (<Oracle_server_home>/network/admin/) to the Oracle client location on the local disk
(<Oracle_client_home>/network/admin/):
o listener.ora
o sqlnet.ora
o tnsnames.ora
o tnsnav.ora
b. Modify the ORACLE_HOME variable so that it contains the location of the Oracle client
software. The variable is set in the following file:
/etc/opt/OV/share/conf/ovdbconf
c. Stop the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server stop <local_hostname>
d. Add the following lines to the /etc/sysconfig/ovoracle file:
ORACLE_HOME=<Oracle_Server_Home>
ORACLE_SID=<ORACLE_SID>
export ORACLE_HOME ORACLE_SID
The /etc/sysconfig/ovoracle file is used as a configuration file by the
/etc/init.d/ovoracle script, which is used by the Oracle HARG to start the Oracle database.
Note: Make sure that you use the latest version of the /etc/init.d/ovoracle script.
Copy the file from newconfig by running the following command:
cp /opt/OV/newconfig/OpC/etc/init.d/ovoracle /etc/init.d/ovoracle
e. Remove the existing Oracle client library links from the /opt/OV/lib64 directory and replace
them with the following ones:
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so
ln -sf <ORACLE_HOME>/lib/libclntsh.so /opt/OV/lib64/libclntsh.so.11.1
ln -sf <ORACLE_HOME>/lib/libnnz11.so /opt/OV/lib64/libnnz11.so
ln -sf <ORACLE_HOME>/lib/libnnz12.so /opt/OV/lib64/libnnz12.so
f. Start the HP Operations management server as an HARG by running the following command:
/opt/OV/bin/ovharg_config ov-server start <local_hostname>
The HP Operations management server will now connect to the Oracle database server through the
Oracle client.
l Additional cluster node
Install the Oracle client on the local disk. All other database configuration steps are performed by the
HP Operations management server installation script.
Note: When installing and configuring the HP Operations management server, the ORACLE_
HOME variable must be set to the Oracle client location.
Note: After the database server installation, on all HP Operations management server cluster
nodes, create a script or a binary so that the HP Operations management server can determine
the status of the database:
/opt/OV/bin/OpC/utils/ha/ha_check_db
The exit code of this script or binary must be 0 if the database server runs, or other than 0 if it
does not run.
PostgreSQL only: You can determine if the PostgreSQL server is up and running by checking if
the <cluster_dir>/postmaster.pid file exists.
When the following questions appear during the independent database server configuration, make sure
that you answer as follows:
Question Answer
Caution: Make sure that cluster node names are the same as hostnames. Otherwise, the
configuration fails.
1. After the ovoconfigure script detects a special environment, provide answers to the following
cluster-specific questions:
Would you prefer to use REMSH even though SSH is enabled?
Press ENTER to accept the default answer (that is, n).
HA Resource Group name
Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name
for the HARG, and then press ENTER.
HARGs are created during the installation of HPOM. The ovoinstall script builds the package
or the service control file, and the configuration file automatically. Do not create these files
manually and do not use your own configuration files. If you already did so, remove them
before starting the installation of HPOM.
The entered HARG name must not be one of the already existing names.
Server virtual hostname
Enter the short name of the virtual host (for example, virtip1).
Will HPOM run on an Oracle instance (n for PostgreSQL)?
Choose the appropriate option depending on the database type HPOM will run on.
Oracle only: Oracle Base
Choose the Oracle database base directory (the default is /opt/oracle).
PostgreSQL only: PSQL cluster directory
Choose the directory where you want the cluster to be created (it must be empty) or where the
cluster was created by using the psqlcluster tool.
Database Table Data Mount Point
Choose the mount point where database table data files are stored.
Database Index Mount Point
Choose the mount point where database index files are stored (by default, it is the same as the
database table data mount point).
Cluster preconfiguration . . . . . . . . . . . OK
5. Press ENTER to continue with the database configuration and the server initialization.
Make sure to answer all the questions related to the database configuration and the server
initialization.
6. Press ENTER to continue with the cluster configuration.
An output similar to the following one should appear:
l Subagents configuration
l Certificates backup
Note: To limit the server communication to the virtual IP only, run the following command:
1. After the ovoconfigure script detects a special environment, you are asked if you want to run the
HP Operations management server as an HARG.
Press y followed by ENTER.
The script checks the remote shell connection and the secure remote shell connection, and then
the following question appears:
Would you prefer to use REMSH even though SSH is enabled?
2. Press ENTER to accept the default answer (that is, n).
You are prompted to enter the HARG name.
3. Press ENTER to accept the default answer (that is, ov-server), or specify an alternative name
for the HARG, and then press ENTER.
Caution: The entered HARG must be configured and running on the first cluster node.
Cluster preconfiguration . . . . . . . . . . . . OK
6. Press ENTER to continue with the server final configuration that consists of the following:
l Management server policy group assignment
Log Files
For details about the cluster-specific installation, check the following log files:
l /var/opt/OV/log/OpC/mgmt_sv/installation.log.verbose
This log file contains information about the success of the installation and any problems
encountered during the installation.
l /var/opt/OV/hacluster/<HARG_name>/trace.log, /var/opt/OV/hacluster/<HARG_
name>/error.log, and /var/VRTSvcs/log/engine_A.log
These log files contain the information about managing the HARG.
Note: The size of the HARG trace.log file is limited. When the maximum file size is reached,
trace.log is moved into trace.log.old and the new information is written into a new trace.log
file.
You can change the maximum size of the trace.log file by adding the following line to the
/var/opt/OV/hacluster/<HARG_name>/settings file:
TRACING_FILE_MAX_SIZE=<maximum_size_in_kBytes>
For example:
TRACING_FILE_MAX_SIZE=7000