HP OpenView Performance Agent for UNIX
User's Manual
The only warranties for HP products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional
warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent
with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and
Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard
commercial license.
Copyright Notices
Trademark Notices
Support
You can visit the HP OpenView Support web site at:
http://www.hp.com/go/hpsoftwaresupport
HP OpenView online support provides an efficient way to access interactive technical support tools. As a
valued support customer, you can benefit by using the support site to:
• Search for knowledge documents of interest
• Submit and track support cases and enhancement requests
• Download software patches
• Manage support contracts
• Look up HP support contacts
• Review information about available services
• Enter into discussions with other software customers
• Research and register for software training
Most of the support areas require that you register as an HP Passport user and sign in. Many also require a
support contract.
To find more information about access levels, go to:
www.hp.com/managementsoftware/access_level
To register for an HP Passport ID, go to:
www.managementsoftware.hp.com/passport-registration.html
Contents
Scan Report Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Initial Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Initial Parm File Application Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Chronological Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Summaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4 Utility Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
analyze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
checkdef . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
licheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
logfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
parmfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
quit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
resize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
show . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
cpu. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
extract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
global. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
licheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
logfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
lvolume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
monthly. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
netif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
quit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
sh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
show . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
weekdays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
weekly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
yearly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Common Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
ALARM Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
ALERT Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
EXEC Statement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
PRINT Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
IF Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
LOOP Statement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
INCLUDE Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
USE Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
VAR Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
ALIAS Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
SYMPTOM Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Alarm Definition Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Customizing Alarm Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
A Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Viewing MPE Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Viewing and Printing Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Viewing Documents on the Web. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Adobe Acrobat Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
ASCII Text Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
1 Overview of OpenView Performance Agent
OV Performance Agent version 4.0 and later connects to OV Performance Manager (OVPM)
4.0 and later using the HTTP data communication protocol. OVPM 3.x (PerfView) connects to
OV Performance Agent using the DCE or NCS data communication protocol on all UNIX
platforms except OV Performance Agent for Linux and Sun Solaris 10 on x86.
How OpenView Performance Agent Works
OpenView Performance Agent (OVPA) collects, summarizes, timestamps, and detects alarm
conditions on current and historical resource data across your system. OVPA provides
performance, resource, and end-to-end transaction response time measurements, and
supports network and database measurement information.
Data collected outside OVPA can be integrated using data source integration (DSI)
capabilities. For example, network, database, and your own application data can be brought in
through DSI and treated the same as data collected by OV Performance Agent. All DSI data is
logged and time-stamped, and can be alarmed on. (For details, see the HP OpenView
Performance Agent for UNIX Data Source Integration Guide.)
The data collected or received by OVPA can be analyzed using spreadsheet programs, analysis
tools such as HP OV Performance Manager, and other third-party analysis products.
The comprehensive data logged by OV Performance Agent allows you to:
• Characterize the workloads in the environment.
• Analyze resource usage for load balancing.
• Perform trend analysis to isolate and identify bottlenecks.
• Perform service-level management based on transaction response time.
• Perform capacity planning.
• Respond to alarm conditions.
• Solve system management problems before they arise.
OV Performance Agent gathers comprehensive and continuous information on system activity
without imposing significant overhead on the system. The design also provides scope for
customization. You can accept default configurations or set parameters to collect data for
specific conditions.
You can configure the data collection interval in the collection parameters (parm)
file. In this release, OVPA is also supported in virtualized environments: HPVM, AIX LPAR,
and VMware ESX Server.
OVPA for Linux and Sun Solaris 10 on x86 supports only the HTTPS data communication
mechanism.
1. To enable the DCE-based alarm generator, alarmgen, stop OV Performance Agent, rename the
perfalarm executable to perfalarm.old, and restart OV Performance Agent using the mwa
script.
Figure 2 Component interaction with DCE or NCS mode enabled
Data Communication
OV Performance Agent uses the coda daemon or a set of repository servers that provide
previously collected data to the alarm generator and the OV Performance Manager analysis
product. The coda daemon uses the HTTP data communication mechanism, and the
repository servers use the DCE or NCS mechanism. If both HTTP and DCE/NCS data
communication mechanisms are enabled, OVPA uses both the coda daemon and the set of
repository servers. For more information on configuring the data communication mechanism,
see the HP OpenView Performance Agent Installation and Configuration Guide.
Each data source consists of a single log file set. The data source list that coda accesses is
maintained in the datasources configuration file that resides in the /var/opt/OV/conf/
perf/ directory. The data source list that the repository servers access is maintained in the
perflbd.rc file that resides in the /var/opt/perf/ directory. The perflbd.rc file is
maintained as a symbolic link to the datasources file.
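The relationship between the two files can be sketched in a scratch directory so it runs anywhere. The real locations are /var/opt/OV/conf/perf/datasources and /var/opt/perf/perflbd.rc, as described above; the entry format shown for the SCOPE data source is an assumption for illustration, so check the datasources file shipped with your installation for the exact syntax.

```shell
# Reproduce the datasources/perflbd.rc layout in a scratch directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/conf/perf" "$tmp/perf"

# An illustrative entry for the default SCOPE data source (the exact
# entry syntax on your system may differ).
cat > "$tmp/conf/perf/datasources" <<'EOF'
DATASOURCE=SCOPE LOGFILE=/var/opt/perf/datafiles/logglob
EOF

# perflbd.rc is maintained as a symbolic link to the datasources file,
# so coda and the repository servers read the same data source list.
ln -s "$tmp/conf/perf/datasources" "$tmp/perf/perflbd.rc"

cat "$tmp/perf/perflbd.rc"
rm -rf "$tmp"
```

Because perflbd.rc is a symbolic link, an entry added to datasources is immediately visible through both paths.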
There is a repository server for each specific data source such as scopeux log files or DSI log
files. When you first start up OV Performance Agent after installation, a default data source
named SCOPE is already configured and provides a scopeux log file set.
If you want to add other data sources, you can configure them in the datasources file. If you
no longer want to view the OVPA or DSI log file data from OV Performance Manager, or
process alarms for the log file, you can modify the datasources file to remove the data
source and the path to the log file set. When you restart the coda daemon or the repository
server, it reads the datasources file and makes the data for each configured data source
available to analysis tools over the data communication linkages.
You can also remove the log file set if you no longer need the data. If you remove the log file
set but do not remove the data source from datasources, coda or the repository server will
skip the data source.
You might also choose to stop logging DSI data to a log file set but keep the coda daemon or
the repository server open so you can view the historical data in OV Performance Manager. In
this case, stop the dsilog process but do not delete the data source from the datasources
file.
OpenView GlancePlus
GlancePlus (or Glance) is an online diagnostic tool that displays current performance data
directly to a user terminal or workstation. It is designed to assist you in identifying and
troubleshooting system performance problems as they occur.
OpenView Reporter
OV Reporter creates web-based reports from the data of targeted systems it "discovers." A
system can be discovered if it is running OpenView agent and subagent software such as
OV Performance Agent. Reporter can also generate reports on systems managed by OV
Operations. After Reporter has run through its discovery, it gathers data based on pre-defined
and user-specified lists of metrics, then formats the collected data into web page reports.
OpenView Operations
OV Operations (OVO) also displays and analyzes alarm information sent by OV Performance
Agent. OVO is a distributed client-server software solution designed to help system
administrators detect, solve, and prevent problems occurring in networks, systems, and
applications in any enterprise. OVO is a scalable and flexible solution that can be configured
to meet the requirements of any information technology (IT) organization and its users.
For more information about any of these products, see the product documentation on the HP
OpenView Manuals web site at:
http://ovweb.external.hp.com/lpe/doc_serv
Select <product name> from the product list box, select the release version, select the OS, and
select the manual title. Click [Open] to view the document online, or click [Download] to place
the file on your computer.
Chapter Summary and Related Documentation
The information in this manual is organized as follows:
• The scopeux data collector is described in Chapter 2, Managing Data Collection.
• The repository servers, coda, and data sources are described later in this chapter and in the HP
OpenView Performance Agent Installation and Configuration Guide.
• Alarm generation components are described in Chapter 7, Performance Alarms.
• Data source integration (DSI), including dsilog and other DSI components, is described
in the HP OpenView Performance Agent for UNIX Data Source Integration Guide.
Introduction
This chapter provides instructions for managing the following data collection activities
involved in using OV Performance Agent:
• The scopeux data collector, for collecting and recording performance data
• The collection parameters (parm) file and its parameters
• Stopping and restarting data collection
• Effective data collection management, by controlling the disk space used by log files and
archiving data
Scopeux Data Collector
The scopeux daemon collects and summarizes performance measurements of
system-resource utilization and records the data into the following log files, depending on the
data classes specified in the log line of the parm file.
• The logglob file contains measurements of system-wide, or global, resource utilization
information. Global data is summarized and recorded periodically at intervals specified in
the parm file. For more information refer to the section Configure Data Logging Intervals
on page 35.
• The logappl file contains aggregate measurements of processes in each user-defined
application from the parm file. Application data is summarized periodically at intervals
specified in the parm file and each application that had any activity during that interval is
recorded. For more information refer to the section Configure Data Logging Intervals on
page 35.
• The logproc file contains measurements of selected interesting processes. Process data
is summarized periodically at intervals specified in the parm file; however, only
interesting processes are recorded. For more information refer to the section Configure
Data Logging Intervals on page 35.
• The logdev file contains measurements of individual device (such as disk and netif)
performance. Device data is summarized periodically at intervals specified in the parm
file and data from each device that had any activity during that interval is recorded. For
more information refer to the section Configure Data Logging Intervals on page 35.
• The logtran file contains measurements of ARM transaction data. Transaction data is
summarized periodically at intervals specified in the parm file and each transaction that
had any activity is recorded. For more information refer to the section Configure Data
Logging Intervals on page 35. (For more information, see the HP OpenView Performance
Agent & Glance Plus for UNIX Tracking Your Transactions guide.)
• The logls file contains information about the logical systems. Data of logical systems is
summarized periodically at intervals specified in the parm file. For more information refer
to the section Configure Data Logging Intervals on page 35.
• The logls file is available only on OV Performance Agent for HPVM, VMware
ESX Server and AIX operating systems.
— On AIX, for specific configuration instructions, refer to the section, OV
Performance Agent on a Virtualized Environment, in the HP OpenView
Performance Agent Installation and Configuration Guide for AIX.
• The log files logproc, logappl and logtran are not available on OV
Performance Agent on VMware ESX Server.
• The logindx file contains information needed to access data in the other log files.
• The timestamps of the records in the log files indicate the starting time of data
collection.
• The concept of interesting processes is a filter that helps minimize the volume
of data logged and is controlled from the parm file.
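As an illustration, a parm file log line that selects several of the data classes above might look like the following. This is a sketch, shown space-separated as in a typical default parm file; verify the form against the parm file on your own system.

```
log global application process device=disk,cpu transaction
```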
Scopeux Status
In addition to the log files, two other files are created when scopeux is started. They are the
RUN file that resides in the /var/opt/perf/datafiles/ directory and the status.scope
file that resides in the /var/opt/perf/ directory.
The RUN file is created to indicate that the scopeux process is running. Removing this file
causes scopeux to terminate.
The /var/opt/perf/status.scope file serves as a status/error log for the scopeux
process. New information is appended to this file each time the scopeux collector is started,
stopped, or when a warning or error is encountered. To view the most recent status and error
information from scopeux, use the perfstat -t command.
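The status checks described above can be sketched as a small shell function. The default paths are the ones named in this section; perfstat -t remains the supported way to view this information.

```shell
# Report scopeux status from the RUN file and status.scope file described
# above. Default paths from this section are used when no arguments are
# given; arguments allow testing against other locations.
scopeux_status() {
    run_file=${1:-/var/opt/perf/datafiles/RUN}
    status_file=${2:-/var/opt/perf/status.scope}
    if [ -f "$run_file" ]; then
        # scopeux creates the RUN file at startup; removing it causes
        # scopeux to terminate.
        echo "scopeux appears to be running (RUN file present)"
    else
        echo "scopeux is not running (no RUN file)"
    fi
    # Show the most recent status/error entries, as perfstat -t would.
    if [ -f "$status_file" ]; then
        tail -20 "$status_file"
    fi
    return 0
}

scopeux_status
```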
The cmd and argv1 parameters are not supported on Tru64 UNIX systems.
parm File Parameters
Scopeux is controlled by specific parameters in the collection parameters (parm) file that do
the following:
• Set maximum amount of disk space for the raw scopeux log files.
• Specify the data types to be logged.
• Specify the intervals at which data should be logged.
• Specify attributes of processes and metrics to be logged.
• Define types of performance data to be collected and logged.
• Specify the user-definable sets of applications that should be monitored. An application
can be one or more programs that are monitored as a group.
• Specify when scopeux should perform daily log file maintenance activities so that they do
not impact system availability.
You can modify these parameters to tell scopeux to log performance data that match the
requirements of your particular system (see Modifying the parm File on page 22).
The parm file parameters listed in Table 1 on page 23 are used by scopeux. Some of these
parameters are for specific systems as indicated in the table. For detailed descriptions of these
parameters, see Parameter Descriptions on page 25 and Application Definition Parameters on
page 31.
The parameters listed in the following table that are applicable only to HP-UX are described
in detail in Chapter 2 of the HP OpenView Performance Agent Installation & Configuration
Guide for HP-UX.
Table 1  parm File Parameters Used by Scopeux

  Parameter             Values
  -------------------   ------------------------------------------------------
  id                    system ID
  log                   global
                        application [=prm] [=all]
                          (not on VMware ESX Server, [=prm] on HP-UX only)
                        process
                          (not on VMware ESX Server)
                        device=disk,lvm,cpu,filesystem,all
                          (lvm on HP-UX only)
                        transaction=correlator,resource
                          (not on VMware ESX Server, resource on HP-UX only)
                        logicalsystem
  scopetransactions     on
                        off
  javaarg               true
                        false
                        (not on Tru64 UNIX)
  procthreshold         cpu=percent
  (same as threshold)   disk=rate (not on Linux or Windows)
                        memory=nn (values in MBs)
                        nonew
                        nokilled
                        shortlived
  appthreshold          cpu=percent
  diskthreshold         util=rate
  bynetifthreshold      iorate=rate
  fsthreshold           util=rate
  lvthreshold           iorate=rate
  bycputhreshold        cpu=percent
  group                 groupname [, ]
  or
  gapapp                blank
                        UnassignedProcesses
                        ExistingApplicationName
                        other
Parameter Descriptions
Following are descriptions of each of the parm file parameters.
ID
The system ID value is a string of characters that identifies your system. The default ID
assigned is the system’s host name. If you modify the default ID, make sure that every system
has a unique ID string. This identifier is included in the log files to identify the system on
which the data was collected. You can specify a maximum of 40 characters.
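For example, an id entry overriding the default host name might look like this (the name shown is hypothetical):

```
id ovpa_node_01
```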
Log
Thresholds
The following parameters specify the thresholds for different classes of metrics. When the
specified threshold value is exceeded for a particular instance of a class of data, a record for
that instance is logged by scopeux.
You can specify lower threshold values to make scopeux log more data, or higher threshold
values to make scopeux log less data, so that fewer records are logged on average. The
threshold parameters available are:
• procthreshold
• appthreshold
• diskthreshold
• bynetifthreshold
• fsthreshold
• lvthreshold
• bycputhreshold
Procthreshold
The procthreshold parameter is used to set activity levels that specify the criteria for
interesting processes. To use this parameter, you must enable process logging. procthreshold
affects only processes that are logged and does not affect other classes of data. For example,
not logging process data does not affect the collection or values of application or global data.
The procthreshold parameter is the same as the threshold parameter. The threshold
parameter is also supported in OV Performance Agent version 4.6. If both parameters are
enabled, the latest value specified overrides the previous value.
Enter the options for thresholds on the same parameter line (separated by commas).
Procthreshold Options for Process Data
cpu Sets the percentage of CPU utilization that a process must exceed to
become “interesting” and be logged.
The value “percent” is a real number indicating overall CPU use. For
example, cpu=7.5 indicates that a process is logged if it exceeds 7.5 percent
of CPU utilization in a 1-minute sample.
disk (Not available on Linux or Windows). Sets the rate of physical disk I/O per
second that a process must exceed to become “interesting” and be logged.
The value is a real number. For example, disk=8.0 indicates that a process
will be logged if the average physical disk I/O rate exceeds 8 KBs per
second.
memory Sets the memory threshold that a process must exceed to become
“interesting” and be logged.
The value is in megabyte units and is accurate to the nearest 100KB. If set,
the memory threshold is compared with the value of the PROC_MEM_VIRT
metric. Each process that exceeds the memory threshold will be logged,
similarly to the disk and CPU process logging thresholds.
nonew Turns off logging of new processes if they have not exceeded any threshold.
If not specified, all new processes are logged. On HP-UX, if shortlived is
not specified, then only new processes that lasted more than one second are
logged.
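Combining the options above on a single parameter line, as described earlier, a procthreshold entry might look like this. The cpu and disk values are the ones used in the examples above; memory=4 is illustrative.

```
procthreshold cpu=7.5,disk=8.0,memory=4,nonew
```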
appthreshold
The appthreshold parameter is used to specify threshold values for the APPLICATION data
class (APP_CPU_TOTAL_UTIL metric). The threshold criterion is based on the percentage of
CPU utilization that an application must exceed for the application to be recorded in the log
files.
The default threshold value records applications that use more than 10% of the CPU.
diskthreshold
The diskthreshold parameter is used to specify threshold values for the DISK class. The
threshold criterion for the DISK class is based on the percentage of time a disk spends
performing I/Os (BYDSK_UTIL metric).
The default value records disks that are busy performing I/Os for more than 10% of the time.
bynetifthreshold
The bynetifthreshold parameter specifies the thresholds for the NETIF class. The NETIF
data class threshold criterion is based on the number of packets transferred by the network
interface per second (BYNETIF_PACKET_RATE metric).
The default value records network interfaces that transfer more than 60 packets per second.
fsthreshold
The fsthreshold parameter specifies the thresholds for the FILESYSTEM class. The file
system data class threshold criterion is based on the percentage of disk space utilized by the
file systems (FS_SPACE_UTIL metric).
The default value records file systems that utilize more than 70% of their disk space.
lvthreshold
The lvthreshold parameter specifies the threshold for the LOGICALVOLUME class. Logical
volume data class threshold values are based on I/Os per second (LV_READ_RATE +
LV_WRITE_RATE).
The default value records logical volumes that have more than 35 I/Os per second.
bycputhreshold
The bycputhreshold parameter specifies the threshold for the CPU class. The CPU data
class threshold criterion is the percentage of time the CPU was busy
(BYCPU_CPU_TOTAL_UTIL metric).
The default value records CPUs that are busy more than 90% of the time.
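Taken together, the class thresholds can be listed in the parm file at their stated default values (syntax illustrative):

appthreshold 10.0
diskthreshold 10.0
bynetifthreshold 60.0
fsthreshold 70.0
lvthreshold 35.0
bycputhreshold 90.0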
Scopetransactions
The scopeux collector itself is instrumented with ARM (Application Response Measurement)
API calls to log its own transactions. The scopetransactions flag determines whether or not
scopeux transactions will be logged. The default is scopetransactions=on; scopeux will log
two transactions, Scope_Get_Process_Metrics and Scope_Get_Global_Metrics. If you do
not want these scopeux transactions to be logged, specify scopetransactions=off. A third
transaction, Scope_Log_Headers, will always be logged; it is not affected by
scopetransactions=off.
For more information about ARM, see your HP OpenView Performance Agent & Glance Plus
for UNIX Tracking Your Transactions guide.
Subprocinterval
The subprocinterval parameter, if specified, overrides the default that scopeux uses to
sample process data. Process data and global data are logged periodically at intervals
specified in the parm file. However, scopeux probes its instrumentation every few seconds to
catch short-term activities. This instrumentation sampling interval is 5 seconds by default.
The process data logging interval must be an even multiple of the subprocinterval. For
more information, refer to the section Configure Data Logging Intervals on page 35.
On some systems with thousands of active threads or processes, the subprocinterval should
be made longer to reduce overall scopeux overhead. On other systems with many short-lived
processes that you may wish to log, setting the subprocinterval lower could be considered,
although the effect on scopeux overhead should be monitored closely in this case. This setting
must take values that are factors of the process logging interval as specified in the parm file.
Lower values for the subprocinterval will decrease the gap between global metrics and the
sum of applications on all operating systems other than HP-UX.
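For example, on a system with thousands of active threads you might double the default sampling interval (the value shown is illustrative; it must remain a factor of the process logging interval):

subprocinterval = 10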
gapapp
The gapapp parameter in the parm file controls the modification of application class of data to
account for any difference between the global (system-wide) data and the sum of application
data.
Application data is derived from process-level instrumentation. Typically there is a
difference between the global metrics and the sum of the applications; on systems with high
process creation rates the difference can be significant. You can choose from the following options:
• If gapapp is blank, an application named gapapp will be added to the application list.
• If gapapp = UnassignedProcesses, an application named UnassignedProcesses
will be added to the application list.
• If gapapp = ExistingApplicationName or gapapp = other, the difference to the
global values will be added to the specified application instead of being logged separately
and adding a new entry to the application list.
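For example, to log the difference under the UnassignedProcesses name described above:

gapapp = UnassignedProcesses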
Size
The size parameter is used to set the maximum size (in megabytes) of any raw log file. You
cannot set the size to be less than one megabyte.
The scopeux collector reads these specifications when it is initiated. If any of these log files
reach their maximum size during collection, they will continue to grow until mainttime,
when they will be rolled back automatically. During a roll back, the oldest 25 percent of the
data is removed from the log file. Raw log files are designed to only hold a maximum of one
year's worth of data if not limited by the size parameter. See Log File Contents Summary on
page 56 and Log File Empty Space Summary on page 56 in the Utility Scan Report Details
section in Chapter 3.
If the size specification in the parm file is changed, scopeux detects it during startup. If the
maximum log file size is decreased to the point where existing data does not fit, an automatic
resize will take place at the next mainttime. If the existing data fits within the new maximum
size specified, no action is taken.
Any changes you make to the maximum size of a log file take effect at the time specified in the
mainttime parameter.
Regardless of the size parameters, the maximum size of the scopeux log files
will also be limited by the amount of data stored over one year. Raw scope log
files cannot contain more than one year of data, so if logs extend back that
long, the data older than one year will be overwritten. See extract on page 124
for information about how to create archival log files if over a year of data is
desired.
Mainttime
Log files are rolled back if necessary by scopeux only at a specific time each day. The default
time can be changed using the mainttime parameter. For example, setting
mainttime=8:30 causes log file maintenance to be done at 8:30 am each day.
We suggest setting mainttime to a time when the system is at its lowest utilization.
Log file maintenance only rolls out data older than one day, so if a log file such as logproc
grows very quickly and reaches its limit within 24 hours, its size can exceed the configured
size limit.
javaarg
The javaarg parameter is a flag that can be set to true or false. It ONLY affects the value of
the proc_proc_argv1 metric. This parameter is not supported on Tru64 UNIX systems.
When javaarg is set to false or is not defined in the parm file, the proc_proc_argv1 metric
is always set to the value of the first argument in the command string for the process.
When javaarg is set to true, the proc_proc_argv1 metric is overridden, for java processes
only, with the class or jar specification if that can be found in the command string. In other
words, for processes whose file names are java or jre, the proc_proc_argv1 metric is
overridden with the first argument without a leading dash not following a -classpath or a
-cp, assuming the data can be found in the argument list provided by the OS.
While this sounds complex, it is straightforward when you have java processes running on your
system: set javaarg=true and the proc_proc_argv1 metric is logged with the class or jar
name. This can be very useful if you want to define applications specific to java. When the
class name is in proc_proc_argv1, then you can use the argv1= application qualifier
(explained below) to define your application by class name.
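For example, a hypothetical parm file fragment that enables java class-name logging and then defines an application by class name (the pattern com.mycompany.* is a placeholder, not a real default):

javaarg = true
application = java_apps
argv1 = com.mycompany.*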
Flush
The flush parameter specifies the data logging interval (in seconds) at which all instances of
application and device data will be logged. The flush interval must be in the range 300-32700
and be an even multiple of 300. The default flush interval is 3600 seconds for all
instances of application and device data.
You can disable the flush parameter by specifying a value of 0 (zero). If the flush
parameter is set to 0, scopeux will not log application and device data that does not
meet the thresholds specified in the parm file.
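For example, to log all application and device instances every 15 minutes instead of every hour (syntax illustrative; the value must be a multiple of 300 in the range 300-32700):

flush = 900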
Application
The application name defines an application or class that groups together multiple processes
and reports on their combined activities.
• The application name is a string of up to 19 characters used to identify the application.
• Application names can be lowercase or uppercase and can contain letters, numbers,
underscores, and embedded blanks. Do not use the same application name more than once
in the parm file.
• An equal sign (=) is optional between the application keyword and the application name.
• The application parameter must precede any combination of file, user, group, cmd,
argv1, and or parameters that refer to it, with all such parameters applying against the
last application workload definition.
• Each parameter can be up to 170 characters long including the carriage return character,
with no continuation characters permitted. If your list of files is longer than 170
characters, continue the list on the next line after another file, user, group, cmd or
argv1 statement.
• You can define up to 998 applications. OV Performance Agent predefines an application
named other. The other application collects all processes not captured by application
statements in the parm file.
For example:
application xyz
file xyz*,startxyz
You can have a maximum of 4096 file, user, group, argv1, and cmd specifications for all
applications combined. The previous example includes two file specifications. (xyz*
counts as only one specification even though it can match more than one program file.)
If a program file is included in more than one application, it is logged in the first
application that contains it.
The default parm file contains some sample applications that you can modify. The
examples directory also contains other samples (in a file called parm_apps) you can copy
into your parm file and modify as needed.
File
The file parameter specifies the program files that belong to an application. All interactive
or background executions of these programs are included. It applies to the last application
statement issued. An error is generated if no application statement is found.
The file name can be any of the following:
• A single UNIX program file such as vi.
• A group of UNIX program files (indicated with a wild card) such as xyz*. In this case, any
program name that starts with the letters xyz is included. A file specification with wild
cards counts as only one specification toward the maximum allowed.
The name in the file parameter is limited to 15 characters in length. An equal sign (=) is
optional between the file parameter and the file name.
You can enter multiple file names on the same parameter line (separated by commas) or in
separate file statements. File names cannot be qualified by a path name. The file
specifications are compared to the specific metric PROC_PROC_NAME, which is set to a process’s
argv[0] value (typically its base name). For example:
application = prog_dev
file = vi,vim,gvim,make,gmake,lint*,cc*,gcc,ccom*,cfront
file = cpp*,CC,cpass*,c++*
file = xdb*,adb,pxdb*,dbx,xlC,ld,as,gprof,lex,yacc,are,nm,gencat
file = javac,java,jre,aCC,ctcom*,awk,gawk
application Mail
file = sendmail,mail*,*mail,elm,xmh
If you do not specify a file parameter, all programs that satisfy the other parameters qualify.
The asterisk (*) is the only wild card character supported for parm file application qualifiers
except for the cmd qualifier (see below).
argv1
The argv1 parameter specifies the processes that are selected for the application by the value
of the PROC_PROC_ARGV1 metric. This is normally the first argument of the command line,
except when javaarg=true, in which case it is the class or jar name for java processes. This
parameter uses the same pattern matching syntax used by parm parameters such as file= and
user=. Each selection criterion can contain asterisks as wildcard match characters, and you can
have more than one selection on one line, separated by commas.
For example, the following application definition buckets all processes whose first argument
in the command line is either -title, -fn, or -display:
application = xapps
argv1 = -title,-fn,-display
The following application definition buckets a specific java application (when javaarg=true):
application = JavaCollector
argv1 = com.*Collector
The following shows how the argv1 parameter can be combined with the file parameter:
application = sun-java
file = java
argv1 = com.sun*
cmd
The cmd parameter specifies processes for inclusion in an application by their command
strings, which consist of the program executed and its arguments (parameters). Unlike other
selection parameters, this parameter allows extensive wildcarding beyond the use of the
asterisk character.
Similar to regular expressions, extensive pattern matching is allowed. For a complete
description of the pattern criteria, see the UNIX man page for fnmatch. Unlike other
parameters, you can have only one selection per line; however, you can have multiple lines.
The following shows use of the cmd parameter:
application = newbie
cmd = *java *[Hh]ello[Ww]orld*
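Because the cmd qualifier follows fnmatch pattern rules, you can preview whether a given command string would match a pattern before committing it to the parm file. The following Python sketch is illustrative only (scopeux itself performs the matching); the command strings are hypothetical:

```python
from fnmatch import fnmatchcase

# The cmd qualifier uses fnmatch-style patterns (see the fnmatch man page).
# fnmatchcase forces case-sensitive matching, as on UNIX file systems.
pattern = "*java *[Hh]ello[Ww]orld*"

# A java invocation of a HelloWorld class matches the pattern above.
print(fnmatchcase("/usr/bin/java -mx64m HelloWorld", pattern))  # True

# A command string without Hello/hello World/world does not match.
print(fnmatchcase("/usr/bin/java -mx64m Goodbye", pattern))     # False
```

This makes it easy to check that a cmd pattern is neither too broad nor too narrow before restarting data collection.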
User
The user parameter specifies which user login names belong to an application.
For example:
application Prog_Dev
file vi,xb,abb,ld,lint
user ted,rebecca,test*
User specifications that include wildcards count as only one specification toward the
maximum allowed.
If you do not specify a user parameter, all programs that satisfy the other parameters qualify.
The name in the user parameter is limited to 15 characters in length.
Group
The group parameter specifies which user group names belong to an application.
For example:
application Prog_Dev_Group2
file vi,xb,abb,ld,lint
user ted,rebecca,test*
group lab, test
If you do not specify a group parameter, all programs that satisfy the other parameters
qualify.
The name in the group parameter is limited to 15 characters in length.
or
Use the or parameter to allow more than one application definition to apply to the same
application. Within a single application definition, a process must match at least one of each
category of parameters. Parameters separated by the or parameter are treated as
independent definitions. If a process matches the conditions for any definition, it will belong
to the application.
For example:
application = Prog_Dev_Group2
user julie
or
user mark
file vi, store, dmp
This defines the application (Prog_Dev_Group2) that consists of any programs run by the user
julie plus other programs (vi, store, dmp) if they are executed by the user mark.
Priority
The parm file is processed in the order entered and the first match of the qualifier defines
the application to which a particular process belongs. Therefore, it is normal to place more
specific application definitions before more general definitions. For example:
application othersvrs
cmd = *appserver*
cmd = *appsvr*
or
argv1 = -xyz
The following is an example of how several of the programs would be logged using the
preceding parm file.
[Table: Command String / User Login / Application]
Stopping and Restarting Data Collection
The scopeux daemon and the other daemon processes that are part of OV Performance Agent
are designed to run continuously. The only time you should stop them is when any of the
following occurs:
• You are updating OV Performance Agent software to a new release.
• You are adding or deleting transactions in the transaction configuration file, ttd.conf.
(For more information, see the HP OpenView Performance Agent & Glance Plus for UNIX
Tracking Your Transactions guide.)
• You are modifying distribution ranges or service level objectives (SLOs) in the transaction
configuration file, ttd.conf. (For more information, see the HP OpenView Performance
Agent & Glance Plus for UNIX Tracking Your Transactions guide.)
• You are changing the parm file and want the changes to take effect. Changes made to the
parm file take effect only when scopeux is started.
• You are using the utility program's resize command to resize an OV Performance Agent
log file.
• You are shutting down the system.
OV Performance Agent provides the ovpa and mwa scripts that include options for stopping
and restarting the processes. For a description of these options, see the respective man pages.
Effective Data Collection Management
Efficient analysis of performance depends on how easy it is to access the performance data
you collect. This section discusses effective strategies for activities such as managing log files,
data archiving, and system analysis to make the data collection process easier, more effective,
and more useful.
Setting Mainttime
Normally, scopeux performs log file roll backs only at a specific time each day. This
ensures that the operation is performed at off-peak hours and does not impact normal system
usage. The time at which the log files are examined for roll back is set by the mainttime parameter in
the parm file.
Data Archiving
Automatic log file management keeps the latest log file data available for analysis. Data from
the raw log files are archived. Process data and global data are logged periodically at intervals
specified in the parm file. For more information, refer to the section Configure Data Logging
Intervals on page 35. To make room for new data, older data is removed when the log files
reach their maximum sizes. If you want to maintain log file data for longer periods of time,
you should institute a data archiving process. The exact process you choose depends on your
needs. Here are a few possibilities:
• Size the raw log files to be very large and let automatic log file maintenance do the rest.
This is the easiest archiving method, but it can consume large amounts of disk space after
several months.
• Extract the data from the raw log files into extracted archive files before it is removed
from the raw log files. Formulate a procedure for copying the archive files to long term
storage such as tape until needed.
• Extract only a subset of the raw log files into extracted archive files. For example, you may
not want to archive process data due to its high volume and low time-value.
• Some combination of the preceding techniques can be used.
We recommend the following procedures for data archiving:
• Size the raw log files to accommodate the amount of detail data you want to keep online.
• Once a week, copy the detailed raw data into files that will be moved to offline storage.
You can use the extract program to combine data from multiple extracted files or to make a
subset of the data for easier transport and analysis.
For example, you can combine data from several yearly extracted files in order to do
multiple-year trending analysis. (See yearly on page 206 in Chapter 6.)
Moving log files that were created on a new version of OV Performance Agent to a system
using an older version of OV Performance Agent is not supported.
Introduction
The utility program is a tool for managing and reporting information on log files, the
collection parameters (parm) file, and the alarm definitions (alarmdef) file. You can use the
utility program interactively or in batch mode to perform the following tasks:
• Scan raw or extracted log files and produce a report showing:
— dates and times covered
— times when the scopeux collector was not running
— changes in scopeux parameter settings
— changes in system configuration
— log file disk space
— effects of application and process settings in the collection parameters (parm) file
• Resize raw log files
• Check the parm file for syntax warnings or errors
• Check the alarmdef file for syntax warnings or errors
• Process log file data against alarm definitions to detect alarm conditions in historical data
This chapter covers the following topics:
• Running the Utility Program
• Using Interactive Mode
• Using the Utility Command Line Interface
• Utility Scan Report Details
Detailed descriptions of the utility program’s commands are in Chapter 4, Utility
Commands.
Running the Utility Program
There are three ways to run the utility program:
• Command line mode - You control the utility program using command options and
arguments in the command line.
• Interactive mode - You supply interactive commands and parameters while executing the
program with stdin set to an interactive terminal or workstation.
If you are an experienced user, you can quickly specify only those commands required for a
given task. If you are a new user, you may want to use the utility program’s guide
command to get some assistance in using the commands. In guided mode, you are asked to
select from a list of options to perform a task. While in guided mode, the interactive
commands that accomplish each task are listed as they are executed, so you can see how
they are used. You can quit and re-enter guided mode at any time.
• Batch mode - You can run the program and redirect stdin to a file that contains
interactive commands and parameters.
The syntax for the command line interface is similar to typical UNIX command line interfaces
on other programs and is described in detail in this chapter.
For interactive and batch mode the command syntax is the same. Commands can be entered
in any order; if a command has a parameter associated with it, the parameter must be entered
immediately after the corresponding command.
There are two types of parameters - required (for which there are no defaults) and optional
(for which defaults are provided). How utility handles these parameters depends on the
mode in which it is running.
• In interactive mode, if an optional parameter is missing, the program displays the default
argument and lets you either confirm it or override it. If a required parameter is missing,
the program prompts you to enter the argument.
• In batch mode, if an optional parameter is missing, the program uses the default values. If
a required parameter is missing, the program terminates.
Errors and missing data are handled differently for interactive mode than for command line
and batch mode. You can supply additional data or correct mistakes in interactive mode, but
not in command line and batch mode.
Using Interactive Mode
Using the utility program’s interactive mode requires you to issue a series of commands to
execute a specific task.
For example, if you want to check a log file to see if alarm conditions exist in data that was
logged during the current day, you issue the following commands after invoking the utility
program:
checkdef /var/opt/perf/alarmdef
detail off
start today-1
analyze
The checkdef command checks the alarm definitions syntax in the alarmdef file and then
sets and saves the file name for use with the analyze command. The detail off command
causes the analyze command to show only a summary of alarms. The start today-1
command specifies that only data logged yesterday is to be analyzed. The analyze command
analyzes the raw log files in the default SCOPE data source against the alarmdef file.
Utility Command Line Interface
In addition to the interactive and batch mode command syntax, command options and their
associated arguments can be passed to the utility program through the command line
interface. The command line interface fits into the typical UNIX environment by allowing the
utility program to be easily invoked by shell scripts and allowing its input and output to be
redirected to UNIX pipes.
For example, to use the command line equivalent of the example shown in the previous
section “Using Interactive Mode” enter:
utility -xc -b today-1 -xa
Command line options and arguments are listed in the following table. The referenced
command descriptions can be found in Chapter 4, Utility Commands.
Command
Option     Argument                           Description
-xc        alarmdef                           Syntax checks and sets the alarmdef file
                                              name to use with -xa (or the analyze
                                              command). (See checkdef command in
                                              Chapter 4.)
-xs        logfile                            Scans a log file and produces a report.
                                              (See scan command in Chapter 4.)
-xr        global application process         Resizes a log file. (See resize command
           device transaction ls              in Chapter 4.)
           SIZE=nnn DAYS=nnn
           EMPTY=nnn SPACE=nnn
           YES NO MAYBE
Example of Using the Command Line Interface
The following situation applies when you enter command options and arguments on the
command line:
Errors and missing data are handled exactly as in the corresponding batch mode command.
That is, missing data is defaulted if possible and all errors cause the program to terminate
immediately.
Echoing of commands and command results is disabled. Utility does not read from its
stdin file. It terminates following the actions in the command line.
utility -xp -d -xs
Which translates into:
The scan report contains the following types of information:
Initial Values
    Initial parm file global information and system configuration
    information. Printed only if detail on is specified.
Chronological Detail
Summaries
    Log file contents summary. Always printed. Includes space and dates
    covered.
Scan Report Information
The information in a utility scan report is divided into three types:
• Initial values
• Chronological details
• Summaries
Initial Values
This section describes the following initial values:
• Initial parm file global information
• Initial parm file application definitions
During the scan, you are notified of applications that were added or deleted. Additions and
deletions are determined by comparing the spelling and case of the old application names to
the new set of logged application names. No attempt is made to detect a change in the
definition of an application. If an application with a new name is detected, it is listed along
with its new definition.
The date and time on this record is the last time scopeux was started before logging the first
application record currently in the log file.
Chronological Detail
This section describes the following chronological details:
• parm file global change notifications
• parm file application addition and deletion notifications
• scopeux off-time notifications
• Application-specific summary report
03/13/99 17:30 Application 4 "Accounting_Users_1" was added
User=ted,rebecca,test*,mark,gene
Application definitions are not checked for changes. They are listed when an application
name is changed, but any change to an existing application's definition without an
accompanying name change is not detected.
Summaries
This section describes the following summaries:
• Process log reason summary
• Scan start and stop actual dates and times
• Application overall summary
• scopeux coverage summary
• Log file contents summary
• Log file empty space summary
NOTE: A process can be logged for more than one reason at a time. Record
counts and percentages will not add up to 100% of the process records.
If the detail on command is issued, this report is generated each time a threshold value is
changed so you can evaluate the effects of that change. Each report covers the period since the
last report. A final report, generated when the scan is finished, covers the time since the last
report.
If the detail off command is issued, only one report is generated covering the entire
scanned period.
You can reduce the amount of process data logged by scopeux by modifying the parm file's
threshold parameter and raising the thresholds of the interest reasons that generate the
most process log records. To increase the amount of data logged, lower the threshold for the
area of interest.
In the previous example, you could decrease the amount of disk space used for the process
data (at the expense of having less information logged) by raising the CPU threshold or setting
the nonew threshold.
Column Explanation
Type The general type of data being logged. One special type, Overhead, exists:
Overhead is the amount of disk space occupied (or reserved) by the log file
versus the amount actually used by the scanned data records.
If less than the entire log file was scanned, Overhead includes the data records
that were not scanned. If the entire file was scanned, Overhead accounts for
any inefficiencies in blocking the data into the file plus any file-access support
structures. It is normal for extracted log files to have a higher overhead than
raw log files since they have additional support structures for quicker
positioning.
Total The total record count and disk space scanned for each type of data.
Each Full Day The number of records and amount of disk space used for each 24-hour period
that scopeux runs.
Dates The first and last valid dates for the data records of each data type scanned.
Full Day The number of full (24-hour) days of data scanned for this data type.
Full Days may not be equal to the difference between the start and stop dates
if scopeux coverage did not equal 100 percent of the scanned time.
The TOTAL line (at the bottom of the listed data) gives you an idea of how much disk space is
being used and how much data you can expect to accumulate each day.
The amount of room available for more data is calculated based on the amount of unused
space in the file and the scanned value for the number of megabytes of data being logged each
24-hour day. If the megabytes-scanned-per-day values appear unrealistically low, they
are replaced with default values for this calculation.
If you scan an extracted file, you get a single report line because all data types share the same
extracted file.
Introduction
This chapter describes the utility program's commands. It includes a syntax summary and
a command reference section that lists the commands in alphabetical order.
Utility commands and parameters can be entered with any combination of uppercase and
lowercase letters. Only the first three letters of the command name are required. For example,
the logfile command can be entered as logfile or it can be abbreviated as log or LOG.
Examples of how these commands are used can be found in online help for the utility
program.
The following table contains a summary of utility command syntax and
parameters.
Command Parameter
analyze
detail on
off
exit
e
guide
licheck
list filename or *
logfile logfile
menu
?
parmfile parmfile
quit
q
Table 4 Utility Commands: Syntax and Parameters (cont’d)
Command Parameter
resize global
application
process
device
transaction
days=maxdays
size=max MB
empty=days
space=MB
yes
no
maybe
scan logfile
(Operation is also affected by the list, start, stop, and detail
commands.)
show all
sh system command
!
analyze
Use the analyze command to analyze the data in a log file against alarm definitions in an
alarm definitions (alarmdef) file and report resulting alarm status and activity. Before
issuing the analyze command, you should run the checkdef command to check the alarm
definitions syntax. Checkdef also sets and saves the alarm definitions file name to be used
with analyze. If you do not run checkdef before analyze, you are prompted for an alarm
definitions file name.
If you are using command line mode, the default alarm definitions file
/var/opt/perf/alarmdef is used.
For detailed information about alarm definitions, see Chapter 7, Performance Alarms.
Syntax
analyze
How to Use It
When you issue the analyze command, it analyzes the log files specified in the data sources
configuration file, datasources, against the alarm definitions in the alarmdef file.
The analyze command allows you to evaluate whether or not your alarm definitions are a
good match against the historical data collected on your system. It also lets you decide if your
alarm definitions will generate too many or too few alarms on your analysis workstation.
You can also perform data analysis with conditional definitions (IF statements) in the alarm
definitions file, because PRINT statements produce output when their conditions
are met. For explanations of how to use the IF and PRINT statements in an alarm definition,
see Chapter 7, Performance Alarms.
You can optionally run the start, stop, and detail commands with analyze to customize
the analyze process. You specify these commands in the following order:
checkdef
start
stop
detail
analyze
Use the start and stop commands if you want to analyze log file data that was collected
during a specific period of time. (Descriptions of the start and stop commands appear later
in this chapter.)
While the analyze command is executing, it lists alarm events such as alarm start, end, and
repeat status plus any text in associated print statements. Also, any text in PRINT
statements is listed as conditions (in IF statements) become true. EXEC statements are not
executed but are listed so you can see what would have been executed. An alarm summary
report shows a count of the number of alarms and the amount of time each alarm was active
(on). The count includes alarm starts and repeats, but not alarm ends.
If you want to see the alarm summary report only, issue the detail off command. However,
if you are using command line mode, detail off is the default so you need to specify -D to
see the alarm events as well as the alarm summary.
Utility Commands 61
Example
The checkdef command checks the alarm definitions syntax in the alarmdef file and saves
the name of the alarmdef file for later use with the analyze command. The start today
command specifies that only data logged today is to be analyzed. Lastly, the analyze
command analyzes the log file in the default SCOPE data source specified in the datasources
file against the alarm definitions in the alarmdef file.
utility>
checkdef /var/opt/perf/alarmdef
start today
analyze
To perform the above task using command line arguments, enter:
utility -xc -D -b today -xa
62 Chapter 4
checkdef
Use the checkdef command to check the syntax of the alarm definitions in an alarm
definitions file and report any warnings or errors that are found. This command also sets and
saves the alarm definitions file name for use with the analyze command.
For descriptions of the alarm definitions syntax and how to specify alarm definitions, see
Chapter 7, Performance Alarms.
Syntax
checkdef [/directorypath/alarmdef]
Parameters
alarmdef The name of any alarm definitions file. This can be a user-specified file or the
default alarmdef file. If no directory path is specified, the current directory
will be searched.
How to Use It
When you have determined that the alarm definitions are correct, you can process them
against the data in a log file using the analyze command.
In batch mode, if no alarm definitions file is specified, the default alarmdef file is used.
In interactive mode, if no alarm definitions file is specified, you are prompted to specify one.
Example
The checkdef command checks the alarm definitions syntax in the alarmdef file and then
saves the name of the alarmdef file for later use with the analyze command.
utility>
checkdef /var/opt/perf/alarmdef
To perform the above task using command line arguments, enter:
utility -xc
detail
Use the detail command to control the level of detail printed in the analyze, parmfile, and
scan reports.
The default is detail on in interactive and batch modes and detail off in command line
mode.
Syntax
detail [on]
[off]
Parameters
on Prints the effective contents of the parm file as well as parm file errors. Prints
complete analyze and scan reports.
off In the parm file report, application definitions are not printed. In the scan report,
scopeux collection times, initial parm file global information, and application
definitions are not printed. In the analyze report, alarm events and alarm actions
are not printed.
How to Use It
For explanations of how to use the detail command with the analyze, scan, and parmfile
commands, see the analyze, parmfile, and scan command descriptions in this chapter.
Examples
For examples of using the detail command, see the descriptions of the analyze, parmfile, and
scan commands in this chapter.
exit
Use the exit command to terminate the utility program. The exit command is equivalent
to the utility program’s quit command.
Syntax
exit
e
guide
Use the guide command to enter guided commands mode. The guided command interface
leads you through the various utility commands and prompts you to perform the most
common tasks that are available.
Syntax
guide
How to Use It
• To enter guided commands mode from utility’s interactive mode, type guide and press
Return.
• To accept the default value for a parameter, press Return.
• To terminate guided commands mode and return to interactive mode, type q at the
guide> prompt.
This command does not provide all possible combinations of parameter settings. It selects
settings that should produce useful results for the majority of users.
licheck
Use this command to check the validity of the product license. A valid license can be either
a trial version or a permanent version.
Syntax
licheck
Example
To check the validity of the license, type the command:
utility -licheck
If the license is valid and permanent, the following message appears:
The permanent OVPA software has been installed
help
Use the help command to access the utility program's online help facility.
Syntax
help [keyword]
How to Use It
You can enter parameters to obtain information on utility commands and tasks, or on help
itself. You can navigate to different topics by entering a keyword. If more than one page of
information is available, the display pauses and waits for you to press Return before
continuing. Type q or quit to exit the help system and return to the utility program.
You can also request help on a specific topic. For example,
help tasks
or
help resize parmfile
When you use this form of the help command, you receive the help text for the specified topic
and remain in the utility command entry context. Because you do not enter the help
subsystem interactively, you do not have to type quit before entering the next utility
command.
list
Use the list command to specify the output file for all utility reports. The contents of the
report depend on which other commands are issued after the list command. For example,
using the list command before the logfile, detail on, and scan commands produces the
list file for a detailed summary report of a log file.
Syntax
list [filename]|*
where * sets the output back to stdout.
How to Use It
There are two ways to specify the list file for reports:
• Redirect stdout when invoking the utility program by typing:
utility > utilrept
• Or, use the list command when utility is running by typing:
list utilrept
In either case, user interactions and errors are printed to stderr and reports go to the file
specified.
The filename parameter in the list command must be a valid filename to which you
have write access. If the file already exists, new output is appended to its contents; if it
does not exist, it is created.
To determine the current output file, issue the list command without parameters.
If the output file is not stdout, most commands are echoed to the output file as they are
entered.
Example
The list command produces a summary report on the extracted log file rxlog. The list
utilrept command directs the scan report listing to a disk file. Detail off specifies less
than full detail in the report. The scan command reads rxlog and produces the report.
The list * command sets the list device back to the default stdout. !lp utilrept sends the
disk file to the system printer.
utility>
logfile rxlog
list utilrept
detail off
scan
list *
!lp utilrept
To perform the above task using command line arguments, enter:
utility -l rxlog -f utilrept -d -xs
print utilrept
logfile
Use the logfile command to open a log file. For many utility program functions, a log file
must be opened. You do this explicitly by issuing the logfile command or implicitly by
issuing some other command. If you are in batch or command line mode and do not specify a
log file name, the default /var/opt/perf/datafiles/logglob file is used. If you are in
interactive mode and do not specify a log file name, you are prompted to provide one or accept
the default /var/opt/perf/datafiles/logglob file.
Syntax
logfile [logfile]
How to Use It
You can specify the name of either a raw or extracted log file. If you specify an extracted log
file name, all information is obtained from this single file. You do not need to specify any of the
raw log files other than the global log file, logglob. Opening logglob gives you access to all
of the data in the other logfiles.
Raw log files have the following names: logglob, logappl, logproc, logdev, logtran, and logindx.
Once a log file is opened successfully, a report is printed or displayed showing the general
content of the log file (or log files), as shown in the example on the next page.
Global file: /var/opt/perf/datafiles/logglob version D
Application file: /var/opt/perf/datafiles/logappl
Process file: /var/opt/perf/datafiles/logproc
Device file: /var/opt/perf/datafiles/logdev
Transaction file: /var/opt/perf/datafiles/logtran
Index file: /var/opt/perf/datafiles/logindx
System ID: homer
System Type 9000/715 S/N 6667778899 O/S HP-UX B.10.20 A
Data Collector: SCOPE/UX C.02.30
File Created: 06/14/99
Data Covers: 27 days to 7/10/99
Shift is: All Day
Do not rename raw log files. Access to these files assumes that the standard log file names
are in effect.
If you must have more than one set of raw log files on the same system, create a separate
directory for each set of files. Although the log file names cannot be changed, different
directories may be used. If you want to resize the log files in any way, you must have read/
write access to all the log files.
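As a sketch of the separate-directory approach, assuming an archive directory of your
choosing (the path below is an example, not an OVPA requirement):

```shell
# Sketch: a second set of raw log files kept in a separate directory.
# /tmp/perf_archive is an example path; the standard log file names
# inside it must be preserved (they cannot be changed).
mkdir -p /tmp/perf_archive
# Copy the complete raw log file set (ignore errors if OVPA data is
# not present on this system):
cp /var/opt/perf/datafiles/log* /tmp/perf_archive/ 2>/dev/null || true
# Open the archived set by pointing utility at its copy of logglob:
# utility> logfile /tmp/perf_archive/logglob
```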
menu
Use the menu command to print a list of the available utility commands.
Syntax
menu
Example
utility> menu
Command Parameters Function
HELP [topic] Get information on commands and options
GUIDE Enter guided commands mode for novice users
LOGFILE [logname] Specify a log file to be processed
LIST [filename|*] Specify the listing file
START [startdate time] Set starting date & time for SCAN or ANALYZE
STOP [stopdate time] Set ending date & time for SCAN or ANALYZE
DETAIL [ON|OFF] Set report detail for SCAN, PARMFILE, or ANALYZE
SHOW [ALL] Show the current program settings
PARMFILE [parmfile] Check parsing of a parameter file
SCAN [logname] Read the log file and produce a summary report
RESIZE [GLOB|APPL|PROC|DEV|TRAN]
[DAYS=][EMPTY=] Resize raw log files
CHECKDEF [alarmdef] Check parsing and set the alarmdef file
ANALYZE Analyze the log file using the alarmdef file
! or Sh [command] Execute a system command
MENU or ? List the commands menu (This listing)
EXIT or Q Terminate the program
utility>
parmfile
Use the parmfile command to view and syntax check the OV Performance Agent parm file
settings that are used for data collection.
Syntax
parmfile [/directorypath/parmfile]
How to Use It
You can use the parmfile command to do any of the following:
• Examine the parm file for syntax warnings and review the resulting settings. All
parameters are checked for correct syntax and errors are reported. After the syntax check
is completed, only the applicable settings are reported.
• Find out how much room is left for defining applications.
• If detail on is specified, print the effective contents of the parm file plus any default
settings that were not overridden, and print application definitions.
In batch mode, if no parm file name is specified, the default parm file is used.
In interactive mode, if no parm file name is supplied, you are prompted to supply one.
Example
The parmfile command checks the syntax of the current parm file and reports any warnings
or errors. Detail on lists the logging parameter settings.
utility>
detail on
parmfile parm
To perform the above task using command line arguments, enter:
utility -xp -D
quit
Use the quit command to terminate the utility program. The quit command is equivalent
to the utility program’s exit command.
Syntax
quit
q
resize
Use the resize command to manage the space in your raw log file set. This is the only
program you should use to resize the raw log files in order to preserve coordination between
the files and their internal control structures. If you use other tools, you might destroy the
validity of these control structures.
The utility program cannot be used to resize extracted files. If you want to resize an
extracted file, use the extract program to create a new extracted log file.
Syntax
resize [global ] [days=maxdays] [empty=days] [yes ]
[application] [size=maxMB ] [space=MB ] [no ]
[process ] [maybe]
[device ]
[transaction]
Parameters
log file type Specifies the type of raw data you want to resize: global, application,
process, device, or transaction, which correspond to the raw log files
logglob, logappl, logproc, logdev, and logtran. If you do not
specify a data type and are running utility in batch mode, the batch
job terminates. If you are running utility interactively, you are
prompted to supply the data type based on those log files that currently
exist.
days & size Specify the maximum size of the log file. The actual size depends on the
amount of data in the file.
empty & space Specify the minimum amount of room required in the file after the
resizing operation is complete. This value is used to determine if any of
the data currently in the log file must be removed in the resizing process.
You might expect that a log file will not fill up until the specified number of days after a
resizing operation. Because the scopeux collector resizes a log file whenever it fills, which can
happen at any time, you can use this feature of the resize command to minimize the number
of times scopeux must do so. Using resize to force a certain amount of empty space in a log
file causes the log file to be resized when you want it to be.
The days and empty values are entered in units of days; the size and space values are
entered in units of megabytes. Days are converted to megabytes by using an average
megabytes-per-day value for the log file. This conversion factor varies depending on the type
of data being logged and the particular characteristics of your system.
More accurate average-megabytes-per-day conversion factors can be obtained if you issue the
scan command on the existing log file before you issue the resize command. A scan
measures the accumulation rates for your system. If no scan is done or if the measured
conversion factor seems unreasonable, the resize command uses a default conversion factor
for each type of data.
Table 5 Default Resizing Parameters

Parameter      Interactive Default                          Batch Default
log file type  You are prompted for each available          No default. This is a
               log file type.                               required parameter.
empty space    The current amount of empty space or         The current amount of empty
               enough empty space to retain all data        space or enough empty space
               currently in the file, whichever is          to retain all data currently in
               smaller.                                     the file, whichever is smaller.
yes, no,       You are prompted following the reported      Yes. Resizing will occur.
maybe          disk space results.
How to Use It
Before you resize a log file, you must stop OV Performance Agent using the steps under
Stopping and Restarting Data Collection on page 37 in Chapter 2.
A raw log file must be opened before resizing can be performed. Open the raw log file with the
logfile command before issuing the resize command. The files cannot be opened by any
other process.
The resize command creates the new file /tmp/scopelog before deleting the original file.
Make sure there is sufficient disk space in the /var/tmp directory (/tmp on IBM AIX 4.1 and
later) to hold the original log file before doing the resizing procedure.
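A quick way to verify the available space is sketched below; the log file path is the
documented default, and the commands only report sizes, they change nothing:

```shell
# Sketch: check that the temporary directory has room for a copy of
# the log file before resizing. Paths are the documented defaults.
LOGFILE=/var/opt/perf/datafiles/logproc
TMPDIR=/var/tmp                  # /tmp on IBM AIX 4.1 and later
# Size of the log file in KB (0 if it does not exist on this system):
NEEDED=$(du -k "$LOGFILE" 2>/dev/null | awk '{print $1}')
NEEDED=${NEEDED:-0}
# Free space in the temporary directory in KB (POSIX df -P format):
FREE=$(df -P "$TMPDIR" | awk 'NR==2 {print $4}')
echo "need ${NEEDED} KB; ${FREE} KB free in ${TMPDIR}"
```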
After resizing, a log file consists of data plus empty space. The data retained is calculated as
the maximum file size minus the required empty space. Any data removed during the resizing
operation is lost. To save log file data for longer periods, use extract to copy this data to an
extracted file before doing the resize operation.
data records: 61 days ( 6.2 mb) ??
empty space: 4 days ( 0.5 mb) ??
In this example, you are prompted to supply the amount of empty space for the file before the
final resizing report is given. If no action parameter is given for interactive resizing, you are
prompted for whether or not to resize the log file immediately following the final resizing
report.
Examples
The following commands are used to resize a raw process log file. The scan is performed before
the resize to increase the accuracy of the number-of-days calculations.
logfile /var/opt/perf/datafiles/logglob
detail off
scan
resize process days=60 empty=30 yes
days=60 specifies holding a maximum of 60 days of data. empty=30 specifies that 30 days of
the file must be empty when the resize completes. That is, the file is resized with no more
than 30 days of data in the file, leaving room for 30 more days out of a total of 60 days of
space. yes specifies that the
resizing operation should take place regardless of current empty space.
The next example shows how you might use the resize command in batch mode to ensure
that log files do not fill up during the upcoming week (forcing scopeux to resize them). You
could schedule a cron script using the at command that specifies a minimum amount of space
such as 7 days - or perhaps 10 days, just to be safe.
The following shell script accomplishes this:
echo detail off > utilin
echo scan >> utilin
echo resize global empty=10 maybe >> utilin
echo resize application empty=10 maybe >> utilin
echo resize process empty=10 maybe >> utilin
echo resize device empty=10 maybe >> utilin
echo quit >> utilin
utility < utilin > utilout 2> utilerr
Specifying maybe instead of yes avoids any resizing operations if 10 or more days of empty
space currently exist in any log files. Note that the maximum file size defaults to the current
maximum file size for each file. This allows the files to be resized to new maximum sizes
without affecting this script.
If you use the script described above, remember to stop scopeux before running it. See the
“Starting & Running OV Performance Agent” chapter in your HP OpenView Performance
Agent Installation & Configuration Guide for information about stopping and starting
scopeux.
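The same utilin command file can be built with a here-document, which is an equivalent
sketch of the echo-based script above (utilin remains an arbitrary scratch file name):

```shell
# Build the batch command file with a here-document; this produces
# the same utilin as the echo-based script above.
cat > utilin <<'EOF'
detail off
scan
resize global empty=10 maybe
resize application empty=10 maybe
resize process empty=10 maybe
resize device empty=10 maybe
quit
EOF
# Run utility in batch mode (requires OV Performance Agent installed,
# with scopeux stopped first):
# utility < utilin > utilout 2> utilerr
```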
scan
Use the scan command to read a log file and write a report on its contents. (For a detailed
description of the report, see Utility Scan Report Details on page 50 in Chapter 3.)
Syntax
scan
How to Use It
The scan command requires a log file to be opened. The log file scanned is the first of the
following:
• The log file named in the scan command itself.
• The last log file opened by any previous command.
• The default log file.
In this case, interactive users are prompted to override the default log file name if desired.
The following commands affect the operation of the scan function:
detail Specifies the amount of detail in the report. The default, detail on, specifies
full detail.
list Redirects the output to another file. The default is to list to the standard list
device.
start Specifies the date and time of the first log file record you want to scan. The
default is the beginning of the log file.
stop Specifies the date and time of the last log file record you want to scan. The
default is the end of the log file.
For more information about the detail, list, start, and stop commands, see their
descriptions in this chapter.
The scan command report consists of 12 sections. You can control which sections are included
in the report by issuing the detail command prior to issuing scan.
The following four sections are always printed (even if detail off is specified):
• Scan start and stop actual dates and times
• Collector coverage summary
• Log file contents summary
• Log file empty space summary
The following sections are printed if detail on (the default) is specified:
• Initial parm file global information and system configuration information
• Initial parm file application definitions
• parm file global changes
• parm file application addition/deletion notifications
• Collector off-time notifications
• Application-specific summary reports
The following section is always printed if application data was scanned (even if detail off is
specified):
• Application overall summary
The following section is always printed if process data was scanned (even if detail off is
specified):
• Process log reason summary
Example
The scan of the current default global log file starts with records logged from June 1, 1999 at
7:00 AM until the present date and time.
utility>
logfile /var/opt/perf/datafiles/logglob
detail on
start 6/1/99 7:00 am
scan
To perform the above task using command line arguments, enter:
utility -D -b 6/1/99 7:00 am -xs
sh
Use sh to enter a shell command without exiting utility by typing sh or an exclamation
point (!) followed by a shell command.
Syntax
sh or ! [shell command]
Parameters
How to Use It
Following the execution of the single command, you automatically return to utility. If you
want to issue multiple shell commands without returning to utility after each one, you can
start a new shell. For example,
sh ksh
or
!ksh
show
Use the show command to list the names of the files that are open and the status of the
utility parameters that can be set.
Syntax
show [all]
Examples
Use show to produce a list that may look like this:
Logfile: /var/opt/perf/datafiles/logglob
List: "stdout"
Detail: ON for ANALYZE, PARMFILE and SCAN functions
The default starting date & time = 10/08/99 08:17 AM (FIRST + 0)
The default stopping date & time = 11/20/99 11:59 PM (LAST - 0)
The default shift = 12:00 AM - 12:00 AM
The default shift time is shown for information only. Shift time cannot be changed in utility.
Use show all to produce a more detailed list that may look like this:
Logfile: /var/opt/perf/datafiles/logglob
Global file: /var/opt/perf/datafiles/logglob
Application file: /var/opt/perf/datafiles/logappl
Process file: /var/opt/perf/datafiles/logproc
Device file: /var/opt/perf/datafiles/logdev
Transaction file: /var/opt/perf/datafiles/logtran
Index file: /var/opt/perf/datafiles/logindx
System ID: homer
System Type 9000/715 S/N 66677789 O/S HP-UX B.10.20 A
Data Collector: SCOPE/UX C.02.30
File created: 10/08/99
Data Covers: 44 days to 11/20/99
Shift is: All Day
Data records available are:
Global Application Process Disk Volume Transaction
Maximum file sizes:
Global=10.0 Application=10.0 Process=20.0 Device=10.0 Transaction=10.0 MB
List "stdout"
Detail ON for ANALYZE, PARMFILE and SCAN functions
The default starting date & time = 10/08/99 11:50 AM (FIRST + 0)
The default stopping date & time = 11/20/99 11:59 PM (LAST - 0)
The default shift = 12:00 AM - 12:00 AM
start
Use the start command to specify the beginning of the subset of a log file that you want to
scan or analyze. Start lets you start the scan or analyze process at data that was logged at a
specific date and time.
The default starting date and time is set to the date and time of the first record of any type in
a log file that has been currently opened with the logfile command.
Syntax
[date [time]]
start [today [-days] [time]]
[last [-days] [time]]
[first [+days] [time]]
Parameters
date The date format depends on the native language configured on the system being
used. If you do not use native languages or have the default language set to C,
the date format is mm/dd/yy (month/day/year) or 06/30/99 for June 30, 1999.
time The time format also depends on the native language being used. For C, the
format is hh:mm am or hh:mm pm (hour:minute in 12-hour format with the am/
pm suffix) such as 07:00 am for 7 o'clock in the morning. Twenty-four hour time
is accepted in all languages. For example, 23:30 would be accepted for 11:30 pm.
If the date or time is entered in an unacceptable format, an example in the
correct format is shown.
If no start time is given, midnight (12 am) is assumed. A starting time of
midnight for a given day starts at the beginning of that day (00:00 on a 24-hour
clock).
today Specifies the current day. The parameter today-days specifies the number of
days prior to today’s date. For example, today-1 indicates yesterday’s date and
today-2, the day before yesterday.
last Can be used to represent the last date contained in the log file. The parameter
last-days specifies the number of days prior to the last date in the log file.
first Can be used to represent the first date contained in the log file. The parameter
first+days specifies the number of days after the first date in the log file.
How to Use It
The start command is useful if you have a very large log file and do not want to scan or
analyze the entire file. You can also use it to specify a specific time period for which data is
scanned. For example, you can scan a log file for data that was logged for a period beginning
14 days before the present date by specifying today-14.
You can use the stop command to further limit the log file records you want to scan.
If you are not sure whether native language support is installed on your system, you can force
utility to use the C date and time formats by issuing the following statement before
running utility:
LANG=C; export LANG
or in C Shell
setenv LANG C
Example
The scan of the default global log file starts with records logged from August 5, 1999 at 8:00
AM until the present date and time.
utility>
logfile /var/opt/perf/datafiles/logglob
detail on
start 8/5/99 8:00 AM
scan
To perform the above task using command line arguments, enter:
utility -D -b 8/5/99 8:00 am -xs
stop
Use the stop command to specify the end of a subset of a log file that you want to scan or
analyze. Stop lets you terminate the scan or analyze process at data that was logged at a
specific date and time.
The default stopping date and time is set to the date and time of the last record of any type in
a log file that has been currently opened with the logfile command.
Syntax
[date [time]]
stop [today [-days] [time]]
[last [-days] [time]]
[first [+days] [time]]
Parameters
date The date format depends on the native language configured on the system being
used. If you do not use native languages or have the default language set to C,
the date format is mm/dd/yy (month/day/year) or 06/30/99 for June 30, 1999.
time The time format also depends on the native language being used. For C, the
format is hh:mm am or hh:mm pm (hour:minute in 12-hour format with the am/
pm suffix) such as 07:00 am for 7 o'clock in the morning. Twenty-four hour time
is accepted in all languages. For example, 23:30 would be accepted for 11:30 pm.
If the date or time is entered in an unacceptable format, an example in the
correct format is shown.
If no stop time is given, one minute before midnight (11:59 pm) is assumed. A
stopping time of midnight (12 am) for a given day stops at the end of that day
(23:59 on a 24-hour clock).
today Specifies the current day. The parameter today-days specifies the number of
days prior to today’s date. For example, today-1 indicates yesterday’s date and
today-2, the day before yesterday.
last Can be used to represent the last date contained in the log file. The parameter
last-days specifies the number of days prior to the last date in the log file.
first Can be used to represent the first date contained in the log file. The parameter
first+days specifies the number of days after the first date in the log file.
How to Use It
The stop command is useful if you have a very large log file and do not want to scan the entire
file. You can also use it to specify a specific time period for which data is scanned. For
example, you can scan a log file for seven days' worth of data that was logged starting a
month before the present date.
If you are not sure whether native language support is installed on your system, you can force
utility to use the C date and time formats by issuing the following statement before
running utility:
LANG=C; export LANG
or in C Shell
setenv LANG C
Example
The scan of 14 days' worth of data starts with records logged from July 5, 1999 at 8:00 AM and
stops at the last record logged July 18, 1999 at 11:59 pm.
utility>
logfile /var/opt/perf/datafiles/logglob
detail on
start 7/5/99 8:00 am
stop 7/18/99 11:59 pm
scan
To perform the above task using command line arguments, enter:
utility -D -b 7/5/99 8:00 am -e 7/18/99 11:59pm -xs
5 Using the Extract Program
Introduction
The extract program has two main functions: it lets you extract data from raw log files and
write it to extracted log files. Extract also lets you export log file data for use by analysis
products such as spreadsheets.
After the initial installation of OV Performance Agent, services must be started to complete
the file installation before extract will function.
The extract and export functions copy data from a log file; no data is removed.
Three types of log files are used by OV Performance Agent:
• scopeux log files, which contain data collected in OV Performance Agent by the scopeux
collector.
• extracted log files, which contain data extracted from raw scopeux log files.
• DSI (data source integration) log files, which contain user-defined data collected by
external sources such as applications and databases. The data is subsequently logged by
OV Performance Agent's DSI programs.
Use the extract program to perform the following tasks:
• Extract subsets of data from raw scopeux log files into an extracted log file format that is
suitable for placing in archives, for transport between systems, and for analysis by OV
Performance Manager. Data cannot be extracted from DSI log files.
• Manage archived log file data by extracting or exporting data from extracted format files,
appending data to existing extracted log files, and subsetting data by type, date, and shift
(hour of day).
• Export data from raw or extracted scopeux log files and DSI log files into ASCII, binary,
datafile, or WK1 (spreadsheet) formats suitable for reporting and analysis or for
importing into spreadsheets or similar analysis packages.
The extract function cannot produce summarized data. Summary data can only be produced
by the export function.
Examples of how various tasks are performed and how extract commands are used can be
found in online help for the extract program.
This chapter covers the following topics:
• Running the Extract Program
• Using Interactive Mode
• Extract Command Line Interface
• Overview of the Export Function
Running the Extract Program
There are three ways to run the extract program:
Command line mode
You control the extract program using command options and arguments in the command
line.
Interactive mode
You supply interactive commands and parameters while executing the program with stdin
set to an interactive terminal or workstation.
If you are an experienced user, you can quickly specify only those commands required for a
given task. If you are a new user, you may want to specify guided mode to receive more
assistance in using extract. In guided mode, you are asked to select from a list of options in
order to perform a task. While in guided mode, the interactive commands that accomplish
each task are listed as they are executed, so you can see how they are used. You can quit or
re-enter guided mode at any time.
Batch mode
You can run the program and redirect stdin to a file that contains interactive commands and
parameters.
Syntax
The syntax for the command line interface is similar to standard UNIX command line
interfaces on other programs and is described in detail in this chapter.
For interactive and batch mode the command syntax is the same: a command followed by one
or more parameters. Commands can be entered in any order; if a command has a parameter
associated with it, the parameter must be entered immediately after the corresponding
command.
There are two types of parameters - required (for which there are no defaults) and optional
(for which defaults are provided). How the extract program handles these parameters
depends on the mode in which it is running.
• In interactive mode, if an optional parameter is missing, the program displays the default
parameter and lets you either confirm it or override it. If a required parameter is missing,
the program prompts you to enter the parameter.
• In batch mode, if an optional parameter is missing, the program uses the default values.
If a required parameter is missing, the program terminates.
Errors and missing data are handled differently for interactive mode than for command line
and batch mode, because you can supply additional data or correct mistakes in interactive
mode, but not in command line and batch mode.
Using Interactive Mode
Using the extract program’s interactive mode requires you to issue a series of commands to
execute a specific task.
For example, if you want to export application data collected starting May 15, 2003, from the
default global log file, you issue the following commands after invoking the extract program:
logfile /var/opt/perf/datafiles/logglob
application detail
start 5/15/2003
export
The logfile command opens /var/opt/perf/datafiles/logglob, the default global log
file. The start command specifies that only data logged on or after 5/15/2003 will be
exported. The export command starts the export of the data.
Table 6 Command Line Arguments (Command, Option, Argument, Description)
When you are evaluating arguments and entering command options on the command line, the
following rules apply:
• Errors and missing data are handled exactly as in the corresponding batch mode
command. That is, missing data will be defaulted if possible and all errors cause the
program to terminate immediately.
• Echoing of commands and command results is disabled unless the -v argument is used to
enable verbose mode.
• If no valid action is specified (-xp, -xw, -xm, -xy, or -xt), extract starts reading
commands from its stdin file after all parameters have been processed.
• If an action is specified (-xp, -xw, -xm, -xy, or -xt), the program will execute those
command options after all other parameters are evaluated, regardless of where they were
positioned in the list of parameters.
• If an action is specified in the command line, the extract program will not read from its
stdin file; instead it will terminate following the action:
extract -f rxdata -r /var/opt/perf/rept1 -xp d-1 -G
Note that the actual exporting is not done until the end, so the -G parameter is processed
before the export is performed.
Overview of the Export Function
The extract program's export command converts OV Performance Agent raw, extracted, or
DSI log file data into exported files. The export command writes files in any one of four
possible formats: ASCII, datafile, binary, and WK1 (spreadsheet). Exported files can be used
in a variety of ways, such as reports, custom graphics packages, databases, and user-written
analysis programs.
• You can specify which data items (metrics) are needed for each type of data.
• You can specify starting and ending dates for the time period in which the data was
collected along with shift and weekend exclusion filters.
• You can specify the desired format for the exported data in an export template file. This
file can be created using any text editor or word processor that lets you save a file in
ASCII (text) format.
• You can also use the default export template file, /var/opt/perf/reptfile. This file
specifies the following output format settings:
— ASCII file format
If you are extracting or exporting data from log files created on a specific platform, it is
recommended that you use the reptall file from the same platform, because the list of
supported metric classes differs from platform to platform.
Export Data Files
If you used the output command to specify the name of an output file prior to issuing the
export command, all exported data will be written to this single file. If you are running the
extract program interactively and want to export data directly to your workstation
(standard output file), specify the extract command output stdout prior to issuing the
export command.
If the output file is set to the default, the exported data is separated into as many as 14
different default output files depending on the type of data being exported.
The default export log file names are:
where ext = asc (ASCII), bin (binary), dat (datafile), or wk1 (spreadsheet).
No output file is created unless you specify the type and associated items that match the data
in the export template file prior to issuing the export command.
Parameters
report Specifies the title for the export file. For more information, see the following
section, Export File Title on page 102.
format Specifies the format for the exported data.
ASCII
ASCII (or text) format is best for copying files to a printer or terminal. It does
not enclose fields with double quotes (").
Datafile
Binary
WK1 (spreadsheet)
The WK1 (spreadsheet) format is compatible with Microsoft Excel and other
spreadsheet and graphics programs.
headings Specifies whether or not to include column headings for the metrics listed in
the export file. If headings off is specified, no column headings are
written to the file; the first record in the file is exported data. If headings
on is specified, ASCII and datafile formats place the export title plus
column headings for each column of metrics before the first data records.
Column headings in binary format files contain the description of
the metrics in the file. WK1 format files always contain column headings.
separator Specifies the character that is printed between each field in ASCII or
datafile formatted data. The default separator character is a blank space.
Many programs prefer a comma as the field separator. You can specify the
separator as any printing or nonprinting character.
summary Specifies the number of minutes for each summary interval. The value
determines how much time is included in each record for summary records.
The default interval is 60 minutes. The summary value can be set between 5
and 1440 minutes (1 day).
missing Specifies the data value to be used in place of missing data. The default value
for missing data is zero. You can specify another value in order to
differentiate missing from zero data. A data item may be missing if it was:
• not logged by a particular version of the scopeux collector
• not logged by scopeux because the instance (application, disk,
transaction, netif) it belongs to was not active during the interval, or
• in the case of DSI log files, no data was provided to the dsilog program
during an interval, resulting in “missing records”.
Missing records are, by default, excluded from exported data.
layout Specifies either single or multiple layout (the number of instances written
per record) for multi-instance data types such as application, disk,
transaction, lvolume, or netif.
• Single layout writes one instance per record, for each instance that
was active during the time interval. Example: one disk exported per
record.
• Multiple layout writes multiple instances in one record for each time
interval, with part of that record reserved for each configured instance.
Example: all disks exported in one record.
output Specifies the name of the output file where the exported data will be written.
output can be specified for each class or data type exported by placing
output filename just after the line indicating the data type that starts the
list of exported data items. Any valid file name can be specified for output.
You can also specify the name interactively using the output command. Any
name you specify overrides the default output file name.
data type Specifies one of the exportable data types: global, application, process,
disk, transaction, lvolume, netif, configuration, or DSI class name.
This starts a section of the export template file that lists the data items to be
copied when this type of data is exported.
items Specifies the metrics to be included in the exported file. Metric names are
listed, one per line, in the order you want them listed in the resulting file. You
must specify the proper data type before listing items. The same export
template file can include item lists for as many data types as you want. Each
data type will be referenced only if you choose to export that type of data.
The output and layout parameters can be used more than once within an export template
file. For example:
data type global
output=myglobal
gbl_cpu_total_util
You cannot specify different layouts within a single data type. For example, you cannot
specify data type disk once with layout = multiple and again with layout = single
within the same export file.
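Putting these parameters together, a minimal export template file might look like the following sketch. The metric names, the output file name, and the exact spelling of each parameter line are illustrative; check them against the default reptfile shipped with OV Performance Agent.

```
report "Hypothetical global report"
format ASCII
headings on
separator=","
summary=60

data type global
output=myglobal
layout=single
GBL_CPU_TOTAL_UTIL
```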
DATA TYPE APPLICATION
APP_CPU_TOTAL_UTIL
APP_DISK_PHYS_IO_RATE
APP_ALIVE_PROCESSES
3 Run the extract program.
4 Issue the report command to specify the export template file you created.
report /var/opt/perf/report1
5 Specify global summary data and application summary data using the global and
application commands.
global summary
application summary
6 Issue the export command to start the export.
export
7 Because you did not specify where the program should get the performance data from, you
are prompted to do so. In this example, since the default log file is correct, just press Enter.
8 The output looks like this:
exporting global data ........50%......100%
exporting application data .....50%......100%
The exported file contains 31 days of data from 01/01/99 to 01/31/99
                         examined   exported
data type                 records    records     space
-----------------------  ---------  ---------  ---------
global summaries               736               0.20 Mb
application summaries         2560               0.71 Mb
                                                ---------
                                                  0.91 Mb
The two files you have just created — xfrsGLOBAL.asc and xfrsAPPLICATION.asc —
contain the global and application summary data in the specified format.
Report title and heading lines are not repeated in the file.
To use a nonprinting special character as a separator, enter it into your export template file
immediately following the first double quote in the separator parameter.
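For illustration only, a comma-separated ASCII export with headings on might look like the following sketch (the metric name and the data values are invented):

```shell
# Write a hypothetical three-line ASCII export: a heading record followed
# by two data records, comma-separated.
printf '%s\n' \
  'Date,Time,GBL_CPU_TOTAL_UTIL' \
  '05/15/2003,08:00,12.5' \
  '05/15/2003,09:00,17.3' > /tmp/sample_export.asc

cat /tmp/sample_export.asc
```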
Binary integers are written in the format that is native to the system on which the extract
program is being run. For example, Intel systems write “little endian” integers while HP-UX,
IBM AIX, and Sun systems write “big endian” integers. Use care when transporting binary
exported files to systems that use different “endians”.
Record Length Number of bytes in the record, including the 16 byte record header.
Record ID A number to identify the type of record (see below).
Date_Seconds Time since January 1, 1970 (in seconds).
Number_of_vars Number of repeating entries in this record.
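The four header fields can be sketched as four 4-byte integers. The fragment below builds a fake 16-byte header for a title record (Record Length 16, Record ID -1, the remaining two fields 0) and decodes it with od; little-endian byte order is assumed here, as on Intel systems.

```shell
# Fake header: 16 (record length), -1 (title record), 0, 0 as
# little-endian 32-bit integers, written with portable octal escapes.
printf '\020\000\000\000\377\377\377\377\000\000\000\000\000\000\000\000' \
  > /tmp/fake_hdr.bin

# Decode as four signed 32-bit integers in host byte order.
od -An -td4 /tmp/fake_hdr.bin | awk '{print $1, $2, $3, $4}'
# prints: 16 -1 0 0 on a little-endian host
```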
The Record ID metric uniquely identifies the type of data contained in the record. Current
Record ID values are:
-1 Title Record
-2 First header Record (Contains Item Numbers)
-3 Second header Record (Contains Item Scale Factors)
-4 Application Name Record (for Multiple Instance Application Files)
-5 Transaction Name Record (for Multiple Instance Transaction Files)
-7 Disk Device Name Record (for Multiple Instance Disk Device Files)
-8 Logical Volume Name Record (for Multiple Instance Lvolume Files)
-9 Netif Name Record (for Multiple Instance Netif Files)
-10 Filesystem Name Record (for Multiple Instance Filesystem Files)
-11 CPU Name Record (for Multiple Instance CPU Files)
Binary Header Records
Title Record This record (Record ID -1) is written whenever headings on is
specified, regardless of whether the user specified a report title. It
contains information about the log file and its source.
First Header Record The first header record (Record ID -2) contains a list of unique
item identification numbers corresponding to the items contained
in the log file. The position of the item ID numbers can be used to
determine the position and size of each exported item in the file.
Second Header Record The second header record (Record ID -3) contains a list of scale
factors which correspond to the exported items. For more details,
see the discussion of “Scale Factors” later in this section.
Application Name Record This record (Record ID -4) will only be present in application data
files that utilize the Multiple Layout format. It lists the names of
the applications that correspond to the groups of application
metrics in the rest of the file.
Transaction Name Record This record (Record ID -5) will only be present in transaction
tracking data files that utilize the Multiple Layout format. It lists
the names of the transactions that correspond to the groups of
transaction metrics in the rest of the file.
Disk Device Name Record This record (Record ID -7) will only be present in disk device data
files that utilize the Multiple Layout format. It lists the names of
disk devices that correspond to the groups of disk device metrics in
the rest of the file.
Logical Volume Name Record This record (Record ID -8) will only be present in logical volume
data files that utilize the Multiple Layout format. It lists the
names of logical volumes that correspond to the groups of logical
volume metrics in the rest of the file.
Netif Name Record This record (Record ID -9) will only be present in netif (LAN) data
files that utilize the Multiple Layout format. It lists the names of
netif devices that correspond to the groups of netif device
metrics in the rest of the file.
Filesystem Name Record This record (Record ID -12) will only be present in filesystem data
files that utilize the Multiple Layout format. It lists the names of
filesystems that correspond to the groups of filesystem metrics in
the rest of the file.
Cpu Name Record This record (Record ID -13) will only be present in CPU data files
that utilize the Multiple Layout format. It lists the names of CPUs
that correspond to the groups of CPU metrics in the rest of the file.
Each scale factor is a 32-bit (4-byte) integer to match the majority of data items. Special
values for the scale factors are used to indicate non-numeric and other special valued metrics.
6 Extract Commands
Introduction
This chapter describes the extract program’s commands. It includes a table showing
command syntax, a table of commands for extracting and exporting data, and a command
reference section describing the commands in alphabetical order.
Commands and parameters for extract can be entered with any combination of uppercase
and lowercase letters. Only the first three letters of the command's name are required, except
for the weekdays and weekly commands, which require you to enter the whole name. For
example, the command application detail can be abbreviated as app det.
Examples of how these commands are used can be found in online help for the extract
program.
The table on the following pages summarizes the syntax of the extract commands and their
parameters.
The extract function cannot produce summarized data. Summary data can only be produced
by the export function.
Command Parameter
application on
detail
summary (export only)
both (export only)
off (default)
cpu detail
summary (export only)
both (export only)
off (default)
configuration on
detail
off (default)
Table 7 Extract Commands: Syntax and Parameters (cont’d)
Command Parameter
disk on
detail
summary (export only)
both (export only)
off (default)
exit
e
filesystem detail
summary (export only)
both (export only)
off (default)
global on
detail (default)
summary (export only)
both (export only)
off
guide
help
licheck
list filename
*
logfile logfile
lvolume on
detail
summary (export only)
both (export only)
off (default)
menu
monthly yymm
mm
Table 7 Extract Commands: Syntax and Parameters (cont’d)
Command Parameter
netif on
detail
summary (export only)
both (export only)
off (default)
output outputfile
,new
,purgeboth
,append
process on
detail [app#[-#],...]
off (default)
killed
quit
q
sh shell command
!
show all
start date[time]
today[-days][time]
last[-days][time]
first[+days][time]
stop date[time]
today[-days][time]
last[-days][time]
first[+days][time]
transaction on
detail
summary (export only)
both (export only)
off (default)
weekdays 1.....7
weekly yyww
ww
yearly yyyy
yy
application x x x
class x x x x
configuration x x
cpu x x x
disk x x x
export x x x
extract x x
filesystem x x x
global x x x
logfile x x x x
lvolume x x x
monthly x x
netif x x
output x x x x
process x x x
report x x x
shift x x x
start x x x x
stop x x x x
transaction x x x x
weekdays x x x
weekly x x
yearly x x
logicalsystems x x x
application
Use the application command to specify the type of application data that is being extracted
or exported.
The default is application off.
Syntax
application [on | detail | summary | both | off]
Parameters
If you are using OV Performance Manager, detail data must be included in an extracted file
before drawing application graphs with points every 5 minutes.
Example
In this example, the application command causes detailed application log file data to be
exported: The output export file contains the application metrics specified in the myrept
export template file.
logfile /var/opt/perf/datafiles/logglob
global off
application detail
report /var/opt/perf/myrept
export
To perform the above task using command line arguments, enter:
extract -a -r /var/opt/perf/myrept -xp
Syntax
class [classname] [detail | summary | both | off]
Parameters
Examples
To export summary data in a DSI log file that contains a class named acctg_info, issue the
following command:
class acctg_info summary
Once the log file is specified by the user and opened by the extract program, the
acctg_info class is verified to exist in the log file and can subsequently be exported.
Other variations of this command are:
CLASS ACCTG_INFO SUMMARY
class ACCTG_INFO summary
class acctg_info sum
Commands can be entered in either uppercase or lowercase. Class names are always converted
to uppercase before they are compared.
In the following example, summary data in a class named fin_info is exported.
extract>
class fin_info summary
export
To perform the above task using command line arguments, enter:
extract -l dsi.log -C fin_info summary -xp
configuration
Use the configuration command to specify whether or not to export system configuration
information.
The default is configuration off.
Syntax
configuration [on | detail | off]
Parameters
The configuration command affects only the export function. The extract function is not
affected because it always extracts system configuration information.
Example
In this example, the configuration command causes system configuration information to be
exported. The output export file contains the configuration metrics specified in the myrept
export template file.
logfile /var/opt/perf/datafiles/logglob
configuration on
report /var/opt/perf/myrept
export
To perform the above task using command line arguments, enter:
extract -c -r /var/opt/perf/myrept -xp
Syntax
cpu [detail | summary | both | off]
Parameters
Example
In this example, the cpu command causes CPU detail data that was collected starting July
26, 2001 to be exported. Because no export template file is specified, the default export
template file, reptfile, is used. All CPU metrics are included in the output file as specified
by reptfile.
logfile /var/opt/perf/datafiles/logglob
global off
cpu detail
start 7/26/01
export
To perform the above task using command line arguments, enter:
extract -u -b 7/26/01 -xp
disk
Use the disk command to specify the type of disk device data that is being extracted or
exported.
The default is disk off.
Syntax
disk [on | detail | summary | both | off]
Parameters
Example
In this example, the disk command causes disk detail data that was collected starting July
5, 1999 to be exported. Because no export template file is specified, the default export
template file, reptfile, is used. All disk metrics are included in the output file as specified
by reptfile.
logfile /var/opt/perf/datafiles/logglob
global off
disk detail
start 7/5/99
export
To perform the above task using command line arguments, enter:
extract -D -b 7/5/99 -xp
Syntax
exit
e
export
Use the export command to start the process of copying data into an exported file format.
Syntax
Parameters
Use one of the following parameters to export data for a particular interval.
How to Use It
There are four ways to specify a particular interval (day, week, month, year).
• Current interval - Specify the parameter only. For example, month means the current
month.
• Previous interval - Specify the parameter, a minus, and the number of intervals before the
current one desired. For example, day-1 is yesterday, week-2 is two weeks prior to the
current week.
• Absolute interval - Specify the parameter and a positive number. The number indicates
the absolute interval desired in the current year. For example, day 2 is January 2 of the
current year.
• Absolute interval plus year - Specify the parameter and a large positive number. The
number should consist of the last two digits of the year and the absolute interval number
in that year. In this format the absolute day would have 5 digits (99002 means January 2,
1999) and all other parameters would have four digits (month 9904 means April of 1999).
If you have not previously specified a log file or an export template file, the logfile command
uses the default global log file logglob and the report command uses the default export
template file reptfile.
The settings or defaults for all other parameters are used. For details on their actions, see
descriptions of the application, configuration, global, process, disk, lvolume, netif,
CPU, filesystem, transaction, output, shift, start, and stop commands.
For more information about exporting data, see Overview of the Export Function on page 97
in Chapter 5.
Example
In this example, the export command causes log file data collected yesterday from 8:00 am to
5:00 pm to be exported. Because no export template file is specified, the default export template
file, reptfile, is used. All global metrics are included in the output file as specified by
reptfile.
logfile /var/opt/perf/datafiles/logglob
start today-1 8:00 am
stop today-1 5:00 pm
global both
export
To perform the above task using command line arguments, enter:
extract -gG -b today-1 8:00 am -e today-1 5:00 pm -xp
Syntax
Parameters
Use one of the following parameters to extract data for a particular interval:
How to Use It
There are four ways to specify a particular interval (day, week, month, year).
• Current interval - Specify the parameter only. For example, month means the current
month.
• Previous interval - Specify the parameter, a minus, and the number of intervals before the
current one desired. For example, day-1 is yesterday, week-2 is two weeks prior to the
current week.
• Absolute interval - Specify the parameter and a positive number. The number indicates
the absolute interval desired in the current year. For example, day 2 is January 2 of the
current year.
• Absolute interval plus year - Specify the parameter and a large positive number. The
number should consist of the last two digits of the year and the absolute interval number
in that year. In this format, the absolute day would have five digits (99002 means January
2, 1999) and all other parameters would have four digits (month 9904 means April of
1999).
The extract command starts data extraction. If not previously specified, the logfile and
output commands assume the following defaults when the extract command is executed:
log file = /var/opt/perf/datafiles/logglob
output file = rxlog,new
The settings or defaults for all other parameters are used. For details on their actions, see
descriptions of the application, global, process, disk, lvolume, netif, CPU, filesystem,
transaction, shift, start, and stop commands.
The size of an extracted log file cannot exceed 3.5 gigabytes.
Example
In the first example, data collected from March 1, 2000 to June 30, 2000 during the hours 8:00
am to 5:00 pm on weekdays is extracted. Only global and application detail data is extracted.
logfile /var/opt/perf/datafiles/logglob
start 03/01/00
stop 06/30/00
shift 8:00 am - 5:00 pm noweekends
global detail
application detail
extract
To perform the above task using command line arguments, enter:
extract -ga -b 03/01/00 -e 6/30/00 -s 8:00 am - 5:00 noweekends -xt
In the second example, a new extracted log file named rxjan00 is created. Any existing file
that has this name is purged. All raw log file data collected from January 1, 2000 through
January 31, 2000 is extracted:
logfile /var/opt/perf/datafiles/logglob
output rxjan00,purge
start 01/01/00
stop 01/31/00
global detail
application detail
transaction detail
process detail
disk detail
lvolume detail
netif detail
filesystem detail
cpu detail
extract
To perform the above task using command line arguments, enter:
extract -f rxjan00,purge -gatpdznyu -b 01/01/00 -e 01/31/00 -xt
Syntax
filesystem [detail | summary | both | off]
Parameters
Example
In this example, the filesystem command causes filesystem detail data that was collected
starting July 26, 2001 to be exported. Because no export template file is specified, the default
export template file, reptfile, is used. All filesystem metrics are included in the output file
as specified by reptfile.
logfile /var/opt/perf/datafiles/logglob
global off
filesystem detail
start 7/26/01
export
To perform the above task using command line arguments, enter:
extract -y -b 7/26/01 -xp
global
Use the global command to specify the amount of global data to be extracted or exported.
The default is global detail. (In command line mode, the default is global off.)
Syntax
global [on | detail | summary | both | off]
Parameters
How to Use It
Detail data must be extracted if you want to draw OV Performance Manager global graphs
with points every 5 minutes.
Summarized data is graphed by OV Performance Manager more quickly since fewer data
records are needed to produce a graph. If only global summaries are extracted, OV
Performance Manager global graphs cannot be drawn with data points every 5 minutes.
The both option maintains the access speed gained with the hourly summary records while
permitting you to draw OV Performance Manager global graphs with points every 5 minutes.
The off parameter is not recommended if you are using OV Performance Manager because
you must have global data to properly understand overall system behavior. OV Performance
Manager global graphs cannot be drawn unless the extracted file contains at least one type of
global data.
Example
The global command is used here to specify that no global data is to be exported (global
detail is the default). Only detailed transaction data is exported. The output export file
contains the transaction metrics specified in the myrept export template file.
extract>
logfile /var/opt/perf/datafiles/logglob
global off
transaction detail
report /var/opt/perf/myrept
export
To perform the above task using command line arguments, enter:
extract -l -t -r /var/opt/perf/myrept -xp
Syntax
guide
How to Use It
• To enter guided commands mode from extract’s interactive mode, type guide.
• To accept the default value for a parameter, press Return.
• To terminate guided commands mode and return to interactive mode, type q at the
guide> prompt.
This command does not provide all possible combinations of parameter settings. It selects
settings that should produce useful results for the majority of users. You can obtain full
control over extract’s functions through extract’s interactive command mode.
If you are exporting DSI log file data, we recommend using guided commands mode to create
a customized export template file and export the data.
help
Use the help command to access online help.
Syntax
help [keyword]
How to Use It
You can enter parameters to obtain information on extract commands and tasks, or on help
itself. You can navigate to different topics by entering a key word. If more than one page of
information is available, the display pauses and waits for you to press Return before
continuing. Type q or quit to exit the help system and return to the extract program.
You can also request help on a specific topic. For example,
help tasks
or
help resize parms
When you use this form of the help command, you receive the help text for the specified topic
and remain in the extract command entry context. Because you do not enter the help
subsystem interactively, you do not have to type quit before entering the next extract
command.
Syntax
licheck
Example
To check the validity of the license, type the command:
extract -licheck
If the license is valid and permanent, the following message appears:
The permanent OVPA software has been installed
list
Use the list command to specify the list file for all extract program reports.
Syntax
list [file]
[*]
How to Use It
You can use list at any time while using extract to specify the list device. It uses a file
name or list device name to output the user-specified settings. If the list file already exists, the
output is appended to it.
The data that is sent to the list device is also displayed on your screen.
While extract is running, type:
list outfilename
To return the listing device to the user terminal, type:
list stdout
or
list *
To determine the current list device, type the list command without parameters as follows:
list
If the list file is not stdout, most commands are echoed to the list file as they are entered.
Example
In the following example, the list device is set to mylist. The results of the next commands are
printed to mylist and displayed on your screen.
extract>
logfile /var/opt/perf/datafiles/logglob
list mylist
global detail
shift 8:00 AM - 5:00 PM
extract
Syntax
logfile [logfile]
How to Use It
To open a log file, you can specify the name of either a raw or extracted log file. You cannot
specify the name of a file created by the export command. If you specify an extracted log file
name, all information is obtained from this single file. If you specify a raw log file name, you
must specify the name of the global log file before you can access the raw log file. This is the
only raw log file name you should specify.
If the log file is not in your current working directory, you must provide its path.
The global log file and other raw log files must be in the same directory. They have the
following names:
The general contents of the log file are displayed when the log file is opened.
Do not rename raw log files! When accessing these files, the program assumes that the
standard log file names are in effect. If you must rename log files to place log files from
multiple systems on the same system for analysis, you should first extract the data and then
rename the extracted log files.
Example
This is a typical listing of the default global log file.
Global file: /var/opt/perf/datafiles/logglob, version D
Application file: /var/opt/perf/datafiles/logappl
Process file: /var/opt/perf/datafiles/logproc
Device file: /var/opt/perf/datafiles/logdev
Transaction file: /var/opt/perf/datafiles/logdev
Index file: /var/opt/perf/datafiles/logindx
System ID: homer
System Type 9000/715/ S/N 2223334442 O/S HP-UX B.10.20 A
Data collector: SCOPE/UX C.02.30
File Created: 10/08/99
Data Covers: 44 days to 11/20/99
Shift is: All Day
Data records available are:
Global Application Process Disk Volume Transaction
Maximum file sizes:
Global=10.0 Application=10.0 Process=20.0 Device=10.0 Transaction=10.0 MB
The first GLOBAL record is on 10/08/99 at 08:17 AM
The first NETIF record is on 10/08/99 at 08:17 AM
The first APPLICATION record is on 11/17/99 at 12:15 PM
The first PROCESS record is on 10/08/99 at 08:17 AM
The first DEVICE record is on 10/31/99 at 10:45 AM
The Transaction data file is empty
The default starting date & time = 10/08/99 11:50 AM (LAST -30)
The default stopping date & time = 11/20/99 11:59 PM (LAST -0)
Syntax
lvolume [on | detail | summary | both | off]
Parameters
Example
In this example, a new extracted log file named rx899 is created and any existing file that
has that name is purged. All logical volume data collected from August 1 through August 31 is
extracted.
logfile /var/opt/perf/datafiles/logglob
output rx899,purge
start 08/01/99
stop 08/31/99
global detail
lvolume detail
month 9908
To perform the above task using command line arguments, enter:
extract -f rx899,purge -gz -xm 9908
menu
Use the menu command to print a list of the available extract commands.
Syntax
menu
Example
Command Parameters Function
HELP [topic] Get information on commands and options
GUIDE Enter guided commands mode for novice users
LOGFILE [logname] Specify a log file to be processed
LIST [filename|*] Specify the listing file
OUTPUT [filename][,NEW/PURGE/APPEND] Specify a destination file
REPORT [filename][,SHOW] Specify an Export Format file for EXPORT
GLOBAL [DETAIL/SUMMARY/BOTH/OFF] Extract GLOBAL records
APPLICATION [DETAIL/SUMMARY/BOTH/OFF] Extract APPLICATION records
PROCESS [DETAIL/OFF/KILLED][APP=] Extract PROCESS records
DISK [DETAIL/SUMMARY/BOTH/OFF] Extract DISK DEVICE records
LVOLUME [DETAIL/SUMMARY/BOTH/OFF] Extract Logical VOLUME records
NETIF [DETAIL/SUMMARY/BOTH/OFF] Extract NETIF records
CPU [DETAIL/SUMMARY/BOTH/OFF] Extract CPU records
FILESYSTEM [DETAIL/SUMMARY/BOTH/OFF] Extract FILESYSTEM records
CONFIG [DETAIL/OFF] Export CONFIGURATION records
CLASS classname[DETAIL/SUMMARY/BOTH/OFF] Export classname records
TRANSACTION [DETAIL/SUMMARY/BOTH/OFF] Extract TRANSACTION records
START [startdate time] Specify a starting date and time for SCAN
STOP [stopdate time] Specify an ending date and time for SCAN
SHIFT [starttime - stoptime] [NOWEEKENDS] Specify daily shift times
SHOW [ALL] Show the current program settings
EXPORT [d/w/m/y][-offset] Copy log file records to HOST format files
EXTRACT [d/w/m/y][-offset] Copy selected records to output (or append) file
WEEKLY [ww/yyww] Extract one calendar week's data with auto file names
MONTHLY [mm/yymm] Extract one calendar month's data with auto file names
YEARLY [yy/yyyy] Extract one calendar year's data with auto file names
WEEKDAYS [1...7] Set days to exclude from export 1=Sunday
! or SH [command] Execute a system command
MENU or ? List the command menu (this listing)
EXIT or Q Terminate the program
monthly
Syntax
monthly [yymm]
[mm]
Parameters
How to Use It
Use the monthly command when you are extracting data for archiving on a monthly basis.
The type of data extracted and summarized follows the normal rules for the extract
command and can be set before executing the monthly command. These settings are honored
unless a monthly output file already exists. If it does, data is appended to it based on the type
of data that was originally specified.
The monthly command has a feature that opens the previous month's extracted file and
checks to see if it is filled--whether it contains data extracted up to the last day of the month.
If not, the monthly command appends data to this file to complete the previous month's
extraction.
For example, a monthly command is executed on May 7, 1999. This creates a log file named
rxmo199905 containing data from May 1 through the current date (May 7).
On June 4, 1999, another monthly command is executed. Before the rxmo199906 file is
created for the current month, the rxmo199905 file from the previous month is opened and
checked. When it is found to be incomplete, data is appended to it to complete the extraction
through May 31, 1999. Then, the rxmo199906 file is created to hold data from June 1, 1999 to
the current date (June 4).
As long as you execute the monthly command at least once a month, this feature will complete
each month's file before creating the next month's file. When you see two adjacent monthly
files--for example, rxmo199905 (May) and rxmo199906 (June)--you can assume that the first
file is complete for that month and it can be archived and purged.
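The rxmo naming convention described above can be reproduced with standard tools. As a sketch, the following uses GNU date(1) (the -d option is a GNU extension) to compute the file name the monthly command would use for a given date; the helper function name is hypothetical.

```shell
#!/bin/sh
# Sketch: compute the rxmoYYYYMM archive name for a given date.
# Requires GNU date for the -d option.
monthly_name() {
    echo "rxmo$(date -d "$1" +%Y%m)"
}

monthly_name 1999-05-07    # prints rxmo199905
monthly_name 1999-06-04    # prints rxmo199906
```

Once rxmo199906 exists alongside rxmo199905, the earlier file can be archived and purged, as described above.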
The monthly and extract month commands are similar in that they both extract one
calendar month's data. The monthly command ignores the setting of the output command,
using instead predefined output file names. It also attempts to append missing data to the
previous month's extracted log file if it is still present on the system. The extract month
command, on the other hand, uses the settings of the output command. It cannot append
data to the previous month's extracted file since it does not know its name.
Example
In this example, detail application data logged during May 1999 is extracted.
logfile /var/opt/perf/datafiles/logglob
global off
application detail
monthly 9905
To perform the above task using command line arguments, enter:
extract -a -xm 9905
netif
Syntax
netif [on | detail | summary | both | off]
Parameters
Example
In this example, netif detail data collected from March 1, 2000 to June 30, 2000 during the
hours 8:00 am to 5:00 pm on weekdays is extracted.
logfile /var/opt/perf/datafiles/logglob
start 03/01/00
stop 06/30/00
shift 8:00 AM - 5:00 PM noweekends
netif detail
extract
To perform the above task using command line arguments, enter:
extract -n -b 03/01/00 -e 06/30/00 -s 8:00 AM - 5:00 PM noweekends -xt
output
Use the output command to specify the name of an output file for the extract or export
functions.
The optional second parameter specifies the action to be taken if an output file with the same
name exists.
Syntax
output [filename] [,new | ,purge | ,append]
Parameters
,new Specifies that the output file must be a new file. This is the default action in
batch mode. If a file with the same name exists, the batch job terminates.
,purge Specifies that any existing file should be purged to make room for the new
output file.
,append Specifies that an existing extracted file should have data appended to it. If no
file exists with the output file name specified, a new file is created.
How to Use It
If you do not specify an action in batch mode, the default action ,new is used. In interactive
mode, you are prompted to enter an action if a duplicate file is found.
If you do not specify an output file, default output files are created. The default output file
names are:
For extract: rxlog
For export:
xfrdGLOBAL.ext
xfrsGLOBAL.ext
xfrdAPPLICATION.ext
xfrsAPPLICATION.ext
xfrdPROCESS.ext
xfrdDISK.ext
xfrsDISK.ext
xfrdLVOLUME.ext
xfrsLVOLUME.ext
xfrdNETIF.ext
xfrsNETIF.ext
xfrdCPU.ext
xfrsCPU.ext
xfrdFILESYSTEM.ext
xfrsFILESYSTEM.ext
You can override the default output file names for exported files using the output parameter
in the export template file.
You should not output extract operation files to stdout, because they are incompatible with
ASCII devices. You should also not output binary or WK1 formats of the export operation to
the stdout file for the same reason.
Care should be taken to avoid appending extracted data to an existing exported data file and
to avoid appending exported data to an existing extracted file. Attempts to append the wrong
data type will result in an error condition.
Example
In this example, the default output file name rxlog is given explicitly so that the ,purge
option can be specified; any existing output file is purged before the application detail data
is extracted.
extract>
logfile /var/opt/perf/datafiles/logglob
output rxlog,purge
global off
application detail
extract month 9905
To perform the above task using command line arguments, enter:
extract -f rxlog,purge -a -xm 9905
process
Use the process command to specify whether or not to extract or export process data.
The default is process off.
Syntax
process [on | detail | off | killed] [application=#[-#] ,...]
Parameters
Process data can increase the size of an extracted log file significantly. If you plan to copy the
log file to a workstation for analysis, you might want to limit the amount of process data
extracted.
Example
In this example, the process command specifies processes that terminated during an interval
and belong to applications 1, 4, 6, 7, 8, or 10. Use the utility program’s scan command to
find the application numbers for specific applications.
process killed applications=1,4,6-8,10
quit
Syntax
quit
q
report
Use the report command to specify the export template file to be used by the export
function. If no export template file is specified, the default export template file, reptfile, is
used. The export template file is used to specify various output format attributes used in the
export function. It also specifies which metrics will be exported.
If you are in interactive mode and specify no export template file, all metrics for the data
types requested will be exported in ASCII format.
Syntax
report [exporttemplatefile] [,show]
Parameters
,show Specifies that the field positions and starting columns should be listed for all
metrics specified in the export template file. This listing can be used when export
files are processed by other programs.
How to Use It
When you issue this command, you are prompted by a message that asks whether or not you
want to validate metrics in the export template with the previously specified log file.
Validation ensures that the metrics specified in the export template file exist in the log file.
This allows you to check for possible errors in the export template file. If no validation is
performed, this action is deferred until you perform an export.
The ,show parameter of the report command discussed here is different from the show
command discussed later.
sh
Syntax
sh or ! [shell command]
Parameters
sh ls Executes the ls command and returns to extract. The shell command is any
system command.
!ls Same as above.
!ksh Starts a Korn shell. Does not return immediately to extract. Type exit or
press CTRL-d, then Return, to return to the extract program.
How to Use It
Following the execution of the single command, you automatically return to extract. If you
want to issue multiple shell commands without returning to extract after each one, you can
start a new shell.
If you issue the sh command without the name of the shell command, you are prompted to
supply it. For example,
sh
shift
Use the shift command to limit data extraction to certain hours of the day corresponding to
work shifts and to exclude weekends (Saturday and Sunday).
The default is shift all day to extract data for all day, every day including weekends.
Syntax
shift [starttime-stoptime | all day] [noweekends]
Parameters
The starttime and stoptime parameters are entered in the same format as the time in the
start command. Shifts that span midnight are permitted. If starttime is scheduled after
the stoptime, the shift will start at the start time and proceed past midnight, ending at the
stoptime of the next day.
all day Specifies the default shift of 12:00 am - 12:00 am (or 00:00 - 00:00 on a
24-hour clock).
noweekends Specifies the exclusion of data which was logged on Saturdays and Sundays.
If noweekends is entered in conjunction with a shift that spans midnight,
the weekend will consist of those shifts that start on Saturday or Sunday.
Example
In this example, disk detail data collected between 10:00 am and 4:00 pm every day starting
June 15, 1999 is extracted.
extract>
logfile /var/opt/perf/datafiles/logglob
global off
disk detail
shift 10:00 am - 4:00 PM
start 6/15/99
extract
To perform the above task using command line arguments, enter:
extract -d -b 6/15/99 -s 10:00 AM-4:00 PM -xt
show
Syntax
show [all]
The show command discussed here is different from the ,show parameter of the report
command discussed earlier.
Examples
Use show by itself to produce a list that may look like this:
Logfile: /var/opt/perf/datafiles/logglob
Output: Default
Report: Default
List: "stdout"
The default starting date & time = 10/08/99 12:00 AM (LAST -30)
The default stopping date & time = 11/20/99 11:59 PM (LAST -0)
The default shift = 12:00 AM - 12:00 PM
GLOBAL DETAIL records will be processed
APPLICATION. . . . . . . . . . NO records will be processed
PROCESS . . . . . . . . . . . NO records will be processed
DISK DEVICE. . . . . . . . . . NO records will be processed
LVOLUME. . . . . . . . . . . . NO records will be processed
TRANSACTION. . . . . . . . . . NO records will be processed
NETIF . . . . . . . . . . . . .NO records will be processed
CPU . . . . . . . . . . . . . .NO records will be processed
FILESYSTEM. . . . . . . . . . .NO records will be processed
Configuration . . . . . . . . .NO records will be processed
Use show all to produce a more detailed list that may look like this:
Logfile: /var/opt/perf/datafiles/logglob
Global file: /var/opt/perf/datafiles/logglob,version D
Application file: /var/opt/perf/datafiles/logappl
Process file: /var/opt/perf/datafiles/logproc
Device file: /var/opt/perf/datafiles/logdev
Transaction file: /var/opt/perf/datafiles/logdev
Index file: /var/opt/perf/datafiles/logindx
System ID: homer
System Type 9000/715/ S/N 2223334442 O/S HP-UX B.10.20 A
Data collector: SCOPE/UX C.02.30
File Created: 10/08/99
Data Covers: 44 days to 11/20/99
Shift is: All Day
Data records available are:
Global Application Process Disk Volume Transaction
Maximum file sizes:
Global=10.0 Application=10.0 Process=20.0 Device=10.0
Transaction=10.0 MB
Output: Default
Report: Default
List: "stdout"
The default starting date & time = 10/08/99 11:50 AM (LAST -30)
The default stopping date & time = 11/20/99 11:59 PM(LAST - 0)
The default shift = 12:00 AM - 12:00 PM
GLOBAL...........DETAIL...........records will be processed
APPLICATION....................NO records will be processed
PROCESS........................NO records will be processed
DISK DEVICE....................NO records will be processed
LVOLUME........................NO records will be processed
TRANSACTION....................NO records will be processed
NETIF..........................NO records will be exported
CPU............................NO records will be processed
FILESYSTEM.....................NO records will be processed
Configuration ..................NO records will be exported
Export Report Specifications:
Interval = 3600, Separator = " "
Missing data will not be displayed
Headings will be displayed
Date/time will be formatted
Days to exclude: None
start
Syntax
start [date [time] | today [-days][time] | last [-days][time] | first [+days][time]]
Parameters
date The date format depends on the native language that is configured for your
system. If you do not use native languages or you have set C as the default
language, the date format is mm/dd/yy (month/day/year) such as 09/30/99 for
September 30, 1999, for the extract or export function.
time The time format also depends on the native language used. For the C language,
the format is hh:mm am or hh:mm pm (hour:minute in a 12-hour format with
the am or pm suffix). For example, 07:00 am is 7 o'clock in the morning.
Twenty-four hour time is accepted in all languages. For example, 23:30 would be
accepted for 11:30 pm.
If the format of the date or time is unacceptable, you are prompted with an
example in the correct format.
If no start time is given, midnight (12:00 am) is assumed. A starting time of
midnight for a given day starts at the beginning of that day (00:00 on a 24-hour
clock).
today Specifies the current day. The qualification of the parameter, such as
today-days, specifies the number of days prior to today's date. For example,
today-1 indicates yesterday's date and today-2, the day before yesterday.
last Can be used to represent the last date contained in the log file. The parameter
last-days specifies the number of days prior to the last date in the log file.
first Can be used to represent the first date contained in the log file. The parameter
first+days specifies the number of days after the first date in the log file.
How to Use It
The following commands override the starting date set by the start command.
• weekly
• monthly
• yearly
• extract (If day, week, month, or year parameter is used)
• export (If day, week, month, or year parameter is used)
Example
In this example, the start command specifies June 5, 1999 8:00 am as the start time of the
first interval to be extracted. The output command specifies an output file named myout.
logfile /var/opt/perf/datafiles/logglob
start 6/5/99 8:00 am
output myout
global detail
extract
To perform the above task using command line arguments, enter:
extract -g -b 06/05/99 8:00 AM -f myout -xt
stop
Syntax
stop [date [time] | today [-days][time] | last [-days][time] | first [+days][time]]
Parameters
date The date format depends on the native language that is configured for your
system. If you do not use native languages or you have set C as the default
language, the date format is mm/dd/yy (month/day/year) such as 09/30/99 for
September 30, 1999, for the extract or export function.
time The time format also depends on the native language used. For the C language,
the format is hh:mm am or hh:mm pm (hour:minute in a 12-hour format with
the am or pm suffix). For example, 07:00 am is 7 o'clock in the morning.
Twenty-four hour time is accepted in all languages. For example, 23:30 would be
accepted for 11:30 pm.
If the format of the date or time is unacceptable, you are prompted with an
example in the correct format.
If no stop time is given, one minute before midnight (11:59 pm) is assumed. A
stopping time of midnight (12:00 am) for a given day stops at the end of that day
(23:59 on a 24-hour clock).
today Specifies the current day. The qualification of the parameter, such as
today-days, specifies the number of days prior to today's date. For example,
today-1 indicates yesterday's date and today-2 the day before yesterday.
last Can be used to represent the last date contained in the log file. The parameter
last-days specifies the number of days prior to the last date in the log file.
first Can be used to represent the first date contained in the log file. The parameter
first+days specifies the number of days after the first date in the log file.
How to Use It
The following commands override the stopping date set by the stop command.
• weekly
• monthly
• yearly
• extract (If day, week, month, or year parameter is used)
• export (If day, week, month, or year parameter is used)
Example
In this example, the stop command specifies June 5, 1999 5:00 pm as the stopping time of the
last interval to be extracted. The output command specifies an output file named myout.
extract>
logfile /var/opt/perf/datafiles/logglob
start 6/5/99 8:00 AM
stop 6/5/99 5:00 PM
output myout
global detail
extract
To perform the above task using command line arguments, enter:
extract -g -b 6/5/99 8:00 AM -e 6/5/99 5:00 PM -f myout -xt
transaction
Syntax
transaction [on | detail | summary | both | off]
Parameters
Example
A new extracted log file called rxmay99 is created on June 1, 1999. Any existing file that has
this name is purged. All raw transaction log file data collected from May 1, 1999 to May 31,
1999 is extracted.
extract>
logfile /var/opt/perf/datafiles/logglob
output rxmay99,purge
global detail
transaction detail
month 9905
To perform the above task using command line arguments, enter:
extract -gt -f rxmay99,purge -xm 9905
weekdays
Use the weekdays command to exclude data for specific days from being exported (day 1 =
Sunday).
Syntax
weekdays [1|2|...|7]
How to Use It
If you want to export data from only certain days of the week, use this command to exclude
the days from which you do not want data. Days have the following values:
Sunday =1
Monday =2
Tuesday =3
Wednesday =4
Thursday =5
Friday =6
Saturday =7
For example, if you want to export data that was logged only on Monday through Thursday,
exclude data from Friday, Saturday, and Sunday from your export.
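The day-number mapping above can be captured in a small helper; the following is a sketch (the function name is hypothetical):

```shell
#!/bin/sh
# Sketch: map day names to the values the weekdays command expects
# (1 = Sunday ... 7 = Saturday).
day_number() {
    case "$1" in
        Sunday)    echo 1 ;;
        Monday)    echo 2 ;;
        Tuesday)   echo 3 ;;
        Wednesday) echo 4 ;;
        Thursday)  echo 5 ;;
        Friday)    echo 6 ;;
        Saturday)  echo 7 ;;
        *) echo "unknown day: $1" >&2; return 1 ;;
    esac
}

# Keep Monday through Thursday: exclude Friday, Saturday, Sunday.
for d in Friday Saturday Sunday; do
    day_number "$d"        # prints 6, 7, 1 on separate lines
done
```

The printed values can then be joined into the digit string the command expects, in the style of the weekdays 35 example that follows.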
Example
In this example, any detailed global data logged on Tuesdays and Thursdays is excluded from
the export. The output export file contains the global metrics specified in the myrept export
template file.
extract>
logfile /var/opt/perf/datafiles/logglob
global detail
report myrept
weekdays 35
export
weekly
Syntax
weekly [yyww]
[ww]
Parameters
How to Use It
Use the weekly command when you are extracting data for archiving on a weekly basis.
The name of the output file consists of the letters rxwe followed by the four digits of the
year and the two-digit week number for the week being extracted. For example, the 12th
week of 1999 (from Monday, March 22 to Sunday, March 28) would be output to a file named
rxwe199912.
The type of data extracted and summarized follow the normal rules for the extract command
and can be set before executing the weekly command. These settings are honored unless a
weekly output file already exists. If it does, data is appended to it, based on the type of data
selected originally.
The weekly command has a feature that opens the previous week's extracted file and checks
to see if it is filled--whether it contains data extracted up to the last day of the week. If not,
the weekly command appends data to this file to complete the previous week's extraction.
For example, a weekly command is executed on Thursday, May 20, 1999. This creates a log
file named rxwe199920 containing data from Monday, May 17 through the current date
(May 20).
On Wednesday, May 26, 1999, another weekly command is executed. Before the rxwe199921
file is created for the current week, the rxwe199920 file from the previous week is opened
and checked. When it is found to be incomplete, data is appended to it to complete the
extraction through Sunday, May 23, 1999. Then, the rxwe199921 file is created to hold data
from Monday, May 24, 1999 to the current date (May 26).
As long as you execute the weekly command at least once a week, this feature will complete
each week's file before creating the next week's file. When you see two adjacent weekly files
(for example, rxwe199920 and rxwe199921), you can assume that the first file is complete
for that week, and it can be archived and purged.
The weeks are numbered based on their starting day. Thus, the first week of the year (week
01) is the week starting on the first Monday of that year. Any days before that Monday belong
to the last week of the previous year. The weekly and extract week commands are similar in
that they both extract one calendar week's data. The weekly command ignores the setting of
the output command, using instead predefined output file names. It also attempts to append
missing data to the previous week's extracted log file if it is still present on the system. The
extract week command, on the other hand, uses the settings of the output command. It
cannot append data to the previous week's extracted file because it does not know its name.
The output file is named rxwe followed by the current year (yyyy) and week of the year (ww).
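The rxwe name can be approximated with GNU date(1). One assumption to note: %V produces ISO-8601 week numbers (week 1 contains the year's first Thursday), while extract numbers weeks from the year's first Monday, so the two schemes can differ for dates near the start of a year; for mid-year dates such as those in the examples above, they agree.

```shell
#!/bin/sh
# Sketch: approximate the rxweYYYYWW name for a given date.
# %G is the ISO week-based year, %V the ISO week number (GNU date).
weekly_name() {
    echo "rxwe$(date -d "$1" +%G%V)"
}

weekly_name 1999-05-20    # prints rxwe199920
weekly_name 1999-05-26    # prints rxwe199921
```

Two adjacent names computed this way (rxwe199920 and rxwe199921) match the pair discussed in the example above.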
Example
In this example, the weekly command extracts the current week's data and completes the
previous week's extracted file, if it is present.
extract>
logfile /var/opt/perf/datafiles/logglob
global detail
application detail
process detail
weekly
To perform the above task using command line arguments, enter:
extract -gap -xw
yearly
Syntax
yearly [yyyy]
[yy]
Parameters
How to Use It
Use the yearly command when you are extracting data for archiving on a yearly basis.
The name of the output file consists of the letters rxyr followed by the four digits of the year
being extracted. Thus, data from 1999 would be output to a file named rxyr1999.
The type of data extracted and summarized follow the normal rules for the extract
command and can be set before executing the yearly command. These settings are honored
unless a yearly output file already exists. If it does, data is appended to it, based upon the type
of data selected originally.
The yearly command has a feature that opens the previous year's extracted file and checks to
see if it is filled--whether it contains data extracted up to the last day of the year. If not, the
yearly command appends data to this file to complete the previous year's extraction.
For example, a yearly command was executed on December 15, 1998. This created a log file
named rxyr1998 containing data from January 1, 1998 to the current date (December 15).
On January 5, 1999, another yearly command is executed. Before the rxyr1999 file is
created for the current year, the rxyr1998 file from the previous year is opened and checked.
When it is found to be incomplete, data is appended to it to complete its extraction until
December 31, 1998. Then, the rxyr1999 file is created to hold data from January 1, 1999 to
the current date (January 5).
As long as you execute the yearly command at least once a year, this feature will complete
each year's file before creating the next year's file. When you see two adjacent yearly files (for
example, rxyr1998 and rxyr1999), you can assume that the first file is complete for that
year, and it can be archived and purged.
The previous paragraph is true only if the raw log files are sized large enough to hold one full
year of data. It would be more common to size the raw log files smaller and execute the yearly
command more often (such as once a month).
The yearly and extract year commands are similar in that they both extract one calendar
year's data. The yearly command ignores the setting of the output command, using instead
predefined output file names. It also attempts to append missing data to the previous year's
extracted log file if it is still present on the system. The extract year command, on the other
hand, will use the settings of the output command. It cannot append data to the previous
year's extracted file since it does not know its name.
Example
In this example, application and global detail data is appended to the existing yearly
summary file (or the file is created, if necessary). The output file is rxyryyyy (where yyyy
represents the current year).
extract>
logfile /var/opt/perf/datafiles/logglob
global detail
application detail
process off
yearly
To perform the above task using command line arguments, enter:
extract -ga -xy
Introduction
This chapter describes what an alarm is, the syntax used to define an alarm, how an alarm
works, and how alarms can be used to monitor performance.
You can use OV Performance Agent to define alarms. These alarms notify you when scopeux
or DSI metrics meet or exceed conditions that you have defined.
To define alarms, you specify conditions on each OV Performance Agent system that when
met, trigger an alert or action. You define alarms in the OV Performance Agent alarm
definitions text file, alarmdef.
As data is logged by scopeux or DSI, it is compared to the alarm definitions to determine if a
condition is met. When this occurs an alert or action is triggered.
With the real time alarm generator you can configure where you want alert notifications sent
and whether you want local actions performed. SNMP traps can be sent to HP OpenView
Network Node Manager. Alert notifications can also be sent to OpenView Operations (OVO).
Local actions can be performed on your OV Performance Agent system.
You can analyze historical log file data against the alarm definitions and report the results
using the utility program's analyze command.
Processing Alarms
As performance data is collected by OV Performance Agent, it is compared to the alarm
conditions defined in the alarmdef file to determine whether the conditions have been met.
When a condition is met, an alarm is generated and the actions defined for alarms (ALERTs,
PRINTs, and/or EXECs) are performed. You can set up how you want the alarm information
communicated once an alarm is triggered. For example, you can:
• send SNMP traps to Network Node Manager
• send messages to OVO
• execute a UNIX command on the local system to send yourself a message
Alarm Generator
The OV Performance Agent alarm generator handles the communication of alarm
notifications. The alarm generator consists of the alarm generator server (perfalarm), the
alarm generator database (agdb), and the utility program agsysdb.
The agdb contains a list of SNMP nodes. The agsysdb program is used for displaying and
changing the actions taken by alarm events.
When you start up OV Performance Agent, perfalarm starts and reads the agdb at startup to
determine where and whether to send alarm notifications. It also checks to see if an OVO
agent is on the system.
Use the following command line option to see a list showing where alert notifications are
being sent:
agsysdb -l
Every ALERT generated will cause an SNMP trap to be sent to the system you defined. The
trap text will contain the same message as the ALERT.
To stop sending SNMP traps to a system, you must delete the system name from agdb using
the command:
agsysdb -delete systemname
Table 9 Settings for sending information to OVO and executing local actions
OVO Flag
05/10/99 11:25 ALARM [1] END
RESET: CPU test 22.86%
EXEC: end.script
If you are using a color workstation, the following output is highlighted:
CRITICAL statements are RED
Alarm Syntax Reference
This section describes the statements that are available in the alarm syntax. You may want to
look at the alarmdef file for examples of how the syntax is used to create useful alarm
definitions.
Alarm Syntax
ALARM condition [[AND,OR]condition]
FOR duration [SECONDS, MINUTES]
[TYPE="string"]
[SERVICE="string"]
[SEVERITY=integer]
[START action]
[REPEAT EVERY duration [SECONDS, MINUTES] action]
[END action]
[RED, CRITICAL, ORANGE, MAJOR, YELLOW, MINOR, CYAN, WARNING,
GREEN, NORMAL, RESET] ALERT message
EXEC "UNIX command"
PRINT message
IF condition
THEN action
[ELSE action]
{APPLICATION, PROCESS, DISK, LVOLUME, TRANSACTION, NETIF, CPU,
FILESYSTEM} LOOP action
INCLUDE "filename"
USE "data source name"
[VAR] name = value
ALIAS name = "replaced-name"
SYMPTOM variable [ TYPE = {CPU, DISK, MEMORY, NETWORK}]
RULE condition PROB probability
[RULE condition PROB probability]
.
.
Conventions
• Braces ({ }) indicate that one of the choices is required.
• Brackets ([ ]) indicate an optional item.
• Items separated by commas within brackets or braces are options. Choose only one.
• Italics indicate a variable name that you replace.
• All syntax keywords are in uppercase.
Common Elements
The following elements are used in several statements in the alarm syntax and are described
below.
• comments
• compound statements
Comments
You can precede comments either by double forward slashes (//) or the pound sign (#). In both
cases, the comment ends at the end of the line. For example:
# any text or characters
or
// any text or characters
Compound Statements
Compound statements allow a list of statements to be executed as a single statement. A
compound statement is a list of statements inside braces ({}). Use the compound statement
with the IF statement, the LOOP statement, and the START, REPEAT, and END clauses of
the ALARM statement. Compound statements cannot include ALARM and SYMPTOM
statements.
{
statement
statement
}
In the example below, highest_cpu = 0 defines a variable called highest_cpu. The
highest_cpu value is saved, and an alert is sent only when the current
gbl_cpu_total_util value exceeds the saved highest_cpu value.
highest_cpu = 0
IF gbl_cpu_total_util > highest_cpu THEN
// Begin compound statement
{
highest_cpu = gbl_cpu_total_util
ALERT "Our new high CPU value is ", highest_cpu, "%"
}
// End compound statement
Conditions
A condition is defined as a comparison between two items.
item1 {>, <, >=, <=, ==, !=}item2
[AND, OR[item3 {>, <, >=, <=, ==, !=}item4]]
where "==" means "equal", and "!=" means "not equal".
Conditions are used in the ALARM, IF, and SYMPTOM statements. An item can be a metric
name, a numeric constant, an alphanumeric string enclosed in quotes, an alias, or a variable.
When comparing alphanumeric items, only == or != can be used as operators.
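As a sketch of how conditions combine inside an ALARM statement (the metric names and thresholds here are illustrative assumptions, not values recommended by this manual):

```
ALARM gbl_cpu_total_util > 90 AND gbl_run_queue > 3
FOR 5 MINUTES
START RED ALERT "CPU bottleneck probable"
END RESET ALERT "End of CPU bottleneck"
```

Both comparisons must hold for the full five minutes before the START action fires.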
Constants
Constants can be either numeric or alphanumeric. An alphanumeric constant must be
enclosed in double quotes. For example:
345
345.2
"Time is"
Constants are useful in expressions and conditions. For example, you may want to compare a
metric against a constant numeric value inside a condition to generate an alarm if it is too
high, such as
gbl_cpu_total_util > 95
Expressions
Arithmetic expressions perform one or more arithmetic operations on two or more operands.
You can use an expression anywhere you would use a numeric value. Legal arithmetic
operators are:
+, -, *, /
Parentheses can be used to control which parts of an expression are evaluated first.
For example:
Iteration + 1
gbl_cpu_total_util - gbl_cpu_user_mode_util
( 100 - gbl_cpu_total_util ) / 100.0
Metric Names
When you specify a metric name in an alarm definition, the current value of the metric is
substituted. Metric names must be typed exactly as they appear in the metric definition,
except for case sensitivity. Metric definitions can be found in the HP OpenView Performance
Agent Dictionary of Operating Systems Performance Metrics. If you are using OV Performance
Manager, choose On Metrics from the OV Performance Manager help menu to display a list of
metrics by platform.
It is recommended that you use fully-qualified metric names if the metrics are from a data
source other than the SCOPE data source (such as DSI metrics).
The format for specifying a fully qualified metric is:
data_source:instance(class):metric_name
A global metric in the SCOPE data source requires no qualification. For example:
metric_1
An application metric, which is available for each application defined in the SCOPE data
source, requires the application name. For example,
application_1:metric_1
For multi-instance data types such as application, process, disk, netif, transaction,
lvolume, cpu and filesystem, you must associate the metric with the data type name, except
when using the LOOP statement. To do this, specify the data type name followed by a colon,
and then the metric name. For example, other_apps:app_cpu_total_util specifies the total
CPU utilization for the application other_apps.
If you use an application name that has an embedded space, you must replace the space with
an underscore (_). For example, application 1 must be changed to application_1. For more
information on using names that contain special characters, or names where case is
significant, see ALIAS Statement on page 181.
If you had a disk named “other” and an application named “other”, you would need to specify
the class as well as the instance:
other (disk):metric_1
A global metric in an extracted log file (where scope_extract is the data source name) would
be specified this way:
scope_extract:application_1:metric_1
A DSI metric would be specified this way:
dsi_data_source:dsi_class:metric_name
Any metric names containing special characters (such as asterisks) must be aliased before
they are specified.
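The naming forms above can be illustrated with a small Python helper. This is purely illustrative and is not perfalarm's actual parser; it simply splits the colon-separated segments of a (possibly fully qualified) metric name and pulls out a trailing (class) qualifier.

```python
import re

def split_metric(name):
    """Split a metric name such as 'data_source:instance(class):metric_name'
    into (segment, class) pairs. A segment like 'other (disk)' yields
    ('other', 'disk'); plain segments yield (segment, None).
    Illustrative sketch only, not OV Performance Agent code."""
    segments = []
    for seg in name.split(':'):
        m = re.fullmatch(r'\s*(.+?)\s*\((\w+)\)\s*', seg)
        if m:
            segments.append((m.group(1), m.group(2)))
        else:
            segments.append((seg.strip(), None))
    return segments

split_metric('metric_1')               # [('metric_1', None)]
split_metric('other (disk):metric_1')  # [('other', 'disk'), ('metric_1', None)]
split_metric('scope_extract:application_1:metric_1')
```

A one-segment result is an unqualified SCOPE metric; two segments add an instance (or data source); three segments give the fully qualified data_source:instance:metric form.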
Messages
A message is the information sent by a PRINT or ALERT statement. It can consist of any
combination of quoted alphanumeric strings, numeric constants, expressions, and variables.
The elements in the message are separated by commas. For example:
RED ALERT "cpu utilization=", gbl_cpu_total_util
Numeric constants, metrics, and expressions can be formatted for width and number of
decimals. Width specifies how wide the field should be formatted; decimals specifies how
many decimal places to use. Numeric values are right-justified. The - (minus sign) specifies
left-justification. Alphanumeric strings are always left-justified. For example:
metric names [|[-]width[|decimals]]
gbl_cpu_total_util|6|2 formats as '100.00'
(100.32 + 20)|6 formats as '   120'
gbl_cpu_total_util|-6|0 formats as '100   '
gbl_cpu_total_util|10|2 formats as '     99.13'
gbl_cpu_total_util|10|4 formats as '   99.1300'
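The width and decimals rules can be sketched in a few lines of Python. This is purely illustrative; OV Performance Agent does not expose such a function, and the sketch only approximates the behavior described above.

```python
def format_field(value, width=None, minus=False, decimals=None):
    """Approximate the alarmdef [|[-]width[|decimals]] rule: numeric
    values get `decimals` decimal places and are right-justified in
    `width` columns; minus=True requests left-justification.
    (Sketch only, not OV Performance Agent code.)"""
    if width is None:
        return str(value)
    text = f"{value:.{decimals or 0}f}"
    return text.ljust(width) if minus else text.rjust(width)

format_field(100.0, 6, decimals=2)   # '100.00'
format_field(100.32 + 20, 6)         # '   120'
format_field(100.0, 6, minus=True)   # '100   '
```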
ALARM Statement
The ALARM statement defines a condition or set of conditions and a duration for the
conditions to be true. Within the ALARM statement, you can define actions to be performed
when the alarm condition starts, repeats, and ends. Conditions or events that you might want
to define as alarms include:
• when global swap space has been nearly full for 5 minutes
• when the memory paging rate has been too high for 1 interval
• when your CPU has been running at 75 percent utilization for the last ten minutes
Syntax
ALARM condition [[AND,OR]condition]
FOR duration {SECONDS, MINUTES}
[TYPE="string"]
[SERVICE="string"]
[SEVERITY=integer]
[START action]
[REPEAT EVERY duration {SECONDS, MINUTES} action]
[END action]
• The ALARM statement must be a top-level statement. It cannot be nested within any other
statement. However, you can include several ALARM conditions in a single ALARM
statement. If the conditions are linked by AND, all conditions must be true to trigger the
alarm. If they are linked by OR, any one condition will trigger the alarm.
• TYPE is a quoted string of up to 38 characters. If you are sending alarms to OV
Performance Manager, you can use TYPE to categorize alarms and to specify the name of a
graph template to use. OV Performance Manager accepts only up to eight characters, so only
the first eight characters of TYPE are displayed.
• SERVICE is a quoted string of up to 200 characters. If you are using ServiceNavigator,
which is being released with ITO 5.0, you can link your OV Performance Agent alarms
with the services you defined in ServiceNavigator (see the HP OpenView ServiceNavigator
Concepts and Configuration Guide).
SERVICE="Service_id"
• SEVERITY is an integer from 0 to 32767. If you are sending alarms to OV Performance
Manager, you can use this to categorize alarms.
• START, REPEAT, and END are keywords used to specify what action to take when alarm
conditions are met, met again, or stop. You should always have at least one of START,
REPEAT, or END in an ALARM statement. Each of these keywords is followed by an action.
• action – The action most often used with an ALARM START, REPEAT, or END is the
ALERT statement. However, you can also use the EXEC statement to mail a message or
run a batch file, or a PRINT statement if you are analyzing historical log files with the
utility program. Any syntax statement is legal except another ALARM.
START, REPEAT, and END actions can be compound statements. For example, you can
use compound statements to provide both an ALERT and an EXEC.
• Conditions – A condition is defined as a comparison between two items.
item1 {>, <, >=, <=, ==, !=} item2
[AND, OR [item3 {>, <, >=, <=, ==, !=} item4]]
where "==" means "equal" and "!=" means "not equal".
An item can be a metric name, a numeric constant, an alphanumeric string enclosed in
quotes, an alias, or a variable. When comparing alphanumeric items, only == or != can be
used as operators.
You can build compound conditions by joining subconditions with the AND and OR
operators.
How It Is Used
The alarm cycle begins on the first interval that all of the ANDed, or one of the ORed alarm
conditions have been true for at least the specified duration. At that time, the alarm generator
executes the START action, and on each subsequent interval checks the REPEAT condition. If
enough time has transpired, the action for the REPEAT clause is executed. This continues
until one or more of the alarm conditions becomes false, which completes the alarm cycle;
the END action, if one is defined, is then executed.
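The alarm cycle described above can be modeled with a short Python sketch. This is a simplified simulation, not the alarm generator's actual logic: each interval is abstracted to a True/False condition result, and the function traces which START/REPEAT/END actions would fire.

```python
def alarm_cycle(samples, duration, repeat_every):
    """Trace the alarm cycle over per-interval condition results.
    START fires once the condition has held for `duration` intervals,
    REPEAT fires every `repeat_every` intervals while it still holds,
    and END fires when the condition goes false.
    (Simplified model, not perfalarm code.)"""
    events = []
    run = 0            # consecutive true intervals before the alarm starts
    active = False
    since = 0          # intervals since the last START/REPEAT action
    for t, cond in enumerate(samples):
        if not active:
            run = run + 1 if cond else 0
            if run >= duration:
                events.append((t, 'START'))
                active, since = True, 0
        elif cond:
            since += 1
            if since >= repeat_every:
                events.append((t, 'REPEAT'))
                since = 0
        else:
            events.append((t, 'END'))
            active, run = False, 0
    return events

alarm_cycle([True, True, True, True, False], duration=2, repeat_every=2)
# [(1, 'START'), (3, 'REPEAT'), (4, 'END')]
```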
In order for OV Performance Manager to be notified of the alarm, you should use the ALERT
statement within the START and END statements. If you do not specify an END ALERT, the
alarm generator will automatically send one to OV Performance Manager and OVO and send
an SNMP trap to Network Node Manager.
Examples
The following ALARM example sends a red alert when the swap utilization is high for 5
minutes. It is similar to an alarm condition in the default alarmdef file. Do not add this
example to your alarmdef file without removing the default alarm condition, or your
subsequent alert messages may be confusing.
ALARM gbl_swap_space_util > 90 FOR 5 MINUTES
START
RED ALERT "swap utilization is very high "
REPEAT EVERY 15 MINUTES
RED ALERT "swap utilization is still very high "
END
RESET ALERT "End of swap utilization condition"
This ALARM example tests the metric gbl_swap_space_util to see if it is greater than 90.
Depending on how you configured the alarm generator, the ALERT can be sent to the Alarms
window in OV Performance Manager, to Network Node Manager via an SNMP trap, or as a
message to OVO. If you have OV Performance Manager configured correctly, the RED ALERT
statement places the “swap utilization is very high” message in the OV Performance
Manager Alarms window.
The REPEAT statement checks for the gbl_swap_space_util condition every 15 minutes. As
long as the metric remains greater than 90, the REPEAT statement will send the message
“swap utilization is still very high” every 15 minutes.
When the gbl_swap_space_util condition goes below 90, the RESET ALERT statement
with the “End of swap utilization condition” message is sent.
The following example defines a compound action within the ALARM statement. This
example shows you how to cause a message to be mailed when an event occurs.
ALARM gbl_cpu_total_util > 90 FOR 5 MINUTES
START
{
RED ALERT "Your CPU is busy."
EXEC "echo 'cpu is too high'| mailx root"
}
END
RESET ALERT "CPU no longer busy."
The ALERT can trigger an SNMP trap to be sent to Network Node Manager or a message to
be sent to OVO. The EXEC can trigger a mail message to be sent as a local action on your OV
Performance Agent system, depending on how you configured your alarm generator. If you set
up OV Performance Manager to receive alarms from this system, the RED ALERT statement
places the “Your CPU is busy” message in the OV Performance Manager Alarms window and
causes a message to be sent.
By default, if the OVO agent is running, the local action will not execute. Instead, it will be
sent as a message to OVO.
The following two examples show the use of multiple conditions. You can have more than one
test condition in the ALARM statement. In this case, each statement must be true for the
ALERT to be sent.
The following ALARM example tests the metric gbl_cpu_total_util and the metric
gbl_cpu_sys_mode_util. If both conditions are true, the RED ALERT statement sends a red
alert. When either test condition becomes false, the RESET is sent.
ALARM gbl_cpu_total_util > 90
AND gbl_cpu_sys_mode_util > 50 FOR 5 MINUTES
START
RED ALERT "CPU busy and Sys Mode CPU util is high."
END
RESET ALERT "The CPU alert is now over."
The next ALARM example tests the metric gbl_cpu_total_util and the metric
gbl_cpu_sys_mode_util. If either condition is true, the RED ALERT statement sends a red
alert.
ALARM gbl_cpu_total_util > 90
OR
gbl_cpu_sys_mode_util > 50 FOR 10 MINUTES
START
RED ALERT "Either total CPU util or sys mode CPU high"
ALERT Statement
The ALERT statement allows a message to be sent to OV Performance Manager, Network
Node Manager, or OVO. It also allows the creation and deletion of the alarm symbols in the
Network Node Manager map associated with OV Performance Manager and controls the color
of the alarm symbols, depending on the severity of the alarm. (For more information, see OV
Performance Manager online Help.)
The ALERT statement is most often used as an action within an ALARM. It could also be
used within an IF statement to send a message as soon as a condition is detected instead of
after the duration has passed. If an ALERT is used outside of an ALARM or IF statement, the
message will be sent at every interval.
Syntax
[RED, CRITICAL, ORANGE, MAJOR, YELLOW, MINOR, CYAN,
WARNING, GREEN, NORMAL, RESET] ALERT message
• RED is synonymous with CRITICAL, ORANGE is synonymous with MAJOR, YELLOW is
synonymous with MINOR, CYAN is synonymous with WARNING, and GREEN is synonymous
with NORMAL. These keywords turn the alarm symbol to the color associated with the
alarm condition in the Network Node Manager map associated with OV Performance
Manager. They also send the message with other information to the OV Performance
Manager Alarms window. CYAN is the default. However, if you are using version C.00.08
or earlier of OV Performance Manager, YELLOW is the default.
• RESET records the message in the OV Performance Manager Alarms window and deletes
the alarm symbol in the Network Node Manager map associated with OV Performance
Manager. A RESET ALERT without a message is sent automatically when an ALARM
condition ENDs if you do not define one in the alarm definition.
• message — A combination of strings and numeric values used to create a message.
Numeric values can be formatted with the parameters [|[-]width[|decimals]]. Width
specifies how wide the field should be formatted; decimals specifies how many decimal
places to use. Numeric values are right-justified. The - (minus sign) specifies
left-justification. Alphanumeric strings are always left-justified.
How It Is Used
The ALERT can also trigger an SNMP trap to be sent to Network Node Manager or a message
to be sent to OVO, depending on how you configured your alarm generator. If you configured
OV Performance Manager to receive alarms from this system, the ALERT sends a message to
the OV Performance Manager Alarms window.
If an ALERT statement is used outside of an ALARM statement, the alert message will show
up in the OV Performance Manager Alarms window but no symbol will be created in the
Network Node Manager map.
For alert messages sent to OVO, WARNING alerts are displayed in blue in the message
browser.
Example
A typical ALERT statement is:
RED ALERT "CPU utilization = ", gbl_cpu_total_util
EXEC Statement
The EXEC statement allows you to specify a UNIX command to be performed on the local
system. For example, you could use the EXEC statement to send mail to an IT administrator
each time a certain condition is met.
EXEC should be used within an ALARM or IF statement so the UNIX command is performed
only when specified conditions are met. If an EXEC statement is used outside of an ALARM
or IF statement, the action will be performed at unpredictable intervals.
Syntax
EXEC "UNIX command"
• UNIX command — a command to be performed on the local system.
Do not use embedded double quotes (") in EXEC statements. Doing so causes perfalarm to
fail to send the alarm to OVO. Use embedded single (') quotes instead. For example:
EXEC "echo 'performance problem detected' "
How It Is Used
The EXEC can trigger a local action on your local system, depending on how you configured
your alarm generator. For example, you can turn local actions on or off. If you configured your
alarm generator to send information to OVO, local actions will not usually be performed.
Examples
In the following example, the EXEC statement performs the UNIX mailx command when the
gbl_disk_util_peak metric exceeds 20.
IF gbl_disk_util_peak > 20 THEN
EXEC "echo 'high disk utilization detected'| mailx root"
The next example shows the EXEC statement sending mail to the system administrator when
the network packet rate exceeds 1000 per second average for 15 minutes.
ALARM gbl_net_packet_rate > 1000 for 15 minutes
TYPE = "net busy"
SEVERITY = 5
START
{
RED ALERT "network is busy"
EXEC "echo 'network busy condition detected'| mailx root"
}
END
RESET ALERT "NETWORK OK"
Be careful when using the EXEC statement with commands or scripts that have high
overhead if it will be performed often.
The alarm generator executes the command and waits until it completes before continuing.
We recommend that you not specify commands that take a long time to complete.
PRINT Statement
The PRINT statement allows you to print a message when you are analyzing historical log
files with the utility program.
Syntax
PRINT message
• message — A combination of strings and numeric values that create a message. Numeric
values can be formatted with the parameters [|[-]width[|decimals]]. Width specifies
how wide the field should be formatted; decimals specifies how many decimal places to
use. Alphanumeric components of a message must be enclosed in quotes. Numeric values
are right-justified. The - (minus sign) specifies left-justification. Alphanumeric strings
are always left-justified.
Example
PRINT "The total time the CPU was not idle is",
gbl_cpu_total_time |6|2, "seconds"
When executed, this statement prints a message such as the following:
The total time the CPU was not idle is 95.00 seconds
IF Statement
Use the IF statement to define a condition using IF-THEN logic. The IF statement should be
used within the ALARM statement. However, it can be used by itself or any place in the
alarmdef file where IF-THEN logic is needed.
If you specify an IF statement outside of an ALARM statement, you do not have control over
how frequently it gets executed.
Syntax
IF condition THEN action [ELSE action]
• IF condition — A condition is defined as a comparison between two items.
item1 {>, <, >=, <=, ==, !=} item2
[AND, OR [item3 {>, <, >=, <=, ==, !=} item4]]
where "==" means "equal" and "!=" means "not equal".
An item can be a metric name, a numeric constant, an alphanumeric string enclosed in
quotes, an alias, or a variable. When comparing alphanumeric strings, only == or != can be
used as operators.
• action — Any action, or a variable assignment. (ALARM is not valid in this case.)
How It Is Used
The IF statement tests the condition. If the condition is true, the action after the THEN is
executed. If the condition is false, the action depends on the optional ELSE clause. If an ELSE
clause has been specified, the action following it is executed; otherwise the IF statement does
nothing.
Example
In this example, a CPU bottleneck symptom is calculated and the resulting bottleneck
probability is used to define cyan or red ALERTs. If you have OV Performance Manager
configured correctly, the message “End of CPU Bottleneck Alert” is displayed in the OV
Performance Manager Alarms window along with the percentage of CPU used.
The ALERT can also trigger an SNMP trap to be sent to Network Node Manager or a message
to be sent to OVO, depending on how you configured your alarm generator.
SYMPTOM CPU_Bottleneck TYPE=CPU
RULE gbl_cpu_total_util > 75 prob 25
RULE gbl_cpu_total_util > 85 prob 25
RULE gbl_cpu_total_util > 90 prob 25
RULE gbl_cpu_total_util > 4 prob 25
ALARM CPU_Bottleneck > 50 for 5 minutes
TYPE="CPU"
START
IF CPU_Bottleneck > 90 then
RED ALERT "CPU Bottleneck probability= ",
CPU_Bottleneck, "%"
ELSE
CYAN ALERT "CPU Bottleneck probability= ",
CPU_Bottleneck, "%"
REPEAT every 10 minutes
IF CPU_Bottleneck > 90 then
RED ALERT "CPU Bottleneck probability= ",
CPU_Bottleneck, "%"
ELSE
CYAN ALERT "CPU Bottleneck probability= ",
CPU_Bottleneck, "%"
END
RESET ALERT "End of CPU Bottleneck Alert"
Do not use metrics that are logged at different intervals in the same
statement. For instance, you should not loop on a process (logged at 1-minute
intervals) based on the value of a global metric (logged at 5-minute intervals)
in a statement like this:
IF global_metric THEN
PROCESS LOOP ...
The different intervals cannot be synchronized as you might expect, so results
will not be valid.
LOOP Statement
The LOOP statement goes through multiple-instance data types and performs the action
defined for each instance.
How It Is Used
As LOOP statements iterate through each instance of the data type, metric values change. For
instance, the following LOOP statement prints the name of each application to stdout if you
are using the utility program's analyze command.
APPLICATION LOOP
PRINT app_name
A LOOP can be nested within another LOOP statement up to a maximum of five levels.
In order for the LOOP to execute, the LOOP statement must refer to one or more metrics of the
same data type as the type defined in the LOOP statement.
Example
You can use the LOOP statement to cycle through all active applications.
The following example shows how to determine which application has the highest CPU at
each interval.
highest_cpu = 0
APPLICATION loop
IF app_cpu_total_util > highest_cpu THEN
{
highest_cpu = app_cpu_total_util
big_app = app_name
}
ALERT "Application ", big_app, " has the highest cpu util at
",highest_cpu|5|2, "%"
ALARM highest_cpu > 50
START
RED ALERT big_app, " is the highest CPU user at ", highest_cpu, "%"
REPEAT EVERY 15 minutes
CYAN ALERT big_app, " is the highest CPU user at ", highest_cpu, "%"
END
RESET ALERT "No applications using excessive cpu"
INCLUDE Statement
Use the INCLUDE statement to include another alarm definitions file along with the default
alarmdef file.
Syntax
INCLUDE "filename"
where filename is the name of another alarm definitions file. The file name must always be
fully qualified.
How It Is Used
The INCLUDE statement could be used to separate logically distinct sets of alarm definitions
into separate files.
Example
For example, if you have some alarm definitions in a separate file for your transaction metrics
and it is named trans_alarmdef1, you can include it by adding the following line to the
alarm definitions in your alarmdef1 file:
INCLUDE "/var/opt/perf/trans_alarmdef1"
USE Statement
You can add the USE statement to simplify the use of metric names in the alarmdef file when
data sources other than the default SCOPE data source are referenced. This allows you to
specify a metric name without having to include the data source name.
The data source name must be defined in the datasources file. The alarmdef file will fail
its syntax check if an invalid or unavailable data source name is encountered.
The appearance of a USE statement in the alarmdef file does not imply that all metric
names that follow will be from the specified data source.
Syntax
USE "datasourcename"
How It Is Used
As the alarm generator checks the alarmdef file for valid syntax, it builds an ordered search
list of all data sources that are referenced in the file. Perfalarm sequentially adds entries to
this data source search list as it encounters fully-qualified metric names or USE statements.
This list is subsequently used to match metric names that are not fully qualified with the
appropriate data source name. The USE statement provides a convenient way to add data
sources to perfalarm's search list, which then allows for shortened metric names in the
alarmdef file. For a discussion of metric name syntax, see Metric Names on page 167 earlier
in this chapter.
Perfalarm's default behavior for matching metric names to a data source is to look first in the
SCOPE data source for the metric name. This implied USE "SCOPE" statement is executed
when perfalarm encounters the first metric name in the alarmdef file. This feature enables
SCOPE metrics to be referenced in the alarmdef file without qualification. When multiple
data sources contain the same metric name, you should qualify the metric names with their
data source names to guarantee that the metric value is retrieved from the correct data
source. This is shown in the following example where the
metric names in the alarm statements each include their data sources.
ALARM ORACLE7:ActiveTransactions >= 95 FOR 5 MINUTES
START RED ALERT "Nearing limit of transactions for ORACLE7"
ALARM FINANCE:ActiveTransactions >= 95 FOR 5 MINUTES
START RED ALERT "Nearing limit of transactions for FINANCE"
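The search-list matching described above can be sketched in Python. This is an illustration under stated assumptions, not perfalarm's implementation, and the metric catalogs below are hypothetical.

```python
def resolve(metric, search_list, catalog):
    """Return the data source supplying `metric`: a fully qualified
    name names its own data source; otherwise the ordered search list
    (SCOPE first, then data sources added by USE statements or
    qualified names) is scanned for the first match.
    (Illustrative sketch, not perfalarm code.)"""
    if ':' in metric:
        return metric.split(':', 1)[0]
    for ds in search_list:
        if metric in catalog.get(ds, ()):
            return ds
    return None

# Hypothetical catalogs for illustration only.
catalog = {'SCOPE': {'gbl_cpu_total_util'},
           'ORACLE7': {'ActiveTransactions'},
           'FINANCE': {'ActiveTransactions'}}
search_list = ['SCOPE', 'ORACLE7', 'FINANCE']   # order of USE statements

resolve('gbl_cpu_total_util', search_list, catalog)          # 'SCOPE'
resolve('ActiveTransactions', search_list, catalog)          # 'ORACLE7' (first match wins)
resolve('FINANCE:ActiveTransactions', search_list, catalog)  # 'FINANCE'
```

The last two calls show why qualification matters: an unqualified ActiveTransactions resolves to whichever matching data source appears first in the search list, while the qualified name pins the lookup to FINANCE.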
VAR Statement
The VAR statement allows you to define a variable and assign a value to it.
Syntax
[VAR] name = value
• name — Variable names must begin with a letter and can include letters, digits, and the
underscore character. Variable names are not case-sensitive.
• value — If the value is an alphanumeric string, it must be enclosed in quotes.
How It Is Used
VAR assigns a value to the user variable. If the variable did not previously exist, it is created.
Once defined, variables can be used anywhere in the alarmdef file.
Examples
You can define a variable by assigning something to it. The following example defines the
numeric variable highest_CPU_value by assigning it a value of zero.
highest_CPU_value = 0
The next example defines the alphanumeric variable my_name by assigning it an empty
string value.
my_name = ""
ALIAS Statement
The ALIAS statement allows you to substitute an alias if any part of a metric name (class,
instance, or metric) has a case-sensitive name or a name that includes special characters.
These are the only circumstances where the ALIAS statement should be used.
Syntax
ALIAS name = "replaced-name"
• name — The name must begin with a letter and can include letters, digits, and the
underscore character.
• replaced-name — The name that must be replaced by the ALIAS statement to make it
uniquely recognizable to the alarm generator.
Examples
Because special characters and case-sensitive names cannot be used directly in the syntax,
using the application names "AppA" and "appa" could cause errors; the processing would be
unable to distinguish between the two. You would alias "AppA" to give it a uniquely
recognizable name. For example:
ALIAS appa_uc = "AppA"
ALERT "CPU alert for AppA.util is",appa_uc:app_cpu_total_util
If you are using an alias for an instance with a class identifier, include both the instance name
and the class name in the alias. The following example shows the alias for the instance name
'other' and the class name 'APPLICATION.'
ALIAS my_app="other(APPLICATION)"
ALERT my_app:app_cpu_total_util > 50 for 5 minutes
SYMPTOM Statement
A SYMPTOM provides a way to set a single variable value based on a set of conditions. Whenever
any of the conditions is true, its probability value is added to the value of the SYMPTOM
variable.
Syntax
SYMPTOM variable
RULE condition PROB probability
[RULE condition PROB probability]
.
.
.
• The keywords SYMPTOM and RULE are used exclusively in the SYMPTOM statement and
cannot be used in other syntax statements. The SYMPTOM statement must be a top-level
statement and cannot be nested within any other statement. No other statements can
follow SYMPTOM until all its corresponding RULE statements are finished.
• variable is a variable name that will be the name of this symptom. Variable names defined
in the SYMPTOM statement can be used in other syntax statements, but the variable value
should not be changed in those statements.
• RULE is an option of the SYMPTOM statement and cannot be used independently. You can
use as many RULE options as needed within the SYMPTOM statement. The SYMPTOM variable
is evaluated according to the rules at each interval.
• condition is defined as a comparison between two items.
item1 {>, <, >=, <=, ==, !=} item2
[item3 {>, <, >=, <=, ==, !=} item4]
where "==" means "equal" and "!=" means "not equal".
An item can be a metric name, a numeric constant, an alphanumeric string enclosed in
quotes, an alias, or a variable. When comparing alphanumeric items, only == or != can be
used as operators.
• probability is a numeric constant. The probabilities for each true SYMPTOM RULE are
added together to create a SYMPTOM value.
How It Is Used
The sum of all probabilities where the condition between measurement and value is true is
the probability that the symptom is occurring.
Example
SYMPTOM CPU_Bottleneck
RULE gbl_cpu_total_util > 75 PROB 25
RULE gbl_cpu_total_util > 85 PROB 25
RULE gbl_cpu_total_util > 90 PROB 25
RULE gbl_run_queue > 3 PROB 50
IF CPU_Bottleneck > 50 THEN
CYAN ALERT "The CPU symptom is: ", CPU_Bottleneck
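The probability arithmetic in the example above can be simulated with a short Python sketch. This is illustrative only; the metric values passed in are hypothetical, and the rules mirror the SYMPTOM example.

```python
import operator

OPS = {'>': operator.gt, '<': operator.lt, '>=': operator.ge,
       '<=': operator.le, '==': operator.eq, '!=': operator.ne}

def symptom_value(rules, metrics):
    """Sum the PROB of every RULE whose condition is true -- the way a
    SYMPTOM variable is evaluated at each interval.
    (Illustrative sketch, not OV Performance Agent code.)"""
    return sum(prob for metric, op, threshold, prob in rules
               if OPS[op](metrics[metric], threshold))

rules = [('gbl_cpu_total_util', '>', 75, 25),
         ('gbl_cpu_total_util', '>', 85, 25),
         ('gbl_cpu_total_util', '>', 90, 25),
         ('gbl_run_queue',      '>', 3,  50)]

# CPU at 88% satisfies the first two CPU rules; run queue 4 adds 50.
symptom_value(rules, {'gbl_cpu_total_util': 88, 'gbl_run_queue': 4})  # 100
```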
Example of Time-Based Alarms
You can specify a time interval during which alarm conditions can be active. For example, if
you are running system maintenance jobs that are scheduled to run at regular intervals, you
can specify alarm conditions for normal operating hours and a different set of alarm
conditions for system maintenance hours.
In this example, the alarm will only be triggered during the day from 8:00AM to 5:00PM.
start_shift = "08:00"
end_shift = "17:00"
ALARM gbl_cpu_total_util > 80
TIME > start_shift
TIME < end_shift for 10 minutes
TYPE = "cpu"
START
CYAN ALERT "cpu too high at ", gbl_cpu_total_util, "%"
REPEAT EVERY 10 minutes
RED ALERT"cpu still too high at ", gbl_cpu_total_util, "%"
END
IF time == end_shift then
{
IF gbl_cpu_total_util > 80 then
RESET ALERT "cpu still too high, but at the end of shift"
ELSE
RESET ALERT "cpu back to normal"
}
ELSE
RESET ALERT "cpu back to normal"
Appendix
Viewing and Printing Documents
OV Performance Agent software includes the standard OV Performance Agent documentation
set in viewable and printable file formats. The documents are listed in the following table
along with their file names and online locations.
Table 10 OV Performance Agent Documentation Set
You can view the Adobe Acrobat format (*.pdf) documents online and print as needed. The
ASCII text (*.txt) documents are printable; you can also view a text file on your screen
using any UNIX text editor such as vi.
Adobe Acrobat Files
The Adobe Acrobat files were created with Acrobat 7.0 and can be viewed with Adobe
Acrobat Reader version 4.0 or higher. If the Acrobat Reader is not installed in your Web
browser, you can download it from Adobe’s web site:
http://www.adobe.com
While viewing a document in the Acrobat Reader, you can print a single page, a group of
pages, or the entire document.
Glossary
alarm
An indication of a period of time in which performance meets or exceeds user-specified alarm
criteria. Alarm information can be sent to an OV Performance Manager analysis system and
to HP OpenView Network Node Manager and OpenView Operations (OVO). Alarms can also
be identified in historical log file data.
alarm generator
The service that handles the communication of alarm notification. It consists of perfalarm
and the agdb database. The agsysdb program provides a command line interface for
displaying and changing the actions taken for alarm events.
alarmdef file
An OV Performance Agent text file containing the alarm definitions in which alarm conditions
are specified.
application
A user-defined group of related processes or program files. Applications are defined so that
performance software can collect performance metrics for and report on the combined
activities of the processes and programs.
coda daemon
A daemon that provides collected data to the alarm generator and analysis product data
sources including scopeux log files or DSI log files. coda reads the data from the data sources
listed in the datasources configuration file.
data source
A data source consists of one or more classes of data in a single scopeux or DSI log file set. For
example, the default OV Performance Agent data source is a scopeux log file set consisting of
global data. See also datasources file, coda and perflbd.rc.
datasources file
A configuration file residing in the /var/opt/OV/conf/perf/ directory. Each entry in the
file represents a scopeux or DSI data source consisting of a single log file set. See also
perflbd.rc, coda and data source.
data type
A particular category of data collected by a data collection process. Single-instance data types,
such as global, contain a single set of metrics that appear only once in any data source.
Multiple-instance data types, such as application, disk, and transaction, may have many
occurrences in a single data source, with the same set of metrics collected for each occurrence
of the data type.
device
A device is an input and/or output device connected to a system. Common devices include disk
drives, tape drives, printers, and user terminals.
DSI
data source integration.
dsilog
The OV Performance Agent process that logs self-describing data received from stdin.
empty space
The difference between the maximum size of a log file and its current size.
extract
An OV Performance Agent program that helps you manage your data. In extract mode, raw
or previously extracted log files can be read in, reorganized or filtered as desired, and the
results are combined into a single, easy-to-manage extracted log file. In export mode, raw or
previously extracted log files can be read in, reorganized or filtered as desired, and the results
are written to class-specific exported data files for use in spreadsheets and analysis programs.
global
A qualifier that implies the whole system. Thus, “global metrics” are metrics that describe the
activities and states of each system. Similarly, application metrics describe application
activity; process metrics describe process activity.
interesting process
A process becomes interesting when it is first created, when it ends, and when it exceeds
user-defined thresholds for CPU use, disk use, response time, and other resources.
logappl
The raw log file that contains summary measurements of the processes in each user-defined
application.
logdev
The raw log file that contains measurements of individual device (such as disk) performance.
logglob
The raw log file that contains measurements of the system-wide, or global, workload.
logindx
The raw log file that contains additional information required for accessing data in the other
log files.
logproc
The raw log file that contains measurements of selected interesting processes.
See also interesting process.
logls
The raw log file that contains information about logical systems, when OV Performance Agent
is running on a virtualized environment.
logtran
The raw log file that contains measurements of transaction data.
mwa script
The OV Performance Agent script that has options for starting, stopping and restarting OV
Performance Agent processes such as the scopeux data collector, coda, ovc, ovbbccb,
perflbd, rep_server, and the alarm generator. See also the mwa man page.
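For illustration, a typical status check and collector restart might look like this (the install path is an assumption, and the component keyword scope should be verified against the mwa man page):

```shell
MWA=/opt/perf/bin/mwa   # customary install path (assumption)
if [ -x "$MWA" ]; then
  "$MWA" status          # show which OVPA processes are running
  "$MWA" restart scope   # restart only the scopeux collector
else
  echo "mwa not found at $MWA"
fi
```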
ovbbccb
The OpenView Operations Communication Broker for HTTP(S) based communication
controlled by ovcd. See also coda and ovc.
ovc
The OpenView Operations controlling and monitoring process. In a standalone OVPA
installation, ovcd monitors and controls coda and ovbbccb. If OVPA is installed on a system
with OpenView Operations for UNIX 8.x agent installed, ovcd also monitors and controls
OpenView Operations for UNIX 8.x processes. See also coda and ovbbccb.
ovpa script
The OV Performance Agent script that has options for starting, stopping and restarting OV
Performance Agent processes such as the scopeux data collector, alarm generator, ttd,
midaemon, and coda. See also the ovpa man page.
OV Performance Manager
OV Performance Manager provides integrated performance management for multi-vendor
distributed networks. It uses a single workstation to monitor environment performance on
networks that range in size from tens to thousands of nodes.
parm file
An OV Performance Agent file that contains the collection parameters used by scopeux to
customize data collection.
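A hedged fragment of a parm file (the values are illustrative only; see the parm file chapter for the complete syntax):

```
# Illustrative fragment only.
id=myhost
log global application process device=disk transaction
size global=10.0, process=20.0     # raw log file sizes in megabytes
mainttime=23:30                    # nightly log file maintenance time

# A user-defined application, matched by executable file name:
application=Compilers
file=cc,gcc,ld
```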
perflbd
The performance location broker daemon that reads the perflbd.rc file when OV
Performance Agent is started and starts a repository server for each data source that has been
configured.
perflbd.rc
A configuration file residing in the /var/opt/perf/ directory. Each entry in the file
represents a scopeux or DSI data source consisting of a single log file set. This file is
maintained as a symbolic link to the datasources file. See also perflbd, data source, and
repository server.
performance alarms
See alarms.
process
Execution of a program file. It can represent an interactive user (processes running at normal,
nice, or real-time priorities) or an operating system process.
PRM
See process resource manager.
real time
The actual time in which an event takes place.
repeat time
An action that can be specified for performance alarms. Repeat time designates the amount of
time that must pass before an activated and continuing alarm condition triggers another
alarm signal.
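In the alarmdef file this appears as a REPEAT clause. A hypothetical alarm (the metric name and thresholds are illustrative):

```
alarm gbl_cpu_total_util > 90 for 5 minutes
  start red alert "CPU utilization exceeded 90%"
  repeat every 10 minutes red alert "CPU utilization still above 90%"
  end reset alert "CPU utilization back to normal"
```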
repository server
A server that provides data to the alarm generator and the OV Performance Manager analysis
product. There is one repository server for each data source configured in the perflbd.rc
configuration file consisting of a scopeux or DSI log file set. A default repository server is
provided at start up that contains a single data source consisting of a scopeux log file set.
resize
Changing the overall size of a raw log file using the utility program's resize command.
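A hedged batch sketch (the -xr option runs resize non-interactively; the install path and argument spellings are assumptions to confirm against the utility man page):

```shell
UTILITY=/opt/perf/bin/utility   # customary install path (assumption)
if [ -x "$UTILITY" ]; then
  # Resize the global log file to hold roughly 120 days of data,
  # leaving 45 days of empty space; "yes" confirms the change.
  "$UTILITY" -xr global days=120 empty=45 yes
else
  echo "utility not found at $UTILITY"
fi
```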
roll back
Deleting one or more days worth of data from a log file, oldest data deleted first. Roll backs
are performed when a raw log file exceeds its maximum size parameter.
RUN file
The file created by the scopeux collector to indicate that the collection process is running.
Removing the RUN file causes scopeux to terminate.
rxlog
The default output file created when data is extracted from raw log files.
scopeux
The OV Performance Agent collector program that collects performance data and writes (logs)
this raw measurement data to raw log files for later analysis or archiving.
status.scope
The file created by the scopeux collector to record status, data inconsistencies, or errors.
transaction tracking
The OV Performance Agent capability that allows information technology (IT) resource
managers to measure end-to-end response time of business application transactions. To collect
transaction data, OV Performance Agent must have a process running that is instrumented
with the Application Response Measurement (ARM) API.
utility
An OV Performance Agent program that lets you open, scan, and generate reports on raw and
extracted log files. You can also use it to resize raw log files, check parm file syntax, check the
alarmdef file syntax, and obtain alarm information from historical log file data.
Index
extract, 111
utility, 59
command line arguments
    extract program, 92
    utility program, 47

D
default values, parm file, 22
deleting data sources, 14
detail command, utility program, 64
disk command, extract program, 119

E
extract command, extract program, 124
extract commands
    application, 115
    class, 116
    configuration, 117
    cpu, 118
    disk, 119
    exit, 120
    export, 97, 121
    extract, 124
    filesystem, 126
    global, 127
    guide, 128
    help, 129
    list, 131
    lvolume, 134
    menu, 135
    monthly, 136
    output, 139
    process, 141
    quit, 142
    report, 143
    sh, 144
    shift, 145
    show, 146
    start, 148
    stop, 150
    weekdays, 153
    weekly, 154
    yearly, 156
extracting log file data, 124
extract program, 14, 89
    command line arguments, 92
    command line interface, 92
    commands, 111
    interactive versus batch, 90
    running, 90

F
file parameter, parm file, 32
files
    alarm definitions, 61, 159
    alarmdef, 61, 63, 159, 160, 186
    datasources, 160
    export template, 97
    logappl, 20, 25
    logdev, 20, 26
    logglob, 20, 25
    logindx, 20
    logproc, 25
    logtran, 20, 26
    parm, 13, 23
    perflbd.rc, 14
    reptall, 98
    reptfile, 97, 143
    repthist, 98
    status.scope, 21
filesystem command, extract program, 126
Flush, 31
format parameter
    export template file, 100

G
gapapp, 29
GlancePlus, 16
global command, extract program, 127
group parameter, parm file, 33
guide command, extract program, 128
guide command, utility program, 66
guided mode
    extract, 128
    utility, 66

H
headings parameter, export template file, 100
help command, extract command, 129
help command, utility program, 68
HP Open View, 159
HP Open View Network Node Manager, 160

I
ID parameter
    parm file, 25
IF statement, alarm syntax, 176
INCLUDE statement, alarm syntax, 178
interactive mode
    extract program, 91
    utility program, 45
interesting processes, 25, 39
items parameter, export template file, 101

J
javaarg parameter, parm file, 30

L
layout parameter, export template file, 101
list command, extract program, 131
list command, utility program, 69
local actions
    alarms, 175
    executing, 161
logappl file, 20, 25
    PRM groups, 25
logdev file, 20, 26
logfile command, utility program, 70
log file data
    analyzing for alarm conditions, 162
    archiving, 136, 154, 156
    exporting, 121
    extracting, 124
log files
    archiving data, 40
    controlling disk space, 39
    DSI, 13, 124
    MPE, 187
    resizing, 75
    rolling back, 39, 40
    scanning, 80
    setting maximum size, 30, 39
logglob file, 20, 25, 156
logical volume name record, 110
logindx file, 20
log parameter, parm file, 25
logproc file, 25
logtran file, 20, 26
LOOP statement, alarm syntax, 177
lvolume command, extract program, 134

M
maintenance time, parm file, 30
mainttime parameter, parm file, 30, 39
managing data collection, 19
memory option, 27
menu command
    extract program, 135
    utility program, 72
messages in alarm syntax, 168
metric names in alarm syntax, 167, 182
missing parameter, export template file, 101
modifying
    collection parameters, 22
    parm file, 22
monthly command, extract program, 136
MPE log files, viewing, 187
mwa script, 37

N
netif name record, 110
nokilled option, 28

O
OpenView Operations (OVO), 159, 161
or parameter, parm file, 34
output command, extract program, 139
output parameter, export template file, 101
OV Operations, 16
OV Performance Agent
    components, 13
    data collection, 13
    description, 10
    extract program, 14, 89
    Operating Systems Supported, 10
    utility program, 14, 43
OV Performance Manager, 10, 16
OV Reporter, 16

P
parameter
    subprocinterval, 29
parameters, 23
parm file, 13, 23
    application definition parameters, 31
    configure data logging intervals, 35
    default values, 22
    flush, 31
    gapapp, 29
    modifying, 22
    parameters, 23, 25
    subprocinterval parameter, 29
    syntax check, 73
parm file application keywords
    argv1, 32
parm file application parameters
    cmd, 33
parmfile command, utility program, 73
parm file parameters
    application name, 31
    file, 32
    group, 33
    ID, 25
    javaarg, 30
    log, 25
    mainttime, 30, 39
    or, 34
    priority, 34
    scopeprocinterval, 29
    scopetransactions, 29
    size, 30
    user, 33
perfalarm, 160, 179, 180
perflbd, 160
perflbd.rc file, 14
performance alarms, 159
perfstat command, 21
printing documentation, 188
PRINT statement, alarm syntax, 176
priority parameter, parm file, 34

extract program, 90
utility program, 44

S
scan command, utility program, 80
scanning a log file, 80
SCOPE default data source, 14, 167, 179, 180
scopeprocinterval parameter, parm file, 29
scopetransactions parameter, parm file, 29
scopeux, 13, 20
    data sources, 14
    stopping, 37

utility command, 74
utility program, 65

T
threshold parameter, parm file
    cpu option, 27
    disk option, 27
    memory option, 27
    nokilled option, 28
    nonew option, 27
    shortlived option, 28
transaction name record, 109
transaction tracking, 14

U
user parameter, parm file, 33
USE statement, alarm syntax, 179
utility commands
    analyze, 61
    checkdef, 63
    detail, 64
    exit, 65
    guide, 66
    help, 68
    list, 69
    logfile, 70
    menu, 72
    parmfile, 73
    quit, 74
    resize, 45, 75
    scan, 80
    sh, 82
    show, 83
    start, 84
    stop, 86
utility program, 14, 43, 59, 162
    batch mode, 44
    batch mode example, 45
    command line arguments, 47
    command line interface, 44, 47
    entering shell commands, 82
    interactive mode, 45
    interactive program example, 45
    interactive versus batch, 44
    running, 44
utility scan report
    application overall summary, 55
    application-specific summary report, 53
    collector coverage summary, 55
    initial parm file application definitions, 51
    initial parm file global information, 51
    log file contents summary, 56
    log file empty space summary, 56
    parm file application addition/deletion notifications, 52
    parm file global change notifications, 52
    process log reason summary, 54
    scan start and stop, 55
    scopeux off-time notifications, 53

V
Variable Data Logging, 35
variables, alarm syntax, 181
VAR statement, alarm syntax, 181
viewing
    documentation, 188
viewing MPE log files, 187

W
weekdays command, extract program, 153
weekly command, extract program, 154
WK1 format, export file, 100

Y
yearly command, extract program, 156