Administration Guide and Reference
Version 3.1
IBM
SC28-3211-01
Note
Before using this information and the product it supports, read the information in “Notices” on page
367.
This edition applies to Version 3 Release 1 of IBM Z Performance and Capacity Analytics (5698-AS3) and to all
subsequent releases and modifications until otherwise indicated in new editions.
Last updated: December 2022
© Copyright International Business Machines Corporation 1993, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
© Teracloud ApS 2018, 2022.
Contents
Figures................................................................................................................. ix
Tables................................................................................................................xvii
Preface...............................................................................................................xix
Who should read this book........................................................................................................................xix
Publications............................................................................................................................................... xix
Accessing publications online............................................................................................................. xix
Accessibility............................................................................................................................................... xix
Support information................................................................................................................................... xx
Conventions used in this book................................................................................................................... xx
Typeface conventions........................................................................................................................... xx
What's new in this edition (December 2022)........................................................................................... xxi
Step 9: Customizing JCL............................................................................................................................ 27
Step 10: Testing the installation of the base.............................................................................................28
Step 11: Reviewing Db2 parameters.........................................................................................................30
Step 12: Installing components................................................................................................................ 31
Installing multiple IBM Z Performance and Capacity Analytics systems................................................ 31
Configuration........................................................................................................................................ 86
Stages, keywords, and parameter settings......................................................................................... 86
Common keywords...............................................................................................................................87
Input stage keywords...........................................................................................................................88
Process stage keywords.......................................................................................................................90
Output stage keywords........................................................................................................................ 92
Advanced configurations......................................................................................................................95
Installing the Collator Function for IBM Z Performance and Capacity Analytics.....................................99
Deployment.......................................................................................................................................... 99
Collation............................................................................................................................................. 104
Collator Configuration........................................................................................................................ 105
Collator Installation........................................................................................................................... 109
Data Splitter............................................................................................................................................. 110
Configuring the SMF Extractor........................................................................................................... 110
Installing the Data Splitter.................................................................................................................110
Configuring the Data Splitter..............................................................................................................110
Direct streaming.................................................................................................................................111
The DRLJLDMC collect job and the parameters it uses.................................................................... 247
Modifying the list of successfully collected log data sets................................................................. 250
Modifying the list of unsuccessfully collected log data sets.............................................................252
Working with the Continuous Collector...................................................................................................253
Notices..............................................................................................................367
Trademarks.............................................................................................................................................. 368
Bibliography...................................................................................................... 369
IBM Z Performance and Capacity Analytics publications.......................................................................369
Glossary............................................................................................................ 371
Index................................................................................................................ 375
Figures
6. Administration window................................................................................................................................. 7
9. SMF Extractor.............................................................................................................................................. 11
24. hub.properties - Configure a DataMover on a hub system...................................................................... 41
28. Continuous Collector: hub and spoke configuration option A (separate log streams, partitioned
Db2 database)............................................................................................................................................ 43
29. Continuous Collector: hub and spoke configuration option B (combined log streams)......................... 43
30. Continuous Collector: multiple Spoke systems and one Hub DataMover............................................... 44
31. Continuous Collector: multiple Spoke systems and multiple Hub DataMovers......................................44
49. Hybrid Deployment, Split Data Streams................................................................................................ 103
55. Using SQL to define a table space (see definition member DRLSSAMP).............................................. 128
56. Using GENERATE to define a table space (see definition member DRLSKZJB)....................................129
57. Definition member DRLOSAMP, defining reports and report groups.................................................... 131
58. IBM Z Performance and Capacity Analytics definition member DRLQSA01, report query.................. 132
63. Db2 environment for the IBM Z Performance and Capacity Analytics database..................................146
69. DRLJCOPY job for backing up IBM Z Performance and Capacity Analytics table spaces.................... 155
74. Installation Options window...................................................................................................................171
95. Using QMF to display an IBM Z Performance and Capacity Analytics table......................................... 204
99. Condition window................................................................................................................................... 209
124. Grant Privilege window.........................................................................................................................237
149. Tablespace Allocation report............................................................................................................... 317
Tables
1. KPM components...........................................................................................................................................9
11. Relationship of Analytics component report to non-Analytics component report...............................347
13. Relationship of Analytics component tables used in view to non-Analytics component tables
used in view..............................................................................................................350
21. Usage and Accounting Collector Subsystem Member Names (Partial List)..........................................364
Preface
This book provides an introduction to IBM Z Performance and Capacity Analytics, the administration
dialog and the reporting dialog. It describes procedures for installing the base product and its features
and for administering IBM Z Performance and Capacity Analytics through routine batch jobs and the
administration dialog.
It also describes how to set up and configure extended reporting capability through interfacing
analytics tools, both on and off platform.
The following terms are used interchangeably throughout this guide:
• MVS™, OS/390®, and z/OS.
• VM and z/VM®.
Publications
This section describes how to access the IBM Z Performance and Capacity Analytics publications online.
For a list of publications and related documents, refer to “IBM Z Performance and Capacity Analytics
publications” on page 369.
Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. With this product, you can use assistive technologies to hear and
navigate the interface. You can also use the keyboard instead of the mouse to operate all features of the
graphical user interface.
Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM provides the following
ways for you to obtain the support you need:
• Searching knowledge bases: You can search across a large collection of known problems and
workarounds, Technotes, and other information.
• Obtaining fixes: You can locate the latest fixes that are already available for your product.
• Contacting IBM Software Support: If you still cannot solve your problem, and you need to work with
someone from IBM, you can use a variety of ways to contact IBM Software Support.
For more information about these three ways of resolving problems, see Appendix A, “Support
information,” on page 365.
Typeface conventions
This guide uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip, and Operating system considerations)
• Column headings in a table
• Keywords and parameters in text
Italic
• Citations (titles of books, diskettes, and CDs)
• Words defined in text
• Emphasis of words (words as words)
• Letters as letters
• New terms in text (except in a definition list)
• Variables and values you must provide
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options
Previous editions
May 2022
• APAR PH38474 - ELK reporting:
– Updated ELK version: “Software prerequisites” on page 14
• APAR PH44198 - SMF Extractor:
– Changed TRACE(Y) to TRACE(N) on the PARM option in “DRLJSMFX - SMF Extractor startup
procedure” on page 38
– Updated the F smfext,STOP command in “SMF Extractor console commands” on page 61
• APAR PH40134 - DataImporter:
– Updated steps 5, 8, 10, and 11 in “Setting up the DataImporter” on page 71
– Updated DataImporter historical data tables in “Publishing Historical Data” on page 76
– New command process.r.n.forceFields added in “Process stage keywords” on page 90
November 2021
• APAR PH38154 - Db2 Admin Authority reporting enhancements:
– “Administration reports” on page 308
September 2021
• APAR PH39079 - generate WTOs for selected messages:
– Added new information: “Using log collector language to collect data” on page 136
July 2021
• APAR PH38056 - refresh definitions and lookup tables when running the Continuous Collector:
– Added new information: “Working with the Continuous Collector” on page 253
• APAR PH07965 - Db2 Delta Statistics:
– Added a new lookup table: “TIME_RES” on page 282
– “Example of table contents” on page 283
June 2021
• “Dialog parameters - variables and fields” on page 115
• “GENERATE_PROFILES” on page 272
• “SMF records” on page 288
May 2021
• Updated Data Splitter description:
– “Introduction to the Data Splitter and the SMF Extractor” on page 11
– “Receiving raw SMF records from the SMF Extractor” on page 11
– “Data Splitter” on page 110
– “Configuring the Data Splitter” on page 110
• APAR PH35178 - new component DataImporter added to stream data from Db2:
– “Setting up data streaming” on page 62
– “Establishing a Publication Mechanism” on page 64
– “Shadowing Data out of Db2” on page 64
– “Processing Published Data in the IBM Z Common Data Provider” on page 66
– “Processing Published Data in an IBM Z Performance and Capacity Analytics Catcher” on page
67
– “Setting up the DataImporter” on page 71
– “Hardware and Network Considerations” on page 76
– “Network Considerations” on page 76
– “Publishing Historical Data” on page 76
March 2021
• Added new information for “Introduction to the Collator” on page 10 and “Introduction to the Data
Splitter and the SMF Extractor” on page 11
• Modified the structure and added new information to Chapter 3:
– “SMF Configuration” on page 33
– “Review the SID parameter” on page 33
– “Review your SYS settings” on page 34
– “Review each of your SUBSYS settings” on page 34
– “SMF Extractor” on page 34
– Moved from Chapter 4: “Sample configuration members” on page 34
• Modified the structure and added new information to Chapter 4:
– “Extending the SMF extractor” on page 59
– “Configuration parameters” on page 59
– “SMF Extractor console commands” on page 61
– “Data Splitter” on page 110
– “Configuring the SMF Extractor” on page 110
– “Installing the Data Splitter” on page 110
– “Configuring the Data Splitter” on page 110
– “Direct streaming” on page 111
February 2021
• The chapter structure has been modified by splitting Chapter 3 into two new chapters.
– Chapter 3: Chapter 3, “Installing the SMF Extractor and Continuous Collector,” on page 33
– Chapter 4: Chapter 4, “Installation Optional Extensions,” on page 59
November 2020
• Clarified architecture considerations for installation prerequisites: “Software prerequisites” on page
14
• Updated data set description for directory containing the DataMover: “Step 1: Reviewing the results
of the SMP/E installation” on page 16
• Improved instructions for Continuous Collector installation procedure: “Step 2: Installing the
Continuous Collector” on page 47
• Improved instructions for Continuous Collector startup procedure: “Step 4: Activating the
Continuous Collector” on page 56
• APAR PH28368: Improved installation instructions for remote data streaming
• APAR PH31089 and APAR PH28498: Added new instruction for ELK Shadower configuration
• Added new instruction for Collator function: “Installing the Collator Function for IBM Z Performance
and Capacity Analytics” on page 99
• APAR PH29556 - Added new instruction for dynamically modifying the Continuous Collector commit
interval: “Working with the Continuous Collector” on page 253
September 2020
• Improved instruction clarity for Secure Setup Procedure: “Step 3: Initializing the Db2 database” on
page 20
• Improved instructions for SMF Extractor installation: “SMF Extractor tips” on page 46
• Clarified streaming options for off-platform reporting
• Improved sample JCL for log streaming to coupling facility
• Clarified the description of modtenu field: “Dialog parameters - variables and fields” on page 115
• APAR PH26230: Added “Defining triggers” on page 131
August 2020
• Removed outdated CICS Partitioning Feature Customization information in Chapter 2, “Installing
IBM Z Performance and Capacity Analytics,” on page 13
• Replaced DELPART_PROFILES with GENERATE_PROFILES: “Step 7: Determining partitioning mode
and keys” on page 25
• Removed outdated procedure in “Step 11: Reviewing Db2 parameters” on page 30
• Removed outdated Publisher DataMover information in Chapter 3, “Installing the SMF Extractor
and Continuous Collector,” on page 33
• Updated Installing the Continuous Collector to include running in zIIP mode: “Step 2: Installing the
Continuous Collector” on page 47
• Added SSL Configuration instructions for Data Mover: “DataMover configuration options” on page
52
• Moved Data streaming setup instructions for off platform reporting from Guide to Reporting
• Added additional detail for Process Stage Keyword FILTER: “Process stage keywords” on page 90
• Modified STEPLIB to be up to date: “DRLJCOLL job for collecting data from an SMF data set” on
page 137
• Removed outdated content about system tables: “Understanding table spaces” on page 146
Chapter 1. Introduction to IBM Z Performance and
Capacity Analytics
IBM Z Performance and Capacity Analytics enables you to effectively manage the performance of your
system by collecting performance data in a Db2 database and presenting the data in a variety of formats
for use in systems management.
After reading this topic, you should have a basic understanding of IBM Z Performance and Capacity
Analytics and be ready to install it.
IBM Z Performance and Capacity Analytics has two basic functions:
1. Collecting systems management data into a Db2 database.
2. Reporting on the data.
IBM Z Performance and Capacity Analytics consists of a base product and several optional features.
The IBM Z Performance and Capacity Analytics base can generate graphical and tabular reports using
systems management data it stores in its Db2 database. To display graphical reports, IBM Z Performance
and Capacity Analytics can use IBM Graphical Data Display Manager (GDDM) or web reporting tools such
as IBM Cognos Analytics. You can also send the data to other platforms for further analysis and reporting
using interfacing products such as Splunk and ELK.
The IBM Z Performance and Capacity Analytics Administration and Reporting dialogs enable you to
specify your data collection and reporting requirements. An overview of the setup for data collection and
reporting is shown in the following diagram. Use the Administration dialog to define to the Log Collector
the type of log data that you want to collect from SMF, IMS, and other systems to store in Db2 databases.
Use the Reporting dialog to define to the report generator the reports that you want produced.
[Figure: Overview of data collection and reporting. The Administration dialog defines to the Log Collector
how the SMF log and other logs are collected; the Reporting dialog defines to the report generator and
reporting tools the tabular reports and charts to be produced.]
You can use IBM Z Performance and Capacity Analytics for batch data collection and periodic reporting.
When you are familiar with batch operation, you can implement more advanced configurations for
automated data gathering and continuous collection.
[Figure: Batch collection. SMF data extracted on each system (SYS1, SYS2, SYS3) must be complete
before each batch run; a batch Log Collector job on SYSX writes the data to Db2 during the run. Updated
reports, based on the aggregated data, become available at the end of each batch run.]
[Figure: Continuous collection. IBM Z Performance and Capacity Analytics agents on the gathering
systems (SYS2, SYS3) gather and transmit the data; a sender/receiver pair streams the incoming data to
the Continuous Collector on SYSX, which writes it to Db2 for reporting.]
These features are used to collect and report on systems management data, such as System Management
Facility (SMF) data or IMS log data.
Each performance feature has components, which are groups of related IBM Z Performance and Capacity
Analytics definitions. For example, the z/OS Performance Management (MVSPM) component consists
of everything IBM Z Performance and Capacity Analytics needs to collect log data and create reports
showing z/OS performance characteristics.
Log definitions
IBM Z Performance and Capacity Analytics gathers performance data about systems from sequential data
sets such as those written by SMF under z/OS, or by the Information Management System (IMS). These
data sets are called log data sets or logs.
To collect log data, IBM Z Performance and Capacity Analytics needs log descriptions. The log collector
stores descriptions of logs as log definitions in the IBM Z Performance and Capacity Analytics database.
All log definitions used by IBM Z Performance and Capacity Analytics features are provided with the base
product.
The administration dialog enables you to create log definitions or modify existing ones. For more
information, see “Working with log and record definitions” on page 183.
The log collector language statement, DEFINE LOG, also enables you to define logs. For more information,
refer to the description of defining logs in the Language Guide and Reference.
Record definitions
Each record in a log belongs to one unique record type. Examples of record types include SMF record
type 30, generated by z/OS, and SMF record type 110, generated by CICS. For IBM Z Performance and
Capacity Analytics to process a record, the record type must be defined. Detailed record layouts, field
formats, and offsets within a record, are described in IBM Z Performance and Capacity Analytics record
definitions. All record definitions used by IBM Z Performance and Capacity Analytics features are provided
with the base product.
The administration dialog enables you to create and modify record definitions. For more information, see
“Working with log and record definitions” on page 183.
The log collector language statement, DEFINE RECORD, also enables you to define records. For more
information, refer to the description of defining records in the Language Guide and Reference.
Update definitions
Instructions for processing data and inserting it into tables in the IBM Z Performance and Capacity
Analytics database are provided in update definitions. Each update definition describes how data from a
source (either a specific record type, or a row of a table) is manipulated and inserted into a target (a row
in a table). The update definitions used by an IBM Z Performance and Capacity Analytics component are
provided with the feature that contains the component.
The administration dialog enables you to create update definitions or modify them. For more information,
see “Displaying and modifying update definitions of a table” on page 220.
The log collector language statement, DEFINE UPDATE, also enables you to define updates. For more
information, refer to the description of defining updates in the Language Guide and Reference.
Table definitions
IBM Z Performance and Capacity Analytics stores data collected from log data sets in its database tables.
It also stores IBM Z Performance and Capacity Analytics system data in system tables and site-specific
operating definitions in lookup and control tables. A table definition identifies the database and table
space in which a table resides, and identifies columns in the table. The table definitions used exclusively
by the feature components in IBM Z Performance and Capacity Analytics are provided with the feature.
The administration dialog enables you to create or modify lookup and data table definitions. For more
information, see “Working with tables and update definitions” on page 201.
Collect process
When definitions exist for a log, its records, the update instructions for the record data, and the target
data tables, you can collect data from that log. You start the collect process:
• From the administration dialog.
• With the log collector language statement COLLECT.
The log collector retrieves stored definitions and performs the data collection that they define.
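For illustration only, a minimal log collector request might look like this (SMF is an example log name;
options such as REPROCESS and the commit interval are described in the Language Guide and Reference):

 COLLECT SMF;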
Figure 4 on page 5 shows the collect process.
[Figure 4: The collect process. Log data is read using the log definition and log procedure; update
definitions, drawing on lookup tables, populate and summarize the data tables; SQL queries, again
drawing on lookup tables, produce the reports.]
7. IBM Z Performance and Capacity Analytics often selects data from lookup tables to fulfill the data
manipulations that update definitions require.
8. IBM Z Performance and Capacity Analytics writes non-summarized and first-level summarized data
to data tables specified by the update definitions.
9. IBM Z Performance and Capacity Analytics uses updated tables as input for updating other, similar
tables that are for higher summary levels. If update definitions specify data summarization:
a. IBM Z Performance and Capacity Analytics selects data from a table as required by the update
definitions and performs required data summarization.
b. IBM Z Performance and Capacity Analytics updates other data tables as required by update
definitions.
c. IBM Z Performance and Capacity Analytics might select data from lookup tables during this
process (although that is not shown in the diagram for this step).
10. After IBM Z Performance and Capacity Analytics stores the data from a collect, you can display
reports on the data. IBM Z Performance and Capacity Analytics uses a query to select the data for the
report.
11. Optionally, IBM Z Performance and Capacity Analytics might select data from lookup tables specified
in the query.
12. IBM Z Performance and Capacity Analytics creates report data, displaying, printing, and saving it as
you requested.
For more information about collecting log data, see “Setting up operating routines” on page 135.
Data tables
IBM Z Performance and Capacity Analytics stores data that it collects in hourly, daily, weekly, and monthly
tables, and in non-summarized tables. It maintains groups of tables that have identical definitions except
for their summarization levels. For example, the EREP component of the System Performance feature
creates the data tables EREP_DASD_D and EREP_DASD_M, which differ only because one contains daily
data and the other, monthly data.
Because the IBM Z Performance and Capacity Analytics database is relational, you can:
• Combine information from any of your systems into a single report.
• Summarize by system within department, by department within system, or by whatever grouping is
required.
You can keep data tables containing historical data for many years without using much space. The
database size depends mainly on the amount of short-term detail data you keep, not on the summarized
weekly or monthly data.
The IBM Z Performance and Capacity Analytics database contains operating definitions in its system
tables. These definitions include those for logs, records, updates, and tables shipped with IBM Z
Performance and Capacity Analytics. The database also contains lookup tables of parameters that you
supply, such as performance objectives or department and workload definitions for your site.
When you produce a report, you can specify values for the query that is used to select specific rows of
data. You can display, print, or save the retrieved data in either a tabular or a graphic report format.
Note: To generate and display graphic reports, IBM Z Performance and Capacity Analytics uses Graphical
Data Display Manager (GDDM). If you are using IBM Z Performance and Capacity Analytics without QMF,
GDDM is not required. If GDDM is not used, all reports are displayed in tabular form.
A report can consist of these items, which are identified in the report definition:
• A query for selecting data (required).
• A form that formats the data and specifies report headings and totals.
• Graphical Data Display Manager (GDDM) format for a graphic report.
• Report attributes (for creating logical groups of reports).
• Report groups to which the report belongs.
• Variables in the report.
When installing a component, you install a comprehensive set of predefined report queries, forms, and,
optionally, GDDM formats for the component. The reporting dialog enables you to:
• Define new report definitions or modify existing ones.
• Define new queries and forms or modify existing ones, using QMF or the IBM Z Performance and
Capacity Analytics built-in report generator.
• Display reports.
• Define reports for batch execution.
The Guide to Reporting describes the host reporting dialog. For a description of using the Common User
Access (CUA) interface presented in IBM Z Performance and Capacity Analytics windows and helps, refer
to the "Getting Started" section of that book.
Each KPM component uses table space profiles which allow the table, table space, and index settings
within each KPM component to be easily modified in one place. Before installing the KPM components,
refer to the topic “Working with table space profiles” on page 177.
In the above scenario, raw SMF records are being gathered from two systems (SYS1 and SYS2) and
forwarded to a control system (SYSX), where they are received and written to a log stream. The Collator
then processes them and sorts them into separate streams of data sets that can be archived.
Installation prerequisites
This section lists the hardware and software prerequisites.
Hardware prerequisites
IBM Z Performance and Capacity Analytics can run in any hardware environment that supports the
required software.
Software prerequisites
This section lists the software requirements to install and use IBM Z Performance and Capacity Analytics
basic functions and optional component features. Refer to the IBM Z Performance and Capacity Analytics
Program Directory for further information about mandatory and conditional requirements.
Architecture considerations
IBM Z Performance and Capacity Analytics can run on a stand-alone system, or can be configured for
multiple systems with hub and spoke architecture. For more information, refer to “Continuous Collector
configuration options” on page 42.
Most of the prerequisites apply to the hub or stand-alone system, and do not need to be installed on every
system that you want IBM Z Performance and Capacity Analytics to collect data from.
It is highly recommended that IBM Z Performance and Capacity Analytics be installed into its own Db2
subsystem, to avoid contention with other applications.
Data gathering
IBM Z Performance and Capacity Analytics is installed on one or more hub systems, where data from
multiple spoke systems is analyzed and archived. A system can be both a spoke and a hub.
A hub system requires:
• z/OS V2.2.0 or later
• Db2 V11 or later
While you can manually transfer data from the spoke systems to the hub, there is an automated data
gathering agent that can be installed on the spoke systems running z/OS to gather SMF data.
A spoke system running the automated data gathering agent requires:
• z/OS V2.2.0 or later
• IBM 64-bit Java 8
To enable automated data gathering by spokes, the hub needs:
• IBM 64-bit Java 8
Data can be gathered from the following applications on spoke systems:
z/OS systems
• z/OS V2.2.0 or later
– DFSMS/OAM
– DFSMS/RMM
– DFSMS/MVS
– JES2 and JES3
– EREP
– Communications Server
– Tivoli Workload Scheduler for z/OS
– Tivoli Information Management for z/OS
– MVS
• Db2 for z/OS V11.0.0 or later
• CICS Transaction Server V5.2 or later
• IMS V13 or later
• IBM Security zSecure Manager for RACF z/VM V1.11.1 or later
• IBM Tivoli NetView for z/OS – V5.4.0 or later
• Optionally, the IBM Z Common Data Provider V1.2 or higher to distribute the feed. The
alternative is a direct feed from IBM Z Performance and Capacity Analytics to Splunk or ELK.
The secondary authorization IDs a user has access to can be controlled in different ways. If you
have RACF installed, users can usually use the RACF groups that they are connected to as secondary
authorization IDs. If RACF is not installed, secondary authorization IDs can be assigned by the Db2
authorization exit.
This topic describes how to define the secondary authorization IDs using RACF. If you assign secondary
authorization IDs in another way, consult your Db2 system administrator.
Procedure
1. Create three RACF groups. The default RACF group IDs are DRL, DRLSYS, and DRLUSER.
The IDs DRL and DRLSYS are also prefixes for the IBM Z Performance and Capacity Analytics Db2
tables. If you plan to change the prefixes for IBM Z Performance and Capacity Analytics system tables
and views (DRLSYS) or for other IBM Z Performance and Capacity Analytics tables and views (DRL) in
“Step 3: Initializing the Db2 database” on page 20, use your values as RACF group IDs.
If all users on your system need access to the IBM Z Performance and Capacity Analytics data, you do
not need the DRLUSER group. If different users need access to different sets of tables, you can define
several RACF group IDs, such as DRLMVS and DRLCICS, instead of the DRLUSER group.
You can use either RACF commands or RACF dialogs to specify security controls. These commands
are samples. You may have to specify additional operands to comply with the standards of your
organization.
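For example, the groups might be created like this (sample commands only; add operands such as
OWNER or SUPGROUP as your standards require):

 ADDGROUP DRL
 ADDGROUP DRLSYS
 ADDGROUP DRLUSER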
2. Connect IBM Z Performance and Capacity Analytics administrators to all three groups.
Use RACF commands or RACF dialogs to connect user IDs to a group. This command is a sample.
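For example, where ADMIN1 is a hypothetical administrator user ID:

 CONNECT (ADMIN1) GROUP(DRL)
 CONNECT (ADMIN1) GROUP(DRLSYS)
 CONNECT (ADMIN1) GROUP(DRLUSER)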
3. Connect IBM Z Performance and Capacity Analytics users to the DRLUSER group only.
Use RACF commands or RACF dialogs to connect user IDs to a group. This command is a sample.
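For example, where USER1 is a hypothetical end-user ID:

 CONNECT (USER1) GROUP(DRLUSER)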
4. If you use different RACF group IDs, be sure to use them throughout all the steps listed.
5. If you use group IDs other than DRLUSER, you must modify the following fields in the Dialog
Parameters window (see Figure 11 on page 24):
Users to grant access to
Users to grant access to must be specified when you create the system tables and when you install
components. When you create the system tables it should contain all group IDs that should have
access to IBM Z Performance and Capacity Analytics. To grant access to all users, specify PUBLIC.
When you install components, Users to grant access to should contain the group IDs that should
have access to the component.
SQL ID to use (in QMF)
If QMF is used with IBM Z Performance and Capacity Analytics in your installation, the SQL ID to
use in QMF must be specified by each user. It should be one of the groups the user is connected to
or the user's own user ID.
6. If you use different RACF group IDs, you can make your RACF group IDs the default for all IBM
Z Performance and Capacity Analytics users. Edit the IBM Z Performance and Capacity Analytics
initialization exec DRLFPROF, described in “Step 4: Preparing the dialog and updating the dialog
profile” on page 21. Variables def_syspref, def_othtbpfx, def_iduser1, and def_idsqluser may need to
be changed, depending on the changes you made to the IDs.
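For illustration, the relevant DRLFPROF assignments might look like this (a sketch assuming REXX-style
assignments; the values shown are the shipped defaults, so substitute your own group IDs):

 def_syspref   = "DRLSYS"    /* Prefix for system tables    */
 def_othtbpfx  = "DRL"       /* Prefix for all other tables */
 def_iduser1   = "DRLUSER"   /* Users to grant access to    */
 def_idsqluser = "DRLUSER"   /* SQL ID to use in QMF        */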
Procedure
1. Grant authority to the administrators:
a) Create all tables and views with the administrator user ID as prefix. That is, replace DRLSYS and
DRL with a user ID. Only one administrator is possible.
b) Grant SYSADM authority to all administrators.
2. Give authority to the users in one of two ways. This is done in “Step 5: Setting personal dialog
parameters” on page 23.
• Specify a list of up to 8 user IDs in the field Users to grant access to in the Dialog Parameters
window (Figure 11 on page 24).
• Specify PUBLIC in the field Users to grant access to. This gives all users access to IBM Z
Performance and Capacity Analytics data. This is easier to maintain than a list of user IDs.
For both cases, each user must specify his own user ID in the SQL ID to use (in QMF) field in the
Dialog Parameters window, if QMF is used with IBM Z Performance and Capacity Analytics in your
installation.
You must specify user IDs in the field Users to grant access to before you create the system tables. It
is also used when you install components.
Example: Installation steps when secondary user IDs are not used
Follow this example if you have several administrators. In the example, we assume that there are three
administrators:
• ADMIN1 is the user who creates system tables.
• ADMIN2 and ADMIN3 are the other administrators.
When performing the installation, note these items:
• “Step 3: Initializing the Db2 database” on page 20 : Change DRL and DRLSYS in the DRLJDBIN job to
ADMIN1, ADMIN2, and ADMIN3.
• “Step 4: Preparing the dialog and updating the dialog profile” on page 21: No changes.
• “Step 5: Setting personal dialog parameters” on page 23: Use ADMIN1 as prefix for system tables,
ADMIN2 and ADMIN3 as prefix for other tables. For Users to grant access to, specify ADMIN1, ADMIN2,
ADMIN3, and all user IDs for the end users. For SQL ID to use (in QMF), specify ADMIN1 (if QMF is used
with IBM Z Performance and Capacity Analytics in your installation).
• “Step 6: Setting up QMF” on page 24: No changes.
• “Step 8: Creating system tables” on page 25: The system tables should be created with the prefix
ADMIN1. Otherwise, there are no changes compared with the information in this step.
• “Step 9: Customizing JCL” on page 27: No changes.
• “Step 10: Testing the installation of the base” on page 28 and “Step 12: Installing components” on
page 31: If one of the secondary administrators, for example ADMIN2, wants to install the Sample
component or any other component, that administrator has to change the dialog parameters before the
installation to use these settings:
Prefix for system tables
ADMIN1
Procedure
1. Copy member DRLJDBIN in the DRLvrm.SDRLCNTL library to the &HLQ.LOCAL.CNTL library.
DRLJDBIN needs to be customized to refer to one of the following samples: DRLJDCVB, DRLJDCVC,
or DRLJDCVD depending on your version of Db2. The required sample must also be copied to the
&HLQ.LOCAL.CNTL library and customized for your environment if you are not using the default
database name and table prefixes. Refer to the instructions in the comments in DRLJDBIN job and
the selected DRLJDCVx member for more information about using these samples.
2. Modify the job card statement to run your job.
3. Customize the job for your site.
Follow the instructions in the job prolog to customize it for your site.
Note:
a. A person with Db2 SYSADM authority (or someone with the authority to create plans, storage
groups, and databases, and who has access to the Db2 catalog) must submit the job.
b. Do not delete steps from DRLJDBIN. Even if you have DBADM authorization, you must grant DRL
and DRLSYS authority for the IBM Z Performance and Capacity Analytics database.
4. Submit the job to:
• Bind the Db2 plan used by IBM Z Performance and Capacity Analytics.
The plan does not give privileges (it contains only dynamic SQL statements), so it is safe to grant
access to it to all users (PUBLIC).
If you change the name of the plan from the default (DRLPLAN) then you must update the
def_db2plan variable in DRLFPROF to specify the new plan name. You also need to modify any
sample jobs that execute DRLPLC, DRL1PRE or DRLPLOGM to specify the PLAN parameter with the
new plan name. Changing the plan name allows you to run versions of the IBM Z Performance and
Capacity Analytics environment with incompatible DBRMs in the same Db2 subsystem.
• Create the Db2 storage group and database used by IBM Z Performance and Capacity Analytics.
• Grant Db2 DBADM authority on the DRLDB database to the database administrators, DRL and
DRLSYS (a SQL sketch follows this list).
• Create views on the Db2 catalog for IBM Z Performance and Capacity Analytics dialog functions, for
users who do not have access to the Db2 catalog.
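For reference, the DBADM grant that the job performs is equivalent to SQL along these lines (a sketch;
the shipped DRLJDBIN job is authoritative):

 GRANT DBADM ON DATABASE DRLDB TO DRL, DRLSYS;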
Procedure
1. Make the load library (DRLvrm.SDRLLOAD), Db2 load library, QMF load library, and the GDDM load
library accessible by performing one of these tasks:
a) Allocate the SDRLLOAD library, Db2 load library (SDSNLOAD), QMF load library (SDSQLOAD), and
the GDDM load library (SADMMOD) to STEPLIB in the generic logon procedure.
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=QMF.SDSQLOAD
// DD DISP=SHR,DSN=GDDM.SADMMOD
// DD DISP=SHR,DSN=DSN.SDSNLOAD
//SYSPROC DD DISP=SHR,DSN=&HLQ.LOCAL.EXEC
// DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
//SYSEXEC DD DISP=SHR,DSN=&HLQ.LOCAL.EXEC
// DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
//ADMPC DD DISP=SHR,DSN=GDDM.SADMPCF
IBM Z Performance and Capacity Analytics dynamically allocates other libraries and data sets, such as
the GDDM symbols data set GDDM.SADMSYM, when a user starts a dialog. “Allocation overview” on
page 124 describes the libraries that IBM Z Performance and Capacity Analytics allocates and when it
allocates them.
4. If you have used any values other than default values for DRLJDBIN or for IBM Z Performance and
Capacity Analytics data set names, you must modify the userid.DRLFPROF file (allocated by copying
the DRLFPROF member of DRLvrm.SDRLCNTL).
DRLEINI1 sets dialog defaults for all users. IBM Z Performance and Capacity Analytics stores defaults
for each user in member DRLPROF in the library allocated to the ISPPROF ddname, which is usually
tsoprefix.ISPF.PROFILE. Edit DRLFPROF to include default values so users do not need to change
dialog parameter fields.
5. Allocate a sequential data set named userid.DRLFPROF (LRECL=80, BLKSIZE=32720, RECFM=FB) and
copy into it the DRLFPROF member of the SDRLCNTL library.
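For example, a TSO ALLOCATE command along these lines creates the data set (the space values are
illustrative):

 ALLOCATE DATASET('userid.DRLFPROF') NEW CATALOG SPACE(1,1) TRACKS -
   RECFM(F B) LRECL(80) BLKSIZE(32720)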
6. Locate and change any variable values that you have changed during installation.
Note:
• Change values for data set names that identify Db2 and, optionally, QMF and GDDM libraries.
• If you do not use QMF with IBM Z Performance and Capacity Analytics, change the value for qmfuse
to NO.
• If you do not use GDDM with IBM Z Performance and Capacity Analytics, change the value for
gddmuse to NO. (If QMF is used, GDDM must be used.)
“Modifying the DRLFPROF data set” on page 113 shows the DRLFPROF file containing the parameters
to be modified.
“Overview of the Dialog Parameters window” on page 114 shows the administration dialog window
and the default initialization values that DRLFPROF sets.
“Dialog parameters - variables and fields” on page 115 describes parameters and shows the
interrelationship of DRLEINI1 and the Dialog Parameters.
7. You can add IBM Z Performance and Capacity Analytics to an ISPF menu by using this ISPF statement:
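A hedged sketch of such a selection-panel entry follows (the option letter, the NEWAPPL value, and the
surrounding TRANS entries are illustrative; adapt them to your menu):

 &ZSEL = TRANS( TRUNC(&ZCMD,'.')
           ...
           P,'CMD(%DRLEINIT) NEWAPPL(DRL)'
           ...)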
Alternatively, any authorized user can issue the command TSO %DRLEINIT from the command line of an
ISPF window to access a dialog.
The optional DEBUG parameter sets on a REXX trace for the initialization execs. This helps you solve
problems with data set and library allocation.
The optional RESET parameter sets the ISPF profile variables to their default value. It has the same
effect as deleting the DRLPROF member from the local (ISPPROF) profile library.
The optional DBRES parameter sets the ISPF profile variables for IBM Z Performance and Capacity
Analytics to their default value (like Db2 Subsystem, Db2 Database, Db2 Storage Group). Only these
values are deleted from the DRLPROF member of the local (ISPPROF) profile library. All the other
values set and already stored in the profile are preserved.
The optional REPORTS parameter takes you directly to the reporting dialog. You can abbreviate this to
R.
The optional ADMINISTRATION parameter takes you directly to the administration dialog. You can
abbreviate this to A.
Procedure
1. From the command line of an ISPF/PDF window, type TSO %DRLEINIT to display the IBM Z
Performance and Capacity Analytics Primary Menu (Figure 10 on page 23).
Reporting dialog users can access the Dialog Parameters window from the Options pull-down of the
Primary Menu or the Reports window.
System
1. Dialog parameters
2. System tables
3. Import QMF initialization query
Note: If your installation does not use QMF with IBM Z Performance and Capacity Analytics, the
contents of this window is slightly different from what you see here. Both versions of the Dialog
Parameters window are shown in “Overview of the Dialog Parameters window” on page 114.
Dialog Parameters
More: +
DB2 subsystem name . . . . . DEC1
Database name . . . . . . . . DRLDBYY_
Storage group default . . . . SYSDEFLT
Prefix for system tables . . DRLSYSYY
Prefix for all other tables DRLYY___
Show panel IDs (yes/no) . . . YES
Note: When you see a plus sign indicator (More: +) in the upper-right corner of an IBM Z Performance
and Capacity Analytics window, press F8 to scroll down.
If it shows a minus sign indicator (More: -), press F7 to scroll up. For more information about using
IBM Z Performance and Capacity Analytics dialog windows, refer to the description in the Guide to
Reporting.
You must scroll through the window to display all its fields. “Overview of the Dialog Parameters
window” on page 114 shows the entire Dialog Parameters window, both the version shown if QMF is
used with IBM Z Performance and Capacity Analytics and the version shown if QMF is not used with it.
“Dialog parameters - variables and fields” on page 115 has a description of the fields in the window.
5. Make modifications and press Enter.
Changes for administration dialog users and for end users are the same. You must identify the correct
names of any data sets (including prefixes and suffixes) that you changed from default values during
installation.
IBM Z Performance and Capacity Analytics saves the changes and returns to the System window.
Although some changes become effective immediately, all changes become effective in your next
session when IBM Z Performance and Capacity Analytics can allocate any new data sets you may have
selected.
System
1. Dialog parameters
2. System tables
3. Import QMF initialization query
IBM Z Performance and Capacity Analytics imports the query into QMF and then returns to the System
window.
Procedure
1. Consult the guide for the component you are installing.
Many components have a job that must be run to set up storage groups, partition ranges, or keys.
Follow the instructions for that component before proceeding. If the component does not support
generated table spaces and indexes, you can skip this step.
2. When you use GENERATE TABLESPACE, the type of table space created is determined by the
TABLESPACE_TYPE field in the GENERATE_PROFILES system table.
3. If you decide to use range-partitioned table spaces (TABLESPACE_TYPE=RANGE), you will need to
adjust the range values in the GENERATE_KEYS system table.
What to do next
The supplied values for these tables are in the member DRLTKEYS in the SDRLDEFS data set, and the
tables are created and loaded during the creation of the IBM Z Performance and Capacity Analytics
System Tables. These values may be reviewed prior to creating the IBM Z Performance and Capacity
Analytics system tables. If changes are required, you may make a copy in your userid.LOCAL.DEFS data
set and make the required changes prior to System Table creation.
Alternatively, once loaded into the System Tables these values may be changed by various methods:
• Using the IBM Z Performance and Capacity Analytics table edit facility.
• Using SQL UPDATE statements. For example, to change the TABLESPACE_TYPE from the supplied value
of RANGE to GROWTH for IMS the statement would look like the following example:
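The following is a reconstructed sketch (the WHERE-clause column name, shown here as PROFILE, is
illustrative; check the actual GENERATE_PROFILES column names, and use your own system-table prefix
in place of DRLSYS):

 UPDATE DRLSYS.GENERATE_PROFILES
    SET TABLESPACE_TYPE = 'GROWTH'
  WHERE PROFILE LIKE 'IMS%';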
Procedure
1. From the System window, select 2, System tables.
The System Tables window is displayed. (Figure 12 on page 26).
Administration
Select System
System Tables
Prefix : DRLSYS
Status : Not created
Creator :
Database name : DRLDB
Command ===>
F1=Help F2=Split F3=Exit F9=Swap F10=Actions F12=Cancel
For information about specific Db2 messages, refer to the Messages and Problem Determination.
System messages should be error free, with a Db2 return code of zero. After creating the system
tables, IBM Z Performance and Capacity Analytics returns to the System Tables window where you
must press F12 to return to the System window.
During the process of creating system tables, these administrative reports are also created:
• PRA001 - INDEXSPACE cross-reference.
• PRA002 - ACTUAL TABLESPACE allocation.
• PRA003 - TABLE PURGE condition.
• PRA004 - LIST COLUMNS for a requested table with comments.
• PRA005 - LIST ALL TABLES with comments.
• PRA006 - LIST USER MODIFIED objects.
The sample job DRLJCSTB passes a request to program DRLEAPST to create system tables. You can
update or delete system tables by passing a different request to DRLEAPST, as described in the
comments in DRLJCSTB.
The TSO/ISPF batch job step must include:
• DRLFPROF DD referring to your DRLFPROF data set
• ISPPROF DD referring to a PDS with RECFM=F and LRECL=80. If you have made changes to the IBM Z
Performance and Capacity Analytics dialog parameters and have not also made those changes in your
DRLFPROF data set, then the ISPPROF DD should refer to your ISPF profile data set and you should not
specify the RESET parameter to DRLEINIT.
• ISPPLIB, ISPMLIB, ISPSLIB, and ISPTLIB DDs referring to your IBM Z Performance and Capacity
Analytics and ISPF panel, message, skeleton, and table data sets.
• ISPLOG DD referring to a data set with RECFM=VA and LRECL=125.
• SYSTSIN DD referring to instream data, or a data set, containing a command to invoke DRLEINIT, for
example:
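A minimal sketch (batch ISPF is started with ISPSTART; DRLEINIT parameters are optional):

 //SYSTSIN DD *
   ISPSTART CMD(%DRLEINIT)
 /*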
• DRLBIN (batch input) DD referring to instream data or a data set containing a command to invoke
DRLEAPST with a request to perform the required function, for example:
DRLEAPST CREATE
DRLJREOR
A sample job for reorganizing the IBM Z Performance and Capacity Analytics database with the Db2
REORG utility. See “Purge utility” on page 152 for more information.
DRLJRUNS
A sample job for updating statistics on IBM Z Performance and Capacity Analytics table spaces with
the Db2 RUNSTATS utility. See “Monitoring the size of the IBM Z Performance and Capacity Analytics
database” on page 157 for more information.
DRLJTBSR
A sample job for producing a detailed report about the space required for all, or a subset of, a selected
component’s tables. See “Understanding table spaces” on page 146 for more information.
If you already have jobs for maintaining Db2, for example, COPY, REORG or RUNSTATS, you can continue
to use them for this purpose, instead of using the IBM Z Performance and Capacity Analytics jobs.
Procedure
1. Install the Sample component using the information in “Installing a component” on page 169.
Although editing lookup tables is a usual part of online component installation, you need not edit the
sample lookup table to successfully complete this test. For a description of what is provided with the
Sample component, see “Sample component” on page 283.
2. After you install the Sample component, select 3, Logs, from the Administration window and press
Enter.
The Logs window is displayed (Figure 13 on page 28).
/ Logs Description
/ SAMPLE Sample log definition
******************************* BOTTOM OF DATA ********************************
Collect
Reprocess . . . . . . 2 1. Yes
2. No
Commit after . . . . . 1 1. Buffer full
2. End of file
3. Specify number of records
Number of records . . ________
Buffer size . . . . . . 10
Extention . . . . . . . 2 1. K
2. M
Condition . . . . . . ________________________________________ >
F1=Help F2=Split F4=Online F5=Include F6=Exclude
F9=Swap F10=Show fld F11=Save def F12=Cancel
Reports Row 1 to 9 of 9
/ Report ID
ACTUAL TABLESPACE SPACE allocation PRA002
INDEXSPACE cross-reference PRA001
List all tables with comments PRA005
List columns for a requested table with comments PRA004
List User Modified Objects PRA006
/ Sample Report 1 SAMPLE01
Sample Report 2 SAMPLE02
Sample Report 3 SAMPLE03
TABLE PURGE Condition PRA003
******************************* Bottom of data ******************************
Command ===>
F1=Help F2=Split F3=Exit F4=Groups F5=Search F6=Listsrch
F7=Bkwd F8=Fwd F9=Swap F10=Actions F11=Showtype F12=Cancel
Command ===>
F1=Help F2=Split F4=Prompt F5=Table F6=Chart F7=Bkwd
F8=Fwd F9=Swap F10=Showfld F11=Hdrval F12=Cancel
What to do next
If you are unsure about the meaning of a field, press F1 to get help. For more information, refer to the
CREATE INDEX and CREATE TABLESPACE command descriptions in the Db2 for z/OS: SQL Reference.
IBM Z Performance and Capacity Analytics saves the changed definitions in your local definitions library.
When you save a changed definition, it tells you where it is saving it, and prompts you for a confirmation
before overwriting a member with the same name.
Dialog parameter             Value
Db2 subsystem                DB2T
Database                     BILLDB
System table prefix          BILL
Other table prefix           BILL
Users to grant access to     BILL
Local data sets              BILL.DEFS... and so on
Other users cannot use this system because BILL is not a Db2 secondary authorization ID nor a RACF
group ID. If you want to share this new IBM Z Performance and Capacity Analytics system, establish a
valid RACF group ID and use the group ID as the prefix instead of BILL.
SMF Configuration
To ensure that SMF records reach the SMF Extractor, review your SMFPRMxx member to confirm that the
necessary exits are driven.
• If you are running z/OS 2.3 or later, you must have the IEFU86 exit active.
• If you are running z/OS 2.2 or earlier, you must have the IEFU83, IEFU84, and IEFU85 exits active.
Under z/OS 2.3 you can run either the IEFU83/4/5 exits or the IEFU86 exit; this is to allow for migration.
It is strongly recommended that you move to using the IEFU86 exit as soon as possible. A hedged sketch
of the relevant SMFPRMxx entries follows.
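The entries below are illustrative only (the TYPE list and other operands are site-specific; on z/OS 2.3 or
later only IEFU86 is required):

 SYS(TYPE(0:255),EXITS(IEFU86,IEFU83,IEFU84,IEFU85))
 SUBSYS(STC,EXITS(IEFU86,IEFU83,IEFU84,IEFU85))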
The following considerations apply when a system can run in more than one LPAR under a different
SMF ID:
– This may reduce the value of some of the reports, because there will be ‘holes’ in the data for the
periods when the system was running with a different SMF ID.
– If there is a significant difference in the capacities of the two LPARs, such separation may be desirable
so that the reports reflect the actual execution of the system in each LPAR it can run in.
While it is possible to change the SMF ID in the data as it is being ingested, it is better not to have to do it.
SMF Extractor
The initialization message has changed. Instead of:
VSX0160I VSXSMF 08:54:00.087 1st SMF record received; data collection started
It is now:
CKKS160I CKKXSMF 04:30:52.903 1st SMF record received; data collection started
Similarly, the queue status message has changed. Instead of:
VSX0141I VSXSTA 19:14:28.259 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 193
It is now:
CKKS141I CKKXWTR 04:36:03.255 ** Queue stats for PC to SMF: 4000 / 3950 Curr: 0 Max: 5
NQ=000000B4x DQ=000000B4x
***********************************************************************
* NAME: DRLJSMFO *
* *
* FUNCTION: SMF EXTRACTOR CONTROL FILE. *
* *
* COPY THIS MEMBER TO A DATA SET AS SMFPxxxx WHERE xxxx IS THE *
* 4 CHARACTER SMF ID OF THE SYSTEM WHERE THE MEMBER WILL BE USED. *
* *
* THESE PARAMETERS MUST BE CUSTOMIZED TO MATCH YOUR INSTALLATION *
* REQUIREMENTS PRIOR TO THE FIRST STARTUP OF THE SMF EXTRACTOR. *
* *
* SET OUTLGR TO THE NAME OF THE OUTPUT LOG STREAM *
* SET SMFREC TO A COMMA SEPARATED LIST OF SMF RECORDS TO BE RECORDED *
*=====================================================================*
OUTLGR=log.stream.name
SMFREC=14,15,30,42,60,61,64,65,70,71,72,73,74,85,90,94,99,100,101,113
Figure 20. DRLJSMFO: Parameters for the SMF Extractor control file
For example:
OUTLGR=&SYSNAME.DRL.LOGSTRM
SMFREC=14,15,30,42,60,61,64,65,70,71,72,73,74,85,90,94,99,100,101,113
//*********************************************************************
//* NAME: DRLJSMFX *
//* *
//* FUNCTION: SMF EXTRACTOR STARTED TASK PROC *
//* *
//* The Dataset pointed to by the SMFXPARM DD contains the SMF *
//* extractor parameters for each sysem in the sysplex. *
//* *
//* The parameter for a given system should be in member SMFPxxxx *
//* where xxxx is the system's SMF ID. *
//* *
//*===================================================================*
//SMFEXTP PROC SMFID=SYSA
//CKKSMAI EXEC PGM=CKKSMAI,REGION=0M,
// PARM='TRACE(N),MSGLVL(9),SDUMP(Y),MAXSD(2),MAXQD(4000)'
//*
//STEPLIB DD DISP=SHR,DSN=DRL.SDRLEXTR
//SYSPRINT DD SYSOUT=*
//SMFXPARM DD DISP=SHR,DSN=DRL.USERCNTL(SMFP&SMFID.)
//SYSUDUMP DD SYSOUT=*
//SMFXPRNT DD SYSOUT=*
//*
Figure 21. DRLJSMFX: JCL for the SMF Extractor startup procedure
Figure 22. DRLJCCOL: JCL for the Continuous Collector started task
Stand-alone configuration
When the only system you want to gather SMF data from is also the system on which you run your
IBM Z Performance and Capacity Analytics Db2 database, you can set up a stand-alone configuration.
This is the simplest configuration.
Figure 28. Continuous Collector: hub and spoke configuration option A (separate log streams, partitioned
Db2 database)
Option B allows the SMF Extractor to freely write to the local log stream and serializes the access to the
inbound log stream as the data is copied over. Both the Receiver DataMover and the Copy DataMover will
need to specify the same enqueue on their log stream output stages.
Figure 29. Continuous Collector: hub and spoke configuration option B (combined log streams)
Figure 30. Continuous Collector: multiple Spoke systems and one Hub DataMover
Figure 31. Continuous Collector: multiple Spoke systems and multiple Hub DataMovers
Communications prerequisites
This section lists the communication prerequisites (log streams and TCP/IP ports) for each part of the
automated data gathering and continuous collection process.
SMF Extractor
• Log stream to send data to either the DataMover or Continuous Collector.
Continuous Collector
• Log stream to receive data from either the DataMover or SMF Extractor.
• The standard recommendation is to have a single Continuous Collector per spoke on the hub. This
could vary based on SMF traffic volume.
DataMover
• Log stream to receive data from the SMF Extractor (on the spokes) or send data to the Continuous
Collector (on the hub).
– Multiple log streams may be needed on the hub, one for each Continuous Collector.
• TCP/IP port over which DataMovers will communicate with each other (spoke and hub).
– Assume one port per spoke.
– Spoke systems need to be able to write to the port, and hub systems need to be able to read from
the port.
Note on log streams:
• All log streams used by DataMovers can be either DASD-only or coupling facility-based.
• When using DataMovers, one log stream is needed on each spoke and one log stream is needed for each
Continuous Collector on the hub. For example, if there are three spokes and one hub, with a Continuous
Collector on the hub for each spoke plus one on the hub for the local SMF data, there is a total of seven
log streams needed: one on each spoke and one for each Continuous Collector on the hub.
• Any log stream used by more than one system in a sysplex (such as when using sysplex log streams)
must be defined on a coupling facility and cannot be DASD-only.
• When defining log streams:
– Ensure the RETPD is greater than 0.
– AUTODELETE should have a value of YES to keep the log stream cleared of old data.
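For illustration, a DASD-only log stream definition that honors these rules might look like the following
IXCMIAPU job. The log stream name, sizes, and high-level qualifier are placeholders; use the sample
configuration members referenced in the procedures below for the supported JCL.
//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  DEFINE LOGSTREAM NAME(DRL.SPOKE1.LOGSTRM)
         DASDONLY(YES)
         RETPD(2)
         AUTODELETE(YES)
         STG_SIZE(8192)
         LS_SIZE(4096)
         HLQ(IXGLOGR)
/*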
Procedure
1. On a spoke system, define a log stream for the SMF Extractor. The log stream name must be unique
per system, so include the SYSID in the name.
a) Run the sample job “UPDPOL - Update CFRM policy” on page 36 with only the DATA TYPE(CFRM)
REPORT(YES) statement.
b) From the output of the UPDPOL job, extract the CF and STRUCTURE statements for all active
coupling facilities and structures.
For example:
CF statement
STRUCTURE statement
c) Activate the updated CFRM policy with the command:
SETXCF START,POLICY,TYPE=CFRM,POLNAME=CSWPOL
What to do next
This task should not be started until you are ready to collect data using the Continuous Collector.
The task can be tested by starting the task using the z/OS start command and ensuring it collects data
based on the messages in the SMFXLOG output. For example, a successful collection message is:
VSX0160I VSXSMF 08:54:00.087 1st SMF record received; data collection started
Procedure
• The SMF Extractor must run at a high Dispatching Priority (DP).
It must be as close to the SMF DP as possible. The recommended DP is SYSSTC (x'FE') since SMF
usually runs at SYSTEM (x'FF') DP.
• To check the SMF Extractor queue depth, issue the command:
F jobname,STATUS
where jobname is the name of the task running the SMF Extractor.
The output from this command has a line that reads as follows:
VSX0141I VSXSTA 19:14:28.259 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 193
If the Max value is close to or exceeds the control values, the following needs to be checked:
– The MAXQD value in the SMF Extractor JCL can be increased up to a maximum of 9999.
– Verify that the Dispatching Priority of the SMF Extractor is at a similar value to the tasks that are
generating SMF records, especially the SMF task.
• The SMF Extractor region size must be at least 256M. Running with a smaller region could result in an
SC78 abend.
Procedure
1. Define a log stream to feed the Continuous Collector.
• The name of the log stream must be unique for each Continuous Collector.
• You can define the log stream on a coupling facility (CF) if one exists in the sysplex; however, a
DASD-only log stream is recommended. For the JCL to do this, refer to these sample configuration members:
– “Defining a log stream on a coupling facility” on page 35
– “Defining a DASD-only log stream” on page 35
2. If the log stream is on a coupling facility, perform the following steps. If DASD-only, skip this step.
a) Run the sample job “UPDPOL - Update CFRM policy” on page 36 with only the DATA TYPE(CFRM)
REPORT(YES) statement.
b) From the output of the above job, extract the CF and STRUCTURE statements for all active coupling
facilities and structures.
Examples of these statements are:
CF statement
STRUCTURE statement
c) Activate the updated CFRM policy with the command:
SETXCF START,POLICY,TYPE=CFRM,POLNAME=CSWPOL
SYSPREFIX=XRLSYS
&PREFIX=XRL
&DATABASE=XRLDB
f) Change the log stream name to the name defined in either the CRLOGRCF or CRLOGRDS sample,
depending on which method (CF or DASD) was used.
g) Save the job with the name the Continuous Collector is going to run as.
4. Ensure the job has DBADM access to the Db2 instance.
5. Test starting the Continuous Collector as a started task to make sure the JCL is correct.
Verify the setup as follows:
• Ensure the Continuous Collector was able to open the Db2 database and the log stream.
• If not, correct the problem, for example, the log stream name and permissions. Then restart the task.
• Without data in the log stream, the following message is displayed and the job continues, waiting for
data to arrive:
However, if the Continuous Collector was previously run to test it, there may be some data in the log
stream and this message will not be displayed.
• To enable the Continuous Collector to run in zIIP mode, SDRLLOAD must be added to the APF
authorization list.
Procedure
1. On the hub system, in OMVS, locate the source directory created when IBM Z Performance and
Capacity Analytics V3.1.0 was installed.
• It should be in /usr/lpp/IBM/IZPCA/v3r1m0/IBM/. If not in that location, issue this command
to get the directory:
df | grep IZPCA
The output from the command will be similar to this example, noting that the high-level qualifier for
the data sets (TIVDS.V3R1M0) will be different for your installation:
• If the source directory does not exist, it needs to be mounted. The TSO command to mount the
filesystem read-only is:
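As a sketch only, assuming the product file system is a zFS data set named TIVDS.V3R1M0.ZFS
(substitute your installation's high-level qualifier), a read-only TSO MOUNT takes this general form:
MOUNT FILESYSTEM('TIVDS.V3R1M0.ZFS') TYPE(ZFS) MODE(READ) +
  MOUNTPOINT('/usr/lpp/IBM/IZPCA/v3r1m0')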
2. On all systems, create a directory for the DataMover based on your installation standards.
For this example, it will be installed in /var/IZPCA/DataMover. Ensure that /var/IZPCA (or your
alternative directory location) already exists or is created before extracting the Data Mover.
3. On the hub system, install the DataMover by following these steps:
a) Extract the DataMover using the command:
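A minimal sketch of the extraction, assuming the archive unpacks its directories relative to the current
directory:
cd /var/IZPCA
tar -xf /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM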
where
• /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM is the source directory/file for IBM Z
Performance and Capacity Analytics
• /var/IZPCA is where you want the contents of the tar file to reside and which is your destination
directory for the DataMover
This will create the following directories:
/var/IZPCA/DataMover
/var/IZPCA/DataMover/config
/var/IZPCA/DataMover/java
/var/IZPCA/DataMover/mappings
Note: The DataMover must be installed on all hub and spoke systems.
• For the spoke systems that do not have IBM Z Performance and Capacity Analytics installed, copy
the /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM file to that system.
• In addition, if multiple DataMovers are being installed on a single system (such as a hub), see
“DataMover tips” on page 51 for more information.
b) Edit the extracted ../IZPCA/DataMover/DataMover.sh script and update the rundir and
JAVA_HOME values in the following lines:
#
# Runtime directory. Other paths are relative to it.
#
# Probably better to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/var/IZPCA/DataMover"
…
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Need when running as a batch job/started task
#
export JAVA_HOME=/usr/lpp/java/J8.0_64 <- Path to the Java run libraries
…
c) Edit the extracted ../IZPCA/DataMover/config/hub.properties file and update the following lines:
…
routes = 1
route.1.name = Hub <- Ensure this value is Hub
…
input.1.type = TCPIP
input.1.port = 54020 <- Ensure this is a correct, available TCP/IP port number
input.1.ssl = no <- This only needs to be set if SSL is used
…
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- The IBM Z Performance and Capacity Analytics log stream that the Continuous Collector reads; this must match the log stream name used by the Continuous Collector
See “Sample configuration members” on page 34 for the complete member listing.
4. On the spoke systems, install the DataMover by following these steps on each spoke system:
a) Create the /usr/lpp/IBM/ directory if it doesn’t already exist.
b) Send the file /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM to the /usr/lpp/IBM/ directory on
each spoke using the standard data transfer protocol (SFTP, ...) of your installation.
c) Change to the /usr/lpp/IBM/ directory and extract the DataMover using the command:
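A minimal sketch, assuming the same /var/IZPCA destination that was used on the hub (the tar file
creates the DataMover directories relative to the current directory):
cd /var/IZPCA
tar -xf /usr/lpp/IBM/DRLPJDM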
This creates the same subdirectories as were created on the hub in “3.a” on page 50.
d) Edit the extracted ../IZPCA/DataMover/DataMover.sh script and update the rundir and
JAVA_HOME values as for the hub in “3.b” on page 50.
e) Edit the extracted ../IZPCA/DataMover/config/spoke.properties file and update the
following lines:
…
routes = 1
route.1.name = Spoke <- Ensure this value is Spoke
…
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS04 <- SMF Extractor log stream name
…
outputs.1 = 1
output.1.1.type = TCPIP
output.1.1.host = <host system name> <- Name or IP of Hub System
output.1.1.port = 54020 <- TCP/IP port of Hub DataMover
See “Sample configuration members” on page 34 for the complete member listing.
f) Create the procedure to run the spoke DataMover.
What to do next
Complete the implementation by following the steps in “Step 4: Activating the Continuous Collector” on
page 56.
DataMover tips
When preparing to install the DataMover, consider the following instructions.
Procedure
• Ensure the -Xmx16G parameter is in the Java statement in the DataMover.sh file to specify the
maximum memory size for Java.
See the sample in “DataMover.sh - Run the DataMover” on page 39.
• When defining log streams ensure the RETPD is greater than 0.
• If multiple DataMovers, including DataMovers and Publication DataMovers, are to run on a single
system such as a hub system, configure each DataMover instance with its own working directory as
follows:
– For each DataMover, copy the files from the DataMover directory into a separate directory, such
as ../DataMover/jobname.
– Customize that directory for the DataMover instance.
• The status of the DataMover internal queues is key to ensuring the DataMover is handling the traffic
load sufficiently. To check the current DataMover statistics, issue the command:
F jobname,APPL=DISPLAY
where jobname is the task name of the USS job that started when the DataMover was started.
In this example of the output, the highlighted lines are the ones that indicate the queue depth, if any.
F PRLJDMP1,APPL=DISPLAY
DRLJ0078I ZOS Console: display
DRLJ0075I Route: 1 name: Hub
DRLJ0075I Input type: TCPIP
DRLJ0075I Port: 54020
DRLJ0075I Queued for output: 0
DRLJ0075I No active connections
DRLJ0075I 0 Processes(s):
DRLJ0075I No processes defined.
DRLJ0075I 1 Output(s):
DRLJ0075I Output: 1
DRLJ0075I Output Type: LOGSTREAM
DRLJ0075I log stream: DRL.LOGSTRM
DRLJ0075I Queued for input: 0
DRLJ0075I Received DRL.LOGSTRM packets
• To check the overall DataMover status, issue the command:
F jobname,APPL=STATUS
where jobname is the task name of the USS job that started when the DataMover was started.
An example of the output from this command is:
F PRLJDMP1,APPL=STATUS
DRLJ0078I ZOS Console: status
DRLJ0093I Status for Route 1 is Running
Spoke configuration
For all the hub and spoke configuration options, the spoke configuration is the same.
Spoke configuration
routes = 1
route.1.name = Spoke
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS04 <- The log stream the SMF Extractor writes to
input.1.block = 30
input.1.wipe = YES
input.1.strip_header = NO
input.1.check_LGRH = YES
input.1.sourcename = LOGSTREAM
input.1.sourcetype = RAWSMF
#
outputs.1 = 1
output.1.1.type = TCPIP
output.1.1.host = <name or IP of Hub> <- Name or IP of hub
output.1.1.port = 54020 <- The port the hub DataMover is listening on for this spoke
output.1.1.use_ssl = NO
#
Dedicated hubs
The dedicated hubs configuration specifies there is a one-to-one relationship between DataMovers on the
spoke and the hub. Each spoke uses a unique port number, and each hub DataMover listens on one port.
The advantage of this configuration is that it provides the most flexibility and the best throughput capacity.
The disadvantage of this configuration is that it requires more system resources than any other option.
The contents of the hub configuration (hub.properties file) is shown in the following example.
Dedicated hubs configuration file
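As a sketch assembled from the keywords documented in this chapter (the port and log stream names
are placeholders), each dedicated hub DataMover instance runs one route of this shape, with one such
instance per spoke:
routes = 1
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54021
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.SPOKE1.LOGSTRM
output.1.1.strip_rdw = NO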
Multi-route hubs
The multi-route configuration defines a many-to-one relationship between DataMovers on the spoke and
hub systems. All spokes use a unique port number and the hub listens to multiple ports simultaneously.
The advantage of this configuration is that it provides good flexibility and capacity for throughput. The
disadvantage of this configuration is that the system resources needed are high although less than
the dedicated approach. The contents of the hub configuration (hub.properties file) is shown in the
following figure. This example is for three spokes.
Multi-route hubs configuration file
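As a sketch for three spokes, assembled from the documented keywords (ports and log stream names
are placeholders), a single hub DataMover listens on three ports and feeds three log streams:
routes = 3
route.1.name = Spoke1
input.1.type = TCPIP
input.1.port = 54021
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.SPOKE1.LOGSTRM
route.2.name = Spoke2
input.2.type = TCPIP
input.2.port = 54022
outputs.2 = 1
output.2.1.type = LOGSTREAM
output.2.1.logstream = DRL.SPOKE2.LOGSTRM
route.3.name = Spoke3
input.3.type = TCPIP
input.3.port = 54023
outputs.3 = 1
output.3.1.type = LOGSTREAM
output.3.1.logstream = DRL.SPOKE3.LOGSTRM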
Shared hubs
The shared hubs configuration specifies that there is a many-to-one relationship between Data Movers on
the spokes and the hub. Each spoke communicating with a specific hub DataMover uses the same port
number and each hub DataMover listens on one port. Multiple hub DataMovers are run on each hub LPAR.
The advantage of this configuration is that the overhead is much less than the dedicated hub approach.
The disadvantage of this configuration is that the flexibility is not as good as the dedicated and multi-
route approaches and the throughput capacity is somewhat less. The contents of the hub configuration
(hub.properties file) is shown in the following example. This is the same as the dedicated approach
but with more than one hub DataMover, each with a unique port, running.
Shared hubs configuration file
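As a sketch (placeholders again), each hub DataMover instance runs the same single-route configuration
as the dedicated approach, differing only in its port and target log stream. For example, a second
instance might use:
routes = 1
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54021
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM2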
Single hub
The single hub configuration specifies that there is a many-to-one relationship between Data Movers on
the spoke and the hub. Each spoke uses the same port number and there is only one hub Data Mover
listening on the port. The advantage of this configuration is it uses significantly less resources than the
other approaches. The disadvantage of this configuration is significantly less flexibility and capacity for
throughput. The contents of the hub configuration (hub.properties file) is shown in the following
example.
Single hub configuration file
routes = 1
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54020 <- This must match the port in all the spoke.properties
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.1.1.strip_rdw = NO
If this line is not present, or if the value is set to zero, no statistics output is generated. This interval is
based on clock time, such that messages will be generated on the next clock interval after the TCPIP
Input/Output stage has been created, and thereafter at the specified interval. For example: If the current
time is 02:30:12, and the interval is set to 15 seconds, statistics will be generated at time 02:30:15 (3
seconds later), then 02:30:30, 02:30:45, and so on, at 15 second intervals.
Valid values for x are: 0, 15, 30, 60
Valid values for y are: 0, 1, 2, 5, 10, 15, 20, 30, 60
The text seconds and minutes in the configuration files are not case sensitive. Valid examples include:
15 minutes, 15 Minutes, 15 MINUTES, 15 MiNuTeS
Optional Encryption
By default, DataMover communications use unencrypted TCP/IP links. You can configure SSL to encrypt
them, as follows.
1. On each Spoke system, create a keystore for each DataMover and produce a .cert file that holds a
certificate for it. On each Spoke system, in the DataMover directory, issue the following commands:
• DataMover.sh SSL GEN
• DataMover.sh SSL EXPORT
2. The certificate for the hub must be exported to each Spoke system.
3. The certificates for each Spoke must be transported to the Hub.
4. Import each certificate into the system's trust store. Ensure that each certificate is imported with a
unique name.
5. When the certificates arrive, issue the following command: DataMover.sh SSL IMPORT name.cert
6. Modify the hub and spoke configurations to change the use_ssl=no setting on each TCPIP stage to
use_ssl=yes. The SSL implementation that is used is that of the underlying Java installation.
Procedure
1. Determine an optimal start time.
If you have an existing system for processing and archiving SMF data that is being replaced by
IBM Z Performance and Capacity Analytics, you will need to manage the cut-over to minimize data
duplication. When the SMF Extractor is started, data in the current SMF MANx data sets will be
duplicated by the IBM Z Performance and Capacity Analytics collection. Care must be taken to ensure
data duplication is minimized by ensuring a timely switch of the SMF data sets to coincide with the
startup of the IBM Z Performance and Capacity Analytics SMF Extractor.
2. Begin with starting the Continuous Collector on the hub or stand-alone system by following these
steps:
a) Issue the I SMF command to switch the SMF data set. Allow the SMF dump to occur while doing
the next step.
After processing this dump, the SMF dump process will serve only as a backup source of data for
IBM Z Performance and Capacity Analytics.
b) Start the SMF Extractor as a Started Task. Check the log to ensure data is being collected.
c) Run the IBM Z Performance and Capacity Analytics Batch Collector using the data starting with the
next set of data after the last IBM Z Performance and Capacity Analytics batch collection run and
ending with the SMF dump produced in “2.a” on page 57.
d) After the Batch Collector completes updating, start the Continuous Collector.
This was set up in “Step 2: Installing the Continuous Collector” on page 47.
e) Verify that the data is being collected based on messages in the DRLLOG SYSOUT logs.
3. Pause the implementation at this point for a reasonable period of time (24-48 hours) to ensure that
the Continuous Collector processes data without any issues on the hub or stand-alone system.
4. When ready, cut over the spoke systems one at a time by following these steps:
a) Issue the I SMF command to switch the SMF data set. Allow the SMF dump to occur while doing
the next step.
After processing this dump, the SMF dump process will serve only as a backup source of data for
IBM Z Performance and Capacity Analytics.
b) Clear the log stream.
To do this:
i) Change to the ../DataMover/config directory.
ii) In clear.properties, change the value of the log stream to the name of the log stream to be
cleared.
input.1.logstream = logstreamname
iii) From the OMVS command prompt, change to the ../DataMover/ directory and run the
command DataMover.sh clear
c) Start the SMF Extractor on the spoke system. Check that data is being collected and put into the log
stream.
d) Following standard procedures, process the data from the SMF dump produced in “4.a” on page 57.
e) After dump processing is complete, ensure the DataMover is active on the hub. If it is not currently
running, or this is the first spoke system to be implemented, start the DataMover on the hub.
f) Start the DataMover on the spoke system.
g) Verify that the data is being collected and sent to the hub by checking the messages in the logs.
h) Repeat these steps for each spoke system.
Configuration parameters
The following table lists the SMF Extractor parameters that are relevant for IBM Z Performance and
Capacity Analytics users:
Default Configuration
If you have followed the instructions in Chapter 2, “Installing IBM Z Performance and Capacity
Analytics,” on page 13, you will have configured the SMF Extractor using only the SMFREC= and
OUTLGR= parameters, with OUTLGR specifying the name of the log stream to write the records to and
SMFREC specifying which SMF records to capture.
*
* Output log stream for IZPCA COLLECTOR
*
OUTLGR=MY.IZPCA.LOG.STREAM
*
* IZPCA wants SMF 30 and SMF 70 thru 79
SMFREC=30,70:79
To get the SMF Extractor to write output to multiple log streams, you first need to define the log
streams (as DASD log streams) and add an OUT*LGR statement for each of them. The SMF Extractor
supports writing to up to 5 additional log streams, numbered 1 through 5. For example:
*
* Output log stream for IZPCA COLLECTOR
*
OUTLGR=MY.IZPCA.LOG.STREAM
*
* Output log stream for IZPCA Data Splitter redistribution
*
OUT1LGR=MY.RAW.LOG.STREAM
To capture additional SMF records, you need to ensure that the SMFREC statements will capture the
SMF records you want to write to the additional log streams. For example:
*
* IZPCA wants SMF 30 and SMF 70 thru 79
*
SMFREC=30,70:79
*
* The Data Splitter also needs SMF 80 thru 83
*
SMFREC=80:83
Remember that all SMFREC statements are cumulative (unless one of them specifies an *).
If you were to run the SMF Extractor at this point, you would get all of the records written to both
of the log streams. To be selective about which records get written to which log stream, you must use
OUT*LREC statements. OUT*LREC statements are like SMFREC statements except that they serve to filter
the SMF records written to each output log stream.
*
* Write only the SMF records IZPCA wants to its log streams
*
OUTLREC=30,70:79
*
* Data Splitter records
*
* 80 thru 83 for our security app
*
OUT1LREC=80:83
The OUT1LREC is the OUTLREC parameter for the first additional output log stream (numbered 1).
OUT*LREC statements for the same output log stream are cumulative like the SMFREC statements.
Activating changes
If you need to change the SMF details being collected, you are advised to:
1. Update the SMF Extractor configuration.
2. Wait until the system is quiet or in a maintenance window.
3. Start the new SMF Extractor job with the same name as the old one. It should simply queue up behind
it.
4. Stop the old SMF Extractor instance.
This will minimize the number of SMF records that are missed during the changeover.
Data transmitted over this path is typically in JSON or CSV formats, depending on the needs of the
ingesting program.
Data transmitted over this path is converted to its destination format as required – JSON, CSV and SQL
are available. It is even possible to configure a DataImporter to fetch the data once from Db2 and then
write it out in two different formats in two different locations.
It should be noted that this is a Pull model mechanism and, as such, is inherently less secure than
the first two options which are Push models. The fundamental difference is that the credentials and
Procedure
1. Enable JDBC for Db2
To shadow Db2 data, set up JDBC. Consult your Db2 system programmer about installing and
activating JDBC for the Db2 holding your IBM Z Performance and Capacity Analytics data.
Actions:
• JDBC enabled for the Db2.
• A location to copy the correct level of Db2 JDBC drivers from.
• A user id and password for each Shadower to use to access Db2 via JDBC. This should be a new
userid specifically created for the Shadower with access to only the data in the IBM Z Performance
and Capacity Analytics tables that are being shadowed.
2. Configuring Db2 Shadowing
This process creates a Shadower Forecaster which actively polls Db2 to find new data added to views
and tables, transforms it into JSON, and sends it to the designated receiver.
a) Create a new Forecaster from the SMP/E installation directory by using the following example as a
guide:
where:
'/usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJFC'
is the install directory and file for the IZPCA Forecaster archive and
'destination'
is where you want the contents of the tar file to reside, which is your destination directory for the
Forecaster
3. Set the Shadower polling interval
input.1.interval = 15
The value is specified in minutes. The shorter the interval, the more frequently the Shadower will query
Db2 to see if there is new data. The polling interval should account for the number of tables being
shadowed and the number of data sources. 200 tables and 50 data sources will drive 10,000 queries
against Db2 per polling interval. It is advisable to break such large tasks up and distribute amongst
multiple Shadowers, with each Shadower specifying its own unique sources list.
4. Specifying Sources for the Shadower
The configuration has a section for specifying sources:
#
input.1.sources = 3
input.1.source.1 = SYS1
input.1.source.2 = SYS2
input.1.source.3 = SYS3
Add a source statement for each unique MVS_SYSTEM_ID whose data is to be shadowed. Data will
not be shadowed for systems that are not in the source list. Having the same system in the list twice
will create additional duplicate records on every active polling cycle. On every active polling cycle, the
Shadower makes a query for each source value for each shadowed table or view.
5. Importing the JDBC Drivers
Make the Db2 JDBC drivers available to the Shadower. Edit the Forecaster.sh file in the Forecaster
directory and change the db2jdbc specification to point to the directory holding the drivers. If the
application cannot directly reference the libraries in that location, copy them into a subdirectory of
the Forecaster directory. Ensure they are marked as executable.
6. Save your JDBC credentials
Procedure
1. Run DRLJCDPS
Customize the sample JCL member DRLJCDPS to suit your environment, then run it. DRLJCDPS runs
three utility programs:
DRLELSTT
The first utility (DRLELSTT) creates a list of all the data tables in the IBM Z Performance and Capacity
Analytics database. This is used as an input to the other two utilities to ensure they produce consistent
output.
DRLEMCDS
The second utility (DRLEMCDS) creates an izds.streams.json file, which contains an IBM Z Common
Data Provider stream definition for each database table, as well as for each view based on each
database table. The izds.streams.json file is required by IBM Z Common Data Provider. It needs to be
copied into the configuration folder for the IBM Z Common Data Provider user interface.
DRLEMTJS
The third utility (DRLEMTJS) creates a table.json file for every table and view in the database. The
created files map the contents of the tables. The table.json files are required by IBM Z Common Data
Provider.
2. Install IBM Z Common Data Provider
Refer to the IBM Z Common Data Provider User Guide for information.
3. Configure IBM Z Common Data Provider
Complete the following steps to configure IBM Z Common Data Provider for data streaming to send
data off-platform for Splunk or ELK reporting.
a. Make IBM Z Performance and Capacity Analytics table definitions available to IBM Z Common Data
Provider.
• Copy the izds.streams.json file into the IBM Z Common Data Provider configuration UI folder.
• The izds.streams.json file was created earlier with the mapping utility DRLEMCDS.
b. Update your IBM Z Common Data Provider policy to stream the data.
i) Find the stream definitions that correspond to each table to be streamed from IBM Z
Performance and Capacity Analytics and add it to your configuration.
ii) Data will arrive encoded with the character set specified in process.1.1.encoding during
configuration of the Shadower.
Configure IBM Z Common Data Provider to transcribe the data stream to UTF-8, a requirement
for sending the data off-platform.
Procedure
1. Run DRLJMTJS
Customize the sample JCL member DRL.SDRLCNTL(DRLJMTJS) to suit your environment, then
run the customized sample. DRLJMTJS runs the DRLEMTJS utility program. DRLEMTJS creates a
table.json file for every table in the database, as well as for every view based on these tables. Each
file maps the contents of the table. The table.json files are required by the Publication DataMover.
2. Download the Catcher
Install the DataMover on the remote system that is running Splunk or ELK.
a. First, ensure there is a 64-bit Java 8 runtime installed on the target system.
b. Use FTP to download a DataMover working directory from the z/OS system to the target system.
• Download the main directory, the java subdirectory and the config subdirectory.
• Do not download the mapping directory.
• The DataMover.jar file must be transferred as binary.
• The remaining files need to be transferred as text in FTP to convert to ASCII.
3. Customize the DataMover startup scripts
The DataMover startup script to be configured and used depends on your target system platform.
These sample startup scripts reside in the DataMover sub-directory. On Windows, the batch script is:
REM - Windows Batch script to run the DataMover from its working directory
@ECHO OFF
IF [%1]==[SSL] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[version] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE (
set PARMS=config=.\config\%1.properties
)
java -Xmx8G -Djava.util.logging.config.file=".\logging.properties" -cp ".\java\*"
com.twentyfirstcsw.datamover.DataMover %PARMS%
On other platforms, the shell script is:
#! /bin/sh
#
# Shell script to run the DataMover from its working directory
#
# -------------------------------------------------------------------
# Configuration area - tailor to suit your installation
#
#
# Runtime directory. Other paths are relative to it.
#
# Probably better to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/opt/IZPCA/DataMover/datamover"
#
# logging.properties controls which messages get sent where
#
logfile="logging.properties"
#
# The main executable
#
jarfile="java/DataMover.jar"
#
# The config file that tells it what to do.
#
config="config/$1.properties"
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Need when running as a batch job/started task
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home
export PATH=$PATH:$JAVA_HOME/bin
#
# -------------------------------------------------------------------
# Execution area - don't change things below the line
##
# Get to the runtime directory
#
cd $rundir
#
# Work out what the parms are
#
parms="config=$config"
if test "$1" = "SSL"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
fi
if test "$1" = "version"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
4. Verify that the DataMover is installed correctly by issuing the following command from the
DataMover directory:
DataMover version
or
DataMover.sh version
It should produce a few lines of output identifying itself as a DataMover, identifying the operating
system, and giving a dummy value for the system name and sysplex id.
5. Create a Hopper directory
This is a directory that the DataMover will write incoming data into, and from which Splunk or ELK
will read it. The hopper acts as a disk buffer, allowing the DataMover to write data out ahead of what
Splunk or ELK has ingested. It provides persistence for uningested data in the event that Splunk or
ELK is running slow or is unavailable.
6. Customize the catcher.properties file
This is located in the DataMover/config subdirectory.
a. Update the output.1.1.directory parameter with your hopper directory for the data.
b. Change the port if required, otherwise leave as the default 45020.
c. In Splunk, the hopper directory is determined by the system environment parameter
CDPDR_PATH. Ensure this is set correctly to the desired hopper directory location.
d. In ELK, the directory location for the ELK hopper is specified in the sample file B_IZPCA_Input.lsh.
#
# Config
#
routes = 1
#
# Route 1 – Receive from TCPIP and Buffer to disk for Splunk/ELK to pick up
#
route.1.name = Catcher
#
# Single input is TCPIP
#
input.1.type = TCPIP
input.1.port = 45020
#
# Output to hopper
#
outputs.1 = 1
#
# This puts the data that arrives each week into a different directory
# You need to manually delete directories that are a couple of weeks old – after checking
# their data got ingested ok
#
output.1.1.type = File
output.1.1.directory = D:\\Hopper
output.1.1.subdir = week
output.1.1.format = json
#
7. Start the Catcher DataMover
The DataMover will start and display the hostname and IP address it is using, listening on port 45020
(default).
Use these values when configuring your DataMover on z/OS to connect with the Catcher.
8. Review the network interface setting
If you are using multiple networks on your remote platform, a .nif parameter can be specified to
select the network interface that a TCPIP stage will use.
• For the TCPIP input stage, it is specified as:
input.r.nif = id
• For the TCPIP output stage, it is specified as:
output.r.i.nif = id
where:
• id can be either a valid IP address (IPv4 or IPv6) or a valid network interface name.
• r is the route number. For example: input.1.nif is the input stage for route 1 and input.2.nif is the
input stage for route 2.
• i is the output stage index (as there can be multiple outputs for the same route). For example:
output.1.3.nif is the output stage for route 1, index 3.
To get a list of your system's network names, issue the command:
DataMover networks
or
DataMover.sh networks
process.1.2.encoding = UTF-8
c. Modify the TCPIP output parameters with the IP address of your Catcher DataMover and the
correct port.
output.1.1.host = x.xx.xxx.xxx
output.1.1.port = 45020
b. The certificate from each system must be installed on the other system. This is a binary file and
must be transported in binary.
c. Import each certificate into the system's trust store.
• Ensure that each certificate is imported with a unique name.
• When the certificates arrive, issue the following command: DataMover.sh SSL IMPORT name.cert
d. Modify the DataMover configurations to change the use_ssl=no setting on each TCPIP stage to
use_ssl=yes.
Procedure
1. Download and Unpack the DataImporter
Install the DataImporter on the remote system that is running Splunk or ELK ingestion.
a. First, ensure there is a 64-bit Java 8 runtime installed on the target system.
b. Download the binary file containing the DataImporter .tar file:
where:
'/usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDI'
is the install directory and file for the IBM Z Performance and Capacity Analytics DataImporter archive
c. Extract the .tar file to create a working directory:
Do not overwrite any existing installations as the files from the .tar file will overwrite any modified
files that might be present.
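As a sketch only (the download location and working-directory parent below are assumptions; choose
locations that match your installation standards):
cd /opt/IZPCA
tar -xf /tmp/DRLPJDI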
2. Enable JDBC for Db2
DataImporter.bat (Windows)
The sample DataImporter.bat requires similar customization:
@ECHO OFF
REM DataImporter
REM
REM IBM Z Performance and Capacity Analytics 3.1
REM LICENSED MATERIALS - Property of Teracloud S.A.
REM 5698-AS3 (C) Copyright Teracloud S.A. 2020, 2022
REM
REM All rights reserved.
REM
REM Windows Batch script to run the DataImporter from its working directory
REM
IF [%1]==[SSL] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[version] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[networks] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE (
set PARMS=config=.\config\%1.properties
)
SET db2jdbc=./jdbc_db2b10
SET CLASSPATH=./java/*;%db2jdbc%/db2jcc4.jar;%db2jdbc%/sqlj4.zip;%db2jdbc%/db2jcc_license_cisuz.jar
DataImporter.sh (non-Windows)
The sample DataImporter.sh requires customization:
• Update the rundir parameter with the correct working directory for the DataImporter.
• Update the JAVA_HOME parameter with the correct path for your installed Java.
• Update the db2jdbc parameter with the path to the JDBC drivers.
#!/bin/sh
#
# DataImporter
#
# IBM Z Performance and Capacity Analytics 3.1
# LICENSED MATERIALS - Property of Teracloud S.A.
# 5698-AS3 (C) Copyright Teracloud S.A. 2020, 2022
#
# All rights reserved.
#
# Shell script to run the DataImporter from its working directory
#
# Configuration area - tailor to suit your installation
#
#
# Runtime directory. Other paths are relative to it.
#
# Use different directories if you are running multiple Data
# Importers on the same system.
#
# Mac: /Users/drl/DataImporter
rundir="/u/drl/DataImporter"
#
# logging.properties controls which messages get sent where
#
logfile="logging.properties"
#
# The main executable
#
jarfile="java/DataImporter.jar"
#
# The config file that tells it what to do.
#
config="config/$1.properties"
#
# The directory where your Db2 JDBC drivers are installed
#
db2jdbc="/apc/tdb2a10/usr/lpp/db2a10/jdbc/classes"
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Need when running as a batch job/started task
#
# Mac: /Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home
# Linux: /usr/java/jdk1.8.0_20/bin
# zOS: /apc/java800/64bit/usr/lpp/java/J8.0_64
export JAVA_HOME=/apc/java800/64bit/usr/lpp/java/J8.0_64
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=./java/*:$db2jdbc/db2jcc4.jar:$db2jdbc/sqlj4.zip:$db2jdbc/db2jcc_license_cisuz.jar
echo $CLASSPATH
#
# Get to the runtime directory
#
cd $rundir
#
# Work out what the parms are
#
parms="config=$config"
#
# Run the DataImporter
#
6. Verify that the DataImporter is installed correctly by issuing the following command from the
DataImporter directory:
DataImporter version
or
DataImporter.sh version
It should produce a few lines of output identifying itself as a DataImporter, identifying the operating
system, and giving a dummy value for the system name and sysplex id.
7. Create a Hopper directory
This is a directory that the DataImporter will write incoming data into, and from which Splunk or ELK
will read it. The hopper acts as a disk buffer, allowing the DataImporter to write data out ahead of
what Splunk or ELK has ingested. It provides persistence for uningested data in the event that Splunk
or ELK is running slow or is unavailable.
8. Decide upon the configuration to run
The configuration files supplied with the DataImporter include:
• Import_CPKPM_H.properties
• Import_CPKPM_R.properties
• Import_CPPROF_R.properties
• Import_TCP_H.properties
• Import_TCP_R.properties
The CPKPM files will import data from the Capacity Planning and Key Performance Metrics
components, and the CPPROF files will import data from the Capacity Planning Resource Profiling
Metrics components. The TCP files will import statistical data from the Security components.
This will write out a new file (my_import_CPKPM_H.properties) that is the customized version of the
provided configuration.
Use the name of the customized configuration to run the DataImporter:
DataImporter my_import_CPKPM_H
or
DataImporter.sh my_import_CPKPM_H
The DataImporter will start, connect to Db2 and begin pulling data down.
It will not terminate unless it encounters an error or it was run with a time range with a specified end
date for all tables and it has imported all of the data that is available within the time range.
12. Operating the DataImporter
To stop the DataImporter at any time, issue the command: stop.
It will take a while to shut down and a number of ‘interrupted’ exceptions and ‘waiting’ messages
may occur during the shutdown process.
If the DataImporter fails to terminate, issue the command: force, or press Ctrl+C.
Network Considerations
For a single LPAR, the IBM Z Performance and Capacity Analytics DataImporter will want to stream
around 6 MB an hour, although this will be higher on larger, busier systems. Customers need to estimate
the total data transfer rate to the server and ensure that they have sufficient network bandwidth. The
load may need to be split between multiple servers if the bandwidth cannot be met by a single server.
Customers should also check the outbound network capacity from the host where IBM Z Performance
and Capacity Analytics is installed.
Customers running a Publisher or Shadower on the host and a Catcher on the distributed system should
allow for baseline data transfer rates of 12 MB an hour for each LPAR. As with the DataImporter these can
be higher for larger, busier systems.
The data volume transferred can also be significantly increased if you activate and stream data for
additional subsystem monitoring features.
This process occurs for each source (MVS_SYSTEM_ID) within every copied table or view. After a
successful query the timestamp is stored in table_source_check files, and is used as the base time for the
next query cycle. The Shadower will revert to its initial instruction if the table_source_check files are not
available. The Shadower's initial instruction is set by a .initial tag in the shadowing definition of each table or view.
The DataImporter can import data with two different scenarios as follows:
• Real-time scenario: The import_xxxx_R.properties file can import real-time data with the setting values
below. The tag specified determines the initial starting point, and the DataImporter continues to
execute until it is manually instructed to terminate via the command "stop".
WEEK
    The historical data copied is based on the specified update frequency of the table:
    Hourly and Timestamp tables - commencing 168 hours prior to the current date and time.
    Daily tables - commencing 7 days prior to the current date.
    Weekly tables - commencing 7 days prior to the current date.
MONTH
    The historical data copied is based on the specified update frequency of the table:
    Hourly and Timestamp tables - commencing 672 hours prior to the current date and time.
    Daily tables - commencing 28 days prior to the current date.
    Weekly tables - commencing 28 days prior to the current date.
    Monthly tables - commencing 28 days prior to the current date.
RANGE
    The Shadower will query all records between two specified dates.
    Two additional tags, .range.from and .range.to, must be specified in the shadowing definition.
    • range.from defaults to EPOCH, January 1st, 1970.
    • range.to defaults to NOW, the current time and date.
    The Shadow stage will automatically shut down after processing all the data within the tables that
    are available within the date range and terminate the DataImporter.
Figure 33. DRLJCCOL: JCL for the Continuous Collector started task
To display the log stream connection status, issue the command:
D LOGGER,C,LSN=logstreamname,D
For example:
D LOGGER,C,LSN=DRL.LOGSTRM,D
IXG601I 01.43.45 LOGGER DISPLAY 469
CONNECTION INFORMATION BY LOGSTREAM FOR SYSTEM ZT01
LOGSTREAM STRUCTURE #CONN STATUS
--------- --------- ------ ------
DRL.LOGSTRM *DASDONLY* 000002 IN USE
DUPLEXING: STAGING DATA SET
STGDSN: DRL.DRL.LOGSTRM.ZT00PLEX
VOLUME=TEC000 SIZE=0000002700 (IN 4K) % IN-USE=025
GROUP: PRODUCTION
OFFLOAD DSN FORMAT: DRL.DRL.LOGSTRM.<SEQ#>
CURRENT DSN OPEN: YES SEQ#: A0002221
ADV-CURRENT DSN OPEN: NO SEQ#: -NONE-
JOBNAME: PRLJCCOL ASID: 0028
R/W CONN: 000000 / 000001
RES MGR./CONNECTED: *NONE* / NO
IMPORT CONNECT: NO
JOBNAME: PRLSMFEX ASID: 005B
R/W CONN: 000000 / 000001
RES MGR./CONNECTED: *NONE* / NO
IMPORT CONNECT: NO
To display the SMF Extractor status, issue the command:
F jobname,STATUS
where jobname is the name of the task running the SMF Extractor.
Example of the command output with key status values highlighted.
F PRLSMFEX,STATUS
VSX0111I VSXCON 01:21:47.479 Command <STATUS> received from CONSID=0300000D CONSNAME=JMACERA
VSX0133I VSXSTA 01:21:47.479 ++ SMFU83 Status display: ++
VSX0136I VSXSTA 01:21:47.479 ** > Server is ready to capture SMF records < **
VSX0127I VSXSTA 01:21:47.480 ** #C21CAT Created by PRLSMFEX STC on 2019/04/30 17:01:44
VSX0135I VSXSTA 01:21:47.480 ** SMFU83 Started by PRLSMFEX STC on 2019/04/30 17:01:44
VSX0134I VSXSTA 01:21:47.480 ** SMFU83 Server has been restarted 1 times this IPL
VSX0141I VSXSTA 01:21:47.480 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 166
VSX0173I VSXSTA 01:21:47.480 SQ Cntrs: NQ=000270B4x DQ=000270B4x
VSX0130I VSXUT1 01:21:47.481 ** SMF Types collected:
(014,015,030,042,060,061,064,065,070,071,072,073)
VSX0130I VSXUT1 01:21:47.481 ** SMF Types collected: (074,085,090,094,099,100,101,113,118,119)
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSJES2.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSJES2.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSSTC.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSSTC.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYSSTC.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU83 Module VSXU83 is active ::
VSX0139I VSXSTA 01:21:47.483 ** SVC dumps will be created for certain abends
VSX0131I VSXSTA 01:21:47.484 ** Tracing is active; MSGLVL is 9
VSX0144I VSXSTA 01:21:47.484 ** VSXPC1 abended 0 times; 2 abends are allowed before termination
VSX0349I VSXSTA 01:21:47.484 ++ Id=0000 M=VSXMAI TCB=006F8588x EP=00007000x TT=00000000
07297748x
VSX0349I VSXSTA 01:21:47.484 ++ Id=PR01 M=VSXPRT TCB=006FC580x EP=00017458x TT=00000000
0018A03Ax
VSX0349I VSXSTA 01:21:47.484 ++ Id=CM02 M=VSXCON TCB=006F82E0x EP=00011900x TT=00000000
009B8010x
VSX0349I VSXSTA 01:21:47.484 ++ Id=WR03 M=VSXWTR TCB=006F80A8x EP=0001AA70x TT=00000003
7E560F1Ax
VSX0349I VSXSTA 01:21:47.485 ++ Id=SM04 M=VSXSMF TCB=006CE3A8x EP=00017D90x TT=00000000
9CA1B4F7x
VSX0349I VSXSTA 01:21:47.485 ++ Id=HR05 M=VSXHRB TCB=006CE1F8x EP=00016C70x TT=00000000
37357ECFx
VSX0360I VSXSTA 01:21:47.485 -- Allocated dsn:
. . .
VSX0361I VSXSTA 01:21:47.485 -- SCNT(00000001) MXSTK(00000011) STK1(00000000) CRSH(00000000)
VSX0362I VSXSTA 01:21:47.485 -- RECR( 159924) CPLS( 5147) RECW( 159861) SMXC( 0)
VSX0363I VSXSTA 01:21:47.485 -- EOVC( 0) RSLC( 0) WSLC( 0) IBUF( 1000)
VSX0129I VSXSTA 01:21:47.485 ++ End-of-list ++
DataMover status
This command shows the status of the DataMover task.
The output of this command may differ depending on the number of routes and processes defined in the
parameters.
F DM_USS,APPL=STATUS
where DM_USS is the name of the USS task (usually ending with a 1). For example, if the DataMover is
started as DRLJDM, this value would be DRLJDM1.
Example of the command output with key status values highlighted:
F DRLJDMH,APPL=STATUS
DRLJ0078I ZOS Console: status
DRLJ0093I Status for Route 1 is Running
DataMover statistics
This command shows the statistics from the DataMover task.
The output of this command may differ depending on the number of routes defined in the parameters.
F DM_USS,APPL=DISPLAY
where DM_USS is the name of the USS task (usually ending with a 1). For example, if the DataMover is
started as DRLJDM, this value would be DRLJDM1.
Example of the command output with key values highlighted.
F DRLJDMH,APPL=DISPLAY
packets associated with a tracker have been unlinked from it (usually when all the output stages are done
with them), the tracker returns to being a free tracker that the input stage can use to track another data
packet. You can set the number of trackers an input stage has by using the .trackers value.
Memory management
Consider these configuration options for memory management to enable the DataMover to run efficiently.
• Use a 64-bit JVM and set the maximum heap size (-Xmx) to 16GB or larger.
• The .block and .trackers values on the input stage control the number of records that are going
to be in storage at any one time. To estimate the memory used by a route, multiply these two values,
multiply again by the average record size (anything from 2K to 32K for SMF records), and add another
20 percent.
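For example, with illustrative values of .block = 10, .trackers = 150, and an average record size of 8K,
the estimate is 10 x 150 x 8K, approximately 12 MB, plus 20 percent, or roughly 14.4 MB for the route.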
• For TCP/IP transmission, you need to specify a fairly high trackers number (150 or more) because of
the way the failure detection on the transmission works. It will hold onto a packet for several seconds
before it decides it has been delivered successfully. The number of trackers on the input stage also
works as a pacing mechanism for the TCP/IP connection. If you have 100 trackers and it holds onto
the packets for 4 seconds, then it will end up sending about 25 packets a second. Multiply by the block
count and the average record size to estimate the data volume being transmitted.
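As an illustrative calculation with assumed values: 100 trackers held for about 4 seconds yield roughly
25 packets a second; at .block = 10 and an average record size of 8K, that is about 25 x 10 x 8K, or
roughly 2 MB a second transmitted.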
• You also need to specify a fairly high number of trackers (100 or more) for stages that repack the data.
These are typically process stages like JSON, CSV, and SQL. These stages take the data out of the
packets it arrives in, transform it, and then pack the new data into new packets, hanging onto those
packets until they are full. If there are not enough input packets, the process can stall, which will result
in it sending on all of the packets it has after it detects that it is no longer receiving input. This will free
up the packets and release the trackers associated with them, allowing more data to be read. Memory
usage for routes that repack the data can be twice that of normal routes.
• For the other stages, you should need no more than twice as many trackers as you have input and
process stages, with, say, a minimum of 6. This will avoid a backlog of unprocessed data building up in
the DataMover because it can read the data in faster than it can write it out.
Troubleshooting
If the DataMover is not processing data often enough or seems to be stalling, increase the .block
setting (to put more records in each packet) and the .trackers setting (to allow more packets into the
DataMover).
Record formats
The DataMover expects to deal with two primary types of data: textual data and LGRH data.
Textual data
This is unformatted data that is simply split into records. Many of the DataMover's process stages
cannot handle it, so processing is generally restricted to input, transport, and output.
LGRH data
This is data that has been output by an IBM Z Performance and Capacity Analytics application,
either the SMF Extractor or the Collector. Each record is prefixed with an LGRH header that contains
metadata describing the record.
There are two major distinctions of LGRH data:
SMF data
Consists of SMF records.
Table data
Consists of data output from the Continuous Collector in a specific binary data format.
The JSON and CSV process stages will only work with LGRH Table data and require access to a set of
mapping files in order to be able to parse the binary data format.
Configuration
The DataMover reads its configuration from a Java .properties file.
The syntax of this file is keyword = value, and all the reading and parsing is done by Java.
The primary keywords in the file are:
routes
This specifies the number of routes defined in the config file. While most configurations just run a
single route, there are some that run two or more routes. You do need to watch the total memory
usage across all of the routes running within a DataMover, as they all run within the same JVM and use
a shared memory pool.
route.r.name
This defines a name for the rth route that is used in some message and trace output.
input.r
This is the input stage for the rth route. Its parameters take the form input.r.keyword.
process.r
This is the number of process stages that are defined for the rth route. If not present, it defaults to 0.
process.r.n
This is the nth process stage for the rth route. Its parameters take the form process.r.n.keyword.
outputs.r
This is the number of output stages that are defined for the rth route.
output.r.n
This is the nth output stage for the rth route. Its parameters take the form output.r.n.keyword.
The attributes of each stage are specified with further keyword extensions.
FILTER
Filter by required record types
SPLIT
Propagate data to a JOIN input stage
Output stages
CONSOLE
Dump data to the sysout (JOBLOG)
FILE
Write data to the USS file system
LOGSTREAM
Write to a log stream
TCPIP
Connect and write data to a remote TCP/IP server
Common keywords
The common keywords are used on many stages and always have the same meaning.
Common keywords
.type
The name of the stage.
.block
For stages that generate data packets, this is the number of records to put in each data packet.
.trackers
For input stages, this is the number of trackers to use to manage the data loaded from the input
source.
.sourcetype
A short, single word description of the source of the data. Some stages (JSON and CSV) generate the
source name for the data packets they output from the data they place inside them. Specifically they
set the source name to be a value derived from the name of the IBM Z Performance and Capacity
Analytics Db2 table associated with the source of the data.
.sourcename
A short, single token name for the data stream.
.pacing
A delay, in milliseconds, that is added between each data packet that is processed. Used to artificially
slow the DataMover down.
.encoding
The Java name for the code page that is to be used to encode the data. Used on stages that transform
the data.
.float_format
This tells the stage the format of the floating point numbers in its input. Values are:
IEEE
The Java standard
IBM
IBM S360 encoding
On the spoke, the input stage typically specifies:
input.1.block = 10
input.1.trackers = 100
On the hub, there is a TCPIP input stage, with defaults:
input.1.trackers = 100
There is no .block setting as the TCPIP stage simply receives complete data packets with however many
blocks the spoke put inside them.
Each spoke that connects to the hub gets its own set of trackers, using the value above. These trackers do
not have the 4 second lifespan that the spoke trackers do, so the hub can cycle through them all within a
second or so. The value should be at least 25% of the highest spoke tracker value. Higher values can be
specified, but that increases the volume of data held in storage, although the data is not held very long.
If you increase the spoke .trackers value, then also increase the corresponding hub .trackers value.
JOIN
Receive data from a process SPLIT stage.
.channel
This specifies the name of a channel that the input stage listens on. When the SPLIT stage sends a
packet to a channel, JOIN stages listening to that channel will receive a copy of it.
Because the JOIN stage is an input stage, each pipeline processing split data must be defined as a
separate route.
LOGSTREAM
Read from a log stream.
.logstream
The name of the log stream to read data from.
Symbolic substitution is supported.
.wipe
Set to yes to mark data in the log stream for deletion after it has been processed by the
DataMover.
This parameter is only relevant to the LOGSTREAM input stage. It only applies to spoke systems as
it is only a spoke system that takes input from a log stream.
.checkpoint
Set to yes (the default) to maintain a restart checkpoint in a USS file. It is updated only after the
DataMover has finished processing the data, and indicates the point in the log stream to start
reading from if the DataMover crashes.
If you turn it off (set to no), then ensure the .checkpoint file is erased to avoid problems if you later
turn it back on.
This parameter is only relevant to the LOGSTREAM input stage. It only applies to spoke systems as
it is only a spoke system that takes input from a log stream.
.check_LGRH
When set to yes, this causes the stage to check that each record it reads has a valid LGRH header
at the front of it. If it finds a record without such a header (indicating that it is either reading
the wrong log stream, or an application other than IBM Z Performance and Capacity Analytics is
writing to the log stream), the DataMover issues an error message and shuts down.
.clear
Use with caution. If set to yes, it causes the DataMover to mark all the data in the input log stream
as ready for deletion. The DataMover then shuts down, leaving the log stream effectively empty.
See the clear.properties sample configuration file “clear.properties - Erase all records from a log
stream” on page 42.
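As an illustration, a LOGSTREAM input stage on a spoke might be configured as follows. This is a minimal sketch; the log stream name is the one used in the samples later in this section, so substitute your own:
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS02
input.1.wipe = yes
input.1.checkpoint = yes
input.1.check_LGRH = yes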
TCPIP
Open a TCP/IP port to listen for connection requests.
.port
This specifies the TCP/IP port number that the input stage is to open to listen for connections on.
It expects to be connected to by another DataMover. Once connected, it will receive complete
data packets from the other DataMover.
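A minimal sketch of a TCPIP input stage, using the port number that appears in the samples later in this section:
input.1.type = TCPIP
input.1.port = 54020
input.1.trackers = 100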
CONSOLE
Send data to the console (stdout) as text. Packets are passed through to the next stage.
CSV
Convert output records from the Continuous Collector into Comma Separated Value (CSV) records.
.delimiter
This specifies a single character that is to be used as a delimiter between values. The default is a
comma.
If you use a different character in the CSV file, you must specify that delimiter to the program
reading in the CSV data.
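For example, the following sketch (with illustrative route and stage numbers) produces semicolon-delimited output:
process.1.1.type = CSV
process.1.1.delimiter = ;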
FILTER
Filter by required record types.
.as
Only data in LGRH format can be filtered. This specifies whether it is SMF encoded data (SMF) or
table encoded data (IBM Z Performance and Capacity Analytics).
.smf_type.i
This contains the ith SMF filtering directive. Specify no more than one directive for each SMF type.
The format of the entry is:
type [subtype subtype ...]
The type is required; the subtypes are optional and, if present, delimited by spaces.
The directives define a pass list. Any SMF record that arrives at the filter stage and does not match
a directive will be blocked. Packets propagated onwards from the filter stage will only contain SMF
records that match a pass directive.
Only use subtype filtering for those records that have subtypes and that follow the normal SMF
conventions for defining subtypes in the SMFxSTY field.
.smf_types
This specifies the count of smf_type entries.
.smf_type.i
This contains either:
• an SMF record ID (id)
• a range of SMF record IDs (low_id:high_id)
• or an SMF record ID and a list of subtypes to pass (id st1 st2 st3)
.smf_notypes
This specifies the count of smf_notype entries.
.smf_notype.i
This contains one of the following and acts to block the matching records:
• an SMF record ID (id)
• a range of SMF record IDs (low_id:high_id)
• or an SMF record ID and a list of subtypes to block (id st1 st2 st3)
.sysplex
This specifies a list of sysplex names the filter should pass. If omitted, the sysplex name is not
checked.
.system
This specifies a list of systems the filter should pass. If omitted the system name is not checked.
.tables
This specifies the number of table filter directives present for table filtering.
.table.i
This contains the ith table filtering directive. Specify no more than one directive for each table.
The format of the entry is:
table_name
This is the name of the table specified in the record. For IBM Z Performance and Capacity
Analytics originated data, this is the Db2 table name, such as KPMZ_LPAR_H. There is no support
for wild cards.
The directives define a pass list. Any table record that arrives at the filter stage and does not
match a directive will be blocked. Packets propagated onwards from the filter stage will only
contain table records that match a pass directive.
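The following sketch, with purely illustrative type numbers and system names, passes SMF type 30 records, type 70 subtypes 1 and 2, and only records from systems SYS1 and SYS2:
process.1.1.type = FILTER
process.1.1.as = SMF
process.1.1.smf_types = 2
process.1.1.smf_type.1 = 30
process.1.1.smf_type.2 = 70 1 2
process.1.1.system = SYS1 SYS2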
JSON
Convert output records from the Continuous Collector into JSON records.
.forceFields
If set to yes, a field will be created in the JSON output holding a default value if the field is not
present in the Db2 record.
If set to no, fields will be excluded from the JSON output if they are not present in the Db2 table.
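A minimal sketch of a JSON stage that always emits every field:
process.1.1.type = JSON
process.1.1.forceFields = yes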
SPLIT
Propagate data to a JOIN input stage.
.channels
This is the count of join channels that the data packet should be copied to.
.channel.i
This is the name of the ith channel.
The data packet (and all the records in it) is propagated to the JOIN input stage corresponding to
each listed channel.
Typically, each Split or Join route passes the data through a different process stage (Filter, JSON,
CSV) and then sends its output to a different destination.
Note that the data continues onwards in the route that executed the split.
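A sketch of a SPLIT stage that copies each packet to two channels (the channel names archive and analytics are hypothetical):
process.1.1.type = SPLIT
process.1.1.channels = 2
process.1.1.channel.1 = archive
process.1.1.channel.2 = analytics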
CONSOLE
Dump data to the sysout (JOBLOG).
FILE
Write data to the USS file system.
.directory
This is the directory that the data gets written out to.
.filename
If present, this forces all of the data to be written out to a file with this file name.
If not present, the data will be written out into files corresponding to the sourceType value in the
arriving data packets. After processing by a JSON or CSV stage, this will correspond to the name of
the IBM Z Performance and Capacity Analytics Db2 table that the data came from.
.subdir
This has a range of values and serves to automatically break the data files up amongst a
subdirectory structure. The values are:
None
No subdirectory is used.
Hour
A separate subdirectory is created for data for each hour.
Day
A separate subdirectory is created for data for each day.
Week
A separate subdirectory is created for data for each week.
Month
A separate directory is created for data for each month.
Year
A separate directory is created for data for each year.
Period
The code will pick a subdirectory to use based upon the aggregation period indicated in the
sourceType. Timestamp data is written to a daily directory. Hourly data is written to a weekly
directory. Daily data is written to a monthly directory. Weekly and monthly data is written to a
yearly directory.
Packet
This uses the sourceType as the subdirectory name and writes each packet's data out into a
separate timestamped file.
.format
This specifies the format of the data. Values are: Text, LGRH, JSON, CSV. The latter 3 enable some
special processing in the File stage to improve the quality of the output.
.data_dating
This works with LGRH, JSON, and CSV format data and requires a time-based subdir setting (that
is, neither None nor Packet).
Normally the date from the data packet is used by the subdir element to determine the
subdirectory that the data will be written to. This will typically reflect the time that the record
was processed.
If you specify yes, then the records in the packet will be processed individually and each record's
timestamp (Epoch for JSON and CSV files) will be extracted and used to determine which
directory it gets written to. This can significantly increase the overhead of writing a data packet
out, so only use it when really required.
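Putting these keywords together, here is a sketch of a FILE stage that writes CSV data into daily subdirectories (the directory path is hypothetical):
output.1.1.type = FILE
output.1.1.directory = /u/izpca/output
output.1.1.subdir = Day
output.1.1.format = CSV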
LOGSTREAM
Write to a log stream.
.logstream
The name of the output log stream.
Symbolic substitution is supported.
.sync
If set to yes, log stream I/O occurs in synchronous mode.
If set to no or omitted, the faster async I/O method is used.
.enqueue_name
If you have multiple DataMovers writing to the same log stream, ensure that they all have the
same .enqueue_name values specified. This will cause them to wait on the same Sysplex scoped
enqueue to get access to the log stream before they write a data packet out to it. This ensures that
the sequence of the records they are writing out is preserved.
If the DataMover is the only one writing to the log stream, you should not specify a value
for .enqueue_name as it incurs a small performance overhead.
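For example, each of two DataMovers writing to the same log stream could specify the following; the log stream and enqueue names are hypothetical:
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.LS01
output.1.1.sync = no
output.1.1.enqueue_name = IZPCAENQ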
TCPIP
Connect and write data to a remote TCP/IP server.
.hostname
This is the name of the machine that the TCP/IP output stage should try to connect to.
You can use localhost for this machine (although this may vary with your TCP/IP implementation).
.port
This specifies the port on the target machine that the stage should try to connect to.
The destination can be either a DataMover (local or remote), or a local IBM Z Common Data
Provider Data Streamer as described in the IBM Z Common Data Provider Open Streaming API.
.use_ssl
If set to yes, SSL will be used for secure communications. See the installation instructions for
details.
.buffer
This is the name of a USS directory used to buffer data packets if communications with the remote
system are down. It can accumulate quite a lot of data packets.
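A sketch of a TCPIP output stage that sends data to a hub, buffering packets during outages; the host name and buffer path are hypothetical:
output.1.1.type = TCPIP
output.1.1.hostname = hub.example.com
output.1.1.port = 54020
output.1.1.use_ssl = no
output.1.1.buffer = /u/izpca/buffer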
Advanced configurations
The samples provided with IBM Z Performance and Capacity Analytics cover the simple scenarios for
which you need to configure the DataMover. This section looks at some of the more complex scenarios
you might wish to deploy.
SMF filtering
In this scenario, the DataMover is to receive SMF records, but needs to remove some of them before
feeding them into the log stream for the Continuous Collector. Note that it is more efficient to change the
parameters of the SMF Extractor to only collect the records that you want the Continuous Collector to
process, but if that is not possible, you can use the technique described here for the DataMover.
The data flow is illustrated in Figure 40 on page 95: a TCPIP input stage feeds a FILTER process stage,
which writes to a LOGSTREAM output stage.
The input is received over TCP/IP and written out to the log stream for the Continuous Collector, as usual
for the hub, but a filter process has been inserted into the middle. The filter process will be given a list of
the SMF types and subtypes that are to be passed through to the Collector.
A configuration for this scenario is shown in the following example.
#
# This is a sample configuration for a DataMover running on a
# Hub System
#
routes = 1
route.1.name = Hub
#
# Listen for connections from the Spokes and receive their data.
#
# If you want to use SSL you need to perform the certificate exchange
# before you change the value to YES
#
input.1.type = TCPIP
input.1.port = 54020
input.1.ssl = no
#
# Filter out unwanted SMF records
#
process.1 = 1
#
process.1.1.type = FILTER
process.1.1.as = SMF
process.1.1.smf_types = 5
process.1.1.smf_type.1 = 30
process.1.1.smf_type.2 = 42
process.1.1.smf_type.3 = 70
process.1.1.smf_type.4 = 113
process.1.1.smf_type.5 = 120
#
# For output, we write the data into the log stream
#
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.LS02
output.1.1.sync = YES
#
The filter will be invoked automatically against all data packets received by the TCPIP stage. The records
that remain from each data packet will then be passed to the LOGSTREAM stage for output.
Data splitting
In this scenario, the DataMover is to receive data from a single source and distribute it to two recipients,
optionally filtering the stream sent to one or both of the data destinations.
The data flow is illustrated in Figure 42 on page 97: the primary route runs from a TCPIP input stage
through a SPLIT process stage to a LOGSTREAM output stage, while the secondary route runs from a JOIN
input stage through a FILTER process stage to a second LOGSTREAM output stage.
This is a more complex configuration. Two routes are used, one being the primary input and the other the
joined secondary route.
• The primary route receives the data, splits a copy off to the secondary route, and then writes it out to a
log stream.
• The secondary route receives the split input, filters it, and then writes the filtered SMF data out to a
separate log stream.
A configuration for this scenario is shown in the following example.
#
# This is a sample configuration for a DataMover running on a
# Hub System
#
routes = 2
route.1.name = Primary
route.2.name = Secondary
#
# Listen for connections from the Spokes and receive their data.
#
# If you want to use SSL you need to perform the certificate exchange
# before you change the value to YES
#
input.1.type = TCPIP
input.1.port = 54020
input.1.ssl = no
#
# Copy the data across to the other side
#
process.1 = 1
#
process.1.1.type = SPLIT
process.1.1.channels = 1
process.1.1.channel.1 = secondary
#
# For output, we write the data into the log stream
#
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.PRI
output.1.1.sync = YES
#
# Receive the joined input
#
input.2.type = JOIN
input.2.channel = secondary
#
# Filter the data in the secondary channel
#
process.2 = 1
#
process.2.1.type = FILTER
process.2.1.as = SMF
process.2.1.smf_types = 5
process.2.1.smf_type.1 = 30
process.2.1.smf_type.2 = 42
process.2.1.smf_type.3 = 70
process.2.1.smf_type.4 = 113
process.2.1.smf_type.5 = 120
#
# For output, we write the data into the log stream
#
outputs.2 = 1
#
output.2.1.type = LOGSTREAM
output.2.1.logstream = IFASMF.CF.SEC
output.2.1.sync = YES
#
Data packets received through the TCPIP stage will be copied to the secondary stream and then written
out to the primary log stream. Data packets reaching the secondary route will be filtered and the surviving
records will be written out to the secondary log stream.
Deployment
There are several options for deploying the Collator in a production environment. These range from
stand-alone deployment, where a system is simply set up to archive its own records, through remote
sysplex-based archiving, to hybrid deployments where SMF records are streamed to a hub system and fed
to both the Collator and the Collector.
Stand-alone Deployment
The simplest, stand-alone deployment works with the IBM Z Performance and Capacity Analytics SMF
Extractor:
The Collator function extends only as far as producing sets of files containing sorted SMF data. The
archival (and retrieval, searching, and purging) of those files is something that you need to implement.
The log stream used here is a simple DASD log stream – this keeps the SMF archival traffic away from your
Coupling Facilities. For performance reasons, it is important that the SMF Extractor be configured to
trap only the SMF records that you want to archive. Any trapped record that does not meet the criteria for
at least one of the collation groups will be discarded by the Collator, but only after being checked against
all of them.
Sysplex Deployment
Typically, you would deploy the collator within a Sysplex, enabling you to produce files that contain data
from multiple systems within the sysplex. For this deployment you need to use an IBM Z Performance and
Capacity Analytics DataMover as a Sender and Receiver to transport the SMF records to a single central
system for collating.
In this case, data is being gathered on SYS1 and SYS2 by the SMF Extractor, which writes it out into a
log stream. A DataMover configured as a Sender (Input: Log stream, Output: TCPIP) then transmits it to
another DataMover that is configured as a Receiver (Input: TCPIP, Output: Log stream) on the collation
system (SYSX). The Collator then reads the data from the log stream and sorts it into files. The Collation
system can be any system of the user's choice. The deployment will use some TCPIP bandwidth and
some CPU – although the CPU can be drawn from the system zIIP processors. The Collating system does
not have to be in the same sysplex as SYS1 and SYS2. It is important for performance reasons to ensure
the SMF Extractor is configured to only extract the SMF records that you want to archive. Unwanted SMF
records will only be discarded once the Collator has decided that they are not of interest. Transmission
occurs over TCPIP, which can be configured to be protected by SSL encryption. While the input systems
can be in different sysplexes, beware of combining streams of SMF data that you don’t want to have
archived in the same set of files. While it is possible to split the input stream in the collator to separate the
streams, it is more efficient to duplicate the collation configuration (DataMover, Log Stream, and Collator)
and feed the data for each sysplex through a separate collation pipeline.
Intra-Sysplex Deployment
In the event that your collation system is inside your sysplex, it is possible to configure the SMF Extractors
and the Collator to use a shared sysplex log stream.
While this should work, you need to be aware that it has a different resource consumption profile
and behavior under load from the DataMover and TCPIP model. The primary difference is that it will be
consuming Coupling Facility resources (bandwidth and storage) and will need to be offloaded from the
CF to DASD from time to time (a process that can take longer than simply allocating another DASD
segment). While this should be fine for low volume SMF records, the TCPIP implementation (using DASD
log streams) is recommended for medium to high volume SMF data streams to minimize the impact to the
Coupling Facilities and delays due to offloading.
Hybrid Deployment
The deployment on the Sysplex Spoke systems is very similar to the deployment for IBM Z Performance
and Capacity Analytics Automated Data Gathering. You can combine these two functions:
• The SMF Extractor has to be configured to gather the SMF records you need for archiving and the SMF
records you need for processing.
• If you are Collating the SMF records on a different system to the Hub where you are running the
Collector, you need a more complex configuration in your DataMover (Sender) on the Spoke. It needs to
Split the stream of SMF data into two separate streams – one for the Collator and one for the Collector,
filter out unwanted records from each stream and then send each stream to the correct destination.
• If you are running the Collation on your Hub system, you should send all the data to the Hub in a single
data stream. The receiver on the Hub will then need to be configured to split and filter the data streams
and write the SMF records out to two DASD log streams – one for the Collator and one for the Collector.
This approach minimizes TCPIP bandwidth. If you would sooner minimize the Hub's CPU usage, you can
do the stream splitting and filtering on the Spoke systems and run multiple Receiver DataMovers on the
Hub.
Each Spoke can specify the IP address of its collation system independently, so there is no need for them
all to feed into the same system. This allows you to perform the collation and archiving close to the source
of (at least some of) the data, reducing data transmission overhead. There is some additional CPU cost on
the Spoke system (it is zIIP eligible), but it is less than the cost of running two SMF Extractors and two
Senders.
This reduces the TCPIP bandwidth used for transmission, by not duplicating SMF records that are going
to be passed to both the Collector and the Collator until after the data has reached the Hub. The hub will
incur the cost of splitting and filtering the data streams from each Spoke system using this model. For a
significant number of Spoke systems, this may add up to a noticeable CPU burden on the Hub system.
This is a good solution if the set of records you are collating is very similar to the set of records you are
collecting for IBM Z Performance and Capacity Analytics, as there are performance benefits from only
transmitting the data once. You may be able to omit the filtering in this instance, simply writing the same
data out to both log streams and paying the cost of rejecting the occasional record on both sides.
This approach keeps the cost for splitting and filtering the streams on the Spoke systems, at the cost of
duplicating the transmission of records destined for both Collation and Collection. In a large deployment
this may be necessary to manage the Hub's total CPU consumption.
This is a good solution if there is little overlap between the set of records you wish to archive and the set
you wish to feed into IBM Z Performance and Capacity Analytics. This is because there would be very little
data that would be transmitted to the hub twice in this situation.
Collation
This is the process of collecting the data into appropriate groups of SMF records and then writing the
records for that group out into archive files. The same SMF record may be in multiple groups. In this
case, it will be written out into the archive file for each group.
Collation Cycle
Fundamentally, the execution cycle of the Collator is:
1. Read a record from the Collator's input log stream.
2. Determine its key attributes.
3. See which collation groups (if any) it fits into.
4. Write it out to the files for each indicated collation group.
5. Remove the record from the Collator's input log stream.
Collation Rules
Each collation group is named and has one or more rules defined that determine which SMF records are
members of it.
Note: Defining the rules to be used during test and production is an exercise for the customer to
complete. No ‘standard’ rules are shipped with IBM Z Performance and Capacity Analytics.
Classification decisions may be based on:
1. SMF record Type is
2. SMF record Type is between
3. SMF record Type and Subtype are
4. SMF record Type is not
5. SMF record Type is not between
6. SMF record Type and Subtype are not
7. MVS System ID Is
8. Sysplex Name Is
Collation Shifts
The files for a collation group are automatically split up by year and day, with a separate file for each day.
They can be further split up by specifying a shift value. The default is DAILY, which results in all of the
SMF records that are received and classified as part of the group during each day being written out
to the same file. By specifying shorter shift values, you can cause the file to be split at predictable times of
the day. The splits are timed by the local time on the system the SMF records were issued on, not by
GMT or UTC, and not adjusted to any other time zone.
Segmentation
A shift for a collation group could contain billions of SMF records. The collation group may need to be
broken into smaller segments to suit the file system it will be written out to. The mechanism used for
this is a low level segmentation index. A maximum record count for a segment is specified in the ZWriter
output specification and, when that number is met, the existing segment is closed and a new one is
started. Additional segments may also be started if the Collator is stopped and restarted or if data arrives
significantly later than data that has already been written out to the last segment.
Collator Configuration
The basis of the Collator is a DataMover. This is a configurable pipeline processor that provides reusable
processing stages. It will be a continuously running started task.
New Stages
For the Collator function there are two new stages:
• COLLATE – This is a process stage that takes the input data and sorts it into the separate collation
groups. Collate performs the High and Mid level collation actions. As it gathers records for each set it
will accumulate them into a packet. When a packet is full – or hasn’t been updated for a few seconds –
it will send it to the next processing stage.
• ZWRITER – This is an output stage that will write the collated archive data out to one or more data files.
Zwriter performs low level segmentation and the associated file IO. It is strongly recommended that the
files it is writing to are located on DASD. They can be compressed and moved to tape after they have
been output.
Typical usage would be:
1. Input: LOGSTREAM Stage
2. Process: COLLATE Stage
3. Output: ZWRITER Stage
Existing Stages
In addition, you may need to use the following stages:
• JOIN – This is an input stage, which takes the output from a matching SPLIT stage. The correlation is
achieved through a matching channel name that is specified as a parameter.
• SPLIT – This process stage duplicates a data stream to each JOIN stage that is subscribed to the same
channel.
• FILTER – This process stage can be used to remove records from the SMF stream. The SMF data stream
must be unpacked to provide filtering. The filtering targets the same parameters as the grouping.
Records that do not pass the filter are discarded.
• PACKSMF – This process stage is used to repack SMF records after they have been unpacked by the
FILTER stage. It should be used before the records are transmitted over TCPIP or written to a log
stream. If this stage is omitted, it can result in inefficient usage of log stream space (making the log
stream a lot bigger than it needs to be) and less efficient TCPIP packet transmission.
Typical usage to duplicate and filter a stream is to add a Split stage and a Filter stage, then to add a
second route using the join for input:
• Route 1:
1. Input: LOGSTREAM Stage
2. Process: SPLIT Stage
3. Process: FILTER Stage
4. Process: PACKSMF
5. Output: LOGSTREAM Stage
• Route 2:
1. Input: JOIN Stage
2. Process: FILTER Stage
3. Process: PACKSMF
4. Output: LOGSTREAM Stage
This would write two filtered streams to different log streams and is the basis for the Hub splitter
configuration. The Spoke splitter is the same thing with TCPIP output stages instead of LOGSTREAM
output stages. These stages are available in the Collator, the DataMover, and the Forecaster.
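The sketch below illustrates that two-route layout using the documented stage keywords; the log stream names and SMF types are hypothetical, and the PACKSMF stages are shown without parameters, as none are described here:
#
routes = 2
route.1.name = Primary
route.2.name = Secondary
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.IN
#
process.1 = 3
process.1.1.type = SPLIT
process.1.1.channels = 1
process.1.1.channel.1 = secondary
process.1.2.type = FILTER
process.1.2.as = SMF
process.1.2.smf_types = 1
process.1.2.smf_type.1 = 30
process.1.3.type = PACKSMF
#
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.PRI
#
input.2.type = JOIN
input.2.channel = secondary
process.2 = 2
process.2.1.type = FILTER
process.2.1.as = SMF
process.2.1.smf_types = 1
process.2.1.smf_type.1 = 70
process.2.2.type = PACKSMF
outputs.2 = 1
output.2.1.type = LOGSTREAM
output.2.1.logstream = IFASMF.CF.SEC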
Stage Parameters
Existing DataMover mechanisms will be used, reading configuration data from the main Java properties
file that describes the DataMover's active configuration.
COLLATE Parameters
process.r.p.groups = 2
#
process.r.p.group.1.name = SECURE
process.r.p.group.1.hlq = ARCH.&SYSPLEX.&SYSTEM
process.r.p.group.1.smf_types = 3
process.r.p.group.1.smf_type.1 = 80
process.r.p.group.1.smf_type.2 = 70 3 4
process.r.p.group.1.smf_type.3 = 14:18
process.r.p.group.1.system = SYS1 SYS2 SYS3
process.r.p.group.1.sysplex = PLEX1 PLEX2
process.r.p.group.1.shift = 4_HOUR
#
process.r.p.group.2.name = TSO
process.r.p.group.2.hlq = ARCH.&SYSPLEX.&SYSTEM
process.r.p.group.2.smf_types = 1
process.r.p.group.2.smf_type.1 = 63
process.r.p.group.2.shift = DAILY
The route number is ‘r’ and the process stage number is ‘p’.
This would produce data sets starting with:
• ARCH.PLEX1.SECURE...
• ARCH.PLEX2.SECURE...
• ARCH.PLEX1.SYS1.TSO...
• ARCH.PLEX1.SYS2.TSO...
• ARCH.PLEX1.SYS3.TSO...
• ARCH.PLEX2.SYS1.TSO...
• ARCH.PLEX2.SYS2.TSO...
Qualifier values specified on groups override qualifier values specified on the COLLATE stage itself.
The hlq value specified for each group is the first part of the data set name. It may contain multiple
segments.
The special values for the hlq are: &SYSTEM and &SYSPLEX, which are replaced with the corresponding
values from the SMF event. It is the user's responsibility to ensure that the total length of the final group
name (including the dots between them and the segmentation suffix added by the ZWRITER stage) is not
more than 44 characters long and is a valid dataset name.
If the .system or .sysplex parameter is omitted from a group definition, then no records will be excluded
from the group on the basis of the omitted conditions. A group that is defined with only a name would
collect all SMF records generated within all systems feeding into the Collator.
You may specify a list of included SMF types with an .smf_types qualifier. The individual entries must
specify one of:
1. A single SMF type (e.g. 80)
2. A range of SMF types (e.g. 80:85)
3. A single SMF type and a list of one or more blank delimited subtypes (e.g. 80 2 3)
You may also specify a list of excluded SMF types with an .smf_notypes qualifier. The individual entries
must specify one of:
• A single SMF type (e.g. 80)
• A range of SMF types (e.g. 80:85)
• A single SMF type and a list of one or more blank delimited subtypes (e.g. 80 2 3)
The exclusion rules are applied after the inclusion rules, so including 80:90 and then excluding 84:87
would leave you with just types 80, 81, 82, 83, 88, 89 and 90.
If there are no inclusion rules, then all SMF records are included. If there are no exclusion rules, then no
records are excluded.
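For example, the inclusion and exclusion just described could be expressed as follows (with illustrative route, stage, and group numbers):
process.1.1.group.1.smf_types = 1
process.1.1.group.1.smf_type.1 = 80:90
process.1.1.group.1.smf_notypes = 1
process.1.1.group.1.smf_notype.1 = 84:87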
The allowed shift values are:
• DAILY – Each day's data is kept in a single file.
• 12_HOUR – The day's data is split at noon and midnight into 2 files.
• 8_HOUR – The day's data is split at midnight, 8am and 4pm.
• 6_HOUR – The day's data is split at midnight, 6am, noon and 6pm.
• 4_HOUR – The day's data is split at midnight, 4am, 8am, noon, 4pm and 8pm.
• 3_HOUR – The day's data is split at midnight, 3am, 6am, 9am, noon, 3pm, 6pm and 9pm.
• 2_HOUR – The day's data is split every 2 hours starting at midnight.
ZWRITER Parameters
output.r.o.fileopts = ab+,noseek,type=record,recfm=VBS,lrecl=32756,blksize=27998
output.r.o.max_records = 500000
The route number is ‘r’ and the output stage number is ‘o’. The name used for each output file is the
group name from the COLLATE stage. The range of values for the fileopts string is described at:
https://www.ibm.com/docs/en/zos/2.5.0?topic=functions-fopen-open-file
The filename that will be used will be the collation name derived above with the addition of a low level
segmentation suffix. The suffix starts at .AA and counts through .AZ, then .A0 to .A9, then .BA to .B9, .CA
to .C9, and so on, ending at .Z9. With 26 possible first characters and 36 possible second characters, this
allows up to 936 low level segments for each collation group shift. Optimal fileopts and max_records
settings will be the responsibility of the customer to determine. SMF records must be written to VBS
format data sets, or issues with record length may be encountered. The output will be a sequence of SMF
records written to a VBS data set in the same format as your SMF extracted files.
Note: Once the files have been written out, no further management of the files is offered by IBM Z
Performance and Capacity Analytics.
• Policies for size, retention, compression etc... can be set through system storage policy management
using DFSMS.
• Offload to tape can be automated through DFHSM or equivalent.
#
routes = 1
#
route.1.name = Collate
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS02
input.1.check_LGRH = yes
input.1.block = 100
input.1.wipe = no
input.1.checkpoint = no
input.1.max_pack = 10
input.1.sourcename = LOGSTREAM
input.1.sourcetype = AGGREGATE
#
process.1 = 1
process.1.1.type = COLLATE
process.1.1.groups = 3
process.1.1.group.1.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.1.name = SMF100
process.1.1.group.1.smf_types = 1
process.1.1.group.1.smf_type.1 = 100
process.1.1.group.1.shift = 2_HOUR
process.1.1.group.2.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.2.name = SMF90
process.1.1.group.2.smf_types = 1
process.1.1.group.2.smf_type.1 = 90:105
process.1.1.group.2.shift = 2_HOUR
process.1.1.group.3.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.3.name = BAR90
process.1.1.group.3.smf_notypes = 1
process.1.1.group.3.smf_notype.1 = 90:105
process.1.1.group.3.shift = 2_HOUR
#
outputs.1 = 1
output.1.1.type = ZWRITER
output.1.1.fileopts = ab+,noseek,type=record,recfm=VBS,lrecl=32756,blksize=27998,space=(Cyl,
(100,50),rlse)
output.1.1.max_records = 500000
#
This will read SMF records from a log stream called IFASMF.CF.LS02 and collate them into three groups:
• The first, SMF100, will contain all of the SMF type 100 records in the input stream
• The second, SMF90, will contain all SMF records of type 90 thru 105
• The third, BAR90, will contain all SMF records except records of type 90 thru 105
No records will be discarded as they will all match either the SMF90 or the BAR90 group.
The files will be written out to three data sets:
• DEMO.COLTEST.&SYSPLEX.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.&SYSPLEX.SMF90.Dyyyyddd.Shh.ss
• DEMO.COLTEST.&SYSPLEX.BAR90.Dyyyyddd.Shh.ss
The &SYSPLEX symbolic will be replaced with the name of the sysplex that the records came from. If
the records are from more than one sysplex, then more than one output data set may be created for each
group. If you had a mixture of records from PLEX1, PLEX2, and PLEX3 as input, then the output files for
the SMF100 group would be:
• DEMO.COLTEST.PLEX1.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.PLEX2.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.PLEX3.SMF100.Dyyyyddd.Shh.ss
Each file would only contain records from the indicated sysplex.
Collator Installation
Unpack the Collator.tar:
tar -xovf Collator.tar
Copy the Collator directory to where you want a working directory for the Collator. Edit the Collator.sh
file and fill in the configuration details.
• The path to the working directory
• The path to a 64-bit Java 8 installation
If you aren’t turning an IBM Z Performance and Capacity Analytics installation into a Hybrid installation,
you’ll need to allocate a new DASD Logstream. The SMF Extractor must be configured to capture and
archive SMF Records. See “Step 1: Installing the SMF Extractor” on page 45.
Edit Collator/config/collate.properties
• Specify the name of the input log stream
• Specify your collation rules
• Ensure the hlq for the output data sets is something the job will have the authority to create new data
sets under
There are two JCL samples in the CNTL data set. You’ll need to copy them to a JCL data set.
• DRLJCOP is the PROC to run the Collator
– You need to change the working directory name
• DRLJCOJ is JCL to run the PROC as a JOB
– You need to change the PROCLIB
Submit the job and the collator will start running.
Data Splitter
Overview
The Data Splitter is used with the newly enhanced SMF Extractor to distribute raw SMF data from
additional log stream outputs to one or more subscribers. If you have not already done so, review the
“Introduction to the Data Splitter and the SMF Extractor” on page 11.
The only other component the Data Splitter requires to be installed and running is the SMF Extractor.
You can set up a DataMover (Catcher or Receiver) to receive the streamed raw SMF records and write
them to disk, if that makes them easier to process. If you choose to implement your own TCPIP code to
receive the streamed SMF records from the Data Splitter, they will arrive packed into IBM Z Common
Data Provider Type 2 Open Streaming API data packets. See “Receiving raw SMF records from the SMF
Extractor” on page 11.
Copy the Data Splitter directory to where you want the working directory for the Data Splitter.
• Each Data Splitter that you want to run should have its own working directory.
Edit the DataSplitter.sh file and fill in the configuration details.
• The path to the working directory
• The path to a 64-bit Java 8 installation
There are two JCL samples in the CNTL data set. You will need to copy them to a JCL data set.
• DRLJDSP is the PROC to run the Data Splitter
– You will need to change the working directory name
• DRLJDSJ is JCL to run the PROC as a JOB
– You will need to change the PROCLIB
When you submit the job, the Data Splitter will start running. Prepare its configuration before you submit
the job.
Direct streaming
Using the RawSplitter configuration, each TCPIP output stage will use the SourceType attribute to decide
which packets to send to its receiver. Each receiver will only get packets that the Data Splitter has been
configured to send it.
Modifying the DRLFPROF data set
Dialog parameters
This topic describes dialog parameters that are set initially by member DRLEINI1 in the
DRLxxx.SDRLEXEC library and read from the userid.DRLFPROF data set. IBM Z Performance and Capacity
Analytics initializes a new user's first dialog session with parameter settings from userid.DRLFPROF. From
that point forward, a user's dialog parameters are in personal storage in member DRLPROF in the library
allocated to the ISPPROF ddname, which is usually tsoprefix.ISPF.PROFILE. If DRLFPROF exists, a user
changes parameter values through the Dialog Parameters window. DRLEINI1 continues to set parameters
that do not appear in the Dialog Parameters window. It does this when a user starts IBM Z Performance
and Capacity Analytics.
“Step 4: Preparing the dialog and updating the dialog profile” on page 21 describes the installation step
where userid.DRLFPROF is customized for your site. It refers to this section for descriptions of:
• “Modifying the DRLFPROF data set” on page 113
• “Overview of the Dialog Parameters window” on page 114
• “Dialog parameters - variables and fields” on page 115
• “Allocation overview” on page 124
Dialog parameters - variables and fields
The Dialog Parameters window lists each userid.DRLFPROF variable together with its Dialog Parameters
field name, its default value, and a column for your value. The variables and fields are described below.
The Db2 panel library, which, depending on the value of db2def, is either a fully qualified name or a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The English Db2 panel library, which, depending on the value of db2def, is either a fully qualified name or a value that IBM Z Performance
and Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
Specifies whether the QMF output is saved in the DSQPRINT data set (YES) or in the SYSOUT class (NO).
The Db2 subsystem where IBM Z Performance and Capacity Analytics resides.
This required field can be 4 alphanumeric characters. The first character must be alphabetic.
The default value is DSN. If the value in this field is something other than DSN, it was changed during installation to name the correct Db2
subsystem.
Do not change the value to name another Db2 subsystem to which you might have access. IBM Z Performance and Capacity Analytics must
use the Db2 subsystem that contains its system, control, and data tables.
The Db2 plan name to which the distributed IBM Z Performance and Capacity Analytics for z/OS DBRM has been bound.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default value for this field is DRLPLAN. If the value in this field is something other than DRLPLAN, it may have been changed during
installation to refer to a customized plan name for IBM Z Performance and Capacity Analytics.
Only change the plan name shown here if instructed to do so by your IBM Z Performance and Capacity Analytics system administrator.
The Db2 database that contains all IBM Z Performance and Capacity Analytics system, control, and data tables. The value of this field is set
during installation.
This required field can be up to 8 alphanumeric characters. The first character must be alphabetic. The value of this field depends on the
naming conventions at your site.
The default database is DRLDB. If this value is something other than DRLDB, it is likely the default value for your site.
Do not change this name to identify another Db2 database to which you have access. You must use the Db2 database that contains IBM Z
Performance and Capacity Analytics.
The storage group that IBM Z Performance and Capacity Analytics uses for the Db2 database identified in the Database name field.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRLSG. If the value of the field is something other than DRLSG, it was changed during installation.
Do not change the value of this field to another storage group to which you might have access; IBM Z Performance and Capacity Analytics
uses the value of this field to create new tables.
The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. The value of this field depends upon your
naming conventions and is determined during installation.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRLSYS. If the value is something other than DRLSYS, it was changed during installation.
Do not change the value; IBM Z Performance and Capacity Analytics uses this value to access its system tables.
The prefix of IBM Z Performance and Capacity Analytics data tables in the Db2 database.
Valid values are determined at installation.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRL. If the value is something other than DRL, it was changed during installation.
Specifies whether or not to display the IBM Z Performance and Capacity Analytics environment data in the main panels.
This required field can have a value of YES or NO.
The default value for this field is NO.
The default buffer pool for IBM Z Performance and Capacity Analytics table spaces. This field can have values from BP0 to BP49, from
BP8K0 to BP8K9, from BP16K0 to BP16K9, from BP32K to BP32K9. The buffer pool implicitly determines the page size. The buffer pools
BP0, BP1, ..., BP49 hold 4-KB pages. The buffer pools BP8K0, BP8K1, ..., BP8K9 hold 8-KB pages. The buffer pools BP16K0, BP16K1, ...,
BP16K9 hold 16-KB pages. The buffer pools BP32K, BP32K1, ..., BP32K9 hold 32-KB pages.
The default buffer pool for IBM Z Performance and Capacity Analytics indexes. This field can have values from BP0 to BP49 (The buffer
pool for indexes must identify a 4-KB buffer pool).
The user IDs or group IDs of users who are granted Db2 access to the next component you install. Users or user groups with Db2 access to
a component have access to the tables and views of the component. You can specify up to 8 users or group IDs in these fields.
You must specify a value for at least one of the fields.
Each user ID or group ID can be 8 alphanumeric characters. The first character must not be numeric.
The default is DRLUSER, as shipped by IBM. You can use any user group ID that is valid for your Db2 system. You should use one such
group ID to define a list of core IBM Z Performance and Capacity Analytics users (who might include yourself). It is a good idea to leave
such a core group as the value in one of the fields, regardless of whether you control user access to various components by adding other
group IDs.
You can grant users access to the tables and views of a component by listing them here before you install the component.
Consider using RACF group IDs or Db2 secondary authorization IDs and specifying them in these fields before installing a component. It is
easier to connect individual user IDs to an authorized group than it is to grant each individual access to each table or view that they need.
The QMF language for creating reports and queries, either SQL (structured query language) or PROMPTED QUERY.
PROMPTED QUERY is the default QMF language for IBM Z Performance and Capacity Analytics.
This is a required field, if your installation uses QMF.
The SYSOUT class for report data sets that QMF generates, or for output that QMF routes to a printer. The default value is Q.
This is a required field, if your installation uses QMF.
The GDDM nickname of a printer to use for printing graphic reports. The printer should be one capable of printing GDDM-based graphics.
The printer name must be defined in the GDDM nicknames file, allocated to the ADMDEFS ddname. Refer to QMF: Reference and GDDM
User's Guide for more information about defining GDDM nicknames.
This field is used only if your installation does not use QMF.
A valid SYSOUT class for printing tabular reports in batch. Valid values are A-Z, 0-9, and *.
This field is used only if your installation does not use QMF.
The number of report lines that should be printed on each page when you print tabular reports online and in batch.
The maximum number of rows for any single retrieval from an IBM Z Performance and Capacity Analytics table when using an IBM Z
Performance and Capacity Analytics-Db2 interface for such functions as listing tables, reports, or log definitions.
The value of this required field is the maximum allowed size of the IBM Z Performance and Capacity Analytics Db2 table to be retrieved.
The default value is 5000 rows of data.
The dialog mode for using the reporting dialog. Any option you save applies to future sessions.
You can choose administrator mode to access reports belonging to all users if you have an IBM Z Performance and Capacity Analytics
administrator authority. You can choose end-user mode to access reports that you have created or that have been created for you
(including public reports).
Type 1 to use end-user mode or 2 to specify administrator mode. If you leave the field blank, the default is end-user mode.
The language in which IBM Z Performance and Capacity Analytics displays all its windows.
IBM Z Performance and Capacity Analytics supports those languages listed in the window. Choose the language your site has installed.
If you leave this field blank, IBM Z Performance and Capacity Analytics displays its windows in English.
Any changes you make to this field become effective in your next dialog session, when IBM Z Performance and Capacity Analytics allocates
its libraries.
The prefix to which IBM Z Performance and Capacity Analytics appends Db2 data set names as it performs tasks.
This field is required if db2def is SUFFIX. If db2def is DATASET, this field is ignored.
This field can be 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default is DB2.V810. If the value of this field is something other than DB2.V810, it was changed during installation.
Any changes you make to this field become effective in your next session, when IBM Z Performance and Capacity Analytics allocates Db2
libraries and data sets.
The suffix that IBM Z Performance and Capacity Analytics appends as the low-level qualifier for Db2 data sets that IBM Z Performance and
Capacity Analytics uses. Most sites do not use a Db2 data set suffix, but this depends on your Db2 naming conventions.
This field can be used if db2def is SUFFIX. If db2def is DATASET, this field is ignored.
This field can be 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
Your IBM Z Performance and Capacity Analytics administrator can set a default value for this field if it is in use at your site. If the field is
blank, it is very likely not in use.
Do not use this field to qualify data sets that you create; this is not its purpose. Use it to identify Db2 modules only.
Any changes you make to this field are not effective until your next invocation of the dialog, when IBM Z Performance and Capacity
Analytics has a chance to reallocate Db2 libraries and data.
This field is used only if your installation uses QMF. The prefix to which IBM Z Performance and Capacity Analytics appends all QMF data
set names. This includes all QMF libraries allocated by the dialog during invocation. It also includes all QMF queries and forms.
If qmfdef is SUFFIX, this field is required. If qmfdef is DATASET, this field is ignored.
This field can be up to 35 alphanumeric characters. Names longer than 8 characters must be in groups of not more than 8 characters,
separated by periods. The first character of each group must be alphabetic.
The default is DB2.V810. If the value is something other than DB2.V810, it was changed during installation.
Do not use this value to identify your personal QMF data sets. IBM Z Performance and Capacity Analytics uses this value for all QMF data
sets.
Any changes you make to this field become effective in your next session, when IBM Z Performance and Capacity Analytics allocates its
libraries.
The prefix for any temporary data sets you create while using IBM Z Performance and Capacity Analytics.
This required field can be up to 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default value is your user_ID or the TSO_prefix.user_ID.
The partitioned data set (PDS) that contains definitions of IBM Z Performance and Capacity Analytics objects you have created. The value
of this field depends on naming conventions that apply to IBM Z Performance and Capacity Analytics.
The members of this PDS contain definition statements that define new objects to IBM Z Performance and Capacity Analytics. IBM Z
Performance and Capacity Analytics uses the value of this field to locate local definition members.
This optional field can be 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.DEFS. Your administrator can set a different default for this field during installation. Do not change the value
that your IBM Z Performance and Capacity Analytics administrator sets.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates local definition data sets.
The partitioned data set (PDS) that contains definitions of IBM Z Performance and Capacity Analytics objects you have modified. The value
of this field depends on naming conventions that apply to IBM Z Performance and Capacity Analytics.
The members of this PDS contain definition statements that define user modified objects to IBM Z Performance and Capacity Analytics.
This PDS also contains members with alter statements built by the update processor on the definitions contained in the same PDS. IBM Z
Performance and Capacity Analytics uses the value of this field to locate local user definition members.
This optional field can be 44 alphanumeric characters. Names longer than 8 characters must be in groups of not more than 8 characters,
separated by periods. The first character of each group must be alphabetic.
The default PDS is DRL.LOCAL.USER.DEFS. Your administrator can set a different default for this field during installation. Do not change the
value that your IBM Z Performance and Capacity Analytics administrator sets.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates local definition data sets.
The data set where you keep your GDDM formats for graphic reports.
Use this field to identify a PDS that contains messages generated by users during communication with IBM Z Performance and Capacity
Analytics administrators.
The value of this field depends on naming conventions that your IBM Z Performance and Capacity Analytics administrator has established.
This required field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates the message data set.
The PDS where IBM Z Performance and Capacity Analytics saves your tabular reports.
This optional field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.REPORTS.
The PDS where IBM Z Performance and Capacity Analytics saves the graphic reports you choose to save.
This optional field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.ADMGDF.
The job statement information to be used for batch jobs that the dialogs create for you.
You must use correct JCL in the job statement. IBM Z Performance and Capacity Analytics does not validate job statement information.
Do not use JCL comments in these JCL statements.
You can specify up to four card images in these job statement fields.
The first "//" card image should contain the job name. Press Enter to save any job statements for all future sessions.
The IBM Z Performance and Capacity Analytics definitions data set suffix.
The IBM Z Performance and Capacity Analytics exec data set suffix.
The IBM Z Performance and Capacity Analytics skeleton data set suffix.
The IBM Z Performance and Capacity Analytics report definitions library suffix.
The IBM Z Performance and Capacity Analytics GDDM formats library suffix.
eng_qmf_sfx (Dialog Parameters field name: N/A; default value: E)
The method of describing QMF library names to IBM Z Performance and Capacity Analytics, either SUFFIX or DATASET.
If qmfdef is SUFFIX (the default), IBM Z Performance and Capacity Analytics implements the QMF library naming standard, requiring a
prefix for QMF data sets (def_qmfdspfx) and a suffix (described below). IBM Z Performance and Capacity Analytics appends each suffix to
the QMF prefix to identify QMF libraries, which it then allocates.
If qmfdef is DATASET, IBM Z Performance and Capacity Analytics does not use a prefix or suffix and you must specify fully-qualified data
set names for the QMF library variables described below.
In either case, IBM Z Performance and Capacity Analytics uses the next several variables to allocate QMF libraries.
The QMF CLIST library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The English QMF CLIST library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance
and Capacity Analytics appends to def_qmfdspfx. IBM Z Performance and Capacity Analytics requires this library even though you might be
using another language.
The QMF EXEC library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The English QMF EXEC library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance
and Capacity Analytics appends to def_qmfdspfx. IBM Z Performance and Capacity Analytics requires this library even though you might be
using another language.
The QMF panel library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The QMF message library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The QMF skeleton library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The ADMGGMAP library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The QMF panel library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The ddname of QMF DSQPNLx library. Even if you use fully-qualified data set names to identify QMF data sets, you must specify the
ddname of your DSQPNLx library as the value of this variable.
The QMF load library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The ADMCFORM library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.
The fully-qualified name of the data set to be allocated to ddname DSQUDUMP, or DUMMY.
The fully-qualified name of the data set to be allocated to ddname DSQDEBUG, or DUMMY.
db2ver N/A 10
db2rel N/A 1
The method of describing Db2 library names to IBM Z Performance and Capacity Analytics, either SUFFIX or DATASET.
If db2def is SUFFIX (the default), IBM Z Performance and Capacity Analytics implements the Db2 library naming standard, requiring a
prefix for Db2 data sets (def_db2dspfx), a library name, and an optional suffix (def_db2dssfx).
If db2def is DATASET, IBM Z Performance and Capacity Analytics does not use a prefix or a suffix and you must specify fully-qualified data
set names for the Db2 library variables described below.
In either case, IBM Z Performance and Capacity Analytics uses the next several variables to allocate Db2 libraries.
The Db2 runlib load library name, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z
Performance and Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The Db2 load library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The Db2 CLIST library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The Db2 message library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The Db2 panel library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.
The data set name of the GDDM master print queue, if any. This overrides any value specified for TSOPRNT in the GDDM external defaults
file. If you supply a value, IBM Z Performance and Capacity Analytics adds an ADMPRNTQ DD statement to the batch JCL for graphic
reports.
The application ID (usually sent as a TSO user ID) that has an assigned Information/Management privilege class. The default is the user ID
of the IBM Z Performance and Capacity Analytics user.
Specifies if QMF is used with IBM Z Performance and Capacity Analytics in your installation. Any value other than YES or NO causes IBM Z
Performance and Capacity Analytics to use YES.
Specifies if GDDM is used with IBM Z Performance and Capacity Analytics in your installation. (If QMF is used, GDDM must be used.) If
GDDM is not used, reports are always shown in tabular format. Any value other than YES or NO causes IBM Z Performance and Capacity
Analytics to use YES.
When generating tabular reports without QMF, IBM Z Performance and Capacity Analytics uses a period as the decimal separator and a
comma as the thousands separator. You can exchange the decimal and thousands separators by specifying decsep="COMMA". In that case,
a period is used as the thousands separator. Any other value of decsep causes IBM Z Performance and Capacity Analytics to use a period as
the decimal separator.
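For example, with the default setting the value 1234.56 is written as 1,234.56; with decsep="COMMA" it is written as 1.234,56.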
subhdrv N/A N
This value is used only for QMF (where qmfuse='YES'). Specify Y if you want IBM Z Performance and Capacity Analytics to replace empty
variables in the report header with a text string. You specify the text string using F11 on the Data Selection panel, or when you get message
DRLA171.
Note: Replacing empty variables increases the time taken to generate a report.
Specify N to leave the empty variable in the report.
def_useaot N/A NO
Specifies whether Analytics component tables are created as Accelerator Only Tables in IBM Db2 Analytics Accelerator or as tables in Db2.
"YES": Tables are created as Accelerator Only Tables.
"NO": Tables are created in Db2 and are applicable for use either as Db2 tables or as IDAA_ONLY table.
The default value is "NO".
This parameter is only applicable for Analytics components.
def_accelerator N/A
The name of the Accelerator where the Analytics components tables reside. Required only if using Accelerator Only Tables, that is, if
def_useaot is set to "YES".
This parameter is only applicable for Analytics components.
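For example, to create the Analytics component tables as Accelerator Only Tables, you might set the following in userid.DRLFPROF, using the same variable="value" form shown for decsep (the accelerator name here is illustrative):
def_useaot="YES"
def_accelerator="IDAA1"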
def_timeint N/A T
userid.DRLFPROF variable Dialog Parameters field name Default value Your value
name
Specifies the time interval granularity for records collected for Analytics components tables.
"H": The timestamp for records is rounded to hourly intervals, which is similar to non-Analytics tables with a suffix of "_H" in other
components.
"S": The timestamp for records is rounded to intervals of a second, which is similar to non-Analytics tables with time field instead of
timestamp in other components.
"T": The timestamp for tables is the actual timestamp in the SMF log record, which is similar to non-Analytics tables with suffix "_T".
The default value is "T".
This parameter is only applicable for Analytics components.
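For example (illustrative timestamp), a record with SMF timestamp 2022-06-15-14.37.25.123456 is stored with timestamp 2022-06-15-14.00.00 when def_timeint is "H", with 2022-06-15-14.37.25 when it is "S", and with the unchanged timestamp when it is "T".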
When installing components, this variable specifies whether SQL GRANTs are issued. When set to "NO", the pre-processor replaces each SQL
GRANT with a comment stating that the GRANT has been omitted.
Allocation overview
This section describes the data sets allocated by IBM Z Performance and Capacity Analytics.
IBM Z Performance and Capacity Analytics allocates the following data sets as a user starts a dialog session:
Ddname      Data set                                                                    Allocated by
DRLTABL     Userprefix.DRLTABL (for values in query variables)                          DRLEINI1
ADMGDF      Saved charts data set                                                       DRLEINI1
DRLMSGDD    IBM Z Performance and Capacity Analytics user message data set (drlmsgs)    DRLEINI1
IBM Z Performance and Capacity Analytics allocates the following libraries as a user starts a function that uses QMF:
Ddname      Data set                                                                    Allocated by
SYSPROC     QMF CLIST library (def_qmfdspfx.qmfclib+E)                                  DRLEQMF
SYSEXEC     QMF exec library (def_qmfdspfx.qmfelib+E)                                   DRLEQMF
ADMGGMAP    SDSQMAP library (def_qmfdspfx.qmfmap)                                       DRLEQMF
ADMCFORM    Saved forms data set + DSQCHART library (dsnpref.formsfx + def_qmfdspfx.qmfchart)    DRLEQMF
DSQUCFRM    Saved forms data set                                                        DRLEQMF
/**********************************************************************/
/* Sample Component */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMPONENTS
(COMPONENT_NAME, DESCRIPTION, USER_ID)
VALUES('SAMPLE','Sample Component',USER);
/**********************************************************************/
/* Log and record definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','LOG ','SAMPLE','DRLLSAMP');
⋮
/**********************************************************************/
/* Table space, table, and update definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','TABSPACE','DRLSSAMP','DRLSSAMP');
⋮
/**********************************************************************/
/* Report and report group definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','REPGROUP','SAMPLE','DRLOSAMP');
⋮
Executing these statements populates the IBM Z Performance and Capacity Analytics system tables
with component definitions. These component definitions describe the installable components and the
SDRLDEFS members that can be used to install the component.
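Assuming the default system-table prefix DRLSYS, you could verify the result with a query like the following (a sketch for illustration, not a supplied procedure):
SELECT COMPONENT_NAME, DESCRIPTION, USER_ID
  FROM DRLSYS.DRLCOMPONENTS
 WHERE COMPONENT_NAME = 'SAMPLE';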
the product version. For example, IBM.310 indicates objects created or modified by IBM Z Performance
and Capacity Analytics V3.1.0.
If an object is modified by an APAR, then the APAR number is used as the VERSION variable, for example,
VERSION 'PH10636'.
IBM Z Performance and Capacity Analytics recognizes the following version variable patterns as being
standard objects shipped by the product:
• Version numbers beginning with 'IBM'.
• Version numbers with no text (the empty string or no version clause).
• Version numbers beginning with an APAR number, that is, two letters followed by any number of
digits up to an optional decimal point. For example, the version numbers PM123, PX123456.V310,
RW987654, and OK123.2014101, are all considered 'standard' version numbers, but PK1234A and
MXC1234 are not.
The order of installation within a definition type is determined by the sort sequence of the definition
member names. The examples that follow appear in the same order that IBM Z Performance and Capacity
Analytics would install them.
Figure 55. Using SQL to define a table space (see definition member DRLSSAMP)
The following figure shows the definition for the DRLSKZJ1 table space of the z/OS Key Performance
Metrics component.
Figure 56. Using GENERATE to define a table space (see definition member DRLSKZJB)
Definition member DRLTSAMP, defining tables and updates (using DEFINE UPDATE)
/**********************************************************************/
/* Define table SAMPLE_USER */
/**********************************************************************/
SQL CREATE TABLE &PREFIX.SAMPLE_USER
(USER_ID CHAR(8) NOT NULL,
DEPARTMENT_NAME CHAR(8) NOT NULL,
⋮
/**********************************************************************/
/* Define update from record SAMPLE_01 */
/**********************************************************************/
DEFINE UPDATE SAMPLE_01_H
VERSION 'IBM.110'
FROM SAMPLE_01
TO &PREFIX.SAMPLE_H
GROUP BY
(DATE = S01DATE,
TIME = ROUND(S01TIME,1 HOUR),
SYSTEM_ID = S01SYST,
DEPARTMENT_NAME = VALUE(LOOKUP DEPARTMENT_NAME
IN &PREFIX.SAMPLE_USER
WHERE S01USER = USER_ID,
'?'),
USER_ID = S01USER)
SET
(TRANSACTIONS = SUM(S01TRNS),
RESPONSE_SECONDS = SUM(S01RESP),
CPU_SECONDS = SUM(S01CPU/100.0),
PAGES_PRINTED = SUM(S01PRNT));
⋮
/**********************************************************************/
/* Define update from SAMPLE_H */
/**********************************************************************/
DEFINE UPDATE SAMPLE_H_M
VERSION 'IBM.110'
FROM &PREFIX.SAMPLE_H
TO &PREFIX.SAMPLE_M
GROUP BY
(DATE = SUBSTR(CHAR(DATE),1,8) || '01',
SYSTEM_ID = SYSTEM_ID,
DEPARTMENT_NAME = DEPARTMENT_NAME,
USER_ID = USER_ID)
SET
(TRANSACTIONS = SUM(TRANSACTIONS),
RESPONSE_SECONDS = SUM(RESPONSE_SECONDS),
CPU_SECONDS = SUM(CPU_SECONDS),
PAGES_PRINTED = SUM(PAGES_PRINTED));
Defining triggers
Triggers are treated like updates and are defined in the DEFS member that contains the table for which the
trigger is required. Triggers are defined using SQL and follow the SQL rules, with one exception: the BEGIN
ATOMIC clause. In SPUFI, the SQLTERM() parameter or the --#SET TERMINATOR command can be used to
change the termination character, which allows a semi-colon ";" to be nested within the SQL command.
DEFS coding supports neither of these. Instead, when coding a trigger in DEFS, you achieve the same effect
by terminating the statement containing the BEGIN ATOMIC clause with a hash character that has a blank
character before and after it: " # ".
Example:
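A minimal sketch of what this might look like in a DEFS member (the trigger name and the update it performs are illustrative, not supplied objects; note the blank-delimited hash that ends the whole statement):
SQL CREATE TRIGGER &PREFIX.SAMPLE_TRG
  AFTER INSERT ON &PREFIX.SAMPLE_H
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    -- nested statements keep their normal semi-colon terminators
    UPDATE &PREFIX.SAMPLE_USER
       SET DEPARTMENT_NAME = N.DEPARTMENT_NAME
     WHERE USER_ID = N.USER_ID;
  END #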
Defining reports
DRLOxxxx members of the DRL310.SDRLRENU library use report definition language to define report
groups and reports in IBM Z Performance and Capacity Analytics system tables. Report definition
members are contained in national language-specific definition libraries.
Figure 57 on page 131 shows the definition for the reports and report group of the Sample component.
Figure 57. Definition member DRLOSAMP, defining reports and report groups
The IBM Z Performance and Capacity Analytics report definition program uses the definitions in DRLOxxxx
members to locate these types of members for each report:
Member type
Description
DRLQxxxx
Report queries in DRL310.SDRLRxxx
DRLFxxxx
Report forms in DRL310.SDRLRxxx
DRLGxxxx
Report charts in DRL310.SDRLFxxx
where xxx refers to your national-language code (for example, ENU for English).
IBM Z Performance and Capacity Analytics imports members in these data sets to QMF to provide queries
and forms for predefined reports. If QMF is not used, the contents of the report queries and forms are
stored in IBM Z Performance and Capacity Analytics system tables.
DRLQxxxx members in the DRL310.SDRLRENU library are queries for predefined reports. Figure 58 on
page 132 shows the query for Sample Report 1.
Figure 58. IBM Z Performance and Capacity Analytics definition member DRLQSA01, report query
DRLFxxxx members in the DRL310.SDRLRENU library are QMF forms for predefined English reports. For
example, DRLFSA01 is the QMF form for Sample Report 1.
DRLGxxxx members in the DRL310.SDRLFENU library are GDDM/ICU formats for predefined English
reports. For example, DRLGSURF is the GDDM/ICU format used for Sample Report 1.
Naming convention for members of DRL310.SDRLRENU
DRLTxxxx
Table and update definitions
DRLUxxxx
Update definitions (when separate from tables)
DRLVxxxx
View definitions
DRLWxxxx
Migration definitions
Collecting log data
Procedure
1. From the Administration window, select 3, Logs, and press Enter.
The Logs window is displayed.
2. From the Logs window, select Sample and press F11.
The Collect window is displayed.
3. Type DRL310.SDRLDEFS(DRLSAMPL) in the Data set field.
This is the name of the data set that contains the log data.
4. Press F4 to start an online collect process.
After the data collection is complete, IBM Z Performance and Capacity Analytics displays statistics
about the collect. (See “Sample collect messages” on page 140 for more information about the
statistics.)
5. When the collect is complete, press F3.
IBM Z Performance and Capacity Analytics uses the log collector program (DRLPLC) to collect the SAMPLE
log type, using these ddnames:
DD statement name
Description
DRLIN
Contains the log collector language statements. It can contain fixed-length or varying-length records
of any length, but the log collector reads a maximum of 72 bytes from each record.
DRLLOG
Identifies the log data set. The data set attributes are determined by the program creating the log.
DRLOUT
Identifies where collect messages are routed. It can have fixed-length or varying-length records of
any length, but the log collector assumes a length of at least 80 bytes for formatting. Lines that are
longer than the specified record length are wrapped to the next line. DRLOUT is allocated as RECFM=F
and LRECL=80 if no DCB attributes are specified.
DRLDUMP
Identifies where collect diagnostics are routed. It can have fixed-length or varying-length records
of any length, but the log collector assumes a length of at least 80 bytes for formatting. Lines that
are longer than the specified record length are wrapped to the next line. DRLDUMP is allocated as
RECFM=F and LRECL=80 if no DCB attributes are specified.
DRLSMSG
Contains message numbers to be written to SYSLOG. It can contain fixed-length or varying-length
records of any length. Only messages 0000-0999 and 2000-2999 are eligible to be written to
SYSLOG. By default, only messages 0290-0298 are written to SYSLOG. A hash sign designates the start of a
comment, which extends to the end of the line.
Each record can contain the following values, separated by spaces:
*I
*W
*E
*S
*T
integer
-integer
where:
*I This enables all DRLnnnnI messages to be written to SYSLOG.
*W This enables all DRLnnnnW messages to be written to SYSLOG.
*E This enables all DRLnnnnE messages to be written to SYSLOG.
*S This enables all DRLnnnnS messages to be written to SYSLOG.
*T This enables all DRLnnnnT messages to be written to SYSLOG.
integer This enables the message with the specified number to be written to SYSLOG.
-integer This disables the message with the specified number from being written to SYSLOG.
//DRLSMSG DD *
*S -0390 -0398 -0399
/*
//DRLIN DD *
COLLECT SMF;
/*
//DRLLOG DD DISP=SHR,DSN=log-data-set
//DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
Some logs require special collect procedures, which are supplied by the product.
Procedure
1. Identify which log was collected and when it started.
The first messages in a set of collect messages show when the collect starts and identify the data set.
The product then shows the timestamp of the first identified record in the log. The product commits data
to the database when either:
• All log data set records have been processed.
• A specific number of records have been read. The number is specified in the COMMIT AFTER
operand of the COLLECT statement.
3. Determine the last record that the product identified in the log.
5. Verify that user-defined log, record, and update definitions are performing as expected. Check that
appropriate data is being collected and stored in the appropriate tables.
7. Examine database activity to identify tables with the most activity during collect processing.
Database inserts and updates show the number of rows inserted or updated in Db2 tables. The
number of rows inserted in the database and the number of rows updated in the database equal the
number of buffer inserts. Statistical messages of this sort look like these:
8. You can use message DRL0356I to optimize the collect process by selecting the SCAN or DIRECT
parameter. For more details, refer to the Language Guide and Reference. The following is an example of
message DRL0356I:
DRL0356I To update the database, the algorithm SCAN was most selected.
Note: First timestamp is the first record selected, Last timestamp is the last record selected. Last
timestamp might show an earlier date and time than the first timestamp.
IBM Z Performance and Capacity Analytics can produce a report from DRLLOGDATASETS that shows
statistics for every collect job in the table.
The product does not update DRLLOGDATASETS until a collection results in a successful commit. If it
finds an error that terminates processing of a log data set, such as a locking error or an out-of-space error,
it does not update DRLLOGDATASETS. If it has already created a row for the log data set (which it does
at the first commit), it does not update indicators of a successful conclusion to processing, such as the
Elapsed seconds column or the Complete column. See “Recovering from database errors” on page 156
for more information.
Refer to “DRLLOGDATASETS” on page 259 for a description of its columns.
//DRLIN DD *
COLLECT log-name
...
//DRLLOG DD DISP=SHR,DSN=log-data-set-1
DD DISP=SHR,DSN=log-data-set-2
DD DISP=SHR,DSN=log-data-set-3
//DRLOUT DD SYSOUT=*
If the log collecting job stops prematurely, you can start it again. In this case, the log collector does not
collect the records of the data sets that were already completely processed and the following messages
are issued:
Note: If the IMS checkpoint mechanism (DRLICHKI, DRLICHKO) is used, you cannot resubmit the same
collect job when using multiple concatenated IMS data sets. If you resubmit the same collect job, you
could encounter duplicate-key errors, because the DRLICHKI data set of the previous job would be used.
Procedure
1. Optimize the collect buffer size.
Optimizing the size of the collect buffer has the greatest impact on performance.
a) Reduce the number of times IBM Z Performance and Capacity Analytics stops reading a log data set
to write data to the database by increasing the buffer size.
Message DRL0313I shows the number of database updates because of a full buffer. Look for cases
where the number of updates could be reduced by increasing the size of the buffer.
The optimum is to reduce the number of updates to 0.
b) The default buffer size is 10 MB. Use the buffer size operand of the COLLECT statement to increase
the size to between 20 MB and 30 MB, or more.
Refer to the Language Guide and Reference for more information about the COLLECT statement.
c) Do not use the COMMIT AFTER nn RECORDS operand on the COLLECT statement.
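For example, the DRLIN statements might look like this (a sketch; check the Language Guide and Reference for the exact syntax of the buffer size operand):
COLLECT SMF
  BUFFER SIZE 30 M;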
2. Reduce the amount of data committed to the database.
a) Remove unnecessary tables using the INCLUDE/EXCLUDE clauses of the COLLECT statement.
b) Examine collect messages to determine the most active tables.
c) Concentrate on tables with a lot of buffer and database inserts and updates shown in DRL0326I
messages.
d) Modify update definitions to eliminate needless rows in tables.
For example, set a key column to a constant (such as a blank) instead of to a value from a record if
the detail is unnecessary.
e) Reduce the number of columns collected:
i) Delete unneeded columns from the update definition of the table.
ii) Remove the columns in the SQL CREATE TABLE statement of the table definition.
iii) Drop the table.
iv) Re-create the table.
Note: With Db2 multiple insert functionality, when data is collected to data tables, the insert
statements are issued in bulk. Multiple rows are inserted with a single Db2 multiple insert
statement. This results in significant performance improvements. However, this performance
improvement decreases as the number of columns inserted increases.
3. Improve update effectiveness.
a) Define an index on the primary key but no other indexes for tables you create.
b) Do not use a LOOKUP expression with the LIKE operand (especially for large lookup tables) in
update definitions you create. Use an = operand where possible.
c) Minimize the number of rows in lookup tables that allow global search characters and in the
PERIOD_PLAN control table.
d) Run collect when the processing load from other programs is low and when Db2 use is light.
e) Optionally, choose the appropriate algorithm to update the Db2 database by specifying the DIRECT
or SCAN parameter in the COLLECT statement.
If you do not specify any parameter, the collect process automatically chooses an algorithm among
the DIRECT, SCAN, and INSERT algorithms. This automatic selection, however, can be very time
consuming. To improve the performance, you can force the collect process to use either the DIRECT
or SCAN algorithm only, by specifying the DIRECT or SCAN parameter in the COLLECT statement.
For details about these parameters, refer to the Language Guide and Reference manual.
Figure 63. Db2 environment for the IBM Z Performance and Capacity Analytics database
                          Quantity
/ Tablespace  Primary   Secondary  Storage grp  Type       Locksize
  DRLSAIX     6000      3000       SYSDEFLT     SEGMENTED  TABLE
  DRLSCI08    100       52         STOEPDM      SEGMENTED  TABLE
  DRLSCOM     20000     10000      SYSDEFLT     SEGMENTED  TABLE
  DRLSCP      60        32         SYSDEFLT     SEGMENTED  TABLE
  DRLSDB2     40000     20000      SYSDEFLT     SEGMENTED  TABLE
  DRLSDFSM    60000     30000      SYSDEFLT     SEGMENTED  TABLE
  DRLSDPAM    100       52         SYSDEFLT     SIMPLE     ANY
When you change table space or index space parameters, the product uses SQL commands to alter
the space directly, and creates a job to unload and load table data as necessary. IBM Z Performance
and Capacity Analytics does not change the definition of the table space. To do this, select the Space
pull-down on the Components window.
If you create a table in the product database, you must specify the database and table space in which Db2
is to create the table. Once created, a table can be addressed by its table name only. You do not need to
specify the table space name.
“Working with tables and update definitions” on page 216 describes how to use the administration dialog
to view, change, or create table spaces.
//********************************************************************
//SPACE EXEC PGM=IKJEFT01,DYNAMNBR=25
//*
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD <== DATA SET NAME
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC <== DATA SET NAME
//SYSEXEC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC <== DATA SET NAME
//***************************
//* START EXEC DRLETBSR
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLETBSR LIBRARY= DRLvrm.SDRLDEFS -
Db2SUBSYS= DSN -
SYSPREFIX= DRLSYS -
COMPONENT= xxxx -
TABLENAME= * -
RECNUMBER= xxxx -
PAGESIZE= 4K -
MAXROWS= 255 -
PCTFREE= 5 -
FREEPAGE= 0 -
COMPRESS= 0
/*
Following is sample output for job DRLJTBSR that shows the space required for all tables of the IMS
collect component.
Sample output for DRLJTBSR
LIBRARY (IBM Z Performance and Capacity Analytics definition library, UPPERCASE)
The name of the partitioned data set that contains definitions of the product tables. This is a required
parameter. It is used for component tables that do not yet exist.
Db2SUBSYS (Db2 subsystem name, UPPERCASE)
The Db2 subsystem where the product resides. This is a required parameter.
SYSPREFIX (Prefix for system tables, UPPERCASE)
The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. This is a
required parameter. The value of this parameter depends on your naming convention and is determined
during installation.
TABLENAME (The name of the table, UPPERCASE)
The name of the IBM Z Performance and Capacity Analytics table. This is a required parameter. To specify
all component tables, type an asterisk, *. To specify all component tables whose names start with a
particular string, type the string. For example, type CICS_S for all component tables whose names start
with this string.
PAGESIZE (Db2 page size). Default: 4096 (4K)
The Db2 page size. This is an optional parameter; when specified, it must be either 4K or 32K.
MAXROWS (Maximum number of rows per page). Default: 255
The maximum number of rows per page. This is an optional parameter; when specified, it must be a
numeric value between 1 and 255.
PCTFREE (Percentage of free space on each page). Default: 5
The percentage of free space per page. This is an optional Db2 parameter; when specified, it must be a
numeric value between 1 and 255.
FREEPAGE (Number of free space pages). Default: 0
The number of free space pages. This is an optional Db2 parameter; when specified, it must be a numeric
value between 1 and 255.
For detailed information about the parameters, refer to the Db2 for z/OS: SQL Reference.
For information about Db2, refer to the Db2 for z/OS: Administration Guide and Reference.
For information about the algorithm used for calculating table space requirements, refer to the Db2 for
OS/390 Installation Guide.
For variable-length fields, the average record size is calculated using the maximum length. The average record
size does not include the GRAPHIC, VARGRAPHIC, and LONG VARGRAPHIC Db2 data types. When you specify
the estimated number of records, remember that the product collects data from tables according to rules
specified in the update definitions. Tables containing the same data may therefore have different numbers
of rows. For example, an hourly table may contain a greater number of rows than a daily table.
Reorg/Discard utility
The Reorg/Discard utility enables you to delete the data included in the tables using the Purge condition
included in the DRLPURGECOND table. This table is provided in IBM Z Performance and Capacity
Analytics. At the same time, the Reorg/Discard utility automatically reorganizes the table space where
data has been deleted.
The records deleted by the Discard function are automatically saved in a specific data set, SYSDISC. A
SYSPUNCH data set is also created; it can be used at a later time to reload the discarded data into the table, if required.
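To reload discarded data, you run the Db2 LOAD utility with the punched statements as input. A rough sketch follows (the data set names are illustrative, and the input ddname must match the INDDN value in the punched LOAD statement, so check SYSPUNCH first):
//RELOAD  EXEC DSNUPROC,SYSTEM=DSN,UID='RELOAD'
//* Allocate the discard data set to the input ddname named in the
//* punched LOAD statement
//SYSREC  DD DISP=SHR,DSN=MYUID.TAB.DISCARDS
//SYSIN   DD DISP=SHR,DSN=MYUID.TAB.SYSPUNCH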
During the Discard step, the Reorg function reorganizes the table space to improve access performance
and reclaim fragmented space. Also, the keyword STATISTICS is automatically selected for the Reorg/
Discard, enabling you to collect online statistics during database reorganization.
See the Db2 for z/OS: Utility Guide and Reference, for more information about Reorg/Discard utility.
There are two ways to run the Reorg/Discard utility from the Administration window of IBM Z Performance
and Capacity Analytics:
From the Tables window, select option 12 from the Utilities pull-down menu.
In this way, the data contained in the table or tables selected from the table list is discarded, and a
space reorganization is automatically performed in the table space where the selected tables reside. The
Discard operation is only performed on the selected tables, while the Reorg operation is performed on all
the tables contained in the table space. You cannot run the Discard utility on views, or on tables that do not
have a discard condition specified in the DRLPURGECOND table.
As an alternative, use option 1 from the Maintenance pull-down menu of the Tables window to open the
Tablespace window, then select option 3 from the Utilities pull-down menu.
                          Quantity
/ Tablespace  Primary   Secondary  Storage grp  Type       Locksize
  DRLSAIX     6000      3000       SYSDEFLT     SEGMENTED  TABLE
  DRLSCI08    100       52         STOEPDM      SEGMENTED  TABLE
  DRLSCOM     20000     10000      SYSDEFLT     SEGMENTED  TABLE
  DRLSCP      60        32         SYSDEFLT     SEGMENTED  TABLE
  DRLSDB2     40000     20000      SYSDEFLT     SEGMENTED  TABLE
  DRLSDFSM    60000     30000      SYSDEFLT     SEGMENTED  TABLE
  DRLSDPAM    100       52         SYSDEFLT     SIMPLE     ANY
In this second scenario, from the Tablespace window, you select the table spaces for the Reorg operation.
The Discard operation is automatically run on all the tables contained in the selected table spaces,
according to the conditions specified in the DRLPURGECOND table.
All the tables that have the Discard operation specified in the DRLPURGECOND table are included in the
processing. All the tables that do not have the Discard operation specified in the DRLPURGECOND table
are ignored.
When you run Reorg/Discard, whichever procedure you use, JCL is created and saved in your library,
so that it can be used at a later time, if required. When the JCL is submitted, the following data sets are
created:
SYSPUNCH
Used to reload the discarded data, if required, using the Load utility.
SYSDISC
Contains the records discarded by the utility.
In addition, the SYSREC data set is available. It contains all the records in the table, and you can
specify whether you want it to be Temporary or Permanent. If you specify Temporary, the data set is
automatically erased at the end of the reorganization job. If you specify Permanent, it is permanently
allocated on your disk.
When using the Reorg/Discard utility, you can select one or more tables and table spaces at a time.
However, data in the SYSPUNCH and SYSDISC data sets is overwritten; each data set therefore holds
only the information for the last table you processed.
The following is an example of how the Reorg/Discard utility works on a table space that contains several
tables:
//DSNUPROC.SORTOUT DD DSN=MYUID.DRLSROUT,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)
//DSNUPROC.WORK DD DSN=MYUID.WORK1,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)
//DSNUPROC.SYSPUNCH DD DISP=(MOD,CATLG),
// DSN=MYUID.TAB.SYSPUNCH,
// SPACE=(4096,(1,1)),UNIT=SYSDA
//DSNUPROC.SYSDISC DD DISP=(MOD,CATLG),
// DSN=MYUID.TAB.DISCARDS,
// SPACE=(4096,(5040,504)),UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=410,BLKSIZE=27880)
//DSNUPROC.SYSIN DD *
REORG TABLESPACE MYDB.DRLSCOM LOG YES
STATISTICS INDEX(ALL) DISCARD
FROM TABLE MYDB.AVAILABILITY_D
WHEN (
DATE < CURRENT DATE - 90 DAYS
)
FROM TABLE MYDB.AVAILABILITY_T
WHEN (
DATE < CURRENT DATE - 14 DAYS
)
FROM TABLE MYDB.AVAILABILITY_M
WHEN (
DATE < CURRENT DATE - 104 DAYS
)
/*
In this example, the Reorg/Discard utility reorganizes the MYDB.DRLSCOM table space and discards
data from the MYDB.AVAILABILITY_D, MYDB.AVAILABILITY_M, and MYDB.AVAILABILITY_T tables. This
example shows that the DDNAME for the SYSPUNCH data set is SYSPUNCH, the DDNAME for the discard
results data set is SYSDISC, and the DDNAME for the sort output data set is defaulted to SORTOUT. The
SYSDISC and SYSPUNCH data sets are reused every time the utility is run for all tables.
Purge utility
As an alternative to the Reorg/Discard utility, you can delete data and reorganize table space using the
Purge utility.
Each data table in a component has a Purge condition that specifies which data is to be purged from that
table. When you use the Purge function, the data specified in the purge condition is deleted.
Purge the contents of your database at least weekly. The sample job, DRLJPURG (in the
DRL310.SDRLCNTL library), purges all product database tables with Purge conditions. Figure 67 on page
153 shows part of DRLJPURG.
PURGE;
//DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
/*
The Purge utility generates messages that show whether the job ran as expected.
After purging the database, use the Db2 REORG utility to free the purged space for future use. There are
three methods of reorganizing your database:
1. Use option 1, Run Db2 REORG utility, from the Utilities menu on the Tablespace list window, shown in
Figure 64 on page 146. This reorganizes a whole table space.
2. Use option 10, Unload, from the Utilities menu on the Tables window, after having selected one or
more tables. When you Unload and then Load a table, it reorganizes it without affecting the other
tables in the table space.
Figure 68 on page 154 shows the list of tables, with the Utilities pull-down.
Figure 69. DRLJCOPY job for backing up IBM Z Performance and Capacity Analytics table spaces
Determining when to back up the IBM Z Performance and Capacity Analytics database
Procedure
1. Increase the primary and secondary quantities using the IBM Z Performance and Capacity Analytics
administration dialog (Figure 121 on page 230), or by using the Db2 SQL statements, ALTER
TABLESPACE or ALTER INDEX.
2. Reorganize the table space using the Db2 REORG utility as described in “Purge utility” on page 152 or
drop the index and recreate it as described in “Displaying and adding a table index” on page 218.
Correcting corrupted data in the IBM Z Performance and Capacity Analytics database
If the database has been incorrectly updated (for example, accidentally collecting the same log data
set twice or deleting required data), restore a previous backup copy with the Db2 RECOVER utility. For
information about backing up and recovering Db2 databases, refer to the Db2 for z/OS: Administration
Guide and Reference.
You need not restore product data after a collect job terminates because of a locking error or an
out-of-space condition. After correcting the error, run the job again. If the database has been updated, the collect resumes from the
last checkpoint recorded in the DRLSYS.DRLLOGDATASETS system table. If it had not committed data to
the database before the error, IBM Z Performance and Capacity Analytics recovers the data by collecting
from the first record in the log.
Learn more about the Db2 RUNSTATS utility from the description of its use in the Db2 for z/OS:
Administration Guide and Reference.
Start the RUNSTATS utility from the administration dialog by choosing it from the Utilities menu in the
Tables window. After using the RUNSTATS utility, use the administration dialog to see the number of bytes
used for data in the product database (described in “Showing the size of a table” on page 206).
More than one IBM Z Performance and Capacity Analytics user or function can request access to the data
at the same time. Db2 maintains data integrity at such times by locking the data against all processes
but one.
Learn more about Db2 locking and how it allows more than one process to work with data concurrently
from the discussion of improving concurrency in the Guide to Reporting.
Deadlock or timeout conditions can occur when more than one user works with IBM Z Performance and
Capacity Analytics tables, which causes Db2 to generate messages; for example:
DSNT408I SQLCODE = -911, ERROR: THE CURRENT UNIT OF WORK HAS BEEN
ROLLED BACK DUE TO DEADLOCK OR TIMEOUT. REASON 00C90088,
TYPE OF RESOURCE 00000100, AND RESOURCE NAME DRLDB
For more information, refer to the description of monitoring Db2 locking in the Db2 for z/OS:
Administration Guide and Reference.
For information about DB2PM, refer to the Db2 for z/OS: Administration Guide and Reference and to the
IBM Db2 Performance Monitor: User's Guide.
Using available tools to work with the IBM Z Performance and Capacity
Analytics database
For more information about DB2I, refer to the description of utility jobs in the Db2 for z/OS: Administration
Guide and Reference.
Administering reports
Procedure
1. Specify batch settings for the reports.
2. Define queries and forms suitable for batch reports.
3. Print reports or save them in data sets, using a batch job or the reporting dialog.
4. Optionally, save the reports for reporting dialog users and regularly replace the saved report data with
new data.
5. Optionally, include saved charts in BookMaster® documents.
When displayed from the dialog, IBM Z Performance and Capacity Analytics prompts you for values for
FROM_DATE, TO_DATE, and SYSTEM_ID. To run the report in batch, you must supply the values in the job
and you must change them when you want the reports to cover a different period.
You can change the query to require no variables and always cover the last week:
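For example (a sketch only; the table and column names are illustrative, and the supplied queries may differ), the query can use Db2 date arithmetic instead of variables:
SELECT DATE, SYSTEM_ID, TRANSACTIONS
  FROM DRL.SAMPLE_H
 WHERE DATE BETWEEN CURRENT DATE - 7 DAYS AND CURRENT DATE - 1 DAY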
//*********************
//* GDDM LIBRARIES
//*
//ADMGGMAP DD DISP=SHR,DSN=ADMGGMAPlibrary
//ADMCFORM DD DISP=SHR,DSN=ADMCFORMlibrary
// DD DISP=SHR,DSN=DRLvrm.SDRLFENU
//ADMSYMBL DD DISP=SHR,DSN=SYS1.GDDMSYM
//ADMDEFS DD DISP=SHR,DSN=SYS1.GDDMNICK
//*ADMPRNTQ DD DISP=SHR,DSN=ADMPRINT.REQUEST.QUEUE
//DSQUCFRM DD DISP=SHR,DSN=DRLvrm.SDRLFENU
//*********************
//* QMF LIBRARIES
//*
//DSQDEBUG DD DUMMY
//DSQUDUMP DD DUMMY
//DSQPNL DD DISP=SHR,DSN=QMFDSQPNLxlibrary
//DSQSPILL DD DSN=&&SPILL,DISP=(NEW,DELETE),UNIT=SYSDA,
// SPACE=(CYL,(1,1),RLSE),DCB=(RECFM=F,LRECL=4096,BLKSIZE=4096)
//DSQEDIT DD DSN=&&EDIT,UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE),
// DCB=(RECFM=FBA,LRECL=79,BLKSIZE=4029)
//DRLFORM DD DSN=&&FORMDS,UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
// DCB=(RECFM=VB,LRECL=255,BLKSIZE=2600),DISP=(NEW,DELETE)
//**********************
//* START EXEC DRLEBATR
//*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLEBATR SYSTEM=DSN SYSPREFIX=DRLSYS PREFIX=DRL -
PRINTER=XXX -
REPORT=XXXXXXXX,YYYYYYYY -
&SYSTEM_ID='SYS1' -
&FROM_DATE='1993-01-01' -
&TO_DATE='1993-04-01' -
DIALLANG=1 -
PRODNAME=IBM Z Performance and Capacity Analytics Report
/*
5. Edit the job, specifying the parameters described in “Parameters for batch reporting” on page 164.
Then type SUBMIT on the command line, and press Enter.
IBM Z Performance and Capacity Analytics submits your job to run in background.
6. Press F3 to return to the Reports window.
Refer to the Guide to Reporting for more information about running reports in batch.
SYSTEM (Db2 subsystem name, UPPERCASE). Default: DSN
The Db2 subsystem where IBM Z Performance and Capacity Analytics resides. This required parameter
can be 4 alphanumeric characters. The first character must be alphabetic. The default value is DSN. If the
value in this field is something other than DSN, it was changed during installation to name the correct Db2
subsystem. Do not change the value to name another Db2 subsystem to which you might have access.
IBM Z Performance and Capacity Analytics must use the Db2 subsystem that contains its system, control,
and data tables.
SYSPREFIX (Prefix for system tables, UPPERCASE). Default: DRLSYS
The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. The value of
this field depends upon your naming conventions and is determined during installation. This required
parameter can be 8 alphanumeric characters. The first character must be alphabetic. The default is
DRLSYS. If the value is something other than DRLSYS, it was changed during installation. Do not change
the value; IBM Z Performance and Capacity Analytics uses this value to access its system tables.
PREFIX (Prefix for all other tables, UPPERCASE). Default: DRL
The prefix of IBM Z Performance and Capacity Analytics data tables in the Db2 database. Valid values are
determined at installation. This required parameter can be 8 alphanumeric characters. The first character
must be alphabetic. The default is DRL. If the value is something other than DRL, it was changed during
installation.
CYCLE (DAILY, WEEKLY, or MONTHLY, UPPERCASE). Default: all reports
The run cycle for reports. If you do not specify daily, weekly, or monthly, all reports are printed.
GROUP (A report group ID, UPPERCASE). Default: all reports
The ID of a report group. If you do not specify a group, all reports are printed.
REPORT (One or more report IDs, UPPERCASE). Default: all reports
One or more reports to be printed. If you do not specify any reports, all reports are printed.
PRINTER (Default printer name, UPPERCASE). Default: as defined in the QMF profile
The GDDM nickname of a printer to use for printing graphic reports. The printer should be capable of
printing GDDM-based graphics. The printer name must be defined in the GDDM nicknames file, allocated
to the ADMDEFS ddname. Refer to the QMF: Reference and GDDM User's Guide for more information
about defining GDDM nicknames. This parameter cannot be used if QMF=NO.
DIALLANG (1 = English). Default: 1 (English)
With this parameter, you specify the language to be used.
QMF (YES or NO, UPPERCASE). Default: YES
With this parameter, you specify whether your installation uses QMF.
GDDM (YES or NO, UPPERCASE). Default: YES
With this parameter, you specify whether your installation uses GDDM.
DRLMAX (nnnn). Default: 5000
If your installation does not use QMF, you use this parameter to specify the maximum number of result
rows from a query.
PAGELEN (nn). Default: 60
If your installation does not use QMF, you use this parameter to specify the page length when printing
tabular reports.
PAGE (The word for page, Mixed case). Default: PAGE
If your installation does not use QMF, the word you specify here is inserted before the page number for
tabular reports. You can type the word in mixed case, for example, Page.
TOTAL (The word for total, Mixed case). Default: TOTAL
If your installation does not use QMF, the word you specify here is used as the column heading for across
summary columns in tabular reports. You can type the word in mixed case, for example, Total.
DECSEP (Period or comma). Default: PERIOD
If your installation does not use QMF, you use this parameter to specify the decimal separator to be
used in tabular reports. If you use a comma as a decimal separator, a period is used as the thousands
separator, if applicable.
PRODNAME (Report footer text, Mixed case). Default: IBM Z Performance and Capacity Analytics Report
This text is used in the report footer. If specified, PRODNAME must be the last parameter.
These books contain more information about using QMF in this way:
• QMF Advanced User's Guide
• QMF Reference
Administering problem records
Procedure
1. Select 2, Generate problem records, from the Utilities pull-down of the IBM Z Performance and
Capacity Analytics Administration window and press Enter.
The Exception Selection window is displayed.
2. Type 2, No, in the Problems only field to list all exception records.
Note: The default update definitions do not classify exceptions as problems. You can modify them to
set the problem flag (column PROBLEM_FLAG='Y' in the EXCEPTION_T table).
3. Type 1, Yes, in the Not generated only field to select exception records that have not yet been
generated as problem records in the Tivoli Information Management for z/OS database.
4. Select values for other required fields in the window.
Use the fields to restrict the number of exceptions in the list of exceptions.
Use F4 (Prompt) to see a selection list for any field in the Exception Selection window.
5. Press Enter to see the list of exceptions.
The Exception List window is displayed.
6. Select an exception and press Enter.
The Generate Record window is displayed, showing the exception record in detail.
7. If the exception record is one you want to add to the Tivoli Information Management for z/OS
database, press Enter.
IBM Z Performance and Capacity Analytics generates the problem record.
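The note in step 2 mentions modifying the update definitions to set PROBLEM_FLAG='Y'. As a simpler one-off illustration of the flag itself (a sketch; the DRL prefix is assumed, and you must supply your own selection condition), an ad-hoc SQL update might look like:
UPDATE DRL.EXCEPTION_T
   SET PROBLEM_FLAG = 'Y'
   -- add a condition here for what your site counts as a problem
 WHERE PROBLEM_FLAG <> 'Y';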
Installing and uninstalling a component
– Tables
– Lookup tables
– Views
– Triggers
– Procedures
• Report definitions for the component:
– Report groups
– Reports
Each IBM Z Performance and Capacity Analytics Key Performance Metrics (KPM) component also includes
table space profiles. Refer to the following section on working with table space profiles before installing
any of the Key Performance Metrics components.
Definition members in product libraries contain component object definitions. You can use the
administration dialog to examine statements in these definitions. For an explanation of the statements,
see the Language Guide and Reference.
You can use the administration dialog to work with components. From the Administration window (see
Figure 6 on page 7), select 2, Components, and press Enter.
The Components window is displayed.
Installing a component
Procedure
1. Refer to these books to plan the tasks you must perform to complete the installation:
Feature
Book name
AS/400 Performance
IBM i System Performance Feature Guide and Reference
CICS Performance
CICS Performance Feature Guide and Reference
Distributed Systems Performance
Distributed Systems Performance Feature Guide and Reference
IMS Performance
IMS Performance Feature Guide and Reference
System Performance
System Performance Feature Guide
Resource Accounting
Resource Accounting for z/OS
2. If you want to review Db2 parameters before installing a component, select the component in the
Components window, and select Space, as shown in Figure 73 on page 170.
You can use this pull-down to review and change Db2 space parameters such as:
• Buffer pool
• Compression
• Erase on deletion
• Free space
• Lock size
• Number of partitions, for a partitioned space
• Number of subpages, for an index space
• Primary and secondary space
• Segment size
• Type of space
• VSAM data set password
These parameters can affect the performance of your system. If you are unsure how these parameters
affect your system, use the defaults provided with the product. If you are unsure about the meaning of
a field, press F1 to get help. You should also refer to the CREATE INDEX and CREATE TABLESPACE
command descriptions in the Db2 documentation.
IBM Z Performance and Capacity Analytics saves the changed definitions in your local definitions
library. When you save a changed definition, it tells you where it is saving it, and prompts you for a
confirmation before overwriting a member with the same name.
3. From the Components window, select the component to install and press F6 (Install).
If the component you selected contains subcomponents, the Component Parts window is displayed.
Either select the subcomponents to install or press F12 to install only those objects that are not in a
subcomponent. (IBM Z Performance and Capacity Analytics might install some common definitions for
the component even though you do not select any of the parts to install.)
The Installation Options window is displayed.
Installation Options

/  1. Online
   2. Batch

F1=Help  F2=Split  F6=Objects  F9=Swap  F12=Cancel

/  RACF Component
–  Sample Component
–  Storage Management Component
–  VM Accounting Component
–  VM Performance Component
******************************** BOTTOM OF DATA **********************************
Command ===>
F1=Help  F2=Split  F3=Exit  F5=New  F6=Install  F7=Bkwd
F8=Fwd  F9=Swap  F10=Actions  F12=Cancel
Procedure
1. If the return code is greater than 0, investigate the messages. For example, a message can indicate
a problem accessing the database. Db2 messages are described in Db2 for z/OS: Messages. If you get
such a message, you must reinstall the component.
Correct any error conditions that the product discovers, and install the component again. If the return
code is 8 or lower, the status of the component is set to Installed.
If there are no Db2 messages, userid.DRLOUT can look like Figure 75 on page 172.
Db2 Messages
SQL statements executed successfully
--------------------------------------------------------------------------
Select a lookup table. Then press Enter to Edit the table in ISPF Edit
mode.
/ Lookup table
– RACF_EVENT_CODE
– RACF_RES_OWNER
– RACF_USER_OWNER
**************************** BOTTOM OF DATA *****************************
Command ===>
F1=Help F2=Split F5=QMF add F6=QMF chg F7=Bkwd F8=Fwd
F9=Swap F12=Cancel
Refer to the appropriate feature book (shown in “Installing a component” on page 169) for a
description of its component lookup tables and how you must edit them.
3. To edit a lookup table using ISPF edit, select a table, and press Enter.
IBM Z Performance and Capacity Analytics accesses the ISPF editor where you can edit the lookup
table as described in “Editing the contents of a table” on page 204.
If you have QMF installed, you can use the QMF table editor to edit tables wider than 255 characters.
If the table has more rows than the value you set for the SQLMAX value field in the Dialog Parameters
window, IBM Z Performance and Capacity Analytics prompts you to temporarily override the default
for this edit session. To edit a lookup table using the QMF table editor in add mode, press F5 (QMF
add). To edit a lookup table using the QMF table editor in change mode, press F6 (QMF chg). “Editing
the contents of a table” on page 204 also describes using QMF to edit tables.
4. After you make any necessary changes to a lookup table, press F3 (Exit) to save your changes.
IBM Z Performance and Capacity Analytics returns to the Lookup Tables window.
5. Edit any other lookup tables that the component requires.
When you finish, the installation is complete.
6. Press F12 (Cancel).
IBM Z Performance and Capacity Analytics returns to the Components window.
The product has changed the Status field for the component to read Installed.
7. Press F3 (Exit).
The product returns to the Administration window.
Procedure
1. Type SUBMIT on the command line and press Enter.
2. Press F3 after submitting the job.
IBM Z Performance and Capacity Analytics returns to the Components window. The Status field
shows Batch, which does not mean that the job completed or that it completed successfully. The
installation job changes the value to Installed at its successful completion.
3. When the job completes, use a tool such as the Spool Display and Search Facility (SDSF) to look at
the job spool.
4. Review messages for errors as described in step “1” on page 171.
5. Exit SDSF (or whatever tool you are using to review the job spool).
6. Exit the Components window.
7. Refer to the book for the appropriate feature for a description of the component lookup tables you
must edit.
8. Select 4, Tables, from the Administration window.
The Tables window is displayed.
9. Select 2, Some, from the View pull-down.
The Select Table window is displayed (Figure 78 on page 175).
Select Table

Command ===>
F1=Help  F2=Split  F3=Exit  F5=Updates  F6=PurCond  F7=Bkwd
F8=Fwd  F9=Swap  F10=Actions  F11=Display  F12=Cancel

Tables                                                      ROW 1 TO 3 OF 3
Select one or more tables. Then press Enter to Open table definition.
Procedure
1. Collect data from a log data set and review any messages, as described in “Using collect messages” on
page 141.
Note: Depending on the component you installed, you might not be able to collect its log data in an
online collect. Refer to “Collecting data from a log into Db2 tables” on page 186 for more information.
2. Display a table to ensure that it exists and that it contains the correct information as described in the
book for the appropriate feature:
Feature name
Book name
AS/400 Performance
IBM i System Performance Feature Guide and Reference
CICS Performance
CICS Performance Feature Guide and Reference
Distributed Systems Performance
Distributed Systems Performance Feature Guide and Reference
IMS Performance
IMS Performance Feature Guide and Reference
Network Performance
Network Performance Feature Reference
System Performance
System Performance Feature Reference
For Resource Accounting, see the Resource Accounting for z/OS book.
3. Display a report to ensure it is correctly installed.
Uninstalling a component
Procedure
1. From the Components window, select the component you want to uninstall. From the Component
pull-down, select the Uninstall option.
If the component you selected contains subcomponents, the Component Parts window is displayed.
Either select the parts to uninstall or press F12 to cancel.
A confirmation window is displayed.
2. Press Enter to confirm the uninstallation.
IBM Z Performance and Capacity Analytics deletes from its system tables any component definitions
not used by other components. It also deletes all Db2 objects of the component or selected
subcomponents, including any tables and table spaces. The component remains in the list of
components, but with its Status field cleared. If the component contains subcomponents, they remain
in the list of subcomponents but with their Status field cleared.
Note: If a component (or subcomponent) including a common object is uninstalled, the common
object is not dropped, unless it is the only installed component (or subcomponent) that includes the
common object. When a component or subcomponent is uninstalled, all its data tables are dropped
and their contents lost.
3. Customize the job for your site. Follow the instructions in the job prolog.
4. Submit the job.
Note: A person with Db2 SYSADMIN authority (or someone who has access to the Db2 catalog) must
submit the job.
Reviewing the GENERATE statements for table spaces, tables, and indexes
Components can make use of table space profiling by using GENERATE statements when creating table
spaces, tables, and indexes.
Each GENERATE statement will refer to a profile name in the GENERATE_PROFILES and GENERATE_KEYS
system tables. Default profiles are provided for use by the supplied components; for example, the GENERATE statements of the supplied SMF-based components refer to the SMF table space profile.
If you install components that use these profiles, no customization is required in the GENERATE statements that create the table spaces, tables, and indexes for the components.
If you want to use a different profile name, you will need to customize all the GENERATE statements
by copying the definitions members into your LOCAL.DEFS data set, and modifying the profile names
accordingly.
If you want to use the default profile names but with a different set of table space parameters, you will
need to update the GENERATE_PROFILES and GENERATE_KEYS system tables with your new table space
settings for the default profiles.
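As a sketch only, you could update a default profile with SQL like the following. The column names PRIQTY, SECQTY, and PROFILE are assumptions for illustration (verify the actual layout of the GENERATE_PROFILES table on your system); DRLSYS is the default prefix for the product system tables.

-- Hypothetical column names: check your GENERATE_PROFILES layout first
UPDATE DRLSYS.GENERATE_PROFILES
   SET PRIQTY = 7200,
       SECQTY = 720
 WHERE PROFILE = 'SMF';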
Refer to the IBM Z Performance and Capacity Analytics Language Guide and Reference for the syntax and
additional information on using the GENERATE statements.
The variable VERSION has the value IBM.nnnAPAR_number, where nnn is the version, release, and
modification level. For example, IBM.310 is an object supplied with IBM Z Performance and Capacity
Analytics version 3 release 1 modification level 0. The value of VERSION is set for all objects when the
object is installed (see “How IBM Z Performance and Capacity Analytics controls object replacement” on
page 126 for details).
Important:
If you change an object supplied by IBM Z Performance and Capacity Analytics, you must set the variable
VERSION to a custom version number as defined in “IBM Z Performance and Capacity Analytics Version
variable format” on page 126. During component installation, the product can then recognize an object as
having been modified by you. When you select the component you wish to install (from the Components
window) and press F6=Install, the User Modified Objects window is automatically displayed, listing the
supplied objects that you have later modified.
Procedure
1. From the Components window, select the component, and press Enter.
The Component window is displayed (Figure 80 on page 179) for the component. All IBM Z Performance and Capacity Analytics objects in the component are listed.
Procedure
1. From the Component window, select an object to work with, and press Enter.
IBM Z Performance and Capacity Analytics accesses the ISPF editor, where you can edit (or view) the
object definition.
2. When you finish editing the object definition, press F3 to exit the ISPF edit session.
IBM Z Performance and Capacity Analytics returns to the Component window.
Procedure
1. From the Component window, press F5.
The Add Object window is displayed.
2. Type information about the new object, and press Enter.
You must use the same name in the Object name field as the one that appears in the definition
member for the object. For example, if there is a definition member, DRLLSAMP, that contains the log
collector language statement DEFINE LOG SAMPLE;, you must specify SAMPLE as the name of the
log definition object.
IBM Z Performance and Capacity Analytics saves the object specification (that includes the name of
the member that defines it) and returns to the Component window.
3. Repeat this procedure to add additional objects.
Note: When you delete an object using the dialog, IBM Z Performance and Capacity Analytics deletes
references to the object from the component. It does not delete the definition member that contains log
collector language statements that define the object. You can add the object again at a later time.
To delete an object from a component:
Procedure
1. From the Component window, select the object to delete, and press F11.
A Confirmation window is displayed.
2. From the Confirmation window, press Enter to confirm the deletion.
IBM Z Performance and Capacity Analytics deletes from its system tables all references from the
component to the object and returns to the Component window.
Procedure
1. From the Components window, select the component. Then select the Show user objects option in the
Component pull-down.
2. From the User Modified Objects window, select the object to exclude, and press F4.
A Confirmation window is displayed.
3. From the Confirmation window, press Enter to confirm that the object should be excluded from the
installation.
Procedure
1. From the Components window, select the component. Then select the Show excluded option in the
Component pull-down.
2. From the Objects Excluded window, select the object to include, and press F4.
A Confirmation window is displayed.
3. From the Confirmation window, press Enter to confirm that the object should be included in the
installation.
Deleting a component
Procedure
1. Uninstall the component that you plan to delete. See “Uninstalling a component” on page 176 for
more information.
You must uninstall a component before deleting it. Uninstalling deletes all objects of the component.
2. From the Components window, select the component. Then select the Delete option in the Component
pull-down.
A confirmation window is displayed.
3. Press Enter to confirm the deletion.
IBM Z Performance and Capacity Analytics deletes from its system tables all references to the
component. The component no longer appears in the list of components in the Components window.
The feature definition member (see “Overview of IBM Z Performance and Capacity Analytics objects”
on page 125) still exists, however, and you can reinstall it at a later time. Before reinstalling deleted
components, you must update the system tables to refresh the list of components available for
installation.
Creating a component
7. Update
8. View
9. Report group
10. Report
The order of installation within a definition type is determined by the sorting sequence of the definition
member names.
If you plan to use a component on the same IBM Z Performance and Capacity Analytics system on which
you are creating it, you can use the administration dialog to create the component.
Procedure
1. Optionally, you can select an existing component for IBM Z Performance and Capacity Analytics to use
as a template for the new component before performing the next step.
2. From the Components window, press F5.
The New Component window is displayed.
3. Type information about the new component in the fields.
4. Press F5 to add an object to the component.
The Add Object window is displayed. See “Adding an object to a component” on page 180 for more
information.
5. Select an object, and press Enter to edit its definition.
IBM Z Performance and Capacity Analytics accesses the ISPF editor, where you can edit the object
definition. See “Viewing or editing an object definition” on page 180 for more information.
6. To delete an object that currently exists (either it existed in the template or you decided not to use an
object you added), select the object, and press F11.
A Confirmation window is displayed for you to confirm the deletion. See “Deleting an object from a
component” on page 180 for more information.
7. When you finish adding, editing, or deleting objects, press F3.
IBM Z Performance and Capacity Analytics returns to the Components window and lists the new
component.
– View and modify a log definition and its header fields (page “Viewing and modifying a log definition”
on page 191)
– Create a log definition (page “Creating a log definition” on page 193)
– Delete a log definition (page “Deleting a log definition” on page 193)
• Work with record definitions (page “Working with record definitions in a log” on page 194):
– View and modify a record definition (page “Viewing and modifying a record definition” on page 194):
- Work with fields in a record definition (page “Working with fields in a record definition” on page
196)
- Work with sections in a record definition (page “Working with sections in a record definition” on
page 197)
– Create a record definition (page “Creating a record definition” on page 198)
– Display update definitions associated with a record (page “Displaying update definitions associated
with a record” on page 198)
– Delete a record definition (page “Deleting a record definition” on page 199)
– View and modify a record procedure definition (page “Viewing and modifying a record procedure
definition” on page 199)
– Create a record procedure definition (page “Creating a record procedure definition” on page 200)
– Delete a record procedure definition (page “Deleting a record procedure definition” on page 201)
Procedure
1. From the IBM Z Performance and Capacity Analytics Administration window, select 3, Logs.
2. Press Enter.
IBM Z Performance and Capacity Analytics displays the Logs window.
Procedure
1. From the Logs window, select a log definition and press F6.
IBM Z Performance and Capacity Analytics displays the Data Sets window for the log type you selected
(see Figure 81 on page 185). You can then display collect statistics for each data set.
What to do next
To display the contents of a data set record by record, select the data set and press F5.
IBM Z Performance and Capacity Analytics displays the Record Selection window. Refer to “Displaying the
contents of a log” on page 188 for more information.
Procedure
1. From the Data Sets window, select the data set and press F11.
IBM Z Performance and Capacity Analytics displays a confirmation window.
2. Press Enter to confirm the deletion.
The product deletes any references it has to the data set, which no longer appears in the list of
collected data sets.
Procedure
1. From the Logs window, select a log and press F11.
The Collect window is displayed (see Figure 83 on page 186).
Figure 83. The Collect window, where you specify the Reprocess option, when to commit (after buffer full, at end of file, or after a specified number of records), and the buffer size.
Note: The log data sets used as input for the collect (DRLLOG DD statement) are expected to be sorted in chronological order.
3. Optionally, specify other collect options in fields in the window.
Note: Entry fields followed by a greater than (>) sign respond to the F10 (Show fld) function key, which
displays all of the data in the field or lets you type more data in the Show Field window.
4. Press F5 to include only specific Db2 tables in the collect process.
The Include Tables window is displayed.
5. Select those tables to include in the collect process and press Enter.
You are returned to the Collect window.
You can exclude tables as well. You need to exclude only tables that the product would normally update during the collection.
6. Press F6 to exclude tables from the collect process.
The Exclude Tables window is displayed. Select tables to exclude from the collect process and press
Enter. You are returned to the Collect window.
7. Run the collect either in batch or online:
a) Press Enter to run the collect in batch mode.
IBM Z Performance and Capacity Analytics builds a JCL job stream for the collect job and accesses
the ISPF editor where you can edit and submit the JCL.
b) Press F4 to perform an online collection.
IBM Z Performance and Capacity Analytics starts the collect process online. When the collection is
complete, collect messages are displayed in an ISPF browse window.
8. Press F3 to return to the Logs window.
Procedure
1. From the Logs window, select a log definition.
2. Select 3, Show log statistics, from the Log pull-down.
You are prompted for the name of a log data set.
3. Type the name of the data set and press Enter.
The product displays statistics for the log (see Figure 14 on page 29).
Procedure
1. From the Logs window, select the log.
2. From the Utilities pull-down, select 2, Display log, and press Enter.
Note: You can also display the contents of a log by selecting Display record from the Record Definition
window or by pressing F5 from the Data Sets window.
The Record Selection window is displayed.
3. Type the log data set name and, optionally, the name of a record type (to display only one record
definition), or a record sequence number (to start displaying records at that position in the log). Press
Enter.
The Record Data window is displayed.
Procedure
1. From the Logs window, select the log and press Enter.
The Record Definitions window for the log is displayed (see Figure 89 on page 194).
2. Select a record and press F11.
The List Record window for the record is displayed (see Figure 86 on page 190).
Figure 86. The List Record window.
7. From the Record Definitions window, repeat this procedure for more records or press F3 to return to
the Logs window.
Procedure
1. From the Logs window, select the log and press F5.
The Log Definition window is displayed (see Figure 88 on page 192) for the log you specified.
Figure 88. The Log Definition window, including the log procedure name, its parameter, and its language (ASM or C).
Procedure
1. From the Header Fields window, press F5 to add a header field.
A blank Header Field Definition window is displayed.
2. Type the required information in the fields and press Enter.
The Header Field Definition window for the next field is displayed. IBM Z Performance and Capacity
Analytics carries forward values for the Type and Length fields from the previous field and increments
the Offset field by the length of the previous field.
3. Press F12 when you finish adding fields.
You are returned to the Header Fields window.
4. Press F3 to return to the Log Definition window.
Procedure
1. From the Header Fields window, select the header field and press Enter.
The Header Field Definition window for the header field you specified is displayed.
2. Type changes in the fields and press Enter.
You are returned to the Header Fields window.
3. Press F3 to return to the Log Definition window.
Procedure
1. To delete a header field, select the field and press F11.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The header field is deleted from the list and you are returned to the Header Fields window.
3. Press F3 to return to the Log Definition window.
Procedure
1. To use an existing log definition as a template, select a log definition from the Logs window. Otherwise,
do not select a log definition before the next step.
2. Select 1, New, from the Log pull-down and press Enter.
The New Log Definition window is displayed.
3. Type information for the new log definition in the fields.
4. Press F5 to add header fields to the log definition.
The Header Fields window is displayed. See “Working with header fields” on page 192 for more
information on adding header fields.
5. After you add all the information, press Enter.
The new log definition is saved and you are returned to the Logs window.
When you delete a log definition, you delete the definition from the IBM Z Performance and Capacity Analytics system tables, but you do not delete the member that defines the log type.
To delete a log definition:
Procedure
1. From the Logs window, select a log and then select the Delete option from the Log pull-down.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The log definition is deleted and you are returned to the Logs window.
Procedure
1. From the product Administration window, select 3, Logs, and press Enter.
The Logs window is displayed.
2. From the Logs window, select the log that contains the record and press Enter.
The Record Definitions window for the log is displayed (see Figure 89 on page 194).
A record definition describes the layout of a record: a record contains sections, and a section contains fields or other sections. For more information about defining records, sections, and fields, refer to the Language Guide and Reference.
You can use the administration dialog to modify record definitions. To view and modify a record definition:
Procedure
1. From the Record Definitions window, select the record definition and press Enter.
The Record Definition window for the record definition is displayed (see Figure 90 on page 195).
Note: If you have incorrectly modified the record definition, IBM Z Performance and Capacity Analytics
displays error messages in an ISPF browse window. Examine the messages and press F3 to return to
the Record Definition window where you can correct the errors.
Procedure
1. From the Record Definition window, press F5.
A blank Field Definition window is displayed.
2. Type the required information in the fields and press Enter.
Another Field Definition window is displayed (see Figure 91 on page 196).
Figure 91. The Field Definition window, where you type the field name, type, length, offset, section name, and description (for example, field SMF30STP, type BINARY, length 2, offset 22).
Procedure
1. From the Record Definition window, select the field and press Enter.
The Field Definition window is displayed.
2. Type changes in the fields and press Enter.
Your changes are saved and you are returned to the Record Definition window.
Procedure
1. From the Record Definition window, select the section and press Enter.
The Section Definition window is displayed (see Figure 92 on page 197).
Figure 92. The Section Definition window, where you specify whether the section is repeated.
Procedure
1. From the Record Definition window, press F5.
A blank Section Definition window is displayed.
2. Type the required information in the fields and press Enter.
Another Section Definition window is displayed.
3. Press F12 when you finish adding sections.
You are returned to the Record Definition window.
Procedure
1. To use an existing record definition as a template, select a record definition from the Record
Definitions window. Otherwise, do not select a record definition.
2. From the Record Definitions window, select 1, New, from the Record pull-down.
The New Record Definition window is displayed.
3. Type information for the new record definition in fields of the window.
4. Press F5 to add fields to the record definition.
The Field Definition window is displayed. See “Working with fields in a record definition” on page 196
for more information.
5. Press F6 to add sections to the record definition.
The Section Definition window is displayed. See “Working with sections in a record definition” on page
197 for more information.
6. Press F3 when you finish adding fields and sections.
The new record definition is saved and you are returned to the Record Definitions window.
Procedure
1. From the Record Definitions window, select the record with associated update definitions you plan to
view and press F6.
The Update Definitions window lists all the update definitions that use the selected record definition
as input. From this window, you can view, modify, or add update definitions. See “Displaying and
modifying update definitions of a table” on page 220 or “Creating an update definition” on page 235
for more information.
2. Press F3 when you finish viewing update definitions.
You are returned to the Record Definitions window.
Procedure
1. From the Record Definitions window, select the record definition to delete. Then select 5, Delete, from
the Record pull-down.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The record definition is deleted and you are returned to the Record Definitions window.
Procedure
1. From the Record Definitions window, select the record definition that is input to the record procedure
you plan to modify and press F5.
The Record Procedures window for the record definition is displayed. This window lists all record
procedure names that use the record as input.
2. From the Record Procedures window, select the record procedure whose definition you plan to modify
and press Enter.
The Record Procedure Definition window for the record procedure is displayed (see Figure 93 on page
200).
Figure 93. The Record Procedure Definition window, where you type the description and the language (ASM or C).
Procedure
1. From the Record Definitions window, select the record definition from which the new record procedure
derives its input and press F5.
The Record Procedures window for the record definition is displayed.
2. From the Record Procedures window, press F5.
The New Record Procedure Definition window is displayed.
3. Type information for the new record procedure in the fields.
4. Press F5 if you want to link the record procedure to additional record definitions that describe record
types on which the record procedure acts. The record procedure is automatically linked to the record
type selected in step 1 above.
The Record Definitions window is displayed.
5. From the Record Definitions window, select record definitions to link to the record procedure and press
Enter.
The record procedure is linked to the record definitions you selected and you are returned to the
Record Procedure Definition window.
6. When you finish entering information, press Enter.
The new record procedure is saved and you are returned to the Record Procedures window.
7. Repeat this procedure to add more record procedures or press F3 to return to the Record Definitions
window.
What to do next
In addition, you must define a record type as the record procedure's output. Do this in the Record
Definition window (Figure 90 on page 195). Type the record procedure name in the Built by field, to
identify a record type as one that is created by the record procedure.
Procedure
1. From the Record Definitions window, select the record definition that is associated with the record
procedure to delete and press F5.
The Record Procedures window for the record definition is displayed.
2. From the Record Procedures window, select the record procedure to delete and press F6.
A confirmation window is displayed.
3. Press Enter to confirm the deletion.
You are returned to the Record Procedures window.
4. Repeat this procedure to delete more record procedures or press F3 to return to the Record Definitions
window.
Results
The record procedure is deleted.
• If you display a very large table (a data table or a system table), you might run out of REXX storage. If this happens, there are some actions you can take that enable you to display the table, or the part of the table you want to see.
– Increase the region size.
– If you need to see only the first part of the table, you can decrease the SQLMAX parameter on the
Dialog Parameters window.
– Use F4 (Run) on the SQL Query pop-up in the reporting dialog. Write an SQL SELECT statement that
restricts the retrieved table information to the columns and rows you are interested in. This is a way
to create and run a query without having to save it.
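For example, a restricting query might look like the following (the table and column names are illustrative only, not a supplied table):

-- Hypothetical table name; select only the columns and rows you need
SELECT DATE, JES_COMPLEX, MESSAGES_TOT
  FROM DRL.SYSLOG_MESSAGES_D
 WHERE DATE >= CURRENT DATE - 7 DAYS
 ORDER BY DATE

Restricting both columns and rows keeps the result small enough for the REXX storage available to the dialog.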
To display the contents of a table:
Procedure
1. From the Tables window, select the name of the table that you plan to display.
2. Press F11, or select 1, Display, from the Utilities pull-down.
The product displays the contents of the table in rows and columns.
Note: The table is not necessarily sorted in key sequence.
Figure 95. Using QMF to display an IBM Z Performance and Capacity Analytics table
3. Press F3 when you finish viewing the contents of the table.
You are returned to the Tables window.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 1, Add rows, from the Edit pull-down.
The product calls the QMF table editor in add mode.
3. Enter values for columns, and press F2.
4. Press F3 when you finish adding rows.
QMF prompts you for confirmation.
5. Press Enter.
You are returned to the Tables window.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 2, Change rows, from the Edit pull-down.
IBM Z Performance and Capacity Analytics calls the QMF table editor in change mode.
3. To search for rows to change or delete, type values to search for, and press F2.
QMF displays the first row that matches the search criteria.
4. To change the row, type values for columns, and press F2.
5. To delete the row, press F11.
6. Press F3 when you finish changing or deleting rows.
QMF prompts you for confirmation.
Note: The ISPF edit function in the product administration dialog works according to ISPF rules. If no
value is entered or if the value is removed, the character-type fields are filled with blanks. The ISPF
Editor works the same way outside the dialog: that is, you can enter NULL values in Edit mode by
typing HEX on the command line and X'00' in the field.
7. Press Enter.
You are returned to the Tables window.
What to do next
If all columns in a table row can be displayed in 32 760 characters (if you are using ISPF version 4 or later,
otherwise 255 characters), you can use the ISPF editor to edit the table. If the table has more rows than
the value you set for the SQLMAX value field in the Dialog Parameters window, IBM Z Performance and
Capacity Analytics prompts you to temporarily override the default for this edit session.
IBM Z Performance and Capacity Analytics deletes all rows from the table and then reinserts them when
you use this function. Because of this, the ISPF editor is not recommended for large tables.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 3, ISPF editor, from the Edit pull-down.
3. IBM Z Performance and Capacity Analytics copies table rows to a sequential file and accesses the ISPF
editor.
Procedure
1. From the list of tables, select the Maintenance pull-down without selecting a table.
2. Select option 1, Tablespace.
3. From the list of table spaces, select one or more table spaces (or make no selection to process all the
table spaces) and select the Utilities pull-down, as shown in Figure 64 on page 146.
4. Select option 2, Run Db2 RUNSTATS.
What to do next
To learn more about the Db2 RUNSTATS utility, refer to the Db2 for z/OS: Administration Guide and
Reference.
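As an illustration, a minimal RUNSTATS control statement for one product table space might look like this, where DRLDB.DRLSCI06 is a placeholder for your own database and table space names:

RUNSTATS TABLESPACE DRLDB.DRLSCI06 TABLE(ALL) INDEX(ALL)

The dialog option described above generates equivalent JCL for you, so you normally do not need to code this by hand.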
Use the administration dialog to check the size of tables in the product database:
1. From the Tables window (Figure 94 on page 203), select tables to display their sizes.
Note: If you do not select any tables, IBM Z Performance and Capacity Analytics displays the size of all
tables in the product database.
2. Select 2, Show size, from the Utilities pull-down.
The Table Size window is displayed (Figure 97 on page 207).
Figure 97. The Table Size window.
Note:
a. You can use the SORT command (for example, SORT KBYTES DESC) to find the largest tables.
b. If the information shown in the Table Size window is incomplete, run the Db2 RUNSTATS utility and
restart this procedure.
3. After you finish viewing this window, press Enter.
You are returned to the Tables window.
You can use the administration dialog to recalculate the contents of tables. For more information about
the RECALCULATE log collector language statement, refer to the Language Guide and Reference.
To recalculate the contents of tables:
Procedure
1. From the Tables window (Figure 94 on page 203), select the source table (the table you plan to modify).
2. Select 8, Recalculate, from the Utilities pull-down.
The Recalculate window is displayed (Figure 98 on page 208).
Figure 98. The Recalculate window.
You can use the administration dialog to import data in the Integration Exchange Format (IXF). Refer to the QMF Application Development Guide for a description of the IXF format.
Note: When you import the file, IBM Z Performance and Capacity Analytics replaces the contents of the
table.
To import data into a table:
Procedure
1. From the Tables window (Figure 94 on page 203), select the table.
2. Select 3, Import, from the Utilities pull-down.
The Import Data Set window is displayed.
3. Type the name of the data set that contains the data you want to import and press Enter.
The data is imported into the table and you are returned to the Tables window.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table.
2. Select 4, Export, from the Utilities pull-down.
The Export Data Set window is displayed.
3. Type the name of the data set to export data into, and press Enter.
The data is exported into the data set you specified and you are returned to the Tables window.
Purging a table
Procedure
1. From the Tables window (Figure 94 on page 203), select tables to purge.
Note: If you do not select any tables, IBM Z Performance and Capacity Analytics purges the contents
of all data tables with purge conditions.
2. Select 9, Purge, from the Utilities pull-down.
The Purge Confirmation window is displayed.
3. Press Enter to confirm the purge.
The purge conditions associated with the tables are run and the statistics on the number of rows
deleted from each table are displayed.
Procedure
1. From the Tables window (Figure 94 on page 203), select the tables to unload, as shown in Figure 101
on page 211.
The UNLOAD Utility window is displayed. The UNLOAD utility unloads table data to a data set: type the fully qualified data set name, without quotes, and press Enter to create the JCL.
What to do next
To generate a job that reloads the data, from the Tables window, select option 11, Load. Then enter the
required information, as explained above.
For example, the control statements for the Unload utility might look like the sketch below, where data is unloaded from the AVAILABILITY_D table onto tape. The DD statement for the SYSPUNCH data set is completed with the UNIT and VOLSER information for the tape unit used, and the data set name entered in the panel is SYSREC00.
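A minimal sketch (DRLDB and the table space name DRLSAVAI are assumptions; DRL is the default prefix for product tables):

UNLOAD TABLESPACE DRLDB.DRLSAVAI
  FROM TABLE DRL.AVAILABILITY_D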
The control statements for the Load utility then reload the data from tape into the AVAILABILITY_D table. Again, the DD statement for the SYSPUNCH data set is completed with the UNIT and VOLSER information for the tape unit used, and the data set name entered in the panel is SYSREC00.
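A matching sketch of a LOAD statement, reading from the SYSREC00 data set named in the panel (LOG NO and REPLACE are shown as typical options, not as the generated defaults):

LOAD DATA INDDN SYSREC00 LOG NO REPLACE
  INTO TABLE DRL.AVAILABILITY_D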
Procedure
1. From the Tables window, select the table to unload, as shown in Figure 94 on page 203.
2. From the Utilities pull-down menu, select option DB2HP Unload, as shown in Figure 101 on page 211.
Note: The Db2 High Performance Unload utility can only be run on tables. It cannot be run on views.
3. From the Db2 High Performance Unload Utility window, specify the unload type by entering 1 for disk unload or 2 for tape unload (the default is disk unload). Then specify the name of the data set that will be used to store the unloaded data.
The DB2HP Unload utility unloads table data to a data set. You can use the utility only if the DB2HPU product is present on the system. Type the fully qualified data set name, without quotes, then press Enter to create the JCL.
The generated JCL contains DB2HPU control statements and DD statements similar to the following example:
OUTDDN (SYSREC00)
FORMAT DSNTIAUL
LOADDDN SYSPUNCH LOADOPT (RESUME NO REPLACE)
/*
//SYSPRINT DD SYSOUT=*
//*
//******* DDNAMES USED BY THE SELECT STATEMENTS ********
//*
//SYSREC00 DD DSN=SAMPLE.DAT,
// UNIT=3390,
// SPACE=(4096,(1,1)),
// DISP=(NEW,CATLG,CATLG),
// DCB=(RECFM=FB,LRECL=410,BLKSIZE=27880),
// VOL=SER=MYVOL
//SYSPUNCH DD DSN=USERID.SYSPUNCH,
// UNIT=xxxx,
// VOL=SER=xxxxxx,
// SPACE=(4096,(1,1)),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// DISP=(NEW,CATLG,CATLG)
Procedure
1. From the Tables window (Figure 94 on page 203), select the table definition you plan to view.
2. Press Enter.
The table definition is opened. Figure 104 on page 216 shows an example of an opened table
definition.
Procedure
1. From the Table window, select the column, and press Enter.
The Column Definition window for the column is displayed (Figure 105 on page 217).
Figure 105. The Column Definition window.
Procedure
1. From the Table window, press F5.
The Add Column window is displayed (Figure 106 on page 218).
Figure 106. The Add Column window, where you type the column name, comments, type (Char, Varchar, Smallint, Integer, Float, Decimal, Date, Time, Timestamp, Graphic, Vargraphic, Long varchar, or Long vargraphic), length, primary key setting, and nulls setting.
Procedure
1. From the Tables window, select a table and press Enter.
2. From the Table window, press F6.
The Indexes window is displayed (Figure 107 on page 219).
Figure 107. The Indexes window.
What to do next
Note: To modify an index, delete and recreate it.
Procedure
1. From the Indexes window, select the index and press F11.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
You are returned to the Indexes window.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table and press F5.
The Update Definitions window for the table is displayed (Figure 110 on page 221). All update
definitions where the selected table is either the source or the target are included.
The Update Definition window (Figure 111 on page 221) contains these fields:
Condition
A condition that is applied to source fields or columns.
Type an expression that evaluates as either true or false in this field. IBM Z Performance and
Capacity Analytics evaluates the expression to determine if it is true before processing the source
with the update.
Comments
A description of the update definition.
Column
All columns of the target table.
Function
Describes the accumulation function to use. Blank means that the column is a key (a GROUP BY column). For data columns, the value of this field can be SUM, MIN, MAX, COUNT, FIRST, LAST, AVG, or PERCENT.
To use the MERGE function, identify input to the function by designating a column for each of these
functions: INTTYPE, START, END, and QUIET.
Expression
Describes how the value in the column should be derived from source fields, columns, or
abbreviated names of expressions. (See “Working with abbreviations” on page 223 for more
information.) If the update does not affect the value of the column, there is no entry in the
expression field.
For an AVG column, type the expression, followed by a comma, and a column name. For a PERCENT column, type the expression, followed by a comma, a column name, a comma, and a percentile value (without the percent sign). An example follows this procedure.
Refer to the Language Guide and Reference for more information about using log collector language:
• Functions
• Accumulation functions
• Expressions
• Statements
• Averages
• Percentiles
3. Type any modifications to the update definition in the fields.
4. Press F5 to modify abbreviations in this update definition.
The Abbreviations window is displayed. See “Working with abbreviations” on page 223, for more
information.
5. Press F6 to modify the distribution clause associated with the update definition.
The Distribution window is displayed. See “Modifying a distribution clause” on page 224 for more
information.
6. Press F11 to modify the apply schedule clause associated with an update definition.
The Apply Schedule window is displayed. See “Modifying an apply schedule clause” on page 224 for
more information.
7. Press F3 when you finish modifying the update definition.
The changes are saved and you are returned to the Update Definitions window.
8. Repeat this procedure to modify other update definitions or press F3 again to return to the Tables
window.
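As an illustration of the AVG and PERCENT formats described above (the column and field names are invented for the example; see the Language Guide and Reference for the exact semantics):

Column            Function   Expression
AVG_RESP_TIME     AVG        RESP_TIME, RESP_COUNT
PCT90_RESP_TIME   PERCENT    RESP_TIME, RESP_COUNT, 90

The AVG entry derives an average of RESP_TIME with RESP_COUNT as its companion column; the PERCENT entry derives the 90th percentile.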
Procedure
1. From the Update Definition window (Figure 111 on page 221), press F5.
The Abbreviations window is displayed (Figure 112 on page 223).
Figure 112. The Abbreviations window, listing each abbreviation with its expression (for example, D1 defined as DATE(TS1) and T1 as TIME(TS1)).
Procedure
1. From the Abbreviations window, press F5.
The Abbreviation window is displayed.
2. Type the abbreviation and the expression in the fields and press Enter.
The abbreviation is added and you are returned to the Abbreviations window.
Procedure
From the Abbreviations window, select the abbreviation to delete, and press F11.
The abbreviation is deleted from the list.
Procedure
1. From the Update Definition window (Figure 111 on page 221), press F6.
The Distribution window is displayed (Figure 113 on page 224).
Figure 113. The Distribution window, listing the columns and fields (for example, SMF33CN and SMF33CNA) that you can select.
Procedure
1. From the Update Definition window (Figure 111 on page 221), press F11.
The Apply Schedule window is displayed (Figure 114 on page 225).
Figure 114. The Apply Schedule window.
What to do next
Refer to the Language Guide and Reference for more information about using the log collector language to:
• Determine resource availability
• Calculate the actual availability of a resource
• Compare actual availability to scheduled availability
Procedure
1. From the Tables window (Figure 94 on page 203), select the table to update and press F6.
The Retention Period window is displayed (Figure 115 on page 226) if the purge condition is blank or
has the standard format (column_name < CURRENT DATE - n DAYS), and if the column name,
which can be an expression (for example, DATE(START_TIMESTAMP)), is less than 18 characters.
Figure 115. The Retention Period window.
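For example, a standard-format purge condition for a table with a START_TIMESTAMP column could be:

DATE(START_TIMESTAMP) < CURRENT DATE - 30 DAYS

When the purge runs, rows that satisfy this condition are deleted, much as an SQL DELETE with this WHERE clause would remove them; the value 30 here is only an illustration of n.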
Procedure
1. From the Tables window, select the Maintenance pull-down. Do not select a table first.
2. The pull-down has these options:
1. Tablespace...
2. Index and index space...
To change table space parameters, select 1. The Tablespace window is displayed (with the Tablespace
pull-down illustrating the options available: you can use the Utilities pull-down to reorganize or get
statistics on a table space).
You can use the Save definition option to create SQL commands that can recreate the selected table
space. Note that this does not update the component definition: only the definition of the selected
table space is saved.
3. Select a table space and press Enter. The Tablespace window is displayed, which you can use to
change the table space parameters. Change the parameters and press Enter.
The Tablespace window for a table space (DRLSCI06 in this example) shows parameters such as Type (simple, segmented, or partitioned) and Locksize (any, tablespace, page, or table).
IBM Z Performance and Capacity Analytics takes action depending on the parameters to be changed:
Where reorganization is needed
Some parameter changes need a database reorganization before they take effect. In this case the product:
a. Makes the change, using the ALTER TABLESPACE command.
b. Creates a batch job to reorganize the database, which you can submit when it is convenient.
Where the database needs to be stopped
Some parameter changes need exclusive use of the database. In this case the product creates a
batch job that:
a. Stops the database.
b. Makes the change, using the ALTER TABLESPACE command.
c. Starts the database again.
Do not submit the job if some task, for example a collect, is using the table space, because this
stops the collect job.
In other cases
Some parameter changes can be made immediately. IBM Z Performance and Capacity Analytics
issues the ALTER TABLESPACE command online.
Press F1 to get more information about a parameter, or refer to the discussion of designing a database
in Db2 for z/OS: Administration Guide and Reference.
Procedure
1. From the Tables window (Figure 94 on page 203), select the Maintenance pull-down. Do not select a
table first.
2. To change index space parameters, select 2. The Indexes window is displayed (with the Index pull-
down illustrating the options available; you can use the Utilities pull-down to reorganize an index
space).
The Index window for an index (CICS_A_DLI_USR_W in this example) shows parameters such as Subpages, Pctfree, Bufferpool, Dsetpass, and Freepage.
IBM Z Performance and Capacity Analytics takes action depending on the parameters to be changed:
Where the index must be recreated
In this case the product:
a. Asks you to confirm the change.
b. Deletes the index, with the DROP command.
c. Redefines the index, using the DEFINE command.
Where the database needs to be stopped
Some parameter changes need exclusive use of the database. In this case the product creates a
batch job that:
a. Stops the database.
b. Makes the change, using the ALTER command.
c. Starts the database again.
Do not submit the job if some task, for example a collect, is using the index space, because this
stops the collect job.
In other cases
Some parameter changes can be made immediately. IBM Z Performance and Capacity Analytics
issues the ALTER command online.
Press F1 to get more information about a parameter, or refer to the discussion of designing a database
in Db2 for z/OS: Administration Guide and Reference.
Making table space parameter changes that do not require offline or batch action
Procedure
1. From the Tables window (Figure 94 on page 203), select a table in the table space to open.
2. Select 5, Open Tablespace, from the Table pull-down.
IBM Z Performance and Capacity Analytics displays the Tablespace window.
The Tablespace window shows the definition of the selected table space (DRLSSAMP in this example), including its name, type (simple, segmented, or partitioned), and parameters such as VCAT.
Procedure
1. From the Tables window, select a view to display, and press Enter.
The View window is displayed (Figure 122 on page 231).
Figure 122. The View window, listing each column of the view with its comment (for example, DATE, PERIOD_NAME, JES_COMPLEX, and MESSAGES_TOT).
Procedure
1. From the Table pull-down in the Tables window (Figure 94 on page 203), select 8, Print list.
The Print Options window is displayed.
2. Type the required information, and press Enter.
The list of IBM Z Performance and Capacity Analytics tables is routed to the destination you specified.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table definition to save in a data set.
2. Select 7, Save definition, from the Table pull-down.
The Save Data Set window is displayed.
3. Type the data set name in the field, and press Enter.
The table definition in the data set that you specified is saved and you are returned to the Tables
window.
Procedure
1. From the View pull-down in the Tables window (Figure 94 on page 203), select 2, Some, and press
Enter.
IBM Z Performance and Capacity Analytics displays the Select Table window.
2. Type selection criteria in the fields, and press Enter.
Note: You can see a list of components by pressing F4.
The tables that correspond to the criteria you specified are listed.
To list all the tables, from the View pull-down in the Tables window, select 1, All. All the tables in the
IBM Z Performance and Capacity Analytics database are listed.
Creating a table
Procedure
1. From the Table pull-down in the Tables window (Figure 94 on page 203), select 1, New, and press
Enter.
The New Table window is displayed (Figure 123 on page 233).
2. Type required information in the fields.
3. To see a list of available table spaces, place the cursor in the Tablespace field, and press F4.
The Prompt for Tablespace window is displayed. If the table is related to existing tables, you might
want to put the table in the same table space.
4. Select a table space from the list, and press Enter.
The product returns to the New Table window, and the table space appears in the Tablespace field.
Note: To create a table space, see “Creating a table space” on page 234.
Procedure
1. From the Tables window, select the table to use as a template.
2. Select 1, New, from the Table pull-down.
The New Table window is displayed.
The fields are filled with information from the template table.
3. The rest of the procedure is the same as when creating a table without a template.
Note: The index for the template table is not copied and must be added for the primary key. To add an
index, see “Displaying and adding a table index” on page 218.
Procedure
1. From the New Table window, select an existing column.
2. Press F11 to delete the column.
A confirmation window is displayed.
3. Verify the deletion by pressing Enter.
The column is deleted and you are returned to the New Table window.
Procedure
1. Select the table or view to delete in the Tables window (Figure 94 on page 203) and select 6, Delete,
from the Table pull-down.
Note: IBM Z Performance and Capacity Analytics prevents you from deleting table definitions that
affect, or are affected by, other product objects. To delete a table definition, remove links from the
table to other product objects.
A confirmation window is displayed.
2. Verify the deletion by pressing Enter.
The table or view is deleted and you are returned to the Tables window.
Note: A table in a partitioned table space cannot be explicitly deleted (dropped). You can drop the
table space that contains it. This does not have any impact on other tables because only one table can
be defined in a single table space.
You can use the administration dialog to create a table space. You must have some knowledge of Db2
databases before creating the table space. See “Understanding table spaces” on page 146 for more
information about table spaces, or refer to the discussion of designing a database in Db2 for z/OS:
Administration Guide and Reference.
To create a table space:
Procedure
1. From the New Table window (Figure 123 on page 233), place the cursor in the Tablespace field and
press F4.
The Prompt for Tablespace window is displayed.
2. From the Prompt for Tablespace window, press F5.
The New Tablespace window is displayed.
3. Type required information in the fields, and press Enter.
A table space is created and you are returned to the Prompt for Tablespace window.
4. Press Enter again to return to the New Table window.
5. Continue creating the table as described in “Creating a table” on page 232.
Note: It is also possible to create a table space without creating a table: use the Maintenance pull-
down in the Tables window (as described in “Displaying and modifying a table or index space” on page
227) and select New from the Tablespace pull-down in the Tablespaces window.
Procedure
1. From the Tables window (Figure 94 on page 203), select a table for addition of an update definition,
and press F5.
The Update Definitions window is displayed (Figure 110 on page 221).
2. To use an existing update definition as a template, select one of the update definitions from the list
and press F5. Otherwise, do not select an update definition.
The New Update Definition window is displayed. The columns are filled with values from the template.
3. To create an update definition without a template, press F5 from the Update Definitions window.
You are prompted for the name of the target table in the Target Table of New Update window. Type the
name of the target table, and press Enter.
The New Update Definition window is displayed.
4. Type required information in the fields, and press F3.
The new update definition is saved and you are returned to the Update Definitions window.
You might choose to use abbreviations for expressions in the expression fields. Or you might require that data be distributed over some interval or used in availability processing. See these topics in "Displaying and modifying update definitions of a table" on page 220 for information:
• “Working with abbreviations” on page 223
• “Modifying a distribution clause” on page 224
• “Modifying an apply schedule clause” on page 224
5. Press F3 again to return to the Tables window.
Procedure
1. From the Tables window (Figure 94 on page 203), select the table and press F5.
The Update Definitions window for the table is displayed (Figure 110 on page 221). All update
definitions where the selected table is either the source or the target are included.
2. Select the update definition to delete, and press F11.
A confirmation window is displayed.
3. Verify the deletion by pressing Enter.
The update definition is deleted and you are returned to the Update Definitions window.
4. Press F3 to return to the Tables window.
Procedure
1. From the Tables window (Figure 94 on page 203), select one or more tables to grant access to.
2. Select 5, Grant, from the Utilities pull-down.
The Grant Privilege window is displayed (Figure 124 on page 237).
Procedure
1. From the Tables window (Figure 94 on page 203), select one or more tables to revoke access to.
2. Select 6, Revoke, from the Utilities pull-down.
The Revoke Privilege window (Figure 125 on page 238) is displayed.
Invoking the log data manager
Procedure
1. From the Administration Dialog window, select 3, Logs, to display the Logs window.
2. Select one of the displayed logs, then select 5, Open Log Data Manager (a new option provided with
the log data manager) from the Log pull-down.
The log data manager Main Selection window (Figure 126 on page 239) is displayed.
Procedure
1. Ensure that your log data sets are cataloged (otherwise the DRLJLDML job step does not work).
2. Take a copy of the supplied sample DRLJLDML job step.
3. Insert the DRLJLDML job step in each job that creates a log data set, and which you want to be
collected by the log data manager. For Generation Data Sets, you must insert the DRLJLDML job step
after each Generation Data Set member that has been created.
4. Enter the name of the log data set (*.stepname.ddname) in the DRLLOG DD statement of the job step (described in "DRLJLDML sample job" on page 240).
5. Run the job you have now amended, to create the log data set.
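As a sketch of steps 3 and 4, the amended job might look like the following. The SMF dump step (IFASMFDP) and all data set names are examples only, and the EXEC statement of the copied DRLJLDML step is abbreviated here; copy the real statements from the supplied sample.

//DUMPSMF  EXEC PGM=IFASMFDP
//DUMPIN   DD   DSN=SYS1.MANX,DISP=SHR
//DUMPOUT  DD   DSN=SMF.DAILY.DUMP,DISP=(NEW,CATLG),
//              UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(000:255))
/*
//* Copied DRLJLDML step: records the new data set for collection.
//* DRLLOG refers back to the data set with *.stepname.ddname notation.
//LDMLREG  EXEC ...          (statements copied from the sample DRLJLDML)
//DRLLOG   DD   DSN=*.DUMPSMF.DUMPOUT,DISP=SHR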
//***********************************************
//* Sample DRLJLDML step: runs the DRLELDML exec to record the
//* log data set named on DRLLOG as ready for collection.
//* START EXEC DRLELDML
//*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLELDML SYSTEM=DSN SYSPREFIX=DRLSYS -
LOGTYPE=SMF -
LOGID='' ONTAPE=N
/*
Procedure
Select 1, Log collector statements, from the log data manager Main Selection window.
The Collect Statements window (Figure 127 on page 242) is displayed, with one row for each log ID defined for the log type. When a default row is created during installation of a product component, the log ID field is always blank.
Procedure
1. Select the log ID whose collect statements you want to edit, and press Enter. The Edit window (Figure
128 on page 243) is displayed.
2. Edit the collect statements using the ISPF editor. If the member does not exist, it will be automatically
created by the edit. If the collect statements data set does not exist or is not cataloged, an error
message is displayed. A confirmation window is displayed if a member of the product definition library
is selected for editing. If you want to edit collect statements that reside in the product distribution
library, follow the instructions given in “Modifying IBM Z Performance and Capacity Analytics-supplied
collect statements" on page 243.
3. On completion of the editing, you are returned to the Log Data Manager Collect Statements window.
Results
Note: The COMMIT AFTER BUFFER FULL ONLY parameter will not be accepted in the collect statement
member if the collect involves concatenated log data sets (an appropriate error message is displayed).
The reason is that such concatenated data sets are never recorded in the DRLLOGDATASETS system table
as being collected.
Procedure
1. Copy the member containing the collect statements to your local library.
2. Use option F6=Modify of the Log Data Manager Collect Statements window to change the data set
name of the default log ID (see “Modifying log collector statements” on page 241 for details).
3. Edit the collect statements member as you require.
Procedure
1. Press F5 and the Add Collect Statements Definition window is displayed (Figure 129 on page 244).
2. Type a log ID and data set name and press Enter. The log ID and data set name are added to the Log Data Manager Collect Statements list in alphanumeric sequence. However, a non-existent data set is not created.
Procedure
1. Select the log ID corresponding to the data set name which you want to modify, and press F6. The
Modify Collect Statements Definition window is displayed (Figure 130 on page 244).
2. Type the modified data set name and press Enter. The data set name is changed in the Log Data
Manager Collect Statements list.
Log ID MVSA____
Data set DRLxxx.LOCAL.DEFS(MVSACOLL)__________________________
Procedure
Select 2, Log data sets to be collected, from the log data manager Main Selection window.
The Log Data Sets To Be Collected window (Figure 131 on page 245) is displayed, one row for each log ID
and log data set.
What to do next
Each list of log data sets is sorted first by log ID, and then by the date the log data set was added.
Each log data set displayed in this window has a value in the Status column, which can contain one of
these values:
• blank
The log data set is ready to be collected by the DRLJLDMC job (see “The DRLJLDMC collect job and the parameters it uses” on page 247 for details).
• 'SELECT'
This value occurs when the log data set has been selected for collect by the DRLJLDMC job, but the collect has not completed. The data set is protected from a collect by a “parallel” invocation of the DRLJLDMC job. If the DRLJLDMC job abends, the action you take depends upon how many log data sets have the status 'SELECT' after the abend has occurred:
– If there are many log data sets with status 'SELECT', run job DRLELDMC with parameter CLEANUP=YES, to record the log data sets as ready for collection again (a minimal sketch follows this list).
– If there are only a few log data sets with status 'SELECT', it is easier to manually record the data sets
as ready for collection again by selecting F4=Rerun for these log data sets.
• A log collector return code or a system or user abend code
This occurs when the log data set was collected with a failure, and the Rerun option was selected for
this log data set in the Log Data Sets Collected with Failure window (described in “Modifying the list of
unsuccessfully collected log data sets” on page 252). The data set is collected again the next time job
DRLELDMC is run.
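A minimal sketch of such a cleanup run, assuming the default system and prefix values used elsewhere in this chapter:

//CLEANUP  EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSPROC  DD  DISP=SHR,DSN=DRLvrm.SDRLEXEC
//STEPLIB  DD  DISP=SHR,DSN=DRLvrm.SDRLLOAD
//         DD  DISP=SHR,DSN=db2loadlibrary
//DRLMSG   DD  SYSOUT=*
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
%DRLELDMC SYSTEM=DSN SYSPREFIX=DRLSYS -
 LOGTYPE=SMF CLEANUP=YES
/*

No collect is performed by this run; it only resets the log data sets that are marked SELECT.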
Procedure
1. Select the log ID and press Enter.
The Modify Log ID for a Log Data Set window is displayed (Figure 132 on page 246).
2. Type the modified log ID and press Enter. The log ID is then changed in the Log Data Sets To Be
Collected list.
Note: You can also use this window to display the full length of a truncated log data set name. Data set
names longer than 34 characters are truncated in the Log Data Sets To Be Collected window, but are
displayed in full in the Modify Log ID for a Log Data Set window.
Procedure
1. Select the log ID and log data set and press F11.
2. Press Enter to confirm deletion.
Procedure
1. Select the log ID and log data set and press F4.
2. Press Enter to confirm.
Procedure
1. Press F5 and the Add a Data Set To Be Collected window is displayed (Figure 133 on page 247).
2. Type the log ID and log data set name and press Enter.
The Log Data Sets To Be Collected window is displayed, containing the added entry.
An error message is displayed in this window if you attempt to add an already existing log data set.

The DRLJLDMC collect job and the parameters it uses
//* *
//* This job is used to collect log data sets that are recorded *
//* in the DRLLDM_LOGDATASETS system table as being ready for *
//* collect by the Log Data Manager. *
//* *
//* Input: *
//* The exec DRLELDMC accepts the following parameters: *
//* *
//* SYSPREFIX=xxxxxxxx Prefix for system tables. default=DRLSYS *
//* SYSTEM=xxxxxx Db2 subsystem name. default=DSN *
//* PREFIX=xxxxxxxx Prefix for all other tables. default=DRL *
//* PLAN=xxxxxxxx Db2 plan name default=DRLPLAN *
//* DSPREFIX=xxxxxxxx Prefix for creation of data sets DRLOUT and *
//* DRLDUMP. default=DRL *
//* SHOWSQL=xxx Show SQL. YES/NO default=NO *
//* SHOWINPUT=xxx Copy DRLIN to DRLOUT. YES/NO default=YES *
//* LOGTYPE=xxxxxxxxxx Log type (e.g. SMF). If not specified, *
//* all log types are selected for processing. *
//* LOGID=xxxxxx Log ID. If not specified, all log id's *
//* are selected for processing. Default Log ID *
//* should be coded as =''. *
//* RETENTION=xxx Retention period for DRLOUT, DRLDUMP and *
//* collect result info. default=10 days *
//* PURGE=xxx Purge info for successful collects that *
//* are older than its Retention period *
//* YES/NO default=YES *
//* CLEANUP=xxx Option only to be used after an Abend. *
//* No collect is done. Processes only log data *
//* sets marked with SELECT in the Log Data Sets*
//* To Be Collected list (on panel DRLDLDMT). *
//* Output: the data set being collected when *
//* the abend occurred will be moved to the *
//* Collected With Failure list. Other concate- *
//* nated data sets are moved to the Successful *
//* list or made ready for a renewed collect. *
//* YES/NO default=NO *
//* *
//* DRLOUT/DRLDUMP DD card: if any of these files are specified *
//* they will be used by all collects started by*
//* this job. They will then not be controlled *
//* or viewed by the Log Data Manager dialog. *
//* *
//* DRLLOG DD card: Must not be allocated. *
//* *
//* LDMLOG EXEC card: The value used for DYNAMNBR should be *
//* as a minimum, 2 plus the number of *
//* log data sets to be collected. *
//* *
//* Output: The results of the collects are recorded in *
//* sysprefix.DRLLDM_LOGDATASETS together *
//* with LOG_NAME, LOG_ID and TIME_ADDED. *
//* Job messages in the DRLMSG file *
//* *
//* Notes: *
//* Before you submit the job, do the following: *
//* 1. Check that the steplib db2loadlibrary is correct. *
//* 2. Change the parameters to DRLELDMC as required. *
//* 3. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* 4. Change the IZPCA data set HLQ (default is DRLvrm.) *
//* *
//********************************************************************
//LDMLOG EXEC PGM=IKJEFT01,DYNAMNBR=20
//*
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC --
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD --
// DD DISP=SHR,DSN=db2loadlibrary --
//*********************************************************
//*DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*********************************************************
//* MESSAGES
//*
//DRLMSG DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*********************************************************
//* Add the next three DD statements if you collect IMS.
//* Note 1: IMSVER must specify the same release as the
//* collect statement used by the Log Data Manager.
//* Note 2: DRLICHKI must be DUMMY or point to an empty
//* data set after an IMS restart.
//*********************************************************
RETENTION
The retention period for DRLOUT, DRLDUMP and the log data manager information that is produced by
the collects. The default is 10 days.
PURGE
This parameter determines whether or not the information resulting from successful collects should
be purged when the date of the information is older than the retention period. The parameter can be
set to the value YES or NO. If PURGE is set to YES, all log data manager information about successfully
collected log data sets is deleted (for all log types and log IDs). The default value is PURGE=YES.
CLEANUP
This parameter is used when the DRLELDMC job has had an abend during a collect of concatenated
log data sets. If you run the DRLELDMC job with parameter CLEANUP set to YES, log data sets that
were successfully collected before the abend occurred are moved to the Log Data Sets Successfully
Collected list. The log data set that was being collected when the abend occurred is moved to the Log
Data Sets Collected With Failure list. The default value is CLEANUP=NO.
DRLOUT DD statement
If this file is specified, it is used by all collects started by this job. However, this file is not used by the
log data manager dialog.
DRLDUMP DD statement
If this file is specified, it is used by all collects started by this job. However, this file is not used by the
log data manager dialog.
DRLLOG DD statement
Must not be allocated.
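Putting these parameters together, the SYSTSIN for a typical DRLELDMC run might look like the following sketch (the parameter values shown are illustrative):

//SYSTSIN  DD  *
%DRLELDMC SYSTEM=DSN SYSPREFIX=DRLSYS -
 PREFIX=DRL LOGTYPE=SMF LOGID='' -
 RETENTION=30 PURGE=YES CLEANUP=NO
/*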
Procedure
Select 3, Log data sets collected successfully, from the log data manager Main Selection window.
The Log Data Sets Collected Successfully window (Figure 134 on page 250) is displayed, one row for each
log data set that has been successfully collected by the Log Data Manager for this log type.
The list of data sets is sorted by the Time collected column.
Procedure
Select a log data set and press Enter.
The DRLOUT data set is displayed in ISPF Browse mode (if a DRLOUT statement was not included in the
collect job).
Procedure
Select the log data set and press F5.
The DRLDUMP data set is displayed using the ISPF Browse function, if a DRLDUMP DD statement was not
present in the collect job. DRLDUMP should be empty if the return code from the collect was 0.
Procedure
1. Select the log data set and press F6. The Retention Period for Collect Information window is displayed
(Figure 135 on page 251).
2. In the Retention period field, type the number of days you require, and press Enter.
Note: You are not changing the retention period for the collected log data here, but only the retention
period for the log data manager information about the log data set.
Procedure
1. Select the log data set whose log data manager information you want to delete, and press F11.
Procedure
Select 4, Log Data Sets Collected with Failure, from the log data manager Main Selection window.
The Log Data Sets Collected with Failure window (Figure 136 on page 252) is displayed, one row for each
log data set that has been unsuccessfully collected by the Log Data Manager for this log type.
The list of data sets is sorted by the Time collected column.
Procedure
1. Select the log data set and press Enter.
2. The DRLOUT data set is displayed in ISPF Browse mode (if a DRLOUT statement was not included in
the collect job).
Procedure
Select the log data set and press F5.
The DRLDUMP data set is displayed using the ISPF Browse function, if a DRLDUMP DD statement was not
present in the collect job. DRLDUMP is empty in most cases if the return code from the collect was 0.
Procedure
1. Select the log data set.
2. Press F4.
An error message is displayed if this log data set is already included in the list of data sets to be
collected.
Procedure
1. Select the log data set you want to delete, and press F11.
2. Press Enter to confirm deletion.
To stop the Continuous Collector, use one of the following commands:

STOP jobname
P jobname
   Stops the Continuous Collector.
Modify commands
To modify the Continuous Collector during operation, choose from the following commands:
MODIFY jobname,INTERVAL MESSAGE ON|OFF
F jobname,INTERVAL MESSAGE ON|OFF
F jobname,IM ON|OFF
   Turns the commit heart beat message DRL0383I ON or OFF. The message is ON by default when the COMMIT phrase FULL STATISTICS AFTER is used.

MODIFY jobname,COMMIT AFTER BUFFER FULL
F jobname,COMMIT AFTER BUFFER FULL
F jobname,CA BUFFER FULL
   Changes when the Continuous Collector issues a COMMIT to make the database updates permanent. After this MODIFY command, a COMMIT occurs only when the collect buffer is filled.
MODIFY jobname,REFRESH AT hhmm
F jobname,REFRESH AT hhmm
   Changes the time at which the refresh of the internal definitions (DEFS) and lookup tables for the Continuous Collector occurs. The time replaces the previous time set by either the COLLECT syntax or an earlier MODIFY command, and turns the refresh function back on.

MODIFY jobname,REFRESH OFF
F jobname,REFRESH OFF
   Turns off the refresh function. This prevents a refresh from occurring.

MODIFY jobname,REFRESH NOW
F jobname,REFRESH NOW
   Refreshes the internal definitions (DEFS) and lookup tables for the Continuous Collector immediately. This command does not alter the hhmm time set by the COLLECT statement or a previous MODIFY command.

MODIFY jobname,LOGSTREAM FREE NOW
F jobname,LOGSTREAM FREE NOW
   Causes the Continuous Collector to close and re-open the log stream immediately. This enables log stream overflow data sets that have been marked as freeable to be freed. This command does not change the frequency, or the ON or OFF status, of the hourly log stream free set by the LOGSTREAM FREE ON or LOGSTREAM FREE n commands.

MODIFY jobname,LOGSTREAM FREE OFF
F jobname,LOGSTREAM FREE OFF
   Turns off the regular close and re-open of the log stream started by the LOGSTREAM FREE ON or LOGSTREAM FREE n commands.

MODIFY jobname,LOGSTREAM FREE ON
F jobname,LOGSTREAM FREE ON
   If the log stream close and re-open function is off by default, or has been turned off by the MODIFY jobname,LOGSTREAM FREE OFF command, it may be turned on again using this command. When turned back on, the close and re-open happens immediately and then repeats at the interval that was last set, the default being one hour.

MODIFY jobname,LOGSTREAM FREE n
F jobname,LOGSTREAM FREE n
   Starts the log stream close and re-open function at the required frequency. The options are 1, 2, 3, 4, 5, or 6 hours. After this command, the log stream close and re-open happens immediately and then repeats at the selected interval. This command may be repeated with a different interval without an intervening LOGSTREAM FREE OFF command.
When the Continuous Collector is running, log stream overflow data sets that have been marked as
freeable may not be correctly freed until the log stream is closed.
If required, the log stream being read by the Continuous Collector may be closed and re-opened using the
MODIFY LOGSTREAM command.
The MODIFY jobname,LOGSTREAM FREE command has several options; the function may be performed
ad hoc with the NOW option or started to run regularly at a selected interval.
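For example, assuming the Continuous Collector runs under the job name DRLCCOLL (a name chosen only for illustration), the first command below frees overflow data sets once, and the second starts a regular free every two hours:

F DRLCCOLL,LOGSTREAM FREE NOW
F DRLCCOLL,LOGSTREAM FREE 2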
The COLLECT parameter REFRESH AT hhmm sets a time for when the Continuous Collector will refresh
the internal definitions and lookup tables from the IBM Z Performance and Capacity Analytics Db2
system tables. If you are installing a new component to your IBM Z Performance and Capacity Analytics
system you must ensure that a REFRESH does not occur while the component install is still in progress.
The safest way to do this is to set REFRESH OFF and turn it back on once the component install has
completed.
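For example, again assuming the illustrative job name DRLCCOLL:

F DRLCCOLL,REFRESH OFF        (before starting the component install)
F DRLCCOLL,REFRESH AT 0300    (after the install has completed)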
DRLEXPRESSIONS
This system table contains one row for each expression or condition in a log, record, record procedure, or
update definition.
DRLFIELDS
This system table contains one row for every field in each defined record type.
LENGTH (SMALLINT)
   Length of the field. For DECIMAL and ZONED fields, this is a 1-byte precision followed by a 1-byte scale.
OFFSET (SMALLINT)
   Offset of the field in the record or section.
INSECTION_NO (SMALLINT)
   Number of the section where the field is contained. This is zero if the field is not in a section.
REMARKS (VARCHAR(254))
   Description of the field, set by the COMMENT ON statement.
DRLLDM_COLLECTSTMT
This system table contains one row for each combination of log type and log ID that is defined to the Log
Data Manager. Each row identifies the collect statement that is used for the log type/log ID combination.
DRLLDM_LOGDATASETS
This system table contains one or more rows for each log data set recorded by the Log Data Manager.
DRLLOGDATASETS
This system table contains one row for each collected log data set.
DRLLOGS
This system table contains one row for each defined log type.
DRLPURGECOND
This system table contains one row for each purge condition in defined data tables.
DRLRECORDPROCS
This system table contains one row for each defined record procedure.
DRLRECORDS
This system table contains one row for each defined record type and one row for each defined header in
log definitions.
DRLRPROCINPUT
This system table contains one row for every defined record type that must be processed by a record
procedure.
DRLSECTIONS
This system table contains one row for every defined section in defined record types.
DRLUPDATECOLS
This system table contains one row for every column in each update definition, including GROUP BY, SET,
and MERGE columns.
DRLUPDATEDISTR
This system table contains one row for every distributed field or column in each update definition.
DRLUPDATELETS
This system table contains one row for every identifier in the LET clause of each update definition. (The
identifiers are defined as abbreviations in the administration dialog.)
DRLUPDATES
This system table contains one row for each update definition.
Dialog system tables
DRLCHARTS
This system table stores information extracted from the host graphical report formats (ADMCFORM data).
Data is inserted into this table at installation time by the host DRLIRD2 member. If GDDM version 3 or
later is installed and available, DRLCHARTS is also updated by the host exec DRLECHRT when a report is
saved in the host ISPF dialog.
DRLCOMPONENTS
This system table contains one row for each IBM Z Performance and Capacity Analytics component.
DRLCOMP_OBJECTS
This system table contains one row for every object in each component.
DRLCOMP_PARTS
This system table contains one row for every part in each component.
DRLGROUPS
This system table contains one row for each defined report group.
DRLGROUP_REPORTS
This system table contains one row for every report in each defined report group.
DRLREPORTS
This system table contains one row for each defined report.
DRLREPORT_ATTR
This system table contains one row for every attribute in each defined report.
DRLREPORT_COLUMNS
This system table contains one row for every column in each defined report if QMF is not used. The
information is taken from the QMF form.
DRLREPORT_QUERIES
This system table contains one row for every query line in each defined report, if QMF is not used.
DRLREPORT_TEXT
This system table is used for host reports when QMF is not used. It contains one row for every heading
and footing row. It also contains one row if there is a final summary line with a final text, and one row if
there is an expression that limits the number of output rows in the report.
DRLREPORT_VARS
This system table contains one row for every variable in each defined report. The variables may be
specified in the DEFINE REPORT statement or extracted from the query.
DRLSEARCH_ATTR
This system table contains one row for every attribute in each saved report search.
DRLSEARCHES
This system table contains one row for each saved report search.
GENERATE_PROFILES
This system table contains one row for each GENERATE statement profile. It is used
when installing components that use the GENERATE statement to create table spaces, partitioning, and
indexes.
GENERATE_KEYS
This system table contains one row for each partition of a generate statement profile using range-
partitioning. It is used when installing components that use the GENERATE statement to create range-
partitioned table spaces, partitioning and indexes.
Each table description includes information about the table, and a description of each key column and
data column in the table.
Key columns are marked with a "K".
Data columns come after the last key column and are sorted in alphabetical order, with any underscores
ignored.
The tables appear in alphabetical order, with any underscores ignored.
Note: Data tables with similar contents (that is, data tables with the same name but different suffixes) are
described under one heading. For example, “AVAILABILITY_D, _W, _M” on page 279 contains information
about three similar tables:
AVAILABILITY_D
AVAILABILITY_W
AVAILABILITY_M
Except for the DATE column and TIME column, the contents of these three tables are identical.
Differences in the contents of similar tables are explained in the column descriptions.
The DATE and TIME information is stored in the standard Db2 format and displayed in the local format.
Control tables
The control tables are created during installation of the IBM Z Performance and Capacity Analytics base.
The tables control results returned by some log collector functions.
Control tables appear in the tables list in the administration dialog.
DAY_OF_WEEK
This control table defines the day type to be returned by the DAYTYPE function for each day of the week.
The day type is used as a key in the PERIOD_PLAN and SCHEDULE control tables.
PERIOD_PLAN
This control table defines the periods to be returned by the PERIOD function, which is described in the
Language Guide and Reference. A period plan defines the partition of a day into periods (such as shifts) for
each day type defined by the DAY_OF_WEEK and SPECIAL_DAY control tables.
SCHEDULE
This control table defines the schedules to be returned by the APPLY SCHEDULE function. A schedule is a
time period when a resource is planned to be up; it is used in availability calculations.
SPECIAL_DAY
This control table defines the day type to be returned by the DAYTYPE function for special dates such as
holidays. The day type is used as a key in the PERIOD_PLAN and SCHEDULE control tables.
CICS_DICTIONARY
This control table is used during CICS log data collection. The CICS record procedure, DRL2CICS,
uses CICS_DICTIONARY to store the latest dictionary record processed for each unique combination
of MVS_SYSTEM_ID, CICS_SYSTEM_ID, CLASS and VERSION. For more information, refer to the CICS
Performance Feature Guide and Reference.
CICS_FIELD
This control table is used during CICS log data collection. The CICS record procedure, DRL2CICS, uses
CICS_FIELD to store field lengths and offsets for dictionary fields described in “CICS_DICTIONARY”
on page 277. For more information, refer to the CICS Performance Feature Guide and ReferenceCICS
Performance Feature Guide and Reference.
Common data tables
AVAILABILITY_D, _W, _M
These tables provide daily, weekly, and monthly statistics on the availability of systems and subsystems.
They contain consolidated data from the AVAILABILITY_T table.
The default retention periods for these tables are:
AVAILABILITY_D
90 days
AVAILABILITY_W
400 days
AVAILABILITY_M
800 days
AVAILABILITY_T
This table provides detailed availability data about the system as a whole and all its subsystems. The data
comes from many different sources. For every resource tracked, this table contains one row for each time
interval with a different status.
The default retention period for this table is 10 days.
EXCEPTION_T
This table provides a list of exceptions that have occurred in the system and require attention. The data
comes from many different sources.
The layout of this table cannot be changed by the user.
The default retention period for this table is 14 days.
MIGRATION_LOG
This table holds information on what migration jobs have been run, and the results of each step.
The layout of this table cannot be changed by the user.
The default retention period for this table is 14 days.

Common lookup tables
AVAILABILITY_PARM
This lookup table sets availability parameters. It contains the schedule names and availability objectives
to use for the different resources in the system. Its values are used in the AVAILABILITY_D, _W, and _M
tables.
USER_GROUP
This lookup table groups the users of the system into user groups. The values are used in many tables.
You can also assign division and department names to the user groups; however, the names are left blank
in the predefined tables.
TIME_RES
This lookup table defines the time resolution to use for each row of data stored in a set of tables. This
enables you to specify that data should be recorded for a time period other than 1 hour. The values are
used in these data tables:
• D_DB2_BUFF_POOL_T
• D_DB2_DATABASE_T
• D_DB2_SYSTEM_T
• D_KPM_DB2_BP_T
• D_KPM_DB2_DBASE_T
• D_KPM_DB2_SYSTEM_T
AGGR_VALUE
This table assigns a default value to a key field that is not required in the aggregation. If a record is found in AGGR_VALUE for a particular table and column, the default value is used in the aggregation. This can reduce the number of rows collected for that particular table.
Sample component
This topic describes the Sample component, the only component shipped with the IBM Z Performance
and Capacity Analytics base product. You can use the Sample component for testing the installation of the
base product or to demonstrate functionality.
The Sample component consists of:
• A sample log and record definition
• Three sample tables with update definitions
• Three sample reports
• A log data set with sample data that can be collected
Figure 137 on page 284 shows an overview of the flow of data from the sample log data set, DRLSAMPL
(in the DRLxxx.SDRLDEFS library), through the Sample component of IBM Z Performance and Capacity
Analytics, and finally into reports.
[Figure 137: the DRLSAMPL log is collected into SAMPLE_01 records, updated (using lookup and control tables) into the SAMPLE_H, SAMPLE_M, and SAMPLE_USER tables, and finally presented in reports.]
Reports
Sample Report 1
This surface chart shows the processor time consumed by different projects. It gives an hourly profile for
an average day.
This information identifies the report:
Report ID
SAMPLE01
Report group
Sample Reports
Source
SAMPLE_H
Chart format
DRLGSURF
Attributes
Sample
Variables
System ID
Sample Report 2
This report shows the resources consumed by each user and department.
This information identifies the report:
Report ID
SAMPLE02
Report group
Sample Reports
Source
SAMPLE_M
Attributes
Sample
Variables
From_month, To_month, System_ID
Sample Report 2
Average
Month Department User Trans- response CPU Pages
start date name ID actions seconds seconds printed
---------- ---------- -------- -------- -------- -------- --------
2000-01-01 Appl Dev ADAMS 1109 3.84 244.13 821
JONES 1138 3.40 228.79 1055
SMITH 870 4.27 183.03 864
-------- -------- -------- --------
* 3117 3.84 655.95 2740
Sample Report 3
This bar chart shows the processor time consumed by each project during the selected time period,
sorted as a toplist.
This information identifies the report:
Report ID
SAMPLE03
Report group
Sample Reports
Source
SAMPLE_M
Chart format
DRLGHORB
Attributes
Sample
Variables
From_date, To_date, System_ID
SMF records
Record name Member name Description
SMF_000 DRLRS000 IPL
SMF_002 DRLRS002 Dump header
SMF_003 DRLRS003 Dump trailer
SMF_004 DRLRS004 Step termination
SMF_005 DRLRS005 Job termination
SMF_006 DRLRS006 JES2/JES3/PSF/External writer
These records are user-defined; that is, they are not part of the standard IBM records in the range 0-127.
However, they are written by IBM licensed programs.
The default record numbers are provided within parentheses.
DFSMS/RMM records
Record name Member name Description
DFRMM_VOLUME DRLRRMMV Extract file volume record
DFRMM_RACK DRLRRMMR Extract file rack number record
DFRMM_SLBIN DRLRRMMS Extract file storage location bin record
DFRMM_PRODUCT DRLRRMMP Extract file product record
DFRMM_VRS DRLRRMMK Extract file VRS record
DFRMM_OWNER DRLRRMMO Extract file owner record
DFRMM_DATASET DRLRRMMD Extract file data set record
IMS SLDS records
DCOLLECT records
These records are produced by the DFP DCOLLECT utility.
For a description of these records, refer to z/OS DFSMS: Access Method Services for Catalog.
EREP records
For a description of these records, refer to the Environmental Record Editing and Printing Program (EREP)
User's Guide and Reference.
RACF records
These records come from the RACF Database Unload utility output that contains RACF configuration data.
IBM Z Workload Scheduler records
VM accounting records
For a description of these records, refer to z/VM: CP Planning and Administration.
VMPRF records
For a description of these records, refer to the VMPRF User's Guide and Reference.
Administration dialog options and commands
Administration window
_ 1. Dialog parameters...
2. Reporting dialog defaults...
3. Exit
Dialog parameters
See “Dialog parameters - variables and fields” on page 115.
Reporting dialog defaults
Refer to the Guide to Reporting for more information.
Exit
Returns to the previous window.
Other
_ 1. QMF
2. DB2I
3. ISPF/PDF
4. Process IZPCA statements...
5. Messages...
QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.
Process IZPCA statements
See “Working with fields in a record definition” on page 196.
Messages
Refer to the Guide to Reporting for more information.
Utilities
_ 1. Network
2. Workstation interface
3. Generate problem records...
4. System Diagnostics
5. TPM Extract
6. Search installed objects
Network
Refer to the Network Performance Feature Installation and Administration manual.
Generate problem records
See “Administering problem records” on page 166.
System Diagnostics
Refer to the topic "System Diagnostics" in the Messages and Problem Determination manual.
TPM Extract
Extracts usage data from IBM Z Performance and Capacity Analytics data tables which can be
imported into Tivoli Performance Modeller.
Search installed objects
Utility for searching installed component objects such as table columns, table comments, records,
updates, and reports.
Help
_ 1. Using help
2. General help
3. Keys help
4. Product information
Using help
Refer to the Guide to Reporting for more information.
General help
Refer to the Guide to Reporting for more information.
Keys help
Refer to the Guide to Reporting for more information.
Product information
Displays IBM Z Performance and Capacity Analytics copyright and release information.
Components window
Messages
Refer to the Guide to Reporting for more information.
Help
As for Help on the Administration window (see “Help” on page 302).
Logs window
/ Logs Description
_ DCOLLECT DFSMS DCOLLECT log
Other
QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.
Process IBM Z Performance and Capacity Analytics statements
See “Working with fields in a record definition” on page 196.
Messages
Refer to the Guide to Reporting for more information.
Help
As for Help on the Administration window (see “Help” on page 302).
Tables window
QMF
If your installation uses QMF, this command starts QMF and displays either its SQL primary window or
its prompted query primary menu.
REPORTs
Starts the reporting dialog.
SOrt column_name|position ASC|DES
Sorts an IBM Z Performance and Capacity Analytics list by the column you specify as column_name in
either ascending or descending order. (You can also sort by column number by specifying the number
of the column instead of the name. The first column after the selection field column on the left is
column 1.)
SYStem (see Note)
Displays the System window.
TABle (see Note)
Displays the Tables window.
Note: This command is not available in end-user mode from the reporting dialog.
Administration reports
This chapter describes the administration reports that are created when you create or update the IBM Z
Performance and Capacity Analytics system tables. The reports listed in this chapter are the following:
3270 Reports
• “PRA001 - Indexspace Cross-Reference” on page 309
• “PRA002 - Actual Tablespace Space Allocation” on page 310
• “PRA003 - Table Purge Condition” on page 311
• “PRA004 - Table Structure with Comments” on page 312
• “PRA005 - Table Names with Comments” on page 313
• “PRA006 - Object Change Level” on page 313
• “PRA007 - Collected Log Data Sets” on page 314
• “PRA008 - Components and Subcomponents” on page 315
• “PRA009 - Tablespace Allocation” on page 316
• “PRA010 - Update Definitions” on page 317
• “PRA011 - Update Details” on page 318
• “PRA012 - Table Name to Tablespace Cross-Reference ” on page 319
• “PRA013 - Tablespace to Table Name Cross-Reference ” on page 320
• “PRA014 - System Tables” on page 321
• “PRA015 - Non-System Tables Installed” on page 322
Cognos Reports
• “PRA001 - Indexspace Cross-Reference” on page 323
• “PRA002 - Actual Tablespace Space Allocation” on page 324
• “PRA003 - Table Purge Condition” on page 325
• “PRA004 - Table Structure with Comments” on page 326
• “PRA005 - Table Names with Comments” on page 327
• “PRA006 - Object Change Level” on page 328
• “PRA007 - Collected Log Data Sets” on page 330
• “PRA008 - Components and Subcomponents” on page 331
3270 reports
PRA001 - Indexspace Cross-Reference
The PRA001 report provides a cross-reference between index spaces and indexes that are present in the IBM Z Performance and Capacity Analytics environment at the time of running the report. This report enables you to extract the real name of an index, so that you can locate the index in the administration dialog and adjust its space allocation if required.
This information identifies the report:
Report ID
PRA001
Report group
ADMIN
Reports Source
DRLINDEXES
Indexspace
The name of the index space whose index name has been extracted. This is either the name
associated with a single index space or the complete cross reference between index and index space
names for all indexes.
Index Name
The name of the index associated with the indexspace.
For information about:
• The DRLINDEXES system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index
space” on page 227.
Space Allocated
The SPACE value as reported in the Db2 catalog (SYSIBM.SYSTABLESPACES table). The column
SPACE contains data only if the STOSPACE utility has been run.
For information about:
• The DRLTABLESPACE system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index space”
on page 227.
• The SYSTABLESPACE table, refer to the Db2 for z/OS: SQL Reference.
Date Installed
The date the purge condition was installed.
Creator
The ID of the person who installed the purge condition.
For information about:
• The DRLPURGCOND system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or edit purge conditions, see “Displaying and editing the purge condition of a table” on
page 225.
Length
Column length.
Comments
Column comment (if defined for the table column). It can be up to 255 characters long.
Cognos reports
PRA001 - Indexspace Cross-Reference
The PRA001 report provides a cross-reference between index spaces and indexes that are present in the IBM Z Performance and Capacity Analytics environment at the time of running the report. This report enables you to extract the real name of an index, so that you can locate the index in the administration dialog and adjust its space allocation if required.
This information identifies the report:
Report ID
PRA001
Report group
ADMIN
Reports Source
DRLINDEXES
Variables
Indexspace. Optional.
PRA002 - Actual Tablespace Space Allocation

Variables
Table space. Optional.

PRA003 - Table Purge Condition
Reports Source
DRLPURGECOND
Variables
Table Name, Latest Changes and Creator are optional. You can select the purge condition associated
with a single table or accept the default setting to obtain a complete list of current purge conditions.
PRA004 - Table Structure with Comments
Report ID
PRA004
Report group
ADMIN
Reports Source
DRLCOLUMNS
Variables
Table Name. Optional.

PRA005 - Table Names with Comments

Reports Source
DRLCOLUMNS
Variables
Table Name. Optional.
PRA007 - Collected Log Data Sets
Last Timestamp
Last timestamp located on the log data set.
User ID
The ID of the person who collected the log data sets.
PRA010 - Update Definitions
Report ID
PRA010
Report group
ADMIN
Reports Source
DRLUPDATES
Variables
Source Prefix, Source Name, Target Name, Target Prefix and Creator are optional.
PRA011 - Update Details
Creator
The name of the installer.
The argument passed on the CALL DRL1SQLX statement is one of the following:
'INIT'
sql-statement
'TERM'
where:
INIT
Establishes a call attachment facility (CAF) connection to Db2 that leaves the connection open until a DRL1SQLX TERM statement is executed. There is no implied COMMIT until the DRL1SQLX TERM statement.
If the REXX program passes INIT as the argument for the CALL DRL1SQLX statement, the
connection remains open for each SQL statement call. The connection does not terminate until a
CALL DRL1SQLX TERM statement closes it.
If the REXX program does not pass INIT as the argument for the CALL DRL1SQLX statement, the connection is opened at the beginning of each CALL DRL1SQLX sql_statement and closed at its conclusion, which makes SQL ROLLBACK impossible.
If you are making more than three calls to DRL1SQLX, it is more efficient to use the CALL DRL1SQLX
INIT statement first.
sql-statement
An SQL SELECT or another SQL statement that can be executed with an EXECUTE IMMEDIATE
statement. DRL1SQLX appends the SQL statement to SQL EXECUTE IMMEDIATE and executes it.
TERM
Terminates an existing connection to Db2 and performs an implied COMMIT.
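A minimal REXX sketch of this sequence is shown below. The SQLSTEM value and the query are illustrative, and the way SQLSTEM is set as an input variable is an assumption; the result variables follow the descriptions later in this section:

/* REXX - sketch of a DRL1SQLX session                          */
SQLSTEM = 'RES'                /* stem that receives the result */
Call DRL1SQLX 'INIT'           /* open the CAF connection       */
Call DRL1SQLX "SELECT LOG_NAME FROM DRLSYS.DRLLDM_LOGDATASETS"
Say 'Rows returned:' res.0     /* row count set by DRL1SQLX     */
Call DRL1SQLX 'TERM'           /* implied COMMIT, then close    */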
These variables are set by DRL1SQLX after a successful execution of an SQL SELECT statement. For each
variable below, sqlstem is the value of the SQLSTEM input variable, y is the column number, and z is the
row number:
sqlstem.NAME.0
The number of selected columns.
sqlstem.NAME.y
The names of the selected columns.
The column name of an expression is blank. Each value of y is a whole number from 1 through
sqlstem.NAME.0.
sqlstem.LENGTH.y
The maximum length of the value of the selected columns.
A column name can be longer than the value. Each value of y is a whole number from 1 through
sqlstem.NAME.0.
sqlstem.TYPE.y
The data types of the selected columns.
Each type is copied from the SQLTYPE field in the SQL descriptor area (SQLDA) and is a number
ranging from 384 to 501. Each value of y is a whole number from 1 through sqlstem.NAME.0.
sqlstem.0
The number of rows in the result table.
sqlstem.y.z
The value of the column.
Each value of y is a whole number from 1 through sqlstem.NAME.0.
Each value of z is a whole number from 1 through sqlstem.0.
/**************************************************************/
/* Display column names and values for all rows */
/**************************************************************/
If res.0 > 0 Then /* number of rows */
Do z = 1 To res.0
Say ' '
Say 'Following values were returned for row 'z':'
Do y = 1 To res.name.0
Say res.name.y': 'res.y.z
End
End
Else
Say 'No rows were returned'
Exit
IBM Z Performance and Capacity Analytics includes Analytics Components that are designed to support
the IBM Db2 Analytics Accelerator. These components are based on existing non-Analytics components
that are modified to allow for the following functions:
• Store data directly on an IBM Db2 Analytics Accelerator, removing the need to store the data on Db2 for z/OS®.
• Allow more detailed timestamp-level records to be stored.
• Allow more CPU work to move from z/OS to the IBM Db2 Analytics Accelerator appliance.
• Provide reports that make use of the high query speeds of the IBM Db2 Analytics Accelerator.
The System Data Engine component of the IBM Z Common Data Provider is used to convert SMF log
data into data sets that contain the IBM Z Performance and Capacity Analytics components tables in Db2
internal format. The IBM Db2 Analytics Accelerator Loader for z/OS is then used to load the Db2 internal
format data sets directly into the IBM Db2 Analytics Accelerator.
The Analytics components comprise the following items:
• Analytics - z/OS Performance
• Analytics - Db2
• Analytics - KPM CICS®
The Analytics components are based on the following existing non-Analytics components:
Table 9. Relationship of Analytics components to non-Analytics components
Analytics Non-Analytics
The Analytics components include Lookup tables that must be customized as per their equivalent Lookup
tables in the non-Analytics components:
Table 10. Relationship of Analytics Lookup table to non-Analytics Lookup table
Member name Analytics Lookup table non-Analytics Lookup table
The following table lists all the reports per Analytics component, and their equivalent non-Analytics
component reports.
The following table lists all the tables per Analytics component, and their equivalent non-Analytics
component tables.
Table 12. Relationship of Analytics component table to non-Analytics component table (continued)
Component Type Analytics component table Equivalent to non-Analytics component table
Analytics - Db2 Table A_DB2_SYS_PARM_I DB2_SYS_PARAMETER
A_DB2_DB_I DB2_DATABASE_T
A_DB2_DB_BIND_I DB2_DATABASE_T
A_DB2_DB_QIST_I DB2_DATABASE_T
A_DB2_DB_SYS_I DB2_SYSTEM_T
A_DB2_BP_I DB2_BUFFER_POOL_T
A_DB2_USERTRAN_I DB2_USER_TRAN_H
A_DB2_UT_BP_I DB2_USER_TRAN_H
A_DB2_UT_SACC_I DB2_USER_TRAN_H
A_DB2_UT_IDAA_I DB2_USER_TRAN_H
A_DB2_IDAA_STAT_I DB2_IDAA_STAT_H
A_DB2_IDAA_ACC_I DB2_IDAA_ACC_H
A_DB2_IDAA_ST_A_I DB2_IDAA_STAT_A_H
A_DB2_IDAA_ST_S_I DB2_IDAA_STAT_S_H
A_DB2_PACK_I DB2_PACKAGE_H
A_DB2_SHR_BP_I DB2_BP_SHARING_T
A_DB2_SHR_BPAT_I DB2_BPATTR_SHR_T
A_DB2_SHR_LOCK_I DB2_LOCK_SHARING_T
A_DB2_SHR_INIT_I DB2_SHARING_INIT
A_DB2_SHR_TRAN_I DB2_US_TRAN_SHAR_H
A_DB2_DDF_I DB2_USER_DIST_H
A_DB2_SYSTEM_I DB2_SYSTEM_DIST_T
A_DB2_STORAGE_I DB2_STORAGE_T
Table 12. Relationship of Analytics component table to non-Analytics component table (continued)
Component Type Analytics component table Equivalent to non-Analytics component table
Analytics - KPM z/OS Table A_KPM_EXCEPTION_I KPM_EXCEPTION_T
A_KZ_JOB_INT_I KPMZ_JOB_INT_T
A_KZ_JOB_STEP_I KPMZ_JOB_STEP_T
A_KZ_LPAR_I KPMZ_LPAR_T
A_KZ_STORAGE_I KPMZ_STORAGE_T
A_KZ_WORKLOAD_I KPMZ_WORKLOAD_T
A_KZ_CHANNEL_I KPMZ_CHANNEL_T
A_KZ_CF_I KPMZ_CF_T
A_KZ_CF_STRUC_I KPMZ_CF_STRUCTR_T
A_KZ_CPUMF_I KPMZ_CPUMF_T
A_KZ_CPUMF1_I KPMZ_CPUMF1_T
A_KZ_CPUMF_PT_I KPMZ_CPUMF_PT_T
A_KZ_CPUMF1_PT_I KPMZ_CPUMF1_PT_T
A_KZ_SRM_WKLD_I KPMZ_SRM_WKLD_T
There are cases where multiple tables from an Analytics component are combined into a single view.
In these cases, the resulting view matches an existing table from an IBM Z Performance and Capacity
Analytics non-Analytics component. See the following table for views in the Analytics components that
are based on multiple tables from non-Analytics components.
Table 13. Relationship of Analytics component tables used in view to non-Analytics component tables used in view
Component View Analytics component tables used in view Equivalent to non-Analytics component table
Analytics - Db2 A_DB2_USERTRAN_IV A_DB2_USERTRAN_I DB2_USER_TRAN_H
A_DB2_UT_BP_I
A_DB2_UT_SACC_I
A_DB2_UT_IDAA_I
Procedure
1. Ensure that the PTFs for APAR PI70968 have been applied to the IBM Z Performance and Capacity Analytics system.
2. Bind the Db2 plan that is used by IBM Z Performance and Capacity Analytics by specifying the
BIND option QUERYACCELERATION(ELIGIBLE) or QUERYACCELERATION(ENABLE). For example,
assuming the default plan name to be DRLPLAN, the BIND PACKAGE to set ELIGIBLE for the query
acceleration register is as follows:
//SYSTSIN DD *
DSN SYSTEM(DSN)
BIND PACKAGE(DRLPLAN) OWNER(authid) MEMBER(DRLPSQLX) -
ACTION(REPLACE) ISOLATION(CS) ENCODING(EBCDIC) -
QUERYACCELERATION(ELIGIBLE)
BIND PLAN(DRLPLAN) OWNER(authid) PKLIST(*.DRLPLAN.*) -
ACTION(REPLACE) RETAIN
RUN PROGRAM(DSNTIAD) PLAN(DSNTIAxx) -
LIB('xxxx.RUNLIB.LOAD')
END
For more information about the sample instructions to BIND with QUERYACCELERATION specified, see
SDRLCNTL(DRLJDBIN).
3. Modify the DRLFPROF data set to reflect the settings to apply when installing Analytics components.
DRLFPROF is the IBM Z Performance and Capacity Analytics data set that contains user modified
parameters. The following parameters in DRLFPROF provide support for the IBM Db2 Analytics
Accelerator:
def_useaot = "YES" | "NO"
"YES": Tables are created as Accelerator Only Tables.
"NO": Tables are created in Db2 and are suitable for use either as Db2 tables or as IDAA_ONLY
tables. The default value is "NO".
def_accelerator = "xxxxxxxx"
"xxxxxxxx": The name of the Accelerator where the tables reside. Required only if using
Accelerator Only Tables.
def_timeint = "H" | "S" | "T"
"H": The timestamp for records is rounded to hourly intervals, similar to non-Analytics tables with a suffix of "_H" in other components.
"S": The timestamp for records is rounded to intervals of a second, similar to non-Analytics tables with a time field instead of a timestamp in other components.
"T": The timestamp for records is the actual timestamp in the SMF log record, similar to non-Analytics tables with a suffix of "_T". The default value is "T".
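Taken together, a DRLFPROF fragment for an Accelerator Only Table installation might look like the following sketch (the accelerator name IDAA1 is a placeholder, and the comment style is an assumption):

def_useaot = "YES"          /* create Accelerator Only Tables */
def_accelerator = "IDAA1"   /* target Accelerator name        */
def_timeint = "H"           /* round timestamps to the hour   */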
4. Important: This step is required only if you use IBM Z Performance and Capacity Analytics to collect and populate the component tables on Db2 for z/OS, or if you use IBM Z Performance and Capacity Analytics reporting. If you only collect data into the IBM Db2 Analytics Accelerator and do not have the data reside on Db2 for z/OS, configure the lookup tables in IBM Z Common Data Provider. See the information about collecting data for direct load to the Accelerator in the IBM Z Common Data Provider V1.1.0 User's Guide (SC27-4624-01).
Customize each lookup table in the Analytics components as per the existing IBM Z Performance and
Capacity Analytics non-Analytics lookup tables.
For example, insert the same rows that are currently in DB2_APPLICATION into A_DB2_APPLICATION, as in the sketch that follows.
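If the lookup tables reside in the same Db2 subsystem, one way to copy the rows is an INSERT with a subselect, sketched here assuming the default DRL prefix and identical column layouts:

INSERT INTO DRL.A_DB2_APPLICATION
  SELECT * FROM DRL.DB2_APPLICATION;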
5. Install the desired Analytics component(s).
6. Add tables to the Accelerator.
If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), then the DRLFPROF
setting for def_useaot is "YES", and Db2 creates the tables on the IBM Db2 Analytics Accelerator when
the Analytics components are being installed.
If IBM Z Performance and Capacity Analytics doesn't use AOTs, the tables need to be added to the
IBM Db2 Analytics Accelerator. Tables can be added by using the Data Studio Eclipse application,
or by using stored procedures. To use stored procedures to add the tables to an IBM Db2 Analytics
Accelerator, modify and submit the SDRLCNTL members in the following table:
To collect data for direct load to tables on an IBM Db2 Analytics Accelerator, the following items are
required:
• The System Data Engine (SDE) component of the IBM Common Data Provider for z Systems to collect
the SMF data instead of using IBM Z Performance and Capacity Analytics Collect. The PTFs for APARs
OA52196 and OA52200 must be applied.
• The Db2 Analytics Accelerator Loader for z/OS V2.1 by using IDAA-Only load mode to load the data that
is created by the SDE into the IDAA.
See the information about collecting data for direct load to the Accelerator in the IBM Z Common Data
Provider V1.1.0 User's Guide (SC27-4624-01).
After the data has been collected, it can be loaded directly to the IBM Db2 Analytics Accelerator.
//HLODUMMY DD DUMMY
• A statement that tells the loader to load data into the Accelerator. This statement indicates the data is
only to reside on the IDAA_ONLY Accelerator, the name of the Accelerator, the schema and the table
name:
//SYSIN DD *
LOAD DATA RESUME YES LOG NO INDDN input_data_set_ddname
IDAA_ONLY ON accelerator-name
INTO TABLE DRLxx.table-name FORMAT INTERNAL;
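Putting the pieces together, a complete loader step might look like the following sketch. The program invocation, plan parameter, accelerator name, and data set names are assumptions; the supplied SDRLCNTL members contain the exact JCL:

//HLOLOAD  EXEC PGM=DSNUTILB,PARM='DSN,DRLLDUTL'
//HLODUMMY DD  DUMMY
//SDEOUT   DD  DISP=SHR,DSN=DRL.SDE.OUTPUT
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LOAD DATA RESUME YES LOG NO INDDN SDEOUT
  IDAA_ONLY ON IDAA1
  INTO TABLE DRL.A_DB2_SYSTEM_I FORMAT INTERNAL;
/*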
Procedure
1. To load the data that is created by the System Data Engine, modify and submit the SDRLCNTL
members in the following table based on the installed components:
Procedure
1. Remove tables from the Accelerator.
If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), then the DRLFPROF
setting for def_useaot is "YES", and you don't need to remove tables on the IBM Db2 Analytics
Accelerator because the next step will automatically remove them.
If IBM Z Performance and Capacity Analytics doesn't use AOTs, the tables must be removed from the Accelerator prior to uninstalling the component. Modify and submit the SDRLCNTL members in the following table according to the components to be uninstalled.
2. Uninstall the Analytics component(s) by using IBM Z Performance and Capacity Analytics menus.
Chapter 8. Installing the Usage and Accounting
Collector
The CIMS Lab Mainframe collector is incorporated into IBM Z Performance and Capacity Analytics and
called the Usage and Accounting Collector.
For a description of the Usage and Accounting Collector, see the System Overview section in the Usage
and Accounting Collector User Guide.
To install the Usage and Accounting Collector, follow these steps:
• “Step 1: Customizing the Usage and Accounting Collector” on page 357
• “Step 2: Allocating and initializing Usage and Accounting files” on page 360
• To verify your installation, follow these steps:
– “Step 3: Processing SMF data using DRLNJOB2 (DRLCDATA and DRLCACCT)” on page 360
– “Step 4: Running DRLNJOB3 (DRLCMONY) to create invoices and reports” on page 363
– “Step 5: Processing Usage and Accounting Collector subsystems” on page 364
To support programs such as CICS, Db2, IDMS, IMS, VM/CMS, VSE, DASD Space Chargeback, and Tape
Storage Accounting, edit and run the appropriate jobs. Examples of member names are DRLNCICS,
DRLNDB2, DRLNDISK.
Procedure
1. Replace sample job card with user job card.
2. Insert or replace data set name high-level qualifiers.
3. Insert serial numbers on the VOLUME parameter.
4. Insert DSCB model names.
Note: If you do not run DRLCINIT, you must change each job member manually as you use it.
DRLNINIT
Procedure
1. DRL.SDRLCNTL (DRLMFLST) contains the list of Usage and Accounting Collector jobs that are used in
this utility.
The default filenames are changed to start with 'DRL.IZPCAUAC'.
HLQSKIP=
Specify any non-blank character and the HLQ processing is skipped.
For example: HLQSKIP=Y
No customization of the Usage and Accounting Collector data set names is done.
5. Insert VOLSER numbers. At various places within the Usage and Accounting Collector jobs, volume
serial numbers are needed. The DRLCINIT job allows you to replace them all globally. The default
volume serial numbers are “??????” throughout the JCL. The default volume serial appears in IDCAMS
processing as VOL(??????) and VOL=SER=?????? and is used for VSAM file allocation. The JCL also uses
VOL=SER=?????? for temporary space allocations. The following parameters in STEP020 control the
VOLSER processing:
VOL=
The replacement volume serial to use instead of "??????".
VSSKIP=
Specify any non-blank character to skip the VOLSER processing.
For example: VSSKIP=Y
No customization of the Usage and Accounting Collector VOL or VOL=SER parameters is done.
6. Insert DSCB model names.
A model DSCB parameter is used for the proper functioning of Generation Data Groups (GDGs). The
Usage and Accounting Collector JCL is distributed with all model DSCB references set to 'MODELDSCB'.
If your installation does not require this parameter, you can delete it manually from the
JCL. The DSCB processing can be used to change the default to a value used at your installation. The
following parameters in STEP020 control the DSCB processing:
MDDSCB=
The replacement model DSCB to use instead of MODELDSCB.
MDSKIP=
Specify any non-blank character to skip the model DSCB processing.
For example: MDSKIP=Y
No customization of the Usage and Accounting Collector model DSCB will be done.
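Taken together, the STEP020 replacement parameters from steps 4 through 6 might be coded as
follows. The values shown (volume serial PRD001 and model DSCB name SYS1.MODEL) are illustrative
assumptions, not shipped defaults:
HLQSKIP=            (blank: data set name customization is performed)
VOL=PRD001          (volume serial PRD001 replaces "??????")
VSSKIP=             (blank: VOLSER replacement is performed)
MDDSCB=SYS1.MODEL   (SYS1.MODEL replaces 'MODELDSCB')
MDSKIP=             (blank: model DSCB replacement is performed)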
The DRLCINIT utility produces statistics for the execution. If any exceptions are noted, they are listed
in the DRLMXCEP member of &HLQ.LOCAL.CNTL. These exceptions might or might not be severe enough
to cause a JCL error; check DRLMXCEP whenever exceptions are reported.
Processing......
Completed SYSTSIN
69 Files
0 Exceptions
JobCard : 68 Replacements
HLQ : 1389 Replacements
Volume : 30 Replacements
ModelDSCB: 207 Replacements
Normal completion
Procedure
1. JOB STEP DRLC2A
This executes program DRLCDATA. For more information, see “SMF Interface Program - DRLCDATA” in
the Usage and Accounting Collector User Guide.
Table 19. Explanation of Program DRLCDATA (continued)
Input/output DDNAME   Description
INPUT        CIMSCNTL Data set DRL310.SDRLCNTL (DATAINPT). Contains
                      input control statements. For more information,
                      see the Control Statement Table in Chapter 2,
                      "SMF Interface Program - DRLCDATA", in the Usage
                      and Accounting Collector User Guide.
OUTPUT       CIMSSMF  Usage and Accounting Collector reformatted SMF
                      data set. Contains each SMF record from the
                      input data set unless limited by a records
                      statement. This data set is designed as a backup
                      data set of reformatted SMF records. Depending
                      on installation requirements, you might choose
                      to DD DUMMY this data set, or to comment out the
                      statement.
OUTPUT       CIMSACCT This data set contains the selected SMF
                      chargeback records (types 6, 30, 101, and 110).
                      It is used as input in step DRLC2B.
OUTPUT       CIMSCICS This data set contains CICS records (SMF type
                      110). These records are used by the Usage and
                      Accounting Collector CICS interface programs.
OUTPUT       CIMSDB2  This data set contains Db2 records (SMF type
                      101). These records are used by the Usage and
                      Accounting Collector Db2 interface programs.
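For instance, if your installation has no use for the reformatted SMF backup, the CIMSSMF output can be
suppressed with a standard JCL override, as the table suggests:
//CIMSSMF  DD DUMMY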
2. SMF Merge
It is recommended that you insert a merge between steps DRLC2A and DRLC2B to create a history
data set, DRL.SMF.HISTORY (see member DRLNSMFM in DRL310.SDRLCNTL). The merge field is position
7, for a length of one character. Use a cartridge tape and block the output data set to 32K
(BLKSIZE=32760).
The Usage and Accounting Collector Merge, a sample SORT/MERGE set of JCL that creates a sorted
history data set of Usage and Accounting Collector accounting records, can be found in data set
DRL310.SDRLCNTL, member DRLNMERG. This job should be run daily, after the batch and online Usage
and Accounting Collector jobs have been executed.
If DRLNMERG is run daily, the Usage and Accounting Collector master file is in account code sort
sequence at the end of the month.
Maintain the history data sets on tape. Leave the daily files on disk for daily reports, and set up
generation data sets to tape for the history file.
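As an illustration of such a merge, a minimal SORT/MERGE sketch follows. The data set names, record
format, and generation-data-group usage are assumptions; the supplied DRLNMERG member is the
authoritative version:
//MERGE    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN01 DD DISP=SHR,DSN=DRL.SMF.HISTORY(0)
//SORTIN02 DD DISP=SHR,DSN=DRL.UAC.DAILY
//SORTOUT  DD DSN=DRL.SMF.HISTORY(+1),DISP=(NEW,CATLG),
//            UNIT=TAPE,DCB=(RECFM=VB,LRECL=32756,BLKSIZE=32760)
//SYSIN    DD *
  MERGE FIELDS=(7,1,CH,A)
/*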
3. JOB STEP DRLC2B
This executes program DRLCACCT, which processes the data set created by program DRLCDATA
(DDNAME CIMSACCT) and generates the Usage and Accounting Collector batch chargeback data set.
For details, see “Accounting File Creation Program - DRLCACCT” in the Usage and Accounting Collector
User Guide.
Step 4: Running DRLNJOB3 (DRLCMONY) to create invoices and reports
About this task
DRLNJOB3 contains the JCL to run program DRLCMONY, which creates invoices and zero-cost invoices
(rate determination).
Billing control statements are contained in member DRLMMNY. Edit these statements to customize Usage
and Accounting Collector for your installation.
You can use the Usage and Accounting Collector defaults as distributed until you decide on client
information, billing rates, and control information.
To run DRLNJOB3, follow these steps:
Procedure
1. Run DRLC3A.
This step converts the 79x accounting records into CSR+ records. DRLCMONY supports only CSR+
records.
2. Run DRLC3B.
This step sorts the data set created by step DRLC3A into account code, job name, and job log number
sequence.
3. Run DRLC3C.
This step runs the Computer Center Billing System program, DRLCMONY.
For record descriptions, refer to “Accounting File Record Descriptions” in the Usage and Accounting
Collector User Guide.
For JCL information, see member DRLNJOB3 in DRL310.SDRLCNTL.
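In outline, DRLNJOB3 therefore has the following three-step shape. The conversion program name and
the sort control fields are not stated in this guide, so they appear as labeled placeholders to be taken
from the supplied member:
//DRLC3A  EXEC PGM=...            convert 79x records to CSR+ (program per DRLNJOB3)
//DRLC3B  EXEC PGM=SORT           sort into account code, job name, job log number sequence
//SYSIN   DD *
  SORT FIELDS=(...)               control fields per the supplied DRLNJOB3 member
/*
//DRLC3C  EXEC PGM=DRLCMONY       Computer Center Billing System: invoices and reports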
Procedure
1. Edit the appropriate JCL member. For example, DRLNCICS.
2. Create an account code conversion table.
3. Process the job.
4. Merge the output with the input to program DRLCMONY (DRLNJOB3).
5. Run DRLNJOB3 to generate the integrated invoices.
Results
The following table lists the member names for some of the most commonly used Usage and
Accounting Collector subsystems.
Table 21. Usage and Accounting Collector Subsystem Member Names (Partial List)
Member name  Description
DRLNCICS     CICS Support
DRLNDB2      Db2
DRLNMQSR     MQSeries®
DRLNDISK     DASD Space
DRLNTAPE     Tape Storage
DRLNIMS      IMS
DRLNUNIV     ROSCOE, ADABAS/SMF, IDMS/SMF, RJE, WYLBUR,
             Oracle, MEMO, Control-T, BETA
Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM provides a number of
ways for you to obtain the support you need.
• Searching knowledge bases: You can search across a large collection of known problems and
workarounds, Technotes, and other information.
• Obtaining fixes: You can locate the latest fixes that are already available for your product.
• Contacting IBM Software Support: If you still cannot solve your problem, and you need to work with
someone from IBM, you can use a variety of ways to contact IBM Support.
Before you contact IBM Support, be prepared to answer the following questions:
• What software versions were you running when the problem occurred?
• Do you have logs, traces, and messages that are related to the problem symptoms? IBM Support is
likely to ask for this information.
• Can you re-create the problem? If so, what steps were performed to re-create the problem?
• Did you make any changes to the system? For example, did you make changes to the hardware,
operating system, networking software, or product-specific customization?
• Are you currently using a workaround for the problem? If so, be prepared to explain the workaround
when you report the problem.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore,
this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who want to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. For a current list of IBM trademarks, refer to the Copyright and
trademark information at https://www.ibm.com/legal/copytrade.
Glossary
A
administration
An IBM Z Performance and Capacity Analytics task that includes maintaining the database, updating
environment information, and ensuring the accuracy of data collected.
administration dialog
A set of host windows used to administer IBM Z Performance and Capacity Analytics.
C
collect
A process used by IBM Z Performance and Capacity Analytics to read data from input log data sets,
interpret records in the data set, and store the data in Db2 tables in the IBM Z Performance and
Capacity Analytics database.
compatibility mode
A mode of processing in which the IEAIPSxx and IEAICSxx members of SYS1.PARMLIB determine
system resource management.
component
An optionally installable part of an IBM Z Performance and Capacity Analytics feature. Specifically in
IBM Z Performance and Capacity Analytics, a component refers to a logical group of objects used
to collect log data from a specific source, to update the IBM Z Performance and Capacity Analytics
database using that data, and to create reports from data in the database.
control table
A predefined IBM Z Performance and Capacity Analytics table that controls results returned by some
log collector functions.
D
data table
An IBM Z Performance and Capacity Analytics table that contains performance data used to create
reports.
DFHSM
In this book, DFHSM is referred to by its new product name. See DFSMShsm.
DFSMShsm
Data Facility Storage Management Subsystem hierarchical storage management facility. A functional
component of DFSMS/MVS used to back up and recover data, and manage space on volumes in the
storage hierarchy.
DFSMS
Data Facility Storage Management Subsystem. An IBM licensed program that consists of DFSMSdfp,
DFSMSdss, and DFSMShsm.
E
environment information
All of the information that is added to the log data to create reports. This information can include data
such as performance groups, shift periods, installation definitions, and so on.
G
goal mode
A mode of processing where the active service policy determines system resource management.
H
host
The MVS system where IBM Z Performance and Capacity Analytics runs collect and where the IBM Z
Performance and Capacity Analytics database is installed.
T
target
In an update definition, the Db2 table in which IBM Z Performance and Capacity Analytics stores data
from the source record or table.
threshold
The maximum or minimum acceptable level of usage. Usage measurements are compared with
threshold levels.
U
update definition
Instructions for entering data into Db2 tables from records of different types or from other Db2 tables.
V
view
An alternative representation of data from one or more tables. A view can include all or some of the
columns contained in the table on which it is defined.
IBM®
SC28-3211-01