
IBM Z Performance and Capacity Analytics

Version 3.1

Administration Guide and Reference

IBM

SC28-3211-01
Note
Before using this information and the product it supports, read the information in “Notices” on page 367.

This edition applies to Version 3 Release 1 of IBM Z Performance and Capacity Analytics (5698-AS3) and to all
subsequent releases and modifications until otherwise indicated in new editions.
Last updated: December 2022
© Copyright International Business Machines Corporation 1993, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
© Teracloud ApS 2018, 2022.
Contents

Figures................................................................................................................. ix

Tables................................................................................................................xvii

Preface...............................................................................................................xix
Who should read this book........................................................................................................................xix
Publications............................................................................................................................................... xix
Accessing publications online............................................................................................................. xix
Accessibility............................................................................................................................................... xix
Support information................................................................................................................................... xx
Conventions used in this book................................................................................................................... xx
Typeface conventions........................................................................................................................... xx
What's new in this edition (December 2022)........................................................................................... xxi

Chapter 1. Introduction to IBM Z Performance and Capacity Analytics.................... 1


Introduction to the performance features.................................................................................................. 2
Introduction to the log collector..................................................................................................................3
Log definitions........................................................................................................................................ 3
Record definitions...................................................................................................................................3
Update definitions.................................................................................................................................. 4
Table definitions..................................................................................................................................... 4
Log and record procedures.....................................................................................................................4
Collect process....................................................................................................................................... 4
Introduction to the database.......................................................................................................................6
Introduction to the administration dialog................................................................................................... 7
Introduction to the reporting dialog............................................................................................................ 7
Introduction to the Key Performance Metrics components....................................................................... 8
Introduction to the SMF Log Records component...................................................................................... 9
Introduction to the Collator.......................................................................................................................10
Introduction to the Data Splitter and the SMF Extractor..........................................................................11
The SMF Extractor................................................................................................................................ 11
Receiving raw SMF records from the SMF Extractor........................................................................... 11
Introduction to the Usage and Accounting Collector................................................................................12

Chapter 2. Installing IBM Z Performance and Capacity Analytics ..........................13


Installation prerequisites.......................................................................................................................... 13
Hardware prerequisites........................................................................................................................13
Software prerequisites......................................................................................................................... 14
Step 1: Reviewing the results of the SMP/E installation........................................................................... 16
Step 2: Setting up security.........................................................................................................................17
Security using secondary authorization IDs........................................................................................ 17
Security without secondary authorization IDs.................................................................................... 19
Step 3: Initializing the Db2 database........................................................................................................ 20
Step 4: Preparing the dialog and updating the dialog profile................................................................... 21
Step 5: Setting personal dialog parameters..............................................................................................23
Step 6: Setting up QMF.............................................................................................................................. 24
Step 7: Determining partitioning mode and keys..................................................................................... 25
Step 8: Creating system tables..................................................................................................................25
Creating and updating system tables with a batch job....................................................................... 26

Step 9: Customizing JCL............................................................................................................................ 27
Step 10: Testing the installation of the base.............................................................................................28
Step 11: Reviewing Db2 parameters.........................................................................................................30
Step 12: Installing components................................................................................................................ 31
Installing multiple IBM Z Performance and Capacity Analytics systems................................................ 31

Chapter 3. Installing the SMF Extractor and Continuous Collector......................... 33


SMF Configuration......................................................................................................................................33
Review the SID parameter................................................................................................................... 33
Review your SYS settings..................................................................................................................... 34
Review each of your SUBSYS settings................................................................................................. 34
SMF Extractor.............................................................................................................................................34
Sample configuration members................................................................................................................ 34
Defining a DASD-only log stream.........................................................................................................35
Defining a log stream on a coupling facility......................................................................................... 35
UPDPOL - Update CFRM policy............................................................................................................ 36
DRLJSMFO - SMF Extractor control file............................................................................................... 37
DRLJSMFX - SMF Extractor startup procedure................................................................................... 38
DRLJCCOL - Continuous Collector started task...................................................................................38
DataMover.sh - Run the DataMover..................................................................................................... 39
hub.properties - Configure a DataMover on a hub system..................................................................40
spoke.properties - Configure a DataMover on a spoke system...........................................................41
clear.properties - Erase all records from a log stream........................................................................ 42
Continuous Collector configuration options..............................................................................................42
Stand-alone configuration....................................................................................................................42
Hub and spoke configurations using the DataMover...........................................................................43
Communications prerequisites................................................................................................................. 44
Step 1: Installing the SMF Extractor......................................................................................................... 45
SMF Extractor tips................................................................................................................................ 46
Step 2: Installing the Continuous Collector.............................................................................................. 47
Step 3: Installing the DataMover...............................................................................................................49
DataMover tips..................................................................................................................................... 51
DataMover configuration options.........................................................................................................52
Step 4: Activating the Continuous Collector............................................................................................. 56

Chapter 4. Installation Optional Extensions..........................................................59


Extending the SMF extractor..................................................................................................................... 59
Configuration parameters..........................................................................................................................59
SMF Extractor console commands...................................................................................................... 61
Setting up data streaming......................................................................................................................... 62
Data Publication................................................................................................................................... 62
Establishing a Publication Mechanism................................................................................................ 64
Hardware and Network Considerations...............................................................................................76
Publishing Historical Data.................................................................................................................... 76
Sample configuration members................................................................................................................ 79
DRLJCCOL - Continuous Collector started task...................................................................................79
DataMover.sh - Run the DataMover..................................................................................................... 79
publication.properties - Transfer data from the log stream................................................................80
clear.properties - Erase all records from a log stream........................................................................ 81
Status monitoring commands reference...................................................................................................82
Log stream status................................................................................................................................. 82
SMF Extractor statistics....................................................................................................................... 82
DataMover status................................................................................................................................. 83
DataMover statistics.............................................................................................................................84
DataMover features and parameters reference........................................................................................84
Memory management.......................................................................................................................... 85
Record formats..................................................................................................................................... 85

Configuration........................................................................................................................................ 86
Stages, keywords, and parameter settings......................................................................................... 86
Common keywords...............................................................................................................................87
Input stage keywords...........................................................................................................................88
Process stage keywords.......................................................................................................................90
Output stage keywords........................................................................................................................ 92
Advanced configurations......................................................................................................................95
Installing the Collator Function for IBM Z Performance and Capacity Analytics.....................................99
Deployment.......................................................................................................................................... 99
Collation............................................................................................................................................. 104
Collator Configuration........................................................................................................................ 105
Collator Installation........................................................................................................................... 109
Data Splitter............................................................................................................................................. 110
Configuring the SMF Extractor........................................................................................................... 110
Installing the Data Splitter.................................................................................................................110
Configuring the Data Splitter..............................................................................................................110
Direct streaming.................................................................................................................................111

Chapter 5. Installation reference........................................................................113


Dialog parameters................................................................................................................................... 113
Modifying the DRLFPROF data set.....................................................................................................113
Overview of the Dialog Parameters window......................................................................................114
Dialog parameters - variables and fields...........................................................................................115
Allocation overview............................................................................................................................ 124
Defining objects....................................................................................................................................... 125
IBM Z Performance and Capacity Analytics component installation............................................... 125
Naming convention for IBM Z Performance and Capacity Analytics definition members.................... 132
Naming convention for members of DRLvrm.SDRLDEFS..................................................................132
Naming convention for members of DRLvrm.SDRLRENU................................................................. 133

Chapter 6. Administering IBM Z Performance and Capacity Analytics..................135


Setting up operating routines..................................................................................................................135
Collecting log data..............................................................................................................................135
Improving collect performance......................................................................................................... 144
Administering the IBM Z Performance and Capacity Analytics database........................................145
Administering lookup and control tables.......................................................................................... 159
Administering reports........................................................................................................................ 160
Administering problem records......................................................................................................... 166
Working with components.......................................................................................................................168
Installing and uninstalling a component........................................................................................... 169
Working with table space profiles..................................................................................................... 177
Reviewing table space profiles prior to installation.......................................................................... 177
Reviewing the GENERATE statements for table spaces, tables, and indexes..................................178
Working with a component definition................................................................................................178
Working with definitions.......................................................................................................................... 183
Working with the contents of logs..................................................................................................... 184
Working with log definitions.............................................................................................................. 191
Working with record definitions in a log............................................................................................ 194
Working with tables and definitions........................................................................................................201
Working with data in tables............................................................................................................... 203
Working with tables and update definitions......................................................................................216
Working with the log data manager option............................................................................................. 238
Summary of how the log data manager is used................................................................................ 238
Invoking the log data manager.......................................................................................................... 239
Job step for recording a log data set for collection...........................................................................239
Modifying log collector statements................................................................................................... 241
Listing and modifying the list of log data sets to be collected..........................................................244

The DRLJLDMC collect job and the parameters it uses.................................................................... 247
Modifying the list of successfully collected log data sets................................................................. 250
Modifying the list of unsuccessfully collected log data sets.............................................................252
Working with the Continuous Collector...................................................................................................253

Chapter 7. Administration reference...................................................................257


System tables and views......................................................................................................................... 257
Log collector system tables............................................................................................................... 257
Dialog system tables.......................................................................................................................... 265
Views on Db2 and QMF tables........................................................................................................... 273
Views on IBM Z Performance and Capacity Analytics system tables...............................................274
Control tables and common tables......................................................................................................... 274
Control tables..................................................................................................................................... 275
CICS control tables............................................................................................................................ 277
Common data tables.......................................................................................................................... 278
Common lookup tables...................................................................................................................... 281
Sample component..................................................................................................................................283
SAMPLE_H, _M data tables................................................................................................................284
SAMPLE_USER lookup table.............................................................................................................. 284
Sample component reports............................................................................................................... 285
Record definitions....................................................................................................................................288
SMF records........................................................................................................................................288
DFSMS/RMM records......................................................................................................................... 294
IMS SLDS records...............................................................................................................................294
DCOLLECT records............................................................................................................................. 297
EREP records...................................................................................................................................... 298
Linux on zSeries records.................................................................................................................... 298
RACF records......................................................................................................................................298
Tivoli Workload Scheduler for z/OS (OPC) records........................................................................... 299
VM accounting records.......................................................................................................................299
VMPRF records................................................................................................................................... 299
z/VM Performance Toolkit records.................................................................................................... 300
Administration dialog options and commands....................................................................................... 301
IBM Z Performance and Capacity Analytics dialog options.............................................................. 301
IBM Z Performance and Capacity Analytics commands................................................................... 307
Administration reports............................................................................................................................ 308
3270 reports...................................................................................................................................... 309
Cognos reports................................................................................................................................... 323
Using the REXX-SQL interface................................................................................................................. 342
Calling the DRL1SQLX module...........................................................................................................342
Using the IBM Db2 Analytics Accelerator............................................................................................... 345
Relationship of Analytics Components to non-Analytics Components............................................ 346
Configuring Analytics Components for use with IBM Db2 Analytics Accelerator............................ 351
Collecting data for direct load to the Accelerator............................................................................. 353
Loading data into the Accelerator......................................................................................................353
Uninstalling components used with an IBM Db2 Analytics Accelerator.......................................... 354

Chapter 8. Installing the Usage and Accounting Collector................................... 357


Step 1: Customizing the Usage and Accounting Collector..................................................................... 357
DRLNINIT........................................................................................................................................... 357
Step 2: Allocating and initializing Usage and Accounting files...............................................................360
Step 3: Processing SMF data using DRLNJOB2 (DRLCDATA and DRLCACCT)....................................... 360
Step 4: Running DRLNJOB3 (DRLCMONY) to create invoices and reports............................................ 363
Step 5: Processing Usage and Accounting Collector subsystems......................................................... 364

Appendix A. Support information....................................................................... 365


Contacting IBM Support.......................................................................................................................... 365

Notices..............................................................................................................367
Trademarks.............................................................................................................................................. 368

Bibliography...................................................................................................... 369
IBM Z Performance and Capacity Analytics publications.......................................................................369
Glossary............................................................................................................ 371

Index................................................................................................................ 375

Figures

1. Overview of IBM Z Performance and Capacity Analytics............................................................................. 1

2. Overview of batch data collection.................................................................................................................2

3. Overview of automated data gathering and continuous collection............................................................. 2

4. Overview of the data collection and processing steps................................................................................. 5

5. Overview of the setup for data collection and reporting.............................................................................. 6

6. Administration window................................................................................................................................. 7

7. Introducing the Reporting dialog.................................................................................................................. 8

8. SMF Records from SYS1 and SYS2............................................................................................................. 10

9. SMF Extractor.............................................................................................................................................. 11

10. IBM Z Performance and Capacity Analytics Primary Menu..................................................................... 23

11. Dialog Parameters window....................................................................................................................... 24

12. System Tables (not created) window....................................................................................................... 26

13. Logs window..............................................................................................................................................28

14. Collect window.......................................................................................................................................... 29

15. Reports window........................................................................................................................................ 29

16. Data Selection window............................................................................................................................. 30

17. JCL to define a DASD-only log stream......................................................................................................35

18. JCL to define a log stream on a coupling facility...................................................................................... 36

19. UPDPOL: Update CFRM policy.................................................................................................................. 37

20. DRLJSMFO: Parameters for the SMF Extractor control file......................................................................37

21. DRLJSMFX: JCL for the SMF Extractor startup procedure....................................................................... 38

22. DRLJCCOL: JCL for the Continuous Collector started task...................................................................... 39

23. DataMover.sh - Run the DataMover.......................................................................................................... 40

24. hub.properties - Configure a DataMover on a hub system...................................................................... 41

25. spoke.properties - Configure a DataMover on a spoke system............................................................... 41

26. clear.properties - Erase all records from a log stream.............................................................................42

27. Continuous Collector: stand-alone configuration.................................................................................... 42

28. Continuous Collector: hub and spoke configuration option A (separate log streams, partitioned Db2 database)......................................................................................................................................................... 43

29. Continuous Collector: hub and spoke configuration option B (combined log streams)......................... 43

30. Continuous Collector: multiple Spoke systems and one Hub DataMover............................................... 44

31. Continuous Collector: multiple Spoke systems and multiple Hub DataMovers......................................44

32. Hub configuration options........................................................................................................................ 53

33. DRLJCCOL: JCL for the Continuous Collector started task...................................................................... 79

34. DataMover.sh - Run the DataMover.......................................................................................................... 80

35. publication.properties - Transfer data from the log stream.................................................................... 81

36. clear.properties - Erase all records from a log stream.............................................................................81

37. Status command: SMF Extractor statistics.............................................................................................. 82

38. Status command: SMF Extractor statistics.............................................................................................. 83

39. DataMover or Publication DataMover statistics....................................................................................... 84

40. DataMover configuration scenario: SMF filtering..................................................................................... 95

41. SMF filtering configuration........................................................................................................................96

42. DataMover configuration scenario: data splitting.................................................................................... 97

43. Data splitting configuration.......................................................................................................................98

44. Collator Stand-alone Deployment..........................................................................................................100

45. Collator Sysplex Deployment................................................................................................................. 100

46. Intra-Sysplex Deployment......................................................................................................................101

47. Hybrid Deployment, Separate Systems................................................................................................. 102

48. Hybrid Deployment, Shared Data Stream.............................................................................................. 102

49. Hybrid Deployment, Split Data Streams................................................................................................ 103

50. Dialog Parameters window, when QMF is used..................................................................................... 114

51. Dialog Parameters window, when QMF is not used...............................................................................115

52. Definition member DRLISAMP, setting component definitions.............................................................126

53. Definition member DRLLSAMP, defining a log type............................................................................... 128

54. Definition member DRLRSAMP, defining a record type......................................................................... 128

55. Using SQL to define a table space (see definition member DRLSSAMP).............................................. 128

56. Using GENERATE to define a table space (see definition member DRLSKZJB)....................................129

57. Definition member DRLOSAMP, defining reports and report groups.................................................... 131

58. IBM Z Performance and Capacity Analytics definition member DRLQSA01, report query.................. 132

59. Invoking the log collector in batch to collect data.................................................................................136

60. DRLSMSG example................................................................................................................................. 137

61. Sample collect messages....................................................................................................................... 140

62. Collect Statistics window........................................................................................................................143

63. Db2 environment for the IBM Z Performance and Capacity Analytics database..................................146

64. Tablespace list window...........................................................................................................................146

65. Tables window - Option 12..................................................................................................................... 150

66. Tablespace list window...........................................................................................................................151

67. DRLJPURG job that uses all purge conditions........................................................................................153

68. Tables window - Option 10......................................................................................................................154

69. DRLJCOPY job for backing up IBM Z Performance and Capacity Analytics table spaces.................... 155

70. DRLJRUNS job for generating Db2 statistics..........................................................................................157

71. DB2I Primary Option Menu.....................................................................................................................159

72. Using QMF to report in batch..................................................................................................................166

73. Space pull-down..................................................................................................................................... 170

74. Installation Options window...................................................................................................................171

75. Sample log collector messages.............................................................................................................. 172

76. Lookup Tables window............................................................................................................................173

77. Editing an installation job....................................................................................................................... 174

78. Select Table window............................................................................................................................... 175

79. Tables window - showing component's lookup tables.......................................................................... 175

80. Component window................................................................................................................................ 179

81. Data Sets window................................................................................................................................... 185

82. Collect Statistics window........................................................................................................................185

83. Collect window........................................................................................................................................186

84. Sample log statistics output................................................................................................................... 188

85. Record Data window............................................................................................................................... 189

86. List Record window.................................................................................................................................190

87. Output from List record function............................................................................................................ 190

88. Log Definition window.............................................................................................................................192

89. Record Definitions window..................................................................................................................... 194

90. Record Definition window.......................................................................................................................195

91. Field Definition window.......................................................................................................................... 196

92. Section Definition window...................................................................................................................... 197

93. Record Procedure Definition window.....................................................................................................200

94. Tables window........................................................................................................................................ 203

95. Using QMF to display an IBM Z Performance and Capacity Analytics table......................................... 204

96. Editing a table in ISPF.............................................................................................................................206

97. Table Size window...................................................................................................................................207

98. Recalculate window................................................................................................................................208

99. Condition window................................................................................................................................... 209

100. Column Values window........................................................................................................................ 209

101. Selecting tables to unload.................................................................................................................... 211

102. Unload Utility window...........................................................................................................................212

103. Db2 High Performance Unload utility...................................................................................................215

104. Table window........................................................................................................................................ 216

105. Column Definition window................................................................................................................... 217

106. Add Column window.............................................................................................................................218

107. Indexes window....................................................................................................................................219

108. Index window....................................................................................................................................... 219

109. Add Index window................................................................................................................................ 220

110. Update Definitions window.................................................................................................................. 221

111. Update Definition window.................................................................................................................... 221

112. Abbreviations window.......................................................................................................................... 223

113. Distribution window..............................................................................................................................224

114. Apply Schedule window....................................................................................................................... 225

115. Retention Period window..................................................................................................................... 226

116. Purge Condition window.......................................................................................................................226

117. Tablespaces window............................................................................................................................ 227

118. Tablespace DRLxxx...............................................................................................................................228

119. Indexes window....................................................................................................................................229

120. Index window....................................................................................................................................... 229

121. Tablespace window.............................................................................................................................. 230

122. View window......................................................................................................................................... 231

123. New Table window................................................................................................................................233

124. Grant Privilege window.........................................................................................................................237

125. Revoke Privilege window...................................................................................................................... 238

126. Log Data Manager Main Selection window.......................................................................................... 239

127. Collect Statements window..................................................................................................................242

128. Edit collect statements window........................................................................................................... 243

129. Add Collect Statements Definition window......................................................................................... 244

130. Modify Collect Statements Definition window.....................................................................................244

131. SMF Log Data Sets To Be Collected window........................................................................................ 245

132. Modify Log ID For a Log Data Set window............................................................................................246

133. Add a Data Set To Be Collected window.............................................................................................. 247

134. Log Data Sets Collected Successfully window.....................................................................................250

135. Retention Period window..................................................................................................................... 251

136. Log Data Sets Collected with Failure window...................................................................................... 252

137. Sample data flow.................................................................................................................................. 284

138. Sample Report 1................................................................................................................................... 286

139. Sample Report 2................................................................................................................................... 287

140. Sample Report 3................................................................................................................................... 288

141. Indexspace Cross-Reference report.................................................................................................... 309

142. Actual Tablespace Space Allocation report......................................................................................... 310

143. Table Purge Condition report................................................................................................................311

144. Table Structure with Comments report................................................................................................312

145. Table Names with Comments report................................................................................................... 313

146. Object Change Level report.................................................................................................................. 314

147. Collected Log Data Sets report.............................................................................................................315

148. Components and Subcomponents report............................................................................................316

149. Tablespace Allocation report............................................................................................................... 317

150. Update Definitions report..................................................................................................................... 318

151. Update Details report........................................................................................................................... 319

152. Table Name to Tablespace Cross-Reference report............................................................................ 320

153. Tablespace to Table Name Cross-Reference report............................................................................ 321

154. System Tables report............................................................................................................................322

155. Non-System Tables Installed report.................................................................................................... 323

156. Indexspace Cross-Reference report.................................................................................................... 324

157. Actual Tablespace Space Allocation report......................................................................................... 325

158. Table Purge Condition report................................................................................................................326

159. Table Structure with Comments report................................................................................................327

160. Table Names with Comments report................................................................................................... 328

161. Object Change Level report.................................................................................................................. 329

162. Collected Log Data Sets report.............................................................................................................330

163. Components and Subcomponents report............................................................................................331

164. Tablespace Allocation report............................................................................................................... 333

165. Update Definitions report..................................................................................................................... 334

166. Update Details report........................................................................................................................... 335

167. Table Name to Tablespace Cross-Reference report............................................................................ 336

168. Tablespace to Table Name Cross-Reference report............................................................................ 338

169. System Tables report............................................................................................................................339

170. Non-System Tables Installed report.................................................................................................... 341

171. Example of REXX-SQL interface call.................................................................................................... 345

Tables

1. KPM components...........................................................................................................................................9

2. DataMover: Input stage parameter settings...............................................................................................88

3. DataMover: Process stage parameter settings...........................................................................................90

4. DataMover: Output stage parameter settings............................................................................................ 92

5. Collation shift values and split times........................................................................................................104

6. Order of installation of feature definition members................................................................................ 127

7. Parameters for table space reporting.......................................................................................................148

8. Parameters for batch reporting................................................................................................................ 164

9. Relationship of Analytics components to non-Analytics components.................................................... 346

10. Relationship of Analytics Lookup table to non-Analytics Lookup table................................................ 346

11. Relationship of Analytics component report to non-Analytics component report .............................. 347

12. Relationship of Analytics component table to non-Analytics component table...................................348

13. Relationship of Analytics component tables used in view to non- Analytics component tables
used in view..............................................................................................................................................350

14. Relationship of SDRLCNTL member name to Analytics component..................................................... 352

15. Relationship of SDRLCNTL member name to Analytics component..................................................... 353

16. Relationship of SDRLCNTL member name to Analytics component..................................................... 354

17. Relationship of SDRLCNTL member name to Analytics component..................................................... 354

18. Relationship of SDRLCNTL member name to Analytics component..................................................... 355

19. Explanation of Program DRLCDATA........................................................................................................ 360

20. Explanation of Program DRLCACCT........................................................................................................361

21. Usage and Accounting Collector Subsystem Member Names (Partial List)..........................................364

Preface
This book provides an introduction to IBM Z Performance and Capacity Analytics, the administration
dialog and the reporting dialog. It describes procedures for installing the base product and its features
and for administering IBM Z Performance and Capacity Analytics through routine batch jobs and the
administration dialog.
It also describes how to set up and configure extended reporting capability through interfacing
analytics tools, both on- and off-platform.
The terms listed are used interchangeably throughout the guide:
• MVS™, OS/390®, and z/OS.
• VM and z/VM®.

Who should read this book


The Administration Guide and Reference is for the IBM Z Performance and Capacity Analytics
administrator, the person who initializes the IBM Z Performance and Capacity Analytics database, and
performs the customization and administration of the system.
Readers should be familiar with the following:
• Db2® and its utilities
• Query Management Facility (QMF), if QMF is used with IBM Z Performance and Capacity Analytics
• Time Sharing Option Extensions (TSO/E)
• Restructured Extended Executor (REXX) language
• Job control language (JCL)
• Interactive System Productivity Facility/Program Development Facility (ISPF/PDF) and its dialog
manager functions

Publications
This section describes how to access the IBM Z Performance and Capacity Analytics publications online.
For a list of publications and related documents, refer to “IBM Z Performance and Capacity Analytics
publications” on page 369.

Accessing publications online


Publications for this and all other IBM products, as they become available and whenever they are
updated, can be viewed on the IBM Documentation website, where you can also download the associated
PDFs.
IBM Z Performance and Capacity Analytics V3.1.0
https://www.ibm.com/docs/en/zp-and-ca/3.1.0
IBM Documentation
https://www.ibm.com/docs/

Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. With this product, you can use assistive technologies to hear and
navigate the interface. You can also use the keyboard instead of the mouse to operate all features of the
graphical user interface.



For additional information, refer to the IBM Accessibility website:
https://www.ibm.com/accessibility

Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM provides the following
ways for you to obtain the support you need:
• Searching knowledge bases: You can search across a large collection of known problems and
workarounds, Technotes, and other information.
• Obtaining fixes: You can locate the latest fixes that are already available for your product.
• Contacting IBM Software Support: If you still cannot solve your problem, and you need to work with
someone from IBM, you can use a variety of ways to contact IBM Software Support.
For more information about these three ways of resolving problems, see Appendix A, “Support
information,” on page 365.

Conventions used in this book


This guide uses several conventions for special terms and actions, operating system-dependent
commands and paths, and margin graphics.
The following terms are used interchangeably throughout this book:
• MVS, OS/390, and z/OS.
• VM and z/VM.

Typeface conventions
This guide uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip, and Operating system considerations)
• Column headings in a table
• Keywords and parameters in text
Italic
• Citations (titles of books, diskettes, and CDs)
• Words defined in text
• Emphasis of words (words as words)
• Letters as letters
• New terms in text (except in a definition list)
• Variables and values you must provide
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type

• Values for arguments or command options

What's new in this edition (December 2022)


The changes in this edition relate to IBM Z Performance and Capacity Analytics V3.1.0 new function and
enhancements.
The changes apply to the PTFs for the following APARs and other documentation improvements.
• APAR PH45558 - Db2 component:
– Added Db2 V13 toleration: “Step 3: Initializing the Db2 database” on page 20
• APAR PH45278 - Continuous Collector:
– Updated MODIFY command LOGSTREAM FREE NOW|OFF|ON descriptions, and added LOGSTREAM
FREE n: “Working with the Continuous Collector” on page 253
• Editorial changes:
– Added DRLEMTJS description: “Processing Published Data in the IBM Z Common Data Provider” on
page 66
– Moved .forceFields description to JSON stage: “Process stage keywords” on page 90
Technical changes are marked in the PDF with a vertical bar in the margin to the left of the change.

Previous editions
May 2022
• APAR PH38474 - ELK reporting:
– Updated ELK version: “Software prerequisites” on page 14
• APAR PH44198 - SMF Extractor:
– Changed TRACE(Y) to TRACE(N) on the PARM option in “DRLJSMFX - SMF Extractor startup
procedure” on page 38
– Updated the F smfext,STOP command in “SMF Extractor console commands” on page 61
• APAR PH40134 - DataImporter:
– Updated steps 5, 8, 10, and 11 in “Setting up the DataImporter” on page 71
– Updated DataImporter historical data tables in “Publishing Historical Data” on page 76
– New command process.r.n.forceFields added in “Process stage keywords” on page 90
November 2021
• APAR PH38154 - Db2 Admin Authority reporting enhancements:
– “Administration reports” on page 308
September 2021
• APAR PH39079 - generate WTOs for selected messages:
– Added new information: “Using log collector language to collect data” on page 136
July 2021
• APAR PH38056 - refresh definitions and lookup tables when running the Continuous Collector:
– Add new information: “Working with the Continuous Collector” on page 253
• APAR PH07965 - Db2 Delta Statistics:
– Added a new lookup table: “TIME_RES” on page 282
– “Example of table contents” on page 283

June 2021
• “Dialog parameters - variables and fields” on page 115
• “GENERATE_PROFILES” on page 272
• “SMF records” on page 288
May 2021
• Updated Data Splitter description:
– “Introduction to the Data Splitter and the SMF Extractor” on page 11
– “Receiving raw SMF records from the SMF Extractor” on page 11
– “Data Splitter” on page 110
– “Configuring the Data Splitter” on page 110
• APAR PH35178 - new component DataImporter added to stream data from Db2:
– “Setting up data streaming” on page 62
– “Establishing a Publication Mechanism” on page 64
– “Shadowing Data out of Db2” on page 64
– “Processing Published Data in the IBM Z Common Data Provider” on page 66
– “Processing Published Data in an IBM Z Performance and Capacity Analytics Catcher” on page
67
– “Setting up the DataImporter” on page 71
– “Hardware and Network Considerations” on page 76
– “Network Considerations” on page 76
– “Publishing Historical Data” on page 76
March 2021
• Added new information for “Introduction to the Collator” on page 10 and “Introduction to the Data
Splitter and the SMF Extractor” on page 11
• Modified the structure and added new information to Chapter 3:
– “SMF Configuration” on page 33
– “Review the SID parameter” on page 33
– “Review your SYS settings” on page 34
– “Review each of your SUBSYS settings” on page 34
– “SMF Extractor” on page 34
– Moved from Chapter 4: “Sample configuration members” on page 34
• Modified the structure and added new information to Chapter 4:
– “Extending the SMF extractor” on page 59
– “Configuration parameters” on page 59
– “SMF Extractor console commands” on page 61
– “Data Splitter” on page 110
– “Configuring the SMF Extractor” on page 110
– “Installing the Data Splitter” on page 110
– “Configuring the Data Splitter” on page 110
– “Direct streaming” on page 111
February 2021
• The chapter structure has been modified by splitting Chapter 3 into two new chapters.

– Chapter 3: Chapter 3, “Installing the SMF Extractor and Continuous Collector,” on page 33
– Chapter 4: Chapter 4, “Installation Optional Extensions,” on page 59
November 2020
• Clarified architecture considerations for installation prerequisites: “Software prerequisites” on page
14
• Updated data set description for directory containing the DataMover: “Step 1: Reviewing the results
of the SMP/E installation” on page 16
• Improved instructions for Continuous Collector installation procedure: “Step 2: Installing the
Continuous Collector” on page 47
• Improved instructions for Continuous Collector startup procedure: “Step 4: Activating the
Continuous Collector” on page 56
• APAR PH28368: Improved installation instructions for remote data streaming
• APAR PH31089 and APAR PH28498: Added new instruction for ELK Shadower configuration
• Added new instruction for Collator function: “Installing the Collator Function for IBM Z Performance
and Capacity Analytics” on page 99
• APAR PH29556 - Added new instruction for dynamically modifying the Continuous Collector commit
interval: “Working with the Continuous Collector” on page 253
September 2020
• Improved instruction clarity for Secure Setup Procedure: “Step 3: Initializing the Db2 database” on
page 20
• Improved instructions for SMF Extractor installation: “SMF Extractor tips” on page 46
• Clarified streaming options for off-platform reporting
• Improved sample JCL for log streaming to coupling facility
• Clarified the description of modtenu field: “Dialog parameters - variables and fields” on page 115
• APAR PH26230: Added “Defining triggers” on page 131
August 2020
• Removed outdated CICS Partitioning Feature Customization information in Chapter 2, “Installing
IBM Z Performance and Capacity Analytics ,” on page 13
• Replaced DELPART_PROFILES with GENERATE_PROFILES: “Step 7: Determining partitioning mode
and keys” on page 25
• Removed outdated procedure in “Step 11: Reviewing Db2 parameters” on page 30
• Removed outdated Publisher DataMover information in: Chapter 3, “Installing the SMF Extractor
and Continuous Collector,” on page 33
• Updated Installing the Continuous Collector to include running in zIIP mode: “Step 2: Installing the
Continuous Collector” on page 47
• Added SSL Configuration instructions for Data Mover: “DataMover configuration options” on page
52
• Moved Data streaming setup instructions for off platform reporting from Guide to Reporting
• Added additional detail for Process Stage Keyword FILTER: “Process stage keywords” on page 90
• Modified STEPLIB to be up to date: “DRLJCOLL job for collecting data from an SMF data set” on
page 137
• Removed outdated content about system tables: “Understanding table spaces” on page 146

Chapter 1. Introduction to IBM Z Performance and
Capacity Analytics
IBM Z Performance and Capacity Analytics enables you to effectively manage the performance of your
system by collecting performance data in a Db2 database and presenting the data in a variety of formats
for use in systems management.
After reading this topic, you should have a basic understanding of IBM Z Performance and Capacity
Analytics and be ready to install it.
IBM Z Performance and Capacity Analytics has two basic functions:
1. Collecting systems management data into a Db2 database.
2. Reporting on the data.
IBM Z Performance and Capacity Analytics consists of a base product and several optional features.
The IBM Z Performance and Capacity Analytics base can generate graphical and tabular reports using
systems management data it stores in its Db2 database. To display graphical reports, IBM Z Performance
and Capacity Analytics can use IBM Graphical Data Display Manager (GDDM) or web reporting tools such
as IBM Cognos Analytics. You can also send the data to other platforms for further analysis and reporting
using interfacing products such as Splunk and ELK.
The IBM Z Performance and Capacity Analytics Administration and Reporting dialogs enable you to
specify your data collection and reporting requirements. An overview of the setup for data collection and
reporting is shown in the following diagram. Use the Administration dialog to define to the Log Collector
the type of log data that you want to collect from SMF, IMS, and other systems to store in Db2 databases.
Use the Reporting dialog to define to the report generator the reports that you want produced.

[Diagram: the Administration and Reporting dialogs drive the log collector and report generator. The log collector, with jobs and utilities, reads SMF, IMS, and other logs into Db2; the report generator and reporting tools produce tabular reports and charts.]

Figure 1. Overview of IBM Z Performance and Capacity Analytics

You can use IBM Z Performance and Capacity Analytics for batch data collection and periodic reporting.
When you are familiar with batch operation, you can implement more advanced configurations for
automated data gathering and continuous collection, as shown in the following figures.


[Diagram: SMF data is extracted on systems SYS1, SYS2, SYSX, and SYS3 and transferred to a batch Log Collector job, which writes aggregate data to Db2. Data transfer must be complete before each batch run, and updated reports based on the aggregated data become available at the end of each run.]

Figure 2. Overview of batch data collection

[Diagram: agents on systems SYS1, SYS2, SYSX, and SYS3 gather and transmit SMF data to a receiver agent. The Log Collector, adapted for continuous operation, writes aggregate data to Db2 as it becomes available, so current reports are available as soon as the aggregate data behind them has been updated.]

Figure 3. Overview of automated data gathering and continuous collection

Introduction to the performance features


IBM Z Performance and Capacity Analytics performance features provide Db2 table definitions and table
update instructions for collecting required systems management data. They also provide predefined
queries, forms, and reports for presenting that data.
Resource Accounting for z/OS is part of the IBM Z Performance and Capacity Analytics base function.
The following performance features are additional to the base function:
• IBM i System Performance Feature
• Customer Information Control System (CICS) Performance Feature
• Distributed Systems Performance Feature
• Information Management System (IMS) Performance Feature
• Network Performance Feature
• System Performance Feature


These features are used to collect and report on systems management data, such as System Management
Facility (SMF) data or IMS log data.
Each performance feature has components, which are groups of related IBM Z Performance and Capacity
Analytics definitions. For example, the z/OS Performance Management (MVSPM) component consists
of everything IBM Z Performance and Capacity Analytics needs to collect log data and create reports
showing z/OS performance characteristics.

Introduction to the log collector


At the center of IBM Z Performance and Capacity Analytics is the log collector program that reads
and processes performance data. Log collector tasks are controlled by log, record, update, and other
definitions in IBM Z Performance and Capacity Analytics system tables. For more information, see “Log
collector system tables” on page 257. You can add or modify definitions with both the administration
dialog and log collector language statements. For information on the administration dialog, see
“Introduction to the administration dialog” on page 7; for the reporting dialog, see “Introduction to
the reporting dialog” on page 7.
IBM Z Performance and Capacity Analytics provides both batch and interactive processing of log collector
language statements. For a description of the log collector and the language, refer to the Language Guide
and Reference.
The key function of the log collector is to read data and store it in data tables in the IBM Z Performance
and Capacity Analytics database. The log collector groups the data by hour, day, week, or month.
It computes sums, maximum or minimum values, averages, and percentiles, and calculates resource
availability. The collect process, also referred to as collecting data or as collect, includes gathering,
processing, and storing the data.

Log definitions
IBM Z Performance and Capacity Analytics gathers performance data about systems from sequential data
sets such as those written by SMF under z/OS, or by the Information Management System (IMS). These
data sets are called log data sets or logs.
To collect log data, IBM Z Performance and Capacity Analytics needs log descriptions. The log collector
stores descriptions of logs as log definitions in the IBM Z Performance and Capacity Analytics database.
All log definitions used by IBM Z Performance and Capacity Analytics features are provided with the base
product.
The administration dialog enables you to create log definitions or modify existing ones. For more
information, see “Working with log and record definitions” on page 183.
The log collector language statement, DEFINE LOG, also enables you to define logs. For more information,
refer to the description of defining logs in the Language Guide and Reference.

Record definitions
Each record in a log belongs to one unique record type. Examples of record types include SMF record
type 30, generated by z/OS, and SMF record type 110, generated by CICS. For IBM Z Performance and
Capacity Analytics to process a record, the record type must be defined. Detailed record layouts, field
formats, and offsets within a record, are described in IBM Z Performance and Capacity Analytics record
definitions. All record definitions used by IBM Z Performance and Capacity Analytics features are provided
with the base product.
The administration dialog enables you to create and modify record definitions. For more information, see
“Working with log and record definitions” on page 183.
The log collector language statement, DEFINE RECORD, also enables you to define records. For more
information, refer to the description of defining records in the Language Guide and Reference.
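As a further hedged sketch, a record definition has roughly the following shape. The record and field names here are hypothetical and the field list is abbreviated; the complete clause syntax is in the Language Guide and Reference, and the definitions shipped in DRLvrm.SDRLDEFS are full working examples.

/* Sketch only: all names are hypothetical and the field list */
/* is abbreviated; see the shipped definitions in SDRLDEFS.   */
DEFINE RECORD MY_SMF_030
  IN LOG SMF
  IDENTIFIED BY MYRTY = 30
  FIELDS
    (MYLEN LENGTH 2 BINARY,
     MYSEG LENGTH 2 BINARY,
     MYFLG LENGTH 1 CHAR,
     MYRTY LENGTH 1 BINARY);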


Update definitions
Instructions for processing data and inserting it into tables in the IBM Z Performance and Capacity
Analytics database are provided in update definitions. Each update definition describes how data from a
source (either a specific record type, or a row of a table) is manipulated and inserted into a target (a row
in a table). The update definitions used by an IBM Z Performance and Capacity Analytics component are
provided with the feature that contains the component.
The administration dialog enables you to create update definitions or modify them. For more information,
see “Displaying and modifying update definitions of a table” on page 220.
The log collector language statement, DEFINE UPDATE, also enables you to define updates. For more
information, refer to the description of defining updates in the Language Guide and Reference.
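Continuing the hedged sketches above, an update definition that summarizes the hypothetical record into an hourly table might have this general shape. All names and clauses are illustrative and abbreviated; see DEFINE UPDATE in the Language Guide and Reference for the actual syntax.

/* Sketch only: all names and clauses are illustrative.       */
DEFINE UPDATE MY_SMF_030_H
  FROM MY_SMF_030
  TO DRL.MY_DATA_H
  GROUP BY
    (DATE = MYDATE,
     SYSTEM_ID = MYSID)
  SET
    (RECORDS = SUM(1));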

Table definitions
IBM Z Performance and Capacity Analytics stores data collected from log data sets in its database tables.
It also stores IBM Z Performance and Capacity Analytics system data in system tables and site-specific
operating definitions in lookup and control tables. A table definition identifies the database and table
space in which a table resides, and identifies columns in the table. The table definitions used exclusively
by the feature components in IBM Z Performance and Capacity Analytics are provided with the feature.
The administration dialog enables you to create or modify lookup and data table definitions. For more
information, see “Working with tables and update definitions” on page 201.

Log and record procedures


Log procedures and record procedures are user exit programs for specific data collection scenarios.
Record procedures work on specific record types. Log procedures work on an entire log. The log and
record procedures used by IBM Z Performance and Capacity Analytics features are provided with the base
product.
For information about creating log and record procedure exits, refer to the Language Guide and Reference.
The administration dialog enables you to view and modify record procedure definitions, to identify
record definitions that require processing by record procedures, and to define record definitions that are
output from a record procedure. For more information, see “Viewing and modifying a record procedure
definition” on page 199.

Collect process
When definitions exist for a log, the log records, the log update instructions for record data, and target
data tables, you can collect data from that log. You start the collect process:
• From the administration dialog.
• With the log collector language statement COLLECT.
The log collector retrieves stored definitions and performs the data collection that they define.
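For example, a batch job that starts a collect has roughly the following shape. This is a sketch patterned on the DRLJCOLL sample job described later in this book; the program name, PARM values, DD names, and data set names shown are assumptions to verify against the member shipped in your SDRLCNTL library.

//*  Sketch only - verify against the shipped DRLJCOLL sample.
//*  SYSTEM, SYSPREFIX, and the data set names are placeholders.
//COLLECT  EXEC PGM=DRLPLC,PARM=('SYSTEM=DSN SYSPREFIX=DRLSYS')
//STEPLIB  DD DISP=SHR,DSN=DRL310.SDRLLOAD
//DRLIN    DD *
  COLLECT SMF;
/*
//DRLLOG   DD DISP=SHR,DSN=your.dumped.smf.data.set
//DRLOUT   DD SYSOUT=*
//DRLDUMP  DD SYSOUT=*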
Figure 4 on page 5 shows the collect process.


[Diagram: the numbered flow of the twelve data collection and processing steps described below, from log data through log, record, and update definitions to data tables, SQL queries, and reports.]

Figure 4. Overview of the data collection and processing steps

IBM Z Performance and Capacity Analytics processes data in these steps:


1. The operating system or other program writes data to a sequential log data set, which is the input to
IBM Z Performance and Capacity Analytics.
2. You initiate the collect either through the dialog or by using an IBM Z Performance and Capacity
Analytics language statement in a job, identifying a specific log type definition.
3. Optionally, the log definition might process the log data with a user exit program; a log procedure. If
the log definition calls a log procedure:
a. The log procedure receives each record in the log as input.
b. Output from a log procedure varies in format and is usually a record mapped by an IBM Z
Performance and Capacity Analytics record definition.
4. IBM Z Performance and Capacity Analytics looks for record definitions associated with the log
definition in its system tables. It applies those record definitions to specific record types from the
log or log procedure.
5. Optionally, a record definition might require processing by a user exit program; a record procedure. If
a record definition requires processing by a record procedure:
a. The record procedure receives only a specific record type and is not called for other record types.
b. Output from a record procedure varies in format and is usually a record mapped by an IBM Z
Performance and Capacity Analytics record definition.
6. IBM Z Performance and Capacity Analytics applies a specific update definition to each known record
type and performs the data manipulations and database updates as specified.


7. IBM Z Performance and Capacity Analytics often selects data from lookup tables to fulfill the data
manipulations that update definitions require.
8. IBM Z Performance and Capacity Analytics writes non-summarized and first-level summarized data
to data tables specified by the update definitions.
9. IBM Z Performance and Capacity Analytics uses updated tables as input for updating other, similar
tables that are for higher summary levels. If update definitions specify data summarization:
a. IBM Z Performance and Capacity Analytics selects data from a table as required by the update
definitions and performs required data summarization.
b. IBM Z Performance and Capacity Analytics updates other data tables as required by update
definitions.
c. IBM Z Performance and Capacity Analytics might select data from lookup tables during this
process (although that is not shown in the diagram for this step).
10. After IBM Z Performance and Capacity Analytics stores the data from a collect, you can display
reports on the data. IBM Z Performance and Capacity Analytics uses a query to select the data for the
report.
11. Optionally, IBM Z Performance and Capacity Analytics might select data from lookup tables specified
in the query.
12. IBM Z Performance and Capacity Analytics creates report data, displaying, printing, and saving it as
you requested.
For more information about collecting log data, see “Setting up operating routines” on page 135.

Introduction to the database


The IBM Z Performance and Capacity Analytics database contains system tables, lookup tables, and
collected data. Log collector processing transforms large amounts of log data into useful information
about your systems and networks. The volume of this information in the data tables is less than the
volume of data read from logs.
An overview of the architecture and data flow from collecting, filtering and storing in the IBM Z
Performance and Capacity Analytics Db2 database through to analysis and reporting is shown in Figure 5
on page 6.

[Diagram: log data flows through the log definition (and optional log procedure), record definitions (and optional record procedures), and update definitions into data tables, with lookup tables consulted along the way; SQL queries then produce reports.]

Figure 5. Overview of the setup for data collection and reporting

IBM Z Performance and Capacity Analytics stores data that it collects in hourly, daily, weekly, and monthly
tables, and in non-summarized tables. It maintains groups of tables that have identical definitions except
for their summarization levels. For example, the EREP component of the System Performance feature
creates the data tables EREP_DASD_D and EREP_DASD_M, which differ only because one contains daily
data and the other, monthly data.
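For example, assuming the standard DATE column that these tables carry, a simple Db2 query against the daily table might look like the following sketch; check the table's actual columns and your site's table prefix before using it.

-- Sketch only: assumes the default DRL prefix and a DATE column.
SELECT *
  FROM DRL.EREP_DASD_D
  WHERE DATE >= CURRENT DATE - 7 DAYS
  ORDER BY DATE;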


Because the IBM Z Performance and Capacity Analytics database is relational, you can:
• Combine information from any of your systems into a single report.
• Summarize by system within department, by department within system, or by whatever grouping is
required.
You can keep data tables containing historical data for many years without using much space. The
database size depends mainly on the number of short-term details you keep in it and not on summarized
weekly or monthly data.
The IBM Z Performance and Capacity Analytics database contains operating definitions in its system
tables. These definitions include those for logs, records, updates, and tables shipped with IBM Z
Performance and Capacity Analytics. The database also contains lookup tables of parameters that you
supply, such as performance objectives or department and workload definitions for your site.
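For instance, a site-specific lookup table row could be maintained with ordinary SQL along these lines. This is a sketch only: the table and column names are hypothetical, and each component documents its actual lookup tables.

-- Sketch only: MY_PERIOD_LOOKUP and its columns are hypothetical.
INSERT INTO DRL.MY_PERIOD_LOOKUP (SYSTEM_ID, PERIOD_NAME)
  VALUES ('SYS1', 'PRIME');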

Introduction to the administration dialog


The administration dialog enables you to carry out the following tasks:
1. Install and customize IBM Z Performance and Capacity Analytics and its features.
2. Install and customize IBM Z Performance and Capacity Analytics components.
3. Work with log and record definitions.
4. Work with tables in the IBM Z Performance and Capacity Analytics database.
5. Create and run reports.
All of these options are available from the Administration window (Figure 6 on page 7).

Options  Other  Utilities  Help
------------------------------------------------------------------------------
              IBM Z Performance and Capacity Analytics Administration
Option ===> __________________________________________________________________

1  System      Perform system tasks         System ID . . : AUS1
2  Components  Install components           Db2 Subsystem : DEC1
3  Logs        Show installed log objects   Db2 plan name : DRLPLAN
4  Tables      Show installed data tables   System tables : DRLSYSYY
5  Reports     Run reports                  Data tables . : DRLYY

F1=Help  F2=Split  F3=Exit  F9=Swap  F10=Actions  F12=Cancel

Figure 6. Administration window

Introduction to the reporting dialog


The IBM Z Performance and Capacity Analytics reporting dialog enables you to display reports that
present the log data stored in the product database. When you use the reporting dialog to display or
print a report, IBM Z Performance and Capacity Analytics runs a query associated with the report to
retrieve data from the database, and then displays, or prints, the results according to an associated form.
If your installation uses QMF with IBM Z Performance and Capacity Analytics, QMF is started up when you
work with queries and reports. Otherwise, IBM Z Performance and Capacity Analytics uses its own report
generator.
Figure 7 on page 8 shows the Reporting dialog.


                        Reporting Dialog Defaults

Type information. Then press Enter to save defaults.

Entry to dialog . . . 2_   1. Display of previous selection
                           2. Display of all reports
                           3. Display of a selected group of reports

Group ID . . . . . . . __________________ + (required if group selected)
Group owner . . . . . ________ (blank for public group)

Display of this window 1_  1. No display
                           2. Display at exit from dialog
                           3. Display at entry to dialog

Confirmation of exit 1_    1. Yes
                           2. No

F1=Help  F2=Split  F4=Prompt  F9=Swap  F12=Cancel

Figure 7. Introducing the Reporting dialog

When you produce a report, you can specify values for the query that is used to select specific rows of
data. You can display, print, or save, the retrieved data in either a tabular or a graphic report format.
Note: To generate and display graphic reports, IBM Z Performance and Capacity Analytics uses Graphical
Data Display Manager (GDDM). If you are using IBM Z Performance and Capacity Analytics without QMF,
GDDM is not required. If GDDM is not used, all reports are displayed in tabular form.
A report can consist of these items, which are identified in the report definition:
• A query for selecting data (required).
• A form that formats the data and specifies report headings and totals.
• Graphical Data Display Manager (GDDM) format for a graphic report.
• Report attributes (for creating logical groups of reports).
• Report groups to which the report belongs.
• Variables in the report.
When installing a component, you install a comprehensive set of predefined report queries, forms, and,
optionally, GDDM formats for the component. The reporting dialog enables you to:
• Define new report definitions or modify existing ones.
• Define new queries and forms or modify existing ones, using QMF or the IBM Z Performance and
Capacity Analytics built-in report generator.
• Display reports.
• Define reports for batch execution.
The Guide to Reporting describes the host reporting dialog. For a description of using the Common User
Access (CUA) interface presented in IBM Z Performance and Capacity Analytics windows and helps, refer
to the "Getting Started" section of that book.

Introduction to the Key Performance Metrics components


IBM Z Performance and Capacity Analytics has four components called the Key Performance Metrics (also
referred to as KPM) components. Specifically, there is one KPM component for each of z/OS, Db2, CICS,
and IMS.
Within the IBM Z Performance and Capacity Analytics component list, you will see the components
named as follows:
• Key Performance Metrics - z/OS
• Key Performance Metrics - CICS
• Key Performance Metrics - Db2
• Key Performance Metrics - IMS
These components are designed to only collect data that is considered to be key metrics for the
monitoring of these subsystems. They can be installed stand-alone, or they can be installed along with the
corresponding existing base component. For example, the Db2 KPM component could be installed with or
without the existing Db2 component being installed. Note that if you had both the Db2 KPM component
and the existing base Db2 component installed, at collect time you only need to collect the SMF log
once to populate the data tables for both components.
Each KPM component contains significantly fewer tables, and fewer columns within each table, than its
corresponding base component. For this reason, the performance of collecting data into the KPM
components should be significantly improved compared with their associated base components.
For users who only reference metrics from the KPM tables, collecting only the KPM components should
result in considerable CPU and elapsed time savings at collect time when compared to collecting the
corresponding base components.
For details on each of the individual KPM components, refer to the appropriate guide in the table below.

Table 1. KPM components

KPM Component                    Guide
Key Performance Metrics - z/OS   System Performance Feature Reference Volume I
Key Performance Metrics - CICS   CICS Performance Feature Guide and Reference
Key Performance Metrics - Db2    System Performance Feature Reference Volume I
Key Performance Metrics - IMS    IMS Performance Feature Guide and Reference

Each KPM component uses table space profiles which allow the table, table space, and index settings
within each KPM component to be easily modified in one place. Before installing the KPM components,
refer to the topic “Working with table space profiles” on page 177.

Introduction to the SMF Log Records component


The SMF Log Records component provides record definitions that allow you to view and analyze SMF
records, whether or not you are currently collecting those records.
The SMF Log Records component provides functions that can be performed online using the display
facility and in batch using the Log Collector LIST RECORD statement. You can perform these functions
without installing a collect component that is based on the records of interest. Other utilities and Log
Collector statements can also be used in conjunction with the SMF Log Records component.
You can examine the contents of records you are not currently collecting to help determine if they contain
data you want to collect. You can also examine records you are currently collecting to provide a better
understanding of the summarized data you are seeing in your reports.

Getting ready to use the SMF Log Records component


Take the following steps to prepare for using the SMF Log Records component:
1. Install the SMF Log Records component.
2. Familiarize yourself with the Display Utility and LIST RECORD statement. Use the SMF Log Records
component to explore these options.
3. Set a goal and work out how to achieve it using the SMF Log Records component.


Using the SMF Log Records component


For more information about the SMF records you can display and analyze, refer to the topic “SMF records”
on page 288.
Before you start examining SMF records in detail it is helpful to understand the contents of the log you are
using. You can see the SMF record mappings applied to the records in your log by using the log statistics
utility online or running the LOGSTAT statement online or in batch. For more information about using the
log statistics utility, refer to the topic “Displaying log statistics” on page 187. For more information about
using the LOGSTAT statement, refer to the topic "LOGSTAT" in the Language Guide and Reference.
To display the contents of individual records in detail you can use the online display utility. For more
information about using the display utility, refer to the topic “Displaying the contents of a log” on page
188.
To generate detailed reports on individual records or sets of records focusing on specific fields of interest,
you can use the list record utility online or the LIST RECORD statement online or in batch. For more
information about using the list record utility, refer to the topic “Creating a report on a record” on page
189. For more information about using the LIST RECORD statement, refer to the topic “LIST RECORD” in
the Language Guide and Reference.

Introduction to the Collator


The Collator is an IBM Z Performance and Capacity Analytics function that sorts raw SMF records into
streams of data sets for archival. The input is a log stream containing raw SMF records; the output is
multiple sequences of data sets, each containing a subset of those records. A COLLATE stage is used to
sort the SMF records into multiple streams and to work out the names of the data sets that each stream
should be written to. Once the SMF records have been written to a stream, it is the user's responsibility
to archive and manage them. The Collator writes only to DASD data sets.
A Collator is typically used to gather a subset of critical SMF records from all the systems in a sysplex
and sort them into streams for archiving – for example, one stream for security records and one for
system usage records. The streams can then have different archival requirements.

Figure 8. SMF Records from SYS1 and SYS2

In the above scenario, raw SMF records are being gathered from two systems (SYS1 and SYS2) and
forwarded to a control system (SYSX), where they are received and written to a log stream. The Collator
then processes them and sorts them into separate streams of data sets that can be archived.


Introduction to the Data Splitter and the SMF Extractor


The Data Splitter is used with the SMF Extractor to distribute raw SMF records, delivered over TCPIP, to
whatever receivers users wish to set up.

The SMF Extractor


The SMF Extractor is the component of IBM Z Performance and Capacity Analytics that captures SMF
records for processing by IBM Z Performance and Capacity Analytics . This is a common component with
IBM Z Batch Resiliency (IZBR) and, following APAR PH29065, a single SMF Extractor instance can feed
both IBM Z Performance and Capacity Analytics and IBM Z Batch Resiliency (IZBR) when they are running
on the same system. The SMF Extractor can produce additional log stream outputs, one or more of which
can be used to feed an IBM Z Performance and Capacity Analytics Data Splitter.

Figure 9. SMF Extractor

Receiving raw SMF records from the SMF Extractor


The SMF Extractor supports the specification of multiple output streams – up to six sets of data sets and
up to six log streams.
• When data sets are used, they are rotated at a user specifiable interval (defaulting to 10 minutes). This
means that the currently open data sets will be closed and a new set of data sets will be opened. The
data from the closed data sets may then be read. Data set output is used by IBM Z Batch Resiliency; see
the IBM Z Batch Resiliency (IZBR) documentation for more details.
• When log streams are used, new SMF records are appended to the end of the log stream and the
consumer is expected to wipe records from the front of the log stream after they have been read (or
leave them to be automatically purged after the log stream retention period is reached).
While you may write an application to read raw SMF records from one of the log streams or a sequence of
data sets, you can also use a Data Splitter to read, subdivide and, optionally, clean up the stream before
transmitting the records over TCPIP.
The following diagram shows how the SMF Extractor and a Data Splitter might be deployed in this scenario.


The role of the Data Splitter in these scenarios is to:


• Read the records from the log stream.
• Sort them into separate 'streams' using a COLLATE stage.
• Send each stream to one or more remote receivers.

Introduction to the Usage and Accounting Collector


The CIMS Lab Mainframe collector is incorporated into IBM Z Performance and Capacity Analytics and is
called the Usage and Accounting Collector. It extracts z/OS accounting data, which is used to populate
IBM Tivoli Usage and Accounting Manager databases on distributed platforms. The Usage and Accounting
Collector does not require Db2 as prerequisite software on z/OS.
For a description of the Usage and Accounting Collector, see “System Overview” in the Usage and
Accounting Collector User Guide.
For information on how to install the Usage and Accounting Collector, see Chapter 8, “Installing the Usage
and Accounting Collector,” on page 357.
Note: Spectrum Writer is not included with UAC. Former CIMS Lab customers have a perpetual license for
Spectrum Writer and should retain the CIMS Lab data sets so that they can make use of it. For support
of Spectrum Writer, contact Pacific Systems. Customers that require access to CIMS Mainframe 12.2.1
should contact IBM support.


Chapter 2. Installing IBM Z Performance and Capacity Analytics
How to install IBM Z Performance and Capacity Analytics.
Follow these instructions to install IBM Z Performance and Capacity Analytics for the first time.
If you are migrating from an earlier version of IBM Z Performance and Capacity Analytics, follow
the migration instructions documented in the tech note that is available from https://www.ibm.com/
support/pages/node/2950341. However, the migration instructions should be read in conjunction with
the installation instructions in this chapter to develop an installation and migration plan appropriate for
your environment.
The initial installation process starts after a system programmer has performed the SMP/E installation.
The SMP/E installation of the IBM Z Performance and Capacity Analytics base and its features is described
in the IBM Z Performance and Capacity Analytics Program Directory.
You can also use this information to install IBM Z Performance and Capacity Analytics on other systems,
or to install features that you did not install previously with the IBM Z Performance and Capacity Analytics
base.
This section describes the following installation tasks:
• “Installation prerequisites” on page 13
• “Step 1: Reviewing the results of the SMP/E installation” on page 16
• “Step 2: Setting up security” on page 17
• “Step 3: Initializing the Db2 database” on page 20
• “Step 4: Preparing the dialog and updating the dialog profile” on page 21
• “Step 5: Setting personal dialog parameters” on page 23
• “Step 6: Setting up QMF” on page 24
• “Step 7: Determining partitioning mode and keys” on page 25
• “Step 8: Creating system tables” on page 25
• “Step 9: Customizing JCL” on page 27
• “Step 10: Testing the installation of the base” on page 28
• “Step 11: Reviewing Db2 parameters” on page 30
• “Step 12: Installing components” on page 31
• “Installing multiple IBM Z Performance and Capacity Analytics systems” on page 31

Installation prerequisites
This section lists the hardware and software prerequisites.

Hardware prerequisites
IBM Z Performance and Capacity Analytics can run in any hardware environment that supports the
required software.


Software prerequisites
This section lists the software requirements to install and use IBM Z Performance and Capacity Analytics
basic functions and optional component features. Refer to the IBM Z Performance and Capacity Analytics
Program Directory for further information about mandatory and conditional requirements.

Architecture considerations
IBM Z Performance and Capacity Analytics can run on a stand-alone system, or can be configured for
multiple systems with hub and spoke architecture. For more information, refer to “Continuous Collector
configuration options” on page 42.
Most of the prerequisites apply to the hub or stand-alone system, and do not need to be installed on every
system that you want IBM Z Performance and Capacity Analytics to collect data from.
It is highly recommended that IBM Z Performance and Capacity Analytics be installed into its own Db2
subsystem. This avoids contention with other applications.
Data gathering
IBM Z Performance and Capacity Analytics is installed on one or more hub systems, where data from
multiple spoke systems is analyzed and archived. A system can be both a spoke and a hub.
A hub system requires:
• z/OS V2.2.0 or later
• Db2 V11 or later
While you can manually transfer data from the spoke systems to the hub, there is an automated data
gathering agent that can be installed on the spoke systems running z/OS to gather SMF data.
A spoke system running the automated data gathering agent requires:
• z/OS V2.2.0 or later
• IBM 64-bit Java 8
To enable automated data gathering by spokes, the hub needs:
• IBM 64-bit Java 8
Data can be gathered from the following applications on spoke systems:
z/OS systems
• z/OS V2.2.0 or later
– DFSMS/OAM
– DFSMS/RMM
– DFSMS/MVS
– JES2 and JES3
– EREP
– Communications Server
– Tivoli Workload Scheduler for z/OS
– Tivoli Information Management for z/OS
– MVS
• Db2 for z/OS V11.0.0 or later
• CICS Transaction Server V5.2 or later
• IMS V13 or later
• IBM Security zSecure Manager for RACF z/VM V1.11.1 or later
• IBM Tivoli NetView for z/OS – V5.4.0 or later


• Tivoli Storage Manager for z/OS (ADSM) V7.1.6 or later
• WebSphere MQ for z/OS V7.1.0 or later
• IBM HTTP Server V6.1.0 or later
z/Linux systems
• Red Hat Enterprise Linux V7.3
• SUSE Linux Enterprise Server V11+
Power Systems running IBM i
• Power Systems running IBM i V7.2 or V7.3
AIX systems
• AIX V7.1+
HP-UX systems
• HP-UX 11i V3+
Solaris systems
• Solaris V11.1
Windows systems
• Windows Server 2008, 2012 or 2016
Reporting
Reports can be drawn directly from Db2 on the hub system or indirectly from Splunk or ELK
installations that are fed by one or more hub systems. Basic reporting on a hub producing tabular
textual reports requires no additional software. Other reporting options are:
3270 graphical reporting
• GDDM-PGF
Cognos
• Cognos Analytics V11.1.7 (license bundled with IBM Z Performance and Capacity Analytics )
Splunk
• Splunk V8.0
ELK
• Elastic Stack (ELK) V7.14.1
QMF reporting
• IBM QMF for IBM Z - to enable QMF reporting
• GDDM-REXX - this is required only if you are not using QMF for IBM Z
Remote QMF reporting
• QMF For Workstations V11
IBM Db2 Analytics Accelerator
To use IBM Db2 Analytics Accelerator to accelerate your queries:
• High performance unload is not necessary to accelerate queries with the Db2 Analytics
Accelerator. The Accelerator is used when Db2 processes Accelerator Only Tables, which require
Acceleration, or when processing Shadow tables, which gain a performance benefit from the
Accelerator.
To feed data off-platform for web reporting by Splunk or ELK
• IBM 64-bit Java 8


• Optionally, the IBM Z Common Data Provider V1.2 or higher to distribute the feed. The
alternative is a direct feed from IBM Z Performance and Capacity Analytics to Splunk or ELK.

Step 1: Reviewing the results of the SMP/E installation


After SMP/E installation the following will be on your system. The locations and names listed here are
based on the defaults, and may have been changed during the SMP/E installation at your site.

IBM Z Performance and Capacity Analytics data sets


The data set names and descriptions of the product data sets are:
DRLvrm.SDRLCNTL
Sample jobs and Db2 DBRM module
DRLvrm.SDRLDEFS
Definitions of records, tables, and other objects
DRLvrm.SDRLEXEC
REXX execs
DRLvrm.SDRLLOAD
Load modules
DRLvrm.SDRLSKEL
ISPF skeletons
DRLvrm.SDRLBIN
Content for IBM i and other distributed platforms, as well as reports.
DRLvrm.SDRLEXTR
SMF extractor load modules
/usr/lpp/IBM/IZPCA/v3r1m0/IBM
The directory that contains the DataMover and other apps. This is the target of the SDRLUSS DDDEF.

Local data sets


The data set names and descriptions of the local operational data sets are:
&HLQ.LOCAL.ADMCFORM
Local GDDM-Presentation Graphics Facility (GDDM-PGF) interactive chart utility (GDDM/ICU) formats
&HLQ.LOCAL.CHARTS
Saved graphic reports (GDDM ADMGDF format)
&HLQ.LOCAL.CNTL
Local IBM Z Performance and Capacity Analytics jobs
&HLQ.LOCAL.DEFS
Local IBM Z Performance and Capacity Analytics definitions
&HLQ.LOCAL.EXEC
Local IBM Z Performance and Capacity Analytics execs
&HLQ.LOCAL.MESSAGES
Messages sent through the dialog
&HLQ.LOCAL.REPORTS
Saved tabular reports
&HLQ.LOCAL.USER.DEFS
Local IBM Z Performance and Capacity Analytics user/alter definitions


Language-dependent IBM Z Performance and Capacity Analytics data sets


The last three letters in these data set names indicate the language version. xxx is ENU for English. For
example, SDRLRENU contains the English report definition files.
DRLvrm.SDRLFxxx
GDDM/ICU formats
DRLvrm.SDRLMxxx
ISPF messages
DRLvrm.SDRLPxxx
ISPF windows
DRLvrm.SDRLRxxx
Definitions of reports
DRLvrm.SDRLTxxx
ISPF tables

Step 2: Setting up security


About this task
This topic describes how you can protect IBM Z Performance and Capacity Analytics data sets and the
database.
Use RACF® or a similar product to protect the IBM Z Performance and Capacity Analytics data sets.
Administrators and users must have read access to the DRL310 data sets and update access to the local
data sets.
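For example, RACF generic data set profiles along the following lines could be used. This is a sketch only: the qualifiers shown assume the default data set names from Step 1 (DRL310 for the product data sets, DRL as the high-level qualifier of the local data sets) and the DRLUSER group created later in this step; substitute the names used at your site.

ADDSD 'DRL310.**' UACC(READ)
ADDSD 'DRL.LOCAL.**' UACC(NONE)
PERMIT 'DRL.LOCAL.**' ID(DRLUSER) ACCESS(UPDATE)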
The data in the database is protected by Db2. Administrators and users must be granted Db2 privileges to
be able to access the data, as follows:
• Administrators need SYSADM (Db2 system administrator) authority for the IBM Z Performance and
Capacity Analytics database. They also need the ability to use the prefixes of IBM Z Performance and
Capacity Analytics tables (DRLSYS and DRL) as authorization IDs in Db2.
• Users need read access to the tables they use to produce reports, and update access to some of the
IBM Z Performance and Capacity Analytics system tables (to be able to create their own reports).
• The user IDs that you use for IBM Z Performance and Capacity Analytics production jobs, such as
collect, need DBADM authority (see the sample grants after this list).
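As a hedged illustration only, the corresponding Db2 grants might look like the following sketch. DRLDB is assumed here as the database name, and the authorization IDs shown are placeholders for the names chosen at your site.

-- Sketch only: DRLDB and the authorization IDs are placeholders.
GRANT SYSADM TO ADMIN1;                      -- an administrator
GRANT DBADM ON DATABASE DRLDB TO PRODJOB;    -- a production (collect) user ID
GRANT SELECT ON DRL.EREP_DASD_D TO DRLUSER;  -- read access for reporting users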
This step describes two ways you can define authorities for IBM Z Performance and Capacity Analytics
administrators and users:
• Using secondary authorization IDs.
• Without secondary authorization IDs.
Find out through the Db2 system administrator whether secondary authorization IDs are used on your
Db2 system.
Note: If you are defining authorities without using secondary user IDs, the installation process is slightly
different. See “Security without secondary authorization IDs” on page 19 for more information.

Security using secondary authorization IDs

About this task


The most efficient way to give users privileges is to use secondary authorization IDs in Db2. With this
method, privileges are granted to group IDs rather than user IDs, and all users who can use these
secondary authorization IDs get the privileges.


The secondary authorization IDs a user has access to can be controlled in different ways. If you
have RACF installed, users can usually use the RACF groups that they are connected to as secondary
authorization IDs. If RACF is not installed, secondary authorization IDs can be assigned by the Db2
authorization exit.
This topic describes how to define the secondary authorization IDs using RACF. If you assign secondary
authorization IDs in another way, consult your Db2 system administrator.

Procedure
1. Create three RACF groups. The default RACF group IDs are DRL, DRLSYS, and DRLUSER

ADDGROUP DRL DATA ('IBM Z Performance and Capacity Analytics TABLES')
ADDGROUP DRLSYS DATA ('IBM Z Performance and Capacity Analytics SYSTEM TABLES')
ADDGROUP DRLUSER DATA ('IBM Z Performance and Capacity Analytics USERS')

The IDs DRL and DRLSYS are also prefixes for the IBM Z Performance and Capacity Analytics Db2
tables. If you plan to change the prefixes for IBM Z Performance and Capacity Analytics system tables
and views (DRLSYS) or for other IBM Z Performance and Capacity Analytics tables and views (DRL) in
“Step 3: Initializing the Db2 database” on page 20, use your values as RACF group IDs.
If all users on your system need access to the IBM Z Performance and Capacity Analytics data, you do
not need the DRLUSER group. If different users need access to different sets of tables, you can define
several RACF group IDs, such as DRLMVS and DRLCICS, instead of the DRLUSER group.
You can use either RACF commands or RACF dialogs to specify security controls. These commands
are samples. You may have to specify additional operands to comply with the standards of your
organization.
2. Connect IBM Z Performance and Capacity Analytics administrators to all three groups.
Use RACF commands or RACF dialogs to connect user IDs to groups. These commands are samples.

CONNECT (user_ID1 user_ID2 ...) GROUP(DRL)
CONNECT (user_ID1 user_ID2 ...) GROUP(DRLSYS)
CONNECT (user_ID1 user_ID2 ...) GROUP(DRLUSER)

3. Connect IBM Z Performance and Capacity Analytics users to the DRLUSER group only.
Use RACF commands or RACF dialogs to connect user IDs to a group. This command is a sample.

CONNECT (user_ID1 user_ID2 ...) GROUP(DRLUSER)

4. If you use different RACF group IDs, be sure to use them throughout all the steps listed.
5. If you use other group IDs than DRLUSER, you must modify the following fields in the Dialog
Parameters window (see Figure 11 on page 24):
Users to grant access to
Users to grant access to must be specified when you create the system tables and when you install components. When you create the system tables, it should contain all group IDs that are to have access to IBM Z Performance and Capacity Analytics. To grant access to all users, specify PUBLIC. When you install components, Users to grant access to should contain the group IDs that are to have access to the component.
SQL ID to use (in QMF)
If QMF is used with IBM Z Performance and Capacity Analytics in your installation, the SQL ID to
use in QMF must be specified by each user. It should be one of the groups the user is connected to
or the user's own user ID.
6. If you use different RACF group IDs, you can make your RACF group IDs the default for all IBM
Z Performance and Capacity Analytics users. Edit the IBM Z Performance and Capacity Analytics
initialization exec DRLFPROF, described in “Step 4: Preparing the dialog and updating the dialog
profile” on page 21. Variables def_syspref, def_othtbpfx, def_iduser1, and def_idsqluser may need to
be changed, depending on the changes you made to the IDs.
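For illustration, assuming the assignment syntax of the supplied DRLFPROF member (check your copy for the exact format) and using DRLMVS as a stand-in for your own group ID, the relevant settings might look like this:

def_syspref   = "DRLMVS"    /* Prefix for system tables                */
def_othtbpfx  = "DRLMVS"    /* Prefix for all other tables             */
def_iduser1   = "DRLMVS"    /* First entry of Users to grant access to */
def_idsqluser = "DRLMVS"    /* SQL ID to use (in QMF)                  */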


Security without secondary authorization IDs

About this task


If you are not using secondary authorization IDs in Db2, all privileges must be granted to individual users, and the installation process is slightly different. See “Example: Installation steps when secondary user IDs are not used” on page 19 for more information.

Procedure
1. Grant authority to the administrators:
a) Create all tables and views with the administrator user ID as prefix. That is, replace DRLSYS and
DRL with a user ID. Only one administrator is possible.
b) Grant SYSADM authority to all administrators.
2. Give authority to the users in one of two ways. This is done in “Step 5: Setting personal dialog
parameters” on page 23.
• Specify a list of up to 8 user IDs in the field Users to grant access to in the Dialog Parameters
window (Figure 11 on page 24).
• Specify PUBLIC in the field Users to grant access to. This gives all users access to IBM Z
Performance and Capacity Analytics data. This is easier to maintain than a list of user IDs.
For both cases, each user must specify his or her own user ID in the SQL ID to use (in QMF) field in the Dialog Parameters window, if QMF is used with IBM Z Performance and Capacity Analytics in your installation.
You must specify user IDs in the field Users to grant access to before you create the system tables. It
is also used when you install components.

Example: Installation steps when secondary user IDs are not used
Follow this example if you have several administrators. In the example, we assume that there are three
administrators:
• ADMIN1 is the user who creates system tables.
• ADMIN2 and ADMIN3 are the other administrators.
When performing the installation, note these items:
• “Step 3: Initializing the Db2 database” on page 20: Change DRL and DRLSYS in the DRLJDBIN job to ADMIN1, ADMIN2, and ADMIN3.
• “Step 4: Preparing the dialog and updating the dialog profile” on page 21: No changes.
• “Step 5: Setting personal dialog parameters” on page 23: Use ADMIN1 as prefix for system tables,
ADMIN2 and ADMIN3 as prefix for other tables. For Users to grant access to, specify ADMIN1, ADMIN2,
ADMIN3, and all user IDs for the end users. For SQL ID to use (in QMF), specify ADMIN1 (if QMF is used
with IBM Z Performance and Capacity Analytics in your installation).
• “Step 6: Setting up QMF” on page 24: No changes.
• “Step 8: Creating system tables” on page 25: The system tables should be created with the prefix
ADMIN1. Otherwise, there are no changes compared with the information in this step.
• “Step 9: Customizing JCL” on page 27: No changes.
• “Step 10: Testing the installation of the base” on page 28 and “Step 12: Installing components” on
page 31: If one of the secondary administrators, for example ADMIN2, wants to install the Sample
component or any other component, that administrator has to change the dialog parameters before the
installation to use these settings:
Prefix for system tables
ADMIN1
Prefix for other tables
ADMIN2
SQL ID to use (in QMF)
ADMIN2
When the component is installed by ADMIN2, the installed Db2 objects are created with the prefix
ADMIN2.
All Db2 objects can be read by all administrators, but an object can be created only with the current
administrator's primary user ID.
To make your changes the default for all IBM Z Performance and Capacity Analytics users, you must
change the initialization exec DRLFPROF as described in “Step 4: Preparing the dialog and updating the
dialog profile” on page 21.

Step 3: Initializing the Db2 database


Follow these instructions to run the DRLJDBIN job to initialize the Db2 database when you are installing
IBM Z Performance and Capacity Analytics for the first time.

About this task


You must perform several Db2-related installation tasks for IBM Z Performance and Capacity Analytics, which are described below.
Note: IBM Z Performance and Capacity Analytics is an update/insert intensive Db2 application. This
means that during a collect, IBM Z Performance and Capacity Analytics adds and updates many rows in
the Db2 tables. Normal Db2 processing logs these changes. Your Db2 administrator should verify that the
capacity of the Db2 logs is sufficient to cope with the increase in logging activity.
If your operational Db2 system is constrained, consider implementing another (analytical) Db2 system for
the IBM Z Performance and Capacity Analytics environment.

Procedure
1. Copy member DRLJDBIN in the DRLvrm.SDRLCNTL library to the &HLQ.LOCAL.CNTL library.
DRLJDBIN needs to be customized to refer to one of the following samples: DRLJDCVB, DRLJDCVC,
or DRLJDCVD depending on your version of Db2. The required sample must also be copied to the
&HLQ.LOCAL.CNTL library and customized for your environment if you are not using the default
database name and table prefixes. Refer to the instructions in the comments in the DRLJDBIN job and the selected DRLJDCVx member for more information about using these samples.
2. Modify the job card statement to run your job.
3. Customize the job for your site.
Follow the instructions in the job prolog to customize it for your site.
Note:
a. A person with Db2 SYSADM authority (or someone with the authority to create plans, storage
groups, and databases, and who has access to the Db2 catalog) must submit the job.
b. Do not delete steps from DRLJDBIN. Even if you have DBADM authorization, you must grant DRL
and DRLSYS authority for the IBM Z Performance and Capacity Analytics database.
4. Submit the job to:
• Bind the Db2 plan used by IBM Z Performance and Capacity Analytics .
The plan does not give privileges (it contains only dynamic SQL statements), thereby making it safe to grant access to all users (PUBLIC).
If you change the name of the plan from the default (DRLPLAN) then you must update the
def_db2plan variable in DRLFPROF to specify the new plan name. You also need to modify any sample jobs that execute DRLPLC, DRL1PRE or DRLPLOGM to specify the PLAN parameter with the
new plan name. Changing the plan name allows you to run versions of the IBM Z Performance and
Capacity Analytics environment with incompatible DBRMs in the same Db2 subsystem.
• Create the Db2 storage group and database used by IBM Z Performance and Capacity Analytics .
• Grant Db2 DBADM authority as database administrators of DRLDB to DRL and DRLSYS.
• Create views on the Db2 catalog for IBM Z Performance and Capacity Analytics dialog functions for
users who do not have access to the Db2 catalog.

Step 4: Preparing the dialog and updating the dialog profile


About this task
The load library and the exec library must be allocated at the startup of your TSO logon procedure. IBM
Z Performance and Capacity Analytics dynamically allocates other libraries and data sets as it starts,
and allocates others as certain functions are performed. This step describes how to set up procedures
for startup and for allocating the libraries and data sets that IBM Z Performance and Capacity Analytics
needs.
Ensure that the load library, the exec library, the Db2 load library, the QMF load library (optional), and the GDDM libraries are accessible to your TSO session.

Procedure
1. Make the load library (DRLvrm.SDRLLOAD), Db2 load library, QMF load library, and the GDDM load
library accessible by performing one of these tasks:
a) Allocate the SDRLLOAD library, Db2 load library (SDSNLOAD), QMF load library (SDSQLOAD), and
the GDDM load library (SADMMOD) to STEPLIB in the generic logon procedure.

//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=QMF.SDSQLOAD
// DD DISP=SHR,DSN=GDDM.SADMMOD
// DD DISP=SHR,DSN=DSN.SDSNLOAD

b) Add SDRLLOAD, SDSQLOAD, SADMMOD, and SDSNLOAD to the link list.


c) Copy SDRLLOAD, SDSQLOAD, SADMMOD, and SDSNLOAD members to a library already in the link list. Make sure that the Db2 modules DSNALI, DSNHLI2, and DSNTIAR are linked in 31-bit addressing mode.
2. Make the local exec library and the IBM Z Performance and Capacity Analytics exec library
(DRLvrm.SDRLEXEC) accessible by performing one of these tasks:
a) Allocate the libraries to SYSPROC in the logon procedure. For example:

//SYSPROC DD DISP=SHR,DSN=&HLQ.LOCAL.EXEC
// DD DISP=SHR,DSN=DRLvrm.SDRLEXEC

b) Allocate the libraries to SYSEXEC in the logon procedure. For example:

//SYSEXEC DD DISP=SHR,DSN=&HLQ.LOCAL.EXEC
// DD DISP=SHR,DSN=DRLvrm.SDRLEXEC

c) Use the ALTLIB function to allocate the libraries.


If IBM Z Performance and Capacity Analytics is invoked by using the ALTLIB function on the
application level, make sure that only the IBM Z Performance and Capacity Analytics exec library
is included. Allocate other exec libraries to user level by using the ALTLIB ACT USER(EXEC)
command.
3. Make the ADMPC data set accessible by allocating it in the logon procedure. For example:


//ADMPC DD DISP=SHR,DSN=GDDM.SADMPCF

IBM Z Performance and Capacity Analytics dynamically allocates other libraries and data sets, such as
the GDDM symbols data set GDDM.SADMSYM, when a user starts a dialog. “Allocation overview” on
page 124 describes the libraries that IBM Z Performance and Capacity Analytics allocates and when it
allocates them.
4. If you have used any values other than default values for DRLJDBIN or for IBM Z Performance and Capacity Analytics data set names, you must modify the userid.DRLFPROF file (allocated by copying the DRLFPROF member of DRLvrm.SDRLCNTL).
DRLEINI1 sets dialog defaults for all users. IBM Z Performance and Capacity Analytics stores defaults
for each user in member DRLPROF in the library allocated to the ISPPROF ddname, which is usually
tsoprefix.ISPF.PROFILE. Edit DRLFPROF to include default values so users do not need to change
dialog parameter fields.
5. Allocate a sequential data set named userid.DRLFPROF with LRECL=80, BLKSIZE=32720, and RECFM=FB, and copy into it the DRLFPROF member of the SDRLCNTL library.
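For example, a minimal IEBGENER job can allocate and populate the data set. This is a sketch only; the job card, the DRLvrm qualifier, and userid are placeholders for your installation's values:

//COPYPROF JOB (ACCT#),'COPY DRLFPROF'
//* Allocate userid.DRLFPROF and copy the supplied member into it
//GENER    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=DRLvrm.SDRLCNTL(DRLFPROF)
//SYSUT2   DD DSN=userid.DRLFPROF,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=32720)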
6. Locate and change any variable values that you have changed during installation.
Note:
• Change values for data set names that identify Db2 and, optionally, QMF and GDDM libraries.
• If you do not use QMF with IBM Z Performance and Capacity Analytics, change the value for qmfuse
to NO.
• If you do not use GDDM with IBM Z Performance and Capacity Analytics, change the value for
gddmuse to NO. (If QMF is used, GDDM must be used.)
“Modifying the DRLFPROF data set” on page 113 shows the DRLFPROF file containing the parameters
to be modified.
“Overview of the Dialog Parameters window” on page 114 shows the administration dialog window
and the default initialization values that DRLFPROF sets.
“Dialog parameters - variables and fields” on page 115 describes parameters and shows the
interrelationship of DRLEINI1 and the Dialog Parameters.
7. You can add IBM Z Performance and Capacity Analytics to an ISPF menu by using this ISPF statement:

CMD(%DRLEINIT) [DEBUG] [RESET] [DBRES] [REPORTS | R] [ADMINISTRATION | A]

Any authorized user can access a dialog by issuing the command TSO %DRLEINIT from the command line of an ISPF window.
The optional DEBUG parameter turns on a REXX trace for the initialization execs. This helps you solve problems with data set and library allocation.
The optional RESET parameter sets the ISPF profile variables to their default value. It has the same
effect as deleting the DRLPROF member from the local (ISPPROF) profile library.
The optional DBRES parameter sets the ISPF profile variables for IBM Z Performance and Capacity
Analytics to their default value (like Db2 Subsystem, Db2 Database, Db2 Storage Group). Only these
values are deleted from the DRLPROF member of the local (ISPPROF) profile library. All the other
values set and already stored in the profile are preserved.
The optional REPORTS parameter takes you directly to the reporting dialog. You can abbreviate this to
R.
The optional ADMINISTRATION parameter takes you directly to the administration dialog. You can
abbreviate this to A.


Step 5: Setting personal dialog parameters


About this task
If you have edited the dialog parameters profile (file DRLFPROF from the DRLvrm.SDRLCNTL library) and copied it into the sequential data set userid.DRLFPROF in “Step 4: Preparing the dialog and updating the dialog profile” on page 21 to match your installation values, you do not need to follow the instructions in this step to change the parameters unless you want to use the reporting dialog in administrator mode.
Authorized administrators can use the reporting dialog in administrator mode to view or modify all
reports. Otherwise, a reporting dialog user uses the dialog in end-user mode, the default. In this mode, a
user can view only public and privately-owned reports. In end-user mode, a user can modify only reports
he or she created.
IBM Z Performance and Capacity Analytics stores parameters for each user in member DRLPROF in the library allocated to the ISPPROF ddname, which is usually tsoprefix.ISPF.PROFILE.
This topic describes the procedure for the IBM Z Performance and Capacity Analytics dialogs if you did
not edit the DRLFPROF file. Perform this step if necessary.

Procedure
1. From the command line of an ISPF/PDF window, type TSO %DRLEINIT to display the IBM Z
Performance and Capacity Analytics Primary Menu (Figure 10 on page 23).
Reporting dialog users can access the Dialog Parameters window from the Options pull-down of the
Primary Menu or the Reports window.

 Options Other Utilities Help
 ------------------------------------------------------------------------------
 IBM Z Performance and Capacity Analytics Administration
 Option ===> __________________________________________________________________

 1 System      Perform system tasks        System ID . . : SYS1
 2 Components  Install components          Db2 Subsystem : SUB1
 3 Logs        Show installed log objects  Db2 plan name : PLNPLAN
 4 Tables      Show installed data tables  System tables : SYTSYSYY
 5 Reports     Run reports                 Data tables . : DATBL

 F1=Help F2=Split F3=Exit F9=Swap F10=Actions F12=Cancel

Figure 10. IBM Z Performance and Capacity Analytics Primary Menu


2. If you start from the Primary Menu, type 2 Administration, and press Enter to display the
Administration window (see Figure 6 on page 7).
3. From the Administration window, select 1 System, to display the System window.
Note: If your installation does not use QMF, Import QMF initialization query is not selectable, and
indicated by an asterisk (*).
4. From the System window, select 1 Dialog parameters.

System

Select one of the following. Then press Enter.

1. Dialog parameters
2. System tables
3. Import QMF initialization query

F1=Help F2=Split F9=Swap F12=Cancel


Note: If your installation does not use QMF with IBM Z Performance and Capacity Analytics, the
contents of this window is slightly different from what you see here. Both versions of the Dialog
Parameters window are shown in “Overview of the Dialog Parameters window” on page 114.

 Dialog Parameters

 Type information. Then press Enter to save and return.

                                                       More: +
 DB2 subsystem name . . . . .  DEC1
 Database name . . . . . . . . DRLDBYY_
 Storage group default . . . . SYSDEFLT
 Prefix for system tables . .  DRLSYSYY
 Prefix for all other tables   DRLYY___
 Show panel IDs (yes/no) . . . YES
 Buffer pool for data . . . .  BP0___
 Buffer pool for indexes . . . BP0___
 Users to grant access to . .  DRLUSRYY ________ ________ ________
                               ________ ________ ________ ________
 Batch print SYSOUT class . .  A
 Printer line count per page   60_
 SQLMAX value . . . . . . . .  5000____
 F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 11. Dialog Parameters window

Note: When you see a plus sign indicator (More: +) in the upper-right corner of an IBM Z Performance
and Capacity Analytics window, press F8 to scroll down.
If it shows a minus sign indicator (More: -), press F7 to scroll up. For more information about using
IBM Z Performance and Capacity Analytics dialog windows, refer to the description in the Guide to
Reporting.
You must scroll through the window to display all its fields. “Overview of the Dialog Parameters
window” on page 114 shows the entire Dialog Parameters window, both the version shown if QMF is
used with IBM Z Performance and Capacity Analytics and the version shown if QMF is not used with it.
“Dialog parameters - variables and fields” on page 115 has a description of the fields in the window.
5. Make modifications and press Enter.
Changes for administration dialog users and for end users are the same. You must identify the correct
names of any data sets (including prefixes and suffixes) that you changed from default values during
installation.
IBM Z Performance and Capacity Analytics saves the changes and returns to the System window.
Although some changes become effective immediately, all changes become effective in your next
session when IBM Z Performance and Capacity Analytics can allocate any new data sets you may have
selected.

Step 6: Setting up QMF


About this task
Note: IBM Z Performance and Capacity Analytics can use QMF, for example, to display and work with
reports. If your installation does not use QMF, the information in this topic does not apply, and option 3,
Import QMF initialization query, is not selectable in the System window.
When IBM Z Performance and Capacity Analytics starts QMF, it runs a query (DRLQINIT) to set the current SQL ID (by default, DRLUSER), which gives users the required authority in QMF and lets them access objects in the QMF lists.
To import the QMF query from member DRLQINIT (in the DRLvrm.SDRLDEFS library) and save it in QMF as
DRLSYS.DRLQINIT, from the System window, select 3, Import QMF initialization query, and press Enter.


System

Select one of the following. Then press Enter.

1. Dialog parameters
2. System tables
3. Import QMF initialization query

F1=Help F2=Split F9=Swap F12=Cancel

IBM Z Performance and Capacity Analytics imports the query into QMF and then returns to the System
window.

Step 7: Determining partitioning mode and keys


About this task
Some component definitions use the GENERATE statement to create the table spaces, partitioning, and indexes. The table space, partitioning, and index attributes can easily be changed by updating the appropriate profile in the GENERATE_PROFILES and GENERATE_KEYS system tables.

Procedure
1. Consult the guide for the component you are installing.
Many components have a job that must be run to set up storage groups, partition ranges, or keys. Follow the instructions for that component before proceeding. If the component does not support generated table spaces and indexes, you may skip this step.
2. When using GENERATE TABLESPACE, the type of table space created is determined by the TABLESPACE_TYPE field in the GENERATE_PROFILES system table.
3. If you decide to use range-partitioned table spaces (TABLESPACE_TYPE=RANGE), you will need to adjust the range values in the GENERATE_KEYS system table.

What to do next
The supplied values for these tables are in the member DRLTKEYS in the SDRLDEFS data set, and the
tables are created and loaded during the creation of the IBM Z Performance and Capacity Analytics
System Tables. These values may be reviewed prior to creating the IBM Z Performance and Capacity
Analytics system tables. If changes are required, you may make a copy in your userid.LOCAL.DEFS data
set and make the required changes prior to System Table creation.
Alternatively, once loaded into the System Tables these values may be changed by various methods:
• Using the IBM Z Performance and Capacity Analytics table edit facility.
• Using SQL UPDATE statements. For example, to change the TABLESPACE_TYPE from the supplied value
of RANGE to GROWTH for IMS the statement would look like the following example:

SQL UPDATE <sysprefix>.GENERATE_PROFILES
    SET TABLESPACE_TYPE='GROWTH'
    WHERE PROFILE='IMS';
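A similar UPDATE can be used to adjust the partition range values in the GENERATE_KEYS table. This is a hypothetical sketch: the KEY_VALUE column name and the '...' value are placeholders only; check the supplied DRLTKEYS member for the actual column layout and range values:

SQL UPDATE <sysprefix>.GENERATE_KEYS
    SET KEY_VALUE='...'
    WHERE PROFILE='IMS';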

Step 8: Creating system tables


About this task
Before you can use all dialog functions, you must create the Db2 tables. These Db2 tables are used by
IBM Z Performance and Capacity Analytics to store its definitions and are known as system tables.
To create system tables follow these steps:


Procedure
1. From the System window, select 2, System tables.
The System Tables window is displayed. (Figure 12 on page 26).

Other Utilities Help

Administration

Select System

System Tables

Press Enter to list all system tables.

Prefix : DRLSYS
Status : Not created
Creator :
Database name : DRLDB

 F1=Help F2=Split F5=Create F6=Update
 F9=Swap F11=Delete F12=Cancel

Command ===>
F1=Help F2=Split F3=Exit F9=Swap F10=Actions F12=Cancel

Figure 12. System Tables (not created) window


2. Press F5 (Create).
IBM Z Performance and Capacity Analytics creates system tables and fills in information about feature
components by searching DRLvrm.SDRLDEFS to see which features you have installed with SMP/E.
IBM Z Performance and Capacity Analytics displays messages in a browse window, if a problem has
occurred. In this case, look for errors at the beginning of the listing. Resolve any errors such as this:

DSNT408I SQLCODE = -904, ERROR: UNSUCCESSFUL EXECUTION CAUSED BY AN
         UNAVAILABLE RESOURCE. REASON 00D70025, TYPE OF RESOURCE
         AND RESOURCE NAME DB2A.DSNDBC.DRLDB.A.I0001.A001

For information about specific Db2 messages, refer to the Messages and Problem Determination.
System messages should be error free, with a Db2 return code of zero. After creating the system
tables, IBM Z Performance and Capacity Analytics returns to the System Tables window where you
must press F12 to return to the System window.
During the process of creating system tables, these administrative reports are also created:
• PRA001 - INDEXSPACE cross-reference.
• PRA002 - ACTUAL TABLESPACE allocation.
• PRA003 - TABLE PURGE condition.
• PRA004 - LIST COLUMNS for a requested table with comments.
• PRA005 - LIST ALL TABLES with comments.
• PRA006 - LIST USER MODIFIED objects.

Creating and updating system tables with a batch job

About this task


You can also create, update, and delete IBM Z Performance and Capacity Analytics system tables by
running TSO/ISPF in batch mode. Sample job DRLJCSTB shows an example of how to submit a request to program DRLEAPST to create system tables. You can update or delete system tables by passing a
different request to DRLEAPST, as described in the comments in DRLJCSTB.
The TSO/ISPF batch job step must include:
• DRLFPROF DD referring to your DRLFPROF data set
• ISPPROF DD referring to a PDS with RECFM=F and LRECL=80. If you have made changes to the IBM Z
Performance and Capacity Analytics dialog parameters and have not also made those changes in your
DRLFPROF data set, then the ISPPROF DD should refer to your ISPF profile data set and you should not
specify the RESET parameter to DRLEINIT.
• ISPPLIB, ISPMLIB, ISPSLIB, and ISPTLIB DDs referring to your IBM Z Performance and Capacity
Analytics and ISPF panel, message, skeleton, and table data sets.
• ISPLOG DD referring to a data set with RECFM=VA and LRECL=125.
• SYSTSIN DD referring to instream data, or a data set, containing a command to invoke DRLEINIT, for
example:

ISPSTART CMD(%DRLEINIT RESET)

• DRLBIN (batch input) DD referring to instream data or a data set containing a command to invoke
DRLEAPST with a request to perform the required function, for example:

DRLEAPST CREATE

DRLEAPST is the only program that can be invoked in this way.
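Putting these DD requirements together, a minimal job step might look like the following sketch. All data set names are placeholders, and the supplied DRLJCSTB sample remains the authoritative model; the ISPF and Db2 libraries must also be accessible, through STEPLIB or the link list:

//CSTBATCH JOB (ACCT#),'CREATE SYSTEM TABLES'
//* Sketch of a TSO/ISPF batch step that creates the system tables
//BATCH    EXEC PGM=IKJEFT01,REGION=0M,DYNAMNBR=100
//STEPLIB  DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
//         DD DISP=SHR,DSN=DSN.SDSNLOAD
//SYSPROC  DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
//DRLFPROF DD DISP=SHR,DSN=userid.DRLFPROF
//ISPPROF  DD DISP=SHR,DSN=userid.ISPF.PROFILE
//ISPPLIB  DD DISP=SHR,DSN=your.izpca.panels
//         DD DISP=SHR,DSN=your.ispf.panels
//ISPMLIB  DD DISP=SHR,DSN=your.izpca.messages
//         DD DISP=SHR,DSN=your.ispf.messages
//ISPSLIB  DD DISP=SHR,DSN=your.ispf.skeletons
//ISPTLIB  DD DISP=SHR,DSN=your.ispf.tables
//ISPLOG   DD DSN=&&ISPLOG,DISP=(NEW,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(5,5)),DCB=(RECFM=VA,LRECL=125,BLKSIZE=129)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 ISPSTART CMD(%DRLEINIT RESET)
/*
//DRLBIN   DD *
 DRLEAPST CREATE
/*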

Step 9: Customizing JCL


About this task
The DRLvrm.SDRLCNTL library contains several batch jobs that you can copy to &HLQ.LOCAL.CNTL and
customize. Customization includes inserting correct data set names and the correct Db2 subsystem ID.
These jobs, described in “Setting up operating routines” on page 135, are:
DRLJBATR
A sample job for printing and saving all (or a selected subset) of the batch reports. See “Using job
DRLJBATR to run reports in batch” on page 161 for more information.
DRLJCOLL and DRLJCOxx
A sample job for collecting log data. See “Collecting log data” on page 135 for more information.
DRLJCOPY
A sample job for backing up an IBM Z Performance and Capacity Analytics table space with the Db2
COPY utility. See “Backing up the IBM Z Performance and Capacity Analytics database” on page 154
for more information.
DRLJDICT
A sample job for partitioning the CICS_DICTIONARY table, if the CICS Partitioning Feature is going to
be used. See the CICS Partitioning feature chapter in CICS Performance Feature Guide and Reference
for more information.
DRLJEXCE
A sample job for producing Tivoli Information Management for z/OS problem records. See
“Administering problem records” on page 166 for more information.
DRLJEXCP
A sample job for partitioning the EXCEPTION_T table, if the CICS Partitioning Feature is going to be
used. See the CICS Partitioning feature chapter in CICS Performance Feature Guide and Reference for
more information.
DRLJPURG
A sample job for purging data from the database. See “Purge utility” on page 152 for more
information.


DRLJREOR
A sample job for reorganizing the IBM Z Performance and Capacity Analytics database with the Db2
REORG utility. See “Purge utility” on page 152 for more information.
DRLJRUNS
A sample job for updating statistics on IBM Z Performance and Capacity Analytics table spaces with
the Db2 RUNSTATS utility. See “Monitoring the size of the IBM Z Performance and Capacity Analytics
database” on page 157 for more information.
DRLJTBSR
A sample job for producing a detailed report about the space required for all, or a subset of, a selected
component’s tables. See “Understanding table spaces” on page 146 for more information.
If you already have jobs for maintaining Db2, for example, COPY, REORG or RUNSTATS, you can continue
to use them for this purpose, instead of using the IBM Z Performance and Capacity Analytics jobs.

Step 10: Testing the installation of the base


About this task
Before you install IBM Z Performance and Capacity Analytics feature components, ensure that the
installation has been successful:

Procedure
1. Install the Sample component using the information in “Installing a component” on page 169.
Although editing lookup tables is a usual part of online component installation, you need not edit the
sample lookup table to successfully complete this test. For a description of what is provided with the
sample component, see “Sample component” on page 283.
2. After you install the Sample component, select 3, Logs, from the Administration window and press
Enter.
The Logs window is displayed (Figure 13 on page 28).

Log Utilities View Other Help


--------------------------------------------------------------------------
Logs ROW 1 TO 1 OF 1

Select a log. Then press Enter to display record definitions.

/ Logs Description
/ SAMPLE Sample log definition
******************************* BOTTOM OF DATA ********************************

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=Log def F6=Datasets F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Collect F12=Cancel

Figure 13. Logs window


3. From the Logs window, select the SAMPLE log and press F11.
The Collect window is displayed.


Collect

Type information. Then press Enter to edit the collect JCL.

Data set DRLxxx.SDRLDEFS(DRLSAMPL)_____________________________ (reqd)


Volume . . ______ (If not cataloged)
Unit . . . __________________ (Required for batch if Volume defined)

Reprocess . . . . . . 2 1. Yes
2. No
Commit after . . . . . 1 1. Buffer full
2. End of file
3. Specify number of records
Number of records . . ________
Buffer size . . . . . . 10
Extension . . . . . . . 2 1. K
2. M
Condition . . . . . . ________________________________________ >
F1=Help F2=Split F4=Online F5=Include F6=Exclude
F9=Swap F10=Show fld F11=Save def F12=Cancel

Figure 14. Collect window


4. Type DRLxxx.SDRLDEFS(DRLSAMPL) in the Data set field and press F4.
The online collect is started. When it finishes, it displays statistics about the data it collected.
5. Press F3 to return to the Logs window after you finish looking at the messages.
6. Press F3 to return to the Administration window
7. From the Administration window, select 5, Reports, and press Enter.
The Reporting Dialog Defaults window is displayed. (Refer to Guide to Reporting for more
information.)
8. Press Enter to display the Reports window.

Report Batch Group Search Options Other Help

Reports Row 1 to 9 of 9

Select a report. Then press Enter to display.

Group . . . . . : All reports

/ Report ID
ACTUAL TABLESPACE SPACE allocation PRA002
INDEXSPACE cross-reference PRA001
List all tables with comments PRA005
List columns for a requested table with comments PRA004
List User Modified Objects PRA006
/ Sample Report 1 SAMPLE01
Sample Report 2 SAMPLE02
Sample Report 3 SAMPLE03
TABLE PURGE Condition PRA003
******************************* Bottom of data ******************************

Command ===>
F1=Help F2=Split F3=Exit F4=Groups F5=Search F6=Listsrch
F7=Bkwd F8=Fwd F9=Swap F10=Actions F11=Showtype F12=Cancel

Figure 15. Reports window


9. From the Reports window, select Sample Report 1. Type a character other than a question mark in
the selection field and press Enter.
The Data Selection window is displayed.


Data Selection ROW 1 TO 1 OF 1

Type values. Then press Enter to generate the report.

Report ID : Sample Report 1

Variable Value Oper Req


SYSTEM_ID > + = No
**************************** BOTTOM OF DATA ****************************

Command ===>
F1=Help F2=Split F4=Prompt F5=Table F6=Chart F7=Bkwd
F8=Fwd F9=Swap F10=Showfld F11=Hdrval F12=Cancel

Figure 16. Data Selection window


10. Press Enter to generate the report.
The query associated with the report is run and the report is displayed through GDDM/ICU. If your
installation does not have GDDM, the report is displayed in tabular format. (Figure 138 on page 286
shows the report.)
11. When you finish viewing the report, press F9 to exit from GDDM/ICU, and press F3 (Exit) to return to
the Reports window.
12. From the Reports window, press F3 to return to the Administration window.

Step 11: Reviewing Db2 parameters


About this task
Before you install components, review Db2 table and index space parameters such as:
• Buffer pool.
• Compression.
• Erase on deletion.
• Free space.
• Lock size.
• Number of partitions, for a partitioned space.
• Number of subpages, for an index space.
• Primary and secondary space.
• Segment size.
• Type of space.
• VSAM data set password.
These parameters can affect the performance of your system.
Note: Before you assign a buffer pool to a component index or table space, activate the buffer pool and
add the USE privilege to the privilege set for the buffer pool.
To alter the table space or index definitions, review the GENERATE_PROFILES table.
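For example, a simple query using the PROFILE and TABLESPACE_TYPE columns shown in Step 7 lists the current table space type for each profile:

SQL SELECT PROFILE, TABLESPACE_TYPE
    FROM <sysprefix>.GENERATE_PROFILES;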


What to do next
If you are unsure about the meaning of a field, press F1 to get help. For more information, refer to the CREATE INDEX and CREATE TABLESPACE command descriptions in the Db2 for z/OS: SQL Reference.
IBM Z Performance and Capacity Analytics saves the changed definitions in your local definitions library.
When you save a changed definition, it tells you where it is saving it, and prompts you for a confirmation
before overwriting a member with the same name.

Step 12: Installing components


About this task
You can now install IBM Z Performance and Capacity Analytics features. To install components, use the
information in “Installing a component” on page 169, and in these books:
Feature name
Book name
AS⁄400 Performance
IBM i System Performance Feature Guide and Reference
CICS Performance
CICS Performance Feature Guide and Reference
Distributed Systems Performance
Distributed Systems Performance Feature Guide and Reference
IMS Performance
IMS Performance Feature Guide and Reference
System Performance
System Performance Feature Reference Volume I and II
To install Resource Accounting for z/OS (part of the base function), see the Resource Accounting for z/OS
book.

Installing multiple IBM Z Performance and Capacity Analytics systems

About this task
You can install more than one IBM Z Performance and Capacity Analytics system on the same Db2 subsystem. This is useful if you want to develop and test new IBM Z Performance and Capacity Analytics applications.
Note: You cannot use Db2 Copy to copy the objects from the first installation to the new one. If you do,
QMF definitions may be lost.
To install another IBM Z Performance and Capacity Analytics system, repeat the installation from “Step
2: Setting up security” on page 17 to “Step 12: Installing components” on page 31 and specify different
values for:
• Db2 subsystem
• Database
• System table prefix
• Other tables prefix
• RACF groups (if necessary)
• Local data sets
For example, assume your user ID is BILL, and you want a private IBM Z Performance and Capacity
Analytics system.


Dialog parameter            Value
Db2 subsystem               DB2T
Database                    BILLDB
System table prefix         BILL
Other table prefix          BILL
Users to grant access to    BILL
Local data sets             BILL.DEFS, and so on
Other users cannot use this system because BILL is not a Db2 secondary authorization ID nor a RACF
group ID. If you want to share this new IBM Z Performance and Capacity Analytics system, establish a
valid RACF group ID and use the group ID as the prefix instead of BILL.



Chapter 3. Installing the SMF Extractor and
Continuous Collector
After IBM Z Performance and Capacity Analytics has been installed, the database has been set up, components using SMF have been installed, and batch data collection is running successfully, you can optionally set up IBM Z Performance and Capacity Analytics for automated data gathering and continuous collection.
This section describes how to set up IBM Z Performance and Capacity Analytics in a hub and spoke configuration for automated data gathering and processing of SMF records, whereby data is brought from the source systems (spoke systems) to a central system (hub system) where it is collected, stored, and analyzed.
This requires the implementation of the following processes:
SMF Extractor
To extract SMF data to be sent to the Continuous Collector or DataMover.
Continuous Collector
To update the IBM Z Performance and Capacity Analytics Db2 databases using data from the SMF
Extractor or DataMover.
DataMover
To send SMF data from the spoke system (sender) and receive data on the hub system (receiver).
Shadower
To shadow Db2 data off-platform for reporting in Splunk and ELK.
Collator
To package SMF records for archival.
Note: You can continue to collect SMF data using batch collection and transmitting it manually rather than
using automated data gathering. Also you can use a combination of methods, collecting some data using
batch collection, and other data using automated data gathering and continuous collection.

SMF Configuration
In order to ensure that SMF records reach the SMF Extractor, it is necessary to review your SMFPRMxx member to ensure that the necessary exits are going to be driven.
• If you are running z/OS 2.3 or later, you must have the IEFU86 exit active.
• If you are running z/OS 2.2 or earlier, you must have the IEFU83, IEFU84, and IEFU85 exits active.
Under z/OS 2.3 you may run either the IEFU83/4/5 exits or the IEFU86 exits. This is to allow for
migration. It is strongly recommended that you move to using the IEFU86 exit as soon as possible.

Review the SID parameter


The SID parameter identifies the system's SMF ID. You probably do not want to change it, but there are some aspects of how IBM Z Performance and Capacity Analytics processes it that you need to be aware of.
• The value turns up in the IBM Z Performance and Capacity Analytics database in a column usually called
MVS_SYSTEM_ID.
– The system name is not recorded and is not present in most SMF records.
– Neither is the SYSPLEX name.
• Each system feeding data into the same IBM Z Performance and Capacity Analytics database should
have a unique SMF ID.
• If a system is re-IPL’d on a different LPAR and ends up with a different SMF ID, then IBM Z Performance
and Capacity Analytics will see it as a separate system in its reports.


– This may reduce the value of some of the reports because there will be ‘holes’ in the data while it was
running with a different SMF ID.
– If there is a significant difference in the capacities of the two LPARs such separation may be desirable
so the reports reflect the actual execution of the system in each LPAR it can run in.
While it is possible to change the SMF ID in the data as it is being ingested, it is better not to have to do it.

Review your SYS settings


The SYS setting specifies overall SMF recording options for the whole system.
• The TYPE field should contain each type of SMF record that you want to collect from the system.
• The EXITS field must contain the names of the exit(s) that you want the SMF Extractor to use.
If either of these are wrong, then IBM Z Performance and Capacity Analytics will not receive the data that
it requires.

Review each of your SUBSYS settings


Each SUBSYS setting specifies SMF recording options for SMF records coming from a particular
subsystem.
• The TYPE field should contain each type of SMF record that you want to collect from the subsystem.
• The EXITS field must contain the names of the exit(s) that you want the SMF Extractor to use.
– If the SUBSYS settings contain any active exit points other than IEFU83, IEFU84, IEFU85 (z/OS 2.2 or earlier) or IEFU86 (z/OS 2.3 or higher), then the relevant IEFU** exit points need to be explicitly included for the SUBSYS. The SUBSYS EXITS specification is used in place of the SYS EXITS specification if it is present for a SUBSYS.
If either of these are wrong, then IBM Z Performance and Capacity Analytics will not receive the data that
it requires.
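For illustration only (the record types shown are examples, and the exit names must match your z/OS level), a pair of SMFPRMxx fragments might look like this:

SYS(TYPE(14,15,30,42,70:79,110,113),EXITS(IEFU86))
SUBSYS(STC,TYPE(30,70:79),EXITS(IEFU86))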

SMF Extractor
The initialization message has changed. Instead of:

VSX0160I VSXSMF 08:54:00.087 1st SMF record received; data collection started

It is now:

CKKS160I CKKXSMF 04:30:52.903 1st SMF record received; data collection started

The Queue depth message has changed. Instead of:

VSX0141I VSXSTA 19:14:28.259 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 193

It is now:

CKKS141I CKKXWTR 04:36:03.255 ** Queue stats for PC to SMF: 4000 / 3950 Curr: 0 Max: 5
NQ=000000B4x DQ=000000B4x

Sample configuration members


Sample JCL and associated members are provided with IBM Z Performance and Capacity Analytics to
help with your configuration and installation tasks.
Before running for the first time, tailor the members to suit the specific requirements of your installation.


Defining a DASD-only log stream


This is an example of the JCL for defining a DASD-only log stream.
Customize the JCL before running the job by following the instructions in the comments.
Note: RETPD must be greater than zero.

//CRLOGRDS JOB (IZPCA)
//******************************************************************
//* DEFINE IZPCA LOG STREAM ON DASD                                *
//******************************************************************
//* Change the following:                                          *
//*   <LOG_STRM> - Log stream name. Name must be unique by         *
//*                SYSID. Recommended name is                      *
//*                <SYSID>.DRL.LOGSTRM                             *
//******************************************************************
//*
//IXCMIAPU EXEC PGM=IXCMIAPU
//STEPLIB  DD DISP=SHR,DSN=SYS1.MIGLIB
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR)
  DEFINE LOGSTREAM NAME(<LOG_STRM>)
         DESCRIPTION(IZPCA_LOGSTREAM)
         DASDONLY(YES)
         MAXBUFSIZE(33024)
         LS_SIZE(1024)
         AUTODELETE(YES)
         RETPD(1)
         HLQ(DRL)
         HIGHOFFLOAD(80)
         LOWOFFLOAD(0)
         DIAG(YES)
/*
//

Figure 17. JCL to define a DASD-only log stream

Defining a log stream on a coupling facility


This is an example of the JCL for defining a log stream on a coupling facility.
Customize the JCL before running the job by following the instructions in the comments.
Note: RETPD must be greater than zero.


//CRLOGRCF JOB (IZPCA)
//******************************************************************
//* DEFINE IZPCA LOG STREAM ON COUPLING FACILITY                   *
//******************************************************************
//* Change the following:                                          *
//*   <LOG_STRUCT> - Structure name. Name must be unique by        *
//*                  SYSID. Recommended name is                    *
//*                  <SYSID>_DRL_LOG                               *
//*   <LOG_STRM>   - Log stream name. Name must be unique by       *
//*                  SYSID. Recommended name is                    *
//*                  <SYSID>.DRL.LOGSTRM                           *
//******************************************************************
//*
//IXCMIAPU EXEC PGM=IXCMIAPU
//STEPLIB  DD DISP=SHR,DSN=SYS1.MIGLIB
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR)
  DEFINE STRUCTURE NAME(<LOG_STRUCT>)
         LOGSNUM(2)
         MAXBUFSIZE(33024)
         AVGBUFSIZE(1024)
  DEFINE LOGSTREAM NAME(<LOG_STRM>)
         DESCRIPTION(IZPCA_LOGSTREAM)
         STRUCTNAME(<LOG_STRUCT>)
         STG_DUPLEX(NO)
         LS_SIZE(1024)
         AUTODELETE(YES)
         RETPD(1)
         HLQ(DRL)
         HIGHOFFLOAD(80)
         LOWOFFLOAD(0)
         DIAG(YES)
         REGION(256M)
/*
//

Figure 18. JCL to define a log stream on a coupling facility

UPDPOL - Update CFRM policy


UPDPOL is an example of the JCL required to update the CFRM policy.
Customize the JCL before running the job by following the instructions in the comments.


//UPDPOL JOB (IZPCA)
//******************************************************************
//* UPDATE CFRM POLICY                                             *
//******************************************************************
//* First, run this job with only the 'DATA TYPE(CFRM) REPORT(YES)'*
//* statement. From the output select all active CF and            *
//* STRUCTURE statements. Copy those statements where indicated    *
//* below. Then, make the changes below and submit the job.        *
//*                                                                *
//* Change the following:                                          *
//*   <POLICY_NAME> - Policy name. The policy name should be       *
//*                   unique but must include all active           *
//*                   definitions from the currently active CFRM   *
//*                   policy.                                      *
//*   <CF_LIST>     - Coupling facility names. Create this list    *
//*                   of CFs within the SYSPLEX (based on the CF   *
//*                   statements in the policy).                   *
//*   <LOG_STRUCT>  - Log structure name. This name must be        *
//*                   unique by SYSID. Recommended name is:        *
//*                   <SYSID>_DRL_LOG                              *
//*                   Duplicate the STRUCTURE statement for each   *
//*                   SYSID in the SYSPLEX, creating a unique name *
//*                   for each SYSID.                              *
//******************************************************************
//IXCMIAPU EXEC PGM=IXCMIAPU
//STEPLIB  DD DISP=SHR,DSN=SYS1.MIGLIB
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  /* Remove this line after the first run of this job */
  DEFINE POLICY NAME(<POLICY_NAME>) REPLACE(YES)
  <INSERT ACTIVE CF AND STRUCTURE STATEMENTS FROM REPORT GENERATED>
  STRUCTURE NAME(<LOG_STRUCT>) SIZE(10M)
            INITSIZE(8000K)
            PREFLIST(<CF_LIST>)
/*
//

Figure 19. UPDPOL: Update CFRM policy
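The updated CFRM policy does not take effect until it is activated. Assuming standard sysplex operations, the new policy can be started with an operator command such as:

SETXCF START,POLICY,TYPE=CFRM,POLNAME=<POLICY_NAME>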

DRLJSMFO - SMF Extractor control file


Sample member DRLJSMFO is provided in the SDRLCNTL library. It contains parameters and instructions
for configuring the SMF Extractor control file.
Before the first startup of the SMF Extractor, customize the parameters by following the instructions in the
comments.

***********************************************************************
* NAME: DRLJSMFO *
* *
* FUNCTION: SMF EXTRACTOR CONTROL FILE. *
* *
* COPY THIS MEMBER TO A DATA SET AS SMFPxxxx WHERE xxxx IS THE *
* 4 CHARACTER SMF ID OF THE SYSTEM WHERE THE MEMBER WILL BE USED. *
* *
* THESE PARAMETERS MUST BE CUSTOMIZED TO MATCH YOUR INSTALLATION *
* REQUIREMENTS PRIOR TO THE FIRST STARTUP OF THE SMF EXTRACTOR. *
* *
* SET OUTLGR TO THE NAME OF THE OUTPUT LOG STREAM *
* SET SMFREC TO A COMMA SEPARATED LIST OF SMF RECORDS TO BE RECORDED *
*=====================================================================*
OUTLGR=log.stream.name
SMFREC=14,15,30,42,60,61,64,65,70,71,72,73,74,85,90,94,99,100,101,113

Figure 20. DRLJSMFO: Parameters for the SMF Extractor control file

For example:

OUTLGR=&SYSNAME.DRL.LOGSTRM
SMFREC=14,15,30,42,60,61,64,65,70,71,72,73,74,85,90,94,99,100,101,113


DRLJSMFX - SMF Extractor startup procedure


Sample member DRLJSMFX is provided in the SDRLCNTL library. It contains the JCL for the SMF Extractor
startup procedure.
Before the first startup of the SMF Extractor, customize the parameters by following the instructions in the
comments.

//*********************************************************************
//* NAME: DRLJSMFX *
//* *
//* FUNCTION: SMF EXTRACTOR STARTED TASK PROC *
//* *
//* The Dataset pointed to by the SMFXPARM DD contains the SMF      *
//* extractor parameters for each system in the sysplex.            *
//*                                                                 *
//* The parameters for a given system should be in member SMFPxxxx  *
//* where xxxx is the system's SMF ID.                              *
//* *
//*===================================================================*
//SMFEXTP PROC SMFID=SYSA
//CKKSMAI EXEC PGM=CKKSMAI,REGION=0M,
// PARM='TRACE(N),MSGLVL(9),SDUMP(Y),MAXSD(2),MAXQD(4000)'
//*
//STEPLIB DD DISP=SHR,DSN=DRL.SDRLEXTR
//SYSPRINT DD SYSOUT=*
//SMFXPARM DD DISP=SHR,DSN=DRL.USERCNTL(SMFP&SMFID.)
//SYSUDUMP DD SYSOUT=*
//SMFXPRNT DD SYSOUT=*
//*

Figure 21. DRLJSMFX: JCL for the SMF Extractor startup procedure

DRLJCCOL - Continuous Collector started task


Sample member DRLJCCOL is provided in the SDRLCNTL library. It contains the JCL for a started task to
collect SMF data continuously.
Customize the sample JCL by following the instructions in the comments.


//DRLJCCOL JOB (ACCT#),'CONTINUOUS COLLECT'
//***************************************************************
//* NAME: DRLJCCOL *
//* *
//* FUNCTION: Sample procedure for started task to collect *
//* SMF data continuously. *
//* *
//***************************************************************
//* *
//* Customization: *
//* a. Change VER to match your IZPCA version. *
//* b. Change IZPCALD to match the HLQ you used for IZPCA. *
//* c. Change DB2LOD to match your Db2 HLQ and version. *
//* d. Change the IZPCA parameters to match your system *
//* SYSPREFIX - default DRLSYS *
//* SYSTEM - default DSN *
//* &PREFIX - default DRL *
//* &USERS - default DRLUSER *
//* &STOGROUP - default DRLSG *
//* &DATABASE - default DRLDB *
//* e. Change logstream-name to a valid log stream name. *
//* *
//***************************************************************
// SET VER=310
// SET IZPCALD=IZPCA.V&VER.
// SET DB2LOD=DB2.VC10
//*
//DRLSMFCC EXEC PGM=DRLPLC,REGION=0M,
// PARM=('SYSPREFIX=DRLSYS,SYSTEM=DSN,&STOGROUP=DRLSG,',
// '&PREFIX=DRL,&DATABASE=DRLDB,&USERS=DRLUSER')
//STEPLIB DD DISP=SHR,DSN=&IZPCALD..SDRLLOAD
// DD DISP=SHR,DSN=&DB2LOD..SDSNLOAD
//DRLIN DD *
SET USERS='DRLUSER';
COLLECT SMF CONTINUOUSLY FROM logstream-name
COMMIT AFTER 5 MINUTES
BUFFER SIZE 100 M;
/*
//DRLOUT DD SYSOUT=*
//DRLDUMP DD SYSOUT=*

Figure 22. DRLJCCOL: JCL for the Continuous Collector started task

DataMover.sh - Run the DataMover


This is a sample Shell script to run the DataMover.
Customize the script with installation-specific details before running it.


# Shell script to run the DataMover
#
# -------------------------------------------------------------------
# Configuration area - tailor to suit your installation
#
# Runtime directory. Other paths are relative to it.
#
# Recommended to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/IDSz/DataMover"
#
# logging.properties controls which messages get sent where
#
logfile="logging.properties"
#
# The main executable
#
jarfile="java/DataMover.jar"
#
# The config file that tells it what to do.
#
config="config/$1.properties"
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Need when running as a batch job/started task
#
export JAVA_HOME=/usr/lpp/java/J8.0_64
export PATH=$PATH:$JAVA_HOME/bin
#
# -------------------------------------------------------------------
# Execution area - don't change things below the line
#
# Get to the runtime directory
#
cd $rundir
#
# Work out what the parms are
#
if test "$1" = "SSL"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
else
parms="config=$config"
fi
#
# Run the DataMover. Specify the maximum memory size for Java.
#
java -Xmx16G -Djava.util.logging.config.file="$logfile" -jar "$jarfile" $parms

Figure 23. DataMover.sh - Run the DataMover
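As the config= line in the script shows, the first argument selects the properties file from the config directory, so the same script can start either role. For example:

# On the hub system: uses config/hub.properties
./DataMover.sh hub

# On a spoke system: uses config/spoke.properties
./DataMover.sh spoke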

hub.properties - Configure a DataMover on a hub system


This is a sample configuration for a DataMover running on a hub system.
Customize before use by following the instructions in the comments.


# This is a sample configuration for a DataMover running on a Hub System
#
# You need to modify:
#
# The port the input TCPIP stage will listen on
# The name of the log stream in the output stage
#
routes = 1
route.1.name = Hub
#
# Listen for connections from the Spokes and receive their data.
#
# If you want to use SSL you need to perform the certificate exchange before you
# change the value to YES
#
input.1.type = TCPIP
input.1.port = 54020
input.1.ssl = no
#
# For output, the data is written into the log stream
#
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = <Log stream name>
output.1.1.strip_rdw = NO
#

Figure 24. hub.properties - Configure a DataMover on a hub system

spoke.properties - Configure a DataMover on a spoke system


This is a sample configuration for a DataMover running on a spoke system.
Customize before use by following the instructions in the comments.

# This is a sample configuration for a DataMover running on a Spoke System
#
# You need to modify:
#
# The name of the log stream in the INPUT stage
# The host name and port in the output TCPIP stage
#
routes = 1
route.1.name = Spoke
#
# Our input arrives in a log stream, put there by RT/SMF
#
# This will take the data from the log stream and then erase it once it is sent.
#
input.1.type = LOGSTREAM
input.1.logstream = <log stream name>
input.1.block = 30
input.1.wipe = YES
input.1.strip_header = NO
input.1.check_LRGH = YES
input.1.sourcename = LOGSTREAM
input.1.sourcetype = RAWSMF
#
# For output, we send them to the hub
#
# If you want to use SSL you need to perform the certificate exchange before you
# change the value to YES
#
outputs.1 = 1
#
output.1.1.type = TCPIP
output.1.1.host = <host system name>
output.1.1.port = 54020
output.1.1.use_ssl = NO
#

Figure 25. spoke.properties - Configure a DataMover on a spoke system


clear.properties - Erase all records from a log stream


This is a sample configuration for erasing all records from a log stream.
Customize before use by following the instructions in the comments.

# This is a sample configuration to erase all records from a log stream
#
# You need to modify:
#
# The name of the log stream that the input stage will clear
#
# Note the clear operation is irreversible.
# Once the data is gone, it is gone.
# So be sure you want to do this.
#
routes = 1
route.1.name = Clear
#
# clear=yes makes the DataMover clear the log stream and then terminate.
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS0
input.1.clear = yes
input.1.block = 30
input.1.sourcename = LOGSTREAM
input.1.sourcetype = RAWSMF
#
# Dummy console output stage
#
outputs.1 = 1
#
output.1.1.type = CONSOLE
#

Figure 26. clear.properties - Erase all records from a log stream

Continuous Collector configuration options


There are several ways the Continuous Collector can be configured.

Stand-alone configuration
When the only system you want to gather SMF data from is also the system on which you want to run your IBM Z Performance and Capacity Analytics Db2 database, you can set up a stand-alone configuration. This is the simplest configuration.

One system, one database


A stand-alone system is a hub system that is just processing its own SMF data.
The configuration is a mixture of the spoke and hub configurations. The SMF Extractor on the system
with IBM Z Performance and Capacity Analytics and Db2 running collects SMF data and writes it directly
to a local log stream either on DASD or a Coupling Facility. The paired Sender and Receiver DataMovers
are omitted. The Continuous Collector (CC) reads the data from the log stream and updates the IBM Z
Performance and Capacity Analytics database.
[Diagram: on the stand-alone (hub) system, the SMF Extractor writes SMF data to a local log stream; the Continuous Collector reads the log stream and updates the Db2 database.]

Figure 27. Continuous Collector: stand-alone configuration


Hub and spoke configurations using the DataMover


When there are multiple systems sending data to a single IBM Z Performance and Capacity Analytics
system and a single Db2 database, IBM Z Performance and Capacity Analytics is set up in a hub and spoke
configuration. SMF Extractor data is sent from the spoke systems to the hub using the IBM Z Performance
and Capacity Analytics DataMover and TCP/IP.

One Spoke DataMover, one Hub DataMover


The configuration of a hub that is also processing its own SMF data is a combination of the Hub
configuration and Stand-alone configuration. In this configuration, the SMF Extractor data written to a
log stream on the remote (spoke) systems is read by the spoke DataMover (Sender) and sent to the hub
DataMover (Receiver). The hub DataMover writes the data from the spoke system to a local log stream.
The Continuous Collector reads the data and updates the IBM Z Performance and Capacity Analytics
database.
There are two primary ways to implement this:
Option A uses a completely separate processing route for the local data, keeping it isolated from the
data from the Spokes until the aggregate records are written into the Db2 database. It requires that Db2 is
set up with partitioning.
Hub gathering its own data (Option A)

SMF Extractor -> Local Log stream -> Continuous Collector -> Db2 Database (partitioned)
from Spokes -> DataMover (Receiver) -> Inbound Log stream -> Continuous Collector -> Db2 Database (partitioned)
Figure 28. Continuous Collector: hub and spoke configuration option A (separate log streams, partitioned
Db2 database)

Option B allows the SMF Extractor to freely write to the local log stream and serializes the access to the
inbound log stream as the data is copied over. Both the Receiver DataMover and the Copy DataMover will
need to specify the same enqueue on their log stream output stages.
Hub gathering its own data (Option B)

SMF Extractor -> Local Log stream -> DataMover (Copy) -> Inbound Log stream
from Spokes -> DataMover (Receiver) -> Inbound Log stream -> Continuous Collector -> Db2 Database
Figure 29. Continuous Collector: hub and spoke configuration option B (combined log streams)


Multiple Spoke DataMovers, one Hub DataMover


When there are multiple spoke systems, either in the same sysplex or in multiple sysplexes, the
configuration is set up with the data from the spoke systems sent along parallel data paths to the hub.
Depending on the volume of data the spoke systems are sending, the Spoke DataMovers can send data to
a single Hub DataMover (Figure 30 on page 44) or multiple Hub DataMovers (Figure 31 on page 44).

Spoke systems -> TCP/IP -> Hub DataMover -> Logstream -> Continuous Collector -> Db2 Database
Figure 30. Continuous Collector: multiple Spoke systems and one Hub DataMover

Multiple Spoke DataMovers, multiple Hub DataMovers


Important: When multiple Hub DataMovers are used, it is recommended that multiple Continuous
Collectors are used and that the IBM Z Performance and Capacity Analytics Db2 database is partitioned
(for example, by SYSID) to prevent contention between multiple update processes. This is shown in
Figure 31 on page 44.

Each Spoke -> TCP/IP -> dedicated Hub DataMover -> Logstream -> Continuous Collector -> Db2 Database (partitioned by SYSID)
Figure 31. Continuous Collector: multiple Spoke systems and multiple Hub DataMovers

Communications prerequisites
This section lists the communication prerequisites (log streams and TCP/IP ports) for each part of the
automated data gathering and continuous collection process.
SMF Extractor
• Log stream to send data to either the DataMover or Continuous Collector.


Continuous Collector
• Log stream to receive data from either the DataMover or SMF Extractor.
• The standard recommendation is to have a single Continuous Collector per spoke on the hub. This
could vary based on SMF traffic volume.
DataMover
• Log stream to receive data from the SMF Extractor (on the spokes) or send data to the Continuous
Collector (on the hub).
– Multiple log streams might be needed on the hub, one for each Continuous Collector.
• TCP/IP port over which DataMovers will communicate with each other (spoke and hub).
– Assume one port per spoke.
– Spoke systems need to be able to write to the port, and hub systems need to be able to read from
the port.
Note on log streams:
• All log streams used by DataMovers can be either DASD-only or coupling facility-based.
• When using DataMovers, one log stream is needed on each spoke and one log stream is needed for each
Continuous Collector on the hub. For example, if there are three spokes and one hub, with a Continuous
Collector on the hub for each spoke plus one on the hub for the local SMF data, there is a total of seven
log streams needed: one on each spoke and one for each Continuous Collector on the hub.
• Any log stream used by more than one system in a sysplex (such as when using sysplex log streams)
must be defined on a coupling facility and cannot be DASD-only.
• When defining log streams:
– Ensure the RETPD is greater than 0.
– AUTODELETE should have a value of YES to keep the log stream cleared of old data.
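As a minimal sketch, an IXCMIAPU job defining a DASD-only log stream that honors these rules might
look like the following. The log stream name, sizes, and high-level qualifier are illustrative; the sample
configuration members referenced above remain the authoritative JCL for your installation.

//DEFLS    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  DEFINE LOGSTREAM NAME(IFASMF.CF.LS01)
         DASDONLY(YES)
         MAXBUFSIZE(65532)
         STG_SIZE(4096)
         LS_SIZE(4096)
         HLQ(IXGLOGR)
         RETPD(1)
         AUTODELETE(YES)
/*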

Step 1: Installing the SMF Extractor


This section describes how to install the SMF Extractor.

Before you begin


Consider the instructions in “SMF Extractor tips” on page 46.

About this task


The SMF Extractor runs on all systems (hub, spoke, and stand-alone). To install the SMF Extractor, the
steps are as follows:

Procedure
1. On a spoke system, define a log stream for the SMF Extractor. The log stream name must be unique
per system, so include the SYSID in the name.
a) Run the sample job “UPDPOL - Update CFRM policy” on page 36 with only this statement:

DATA TYPE(CFRM) REPORT(YES)

b) From the output of the UPDPOL job, extract the CF and STRUCTURE statements for all active
coupling facilities and structures.
For example:


CF statement

CF NAME(CF01) DUMPSPACE(2000K) PARTITION(0A) CPCID(00)


TYPE(002827) MFG(IBM) PLANT(84) SEQUENCE(0000000168F7)

Structure statement

STRUCTURE NAME(CKPT1) SIZE(2004K)


PREFLIST(CF01, CF02, CF03, CF04)

c) Update the STRUCTURE statement for the DRL structure.


d) Run the UPDPOL job again with all the definitions added in the previous steps.
e) Implement the new CFRM policy.
The command to do that using a policy name of CSWPOL would be:

SETXCF START,POLICY,TYPE=CFRM,POLNAME=CSWPOL

For the JCL to do this, refer to the sample configurations:


• “Defining a DASD-only log stream” on page 35
• “Defining a log stream on a coupling facility” on page 35
2. Ensure the SDRLEXTR data set is APF-authorized.
On the spoke system, this data set may need to be transferred from the hub system.
3. Copy the DRLJSMFO member of SDRLCNTL to an appropriate PARMLIB and rename the member to
SMFPsysi where sysi is the SMF System ID.
See “DRLJSMFO - SMF Extractor control file” on page 37.
4. Customize this member to match your installation requirements:
a) Change the value of OUTLGR to the name of the output log stream created in Step “1” on page 45.
b) Optionally, you can change the list of SMF records to be collected. For guidance, see Chapter 2,
“Installing IBM Z Performance and Capacity Analytics ,” on page 13.
5. Copy the DRLJSMFX member of SDRLCNTL to the appropriate PROCLIB and rename the member to
match the desired task name.
This will run as a Started Task. See“DRLJSMFX - SMF Extractor startup procedure” on page 38.
6. Modify the JCL in this member:
a) Change the STEPLIB to point to the SDRLEXTR data set.
b) Do not change SMFP&SYSTEM. in the SMFXPARM statement.
It must point to the PARMLIB and member name created in Step “3” on page 46.

What to do next
This task should not be started until you are ready to collect data using the Continuous Collector.
The task can be tested by starting the task using the z/OS start command and ensuring it collects data
based on the messages in the SMFXLOG output. For example, a successful collection message is:

VSX0160I VSXSMF 08:54:00.087 1st SMF record received; data collection started
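For example, assuming the started task procedure was saved under the name SMFEXT (the name is
your choice), the task could be test-started and later stopped with the following console commands (the
STOP modify command is described in "SMF Extractor console commands" in Chapter 4):

S SMFEXT
F SMFEXT,STOP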

SMF Extractor tips


When preparing to install the SMF Extractor, consider the following instructions.

Procedure
• The SMF Extractor must run at a high Dispatching Priority (DP).
It must be as close to the SMF DP as possible. The recommended DP is SYSSTC (x'FE') since SMF
usually runs at SYSTEM (x'FF') DP.


• Use the following values in the SMF Extractor JCL.


– REGION=0M (on either the JOB or EXEC statement).
– MAXQD (in the parameters for the VSXMAI) can be updated from the default of 2000 to a maximum
of 9999, if necessary, when the SMF Extractor runs out of queue space. The value should be
increased incrementally to balance functionality with available resources. A value of 5000 is usually
enough for a very busy LPAR.
– Do not specify a value for SUBTIME in the parameter member for VSXMAI.
• The primary performance metric for the SMF Extractor is the queue depth.
This represents how much data is in the input queue to the SMF Extractor that is waiting to be written
to the log stream. To check the queue depth issue the MVS command:

F jobname,STATUS

where jobname is the name of the task running the SMF Extractor.
The output from this command has a line that reads as follows:

VSX0141I VSXSTA 19:14:28.259 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 193

If the Max value is close to or exceeds the control values, the following needs to be checked:
– The MAXQD value in the SMF Extractor JCL can be increased up to a maximum of 9999.
– Verify that the Dispatching Priority of the SMF Extractor is at a similar value to the tasks that are
generating SMF records, especially the SMF task.
• The SMF Extractor region must be at least 256M. Running with a smaller region could result in abend
SC78.
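For example, if the queue-depth check above shows the Max value approaching the control values, the
MAXQD default could be raised in the VSXMAI parameter member. The statement format is assumed to
follow the OUTLGR=/SMFREC= style shown in Chapter 4, and the value 5000 is illustrative:

MAXQD=5000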

Step 2: Installing the Continuous Collector


This section describes how to install the Continuous Collector.

About this task


The Continuous Collector runs on a hub or stand-alone system. To install the Continuous Collector, follow
these steps.

Procedure
1. Define a log stream to feed the Continuous Collector.
• The name of the log stream must be unique for each Continuous Collector.
• The log stream can be defined on a coupling facility (CF) if one exists in the sysplex; however, a
DASD-only log stream is recommended. For the JCL to do this, refer to these sample configuration
members:
– “Defining a log stream on a coupling facility” on page 35
– “Defining a DASD-only log stream” on page 35

2. If the log stream is on a coupling facility perform the following steps. If DASD-only, skip this step.
a) Run the sample job “UPDPOL - Update CFRM policy” on page 36 with only this statement:

DATA TYPE(CFRM) REPORT(YES)

b) From the output of the above job, extract the CF and STRUCTURE statements for all active coupling
facilities and structures.
Examples of these statements are:
CF statement

CF NAME(CF01) DUMPSPACE(2000K) PARTITION(0A) CPCID(00)


TYPE(002827) MFG(IBM) PLANT(84) SEQUENCE(0000000168F7)

STRUCTURE statement

STRUCTURE NAME(CKPT1) SIZE(2004K)


PREFLIST(CF01, CF02, CF03, CF04)

c) Update the STRUCTURE statement for the DRL structure.


d) Run the UPDPOL job again with all the definitions added in the previous steps.
e) Implement the new CFRM policy.
For example, the command to do that using a policy name of CSWPOL would be:

SETXCF START,POLICY,TYPE=CFRM,POLNAME=CSWPOL

3. Set up the procedure to run the Continuous Collector.


The collector can run as a batch job or a started task (STC). The model JCL is in the DRLJCCOL member
of SDRLCNTL. See “DRLJCCOL - Continuous Collector started task” on page 38. Copy this member into
the appropriate procedure library (PROCLIB) and make the following changes:
a) Confirm the VER parameter is correct for the version of IBM Z Performance and Capacity Analytics
that is running, that is, 310 for the V3.1.0 release.
b) Change the IZPCALD parameter to match the high-level qualifier for the IBM Z Performance and
Capacity Analytics SMP/E target library, for example, IZPCA.V310.
c) Change the DB2LOD parameter to match the high-level qualifier for the Db2 SMP/E target library, for
example, DB2.VC10.
d) Change the SYSTEM parameter to match the name of the Db2 instance, such as DSNA.
e) Change &PREFIX to match the IBM Z Performance and Capacity Analytics configuration.
For example, if the default prefix of DRL is changed to XRL, the following values would be used:


SYSPREFIX=XRLSYS
&PREFIX=XRL
&DATABASE=XRLDB

f) Change the log stream name to the name defined in either the CRLOGRCF or CRLOGRDS, depending
on which method (CF or DASD) was used.
g) Save the job with the name the Continuous Collector is going to run as.
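As a sketch, after these steps the symbolic parameters on the PROC statement might resolve to values
like the following. This is illustrative only; the actual layout of the DRLJCCOL member may differ:

//DRLJCCOL PROC VER=310,
//         IZPCALD='IZPCA.V310',
//         DB2LOD='DB2.VC10',
//         SYSTEM=DSNA,
//         PREFIX=XRL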
4. Ensure the job has DBADM access to the Db2 instance.
5. Test starting the Continuous Collector as a started task to make sure the JCL is correct.
Verify the setup as follows:
• Ensure the Continuous Collector was able to open the Db2 database and the log stream.
• If not, correct the problem, for example, the log stream name and permissions. Then restart the task.
• Without data in the log stream, the following message is displayed and the job continues, waiting for
data to arrive:

DRL0300I Collect started at 2019-09-17-09.40.51.


DRL0386I Logstream logstreamname is empty, waiting for input.

However, if the Continuous Collector was previously run to test it, there may be some data in the log
stream and this message will not be displayed.
• To enable the Continuous Collector to run in zIIP mode, SDRLLOAD must be added to the APF-
authorized list.

Step 3: Installing the DataMover


The DataMover is used on all hubs and spokes of hub and spoke systems. It is used to transport SMF data
from separate z/OS images (spoke systems) to the hub.

Before you begin


Before installing the DataMover, consider the following and plan accordingly:
• “DataMover tips” on page 51
• “DataMover configuration options” on page 52.
• “DataMover features and parameters reference” on page 84

About this task


To install the DataMover, follow these steps.

Procedure
1. On the hub system, in OMVS, locate the source directory created when IBM Z Performance and
Capacity Analytics V3.1.0 was installed.
• It should be in /usr/lpp/IBM/IZPCA/v3r1m0/IBM/. If not in that location, issue this command
to get the directory:

df | grep IZPCA

The output from the command will be similar to this example, noting that the high-level qualifier for
the data sets (TIVDS.V3R1M0) will be different for your installation:

/IZPCA (TIVCFG.TIVDS.V3R1M0.ZFS) 32974/34560 4294967269 Available


/V2R3/usr/lpp/IBM/IZPCA/v3r1m0/IBM (TIVDS.V3R1M0.ZFS) 6350/7200 4294967290 Available

• If the source directory does not exist, it needs to be mounted. The TSO command to mount the
filesystem read-only is:


MOUNT FILESYSTEM(prefix.zfs) TYPE(ZFS) MODE(READ) MOUNTPOINT(local_mountpoint)

2. On all systems, create a directory for the DataMover based on your installation standards.
For this example, it will be installed in /var/IZPCA/DataMover. Ensure that /var/IZPCA (or your
alternative directory location) already exists or is created before extracting the Data Mover.
3. On the hub system, install the DataMover by following these steps:
a) Extract the DataMover using the command:

tar -xvof /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM -C /var/IZPCA/

where
• /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM is the source directory/file for IBM Z
Performance and Capacity Analytics
• /var/IZPCA is where you want the contents of the tar file to reside and which is your destination
directory for the DataMover
This will create the following directories:

/var/IZPCA/DataMover
/var/IZPCA/DataMover/config
/var/IZPCA/DataMover/java
/var/IZPCA/DataMover/mappings

Note: The DataMover must be installed on all hub and spoke systems.
• For the spoke systems that do not have IBM Z Performance and Capacity Analytics installed, copy
the /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM file to that system.
• In addition, if multiple DataMovers are being installed on a single system (such as a hub), see
“DataMover tips” on page 51 for more information.
b) Edit the extracted ../IZPCA/DataMover/DataMover.sh script and update the rundir and
JAVA_HOME values in the following lines:

#
# Runtime directory. Other paths are relative to it.
#
# Probably better to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/var/IZPCA/DataMover"

#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Need when running as a batch job/started task
#
export JAVA_HOME=/usr/lpp/java/J8.0_64 <- Path to the Java run libraries

c) Edit the extracted ../IZPCA/DataMover/config/hub.properties file and update the


following lines:


routes = 1
route.1.name = Hub <- Ensure this value is Hub

input.1.type = TCPIP
input.1.port = 54020 <- Ensure this is the correct, available TCP/IP port
number
input.1.ssl = no <- This only needs to be set if SSL is used

outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- The IBM Z Performance and Capacity Analytics
                                      log stream that the Continuous Collector reads.
                                      This must match the log stream name created in
                                      the first step of installing the Continuous
                                      Collector.

See “Sample configuration members” on page 34 for the complete member listing.
4. On the spoke systems, install the DataMover by following these steps on each spoke system:
a) Create the /usr/lpp/IBM/ directory if it doesn’t already exist.
b) Send the file /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDM to the /usr/lpp/IBM/ directory on
each spoke using the standard data transfer protocol (SFTP, ...) of your installation.
c) Change to the /usr/lpp/IBM/ directory and extract the DataMover using the command:

tar -xvof /usr/lpp/IBM/DRLPJDM -C /var/IZPCA/

This creates the same subdirectories as were created on the hub in "3.a" on page 50.
d) Edit the extracted ../IZPCA/DataMover/DataMover.sh script and update the rundir and
JAVA_HOME values as for the hub in “3.b” on page 50.
e) Edit the extracted ../IZPCA/DataMover/config/spoke.properties file and update the
following lines:


routes = 1
route.1.name = Spoke <- Ensure this value is Spoke

input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS04 <- SMF Extractor log stream name

outputs.1 = 1
output.1.1.type = TCPIP
output.1.1.host = <host system name> <- Name or IP of Hub System
output.1.1.port = 54020 <- TCP/IP port of Hub DataMover

See “Sample configuration members” on page 34 for the complete member listing.
f) Create the procedure to run the spoke DataMover.

What to do next
Complete the implementation by following the steps in “Step 4: Activating the Continuous Collector” on
page 56.

DataMover tips
When preparing to install the DataMover, consider the following instructions.

Procedure
• Ensure the -Xmx16G parameter is in the Java statement in the DataMover.sh file to specify the
maximum memory size for Java.
See the sample in “DataMover.sh - Run the DataMover” on page 39.
• When defining log streams ensure the RETPD is greater than 0.
• If multiple DataMovers, including DataMovers and Publication DataMovers, are to run on a single
system such as a hub system, configure each DataMover instance with its own working directory as
follows:
– For each DataMover, copy the files from the DataMover directory into a separate directory, such
as ../DataMover/jobname.
– Customize that directory for the DataMover instance.
• The status of the DataMover internal queues is key to ensuring the DataMover is handling the traffic
load sufficiently. To check the current DataMover statistics, issue the command:

F jobname,APPL=DISPLAY


where jobname is the task name of the USS job that started when the DataMover was started.
In this example of the output, the "Queued for output" and "Queued for input" lines indicate the queue
depth, if any.

F PRLJDMP1,APPL=DISPLAY
DRLJ0078I ZOS Console: display
DRLJ0075I Route: 1 name: Hub
DRLJ0075I Input type: TCPIP
DRLJ0075I Port: 54020
DRLJ0075I Queued for output: 0
DRLJ0075I No active connections
DRLJ0075I 0 Processes(s):
DRLJ0075I No processes defined.
DRLJ0075I 1 Output(s):
DRLJ0075I Output: 1
DRLJ0075I Output Type: LOGSTREAM
DRLJ0075I log stream: DRL.LOGSTRM
DRLJ0075I Queued for input: 0
DRLJ0075I Received DRL.LOGSTRM packets

• To check the current DataMover status, issue the command:

F jobname,APPL=STATUS

where jobname is the task name of the USS job that started when the DataMover was started.
An example of the output from this command is:

F PRLJDMP1,APPL=STATUS
DRLJ0078I ZOS Console: status
DRLJ0093I Status for Route 1 is Running

DataMover configuration options


This section describes the configuration options for setting up the DataMover and shows examples of
the spoke and hub properties files. The DataMover transfers SMF data from remote (spoke) systems to a
central (hub) system.

Hub configuration options


The hub system can be set up in many ways. This section describes the four configurations shown
in Figure 32 on page 53 and for each configuration, shows an example of the configuration file
(hub.properties file).


Dedicated Hubs: highest overhead; most flexibility and throughput (one hub DataMover per spoke)
Multiple-Route Hubs: high overhead; good flexibility and throughput (one hub DataMover with multiple routes)
Shared Hubs: lower overhead; moderate flexibility and throughput (several hub DataMovers shared by spokes)
Single Hub: lowest overhead; least flexibility and throughput (one hub DataMover for all spokes)
Figure 32. Hub configuration options

DataMover configuration files


The DataMover configuration files are contained in the …/DataMover/config directory. The default
names are hub.properties for the hub and spoke.properties for the spoke. These names are
referenced in the DataMover JCL (DRLJDMP) by specifying the name without the .properties extension.

//DRLJDMP PROC CONFIG='hub'

Spoke configuration
For all the hub and spoke configuration options, the spoke configuration is the same.
Spoke configuration

routes = 1
route.1.name = Spoke
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS04 <- The log stream the SMF Extractor writes to
input.1.block = 30
input.1.wipe = YES
input.1.strip_header = NO
input.1.check_LRGH = YES
input.1.sourcename = LOGSTREAM
input.1.sourcetype = RAWSMF
#
outputs.1 = 1
output.1.1.type = TCPIP
output.1.1.host = <name or IP of Hub> <- Name or IP of hub
output.1.1.port = 54020 <- The port the hub DataMover is listening on for this spoke
output.1.1.use_ssl = NO
#


Dedicated hubs
The dedicated hubs configuration specifies there is a one-to-one relationship between DataMovers on the
spoke and the hub. Each spoke uses a unique port number, and each hub DataMover listens on one port.
The advantage of this configuration is that it provides the most flexibility and best capacity for throughput.
The disadvantage of this configuration is that the system resources needed are the most of any option.
The contents of the hub configuration (hub.properties file) are shown in the following example.
Dedicated hubs configuration file

routes = 1 <- The number of ports the DataMover listens to


outputs.1 = 1 <- The number of outputs for each input
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54020 <- The port in spoke.properties
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.1.1.strip_rdw = NO

Multi-route hubs
The multi-route configuration defines a many-to-one relationship between DataMovers on the spoke and
hub systems. All spokes use a unique port number and the hub listens to multiple ports simultaneously.
The advantage of this configuration is that it provides good flexibility and capacity for throughput. The
disadvantage of this configuration is that the system resources needed are high although less than
the dedicated approach. The contents of the hub configuration (hub.properties file) are shown in the
following example. This example is for three spokes.
Multi-route hubs configuration file

routes = 3 <- This is the number of ports the DataMover listens to


outputs.1 = 1 <- This is the number of outputs for First Spoke
route.1.name = Hub_1
route.2.name = Hub_2
route.3.name = Hub_3
# Input for First Spoke
input.1.type = TCPIP
input.1.port = 54020 <- The port in spoke.properties on Spoke 1
input.1.ssl = no
# Output for First Spoke
outputs.1 = 1 <- The number of outputs for Spoke 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.1.1.strip_rdw = NO
#
# Input for Second Spoke
input.2.type = TCPIP
input.2.port = 54021 <- The port in spoke.properties on Spoke 2
input.2.ssl = no
# Output for Second Spoke
outputs.2 = 1 <- The number of outputs for Spoke 2
output.2.1.type = LOGSTREAM
output.2.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.2.1.strip_rdw = NO
#
# Input for Third Spoke
input.3.type = TCPIP
input.3.port = 54022 <- The port in spoke.properties on Spoke 3
input.3.ssl = no
# Output for Third Spoke
outputs.3 = 1 <- The number of outputs for Spoke 3
output.3.1.type = LOGSTREAM
output.3.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.3.1.strip_rdw = NO
#


Shared hubs
The shared hubs configuration specifies that there is a many-to-one relationship between Data Movers on
the spokes and the hub. Each spoke communicating with a specific hub DataMover uses the same port
number and each hub DataMover listens on one port. Multiple hub DataMovers are run on each hub LPAR.
The advantage of this configuration is that the overhead is much less than the dedicated hub approach.
The disadvantage of this configuration is that the flexibility is not as good as the dedicated and multi-
route approaches and the throughput capacity is somewhat less. The contents of the hub configuration
(hub.properties file) are shown in the following example. This is the same as the dedicated approach
but with more than one hub DataMover, each with a unique port, running.
Shared hubs configuration file

routes = 1 <- The number of ports the DataMover listens to


outputs.1 = 1 <- The number of outputs for each input
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54020 <- The port in spoke.properties and must be unique on the LPAR
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.1.1.strip_rdw = NO

Single hub
The single hub configuration specifies that there is a many-to-one relationship between DataMovers on
the spokes and the hub. Each spoke uses the same port number and there is only one hub DataMover
listening on the port. The advantage of this configuration is that it uses significantly fewer resources than
the other approaches. The disadvantage of this configuration is significantly less flexibility and capacity
for throughput. The contents of the hub configuration (hub.properties file) are shown in the following
example.
Single hub configuration file

routes = 1
outputs.1 = 1
route.1.name = Hub
input.1.type = TCPIP
input.1.port = 54020 <- This must match the port in all the spoke.properties
input.1.ssl = no
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = DRL.LOGSTRM <- Log stream used by the Continuous Collector
output.1.1.strip_rdw = NO

Optional monitoring options


You have the option of specifying the interval a TCPIP stage uses to generate statistics.
• For the TCPIP input stage, it is specified as:
input.r.stats_interval = x seconds
or
input.r.stats_interval = y minutes
• For the TCPIP output stage, it is specified as:
output.r.i.stats_interval = x seconds
or
output.r.i.stats_interval = y minutes
If this line is not present, or if the value is set to zero, no statistics output is generated. This interval is
based on clock time, such that messages will be generated on the next clock interval after the TCPIP
Input/Output stage has been created, and thereafter at the specified interval. For example: If the current


time is 02:30:12, and the interval is set to 15 seconds, statistics will be generated at time 02:30:15 (3
seconds later), then 02:30:45, 02:31:00, and so on, at 15 second intervals.
Valid values for x are: 0, 15, 30, 60
Valid values for y are: 0, 1, 2, 5, 10, 15, 20, 30, 60
The text seconds and minutes in the configuration files are not case sensitive. Valid examples include:
15 minutes, 15 Minutes, 15 MINUTES, 15 MiNuTeS
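For example, for route 1 with a single output, using illustrative values from the valid lists above:

input.1.stats_interval = 15 seconds
output.1.1.stats_interval = 5 minutes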

Optional Encryption
Configure SSL to protect your TCPIP links. By default, DataMover communications use unencrypted TCPIP
links. You can configure your communications to encrypt your TCPIP links.
1. On each system, create a keystore for each DataMover and produce a .cert file that holds a
certificate for it. In the DataMover directory, issue the following commands:
• DataMover.sh SSL GEN
• DataMover.sh SSL EXPORT
2. Transport the certificate for the hub to each spoke system.
3. Transport the certificates for each spoke to the hub.
4. When the certificates arrive, import each one into the system's trust store, ensuring that each
certificate is imported with a unique name:
DataMover.sh SSL IMPORT name.cert
5. Modify the hub and spoke configurations to change the use_ssl=no setting on each TCPIP stage to
use_ssl=yes. The SSL implementation that is used is that of the underlying Java installation.
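As an end-to-end sketch of the exchange, assuming illustrative certificate file names hub.cert and
spoke1.cert:

# On the hub and on each spoke, in the DataMover directory:
DataMover.sh SSL GEN
DataMover.sh SSL EXPORT
# Transfer hub.cert to each spoke and each spoke's .cert file to the hub.
# On each spoke:
DataMover.sh SSL IMPORT hub.cert
# On the hub, once per spoke certificate:
DataMover.sh SSL IMPORT spoke1.cert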

Step 4: Activating the Continuous Collector


After the SMF Extractors and any DataMovers (in a multiple-system configuration) have been installed,
follow the instructions in this section to start up the Continuous Collector in a controlled manner.

About this task


If this is not your first use of IBM Z Performance and Capacity Analytics, the assumption is that
the data from spoke systems, if used, is already being sent to the hub, or that data collection for the hub
systems is being implemented without any historical data. If multiple IBM Z Performance and Capacity
Analytics databases are in use and need to be merged during this process, contact your IBM Support
representative for assistance.
To implement the Continuous Collector, follow these steps:

Procedure
1. Determine an optimal start time.


If you have an existing system for processing and archiving SMF data that is being replaced by
IBM Z Performance and Capacity Analytics, you will need to manage the cut-over to minimize data
duplication. When the SMF Extractor is started, data in the current SMF MANx data sets will be
duplicated by the IBM Z Performance and Capacity Analytics collection. Care must be taken to ensure
data duplication is minimized by ensuring a timely switch of the SMF data sets to coincide with the
startup of the IBM Z Performance and Capacity Analytics SMF Extractor.
2. Begin with starting the Continuous Collector on the hub or stand-alone system by following these
steps:
a) Issue the I SMF command to switch the SMF data set. Allow the SMF dump to occur while doing
the next step.
After processing this dump, the SMF dump process will serve only as a backup source of data for
IBM Z Performance and Capacity Analytics.
b) Start the SMF Extractor as a Started Task. Check the log to ensure data is being collected.
c) Run the IBM Z Performance and Capacity Analytics Batch Collector using the data starting with the
next set of data after the last IBM Z Performance and Capacity Analytics batch collection run and
ending with the SMF dump produced in “2.a” on page 57.
d) After the Batch Collector completes updating, start the Continuous Collector.
This was set up in “Step 2: Installing the Continuous Collector” on page 47.
e) Verify that the data is being collected based on messages in the DRLLOG SYSOUT logs.
3. Pause the implementation at this point for a reasonable period of time (24-48 hours) to ensure that
the Continuous Collector processes data without any issues on the hub or stand-alone system.
4. When ready, cut over the spoke systems one at a time by following these steps:
a) Issue the I SMF command to switch the SMF data set. Allow the SMF dump to occur while doing
the next step.
After processing this dump, the SMF dump process will serve only as a backup source of data for
IBM Z Performance and Capacity Analytics.
b) Clear the log stream.
To do this:
i) Change to the ../DataMover/config directory.
ii) In clear.properties, change the value of the log stream to the name of the log stream to be
cleared.

input.1.logstream = logstreamname

iii) From the OMVS command prompt, change to the ../DataMover/ directory and run the
command DataMover.sh clear
c) Start the SMF Extractor on the spoke system. Check that data is being collected and put into the log
stream.
d) Following standard procedures, process the data from the SMF dump produced in “4.a” on page 57.
e) After dump processing is complete, ensure the DataMover is active on the hub. If it is not currently
running, or this is the first spoke system to be implemented, start the DataMover on the hub.
f) Start the DataMover on the spoke system.
g) Verify that the data is being collected and sent to the hub by checking the messages in the logs.
h) Repeat these steps for each spoke system.



Chapter 4. Installation Optional Extensions

Extending the SMF Extractor


The SMF Extractor can write multiple, concurrent streams of SMF records out to different log streams.
This section documents how to operate and configure the SMF Extractor from an IBM Z Performance
and Capacity Analytics point of view. There are several parameters and concepts that are used with IBM
Z Batch Resiliency that are omitted from this documentation. Please review the IBM Z Batch Resiliency
documentation for information about these.

Configuration parameters
The following table lists the SMF Extractor parameters that are relevant for IBM Z Performance and
Capacity Analytics users:

OUTLGR=LSName, OUT1LGR=LSName1, OUT2LGR=LSName2, OUT3LGR=LSName3, OUT4LGR=LSName4,
OUT5LGR=LSName5
   Default: none. Range: a valid log stream name.
   Name of the log stream to write captured SMF records to. Each OUT*LGR has its own SMF record
   filter, specified by the OUT*LREC parameters. At least one OUT*LGR parameter should be
   specified.

OUTLREC=n,x,y,z, OUT1LREC=n,x,y,z, OUT2LREC=n,x,y,z, OUT3LREC=n,x,y,z, OUT4LREC=n,x,y,z,
OUT5LREC=n,x,y,z
   Default: 14, 15, 30, 60, 61, 64, 65. Range: 0 to 255, or 0 to 2047.
   SMF record types to be collected by each OUT*LGR. If not specified, then the values in the
   SMFREC parameter below are used. The syntax for the entries is the same as for the SMFREC
   parameter.

SMFREC=n,x,y,z
   Default: 4, 15, 30, 60, 61, 63, 64. Range: 0 to 255, or 0 to 2047.
   SMF record types to be collected by the SMF Extractor.
   • Each value must be a valid numeric from 0 to 255, separated by a comma with no intervening
     blanks.
     – If you are using the IEFU86 exit, the upper value is 2047, which allows the capture of
       extended SMF records.
   • Alternatively, a range of values can be specified as nnn:ppp, where nnn denotes the start of the
     range, and ppp denotes the end of the range (for example, 60:64 specifies SMF record types 60
     through 64, inclusive).
   • An SMF record type of asterisk (*) resets all prior specified types (for example,
     SMFREC=24,28,*,32,80:84 would result in only SMF record types 32, 80, 81, 82, 83, and 84
     being captured).
   If you want to specify more SMF types than can fit on a single statement, specify multiple SMFREC
   option statements (as many as necessary).



LHDRS, L1HDRS, L2HDRS, L3HDRS, L4HDRS, L5HDRS
   Default: Y. Range: Y or N.
   Specifying L1HDRS=N prevents LGRH headers from being added to SMF records written to the
   OUT1LGR log stream. IBM Z Performance and Capacity Analytics requires LGRH headers to be
   written to the log streams for the Collector and the Data Splitter.

Supported SMF record values


If you are running a traditional installation using the IEFU83, IEFU84, and IEFU85 exits, the
SMF Extractor supports capturing the traditional range of SMF records: 0 to 255.
If you are running with IEFU86 enabled, extended SMF records are also available to be captured. The
range of SMF values supported in these circumstances is 0 through 2047.

Default Configuration
If you have followed the instructions in Chapter 2, "Installing IBM Z Performance and Capacity
Analytics," on page 13, you will have configured the SMF Extractor using only the SMFREC= and
OUTLGR= parameters, with OUTLGR specifying the name of the log stream to write the records to and
SMFREC specifying which SMF records to capture.

Configuring for multiple log stream output


If you are going to make use of additional log stream outputs from the SMF Extractor to, for
example, stream some raw SMF records to an application, you need to configure additional output
streams.
The example scenario used here is that the SMF Extractor is currently capturing SMF 30 and 70 through
79 for IBM Z Performance and Capacity Analytics, and we want to add a second log stream to capture
SMF 80 through 83 records.

The default configuration for this scenario would be:

*
* Output log stream for IZPCA COLLECTOR
*
OUTLGR=MY.IZPCA.LOG.STREAM
*
* IZPCA wants SMF 30 and SMF 70 thru 79
SMFREC=30,70:79

To get the SMF Extractor to write output to multiple log streams, first define the log
streams (as DASD log streams) and add an OUT*LGR statement for each of them. The SMF Extractor
supports writing to up to 5 additional log streams, numbered 1 through 5. For example:

*
* Output log stream for IZPCA COLLECTOR
*
OUTLGR=MY.IZPCA.LOG.STREAM
*
* Output log stream for IZPCA Data Splitter redistribution
*
OUT1LGR=MY.RAW.LOG.STREAM

To capture additional SMF records, ensure that the SMFREC statements will capture the SMF
records that you want to write to the additional log streams. For example:

*
* IZPCA wants SMF 30 and SMF 70 thru 79
*
SMFREC=30,70:79
*
* Our security app wants SMF 80 thru 83
*
SMFREC=80:83

Remember that all SMFREC statements are cumulative (unless one of them specifies an *).
If you were to run the SMF Extractor at this point, all of the records would be written to both of the log
streams. To be selective about which records get written to which log stream, use OUT*LREC
statements. OUT*LREC statements are like SMFREC statements except that they serve to filter the SMF
records written to each output log stream.

*
* Write only the SMF records IZPCA wants to its log streams
*
OUTLREC=30,70:79
*
* Data Splitter records
*
* 80 thru 83 for our security app
*
OUT1LREC=80:83

OUT1LREC is the OUTLREC parameter for the first additional output log stream (numbered 1).
OUT*LREC statements for the same output log stream are cumulative, like SMFREC statements.

Activating changes
If you need to change the SMF details being collected, you are advised to:
1. Update the SMF Extractor configuration.
2. Wait until the system is quiet or in a maintenance window.
3. Start the new SMF Extractor job with the same name as the old one. It should simply queue up behind
it.
4. Stop the old SMF Extractor instance.
This will minimize the number of SMF records that are missed during the changeover.

SMF Extractor console commands


You can use these console commands to manage the SMF Extractor started task. Change smfext to
match the name of your SMF Extractor:
S smfext,SMFID=smfid
Start the SMF Extractor on the system the command is issued on using the supplied SMF ID to pick
the SMFPxxxx member to get its parameters from.
F smfext,STOP
Terminate the SMF Extractor on the system the command is issued on.
F smfext,PTRACE
Prints out the last 100 trace records.
F smfext,RESUME
Resumes SMF recording after a SUSPEND had been used.
F smfext,STATUS
• Displays detailed information related to the performance of the SMF Extractor.
• Of note in the output is the VSX0141I message that shows current queue depths.
F smfext,STATUS PROCESS
The PROCESS parameter display differs depending on the z/OS system version. Pre z/OS V2.3 only
displays SMF record types 0 - 255. z/OS V2.3 or higher displays SMF record types 0 - 2047.
F smfext,STATUS PROCESS(ZERO)
The PROCESS(ZERO) parameter only displays SMF record types that have a zero count.



F smfext,STATUS PROCESS(NZERO)
The PROCESS(NZERO) parameter only displays SMF record types that have a non-zero count.
F smfext,SUSPEND
Suspends SMF recording. SMF records issued while the extractor is suspended will not be captured or
processed by IBM Z Performance and Capacity Analytics. Do not do this unless you have to.
F smfext,VER
Display SMF Extractor Version/Release/Modification level.
There are a number of additional console commands that are only relevant to IBM Z Batch Resiliency. See
the IBM Z Batch Resiliency documentation for details of them.

Setting up data streaming


Data Publication
IBM Z Performance and Capacity Analytics has several options to publish data. These are:
• Shadowing from Db2
• Data Import from a remote Db2
The publication destinations can be:
• The IBM Z Common Data Provider
• An IBM Z Performance and Capacity Analytics Data Catcher
• A local Splunk or ELK instance running on a distributed system
Configuration files are provided to load published data into Splunk and ELK. These are detailed in Chapter
5, “Installation reference,” on page 113. Users may create additional rules to import the data, in any of
our supported formats, into the destination of their choice.

Db2 Shadowing to CDPz


This configuration sets up a Db2 Shadower to extract data from Db2 via JDBC and passes it through to an
instance of the IBM Z Common Data Provider Data Streamer that is running on the same system. The Data
Streamer's policy should be configured to push the data out to its eventual consumers. These can include
Splunk and ELK running on remote systems.
Data transmitted over this path is typically in JSON or CSV format, depending on the needs of the
ingesting program.



Db2 Shadowing to an IBM Z Performance and Capacity Analytics Catcher
This configuration sets up a Db2 Shadower to extract data from Db2 via JDBC and stream it off platform
to a Catcher task that has been set up on a distributed system. The catcher task receives the data and,
typically, writes it out to disk, from where it can be picked up by another process, such as Splunk or ELK.

Data transmitted over this path is typically in JSON or CSV formats, depending on the needs of the
ingesting program.

Remote shadowing with an IBM Z Performance and Capacity Analytics DataImporter
This configuration establishes a DataImporter running on a distributed system that uses JDBC to connect
to Db2 on the host and extracts the data it wants. The data is then reformatted as needed and typically
written out to disk, from where it can be picked up by another process, such as Splunk or ELK.

Data transmitted over this path is converted to its destination format as required – JSON, CSV and SQL
are available. It is even possible to configure a DataImporter to fetch the data once from Db2 and then
write it out in two different formats in two different locations.
It should be noted that this is a pull-model mechanism and, as such, is inherently less secure than
the first two options, which are push models. The fundamental difference is that the credentials and
configuration information for extracting the data from Db2 are stored on the distributed system as
opposed to being stored on the host system. If the distributed system is compromised, it could allow an
attacker to access any data that the credentials provide access to. For this reason, the credentials used
should be limited to read-only access to just the IBM Z Performance and Capacity Analytics data that
needs to be shadowed.

Establishing a Publication Mechanism


With either of the first two approaches, where data is streamed out of Db2 on the host, it is necessary to
first set up Db2 Shadowing and then set up your receiver. Users may wish to bring the receiver online
first.
Shadowing Data out of Db2

About this task


This task shows how to create a Shadower started task that will extract data from Db2, convert it into
JSON and stream it to one or more receivers. The receivers can be local (CDPz Data Streamer) or remotes
(IBM Z Performance and Capacity Analytics Catchers).
This involves setting up a Forecaster running a Shadower configuration that will actively poll Db2 for
updates to tables and views using JDBC. Provide it with credentials and authority to connect to Db2 over
JDBC.
It will then enter an active polling cycle (by default at a 15 minute interval). On each cycle it will query the
data sources of targeted views and tables for any new or updated records since the last successful poll.
An identified data source is one listed in the Shadower configuration and generally corresponds to an
MVS_SYSTEM_ID for a system whose data is processed by the Collector. One Shadower should be run for
each Collector, with data sources specified that match the MVS system IDs collected.

Procedure
1. Enable JDBC for Db2
To shadow Db2 data, set up JDBC. Consult your Db2 system programmer about installing and
activating JDBC for the Db2 holding your IBM Z Performance and Capacity Analytics data.
Actions:
• JDBC enabled for the Db2.
• A location to copy the correct level of Db2 JDBC drivers from.
• A user id and password for each Shadower to use to access Db2 via JDBC. This should be a new
userid specifically created for the Shadower with access to only the data in the IBM Z Performance
and Capacity Analytics tables that are being shadowed.
2. Configuring Db2 Shadowing
This process creates a Shadower Forecaster which actively polls Db2 to find new data added to views
and tables, transforms it into JSON, and sends it to the designated receiver.
a) Create a new Forecaster from the SMP/E installation directory by using the following example as a
guide:

tar -xvof /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJFC -C destination

where:

'/usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJFC'

is the install directory and file for the IZPCA Forecaster archive and

'destination'



is where the contents of the TAR file are to be placed. It is suggested to use a lower-level directory
named shadower. Do not specify one of your existing working directories as the destination
because any files in it will be overwritten by files extracted from the tar file.
b) Edit the extracted Shadower.properties file in the config/ subdirectory as follows:
• Change input.1.connection to the local host and the port number of the Db2 subsystem that holds
the IBM Z Performance and Capacity Analytics schema.
• Change input.1.location to the location value that the Db2 Subsystem had when it was
configured.
• Change input.1.schema to the schema that holds the IBM Z Performance and Capacity Analytics
tables and views it will shadow.
• Change the input.1.interval values to the required Db2 polling interval.
• Modify the list of tables and views specified by input.1.table.* to reflect the tables and views
installed in Db2.
• Change the input.1.sources list to reflect the systems that the Shadower is to process.
• Change output.1.1.host and output.1.1.port to point to your designated receiver.
• Remove shadow entries for Db2 targeting components that are not installed. KPMZ and Capacity
Planning are targeted by default.
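As a sketch, the edits above might produce a fragment like the following. All values are illustrative, and
the table-list key style (input.1.table.1) is an assumption based on the input.1.table.* reference above
and the input.1.sources example later in this section:

input.1.connection = localhost:5040
input.1.location = DSNALOC
input.1.schema = DRL
input.1.interval = 15
input.1.table.1 = EXAMPLE_TABLE_V
input.1.sources = 2
input.1.source.1 = SYS1
input.1.source.2 = SYS2
output.1.1.host = receiver.example.com
output.1.1.port = 54021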
c) Optional: edit the Foreshadowing.properties file to specify range.
d) Run the data mapping utility SDRLCNTL(DRLJCDPS), specifying the Forecaster directory where the
mapping files are created.
e) Create a Started Task for Shadowing.
3. Setting the Shadower Active Polling Interval
This is specified in the Shadower.properties file, in the SHADOW input stage. It is specified by
the .interval tag:

input.1.interval = 15

The value is specified in minutes. The shorter the interval, the more frequently the Shadower will query
Db2 to see if there is new data. The polling interval should account for the number of tables being
shadowed and the number of data sources. 200 tables and 50 data sources will drive 10,000 queries
against Db2 per polling interval. It is advisable to break such large tasks up and distribute them
amongst multiple Shadowers, with each Shadower specifying its own unique sources list.
4. Specifying Sources for the Shadower
The configuration has a section for specifying sources:

#
input.1.sources = 3
input.1.source.1 = SYS1
input.1.source.2 = SYS2
input.1.source.3 = SYS3

Add a source statement for each unique MVS_SYSTEM_ID whose data is to be shadowed. Data will
not be shadowed for systems that are not in the source list. Having the same system in the list twice
will create additional duplicate records on every active polling cycle. On every active polling cycle, the
Shadower makes a query for each source value for each shadowed table or view.
5. Importing the JDBC Drivers
Make the Db2 JDBC drivers available to the Shadower. Edit the Forecaster.sh file in the Forecaster
directory and change the db2jdbc specification to point to the directory holding the drivers. If the
application cannot directly reference the libraries in that location, copy them into a subdirectory of the
Forecaster directory. Ensure they are marked as executable.
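As a sketch, assuming the shell-variable style used by DataMover.sh elsewhere in this chapter and an
illustrative driver path, the edit might look like:

db2jdbc="/usr/lpp/db2v12/jdbc/classes"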
6. Save your JDBC credentials



In the Forecaster directory edit the secure/db2access.properties file and enter the userid and
password. Access to the secure subdirectory should be highly restricted.

Processing Published Data in the IBM Z Common Data Provider

About this task


IBM Z Common Data Provider enables the gathering of data from many different data sources on a Z
system and provides a single framework to forward that data to one or more remote receivers.
IBM Z Performance and Capacity Analytics provides sample JCL member DRL.SDRLCNTL(DRLJCDPS) that
invokes three utilities for generating stream definitions and table mappings from the Db2 database as
required for the streaming of the Db2 data table content to the Splunk or ELK databases.
Note: If maintenance is subsequently applied or customizations are made to your IBM Z Performance and
Capacity Analytics Db2 database that result in database schema changes, it becomes necessary to rerun
the sample JCL DRLJCDPS to create new data table mappings and stream definitions.
Follow these setup instructions for data streaming through IBM Z Common Data Provider:

Procedure
1. Run DRLJCDPS
DRLELSTT
The first utility (DRLELSTT) creates a list of all the data tables in the IBM Z Performance and Capacity
Analytics database. This is used as an input to the other two utilities to ensure they produce consistent
output.
DRLEMCDS
The second utility (DRLEMCDS) creates an izds.streams.json file, which contains an IBM Z Common
Data Provider stream definition for each database table, as well as for each view based on each
database table. The izds.streams.json file is required by IBM Z Common Data Provider. It needs to be
copied into the configuration folder for the IBM Z Common Data Provider user interface.
DRLEMTJS
The third utility (DRLEMTJS) creates a table.json file for every table and view in the database. The
created files map the contents of the tables. The table.json files are required by IBM Z Common Data
Provider.
2. Install IBM Z Common Data Provider
Refer to the IBM Z Common Data Provider User Guide for information.
3. Configure IBM Z Common Data Provider
Complete the following steps to configure IBM Z Common Data Provider for data streaming to send
data off-platform for Splunk or ELK reporting.
a. Make IBM Z Performance and Capacity Analytics table definitions available to IBM Z Common Data
Provider.
• Copy the izds.streams.json file into the IBM Z Common Data Provider configuration UI folder.
• The izds.streams.json file was created earlier with the mapping utility DRLEMCDS.
b. Update your IBM Z Common Data Provider policy to stream the data.
i) Find the stream definitions that correspond to each table to be streamed from IBM Z
Performance and Capacity Analytics and add it to your configuration.
ii) Data will arrive encoded with the character set specified in process.1.1.encoding during
configuration of the Shadower.
Configure IBM Z Common Data Provider to transcribe the data stream to UTF-8, a requirement
for sending the data off-platform.



iii) For each recipient of the data (Splunk or ELK), define a subscriber to the IBM Z Common Data
Provider using the correct IP address for the receiver.
• Configure Splunk receivers to use the IBM Z Common Data Provider protocol.
• Configure ELK receivers to use the Logstash protocol.
iv) Build, distribute, and activate the policy by following the instructions in the IBM Z Common Data
Provider User Guide.

Processing Published Data in an IBM Z Performance and Capacity Analytics Catcher

About this task


This shows how to download an IBM Z Performance and Capacity Analytics DataMover to run on a
distributed system and configure it to receive data streamed from an IBM Z Performance and Capacity
Analytics Shadower and write it out to a directory on disk from where it can be ingested by Splunk or ELK.
The DataMover running on the remote system is referred to as a Catcher DataMover.
This is the preferred setup because it provides a level of buffering in the event of a data surge, and data
not yet ingested into Splunk or ELK is saved to disk. It also enables IBM Z Performance and Capacity
Analytics to use its buffered TCP/IP transmission technology to reduce the risk of data loss if either end
drops out.
Currently there is no automatic deletion of old ingestion files, so they must be manually erased.
Users should review the Hardware and Network Considerations section below when planning this
deployment.
Follow these setup instructions for data streaming using the IBM Z Performance and Capacity Analytics
DataMover as a Data Receiver on the remote system:

Procedure
1. Run DRLJMTJS
Customize the sample JCL member DRL.SDRLCNTL(DRLJMTJS) to suit your environment, then
run the customized sample. DRLJMTJS runs the DRLEMTJS utility program. DRLEMTJS creates a
table.json file for every table in the database, as well as for every view based on these tables. Each
file maps the contents of the table. The table.json files are required by the Publication DataMover.
2. Download the Catcher
Install the DataMover on the remote system that is running Splunk or ELK.
a. First, ensure there is a 64-bit Java 8 runtime installed on the target system.
b. Use FTP to download a DataMover working directory from the z/OS system to the target system.
• Download the main directory, the java subdirectory and the config subdirectory.
• Do not download the mapping directory.
• The DataMover.jar file must be transferred as binary.
• The remaining files must be transferred as text so that FTP converts them to ASCII.
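
For example, a hypothetical FTP session to download the working directory (the host name and
directory paths are illustrative):

ftp zoshost.example.com
cd /u/izpca/DataMover
ascii
get DataMover.sh
get logging.properties
binary
lcd java
cd java
get DataMover.jar
quit

Repeat the text-mode transfers for the files in the config subdirectory.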
3. Customize the DataMover startup scripts
The DataMover startup script to be configured and used depends on your target system platform.
The sample startup scripts reside in the DataMover subdirectory:

DataMover.bat (for Windows)

or

DataMover.sh (for non-Windows)

Sample DataMover.bat (Windows)


The sample DataMover.bat does not require customization but needs to be started from the
DataMover working directory.

REM - Windows Batch script to run the DataMover from its working directory
@ECHO OFF
IF [%1]==[SSL] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[version] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE (
set PARMS=config=.\config\%1.properties
)
java -Xmx8G -Djava.util.logging.config.file=".\logging.properties" -cp ".\java\*"
com.twentyfirstcsw.datamover.DataMover %PARMS%

Sample DataMover.sh (non-Windows)


The sample DataMover.sh requires customization:
• Update the rundir parameter with the correct working directory for the DataMover.
• Update the JAVA_HOME parameter with the correct path for your installed Java.

#! /bin/sh
#
# Shell script to run the DataMover from its working directory
#
# -------------------------------------------------------------------
# Configuration area - tailor to suit your installation
#
#
# Runtime directory. Other paths are relative to it.
#
# Probably better to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/opt/IZPCA/DataMover/datamover"
#
# logging.properties controls which messages get sent where
#
logfile="logging.properties"
#
# The main executable
#
jarfile="java/DataMover.jar"
#
# The config file that tells it what to do.
#
config="config/$1.properties"
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Needed when running as a batch job/started task
#
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home
export PATH=$PATH:$JAVA_HOME/bin
#
# -------------------------------------------------------------------
# Execution area - don't change things below the line
##
# Get to the runtime directory
#
cd $rundir
#
# Work out what the parms are
#
parms="config=$config"
if test "$1" = "SSL"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
fi
if test "$1" = "version"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"

fi
#
# Run the DataMover
#
export CLASSPATH=./java/*
java -Djava.util.logging.config.file="$logfile" com.twentyfirstcsw.datamover.DataMover
$parms

4. Verify that the DataMover is installed correctly by issuing the following command from the
DataMover directory:

.\DataMover version (for Windows)

or

./DataMover.sh version (for non-Windows)

It should produce a few lines of output identifying itself as a DataMover, identifying the operating
system, and giving a dummy value for the system name and sysplex id.
5. Create a Hopper directory
This is a directory that the DataMover will write incoming data into, and from which Splunk or ELK
will read it. The hopper acts as a disk buffer, allowing the DataMover to write data out ahead of what
Splunk or ELK has ingested. It provides persistence for uningested data in the event that Splunk or
ELK is running slow or is unavailable.
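
For example, on a non-Windows system the hopper can be created with (the path is illustrative):

mkdir -p /opt/IZPCA/Hopper

Use the same path for the output.1.1.directory parameter customized in the next step.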
6. Customize the catcher.properties file
This is located in the DataMover/config subdirectory.
a. Update the output.1.1.directory parameter with your hopper directory for the data.
b. Change the port if required, otherwise leave as the default 45020.
c. For Splunk, the hopper directory is determined by the system environment variable
CDPDR_PATH. Ensure this is set correctly to the desired hopper directory location.
d. For ELK, the directory location for the ELK hopper is specified in the sample file B_IZPCA_Input.lsh.

#
# Config
#
routes = 1
#
# Route 1 – Receive from TCPIP and Buffer to disk for Splunk/ELK to pick up
#
route.1.name = Catcher
#
# Single input is TCPIP
#
input.1.type = TCPIP
input.1.port = 45020
#
# Output to hopper
#
outputs.1 = 1
#
# This puts the data that arrives each week into a different directory
# You need to manually delete directories that are a couple of weeks old – after checking
their data got ingested ok
#
output.1.1.type = File
output.1.1.directory = D:\\Hopper
output.1.1.subdir = week
output.1.1.format = json
#

7. Start the Catcher


From the DataMover directory issue the command:

.\DataMover catcher (for Windows)

or

./DataMover.sh catcher (for non-Windows)

The DataMover will start and display the Hostname and IP address it is using, listening on port 45020
(default).

DRLJ0188I DataMover on Hostname: LAPTOP-XXXXXX IP-Addr: x.xx.xxx.xxx


DRLJ0047I TCPIP Listening for input on port 45020

Use these values when configuring your DataMover on z/OS to connect with the Catcher.
8. Review the network Interface setting
If there are multiple networks on your remote platform, a .nif parameter can be specified to
select the network interface that a TCPIP stage will use.
• For the TCPIP input stage, it is specified as:

input.r.nif = id

• For the output stages, it is specified as:

output.r.i.nif = id

where:
• id can be either a valid IP address (IPv4 or IPv6) or a valid network interface name.
• r is the route number. For example: input.1.nif is the input stage for route 1 and input.2.nif is the
input stage for route 2.
• i is the output stage index (as there can be multiple outputs for the same route). For example:
output.1.3.nif is the output stage for route 1, index 3.
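
For example, a hypothetical setting that binds the input stage of route 1 to a specific IPv4 address:

input.1.nif = 192.0.2.10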
To get a list of your system's network interface names, issue the command:

.\DataMover networks (for Windows)

or

./DataMover.sh networks (for non-Windows)

9. Operating the Catcher


To stop the DataMover at any time, issue the command: stop.
If the DataMover fails to terminate issue the command: force or press Ctrl+C.
The status and display commands can be used but, with output redirection, their output will be
redirected as well.
10. Modify the Shadower configuration
a. On z/OS, configure a DataMover with a customized shadower.properties to connect to your
DataMover running remotely for Splunk or ELK.
b. Modify the encoding to UTF-8 in the process step as follows:

process.1.2.encoding = UTF-8

c. Modify the TCPIP output parameters with the IP address of your Catcher DataMover and the
correct port.

output.1.1.host = x.xx.xxx.xxx
output.1.1.port = 45020

11. Optional: Configure SSL to protect your TCP/IP links

The previous steps set up your communications. By default, these communications use
unencrypted TCP/IP links. To configure the communications to use encrypted TCP/IP links:
a. On each system, create a keystore for each DataMover and produce a .cert file that holds a
certificate for it. On each system, in the DataMover Directory, issue the following commands:

DataMover.sh SSL GEN

DataMover.sh SSL EXPORT

b. The certificate from each system must be installed on the other system. This is a binary file and
must be transported in binary.
c. Import each certificate into the system's trust store.
• Ensure that each certificate is imported with a unique name.
• When the certificates arrive, issue the following command:

DataMover.sh SSL IMPORT name.cert

d. Modify the DataMover configurations to change the use_ssl=no setting on each TCPIP stage to
use_ssl=yes.
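
For example, assuming the default single-route configurations, the relevant lines would change
as follows:

# Shadower side: TCPIP output stage
output.1.1.use_ssl = yes
# Catcher side: TCPIP input stage
input.1.use_ssl = yes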

Setting up the DataImporter

About this task


This shows how to download and configure a DataImporter to run on a distributed system. The
DataImporter combines the function of the Shadower and the Catcher in a single process that runs on the
distributed system. It uses JDBC to access Db2, extracts IBM Z Performance and Capacity Analytics data,
converts it to JSON, and writes it out to a hopper directory from where Splunk and ELK can ingest it.
While this configuration offloads to the distributed platform the CPU cost of exporting and converting
the data, remember that the DataImporter must have a set of JDBC credentials configured for it, and
that your firewall must let it connect back to Db2 to access the data it needs.
Users should review the Hardware and Network Considerations section below when planning this
deployment.
Follow these setup instructions to set up a DataImporter on the remote system:

Procedure
1. Download and Unpack the DataImporter
Install the DataImporter on the remote system that is running Splunk or ELK ingestion.
a. First, ensure there is a 64-bit Java 8 runtime installed on the target system.
b. Download the binary file containing the DataImporter .tar file:

get /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDI DataImporter.tar

where:

/usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDI

is the install directory and file for the IBM Z Performance and Capacity Analytics DataImporter archive.
c. Extract the .tar file to create a working directory:

tar -xovf DataImporter.tar -C destination

Do not overwrite any existing installations as the files from the .tar file will overwrite any modified
files that might be present.
2. Enable JDBC for Db2

To shadow Db2 data, set up JDBC. Consult your Db2 system programmer about installing and
activating JDBC for the Db2 holding your IBM Z Performance and Capacity Analytics data.
You will need:
• JDBC enabled for the Db2.
• A location to copy the correct level of Db2 JDBC drivers from.
• A user id and password for each DataImporter to use to access Db2 via JDBC. This should be a
new userid, specifically created for the DataImporter with access to only the data in the IBM Z
Performance and Capacity Analytics tables that are being imported.
• If needed, organize the addition of a rule or rules to permit your DataImporter to open an inbound
connection to Db2 through your firewall.
3. Import the JDBC drivers
Make the Db2 JDBC drivers available to the DataImporter:
a. Download the files from Db2 onto your distributed system.
b. Edit the DataImporter shell script file in the DataImporter directory and change the db2jdbc
specification to point to the directory holding the drivers. If the application cannot directly
reference the libraries in that location, copy them into the java/ subdirectory of the DataImporter
directory. Ensure they are marked as executable.
4. Save your JDBC credentials
In the DataImporter directory edit the secure/db2access.properties file and enter the userid and
password. Access to the secure subdirectory should be highly restricted.
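
For example, on a non-Windows system, access can be restricted with standard file permissions;
a minimal sketch, assuming the DataImporter runs under the owning user id:

chmod 700 secure
chmod 600 secure/db2access.properties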
5. Customize the DataImporter startup scripts
The DataImporter startup script directory to be configured and used will depend on your target
system platform. These sample startup scripts reside in the DataImporter sub-directory:

DataImporter.bat (for Windows)

or

DataImporter.sh (for non-Windows)

Sample DataImporter.bat (Windows)


The sample DataImporter.bat needs to be started from the DataImporter working directory.
• Specify the path to the JDBC drivers.

@ECHO OFF
REM DataImporter
REM
REM IBM Z Performance and Capacity Analytics 3.1
REM LICENSED MATERIALS - Property of Teracloud S.A.
REM 5698-AS3 (C) Copyright Teracloud S.A. 2020, 2022
REM
REM All rights reserved.
REM
REM Windows Batch script to run the DataImporter from its working directory
REM

IF [%1]==[SSL] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[version] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE IF [%1]==[networks] (
set PARMS=%1 %2 %3 %4 %5 %6 %7 %8 %9
) ELSE (
set PARMS=config=.\config\%1.properties
)

SET db2jdbc=./jdbc_db2b10

SET CLASSPATH=./java/*;%db2jdbc%/db2jcc4.jar;%db2jdbc%/sqlj4.zip;%db2jdbc%/
db2jcc_license_cisuz.jar

ECHO %CLASSPATH%

java -Xmx8G -Djava.util.logging.config.file=".\logging.properties" -cp %CLASSPATH%
com.twentyfirstcsw.datamover.DataImporter %PARMS%

Sample DataImporter.sh (non-Windows)
The sample DataImporter.sh requires customization:
• Update the rundir parameter with the correct working directory for the DataImporter.
• Update the JAVA_HOME parameter with the correct path for your installed Java.
• Update the db2jdbc parameter with the path to the JDBC drivers.

#!/bin/sh
#
# DataImporter
#
# IBM Z Performance and Capacity Analytics 3.1
# LICENSED MATERIALS - Property of Teracloud S.A.
# 5698-AS3 (C) Copyright Teracloud S.A. 2020, 2022
#
# All rights reserved.
#
# Shell script to run the DataImporter from its working directory
#
# Configuration area - tailor to suit your installation
#
#
# Runtime directory. Other paths are relative to it.
#
# Use different directories if you are running multiple Data
# Importers on the same system.
#
# Linux: /Users/drl/DataImporter

rundir="/u/drl/DataImporter"

#
# logging.properties controls which messages get sent where
#

logfile="logging.properties"

#
# The main executable
#

jarfile="java/DataImporter.jar"

#
# The config file that tells it what to do.
#

config="config/$1.properties"

#
# The directory where your Db2 JDBC drivers are installed
#

db2jdbc="/apc/tdb2a10/usr/lpp/db2a10/jdbc/classes"

#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Needed when running as a batch job/started task
#
# Mac: /Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home
# Linux: /usr/java/jdk1.8.0_20/bin
# zOS: /apc/java800/64bit/usr/lpp/java/J8.0_64

export JAVA_HOME=/apc/java800/64bit/usr/lpp/java/J8.0_64
export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=./java/*:$db2jdbc/db2jcc4.jar:$db2jdbc/sqlj4.zip:$db2jdbc/
db2jcc_license_cisuz.jar

echo $CLASSPATH

#
# -------------------------------------------------------------------
# Execution area - don't change things below the line
#

#
# Get to the runtime directory
#

cd $rundir

#
# Work out what the parms are
#

parms="config=$config"

if test "$1" = "SSL"


then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
fi

if test "$1" = "version"


then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
fi

if test "$1" = "networks"


then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
fi

#
# Run the DataImporter
#

java -Xmx16G -Djava.util.logging.config.file="$logfile"


com.twentyfirstcsw.datamover.DataImporter $parms

6. Verify that the DataImporter is installed correctly


Issue the following command from the DataImporter directory:

.\DataImporter version (for Windows)

or

./DataImporter.sh version (for non-Windows)

It should produce a few lines of output identifying itself as a DataImporter, identifying the operating
system, and giving a dummy value for the system name and sysplex id.
7. Create a Hopper directory
This is a directory that the DataImporter will write incoming data into, and from which Splunk or ELK
will read it. The hopper acts as a disk buffer, allowing the DataImporter to write data out ahead of
what Splunk or ELK has ingested. It provides persistence for uningested data in the event that Splunk
or ELK is running slow or is unavailable.
8. Decide upon the configuration to run
Several configuration files are supplied with the DataImporter, including:
• Import_CPKPM_H.properties
• Import_CPKPM_R.properties
• Import_CPPROF_R.properties
• Import_TCP_H.properties
• Import_TCP_R.properties
The CPKPM files will import data from the Capacity Planning and Key Performance Metrics
components and the CPPROF files will import data from the Capacity Planning Resource Profiling
Metrics components. The TCP files will import statistical data from the Security components.

The import_xxxx_H files import historical data and the import_xxxx_R files import real-time data.
Due to the computational cost of deriving profile data points, it is suggested that data from the
Capacity Planning PROFILE tables be imported as real-time data by setting .initial = RECENT in the
import_CPPROF_R.properties file. This limits the number of values the DataImporter imports from
each profile table.
The mappings subdirectory delivered with the DataImporter contains mappings for all of the tables in
the configurations.
9. Run DRLJMTJS
On the host, customize the sample JCL member DRL.SDRLCNTL(DRLJMTJS) to suit your environment,
then run the customized sample. DRLJMTJS runs the DRLEMTJS utility program. DRLEMTJS creates
a table.json file for every table installed into the database, as well as for every view based on these
tables. Each file maps the contents of the table. The table.json files are required by the DataImporter.
Although the DataImporter is supplied with copies of a number of mapping files, it is advisable to
rerun DRLJMTJS after installation and after the subsequent application of any APAR that changes
the structure of the tables that are installed or after installing a new component or subcomponent.
The updated mapping files should be downloaded (in ASCII) into the mappings subdirectory created
during the DataImporter's installation process, overwriting whatever mapping files are already in the
folder.
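
For example, a hypothetical FTP session to refresh the mapping files (the host name and
directories are illustrative):

ftp zoshost.example.com
cd /u/izpca/mappings
lcd mappings
ascii
prompt
mget *.json
quit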
10. Customize the configuration you are using
The supplied configuration files have ‘tokens’ that are intended to be used with global change
commands. This enables completing your customization with only a few commands, rather than
having to hand edit many lines in the file.
A script suitable for use with sed is also provided to make the customization easier and repeatable.
Edit the tailor.sed script to put in the correct substitution values; see the sketch that follows. If you
have multiple configurations that pull down data for multiple sysplexes, create a separate .sed
tailoring script for each one.
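
For example, a hypothetical tailor.sed script (the token names shown are illustrative; use the
tokens that actually appear in the supplied configuration files):

s/<SYSPLEX>/DEMOPLEX/g
s/<DB2HOST>/db2host.example.com/g
s/<DB2PORT>/5021/g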
To apply the tailoring, run:

sed –f tailor.sed import_CPKPM_H.properties > my_import_CPKPM_H.properties

This will write out a new file (my_import_CPKPM_H.properties) that is the customized version of the
provided configuration.
Use the name of the customized configuration to run the DataImporter:

DataImporter my_import_CPKPM_H

11. Start the DataImporter


From the DataImporter directory issue the command:

.\DataImporter my_import_CPKPM_H (for Windows)

or

./DataImporter.sh my_import_CPKPM_H (for non-Windows)

The DataImporter will start, connect to Db2 and begin pulling data down.
It will not terminate unless it encounters an error or it was run with a time range with a specified end
date for all tables and it has imported all of the data that is available within the time range.
12. Operating the DataImporter
To stop the DataImporter at any time, issue the command: stop.
It will take a while to shut down and a number of ‘interrupted’ exceptions and ‘waiting’ messages
may occur during the shutdown process.
If the DataImporter fails to terminate issue the command: force or press Ctrl+C.

The status and display commands can be used but, with output redirection, their output will be
redirected as well.
13. Automating the DataImporter
You must arrange for the DataImporter to be restarted if the system it is running on is rebooted or if
it stops for some reason. When it is restarted, it will use its .check files to pick up where it left off.
14. Managing DataImporter Artifacts
Managing .check files
These are written out to a subdirectory specified in the configuration. The default is shadow_xxx,
where xxx is the configuration name – shadow_CPKPM for example. There is one file for each
combination of table and source that exists in the table. The files contain the date and time of the last
records successfully copied from the table for that source.
In general, no action is required for these files. If they are erased, it will ‘reset’ the shadow process
and the next run will use the .initial specification to determine the oldest records to copy.
Hopper files
The hopper files are output files. The user must arrange for them to be erased after they have been
ingested by whatever tool is reading them.
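
For example, on a non-Windows system a hypothetical cleanup command that erases hopper files
older than 14 days (the path is illustrative; confirm the files have been ingested before adopting
such a policy):

find /opt/IZPCA/Hopper -type f -mtime +14 -delete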

Hardware and Network Considerations


Both the Catcher and DataImporter solutions require the user to provide one or more servers for them to
run on.
The minimum (virtual) hardware requirements for the DataImporter are as follows:
• 48GB of RAM
• 6 Server Class CPU cores
• 1TB free disk space
• 1GbE LAN

Network Considerations
For a single LPAR, the IBM Z Performance and Capacity Analytics DataImporter typically streams
around 6 MB an hour, although this will be higher on larger, busier systems. Customers need to estimate
the total data transfer rate to the server and ensure that they have sufficient network bandwidth. The
load may need to be split between multiple servers if the bandwidth cannot be met by a single server.
Customers should also check the outbound network capacity from the host where IBM Z Performance
and Capacity Analytics is installed.
Customers running a Publisher or Shadower on the host and a Catcher on the distributed system should
allow for baseline data transfer rates of 12 MB an hour for each LPAR. As with the DataImporter these can
be higher for larger, busier systems.
The data volume transferred can also be significantly increased if you activate and stream data for
additional subsystem monitoring features.
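
As a worked example with illustrative numbers: a sysplex of 20 LPARs streaming at the baseline rate of
12 MB an hour generates roughly 240 MB an hour, or about 0.5 Mbps sustained, which fits comfortably
within a 1GbE LAN; historical streaming and additional subsystem monitoring can, however, push peak
rates well above this baseline.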

Publishing Historical Data


When a new Shadower or DataImporter is started, it will incorporate some historical data for context.
By default, the scope of this historical data is determined by the data's Shadow update frequency. The
following table shows the default length of historical data copied over for each Db2 table based upon the
specified update frequency for the table. This is the RECENT setting for the .initial parameter.

Update Frequency    Time Period
TIMESTAMP           1 HOUR
HOURLY              1 DAY
DAILY               1 WEEK
WEEKLY              1 MONTH
MONTHLY             1 QUARTER
Other               1 HOUR

This process occurs for each source (MVS_SYSTEM_ID) within every copied table or view. After a
successful query, the timestamp is stored in table_source_check files and is used as the base time for the
next query cycle. The Shadower reverts to its initial instruction if the table_source_check files are not
available. The initial instruction is set by a .initial tag in the shadowing definition of each table or view.
The DataImporter can import data with two different scenarios as follows:
• Real-time scenario: The import_xxxx_R.properties file can import real-time data with the setting
values below. The tag specified determines the initial starting point, and the DataImporter continues
to execute until it is manually instructed to terminate via the command "stop".

.initial tag specification   Effect
RECENT                       The default value. The historical data copied is based on the specified
                             update frequency of the table: Timestamp, Hourly, Daily, Weekly,
                             or Monthly.
HOUR                         Relative time commencing in the previous hour.
DAY                          The historical data copied is based on the specified update frequency of
                             the table:
                             Hourly or Timestamp tables - commencing in the previous 24 hours
                             from the current start date and time.
                             Daily table - commencing at the start of the previous day.
WEEK                         The historical data copied is based on the specified update frequency of
                             the table:
                             Hourly and Timestamp tables - commencing 168 hours prior to the
                             current date and time.
                             Daily table - commencing 7 days prior to the current date.
                             Weekly table - commencing 7 days prior to the current date.
MONTH                        The historical data copied is based on the specified update frequency of
                             the table:
                             Hourly and Timestamp tables - commencing 672 hours prior to the
                             current date and time.
                             Daily table - commencing 28 days prior to the current date.
                             Weekly table - commencing 28 days prior to the current date.
                             Monthly table - commencing 28 days prior to the current date.
QUARTER                      The historical data copied is based on the specified update frequency of
                             the table:
                             Hourly and Timestamp tables - commencing 2016 hours prior to the
                             current date and time.
                             Daily table - commencing 96 days prior to the current date.
                             Weekly table - commencing 12 weeks prior to the current date.
                             Monthly table - commencing 12 weeks prior to the current date.

Note: It is strongly recommended that the provided template config file, import_CPPROF_R.properties,
use .initial = RECENT. The profile tables are defined with an update frequency of hourly. This will result in
the DataImporter executing with a relative date and time commencing 7 days prior to the current date.
• Historical scenario: The import_xxxx_H.properties file can import historical data with the setting
values below.

.initial tag specification   Effect
ALL                          The initial query uses the start of the Unix Epoch (January 1st, 1970) as
                             the oldest data date. This will copy all the records in the table for each
                             source being shadowed.
                             This continues to execute until it is manually instructed to terminate via
                             the command "stop".
RANGE                        The Shadower will query all records between two specified dates.
                             Two additional tags, .range.from and .range.to, must be specified in the
                             shadowing definition.
                             - .range.from defaults to EPOCH, January 1st, 1970.
                             - .range.to defaults to NOW, the current time and date.
                             The Shadow stage will automatically shut down after processing all
                             the data within the tables that are available within the date range and
                             terminate the DataImporter.
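
A minimal sketch of the RANGE tags in a shadowing definition follows; the shadow.1 prefix is
illustrative, so take the actual layout from the supplied ShadowRange sample:

# Illustrative excerpt - the prefix before .initial depends on the
# shadowing definition in the supplied ShadowRange sample
shadow.1.initial = RANGE
shadow.1.range.from = 2022-01-01
shadow.1.range.to = 2022-06-30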

Streaming Current Data and Historical Data


Streaming historical data through the Shadower can increase the implementation time required to reach
near real time processing. Starting a Shadower with .initial=ALL will result in all data in all tables being
streamed, which may take considerable time.
The recommended approach is to start a Shadower with .initial=RECENT. This will result in a faster initial
cycle with less initial data, achieving near real time processing faster. It is then possible to start a second
Shadower and specify .initial=RANGE to selectively copy data from systems in a targeted approach.
The second Shadower can be configured as a single source value and run as a batch job; modify the
source specified in the configuration to change upon completion.
A sample ShadowRange configuration is provided, and can be modified by changing the .range.from
and .range.to dates.

Sample configuration members


Sample JCL and associated members are provided with IBM Z Performance and Capacity Analytics to
help with your configuration and installation tasks.
Before running them for the first time, tailor the members to suit the specific requirements of your
installation.

DRLJCCOL - Continuous Collector started task


Sample member DRLJCCOL is provided in the SDRLCNTL library. It contains the JCL for a started task to
collect SMF data continuously.
Customize the sample JCL by following the instructions in the comments.

//DRLJJCOL JOB (ACCT#),'CONTINUOUS COLLECT'


//***************************************************************
//* NAME: DRLJCCOL *
//* *
//* FUNCTION: Sample procedure for started task to collect *
//* SMF data continuously. *
//* *
//***************************************************************
//* *
//* Customization: *
//* a. Change VER to match your IZPCA version. *
//* b. Change IZPCALD to match the HLQ you used for IZPCA. *
//* c. Change DB2LOD to match your Db2 HLQ and version. *
//* d. Change the IZPCA parameters to match your system *
//* SYSPREFIX - default DRLSYS *
//* SYSTEM - default DSN *
//* &PREFIX - default DRL *
//* &USERS - default DRLUSER *
//* &STOGROUP - default DRLSG *
//* &DATABASE - default DRLDB *
//* e. Change logstream-name to a valid log stream name. *
//* *
//***************************************************************
// SET VER=310
// SET IZPCALD=IZPCA.V&VER.
// SET DB2LOD=DB2.VC10
//*
//DRLSMFCC EXEC PGM=DRLPLC,REGION=0M,
// PARM=('SYSPREFIX=DRLSYS,SYSTEM=DSN,&STOGROUP=DRLSG,',
// '&PREFIX=DRL,&DATABASE=DRLDB,&USERS=DRLUSER')
//STEPLIB DD DISP=SHR,DSN=&IZPCALD..SDRLLOAD
// DD DISP=SHR,DSN=&DB2LOD..SDSNLOAD
//DRLIN DD *
SET USERS='DRLUSER';
COLLECT SMF CONTINUOUSLY FROM logstream-name
COMMIT AFTER 5 MINUTES
BUFFER SIZE 100 M;
/*
//DRLOUT DD SYSOUT=*
//DRLDUMP DD SYSOUT=*

Figure 33. DRLJCCOL: JCL for the Continuous Collector started task

DataMover.sh - Run the DataMover


This is a sample Shell script to run the DataMover.
Customize the script with installation-specific details before running it.


# Shell script to run the DataMover


#
# -------------------------------------------------------------------
# Configuration area - tailor to suit your installation
#
# Runtime directory. Other paths are relative to it.
#
# Recommended to use different directories if you are running
# multiple DataMovers on the same system.
#
rundir="/IDSz/DataMover"
#
# logging.properties controls which messages get sent where
#
logfile="logging.properties"
#
# The main executable
#
jarfile="java/DataMover.jar"
#
# The config file that tells it what to do.
#
config="config/$1.properties"
#
# -------------------------------------------------------------------
# Environment area - Where's Java?
# Needed when running as a batch job/started task
#
export JAVA_HOME=/usr/lpp/java/J8.0_64
export PATH=$PATH:$JAVA_HOME/bin
#
# -------------------------------------------------------------------
# Execution area - don't change things below the line
#
# Get to the runtime directory
#
cd $rundir
#
# Work out what the parms are
#
if test "$1" = "SSL"
then
parms="$1 $2 $3 $4 $5 $6 $7 $8 $9"
else
parms="config=$config"
fi
#
# Run the DataMover. Specify the maximum memory size for Java.
#
java -Xmx16G -Djava.util.logging.config.file="$logfile" -jar "$jarfile" $parms

Figure 34. DataMover.sh - Run the DataMover

publication.properties - Transfer data from the log stream


This is a sample configuration for taking the data from the log stream then erasing it after it is sent.
Customize before use by following the instructions in the comments.


# Input arrives in a log stream, put there by the COLLECTOR


#
# This will take the data from the log stream and then erase it after it is sent.
#
# You need to modify:
#
# The name of the log stream in the input stage
# The name of the host and port number in the TCPIP output stage
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS04
input.1.wipe = YES
input.1.strip_header = NO
input.1.check_LGRH = YES
input.1.sourcename = COLLECTOR
input.1.sourcetype = AGGREGATE
#
process.1 = 1
#
# Convert to JSON
#
process.1.1.type = JSON
process.1.1.encoding = IBM-1047
process.1.1.float_format = IBM
#
# Send to CDPz
#
outputs.1 = 1
#
# Over TCPIP
#
output.1.1.type = TCPIP
output.1.1.host = host_name
output.1.1.port = 45020
output.1.1.directory = buffer
#

Figure 35. publication.properties - Transfer data from the log stream

clear.properties - Erase all records from a log stream


This is a sample configuration for erasing all records from a log stream.
Customize before use by following the instructions in the comments.

# This is a sample configuration to erase all records from a log stream


#
# You need to modify:
#
# The name of the log stream that the input stage will clear
#
# Note the clear operation is irreversible.
# Once the data is gone, it is gone.
# So be sure you want to do this.
#
routes = 1
route.1.name = Clear
#
# clear=yes makes the DataMover clear the log stream and then terminate.
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS0
input.1.clear = yes
input.1.block = 30
input.1.sourcename = LOGSTREAM
input.1.sourcetype = RAWSMF
#
# Dummy console output stage
#
outputs.1 = 1
#
output.1.1.type = CONSOLE
#

Figure 36. clear.properties - Erase all records from a log stream

Status monitoring commands reference


This section shows the status commands used to monitor the operation of the IBM Z Performance and
Capacity Analytics data collection processes.

Log stream status


This command shows the status of a log stream.

D LOGGER,C,LSN=logstreamname,D

where logstreamname is the name of the log stream of interest.


Example of the command output with key values highlighted.

D LOGGER,C,LSN=DRL.LOGSTRM,D
IXG601I 01.43.45 LOGGER DISPLAY 469
CONNECTION INFORMATION BY LOGSTREAM FOR SYSTEM ZT01
LOGSTREAM STRUCTURE #CONN STATUS
--------- --------- ------ ------
DRL.LOGSTRM *DASDONLY* 000002 IN USE
DUPLEXING: STAGING DATA SET
STGDSN: DRL.DRL.LOGSTRM.ZT00PLEX
VOLUME=TEC000 SIZE=0000002700 (IN 4K) % IN-USE=025
GROUP: PRODUCTION
OFFLOAD DSN FORMAT: DRL.DRL.LOGSTRM.<SEQ#>
CURRENT DSN OPEN: YES SEQ#: A0002221
ADV-CURRENT DSN OPEN: NO SEQ#: -NONE-
JOBNAME: PRLJCCOL ASID: 0028
R/W CONN: 000000 / 000001
RES MGR./CONNECTED: *NONE* / NO
IMPORT CONNECT: NO
JOBNAME: PRLSMFEX ASID: 005B
R/W CONN: 000000 / 000001
RES MGR./CONNECTED: *NONE* / NO
IMPORT CONNECT: NO

NUMBER OF LOGSTREAMS: 000001

Figure 37. Status command: Log stream status

SMF Extractor statistics


This command shows the SMF Extractor statistics.

F jobname,STATUS

where jobname is the name of the task running the SMF Extractor.
Example of the command output with key status values highlighted.


F PRLSMFEX,STATUS
VSX0111I VSXCON 01:21:47.479 Command <STATUS> received from CONSID=0300000D CONSNAME=JMACERA
VSX0133I VSXSTA 01:21:47.479 ++ SMFU83 Status display: ++
VSX0136I VSXSTA 01:21:47.479 ** > Server is ready to capture SMF records < **
VSX0127I VSXSTA 01:21:47.480 ** #C21CAT Created by PRLSMFEX STC on 2019/04/30 17:01:44
VSX0135I VSXSTA 01:21:47.480 ** SMFU83 Started by PRLSMFEX STC on 2019/04/30 17:01:44
VSX0134I VSXSTA 01:21:47.480 ** SMFU83 Server has been restarted 1 times this IPL
VSX0141I VSXSTA 01:21:47.480 ** Queue depth control values: 2000 / 1950 Curr: 0 Max: 166
VSX0173I VSXSTA 01:21:47.480 SQ Cntrs: NQ=000270B4x DQ=000270B4x
VSX0130I VSXUT1 01:21:47.481 ** SMF Types collected:
(014,015,030,042,060,061,064,065,070,071,072,073)
VSX0130I VSXUT1 01:21:47.481 ** SMF Types collected: (074,085,090,094,099,100,101,113,118,119)
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.481 :: SMF exit SYSTSO.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSJES2.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSJES2.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSOMVS.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSSTC.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.482 :: SMF exit SYSSTC.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYSSTC.IEFU83 Module VSXU83 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU85 Module VSXU85 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU84 Module VSXU84 is active ::
VSX0159I VSXLSE 01:21:47.483 :: SMF exit SYS.IEFU83 Module VSXU83 is active ::
VSX0139I VSXSTA 01:21:47.483 ** SVC dumps will be created for certain abends
VSX0131I VSXSTA 01:21:47.484 ** Tracing is active; MSGLVL is 9
VSX0144I VSXSTA 01:21:47.484 ** VSXPC1 abended 0 times; 2 abends are allowed before termination
VSX0349I VSXSTA 01:21:47.484 ++ Id=0000 M=VSXMAI TCB=006F8588x EP=00007000x TT=00000000
07297748x
VSX0349I VSXSTA 01:21:47.484 ++ Id=PR01 M=VSXPRT TCB=006FC580x EP=00017458x TT=00000000
0018A03Ax
VSX0349I VSXSTA 01:21:47.484 ++ Id=CM02 M=VSXCON TCB=006F82E0x EP=00011900x TT=00000000
009B8010x
VSX0349I VSXSTA 01:21:47.484 ++ Id=WR03 M=VSXWTR TCB=006F80A8x EP=0001AA70x TT=00000003
7E560F1Ax
VSX0349I VSXSTA 01:21:47.485 ++ Id=SM04 M=VSXSMF TCB=006CE3A8x EP=00017D90x TT=00000000
9CA1B4F7x
VSX0349I VSXSTA 01:21:47.485 ++ Id=HR05 M=VSXHRB TCB=006CE1F8x EP=00016C70x TT=00000000
37357ECFx
VSX0360I VSXSTA 01:21:47.485 -- Allocated dsn:
. . .
VSX0361I VSXSTA 01:21:47.485 -- SCNT(00000001) MXSTK(00000011) STK1(00000000) CRSH(00000000)
VSX0362I VSXSTA 01:21:47.485 -- RECR( 159924) CPLS( 5147) RECW( 159861) SMXC( 0)
VSX0363I VSXSTA 01:21:47.485 -- EOVC( 0) RSLC( 0) WSLC( 0) IBUF( 1000)
VSX0129I VSXSTA 01:21:47.485 ++ End-of-list ++

Figure 38. Status command: SMF Extractor statistics

DataMover status
This command shows the status of the DataMover task.
The output of this command may differ depending on the number of routes and processes defined in the
parameters.

F DM_USS,APPL=STATUS

where DM_USS is the name of the USS task (usually ending with a 1). For example, if the DataMover is
started as DRLJDM, this value would be DRLJDM1.
Example of the command output with key status values highlighted:

F DRLJDMH,APPL=STATUS
DRLJ0078I ZOS Console: status
DRLJ0093I Status for Route 1 is Running


DataMover statistics
This command shows the statistics from the DataMover task.
The output of this command may differ depending on the number of routes defined in the parameters.

F DM_USS,APPL=DISPLAY

where DM_USS is the name of the USS task (usually ending with a 1). For example, if the DataMover is
started as DRLJDM, this value would be DRLJDM1.
Example of the command output with key values highlighted.

F DRLJDMH,APPL=DISPLAY

DRLJ0078I ZOS Console: display


DRLJ0075I Route: 1 name: Hub
DRLJ0075I Input type: TCPIP
DRLJ0075I Port: 54020
DRLJ0075I Queued for output: 0
DRLJ0075I 1 active connections
DRLJ0075I Inbound connection from: 129.42.208.176:
DRLJ0075I Generated 44,133 packets
DRLJ0075I 0 Processes(s):
DRLJ0075I No processes defined.
DRLJ0075I 1 Output(s):
DRLJ0075I Output: 1
DRLJ0075I Output Type: LOGSTREAM
DRLJ0075I Log stream: IFASMF.DEMOPLX.IZPCA.RECS
DRLJ0075I Queued for input: 0
DRLJ0075I Received 44,133 packets

Figure 39. DataMover or Publication DataMover statistics

DataMover features and parameters reference


This section describes features of the DataMover and the parameters for configuring and running the
DataMover efficiently.
The DataMover runs one or more internal data transfer pipelines (routes). Each route has:
• one input stage
• zero or more process stages
• one or more output stages
Internally the data is processed in packets, the size of which is determined by the .block value in the
input stage. Typically, this is a small batch of 50-100 records. This strikes a good balance between the
overhead of moving the packets and the time spent processing individual records.
The input, process, and output stages do the following:
1. The input stage executes first, reads data and passes packets of data to the first process stage (or to
the output stages if there are no process stages).
2. The process stages then execute sequentially, passing data packets down the chain until they are fully
processed.
3. The final process stage passes the packet to the output stage.
4. The output stages then execute in parallel, writing the data out to one or more destinations.
There is a pacing mechanism in place that provides dynamic feedback to the input stage on how well the
DataMover is processing the data packets that it has generated. If it is taking a long time to process them,
the input stage will stop reading data until they have caught up. This avoids scenarios where the input
stage just keeps reading data until the DataMover runs out of storage.
The pacing mechanism uses trackers. A tracker is an object that keeps track of data packets created by
the input stage (or derived from packets it is tracking). Each data packet gets a single tracker to watch
it. If there are no free trackers, the input stage cannot create another data packet. When all of the data
packets associated with a tracker have been unlinked from it (usually when all the output stages are done
with them), the tracker returns to being a free tracker that the input stage can use to track another data
packet. You can set the number of trackers an input stage has by using the .trackers value.

Memory management
Consider these configuration options for memory management to enable the DataMover to run efficiently.
• Use a 64-bit JVM and set the maximum heap size (-Xmx) to 16GB or larger.
• The .block and .trackers values on the input stage control the number of records that are going
to be in storage at any one time. To estimate the memory used by a route, multiply these two values,
multiply again by the average record size (anything from 2K to 32K for SMF records), and add another
20 percent (see the worked example after this list).
• For TCP/IP transmission, you need to specify a fairly high trackers number (150 or more) because of
the way the failure detection on the transmission works. It will hold onto a packet for several seconds
before it decides it has been delivered successfully. The number of trackers on the input stage also
works as a pacing mechanism for the TCP/IP connection. If you have 100 trackers and it holds onto
the packets for 4 seconds, then it will end up sending about 25 packets a second. Multiply by the block
count and the average record size to estimate the data volume being transmitted.
• You also need to specify a fairly high number of trackers (100 or more) for stages that repack the data.
These are typically process stages like JSON, CSV, and SQL. These stages take the data out of the
packets it arrives in, transform it, and then pack the new data into new packets, hanging onto those
packets until they are full. If there are not enough input packets, the process can stall, which will result
in it sending on all of the packets it has after it detects that it is no longer receiving input. This will free
up the packets and release the trackers associated with them, allowing more data to be read. Memory
usage for routes that repack the data can be twice that of normal routes.
• For the other stages, you should need no more than twice as many trackers as you have input and
process stages, with, say, a minimum of 6. This will avoid a backlog of unprocessed data building up in
the DataMover because it can read the data in faster than it can write it out.
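
As a worked example with illustrative values: a route with input.1.block = 100 and
input.1.trackers = 150 moving records that average 8 KB can hold 100 x 150 x 8 KB, roughly
117 MB, in flight at once; adding the 20 percent overhead gives an estimate of about 140 MB for
that route.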

Troubleshooting
If the DataMover is not processing data often enough or seems to be stalling, increase the .block
setting (to put more records in each packet) and the .trackers setting (to allow more packets into the
DataMover).

Record formats
The DataMover expects to deal with two primary types of data: textual data and LGRH data.
Textual data
This is unformatted data that is simply split into records. Many of the DataMover's process stages
cannot handle it, so processing is generally restricted to input, transport, and output.
LGRH data
This is data that has been output by an IBM Z Performance and Capacity Analytics application,
either the SMF Extractor or the Collector. Each record is prefixed with an LGRH header that contains
metadata describing the record.
There are two major distinctions of LGRH data:
SMF data
Consists of SMF records.
Table data
Consists of data output from the Continuous Collector in a specific binary data format.
The JSON and CSV process stages will only work with LGRH Table data and require access to a set of
mapping files in order to be able to parse the binary data format.


JSON and CSV data


The JSON and CSV stages transform LGRH Table data into textual data in specific JSON and CSV formats.
The File output stage partially parses these formats and makes use of information from the data records
when deciding where to write them out.

Configuration
The DataMover reads its configuration from a Java .properties file.
The syntax of this file is keyword = value, and all the reading and parsing is done by Java.
The primary keywords in the file are:
routes
This specifies the number of routes defined in the config file. While most configurations just run a
single route, there are some that run two or more routes. You do need to watch the total memory
usage across all of the routes running within a DataMover, as they all run within the same JVM and use
a shared memory pool.
route.r.name
This defines a name for the rth route that is used in some message and trace output.
input.r
This is the input stage for the rth route. Its parameters take the form input.r.keyword.
process.r
This is the number of process stages that are defined for the rth route. If not present, it defaults to 0.
process.r.n
This is the nth process stage for the rth route. Its parameters take the form process.r.n.keyword.
outputs.r
This is the number of output stages that are defined for the rth route.
output.r.n
This is the nth output stage for the rth route. Its parameters take the form output.r.n.keyword.
The attributes of each stage are specified with further keyword extensions.

Stages, keywords, and parameter settings


Stages are specified through the stage.type keyword. Parameters for each stage can be adjusted to tailor
the DataMover to suit your installation.
The stages and type keywords are:
Input stages
JOIN
Receive data from a process SPLIT stage
LOGSTREAM
Read from a log stream
TCPIP
Open a TCP/IP port to listen for connection requests
Process stages
CONSOLE
Send data to the console (stdout) as text
CSV
Convert output records from the Continuous Collector into Comma Separated Value records
JSON
Convert output records from the Continuous Collector into JSON records


FILTER
Filter by required record types
SPLIT
Propagate data to a JOIN input stage
Output stages
CONSOLE
Dump data to the sysout (JOBLOG)
FILE
Write data to the USS file system
LOGSTREAM
Write to a log stream
TCPIP
Connect and write data to a remote TCP/IP server

Common keywords
The common keywords are used on many stages and always have the same meaning.

Common keywords
.type
The name of the stage.
.block
For stages that generate data packets, this is the number of records to put in each data packet.
.trackers
For input stages, this is the number of trackers to use to manage the data loaded from the input
source.
.sourcetype
A short, single word description of the source of the data. Some stages (JSON and CSV) generate the
source name for the data packets they output from the data they place inside them. Specifically they
set the source name to be a value derived from the name of the IBM Z Performance and Capacity
Analytics Db2 table associated with the source of the data.
.sourcename
A short, single token name for the data stream.
.pacing
A delay, in milliseconds, that is added between each data packet that is processed. Used to artificially
slow the DataMover down.
.encoding
The Java name for the code page that is to be used to encode the data. Used on stages that transform
the data.
.float_format
This tells the stage the format of the floating point numbers in its input. Values are:
IEEE
The Java standard
IBM
IBM S360 encoding

Setting .block and .trackers


On the spoke, the input is from a log stream and needs packing up for transmission to the hub.
The defaults for a log stream stage are:


input.1.block = 10
input.1.trackers = 100

On the hub, there is a TCPIP input stage, with the default:

input.1.trackers = 100

There is no .block setting as the TCPIP stage simply receives complete data packets with however many
blocks the spoke put inside them.
Each spoke that connects to the hub gets its own set of trackers, using the value above. These trackers do
not have the 4 second lifespan that the spoke trackers do, so the hub can cycle through them all within a
second or so. The value should be at least 25% of the highest spoke tracker value. Higher values can be
specified, but that increases the volume of data held in memory, although it is not held very long.
If you increase the spoke .trackers value, then also increase the corresponding hub .trackers value.

Input stage keywords


The input stages are: JOIN, LOGSTREAM, TCPIP.

Table 2. DataMover: Input stage parameter settings

Input stage  Command                 Required  Default value      Data type  Accepted values                          Case sensitive
JOIN         input.r.channel         Y         No default         String     Channel name                             Y
LOGSTREAM    input.r.logstream       Y         No default         String     Log stream name; 1-26 characters         Y
             input.r.block           N         10                 Integer    value > 0
             input.r.sourcename      Y         No default         String     Source name associated with the records  Y
             input.r.sourcetype      Y         No default         String     Source type associated with the records  Y
             input.r.wipe            N         no                 String     yes | no                                 N
             input.r.strip_header    N         no                 String     yes | no                                 N
             input.r.check_LGRH      N         yes                String     yes | no                                 N
             input.r.clear           N         no                 String     yes | no                                 N
             input.r.encoding        N         The environment's  String     UTF-8 | The encoding used for the data   N
                                               local encoding                being placed in the packet
             input.r.trackers        N         100                Integer    value > 0
             input.r.checkpoint      N         yes                String     yes | no                                 N
             input.r.pacing          N         0                  Integer    value ≥ 0
             input.r.max_pack        N         0                  Integer    value ≥ 0
             input.r.max_idle        N         0                  Integer    value ≥ 0
             input.r.max_overflow    N         50                 Integer    value ≥ 0
TCPIP        input.r.port            Y         No default         Integer    TCP/IP port; value ≤ 65535
             input.r.use_ssl         N         no                 String     yes | no                                 N
             input.r.trackers        N         100                Integer    value > 0
             input.r.nif             N         null               String     Network interface; must be a valid IPv4,
                                                                             IPv6_STD or IPv6_HEX_COMPRESSED address
             input.r.stats_interval  N         0                  Integer    0, 15, 30, 60 seconds, or 0, 1, 2, 5,    N
                                                                             10, 15, 20, 30, 60 minutes; for example:
                                                                             input.r.stats_interval = 15 seconds

JOIN
Receive data from a process SPLIT stage.
.channel
This specifies the name of a channel that the input stage listens on. When the SPLIT stage sends a
packet to a channel, JOIN stages listening to that channel will receive a copy of it.
Because the JOIN stage is an input stage, each pipeline processing split data must be defined as a
separate route.
LOGSTREAM
Read from a log stream.
.logstream
The name of the log stream to read data from.
Symbolic substitution is supported.
.wipe
Set to yes to mark data in the log stream for deletion after it has been processed by the
DataMover.
This parameter is only relevant to the LOGSTREAM input stage. It only applies to spoke systems as
it is only a spoke system that takes input from a log stream.
.checkpoint
Set to yes (the default) to maintain a restart checkpoint in a USS file. It is updated only after the
DataMover has finished processing the data, and indicates the point in the log stream to start
reading from if the DataMover crashes.
If you turn it off (set to no), then ensure the .checkpoint file is erased to avoid problems if you later
turn it back on.
This parameter is only relevant to the LOGSTREAM input stage. It only applies to spoke systems as
it is only a spoke system that takes input from a log stream.
.check_LGRH
When set to yes, this causes the stage to check that each record it reads has a valid LGRH header
at the front of it. If it finds a record without such a header (indicating that it is either reading
the wrong log stream, or an application other than IBM Z Performance and Capacity Analytics is
writing to the log stream), the DataMover issues an error message and shuts down.
.clear
Use with caution. If set to yes, it causes the DataMover to mark all the data in the input log stream
as ready for deletion. The DataMover then shuts down, leaving the log stream effectively empty.
See the clear.properties sample configuration file “clear.properties - Erase all records from a log
stream” on page 42.
TCPIP
Open a TCP/IP port to listen for connection requests.
.port
This specifies the TCP/IP port number that the input stage is to open to listen for connections on.
It expects to be connected to by another DataMover. Once connected, it will receive complete
data packets from the other DataMover.

Process stage keywords


The process stages are: CONSOLE, CSV, FILTER, JSON, SPLIT.

Table 3. DataMover: Process stage parameter settings

Process stage  Command                   Required        Default value  Data type  Accepted values                         Case sensitive
CONSOLE        (no parameters)
CSV            process.r.n.encoding      N               UTF-8          String     UTF-8 | The encoding used for the data  N
                                                                                   being placed in the packet
               process.r.n.delimiter     N               , (comma)      String     The delimiter in your CSV               N
               process.r.n.float_format  N               IEEE           String     IEEE | IBM                              N
               process.r.n.block         N               30             Integer    value > 0
FILTER         process.r.n.as            Y               No default     String     SMF | IZPCA                             N
JSON           process.r.n.encoding      N               UTF-8          String     UTF-8 | The encoding used for the data  N
                                                                                   being placed in the packet
               process.r.n.float_format  N               IEEE           String     IEEE | IBM                              N
               process.r.n.block         N               15             Integer    value > 0
               process.r.n.forceFields   N               No             String     Yes | No                                N
SPLIT          process.r.n.channels      Y               No default     Integer    value > 0
               process.r.n.channel.x     Y - if          No default     String     Channel name                            Y
                                         process.r.n.
                                         channels > 0

CONSOLE
Send data to the console (stdout) as text. Packets are passed through to the next stage.


CSV
Convert output records from the Continuous Collector into Comma Separated Value records.
.delimiter
This specifies a single character that is to be used as a delimiter between values. The default is a
comma.
If you use a different character in the CSV file, you must specify that delimiter to the program
reading in the CSV data.
FILTER
Filter by required record types.
.as
Only data in LGRH format can be filtered. This specifies whether it is SMF encoded data (SMF) or
table encoded data (IZPCA).
.smf_type.i
This contains the ith SMF filtering directive. Specify no more than one directive for each SMF type.
The format of the entry is:

type subtype subtype subtype …

The type is required, the subtypes are optional and, if present, delimited by a space.
The directives define a pass list. Any SMF record that arrives at the filter stage and does not match
a directive will be blocked. Packets propagated onwards from the filter stage will only contain SMF
records that match a pass directive.
Only use subtype filtering for those records that have subtypes and that follow the normal SMF
conventions for defining subtypes in the SMFxSTY field.
.smf_types
This specifies the count of smf_types entries
.smf_types.i
This contains either
• an SMF record id (id)
• a range of SMF record id's (low_id:high_id)
• or an SMF record id and a list of subtypes to pass (id st1 st2 st3)
.smf_notypes
This specifies the count of smf_notypes entries
.smf_notypes.i
This contains one of the following and acts to block the records.
• an SMF record id (id)
• a range of SMF record id's (low_id:high_id)
• or an SMF record id and a list of subtypes to pass (id st1l st2 st3)
.sysplex
This specifies a list of sysplex names the filter should pass. If omitted, the sysplex name is not
checked.
.system
This specifies a list of systems the filter should pass. If omitted, the system name is not checked.
.tables
This specifies the number of table filter directives present for table filtering.
.table.i
This contains the ith table filtering directive. Specify no more than one directive for each table.
The format of the entry is:

table_name

This is the name of the table specified in the record. For IBM Z Performance and Capacity
Analytics originated data, this is the Db2 table name, such as KPMZ_LPAR_H. There is no support
for wild cards.
The directives define a pass list. Any table record that arrives at the filter stage and does not
match a directive will be blocked. Packets propagated onwards from the filter stage will only
contain table records that match a pass directive.
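For example, a table filtering sketch (the stage numbers are illustrative; the IZPCA value selects
table encoded data as listed in Table 3, and the table name is the one cited above) that passes only
KPMZ_LPAR_H records:

process.1.1.type    = FILTER
process.1.1.as      = IZPCA
process.1.1.tables  = 1
process.1.1.table.1 = KPMZ_LPAR_H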
JSON
Convert output records from the Continuous Collector into JSON records.
.forceFields
If set to yes, a field will be created in the JSON output holding a default value if the field is not
present in the Db2 record.
If set to no, fields will be excluded from the JSON output if they are not present in the Db2 table.
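A minimal JSON process stage sketch (the stage numbers are illustrative) that forces defaulted fields
into the output:

process.1.1.type        = JSON
process.1.1.encoding    = UTF-8
process.1.1.forceFields = yes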
SPLIT
Propagate data to a JOIN input stage.
.channels
This is the count of join channels that the data packet should be copied to.
.channel.i
This is the name of the ith channel.
The data packet (and all the records in it) is propagated to the JOIN input stage corresponding to
each listed channel.
Typically, each Split or Join route passes the data through a different process stage (Filter, JSON,
CSV) and then sends its output to a different destination.
Note that the data continues onwards in the route that executed the split.

Output stage keywords


The output stages are: CONSOLE, FILE, LOGSTREAM, TCPIP.

Table 4. DataMover: Output stage parameter settings

CONSOLE
   No parameters.
FILE
   output.r.n.filename
      Required: N. Default: a blank string. Type: String. Accepted values: output filename.
      Case sensitive: Y.
   output.r.n.directory
      Required: N. Default: . (a full stop, meaning the current working directory). Type: String.
      Accepted values: output directory. Case sensitive: Y.
   output.r.n.format
      Required: N. Default: text. Type: String. Accepted values: Text | LGRH | JSON | CSV | SQL.
      Case sensitive: N.
   output.r.n.subdir
      Required: N. Default: None. Type: String. Accepted values: None | Packet | Period | Hour |
      Day | Week | Month | Year. Case sensitive: N.
   output.r.n.data_dating
      Required: N. Default: no. Type: String. Accepted values: yes | no. Case sensitive: N.
LOGSTREAM
   output.r.n.logstream
      Required: Y. No default. Type: String. Accepted values: log stream name. Case sensitive: Y.
   output.r.n.check_LGRH
      Required: N. Default: yes. Type: String. Accepted values: yes | no. Case sensitive: N.
   output.r.n.sync
      Required: N. Default: no. Type: String. Accepted values: yes | no. Case sensitive: N.
   output.r.n.enqueue_name
      Required: N. Default: null. Type: String. Accepted values: enqueue name. Case sensitive: Y.
TCPIP
   output.r.n.port
      Required: Y. No default. Type: Integer. Accepted values: TCPIP port.
   output.r.n.host
      Required: Y. No default. Type: String. Accepted values: TCPIP host. Case sensitive: Y.
   output.r.n.use_ssl
      Required: N. Default: no. Type: String. Accepted values: yes | no. Case sensitive: N.
   output.r.n.use_buffer
      Required: N. Default: yes. Type: String. Accepted values: yes | no. Case sensitive: N.
   output.r.n.directory
      Required: N. Default: buffer. Type: String. Accepted values: TCPIP buffer directory.
      Case sensitive: Y.
   output.r.n.pacing
      Required: N. Default: 0. Type: Integer. Accepted values: value ≥ 0.
   output.r.n.catchup_trackers
      Required: N. Default: 100. Type: Integer. Accepted values: value > 0.
   output.r.n.nif
      Required: N. Default: null. Type: String. Accepted values: network interface.
   output.r.n.stats_interval
      Required: N. Default: 0. Type: Integer. Accepted values: 0, 15, 30, 60 seconds, or 0, 1, 2,
      5, 10, 15, 20, 30, 60 minutes (for example, output.r.n.stats_interval = 15 seconds).
      Case sensitive: N.

CONSOLE
Dump data to the sysout (JOBLOG).
FILE
Write data to the USS file system.
.directory
This is the directory that the data gets written out to.
.filename
If present, this forces all of the data to be written out to a file with this file name.
If not present, the data will be written out into files corresponding to the sourceType value in the
arriving data packets. After processing by a JSON or CSV stage, this will correspond to the name of
the IBM Z Performance and Capacity Analytics Db2 table that the data came from.
.subdir
This has a range of values and serves to automatically break the data files up amongst a
subdirectory structure. The values are:
None
No subdirectory is used.
Hour
A separate subdirectory is created for data for each hour.
Day
A separate subdirectory is created for data for each day.

Week
A separate subdirectory is created for data for each week.
Month
A separate directory is created for data for each month.
Year
A separate directory is created for data for each year.
Period
The code will pick a subdirectory to use based upon the aggregation period indicated in the
sourceType. Timestamp data is written to a daily directory. Hourly data is written to a weekly
directory. Daily data is written to a monthly directory. Weekly and monthly data is written to a
yearly directory.
Packet
This uses the sourceType as the subdirectory name and writes each packet's data out into a
separate timestamped file.
.format
This specifies the format of the data. Values are: Text, LGRH, JSON, CSV. The latter 3 enable some
special processing in the File stage to improve the quality of the output.
.data_dating
This works with LGRH, JSON, and CSV format data and requires a time-based subdir setting (that
is, neither None nor Packet).
Normally the date from the data packet is used by the subdir element to determine the
subdirectory that the data will be written to. This will typically reflect the time that the record
was processed.
If you specify yes, then the records in the packet will be processed individually and each record's
timestamp (Epoch for JSON and CSV files) will be extracted and used to determine which
directory it gets written to. This can significantly increase the overhead of writing a data packet
out, so only use it when really required.
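A FILE output stage sketch (the directory path is an assumed example) that writes CSV data into daily
subdirectories, placing each record according to its own timestamp:

output.1.1.type        = FILE
output.1.1.directory   = /var/IZPCA/output
output.1.1.format      = CSV
output.1.1.subdir      = Day
output.1.1.data_dating = yes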
LOGSTREAM
Write to a log stream.
.logstream
The name of the output log stream.
Symbolic substitution is supported.
.sync
If set to yes, log stream I/O occurs in synchronous mode.
If set to no or omitted, the faster async I/O method is used.
.enqueue_name
If you have multiple DataMovers writing to the same log stream, ensure that they all have the
same .enqueue_name values specified. This will cause them to wait on the same Sysplex scoped
enqueue to get access to the log stream before they write a data packet out to it. This ensures that
the sequence of the records they are writing out is preserved.
If the DataMover is the only one writing to the log stream, you should not specify a value
for .enqueue_name as it incurs a small performance overhead.
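A LOGSTREAM output stage sketch (the log stream and enqueue names are assumed examples) for one of
several DataMovers sharing the same output log stream:

output.1.1.type         = LOGSTREAM
output.1.1.logstream    = IFASMF.CF.LS02
output.1.1.sync         = no
output.1.1.enqueue_name = IZPCA.LS02.ENQ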
TCPIP
Connect and write data to a remote TCP/IP server.
.host
This is the name of the machine that the TCP/IP output stage should try to connect to.
You can use localhost for this machine (although this may vary with your TCP/IP implementation).
.port
This specifies the port on the target machine that the stage should try to connect to.

The destination can be either a DataMover (local or remote), or a local IBM Z Common Data
Provider Data Streamer as described in the IBM Z Common Data Provider Open Streaming API.
.use_ssl
If set to yes, SSL will be used for secure communications. See the installation instructions for
details.
.directory
This is the name of a USS directory used to buffer data packets if communications with the remote
system are down. It can accumulate quite a lot of data packets.
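A TCPIP output stage sketch (the host name and port are assumed examples) that buffers packets in the
default buffer directory while the remote system is unreachable:

output.1.1.type       = TCPIP
output.1.1.host       = hub.example.com
output.1.1.port       = 54020
output.1.1.use_ssl    = no
output.1.1.use_buffer = yes
output.1.1.directory  = buffer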

Advanced configurations
The samples provided with IBM Z Performance and Capacity Analytics cover the simple scenarios for
which you need to configure the DataMover. This section looks at some of the more complex scenarios
you might wish to deploy.

SMF filtering
In this scenario, the DataMover is to receive SMF records, but needs to remove some of them before
feeding them into the log stream for the Continuous Collector. Note that it is more efficient to change the
parameters of the SMF Extractor to only collect the records that you want the Continuous Collector to
process, but if that is not possible, you can use the technique described here for the DataMover.
The data flow is illustrated in Figure 40 on page 95.

Figure 40. DataMover configuration scenario: SMF filtering (INPUT: TCPIP → PROCESS: Filter → OUTPUT: Logstream)

The input is received over TCP/IP and written out to the log stream for the Continuous Collector, as usual
for the hub, but a filter process has been inserted into the middle. The filter process will be given a list of
the SMF types and subtypes that are to be passed through to the Collector.
A configuration for this scenario is shown in the following example.

#
# This is a sample configuration for a DataMover running on a
# Hub System
#
routes = 1
route.1.name = Hub
#
# Listen for connections from the Spokes and receive their data.
#
# If you want to use SSL you need to perform the certificate exchange
# before you change the value to YES
#
input.1.type = TCPIP
input.1.port = 54020
input.1.ssl = no
#
# Filter out unwanted SMF records
#
process.1 = 1
#
process.1.1.type = FILTER
process.1.1.as = SMF
process.1.1.smf_types = 5
process.1.1.smf_type.1 = 30
process.1.1.smf_type.2 = 42
process.1.1.smf_type.3 = 70
process.1.1.smf_type.4 = 113
process.1.1.smf_type.5 = 120
#
# For output, we write the data into the log stream
#
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.LS02
output.1.1.sync = YES
#

Figure 41. SMF filtering configuration

The filter will be automatically invoked against all data packets received by the TCPIP stage. The records
that remain from each data packet will then be passed to the LOGSTREAM stage for output.

Data splitting
In this scenario, the DataMover is to receive data from a single source and distribute it to two recipients,
optionally filtering the stream sent to one or both of the data destinations.
The data flow is illustrated in Figure 42 on page 97.

Figure 42. DataMover configuration scenario: data splitting (Route 1 – INPUT: TCPIP → PROCESS: Split → OUTPUT: Logstream; Route 2 – INPUT: Join → PROCESS: Filter → OUTPUT: Logstream)

This is a more complex configuration. Two routes are used, one being the primary input and the other the
joined secondary route.
• The primary route receives the data, splits a copy off to the secondary route, and then writes it out to a
log stream.
• The secondary route receives the split input, filters it, and then writes the filtered SMF data out to a
separate log stream.
A configuration for this scenario is shown in the following example.

#
# This is a sample configuration for a DataMover running on a
# Hub System
#
routes = 2
route.1.name = Primary
route.2.name = Secondary
#
# Listen for connections from the Spokes and receive their data.
#
# If you want to use SSL you need to perform the certificate exchange
# before you change the value to YES
#
input.1.type = TCPIP
input.1.port = 54020
input.1.ssl = no
#
# Copy the data across to the other side
#
process.1 = 1
#
process.1.1.type = SPLIT
process.1.1.channels = 1
process.1.1.channel.1 = secondary
#
# For output, we write the data into the log stream
#
outputs.1 = 1
#
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.PRI
output.1.1.sync = YES
#
# Receive the joined input
#
input.2.type = JOIN
input.2.channel = secondary
#
# Filter the data in the secondary channel
#
process.2 = 1
#
process.2.1.type = FILTER
process.2.1.as = SMF
process.2.1.smf_types = 5
process.2.1.smf_type.1 = 30
process.2.1.smf_type.2 = 42
process.2.1.smf_type.3 = 70
process.2.1.smf_type.4 = 113
process.2.1.smf_type.5 = 120
#
# For output, we write the data into the log stream
#
outputs.2 = 1
#
output.2.1.type = LOGSTREAM
output.2.1.logstream = IFASMF.CF.SEC
output.2.1.sync = YES
#

Figure 43. Data splitting configuration

Data packets received through the TCPIP stage will be copied to the secondary stream and then written
out to the primary log stream. Data packets reaching the secondary route will be filtered and the surviving
records will be written out to the secondary log stream.

Installing the Collator Function for IBM Z Performance and Capacity Analytics
Overview
The Collator is a stand-alone tool to sort and package SMF records that have been captured by the IBM
Z Performance and Capacity Analytics SMF Extractor for archiving. It enables you to create files holding
SMF records sorted by:
• SMF Type
• SMF Subtype
• Sysplex of Origin
• System of Origin
• Date and Time of Creation
This enables you to split records into time and origin bounded bundles that can be archived and easily
searched. For example, if you had two sysplexes you could set up several streams of SMF records for
archiving:
• All the security related SMF records from PLEX1, split into 8 hour shifts.
• All the security related records from PLEX2, split into 6 hour shifts.
• All the I/O related records from PLEX1, split into 4 hour shifts.
• All the TCPIP related records from PLEX2, split into 2 hour shifts.
The Collator must be combined with the IBM Z Performance and Capacity Analytics SMF Extractor to
obtain the SMF records it will sort. It can also be combined with the IBM Z Performance and Capacity
Analytics DataMover to split off copies of SMF records and then combine streams from multiple systems.
The execution flow of the Collator is to read the SMF records from its input, sort them according to a set of
user provided collation rules and then write the SMF files out to a number of files on DASD. These files can
then be compressed and archived by the software of your choice. The format of these files is identical to
that of a typical MANx file – a VBS data set containing 1 SMF record per row.

Deployment
There are several options for deploying the Collator in a production environment. These range from
stand-alone deployment, where a system is simply set up to archive its own records, through remote
sysplex-based archiving, to hybrid deployments where SMF records are streamed to a hub system and fed
to both the Collator and the Collector.

Stand-alone Deployment
The simplest, stand-alone deployment works with the IBM Z Performance and Capacity Analytics SMF
Extractor:

Figure 44. Collator Stand-alone Deployment

The Collator function extends only as far as producing sets of files containing sorted SMF data. The
archival (and retrieval, searching, and purging) of those files is something that you need to implement.
The log stream used here is a simple DASD log stream, which keeps the SMF archival traffic away from
your Coupling Facilities. For performance reasons, it is important that the SMF Extractor only be
configured to trap SMF records that you want to archive. Any trapped record that does not meet the
criteria for at least one of the collation groups is discarded by the Collator after being checked
against all of them.

Sysplex Deployment
Typically, you would deploy the collator within a Sysplex, enabling you to produce files that contain data
from multiple systems within the sysplex. For this deployment you need to use an IBM Z Performance and
Capacity Analytics DataMover as a Sender and Receiver to transport the SMF records to a single central
system for collating.

Figure 45. Collator Sysplex Deployment

In this case, data is being gathered on SYS1 and SYS2 by the SMF Extractor, which writes it out into a
log stream. A DataMover configured as a Sender (Input: Log stream, Output: TCPIP) then transmits it to
another DataMover that is configured as a Receiver (Input: TCPIP, Output: Log stream) on the collation
system (SYSX). The Collator then reads the data from the log stream and sorts it into files. The Collation
system can be any system of the user's choice. The deployment will use some TCPIP bandwidth and
some CPU – although the CPU can be drawn from the system's zIIP processors. The Collating system does
not have to be in the same sysplex as SYS1 and SYS2. It is important for performance reasons to ensure
the SMF Extractor is configured to only extract the SMF records that you want to archive. Unwanted SMF
records will only be discarded once the Collator has decided that they are not of interest. Transmission
occurs over TCPIP, which can be configured to be protected by SSL encryption. While the input systems
can be in different sysplexes, beware of combining streams of SMF data that you don’t want to have
archived in the same set of files. While it is possible to split the input stream in the collator to separate the
streams, it is more efficient to duplicate the collation configuration (DataMover, Log Stream, and Collator)
and feed the data for each sysplex through a separate collation pipeline.

Intra-Sysplex Deployment
In the event that your collation system is inside your sysplex, it is possible to configure the SMF Extractors
and the Collator to use a shared sysplex log stream.

Figure 46. Intra-Sysplex Deployment

While this should work, you need to be aware that it has a different resource consumption profile
and behavior under load from the DataMover and TCPIP model. The primary difference is that it will be
consuming Coupling Facility resources (bandwidth and storage) and will need to be offloaded from the
CF to DASD from time to time (a process that can take longer than simply allocating another DASD
segment). While this should be fine for low volume SMF records, the TCPIP implementation (using DASD
log streams) is recommended for medium to high volume SMF data streams to minimize the impact to the
Coupling Facilities and delays due to offloading.

Hybrid Deployment
The deployment on the Sysplex Spoke systems is very similar to the deployment for IBM Z Performance
and Capacity Analytics Automated Data Gathering. You can combine these two functions:
• The SMF Extractor has to be configured to gather the SMF records you need for archiving and the SMF
records you need for processing.
• If you are Collating the SMF records on a different system to the Hub where you are running the
Collector, you need a more complex configuration in your DataMover (Sender) on the Spoke. It needs to
Split the stream of SMF data into two separate streams – one for the Collator and one for the Collector,
filter out unwanted records from each stream and then send each stream to the correct destination.
• If you are running the Collation on your Hub system, you should send all the data to the Hub in a single
data stream. The receiver on the Hub will then need to be configured to split and filter the data streams
and write the SMF records out to two DASD log streams – one for the Collator and one for the Collector.
This approach minimizes TCPIP bandwidth. If you would sooner minimize the Hub's CPU usage, you can
do the stream splitting and filtering on the Spoke systems and run multiple Receiver DataMovers on the
Hub.

Hybrid Deployment, Separate Systems


In this scenario, the Collator is running on one system and the Collector is running on a different one.

Figure 47. Hybrid Deployment, Separate Systems

Each Spoke can specify the IP address of its collation system independently, so there is no need for them
all to feed into the same system. This allows you to perform the collation and archiving close to the source
of (at least some of) the data, reducing data transmission overhead. There is some additional CPU cost on
the Spoke system (it is zIIP eligible), but it is less than the cost of running two SMF Extractors and two
Senders.

Hybrid Deployment, Shared Data Stream


In this scenario, the Collator and the Collector are running on the same Hub system and the data is sent to
the Hub in a single Data Stream.

Figure 48. Hybrid Deployment, Shared Data Stream


This reduces the TCPIP bandwidth used for transmission, by not duplicating SMF records that are going
to be passed to both the Collector and the Collator until after the data has reached the Hub. The hub will
incur the cost of splitting and filtering the data streams from each Spoke system using this model. For a
significant number of Spoke systems, this may add up to a noticeable CPU burden on the Hub system.
This is a good solution if the set of records you are collating is very similar to the set of records you are
collecting for IBM Z Performance and Capacity Analytics, as there are performance benefits from only
transmitting the data once. You may be able to omit the filtering in this instance, simply writing the same
data out to both log streams and paying the cost for rejecting the occasional record on both sides.

Hybrid Deployment, Single Data Stream and Log Stream


There is a further potential variant of this where the data is just written to a single log stream on the
Hub from where both the Collector and the Collator read it. For this to work both of them would have to
be configured to use checkpointing to track their read position in the log stream (as opposed to simply
deleting records that they have read) and both will then incur some significant extra cost from filtering and
or processing records that they end up rejecting.
The Collector cannot, currently, be configured not to delete records it has processed, which could
cause the Collator to miss some records. For this reason, this solution is neither supported nor
recommended.

Hybrid Deployment, Split Data Streams


In this scenario, the Collator and the Collector are running on the same Hub system, but the Data
Stream is Split and Filtered on the Spoke system, before transmission to the Hub. The Hub runs two
Data Receivers – one for the Collator and another for the Collector. Each Spoke sends data to both Data
Receivers.

Figure 49. Hybrid Deployment, Split Data Streams

This approach keeps the cost for splitting and filtering the streams on the Spoke systems, at the cost of
duplicating the transmission of records destined for both Collation and Collection. In a large deployment
this may be necessary to manage the Hub's total CPU consumption.
This is a good solution if there is little overlap between the set of records you wish to archive and the set
you wish to feed into IBM Z Performance and Capacity Analytics. This is because there would be very little
data that would be transmitted to the hub twice in this situation.

Collation
This is the process of collecting the data into appropriate groups of SMF records and then writing the
records for each group out into archive files. The same SMF record may be in multiple groups; in this
case it will be written out into the archive file for each group.

Collation Cycle
Fundamentally the execution cycle of the Collator is:
1. Read a record from the Collator's input log stream.
2. Determine its key attributes.
3. See which collation groups (if any) it fits into.
4. Write it out to the files for each indicated collation group.
5. Remove the record from the Collator's input log stream.

Collation Rules
Each collation group is named and has one or more rules defined that determine which SMF records are
members of it.
Note: Formulating the rules that will be used during test and production is an exercise for the customer
to complete. No 'standard' rules are shipped with IBM Z Performance and Capacity Analytics.
Classification decisions may be based on:
1. SMF record Type is
2. SMF record Type is between
3. SMF record Type and Subtype are
4. SMF record Type is not
5. SMF record Type is not between
6. SMF record Type and Subtype are not
7. MVS System ID Is
8. Sysplex Name Is

Collation Shifts
The files for a collation group are automatically split up by year and day, with a separate file for each
day. They can be further split up by specifying a shift value. The default is DAILY, which results in all
of the SMF records that are received and classified as part of the group during each day being written out
to the same file. By specifying lower shift values, you can cause the file to be split at predictable times
of the day. The splits are timed to occur by the local time on the system the SMF records were issued on,
not by GMT or UTC, and not adjusted to any other time zone.

Table 5. Collation shift values and split times

Shift Value    Split Times
DAILY          00:00:00
12_HOUR        00:00:00, 12:00:00
8_HOUR         00:00:00, 08:00:00, 16:00:00
6_HOUR         00:00:00, 06:00:00, 12:00:00, 18:00:00
4_HOUR         00:00:00, 04:00:00, 08:00:00, 12:00:00, 16:00:00, 20:00:00
3_HOUR         Every 3 hours, starting at 00:00:00
2_HOUR         Every 2 hours, starting at 00:00:00
1_HOUR         Every hour, starting at 00:00:00

Segmentation
A shift for a collation group could contain billions of SMF records. The collation group may need to be
broken into smaller segments to suit the file system it will be written out to. The mechanism used for
this is a low-level segmentation index. A maximum record count for a segment is specified in the ZWriter
output specification and, when that number is met, the existing segment is closed and a new one is
started. Additional segments may also be started if the Collator is stopped and restarted or if data arrives
significantly later than data that has already been written out to the last segment.

Output File Names


The names of the data sets produced by the Collator follow this template:
hlq.name.Dyyyyjjj.Shh.ll
This generates a data set name where:
• The hlq is the start of the data set name and may contain multiple segments. Special values for
segment names are &SYSPLEX and &SYSTEM, which get replaced with the corresponding values from
the SMF event.
• name is the name of the collation group
• yyyy is the current year
• jjj is the julian date (the day of the year)
• hh is the hour at the start of each shift
• ll is a segmentation index
The segmentation index uses a base 36 numbering scheme and starts at the value AA. There are 936
possible segments within a shift period (the 360 names starting with a numeric digit are unusable). If you
need more than this, you need to use a smaller shift period.
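For example, with an hlq of ARCH.&SYSPLEX and a group named SECURE, a record from sysplex PLEX1
arriving during the shift starting at 08:00 on day 123 of 2024 (illustrative values) would be written to
the first segment:
ARCH.PLEX1.SECURE.D2024123.S08.AA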

Collator Configuration
The basis of the Collator is a DataMover. This is a configurable pipeline processor that provides reusable
processing stages. It will be a continuously running started task.

New Stages
For the Collator function there are two new stages:
• COLLATE – This is a process stage that takes the input data and sorts it into the separate collation
groups. Collate performs the High and Mid level collation actions. As it gathers records for each set it
will accumulate them into a packet. When a packet is full – or hasn’t been updated for a few seconds –
it will send it to the next processing stage.
• ZWRITER – This is an output stage that will write the collated archive data out to one or more data files.
ZWriter performs low level segmentation and the associated file I/O. It is strongly recommended that the
files it is writing to are located on DASD. They can be compressed and moved to tape after they have
been output.
Typical usage would be:
1. Input: LOGSTREAM Stage

2. Process: COLLATE Stage
3. Output: ZWRITER Stage

Existing Stages
In addition you may need to use the following stages
• JOIN – This is an input stage, which takes the output from a matching SPLIT stage. The correlation is
achieved through a matching channel name that is specified as a parameter.
• SPLIT – This process stage duplicates a data stream to each JOIN stage that is subscribed to the same
channel.
• FILTER – This process stage can be used to remove records from the SMF stream. The SMF data stream
must be unpacked to provide filtering. The filtering targets the same parameters as the grouping.
Records that do not pass the filter are discarded.
• PACKSMF – This process stage is used to repack SMF records after they have been unpacked by the
FILTER stage. It should be used before the records are transmitted over TCPIP or written to a log
stream. If this stage is omitted, it can result in inefficient usage of log stream space (making the log
stream a lot bigger than it needs to be) and less efficient TCPIP packet transmission.
Typical usage to duplicate and filter a stream is to add a Split stage and a Filter stage, then to add a
second route using the join for input:
• Route 1:
1. Input: LOGSTREAM Stage
2. Process: SPLIT Stage
3. Process: FILTER Stage
4. Process: PACKSMF
5. Output: LOGSTREAM Stage
• Route 2:
1. Input: JOIN Stage
2. Process: FILTER Stage
3. Process: PACKSMF
4. Output: LOGSTREAM Stage
This would write two filtered streams to different log streams and is the basis for the Hub splitter
configuration. The Spoke splitter is the same thing with TCPIP output stages instead of LOGSTREAM
output stages. These stages are available in the Collator, the DataMover, and the Forecaster.
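The following sketch outlines a two-route Hub splitter of that shape (log stream names and SMF types
are illustrative; the stage counts follow the conventions of the samples in this chapter, and PACKSMF is
specified in the same .type form as the other stages):

routes = 2
route.1.name = Primary
route.2.name = Secondary
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS02
#
process.1 = 3
process.1.1.type = SPLIT
process.1.1.channels = 1
process.1.1.channel.1 = secondary
process.1.2.type = FILTER
process.1.2.as = SMF
process.1.2.smf_types = 1
process.1.2.smf_type.1 = 30
process.1.3.type = PACKSMF
#
outputs.1 = 1
output.1.1.type = LOGSTREAM
output.1.1.logstream = IFASMF.CF.PRI
#
input.2.type = JOIN
input.2.channel = secondary
#
process.2 = 2
process.2.1.type = FILTER
process.2.1.as = SMF
process.2.1.smf_types = 1
process.2.1.smf_type.1 = 80
process.2.2.type = PACKSMF
#
outputs.2 = 1
output.2.1.type = LOGSTREAM
output.2.1.logstream = IFASMF.CF.SEC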

Stage Parameter
Existing DataMover mechanisms are used, reading configuration data from the main Java properties
file that describes the DataMover's active configuration.

COLLATE Parameters

process.r.p.groups = 2
#
process.r.p.group.1.name = SECURE
process.r.p.group.1.hlq = ARCH.&SYSPLEX.&SYSTEM
process.r.p.group.1.smf_types = 3
process.r.p.group.1.smf_type.1 = 80
process.r.p.group.1.smf_type.2 = 70 3 4
process.r.p.group.1.smf_type.3 = 14:18
process.r.p.group.1.system = SYS1 SYS2 SYS3
process.r.p.group.1.sysplex = PLEX1 PLEX2
process.r.p.group.1.shift = 4_HOUR
#

process.r.p.group.2.name = TSO
process.r.p.group.2.hlq = ARCH.&SYSPLEX.&SYSTEM
process.r.p.group.2.smf_types = 1
process.r.p.group.2.smf_type.1 = 63
process.r.p.group.2.shift = DAILY

The route number is ‘r’ and the process stage number is ‘p’.
This would produce data sets starting with:
• ARCH.PLEX1.SECURE...
• ARCH.PLEX2.SECURE...
• ARCH.PLEX1.SYS1.TSO...
• ARCH.PLEX1.SYS2.TSO...
• ARCH.PLEX1.SYS3.TSO...
• ARCH.PLEX2.SYS1.TSO...
• ARCH.PLEX2.SYS2.TSO...
Qualifier values specified on groups override qualifier values specified on the COLLATE stage itself.
The hlq value specified for each group is the first part of the data set name. This may contain multiple
segments.
The special values for the hlq are &SYSTEM and &SYSPLEX, which are replaced with the corresponding
values from the SMF event. It is the user's responsibility to ensure that the total length of the final
group name (including the dots between segments and the segmentation suffix added by the ZWRITER stage)
is not more than 44 characters long and is a valid data set name.
If the .system or .sysplex parameter is omitted from a group definition, then no records will be excluded
from the group on the basis of the omitted conditions. A group that was defined with only a name would
collect all SMF records generated within all systems feeding into the Collator.
You may specify a list of included SMF types with an .smf_types qualifier. The individual entries must
specify one of:
1. A single SMF type (e.g. 80)
2. A range of SMF types (e.g. 80:85)
3. A single SMF type and a list of one or more blank delimited subtypes (e.g. 80 2 3)
You may also specify a list of excluded SMF types with an .smf_notypes qualifier. The individual entries
must specify one of:
• A single SMF type (e.g. 80)
• A range of SMF types (e.g. 80:85)
• A single SMF type and a list of one or more blank delimited subtypes (e.g. 80 2 3)
The exclusion rules are applied after the inclusion rules, so including 80:90 and then excluding 84:87
would leave you with just types 80, 81, 82, 83, 88, 89 and 90.
If there are no inclusion rules, then all SMF records are included. If there are no exclusion rules, then no
records are excluded.
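Expressed as COLLATE group parameters, that include/exclude example would look like this sketch (the
group name and numbering are illustrative):

process.1.1.group.1.name = IODATA
process.1.1.group.1.smf_types = 1
process.1.1.group.1.smf_type.1 = 80:90
process.1.1.group.1.smf_notypes = 1
process.1.1.group.1.smf_notype.1 = 84:87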
The allowed shift values are:
• DAILY – Each day's data is kept in a single file.
• 12_HOUR – The day's data is split at noon and midnight into 2 files.
• 8_HOUR – The day's data is split at midnight, 8am and 4pm.
• 6_HOUR – The day's data is split at midnight, 6am, noon and 6pm.
• 4_HOUR – The day's data is split at midnight, 4am, 8am, noon, 4pm and 8pm.
• 3_HOUR – The day's data is split at midnight, 3am, 6am, 9am, noon, 3pm, 6pm and 9pm.
• 2_HOUR – The day's data is split every 2 hours starting at midnight.

• HOURLY – The day's data is split off every hour.


These shift-based files may be further subdivided by the low-level segmentation performed by the
ZWRITER stage.

output.r.o.fileopts = ab+,noseek,type=record,recfm=VBS,lrecl=32756,blksize=27998
output.r.o.max_records = 500000

The route number is 'r' and the output stage number is 'o'. The name above is the group name from
the COLLATE stage. The range of values for the fileopts string is described at:
https://www.ibm.com/docs/en/zos/2.5.0?topic=functions-fopen-open-file
The filename that will be used will be the collation name derived above with the addition of a low-level
segmentation suffix. The suffix starts at .AA and counts through .AZ, .A9, .BA, .B9, .CA, .C9, and so on,
ending at .Z9. This allows up to 936 low-level segments for each collation group shift. Optimal fileopts
and max_records settings are the responsibility of the customer to determine. SMF records must be written
to VBS format data sets, or issues with record length may be encountered. The output will be a sequence
of SMF records written to a VBS data set in the same format as your SMF extracted files.
Note: Once the files have been written out, no further management of the files is offered by IBM Z
Performance and Capacity Analytics.
• Policies for size, retention, compression, and so on can be set through system storage policy
management using DFSMS.
• Offload to tape can be automated through DFHSM or equivalent.

Sample Collate File


A sample Collate configuration file is provided:

#
routes = 1
#
route.1.name = Collate
#
input.1.type = LOGSTREAM
input.1.logstream = IFASMF.CF.LS02
input.1.check_LGRH = yes
input.1.block = 100
input.1.wipe = no
input.1.checkpoint = no
input.1.max_pack = 10
input.1.sourcename = LOGSTREAM
input.1.sourcetype = AGGREGATE
#
process.1 = 1
process.1.1.type = COLLATE
process.1.1.groups = 3
process.1.1.group.1.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.1.name = SMF100
process.1.1.group.1.smf_types = 1
process.1.1.group.1.smf_type.1 = 100
process.1.1.group.1.shift = 2_HOUR
process.1.1.group.2.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.2.name = SMF90
process.1.1.group.2.smf_types = 1
process.1.1.group.2.smf_type.1 = 90:105
process.1.1.group.2.shift = 2_HOUR
process.1.1.group.3.hlq = DEMO.COLTEST.&SYSPLEX
process.1.1.group.3.name = BAR90
process.1.1.group.3.smf_notypes = 1
process.1.1.group.3.smf_notype.1 = 90:105
process.1.1.group.3.shift = 2_HOUR
#
outputs.1 = 1
output.1.1.type = ZWRITER
output.1.1.fileopts = ab+,noseek,type=record,recfm=VBS,lrecl=32756,blksize=27998,space=(Cyl,
(100,50),rlse)
output.1.1.max_records = 500000
#


This will read SMF records from a log stream called IFASMF.CF.LS02 and collate them into three groups:
• The first, SMF100, will contain all of the SMF type 100 records in the input stream
• The second, SMF90, will contain all SMF records of type 90 thru 105
• The third, BAR90, will contain all SMF records except records of type 90 thru 105
No records will be discarded as they will all match either the SMF90 or the BAR90 group.
The files will be written out to three data sets:
• DEMO.COLTEST.&SYSPLEX.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.&SYSPLEX.SMF90.Dyyyyddd.Shh.ss
• DEMO.COLTEST.&SYSPLEX.BAR90.Dyyyyddd.Shh.ss
The &SYSPLEX symbolic will be replaced with the name of the sysplex that the records came from. If
the records are from more than one sysplex, then more than one output data set may be created for each
group. If you had a mixture of records from PLEX1, PLEX2 and PLEX3 as input, then the output files for the
SMF100 group would be:
• DEMO.COLTEST.PLEX1.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.PLEX2.SMF100.Dyyyyddd.Shh.ss
• DEMO.COLTEST.PLEX3.SMF100.Dyyyyddd.Shh.ss
Each file would only contain records from the indicated sysplex.

Collator Installation
Unpack the Collator.tar:
tar -xovf Collator.tar
Copy the Collator directory to where you want a working directory for the Collator. Edit the Collator.sh file
and fill in the configuration details.
• The path to the working directory
• The path to a 64-bit Java 8 installation
If you aren’t turning an IBM Z Performance and Capacity Analytics installation into a Hybrid installation,
you’ll need to allocate a new DASD Logstream. The SMF Extractor must be configured to capture and
archive SMF Records. See “Step 1: Installing the SMF Extractor” on page 45.
Edit Collator/config/collate.properties
• Specify the name of the input log stream
• Specify your collation rules
• Ensure the hlq for the output data sets is something the job will have the authority to create new data
sets under
There are two JCL samples in the CNTL data set. You’ll need to copy them to a JCL data set.
• DRLJCOP is the PROC to run the Collator
– You need to change the working directory name
• DRLJCOJ is JCL to run the PROC as a JOB
– You need to change the PROCLIB
Submit the job and the collator will start running.

Data Splitter
Overview
The Data Splitter is used with the newly enhanced SMF Extractor to distribute raw SMF data from
additional log stream outputs to one or more subscribers. If you have not already done so, review the
"Introduction to the Data Splitter and the SMF Extractor" on page 11.
The only other component the Data Splitter requires to be installed and running is the SMF Extractor.
You can set up a DataMover (Catcher or Receiver) to receive the streamed raw SMF records and write
them to disk, if that makes it easier for you to process them. If you choose to implement your own
TCPIP code to receive the streamed SMF records from the Data Splitter, they will arrive packed into
IBM Z Common Data Provider Type 2 Open Streaming API Data Packets; see "Receiving raw SMF records
from the SMF Extractor" on page 11.

Configuring the SMF Extractor


If you need to send a stream of raw SMF records to one or more specific receivers, it is best to
configure the SMF Extractor to write those specific records to a separate log stream. This lets the Data
Splitter process them efficiently (having it read records only to discard them is a waste of CPU) and it
prevents the other consumers of SMF data – the IBM Z Performance and Capacity Analytics DataMover
and, optionally, the IBM Z Performance and Capacity Analytics Collator – from having to process and
discard SMF records that are only of interest to the Data Splitter.
You will need to allocate a new DASD log stream for the SMF Extractor to write the SMF records that the
Data Splitter is to process.

Installing the Data Splitter


Unpack the Data Splitter's tar file:

tar -xvof /usr/lpp/IBM/IZPCA/v3r1m0/IBM/DRLPJDS -C /var/IZPCA

Copy the Data Splitter directory to where you want the working directory for the Data Splitter.
• Each Data Splitter that you want to run should have its own working directory.
Edit the DataSplitter.sh file and fill in the configuration details.
• The path to the working directory
• The path to a 64-bit Java 8 installation
There are two JCL samples in the CNTL data set. You will need to copy them to a JCL data set.
• DRLJDSP is the PROC to run the Data Splitter
  – You will need to change the working directory name
• DRLJDSJ is JCL to run the PROC as a JOB
  – You will need to change the PROCLIB
When you submit the job, the Data Splitter will start running. You need to prepare its configuration
before you do this.

Configuring the Data Splitter


Like other DataMover class tools, the Data Splitter reads its configuration from a .properties file in its
config directory. The following sample configuration is provided:
• RawSplitter – which sends the data directly to each targeted receiver


Understanding the sourcetype attribute


SourceType is an attribute of a Data Packet. It is a text string that identifies the nature of the contents
of the packet – much like a declaration on an envelope or a parcel. It allows the packet to be efficiently
routed without the need to continuously open and repack it in order to determine where to send its
contents.
The Data Splitter uses a Collate stage in .mode=split to sort the raw SMF data into data packets and tag
them each with a SourceType. The value it puts into the SourceType attribute is the name of the collation
group that trapped the SMF records. The same SMF record may be trapped by multiple collation groups.
Receivers that are being sent multiple data streams need to be aware of the possibility that they may
get sent the same record multiple times in different streams – depending on how the collate stage is
programmed.
• Outside of SPLIT mode, the COLLATE stage ends up putting the resolved data sets name in the
sourceType, from where it is picked up by the ZWriter stage.

Direct streaming
Using the RawSplitter configuration, each TCPIP output stage will use the SourceType attribute to decide
which packets to send to its receiver. Each receiver will only get packets that the Data Splitter has been
configured to send them.

Chapter 5. Installation reference

Dialog parameters
This topic describes dialog parameters that are set initially by member DRLEINI1 in the
DRLxxx.SDRLEXEC library and read from the userid.DRLFPROF data set. IBM Z Performance and Capacity
Analytics initializes a new user's first dialog session with parameter settings from userid.DRLFPROF. From
that point forward, a user's dialog parameters are in personal storage in member DRLPROF in the library
allocated to the ISPPROF ddname, which is usually tsoprefix.ISPF.PROFILE. If DRLFPROF exists, a user
changes parameter values through the Dialog Parameters window. DRLEINI1 continues to set parameters
that do not appear in the Dialog Parameters window. It does this when a user starts IBM Z Performance
and Capacity Analytics.
“Step 4: Preparing the dialog and updating the dialog profile” on page 21 describes the installation step
where userid.DRLFPROF is customized for your site. It refers to this section for descriptions of:
• “Modifying the DRLFPROF data set” on page 113
• “Overview of the Dialog Parameters window” on page 114
• “Dialog parameters - variables and fields” on page 115
• “Allocation overview” on page 124

Modifying the DRLFPROF data set

About this task


The DRLFPROF data set contains user modifiable parameters. A sample of the DRLFPROF data set is
provided in member DRLFPROF in library SDRLCNTL. To customize DRLFPROF with your site specific
values, allocate a data set with the name userid.DRLFPROF and copy in the sample DRLFPROF member
from the SDRLCNTL library.
For a description of the fields that can be modified in the userid.DRLFPROF data set, see “Dialog
parameters - variables and fields” on page 115.
When editing the userid.DRLFPROF data set, note that:
• IBM Z Performance and Capacity Analytics regards any characters after the /* characters as comments.
This means that //* JCL comments cannot be used. A closing */ is recommended but not required.
• The format for field assignment is: field-name = value [/* comment [*/]] except as noted below. No other
tokens may be present. Tokens are case insensitive.
• Each field assignment must be completed on one line. Continuation is not supported.
• Any value (even integer values) can be given as a REXX-style string, delimited by the single (') or double
(") quotation marks. Escaping of delimiter characters works in the same way as a REXX string.
• If a value does not begin with a ' or " character, only the first blank-separated word present after the =
character is taken.
• Though sequence numbering in DRLFPROF may not cause errors, it is not supported and should be
turned off.
• For the fields DEF_JCLSTA1, DEF_JCLSTA2, DEF_JCLSTA3 and DEF_JCLSTA4, the value is taken as any
characters between the = and the '/*, or end of the line if no comment is present. Delimiting this value
with double quotation marks (") is highly recommended but not required.
• If the above recommendations are adhered to, the DRLFPROF file syntax is a subset of REXX syntax and
so syntax highlighting can be used for easier editing.
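Applying those rules, a fragment of userid.DRLFPROF might look like this sketch (the values are
illustrative; the variable names are described in "Dialog parameters - variables and fields" on page 115):

def_db2subs = DSN            /* Db2 subsystem name */
def_dbname = "DRLDB"         /* REXX-style string value */
def_drlshwid = NO            /* YES or NO */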


Overview of the Dialog Parameters window


The parameters displayed in the Dialog Parameters window depend on whether your installation uses
QMF. This section shows the parameters used when QMF is used. For an overview of the parameters used
when QMF is not installed on your system, refer to Figure 51 on page 115.

Dialog Parameters when QMF is used


Figure 50 on page 114 is a logical view of the Dialog Parameters window, which is available from the
System window of the administration dialog and from the Other pull-down of the reporting dialog. You can
change the personal settings that control your dialog sessions. For a description of the fields in this
window, see “Dialog parameters - variables and fields” on page 115.

Dialog Parameters

Type information. Then press Enter to save and return.

More: +

Db2 subsystem name . . . . . DSN


Db2 plan name for IZPCA . . . . DRLPLAN
Database name . . . . . . . . DRLDB
Storage group default . . . . DRLSG
Prefix for system tables . . DRLSYS
Prefix for all other tables . DRL
Show IZPCA environment data . . NO (YES or NO)

Buffer pool for data . . . . BP0


Buffer pool for indexes . . . BP0

Users to grant access to . . DRLUSER ________ ________ ________


________ ________ ________ ________

SQL ID to use (in QMF) . . . DRLUSER


QMF language . . . . . . . . PROMPTED (SQL or PROMPTED)
SYSOUT class (in QMF) . . . Q
Default printer . . . . . . . ________ (for graphic reports)
SQLMAX value . . . . . . . . 5000

Reporting dialog mode . . . . 1 1. End-user mode


2. Administrator mode

Dialog language . . . . . . . 1 1. English

Db2 data sets


Prefix . . . . . . . . . . DSN810
Suffix . . . . . . . . . . ___________________________________
QMF data sets prefix . . . . QMF810

IBM Z Performance and Capacity Analytics data sets prefix . . . . DRL310


Temporary data sets prefix . (user_ID substituted)

Local definitions data set DRL.LOCAL.DEFS


Local GDDM formats data set DRL.LOCAL.ADMCFORM
Local messages data set . . . DRL.LOCAL.MESSAGES
Saved reports data set . . . DRL.LOCAL.REPORTS
Saved charts data set . . . . DRL.LOCAL.CHARTS

Job statement information (required for batch jobs):


//(user_ID substituted) JOB (000000,XXXX),'USER1',MSGLEVEL=(1,1),
// NOTIFY=(user_ID substituted),MSGCLASS=Q,CLASS=E,REGION=4096K
//*

F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 50. Dialog Parameters window, when QMF is used

Dialog Parameters when QMF is not used


Figure 51 on page 115 is a logical view of the Dialog Parameters window, which is available from the
System window of the administration dialog and from the Other pull-down of the reporting dialog. IBM Z
Performance and Capacity Analytics users can change personal settings that control their dialog sessions.
For a description of the fields in this window, see “Dialog parameters - variables and fields” on page 115.


Dialog Parameters

Type information. Then press Enter to save and return.

More: +

Db2 subsystem name . . . . . DSN


Db2 plan name for IZPCA . . . . DRLPLAN
Database name . . . . . . . . DRLDB
Storage group default . . . . DRLSG
Prefix for system tables . . DRLSYS
Prefix for all other tables . DRL
Show IZPCA environment data . . NO (YES or NO)

Buffer pool for data . . . . BP0


Buffer pool for indexes . . . BP0

Users to grant access to . . DRLUSER ________ ________ ________


________ ________ ________ ________

Batch print SYSOUT class . . A


Printer line count per page 60
SQLMAX value . . . . . . . . 5000

Reporting dialog mode . . . . 1 1. End-user mode


2. Administrator mode

Dialog language . . . . . . . 1 1. English

Db2 data sets


Prefix . . . . . . . . . . DB2.V820
Suffix . . . . . . . . . .

IZPCA data sets prefix . . . IZPCA182


Temporary data sets prefix (user_ID substituted)

Local defs data set . . . . . DRL.LOCAL.DEFS


Local User defs data set . . DRL.LOCAL.USER.DEFS
Local GDDM formats data set DRL.LOCAL.ADMCFORM
Local messages data set . . . DRL.LOCAL.MESSAGES
Saved reports data set . . . DRL.LOCAL.REPORTS
Saved charts data set . . . . DRL.LOCAL.CHARTS

Job statement information (required for batch jobs):

//(user_ID substituted) JOB (000000,XXXX),'USER1',MSGLEVEL=(1,1),


// NOTIFY=&SYSUID,MSGCLASS=Q,CLASS=E,REGION=4096K
//*
//*
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 51. Dialog Parameters window, when QMF is not used

Dialog parameters - variables and fields


Most variable names in userid.DRLFPROF and field names in the Dialog Parameters window are directly
related. The following table describes the relationship between the variables and fields and describes
how IBM Z Performance and Capacity Analytics uses the values to allocate libraries or control other dialog
functions. It also describes variables and fields that do not have exact equivalents.
“Modifying the DRLFPROF data set” on page 113 shows the user-modifiable area of the file that is
processed at the product startup. The “Overview of the Dialog Parameters window” on page 114 shows
the Dialog Parameters window. “Allocation overview” on page 124 describes the data sets allocated by
IBM Z Performance and Capacity Analytics.
userid.DRLFPROF variable name    Dialog Parameters field name    Default value    Your value

modtenu N/A None

The fully-qualified name of the user's ISPF table library, if any.

db2plib2 N/A SDSNPFP

The Db2 panel library, which, depending on the value of db2def, is either a fully qualified name or a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2plibe N/A SDSNPFPE

The English Db2 panel library, which, depending on the value of db2def, is either a fully qualified name or a value that IBM Z Performance
and Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2plibk N/A SDSNPFPK

qmfprint N/A YES

Specifies whether the QMF output is saved in the DSQPRINT data set (YES) or in the SYSOUT class (NO).

def_db2subs Db2 subsystem name DSN

The Db2 subsystem where IBM Z Performance and Capacity Analytics resides.
This required field can be 4 alphanumeric characters. The first character must be alphabetic.
The default value is DSN. If the value in this field is something other than DSN, it was changed during installation to name the correct Db2
subsystem.
Do not change the value to name another Db2 subsystem to which you might have access. IBM Z Performance and Capacity Analytics must
use the Db2 subsystem that contains its system, control, and data tables.

def_db2plan Db2 plan name for IZPCA DRLPLAN

The Db2 plan name to which the distributed IBM Z Performance and Capacity Analytics for z/OS DBRM has been bound.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default value for this field is DRLPLAN. If the value in this field is something other than DRLPLAN, it may have been changed during
installation to refer to a customized plan name for IBM Z Performance and Capacity Analytics.
Only change the plan name shown here if instructed to do so by your IBM Z Performance and Capacity Analytics system administrator.

def_dbname Database name DRLDB

The Db2 database that contains all IBM Z Performance and Capacity Analytics system, control, and data tables. The value of this field is set
during installation.
This required field can be up to 8 alphanumeric characters. The first character must be alphabetic. The value of this field depends on the
naming conventions at your site.
The default database is DRLDB. If this value is something other than DRLDB, it is likely the default value for your site.
Do not change this name to identify another Db2 database to which you have access. You must use the Db2 database that contains IBM Z
Performance and Capacity Analytics.

def_storgrp Storage group default DRLSG

The storage group that IBM Z Performance and Capacity Analytics uses for the Db2 database identified in the Database name field.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRLSG. If the value of the field is something other than DRLSG, it was changed during installation.
Do not change the value of this field to another storage group to which you might have access; IBM Z Performance and Capacity Analytics
uses the value of this field to create new tables.

def_syspref Prefix for system tables DRLSYS

The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. The value of this field depends upon your
naming conventions and is determined during installation.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRLSYS. If the value is something other than DRLSYS, it was changed during installation.
Do not change the value; IBM Z Performance and Capacity Analytics uses this value to access its system tables.

def_othtbpfx Prefix for all other tables DRL

The prefix of IBM Z Performance and Capacity Analytics data tables in the Db2 database.
Valid values are determined at installation.
This required field can be 8 alphanumeric characters. The first character must be alphabetic.
The default is DRL. If the value is something other than DRL, it was changed during installation.

def_drlshwid Show IZPCA environment data NO

Specifies whether or not to display the IBM Z Performance and Capacity Analytics environment data in the main panels.
This required field can have a value of YES or NO.
The default value for this field is NO.

def_tsbpool Buffer pool for data BP0

The default buffer pool for IBM Z Performance and Capacity Analytics table spaces. This field can have values from BP0 to BP49, from
BP8K0 to BP8K9, from BP16K0 to BP16K9, from BP32K to BP32K9. The buffer pool implicitly determines the page size. The buffer pools
BP0, BP1, ..., BP49 hold 4-KB pages. The buffer pools BP8K0, BP8K1, ..., BP8K9 hold 8-KB pages. The buffer pools BP16K0, BP16K1, ...,
BP16K9 hold 16-KB pages. The buffer pools BP32K, BP32K1, ..., BP32K9 hold 32-KB pages.

def_ixbpool Buffer pool for indexes BP0

The default buffer pool for IBM Z Performance and Capacity Analytics indexes. This field can have values from BP0 to BP49 (The buffer
pool for indexes must identify a 4-KB buffer pool).

def_iduser1 Users to grant access to DRLUSER

The user IDs or group IDs of users who are granted Db2 access to the next component you install. Users or user groups with Db2 access to
a component have access to the tables and views of the component. You can specify up to 8 users or group IDs in these fields.
You must specify a value for at least one of the fields.
Each user ID or group ID can be 8 alphanumeric characters. The first character must not be numeric.
The default is DRLUSER, as shipped by IBM. You can use any user group ID that is valid for your Db2 system. You should use one such
group ID to define a list of core IBM Z Performance and Capacity Analytics users (who might include yourself). It is a good idea to leave
such a core group as the value in one of the fields, regardless of whether you control user access to various components by adding other
group IDs.
You can grant users access to the tables and views of a component by listing them here before you install the component.
Consider using RACF group IDs or Db2 secondary authorization IDs and specifying them in these fields before installing a component. It is
easier to connect individual user IDs to an authorized group than it is to grant each individual access to each table or view that they need.
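For example, a single RACF command connects users to the shipped default group (a sketch only; TSOUSR1 and TSOUSR2 are hypothetical user IDs):

  CONNECT (TSOUSR1 TSOUSR2) GROUP(DRLUSER)

After the connection, one GRANT to DRLUSER covers every connected user, with no table-by-table grants to individual IDs.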

def_idsqlusr SQL ID to use (in QMF) DRLUSER

This field is used only if your installation uses QMF.


The Db2 primary or secondary authorization ID to which you are connected. IBM Z Performance and Capacity Analytics uses the value of
this field in the SET CURRENT SQLID as it starts QMF. The ID must have Db2 authorization to IBM Z Performance and Capacity Analytics
tables and views.
This required field can be up to 8 alphanumeric characters. The first character must be alphabetic.
The default is DRLUSER. If the value is something other than DRLUSER, it was changed during installation.
You can change this value to your user ID if you have Db2 authorization to IBM Z Performance and Capacity Analytics tables and views.

def_qmflng QMF language PROMPTED

The QMF language for creating reports and queries, either SQL (structured query language) or PROMPTED QUERY.
PROMPTED QUERY is the default QMF language for IBM Z Performance and Capacity Analytics.
This is a required field, if your installation uses QMF.

def_qmfprt SYSOUT class (in QMF) Q

The SYSOUT class for report data sets that QMF generates, or for output that QMF routes to a printer. The default value is Q.
This is a required field, if your installation uses QMF.

def_printer Default printer blank

The GDDM nickname of a printer to use for printing graphic reports. The printer should be one capable of printing GDDM-based graphics.
The printer name must be defined in the GDDM nicknames file, allocated to the ADMDEFS ddname. Refer to QMF: Reference and GDDM
User's Guide for more information about defining GDDM nicknames.

def_drlprt Batch print SYSOUT class A

This field is used only if your installation does not use QMF.
A valid SYSOUT class for printing tabular reports in batch. Valid values are A-Z, 0-9, and *.

def_pagelen Printer line count per page 60

This field is used only if your installation does not use QMF.
The number of report lines that should be printed on each page when you print tabular reports online and in batch.

def_drlmax SQLMAX value 5000

The maximum number of rows for any single retrieval from an IBM Z Performance and Capacity Analytics table when using an IBM Z
Performance and Capacity Analytics-Db2 interface for such functions as listing tables, reports, or log definitions.
The value of this required field is the maximum number of rows that IBM Z Performance and Capacity Analytics retrieves from a Db2 table in a single request.
The default value is 5000 rows of data.

def_rptdialg Reporting dialog mode 1

The dialog mode for using the reporting dialog. Any option you save applies to future sessions.
You can choose administrator mode to access reports belonging to all users if you have an IBM Z Performance and Capacity Analytics
administrator authority. You can choose end-user mode to access reports that you have created or that have been created for you
(including public reports).
Type 1 to use end-user mode or 2 to specify administrator mode. If you leave the field blank, the default is end-user mode.

N/A Dialog language 1

The language in which IBM Z Performance and Capacity Analytics displays all its windows.
IBM Z Performance and Capacity Analytics supports those languages listed in the window. Choose the language your site has installed.
If you leave this field blank, IBM Z Performance and Capacity Analytics displays its windows in English.
Any changes you make to this field become effective in your next dialog session, when IBM Z Performance and Capacity Analytics allocates
its libraries.

def_db2dspfx Db2 data sets-prefix DSN710

The prefix to which IBM Z Performance and Capacity Analytics appends Db2 data set names as it performs tasks.
This field is required if db2def is SUFFIX. If db2def is DATASET, this field is ignored.
This field can be 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default is DSN710. If the value of this field is something other than DSN710, it was changed during installation.
Any changes you make to this field become effective in your next session, when IBM Z Performance and Capacity Analytics allocates Db2
libraries and data sets.

def_db2dssfx Db2 data sets-suffix blank

The suffix that IBM Z Performance and Capacity Analytics appends as the low-level qualifier for Db2 data sets that IBM Z Performance and Capacity Analytics uses. Most sites do not use a Db2 data set suffix, but this depends on your Db2 naming conventions.
This field can be used if db2def is SUFFIX. If db2def is DATASET, this field is ignored.
This field can be 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
Your IBM Z Performance and Capacity Analytics administrator can set a default value for this field if it is in use at your site. If the field is
blank, it is very likely not in use.
Do not use this field to qualify data sets that you create; this is not its purpose. Use it to identify Db2 modules only.
Any changes you make to this field are not effective until your next invocation of the dialog, when IBM Z Performance and Capacity
Analytics has a chance to reallocate Db2 libraries and data.

def_qmfdspfx QMF data sets prefix QMF710

This field is used only if your installation uses QMF. The prefix to which IBM Z Performance and Capacity Analytics appends all QMF data
set names. This includes all QMF libraries allocated by the dialog during invocation. It also includes all QMF queries and forms.
If qmfdef is SUFFIX, this field is required. If qmfdef is DATASET, this field is ignored.
This field can be up to 35 alphanumeric characters. Names longer than 8 characters must be in groups of not more than 8 characters,
separated by periods. The first character of each group must be alphabetic.
The default is QMF710. If the value is something other than QMF710, it was changed during installation.
Do not use this value to identify your personal QMF data sets. IBM Z Performance and Capacity Analytics uses this value for all QMF data
sets.
Any changes you make to this field become effective in your next session, when IBM Z Performance and Capacity Analytics allocates its libraries.

def_dsnpref IBM Z Performance and Capacity Analytics data sets prefix DRL310

The prefix of IBM Z Performance and Capacity Analytics libraries.


This required field can be up to 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default is DRL310. If the value of this field is something other than DRL310, it was changed during installation.
Any changes you make to this field become effective in your next session, when IBM Z Performance and Capacity Analytics allocates its
libraries.

No equivalent Temporary data sets prefix user_ID

The prefix for any temporary data sets you create while using IBM Z Performance and Capacity Analytics.
This required field can be up to 35 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default value is your user_ID or the TSO_prefix.user_ID.

def_dsnlocdn Local definitions data set DRL.LOCAL.DEFS

The partitioned data set (PDS) that contains definitions of IBM Z Performance and Capacity Analytics objects you have created. The value
of this field depends on naming conventions that apply to IBM Z Performance and Capacity Analytics.
The members of this PDS contain definition statements that define new objects to IBM Z Performance and Capacity Analytics. IBM Z
Performance and Capacity Analytics uses the value of this field to locate local definition members.
This optional field can be 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.DEFS. Your administrator can set a different default for this field during installation. Do not change the value
that your IBM Z Performance and Capacity Analytics administrator sets.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates local definition data sets.

def_usrlocdn Local user alter/definitions data set DRL.LOCAL.USER.DEFS

The partitioned data set (PDS) that contains definitions of IBM Z Performance and Capacity Analytics objects you have modified. The value
of this field depends on naming conventions that apply to IBM Z Performance and Capacity Analytics.
The members of this PDS contain definition statements that define user modified objects to IBM Z Performance and Capacity Analytics.
This PDS also contains members with alter statements built by the update processor on the definitions contained in the same PDS. IBM Z
Performance and Capacity Analytics uses the value of this field to locate local user definition members.
This optional field can be 44 alphanumeric characters. Names longer than 8 characters must be in groups of not more than 8 characters,
separated by periods. The first character of each group must be alphabetic.
The default PDS is DRL.LOCAL.USER.DEFS. Your administrator can set a different default for this field during installation. Do not change the
value that your IBM Z Performance and Capacity Analytics administrator sets.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates local definition data sets.

def_modform The local GDDM formats data set DRL.LOCAL.ADMCFORM

The data set where you keep your GDDM formats for graphic reports.

def_drlmsgs Local messages data set DRL.LOCAL.MESSAGES

Use this field to identify a PDS that contains messages generated by users during communication with IBM Z Performance and Capacity
Analytics administrators.
The value of this field depends on naming conventions that your IBM Z Performance and Capacity Analytics administrator has established.
This required field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
Any changes you make to this field are not effective until you start the dialog again, when IBM Z Performance and Capacity Analytics
reallocates the message data set.

def_dsnreprt Saved reports data set DRL.LOCAL.REPORTS

The PDS where IBM Z Performance and Capacity Analytics saves your tabular reports.
This optional field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.REPORTS.

def_dsnchrts Saved charts data set DRL.LOCAL.CHARTS

The PDS where IBM Z Performance and Capacity Analytics saves the graphic reports you choose to save.
This optional field can be up to 44 alphanumeric characters.
Names longer than 8 characters must be in groups of not more than 8 characters, separated by periods. The first character of each group
must be alphabetic.
The default PDS is DRL.LOCAL.CHARTS.

def_jclsta1, def_jclsta2, def_jclsta3, def_jclsta4 Job statement information (required for batch jobs) Sample job card in which IBM Z Performance and Capacity Analytics dynamically substitutes the user ID.

The job statement information to be used for batch jobs that the dialogs create for you.
You must use correct JCL in the job statement. IBM Z Performance and Capacity Analytics does not validate job statement information.
Do not use JCL comments in these JCL statements.
You can specify up to four card images in these job statement fields.
The first "//" card image should contain the job name. Press Enter to save any job statements for all future sessions.

dsnsufx N/A SDRLDEFS

The IBM Z Performance and Capacity Analytics definitions data set suffix.

execsfx N/A SDRLEXEC

The IBM Z Performance and Capacity Analytics exec data set suffix.

loadsfx N/A SDRLLOAD

The IBM Z Performance and Capacity Analytics load library suffix.

skelsfx N/A SDRLSKEL

The IBM Z Performance and Capacity Analytics skeleton data set suffix.

eng_lib_sfx N/A ENU

The English library suffix.

def_nlslang N/A eng_lib_sfx

The national language library suffix.

repsufx N/A "SDRLR"+def_nlslang

The IBM Z Performance and Capacity Analytics report definitions library suffix.

plibsfx N/A "SDRLP"+def_nlslang

The IBM Z Performance and Capacity Analytics panel library suffix.

messsfx N/A "SDRLM"+def_nlslang

The IBM Z Performance and Capacity Analytics message library suffix.

formsfx N/A "SDRLF"+def_nlslang

The IBM Z Performance and Capacity Analytics GDDM formats library suffix.

eng_qmf_sfx N/A E

The English library suffix.

def_qmflang N/A eng_qmf_sfx

The national language default library suffix.

qmfdef N/A SUFFIX

The method of describing QMF library names to IBM Z Performance and Capacity Analytics, either SUFFIX or DATASET.
If qmfdef is SUFFIX (the default), IBM Z Performance and Capacity Analytics implements the QMF library naming standard, requiring a
prefix for QMF data sets (def_qmfdspfx) and a suffix (described below). IBM Z Performance and Capacity Analytics appends each suffix to
the QMF prefix to identify QMF libraries, which it then allocates.
If qmfdef is DATASET, IBM Z Performance and Capacity Analytics does not use a prefix or suffix and you must specify fully-qualified data
set names for the QMF library variables described below.
In either case, IBM Z Performance and Capacity Analytics uses the next several variables to allocate QMF libraries.

qmfclib N/A SDSQCLST+def_qmflang

The QMF CLIST library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfclibe N/A SDSQCLST+eng_qmf_sfx

The English QMF CLIST library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and Capacity Analytics appends to def_qmfdspfx. IBM Z Performance and Capacity Analytics requires this library even though you might be using another language.

qmfelib N/A SDSQEXEC+def_qmflang

The QMF EXEC library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfelibe N/A SDSQEXEC+eng_qmf_sfx

The English QMF EXEC library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance
and Capacity Analytics appends to def_qmfdspfx. IBM Z Performance and Capacity Analytics requires this library even though you might be
using another language.

qmfplib N/A SDSQPLIB+def_qmflang

The QMF panel library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfmlib N/A SDSQMLIB+def_qmflang

The QMF message library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfslib N/A SDSQSLIB+def_qmflang

The QMF skeleton library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfmap N/A SDSQMAP+def_qmflang

The ADMGGMAP library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfpnl N/A DSQPNL+def_qmflang

The QMF panel library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

dsqpnl N/A DSQPNL+def_qmflang

The ddname of QMF DSQPNLx library. Even if you use fully-qualified data set names to identify QMF data sets, you must specify the
ddname of your DSQPNLx library as the value of this variable.

qmfload N/A SDSQLOAD

The QMF load library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfchart N/A DSQCHART

The ADMCFORM library, which (depending on the value of qmfdef), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_qmfdspfx.

qmfdsdum N/A DUMMY

The fully-qualified name of the data set to be allocated to ddname DSQUDUMP, or DUMMY.

qmfdebug N/A DUMMY

The fully-qualified name of the data set to be allocated to ddname DSQDEBUG, or DUMMY.

dsunit N/A SYSDA

The disk unit.

db2ver N/A 10

The version of Db2. Must be a decimal number 1-99.

db2rel N/A 1

The release of Db2.

db2def N/A SUFFIX

The method of describing Db2 library names to IBM Z Performance and Capacity Analytics, either SUFFIX or DATASET.
If db2def is SUFFIX (the default), IBM Z Performance and Capacity Analytics implements the Db2 library naming standard, requiring a
prefix for Db2 data sets (def_db2dspfx), a library name, and an optional suffix (def_db2dssfx).
If db2def is DATASET, IBM Z Performance and Capacity Analytics does not use a prefix or a suffix and you must specify fully-qualified data
set names for the Db2 library variables described below.
In either case, IBM Z Performance and Capacity Analytics uses the next several variables to allocate Db2 libraries.

db2llib N/A RUNLIB.LOAD

The Db2 runlib load library name, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z
Performance and Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2load N/A SDSNLOAD

The Db2 load library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2clst N/A SDSNCLIST

The Db2 CLIST library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2mlib N/A SDSNSPFM

The Db2 message library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

db2plib N/A SDSNSPFP

The Db2 panel library, which (depending on the value of db2def), is the fully-qualified name or is a value that IBM Z Performance and
Capacity Analytics appends to def_db2dspfx before appending def_db2dssfx.

gddmload N/A GDDM.SADMMOD

The GDDM load library.

admsymbl N/A GDDM.SADMSYM

The GDDM symbols library.

admdefs N/A SYS1.GDDMNICK

The GDDM nicknames library.

admprntq N/A None

The data set name of the GDDM master print queue, if any. This overrides any value specified for TSOPRNT in the GDDM external defaults
file. If you supply a value, IBM Z Performance and Capacity Analytics adds an ADMPRNTQ DD statement to the batch JCL for graphic
reports.

def_geapplid N/A zuser

The application ID (usually the same as a TSO user ID) that has an assigned Information/Management privilege class. The default is the user ID of the IBM Z Performance and Capacity Analytics user.

def_gesessn N/A BLGSES00

The session member (module) used to start an Information/Management session.

def_geprivcl N/A MASTER

The privilege class specified in an Information/Management group record.

qmfuse N/A YES

Specifies if QMF is used with IBM Z Performance and Capacity Analytics in your installation. Any value other than YES or NO causes IBM Z Performance and Capacity Analytics to use YES.

gddmuse N/A YES

Specifies if GDDM is used with IBM Z Performance and Capacity Analytics in your installation. (If QMF is used, GDDM must be used.) If GDDM is not used, reports are always shown in tabular format. Any value other than YES or NO causes IBM Z Performance and Capacity Analytics to use YES.

decsep N/A PERIOD

When generating tabular reports without QMF, IBM Z Performance and Capacity Analytics uses period as decimal separator and comma
as thousands separator. You can exchange the decimal and thousands separators by specifying decsep="COMMA". In that case, period
is used as thousands separator. Any other value of decsep causes IBM Z Performance and Capacity Analytics to use period as a decimal
separator.

subhdrv N/A N

This value is used only for QMF (where qmfuse='YES'). Specify Y if you want IBM Z Performance and Capacity Analytics to replace empty
variables in the report header with a text string. You specify the text string using F11 on the Data Selection panel, or when you get message
DRLA171.
Note: Replacing empty variables increases the time taken to generate a report.
Specify N to leave the empty variable in the report.

def_useaot N/A NO

Specifies whether Analytics component tables are created as Accelerator Only Tables in IBM Db2 Analytics Accelerator or as tables in Db2.
"YES": Tables are created as Accelerator Only Tables.
"NO": Tables are created in Db2 and are applicable for use either as Db2 tables or as IDAA_ONLY table.
The default value is "NO".
This parameter is only applicable for Analytics components.

def_accelerator N/A

The name of the Accelerator where the Analytics components tables reside. Required only if using Accelerator Only Tables, that is, if
def_useaot is set to "YES".
This parameter is only applicable for Analytics components.

def_timeint N/A T

Specifies the time interval granularity for records collected for Analytics components tables.
"H": The timestamp for records is rounded to hourly intervals, which is similar to non-Analytics tables with a suffix of "_H" in other
components.
"S": The timestamp for records is rounded to intervals of a second, which is similar to non-Analytics tables with time field instead of
timestamp in other components.
"T": The timestamp for tables is the actual timestamp in the SMF log record, which is similar to non-Analytics tables with suffix "_T".
The default value is "T".
This parameter is only applicable for Analytics components.

drl_grant N/A YES

When installing components, this variable specifies if SQL GRANTs are issued. When set to "NO" the pre-processor will replace the SQL
GRANT with a comment stating that the GRANT has been omitted.
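Drawing the Analytics-related entries above together, a hypothetical userid.DRLFPROF excerpt might look like this (the assignment form is inferred from the decsep="COMMA" notation earlier in this table; ACCEL1 is a placeholder accelerator name, and none of these lines is a verbatim extract from a shipped profile):

  def_useaot = "YES"
  def_accelerator = "ACCEL1"
  def_timeint = "H"
  drl_grant = "NO"

With drl_grant set to "NO", a statement such as SQL GRANT SELECT ON &PREFIX.SAMPLE_USER TO &USERS.; in a definition member is not executed; the pre-processor passes a comment in its place.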

Allocation overview
This section describes the data sets allocated by IBM Z Performance and Capacity Analytics.

Library type or data set ddname    Library or data set    Allocated by (EPDM exec)

IBM Z Performance and Capacity Analytics allocates the following libraries as a user starts a dialog session:
ISPPLIB • IBM Z Performance and Capacity Analytics panel library DRLEINI1
• QMF panel library
• Db2 panel library
ISPTLIB • IBM Z Performance and Capacity Analytics tables library DRLEINI1
• QMF tables library
ISPMLIB • IBM Z Performance and Capacity Analytics message library DRLEINI1
• QMF message library
• Db2 message library
ISPLLIB • IBM Z Performance and Capacity Analytics load library DRLEINI1
• QMF load library
ISPSLIB • IBM Z Performance and Capacity Analytics skeleton library DRLEINI1
• QMF skeleton library

IBM Z Performance and Capacity Analytics allocates the following data sets as a user starts a dialog session:
DRLTABL Userprefix.DRLTABL (for values in query variables) DRLEINI1
ADMGDF Saved charts data set DRLEINI1
DRLMSGDD IBM Z Performance and Capacity Analytics user message data set (drlmsgs) DRLEINI1
IBM Z Performance and Capacity Analytics allocates the following libraries as a user starts a function that uses QMF:
SYSPROC QMF CLIST library (def_qmfdspfx.qmfclib+E) DRLEQMF
SYSEXEC QMF exec library (def_qmfdspfx.qmfelib+E) DRLEQMF
ADMGGMAP SDSQMAP library (def_qmfdspfx.qmfmap) DRLEQMF
ADMCFORM Saved forms data set + DSQCHART library (dsnpref.formsfx + def_qmfdspfx.qmfchart) DRLEQMF
DSQUCFRM Saved forms data set DRLEQMF

DSQPNLE QMF panel library DRLEQMF
DSQPRINT QMF sysout class (qmfprt) DRLEQMF
DSQSPILL NEW DELETE (temporary file allocation) DRLEQMF
DSQEDIT NEW DELETE (temporary file allocation) DRLEQMF
DSQDEBUG (qmfdebug) DRLEQMF
DSQUDUMP (qmfdsdum) DRLEQMF
IBM Z Performance and Capacity Analytics allocates the following library as a user starts a function that uses GDDM:
ADMSYMBL GDDM symbols data set DRLEINI1
IBM Z Performance and Capacity Analytics allocates the following libraries when a table or report is displayed
without QMF:
DRLTAB Userprefix.DRLTAB (for table display) DRLEADIT
DRLREP Userprefix.DRLREP (for report display) DRLERDIR
IBM Z Performance and Capacity Analytics allocates the following library as a user starts Db2 Interactive (DB2I)
SYSPROC Db2 CLIST library (db2dspfx.db2clst) DRLEDB2I

Overview of IBM Z Performance and Capacity Analytics objects


This section describes how a feature definition member is used to update system tables. It then describes
how IBM Z Performance and Capacity Analytics uses the resulting component definitions to install a
component's objects. There is also a section on how to create and change definitions in both the dialog
and log collector language.
For more information about the log collector language and report definition language statements, see the
Language Guide and Reference.
This topic uses the Sample component as the basis of most of its examples. For more information, see
“Sample component” on page 283.
For information on the naming convention for IBM Z Performance and Capacity Analytics definition
members, see “Naming convention for IBM Z Performance and Capacity Analytics definition members” on
page 132.

IBM Z Performance and Capacity Analytics component installation


Component installation starts with the SMP/E installation of the definition members of a feature in
the DRL310.SDRLDEFS library. IBM Z Performance and Capacity Analytics features provide definition
members that update the system tables with information about the definitions in a feature.

Defining definition library members with SQL


Before installing IBM Z Performance and Capacity Analytics for z/OS components, you must create or
update the system tables. When you do this from the dialog or in batch, the DRLIxxxx members, in the
DRL310.SDRLDEFS library, contain SQL statements that are executed.
Figure 52 on page 126 shows the DRLIxxxx definition member for the Sample component. These
members use the SQL log collector language statement to pass an SQL statement to Db2.

/**********************************************************************/
/* Sample Component */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMPONENTS
(COMPONENT_NAME, DESCRIPTION, USER_ID)
VALUES('SAMPLE','Sample Component',USER);
/**********************************************************************/
/* Log and record definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','LOG ','SAMPLE','DRLLSAMP');

/**********************************************************************/
/* Table space, table, and update definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','TABSPACE','DRLSSAMP','DRLSSAMP');

/**********************************************************************/
/* Report and report group definitions */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS
(COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
VALUES('SAMPLE','REPGROUP','SAMPLE','DRLOSAMP');

Figure 52. Definition member DRLISAMP, setting component definitions

Executing these statements populates the IBM Z Performance and Capacity Analytics system tables
with component definitions. These component definitions describe the installable components and the
SDRLDEFS members that can be used to install the component.

How IBM Z Performance and Capacity Analytics controls object replacement


Once the system tables have been updated with the installation members, you must reinstall all affected
components in order to replace all objects. Each installed component is controlled by a VERSION variable, which is specified in the DEFINE statements; a corresponding VERSION column is included in the IBM Z Performance and Capacity Analytics system tables where objects are defined.
During the installation of the IBM Z Performance and Capacity Analytics components, a preprocessor
checks each definition member to see if an object already exists (from the installation of an earlier level of
the component).
If the object does not already exist, the DEFINE statement for this object is passed to the log collector.
If the object does already exist, and providing the variable VERSION is specified in the DEFINE statement
for the object, then the values of VERSION in the DEFINE statement and in the system table where
the object is defined are compared. If the values of VERSION are the same, the log collector replaces the DEFINE statement for the object with a comment, saying that the most recent version of the object already exists in the system table. If the values of VERSION are different, the log collector inserts a DROP statement. This DROP statement drops the object so that it can be redefined.
Note: IBM Z Performance and Capacity Analytics only checks the VERSION variable when you install
using option 2 Components.
All log, record, record procedure, and update objects shipped with the product contain the VERSION
variable, which takes the value: IBM.xxx
where xxx corresponds to the product version. For example, IBM.310 indicates objects created or
modified by IBM Z Performance and Capacity Analytics V3.1.0. If an object is modified by an APAR,
then the APAR number is used as the VERSION variable, for example, VERSION 'PH10636'.

IBM Z Performance and Capacity Analytics Version variable format


All IBM Z Performance and Capacity Analytics log, record, record procedure, and update objects shipped
with the product contain the VERSION variable, which takes the value IBM.xxx, where xxx corresponds to the product version. For example, IBM.310 indicates objects created or modified by IBM Z Performance and Capacity Analytics V3.1.0.
If an object is modified by an APAR, then the APAR number is used as the VERSION variable, for example,
VERSION 'PH10636'.
IBM Z Performance and Capacity Analytics recognizes the following version variable patterns as being
standard objects shipped by the product:
• Version numbers beginning with 'IBM'.
• Version numbers with no text (the empty string or no version clause).
• Version numbers beginning with an APAR number, that is, two letters followed by any number of
digits up to an optional decimal point. For example, the version numbers PM123, PX123456.V310,
RW987654, and OK123.2014101, are all considered 'standard' version numbers, but PK1234A and
MXC1234 are not.

Custom Version numbers:


When customizing IBM Z Performance and Capacity Analytics objects (see “Controlling objects that you
have modified” on page 178) you must choose a version number which does not conform to the standards
above. A version number might be 'ALTERED' or 'MODIFIED', or your own version system such as 'C.V2'.
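For example, a locally modified copy of the Sample component log definition (see the DRLLSAMP member shown later in this chapter) might carry a site-chosen version string:

  DEFINE LOG SAMPLE VERSION 'MODIFIED';

Because 'MODIFIED' matches none of the standard patterns, IBM Z Performance and Capacity Analytics treats the object as a customized object rather than a standard one.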

How IBM Z Performance and Capacity Analytics determines installation order


After IBM Z Performance and Capacity Analytics stores the names of a feature's component objects and
definition members in the system tables, you can use the dialog to install the feature's components.
The product queries the system tables to determine the names of definition members in the
DRL310.SDRLDEFS, DRL310.SDRLRxxx, and DRL310.SDRLFxxx libraries (xxx is ENU for the English-language version of IBM Z Performance and Capacity Analytics).
IBM Z Performance and Capacity Analytics requires some definitions to exist before it can install others. For example, if a component contains a record procedure, the product must install the record definition that maps the source record for the record procedure before installing the record procedure. Furthermore, it must install the record procedure before installing the record definition that maps the record procedure's output. The definition members supplied with the product often combine several definitions in the same member to ensure the correct order of installation.
Table 6 on page 127 shows the order in which IBM Z Performance and Capacity Analytics installs a
feature's definitions.

Table 6. Order of installation of feature definition members

Order    Member naming convention    Definition types
1 DRLLxxxx Logs.
2 DRLRxxxx Records and record procedures. Record definitions mapping record procedure input
must appear before the associated record procedure definition. Record definitions
mapping record procedure output must appear after the associated record procedure
definition.
3 DRLSxxxx Table spaces.
4 DRLTxxxx Lookup tables, tables, updates, and views. Lookup tables and tables must be defined
before update definitions that refer to them. Tables must also be defined before views
that refer to them.
5 DRLUxxxx Updates (also found in DRLTxxxx members).
6 DRLVxxxx Views (also found in DRLTxxxx members).
7 DRLOxxxx Report groups and reports. Report groups must be defined before the report definitions
that reference them.

The order of installation within a definition type is determined by the sort sequence of the definition
member names. The examples that follow appear in the same order that IBM Z Performance and Capacity
Analytics would install them.

Defining logs with log collector language


DRLLxxxx members of the DRL310.SDRLDEFS library define log types to IBM Z Performance and Capacity
Analytics. Figure 53 on page 128 shows the definition member for the SAMPLE log type.

DEFINE LOG SAMPLE VERSION 'IBM.110';

COMMENT ON LOG SAMPLE IS 'Sample log definition';

Figure 53. Definition member DRLLSAMP, defining a log type

Defining records with log collector language


DRLRxxxx members of the DRL310.SDRLDEFS library define record types to IBM Z Performance and
Capacity Analytics. Figure 54 on page 128 shows the definition for the SAMPLE_01 record type. (“Record
definitions supplied with IBM Z Performance and Capacity Analytics” on page 288 describes IBM Z
Performance and Capacity Analytics record definitions.)

DEFINE RECORD SAMPLE_01
VERSION 'IBM.110'
IN LOG SAMPLE
IDENTIFIED BY S01TYPE = '01'
FIELDS
(S01TYPE OFFSET 4 LENGTH 2 CHAR,
S01DATE OFFSET 7 DATE(MMDDYY),
S01TIME OFFSET 14 TIME(HHMMSS),
S01SYST OFFSET 21 LENGTH 4 CHAR,
S01USER OFFSET 26 LENGTH 8 CHAR,
S01TRNS OFFSET 35 LENGTH 6 EXTERNAL INTEGER,
S01RESP OFFSET 42 LENGTH 6 EXTERNAL INTEGER,
S01CPU OFFSET 49 LENGTH 6 EXTERNAL INTEGER,
S01PRNT OFFSET 56 LENGTH 6 EXTERNAL INTEGER);

COMMENT ON RECORD SAMPLE_01 IS 'Sample record type 01';

Figure 54. Definition member DRLRSAMP, defining a record type

Defining table spaces


DRLSxxxx members of the definition library SDRLDEFS define table spaces to IBM Z Performance and
Capacity Analytics.
At least one table space is defined per component to contain the tables in the component.
Table spaces can be defined explicitly using SQL statements. Alternatively, the Log Collector Language
statement GENERATE TABLESPACE may be used to create table spaces, which use values in the
GENERATE_PROFILES and GENERATE_KEYS system tables to determine the partitioning type. Using the
GENERATE statement eliminates the need to change the DRLSxxxx members, and allows multiple table
spaces to be configured in the same manner.
The following figure shows the definition for the DRLSSAMP table space of the IBM Z Performance and
Capacity Analytics Sample component.

SQL CREATE TABLESPACE DRLSSAMP
IN &DATABASE
USING STOGROUP &STOGROUP
PRIQTY 60
SECQTY 30
SEGSIZE 8
BUFFERPOOL &TSBUFFERPOOL
LOCKSIZE TABLE;

Figure 55. Using SQL to define a table space (see definition member DRLSSAMP)

The following figure shows the definition for the DRLSKZJ1 table space of the z/OS Key Performance
Metrics component.

GENERATE TABLESPACE DRLSKZJ1
PROFILE 'SMF';

Figure 56. Using GENERATE to define a table space (see definition member DRLSKZJB)

Defining table spaces and indexes using the GENERATE statement


The GENERATE statement may be used to create table spaces, partitioning on tables, and indexes
on tables. The GENERATE statement has a PROFILE parameter which is the major key to the
GENERATE_PROFILES and GENERATE_KEYS system tables. All customization for creating table spaces,
partitioning, and indexes may be performed using these system tables. The definition member DRLTKEYS
is used to create and load the default values into the GENERATE_PROFILES and GENERATE_KEYS tables
when the system tables are created.
These system tables provide default profiles referenced by GENERATE statements in the supplied
definition members. The defaults may be changed by updating the data in these system tables without
modifying the GENERATE statements in the definition members. The profiles may be made more granular
by using the COMPONENT_ID, SUBCOMPONENT_ID or TABLESPACE_NAME key fields, with no changes
required to the product definition members.
When RANGE is specified in the TABLESPACE_TYPE column of the GENERATE_PROFILES table,
GENERATE statements referring to the profile will create range-partitioned table spaces. In this case, the
number of partitions created is determined by the number of entries for the key in the GENERATE_KEYS
system table. For example, the supplied profile of IMS has PART_NUM 1-4 which will generate four
partitions. This number of key entries can be increased or decreased to generate the required number of
partitions. Changing the number of partitions does not require a change to the GENERATE statement in
the IBM Z Performance and Capacity Analytics definition members.
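Before changing the number of partitions, it can help to inspect the key rows that drive it. A hedged sketch follows; the exact system table name (shown here as &SYSPREFIX.GENERATE_KEYS) and the PROFILE column name are assumptions, so check the DRLTKEYS member for the actual layout before using it:

  SQL SELECT * FROM &SYSPREFIX.GENERATE_KEYS
      WHERE PROFILE = 'IMS';

Adding or removing PART_NUM rows for the profile then changes how many partitions subsequent GENERATE statements create, with no change to the definition members.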

Defining tables and updates


DRLTxxxx members of the DRL310.SDRLDEFS library define tables and updates to IBM Z Performance
and Capacity Analytics.
These members use the SQL log collector language statement to create tables in the database, populate
lookup tables, and grant access to the tables. They also use the DEFINE UPDATE log collector language
statement to create update definitions in the system tables. The following example shows the definition
for tables (that includes the lookup table) and updates of the Sample component, DRLTSAMP. “Definition
member DRLTSAMP, defining tables and updates (using SQL)” on page 129 uses the SQL log collector
language statement and “Definition member DRLTSAMP, defining tables and updates (using DEFINE
UPDATE)” on page 130 uses the DEFINE UPDATE log collector language statement.
Definition member DRLTSAMP, defining tables and updates (using SQL)

/**********************************************************************/
/* Define table SAMPLE_USER */
/**********************************************************************/
SQL CREATE TABLE &PREFIX.SAMPLE_USER
(USER_ID CHAR(8) NOT NULL,
DEPARTMENT_NAME CHAR(8) NOT NULL,

PRIMARY KEY (USER_ID))
IN &DATABASE.DRLSSAMP;

SQL CREATE UNIQUE INDEX &PREFIX.SAMPUSER_IX
ON &PREFIX.SAMPLE_USER
(USER_ID)
USING STOGROUP &STOGROUP.
PRIQTY 12
SECQTY 4
CLUSTER
BUFFERPOOL &IXBUFFERPOOL;
/**********************************************************************/
/* Define comments for SAMPLE_USER */
/**********************************************************************/

SQL COMMENT ON TABLE &PREFIX.SAMPLE_USER
IS 'This lookup table assigns department names to users.';

SQL COMMENT ON &PREFIX.SAMPLE_USER
(USER_ID IS 'User ID.',
DEPARTMENT_NAME IS 'Department name.');
/**********************************************************************/
/* Grant users read access to SAMPLE_USER */
/**********************************************************************/
SQL GRANT SELECT ON &PREFIX.SAMPLE_USER TO &USERS.;
/**********************************************************************/
/* Insert data in SAMPLE_USER */
/**********************************************************************/
SQL INSERT INTO &PREFIX.SAMPLE_USER
VALUES('ADAMS ','Appl Dev');

/**********************************************************************/
/* Define table SAMPLE_H */
/**********************************************************************/
SQL CREATE TABLE &PREFIX.SAMPLE_H
(DATE DATE NOT NULL,
TIME TIME NOT NULL,
SYSTEM_ID CHAR(4) NOT NULL,
DEPARTMENT_NAME CHAR(8) NOT NULL,
USER_ID CHAR(8) NOT NULL,
TRANSACTIONS INTEGER,
RESPONSE_SECONDS INTEGER,
CPU_SECONDS FLOAT,
PAGES_PRINTED INTEGER,
PRIMARY KEY (DATE, TIME, SYSTEM_ID, DEPARTMENT_NAME, USER_ID))
IN &DATABASE.DRLSSAMP;

Definition member DRLTSAMP, defining tables and updates (using DEFINE UPDATE)


/**********************************************************************/
/* Define update from record SAMPLE_01 */
/**********************************************************************/
DEFINE UPDATE SAMPLE_01_H
VERSION 'IBM.110'
FROM SAMPLE_01
TO &PREFIX.SAMPLE_H
GROUP BY
(DATE = S01DATE,
TIME = ROUND(S01TIME,1 HOUR),
SYSTEM_ID = S01SYST,
DEPARTMENT_NAME = VALUE(LOOKUP DEPARTMENT_NAME
IN &PREFIX.SAMPLE_USER
WHERE S01USER = USER_ID,
'?'),
USER_ID = S01USER)
SET
(TRANSACTIONS = SUM(S01TRNS),
RESPONSE_SECONDS = SUM(S01RESP),
CPU_SECONDS = SUM(S01CPU/100.0),
PAGES_PRINTED = SUM(S01PRNT));

/**********************************************************************/
/* Define update from SAMPLE_H */
/**********************************************************************/
DEFINE UPDATE SAMPLE_H_M
VERSION 'IBM.110'
FROM &PREFIX.SAMPLE_H
TO &PREFIX.SAMPLE_M
GROUP BY
(DATE = SUBSTR(CHAR(DATE),1,8) || '01',
SYSTEM_ID = SYSTEM_ID,
DEPARTMENT_NAME = DEPARTMENT_NAME,
USER_ID = USER_ID)
SET
(TRANSACTIONS = SUM(TRANSACTIONS),
RESPONSE_SECONDS = SUM(RESPONSE_SECONDS),
CPU_SECONDS = SUM(CPU_SECONDS),
PAGES_PRINTED = SUM(PAGES_PRINTED));


Defining updates and views


DRLUxxxx members of the DRL310.SDRLDEFS library define updates not previously defined in DRLTxxxx
definition members. For example, member DRLUMVAV in the DRL310.SDRLDEFS library defines updates
from record types SMF_030 and SMF_070 to the AVAILABILITY_T table.
DRLVxxxx members of the DRL310.SDRLDEFS library define views not previously defined in DRLTxxxx
definition members. For example, member DRLVC901 in the DRL310.SDRLDEFS library defines views on
the CICS_T_TRAN_T table for CICS unit-of-work processing.

Defining triggers
Triggers are treated like updates and defined in the DEFS member containing the table for which the
trigger is required. Triggers are defined using SQL and follow the SQL rules, with one exception: the BEGIN ATOMIC clause. In SPUFI, the SQLTERM() parameter or the --#SET TERMINATOR command can change the statement termination character, which allows a semi-colon ";" to be nested within the SQL statement; DEFS coding supports neither. When coding a trigger in a DEFS member, the same effect is achieved by terminating each statement within the BEGIN ATOMIC clause with a hash character surrounded by blanks (" # ").
Example:

SQL CREATE TRIGGER &PREFIX.CP_CPU_LPAR_HI
NO CASCADE
BEFORE INSERT ON &PREFIX.CP_CPU_LPAR_H
REFERENCING NEW AS NEW_ROW
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
SET PEAK_TIME = NEW_ROW.TIME #
END;
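For comparison, outside a DEFS member the same nesting problem is normally solved in SPUFI by switching the terminator, which is the standard Db2 technique that the DEFS " # " convention replaces. A sketch of the equivalent SPUFI input, using the default DRL table prefix because the &PREFIX variable is not resolved by SPUFI:

  --#SET TERMINATOR #
  CREATE TRIGGER DRL.CP_CPU_LPAR_HI
    NO CASCADE
    BEFORE INSERT ON DRL.CP_CPU_LPAR_H
    REFERENCING NEW AS NEW_ROW
    FOR EACH ROW MODE DB2SQL
    BEGIN ATOMIC
      SET PEAK_TIME = NEW_ROW.TIME;
    END#
  --#SET TERMINATOR ;

Here the nested semi-colon is legal because the statement terminator is temporarily the hash character.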

Defining reports
DRLOxxxx members of the DRL310.SDRLRENU library use report definition language to define report
groups and reports in IBM Z Performance and Capacity Analytics system tables. Report definition
members are contained in national language-specific definition libraries.
Figure 57 on page 131 shows the definition for the reports and report group of the Sample component.

DEFINE GROUP SAMPLE
VERSION 'IBM.110'
DESC 'Sample Reports';

DEFINE REPORT SAMPLE01
VERSION 'IBM.110'
DESC 'Sample Report 1'
QUERY DRLQSA01
FORM DRLFSA01
CHART DRLGSURF
ATTRIBUTES SAMPLE
GROUPS SAMPLE;

DEFINE REPORT SAMPLE02
VERSION 'IBM.110'
DESC 'Sample Report 2'
QUERY DRLQSA02
FORM DRLFSA02
ATTRIBUTES SAMPLE
GROUPS SAMPLE;

DEFINE REPORT SAMPLE03
VERSION 'IBM.110'
DESC 'Sample Report 3'
QUERY DRLQSA03
FORM DRLFSA03
CHART DRLGHORB
ATTRIBUTES SAMPLE
GROUPS SAMPLE;

Figure 57. Definition member DRLOSAMP, defining reports and report groups


The IBM Z Performance and Capacity Analytics report definition program uses the definitions in DRLOxxxx
members to locate these types of members for each report:
Member type
Description
DRLQxxxx
Report queries in DRL310.SDRLRxxx
DRLFxxxx
Report forms in DRL310.SDRLRxxx
DRLGxxxx
Report charts in DRL310.SDRLFxxx
where xxx refers to your national-language code (for example, ENU for English).
IBM Z Performance and Capacity Analytics imports members in these data sets to QMF to provide queries
and forms for predefined reports. If QMF is not used, the contents of the report queries and forms are
stored in IBM Z Performance and Capacity Analytics system tables.
DRLQxxxx members in the DRL310.SDRLRENU library are queries for predefined reports. Figure 58 on
page 132 shows the query for Sample Report 1.

SELECT TIME, DEPARTMENT_NAME, SUM(CPU_SECONDS)
FROM &PREFIX.SAMPLE_H
WHERE SYSTEM_ID = &SYSTEM_ID.
GROUP BY TIME, DEPARTMENT_NAME

Figure 58. IBM Z Performance and Capacity Analytics definition member DRLQSA01, report query

DRLFxxxx members in the DRL310.SDRLRENU library are QMF forms for predefined English reports. For
example, DRLFSA01 is the QMF form for Sample Report 1.
DRLGxxxx members in the DRL310.SDRLFENU library are GDDM/ICU formats for predefined English
reports. For example, DRLGSURF is the GDDM/ICU format used for Sample Report 1.

Naming convention for IBM Z Performance and Capacity Analytics definition members
This section describes the naming convention for members of the DRL310.SDRLDEFS and
DRL310.SDRLRENU libraries. For information on defining these libraries, see “Overview of IBM Z
Performance and Capacity Analytics objects” on page 125.

Naming convention for members of DRLvrm.SDRLDEFS


The naming convention for the IBM Z Performance and Capacity Analytics definitions library is:
Naming convention
Description
DRLBxxxx
Log data manager collect statements
DRLIxxxx
Component definitions (SQL statements that are executed when the system tables are created or
updated)
DRLLxxxx
Log definitions
DRLRxxxx
Record definitions “Record definitions supplied with IBM Z Performance and Capacity Analytics” on
page 288 describes record definitions.
DRLSxxxx
Table space definitions

DRLTxxxx
Table and update definitions
DRLUxxxx
Update definitions (when separate from tables)
DRLVxxxx
View definitions
DRLWxxxx
Migration definitions

Naming convention for members of DRLvrm.SDRLRENU


The naming convention for the IBM Z Performance and Capacity Analytics (predefined) reports definitions
library SDRLRENU is:
Naming convention
Description
DRLOxxxx
Report definitions
DRLQxxxx
SQL queries
DRLFxxxx
QMF forms

Chapter 6. Administering IBM Z Performance and Capacity Analytics

Setting up operating routines


About this task
This section describes how to develop operating routines for:
• “Collecting log data” on page 135
• “Administering the IBM Z Performance and Capacity Analytics database” on page 145
• “Administering reports” on page 160
The sample jobs described may not be identical to those shipped with IBM Z Performance and Capacity
Analytics. Before using these jobs, refer to the samples in the DRL310.SDRLCNTL library.

Collecting log data

About this task


One of your primary responsibilities is to establish routines to collect data. To do this, you can use either
the IBM Z Performance and Capacity Analytics administration dialog or log collector language statements
that you execute through either a job or the dialog. This section describes:
1. How to collect data from the SAMPLE log type. The Sample component contains a log definition, record
definitions, and update definitions for collecting SAMPLE log data sets.
2. How to collect data in batch without using the dialog. See “Collecting data from a log into Db2
tables” on page 186 for information about using the dialog to collect data. You can also automate the
collection of data using the log data manager option, described in “Working with the log data manager
option” on page 238.

Collecting data through the administration dialog

About this task


To collect log data from a SAMPLE log data set:

Procedure
1. From the Administration window, select 3, Logs, and press Enter.
The Logs window is displayed.
2. From the Logs window, select Sample and press F11.
The Collect window is displayed.
3. Type DRL310.SDRLDEFS(DRLSAMPL) in the Data set field.
This is the name of the data set that contains the log data.
4. Press F4 to start an online collect process.
After the data collection is complete, IBM Z Performance and Capacity Analytics displays statistics
about the collect. (See “Sample collect messages” on page 140 for more information about the
statistics.)
5. When the collect is complete, press F3.


The product returns to the Logs window.


6. From the Logs window, press F3.
The product returns to the Administration window.

Using log collector language to collect data

About this task


To collect log data using the SAMPLE log definition, create and submit the JCL (Figure 59 on page 136).

//jobname JOB parameters
//LC EXEC PGM=DRLPLC,PARM=('SYSPREFIX=DRLSYS SYSTEM=DSN')
//STEPLIB DD DISP=SHR,DSN=DRLxxx.SDRLLOAD
//DRLIN DD *
COLLECT SAMPLE;
//DRLLOG DD DISP=SHR,DSN=DRLxxx.SDRLDEFS(DRLSAMPL)
//DRLOUT DD SYSOUT=*
//DRLDUMP DD SYSOUT=*

Figure 59. Invoking the log collector in batch to collect data

IBM Z Performance and Capacity Analytics uses the log collector program (DRLPLC) to collect the SAMPLE
log type, using these ddnames:
DD statement name
Description
DRLIN
Contains the log collector language statements. It can contain fixed-length or varying-length records
of any length, but the log collector reads a maximum of 72 bytes from each record.
DRLLOG
Identifies the log data set. The data set attributes are determined by the program creating the log.
DRLOUT
Identifies where collect messages are routed. It can have fixed-length or varying-length records of
any length, but the log collector assumes a length of at least 80 bytes for formatting. Lines that are
longer than the specified record length are wrapped to the next line. DRLOUT is allocated as RECFM=F
and LRECL=80 if no DCB attributes are specified.
DRLDUMP
Identifies where collect diagnostics are routed. It can have fixed-length or varying-length records
of any length, but the log collector assumes a length of at least 80 bytes for formatting. Lines that
are longer than the specified record length are wrapped to the next line. DRLDUMP is allocated as
RECFM=F and LRECL=80 if no DCB attributes are specified.
DRLSMSG
Contains message numbers to be written to SYSLOG. It can contain fixed-length or varying-length
records of any length. Only messages 0000-0999 and 2000-2999 are eligible to be written to
SYSLOG. By default, only messages 0290-0298 are written to SYSLOG. A hash sign designates the
start of a comment which goes until the end of the line.

Each DRLSMSG record can contain one or more of the following tokens, separated by blanks:
*I
This enables all DRLnnnnI messages to be written to SYSLOG.
*W
This enables all DRLnnnnW messages to be written to SYSLOG.
*E
This enables all DRLnnnnE messages to be written to SYSLOG.
*S
This enables all DRLnnnnS messages to be written to SYSLOG.
*T
This enables all DRLnnnnT messages to be written to SYSLOG.
integer
This enables the message with the specified number to be written to SYSLOG.
-integer
This disables the message with the specified number from being written to SYSLOG.

//DRLSMSG DD *
*S -0390 -0398 -0399
/*

Figure 60. DRLSMSG example
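In this example, *S causes all severe (DRLnnnnS) messages to be written to SYSLOG, while the -0390, -0398, and -0399 entries exclude those three message numbers.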

The DRLJCOLL job


The sample job DRLJCOLL is a generic collect job, adaptable for most logs.
“DRLJCOLL job for collecting data from an SMF data set” on page 137 shows sample job DRLJCOLL, used
to collect data from an SMF log data set.
Note: Ensure that the log data sets that are used as input for the collect (DRLLOG DD statement) are
sorted in chronological order.
DRLJCOLL job for collecting data from an SMF data set

//DRLJCOLL JOB (ACCT#),'COLLECT'
//******************************************************************
//* Name: DRLJCOLL *
//* *
//* Function: *
//* IBM Z Performance and Capacity Analytics collect job *
//* *
//* Replace "COLLECT SMF" below with one of the following *
//* statements to collect other logs: *
//* *
//* COLLECT DCOLLECT *
//* WHERE DCUDATE > DATE(LOOKUP LAST_DCOLLECT_TIME *
//* IN DRL.DFSMS_LAST_RUN *
//* WHERE DCUSYSID = MVS_SYSTEM_ID *
//* AND DCURCTYP = RECORD_TYPE); *
//* (replace DRL with the table prefix you use) *
//* (the lookup table DFSMS_LAST_RUN must be initialized *
//* before the first collect as described in the DFSMS *
//* customization section of the SP Reference manual) *
//* *
//* COLLECT EREP; *
//* *


//* SET JES_COMPLEX = ' ';                                        *
//* COLLECT SYSLOG_JES2; *
//* *
//* For operations log (OPERLOG) produced using the System *
//* Logger, use the COLLECT statement above and change the *
//* //DRLLOG statement as follows: *
//* //DRLLOG DD DSN=SYSPLEX.OPERLOG,DISP=SHR, *
//* DCB=(LRECL=32756, BLKSIZE=32760, RECFM=VB), *
//* SUBSYS=(LOGR,, *
//* 'FROM=(2015/152,00:00),TO=(2015/153,23:59)',) *
//* *
//* SET JES_COMPLEX = 'JES3COMP'; *
//* COLLECT SYSLOG_JES3; *
//* (replace JES3COMP with the name of the JES3 complex) *
//* *
//* SET MVS_SYSTEM_ID = 'MVS1'; *
//* COLLECT NETVIEW; *
//* (replace MVS1 with the name of the MVS system) *
//* *
//* COLLECT OPC; *
//* *
//* SET VMID = 'VM1'; *
//* COLLECT VMACCT; *
//* (replace VM1 with the name of the VM system) *
//* *
//* COLLECT VMPRF; *
//* COLLECT VMPERFT; *
//* *
//* COLLECT UNIX; *
//* *
//* COLLECT OS400_JOURNAL; *
//* COLLECT OS400_CONFIG; *
//* COLLECT OS400_HISTORY; *
//* COLLECT OS400_PM_DISK; *
//* COLLECT OS400_PM_POOL; *
//* COLLECT OS400_PM_SYS; *
//* *
//* SET UNLOAD_DATE = 'YYYY-MM-DD'; *
//* SET SYSTEM_ID = 'MVS1'; *
//* COLLECT RACFCONF REPROCESS; *
//* (Replace YYYY-MM-DD with the date when you run the *
//* RACF Database Unload utility. As default, the current *
//* date is used) *
//* (Replace MVS1 with the name of your system. As default, *
//* $UNK is used) *
//* *
//* COLLECT LINUX; *
//* COLLECT ZLINUX; *
//* *
//* For some logs, special collect jobs are required: *
//* DRLJCOIM IMS log *
//* DRLJCOVP Network configuration data *
//* DRLJCOIN Tivoli Information Management for z/OS data *
//* *
//* Before you submit the job: *
//* - Check the IBM Z Performance and Capacity Analytics *
//* and Db2 data set names. *
//* - Check the Db2 subsystem name (default is DSN) *
//* and IBM Z Performance and Capacity Analytics *
//* system table *
//* prefix (default is DRLSYS). *
//* - Insert the correct collect statement in DRLIN *
//* (as described above). *
//* - Specify the name of the log data set in DRLLOG. *
//******************************************************************
//COLLECT EXEC PGM=DRLPLC,PARM=('SYSTEM=DSN SYSPREFIX=DRLSYS')
//STEPLIB DD DISP=SHR,DSN=DRL310.SDRLLOAD
// DD DISP=SHR,DSN=db2loadlibrary
//DRLIN DD *
COLLECT SMF;
//DRLLOG DD DISP=SHR,DSN=log-data-set
//DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
/*

Some logs require special collect procedures, which are supplied by the product. They are:


Collect job name
Description
DRLJCOIM
Collects IMS log data.
DRLJCOIN
Collects Tivoli Information Management for z/OS data.
DRLJCOVP
Collects network configuration data.

Collecting data from IMS

About this task


DRLJCOIM is a sample job for collecting data from the IMS SLDS log. For information about collecting IMS
data and generating composite data records that combine various types of IMS log records, refer to the
IMS Performance Feature Guide and Reference.

Collecting data from Tivoli Information Management for z/OS

About this task


The sample job, DRLJCOIN uses DRLJRFT2 to read data from the Tivoli Information Management for
z/OS database. DRLJRFT2 is a Tivoli Information Management for z/OS report format table (RFT) in the
DRLxxx.SDRLCNTL library. For information about collecting data from the Tivoli Information Management
for z/OS database, refer to the System Performance Feature Reference Volume 1.

Collecting network configuration data

About this task


DRLJCOVP is a sample job for collecting network configuration data (vital product data). For information
about collecting network configuration data, refer to the Network Performance Feature Reference.

Performing routine data collection

About this task


When you set up your collect jobs, consider these guidelines:
• Collect data at off-peak hours.
Log data sets are generally available, online systems have been taken down, and there is less contention
for processing resources.
• Collect data daily, at least in the beginning (and especially from SMF and IMS logs).
• If you collect data from several systems, establish a procedure to get all the log data into the system
that contains the product database.
• Set up automatic procedures for submitting collect jobs. For example, use Tivoli Workload Scheduler
for z/OS (previously known as OPC, Operation Planning and Control) to initiate collect jobs. Refer to the
Tivoli Workload Scheduler for z/OS documentation for more information about the product. You can also
use the log data manager option to automate and obtain better control of the submitting of collect jobs.
This option is described in “Working with the log data manager option” on page 238.


Monitoring collect activity

About this task


IBM Z Performance and Capacity Analytics provides statistics about collect activity in messages called
collect messages and in the DRLSYS.DRLLOGDATASETS system table, described in the following sections.
Review collect activity to identify:
• Tables in high demand during collect processing. These tables are candidates for tuning to improve
performance.
• Errors that occur in user-defined IBM Z Performance and Capacity Analytics objects.
• Any other errors that the log collector finds.

Sample collect messages


Figure 61 on page 140 shows a set of sample messages generated during a collect job.

DRL0300I Collect started at 2000-12-04-10.04.15
DRL0302I Processing SMF.DATA.SET on VOL001
DRL0341I The first record timestamp is 2000-06-03-07.00.01.730000.
DRL0308I A database update started after 2608 records due to a buffer-full condition
DRL0342I The last record timestamp is 2000-06-03-11.52.40.220000.
DRL0310I A database update started after 4582 records due to end of log
DRL0313I The collect buffer was filled 1 times. Consider increasing the
collect buffer size.
DRL0003I
DRL0315I Records read from the log or built by log procedure:
DRL0317I Record name | Number
DRL0318I -------------------|----------
DRL0319I SMF_000 | 0
DRL0319I SMF_006 | 6
DRL0319I SMF_007 | 0
DRL0319I SMF_021 | 0
DRL0319I SMF_025 | 0
DRL0319I SMF_026 | 476
DRL0319I SMF_030 | 3737
DRL0319I SMF_070 | 40
DRL0319I SMF_071 | 40
DRL0319I SMF_072_1 | 280
DRL0319I SMF_090 | 0
DRL0320I Unrecognized | 3
DRL0318I -------------------|----------
DRL0321I Total | 4582
DRL0003I
DRL0316I Records built by record procedures:
DRL0317I Record name | Number
DRL0318I -------------------|----------
DRL0319I SMF_030_X | 2012
DRL0319I SMF_070_X | 200
DRL0318I -------------------|----------
DRL0321I Total | 2212
DRL0003I
DRL0323I -------Buffer------ ------Database-----
DRL0324I Table name | Inserts Updates Inserts Updates
DRL0325I ----------------------------|----------------------------------------
DRL0326I DRL .AVAILABILITY_D | 3 23 2 1
DRL0326I DRL .AVAILABILITY_M | 3 1 2 1
DRL0326I DRL .AVAILABILITY_T | 9 76 9 0

DRL0326I DRL .MVS_WORKLOAD_H | 144 336 132 12
DRL0326I DRL .MVS_WORKLOAD_M | 60 12 48 12
DRL0325I ----------------------------|----------------------------------------
DRL0327I Total | 2643 99019 2148 495
DRL0003I
DRL0301I Collect ended at 2000-12-04-10.09.43
DRL0356I To update the database, the algorithm SCAN was most selected.

Figure 61. Sample collect messages


Using collect messages

About this task


To use collect messages effectively, follow this procedure:

Procedure
1. Identify which log was collected and when it started.
The first messages in a set of collect messages show when the collect starts and identify the data set.
The product then shows the timestamp of the first identified record in the log, which looks like this:

DRL0341I The first record timestamp is 2000-06-03-07.00.01.730000.

2. Look for database activity.


The product writes data to the database when:
• The buffer is full. See “Improving collect performance” on page 144 if the buffer fills often. An
example message is:

DRL0308I A database update started after 2608 records due to a buffer-full condition

• All log data set records have been processed. An example message is:

DRL0310I A database update started after 4582 records due to end of log

• A specific number of records has been read. The number is specified in the COMMIT AFTER operand
of the COLLECT statement; an illustrative COLLECT statement using this operand appears after this
procedure. An example message, where 1000 was specified as the COMMIT AFTER operand, is:

DRL0309I A database update started after 1000 records.

3. Determine the last record that the product identified in the log.

DRL0342I The last record timestamp is 2000-06-03-11.52.40.220000.

4. Review record-type statistical messages.


Collection statistics for record-type processing include:
• The type of each record processed
• The number of each record type found in the log data set
• The total number of records processed
IBM Z Performance and Capacity Analytics does not process any log records whose record type is
either not defined, or defined but not used by collect. It issues a statistical message that labels the
records unrecognized:

DRL0315I Records read from the log or built by log procedure:
DRL0317I Record name | Number
DRL0318I -------------------|----------

DRL0319I SMF_026 | 476
DRL0319I SMF_030 | 3737

DRL0320I Unrecognized | 3
DRL0318I -------------------|----------
DRL0321I Total | 4582

5. Verify that user-defined log, record, and update definitions are performing as expected. Check that
appropriate data is being collected and stored in the appropriate tables.


6. Examine the processing performed by log and record procedures.


When IBM Z Performance and Capacity Analytics finds records that require handling by record
procedures, it produces temporary, intermediate records for further IBM Z Performance and Capacity
Analytics processing. Messages show the names and numbers of intermediate records built by record
procedures while IBM Z Performance and Capacity Analytics was processing the log data set.
The messages appear in a group; for example:

DRL0316I Records built by record procedures:
DRL0317I Record name | Number
DRL0318I -------------------|----------
DRL0319I SMF_030_X | 2012
DRL0319I SMF_070_X | 200
DRL0318I -------------------|----------
DRL0321I Total | 2212

7. Examine database activity to identify tables with the most activity during collect processing.
Database inserts and updates show the number of rows inserted or updated in Db2 tables. The
number of rows inserted in the database and the number of rows updated in the database equal the
number of buffer inserts. Statistical messages of this sort look like these:

DRL0323I -------Buffer------ ------Database-----
DRL0324I Table name | Inserts Updates Inserts Updates
DRL0325I ----------------------------|----------------------------------------
DRL0326I DRL .AVAILABILITY_D | 3 23 2 1

DRL0326I DRL .MVS_WORKLOAD_M | 60 12 48 12
DRL0325I ----------------------------|----------------------------------------
DRL0327I Total | 2643 99019 2148 495

8. You can use message DRL0356I to optimize the collect process by selecting the SCAN or DIRECT
parameter; see also the illustrative statement after this procedure. For more details, refer to the
Language Guide and Reference. Following is an example of message DRL0356I:

DRL0356I To update the database, the algorithm SCAN was most selected.
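As referenced in steps 2 and 8, the COLLECT statement can control commit frequency and the update
algorithm. The following DRLIN input is an illustrative sketch only; verify the exact operand syntax in the
Language Guide and Reference before using it:

//DRLIN DD *
COLLECT SMF
  COMMIT AFTER 1000 RECORDS
  DIRECT;
/*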

Reviewing log statistics

About this task


Use the administration dialog to create a log statistics file for any log data set, regardless of whether it has
been collected. See “Displaying log statistics” on page 187 for more information.
Note: There are no lookup tables in the table name list.

Using the DRLLOGDATASETS table

About this task


The DRLSYS.DRLLOGDATASETS system table contains one row of information for each log data set IBM
Z Performance and Capacity Analytics collects. DRLLOGDATASETS contains collect statistics, such as
elapsed time for a collection, record types collected, and numbers of records processed.
The product uses the data set name, log type, and the first 80 bytes from the first recognized record, to
warn against attempts to collect a log data set already collected.
Data sets can contain identical records, but with different names. If you want to be notified when such
a second data set is processed, redefine the DRLLOGDATASETS system table so that it does not use the
DATASET_NAME column as a key. Collection of the second data set then fails with ABEND U0016 and
SQL code -803 against the DRLLOGDATASETS system table.
To view collect statistics, select a log definition from the Logs window, press F6 to see the data sets
that have been collected for the log, choose a data set, and press Enter. The Collect Statistics window is
displayed (Figure 62 on page 143).


Note: First timestamp is the first record selected, Last timestamp is the last record selected. Last
timestamp might show an earlier date and time than the first timestamp.

DCOLLECT Collect Statistics

Press Enter to return.

Data set         : IM3.DCOLLECT.SLOG14
Volume           : TSOL06

Time collected   : 2000-02-11-12.38.00      Collected by : LASZLOM
Elapsed time     : 54                       Return code  : 4
Times collected  : 3                        Completed    : Y

First record     : 000000700000E540000ID5D9C4F10048D2740092
                   276F00000000D7D9C9F0F0F0E700000000280010
First timestamp  : 2000-10-02-13.15.24
Last timestamp   : 2000-10-02-13.15.24

Records read     : 16458    Records selected : 16458
Database updates : 7        Inserts : 4954    Deletes : 0

F1=Help F2=Split F9=Swap F12=Cancel

Figure 62. Collect Statistics window

IBM Z Performance and Capacity Analytics can produce a report from DRLLOGDATASETS that shows
statistics for every collect job in the table.
The product does not update DRLLOGDATASETS until a collection results in a successful commit. If it
finds an error that terminates processing of a log data set, such as a locking error or an out of space error,
it does not update DRLLOGDATASETS. If it has already created a row for the log data set (which it does
at the first commit), it does not update such indicators of a successful conclusion to processing as the
Elapsed seconds column or the Complete column. See “Recovering from database errors” on page 156
for more information.
Refer to “DRLLOGDATASETS” on page 259 for a description of its columns.
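For example, such a report can be produced with a simple batch SQL or SPUFI query. The following is
a sketch only: apart from DATASET_NAME, the column names shown are assumptions, so check
“DRLLOGDATASETS” on page 259 for the actual column names before using it:

SELECT DATASET_NAME, LOG_TYPE, TIME_COLLECTED,
       RECORDS_READ, RECORDS_SELECTED, RETURN_CODE
  FROM DRLSYS.DRLLOGDATASETS
 ORDER BY TIME_COLLECTED DESC;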

Collecting multiple log data sets

About this task


To collect multiple log data sets, specify the log data set names in the DRLLOG job card of the collect job
as follows:

//DRLIN   DD *
COLLECT log-name
...
//DRLLOG  DD DISP=SHR,DSN=log-data-set-1
//        DD DISP=SHR,DSN=log-data-set-2
//        DD DISP=SHR,DSN=log-data-set-3
//DRLOUT  DD SYSOUT=*

If the log collecting job stops prematurely, you can start it again. In this case, the log collector does not
collect the records of the data sets that were already completely processed and the following messages
are issued:

DRL0302I Processing log-data-set-1 on EPDM0F
DRL0303W The log data set has already been processed. Data set name: log-data-set-1

The COLLECT process completes with a return code of 4.


If a log data set was only partially processed, the log collector does not collect the records that were
already collected. In this way, the same data is not summarized twice.


Note: If the IMS checkpoint mechanism (DRLICHKI, DRLICHKO) is used, you cannot resubmit the same
collect job when using multiple concatenated IMS data sets. If you resubmit the same collect job you
could encounter a problem of duplicate key, because the DRLICHKI of the previous job would be used.

Improving collect performance

About this task


Correct collect performance problems using these tuning tasks:

Procedure
1. Optimize the collect buffer size.
Optimizing the size of the collect buffer has the greatest impact on performance.
a) Reduce the number of times IBM Z Performance and Capacity Analytics stops reading a log data set
to write data to the database by increasing the buffer size.
Message DRL0313I shows the number of database updates because of a full buffer. Look for cases
where the number of updates could be reduced by increasing the size of the buffer.
The optimum is to reduce the number of updates to 0.
b) The default buffer size is 10 MB. Use the buffer size operand of the COLLECT statement to increase
the size to 20 MB to 30 MB, or more.
Refer to the Language Guide and Reference for more information about the COLLECT statement.
c) Do not use the COMMIT AFTER nn RECORDS operand on the COLLECT statement.
2. Reduce the amount of data committed to the database.
a) Remove unnecessary tables using the INCLUDE/EXCLUDE clauses of the COLLECT statement.
b) Examine collect messages to determine the most active tables.
c) Concentrate on tables with a lot of buffer and database inserts and updates shown in DRL0326I
messages.
d) Modify update definitions to eliminate needless rows in tables.
For example, set a key column to a constant (such as a blank) instead of to a value from a record if
the detail is unnecessary.
e) Reduce the number of columns collected:
i) Delete unneeded columns from the update definition of the table.
ii) Remove the columns in the SQL CREATE TABLE statement of the table definition.
iii) Drop the table.
iv) Re-create the table.
Note: With Db2 multiple insert functionality, when data is collected to data tables, the insert
statements are issued in bulk. Multiple rows are inserted with a single Db2 multiple insert
statement. This results in significant performance improvements. However, this performance
improvement decreases as the number of columns inserted increases.
3. Improve update effectiveness.
a) Define an index on the primary key but no other indexes for tables you create.
b) Do not use a LOOKUP expression with the LIKE operand (especially for large lookup tables) in
update definitions you create. Use an = operand where possible.
c) Minimize the number of rows in lookup tables that allow global search characters and in the
PERIOD_PLAN control table.
d) Run collect when the processing load from other programs is low and when Db2 use is light.
e) Optionally, choose the appropriate algorithm to update the Db2 database by specifying the DIRECT
or SCAN parameter in the COLLECT statement.


If you do not specify any parameter, the collect process automatically chooses an algorithm among
the DIRECT, SCAN, and INSERT algorithms. This automatic selection, however, can be very time
consuming. To improve the performance, you can force the collect process to use either the DIRECT
or SCAN algorithm only, by specifying the DIRECT or SCAN parameter in the COLLECT statement.
For details about these parameters, refer to the Language Guide and Reference manual.
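Putting these options together, the following DRLIN input is an illustrative sketch of a tuned COLLECT
statement. The BUFFER SIZE and SCAN operands are described in the Language Guide and Reference;
the EXCLUDE LIKE pattern is an assumption modeled on the PURGE INCLUDE LIKE example shown later
in this chapter, so verify the syntax before using it:

//DRLIN DD *
COLLECT SMF
  BUFFER SIZE 30 M
  EXCLUDE LIKE 'DRL.CICS%'
  SCAN;
/*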

Administering the IBM Z Performance and Capacity Analytics database

About this task


Maintaining the IBM Z Performance and Capacity Analytics database includes purging unneeded data,
reorganizing the database, updating Db2 statistics, backing up data, updating views on the Db2 catalog,
and protecting the integrity of data by controlling access. These regular maintenance tasks are described
in the sections that follow.
Besides regularly scheduled jobs, run the RUNSTATS utility periodically while the database is growing to:
• Provide the Db2 optimizer with information. (After the database stabilizes, RUNSTATS does not make a
significant contribution to the Db2 optimizer.)
• Provide table size statistics for IBM Z Performance and Capacity Analytics.
See “Monitoring the size of the IBM Z Performance and Capacity Analytics database” on page 157 for
more information.
The remainder of this section introduces the use of Db2 as the product database manager and shows how
to use Db2 to maintain the database.

Understanding Db2 concepts


By default, IBM Z Performance and Capacity Analytics names for Db2-related items are:
DSN
Names the Db2 subsystem.
DRLDB
Names the product database.
DRLSSYS1
Names the product table space that contains log collector system tables.
DRLSSYS2
Names the product table space that contains other system tables.
DRLSSAMP
Names the product table space that contains tables for the Sample component.
DRLSCOM
Names the product table space that contains common tables that the product components use.
The names of other product table spaces depend on the components you install. There is at least one
table space for each component.
Figure 63 on page 146 shows the data areas in the Db2 subsystem.


[Figure: The DSN Db2 subsystem contains the DRLDB database together with other Db2 databases.
Within DRLDB, the table spaces DRLSSYS1 and DRLSSYS2 hold the IBM Z Performance and Capacity
Analytics system tables, DRLSSAMP holds the Sample component tables, and DRLSCOM holds the
common tables; other product table spaces also reside in DRLDB.]

Figure 63. Db2 environment for the IBM Z Performance and Capacity Analytics database

Understanding how IBM Z Performance and Capacity Analytics uses Db2


Figure 63 on page 146 shows an IBM Z Performance and Capacity Analytics installation that uses one
product database. There can be more than one product database in the installation of IBM Z Performance
and Capacity Analytics , more than one IBM Z Performance and Capacity Analytics installation in one Db2
subsystem, more than one Db2 subsystem with an installation of the product, and so on.

Understanding table spaces


Figure 63 on page 146 shows that the product uses several table spaces in the DRLDB database. A table
space contains one or more tables and is the logical unit addressed by Db2 utilities such as COPY and
REORGanize.
To list the table spaces belonging to the current database:
1. Select 4, Tables, from the Administration window.
2. Without selecting a table, select the Maintenance pull-down.
3. Select 1, Tablespace, from the options.
Figure 64 on page 146 shows the list of table spaces, with the Utilities pull-down.

Tablespace  Utilities  Other  Help

            1. Run DB2 REORG utility...                Row 1 to 20 of 37
            2. Run DB2 RUNSTATS utility...
Select a ta 3. Run DB2 REORG/DISCARD utility...  ablespace definition.

                                  Quantity
/ Tablespace  Primary  Secondary  Storage grp  Type       Locksize
  DRLSAIX        6000       3000  SYSDEFLT     SEGMENTED  TABLE
  DRLSCI08        100         52  STOEPDM      SEGMENTED  TABLE
  DRLSCOM       20000      10000  SYSDEFLT     SEGMENTED  TABLE
  DRLSCP           60         32  SYSDEFLT     SEGMENTED  TABLE
  DRLSDB2       40000      20000  SYSDEFLT     SEGMENTED  TABLE
  DRLSDFSM      60000      30000  SYSDEFLT     SEGMENTED  TABLE
  DRLSDPAM        100         52  SYSDEFLT     SIMPLE     ANY

Figure 64. Tablespace list window

When you change table space or index space parameters, the product uses SQL commands to alter
the space directly, and creates a job to unload and load table data as necessary. IBM Z Performance
and Capacity Analytics does not change the definition of the table space itself. To change the definition,
select the Space pull-down on the Components window.
If you create a table in the product database, you must specify the database and table space in which Db2
is to create the table. Once created, a table can be addressed by its table name only. You do not need to
specify the table space name.
“Working with tables and update definitions” on page 216 describes how to use the administration dialog
to view, change, or create table spaces.

Calculating and monitoring table space requirements

About this task


To make effective use of the available space, you need to monitor the storage required for your data
tables. The sample job, DRLJTBSR (in the DRL310.SDRLCNTL library), produces a detailed report about
the space required for some or all of the selected component tables, based on the average record size and
estimated number of rows.
To customize the job to your requirements, you must change some parameters in DRLJTBSR. For a
description of these parameters, see “Parameters for table space reporting” on page 148.
DRLJTBSR job that reports table space requirements

//DRLJTBSR JOB (ACCT#),'SPACE'
//********************************************************************
//* Name: DRLJTBSR *
//* *
//* FUNCTION: Print a report of estimated total kilobytes based on *
//* estimated records number and average record length *
//* for each table on component. *
//* Average records length is calculated,if the table is *
//* not created, reading the IBM Z Performance and *
//* Capacity Analytics definition library *
//* *
//* The exec DRLETBSR accepts the following parameters: *
//* *
//* LIBRARY= IZPCA definition library *
//* SYSPREFIX= IZPCA system table prefix *
//* DB2SUBSYS= Db2 subsystem name *
//* COMPONENT= Component name. To have a complete list of *
//* component short name read the DRLCOMPONENTS *
//* system table *
//* TABLENAME= Table name ('*' to select all table) *
//* RECNUMBER= Estimated record numbers *
//* PAGESIZE= Value of pagesize . Can be 4K or 32K. *
//* Optional parameter, default value when *
//* not specified is 4K *
//* MAXROWS= Maximum number of rows per pages. Maximum *
//* value allowed is 255. *
//* Optional parameter, default value when not *
//* specified is 255 *
//* PCTFREE= Percentage of free space on each page. *
//* Value allowed from 0 to 99. *
//* Optional parameter, default value when not *
//* specified is 5 *
//* FREEPAGE= Number of free space pages. Value allowed *
//* from 0 to 255. *
//* Optional parameter, default value when not *
//* specified is 0 *
//* COMPRESS= Compression ratio. Optional parameter. *
//* The value must be in range from 0 to a value *
//* less than 1 *
//* Default value when not specified is 0. *
//* *
//* Notes: *
//* Before you submit the job, do the following: *
//* 1. Check that the data set names are correct. *
//* 2. Change the parameters to DRLETBRS as required. *
//* 3. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* *


//********************************************************************
//SPACE EXEC PGM=IKJEFT01,DYNAMNBR=25
//*
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD <== DATA SET NAME
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC <== DATA SET NAME
//SYSEXEC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC <== DATA SET NAME
//***************************
//* START EXEC DRLETBSR
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLETBSR LIBRARY= DRLvrm.SDRLDEFS -
DB2SUBSYS= DSN -
SYSPREFIX= DRLSYS -
COMPONENT= xxxx -
TABLENAME= * -
RECNUMBER= xxxx -
PAGESIZE= 4K -
MAXROWS= 255 -
PCTFREE= 5 -
FREEPAGE= 0 -
COMPRESS= 0
/*

Following is sample output for job DRLJTBSR that shows the space required for all tables of the IMS
collect component.
Sample output for DRLJTBSR

Statistics for space required for a component:
----------------------------------------------
Input library            : DRL310.SDRLDEFS
Db2 subsystem            : DSN7
PR system prefix         : PRM3SYS
Component                : IMSV710C
Table name               : *
Estimated records number : 500000
Page size                : 4096
Maxrows per page         : 255
Percentage of free space : 5
Number of free pages     : 0
Compression ratio        : 0

Table name          New  Tablespace  Definition  Avg record  Record    Estimated    Estimated
                                     member      length      per page  total pages  kilobytes
------------------  ---  ----------  ----------  ----------  --------  -----------  ---------
IMS_APPLICATION_H   N    DRLSIA01    DRLTIMSA           651         5       100002     400008
IMS_APPLICATION_W   N    DRLSIA02    DRLTIMSA           648         5       100002     400008
IMS_CHKPT_IOSAM_T   N    DRLSIS01    DRLTIMSS           169        22        22730      90920
IMS_CHKPT_POOLS_T   N    DRLSIS02    DRLTIMSS            99        39        12823      51292
IMS_CHKPT_REGION_T  N    DRLSIS03    DRLTIMSS           101        38        13160      52640
IMS_CHKPT_STATS_T   N    DRLSIS04    DRLTIMSS           518         7        71430     285720
IMS_CHKPT_VSAM_T    N    DRLSIS05    DRLTIMSS           194        19        26318     105272
IMS_SYSTEM_D        N    DRLSIY01    DRLTIMSY           642         6        83335     333340
IMS_SYSTEM_Q        N    DRLSIY02    DRLTIMSY           645         6        83335     333340
IMS_TRANSACTION_D   N    DRLSIT02    DRLTIMSR           646         5       100002     400008
IMS_TRANSACTION_H   N    DRLSIT01    DRLTIMSR           649         5       100002     400008
IMS_TRANSACTION_W   N    DRLSIT03    DRLTIMSR           646         5       100002     400008

Parameters for table space reporting


Table 7. Parameters for table space reporting

LIBRARY
    Value to set: the IBM Z Performance and Capacity Analytics definition library (UPPERCASE).
    The name of the partitioned data set that contains definitions of the product tables. This is a
    required parameter. It is used for component tables that do not yet exist.
DB2SUBSYS
    Value to set: the Db2 subsystem name (UPPERCASE).
    The Db2 subsystem where the product resides. This is a required parameter.
SYSPREFIX
    Value to set: the prefix for system tables (UPPERCASE).
    The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. This is
    a required parameter. The value of this parameter depends on your naming convention and is
    determined during installation.
COMPONENT
    Value to set: the component name (UPPERCASE).
    The name of an IBM Z Performance and Capacity Analytics component. This is a required
    parameter.
TABLENAME
    Value to set: the name of the table (UPPERCASE).
    The name of the IBM Z Performance and Capacity Analytics table. This is a required parameter.
    To specify all component tables, type an asterisk (*). To specify all component tables whose
    names start with a particular string, type the string. For example, type CICS_S for all component
    tables whose names start with this string.
RECNUMBER
    Value to set: the number of rows.
    The estimated number of rows. This is a required parameter and must be numeric.
PAGESIZE
    Value to set: the Db2 page size. Default: 4096 (4K).
    The Db2 page size. This is an optional parameter; when specified, it must be either 4K or 32K.
MAXROWS
    Value to set: the maximum number of rows per page. Default: 255.
    The maximum number of rows per page. This is an optional parameter; when specified, it must be
    a numeric value between 1 and 255.
PCTFREE
    Value to set: the percentage of free space on each page. Default: 5.
    The percentage of free space per page. This is an optional Db2 parameter; when specified, it must
    be a numeric value between 0 and 99.
FREEPAGE
    Value to set: the number of free space pages. Default: 0.
    The number of free space pages. This is an optional Db2 parameter; when specified, it must be a
    numeric value between 0 and 255.
COMPRESS
    Value to set: the compression ratio. Default: 0.
    The compression ratio, calculated as PERCSAVE/100 (PERCSAVE is the percentage of kilobytes
    saved by compression as reported by the Db2 utility DSN1COMP). This parameter is optional;
    when specified, it must be a numeric value from 0 to less than 1.

For detailed information about the parameters, refer to the Db2 for z/OS: SQL Reference.
For information about Db2, refer to the Db2 for z/OS: Administration Guide and Reference.
For information about the algorithm used for calculating table space requirements, refer to the Db2 for
OS/390 Installation Guide.

Considerations when running DRLJTBSR


The sample job DRLJTBSR invokes the DRLETBSR exec. Before you can use DRLETBSR, the IBM Z
Performance and Capacity Analytics system tables must have already been created or updated. If a
component is already installed, DRLETBSR obtains the average record size of each component table
directly from the product system tables.
The column NEW in the report shows the table status (N for a table already created, Y for a table that does
not exist). The DRLETBSR exec calculates the average record size for each component table.
If a component is not installed, the DRLETBSR exec reads each partitioned data set member that defines
each component table (see the LIBRARY parameter). Use this exec only for standard IBM Z Performance
and Capacity Analytics libraries. Using it for customized libraries can produce unpredictable results. For

variable length fields, the average record size is calculated using the maximum length. The average record
size does not include GRAPHIC, VARGRAPHIC and LONG VARGRAPHIC Db2 data-types. When you specify
the estimated number of records, remember that the product collects data from tables according to rules
specified in the update definitions. Tables containing the same data may therefore have different numbers
of rows. For example, an hourly table may contain a greater number of rows than a daily table.

Reorganizing the database


It is important to delete obsolete data from the tables, because this keeps the product database
compact and improves query performance. It is equally important to reorganize the table space after
data deletion to reclaim the freed space. You can use the Reorg/Discard utility to delete data and
reorganize the table space in one step.

Reorg/Discard utility
The Reorg/Discard utility enables you to delete the data included in the tables using the Purge condition
included in the DRLPURGECOND table. This table is provided in IBM Z Performance and Capacity
Analytics. At the same time, the Reorg/Discard utility automatically reorganizes the table space where
data has been deleted.
The records deleted by the Discard function are automatically saved in a specific data set, SYSPUNCH.
SYSPUNCH can be used at a later time to reload discarded data in the table, if required.
During the Discard step, the Reorg function reorganizes the table space to improve access performance
and reclaim fragmented space. Also, the keyword STATISTICS is automatically selected for the Reorg/
Discard, enabling you to collect online statistics during database reorganization.
See the Db2 for z/OS: Utility Guide and Reference for more information about the Reorg/Discard utility.
There are two ways to run the Reorg/Discard utility from the Administration window of IBM Z Performance
and Capacity Analytics:
From the Tables window, select option 12 from the Utilities pull-down menu.

Table Maintenance Utilities Edit View Other Help


------------------------------------------------------------------------------
| 12 1. Display... F11 | Row 1 to 21 of 129
| 2. Show size... |
Select one or more | 3. Import... | definition.
| 4. Export... |
/ Tables | 5. Grant... |
_ CICS_DICTIONARY | 6. Revoke... |
_ CICS_FIELD | 7. Document... |
_ DAY_OF_WEEK | 8. Recalculate... |
_ EXCEPTION_T | 9. Purge... |
_ IMS_APPLICATION | 10. Unload... |
_ IMS_APPLICATION | 11. Load... |
_ IMS_APPLICATION | 12. Reorg/Discard... |
_ IMS_CHKPT_IOSAM | 13. DB2HP Unload... |
_ IMS_CHKPT_POOLS |____________________________________|

Figure 65. Tables window - Option 12

In this way, the data contained in the table or tables selected from the table list is discarded, and a
space reorganization is automatically performed in the table space where the selected tables reside. The
Discard operation is only performed on the selected tables, while the Reorg operation is performed on all
the tables contained in the table space. You cannot run the Discard utility on views, or on tables that do
not have a discard condition specified in the DRLPURGECOND table.
As an alternative, use option 1 from the Maintenance pull-down menu of the Tables window to open the
Tablespace window, then select option 3 from the Utilities pull-down menu.


Tablespace  Utilities  Other  Help

            1. Run DB2 REORG utility...                Row 1 to 20 of 37
            2. Run DB2 RUNSTATS utility...
Select a ta 3. Run DB2 REORG/DISCARD utility...  ablespace definition.

                                  Quantity
/ Tablespace  Primary  Secondary  Storage grp  Type       Locksize
  DRLSAIX        6000       3000  SYSDEFLT     SEGMENTED  TABLE
  DRLSCI08        100         52  STOEPDM      SEGMENTED  TABLE
  DRLSCOM       20000      10000  SYSDEFLT     SEGMENTED  TABLE
  DRLSCP           60         32  SYSDEFLT     SEGMENTED  TABLE
  DRLSDB2       40000      20000  SYSDEFLT     SEGMENTED  TABLE
  DRLSDFSM      60000      30000  SYSDEFLT     SEGMENTED  TABLE
  DRLSDPAM        100         52  SYSDEFLT     SIMPLE     ANY

Figure 66. Tablespace list window

In this second scenario, from the Tablespace window, you select the table spaces for the Reorg operation.
The Discard operation is automatically run on all the tables contained in the selected table spaces,
according to the conditions specified in the DRLPURGECOND table.
All the tables that have the Discard operation specified in the DRLPURGECOND table are included in the
processing. All the tables that do not have the Discard operation specified in the DRLPURGECOND table
are ignored.
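To review which tables currently have a discard (purge) condition defined, you can query the
DRLPURGECOND table directly. The following SQL is a sketch only: the column names shown are
assumptions, so check the actual layout of the DRLPURGECOND table at your site before using it:

SELECT TABLE_NAME, PURGE_CONDITION
  FROM DRLSYS.DRLPURGECOND
 ORDER BY TABLE_NAME;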
When you run Reorg/Discard, whichever procedure you use, a JCL is created and saved in your library,
so that it can be used at a later time, if required. When the JCL is launched, the following data sets are
created:
SYSPUNCH
Used to reload the discarded data, if required, using the Load utility.
SYSDISC
Contains the records discarded by the utility.
In addition, a SYSREC data set is available. It contains all the records in the table, and you can specify
whether it is to be temporary or permanent. If you specify temporary, the data set is automatically
erased at the end of the reorganization job. If you specify permanent, it is permanently allocated on
your disk.
When using the Reorg/Discard utility, you can select one or more tables and table spaces at a time.
However, in the data sets SYSPUNCH and SYSDISC, data is overwritten, therefore each data set maintains
only the information contained in the last table you processed.
The following is an example of how the Reorg/Discard utility works on a table space that contains several
tables:

//REODIS JOB (ACCOUNT),'NAME'
//*
//***************************************************************
//* Run Db2 Utility
//*
//* WARNING (REORG/DISCARD):
//* If you want, you can specify the SORTKEYS option:
//* a subtask sorts the index keys. For this optional
//* operation you need enough space on your default
//* storage disk for the SORT operation.
//*
//***************************************************************
//DB2UTIL EXEC DSNUPROC,
// SYSTEM=DSN6,UID=MYUID
//*
//DSNUPROC.STEPLIB DD DISP=SHR,DSN='db2loadlibrary'
//DSNUPROC.SYSREC DD DSN=MYUID.DRLUNLD,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)
//DSNUPROC.SYSUT1 DD DSN=MYUID.DRLWORK,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)


//DSNUPROC.SORTOUT DD DSN=MYUID.DRLSROUT,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)
//DSNUPROC.WORK DD DSN=MYUID.WORK1,UNIT=SYSDA,
// SPACE=(4096,(1,1)),DISP=(MOD,DELETE,CATLG)
//DSNUPROC.SYSPUNCH DD DISP=(MOD,CATLG),
// DSN=MYUID.TAB.SYSPUNCH,
// SPACE=(4096,(1,1)),UNIT=SYSDA
//DSNUPROC.SYSDISC DD DISP=(MOD,CATLG),
// DSN=MYUID.TAB.DISCARDS,
// SPACE=(4096,(5040,504)),UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=410,BLKSIZE=27880)
//DSNUPROC.SYSIN DD *
REORG TABLESPACE MYDB.DRLSCOM LOG YES
STATISTICS INDEX(ALL) DISCARD
FROM TABLE MYDB.AVAILABILITY_D
WHEN (
DATE < CURRENT DATE - 90 DAYS
)
FROM TABLE MYDB.AVAILABILITY_T
WHEN (
DATE < CURRENT DATE - 14 DAYS
)
FROM TABLE MYDB.AVAILABILITY_M
WHEN (
DATE < CURRENT DATE - 104 DAYS
)
/*

In this example, the Reorg/Discard utility reorganizes the MYDB.DRLSCOM table space and discards
data from the MYDB.AVAILABILITY_D, MYDB.AVAILABILITY_M, and MYDB.AVAILABILITY_T tables. The
example shows that the DDNAME for the SYSPUNCH data set is SYSPUNCH, the DDNAME for the discard
results data set is SYSDISC, and the DDNAME for the sort output data set defaults to SORTOUT. The
SYSDISC and SYSPUNCH data sets are reused every time the utility is run for all tables.

Purge utility
As an alternative to the Reorg/Discard utility, you can delete data and reorganize table space using the
Purge utility.
Each data table in a component has a Purge condition that specifies which data is to be purged from that
table. When you use the Purge function, the data specified in the purge condition is deleted.
Purge the contents of your database at least weekly. The sample job, DRLJPURG (in the
DRL310.SDRLCNTL library), purges all product database tables with Purge conditions. Figure 67 on page
153 shows part of DRLJPURG.


//DRLJPURG JOB (ACCT#),'PURGE'
//***************************************************************
//* NAME: DRLJPURG *
//* *
//* FUNCTION: *
//* PURGE DATA FROM ALL IZPCA TABLES *
//* ACCORDING TO THE PURGE CONDITIONS DEFINED FOR THE TABLES. *
//* IF YOU WANT TO PURGE ONLY SOME TABLES, SPECIFY THE *
//* INCLUDE OR EXCLUDE OPTIONS. EXAMPLE: *
//* *
//* PURGE INCLUDE LIKE 'DRL.CICS%' *
//* *
//* NOTES: *
//* 1.CHECK Db2 SUBSYSTEM AND DATA SET NAMES. *
//* 2.Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* *
//***************************************************************
//PURGE EXEC PGM=DRLPLC,PARM=('SYSTEM=DSN SYSPREFIX=DRLSYS')
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=db2loadlibrary
//DRLIN DD *
PURGE;
//DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
/*

Figure 67. DRLJPURG job that uses all purge conditions

The Purge utility generates messages that show if the job ran as expected:

DRL0300I Purge started at 2000-05-24-15.12.30.
DRL0404I Table name | Deletes
DRL0405I ---------------------------|---------
DRL0406I DRL .RACF_RESOURCE_T | 12376
DRL0406I DRL .RACF_LOGON_T | 98
DRL0406I DRL .RACF_OPERATION_T | 457
DRL0406I DRL .RACF_COMMAND_T | 17
DRL0301I Purge ended at 2000-05-24-15.12.44.

After purging the database, use the Db2 REORG utility to free the purged space for future use. There are
three methods of reorganizing your database:
1. Use option 1, Run Db2 REORG utility, from the Utilities menu on the Tablespace list window, shown in
Figure 64 on page 146. This reorganizes a whole table space.
2. Use option 10, Unload, from the Utilities menu on the Tables window, after having selected one or
more tables. When you Unload and then Load a table, it reorganizes it without affecting the other
tables in the table space.
Figure 68 on page 154 shows the list of tables, with the Utilities pull-down.


Table Maintenance Utilities Edit View Other Help

10 1. Display... F11                Row 28 to 48 of 427
   2. Show size...
Select one or more 3. Import... definition.
4. Export...
/ Tables 5. Grant...
CICS_S_FILE_D 6. Revoke...
CICS_S_FILE_T 7. Document...
CICS_S_GLOBAL_D 8. Recalculate...
CICS_S_GLOBAL_T 9. Purge...
CICS_S_INTERCOM 10. Unload...
CICS_S_INTERCOM 11. Load...
CICS_S_JOURNAL_ 12. Reorg/Discard...
CICS_S_JOURNAL_T 13. DB2HP Unload...
s CICS_S_LSR_POOL_D DRL TABLE
CICS_S_LSR_POOL_T DRL TABLE
CICS_S_LSRP_FILE_D DRL TABLE
CICS_S_LSRP_FILE_T DRL TABLE
.
.
.

Figure 68. Tables window -Option 10


3. Use the sample job DRLJREOR (in the DRL310.SDRLCNTL library) to build your own job.
Refer to the description of the REORG utility in the Db2 for z/OS: Administration Guide and Reference for
more information.

Backing up the IBM Z Performance and Capacity Analytics database


Back up the IBM Z Performance and Capacity Analytics database regularly.

About this task


Ask your Db2 administrator to add your requirements to their site Db2 procedures for backing up the data.
If you cannot do this, copy and modify the sample job DRLJCOPY (in the DRL310.SDRLCNTL library), to
back up all product tables.
Determine:
• How often to back up the product database
• Whether to back up all data or just changed data
• The names of table spaces in the database
Figure 69 on page 155 shows job DRLJCOPY, used to back up the DRLSSYS1 and DRLSSYS2 table spaces.


//DRLJCOPY JOB (ACCT#),'IMAGE COPY'
//***************************************************************
//* NAME: DRLJCOPY *
//* *
//* FUNCTION: *
//* RUN THE DB2 IMAGE COPY UTILITY TO MAKE BACKUP COPIES *
//* OF IBM Z Performance and Capacity Analytics *
//* TABLE SPACES. THIS JOB COPIES *
//* TABLE SPACES DRLSSYS1 AND DRLSSYS2. YOU MUST ADD A COPY *
//* STATEMENT AND DATA SET FOR EACH TABLE SPACE THAT YOU *
//* WANT TO BACK UP. *
//* *
//* NOTES: *
//* CHECK THE FOLLOWING: *
//* LIB='db2loadlibrary' DB2 LOAD LIBRARY *
//* SYSTEM=DSN DB2 SUBSYSTEM NAME *
//* DSN=COPYDSN NAME OF BACKUP DATASET *
//* SPACE= SPACE REQUIRED *
//* COPY TABLESPACE DB.TS DATABASE.TABLESPACE NAME *
//* FULL YES/NO FULL OR INCREMENTAL COPY *
//* *
//***************************************************************
//UTIL EXEC DSNUPROC,LIB='db2loadlibrary',
// SYSTEM=DSN,UID='TEMP',UTPROC=''
//*
//COPY01 DD DSN=COPYDSN1,
// DISP=(MOD,CATLG,CATLG),
// SPACE=(16384,(50,50),,,ROUND),
// UNIT=SYSDA
//COPY02 DD DSN=COPYDSN2,
// DISP=(MOD,CATLG,CATLG),
// SPACE=(16384,(50,50),,,ROUND),
// UNIT=SYSDA
//SYSIN DD *
COPY TABLESPACE DRLDB.DRLSSYS1
COPYDDN COPY01
FULL YES
COPY TABLESPACE DRLDB.DRLSSYS2
COPYDDN COPY02
FULL YES
/*

Figure 69. DRLJCOPY job for backing up IBM Z Performance and Capacity Analytics table spaces

Determining when to back up the IBM Z Performance and Capacity Analytics database

About this task


Back up the database at least weekly to make it easier to recover from errors.

Determining a level of backup

About this task


Db2 provides two methods for backing up data: full-image copy (copies all data) and incremental-image
copy (copies only changed data). You can combine the two methods.
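For example, to take an incremental-image copy with the DRLJCOPY job, change FULL YES to FULL NO
in the SYSIN; this copies only the pages changed since the previous copy:

//SYSIN DD *
  COPY TABLESPACE DRLDB.DRLSSYS1
       COPYDDN COPY01
       FULL NO
/*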

Determining which table spaces to back up

About this task


The Db2 COPY utility operates on table spaces. Ensure that all table spaces are part of the backup
procedures. For more information about backing up a Db2 database, refer to the discussion of backing up
and recovering databases in the Db2 for z/OS: Administration Guide and Reference.


Recovering from database errors

About this task


These errors might occur in an IBM Z Performance and Capacity Analytics database that sees significant
activity:
• Out of space in one of the table spaces or index spaces
• Corrupted data in the database
The following sections contain descriptions of each condition, how it might occur, and how to correct it.
A description of how to restore Db2 database backups appears in “Correcting corrupted data in the IBM Z
Performance and Capacity Analytics database” on page 156.

Correcting out-of-space condition in an IBM Z Performance and Capacity Analytics table space or index
space

About this task


A table space or index space can be out of space if:
• Volumes in the IBM Z Performance and Capacity Analytics storage group are full.
If DASD is not constrained, the database continues to grow and performance can be an issue. If
performance is not an issue, ask the Db2 administrator to add volumes to the IBM Z Performance and
Capacity Analytics storage group.
If you cannot add more volumes to your storage group, purge the database before continuing.
After purging data, reorganize the affected table spaces. See “Purge utility” on page 152 for more
information.
• The table space or index space uses its maximum number of extents.
This scenario can occur if the primary quantity and all secondary quantity (PRIQTY and SECQTY) extents
have been exhausted. IBM Z Performance and Capacity Analytics table spaces and index spaces have
a default size specification based on an estimated number of rows in tables in the table space. These
default values may be too small for a very large site.
To recover from an out-of-space condition:

Procedure
1. Increase the primary and secondary quantities using the IBM Z Performance and Capacity Analytics
administration dialog (Figure 121 on page 230), or by using the Db2 SQL statements, ALTER
TABLESPACE or ALTER INDEX.
2. Reorganize the table space using the Db2 REORG utility as described in “Purge utility” on page 152 or
drop the index and recreate it as described in “Displaying and adding a table index” on page 218.

An out-of-space condition is typically reported with an SQLCODE -904 resource-unavailable error; for
example:

DSNT408I SQLCODE = -904, ERROR: UNSUCCESSFUL EXECUTION
         CAUSED BY AN UNAVAILABLE RESOURCE. REASON
         00D70025, TYPE OF RESOURCE 00000220 AND RESOURCE
         NAME DB2A.DSNDBC.DRLDB.A.I0001.A001
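As a sketch of step 1 using SQL directly (the table space and quantities shown are illustrative only;
choose values that fit your site):

ALTER TABLESPACE DRLDB.DRLSSYS1
  PRIQTY 48000
  SECQTY 24000;

The changed quantities generally take effect when the underlying data sets are next extended or
re-created, for example by the REORG described in step 2.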

Correcting corrupted data in the IBM Z Performance and Capacity Analytics database

About this task


Corrupted data can occur because of:
• Db2 errors
• Erroneously collecting the same log data set more than once


If the database has been incorrectly updated (for example, accidentally collecting the same log data
set twice or deleting required data), restore a previous backup copy with the Db2 RECOVER utility. For
information about backing up and recovering Db2 databases, refer to the Db2 for z/OS: Administration
Guide and Reference.
You need not restore product data after a collect job terminates from locking or out of space. After
correcting the error, run the job again. If the database has been updated, the collect resumes from the
last checkpoint recorded in the DRLSYS.DRLLOGDATASETS system table. If it had not committed data to
the database before the error, IBM Z Performance and Capacity Analytics recovers the data by collecting
from the first record in the log.

Monitoring the size of the IBM Z Performance and Capacity Analytics database
Monitor the size of the database regularly.

About this task


Use the Db2 RUNSTATS utility to generate current statistics in the Db2 catalog about any Db2 table space,
including those in the IBM Z Performance and Capacity Analytics database.
The sample job, DRLJRUNS (in the DRL310.SDRLCNTL library), calls the Db2 RUNSTATS utility. Figure 70
on page 157 shows the RUNSTATS statements used to generate statistics for table spaces DRLSSYS1 and
DRLSSYS2.

//DRLJRUNS JOB (ACCT#),'RUNSTATS'


//******************************************************************
//* Name: DRLJRUNS *
//* *
//* Function: *
//* Run the Db2 RUNSTATS utility to update the Db2 catalog *
//* information about Performance Reporter tables. *
//* This job only runs RUNSTATS for the table spaces *
//* DRLSSYS1 and DRLSSYS2. You must add a statement for *
//* each Performance Reporter table space. *
//* *
//* Notes: *
//* Check the following: *
//* LIB='db2loadlibrary' Db2 load library *
//* SYSTEM=DSN Db2 subsystem name *
//* *
//******************************************************************
//UTIL EXEC DSNUPROC,LIB='db2loadlibrary',
// SYSTEM=DSN,UID='TEMP',UTPROC=''
//*
//DSNUPROC.SYSIN DD *
RUNSTATS TABLESPACE DRLDB.DRLSSYS1 TABLE INDEX
RUNSTATS TABLESPACE DRLDB.DRLSSYS2 TABLE INDEX
/*

Figure 70. DRLJRUNS job for generating Db2 statistics

Learn more about the Db2 RUNSTATS utility from the description of its use in the Db2 for z/OS:
Administration Guide and Reference.
Start the RUNSTATS utility from the administration dialog by choosing it from the Utilities menu in the
Tables window. After using the RUNSTATS utility, use the administration dialog to see the number of bytes
used for data in the product database (described in “Showing the size of a table” on page 206).
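After RUNSTATS has run, you can also query the Db2 catalog directly for row and page counts. This
sketch uses standard Db2 catalog columns; adjust the DBNAME value if your database is not named
DRLDB:

SELECT NAME, CARDF, NPAGESF
  FROM SYSIBM.SYSTABLES
 WHERE DBNAME = 'DRLDB'
 ORDER BY NPAGESF DESC;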

Understanding how IBM Z Performance and Capacity Analytics uses Db2 locking and concurrency

Db2 provides locking and dynamic recovery for the databases it controls. The IBM Z Performance and
Capacity Analytics database is under Db2 control and uses these Db2 mechanisms.


More than one IBM Z Performance and Capacity Analytics user or function can request access to the data
at the same time. The way Db2 maintains data integrity during such times is by locking out data to all
processes but one.
Learn more about Db2 locking and how it allows more than one process to work with data concurrently
from the discussion of improving concurrency in the Guide to Reporting.
Deadlock or timeout conditions can occur when more than one user works with IBM Z Performance and
Capacity Analytics tables, which causes Db2 to generate messages; for example:

DSNT408I SQLCODE = -911, ERROR: THE CURRENT UNIT OF WORK HAS BEEN
ROLLED BACK DUE TO DEADLOCK OR TIMEOUT. REASON 00C90088,
TYPE OF RESOURCE 00000100, AND RESOURCE NAME DRLDB

Consider the following potential locking scenarios:


• If running more than one collect job at a time, ensure the jobs do not update the same tables.
Although concurrent collects might not update the same data tables, locking can occur for the
DRLSYS.DRLLOGDATASETS system table, updated by all collect runs.
• Generating reports while a collect job runs does not usually cause lockouts.
Report queries do not update table information; their access is read-only. However, QMF can hold locks
while you display large reports.
• You cannot collect while Db2 utilities such as COPY and REORG are running. Also, you cannot collect
and purge simultaneously.
COPY and REORG lock all tables in the table space on which they operate. Purge locks the table on
which it operates.
• Creating tables (or installing components) locks the entire database.
If some users create many tables, give them a private database. See “Installing multiple IBM Z
Performance and Capacity Analytics systems” on page 31 for more information.
To find out who is locking a resource, use the DB2 COMMANDS option in Db2 to issue this command:

-DISPLAY DATABASE(DRLDB) LOCKS LIMIT(100)

For more information, refer to the description of monitoring Db2 locking in the Db2 for z/OS:
Administration Guide and Reference.

Maintaining database security

About this task


You control user access to database tables. Although IBM Z Performance and Capacity Analytics grants
read access to the DRLUSER group ID for any components you install, you can grant or revoke authority
to tables in the IBM Z Performance and Capacity Analytics database. See “Administering user access to
tables” on page 236 for more information.
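For example, granting a user read access to a single product table uses standard Db2 SQL (the table and
user ID shown are illustrative):

GRANT SELECT ON DRL.AVAILABILITY_D TO USERID1;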

Monitoring database access

About this task


To see which end users access which database tables (for example, if you are considering removing
tables), use the Db2 trace facility for tracing table accesses. Analyze the trace outside Db2 with another
product. IBM Db2 Performance Monitor (DB2PM) can format, print, and interpret Db2 trace data.
Tracing involves a significant amount of overhead and is not something you should do regularly.
For information about Db2 trace facilities, refer to the description of using tools to monitor performance in
the Db2 for z/OS: Administration Guide and Reference.


For information about DB2PM, refer to the Db2 for z/OS: Administration Guide and Reference and to the
IBM Db2 Performance Monitor: User's Guide.

Using available tools to work with the IBM Z Performance and Capacity
Analytics database

About this task


IBM and other software suppliers provide a variety of database maintenance tools. Because you have
database administrator authority for the IBM Z Performance and Capacity Analytics database, you can use
tools such as DB2I, a part of Db2. With DB2I you can:
• Run SQL statements
• Issue authorized Db2 commands
• Run Db2 utilities
• Work with Db2 objects in your database
Select DB2I from the Other menu of any IBM Z Performance and Capacity Analytics primary window. You
can also type DB2I on the command line of a window.
Figure 71 on page 159 shows the DB2I Primary Option Menu.

DB2I PRIMARY OPTION MENU


COMMAND ===>

Select one of the following Db2 functions and press ENTER.

1 SPUFI (Process SQL statements)
2 DCLGEN (Generate SQL and source language declarations)
3 PROGRAM PREPARATION (Prepare a Db2 application program to run)
4 PRECOMPILE (Invoke Db2 precompiler)
5 BIND/REBIND/FREE (BIND, REBIND, or FREE plans or packages)
6 RUN (RUN an SQL program)
7 DB2 COMMANDS (Issue Db2 commands)
8 UTILITIES (Invoke Db2 utilities)
9 CATALOG VISIBILITY (Invoke catalog dialogs)
D DB2I DEFAULTS (Set global parameters)
X EXIT (Leave DB2I)

F13=HELP F14=SPLIT F15=END F16=RETURN F17=RFIND F18=RCHANGE
F19=UP F20=DOWN F21=SWAP F22=LEFT F23=RIGHT F24=RETRIEVE

Figure 71. DB2I Primary Option Menu

For more information about DB2I, refer to the description of utility jobs in the Db2 for z/OS: Administration
Guide and Reference.

Administering lookup and control tables


Periodically review the contents of IBM Z Performance and Capacity Analytics lookup and control tables.
See “Control tables and common tables” on page 274 for a description of the columns in lookup and
control tables that many product feature components use. Lookup tables used exclusively by an IBM Z
Performance and Capacity Analytics feature are described in the documentation for each feature.
Edit each lookup table and control table to implement standards and definitions at your site. “Working
with data in tables” on page 203 describes how to edit tables.
Lookup and control tables are particularly important for reporting availability of resources. Discuss
availability reporting with your users to determine necessary changes to these tables.


Administering reports

About this task


As an IBM Z Performance and Capacity Analytics administrator, you have authority to run all frequently
requested reports in batch mode and distribute them regularly. You can also create report groups that suit
your organization.

Running reports in batch

About this task


You can generate reports using the reporting dialog. For more information, refer to the Guide to Reporting.
However, for frequently requested reports, you should set up jobs that produce the reports regularly.
The steps to do this are as follows:

Procedure
1. Specify batch settings for the reports.
2. Define queries and forms suitable for batch reports.
3. Print reports or save them in data sets, using a batch job or the reporting dialog.
4. Optionally, save the reports for reporting dialog users and regularly replace the saved report data with
new data.
5. Optionally, include saved charts in BookMaster® documents.

Specifying batch settings


Use the Set batch option in the Batch pull-down in the reporting dialog to specify the batch settings for a
report. Batch settings include output options and other options.

Understanding output options for batch reports


There are two output options for batch reports:
• Print the report:
– If your installation uses QMF, tabular reports are printed to the DSQPRINT file. Otherwise they are
printed to the DRLPRINT file.
– Graphic reports are printed to the printer specified in the job (or to the default printer defined in the
QMF profile, if no printer is specified).
The printer name must be defined in the GDDM nicknames file, allocated to the ADMDEFS ddname.
Refer to the QMF: Planning and Administration Guide for MVS and the GDDM User's Guide for more
information about defining GDDM nicknames. (An illustrative nickname definition follows this list.)
If you do not use QMF, all reports are printed in tabular format. If you require graphic reports, you can
print a saved report with GDDM-PGF or other tools.
• Save the report in a data set:
– Tabular reports are saved in the data set defined by the DRLREP ddname, usually
DRL.LOCAL.REPORTS.
– Graphic reports are saved in the data set defined by the ADMGDF ddname, usually
DRL.LOCAL.CHARTS.
Saved reports serve different purposes:
– Use the reporting dialog to look at saved reports.
– Display the reports in other ways, such as from user-written applications.
– Include the reports in BookMaster documents.
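
As referenced in the list above, graphic reports are printed through a GDDM printer nickname. The following is an illustrative sketch only of a nickname definition in the ADMDEFS data set; the nickname LOCAL1, the family number, and the device name are hypothetical, and the exact ADMMNICK operands depend on your printer and GDDM level:

ADMMNICK NICKNAME=LOCAL1,TOFAM=2,TONAME=MYPRT

Refer to the GDDM User's Guide for the operands that apply to your configuration.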


Defining report queries and forms for batch execution


Although all IBM Z Performance and Capacity Analytics reports can be run in batch, most of them are not
suited for batch because you must supply values for all the variables in the queries and forms.
For example, a typical query looks like this:

SELECT column1, column2, ...


FROM table
WHERE DATE >= &FROM_DATE.
AND DATE <= &TO_DATE.
AND SYSTEM_ID = &SYSTEM_ID.

When displayed from the dialog, IBM Z Performance and Capacity Analytics prompts you for values for
FROM_DATE, TO_DATE, and SYSTEM_ID. To run the report in batch, you must supply the values in the job
and you must change them when you want the reports to cover a different period.
You can change the query to require no variables and always cover the last week:

SELECT SYSTEM_ID, column1, column2, ...


FROM table
WHERE DATE >= CURRENT DATE - 7 DAYS
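
If you also want to exclude data from the current, possibly incomplete, day, a variation of the same query can bound the range at both ends:

SELECT SYSTEM_ID, column1, column2, ...
  FROM table
  WHERE DATE >= CURRENT DATE - 7 DAYS
    AND DATE < CURRENT DATE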

Refer to the Guide to Reporting for a description of how to create a query.


If the form used contains variables other than the standard variables REPORT_TITLE, PRODUCT_NAME,
and REPORT_ID, you must make sure that these variables are set in the batch reporting job, or modify the
form. Refer to the Guide to Reporting for a description of how to create and modify forms.

Using job DRLJBATR to run reports in batch


The sample job DRLJBATR in the SDRLCNTL library produces all, or a subset, of the reports that have
batch settings specified.
“DRLJBATR job for printing or saving reports in batch” on page 161 shows DRLJBATR.
You need to change some parameters in DRLJBATR to your requirements. For a description of those
parameters, see Table 8 on page 164.
DRLJBATR job for printing or saving reports in batch

//DRLJBATR JOB (ACCT#),'REPORTS'


//* *
//* Function: *
//* Batch reporting sample job. *
//* *
//* This job is used to print and/or save all (or a selected *
//* subset of) the batch reports. *
//* *
//* Reports printed to : DSQPRINT with QMF (tables) *
//* DRLPRINT w/o QMF (tables) *
//* printer specified (charts) *
//* Reports saved in : DRLREP (tables) *
//* ADMGDF (charts) *
//* Messages written to: DRLOUT *
//* *
//* The exec DRLEBATR accepts the following parameters: *
//* *
//* SYSTEM=DB2_system Db2 subsystem name. The default is DSN *
//* SYSPREFIX=sysprefix Prefix for IZPCA system *
//* tables. The default is DRLSYS. *
//* PREFIX=prefix Prefix for all other tables. The default *
//* is DRL. *
//* SHOWSQL=YES/NO Show SQL statements (for debugging). *
//* YES or NO. The default is NO. *
//* CYCLE=run_cycle Run cycle: DAILY, WEEKLY or MONTHLY. *
//* If not specified, all reports are printed.*
//* GROUP=report_group Report group. If not specified, all *
//* reports are printed. *
//* REPORT=rpt1,rpt2.. Lists the reports to print. If not speci- *
//* fied, all reports are printed. *
//* PRINTER=prt_name Printer to be used for graphic reports. *
//* The default printer is defined in the QMF *
//* profile. *


//* DIALLANG=n Define the application language. *


//* n=1 for English (default) *
//* QMF=YES/NO Report generation with or *
//* w/o QMF. YES or NO. Default is YES. *
//* GDDM=YES/NO GDDM available for graphic *
//* reports. YES or NO. Default is YES. *
//* DRLMAX=nnnn Max number of result rows from *
//* a query w/o QMF. Default is 5000. *
//* PAGELEN=nn Page length used when printing *
//* tabular reports w/o QMF. Default is 60. *
//* PAGE=PAGE This word is used in the report *
//* footing for page numbering tabular *
//* reports w/o QMF. Default is PAGE *
//* TOTAL=TOTAL This word is used for an across *
//* summary column header in tabular *
//* reports w/o QMF. Default is TOTAL *
//* DECSEP=PERIOD PERIOD/COMMA. Decimal separator *
//* setting for tabular reports without QMF. *
//* DUALSAVE=xxx Allow graphic reports to be saved *
//* as tabular reports simultaneously. *
//* YES/NO (default=NO) *
//* &variable=value Give a value to a variable used in a *
//* query or a form. All variables used in *
//* queries or forms MUST be given a value. *
//* '' = all values for that variable *
//* '''''' means the null value. *
//* NB: for variables used with IN operator *
//* '(''x'') OR (1=1)' = all values *
//* PRODNAME=IBM Z Performance an *
//* This text is used in the report footing. *
//* The default is IZPCA *
//* Note: If specified, PRODNAME must be the *
//* last parameter. *
//* *
//* Notes: *
//* Before you submit the job, do the following: *
//* 1. Check that the data set names are correct. Update 'DRLvrm' *
//* to match your HLQ for IZPCA data sets. *
//* 2. Change the parameters to DRLEBATR as required. *
//* 3. Remove QMF DD-statements if you are not using QMF. *
//* Search on 'DSQ' to find such occurrences. *
//* The exception is DSQUCFRM, which should be changed *
//* to DRLUFORM. The data set name should point to the *
//* user defined forms library. *
//* 4. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* *
//********************************************************************
//REPORT EXEC PGM=IKJEFT01
//*
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=qmfloadlibrary
// DD DISP=SHR,DSN=db2loadlibrary
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
// DD DISP=SHR,DSN=qmfclistlibrary
//SYSEXEC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
// DD DISP=SHR,DSN=qmfexeclibrary
//*********************
//* MESSAGES
//*
//DRLOUT DD SYSOUT=*
//*********************
//* PRINT REPORTS TO EITHER DSQPRINT OR DRLPRINT
//*
//DSQPRINT DD SYSOUT=*,DCB=(RECFM=FBA,LRECL=133,BLKSIZE=1330)
//DRLPRINT DD SYSOUT=*,DCB=(RECFM=FBA,LRECL=133,BLKSIZE=1330)
//*********************
//* SAVE REPORTS IN
//*
//DRLREP DD DISP=SHR,DSN=DRL.LOCAL.REPORTS
//ADMGDF DD DISP=SHR,DSN=DRL.LOCAL.CHARTS
//*********************
//* GDDM LIBRARIES
//*
//ADMGGMAP DD DISP=SHR,DSN=ADMGGMAPlibrary
//ADMCFORM DD DISP=SHR,DSN=ADMCFORMlibrary
// DD DISP=SHR,DSN=DRLvrm.SDRLFENU
//ADMSYMBL DD DISP=SHR,DSN=SYS1.GDDMSYM
//ADMDEFS DD DISP=SHR,DSN=SYS1.GDDMNICK
//*ADMPRNTQ DD DISP=SHR,DSN=ADMPRINT.REQUEST.QUEUE
//DSQUCFRM DD DISP=SHR,DSN=DRLvrm.SDRLFENU


//*********************
//* QMF LIBRARIES
//*
//DSQDEBUG DD DUMMY
//DSQUDUMP DD DUMMY
//DSQPNL DD DISP=SHR,DSN=QMFDSQPNLxlibrary
//DSQSPILL DD DSN=&&SPILL,DISP=(NEW,DELETE),UNIT=SYSDA,
// SPACE=(CYL,(1,1),RLSE),DCB=(RECFM=F,LRECL=4096,BLKSIZE=4096)
//DSQEDIT DD DSN=&&EDIT,UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE),
// DCB=(RECFM=FBA,LRECL=79,BLKSIZE=4029)
//DRLFORM DD DSN=&&FORMDS,UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
// DCB=(RECFM=VB,LRECL=255,BLKSIZE=2600),DISP=(NEW,DELETE)
//**********************
//* START EXEC DRLEBATR
//*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLEBATR SYSTEM=DSN SYSPREFIX=DRLSYS PREFIX=DRL -
PRINTER=XXX -
REPORT=XXXXXXXX,YYYYYYYY -
&SYSTEM_ID='SYS1' -
&FROM_DATE='1993-01-01' -
&TO_DATE='1993-04-01' -
DIALLANG=1 -
PRODNAME=IBM Z Performance an
/*

Using the reporting dialog to run reports in batch


To create reports in batch from the reporting dialog:
1. From the IBM Z Performance and Capacity Analytics Administration window, select 5, Reports, and
press Enter to display the Reports window.
2. Without selecting any reports in the IBM Z Performance and Capacity Analytics Reports window, select
the Invoke batch option from the Batch pull-down. The Batch Reports Selection window is displayed.
3. Type required information, such as whether to run daily, weekly, or monthly reports, and press Enter. If
any of the reports contain variables, the Batch Reports Data Selection window is displayed.
4. Specify values to select the data to be reported, and press Enter to display the job.


5. Edit the job, specifying the parameters described in “Parameters for batch reporting” on page 164.
Then type SUBMIT on the command line, and press Enter.
IBM Z Performance and Capacity Analytics submits your job to run in background.
6. Press F3 to return to the Reports window.
Refer to the Guide to Reporting for more information about running reports in batch.

Parameters for batch reporting


Table 8. Parameters for batch reporting

SYSTEM
   Value to set: Db2 subsystem name (UPPERCASE). Default: DSN.
   The Db2 subsystem where IBM Z Performance and Capacity Analytics
   resides. This required parameter can be 4 alphanumeric characters.
   The first character must be alphabetic. The default value is DSN.
   If the value in this field is something other than DSN, it was
   changed during installation to name the correct Db2 subsystem.
   Do not change the value to name another Db2 subsystem to which you
   might have access. IBM Z Performance and Capacity Analytics must use
   the Db2 subsystem that contains its system, control, and data tables.

SYSPREFIX
   Value to set: Prefix for system tables (UPPERCASE). Default: DRLSYS.
   The prefix of all IBM Z Performance and Capacity Analytics system
   and control Db2 tables. The value of this field depends upon your
   naming conventions and is determined during installation. This
   required parameter can be 8 alphanumeric characters. The first
   character must be alphabetic. The default is DRLSYS. If the value is
   something other than DRLSYS, it was changed during installation.
   Do not change the value; IBM Z Performance and Capacity Analytics
   uses this value to access its system tables.

PREFIX
   Value to set: Prefix for all other tables (UPPERCASE). Default: DRL.
   The prefix of IBM Z Performance and Capacity Analytics data tables
   in the Db2 database. Valid values are determined at installation.
   This required parameter can be 8 alphanumeric characters. The first
   character must be alphabetic. The default is DRL. If the value is
   something other than DRL, it was changed during installation.

SHOWSQL
   Value to set: YES or NO (UPPERCASE). Default: NO.
   Specifies whether SQL statements should be shown (for debugging
   purposes).

CYCLE
   Value to set: DAILY, WEEKLY, or MONTHLY (UPPERCASE). Default: all
   reports.
   The run cycle for reports. If you do not specify daily, weekly, or
   monthly, all reports are printed.

GROUP
   Value to set: A report group ID (UPPERCASE). Default: all reports.
   The ID of a report group. If you do not specify a group, all reports
   are printed.

REPORT
   Value to set: One or more report IDs (UPPERCASE). Default: all
   reports.
   One or more reports to be printed. If you do not specify any
   reports, all reports are printed.

PRINTER
   Value to set: Default printer name (UPPERCASE). Default: as defined
   in the QMF profile.
   The GDDM nickname of a printer to use for printing graphic reports.
   The printer should be capable of printing GDDM-based graphics. The
   printer name must be defined in the GDDM nicknames file, allocated
   to the ADMDEFS ddname. Refer to the QMF: Reference and GDDM User's
   Guide for more information about defining GDDM nicknames. This
   parameter cannot be used if QMF=NO.

DIALLANG
   Value to set: 1 (English). Default: 1=English.
   Specifies the language to be used.

QMF
   Value to set: YES or NO (UPPERCASE). Default: YES.
   Specifies whether your installation uses QMF.

GDDM
   Value to set: YES or NO (UPPERCASE). Default: YES.
   Specifies whether your installation uses GDDM.

DRLMAX
   Value to set: nnnn. Default: 5000.
   If your installation does not use QMF, specifies the maximum number
   of result rows from a query.

PAGELEN
   Value to set: nn. Default: 60.
   If your installation does not use QMF, specifies the page length
   used when printing tabular reports.

PAGE
   Value to set: The word for page (mixed case). Default: PAGE.
   If your installation does not use QMF, the word you specify here is
   inserted before the page number for tabular reports. You can type
   the word in mixed case, for example, Page.

TOTAL
   Value to set: The word for total (mixed case). Default: TOTAL.
   If your installation does not use QMF, the word you specify here is
   used as the column heading for across-summary columns in tabular
   reports. You can type the word in mixed case, for example, Total.

DECSEP
   Value to set: PERIOD or COMMA. Default: PERIOD.
   If your installation does not use QMF, specifies the decimal
   separator to be used in tabular reports. If you use a comma as a
   decimal separator, a period is used as the thousands separator, if
   applicable.

DUALSAVE
   Value to set: YES or NO (UPPERCASE). Default: NO.
   Allows graphic reports to be saved as tabular reports
   simultaneously.

&variable
   Value to set: A value.
   Gives a value to a variable used in a query or form. All variables
   used in queries or forms must be given a value.

PRODNAME
   Value to set: Report footer text (mixed case). Default: IBM Z
   Performance and Capacity Analytics Report.
   This text is used in the report footer. If specified, PRODNAME must
   be the last parameter.

Saving reports for reporting dialog users


You can save report data from a reporting job like DRLJBATR. Creating reports for batch preprocessing
and then saving them for end users means:
• Users need not access the IBM Z Performance and Capacity Analytics database if they have access to
current reports instead.
• Users need not take the time to run reports.
• Users have the data they need to begin analysis immediately.


To preprocess reports for dialog users:


1. Define the batch report as described in “Specifying batch settings” on page 160.
2. Select the batch report and select 4, Save report data, from the Reports pull-down. The Saved Report
Definition window is displayed. Refer to the Guide to Reporting for information about defining saved
reports in the Saved Report Definition window.
3. After completing all fields in the Saved Report Definition window, press Enter.
The report is run and saved in the specified member.
4. Add the saved report to a report group, such as Monthly Management Reports, to let users display
relevant reports easily.
Refer to the Guide to Reporting for information about adding a report to a report group.
After you complete the steps above, you can run the batch report periodically (using the DRLJBATR job) to
replace the saved report member with up-to-date information.
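
For example, a minimal SYSTSIN sketch for such a periodic run (the report group ID MGMT is hypothetical, and the sketch assumes the report queries need no variables):

%DRLEBATR SYSTEM=DSN SYSPREFIX=DRLSYS PREFIX=DRL -
  CYCLE=WEEKLY -
  GROUP=MGMT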

QMF batch reporting


Batch reporting can also be performed with QMF only, without using IBM Z Performance and Capacity
Analytics functions. A QMF job can simply execute a QMF procedure that contains QMF commands (Figure
72 on page 166).

RUN QUERY1 (FORM=FORM1


PRINT REPORT
RUN QUERY2 (FORM=FORM2
PRINT REPORT (PRINTER=LOCAL1

Figure 72. Using QMF to report in batch

These books contain more information about using QMF in this way:
• QMF Advanced User's Guide
• QMF Reference

Creating report groups

About this task


IBM Z Performance and Capacity Analytics reports are grouped by component within each feature.
Placing more commonly requested reports in new report groups can make retrieving them easier.
Creating report groups for users with special requirements, such as managers, also makes reporting
more effective.
Refer to the Guide to Reporting for information about creating report groups.

Administering problem records

About this task


The update definitions of some IBM Z Performance and Capacity Analytics components update the
common table, EXCEPTION_T, with data about system exceptions that require attention. Review this
information and use the product interface for adding selected exceptions to the Tivoli Information
Management for z/OS database.
You can review exceptions only through the administration dialog. You can generate problem records with
either the dialog or a job.


Reviewing exceptions and generating problem records

About this task


To review exceptions and generate problem records:

Procedure
1. Select 2, Generate problem records, from the Utilities pull-down of the IBM Z Performance and
Capacity Analytics Administration window and press Enter.
The Exception Selection window is displayed.
2. Type 2, No, in the Problems only field to list all exception records.
Note: The default update definitions do not classify exceptions as problems. You can modify them to
set the problem flag (column PROBLEM_FLAG='Y' in the EXCEPTION_T table).
3. Type 1, Yes, in the Not generated only field to select exception records that have not yet been
generated as problem records in the Tivoli Information Management for z/OS database.
4. Select values for other required fields in the window.
Use the fields to restrict the number of exceptions in the list of exceptions.
Use F4 (Prompt) to see a selection list for any field in the Exception Selection window.
5. Press Enter to see the list of exceptions.
The Exception List window is displayed.
6. Select an exception and press Enter.
The Generate Record window is displayed, showing the exception record in detail.
7. If the exception record is one you want to add to the Tivoli Information Management for z/OS
database, press Enter.
IBM Z Performance and Capacity Analytics generates the problem record.

Generating problem records in batch

About this task


Although the sample job DRLJEXCE (in the hlq.SDRLCNTL library) does not let you review exception
records, it generates problem records in the Tivoli Information Management for z/OS database from
only those EXCEPTION_T records defined as problems.
Note: You must customize the product update definitions that add records to EXCEPTION_T to set the
problem flag column.
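
To preview which exceptions the job will pick up, you can query the exception table directly. A minimal sketch, assuming the default DRL prefix (these are the same selection criteria the job applies, as described in its prolog below):

SELECT *
  FROM DRL.EXCEPTION_T
  WHERE PROBLEM_FLAG = 'Y'
    AND DATE_GENERATED IS NULL;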
DRLJEXCE job for generating problem records

//DRLJEXCE JOB (ACCT#),'EXCEPTION REPORTING'


//*********************************************************************
//* *
//* FUNCTION: EXCEPTION REPORTING. *
//* PROBLEM RECORDS ARE GENERATED BY TIVOLI SERVICE DESK *
//* FOR ALL RECORDS IN THE EXCEPTION TABLE (EXCEPTION_T), *
//* WHERE *
//* A) THE PROBLEM_FLAG COLUMN INDICATES THAT THIS RECORD *
//* IS A PROBLEM RECORD (PROBLEM_FLAG='Y') *
//* B) AND THE DATE_GENERATED COLUMN INDICATES THAT THE *
//* TIVOLI SERVICE DESK DATABASE HAS NOT BEEN UPDATED *
//* WITH THIS RECORD (DATE_GENERATED IS NULL). *
//* *
//* INPUT PARAMETERS: *
//* SYSTEM=Db2-SUBSYSTEM Db2 SUBSYSTEM (DEFAULT=DSN) *
//* PREFIX=PREFIX TABLE PREFIX (DEFAULT=DRL) *
//* MODE=BATCH BATCH/ONLINE (DEFAULT=BATCH) *
//* APPLID=XXXXXXX APPLICATION ID (DEFAULT=SYSUID) *
//* SESSMBR=XXXXXXX SESSION MEMBER (DEFAULT=BLGSES00) *


//* PRIVCLASS=XXXXXXX PRIVILEGE CLASS (DEFAULT=MASTER) *


//* *
//* OUTPUT: - PROBLEM RECORD(S) CREATED IN TIVOLI SERVICE DESK. *
//* - TABLE EXCEPTION_T UPDATED WITH PROBLEM NUMBER *
//* AND DATE GENERATED. *
//* - RESULT FILE WRITTEN TO FILE DEFINED BY DRLOUT DD. *
//* *
//* NOTES: BEFORE YOU SUBMIT THIS JOB, DO THE FOLLOWING: *
//* 1. ENSURE THAT YOU (OR THE VALUE SPECIFIED BY APPLID) ARE *
//* REGISTERED AS A VALID APPLICATION ID IN TIVOLI SERVICE DESK.*
//* 2. CHECK THAT THE DATASET NAMES ARE CORRECT. *
//* 3. CHANGE THE PARAMETERS TO DRLEREGE AS REQUIRED. *
//* 4. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* *
//*********************************************************************
//*
//EPDMEXCE EXEC PGM=IKJEFT01,DYNAMNBR=25
//STEPLIB DD DISP=SHR,DSN=TSD.SBLMMOD1
// DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=db2loadlibrary
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
//*------------------------------------------------------------------*
//* TIVOLI SERVICE DESK LIBRARIES *
//*------------------------------------------------------------------*
//BLGSD DD DISP=SHR,DSN=TSD.SDDS
//BLGSI DD DISP=SHR,DSN=TSD.SDIDS
//BLGSL DD DISP=SHR,DSN=TSD.SDLDS
//BLGPNL0 DD DISP=SHR,DSN=TSD.IBMPNLS
//BLGPNL1 DD DISP=SHR,DSN=TSD.RPANEL1
//BLMFMT DD DISP=SHR,DSN=TSD.BLMFMT
//ISPLLIB DD DISP=SHR,DSN=TSD.SBLMMOD1
//*------------------------------------------------------------------*
//DRLOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLEREGE SYSTEM=DSN PREFIX=DRL MODE=BATCH
/*

Working with components


This section describes how to use the administration dialog to work with components. After reading this
section, you should be familiar with these tasks:
• “Installing a component” on page 169
• “Uninstalling a component” on page 176
• “Working with a component definition” on page 178
In IBM Z Performance and Capacity Analytics, a component refers to a logical group of objects used to
collect log data from a specific source, to update the product database using that data, and to create
reports from data in the database. Grouping objects into a component enables you to:
• Install or remove (uninstall) a set of related objects as a package
• View and work with a set of related objects
Each IBM Z Performance and Capacity Analytics component can include:
• Log collector definitions for:
– Log types
– Log procedures
– Record types in log data sets
– Record procedures
– Update definitions
• SQL statements that define these Db2 objects for the component:
– Table spaces


– Tables
– Lookup tables
– Views
– Triggers
– Procedures
• Report definitions for the component:
– Report groups
– Reports
Each IBM Z Performance and Capacity Analytics Key Performance Metrics (KPM) component also includes
table space profiles. Refer to the following section on working with table space profiles before installing
any of the Key Performance Metrics components.
Definition members in product libraries contain component object definitions. You can use the
administration dialog to examine statements in these definitions. For an explanation of the statements,
see the Language Guide and Reference.
You can use the administration dialog to work with components. From the Administration window (see
Figure 6 on page 7), select 2, Components, and press Enter.
The Components window is displayed.

Installing and uninstalling a component


The Components window lists the components available for IBM Z Performance and Capacity Analytics
installation on your system. When you install a component, IBM Z Performance and Capacity Analytics
executes definitions in the component to define all its objects. Then you can use the component to
collect, store, and create reports on log data that it supports.
If you no longer need a component, you can use the administration dialog to uninstall it. When you
uninstall a component, the product deletes from its system tables all objects in that component that are
not used by any other installed component. It also deletes all of the component's Db2 objects, including
tables and table spaces. The data sets that contain object definition statements are still available, so
you can reinstall the component if necessary. The component still appears in the list in the Components
window. “Uninstalling a component” on page 176 describes this procedure.

Installing a component

Procedure
1. Refer to these books to plan the tasks you must perform to complete the installation:
Feature
Book name
AS⁄400 Performance
IBM i System Performance Feature Guide and Reference
CICS Performance
CICS Performance Feature Guide and Reference
Distributed Systems Performance
Distributed Systems Performance Feature Guide and Reference
IMS Performance
IMS Performance Feature Guide and Reference
System Performance
System Performance Feature Guide
Resource Accounting
Resource Accounting for z/OS


2. If you want to review Db2 parameters before installing a component, select the component in the
Components window, and select Space, as shown in Figure 73 on page 170.

Component Space Other Help

1. Tablespaces omponents Row 1 to 21 of 49


2. Indexes
Select one en press Enter to Open component.

/ Components Status Date


Sample Component Installed 02-04-26
CICS Monitoring Component
s CICS Statistics Component
. CICS Transaction and UOW Analysis Component
.
.

Figure 73. Space pull-down

You can use this pull-down to review and change Db2 space parameters such as:
• Buffer pool
• Compression
• Erase on deletion
• Free space
• Lock size
• Number of partitions, for a partitioned space
• Number of subpages, for an index space
• Primary and secondary space
• Segment size
• Type of space
• VSAM data set password
These parameters can affect the performance of your system. If you are unsure how these parameters
affect your system, you are recommended to use the defaults provided with the product. If you are
unsure about the meaning of a field, press F1 to get help. You should also refer to the CREATE INDEX
and CREATE TABLESPACE command descriptions in the Db2 documentation.
IBM Z Performance and Capacity Analytics saves the changed definitions in your local definitions
library. When you save a changed definition, it tells you where it is saving it, and prompts you for a
confirmation before overwriting a member with the same name.
3. From the Components window, select the component to install and press F6 (Install).
If the component you selected contains subcomponents, the Component Parts window is displayed.
Either select the subcomponents to install or press F12 to install only those objects that are not in a
subcomponent. (IBM Z Performance and Capacity Analytics might install some common definitions for
the component even though you do not select any of the parts to install.)
The Installation Options window is displayed.


Component Space Other Help

Installation Options TO 30 OF 48

S Select one of the following. Then press Enter.

/ 1. Online Date
/ 2. Batch
– F1=Help F2=Split F6=Objects F9=Swap F12=Cancel

/ RACF Component
– Sample Component
– Storage Management Component
– VM Accounting Component
– VM Performance Component
******************************** BOTTOM OF DATA **********************************

Command ===>
F1=Help F2=Split F3=Exit F5=New F6=Install F7=Bkwd
F8=Fwd F9=Swap F10=Actions F12=Cancel

Figure 74. Installation Options window


4. From the Installation Options window, decide whether to install the component online or in batch
mode.
From the Installation Options window, you can press F6 (Objects) to see a list of objects in the
component. This gives you some idea of its size.
Batch installation leaves an audit trail of what it has done in its spooled output.
Installing a component locks write access to the database, whether you choose online or batch
installation. While batch installation occurs, you can use IBM Z Performance and Capacity Analytics to
do anything but update a table in the IBM Z Performance and Capacity Analytics database. You can
also use your terminal to perform any ISPF or TSO task.
5. Select 1 (online) or 2 (batch) and press Enter.
If installing the component online, see the next section, “Installing the component online” on page
171.
If installing the component in batch mode, see “Installing the component in batch mode” on page 173.

Installing the component online

About this task


IBM Z Performance and Capacity Analytics runs the SQL, log collector, and report definition statements to
create the objects in the component. The resulting messages are displayed in a browse window:

Procedure
1. If the return code is greater than 0, investigate the messages. For example, the following message
indicates a problem accessing the database. Db2 messages are described in Db2 for z/OS: Messages. If
you get this message, you must reinstall the component:

DSNT408I SQLCODE = -911, ERROR: THE CURRENT UNIT OF WORK HAS


BEEN ROLLED BACK DUE TO DEADLOCK OR TIMEOUT. REASON
00C9008E, TYPE OF RESOURCE 00000100, AND RESOURCE
NAME DRLDB

Correct any error conditions that the product discovers, and install the component again. If the return
code is 8 or lower, the status of the component is set to Installed.


If there are no Db2 messages, userid.DRLOUT can look like Figure 75 on page 172.

Db2 Messages
SQL statements executed successfully
--------------------------------------------------------------------------
Line  Log Collector Messages
--------------------------------------------------------------------------
  93  DRL0125I The record SMF_080 is defined.
  96  DRL0130I The comment is stored for the record SMF_080.
1007  DRL0201I The update RACFCOMMAND_80 is defined.
1014  DRL0403I The purge condition for DRL .RACF_COMMAND_T is added.
1138  DRL0201I The update RACFLOGON_80 is defined.
1145  DRL0403I The purge condition for DRL .RACF_LOGON_T is added.
1293  DRL0201I The update RACFOPERATION_80 is defined.
1300  DRL0403I The purge condition for DRL .RACF_OPERATION_T is added.
1466  DRL0201I The update RACFRESOURCE_80 is defined.
1473  DRL0403I The purge condition for DRL .RACF_RESOURCE_T is added.

Line  Report Definition Messages
--------------------------------------------------------------------------
1503  DRL3001I The group RACF is defined.
1511  DRL3001I The report RACF01 is defined.
1519  DRL3001I The report RACF02 is defined.
1527  DRL3001I The report RACF03 is defined.
1535  DRL3001I The report RACF04 is defined.
1543  DRL3001I The report RACF05 is defined.
1551  DRL3001I The report RACF06 is defined.
1559  DRL3001I The report RACF07 is defined.
--------------------------------------------------------------------------

Figure 75. Sample log collector messages


2. When you finish browsing the output data set, press F3 (Exit).
If the component has lookup tables, the Lookup Tables window is displayed (Figure 76 on page 173).


Component Space Other Help

Lookup Tables ROW 1 TO 3 OF 3

Select a lookup table. Then press Enter to Edit the table in ISPF Edit
mode.

/ Lookup table
– RACF_EVENT_CODE
– RACF_RES_OWNER
– RACF_USER_OWNER
**************************** BOTTOM OF DATA *****************************

Command ===>
F1=Help F2=Split F5=QMF add F6=QMF chg F7=Bkwd F8=Fwd
F9=Swap F12=Cancel

– VM Accounting Component
– VM Performance Component

Command ===>
F1=Help F2=Split F3=Exit F5=New F6=Install F7=Bkwd
F8=Fwd F9=Swap F10=Actions F12=Cancel

Figure 76. Lookup Tables window

Refer to the appropriate feature book (shown in “Installing a component” on page 169) for a
description of its component lookup tables and how you must edit them.
3. To edit a lookup table using ISPF edit, select a table, and press Enter.
IBM Z Performance and Capacity Analytics accesses the ISPF editor where you can edit the lookup
table as described in “Editing the contents of a table” on page 204.
If you have QMF installed, you can use the QMF table editor to edit tables wider than 255 characters.
If the table has more rows than the value you set for the SQLMAX value field in the Dialog Parameters
window, IBM Z Performance and Capacity Analytics prompts you to temporarily override the default
for this edit session. To edit a lookup table using the QMF table editor in add mode, press F5 (QMF
add). To edit a lookup table using the QMF table editor in change mode, press F6 (QMF chg). “Editing
the contents of a table” on page 204 also describes using QMF to edit tables.
4. After you make any necessary changes to a lookup table, press F3 (Exit) to save your changes.
IBM Z Performance and Capacity Analytics returns to the Lookup Tables window.
5. Edit any other lookup tables that the component requires.
When you finish, the installation is complete.
6. Press F12 (Cancel).
IBM Z Performance and Capacity Analytics returns to the Components window.
The product has changed the Status field for the component to read Installed.
7. Press F3 (Exit).
The product returns to the Administration window.

Installing the component in batch mode

About this task


IBM Z Performance and Capacity Analytics builds a batch job to run the SQL, log collector, and report
definition statements to create the objects in the component. It then initiates an ISPF edit session. You
may have to edit the JCL, for example, to change the job card. Figure 77 on page 174 shows a job in an
ISPF edit session.


EDIT ---- XLLOYDA.SPFTEMP2.CNTL -------------------------------COLUMNS 001 072

***************************** TOP OF DATA ******************************


//XLLOYDAA JOB (ACCOUNT),'NAME'
//*
//*
//*
//RUNLOG EXEC PGM=DRLPLC,
// PARM=('SYSTEM=DSN SYSPREFIX=DRLSYS &PREFIX=DRL',
// '&DATABASE=DRLDB &STOGROUP=DRLSG')
//STEPLIB DD DISP=SHR,DSN=DRLxxx.SDRLLOAD
// DD DISP=SHR,DSN=DSNxxx.SDSNLOAD
//DRLLOG DD DUMMY
//DRLOUT DD DSNAME=&TEMP,UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=6160),
// SPACE=(CYL,(10,2)),DISP=(NEW,PASS)
//DRLDUMP DD SYSOUT=*,DCB=BLKSIZE=6160
//DRLIN DD *
SQL SET CURRENT SQLID='DRL';
SET USERS='DRLUSER';
// DD DSN=DRLxxx.SDRLDEFS(DRLRS080),DISP=SHR
COMMAND ===> submit SCROLL ===> 0020
F1=HELP F2=SPLIT F3=END F4=RETURN F5=RFIND F6=RCHANGE
F7=UP F8=DOWN F9=SWAP F10=LEFT F11=RIGHT F12=RETRIEVE

Figure 77. Editing an installation job

After editing the job:

Procedure
1. Type SUBMIT on the command line and press Enter.
2. Press F3 after submitting the job.
IBM Z Performance and Capacity Analytics returns to the Components window. The Status field
shows Batch, which does not mean that the job completed or that it completed successfully. The
installation job changes the value to Installed at its successful completion.
3. When the job completes, use a tool such as the Spool Display and Search Facility (SDSF) to look at
the job spool.
4. Review messages for errors as described in step “1” on page 171.
5. Exit SDSF (or whatever tool you are using to review the job spool).
6. Exit the Components window.
7. Refer to the book for the appropriate feature for a description of the component lookup tables you
must edit.
8. Select 4, Tables, from the Administration window.
The Tables window is displayed.
9. Select 2, Some, from the View pull-down.
The Select Table window is displayed (Figure 78 on page 175).


Table Maintenance Utilities Edit View Other Help

Tables ROW 1 TO 13 OF 393

S Select Table

/ Type in the selection criteria. Then press Enter to display.




– Component RACF Component +
* *
Name
Prefix
Type 2 1. Table
2. Lookup
3. View

F1=Help F2=Split F4=Prompt F9=Swap F12=Cancel

Command ===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 78. Select Table window


10. Type the values as shown in Figure 78 on page 175, and press Enter.
The Tables window is displayed (Figure 79 on page 175), showing the component's lookup tables
only.

Table Maintenance Utilities Edit View Other Help

Tables ROW 1 TO 3 OF 3

Select one or more tables. Then press Enter to Open table definition.

/ Tables Prefix Type


– RACF_EVENT_CODE DRL TABLE
– RACF_RES_OWNER DRL TABLE
/ RACF_USER_OWNER DRL TABLE
**************************** BOTTOM OF DATA *****************************

Objects of type Tables meeting the selection criteria are listed.


Command ===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 79. Tables window - showing component's lookup tables


11. Select a table to edit, but do not press Enter.
12. Select an edit option from the Edit pull-down and press Enter.
If you have QMF installed, you can use the QMF table editor to edit tables wider than 255 characters.
See “Editing the contents of a table” on page 204.
13. Press F3 (Exit) when you finish selecting and editing lookup tables.
IBM Z Performance and Capacity Analytics returns to the Administration window.
When the installation of a component ends with an error or warning, and RC=4 or RC=8, this does not
necessarily indicate a problem. The following table shows when you can ignore these messages and
return codes.


Message: SQLCODE=-204 name IS AN UNDEFINED NAME
Return code: RC=8
Explanation: You can ignore this message and return code only if it is
caused by an SQL ALTER statement that attempts to add a column to a
table that has not yet been created.

Message: SQLCODE=+562 A GRANT OF A PRIVILEGE WAS IGNORED BECAUSE THE
GRANTEE ALREADY HAS THE PRIVILEGE FROM THE GRANTOR
Return code: RC=4
Explanation: You can always ignore this message and return code.

Message: SQLCODE=-601 THE NAME OF THE OBJECT TO BE CREATED IS IDENTICAL
TO THE EXISTING NAME name OF THE OBJECT TYPE objecttype
Return code: RC=8
Explanation: You can always ignore this message and return code.

Message: SQLCODE=-612 column name IS A DUPLICATE COLUMN NAME
Return code: RC=8
Explanation: You can always ignore this message and return code.

Testing the component to verify its proper installation

Procedure
1. Collect data from a log data set and review any messages, as described in “Using collect messages” on
page 141.
Note: Depending on the component you installed, you might not be able to collect its log data in an
online collect. Refer to “Collecting data from a log into Db2 tables” on page 186 for more information.
2. Display a table to ensure that it exists and that it contains the correct information as described in the
book for the appropriate feature:
Feature name
Book name
AS⁄400 Performance
IBM i System Performance Feature Guide and Reference
CICS Performance
CICS Performance Feature Guide and Reference
Distributed Systems Performance
Distributed Systems Performance Feature Guide and Reference
IMS Performance
IMS Performance Feature Guide and Reference
Network Performance
Network Performance Feature Reference
System Performance
System Performance Feature Reference
For Resource Accounting, see the Resource Accounting for z/OS book.
3. Display a report to ensure it is correctly installed.

Uninstalling a component

About this task


To uninstall a component:


Procedure
1. From the Components window, select the component you want to uninstall. From the Component
pull-down, select the Uninstall option.
If the component you selected contains subcomponents, the Component Parts window is displayed.
Either select the parts to uninstall or press F12 to cancel.
A confirmation window is displayed.
2. Press Enter to confirm the uninstallation.
IBM Z Performance and Capacity Analytics deletes from its system tables any component definitions
not used by other components. It also deletes all Db2 objects of the component or selected
subcomponents, including any tables and table spaces. The component remains in the list of
components, but with its Status field cleared. If the component contains subcomponents, they remain
in the list of subcomponents but with their Status field cleared.
Note: If a component (or subcomponent) including a common object is uninstalled, the common
object is not dropped, unless it is the only installed component (or subcomponent) that includes the
common object. When a component or subcomponent is uninstalled, all its data tables are dropped
and their contents lost.

Working with table space profiles


IBM Z Performance and Capacity Analytics provides functionality that allows you to easily create table
spaces, table partitioning, and indexes on tables through the use of table space profiles.
These profiles are defined in the DRLTKEYS member in your SDRLDEFS library. Two system tables named
GENERATE_PROFILES and GENERATE_KEYS are created and loaded from the DRLTKEYS definition file
during system table creation.
Default profiles are provided that set default values for all table spaces, tables, and indexes associated
with the profile. Parameters that can be customized in a profile include the partitioning method (partition
by growth, partition by range, or non-partitioned), primary, and secondary quantities, storage group
names, lock sizes, maximum number of partitions, and most other Db2 table space parameters. You can
use the same parameters for the creation of a set of table spaces, tables, and indexes by using a single
shared profile.
For more detailed information on the GENERATE_PROFILES and GENERATE_KEYS system tables, refer to
“System tables and views” on page 257

Reviewing table space profiles prior to installation


When you have created your system tables, review the parameter values in the GENERATE_PROFILES and
GENERATE_KEYS system tables using the IBM Z Performance and Capacity Analytics table edit facility.
Modify parameters such as PRIQTY and SECQTY accordingly. Note that the GENERATE_KEYS system
table only needs to be modified if you are using the partition by range partitioning method.
Refer to “System tables and views” on page 257 for information on each column within the
GENERATE_PROFILES and GENERATE_KEYS system tables.

Creating storage groups when partitioning by range


If you modify your table space profile to partition by range, you need to run the DRLJDBIP job. The
DRLJDBIP job creates additional storage groups that are used in the partitioned table spaces of the
components where partition by range was set. Be sure to create one storage group per partition that you
identified in the GENERATE_KEYS table.
To run DRLJDBIP:
1. Copy member DRLJDBIP in the DRL310.SDRLCNTL library to the &HLQ.CNTL library.
2. Modify the job statement to run your job.


3. Customize the job for your site. Follow the instructions in the job prolog.
4. Submit the job.
Note: A person with Db2 SYSADMIN authority (or someone who has access to the Db2 catalog) must
submit the job.
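
Each storage group that the job creates corresponds to an ordinary Db2 CREATE STOGROUP statement. The following is a minimal sketch with hypothetical storage group, volume, and catalog names; DRLJDBIP issues the actual statements for you when you customize and submit it:

CREATE STOGROUP DRLSG01
  VOLUMES (VOL001)
  VCAT DRLCAT;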

Reviewing the GENERATE statements for table spaces, tables, and indexes
Components can make use of table space profiling by using GENERATE statements when creating table
spaces, tables, and indexes.
Each GENERATE statement will refer to a profile name in the GENERATE_PROFILES and GENERATE_KEYS
system tables. Default profiles are provided for use by the supplied components. For example, the
following GENERATE statement refers to the SMF table space profile:

GENERATE TABLESPACE DRLSKD01


PROFILE 'SMF';

If you install components that use these profiles, no customizations are required in the GENERATE
statements which create the table spaces, tables, and indexes for the components.
If you want to use a different profile name, you will need to customize all the GENERATE statements
by copying the definition members into your LOCAL.DEFS data set and modifying the profile names
accordingly.
If you want to use the default profile names but with a different set of table space parameters, you will
need to update the GENERATE_PROFILES and GENERATE_KEYS system tables with your new table space
settings for the default profiles.
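
For example, a minimal SQL sketch that raises the space quantities of the supplied SMF profile. The PROFILE column name is an assumption for illustration only; see “System tables and views” on page 257 for the actual column layout:

UPDATE DRLSYS.GENERATE_PROFILES
  SET PRIQTY = 7200,
      SECQTY = 720
  WHERE PROFILE = 'SMF';   -- PROFILE is an assumed column name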
Refer to the IBM Z Performance and Capacity Analytics Language Guide and Reference for the syntax and
additional information on using the GENERATE statements.

Working with a component definition

About this task


This section describes these tasks:
• Controlling objects that you have modified
• Viewing objects in a component
• Viewing or editing an object definition
• Adding an object to a component
• Deleting an object from a component
• Excluding an object from a component installation
• Including an object in a component installation
• Deleting a component
• Creating a component

Controlling objects that you have modified

About this task


The variable VERSION, together with the VERSION column in the system tables, is used to:
• Ensure that unchanged IBM Z Performance and Capacity Analytics objects are not replaced when a
component is migrated
• Provide for the control of IBM Z Performance and Capacity Analytics objects that you have changed


The variable VERSION has the value IBM.nnnAPAR_number, where nnn is the version, release, and
modification level. For example, IBM.310 is an object supplied with IBM Z Performance and Capacity
Analytics version 3 release 1 modification level 0. The value of VERSION is set for all objects when the
object is installed (see “How IBM Z Performance and Capacity Analytics controls object replacement” on
page 126 for details).
Important:
If you change an object supplied by IBM Z Performance and Capacity Analytics, you must set the variable
VERSION to a custom version number as defined in “IBM Z Performance and Capacity Analytics Version
variable format” on page 126. During component installation, the product can then recognize an object as
having been modified by you. When you select the component you wish to install (from the Components
window) and press F6=Install, the User Modified Objects window is automatically displayed, listing the
supplied objects that you have later modified.

Viewing objects in a component

About this task


You can use the administration dialog to view a list of objects in a component. To view objects in a
component:

Procedure
1. From the Components window, select the component, and press Enter.
The Component window is displayed (Figure 80 on page 179) for the component. All IBM Z
Performance and Capacity Analytics objects in the component are listed.

SAMPLE Component ROW 1 TO 11 OF 12

Select an object. Then press Enter to Edit definition.

Description . . . . . Sample Component


Installation time . :
Installed by . . . . :

/ Object Name Object Type Member Part


_ DRLSSAMP TABSPACE DRLSSAMP
_ SAMPLE LOG DRLLSAMP
_ SAMPLE REPGROUP DRLOSAMP
_ SAMPLE_H TABLE DRLTSAMP
_ SAMPLE_H_M UPDATE DRLTSAMP
_ SAMPLE_M TABLE DRLTSAMP
_ SAMPLE_USER LOOKUP DRLTSAMP
_ SAMPLE_01 RECORD DRLRSAMP
_ SAMPLE_01_H UPDATE DRLTSAMP
_ SAMPLE01 REPORT DRLOSAMP
_ SAMPLE02 REPORT DRLOSAMP

Command ===> ____________________________________________________________


F1=Help F2=Split F3=Exit F4=Exclude F5=Add obj F7=Bkwd
F8=Fwd F9=Swap F10=View F11=Delete F12=Cancel

Figure 80. Component window


2. Press F10 to limit the list of objects displayed in the window.
The View Objects window is displayed.
3. Type selection criteria in fields in the View Objects window and press Enter.
IBM Z Performance and Capacity Analytics returns to the Component window and shows only those
objects that meet the criteria.
4. You can choose to edit objects, add objects, or delete objects. When you finish, press F3.
IBM Z Performance and Capacity Analytics returns to the Components window.


Viewing or editing an object definition

About this task


Before you modify any data set that contains IBM Z Performance and Capacity Analytics definitions,
copy the member to avoid changing the shipped version. Copy any member you plan to change from
the product definitions or reports library to your local definitions library, DRL.LOCAL.DEFS. (The default
names of the product definitions and reports libraries are DRL310.SDRLDEFS and DRL310.SDRLRENU.)
You can use the administration dialog to view and edit an object definition. To edit an object in a
component:

Procedure
1. From the Component window, select an object to work with, and press Enter.
IBM Z Performance and Capacity Analytics accesses the ISPF editor, where you can edit (or view) the
object definition.
2. When you finish editing the object definition, press F3 to exit the ISPF edit session.
IBM Z Performance and Capacity Analytics returns to the Component window.

Adding an object to a component

About this task


Components include object definitions necessary to collect log data, store it in the product database,
and generate reports. However, if you create customized objects, you can add the object definition to an
existing component.
Before using the administration dialog to add an object to a component, create the definition member that
defines the object. See “Overview of IBM Z Performance and Capacity Analytics objects” on page 125 for
more information about definition members.
To add an object to a component:

Procedure
1. From the Component window, press F5.
The Add Object window is displayed.
2. Type information about the new object, and press Enter.
You must use the same name in the Object name field as the one that appears in the definition
member for the object. For example, if there is a definition member, DRLLSAMP, that contains the log
collector language statement DEFINE LOG SAMPLE;, you must specify SAMPLE as the name of the
log definition object.
IBM Z Performance and Capacity Analytics saves the object specification (that includes the name of
the member that defines it) and returns to the Component window.
3. Repeat this procedure to add additional objects.

Deleting an object from a component

About this task


Components include object definitions necessary to collect log data, store it in the IBM Z Performance
and Capacity Analytics database, and generate reports. If you do not need to collect, store, or report on
certain types of data, you can delete object definitions for those data types.


Note: When you delete an object using the dialog, IBM Z Performance and Capacity Analytics deletes
references to the object from the component. It does not delete the definition member that contains log
collector language statements that define the object. You can add the object again at a later time.
To delete an object from a component:

Procedure
1. From the Component window, select the object to delete, and press F11.
A Confirmation window is displayed.
2. From the Confirmation window, press Enter to confirm the deletion.
IBM Z Performance and Capacity Analytics deletes from its system tables all references from the
component to the object and returns to the Component window.

Excluding an object from a component installation

About this task


The User Modified Objects window allows you to exclude product objects that you have modified from
the installation of the component.
Objects listed here were previously included by you in the component installation, even though they
contain your modifications to the IBM-supplied objects.
For an explanation of how the VERSION variable controls the exclusion of user-modified objects
from component installation, see “How IBM Z Performance and Capacity Analytics controls object
replacement” on page 126.
To exclude an object from a component installation:

Procedure
1. From the Components window, select the component. Then select the Show user objects option in the
Component pull-down.
2. From the User Modified Objects window, select the object to exclude, and press F4.
A Confirmation window is displayed.
3. From the Confirmation window, press Enter to confirm that the object should be excluded from the
installation.

Including an object in a component installation

About this task


After you have excluded an object from the installation of a component (see “Excluding an object from a
component installation” on page 181 for details), you have the option to re-include the object.
To include an object in a component installation:

Procedure
1. From the Components window, select the component. Then select the Show excluded option in the
Component pull-down.
2. From the Objects Excluded window, select the object to include, and press F4.
A Confirmation window is displayed.
3. From the Confirmation window, press Enter to confirm that the object should be included in the
installation.


Deleting a component

About this task


To remove all references to a component from IBM Z Performance and Capacity Analytics, you can use
the administration dialog to delete the component. Do not delete components shipped with the product
unless you are sure you are not going to use them.
To delete a component:

Procedure
1. Uninstall the component that you plan to delete. See “Uninstalling a component” on page 176 for
more information.
You must uninstall a component before deleting it. Uninstalling deletes all objects of the component.
2. From the Components window, select the component. Then select the Delete option in the Component
pull-down.
A confirmation window is displayed.
3. Press Enter to confirm the deletion.
IBM Z Performance and Capacity Analytics deletes from its system tables all references to the
component. The component no longer appears in the list of components in the Components window.
The feature definition member (see “Overview of IBM Z Performance and Capacity Analytics objects”
on page 125) still exists, however, and you can reinstall it at a later time. Before reinstalling deleted
components, you must update the system tables to refresh the list of components available for
installation.

Creating a component

About this task


If you have created a set of definitions (for example, for records or tables) using log collector language
or report definition language, you can package them as a component. Packaging definitions as a
component is also useful when you design a component for use at other sites; in that case, you must also
transfer the members that define the objects to the system at the other site.
You can define a component with SQL statements that directly update these system tables:
DRLCOMPONENTS, DRLCOMP_PARTS, and DRLCOMP_OBJECTS, described in “Dialog system tables” on
page 265. IBM Z Performance and Capacity Analytics features define entries in these tables as you create
or update the system tables, using SQL statements in definition members. For examples of component
definition members, see “Overview of IBM Z Performance and Capacity Analytics objects” on page 125.
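For illustration, the following SQL sketch registers a hypothetical component and one of its objects. The
column names used here are assumptions made for this example only; verify the actual table layouts in
“Dialog system tables” on page 265 before writing such statements.

   INSERT INTO DRLSYS.DRLCOMPONENTS          -- assumed column names
     (COMPONENT_NAME, DESCRIPTION)
     VALUES ('MYCOMP', 'Site-specific sample component');

   INSERT INTO DRLSYS.DRLCOMP_OBJECTS        -- assumed column names
     (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME)
     VALUES ('MYCOMP', 'RECORD', 'MYREC', 'DRLRMYRC');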
Note: As you create your component, remember that the product requires that some definitions exist
before you can install others. For example, if your component contains record procedures, you must
install the record definition that maps the source record for the record procedure before installing
the record procedure. Furthermore, you must install the record procedure before installing the record
definition that maps the output of the record procedure. To do this, put both definitions in the same
member.
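As an illustration of this note, a single definition member can keep the three related definitions together
so that they install in the required order. The statement forms below are schematic only (the record
procedure statement in particular is abbreviated); refer to the Language Guide and Reference for the
exact log collector language syntax.

   /* Hypothetical member DRLRMYPR                                   */
   DEFINE RECORD MY_SOURCE           /* source record for the procedure */
     IN LOG SAMPLE
     IDENTIFIED BY MYRTY = 10
     FIELDS (MYRTY BINARY 1, MYDATA CHAR 8);

   /* Record procedure definition (schematic form) */
   DEFINE RECORDPROC MYPROC FOR RECORD MY_SOURCE LANGUAGE ASM;

   DEFINE RECORD MY_OUTPUT           /* output record of the procedure  */
     IN LOG SAMPLE
     BUILT BY MYPROC
     FIELDS (OUTDATA CHAR 16);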
IBM Z Performance and Capacity Analytics installs component definitions in the following order:
1. Log
2. Record
3. Record procedure
4. Table space
5. Lookup table
6. Table


7. Update
8. View
9. Report group
10. Report
The order of installation within a definition type is determined by the sorting sequence of the definition
member names.
If you plan to use a component on the same IBM Z Performance and Capacity Analytics system on which
you are creating it, you can use the administration dialog to create the component.

Procedure
1. Optionally, you can select an existing component for IBM Z Performance and Capacity Analytics to use
as a template for the new component before performing the next step.
2. From the Components window, press F5.
The New Component window is displayed.
3. Type information about the new component in the fields.
4. Press F5 to add an object to the component.
The Add Object window is displayed. See “Adding an object to a component” on page 180 for more
information.
5. Select an object, and press Enter to edit its definition.
IBM Z Performance and Capacity Analytics accesses the ISPF editor, where you can edit the object
definition. See “Viewing or editing an object definition” on page 180 for more information.
6. To delete an object that currently exists (either it existed in the template or you decided not to use an
object you added), select the object, and press F11.
A Confirmation window is displayed for you to confirm the deletion. See “Deleting an object from a
component” on page 180 for more information.
7. When you finish adding, editing, or deleting objects, press F3.
IBM Z Performance and Capacity Analytics returns to the Components window and lists the new
component.

Working with log and record definitions


IBM Z Performance and Capacity Analytics uses log definitions to associate a series of processing
definitions with a certain type of log data set. An example is the SMF log definition that the product uses
to process SMF log data sets created by MVS. IBM Z Performance and Capacity Analytics associates log,
record, and update definitions with the SMF log and uses these definitions to collect the data, manipulate
it, and store it in appropriate tables.
This section describes how to use the administration dialog to work with log and record definitions. It
describes how to:
• Work with the contents of logs (page “Working with the contents of logs” on page 184):
– View a list of log data sets that IBM Z Performance and Capacity Analytics has collected (page
“Viewing a list of log data sets collected” on page 184)
– Collect data from a log into Db2 tables (page “Collecting data from a log into Db2 tables” on page
186)
– Display statistics of log data sets (page “Displaying log statistics” on page 187)
– Display the contents of a log data set (page “Displaying the contents of a log” on page 188)
– Generate a report on a record in a log data set (page “Creating a report on a record” on page 189)
• Work with log definitions (page “Working with log definitions” on page 191):


– View and modify a log definition and its header fields (page “Viewing and modifying a log definition”
on page 191)
– Create a log definition (page “Creating a log definition” on page 193)
– Delete a log definition (page “Deleting a log definition” on page 193)
• Work with record definitions (page “Working with record definitions in a log” on page 194):
– View and modify a record definition (page “Viewing and modifying a record definition” on page 194):
- Work with fields in a record definition (page “Working with fields in a record definition” on page
196)
- Work with sections in a record definition (page “Working with sections in a record definition” on
page 197)
– Create a record definition (page “Creating a record definition” on page 198)
– Display update definitions associated with a record (page “Displaying update definitions associated
with a record” on page 198)
– Delete a record definition (page “Deleting a record definition” on page 199)
– View and modify a record procedure definition (page “Viewing and modifying a record procedure
definition” on page 199)
– Create a record procedure definition (page “Creating a record procedure definition” on page 200)
– Delete a record procedure definition (page “Deleting a record procedure definition” on page 201)

Working with the contents of logs

About this task


To work with logs, first display a list of log definitions stored in the product system tables.

Procedure
1. From the IBM Z Performance and Capacity Analytics Administration window, select 3, Logs.
2. Press Enter.
IBM Z Performance and Capacity Analytics displays the Logs window.

Viewing a list of log data sets collected

About this task


The product Data Sets window shows you a list of data sets that have been collected. The window (Figure
81 on page 185) shows the name of each data set, when it was collected, and the status of the collect job.
The Status column reads OK if the collect job ran uninterrupted and without error. It shows Incomplete
if the job was interrupted before the entire log had been processed, for example, because of a locking or
out-of-space problem. Warning in the Status column means that the collect issued warning messages but
the job completed successfully.
You can display detailed collection statistics for each collected data set. This is the default action for the
window; you perform it by pressing Enter after selecting a data set.
You can also display the data in a log data set, record by record.
To view a list of collected log data sets:

Procedure
1. From the Logs window, select a log definition and press F6.


IBM Z Performance and Capacity Analytics displays the Data Sets window for the log type you selected
(see Figure 81 on page 185). You can then display collect statistics for each data set.

SMF Data Sets ROW 1 TO 15 OF 169

Select one data set. Then press Enter to view statistics.

/ Data Sets Time collected Status


_ SYST.SMFSYSA.D930131 2000-02-01-04.26.57 OK
_ SYST.SMFSYSA.D930130 2000-01-31-05.22.15 OK
_ SYST.SMFSYSB.D930129 2000-01-30-04.14.36 OK
_ SYST.SMFSYSA.D930129 2000-01-30-02.22.14 Incomplete
_ SYST.SMFSYSB.D930128 2000-01-29-02.59.20 OK
_ SYST.SMFSYSA.D930128 2000-01-29-01.38.50 OK
/ SYST.SMFSYSB.D930127 2000-01-28-08.30.02 Warning
_ SYST.SMFSYSA.D930127 2000-01-28-03.56.24 Warning
_ SYST.SMFSYSB.D930126 2000-01-27-03.23.27 OK
_ SYST.SMFSYSA.D930126 2000-01-27-03.26.17 OK
_ IVT.SMFCICS.TEST1 2000-01-26-14.23.23 OK
_ IVT.SMFCICS.DELTA 2000-01-26-10.42.26 OK
_ SYST.SMFSYSB.D930125 2000-01-26-04.18.48 OK
_ SYST.SMFSYSA.D930125 2000-01-26-02.56.26 OK
_ SYST.SMFSYSB.D930124 2000-01-26-04.18.48 OK

Command ===> _____________________________________________________________


F1=Help F2=Split F3=Exit F5=Display F7=Bkwd F8=Fwd
F9=Swap F11=Delete F12=Cancel

Figure 81. Data Sets window


2. From the Data Sets window, select a data set and press Enter.
The product displays the Collect Statistics window for the data set (Figure 82 on page 185).

SMF Collect Statistics

Press Enter to return.

Data set . . . . : SYST.SMFSYSB.D930127


Volume . . . . . : TSO007

Time collected . :2000-01-28-08.30.02 Collected by ... : STROMBK


Elapsed time . . : 462 Return code . . . : 4
Times collected. : 1 Completed . . . . : Y

First record . . : 00001E2900006EB60093104FD3C4C7F140404040


: 0003000500000000004400180001000000000000
First timestamp . : 2000-01-27-00.04.43
Last timestamp . : 2000-01-27-22.17.23

Records read . . : 187714 Records selected. : 17701

Database updates : 0 Inserts: 13610 Deletes :0

F1=Help F2=Split F9=Swap F12=Cancel

Figure 82. Collect Statistics window


3. Press Enter to return to the Data Sets window after you finish viewing statistics.

What to do next
To display the contents of a data set record by record, select the data set and press F5.
IBM Z Performance and Capacity Analytics displays the Record Selection window. Refer to “Displaying the
contents of a log” on page 188 for more information.


Deleting a log data set

About this task


To delete data set statistics from the product system tables:

Procedure
1. From the Data Sets window, select the data set and press F11.
IBM Z Performance and Capacity Analytics displays a confirmation window.
2. Press Enter to confirm the deletion.
The product deletes any references it has to the data set, which no longer appears in the list of
collected data sets.

Collecting data from a log into Db2 tables

About this task


IBM Z Performance and Capacity Analytics stores data it collects in Db2 tables in the product database,
following the instructions in update definitions associated with records in the log.
Usually, you use a batch job to collect log data. (See “Collecting log data” on page 135 for more
information about sample collect jobs.) However, you can use the administration dialog to perform online
collection, for example, to correct problems or to test new log, record, or update definitions.
Note: Some logs require special processing or contain collect statements that can be initiated only from
batch jobs. Such logs include those for DCOLLECT, VMACCT, SMF_VPD, and IMS.
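When the collect runs in batch, the generated JCL passes the log collector a COLLECT statement whose
options mirror the fields of the Collect window (Figure 83 on page 186). The sketch below is illustrative
only: the table name is a sample, and the clause spellings should be verified in the Language Guide and
Reference.

   COLLECT SMF                       /* log type to collect             */
     REPROCESS                       /* matches the Reprocess field     */
     COMMIT AFTER BUFFER FULL        /* matches the Commit after field  */
     BUFFER SIZE 10 M                /* matches Buffer size + Extension */
     INCLUDE DRL.MVS_SYSTEM_H;       /* sample table to update          */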
To collect data from a log into Db2 tables:

Procedure
1. From the Logs window, select a log and press F11.
The Collect window is displayed (see Figure 83 on page 186).

Log Utilities View Other Help

Collect

Type information. Then press Enter to edit the collect JCL.

Data set DRLxxx.SDRLDEFS(DRLSAMPL)


(reqd)
Volume (If not cataloged)
Unit (Required for batch if Volume defined)

Reprocess 2 1. Yes
2. No
Commit after 1 1. Buffer full
2. End of file
3. Specify number of records
Number of records
Buffer size 10
Extension 2 1. K
            2. M
Condition >

F1=Help F2=Split F4=Online F5=Include F6=Exclude


F9=Swap F10=Show fld F11=Save def F12=Cancel

F8=Fwd F9=Swap F4=Action F11=Collect F12=Cancel

Figure 83. Collect window


2. Type the name of the log data set in the Data set field.


Note: The log data sets used as input for the collect (DRLLOG DD statement) are expected to be sorted
in chronological order.
3. Optionally, specify other collect options in fields in the window.
Note: Entry fields followed by a greater than (>) sign respond to the F10 (Show fld) function key, which
displays all of the data in the field or lets you type more data in the Show Field window.
4. Press F5 to include only specific Db2 tables in the collect process.
The Include Tables window is displayed.
5. Select those tables to include in the collect process and press Enter.
You are returned to the Collect window.
You can exclude tables as well. You need exclude only tables that the product would normally update
during the collection.
6. Press F6 to exclude tables from the collect process.
The Exclude Tables window is displayed. Select tables to exclude from the collect process and press
Enter. You are returned to the Collect window.
7. Run the collect either in batch or online:
a) Press Enter to run the collect in batch mode.
IBM Z Performance and Capacity Analytics builds a JCL job stream for the collect job and accesses
the ISPF editor where you can edit and submit the JCL.
b) Press F4 to perform an online collection.
IBM Z Performance and Capacity Analytics starts the collect process online. When the collection is
complete, collect messages are displayed in an ISPF browse window.
8. Press F3 to return to the Logs window.

Displaying log statistics

About this task


You can create log statistics for any log data set, regardless of whether it has been collected. A log
statistics file shows the number of records of each type in a log data set. It also shows records built by log
and record procedures.
To view statistics for a log data set:

Procedure
1. From the Logs window, select a log definition.
2. Select 3, Show log statistics, from the Log pull-down.
You are prompted for the name of a log data set.
3. Type the name of the data set and press Enter.
The product displays statistics for the log (see Figure 84 on page 188).


DRLnnnnI Logstat started at 2000-12-04-10.04.15


DRL0302I Processing SMF.DATA.SET on VOL001
DRL0341I The first record timestamp is 2000-06-03-07.00.01.730000.
DRL0342I The last record timestamp is 2000-06-03-11.52.40.220000.
DRL0003I
DRL0315I Records read from the log or built by log procedure:
DRL0317I Record name | Number
DRL0318I -------------------|----------
DRL0319I SMF_000 | 0
DRL0319I SMF_006 | 6
DRL0319I SMF_007 | 0
DRL0319I SMF_021 | 0
DRL0319I SMF_025 | 0
DRL0319I SMF_026 | 476
DRL0319I SMF_030 | 3737
DRL0319I SMF_070 | 40
DRL0319I SMF_071 | 40
DRL0319I SMF_072_1 | 280
DRL0319I SMF_090 | 0
DRL0320I Unrecognized | 3
DRL0318I -------------------|----------
DRL0321I Total | 4582
DRL0003I
DRL0316I Records built by record procedures:
DRL0317I Record name | Number
DRL0318I -------------------|----------
DRL0319I SMF_030_X | 2012
DRL0319I SMF_070_X | 200
DRL0318I -------------------|----------
DRL0321I Total | 2212
DRLnnnnI Logstat ended at 2000-12-04-10.09.43

Figure 84. Sample log statistics output


4. When you finish viewing statistics, press F3.
The Logs window is displayed.

Displaying the contents of a log

About this task


IBM Z Performance and Capacity Analytics provides a facility for displaying the contents of a log, record
by record. The Record Data window describes each field in each record in the log data set you identify.
To view the contents of a log:

Procedure
1. From the Logs window, select the log.
2. From the Utilities pull-down, select 2, Display log, and press Enter.
Note: You can also display the contents of a log by selecting Display record from the Record Definition
window or by pressing F5 from the Data Sets window.
The Record Selection window is displayed.
3. Type the log data set name and, optionally, the name of a record type (to display only one record
definition), or a record sequence number (to start displaying records at that position in the log). Press
Enter.
The Record Data window is displayed.


Record Data ROW 1 TO 13 OF 222

Press Enter to view the next record.

Record name . : SMF_030 Record number : 3


Data set . . : LDG.SMFSYSA.W20

Field Name Type Length Offset Value


SMF30LEN BINARY 2 0 628
SMF30SEG BINARY 2 2 0
SMF30FLG BIT 1 4
SMF30RTY BINARY 1 5 30
SMF30TME TIME 4 6 07.00.03.830000
SMF30DTE DATE 4 10 2000-06-03
SMF30SID CHAR 4 14 MVS1
SMF30WID CHAR 4 18 JES2
SMF30STP BINARY 2 22 3
------------------ -------- ---- ---- --------------------------
TRIPLETS SECTION 88 24 1 (1)
------------------ -------- ---- ---- --------------------------
SMF30SOF BINARY 4 0 112

Command ===> ______________________________________________________________


F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 85. Record Data window


4. Press Enter to step through records in the log.
Each time you press Enter, IBM Z Performance and Capacity Analytics displays the next identified
record in the log.
5. When you finish viewing record data, press F12.
You are returned to the Logs window.

Creating a report on a record

About this task


To produce a report of the data in a record type without performing a collect operation, you can use the
IBM Z Performance and Capacity Analytics list function. For example, you may need very detailed data
from a record, or you may want to get information from a record one time, without creating product tables
for it. The list function creates a report of the data in a record either in QMF format or as a data set that
can be browsed.
To create a report of the data in a record:

Procedure
1. From the Logs window, select the log and press Enter.
The Record Definitions window for the log is displayed (see Figure 89 on page 194).
2. Select a record and press F11.
The List Record window for the record is displayed (see Figure 86 on page 190).


SMF_030 List Record ROW 61 TO 70 OF 176

Type information, select fields. Then press Enter to define option.

Condition >

/ Fields Type Sort Seq Condition


_ SMF30IOF BINARY
_ SMF30ION BINARY
_ SMF30ISB BINARY
_ SMF30IST TIME_001
_ SMF30IVA BINARY
_ SMF30IVU BINARY
/ SMF30JBN CHAR
_ SMF30JNM CHAR
_ SMF30JPT BINARY
_ SMF30JVA BINARY

Command===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap
F10=Show fld F12=Cancel

Figure 86. List Record window


3. From the List Record window, select fields to include in the report. Type information in the fields and
press Enter.
If your installation uses QMF, the Report Display Options window is displayed.
4. In the Report Display Options window, choose whether to display the report using QMF or as a data set
that can be browsed. Specify the name of the log data set from which IBM Z Performance and Capacity
Analytics is to produce the report, then press Enter.
If your installation does not use QMF, the report is displayed using ISPF browse. Specify the name of
the log data set from which IBM Z Performance and Capacity Analytics is to produce the report in the
Input Log Data Set Name window, then press Enter.
The report is displayed.

ISRBROBF STROMBK.DRLLST1 -------------------------- LINE 00000000 COL 001 080


********************************* TOP OF DATA ********************************
SMF30CPS SMF30CPT SMF30DTE SMF30JBN SMF30RST SMF30SIT
----------- ----------- ---------- -------- -------- --------
0 18 2000-06-03 LOGREFL1 07.00.00 07.00.01
1 19 2000-06-03 LOGREFL2 07.00.00 07.00.05
0 17 2000-06-03 LOGREES1 07.00.01 07.00.07
0 13 2000-06-03 LOGREES2 07.00.01 07.00.09
2 20 2000-06-03 LOGRSP4A 07.00.02 07.00.10
1 19 2000-06-03 LOGSP4B 07.00.02 07.00.22
0 16 2000-06-03 LOGRXAA 07.00.03 07.00.23
0 13 2000-06-03 LOGRXAB 07.00.03 07.00.26
4 73 2000-06-03 EID3D105 07.00.12 07.00.13
0 7 2000-06-03 EID3D105 07.00.12 07.01.21
9 79 2000-06-03 EID3D105 07.00.12 07.01.21
227 1108 2000-06-03 EID4 01.14.42 01.14.43
18 226 2000-06-03 EID4 07.12.42 07.12.44
1 12 2000-06-03 XGORANW 07.26.33 07.26.34
1 12 2000-06-03 XGORANW 07.26.50 07.26.51
7 215 2000-06-03 NORBACK 07.31.52 07.31.52
The list record action is executed successfully.
COMMAND ===> SCROLL ===> CSR
F1=Help F2= F3=End F4= F5=R Find F6=R Change
F7=Backward F8=Forward F9= F10=Left F11=Right F12=Cursor

Figure 87. Output from List record function


5. When you finish viewing the report, press F3 to exit QMF or the ISPF browse window.
You are returned to the List Record window.
6. From the List Record window, press F12 to return to the Record Definitions window.


7. From the Record Definitions window, repeat this procedure for more records or press F3 to return to
the Logs window.

Working with log definitions


All the logs that you plan to process must be defined to IBM Z Performance and Capacity Analytics. Log
definitions included with each component define the logs that the product uses to collect data.
A log definition can include these elements:
Header
Lists fields common to all records in the log.
Timestamp
Describes how to derive the timestamp of a record from fields in the header.
First record
Describes a condition that should be met for the first record in the log data set.
Last record
Describes a condition that should be met for the last record in the log data set.
Log procedure
Identifies a program that is invoked for each record read.
Log procedure parameters
Identifies the language of the log procedure and other information, such as information the log
procedure cannot retrieve from the record.
For more information about log procedures, refer to the Language Guide and Reference.
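Taken together, these elements correspond to clauses of the log collector DEFINE LOG statement. The
sketch below uses the SMF values shown in Figure 88 on page 192; the header field list is abbreviated and
the clause spellings are illustrative, so verify the exact syntax in the Language Guide and Reference.

   DEFINE LOG SMF
     HEADER (SMFLEN BINARY 2,        /* fields common to all records */
             SMFSEG BINARY 2,
             SMFFLG BIT    1,
             SMFRTY BINARY 1,
             SMFTME TIME   4,
             SMFDTE DATE   4)
     TIMESTAMP TIMESTAMP(SMFDTE, SMFTME)
     FIRST RECORD SMFRTY = 2
     LAST RECORD SMFRTY = 3;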

Viewing and modifying a log definition

About this task


You can use the administration dialog to view or modify log definitions. To view and modify a log
definition:

Procedure
1. From the Logs window, select the log and press F5.
The Log Definition window is displayed (see Figure 88 on page 192) for the log you specified.


Log Utilities View Other Help

SMF Log Definition

Type information. Press Enter to save and return.

Description MVS Systems Management Facility >

Log procedure
Log procedure parameter >
Log procedure language 1. ASM
2. C

Record timestamp TIMESTAMP(SMFDTE,SMFTME) >


First record condition SMFRTY= 2 >
Last record condition SMFRTY= 3 >

F1=Help F2=Split F5=Header F9=Swap F10=Show fld


F12=Cancel

Command===>
F1=Help F2=Split F3=Exit F5=Log def F6=Datasets F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Collect F12=Cancel

Figure 88. Log Definition window


2. Change the log definition.
3. Press F5 to display header fields for the log definition.
The Header Fields window is displayed for the log. See “Working with header fields” on page 192 for
more information.
4. When you finish modifying the log definition, press Enter.
The changes are saved and you are returned to the Logs window.

Working with header fields

About this task


To add header fields to a log definition:

Procedure
1. From the Header Fields window, press F5 to add a header field.
A blank Header Field Definition window is displayed.
2. Type the required information in the fields and press Enter.
The Header Field Definition window for the next field is displayed. IBM Z Performance and Capacity
Analytics carries forward values for the Type and Length fields from the previous field and increments
the Offset field by the length of the previous field.
3. Press F12 when you finish adding fields.
You are returned to the Header Fields window.
4. Press F3 to return to the Log Definition window.

Modifying header fields for a log definition

About this task


To modify header fields for a log definition:


Procedure
1. From the Header Fields window, select the header field and press Enter.
The Header Field Definition window for the header field you specified is displayed.
2. Type changes in the fields and press Enter.
You are returned to the Header Fields window.
3. Press F3 to return to the Log Definition window.

Deleting header fields for a log definition

About this task


To delete header fields for a log definition:

Procedure
1. To delete a header field, select the field and press F11.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The header field is deleted from the list and you are returned to the Header Fields window.
3. Press F3 to return to the Log Definition window.

Creating a log definition

About this task


To collect data from a log that is not defined by an IBM Z Performance and Capacity Analytics component,
you must create a log definition. You can use the administration dialog to create log definitions, or you
can use the log collector language. Refer to the Language Guide and Reference for more information about
creating log definitions with log collector language.
To create a log definition:

Procedure
1. To use an existing log definition as a template, select a log definition from the Logs window. Otherwise,
do not select a log definition before the next step.
2. Select 1, New, from the Log pull-down and press Enter.
The New Log Definition window is displayed.
3. Type information for the new log definition in the fields.
4. Press F5 to add header fields to the log definition.
The Header Fields window is displayed. See “Working with header fields” on page 192 for more
information on adding header fields.
5. After you add all the information, press Enter.
The new log definition is saved and you are returned to the Logs window.

Deleting a log definition

About this task


If you no longer need to collect data from a log, you can use the administration dialog to delete the log
definition. When you delete this log definition, you delete references to the log definition from IBM Z
Performance and Capacity Analytics system tables, but you do not delete the member that defines the log
type.
To delete a log definition:

Procedure
1. From the Logs window, select a log and then select the Delete option from the Log pull-down.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The log definition is deleted and you are returned to the Logs window.

Working with record definitions in a log

About this task


Each record in a log belongs to a record type that must be defined to IBM Z Performance and Capacity
Analytics to be collected. Otherwise, the product designates it as an unrecognized type of record and does
not process it. Record definitions are included with each predefined component.
To view a list of record definitions:

Procedure
1. From the product Administration window, select 3, Logs, and press Enter.
The Logs window is displayed.
2. From the Logs window, select the log that contains the record and press Enter.
The Record Definitions window for the log is displayed (see Figure 89 on page 194).

Record Utilities Other Help


--------------------------------------------------------------------------
SMF Record Definitions ROW 8 TO 20 OF 124

Select a record definition. Then press Enter to Open record definition.

/ Record Definitions Description


_ SMF_000 IPL
_ SMF_002 Dump header
_ SMF_003 Dump trailer
_ SMF_004 Step termination
_ SMF_005 Job termination
_ SMF_006 JES2/JES3/PSF/External writer
_ SMF_007 Data lost
_ SMF_008 I/O configuration
_ SMF_009 VARY device ONLINE
_ SMF_010 Allocation recovery
_ SMF_011 VARY device OFFLINE
_ SMF_014 INPUT or RDBACK data set activity
_ SMF_015 OUTPUT, UPDAT, INOUT, or OUTIN data set

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=Procs F6=Updates F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=List rec F12=Cancel

Figure 89. Record Definitions window

Viewing and modifying a record definition

About this task


Most of a record definition describes the layout of the record. Records are divided into fields and,
optionally, sections. A field is a named sequence of adjacent bytes. A section is a larger structure that

contains fields or other sections. For more information about defining records, sections, and fields, refer
to the Language Guide and Reference.
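Expressed in log collector language, the start of the SMF_030 layout shown in Figure 90 on page 195
corresponds to a DEFINE RECORD statement along these lines. Only the first few fields are shown and
the clause forms are schematic; take the exact syntax from the Language Guide and Reference.

   DEFINE RECORD SMF_030
     IN LOG SMF
     IDENTIFIED BY SMF30RTY = 30     /* the "Identified by" condition */
     FIELDS (SMF30LEN BINARY   2,    /* offset 0                      */
             SMF30SEG BINARY   2,    /* offset 2                      */
             SMF30FLG BIT      1,    /* offset 4                      */
             SMF30RTY BINARY   1,    /* offset 5                      */
             SMF30TME TIME_001 4,    /* offset 6                      */
             SMF30DTE DATE_001 4);   /* offset 10                     */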
You can use the administration dialog to modify record definitions. To view and modify a record definition:

Procedure
1. From the Record Definitions window, select the record definition and press Enter.
The Record Definition window for the record definition is displayed (see Figure 90 on page 195).

SMF_030 Record Definition ROW 1 TO 9 OF 188

Type information. Select a field or a section. Then press Enter to


display.

Log name . . . SMF


Identified by . SMF30RTY= 30 > (condition)
Built by . . . ________ + (program name)
Description . . Common address space work >

/ Field Type Length Offset Section


_ SMF30LEN BINARY 2 0
_ SMF30SEG BINARY 2 2
_ SMF30FLG BIT 1 4
_ SMF30RTY BINARY 1 5
_ SMF30TME TIME_001 4 6
_ SMF30DTE DATE_001 4 10
_ SMF30SID CHAR 4 14
_ SMF30WID CHAR 4 18
_ SMF30STP BINARY 2 22

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F4=Prompt F5=Add fld F6=Add sec
F7=Bkwd F8=Fwd F9=Swap F10=Show fld F11=Delete F12=Cancel

Figure 90. Record Definition window


2. Type any changes to the record definition.
Note: By changing the value in the Log name field, you can move the record to another log definition.
3. To modify the definition of a field, select the field and press Enter.
The Field Definition window is displayed. See “Working with fields in a record definition” on page 196
for more information.
4. To modify a section, select the section and press Enter.
The Section Definition window is displayed. See “Working with sections in a record definition” on
page 197 for more information.
5. Press F5 to add fields to the record definition.
The Field Definition window is displayed. See “Working with fields in a record definition” on page 196
for more information.
6. Press F6 to add sections to the record definition.
The Section Definition window is displayed. See “Working with sections in a record definition” on
page 197 for more information.
7. To delete a section or field from the record definition, select the section or field and press F11.
If the section or field definition already existed in the record definition, a confirmation window is
displayed. Otherwise, you are deleting something you just added. IBM Z Performance and Capacity
Analytics does not ask you to confirm this type of deletion and you can skip step 8.
8. Press Enter to confirm the deletion.
The section or field is deleted and you are returned to the Record Definition window.
9. Press F3 when you finish modifying the record definition.
Your changes are saved and you are returned to the Record Definitions window.


Note: If you have incorrectly modified the record definition, IBM Z Performance and Capacity Analytics
displays error messages in an ISPF browse window. Examine the messages and press F3 to return to
the Record Definition window where you can correct the errors.

Working with fields in a record definition

About this task


You can use the administration dialog to modify existing field definitions or to add field definitions. You
can also use log collector language statements. Refer to the Language Guide and Reference for more
information about defining fields in a record.
To add a field definition to a record definition:

Procedure
1. From the Record Definition window, press F5.
A blank Field Definition window is displayed.
2. Type the required information in the fields and press Enter.
Another Field Definition window is displayed (see Figure 91 on page 196).

SMF_030 Record Definition ROW 1 TO 9 OF 188

T Field Definition
d
Type information.Then press Enter to save.
L
I Field name (required)
B Type + (required)
D Length
Offset
/
_ Section name +
_ Description >
_
_
_
_ F1=Help F2=Split F4=Prompt F9=Swap F10=Show fld
F12=Cancel
/
_
_ SMF30STP BINARY 2 22

Command===>
F1=Help F2=Split F3=Exit F4=Prompt F5=Add fld F6=Add sec
F7=Bkwd F8=Fwd F9=Swap F10=Show fld F11=Delete F12=Cancel

Figure 91. Field Definition window


3. Press F12 when you finish adding fields.
You are returned to the Record Definition window.

Modifying a field definition

About this task


To modify a field definition:

Procedure
1. From the Record Definition window, select the field and press Enter.
The Field Definition window is displayed.
2. Type changes in the fields and press Enter.


Your changes are saved and you are returned to the Record Definition window.

Working with sections in a record definition

About this task


You can use the administration dialog to modify existing section definitions or to add section definitions.
You can also use log collector language statements. Refer to the Language Guide and Reference for more
information about defining sections and repeated sections.
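For orientation, the SUBSYSTEM section shown in Figure 92 on page 197 would appear inside a record
definition roughly as sketched below. The clause names simply mirror the dialog fields (Offset, Length,
Number), and the section field is invented; check the Language Guide and Reference for the exact syntax.

   SECTION SUBSYSTEM                 /* schematic section definition */
     OFFSET SMF30SDF                 /* the Offset field             */
     LENGTH SMF30SLN                 /* the Length field             */
     NUMBER SMF30SON                 /* the Number field             */
     FIELDS (SUBSYS_NAME CHAR 4)     /* hypothetical section field   */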
To modify a section definition:

Procedure
1. From the Record Definition window, select the section and press Enter.
The Section Definition window is displayed (see Figure 92 on page 197).

SMF_030 Record Definition ROW 42 to 50 OF 188

Section Definition

Type information. Press Enter to save and return.

Section name . . . . SUBSYSTEM (required)


In Section name . . . +

Offset . . . . . . . SMF30SDF >(required)


Length . . . . . . . SMF30SLN >
Number . . . . . . . SMF30SON >
Present if condition >

Repeated . . . . . . . 2 1. Yes
2. No

F1=Help F2=Split F4=Prompt F9=Swap F10=Show fld


F12=Cancel

Command ===>
F1=Help F2=Split F3=Exit F4=Prompt F5=Add fld F6=Add sec
F7=Bkwd F8=Fwd F9=Swap F10=Show fld F11=Delete F12=Cancel

Figure 92. Section Definition window


2. Type changes in the fields and press Enter.
Your changes are saved and you are returned to the Record Definition window.

Adding a section definition to a record definition

About this task


To add a section definition to a record definition:

Procedure
1. From the Record Definition window, press F6.
A blank Section Definition window is displayed.
2. Type the required information in the fields and press Enter.
Another Section Definition window is displayed.
3. Press F12 when you finish adding sections.
You are returned to the Record Definition window.


Creating a record definition

About this task


You can create record definitions by using:
• The administration dialog; or
• Log collector language statements.
For more information about defining records with the log collector language, refer to the Language Guide
and Reference.
To create a record definition:

Procedure
1. To use an existing record definition as a template, select a record definition from the Record
Definitions window. Otherwise, do not select a record definition.
2. From the Record Definitions window, select 1, New, from the Record pull-down.
The New Record Definition window is displayed.
3. Type information for the new record definition in fields of the window.
4. Press F5 to add fields to the record definition.
The Field Definition window is displayed. See “Working with fields in a record definition” on page 196
for more information.
5. Press F6 to add sections to the record definition.
The Section Definition window is displayed. See “Working with sections in a record definition” on page
197 for more information.
6. Press F3 when you finish adding fields and sections.
The new record definition is saved and you are returned to the Record Definitions window.

Displaying update definitions associated with a record

About this task


Update definitions contain instructions for summarizing log data into Db2 tables. The Record Definitions
window lets you view which update definitions IBM Z Performance and Capacity Analytics uses to process
data that a record definition maps.
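In log collector language, an update definition is written as a DEFINE UPDATE statement. The sketch
below is purely illustrative: the update name, input record, and field names are invented, the target is the
sample table DRL.SAMPLE_H, and the exact clause forms are documented in the Language Guide and
Reference.

   DEFINE UPDATE SAMPLE_H_UPD        /* hypothetical update name       */
     FROM SAMPLE_01                  /* input record type (invented)   */
     TO DRL.SAMPLE_H                 /* target Db2 table               */
     GROUP BY (DATE    = smp_date,   /* key columns from record fields */
               TIME    = smp_time,
               USER_ID = smp_user)
     SET (TRANSACTIONS = SUM(1));    /* accumulated column             */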
Each record is associated with one or more update definitions. To display update definitions associated
with a record:

Procedure
1. From the Record Definitions window, select the record with associated update definitions you plan to
view and press F6.
The Update Definitions window lists all the update definitions that use the selected record definition
as input. From this window, you can view, modify, or add update definitions. See “Displaying and
modifying update definitions of a table” on page 220 or “Creating an update definition” on page 235
for more information.
2. Press F3 when you finish viewing update definitions.
You are returned to the Record Definitions window.


Deleting a record definition

About this task


If you no longer require data from a certain record, you can use the administration dialog to delete the
record definition.
Note: IBM Z Performance and Capacity Analytics prevents you from deleting record definitions that affect,
or are affected by, other IBM Z Performance and Capacity Analytics objects. To delete a record definition,
remove links from it to other IBM Z Performance and Capacity Analytics objects.
To delete a record definition:

Procedure
1. From the Record Definitions window, select the record definition to delete. Then select 5, Delete, from
the Record pull-down.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
The record definition is deleted and you are returned to the Record Definitions window.

Viewing and modifying a record procedure definition

About this task


Record procedures are programs that can modify, split, combine, sort, delete, or perform any other
function on records during collection. Record procedures use existing records as input and produce other
records, which must be defined to IBM Z Performance and Capacity Analytics. Some product components
include record procedures and their definitions.
record procedures and their definitions.
Each record procedure definition defines record types that the procedure processes, identifies the
language of the procedure, and passes parameters to the procedure. For more information, refer to the
Language Guide and Reference.
You can use the administration dialog to modify record procedure definitions.
To view and modify a record procedure definition:

Procedure
1. From the Record Definitions window, select the record definition that is input to the record procedure
you plan to modify and press F5.
The Record Procedures window for the record definition is displayed. This window lists all record
procedure names that use the record as input.
2. From the Record Procedures window, select the record procedure whose definition you plan to modify
and press Enter.
The Record Procedure Definition window for the record procedure is displayed (see Figure 93 on page
200).


SMF_030 Record Procedures ROW 1 TO 1 OF 1

DRL2S030 Record Procedure Definition

Type in information. Then press Enter to save and return.

Description >

Language 1 1. ASM
2. C

Procedure parameter >

F1=Help F2=Split F5=Linkrec F9=Swap F10=Show fld


F12=Cancel

Command===>
F1=Help F2=Split F3=Exit F5=New F6=Delete F7=Bkwd
F8=Fwd F9=Swap F11=Save def F12=Cancel

Figure 93. Record Procedure Definition window


3. Type your changes in the fields.
4. Press F5 to link record definitions to the record procedure (to define them as input to the record
procedure).
The Record Definitions window is displayed.
5. From the Record Definitions window, select record definitions to link to the record procedure and press
Enter.
The record procedure is linked to the record definitions you selected and you are returned to the
Record Procedure Definition window.
6. When you finish modifying the record procedure definition, press Enter.
Your changes are saved and you are returned to the Record Procedures window.
7. Repeat this procedure for other record procedures or press F3 to return to the Record Definitions
window.

Creating a record procedure definition

About this task


If you need to add a record procedure, you must first write a program according to the instructions in the
Language Guide and Reference. You can then use the administration dialog to define the record procedure
to IBM Z Performance and Capacity Analytics.
To create a record procedure definition:

Procedure
1. From the Record Definitions window, select the record definition from which the new record procedure
derives its input and press F5.
The Record Procedures window for the record definition is displayed.
2. From the Record Procedures window, press F5.
The New Record Procedure Definition window is displayed.
3. Type information for the new record procedure in the fields.


4. Press F5 if you want to link the record procedure to additional record definitions that describe record
types on which the record procedure acts. The record procedure is automatically linked to the record
type selected in step 1 above.
The Record Definitions window is displayed.
5. From the Record Definitions window, select record definitions to link to the record procedure and press
Enter.
The record procedure is linked to the record definitions you selected and you are returned to the
Record Procedure Definition window.
6. When you finish entering information, press Enter.
The new record procedure is saved and you are returned to the Record Procedures window.
7. Repeat this procedure to add more record procedures or press F3 to return to the Record Definitions
window.

What to do next
In addition, you must define a record type as the record procedure's output. Do this in the Record
Definition window (Figure 90 on page 195). Type the record procedure name in the Built by field, to
identify a record type as one that is created by the record procedure.

Deleting a record procedure definition

About this task


If you no longer require a record procedure, you can use the administration dialog to delete the record
procedure definition.
Note: IBM Z Performance and Capacity Analytics prevents you from deleting record procedure definitions
that affect, or are affected by, other product objects. To delete a record procedure definition, remove links
from the record procedure to other product objects.
To delete a record procedure definition:

Procedure
1. From the Record Definitions window, select the record definition that is associated with the record
procedure to delete and press F5.
The Record Procedures window for the record definition is displayed.
2. From the Record Procedures window, select the record procedure to delete and press F6.
A confirmation window is displayed.
3. Press Enter to confirm the deletion.
You are returned to the Record Procedures window.
4. Repeat this procedure to delete more record procedures or press F3 to return to the Record Definitions
window.

Results
The record procedure is deleted.

Working with tables and update definitions


This section describes how to use the administration dialog to work with tables, update definitions, and
other table-related objects such as purge conditions, indexes, views, and table spaces. After reading this
section, you should be familiar with these tasks:


• “Working with data in tables” on page 203


– “Displaying the contents of a table” on page 203
– “Editing the contents of a table” on page 204
– “Showing the size of a table” on page 206
– “Recalculating the contents of a table” on page 207
– “Importing the contents of an IXF file to a table” on page 209. (This option is available only if your
installation uses QMF with IBM Z Performance and Capacity Analytics.)
– “Exporting table data to an IXF file” on page 210. (This option is available only if your installation
uses QMF with IBM Z Performance and Capacity Analytics.)
– “Purging a table” on page 210
– “Unloading and loading tables” on page 211
• “Working with tables and update definitions” on page 216
– “Opening a table to display columns” on page 216
– “Displaying and modifying update definitions of a table” on page 220
– “Displaying and editing the purge condition of a table” on page 225
– “Displaying and modifying a table or index space” on page 227
– “Displaying a view definition” on page 231
– “Printing a list of IBM Z Performance and Capacity Analytics tables” on page 231
– “Saving a table definition in a data set” on page 232
– “Listing a subset of tables in the Tables window” on page 232
– “Creating a table” on page 232
– “Deleting a table or view” on page 234
– “Creating a table space” on page 234
– “Creating an update definition” on page 235
– “Deleting an update definition” on page 236
– “Administering user access to tables” on page 236
When you use IBM Z Performance and Capacity Analytics to collect log data, the product stores the data
in Db2 tables in its database. To view a list of the tables that are used to store collected data, from the
Administration window, select 4, Tables. The Tables window is displayed. The list in this window includes
all the product data tables, lookup tables, and control tables.


Table Maintenance Utilities Edit View Other Help


--------------------------------------------------------------------------
Tables ROW 1 TO 13 OF 212

Select one or more tables. Then press Enter to Open table definition.

/ Tables Prefix Type


_ AVAILABILITY_D DRL TABLE
_ AVAILABILITY_M DRL TABLE
_ AVAILABILITY_PARM DRL TABLE
_ AVAILABILITY_T DRL TABLE
_ AVAILABILITY_W DRL TABLE
_ CICS_A_BASIC_H DRL TABLE
_ CICS_A_BASIC_W DRL TABLE
_ CICS_A_DBCTL_H DRL TABLE
_ CICS_A_DBCTL_USR_H DRL TABLE
_ CICS_A_DBCTL_USR_W DRL TABLE
_ CICS_A_DBCTL_W DRL TABLE
_ CICS_A_DLI_H DRL TABLE
_ CICS_A_DLI_USR_H DRL TABLE

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 94. Tables window

The name of each table is shown in the Tables column.


The prefix of each table is shown in the Prefix column. Data tables and lookup tables have a prefix of DRL,
the default value of the Prefix for all other tables field in the Dialog Parameters window. Control tables
have a prefix of DRLSYS, the default value of the Prefix for system tables field in the Dialog Parameters
window.
The Type column shows whether an object is a Db2 table or a view.

Working with data in tables


This section describes these tasks:
• Displaying the contents of a table
• Editing the contents of a table
• Showing the size of a table
• Recalculating the contents of a table
• Importing the contents of an IXF file to a table (This option is available only if your installation uses QMF
with IBM Z Performance and Capacity Analytics.)
• Exporting table data to an IXF file (This option is available only if your installation uses QMF with IBM Z
Performance and Capacity Analytics.)
• Purging a table
• Unloading and loading a table

Displaying the contents of a table

About this task


You can use the administration dialog to display the contents of a table.
Note: If QMF is not used with IBM Z Performance and Capacity Analytics on your system, the following
applies:
• Tables are displayed with ISPF browse.
• The Add rows and Change rows options on the Edit pull-down are not selectable.


• If you display a very large table, data table, or system table, you might run out of REXX storage. If this
happens, there are some actions you can take that enable you to display the table, or the part of the
table you want to see.
– Increase the region size.
– If you need to see only the first part of the table, you can decrease the SQLMAX parameter on the
Dialog Parameters window.
– Use F4 (Run) on the SQL Query pop-up in the reporting dialog. Write an SQL SELECT statement that
restricts the retrieved table information to the columns and rows you are interested in, as in the
sketch that follows this list. This is a way to create and run a query without having to save it.
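For example, a query along the following lines (using column names from the sample table
DRL.SAMPLE_H shown in Figure 95 on page 204) limits the result to a few columns and a date range:

   -- Restrict both the columns and the rows that are retrieved
   SELECT DATE, TIME, USER_ID, TRANSACTIONS
     FROM DRL.SAMPLE_H
     WHERE DATE >= '2000-01-01'
     ORDER BY DATE, TIME;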
To display the contents of a table:

Procedure
1. From the Tables window, select the name of the table that you plan to display.
2. Press F11, or select 1, Display, from the Utilities pull-down.
The product displays the contents of the table in rows and columns.
Note: The table is not necessarily sorted in key sequence.

REPORT DRL.SAMPLE_H LINE 1 POS 1 79

SYSTEM DEPARTMENT USER RESPONSE


DATE TIME ID NAME ID TRANSACTIONS SECONDS
---------- -------- ------ ---------- -------- ------------ -----------
2000-01-01 13.00.00 SYS1 Sys Supp PIANKA 40 267
2000-01-01 15.00.00 SYS1 Appl Dev ADAMS 72 198
2000-01-02 08.00.00 SYS1 Appl Dev JONES 28 131
2000-01-02 11.00.00 SYS1 Retail PEREZ 21 171
2000-01-03 10.00.00 SYS1 Marketng KWAN 74 220
2000-01-03 11.00.00 SYS1 Manufact LEE 22 234
2000-01-03 11.00.00 SYS1 Manufact LUTZ 2 95
2000-01-04 07.00.00 SYS1 Finance HAAS 26 109
2000-01-04 07.00.00 SYS1 Sys Supp THOMPSON 84 64
2000-01-04 08.00.00 SYS1 Marketng KWAN 63 290
2000-01-04 08.00.00 SYS1 Finance GEYER 94 131
2000-01-04 08.00.00 SYS1 Finance GEYER 94 131
2000-01-04 09.00.00 SYS1 Marketng STERN 51 162
2000-01-04 09.00.00 SYS1 Manufact PULASKI 69 76
1=Help 2= 3=End 4=Print 5=Chart 6=Query
7=Backward 8=Forward 9=Form 10=Left 11=Right 12=
OK, DRL.SAMPLE_H is displayed.
COMMAND ===> SCROLL ===> PAGE

Figure 95. Using QMF to display an IBM Z Performance and Capacity Analytics table
3. Press F3 when you finish viewing the contents of the table.
You are returned to the Tables window.

Editing the contents of a table

About this task


You can use the administration dialog to edit the contents of a table, using either the QMF table editor (if
QMF is used with IBM Z Performance and Capacity Analytics) or the ISPF editor.
The QMF table editor can be used in two modes, add and change. For a complete description, refer to the
Query Management Facility: Learner's Guide.
To add rows to a table using the QMF table editor:


Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 1, Add rows, from the Edit pull-down.
The product calls the QMF table editor in add mode.
3. Enter values for columns, and press F2.
4. Press F3 when you finish adding rows.
QMF prompts you for confirmation.
5. Press Enter.
You are returned to the Tables window.

Using the QMF table editor to change or delete rows

About this task


To change or delete rows using the QMF table editor:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 2, Change rows, from the Edit pull-down.
IBM Z Performance and Capacity Analytics calls the QMF table editor in change mode.
3. To search for rows to change or delete, type values to search for, and press F2.
QMF displays the first row that matches the search criteria.
4. To change the row, type values for columns, and press F2.
5. To delete the row, press F11.
6. Press F3 when you finish changing or deleting rows.
QMF prompts you for confirmation.
Note: The ISPF edit function in the product administration dialog works according to ISPF rules. If no
value is entered or if the value is removed, the character-type fields are filled with blanks. The ISPF
Editor works the same way outside the dialog: that is, you can enter NULL values in Edit mode by
typing HEX on the command line and X'00' in the field.
7. Press Enter.
You are returned to the Tables window.

What to do next
If all columns in a table row can be displayed in 32 760 characters (if you are using ISPF version 4 or later,
otherwise 255 characters), you can use the ISPF editor to edit the table. If the table has more rows than
the value you set for the SQLMAX value field in the Dialog Parameters window, IBM Z Performance and
Capacity Analytics prompts you to temporarily override the default for this edit session.
IBM Z Performance and Capacity Analytics deletes all rows from the table and then reinserts them when
you use this function. Because of this, the ISPF editor is not recommended for large tables.

Using the ISPF editor to edit a table

About this task


To edit a table using the ISPF editor:


Procedure
1. From the Tables window (Figure 94 on page 203), select the table to edit.
2. Select 3, ISPF editor, from the Edit pull-down.
3. IBM Z Performance and Capacity Analytics copies table rows to a sequential file and accesses the ISPF
editor.

ISREDDE - STROMBK.DRLTAB ------------------------------------- COLUMNS 001 017


****** ***************************** TOP OF DATA ******************************
==MSG> Use Tab Key to position to the next column
====== USER_ID |DEPARTME
====== |NT_NAME
====== -----------------
000001 ADAMS Appl Dev
000002 GEYER Finance
000003 GOUNOT Retail
000004 HAAS Finance
000005 JONES Appl Dev
000006 KWAN Marketng
000007 LEE Manufact
000008 LUTZ Manufact
000009 MARINO Retail
000010 MEHTA Manufact
000011 PARKER Finance
000012 PEREZ Retail
000013 PIANKA Sys Supp
000014 PULASKI Manufact
000015 SMITH Appl Dev
COMMAND ===> SCROLL ===> CSR
F1=Help F2= F3=End F4= F5=R Find F6=R Change
F7=Backward F8=Forward F9= F10=Left F11=Right F12=Cursor

Figure 96. Editing a table in ISPF


4. Make any modifications to the table rows. You can add, delete, and change rows.
5. To cancel the changes, type CANCEL on the command line, and press Enter.
You are returned to the Tables window without changing the table.
6. Press F3 when you finish editing the table.
The rows are reinserted into the Db2 table and you are returned to the Tables window.

Showing the size of a table

About this task


Monitor the size of tables periodically to ensure that they are not getting too large.
Use the Db2 RUNSTATS utility to get information about tables and store it in the Db2 catalog each time
you need current information about any Db2 database, including the IBM Z Performance and Capacity
Analytics database. As described in “Monitoring the size of the IBM Z Performance and Capacity Analytics
database” on page 157, IBM Z Performance and Capacity Analytics provides a sample job, DRLJRUNS, as
an example of how to run the RUNSTATS utility. You can also run the RUNSTATS utility using these steps:

Procedure
1. From the list of tables, select the Maintenance pull-down without selecting a table.
2. Select option 1, Tablespace.
3. From the list of table spaces, select one or more table spaces (or make no selection to process all the
table spaces) and select the Utilities pull-down, as shown in Figure 64 on page 146.
4. Select option 2, Run Db2 RUNSTATS.


What to do next
To learn more about the Db2 RUNSTATS utility, refer to the Db2 for z/OS: Administration Guide and
Reference.
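For orientation, the control statement that a job such as DRLJRUNS passes to the RUNSTATS utility has
this general form (the database and table space names are examples only):

RUNSTATS TABLESPACE DRLDB.DRLSSAMP
  TABLE(ALL) INDEX(ALL)

TABLE(ALL) INDEX(ALL) collects statistics for every table and index in the table space.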
Use the administration dialog to check the size of tables in the product database:
1. From the Tables window (Figure 94 on page 203), select tables to display their sizes.
Note: If you do not select any tables, IBM Z Performance and Capacity Analytics displays the size of all
tables in the product database.
2. Select 2, Show size, from the Utilities pull-down.
The Table Size window is displayed (Figure 97 on page 207).

Table Maintenance Utilities Edit View Other Help

Tables ROW 1 TO 13 OF 212

Table Size ROW 1 TO 10 OF 212

Press Enter to return

Name Prefix Rows Row length Kbytes


MVSPM_DEVICE_H DRL 80927 240 18967
MVSPM_DEVICE_AP_H DRL 34821 102 3468
MVSPM_CHANNEL_H DRL 9338 140 1276
MVSPM_APPL_H DRL 2388 491 1145
MVSPM_WORKLOAD_H DRL 2727 308 820
MVSPM_STORAGE_H DRL 2567 199 498
MVSPM_PAGE_DS_H DRL 966 229 216
MVSPM_XCF_PATH_H DRL 1296 171 216
MVSPM_SWAP_H DRL 1771 114 197
MVSPM_ENQUEUE_H DRL 1642 100 160

Command===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 97. Table Size window

Note:
a. You can use the SORT command (for example, SORT KBYTES DESC) to find the largest tables.
b. If the information shown in the Table Size window is incomplete, run the Db2 RUNSTATS utility and
restart this procedure.
3. After you finish viewing this window, press Enter.
You are returned to the Tables window.

Recalculating the contents of a table

About this task


Sometimes tables get filled with incorrect data during the collect process (for example, because of a bad
record in a log). For a single, independent table, you can correct these problems
using one of the options on the Edit pull-down. IBM Z Performance and Capacity Analytics provides a
recalculate function for the following special conditions:
• When tables are updated from other tables and corrections must be propagated to all dependent tables.
• When a key column is changed to a new value, and data already exists for the new key.
You can also use the recalculate function to populate a new table from another table, for example a
monthly table from a daily table.


You can use the administration dialog to recalculate the contents of tables. For more information about
the RECALCULATE log collector language statement, refer to the Language Guide and Reference.
To recalculate the contents of tables:

Procedure
1. From the Tables window (Figure 94 on page 203), select the source table (the table you plan to
modify).
2. Select 8, Recalculate, from the Utilities pull-down.
The Recalculate window is displayed (Figure 98 on page 208).

Table Maintenance Utilities Edit View Other Help


-
D Recalculate 12

S Select function. Then press Enter.

/ Source table : DRL.SAMPLE_H


S
_ Function 1 1. Update and recalculate
_ 2. Delete and recalculate
_ 3. Insert and recalculate
_ 4. Recalculate
_
_
_
_ F1=Help F2=Split F4=Target F9=Swap F11=Save def
_ F12=Cancel
_
_ VMACCT_SESSION_D DRL TABLE
_ VMACCT_SESSION_M DRL TABLE

Command===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 98. Recalculate window


3. Optionally, press F4 to specify target tables (the tables to which changes in the source table should
be propagated). If you do not specify target tables, changes are propagated to all affected tables.
The Target Tables window is displayed.
4. Select one or more target tables from the list and press Enter.
You are returned to the Recalculate window.
5. Select the desired function from the list and press Enter. Options 1, 2, and 3 are used to modify the
source table. Option 4 propagates selected source table rows without changing the source table.
If you did not choose to insert and recalculate (option 3), the Condition window is displayed (Figure 99
on page 209).


Table Utilities Edit View Other Help



D Recalculate 12

S S Condition

/ S Type a select condition. Then press Enter to save and return.


s
– F
– DATE = '2000-06-06' AND TIME = '13.00.00' AND
– USER_ID = 'ADAMS'





– F

– F1=Help F2=Split F9=Swap F12=Cancel
V
V
Command ===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 99. Condition window


6. Specify a condition to restrict rows affected in the source table and press Enter.
If you choose to update and recalculate (option 1) or insert and recalculate (option 3), the Column
Values window is displayed (Figure 100 on page 209).

Table Utilities Edit View Other Help



D Recalculate 12

S S Condition

/ S Column Values ROW 2 TO 8 OF 9


s
– F Type column values. Then press Enter.

– Column Value
– TIME
– SYSTEM_ID
– DEPARTMENT_NAME
– USER_ID
– TRANSACTIONS
– F RESPONSE_SECONDS
– CPU_SECONDS 2.0
– V
V Command ===>
F1=Help F2=Split F7=Bwd F8=Fwd F9=Swap F12=Cancel
Comma
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 100. Column Values window


7. Type column values in the fields, and press Enter.
The recalculate function is performed and you are returned to the Recalculate window.
8. Press F12 to return to the Tables window.

Importing the contents of an IXF file to a table

About this task


You might want to import data from another source into an IBM Z Performance and Capacity Analytics
table. If QMF is used with IBM Z Performance and Capacity Analytics, you can use the administration


dialog to import data in the Integration Exchange Format (IXF). Refer to the QMF Application Development
Guide for a description of the IXF format.
Note: When you import the file, IBM Z Performance and Capacity Analytics replaces the contents of the
table.
To import data into a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table.
2. Select 3, Import, from the Utilities pull-down.
The Import Data Set window is displayed.
3. Type the name of the data set that contains the data you want to import and press Enter.
The data is imported into the table and you are returned to the Tables window.

Exporting table data to an IXF file

About this task


You might want to export data from an IBM Z Performance and Capacity Analytics table to an IXF data set.
If QMF is used with the product, you can use the administration dialog to do this.
To export data from a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table.
2. Select 4, Export, from the Utilities pull-down.
The Export Data Set window is displayed.
3. Type the name of the data set to export data into, and press Enter.
The data is exported into the data set you specified and you are returned to the Tables window.

Purging a table

About this task


Each table in the product database is associated with a purge condition that determines how long the
data in the table is kept. See “Displaying and editing the purge condition of a table” on page 225 for a
description of how to define the purge condition for a table.
Purging the database is normally a batch process. See “Purge utility” on page 152 for a description of how
to run purge in batch.
You can also use the administration dialog to delete the data specified by the purge condition:

Procedure
1. From the Tables window (Figure 94 on page 203), select tables to purge.
Note: If you do not select any tables, IBM Z Performance and Capacity Analytics purges the contents
of all data tables with purge conditions.
2. Select 9, Purge, from the Utilities pull-down.
The Purge Confirmation window is displayed.
3. Press Enter to confirm the purge.


The purge conditions associated with the tables are run and the statistics on the number of rows
deleted from each table are displayed.

Unloading and loading tables

About this task


When you need to change a Db2 table, for example by adding a column, you can save the existing data by
using the Db2 Unload utility. After changing the table, you reload the saved data using the Load utility.
Using Unload and Load with no change reorganizes the data.
In addition, the ability to read and write the unload data set directly on tape simplifies recovery and
backup operations.
The Load utility is used to load data into a table of a table space. It enables you to load records into
the tables and builds or extends any indexes defined on them. If the table space already contains data,
you can either add the new data, or replace the existing data with the new data. Because the Load utility
operates at a table space level, to run it you must have the required authority for all the tables of the table
space. The data set used for the Load utility can be read from both disk and tape. The Unload utility is
used to unload data from a table to a sequential data set. To use the Unload utility, the definitions of the
table space and tables must be available on the system. The data set used for the unload operation can
be saved both on disk and tape.
Note: Load and Unload work only with tables, and cannot be used with views.
To unload the contents of a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the tables to unload, as shown in Figure 101
on page 211.

Utilities Edit View Other Help


+------------------------------------+
| 10 1. Display... |
| 2. Show size... |
| 3. Import... |
| 4. Export... |
| 5. Grant... |
| 6. Revoke... |
| 7. Document... |
| 8. Recalculate... |
| 9. Purge... |
| 10. Unload... |
| 11. Load... |
| 12. Reorg/Discard... |
| 13. DB2HP Unload... |
+------------------------------------+

Figure 101. Selecting tables to unload


2. Select option 10 Unload, from the Utilities pull-down menu.
The Unload Utility window opens, as shown in the following figure:


UNLOAD Utility

The UNLOAD utility will unload table data to a data set. Type the fully
qualified data set name, without quotes. Then press Enter to create the
JCL.

Type of UNLOAD . . . . . . . 1_ 1. Disk


2. Tape
Table . . . . . . . . . . : AVAILABILITY_D
UNLOAD data set name . . . . SAMPLE.DAT__________________________________

Type information in the following fields. In case of Tape UNLOAD, VOLSER is


the Tape Label. In case of Disk UNLOAD, type information only if the data
set is not available.
UNIT . . . . . . . . . . . . ________
VOLSER . . . . . . . . . . . ________

F1=Help F2=Split F9=Swap F12=Cancel

Figure 102. Unload Utility window


3. From the Unload Utility window, specify the unload type by inserting 1 for disk unload or 2 for tape
unload. The default is Disk Unload.
4. Specify the name of the table and data set you want to unload.
5. If you selected Disk Unload:
• If the data set already exists, leave the fields UNIT and VOLSER blank. If you need to create a new
data set, enter the required information in both fields.
If you selected Tape Unload:
• Specify the tape unit in the UNIT field, and the tape label in the VOLSER field.
6. When you are finished, press Enter.
A JCL is created and saved in your library so that it can be used later. When the JCL is launched, two
data sets are automatically created: one is used to reload the data (SYSPUNCH), and the other contains
the data unloaded by the utility.
Note: Be careful when using Load on a table space that contains multiple tables, because Load works
on a complete table space at a time (for information about replacing data with Load, refer to the Db2
for z/OS Utility Guide and Reference). This applies especially when tables are dropped and
recreated.
For this reason, when you apply PTFs involving tables that need to be dropped and recreated, you
should follow these steps:
a. Unload the tables, if you want to keep the previously collected data.
b. Use SMP/E to apply the PTF.
c. Execute the SQL DROP TABLE statement for those tables using either of the following:
• Db2 SPUFI
• Option 5, Process IBM Z Performance and Capacity Analytics statements, from the Other pull-
down on any primary window of the IBM Z Performance and Capacity Analytics administration
dialogs.
d. Execute the SQL CREATE TABLE statements for the same tables using either of the following methods:
• Reinstall the component.
• Select Option 5, Process IBM Z Performance and Capacity Analytics statements, from the
Other pull-down on any primary window of the IBM Z Performance and Capacity Analytics
administration dialogs. Execute the definition members of the local or the standard definition
library, depending on whether or not the definitions have been user-modified. Ignore the error
messages issued for the existing objects and make sure that the changed tables are correctly
created.


e. Load your previously unloaded data.

What to do next
To generate a job that reloads the data, from the Tables window, select option 11, Load. Then enter the
required information, as explained above.
The following example shows control statements for the Unload utility. Data is unloaded from the
AVAILABILITY_D table onto tape. The DDNAME for the SYSPUNCH data set is completed with the UNIT and
VOLSER information for the tape unit used. The data set name entered on the panel is used for SYSREC00.

//UNLOAD JOB (ACCOUNT),'NAME'


//*
//* THIS JCL HAS BEEN REWRITTEN IN ORDER
//* TO PROPERLY UNLOAD THE DATA FROM Db2 TABLES.
//* DSNTIAUL IS USED FOR UNLOAD INSTEAD OF DSNUPROC
//* UTILITY.
//* THEREFORE, PLEASE, NOTE THAT THIS IS ONLY
//* A SAMPLE THAT NEEDS TO BE PROPERLY CUSTOMIZED.
//* WARNINGS :
//* PLEASE CHECK PLAN NAME (NORMALLY DSNTIBVR),
//* V=Db2 VERSION, AND R=Db2 RELEASE;
//* TWO NEW DATASETS ARE DEFINED (SYSREC00 AND SYSPUNCH).
//* SYSPUNCH DATASET, IS CREATED AT UNLOAD STEP,
//* as USERID.SYSPUNCH (USERID.SYSPUNCH).
//* SYSREC00 DATASET IS SELECTED FROM the PREVIOUS PANEL.
//*
//* I M P O R T A N T :
//* CHECK THE DATA SET PARAMETER IF YOU HAVE CHOSEN
//* THE UNLOAD ON TAPE.
//*
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN6)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB71) -
PARMS('SQL') LIB('DSN710.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=USERID.DAT.UNLOAD,
// UNIT=TAPE_UNIT,
// SPACE=(4096,(5040,504)),
// DISP=(,PASS),
// LABEL=(1,SL),
// DCB=(RECFM=FB,LRECL=410,BLKSIZE=27880),
// VOL=SER=TAPE_LABEL
//SYSPUNCH DD DSN=USERID.SYSPUNCH,
// UNIT=xxxx,
// VOL=SER=xxxxxx,
// SPACE=(4096,(5040,504)),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// DISP=(NEW,CATLG,CATLG)
//SYSIN DD *
SELECT * FROM USERDB.AVAILABILITY_D;

The following example shows control statements for the Load utility. Data is loaded from tape into the
AVAILABILITY_D table. The DDNAME for the SYSPUNCH data set is completed with the UNIT and VOLSER
information for the tape unit used. The data set name entered on the panel is used for SYSREC00.

//LOAD JOB (ACCOUNT),'NAME'


//*
//* THIS JCL HAS BEEN REWRITTEN IN ORDER
//* TO PROPERLY LOAD THE DATA FROM Db2 TABLES.
//* DSNTIAUL IS PREVIOUSLY USED FOR UNLOAD
//* INSTEAD OF DSNUPROC UTILITY.
//* THEREFORE, PLEASE, NOTE THAT THIS IS ONLY
//* A SAMPLE THAT NEEDS TO BE PROPERLY CUSTOMIZED.
//* WARNINGS :
//* PLEASE CHECK PLAN NAME (NORMALLY DSNTIBVR),
//* V=Db2 VERSION, AND R=Db2 RELEASE;
//* TWO NEW DATASETS ARE DEFINED (SYSREC00 AND SYSPUNCH).
//* as USERID.SYSPUNCH (USERID.SYSPUNCH).
//* SYSREC00 DATASET IS SELECTED FROM the PREVIOUS PANEL
//*
//*
//* I M P O R T A N T :

//* SYSPUNCH DATASET NEEDS TO BE EDITED FROM USER


//* BEFORE EXECUTING LOAD,
//* INSERTING "RESUME YES LOG YES" OPTIONS,
//* IN ORDER TO CONTAIN COMMAND :
//* "LOAD DATA RESUME YES LOG YES INDDN
//* SYSREC00 INTO TABLE tablename"
//* CHECK THE DATA SET PARAMETER IF YOU HAVE CHOSEN
//* THE LOAD FROM TAPE.
//*
//LOAD EXEC DSNUPROC,PARM='DSN6,MYUID'
//DSNTRACE DD SYSOUT=*
//SORTLIB DD DSN=SYS1.SORTLIB,DISP=SHR
//SORTWK01 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTWK02 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTWK03 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTWK04 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SORTOUT DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSREC00 DD DSN=USERID.DAT.UNLOAD,
// UNIT=TAPE_UNIT,VOL=SER=TAPE_LABEL,
// LABEL=(1,SL),
// DISP=SHR
//SYSUT1 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
//SYSIN DD DSN=USERID.SYSPUNCH,DISP=SHR

Integration with Db2 High Performance Unload


Db2 High Performance Unload is a high-speed utility for unloading Db2 tables from either a table
space or an image copy. Tables are unloaded to one or more files based on a specified format. You can use
it to extract data for movement across enterprise systems or for in-place reorganization. Db2 HP Unload
can do the following:
• Rapidly unload table spaces
• Run parallel unloads accessing the same table space
• Unload against any image copy to eliminate interference with Db2 production databases
• Unload selected rows and columns
• Unload a maximum number of rows, unloading one row out of every n rows
• Generate load control statements for a subsequent reload.
The Db2 High Performance Unload can manage an UNLOAD command and an optional SELECT statement.
The syntax of the SELECT statement is compatible with the syntax of the Db2 SELECT statement. The
SELECT statement defines which table data is extracted onto a data set or tape (for example, if your
table contains a DATE field, you can extract all the data with a date later than 2002-01-01 by writing
the appropriate WHERE condition in the SELECT statement of the UNLOAD command).
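As a sketch, an UNLOAD command with a date-restricted SELECT statement might look like this (the
table space, table, and date are examples; compare the complete sample job later in this section):

UNLOAD TABLESPACE DRLDB.DRLSSAMP
  DB2 YES
  SELECT * FROM DRL.SAMPLE_H
    WHERE DATE > '2002-01-01'
  OUTDDN (SYSREC00)
  FORMAT DSNTIAUL

Only the rows that satisfy the WHERE condition are written to the SYSREC00 data set.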

Running Db2 High Performance Unload utility

About this task


To run the Db2 High Performance Unload utility, you must have Db2 High Performance Unload correctly
installed and configured on the system.
Note: The DB2HP Unload utility integration works in batch mode; it can run in interactive mode only if
you have Db2 Administration Tool, or Db2 Tools Launchpad, installed on your system. These products are
optional and not needed to run the DB2HP Unload utility.
To run the utility follow these steps:

Procedure
1. From the Tables window, select the table to unload, as shown in Figure 94 on page 203
2. From the Utilities pull-down menu, select option DB2HP Unload, as shown in Figure 101 on page 211.
Note: The Db2 High Performance Unload utility can only be run on tables. It cannot be run on views.


3. From the Db2 High Performance Unload Utility window, specify the unload type by typing 1 for disk
unload or 2 for tape unload. The default value is disk unload. Then, specify the name of the data set
that will be used to store the unloaded data, as shown in the following window.

DB2HP Unload Utility

The DB2HP Unload will unload table data to a data set. You can use the
utility only if the DB2HPU product is present on system. Type the fully
qualified data set name, without quotes. Then press Enter to create the
JCL.

Type of DB2HP Unload . . . . 1_ 1. Disk


2. Tape
Table . . . . . . . . . . : AVAILABILITY_D
Unload data set name . . . . SAMPLE.DAT__________________________________

Type information in the following fields. In case of Tape UNLOAD, VOLSER is


the Tape Label. In case of Disk UNLOAD, type information only if the data
set is not available.
UNIT . . . . . . . . . . . . ________
VOLSER . . . . . . . . . . . ________

F1=Help F2=Split F9=Swap F12=Cancel

Figure 103. Db2 High Performance Unload utility


4. If you selected Disk:
• If the data set already exists, leave the fields UNIT and VOLSER blank. If you need to create a new
data set, enter the required information in both the fields.
If you selected Tape:
• Specify the tape unit in the UNIT field, and the tape label in the VOLSER field.
5. When you are finished, press Enter.
A JCL is created and saved in your library so that it can be used later. When the JCL is launched, two
data sets are automatically created: one is used to reload the data (SYSPUNCH), and the other contains
the data unloaded by the utility.

Sample control statement for Db2 High Performance Unload utility


Data has been unloaded from the AVAILABILITY_D table; the DDNAME for the SYSPUNCH data set must be
completed with UNIT and VOLSER information. The data set name entered on the panel is used for SYSREC00.

//DB2HPU JOB (ACCOUNT),'NAME'


//*
//* THIS JCL HAS BEEN REWRITTEN IN ORDER
//* TO PROPERLY UNLOAD THE DATA FROM Db2 TABLES.
//* THE Db2 High Performance Unload (INZUTILB)
//* IS USED FOR UNLOAD DATA IN BATCH MODE.
//* THEREFORE, PLEASE, NOTE THAT THIS IS ONLY
//* A SAMPLE THAT NEEDS TO BE PROPERLY CUSTOMIZED.
//* WARNINGS :
//* V=Db2 VERSION, AND R=Db2 RELEASE;
//* TWO NEW DATASETS ARE DEFINED (SYSREC00 AND SYSPUNCH).
//* SYSPUNCH DATASET, IS CREATED AT UNLOAD STEP,
//* as USERID.SYSPUNCH (USERID.SYSPUNCH).
//* SYSREC00 DATASET IS SELECTED FROM the PREVIOUS PANEL.
//*
//* I M P O R T A N T :
//* CHECK THE DATA SET PARAMETER IF YOU HAVE CHOSEN
//* THE UNLOAD ON TAPE.
//*
//STEP1 EXEC PGM=INZUTILB,REGION=0M,DYNAMNBR=99,
// PARM='DSN6,DB2UNLOAD'
//STEPLIB DD DSN=DSN710.SINZLINK,DISP=SHR
//*
//SYSIN DD *
UNLOAD TABLESPACE PRM1DB.DRLSCOM
DB2 YES
QUIESCE YES QUIESCECAT YES
OPTIONS DATE DATE_A
SELECT * FROM PRM1.AVAILABILITY_D
OUTDDN (SYSREC00)
FORMAT DSNTIAUL
LOADDDN SYSPUNCH LOADOPT (RESUME NO REPLACE)
/*
//SYSPRINT DD SYSOUT=*
//*
//******* DDNAMES USED BY THE SELECT STATEMENTS ********
//*
//SYSREC00 DD DSN=SAMPLE.DAT,
// UNIT=3390,
// SPACE=(4096,(1,1)),
// DISP=(NEW,CATLG,CATLG),
// DCB=(RECFM=FB,LRECL=410,BLKSIZE=27880),
// VOL=SER=MYVOL
//SYSPUNCH DD DSN=USERID.SYSPUNCH,
// UNIT=xxxx,
// VOL=SER=xxxxxx,
// SPACE=(4096,(1,1)),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// DISP=(NEW,CATLG,CATLG)

Working with tables and update definitions


The rest of this chapter describes working with tables and update definitions.

Opening a table to display columns

About this task


You can use the administration dialog to view a table definition.
To open a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table definition you plan to view.
2. Press Enter.
The table definition is opened. Figure 104 on page 216 shows an example of an opened table
definition.

Table SAMPLE_H ROW 1 TO 9 OF 9

Select a column. Then press Enter to display the definition.

Database . . : DRLDB Tablespace . . : DRLSSAMP


Comments . . . This table provides hourly sample data. >
/ Column Type Length Nulls Primary Key
_ DATE DATE 4 No Yes
_ TIME TIME 3 No Yes
_ SYSTEM_ID CHAR 4 No Yes
_ DEPARTMENT_NAME CHAR 8 No Yes
_ USER_ID CHAR 8 No Yes
_ TRANSACTIONS INTEGER 4 Yes No
_ RESPONSE_SECONDS INTEGER 4 Yes No
_ CPU_SECONDS FLOAT 8 Yes No
_ PAGES_PRINTED INTEGER 4 Yes No
******************************* BOTTOM OF DATA ********************************

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=Add col F6=Indexes F7=Bkwd
F8=Fwd F9=Swap F10=Show fld F12=Cancel

Figure 104. Table window


3. Type changes to comments in the Comments field and press Enter.
Note: Press F10 to see the entire Comments field.


The changes to the comments are saved.

Displaying and modifying a column definition

About this task


To display and modify a column definition:

Procedure
1. From the Table window, select the column, and press Enter.
The Column Definition window for the column is displayed (Figure 105 on page 217).

Table SAMPLE_H ROW 1 TO 9 OF 9

S Column Definition

D Modify comments if required. Then press Enter to save and return.


C
/ Name.... : CPU_SECONDS
– Comments.. : Total CPU time, in seconds. This is the >
– Length... : 8
– Type.... : FLOAT
– Primary key: No
– Nulls... : Yes


s

*
F1=Help F2=Split F9=Swap F10=Show fld F12=Cancel

Command ===>
F1=Help F2=Split F3=Exit F5=Add col F6=Indexes F7=Bkwd
F8=Fwd F9=Swap F10=Show fld F12=Cancel

Figure 105. Column Definition window


2. Type changes to comments in the Comments field, and press Enter.
Note: Press F10 to see the entire Comments field.
The changes are saved and you are returned to the Tables window.

Adding a column to a table

About this task


You can add columns to a table, but you cannot delete columns.
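Behind the dialog, adding a column corresponds to an SQL ALTER TABLE statement. A minimal sketch,
using the sample table and a hypothetical new column:

ALTER TABLE DRL.SAMPLE_H
  ADD JOBS_SUBMITTED INTEGER;

When a column is added to an existing table, Db2 requires that it either allow nulls or be defined as
NOT NULL WITH DEFAULT, which is why these are the choices offered in the Add Column window.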
To add a column to a table:

Procedure
1. From the Table window, press F5.
The Add Column window is displayed (Figure 106 on page 218).


– Add Column
S
Type column information. Then press Enter to save and return.
D
C Name (required)
/ Comments >
– Length Primary key . 2 1. Yes
– 2. No
– Type 1. Char Nulls 1 . 1. Default
– 2. Varchar 2. NOT NULL
– 3. Smallint 3. NOT NULL WITH
– 4. Integer DEFAULT
– 5. Float
– 6. Decimal
– 7. Date
* 8. Time
9. Timestamp
10. Graphic
11. Vargraphic
12. Long varchar
C 13. Long vargraphic
F1=Help F2=Split F9=Swap F10=Show fld F12=Cancel

Figure 106. Add Column window


2. Type information for the new column in the window, and press Enter.
The new column is added to the table and you are returned to the Add Column window.
3. When you finish adding columns to the table, press F12.
You are returned to the Tables window.

Displaying and adding a table index

About this task


If a table has a primary key, it must have an index on that key (the primary index). Some queries access
tables using the primary index.
A table can have more than one index. Secondary indexes can give you faster data retrieval, but increase
the amount of time that collect requires to update those tables.
Note: If you want to work with index spaces, see “Displaying and modifying a table or index space” on
page 227.
To view or add indexes to a table:

Procedure
1. From the Tables window, select a table and press Enter.
2. From the Table window, press F6.
The Indexes window is displayed (Figure 107 on page 219).


Indexes ROW 1 FROM 1

Select an index. Then press Enter to display.

/ Indexes Table Unique Cluster


_ SAMPH_IX SAMPLE_H Yes Yes
******************************* BOTTOM OF DATA ********************************

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=Add ind F7=Bkwd F8=Fwd
F9=Swap F11=Delete F12=Cancel

Figure 107. Indexes window


3. To view an index definition, select the index and press Enter.
The Index window is displayed (Figure 108 on page 219). The index on the primary key should be a
unique, clustering index. Refer to the Db2 documentation for a description of the other index options.

Index SAMPH_IX ROW 1 TO 7 OF 9


S
Press Enter to return.
/
/ Table name : SAMPLE_H
* *
Storage group : SYSDEFLT Subpages. . . :4
Primary quantity : 6 Unique . . . :YES
Secondary quantity : 3 Cluster . . . :YES
Erase : NO Buffer pool . :BP0

Column name Column type Seq Order


DATE DATE 1 ASC
TIME TIME 2 ASC
SYSTEM_ID CHAR 3 ASC
DEPARTMENT_NAME CHAR 4 ASC
USER_ID CHAR 5 ASC
TRANSACTIONS INTEGER
RESPONSE_SECONDS INTEGER

C Command ===>

F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 108. Index window


4. Press Enter to return to the Indexes window.
5. From the Indexes window, press F5 to add an index to the table.
The Add Index window is displayed (Figure 109 on page 220).


Add Index ROW 1 TO 5 OF 9


S
Type information. Then press Enter to save and return.
/
Index name (required)
Table Name : SAMPLE_H

Storage group Subpages


Primary quantity Unique 2 1. Yes
Secondary quantity 2. No
Erase 1 1. No Cluster 2 1. Yes
2. Yes 2. No
Bufferpool BP0
Column name Column type Seq Order
DATE DATE
TIME TIME
SYSTEM_ID CHAR
DEPARTMENT_NAME CHAR
USER_ID CHAR

C Command ===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 109. Add Index window


6. Type the information for the new index and press Enter.
The index is added to the table and you are returned to the Indexes window.

What to do next
Note: To modify an index, delete and recreate it.
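Recreating an index in this way is ordinary Db2 data definition. As a sketch, the SQL equivalent of
the primary index shown in Figure 108 (storage group, quantities, and buffer pool taken from that
figure) would be:

CREATE UNIQUE INDEX DRL.SAMPH_IX
  ON DRL.SAMPLE_H
    (DATE, TIME, SYSTEM_ID, DEPARTMENT_NAME, USER_ID)
  CLUSTER
  USING STOGROUP SYSDEFLT PRIQTY 6 SECQTY 3 ERASE NO
  BUFFERPOOL BP0;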

Deleting a table index

About this task


To delete a table index:

Procedure
1. From the Indexes window, select the index and press F11.
A confirmation window is displayed.
2. Press Enter to confirm the deletion.
You are returned to the Indexes window.

Displaying and modifying update definitions of a table

About this task


The instructions for entering data from logs into Db2 tables in the product database are provided by
update definitions. An update definition describes how the data in a source (a record or a table) is
summarized into a target table during the collect process. Refer to the Language Guide and Reference for
information about how to define update definitions using the log collector language.
Update definitions are supplied for all data tables. You can use the administration dialog to modify these
update definitions.
To display and edit the update definitions of a table:


Procedure
1. From the Tables window (Figure 94 on page 203), select the table and press F5.
The Update Definitions window for the table is displayed (Figure 110 on page 221). All update
definitions where the selected table is either the source or the target are included.

Update Definitions ROW 1 TO 2 OF 2

Select an update definition. Then press Enter to display definition.

/ Updates Source Target


_ SAMPLE_H_M SAMPLE_H SAMPLE_M
_ SAMPLE_01_H SAMPLE_01 SAMPLE_H
******************************* BOTTOM OF DATA *******************************

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F5=New F6=Prt list F7=Bkwd
F8=Fwd F9=Swap F10=Save def F11=Delete F12=Cancel

Figure 110. Update Definitions window


2. Select the update definition to modify and press Enter.
The Update Definition window for the update definition is displayed (Figure 111 on page 221).

SAMPLE_01_H Update Definition ROW 1 TO 9 OF 9

Type information. Press F3 to save.

Source . . : SAMPLE_01 Prefix : (blank for record)


Target . . : SAMPLE_H Prefix : DRL
Section . . . __________________ +
Condition . . ________________________________________ >
Comments . . ________________________________________ >

Column Function Expression


DATE __________ S01DATE >
TIME __________ ROUND(S01TIME,1 HOUR) >
SYSTEM_ID __________ S01SYST >
DEPARTMENT_NAME __________ VALUE(LOOKUP DEPARTMENT_NAME IN DRL >
USER_ID __________ S01USER >
TRANSACTIONS SUM S01TRNS >
RESPONSE_SECONDS SUM S01RESP >
CPU_SECONDS SUM S01CPU/100.0 >
PAGES_PRINTED SUM S01PRNT >

Command ===> ______________________________________________________________


F1=Help F2=Split F3=Exit F4=Prompt F5=Abbrev F6=Distrib
F7=Bkwd F8=Fwd F9=Swap F10=Show fld F11=Schedule F12=Cancel

Figure 111. Update Definition window

Complete these fields in the window:


Section
The name of a repeated section in a source record.
If the source is a record, you can type the name of a repeated section in this field. IBM Z
Performance and Capacity Analytics uses the update during collection to process each repeated
section.


Condition
A condition that is applied to source fields or columns.
Type an expression that evaluates as either true or false in this field. IBM Z Performance and
Capacity Analytics evaluates the expression to determine if it is true before processing the source
with the update.
Comments
A description of the update definition.
Column
All columns of the target table.
Function
Describes the accumulation function to use. Blank means that the column is a key (a GROUP BY
column). For data columns, the value of this field can be SUM, MIN, MAX, COUNT, FIRST, LAST,
AVG, or PERCENT.
To use the MERGE function, identify input to the function by designating a column for each of these
functions: INTTYPE, START, END, and QUIET.
Expression
Describes how the value in the column should be derived from source fields, columns, or
abbreviated names of expressions. (See “Working with abbreviations” on page 223 for more
information.) If the update does not affect the value of the column, there is no entry in the
expression field.
For an AVG column, type the expression, followed by a comma, and a column name. For a
PERCENT column, type the expression, followed by a comma, a column name, a comma, and a
percentile value (without the percent sign).
Refer to the Language Guide and Reference for more information about using log collector language:
• Functions
• Accumulation functions
• Expressions
• Statements
• Averages
• Percentiles
3. Type any modifications to the update definition in the fields.
4. Press F5 to modify abbreviations in this update definition.
The Abbreviations window is displayed. See “Working with abbreviations” on page 223, for more
information.
5. Press F6 to modify the distribution clause associated with the update definition.
The Distribution window is displayed. See “Modifying a distribution clause” on page 224 for more
information.
6. Press F11 to modify the apply schedule clause associated with an update definition.
The Apply Schedule window is displayed. See “Modifying an apply schedule clause” on page 224 for
more information.
7. Press F3 when you finish modifying the update definition.
The changes are saved and you are returned to the Update Definitions window.
8. Repeat this procedure to modify other update definitions or press F3 again to return to the Tables
window.
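The fields in the Update Definition window map onto clauses of the log collector DEFINE UPDATE
statement. The following is a rough sketch of the SAMPLE_01_H definition shown in Figure 111; the
DEPARTMENT_NAME lookup is omitted because its expression is truncated in the figure, and the exact
syntax is described in the Language Guide and Reference:

DEFINE UPDATE SAMPLE_01_H
  FROM SAMPLE_01
  TO DRL.SAMPLE_H
  GROUP BY
    (DATE = S01DATE,
     TIME = ROUND(S01TIME, 1 HOUR),
     SYSTEM_ID = S01SYST,
     USER_ID = S01USER)
  SET
    (TRANSACTIONS = SUM(S01TRNS),
     RESPONSE_SECONDS = SUM(S01RESP),
     CPU_SECONDS = SUM(S01CPU/100.0),
     PAGES_PRINTED = SUM(S01PRNT));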


Working with abbreviations

About this task


You can use abbreviations to give names to long expressions that are used several times. Using
abbreviations improves product performance because expressions are evaluated only once.
Defining abbreviations with the administration dialog is equivalent to using the LET clause in a log
collector DEFINE UPDATE statement to assign an expression to a variable name. (Refer to the description
of the DEFINE UPDATE statement in the Language Guide and Reference for more information.)
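As a sketch, the abbreviations shown in Figure 112 correspond to a LET clause of this general form
(the full TS1 expression is truncated in the figure and is therefore not repeated here):

LET (D1 = DATE(TS1),
     T1 = TIME(TS1))

Each abbreviation is evaluated once and can then be used by name in the column expressions of the
update definition.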
To modify an abbreviation:

Procedure
1. From the Update Definition window (Figure 111 on page 221), press F5.
The Abbreviations window is displayed (Figure 112 on page 223).

Abbreviations ROW 1 TO 3 OF 3

Modify expressions if required. Press F3 to save and return.

/ Abbreviation Expression
TS1 TIMESTAMP(SMF70DAT,SMF70IST)+(SMF70INT
D1 DATE(TS1)
T1 TIME(TS1)
BOTTOM OF DATA

Command ===>
F1=Help F2=Split F3=Exit F5=Add abbr F7=Bkwd
F8=Fwd F9=Swap F10=Show fld F11=Delete F12=Cancel

Figure 112. Abbreviations window


2. Type modifications in the fields and press Enter.
The changes are saved and you are returned to the Update Definition window.

Adding an abbreviation to an update definition

About this task


To add an abbreviation to an update definition:

Procedure
1. From the Abbreviations window, press F5.
The Abbreviation window is displayed.
2. Type the abbreviation and the expression in the fields and press Enter.
The abbreviation is added and you are returned to the Abbreviations window.


Deleting an abbreviation from an update definition

About this task


To delete an abbreviation from an update definition:

Procedure
From the Abbreviations window, select the abbreviation to delete, and press F11.
The abbreviation is deleted from the list.

Modifying a distribution clause

About this task


The distribution clause of an update definition specifies that source fields or columns are distributed over
a time period. It can be used when you have a record that contains data for a long time period and you do
not want all values to be summarized at the start or end time.
To modify the distribution clause associated with an update definition:

Procedure
1. From the Update Definition window (Figure 111 on page 221), press F6.
The Distribution window is displayed (Figure 113 on page 224).

Distribution ROW 1 TO 7 OF 65

Type information. Then press Enter to save and return.

By period 60 VALUE(LOOKUP TIME_RESOLUTION IN DRL.M > (seconds)


Start timestamp TIMESTAMP(SMF33TSD,SMF33TST) >
End timestamp TIMESTAMP(SMF33TED,SMF33TET) >
Timestamp INTERVAL_START (any ID)
Interval INTERVAL_LENGTH(any ID)

/ Column/Field
– SMF33ACL
– SMF33ACT
– SMF33ALN
– SMF33AOF
– SMF33AON
/ SMF33CN
/ SMF33CNA

Command ===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap
F10=Show fld F11=Delete F12=Cancel

Figure 113. Distribution window


2. Type modifications in the fields and press Enter.
The changes are saved and you are returned to the Update Definition window.

Modifying an apply schedule clause

About this task


IBM Z Performance and Capacity Analytics uses the apply schedule clause of an update definition in
calculating availability. The clause specifies how the product should merge schedule information in
control tables (see “Control tables” on page 275) with detailed availability information.


To modify the apply schedule clause associated with an update definition:

Procedure
1. From the Update Definition window (Figure 111 on page 221), press F11.
The Apply Schedule window is displayed (Figure 114 on page 225).

AVAIL_T_D Update Definition ROW 1 TO 9 OF 15

Apply Schedule

Type information. Then press Enter to save and return.

Name LOOKUP SCHEDULE_NAME IN DRL.AVAILABILITY > (expression)

Interval type INTERVAL_TYPE + (column)


Start time START_TIME + (column)
Stop time END_TIME + (column)

Status SCHED_STAT Any ID

F1=Help F2=Split F4=Prompt F9=Swap F10=Show fld


F11=Delete F12=Cancel

MEASURED_HOURS SUM INT_TIME >


UP_IN_SCHEDULE SUM CASE WHEN INT_TYPE = '= ' AND SCHED_ >

Command ===>
F1=Help F2=Split F3=Exit F4=Prompt F5=Abbrev F6=Distrib
F7=Bkwd F8=Fwd F9=Swap F10=Show fld F11=Schedule F12=Cancel

Figure 114. Apply Schedule window


2. Type modifications in the fields and press Enter.
The changes are saved and you are returned to the Update Definition window.

What to do next
Refer to the Language Guide and Reference for more information about using the log collector language to:
• Determine resource availability
• Calculate the actual availability of a resource
• Compare actual availability to scheduled availability

Displaying and editing the purge condition of a table

About this task


IBM Z Performance and Capacity Analytics uses purge conditions to specify when old data should be
purged from tables. A table can have only one purge condition. Purge conditions are supplied for all data
tables. You can use the administration dialog to modify the purge condition of a table.
The administrative report PRA003 produces a complete list of all current product purge definitions. For
more information about this report, see “PRA003 - Table Purge Condition” on page 311.
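In effect, a purge condition becomes the WHERE clause of an SQL DELETE statement. For example, with
the 30-day retention period and DATE column shown in Figure 115 on page 226, the purge processing is
equivalent to:

DELETE FROM DRL.MVS_ADDRSPACE_D
WHERE DATE < CURRENT DATE - 30 DAYS;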
To display and edit the purge condition of a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table to update and press F6.


The Retention Period window is displayed (Figure 115 on page 226) if the purge condition is blank or
has the standard format (column_name < CURRENT DATE - n DAYS), and if the column name,
which can be an expression (for example, DATE(START_TIMESTAMP)), is less than 18 characters.

Table Maintenance Utilities Edit View Other Help

D Retention Period for MVS_ADDRSPACE_D

S Type information. Then press Enter to save and return.

/ Retention period 30 Days

Column DATE + Date/Timestamp


s

F1=Help F2=Split F4=Prompt F5=Conditio F9=Swap


F11=Save def F12=Cancel

MVS_WORKLOAD_HV2 DRL VIEW

Command ===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 115. Retention Period window


2. Modify information in the fields. The column is the date or timestamp column in the table that IBM Z
Performance and Capacity Analytics uses to determine the age of the rows.
3. Press Enter.
The changes are saved and you are returned to the Tables window.
4. If the purge condition does not have the standard format, the Purge Condition window is displayed
(Figure 116 on page 226) instead of the Retention Period window.
This window is displayed also if you press F5 from the Retention Period window.

Table Maintenance Utilities Edit View Other Help

D MVS_ADDRSPACE_T Purge Condition

S Modify as required. Then press Enter to save and return.

/ SQL condition

( JOB_TIMESTAMP < CURRENT TIMESTAMP - 7 DAYS AND SYSTEM_ABEN


D_CODE IS NULL AND USER_ABEND_CODE IS NULL ) OR ( JOB_TIMEST
AMP < CURRENT TIMESTAMP - 45 DAYS AND ( SYSTEM_ABEND_CODE IS
NOT NULL OR USER_ABEND_CODE IS NOT NULL ) )

F1=Help F2=Split F9=Swap F11=Save def F12=Cancel

Command ===>
F1=Help F2=Split F3=Exit F5=Updates F6=PurCond F7=Bkwd
F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 116. Purge Condition window


5. Modify the SQL condition, and press Enter.
The changes are saved and the previous window is displayed.


Displaying and modifying a table or index space

About this task


Each table in the product database is in a table space, and each index is in an index space. IBM Z
Performance and Capacity Analytics usually uses one table space for each component. You can use the
administration dialog to work with table and index spaces.
Note: The method described here makes changes directly to your Db2 database, and does not affect
the component definition. You lose such direct changes if you delete and reinstall a component. To
change the component definition to keep it in line with the database, use the Space pull-down in the
Components window, in addition to making the direct change as described in the following steps.
To make a change to a table space:

Procedure
1. From the Tables window, select the Maintenance pull-down. Do not select a table first.
2. The pull-down has these options:

1. Tablespace...
2. Index and index space...

To change table space parameters, select 1. The Tablespace window is displayed (with the Tablespace
pull-down illustrating the options available: you can use the Utilities pull-down to reorganize or get
statistics on a table space).

Tablespace Utilities Other Help

1. New... F5 ces in SVTDB database Row 1 to 20 of 234


2. Open... Enter
3. Delete ess Enter to open a tablespace definition.
4. Save definition
5. Print List
6. Exit ondary Storage grp Type Locksize
20 SVTSG SEGMENTED TABLE
DRLSCI01 60000 30000 SVTSG SEGMENTED TABLE
DRLSCI02 60000 30000 SVTSG SEGMENTED TABLE
DRLSCI03 60000 30000 SVTSG SEGMENTED TABLE
DRLSCI04 40000 20000 SVTSG SEGMENTED TABLE

DRLSIMST 100000 50000 SVTSG SEGMENTED TABLE


DRLSIMSY 100000 50000 SVTSG SEGMENTED TABLE
DRLSINFO 600 300 SVTSG SEGMENTED TABLE
DRLSMAA 60000 30000 SVTSG SEGMENTED TABLE

Command ===>
F1=Help F2=Split F3=Exit F5=New F7=Bkwd F8=Fwd
F9=Swap F10=Actions F12=Cancel

Figure 117. Tablespaces window

You can use the Save definition option to create SQL commands that can recreate the selected table
space. Note that this does not update the component definition: only the definition of the selected
table space is saved.
3. Select a table space and press Enter. The Tablespace window is displayed, which you can use to
change the table space parameters. Change the parameters and press Enter.


Tablespace DRLSCI06

Type information. Then press Enter to save and return.

Type . . . . . . . . . : 2 1.Simple
2.Segmented
3.Partitioned

Storage group . . . . . SVTSG VCAT . . . ________


Primary quantity . . 20000
Secondary quantity 10000
Erase . . . . . . . . 2 1. Yes
2. No

Locksize . . . . . . . . 4 1. Any
2. Tablespace
3. Page
4. Table

Close . . . . . . . . . 1 1. Yes Compress __ 1. Yes


2. No 2. No

Bufferpool . . . . . . . BP0 Dsetpass . . . . . . . . ________


Freepage . . . . . . . . 0 Segment size . . . . . : 8
Pctfree . . . . . . . . 5 Number of partitions . : 0

F1=Help F2=Split F5=Tables F6=Parts F7=Bkwd F8=Fwd


F9=Swap F12=Cancel

Figure 118. Tablespace DRLxxx

IBM Z Performance and Capacity Analytics takes action depending on the parameters to be changed:
Where reorganization is needed
Some parameter changes need a database reorganization before they take effect. In this case the
product:
a. Makes the change, using the ALTER TABLESPACE command.
b. Creates a batch job to reorganize the database, which you can submit when it is convenient.
Where the database needs to be stopped
Some parameter changes need exclusive use of the database. In this case the product creates a
batch job that:
a. Stops the database.
b. Makes the change, using the ALTER TABLESPACE command.
c. Starts the database again.
Do not submit the job if some task, for example a collect, is using the table space, because this
stops the collect job.
In other cases
Some parameter changes can be made immediately. IBM Z Performance and Capacity Analytics
issues the ALTER TABLESPACE command online.
Press F1 to get more information about a parameter, or refer to the discussion of designing a database
in Db2 for z/OS: Administration Guide and Reference.
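For reference, an online change of this kind amounts to a single SQL statement. A sketch, using
example names and LOCKSIZE as one of the parameters that can typically be changed immediately:

ALTER TABLESPACE DRLDB.DRLSSAMP
  LOCKSIZE PAGE;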

Making changes to an index space

About this task


To make a change to an index space:


Procedure
1. From the Tables window (Figure 94 on page 203), select the Maintenance pull-down. Do not select a
table first.
2. To change index space parameters, select 2. The Indexes window is displayed (with the Index pull-
down illustrating the options available; you can use the Utilities pull-down to reorganize an index
space).

Index Utilities Other Help

1. Open...Enter ndexes in SVTDB database Row 1 to 21 of 443


2. Delete
3. Print List ress Enter to display.
4. Exit Indexspace
Table name Indexspace Unique Cluster
AVAILD_IX AVAILABILITY_D AVAILDRI YES YES
AVAILM_IX AVAILABILITY_M AVAILMRI YES YES
AVAILP_IX AVAILABILITY_PARM AVAILPRI YES YES
AVAILT_IX AVAILABILITY_T AVAILTRI YES YES
AVAILW_IX AVAILABILITY_W AVAILWRI YES YES

CICS_DLI_TRAN_D CICS_DLI_TRAN_D CICS1TA1 YES YES


CICS_DLI_TRAN_W CICS_DLI_TRAN_W CICS1TR4 YES YES
CICS_DLI_USR_D CICS_DLI_USR_D CICS1SKV YES YES

Command ===>
F1=Help F2=Split F3=Exit F7=Bkwd F8=Fwd F9=Swap
F10=Actions F12=Cancel

Figure 119. Indexes window


3. Select an index space and press Enter. The Index window is displayed, which you can use to
change the index space parameters. Change the parameters and press Enter.

Index CICS_A_DLI_USR_W

Press Enter to save and return.

Table name . . . . . . : CICS_A_DLI_USR_W


Indexspace name . . . : CICS1S6O

Storage group . . . . . SVTSG VCAT . . . . . . . . ________


Primary quantity . . 20000 Unique . . . . . . . 1 1. Yes
Secondary quantity 10000 2. No
Erase . . . . . . . . 2 1. Yes Cluster . . . . . . 1 1. Yes
2. No 2. No
Close . . . . . . . . . 1 1. Yes Part . . . . . : 2 1. Yes
2. No 2. No

Subpages . . . . . . . . 4 Pctfree . . . . . . 10
Bufferpool . . . . . . . BP0 Dsetpass . . . . . . ________
Freepage . . . . . . . . 0

F1=Help F2=Split F5=Columns F6=Parts F7=Bkwd F8=Fwd


F9=Swap F12=Cancel

Figure 120. Index window

IBM Z Performance and Capacity Analytics takes action depending on the parameters to be changed:
Where the index must be recreated
In this case the product:
a. Asks you to confirm the change.
b. Deletes the index, with the DROP command.
c. Redefines the index, using the DEFINE command.
Where the database needs to be stopped
Some parameter changes need exclusive use of the database. In this case the product creates a
batch job that:
a. Stops the database.
b. Makes the change, using the ALTER command.
c. Starts the database again.


Do not submit the job if some task, for example a collect, is using the index space, because this
stops the collect job.
In other cases
Some parameter changes can be made immediately. IBM Z Performance and Capacity Analytics
issues the ALTER command online.
Press F1 to get more information about a parameter, or refer to the discussion of designing a database
in Db2 for z/OS: Administration Guide and Reference.

Making table space parameter changes that do not require offline or batch action

About this task


If you want to make only table space parameter changes that do not require offline or batch action, you
can use this alternative method:

Procedure
1. From the Tables window (Figure 94 on page 203), select a table in the table space to open.
2. Select 5, Open Tablespace, from the Table pull-down.
IBM Z Performance and Capacity Analytics displays the Tablespace window.

- Tablespace DRLSSAMP
D
Type information. Then press Enter to save and return.
S
More: +
/ Name ......... : DRLSSAMP
s
Type ......... : 2 1.Simple
2.Segmented
3.Partitioned

VCAT

Storage group SYSDEFLT


Primary quantity 3
Secondary quantity 3
Erase 2. 1. Yes
2. No

Command ===>
C F1=Help F2=Split F5=Tables F7=Bkwd F8=Fwd F9=Swap
F12=Cancel

Figure 121. Tablespace window


3. Type any changes in the fields.
Note: You can scroll the window to display more options.
4. Press F5 to see a list of tables in the table space.
The Tables window is displayed.
5. Press Enter when you finish viewing this window.
You are returned to the Tablespace window.
6. Press Enter.
The changes to the table space are saved and you are returned to the Tables window.


Displaying a view definition

About this task


You can use the administration dialog to display a view definition created with SQL statements.
To display the view definition:

Procedure
1. From the Tables window, select a view to display, and press Enter.
The View window is displayed (Figure 122 on page 231).

Table Utilities Edit View Other Help

View MSG_SYSLOG_DV ROW 1 TO 4 OF 4

S Type information. Then press Enter to save and return.


/ Comments ..... This view provides daily statistics on J >

Column Comments
DATE Date when the SYSLOG records were written >
PERIOD_NAME Name of the period. This is derived using >
/
JES_COMPLEX Name of the JES complex. From the SET JE >
MESSAGES_TOT Total number of messages. This is the co >
BOTTOM OF DATA

Command ===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap
F10=Showfld F12=Cancel

F8=Fwd F9=Swap F10=Actions F11=Display F12=Cancel

Figure 122. View window


2. You can change any of the comments in a view definition. To change a comment, type the text in the
Comments field.
3. Press Enter when you finish displaying the view definition.
The changes are saved and you are returned to the Tables window.

Printing a list of IBM Z Performance and Capacity Analytics tables

About this task


IBM Z Performance and Capacity Analytics maintains a list of all tables in the product database. You can
use the administration dialog to print a list of these tables.

Procedure
1. From the Table pull-down in the Tables window (Figure 94 on page 203), select 8, Print list.
The Print Options window is displayed.
2. Type the required information, and press Enter.
The list of IBM Z Performance and Capacity Analytics tables is routed to the destination you specified.


Saving a table definition in a data set

About this task


Each table in the IBM Z Performance and Capacity Analytics database is defined using SQL. You can use
the administration dialog to save the SQL table definition statement in a data set.
To save a table definition statement in a data set:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table definition to save in a data set.
2. Select 7, Save definition, from the Table pull-down.
The Save Data Set window is displayed.
3. Type the data set name in the field, and press Enter.
The table definition in the data set that you specified is saved and you are returned to the Tables
window.

Listing a subset of tables in the Tables window

About this task


When you select 4, Tables, from the Administration window, all tables in the IBM Z Performance and
Capacity Analytics database are listed in the Tables window. You can use the administration dialog to list
only a subset of tables in the IBM Z Performance and Capacity Analytics database in the Tables window.
To specify which tables should appear in the Tables window:

Procedure
1. From the View pull-down in the Tables window (Figure 94 on page 203), select 2, Some, and press
Enter.
IBM Z Performance and Capacity Analytics displays the Select Table window.
2. Type selection criteria in the fields, and press Enter.
Note: You can see a list of components by pressing F4.
The tables that correspond to the criteria you specified are listed.
To list all the tables, from the View pull-down in the Tables window, select 1, All. All the tables in the
IBM Z Performance and Capacity Analytics database are listed.

Creating a table

About this task


IBM Z Performance and Capacity Analytics stores data collected from logs in Db2 tables. Each component
includes table definitions for tables that it uses. However, you might need to create additional tables.
You can use the administration dialog to create a table. You should have a working knowledge of Db2
databases before attempting to create a table. Refer to the Db2 documentation for more information.
Note: Views cannot be created from the administration dialog. Refer to the Db2 documentation for a
description of how to create views using SQL.
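The dialog generates ordinary SQL data definition statements. As a sketch, a table like SAMPLE_H
(Figure 104 on page 216) corresponds to a CREATE TABLE statement of this form:

CREATE TABLE DRL.SAMPLE_H
  (DATE             DATE    NOT NULL,
   TIME             TIME    NOT NULL,
   SYSTEM_ID        CHAR(4) NOT NULL,
   DEPARTMENT_NAME  CHAR(8) NOT NULL,
   USER_ID          CHAR(8) NOT NULL,
   TRANSACTIONS     INTEGER,
   RESPONSE_SECONDS INTEGER,
   CPU_SECONDS      FLOAT,
   PAGES_PRINTED    INTEGER,
   PRIMARY KEY (DATE, TIME, SYSTEM_ID, DEPARTMENT_NAME, USER_ID))
  IN DRLDB.DRLSSAMP;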
To create a table:


Procedure
1. From the Table pull-down in the Tables window (Figure 94 on page 203), select 1, New, and press
Enter.
The New Table window is displayed (Figure 123 on page 233).
2. Type required information in the fields.
3. To see a list of available table spaces, place the cursor in the Tablespace field, and press F4.
The Prompt for Tablespace window is displayed. If the table is related to existing tables, you might
want to put the table in the same table space.
4. Select a table space from the list, and press Enter.
The product returns to the New Table window, and the table space appears in the Tablespace field.
Note: To create a table space, see “Creating a table space” on page 234.

New Table

Type information. Then press F5 to add columns. To select an already
added column, press Enter.

Table name . . __________________     Prefix . . . . DRL
Database . . . DRLDB                  Tablespace . . ________ +
Comments . . . ________________________________________ >

/  Column             Type      Length  Nulls  Primary Key
******************************* BOTTOM OF DATA ********************************

Command ===> __________________________________________________________
F1=Help   F2=Split  F3=Exit     F4=Prompt     F5=Add col  F6=Indexes
F7=Bkwd   F8=Fwd    F9=Swap     F10=Show fld  F11=Delete  F12=Cancel

Figure 123. New Table window


5. Press F5 to add a column to the table.
The product displays the Add Column window (Figure 106 on page 218).
6. Type the required information in the fields, and press Enter.
You are returned to the Add Column window.
7. When you finish adding columns to the table, press F12.
You are returned to the New Table window.
8. Press F6 to add indexes to the table.
The Indexes window is displayed (Figure 107 on page 219).
9. Press F5 to add an index.
The Add Index window is displayed (Figure 109 on page 220).
10. Type the required information in the fields, and press Enter.
The index is added and you are returned to the Indexes window.
11. Press F3 to return to the New Table window.
12. Press F3 when you finish typing information.
The table is added to the database and you are returned to the Tables window.
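
For reference, the index added in steps 8 through 11 corresponds to ordinary Db2 SQL. A minimal
sketch, assuming the illustrative table from the previous example (in Db2, a table with a primary key
is incomplete until a unique index exists on the key):

-- Hypothetical unique index enforcing the primary key.
CREATE UNIQUE INDEX DRL.SAMPLE_HX
  ON DRL.SAMPLE_H (DATE, SYSTEM_ID);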

Using an existing table as a template

About this task


You can also create a table by using an existing table as a template.


Procedure
1. From the Tables window, select the table to use as a template.
2. Select 1, New, from the Table pull-down.
The New Table window is displayed.
The fields are filled with information from the template table.
3. The rest of the procedure is the same as when creating a table without a template.
Note: The index for the template table is not copied and must be added for the primary key. To add an
index, see “Displaying and adding a table index” on page 218.

Deleting a column from the table

About this task


You can use the administration dialog to delete a column from a table you are defining. To delete a
column:

Procedure
1. From the New Table window, select an existing column.
2. Press F11 to delete the column.
A confirmation window is displayed.
3. Verify the deletion by pressing Enter.
The column is deleted and you are returned to the New Table window.

Deleting a table or view

About this task


To delete a table or view:

Procedure
1. Select the table or view to delete in the Tables window (Figure 94 on page 203) and select 6, Delete,
from the Table pull-down.
Note: IBM Z Performance and Capacity Analytics prevents you from deleting table definitions that
affect, or are affected by, other product objects. To delete a table definition, remove links from the
table to other product objects.
A confirmation window is displayed.
2. Verify the deletion by pressing Enter.
The table or view is deleted and you are returned to the Tables window.
Note: A table in a partitioned table space cannot be explicitly deleted (dropped). You can drop the
table space that contains it. This does not have any impact on other tables because only one table can
be defined in a single table space.
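
As a sketch, assuming a partitioned table space with the illustrative name DRLSPART in database
DRLDB, the equivalent SQL is:

-- Dropping the table space also drops the single table defined in it.
DROP TABLESPACE DRLDB.DRLSPART;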

Creating a table space

About this task


Db2 tables are in table spaces. For a new table, you might need to create a table space.


You can use the administration dialog to create a table space. You must have some knowledge of Db2
databases before creating the table space. See “Understanding table spaces” on page 146 for more
information about table spaces, or refer to the discussion of designing a database in Db2 for z/OS:
Administration Guide and Reference.
To create a table space:

Procedure
1. From the New Table window (Figure 123 on page 233), place the cursor in the Tablespace field and
press F4.
The Prompt for Tablespace window is displayed.
2. From the Prompt for Tablespace window, press F5.
The New Tablespace window is displayed.
3. Type required information in the fields, and press Enter.
A table space is created and you are returned to the Prompt for Tablespace window.
4. Press Enter again to return to the New Table window.
5. Continue creating the table as described in “Creating a table” on page 232.
Note: It is also possible to create a table space without creating a table: use the Maintenance pull-
down in the Tables window (as described in “Displaying and modifying a table or index space” on page
227) and select New from the Tablespace pull-down in the Tablespaces window.
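
For reference, the SQL behind a new table space might resemble the following sketch; the table space
name and attributes are illustrative, and your installation's sizing values will differ:

-- Hypothetical segmented table space in the product database.
CREATE TABLESPACE DRLSSAMP IN DRLDB
  SEGSIZE 32
  LOCKSIZE PAGE
  BUFFERPOOL BP0;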

Creating an update definition

About this task


In IBM Z Performance and Capacity Analytics, update definitions specify how to store data from log
records in Db2 tables and how to use data from one table to update another. Each component includes
all the update definitions that it uses. However, if you tailor the objects used during a collect, or create
components of your own, you might need to create more update definitions.
You can use the administration dialog to create an update definition. You can also use log collector
language. Refer to the Language Guide and Reference for more information about defining update
definitions using log collector language.
To create an update definition:

Procedure
1. From the Tables window (Figure 94 on page 203), select a table for addition of an update definition,
and press F5.
The Update Definitions window is displayed (Figure 110 on page 221).
2. To use an existing update definition as a template, select one of the update definitions from the list
and press F5. Otherwise, do not select an update definition.
The New Update Definition window is displayed. The columns are filled with values from the template.
3. To create an update definition without a template, press F5 from the Update Definitions window.
You are prompted for the name of the target table in the Target Table of New Update window. Type the
name of the target table, and press Enter.
The New Update Definition window is displayed.
4. Type required information in the fields, and press F3.
The new update definition is saved and you are returned to the Update Definitions window.


You might choose to use abbreviations for expressions in the expression fields. Or you might require
that data be distributed over some interval or used in availability processing. See these topics in
“Displaying and modifying update definitions of a table” on page 220 for information:
• “Working with abbreviations” on page 223
• “Modifying a distribution clause” on page 224
• “Modifying an apply schedule clause” on page 224
5. Press F3 again to return to the Tables window.

Deleting an update definition

About this task


Update definitions are supplied for all data tables. You can use the administration dialog to delete an
update definition you no longer need. IBM Z Performance and Capacity Analytics removes all references
to the update from its system tables. However, it does not delete the definition member; you can use the
dialog to reinstall it.
To delete an update definition of a table:

Procedure
1. From the Tables window (Figure 94 on page 203), select the table and press F5.
The Update Definitions window for the table is displayed (Figure 110 on page 221). All update
definitions where the selected table is either the source or the target are included.
2. Select the update definition to delete, and press F11.
A confirmation window is displayed.
3. Verify the deletion by pressing Enter.
The definition is updated and you are returned to the Update Definitions window.
4. Press F3 to return to the Tables window.

Administering user access to tables

About this task


When you install a component, IBM Z Performance and Capacity Analytics grants read access to the users
or groups you have specified in dialog parameters (the default is the DRLUSER group). You can use the
administration dialog to grant or revoke table access to other IBM Z Performance and Capacity Analytics
users.
To grant table access to other users:

Procedure
1. From the Tables window (Figure 94 on page 203), select one or more tables to grant access to.
2. Select 5, Grant, from the Utilities pull-down.
The Grant Privilege window is displayed (Figure 124 on page 237).


Table  Maintenance  Utilities  Edit  View  Other  Help

Tables                                            ROW 186 TO 198 OF 212

        Grant Privilege

        Type information. Then press Enter to save and return.

        Table : DRL.SAMPLE_H

        User ID  ________

        Access   1  1. Read only
                    2. Read and update
                    3. All

        F1=Help  F2=Split  F9=Swap  F12=Cancel

F7=Bkwd   F8=Fwd   F9=Swap   F10=Actions   F11=Display   F12=Cancel

Figure 124. Grant Privilege window


3. Type required information in the fields, and press Enter.
The user ID is granted access to the table.
4. When you finish granting access to the table, press F12.
If you selected more than one table, the Grant Privilege window for the next table is displayed. When
you complete the Grant Privilege window for the last table, you are returned to the Tables window.
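
The window issues standard Db2 GRANT statements on your behalf. As a sketch, granting read-only
access (option 1) to a hypothetical user ID corresponds to SQL like the following; option 2
presumably adds the update privileges shown in the Revoke Privilege window:

-- Read only (option 1); the user ID is illustrative.
GRANT SELECT ON TABLE DRL.SAMPLE_H TO USER01;

-- Read and update (option 2); adds the update privileges.
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE DRL.SAMPLE_H TO USER01;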

Revoking table access

About this task


To revoke table access:

Procedure
1. From the Tables window (Figure 94 on page 203), select one or more tables to revoke access to.
2. Select 6, Revoke, from the Utilities pull-down.
The Revoke Privilege window (Figure 125 on page 238) is displayed.


Revoke Privilege ROW 1 TO 5 OF 5

Select one or more user IDs. Then press Enter to execute.

/ User ID Table Privilege Grantor


– DRL DRL.SAMPLE_H DELETE DRL
– DRL DRL.SAMPLE_H UPDATE DRL
– DRL DRL.SAMPLE_H INSERT DRL
– DRL DRL.SAMPLE_H SELECT DRL
– DRLUSER DRL.SAMPLE_H SELECT DRL
BOTTOM OF DATA

Command ===>
F1=Help F2=Split F7=Bkwd F8=Fwd F9=Swap F12=Cancel

Figure 125. Revoke Privilege window


3. Select the user IDs with table access privileges to revoke, and press Enter.
The access privileges are revoked and you are returned to the Tables window.
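
Correspondingly, revoking a privilege issues a Db2 REVOKE statement; a minimal sketch with an
illustrative user ID:

-- Revoke the read privilege granted earlier.
REVOKE SELECT ON TABLE DRL.SAMPLE_H FROM USER01;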

Working with the log data manager option


This section contains information about the IBM Z Performance and Capacity Analytics log data manager
option, which automates and simplifies the collection of data.
After providing a summary of the log data manager, this section then describes:
• How the log data manager is invoked from the administration dialog (page “Summary of how the log
data manager is used” on page 238).
• The job for recording of log data sets for collection (page “Job step for recording a log data set for
collection” on page 239).
• Modifying log collector statements to be used in the collect (page “Modifying log collector statements”
on page 241).
• Modifying the list of log data sets to be collected (page “Listing and modifying the list of log data sets to
be collected” on page 244).
• The collect job and the parameters it uses (page “The DRLJLDMC collect job and the parameters it
uses” on page 247).
• Modifying the list of successfully collected log data sets (page “Modifying the list of successfully
collected log data sets” on page 250).
• Modifying the list of unsuccessfully collected log data sets (page “Modifying the list of unsuccessfully
collected log data sets” on page 252).

Summary of how the log data manager is used


You usually include a log data set for use with the log data manager by inserting a job step DRLJLDML in
the job that creates the log data set. The job step DRLJLDML records the log data set as being ready to be
collected by the log data manager collect job. You must run the job step DRLJLDML for each log data set
that you want to be collected.
The log data manager collect job DRLJLDMC then performs the data collection and updates the database
tables.
You can also use the Administration dialog windows to do the following:


• Amend the list of log data sets to be collected.


• Amend the list of the log data sets that were successfully or unsuccessfully collected.
• Amend the collect statements used in a collect.

Invoking the log data manager

About this task


To invoke the log data manager:

Procedure
1. From the Administration Dialog window, select 3, Logs, to display the Logs window.
2. Select one of the displayed logs, then select 5, Open Log Data Manager (a new option provided with
the log data manager) from the Log pull-down.
The log data manager Main Selection window (Figure 126 on page 239) is displayed.

DRLDLDML Log Data Management of SMF logs

Select one of the following. Then press Enter.

__ 1. Log collector statements
   2. Log data sets to be collected
   3. Log data sets collected successfully
   4. Log data sets collected with failure

F1=Help F2=Split F9=Swap F12=Cancel

Figure 126. Log Data Manager Main Selection window.

Each of these options is discussed in the sections of this chapter.


3. The Main Selection window gives you the possibility to:
• Browse, add, delete and modify log collector statements.
• Add, delete, and change the list of log data sets to be collected by the collect job.
• List the log data sets that were collected successfully by the collect job.
• List the log data sets that were collected unsuccessfully by the collect job.

Job step for recording a log data set for collection


The job step DRLJLDML records a log data set as being ready to be collected. The collect job DRLJLDMC
then performs the collection of this log data set (described in “The DRLJLDMC collect job and the
parameters it uses” on page 247).
After job step DRLJLDML has successfully run, IBM Z Performance and Capacity Analytics will have
created a record in system table DRLLDM_LOGDATASETS (described in “DRLLDM_LOGDATASETS” on
page 258). You must run this job step for each log data set that you want to be collected by the log data
manager. The list of log data sets to be collected can then be displayed, changed, or deleted, or a log
data set added for collection (an alternative to using the DRLJLDML job), using the Log Data Sets To Be
Collected window, described in “Listing and modifying the list of log data sets to be collected” on page
244.

Using the DRLJLDML job step

About this task


To use the DRLJLDML job step:


Procedure
1. Ensure that your log data sets are cataloged (otherwise the DRLJLDML job step does not work).
2. Take a copy of the supplied sample DRLJLDML job step.
3. Insert the DRLJLDML job step in each job that creates a log data set, and which you want to be
collected by the log data manager. For Generation Data Sets, you must insert the DRLJLDML job step
after each Generation Data Set member that has been created.
4. Enter the name of the log data set (*.stepname.ddname) in the DRLLOG DD statement of the job step
(described in “DRLJLDML sample job” on page 240).
5. Run the job you have now amended, to create the log data set.

DRLJLDML sample job


This job is shipped with IBM Z Performance and Capacity Analytics as sample job DRLJLDML.

//DRLJLDML JOB (ACCT#),'LOGS'


//********************************************************************
//* Name: DRLJLDML *
//* *
//* Function: *
//* Log Data Manager - register a log data set sample job *
//* *
//* This job is used to register the log data set (only one) *
//* specified in DRLLOG in the DRLLDM_LOGDATASETS as being ready *
//* for collect by the Log Data Manager. *
//* *
//* Input: *
//* The exec DRLELDML accepts the following parameters: *
//* *
//* SYSPREFIX=xxxxxxxx Prefix for system tables. default=DRLSYS *
//* PLAN=xxxxxxxx Db2 plan name default=DRLPLAN *
//* SYSTEM=xxxxxx Db2 subsystem name. default=DSN *
//* SHOWSQL=xxx Show SQL. YES/NO default=NO *
//* LOGTYPE=xxxxxxxxxx Log type (e.g. SMF). Required. *
//* LOGID=xxxxxx Log ID. If not specified (or =''), a blank *
//* Log ID is generated, and the default collect*
//* statement is used in collect. *
//* ONTAPE=N/Y Specify if the LOG name is on DASD or not. If*
//* not coded, it defaults to NO. *
//* *
//* DRLLOG DD card: Name of log data set to be registered *
//* (can refer to a previous step). *
//* It must be cataloged. *
//* *
//* Output: Log data set name registered in *
//* sysprefix.DRLLDM_LOGDATASETS together *
//* with LOG_NAME, LOG_ID and TIME_ADDED. *
//* Confirmation message including data set name *
//* *
//* Notes: *
//* Before you submit the job, do the following: *
//* 1. Fill in a correct log data set name. *
//* 2. Check that the steplib db2loadlibrary is correct. *
//* 3. Change the input parameters to DRLELDML as required. *
//* 4. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* 5. Change the IZPCA high level qualifier. Default is 'DRLvrm'. *
//* *
//********************************************************************
//LDMLOG EXEC PGM=IKJEFT01
//*
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD
// DD DISP=SHR,DSN=db2loadlibrary <--
//***********************************************
//* MESSAGES
//*
//DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//***********************************************
//* LOG DATA SET
//* DSN=*.stepname.ddname can be used
//*
//DRLLOG DD DISP=SHR,DSN=... <--


//***********************************************
//* START EXEC DRLELDML
//*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLELDML SYSTEM=DSN SYSPREFIX=DRLSYS -
LOGTYPE=SMF -
LOGID='' ONTAPE=N
/*

Setting the parameters for job DRLJLDML

About this task


These are the rules for entering parameter values:
1. LOGTYPE is the only parameter that you must change. The remaining parameters are optional.
2. Blanks must not exist before or after an equal (=) sign.
3. Blanks must not exist within a parameter value.
4. A parameter value must not be enclosed in apostrophes.
5. A continuation mark (-) can be placed in any column.
The following are the DRLJLDML job parameters:
SYSPREFIX
The prefix of all IBM Z Performance and Capacity Analytics system and control Db2 tables. If you do
not specify a value here, the default DRLSYS is used.
SYSTEM
The Db2 subsystem. The default value is DSN.
PLAN
The name of the Db2 application plan. The default value is DRLPLAN.
SHOWSQL
When this value is set to YES, all executed SQL statements will be written to an output file. The default
value is NO.
LOGTYPE, LOGID
Each combination of LOGTYPE and LOGID identifies the collect statements to be used by the collect
job (which is run after this job):
• If you do not enter a value for LOGID, or if you enter two apostrophes with no blank between (''), the
default collect statements for this LOGTYPE will be used for collecting the log data set.
• If you set LOGID to a user-defined value, the collect statements for the user-defined value will be used
for this LOGTYPE, when collecting the log data set.
• Using different values of LOGID will produce more than one collect for a specific LOGTYPE. These
collects will normally be run serially. However, you can run these collects in parallel by setting up your
system accordingly.
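
After DRLJLDML has run, you can verify the registration by querying the DRLLDM_LOGDATASETS system
table directly (see “DRLLDM_LOGDATASETS” on page 258). A minimal sketch, assuming the default
DRLSYS prefix:

-- Log data sets registered but not yet collected
-- (COMPLETE is blank until a collect selects them).
SELECT DATASET_NAME, LOG_NAME, LOG_ID, TIME_ADDED
  FROM DRLSYS.DRLLDM_LOGDATASETS
  WHERE COMPLETE = ' '
  ORDER BY LOG_ID, TIME_ADDED;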

Modifying log collector statements


In order to modify log collector statements, this section describes the following:
• “Listing the data sets containing collect statements” on page 242
• “Editing the collect statements” on page 242
• “Adding a log ID and collect statements data set” on page 243
• “Changing the collect statements data set name” on page 244


Listing the data sets containing collect statements

About this task


To list the log collector statements used with a log type:

Procedure
Select 1, Log collector statements, from the log data manager Main Selection window.
The Collect Statements window (Figure 127 on page 242) is displayed, one row for each log ID defined for
the log type. When a default row is created during installation of a product component, the Log ID field is
always blank.

DRLDLDMS Log Data Manager Collect Statements for SMF

Select a Log ID. Then press Enter to edit the collect statement

/ Log ID Collect statement data set


s DRLxxx.SDRLDEFS(DRLBSMF)
_ MVSA DRLxxx.LOCAL.DEFS(MVSACOLL)
_ MVSB DRLxxx.LOCAL.DEFS(MVSBCOLL)
_ MVSX DRLxxx.LOCAL.DEFS(MVSXCOLL)
_ MVS1 DRLxxx.LOCAL.DEFS(MVS1COLL)
_ SYS1 DRLxxx.LOCAL.DEFS(SYS1COLL)

Command ===> ___________________________________________________________


F1=Help F2=Split F3=Exit F5=Add F6=Modify F7=Bkwd F8=Fwd
F9=Swap F11=Delete F12=Cancel

Figure 127. Collect Statements window

Editing the collect statements

About this task


To edit (default action) the collect statements for a log ID:

Procedure
1. Select the log ID whose collect statements you want to edit, and press Enter. The Edit window (Figure
128 on page 243) is displayed.
2. Edit the collect statements using the ISPF editor. If the member does not exist, it is automatically
created by the edit. If the collect statements data set does not exist or is not cataloged, an error
message is displayed. A confirmation window is displayed if a member of the product definition library
is selected for editing. If you want to edit collect statements that reside in the product distribution
library, follow the instructions given in “Modifying IBM Z Performance and Capacity Analytics-supplied
collect statements” on page 243.
3. On completion of the editing, you are returned to the Log Data Manager Collect Statements window.

Results
Note: The COMMIT AFTER BUFFER FULL ONLY parameter will not be accepted in the collect statement
member if the collect involves concatenated log data sets (an appropriate error message is displayed).
The reason is that such concatenated data sets are never recorded in the DRLLOGDATASETS system table
as being collected.


EDIT ---- DRLxxx.SDRLDEFS(DRLBSMF)---------------------------- COLUMNS 0


****** ***************************** TOP OF DATA ***********************
000001 COLLECT SMF;
****** **************************** BOTTOM OF DATA *********************

COMMAND ===> SCROLL ===


F1=Help F2=Split F3=Exit F5=Rfind F6=Rchange F7=Up
F8=Down F9=Swap F10=Left F11=Right F12=Cancel

Figure 128. Edit collect statements window

Modifying IBM Z Performance and Capacity Analytics-supplied collect statements

About this task


Not all the components have a default collect statement supplied by the product. For these components,
you must create or modify the collect statements for the log types that they use. You might also want to modify
other IBM Z Performance and Capacity Analytics-supplied collect statements. In all cases, a warning
is displayed if you attempt to edit a collect statement member that resides in the product distribution
library.
Note: Any modifications you make to IBM Z Performance and Capacity Analytics-supplied collect
statements are lost if a PTF or new release updates the member containing the collect statements.
To modify an IBM Z Performance and Capacity Analytics-supplied collect statement member:

Procedure
1. Copy the member containing the collect statements to your local library.
2. Use option F6=Modify of the Log Data Manager Collect Statements window to change the data set
name of the default log ID (see “Modifying log collector statements” on page 241 for details).
3. Edit the collect statements member as you require.

Adding a log ID and collect statements data set

About this task


To add a log ID and data set name to the list:

Procedure
1. Press F5 and the Add Collect Statements Definition window is displayed (Figure 129 on page 244).
2. Type a log ID and data set name and press Enter. The log ID and data set name are added to the Log
Data Manager Collect Statements list in alphanumeric sequence. However, a non-existent data set is
not created.


DRLDLDMA Add Collect Statements Definition for SMF

Type information. Then press Enter to save.

Log ID ________ (blank for default collect statements)


Data set name __________________________________________________

F1=Help F2=Split F9=Swap F12=Cancel

Figure 129. Add Collect Statements Definition window

Changing the collect statements data set name

About this task


To change the name of a collect statements data set:

Procedure
1. Select the log ID corresponding to the data set name which you want to modify, and press F6. The
Modify Collect Statements Definition window is displayed (Figure 130 on page 244).
2. Type the modified data set name and press Enter. The data set name is changed in the Log Data
Manager Collect Statements list.

DRLDLDMB Modify Collect Statements Definition for SMF

Type information. Then press Enter to save.

Log ID MVSA____
Data set DRLxxx.LOCAL.DEFS(MVSACOLL)__________________________

F1=Help F2=Split F9=Swap F12=Cancel

Figure 130. Modify Collect Statements Definition window

Listing and modifying the list of log data sets to be collected


In order to list and modify the list of log data sets to be collected, this section describes the following:
• “Listing the log data sets to be collected” on page 244
• “Modifying the log ID for a log data set” on page 245
• “Deleting information about a log data set” on page 246
• “Recording a log data set to be collected again” on page 246
• “Adding a log data set to be collected” on page 246

Listing the log data sets to be collected

About this task


To list the log data sets to be collected:

Procedure
Select 2, Log data sets to be collected, from the log data manager Main Selection window.
The Log Data Sets To Be Collected window (Figure 131 on page 245) is displayed, one row for each log ID
and log data set.


What to do next
The list of log data sets is sorted first by log ID, and then by the date the log data set was added.
Each log data set displayed in this window has a value in the Status column, which can contain one of
these values:
• blank
The log data set is ready to be collected by the DRLJLDMC job (see “The DRLJLDMC collect job and the
parameters it uses” on page 247 for details).
• 'SELECT'
This value occurs when the log data set has been selected for collect by the DRLJLDMC job, but the
collect has not completed. The data set is protected from a collect by a “parallel” invocation of the
DRLJLDMC job. If the DRLJLDMC job abends, the action you take depends upon how many log data
sets have the status 'SELECT' after the abend has occurred (see the query sketch after Figure 131):
– If there are many log data sets with status 'SELECT', run job DRLELDMC with parameter
CLEANUP=YES, to record the log data sets as ready for collection again.
– If there are only a few log data sets with status 'SELECT', it is easier to manually record the data sets
as ready for collection again by selecting F4=Rerun for these log data sets.
• A log collector return code or a system or user abend code
This occurs when the log data set was collected with a failure, and the Rerun option was selected for
this log data set in the Log Data Sets Collected with Failure window (described in “Modifying the list of
unsuccessfully collected log data sets” on page 252). The data set is collected again the next time job
DRLELDMC is run.

DRLDLDMT SMF Log Data Sets To Be Collected

Select a data set. Then press Enter to modify Log ID.

/ Log ID Log data set Time added Status


_ SYS170.SMFLOG.SLOG9501222 2004-11-22.13
s MVSA SYS170.SMFLOGA.SLOG950122 2004-11-21.23 SELECT
_ MVSB SYS170.SMFLOGB.SLOG950122 2004-11-22.01
_ MVSX SYS170.SMFLOGX.SLOG950122 2004-11-22.01
_ MVS1 SYS170.SMFLOG1.SLOG02 2004-11-21.23 8
_ MVS2 SYS170.SMFLOG.MVS2.SLOG01 2004-11-21.10 U0005
_ SYS1 SYS170.SMFLOG.SYS1.SLOG01 2004-11-18.10 20

Command ===> ___________________________________________________________


F1=Help F2=Split F3=Exit F4=Rerun F5=Add F7=Bkwd F8=Fwd
F9=Swap F11=Delete F12=Cancel

Figure 131. SMF Log Data Sets To Be Collected window
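
To see how many log data sets are left in the 'SELECT' state after an abend, you can query the
DRLLDM_LOGDATASETS system table directly; a minimal sketch, assuming the default DRLSYS prefix:

-- Data sets still marked as selected by an interrupted collect
-- (COMPLETE is 'S' while a collect is running).
SELECT DATASET_NAME, LOG_ID, TIME_COLLECT_CALL
  FROM DRLSYS.DRLLDM_LOGDATASETS
  WHERE COMPLETE = 'S';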

Modifying the log ID for a log data set

About this task


To modify the log ID (the default action) to be used with a log data set:

Procedure
1. Select the log ID and press Enter.
The Modify Log ID for a Log Data Set window is displayed (Figure 132 on page 246).
2. Type the modified log ID and press Enter. The log ID is then changed in the Log Data Sets To Be
Collected list.
Note: You can also use this window to display the full length of a truncated log data set name. Data set
names longer than 34 characters are truncated in the Log Data Sets To Be Collected window, but are
displayed in full in the Modify Log ID for a Log Data Set window.


DRLDLDMM Modify Log ID for a SMF Log Data Set

Type Log ID. Then press Enter to save.

Log ID MVSA (blank for default collect statements)


Data set SYS150.SMFLOGA.SLOG950122

F1=Help F2=Split F9=Swap F12=Cancel

Figure 132. Modify Log ID For a Log Data Set window.

Deleting information about a log data set

About this task


To delete an entry from the Log Data Sets To Be Collected window:

Procedure
1. Select the log ID and log data set and press F11.
2. Press Enter to confirm deletion.

Recording a log data set to be collected again

About this task


A log data set can be recorded for collection again if it has the value 'SELECT' in the Status column.
This occurs when the collect job abends, leaving the log data set with the status 'SELECT'.
After the log data set has been recorded for collection again, it is included in the next collect job
(described in “The DRLJLDMC collect job and the parameters it uses” on page 247).
To record a log data set to be collected again:

Procedure
1. Select the log ID and log data set and press F4
2. Press Enter to confirm.

Adding a log data set to be collected

About this task


To add an entry to the Log Data Sets To Be Collected list:

Procedure
1. Press F5 and the Add a Data Set To Be Collected window is displayed (Figure 133 on page 247).
2. Type the log ID and log data set name and press Enter.
The Log Data Sets To Be Collected window is displayed, containing the added entry.


DRLDLDMN Add a SMF Data Set To Be Collected

Type information. Then press Enter to save.

Log ID ________ (blank for default collect statements)


Data set name ____________________________________________________

F1=Help F2=Split F9=Swap F12=Cancel

Figure 133. Add a Data Set To Be Collected window

An error message is displayed in this window if you attempt to add an already existing log data set.

The DRLJLDMC collect job and the parameters it uses


The job DRLJLDMC is used to collect log data sets that are recorded as being ready for collection.
A system table (described in “DRLLDM_COLLECTSTMT” on page 258) is used to identify the data set
containing the collect statements to be used for the collect.
Log data sets are recorded as ready for collection either by running the job DRLJLDML (see “Job step
for recording a log data set for collection” on page 239 for details), or by using the Log Data Sets To Be
Collected window (see “Listing and modifying the list of log data sets to be collected” on page 244 for
details).

Deciding which log data sets to collect


Using the two parameters LOGTYPE and LOGID, you specify which log data sets you want to collect. If
you omit both parameters, all log data sets that are ready to be collected are collected. If, however,
you enter values for LOGTYPE and LOGID, only a subset of the log data sets belonging to the
specified log type is collected.

Concatenation of log data sets


Each time you run the DRLELDMC EXEC, all log data sets corresponding to the values you enter for the
parameters LOGTYPE and LOGID are serially collected. The log collector function is used only once for all
log data sets of the same log type and log ID. Log data sets are added to the log collector file DRLLOG in
the order in which they were recorded by the Log Data Manager. As a result, the log collector output files
DRLOUT and DRLDUMP may contain the output from many log data sets.
You should also note that if the collect of such a concatenated log data set fails after one or more log data
sets have been successfully collected, the remaining log data sets in the concatenation are not collected.
You must then rerun the DRLJLDMC collect job, to collect these remaining log data sets.

Running collect jobs in parallel


If you do not specify the LOGID and/or the LOGTYPE parameters, the DRLELDMC EXEC calls the log
collector once for each combination of log type and log ID that is processed. If you
want to decrease the total elapsed time of these collects, you can run DRLJLDMC collect jobs in parallel.
However, you should not run jobs with the same LOGTYPE parameter in parallel.

DRLJLDMC sample job


This job is shipped with the product as sample job DRLJLDMC.

//DRLJLDMC JOB (ACCT#),'COLLECT'


//********************************************************************
//* Name: DRLJLDMC *
//* *
//* Function: *
//* Log Data Manager Collect Log Data Sets sample job *


//* *
//* This job is used to collect log data sets that are recorded *
//* in the DRLLDM_LOGDATASETS system table as being ready for *
//* collect by the Log Data Manager. *
//* *
//* Input: *
//* The exec DRLELDMC accepts the following parameters: *
//* *
//* SYSPREFIX=xxxxxxxx Prefix for system tables. default=DRLSYS *
//* SYSTEM=xxxxxx Db2 subsystem name. default=DSN *
//* PREFIX=xxxxxxxx Prefix for all other tables.default=DRL *
//* PLAN=xxxxxxxx Db2 plan name default=DRLPLAN *
//* DSPREFIX=xxxxxxxx Prefix for creation of data sets DRLOUT and *
//* DRLDUMP. default=DRL *
//* SHOWSQL=xxx Show SQL. YES/NO default=NO *
//* SHOWINPUT=xxx Copy DRLIN to DRLOUT. YES/NO default=YES *
//* LOGTYPE=xxxxxxxxxx Log type (e.g. SMF). If not specified, *
//* all log types are selected for processing. *
//* LOGID=xxxxxx Log ID. If not specified, all log IDs *
//* are selected for processing. Default Log ID *
//* should be coded as =''. *
//* RETENTION=xxx Retention period for DRLOUT, DRLDUMP and *
//* collect result info. default=10 days *
//* PURGE=xxx Purge info for successful collects that *
//* are older than its Retention period *
//* YES/NO default=YES *
//* CLEANUP=xxx Option only to be used after an Abend. *
//* No collect is done. Processes only log data *
//* sets marked with SELECT in the Log Data Sets*
//* To Be Collected list (on panel DRLDLDMT). *
//* Output: the data set being collected when *
//* the abend occurred will be moved to the *
//* Collected With Failure list. Other concate- *
//* nated data sets are moved to the Successful *
//* list or made ready for a renewed collect. *
//* YES/NO default=NO *
//* *
//* DRLOUT/DRLDUMP DD card: if any of these files are specified *
//* they will be used by all collects started by*
//* this job. They will then not be controlled *
//* or viewed by the Log Data Manager dialog. *
//* *
//* DRLLOG DD card: Must not be allocated. *
//* *
//* LMDLOG EXEC card: The value used for DYNAMNBR should be *
//* as a minimum, 2 plus the number of *
//* log data sets to be collected. *
//* *
//* Output: The results of the collects are recorded in *
//* sysprefix.DRLLDM_LOGDATASETS together *
//* with LOG_NAME, LOG_ID and TIME_ADDED. *
//* Job messages in the DRLMSG file *
//* *
//* Notes: *
//* Before you submit the job, do the following: *
//* 1. Check that the steplib db2loadlibrary is correct. *
//* 2. Change the parameters to DRLELDMC as required. *
//* 3. Change the Db2 load library name according to *
//* the naming convention of your installation. *
//* Default is 'db2loadlibrary'. *
//* 4. Change the IZPCA data set HLQ (default is DRLvrm.) *
//* *
//********************************************************************
//LDMLOG EXEC PGM=IKJEFT01,DYNAMNBR=20
//*
//SYSPROC DD DISP=SHR,DSN=DRLvrm.SDRLEXEC --
//STEPLIB DD DISP=SHR,DSN=DRLvrm.SDRLLOAD --
// DD DISP=SHR,DSN=db2loadlibrary --
//*********************************************************
//*DRLOUT DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*DRLDUMP DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*********************************************************
//* MESSAGES
//*
//DRLMSG DD SYSOUT=*,DCB=(RECFM=F,LRECL=80)
//*********************************************************
//* Add the next three DD statements if you collect IMS.
//* Note 1: IMSVER must specify the same release as the
//* collect statement used by the Log Data Manager.
//* Note 2: DRLICHKI must be DUMMY or point out an empty
//* data set after an IMS restart.
//*********************************************************


//*DRLICHKI DD DSN=Generation data set(0),DISP=SHR


//*DRLICHKO DD DSN=Generation data set(+1),DISP=(NEW,CATLG)
//*DRLIPARM DD *
//*IMSVER=71 -- IMS release being processed. 71 is default
//*MAXOUTPUT=50 -- Allow up to 50 outputs per transaction/BMP
//*MAXUOR=50 -- Allow up to 50 UOR's per BMP
//**********************
//* START EXEC DRLELDMC
//*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
%DRLELDMC SYSTEM=DSN SYSPREFIX=DRLSYS PREFIX=DRL -
DSPREFIX=DRL -
LOGTYPE=SMF -
LOGID=''
/*

Setting the DYNAMNBR value


The value for the EXEC parameter DYNAMNBR should be a minimum of the number of log data sets to be
collected, plus 2.
The supplied default is 20.

Setting the parameters for job DRLJLDMC


The rules for entering parameter values are as follows:
1. All parameters are optional.
2. Blanks must not exist before or after an equal sign (=).
3. Blanks must not exist within a parameter value.
4. A parameter value must not be enclosed in apostrophes.
5. A continuation mark (-) can be placed in any column.
These are the DRLJLDMC job parameters:
SYSPREFIX
The prefix of all product system and control Db2 tables. If you do not specify a value here, the default
DRLSYS is used.
SYSTEM
The Db2 subsystem. The default value is DSN.
PREFIX
The prefix used with all other tables. The default value is DRL.
PLAN
The name of the Db2 application plan. The default value is DRLPLAN.
DSPREFIX
The prefix used for the creation of data sets DRLOUT and DRLDUMP. The default is DRL. The
names of these data sets are 'dsprefix_value.Ddate.Ttime.DRLOUT/DRLDUMP' where date and time
are generated. The maximum length of DSPREFIX is 20 characters.
SHOWSQL
When this value is set to YES, all executed SQL statements are written to an output file. The default
value is NO.
SHOWINPUT
When this value is set to YES, all DRLIN statements are written to DRLOUT. The default value is YES.
LOGTYPE, LOGID
Each combination of LOGTYPE and LOGID identifies the log IDs to be used in the collect. If log type
is not specified, all log types are selected for processing. If log ID is not specified, all log IDs for the
log type specified are selected for processing. The default log ID is selected by setting this value to
straight quotes (").


RETENTION
The retention period for DRLOUT, DRLDUMP and the log data manager information that is produced by
the collects. The default is 10 days.
PURGE
This parameter determines whether or not the information resulting from successful collects should
be purged when the date of the information is older than the retention period. The parameter can be
set to the value YES or NO. If PURGE is set to YES, all log data manager information about successfully
collected log data sets is deleted (for all log types and log IDs). The default value is PURGE=YES.
CLEANUP
This parameter is used when the DRLELDMC job has had an abend during a collect of concatenated
log data sets. If you run the DRLELDMC job with parameter CLEANUP set to YES, log data sets that
were successfully collected before the abend occurred are moved to the Log Data Sets Successfully
Collected list. The log data set that was being collected when the abend occurred is moved to the Log
Data Sets Collected With Failure list. The default value is CLEANUP=NO.
DRLOUT DD statement
If this file is specified, it is used by all collects started by this job. However, this file is not used by the
log data manager dialog.
DRLDUMP DD statement
If this file is specified, it is used by all collects started by this job. However, this file is not used by the
log data manager dialog.
DRLLOG DD statement
Must not be allocated.
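
After the collect job completes, the outcome for each log data set is recorded in the
DRLLDM_LOGDATASETS system table and can be checked with a query. A minimal sketch, assuming the
default DRLSYS prefix:

-- Collect results per log data set. COLLECT_RC is '0' or '4' for
-- success; '8' or higher, or an abend code, indicates a failure.
SELECT DATASET_NAME, LOG_ID, TIME_COLLECTED, COLLECT_RC
  FROM DRLSYS.DRLLDM_LOGDATASETS
  WHERE COLLECT_RC <> ' '
  ORDER BY TIME_COLLECTED DESC;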

Modifying the list of successfully collected log data sets

About this task


To list the log data sets that have been successfully collected:

Procedure
Select 3, Log data sets collected successfully, from the log data manager Main Selection window.
The Log Data Sets Collected Successfully window (Figure 134 on page 250) is displayed, one row for each
log data set that has been successfully collected by the Log Data Manager for this log type.
The list of data sets is sorted by the Time collected column.

DRLDLDMC Log Data Sets Collected Successfully for SMF

Select a data set. Then press Enter to view DRLOUT.

/ Log data set Time collected RC


_ SYS170.SMFLOGX.SLOG950120 2004-11-21.02.03.25 0
_ SYS170.SMFLOGB.SLOG950120 2004-11-21.01.33.25 0
_ SYS170.SMFLOGA.SLOG950120 2004-11-21.01.15.10 0
_ SYS170.SMFLOG.SLOG950120B 2004-11-21.01.01.20 0
_ SYS170.SMFLOG.SLOG950120A 2004-11-21.00.45.20 0
_ SYS170.SMFLOGA.SLOG950119 2004-11-20.23.15.10 0
_ SYS170.SMFLOG.SLOG950119B 2004-11-20.01.45.20 0
_ SYS170.SMFLOGB.SLOG950119 2004-11-20.01.13.25 0
_ SYS170.SMFLOGX.SLOG950119 2004-11-20.01.13.25 0
_ SYS170.SMFLOG.SLOG950119A 2004-11-20.00.45.20 0

Command ===> ___________________________________________________________


F1=Help F2=Split F3=Exit F5=DRLDUMP F6=Retent. F7=Bkwd
F8=Fwd F9=Swap F11=Delete F12=Cancel

Figure 134. Log Data Sets Collected Successfully window


Viewing the information about successfully collected log data sets

About this task


To view the log data manager information about a log data set (the default action):

Procedure
Select a log data set and press Enter.
The DRLOUT data set is displayed in ISPF Browse mode (if a DRLOUT statement was not included in the
collect job).

Viewing the dump data set

Procedure
Select the log data set and press F5.
The DRLDUMP data set is displayed using the ISPF Browse function, if a DRLDUMP DD statement was not
present in the collect job. DRLDUMP should be empty if the return code from the collect was 0.

Changing the retention period of information about a log data set

About this task


To change the retention period for the log data manager information about a log data
set:

Procedure
1. Select the log data set and press F6. The Retention Period for Collect Information window is displayed
(Figure 135 on page 251).
2. Type in the Retention period field the number of days you require, and press Enter.
Note: You are not changing the retention period for the collected log data here, but only the retention
period for the log data manager information about the log data set.

DRLDLDMR Retention period for collect information

Type Retention period. Then press Enter to save.

Data set DRL310.SMFLOGA.SLOG950122


Retention period 10 days

F1=Help F2=Split F9=Swap F12=Cancel

Figure 135. Retention Period window

Deleting the information about a log data set

About this task


To delete the log data manager information about a log data set together with DRLOUT and DRLDUMP
data sets (if they exist):

Procedure
1. Select the log data set whose log data manager information you want to delete, and press
F11.


2. Press Enter to confirm deletion.


Note: You are not deleting the log data set itself, but only the log data manager information about the
log data set.

Modifying the list of unsuccessfully collected log data sets

About this task


To list the log data sets that have been unsuccessfully collected:

Procedure
Select 4, Log Data Sets Collected with Failure, from the log data manager Main Selection window.
The Log Data Sets Collected with Failure window (Figure 136 on page 252) is displayed, one row for each
log data set that has been unsuccessfully collected by the Log Data Manager for this log type.
The list of data sets is sorted by the Time collected column.

DRLDLDMF Log Data Sets Collected with Failure for SMF

Select a data set. Then press Enter to view DRLOUT.

/ Log data set Time collected RC


_ SYS170.SMFLOG1.SLOG01 2004-11-20.23.22.10 8
_ SYS170.SMFLOG.SYS1.SLOG0 2004-11-18.10.16.22 20

Command ===> ___________________________________________________________


F1=Help F2=Split F3=Exit F4=Rerun F5=DRLDUMP F7=Bkwd
F8=Fwd F9=Swap F11=Delete F12=Cancel

Figure 136. Log Data Sets Collected with Failure window

Viewing the unsuccessfully collected log data set

About this task


To view the log data set (the default action):

Procedure
1. Select the log data set and press Enter.
2. The DRLOUT data set is displayed in ISPF Browse mode (if a DRLOUT statement was not included in
the collect job).

Viewing the dump data set

About this task


To view the dump data set (DRLDUMP):

Procedure
Select the log data set and press F5.
The DRLDUMP data set is displayed using the ISPF Browse function, if a DRLDUMP DD statement was not
present in the collect job. DRLDUMP is empty in most cases if the return code from the collect was 0.


Recording a log data set to be collected again

About this task


If you record a log data set for collection again, it is included in the next collect job (described in “The
DRLJLDMC collect job and the parameters it uses” on page 247).
However, the entry you select to be collected again is not deleted from the Log Data Sets Collected with
Failure window.
If you select a log data set to be collected a second time (using the F4=Rerun option) after it has already
been successfully collected, the log collector detects this incorrect selection and the collect attempt is
rejected. However, if you have specified REPROCESS=YES in the collect job to recollect a successfully
collected log data set, the log collector does not reject the collect.
To record a log data set to be collected again:

Procedure
1. Select the log data set.
2. Press F4.
An error message is displayed if this log data set is already included in the list of data sets to be
collected.

Deleting the information about a log data set

About this task


To delete the information about a log data set from the list shown, together with DRLOUT and DRLDUMP
data sets (if they exist):

Procedure
1. Select the log data set you want to delete, and press F11.
2. Press Enter to confirm deletion.

Working with the Continuous Collector


This topic shows the commands to stop or modify the operation of the Continuous Collector.

Stopping the Continuous Collector


To stop the Continuous Collector, enter the following command:

Command                                  Explanation

STOP jobname                             Stop the Continuous Collector.
P jobname

Modify commands
To modify the Continuous Collector during operation, choose from the following commands:


MODIFY jobname,INTERVAL MESSAGE ON|OFF
F jobname,INTERVAL MESSAGE ON|OFF
F jobname,IM ON|OFF
   Turn the commit heart beat message DRL0383I ON or OFF. The message is ON
   by default when the COMMIT phrase FULL STATISTICS AFTER is used.

MODIFY jobname,COMMIT TIME ON|OFF
F jobname,COMMIT TIME ON|OFF
F jobname,CT ON|OFF
   Enable or disable messages at the start and end of database updates and
   database commits. This function is OFF initially but may be turned on to
   give an indication of how long the Db2 update and commit are taking. It is
   not advisable to leave this turned on for a long period of time as
   performance may be impacted and additional output is written to the
   DRLOUT data set.

MODIFY jobname,FULL STATISTICS AFTER count
F jobname,FULL STATISTICS AFTER count
F jobname,FSA count
   FSA count (FULL STATISTICS AFTER count) alters the interval at which the
   collect statistics are displayed. This is initially set by the COLLECT
   syntax FULL STATISTICS AFTER integer COMMITS. This may be used to set an
   interval for the collect statistics for the first time when FULL
   STATISTICS AFTER integer COMMITS is not specified on the COLLECT
   statement.

MODIFY jobname,PAUSE
F jobname,PAUSE
   Pause the Continuous Collector until a RESUME command is received. While
   in the paused state, all MODIFY or STOP commands are still accepted, but
   no data is read from the input log stream. You can use the pause process
   to avoid contention with other activities in the IBM Z Performance and
   Capacity Analytics Db2 subsystem.

MODIFY jobname,RESUME
F jobname,RESUME
   Resume the Continuous Collector. Any parameters changed while the
   collector was paused come into effect. When a RESUME command is received,
   processing restarts from where it paused.

MODIFY jobname,COMMIT AFTER BUFFER FULL
F jobname,COMMIT AFTER BUFFER FULL
F jobname,CA BUFFER FULL
   Changes when the Continuous Collector issues a COMMIT to make the database
   updates permanent. After this Modify statement a COMMIT will occur only
   when the collect buffer is filled.

MODIFY jobname,COMMIT AFTER count RECORDS | MINUTES | SECONDS
F jobname,COMMIT AFTER count RECORDS | MINUTES | SECONDS
F jobname,CA count R | M | S
   Changes when the Continuous Collector issues a COMMIT to make the database
   updates permanent. After this Modify statement a COMMIT will occur:
   • After processing the number of records
   • After reaching the time value (minutes or seconds) specified by the
     integer-constant.
   Note: The new COMMIT condition replaces the COMMIT condition that is
   active at the time of the Modify; it does not add an additional COMMIT
   cycle.

MODIFY jobname,REFRESH AT hhmm
F jobname,REFRESH AT hhmm
   Changes the time at which the refresh of the internal definitions (DEFS)
   and lookup tables for the Continuous Collector will occur. The time will
   replace the previous time set by either the COLLECT syntax or an earlier
   modify command and turn the refresh function back on.

MODIFY jobname,REFRESH OFF
F jobname,REFRESH OFF
   Turn off the refresh function. This will prevent a refresh occurring.

MODIFY jobname,REFRESH ON
F jobname,REFRESH ON
   Turn on the refresh function. This will re-establish the REFRESH time that
   was active at the time of the REFRESH OFF. If a REFRESH had never been set
   then a default time of 2 a.m. ('0200') will be set.

MODIFY jobname,REFRESH NOW
F jobname,REFRESH NOW
   Refresh the internal definitions (DEFS) and lookup tables for the
   Continuous Collector immediately. This command does not alter the hhmm set
   by the COLLECT statement or a previous modify command.

MODIFY jobname,LOGSTREAM FREE NOW
F jobname,LOGSTREAM FREE NOW
   This command will cause the Continuous Collector to close and re-open the
   log stream immediately. This enables log stream overflow data sets that
   have been marked as freeable to be freed. This command does not change the
   frequency or the ON or OFF status of the hourly log stream free set by the
   LOGSTREAM FREE ON or LOGSTREAM FREE n commands.

MODIFY jobname,LOGSTREAM FREE OFF
F jobname,LOGSTREAM FREE OFF
   This command turns off the regular close and re-open of the log stream
   function started by the LOGSTREAM FREE ON or LOGSTREAM FREE n commands.

MODIFY jobname,LOGSTREAM FREE ON
F jobname,LOGSTREAM FREE ON
   If the log stream close and re-open function is off by default or has been
   turned off by the MODIFY jobname,LOGSTREAM FREE OFF command, it may be
   turned on again using this command. When turned back on, the close and
   re-open will happen immediately and then repeat at the interval that was
   last set, the default being one hour.

MODIFY jobname,LOGSTREAM FREE n
F jobname,LOGSTREAM FREE n
   Start the log stream close and re-open function at the frequency required.
   The options are 1, 2, 3, 4, 5, or 6 hours. After this command, the log
   stream close and re-open will happen immediately and then repeat at the
   selected interval. This command may be repeated with a different interval
   without an intervening LOGSTREAM FREE OFF command.

When the Continuous Collector is running, log stream overflow data sets that have been marked as
freeable may not be correctly freed until the log stream is closed.


If required, the log stream being read by the Continuous Collector may be closed and re-opened using the
MODIFY LOGSTREAM command.
The MODIFY jobname,LOGSTREAM FREE command has several options; the function may be performed
ad hoc with the NOW option or started to run regularly at a selected interval.
The COLLECT parameter REFRESH AT hhmm sets a time for when the Continuous Collector will refresh
the internal definitions and lookup tables from the IBM Z Performance and Capacity Analytics Db2
system tables. If you are installing a new component to your IBM Z Performance and Capacity Analytics
system you must ensure that a REFRESH does not occur while the component install is still in progress.
The safest way to do this is to set REFRESH OFF and turn it back on once the component install has
completed.
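
For example, to protect a component installation against an untimely refresh, the sequence of operator
commands might look like this; the job name IZPCC is illustrative:

F IZPCC,REFRESH OFF        (suspend the scheduled refresh)
   ...install the component...
F IZPCC,REFRESH ON         (re-establish the previous refresh time)
F IZPCC,REFRESH NOW        (load the new definitions immediately)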


Chapter 7. Administration reference

System tables and views


This section describes system tables and views. These tables are used by the product log collector
and dialogs. They are created during installation of the product base, with the prefix for system tables
specified in userid.DRLFPROF. The default prefix for the tables is DRLSYS.
System tables do not appear in the tables list in the administration dialog.
Each table description includes information about the table, a description of each key column and data
column in the table, and an example of the table's contents.
Key columns are marked with a "K".
Data columns are listed after the last key column.
The tables appear in alphabetical order, with any underscores ignored.

Log collector system tables


These tables contain definitions used by the log collector. They are maintained by the log collector. Do not
modify them.

DRLEXPRESSIONS
This system table contains one row for each expression or condition in a log, record, record procedure, or
update definition.

Column name Data type Description


OBJECT_TYPE K CHAR(8) Object type. This is LOG, RECORD, RECPROC, or UPDATE.
OBJECT_NAME K VARCHAR(18) Name of the object.
EXPRESSION_NO K SMALLINT Expression sequence number within the object.
EXPRESSION VARCHAR(2000) Original expression text.
PARSED_EXPRESSION VARCHAR(2000) Parsed version of the expression.

DRLFIELDS
This system table contains one row for every field in each defined record type.

Column name Data type Description


RECORD_NAME K VARCHAR(18) Name of the record. For a log header, this is *log-name*.
FIELD_NO K SMALLINT Field sequence number within the record.
FIELD_NAME VARCHAR(18) Name of the field.


Column name Data type Description


TYPE CHAR(8) Type of the field. The following values are possible:

Type Field Format


BINARY BINARY
BINARYS BINARY SIGNED
EINTEGER EXTERNAL INTEGER
HEXIN EXTERNAL HEX
DECIMAL DECIMAL(p,s)
ZONED ZONED(p,s)
FLOAT FLOAT
CHAR CHAR or CHAR(n)
CHAR(*) CHAR(*) or LENGTH * CHAR
VARCHAR VARCHAR
BIT BIT or BIT(n)
HEX HEX
DATE_001 DATE(0CYYDDDF)
DATE_002 DATE(YYYYDDDF)
DATE_003 DATE(MMDDYY)
DATE_004 DATE(YYDDDF)
DATE_005 DATE(CYYMMDDF)
DATE_006 DATE(YYMMDD)
DATE_007 DATE(MMDDYYYY)
TIME_001 TIME(1/100S)
TIME_002 TIME(HHMMSSTF)
TIME_003 TIME(0HHMMSSF)
TIME_004 TIME(HHMMSSTH)
TIME_005 TIME(HHMMSSXF)
TIME_006 TIME(HHMMSS)
TIME_007 TIME(HHMMSSU6)
INTV_001 INTV(MMSSTTTF)
TSTAMP_001 TIMESTAMP(TOD)

LENGTH SMALLINT Length of the field. For DECIMAL and ZONED fields, this is a
1-byte precision followed by a 1-byte scale.
OFFSET SMALLINT Offset of the field in the record or section.
INSECTION_NO SMALLINT Number of the section where the field is contained. This is
zero if the field is not in a section.
REMARKS VARCHAR(254) Description of the field, set by the COMMENT ON statement.

DRLLDM_COLLECTSTMT
This system table contains one row for each combination of log type and log ID that is defined to the Log
Data Manager. Each row identifies the collect statement that is used for the log type/log ID combination.

Column name Data type Description


LOG_NAME K VARCHAR(18) Name of the log type.
LOG_ID K CHAR(8) The log ID.
COLLECT_STMT_DS VARCHAR(54) Name of the data set that contains the collect statement,
including the member name (for a PDS member).

DRLLDM_LOGDATASETS
This system table contains one or more rows for each log data set recorded by the Log Data Manager.


Column name Data type Description


DATASET_NAME K VARCHAR(54) Name of the log data set, including the member name (for a
PDS member).
LOG_NAME K VARCHAR(18) Name of the log type.
TIME_COLLECTED K TIMESTAMP Timestamp of the collect. For a data set not yet collected
it is 0001-01-01-00.00.00.000000. For a successfully
collected data set it is set to the value of the
TIME_COLLECTED field in the corresponding entry in
DRLLOGDATASETS. For an unsuccessfully collected data set,
or a successfully collected data set in which no record was
recognized, it is set to the timestamp when DRLELDMC called
the log collector.
LOG_ID CHAR(8) The log ID currently associated with this data set.
TIME_ADDED TIMESTAMP Timestamp when the log data set was first recorded.
TIME_COLLECT_CALL TIMESTAMP Timestamp when the DRLELDMC exec called the log
collector to process the log data set.
COLLECT_RC CHAR(5) The return code from the collect. It is blank if not yet
collected; '0' or '4' if successfully collected; >= '8' if
unsuccessfully collected without abend; 'Unn' if the collect
ended with a user abend; 'Snn' if the collect ended with a
system abend.
OUTPUT_DS VARCHAR(35) The high level qualifiers used when DRLOUT
and/or DRLDUMP data sets were created.
'OUTPUT_DS_value.DRLOUT' is the data set name of the
DRLOUT file. This value is blank if no DRLOUT or DRLDUMP
data set has been created.
RETENTION SMALLINT Retention period in days. Null field if not yet collected.
RETENTION_DATE INTEGER Collect date expressed as number of days from January 1,
Year 1. This field is used for purge calculations. Null field if
not yet collected.
COMPLETE CHAR(1) Flag indicating the status of the log data set. It is blank if the
data set is ready to be collected; 'S' if the collect is running;
'Y' if successfully collected; 'F' if collected with failure.

DRLLOGDATASETS
This system table contains one row for each collected log data set.

Column name Data type Description


LOG_NAME K VARCHAR(18) Name of the log definition.
FIRST_RECORD K VARCHAR(80) First 80 bytes of the first identified record in the data set.
This is used to identify the data set and make sure that it
is not collected again. If the record is a user defined one,
avoid beginning the record with data needed to distinguish
two records. For more information, refer to Language Guide
and Reference.
DATASET_NAME K VARCHAR(54) Name of the data set, including the member name (for a PDS
member).
VOLUME CHAR(6) Volume serial number for the data set.
LOG_SOURCE CHAR(16) Reserved.
FIRST_TIMESTAMP TIMESTAMP Timestamp of the first record in the log. This is only set if
TIMESTAMP expression is specified for the log.

LAST_TIMESTAMP TIMESTAMP Timestamp of the last record in the log. This is only set if
TIMESTAMP expression is specified for the log.
NCOLLECTS SMALLINT Number of times the data set has been collected. If this is
greater than 1, it means that collect has been run with the
REPROCESS operand to collect the data set again.
TIME_COLLECTED TIMESTAMP Date and time when collect ended.
USER_ID CHAR(8) ID of the user running collect.
COMPLETE CHAR(1) Shows whether the data set has been completely processed.
This is Y (the data set has been completely processed) or N
(the data set has only been partly processed).
RETURN_CODE SMALLINT Return code from collect; 0 or 4.
NRECORDS INTEGER Number of records read from the log data set.
NSELECTED INTEGER Number of records identified.
NUPDATES INTEGER Number of database rows updated when the data set was
collected.
NINSERTS INTEGER Number of database rows inserted when the data set was
collected.
NDELETES INTEGER Number of database rows deleted when the data set was
collected.
ELAPSED_SECONDS INTEGER Collect elapsed time, in seconds. The actual elapsed time is
slightly longer, because some activity occurs after this table
has been updated.
NSKIPPED INTEGER Number of records skipped due to timestamp overlap
(applies when ON TIMESTAMP OVERLAP SKIP specified).
LOGSTREAM_BLK_ID CHAR(8) The block ID number of the log stream record that was read
prior to the last SQL COMMIT performed by the Continuous
Collector. On a restart, the Continuous Collector will attempt
to continue reading the log stream at the saved block ID
number plus one.
LOGSTREAM_TS CHAR(8) The timestamp of the log stream record that was read
prior to the last SQL COMMIT performed by the Continuous
Collector.
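
For example, a query of the following general form lists the most recently collected data sets for
one log type (assuming the default DRLSYS prefix; the log name SMF is illustrative):

  SELECT DATASET_NAME, TIME_COLLECTED, RETURN_CODE, NRECORDS
  FROM DRLSYS.DRLLOGDATASETS
  WHERE LOG_NAME = 'SMF'
  ORDER BY TIME_COLLECTED DESC;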

DRLLOGS
This system table contains one row for each defined log type.

Column name Data type Description


LOG_NAME K VARCHAR(18) Name of the log.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
HEADER CHAR(1) Shows whether a header is defined for the log. This is Y (a
header is defined) or N (no header is defined). If there is a
header, it is contained in the DRLRECORDS and DRLFIELDS
tables.
REC_FMT CHAR(1)

LENGTH_EXPR_NO SMALLINT Number of the LENGTH expression in the DRLEXPRESSIONS
table. This is zero if no LENGTH expression is specified.
TIMESTAMP_EXPR_NO SMALLINT Number of the TIMESTAMP expression in the
DRLEXPRESSIONS table. This is zero if no TIMESTAMP
expression is specified.
LOG_SOURCE_EXPR_NO SMALLINT Number of the LOG SOURCE expression in the
DRLEXPRESSIONS table. This is zero if no LOG SOURCE
expression is specified.
FIRST_CONDITION_NO SMALLINT Number of the FIRST RECORD condition in the
DRLEXPRESSIONS table. This is zero if no FIRST RECORD
condition is specified.
LAST_CONDITION_NO SMALLINT Number of the LAST RECORD condition in the
DRLEXPRESSIONS table. This is zero if no LAST RECORD
condition is specified.
LOGPROC CHAR(8) Name of the log procedure to use for the log. This is blank if
no log procedure is specified.
LOGPROC_LANGUAGE CHAR(8) Programming language that the log procedure is written in.
This is ASM or C.
LOGPROC_PARM_NO SMALLINT Number of the log procedure PARM expression in the
DRLEXPRESSIONS table. This is zero if no PARM expression
is specified.
TIME_DEFINED TIMESTAMP Date and time when the log was defined.
CREATOR CHAR(8) ID of the user who defined the log.
REMARKS VARCHAR(254) Description of the log, set by the COMMENT ON statement.

DRLPURGECOND
This system table contains one row for each purge condition in defined data tables.

Column name Data type Description


TABLE_PREFIX K CHAR(8) Prefix of the table.
TABLE_NAME K VARCHAR(18) Name of the table.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
SQL_CONDITION VARCHAR(254) An SQL condition that defines rows to be deleted from the
database when the PURGE statement is executed.
TIME_DEFINED TIMESTAMP Date and time when the purge condition was defined.
CREATOR CHAR(8) ID of the user who defined the purge condition.
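
The stored condition is typically a date predicate, for example of the general form
DATE < CURRENT DATE - 90 DAYS. A query such as the following (assuming the default DRLSYS prefix)
shows the purge conditions defined for a set of tables:

  SELECT TABLE_PREFIX, TABLE_NAME, SQL_CONDITION
  FROM DRLSYS.DRLPURGECOND
  WHERE TABLE_NAME LIKE 'AVAILABILITY%';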

DRLRECORDPROCS
This system table contains one row for each defined record procedure.

Column name Data type Description


PROGRAM_NAME K CHAR(8) Name of the record procedure (name of the load module
that gets invoked).

VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
LANGUAGE CHAR(8) Programming language that the record procedure is written
in. This is ASM or C.
PARAMETER_EXPR_NO SMALLINT Number of the PARM expression in the DRLEXPRESSIONS
table. This is zero if no PARM expression is specified.
TIME_DEFINED TIMESTAMP Date and time when the record procedure was defined.
CREATOR CHAR(8) ID of the user who defined the record procedure.
REMARKS VARCHAR(254) Description of the record procedure, set by the COMMENT
ON statement.

DRLRECORDS
This system table contains one row for each defined record type and one row for each defined header in
log definitions.

Column name Data type Description


RECORD_NAME K VARCHAR(18) Name of the record. For a log header, this is *log-name*.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
LOG_NAME VARCHAR(18) Name of the log that contains the record.
BUILT_BY CHAR(8) Name of the record procedure that builds the record, if any.
NFIELDS SMALLINT Number of fields in the record.
NSECTIONS SMALLINT Number of sections in the record.
CONDITION_NO SMALLINT Number of the IDENTIFIED BY condition in the
DRLEXPRESSIONS table. This is zero if no IDENTIFIED BY
condition is specified.
TIME_DEFINED TIMESTAMP Date and time when the record was defined.
CREATOR CHAR(8) ID of the user who defined the record.
REMARKS VARCHAR(254) Description of the record, set by the COMMENT ON
statement.

DRLRPROCINPUT
This system table contains one row for every defined record type that must be processed by a record
procedure.

Column name Data type Description


PROGRAM_NAME K CHAR(8) Name of the record procedure.
RECORD_NAME K VARCHAR(18) Name of the record that is input to the record procedure.


DRLSECTIONS
This system table contains one row for every defined section in defined record types.

Column name Data type Description


RECORD_NAME K VARCHAR(18) Name of the record.
SECTION_NO K SMALLINT Section sequence number within the record.
SECTION_NAME VARCHAR(18) Name of the section.
CONDITION_NO SMALLINT Number of the PRESENT IF condition in the
DRLEXPRESSIONS table. This is zero if no PRESENT IF
condition is specified.
OFFSET_EXPR_NO SMALLINT Number of the OFFSET expression in the DRLEXPRESSIONS
table. This is zero if no OFFSET expression is specified.
LENGTH_EXPR_NO SMALLINT Number of the LENGTH expression in the DRLEXPRESSIONS
table. This is zero if no LENGTH expression is specified.
NUMBER_EXPR_NO SMALLINT Number of the NUMBER expression in the
DRLEXPRESSIONS table. This is zero if no NUMBER
expression is specified.
INSECTION_NO SMALLINT Number of the section that this section is contained in. This
is zero if the section is not contained in another section.
REPEATED CHAR(1) Shows whether the section is repeated. This is Y (the section
is repeated) or N (the section is not repeated).

DRLUPDATECOLS
This system table contains one row for every column in each update definition, including GROUP BY, SET,
and MERGE columns.

Column name Data type Description


UPDATE_NAME K VARCHAR(18) Name of the update definition.
UPDATECOL_NO K SMALLINT Sequence number of the column in the update definition.
COLUMN_NAME VARCHAR(18) Name of the column.
COLUMN_NO SMALLINT Number of the column in the table definition.
FUNCTION CHAR(8) This is blank for GROUP BY columns; SUM, MAX, MIN,
COUNT, FIRST, LAST, AVG, or PERCENT for SET columns; or
INTTYPE, START, END, or QUIET for MERGE columns.
EXPRESSION_NO SMALLINT Number of the expression in the DRLEXPRESSIONS table.
COUNT_COLUMN VARCHAR(18) If the function is AVG or PERCENT, this contains the name of
the column that contains the count of values.
PERCENTILE SMALLINT If the function is PERCENT, this contains the percentile value
(1 - 99).

DRLUPDATEDISTR
This system table contains one row for every distributed field or column in each update definition.

Column name Data type Description


UPDATE_NAME K VARCHAR(18) Name of the update definition.
DISTR_NO K SMALLINT Field or column sequence number in the DISTRIBUTE
clause.

FIELD_NAME VARCHAR(18) Name of the field or column to be distributed.

DRLUPDATELETS
This system table contains one row for every identifier in the LET clause of each update definition. (The
identifiers are defined as abbreviations in the administration dialog.)

Column name Data type Description


UPDATE_NAME K VARCHAR(18) Name of the update definition.
LET_NO K SMALLINT Sequence number of the identifier in the LET clause.
LET_NAME VARCHAR(18) Name of the identifier.
EXPRESSION_NO SMALLINT Number of the expression in the DRLEXPRESSIONS table.

DRLUPDATES
This system table contains one row for each update definition.

Column name Data type Description


UPDATE_NAME K VARCHAR(18) Name of the update definition.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
SOURCE_PREFIX CHAR(8) Prefix of the source table. This is blank if the source is a
record.
SOURCE_NAME VARCHAR(18) Name of the source. This is a record name or a table name.
TARGET_PREFIX CHAR(8) Prefix of the target table.
TARGET_NAME VARCHAR(18) Name of the target table.
SECTION_NAME VARCHAR(18) Name of the repeated section, if any, that is used in the
update definition.
CONDITION_NO SMALLINT Number of the WHERE condition in the DRLEXPRESSIONS
table. This is zero if no WHERE condition is specified.
NLETS SMALLINT Number of identifiers specified in the LET clause.
NUPDATECOLS SMALLINT Number of columns in the GROUP BY, SET, and MERGE
clauses.
SCHEDULE_EXPR_NO SMALLINT Number of the APPLY SCHEDULE expression in the
DRLEXPRESSIONS table. This is zero if APPLY SCHEDULE is
not specified.
SCHEDULE_CALEXP_NO SMALLINT
SCHEDULE_INTTYPE VARCHAR(18) Name of the source column or field that defines the interval
type.
SCHEDULE_START VARCHAR(18) Name of the source column or field that defines the interval
start timestamp.
SCHEDULE_END VARCHAR(18) Name of the source column or field that defines the interval
end time stamp.
SCHEDULE_STATUS VARCHAR(18) Name of the identifier that contains the schedule status.

NDISTR_FIELDS SMALLINT Number of fields or columns that are distributed.
DISTR_BY_EXPR_NO SMALLINT Number of the DISTRIBUTE BY expression in the
DRLEXPRESSIONS table. This is zero if DISTRIBUTE is not
specified.
DISTR_FROM_EXPR_NO SMALLINT Number of the DISTRIBUTE FROM expression in the
DRLEXPRESSIONS table. This is zero if DISTRIBUTE is not
specified.
DISTR_TO_EXPR_NO SMALLINT Number of the DISTRIBUTE TO expression in the
DRLEXPRESSIONS table. This is zero if DISTRIBUTE is not
specified.
DISTR_TIMESTAMP VARCHAR(18) Name of the identifier that contains the distribution interval
start timestamp.
DISTR_INTERVAL VARCHAR(18) Name of the identifier that contains the distribution interval
length.
TIME_DEFINED TIMESTAMP Date and time when the update was defined.
CREATOR CHAR(8) ID of the user who defined the update.
REMARKS VARCHAR(254) Description of the update definition, set by the COMMENT
ON statement.

Dialog system tables


These tables contain definitions used by IBM Z Performance and Capacity Analytics dialogs and utilities.
Do not modify them.

DRLCHARTS
This system table stores information extracted from the host graphical report formats (ADMCFORM data).
Data is inserted into this table at installation time by the host DRLIRD2 member. If GDDM version 3 or
later is installed and available, DRLCHARTS is also updated by the host exec DRLECHRT when a report is
saved in the host ISPF dialog.

Column name Data type Description


CHART_NAME K CHAR(8) ADMCFORM name. This is the same as the CHART column in
the DRLREPORTS table.

TYPE SMALLINT This column shows a number identifying the chart type:
1
Line chart
2
Surface chart
3
Histogram
41, 42, 43
Bar chart. The 4 indicates that this is a bar chart; 1,
2, or 3 indicates whether the bars are side by side (1),
stacked (2), or overlaid (3).
5
Pie chart
6
Venn diagram
7
Polar chart
8
Tower diagram
9
Table. This is not used.
10
Combination chart.

VALUES SMALLINT This column contains one of the values 0, 1, 2, or 3. The column is valid
only for chart types 4 (bar) and 5 (pie). For bar charts, the values are:
0
No values are shown
1
Values are shown at the top/end of the bar
2
Values are shown inside the bars
3
Values are shown as in GDDM version 1 release 3
For pie charts, the values are:
1
Values are shown
2
No values are shown

AXIS_ORIENT SMALLINT Axis orientation. This can be 1 or 2. 1 means vertical y-axis
and bars. 2 means horizontal y-axis and bars.

Y_DATA_TYPE VARCHAR(50) If the chart type is 10 (combination), this column shows the
chart type for each data group:
1
Line chart
2
Surface chart
3
Histogram
41, 42, 43
Bar chart
For example, 1, 42, 42, 42, 42 identifies a combination
chart with a line chart and stacked bars. For a bar chart, the
number is concatenated to indicate bar position as in TYPE
above.
X_AXIS_TITLE VARCHAR(52) This is a string containing the x-axis title.
Y_AXIS_TITLE VARCHAR(52) This is a string containing the y-axis title.

DRLCOMPONENTS
This system table contains one row for each IBM Z Performance and Capacity Analytics component.

Column name Data type Description


COMPONENT_NAME K VARCHAR(18) Name of the component.
DESCRIPTION VARCHAR(50) Description of the component that is shown in the dialog.
STATUS CHAR(1) Component status. This is blank if the component is not
installed, I if the component is installed online, or B if the
component is installed in batch.
TIME_INSTALLED TIMESTAMP Date and time when the component was installed or defined.
USER_ID CHAR(8) ID of the user who installed or defined the component.

DRLCOMP_OBJECTS
This system table contains one row for every object in each component.

Column name Data type Description


COMPONENT_NAME K VARCHAR(18) Name of the component.
OBJECT_NAME K VARCHAR(18) Name of the object.
OBJECT_TYPE K CHAR(8) Type of object. This is LOG, RECORD, RECPROC, TABSPACE,
LOOKUP, TABLE, UPDATE, REPORT, or REPGROUP.
MEMBER_NAME CHAR(8) Name of the member in the SDRLDEFS or SDRLRxxx library
where the object is defined.
PART_NAME VARCHAR(18) Name of the component part that the object belongs to, if
any.
EXCLUDE_FLAG CHAR(1) Flag to determine if this object is excluded from installation
of the component.

DRLCOMP_PARTS
This system table contains one row for every part in each component.


Column name Data type Description


COMPONENT_NAME K VARCHAR(18) Name of the component.
PART_NAME K VARCHAR(18) Name of the component part.
DESCRIPTION VARCHAR(50) Description of the component part that is shown in the
dialog.
STATUS CHAR(1) Component part status. This is blank if the component part
is not installed, I if the component part is installed online, or
B if the component is installed in batch.
TIME_INSTALLED TIMESTAMP Date and time when the component part was installed or
defined.
USER_ID CHAR(8) ID of the user who installed or defined the component part.

DRLGROUPS
This system table contains one row for each defined report group.

Column name Data type Description


GROUP_NAME K VARCHAR(18) Group ID.
GROUP_OWNER K CHAR(8) Owner of the group. This is blank for a public group.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.
DESCRIPTION VARCHAR(50) Description of the group that is shown in the dialog.
TIME_CREATED TIMESTAMP Date and time when the group was defined.
CREATOR CHAR(8) ID of the user who defined the group.

DRLGROUP_REPORTS
This system table contains one row for every report in each defined report group.

Column name Data type Description


GROUP_NAME K VARCHAR(18) Group ID.
GROUP_OWNER K CHAR(8) Owner of the group.
REPORT_NAME K VARCHAR(18) ID of the report that belongs to the group.
REPORT_OWNER K CHAR(8) Owner of the report that belongs to the group.

DRLREPORTS
This system table contains one row for each defined report.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.
VERSION VARCHAR(18) Version level. The value of VERSION is set for an object
when the object is defined and is taken from the value
of keyword VERSION. For definitions supplied by IBM, the
value is IBM.nnn[.APAR_number], where nnn is the version,
release, modification level of the object.

DESCRIPTION VARCHAR(50) Description of the report that is shown in the dialog.
TYPE CHAR(8) Type of report. This is QUERY, TABDATA, or GRAPH.
BATCH CHAR(1) Y if the report should be produced in batch; N otherwise.
PRINT CHAR(1) Y if the report should be printed when produced in batch; N
otherwise.
SAVE CHAR(1) Y if the report should be saved when produced in batch; N
otherwise.
RUN_CYCLE CHAR(8) Batch run cycle for the report. This is DAILY, WEEKLY, or
MONTHLY.
QUERY_PREFIX CHAR(8) Prefix of the QMF query that should be run when the report
is produced.
QUERY VARCHAR(18) Name of the QMF query that should be run when the report
is produced.
FORM_PREFIX CHAR(8) Prefix of the QMF form that should be used when the report
is produced.
FORM VARCHAR(18) Name of the QMF form that should be used when the report
is produced.
CHART CHAR(8) Name of the GDDM-ICU format to be used for the report.
Blank means that the report is tabular.
FILE CHAR(8) Name of the member where the data is saved (if type is
TABDATA or GRAPH), or where the data should be saved
when the report is produced in batch (if save is Y).
MACRO CHAR(8) Not used.
TABLE_NAME VARCHAR(254) Name of the table or tables on which the report is based.
This is extracted from the query when the report is defined.
NVARIABLES SMALLINT Number of variables defined for the report or extracted from
the query.
NATTRIBUTES SMALLINT Number of attributes defined for the report.
TIME_CREATED TIMESTAMP Date and time when the report was defined.
CREATOR CHAR(8) ID of the user who defined the report.
REMARKS VARCHAR(254) Long free-format description of the report that can be
entered from the dialog.
FINAL_SUMMARY CHAR(3) This is valid when QMF is not used. If FINAL_SUMMARY is
set to YES, a row containing totals for all numeric columns is
generated at the end of the report.
ACROSS_SUMMARY CHAR(3) If ACROSS_SUMMARY is set to YES for a report of the Across
type, a summary column is created to the right in the report.
It contains one total value for each row. This is valid when
QMF is not used.

DRLREPORT_ATTR
This system table contains one row for every attribute in each defined report.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.

ATTRIBUTE_NO K SMALLINT Attribute sequence number.
ATTRIBUTE VARCHAR(18) Attribute value.

DRLREPORT_COLUMNS
This system table contains one row for every column in each defined report if QMF is not used. The
information is taken from the QMF form.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.
COLUMN_NO K SMALLINT Column number.
HEADING VARCHAR(40) Column heading.
USAGE CHAR(7) Usage code.
INDENT SMALLINT Column indentation.
WIDTH SMALLINT Column width.
EDIT CHAR(5) Edit code.
SEQ SMALLINT Column sequence number.
DEFINITION VARCHAR(50) The DEFINITION column can define an additional report
column, which is not present in the SQL query. The definition
must be a valid REXX expression, and may contain numeric
constants and variables of the &n type, where n is an existing
column number. The DEFINITION column is intended only
for existing IBM Z Performance and Capacity Analytics
reports and is not used for user-defined reports.

DRLREPORT_QUERIES
This system table contains one row for every query line in each defined report, if QMF is not used.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.
LINE_NO K SMALLINT Line number in the query.
QUERY_LINE VARCHAR(80) Query text.

DRLREPORT_TEXT
This system table is used for host reports when QMF is not used. It contains one row for every heading
and footing row. It also contains one row if there is a final summary line with a final text, and one row if
there is an expression that limits the number of output rows in the report.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.
TYPE K CHAR(8) Text type. This is HEADING, FOOTING, DETAIL, FINAL or
ROWS.
LINE_NO K SMALLINT Line number for HEADING and FOOTING.

ALIGNMENT CHAR(6) Shows how the text should be aligned; left, center, or right.
TEXT VARCHAR(55) Text for one line of a report text (see TYPE above).

DRLREPORT_VARS
This system table contains one row for every variable in each defined report. The variables may be
specified in the DEFINE REPORT statement or extracted from the query.

Column name Data type Description


REPORT_NAME K VARCHAR(18) Report ID.
REPORT_OWNER K CHAR(8) Owner of the report. This is blank for a public report.
VARIABLE_NO K SMALLINT Sequence number of the variable.
VARIABLE_NAME VARCHAR(18) Name of the variable.
EXPRESSION VARCHAR(80) Expression in the query that is compared with the variable,
if the variable is found in the query. This is used, with
TABLE_NAME in the DRLREPORTS table, to find a list of
possible values for the variable.
OPERATOR CHAR(4) Operator that is used when comparing the variable and the
expression, if the variable is found in the query. This is =, <=,
>=, IN, or LIKE.
DATA_TYPE CHAR(8) Data type of the variable, if specified. This is CHAR,
NUMERIC, DATE, TIME, or TIMESTAMP.
REQUIRED CHAR(1) Shows whether the variable must be given a value. This is Y
for yes, or N or blank for no.
DEFAULT VARCHAR(40) Default value to use for the variable, if specified.
IN_HEADER CHAR(1) Variable to determine if the IBM Z Performance and Capacity
Analytics variable is used in the header. This is Y for yes, or N
for no.
IN_HEADER_VALUE VARCHAR(35) Default header value for a non-required variable without a
substitution value.

DRLSEARCH_ATTR
This system table contains one row for every attribute in each saved report search.

Column name Data type Description


SEARCH_NAME K VARCHAR(18) Name of the saved search.
SEARCH_OWNER K CHAR(8) Owner of the saved search. This is blank for a public search.
ATTR_SET_NO K SMALLINT Attribute set sequence number. The attribute sets are
logically ORed together.
ATTRIBUTE_NO K SMALLINT Attribute sequence number within the attribute set. The
attributes within a set are logically ANDed together.
ATTRIBUTE VARCHAR(18) Attribute value. This can contain global search characters.

DRLSEARCHES
This system table contains one row for each saved report search.


Column name Data type Description


SEARCH_NAME K VARCHAR(18) Name of the saved search.
SEARCH_OWNER K CHAR(8) Owner of the saved search. This is blank for a public search.
DESCRIPTION VARCHAR(50) Description of the search that is shown in the dialog.
NATTR_SETS SMALLINT Number of attribute sets used in the search.
REPORT_DESC VARCHAR(50) Report description used in the search. This can contain
global search characters.
REPORT_TYPE CHAR(8) Report type specified in the search.
REPORT_OWNER CHAR(8) Report owner specified in the search.
TIME_CREATED TIMESTAMP Date and time when the search was saved.
CREATOR CHAR(8) ID of the user who saved the search.

GENERATE_PROFILES
This system table contains one row for each GENERATE statement profile. It is used when
installing components that use the GENERATE statement to create table spaces, partitioning, and
indexes.

Column name Data type Description


PROFILE K VARCHAR(18) Profile name. This value is specified on the PROFILE
parameter of the GENERATE TABLESPACE and GENERATE
INDEX statements.
COMPONENT_ID K VARCHAR(18) IBM Z Performance and Capacity Analytics component ID or
%. Allows for a unique profile for an IBM Z Performance and
Capacity Analytics component.
SUBCOMPONENT_ID K VARCHAR(18) IBM Z Performance and Capacity Analytics subcomponent
ID or %. Allows for a unique profile for an IBM Z
Performance and Capacity Analytics subcomponent. An
explicit subcomponent is only valid for a profile where an
explicit value is specified for COMPONENT_ID.
TABLESPACE_NAME K VARCHAR(18) Table space name or %. An explicit value can be
specified even if explicit values are not also specified for
COMPONENT_ID and SUBCOMPONENT_ID.
TABLESPACE_TYPE VARCHAR(9) Table space type (RANGE, GROWTH, SEGMENTED). If
invalid, GROWTH will be used. For type RANGE, there must
be a set of definitions in the GENERATE_KEYS system table
with the same profile name.
MAXPARTS INTEGER Maximum partitions. Used with TABLESPACE_TYPE of
GROWTH or SEGMENTED.
NUMPARTS INTEGER Initial number of partitions. Used with TABLESPACE_TYPE of
GROWTH. (For range partitioning, NUMPARTS is calculated
from the number of entries in the GENERATE_KEYS table).
SEGSIZE INTEGER Segment size for all table space types.
TBSPACE1 VARCHAR(250) The first set of SQL options allowed for GENERATE
TABLESPACE. (TBSPACE1 contains parameters that are in
the select group prior to the parameter DSSIZE in the Db2
SQL Reference syntax diagram for the CREATE TABLESPACE
statement.)

TBSPACE2 VARCHAR(250) The second set of SQL options allowed for GENERATE
DEFAULT TABLESPACE. (TBSPACE2 contains parameters
that are in the select group prior to the parameter SEGSIZE
in the Db2 SQL Reference syntax diagram for the CREATE
TABLESPACE statement.)
INDEX1 VARCHAR(250) The first set of SQL options allowed for GENERATE INDEX
and GENERATE KEY. (INDEX1 contains parameters that are
in the select group following the parameter partition-
element in the Db2 SQL Reference syntax diagram for the
CREATE INDEX statement.)
INDEX2 VARCHAR(250) The second set of SQL options allowed for GENERATE INDEX
and GENERATE KEY. (INDEX2 contains parameters that are
in the select group starting with BUFFERPOOL-bpname in
the Db2 SQL Reference syntax diagram for the CREATE
INDEX statement.)

GENERATE_KEYS
This system table contains one row for each partition of a generate statement profile using range-
partitioning. It is used when installing components that use the GENERATE statement to create range-
partitioned table spaces, partitioning and indexes.

Column name Data type Description


PROFILE K VARCHAR(18) Profile name. This value is specified on the PROFILE
parameter of the GENERATE PARTITIONING statement.
COMPONENT_ID K VARCHAR(18) IBM Z Performance and Capacity Analytics component ID or
%. Allows for a unique profile for an IBM Z Performance and
Capacity Analytics component.
SUBCOMPONENT_ID K VARCHAR(18) IBM Z Performance and Capacity Analytics subcomponent
ID or %. Allows for a unique profile for an IBM Z
Performance and Capacity Analytics subcomponent. An
explicit subcomponent is only valid for a profile where an
explicit value is specified for COMPONENT_ID.
TABSPACE_NAME K VARCHAR(18) Table space name or %. An explicit value can be
specified even if explicit values are not also specified for
COMPONENT_ID and SUBCOMPONENT_ID.
PART_NUM K INTEGER The number of the physical partition for the table space.
NUMPARTS for a RANGE table space is the number of
PART_NUM entries for this profile name.
PARTITION_OPTIONS VARCHAR(250) A string of partition options added to each partition of the
partition-by-range specification of the generated CREATE
TABLESPACE statement.
RANGE_COLUMNS VARCHAR(250) Specifies the columns of the range-partitioning key. Provide
a comma-separated list of the columns comprising the
range-partitioning key. Column options as defined by the
Db2 ALTER TABLE … ADD PARTITION BY RANGE syntax may
also be specified.
PARTITION_KEY VARCHAR(250) The limiting key value for the partition boundary.
INCLUSIVE CHAR(1) Specifies if the range values are included in the data
partition: either Y (for inclusive) or N (for not inclusive).
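
For example, a query of the following general form (assuming the default DRLSYS prefix; the profile
name MYRANGE is illustrative) lists the partition boundaries defined for one profile. Each row
returned corresponds to one physical partition, so the number of rows gives NUMPARTS for the RANGE
table space:

  SELECT PROFILE, TABSPACE_NAME, PART_NUM, RANGE_COLUMNS, PARTITION_KEY, INCLUSIVE
  FROM DRLSYS.GENERATE_KEYS
  WHERE PROFILE = 'MYRANGE'
  ORDER BY PART_NUM;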

Views on Db2 and QMF tables


These views on Db2 tables are required for users without access to the tables.


View name Description


DRLCOLUMNS This view is based on SYSIBM.SYSCOLUMNS in the Db2 catalog. It is used to get column
names and comments.
DRLINDEXES This view is based on SYSIBM.SYSINDEXES in the Db2 catalog. It is used to get table
index information.
DRLINDEXPART This view is based on SYSIBM.SYSINDEXPART in the Db2 catalog. It is used to get index
partition information.
DRLKEYS This view is based on SYSIBM.SYSKEYS in the Db2 catalog. It is used to get information on
index keys.
DRLOBJECT_DATA This view is based on Q.OBJECT_DATA, a QMF control table that contains information
about QMF objects.
DRLTABAUTH This view is based on SYSIBM.SYSTABAUTH in the Db2 catalog. It is used to get table
privilege information.
DRLTABLEPART This view is based on SYSIBM.SYSTABLEPART in the Db2 catalog. It is used to get table
space information.
DRLTABLES This view is based on SYSIBM.SYSTABLES in the Db2 catalog. It is used to get a list of
tables and comments for the tables.
DRLTABLESPACE This view is based on SYSIBM.SYSTABLESPACE in the Db2 catalog. It is used to get a list of
table spaces.
DRLVIEWS This view is based on SYSIBM.SYSVIEWS in the Db2 catalog. It is used to get view
definitions.
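
For example, assuming the views keep the underlying catalog column names and the default DRLSYS
prefix, a user without access to the Db2 catalog can still list tables and their comments:

  SELECT NAME, CREATOR, REMARKS
  FROM DRLSYS.DRLTABLES
  ORDER BY CREATOR, NAME;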

Views on IBM Z Performance and Capacity Analytics system tables


These views on IBM Z Performance and Capacity Analytics dialog system tables are required for users
without access to the tables.

View Name Description


DRLUSER_GROUPREPS This view is based on DRLGROUP_REPORTS. It allows a user to update only his own report
groups.
DRLUSER_GROUPS This view is based on DRLGROUPS. It allows a user to update only his own report groups.
DRLUSER_REPORTATTR This view is based on DRLREPORT_ATTR. It allows a user to update only his own reports.
DRLUSER_REPORTS This view is based on DRLREPORTS. It allows a user to update only his own reports.
DRLUSER_REPORTVARS This view is based on DRLREPORT_VARS. It allows a user to update only his own reports.
DRLUSER_SEARCHATTR This view is based on DRLSEARCH_ATTR. It allows a user to update only his own searches.
DRLUSER_SEARCHES This view is based on DRLSEARCHES. It allows a user to update only his own searches.
DRLUSER_REPORTQRYS This view is based on DRLREPORT_QUERIES. It allows a user to update only his own
reports.
DRLUSER_REPORTCOLS This view is based on DRLREPORT_COLUMNS. It allows a user to update only his own
reports.
DRLUSER_REPORTTEXT This view is based on DRLREPORT_TEXT. It allows a user to update only his own reports.

Control tables and common tables


This section describes control tables and common tables. These tables are used by many IBM Z
Performance and Capacity Analytics components. The tables are provided with the IBM Z Performance
and Capacity Analytics base.


Each table description includes information about the table, and a description of each key column and
data column in the table.
Key columns are marked with a "K".
Data columns come after the last key column and are sorted in alphabetical order, with any underscores
ignored.
The tables appear in alphabetical order, with any underscores ignored.
Note: Data tables with similar contents (that is, data tables with the same name but different suffixes) are
described under one heading. For example, “AVAILABILITY_D, _W, _M” on page 279 contains information
about three similar tables:

AVAILABILITY_D
AVAILABILITY_W
AVAILABILITY_M

Except for the DATE column and TIME column, the contents of these three tables are identical.
Differences in the contents of similar tables are explained in the column descriptions.
The DATE and TIME information is stored in the standard Db2 format and displayed in the local format.

Control tables
The control tables are created during installation of the IBM Z Performance and Capacity Analytics base.
The tables control results returned by some log collector functions.
Control tables appear in the tables list in the administration dialog.

DAY_OF_WEEK
This control table defines the day type to be returned by the DAYTYPE function for each day of the week.
The day type is used as a key in the PERIOD_PLAN and SCHEDULE control tables.

Column name Data type Description


DAY_OF_WEEK K SMALLINT Day of week number, 1 through 7 (Monday through Sunday).
DAY_TYPE CHAR(8) Day type for the day of week.

Example of table contents


DAY
OF DAY
WEEK TYPE
------ --------
1 MON
2 TUE
3 WED
4 THU
5 FRI
6 SAT
7 SUN
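
For example, if Saturday and Sunday are to share a single day type, the table can be updated as
follows (a sketch, assuming the default DRL prefix for control tables; matching WEEKEND rows must
then exist in the PERIOD_PLAN and SCHEDULE tables):

  UPDATE DRL.DAY_OF_WEEK
  SET DAY_TYPE = 'WEEKEND'
  WHERE DAY_OF_WEEK IN (6, 7);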

PERIOD_PLAN
This control table defines the periods to be returned by the PERIOD function, which is described in the
Language Guide and Reference. A period plan defines the partition of a day into periods (such as shifts) for
each day type defined by the DAY_OF_WEEK and SPECIAL_DAY control tables.


Column name Data type Description


PERIOD_PLAN_ID K CHAR(8) You can have different sets of period names for different systems. Each
application normally uses a system ID from the log to match this field, for
example the MVS system ID for an MVS performance application. Specify % for
the rows that specify your default set of period names. This can contain global
search characters.
DAY_TYPE K CHAR(8) Day type the period applies to. This can be any of the day types specified in the
DAY_OF_WEEK and SPECIAL_DAY control tables.
START_TIME K TIME Time when the period starts.
END_TIME TIME Time when the period ends.
PERIOD_NAME CHAR(8) Name of the period.

Example of table contents


PERIOD
PLAN DAY START END PERIOD
ID TYPE TIME TIME NAME
-------- -------- -------- -------- --------
% MON 00.00.00 08.00.00 NIGHT
% MON 08.00.00 17.00.00 PRIME
% MON 17.00.00 24.00.00 NIGHT
% TUE 00.00.00 08.00.00 NIGHT
% TUE 08.00.00 17.00.00 PRIME
% TUE 17.00.00 24.00.00 NIGHT
% WED 00.00.00 08.00.00 NIGHT
% WED 08.00.00 17.00.00 PRIME
% WED 17.00.00 24.00.00 NIGHT
% THU 00.00.00 08.00.00 NIGHT
% THU 08.00.00 17.00.00 PRIME
% THU 17.00.00 24.00.00 NIGHT
% FRI 00.00.00 08.00.00 NIGHT
% FRI 08.00.00 17.00.00 PRIME
% FRI 17.00.00 24.00.00 NIGHT
% SAT 00.00.00 24.00.00 WEEKEND
% SUN 00.00.00 24.00.00 WEEKEND
% HOLIDAY 00.00.00 24.00.00 HOLIDAY
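
For example, to give one system its own plan in which the Monday prime shift ends at 18.00, a more
specific row can be added alongside the % default rows (a sketch; the DRL prefix and the system ID
SYS1 are assumptions):

  INSERT INTO DRL.PERIOD_PLAN
    (PERIOD_PLAN_ID, DAY_TYPE, START_TIME, END_TIME, PERIOD_NAME)
  VALUES ('SYS1', 'MON', '08.00.00', '18.00.00', 'PRIME');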

SCHEDULE
This control table defines the schedules to be returned by the APPLY SCHEDULE function. A schedule is a
time period when a resource is planned to be up; it is used in availability calculations.

Column name Data type Description


SCHEDULE_NAME K CHAR(8) Name of the schedule. By giving different names to schedules, you
can have different schedules for the various systems or resources. The
AVAILABILITY_PARM table controls which schedule name to use for a
resource.
DAY_TYPE K CHAR(8) Day type the schedule applies to. This can be any of the day types specified in
the DAY_OF_WEEK and SPECIAL_DAY control tables.
START_TIME K TIME Time when the schedule starts.
END_TIME TIME Time when the schedule ends.

Example of table contents


SCHEDULE DAY START END
NAME TYPE TIME TIME
-------- -------- -------- --------
STANDARD MON 08.00.00 17.00.00
STANDARD TUE 08.00.00 17.00.00
STANDARD WED 08.00.00 17.00.00
STANDARD THU 08.00.00 17.00.00
STANDARD FRI 08.00.00 17.00.00
STANDARD SAT 00.00.00 00.00.00
STANDARD SUN 00.00.00 00.00.00
STANDARD HOLIDAY 00.00.00 00.00.00

SPECIAL_DAY
This control table defines the day type to be returned by the DAYTYPE function for special dates such as
holidays. The day type is used as a key in the PERIOD_PLAN and SCHEDULE control tables.

Column name Data type Description


DATE K DATE Date to be defined as a special day.
DAY_TYPE CHAR(8) Day type for the date; for example, HOLIDAY.

Example of table contents


DAY
DATE TYPE
---------- --------
1999-12-25 HOLIDAY
2000-01-01 HOLIDAY
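
For example, to define a new holiday (a sketch, assuming the default DRL prefix for control tables):

  INSERT INTO DRL.SPECIAL_DAY (DATE, DAY_TYPE)
  VALUES ('2024-12-25', 'HOLIDAY');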

CICS control tables


The CICS control tables are created during installation of the IBM Z Performance and Capacity Analytics
base. The tables control results returned by some log collector functions during CICS log data collection.
CICS control tables appear in the tables list in the administration dialog.

CICS_DICTIONARY
This control table is used during CICS log data collection. The CICS record procedure, DRL2CICS,
uses CICS_DICTIONARY to store the latest dictionary record processed for each unique combination
of MVS_SYSTEM_ID, CICS_SYSTEM_ID, CLASS and VERSION. For more information, refer to the CICS
Performance Feature Guide and Reference.

Column name Data type Description


MVS_SYSTEM_ID K CHAR(4) MVS system ID. From SMFMNSID (V3) or SMFSID (V2).
CICS_SYSTEM_ID K CHAR(8) CICS generic ID. This is the VTAM® application identifier for the CICS system
that produced the dictionary. From SMFMNPRN (V3) or SMFPSPRN (V2).
CLASS K SMALLINT Monitoring class. This is 2 for accounting (CICS/MVS V2 only), 3 for
performance data, and 4 for exception data (CICS/MVS V2 only). From
SMFMNCL (V3) or MNSEGCL (V2).
VERSION K SMALLINT Version of the CICS system that produced the dictionary. This is 2 for CICS/MVS
(V2) and 3 for CICS/ESA (V3). Set by DRL2CICS based on SMFMNSTY (V3) or
SMFSTY (V2).
FIELD_NO K SMALLINT Assigned connector for this dictionary entry (CMODCONN). This is also the
index to the dictionary entry array.
CICS_VER K CHAR(4) CICS version and release that created this dictionary (from the field
SMFMNRVN); for example, 0410.
DICT_ENTRY_ID CHAR(12) Dictionary entry ID. It is made up of the CMODNAME, CMODTYPE and
CMODIDNT fields in the dictionary entry. It is used to uniquely identify each
dictionary entry.
OUTPUT_LENGTH SMALLINT Field length for matching DICT_ENTRY_ID in CICS_FIELD. It is used for building
the output record.
OUTPUT_OFFSET SMALLINT Field offset for matching DICT_ENTRY_ID in CICS_FIELD. It is used for building
the output record.

USED CHAR(8) A flag indicating (if = Y) that this dictionary entry has been updated with field
length and offset data from a matching DICT_ENTRY_ID in CICS_FIELD.

CICS_FIELD
This control table is used during CICS log data collection. The CICS record procedure, DRL2CICS, uses
CICS_FIELD to store field lengths and offsets for dictionary fields described in “CICS_DICTIONARY”
on page 277. For more information, refer to the CICS Performance Feature Guide and ReferenceCICS
Performance Feature Guide and Reference.

Column name Data type Description


CLASS K SMALLINT CMF record class. 2 for accounting (CICS/MVS V2 only), 3 for performance
data (transaction and global (CICS/MVS V2 only)) and 4 for exception data
(CICS/MVS V2 only).
DICT_ENTRY_ID K CHAR(12) This is the dictionary entry ID. It is made up of the CMODNAME, CMODTYPE
and CMODIDNT fields in the dictionary entry. It is used to uniquely identify
each dictionary entry.
FIRST_CICS_VER K CHAR(4) This is the first version of CICS that introduced this CMODTYPE and
CMODIDNT with these attributes. This allows multiple versions of the
same key, because many fields were changed with CICS TS 3.2.
OUTPUT_LENGTH SMALLINT This is the field length that is used to build the output record.
OUTPUT_OFFSET INTEGER This is the field offset that is used to build the output record. This offset
should match the SMF_CICS_T, _G, _A, _E2 record definitions.

Common data tables


These tables are ordinary data tables that are used by many components. They are provided with the IBM
Z Performance and Capacity Analytics base, but not created until the installation of the first component
that uses them.

Naming standard for common data tables


Names of IBM Z Performance and Capacity Analytics common data tables are in this format:
content_suffix
where:
• content is a description (for example, AVAILABILITY for system and resource availability data).
• suffix indicates the summarization level of the data in the table (for example, AVAILABILITY_D for
availability data summarized by day).
A common table name can have these summarization-level suffixes:
_T
The table holds nonsummarized data (timestamped data).
_D
The table holds data summarized by day.
_W
The table holds data summarized by week.
_M
The table holds data summarized by month.


AVAILABILITY_D, _W, _M
These tables provide daily, weekly, and monthly statistics on the availability of systems and subsystems.
They contain consolidated data from the AVAILABILITY_T table.
The default retention periods for these tables are:
AVAILABILITY_D
90 days
AVAILABILITY_W
400 days
AVAILABILITY_M
800 days

Column name Data type Description


DATE K DATE Date that the availability data applies to. For the _W table, this is
the date of the first day of the week. For the _M table, this is the
date of the first day of the month.
SYSTEM_ID K CHAR(8) System ID such as an MVS or VM system ID.
AREA K CHAR(8) Major area the resource is related to, such as MVS or NETWORK.
RESOURCE_TYPE K CHAR(8) Resource type.
RESOURCE_NAME K CHAR(8) Resource name.
RESOURCE_GROUP K CHAR(8) Resource group.
AVAIL_OBJ_PCT DECIMAL(4,1) Availability objective for the resource, in percent. This is from the
column AVAIL_OBJ_PCT in the AVAILABILITY_PARM lookup table.
This value should be compared with the actual availability, which is
calculated as: 100*UP_IN_SCHEDULE/SCHEDULE_HOURS.
MEASURED_HOURS FLOAT Number of hours measured.
SCHEDULE_DAYS SMALLINT Number of days during the week or month that the resource was
scheduled to be up. This column is only present in the _W and _M
tables.
SCHEDULE_HOURS FLOAT Number of hours the resource was scheduled to be up.
STARTS SMALLINT Number of times the resource was started.
STARTS_IN_SCHEDULE SMALLINT Number of times the resource was started within the schedule.
STOPS SMALLINT Number of times the resource was stopped.
STOPS_IN_SCHEDULE SMALLINT Number of times the resource was stopped within the schedule.
UP_HOURS FLOAT Number of hours the resource was up.
UP_IN_SCHEDULE FLOAT Number of hours the resource was up within the schedule.
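
Because the actual availability is calculated as 100*UP_IN_SCHEDULE/SCHEDULE_HOURS, a query of the
following general form compares it with the objective for one day (a sketch, assuming the default
DRL prefix; the guard on SCHEDULE_HOURS avoids division by zero):

  SELECT SYSTEM_ID, AREA, RESOURCE_NAME,
         100 * UP_IN_SCHEDULE / SCHEDULE_HOURS AS ACTUAL_PCT,
         AVAIL_OBJ_PCT
  FROM DRL.AVAILABILITY_D
  WHERE DATE = '2024-01-02'
  AND SCHEDULE_HOURS > 0;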

AVAILABILITY_T
This table provides detailed availability data about the system as a whole and all its subsystems. The data
comes from many different sources. For every resource tracked, this table contains one row for each time
interval with a different status.
The default retention period for this table is 10 days.

Column name Data type Description


SYSTEM_ID K CHAR(8) System ID such as an MVS or VM system ID.
AREA K CHAR(8) Major area the resource is related to, such as MVS or NETWORK.

RESOURCE_TYPE K CHAR(8) Resource type.
RESOURCE_NAME K CHAR(8) Resource name.
RESOURCE_GROUP K CHAR(8) Resource group.
INTERVAL_TYPE K CHAR(3) Interval type. Possible values are: ===, |==, ==|, |=|, XXX, |XX, XX|, |X|,
and blank, where:
=
Indicates that the resource is up (available)
X
Indicates that the resource is down
|
Indicates an interval start or end
blank
Means that the status is unknown

START_TIME K TIMESTAMP Start time of the interval.
END_TIME TIMESTAMP End time of the interval.
QUIET_INTERVAL_SEC INTEGER Number of seconds after the interval end that the resource is expected
to remain in the same status. If another interval with a start time
within this range appears, the two intervals are merged.

EXCEPTION_T
This table provides a list of exceptions that have occurred in the system and require attention. The data
comes from many different sources.
The layout of this table cannot be changed by the user.
The default retention period for this table is 14 days.

Column name Data type Description


DATE K DATE Date when the exception occurred.
TIME K TIME Time when the exception occurred.
SYSTEM_ID K CHAR(8) System where the exception occurred.
AREA K CHAR(8) Major area the exception is related to, such as MVS or NETWORK.
EXCEPTION_ID K VARCHAR(18) Short description of the exception type. This can be used to count the
number of exceptions of different types.
RESOURCE_NAME1 K CHAR(8) Name of the first resource that the exception is related to.
RESOURCE_NAME2 K CHAR(8) Name of the second resource that the exception is related to.
DATE_GENERATED DATE Date when the problem was recorded in the Information/Management
database. This is null if no problem record has been generated.
EXCEPTION_DESC VARCHAR(45) Text that describes the exception, in any format.
PROBLEM_FLAG CHAR(1) Controls whether a problem record should be automatically generated
for the exception. This can be Y (generate a problem record) or N (do
not generate a problem record).
PROBLEM_NUMBER CHAR(8) The Information/Management problem-record number. This is null if
no problem record has been generated.
SEVERITY CHAR(2) Severity of the problem. This is user-defined.
TRANSACT_NUMBER INTEGER Transaction identifier number.

TRANSACT_CHAR CHAR(4) Transaction number in character format. (In some special cases, CICS
system tasks are identified as III, JBS, J01-J99, or TCB.)
PROGRAM_NAME CHAR(8) Name of the program.

MIGRATION_LOG
This table holds information on what migration jobs have been run, and the results of each step.
The layout of this table cannot be changed by the user.
The default retention period for this table is 14 days.

Column name Data type Description


JOB_NAME K CHAR(8) Migration job name.
STEP_NO K INTEGER Step number of job.
START_DATE K DATE Start date of job.
START_TIME K TIME Start time of job.
STEP_NAME CHAR(30) Step name of job.
RETURN_CODE INTEGER Step status code.
COMPLETED_CODE CHAR Y - Completed successfully
U - Abend
END_DATE DATE End date of last migration step.
END_TIME TIME End time of last migration step.

Common lookup tables


These tables are ordinary lookup tables that are used by many components. They are provided with
the IBM Z Performance and Capacity Analytics base, but not created until the installation of the first
component that uses them.

AVAILABILITY_PARM
This lookup table sets availability parameters. It contains the schedule names and availability objectives
to use for the different resources in the system. Its values are used in the AVAILABILITY_D, _W, and _M
tables.

Column name Data type Description


SYSTEM_ID K CHAR(8) System ID associated with the resource. This can contain global
search characters.
AREA K CHAR(8) Major area that the resource is related to, such as MVS or NETWORK.
This can contain global search characters.
RESOURCE_TYPE K CHAR(8) Resource type. This can contain global search characters.
RESOURCE_NAME K CHAR(8) Resource name. This can contain global search characters.
RESOURCE_GROUP K CHAR(8) Resource group. This can contain global search characters.
AVAIL_OBJ_PCT DECIMAL(4,1) Availability objective for the resource, in percent.
SCHEDULE_NAME CHAR(8) Schedule name to use for the resource.


Example of table contents


AVAIL
SYSTEM RESOURCE RESOURCE RESOURCE SCHEDULE OBJ
ID AREA TYPE NAME GROUP NAME PCT
-------- -------- -------- -------- -------- -------- -------
% % % % % STANDARD 95.0
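
For example, to give every resource on one system a stricter objective and its own schedule, a more
specific row can be added alongside the % default (a sketch; the DRL prefix, system ID SYS1, and
schedule name SYS1SCHD are assumptions, and the schedule must be defined in the SCHEDULE table):

  INSERT INTO DRL.AVAILABILITY_PARM
    (SYSTEM_ID, AREA, RESOURCE_TYPE, RESOURCE_NAME, RESOURCE_GROUP,
     AVAIL_OBJ_PCT, SCHEDULE_NAME)
  VALUES ('SYS1', '%', '%', '%', '%', 99.0, 'SYS1SCHD');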

USER_GROUP
This lookup table groups the users of the system into user groups. The values are used in many tables.
You can also assign division and department names to the user groups; however, the names are left blank
in the predefined tables.

Column name Data type Description


SYSTEM_ID K CHAR(8) System ID such as an MVS or VM system ID. This can contain global search
characters.
SUBSYSTEM_ID K CHAR(8) Subsystem ID such as TSO or a CICS* system ID. This can contain global search
characters. This is not used in the predefined tables.
USER_ID K CHAR(8) User ID of the user to be grouped. This can contain global search characters.
DEPARTMENT CHAR(8) Department that the user belongs to. This is not used in the predefined tables.
DIVISION CHAR(8) Division that the user belongs to. This is not used in the predefined tables.
GROUP_NAME CHAR(8) Name of the group that the user belongs to.

Example of table contents


SYSTEM SUBSYSTEM USER GROUP
ID ID ID DIVISION DEPARTMENT NAME
-------- --------- -------- -------- ---------- --------
* * USER1 GROUP1
* * USER2 GROUP2

TIME_RES
This lookup table defines the time resolution to use for each row of data stored in a set of tables. This
enables you to specify that data should be recorded for a time period other than 1 hour. The values are
used in these data tables:
• D_DB2_BUFF_POOL_T
• D_DB2_DATABASE_T
• D_DB2_SYSTEM_T
• D_KPM_DB2_BP_T
• D_KPM_DB2_DBASE_T
• D_KPM_DB2_SYSTEM_T

Column name Data type Description


HOUR K CHAR(2) Hour of the day (that the time resolution applies to), 00 to 23.
PERIOD_NAME K CHAR(8) Name of the period. This can contain global search characters.
SYSTEM_ID K CHAR(8) Name of the system. This can contain global search characters.
TABLE_SET_NAME K CHAR(18) Name that identifies the set of tables the time resolution is defined for.

TIME_RESOLUTION SMALLINT Time resolution for the set of tables, in minutes. This defines the time period for
which data is to be recorded. Valid values are 1, 2, 3, 5, 6, 10, 12, 15, 20, 30, or
any multiple of 60.

Example of table contents

HOUR PERIOD_NAME SYSTEM_ID TABLE_SET_NAME TIME_RESOLUTION
---- ----------- --------- ----------------- ---------------
% % % D_DB2_BUFF_POOL_T 15
% % % D_DB2_DATABASE_T 15
% % % D_DB2_SYSTEM_T 15
% % % D_KPM_DB2_BP_T 15
% % % D_KPM_DB2_DBASE_T 15
% % % D_KPM_DB2_SYSTEM_T 15
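
For example, to record D_DB2_SYSTEM_T data at 5-minute resolution during hour 09 on one system, a
more specific row can be added (a sketch; the DRL prefix and the system ID SYS1 are assumptions):

  INSERT INTO DRL.TIME_RES
    (HOUR, PERIOD_NAME, SYSTEM_ID, TABLE_SET_NAME, TIME_RESOLUTION)
  VALUES ('09', '%', 'SYS1', 'D_DB2_SYSTEM_T', 5);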

AGGR_VALUE
This table is used to assign a default value to a key field that is not required in the aggregation. If a
record is found in AGGR_VALUE for a particular table and column, the default value is used in the
aggregation. This can reduce the number of rows collected for that table.

Column name Data type Description


AGGR_TABLE K CHAR(18) Name of IBM Z Performance and Capacity Analytics table.
AGGR_COLUMN K CHAR(18) Name of IBM Z Performance and Capacity Analytics column.
AGGR_DEF_VALUE CHAR(16) Default value to assign to field.

Example of table contents


AGGR               AGGR               AGGR DEF
TABLE              COLUMN             VALUE
------------------ ------------------ ----------------
DB2_PACKAGE_H      CORRELATION_ID     $USER
DB2_PACKAGE_H      PRIMARY_AUTH_ID    $USER
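
For example, the following INSERT creates the first of the predefined rows shown above, which makes the
aggregation store the default value $USER in CORRELATION_ID rather than collecting one row per
correlation ID. This is a sketch, shown with the system-table prefix DRLSYS; substitute the prefix used
at your site:

  INSERT INTO DRLSYS.AGGR_VALUE
         (AGGR_TABLE, AGGR_COLUMN, AGGR_DEF_VALUE)
  VALUES ('DB2_PACKAGE_H', 'CORRELATION_ID', '$USER');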

Sample component
This topic describes the Sample component, the only component shipped with the IBM Z Performance
and Capacity Analytics base product. You can use the Sample component for testing the installation of the
base product or to demonstrate functionality.
The sample component consists of:
• A sample log and record definition
• Three sample tables with update definitions
• Three sample reports
• A log data set with sample data that can be collected
Figure 137 on page 284 shows an overview of the flow of data from the sample log data set, DRLSAMPL
(in the DRLxxx.SDRLDEFS library), through the Sample component of IBM Z Performance and Capacity
Analytics, and finally into reports.

Figure 137. Sample data flow: the sample log data set is collected into SAMPLE_01 records, which update
the SAMPLE_H and SAMPLE_M data tables (using the SAMPLE_USER lookup and control tables); the reports are
then produced from these data tables.

SAMPLE_H, _M data tables


These tables provide hourly and monthly sample data.

Column name Data type Description


DATE K DATE Date. For the _M table, this is the date of the first day of the month. From
S01DATE.
TIME K TIME Time rounded down to the nearest hour. This applies only to the _H table.
From S01TIME.
SYSTEM_ID K CHAR(4) System ID. From S01SYST.
DEPARTMENT_NAME K CHAR(8) Department name. From DEPARTMENT_NAME in the SAMPLE_USER
lookup table. This is derived using field S01USER from the record as key.
USER_ID K CHAR(8) User ID. From S01USER.
CPU_SECONDS FLOAT Total processor time, in seconds. Calculated as the sum of S01CPU/
100.0.
PAGES_PRINTED INTEGER Number of pages printed. This is the sum of S01PRNT.
RESPONSE_SECONDS INTEGER Total response time, in seconds. This is the sum of S01RESP.
TRANSACTIONS INTEGER Number of transactions. This is the sum of S01TRNS.
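
After you install the Sample component and collect the sample log, you can verify the contents of these
tables with a query. This sketch (shown with the default DRL table prefix and a hypothetical system ID)
returns the hourly per-department processor profile on which Sample Report 1 is based:

  SELECT TIME, DEPARTMENT_NAME, SUM(CPU_SECONDS) AS CPU_SECONDS
    FROM DRL.SAMPLE_H
   WHERE SYSTEM_ID = 'MVS1'        -- hypothetical system ID
   GROUP BY TIME, DEPARTMENT_NAME
   ORDER BY TIME, DEPARTMENT_NAME;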

SAMPLE_USER lookup table


This lookup table assigns department names to users.

Column name Data type Description


USER_ID K CHAR(8) User ID

DEPARTMENT_NAME CHAR(8) Department name

Example of table contents


USER DEPARTMENT
ID NAME
-------- ----------
ADAMS Appl Dev
GEYER Finance
GOUNOT Retail
HAAS Finance
JONES Appl Dev
KWAN Marketng
LEE Manufact
LUTZ Manufact
MARINO Retail
MEHTA Manufact
PARKER Finance
PEREZ Retail

Sample component reports


In the report descriptions that follow, this information is included:
Heading
The title of the report.
Introduction
A brief introduction to the purpose of the report.
Report ID
IBM Z Performance and Capacity Analytics assigns each report a unique report identifier. Each report
ID consists of SAMPLE and a sequential number, such as SAMPLE01.
Report group
To make it easier to find reports, IBM Z Performance and Capacity Analytics organizes reports into
report groups, which correspond to feature components. Sample component reports belong to the
Sample report group.
Source
Each Sample report contains information adapted from either the SAMPLE_H or SAMPLE_M source
tables.
Attributes
Attributes are keys that you can use to search for a particular report. The Sample component reports
each have one attribute, Sample.
Variables
Each report has several variables associated with it. When you select a report to display, IBM Z
Performance and Capacity Analytics prompts you for the variables listed in the description.
Example report
Each example illustrates a typical report.
Column descriptions
Column descriptions identify the information contained within the report, in detail. If the column
contains a calculated value, the formula used for the calculation is included.

Sample Report 1
This surface chart shows the processor time consumed by different projects. It gives an hourly profile for
an average day.
This information identifies the report:
Report ID
SAMPLE01

Report group
Sample Reports
Source
SAMPLE_H
Chart format
DRLGSURF
Attributes
Sample
Variables
System ID

Figure 138. Sample Report 1

The report contains this information:


Horizontal axis
Hour, in the format hh.mm
Vertical axis
Processor time, in seconds
Legend
Department name

Sample Report 2
This report shows the resources consumed by each user and department.
This information identifies the report:
Report ID
SAMPLE02
Report group
Sample Reports


Source
SAMPLE_M
Attributes
Sample
Variables
From_month, To_month, System_ID

Sample Report 2

Average
Month Department User Trans- response CPU Pages
start date name ID actions seconds seconds printed
---------- ---------- -------- -------- -------- -------- --------
2000-01-01 Appl Dev ADAMS 1109 3.84 244.13 821
JONES 1138 3.40 228.79 1055
SMITH 870 4.27 183.03 864
-------- -------- -------- --------
* 3117 3.84 655.95 2740

Finance GEYER 509 4.29 115.97 529
HAAS 786 3.56 137.48 648
PARKER 462 6.79 171.51 704
SPENCER 800 3.33 172.82 640
-------- -------- -------- --------
* 2557 4.50 597.78 2521

======== ======== ======== ========
36396 4.03 7868.97 38711

IBM Z Performance and Capacity Analytics Report: SAMPLE02

Figure 139. Sample Report 2

The columns in this report contain this information:


Month start date
Date of the first day in the month.
Department name
Name of the department that the user belongs to.
User ID
ID of the user.
Transactions
Number of transactions run by the user.
Average response seconds
The average response time, in seconds for all transactions. Calculated as RESPONSE_SECONDS/
TRANSACTIONS.
CPU seconds
Number of processor seconds consumed.
Pages printed
Number of pages printed.
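
The body of the report corresponds to a query along these lines (a sketch, shown with the default DRL
table prefix; the From_month and To_month variables supply the date range, and the average response time
is derived from the stored totals rather than stored itself):

  SELECT DATE, DEPARTMENT_NAME, USER_ID,
         SUM(TRANSACTIONS) AS TRANSACTIONS,
         SUM(RESPONSE_SECONDS) * 1.0 / SUM(TRANSACTIONS) AS AVG_RESPONSE_SECONDS,
         SUM(CPU_SECONDS) AS CPU_SECONDS,
         SUM(PAGES_PRINTED) AS PAGES_PRINTED
    FROM DRL.SAMPLE_M
   WHERE DATE BETWEEN '2000-01-01' AND '2000-12-01'   -- From_month and To_month
   GROUP BY DATE, DEPARTMENT_NAME, USER_ID
   ORDER BY DATE, DEPARTMENT_NAME, USER_ID;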

Sample Report 3
This bar chart shows the processor time consumed by each project during the selected time period,
sorted as a toplist.
This information identifies the report:
Report ID
SAMPLE03
Report group
Sample Reports


Source
SAMPLE_M
Chart format
DRLGHORB
Attributes
Sample
Variables
From_date, To_date, System_ID

Figure 140. Sample Report 3

The report contains this information:


Horizontal axis
Processor time, in seconds
Vertical axis
Department name

Record definitions supplied with IBM Z Performance and Capacity Analytics
In addition to the records used by the components, the IBM Z Performance and Capacity Analytics
base product contains definitions of many records. This chapter lists all the records defined by the base
product, except for those built by IBM Z Performance and Capacity Analytics exits and utilities.

SMF records
Record name Member name Description
SMF_000 DRLRS000 IPL
SMF_002 DRLRS002 Dump header
SMF_003 DRLRS003 Dump trailer
SMF_004 DRLRS004 Step termination
SMF_005 DRLRS005 Job termination
SMF_006 DRLRS006 JES2/JES3/PSF/External writer

SMF_007 DRLRS007 Data lost
SMF_008 DRLRS008 I/O configuration
SMF_009 DRLRS009 VARY device ONLINE
SMF_010 DRLRS010 Allocation recovery
SMF_011 DRLRS011 VARY device OFFLINE
SMF_014 DRLRS014 INPUT or RDBACK data set activity
SMF_015 DRLRS015 OUTPUT, UPDAT, INOUT, or OUTIN data set activity
SMF_016 DRLRS016 DFSORT statistics
SMF_017 DRLRS017 Scratch data set status
SMF_018 DRLRS018 Rename data set status
SMF_019 DRLRS019 Direct access volume
SMF_020 DRLRS020 Job initiation
SMF_021 DRLRS021 Error statistics by volume
SMF_022 DRLRS022 Configuration
SMF_023 DRLRS023 SMF status
SMF_024 DRLRS024 JES2 spool offload
SMF_025 DRLRS025 JES3 device allocation
SMF_026 DRLRS026 JES2/JES3 job purge
SMF_028 DRLRS028 NPM statistics. SMF_028 maps all subtypes of SMF type 28. To improve
performance, the subtypes used by IBM Z Performance and Capacity
Analytics are mapped with special record definitions (SMF_028_xxx). Note
that SMF_028 cannot be used together with these definitions because
each log record can be mapped by only one record definition.
SMF_028_NTRI DRLRSNTR NPM NTRI statistics
SMF_028_TRANSIT DRLRSNTT NPM transit time statistics
SMF_028_TRANS_SUM DRLRSNT1 NPM Transit Time summary statistics
SMF_028_X25 DRLRSX25 NPM X25 statistics
SMF_028_PU DRLRSNPU NPM PU statistics
SMF_028_NPM DRLRSNPM NPM internal statistics
SMF_028_LINE DRLRSNLI NPM line statistics
SMF_028_NEO DRLRSNEO NPM NEO statistics
SMF_028_NCP DRLRSNCP NPM NCP statistics
SMF_028_LAN DRLRSLAN NPM LAN statistics
SMF_028_VTAM DRLRSVTM NPM VTAM statistics
SMF_030 DRLRS030 Common address space work
SMF_031 DRLRS031 TIOC initialization
SMF_032 DRLRS032 TSO user work accounting
SMF_033 DRLRS033 APPC/MVS TP accounting
SMF_034 DRLRS034 TS-step termination

SMF_035 DRLRS035 LOGOFF
SMF_036 DRLRS036 ICF catalog
SMF_037_HW DRLRS037 NetView Hardware Monitor
SMF_037_VPD DRLRSVPD Network configuration (VPD)
SMF_039_1_TO_7 DRLRS039 NetView Session Monitor, SMF 39, subtypes 1 to 7
SMF_039_8 DRLRS039 NetView Session Monitor, SMF 39, subtype 8
SMF_040 DRLRS040 Dynamic DD
SMF_041 DRLRS041 Data-in-virtual Access/Unaccess
SMF_042_1 DRLRS042 BMF performance statistics
SMF_042_2 DRLRS042 DFP cache control unit statistics
SMF_042_3 DRLRS042 DFP SMS configuration statistics
SMF_042_5 DRLRSX42 DFSMS storage class statistics
SMF_042_6 DRLRSX42 DFSMS Data Set statistics
SMF_042_14 DRLRADSM ADSTAR Distributed Storage Manager (ADSM) server statistics
SMF_042_11 DRLRSX42 DFP Extended Remote Copy (XRC) session statistics
SMF_043_2 DRLRS043 JES2 start
SMF_043_5 DRLRS043 JES3 start
SMF_045_2 DRLRS045 JES2 withdrawal
SMF_045_5 DRLRS045 JES3 stop
SMF_047_2 DRLRS047 JES2 SIGNON/start line (BSC only)
SMF_047_5 DRLRS047 JES3 SIGNON/start line/LOGON
SMF_048_2 DRLRS048 JES2 SIGNOFF/stop line (BSC only)
SMF_048_5 DRLRS048 JES3 SIGNOFF/stop line/LOGOFF
SMF_049_2 DRLRS049 JES2 integrity (BSC only)
SMF_049_5 DRLRS049 JES3 integrity
SMF_050 DRLRS050 ACF/VTAM* tuning statistics
SMF_052 DRLRS052 JES2 LOGON/start line (SNA only)
SMF_053 DRLRS053 JES2 LOGOFF/start line (SNA only)
SMF_054 DRLRS054 JES2 integrity (SNA only)
SMF_055 DRLRS055 JES2 network SIGNON
SMF_056 DRLRS056 JES2 network integrity
SMF_057_2 DRLRS057 JES2 network SYSOUT transmission
SMF_057_5 DRLRS057 JES3 networking transmission
SMF_058 DRLRS058 JES2 network SIGNOFF
SMF_059 DRLRS059 MVS/BDT file-to-file transmission
SMF_060 DRLRS060 VSAM volume data set updated
SMF_061 DRLRS061 ICF define activity
SMF_062 DRLRS062 VSAM component or cluster opened

SMF_063 DRLRS063 VSAM entry defined
SMF_064 DRLRS064 VSAM component or cluster status
SMF_065 DRLRS065 ICF delete activity
SMF_066 DRLRS066 ICF alter activity
SMF_067 DRLRS067 VSAM entry delete
SMF_068 DRLRS068 VSAM entry renamed
SMF_069 DRLRS069 VSAM data space defined, extended, or deleted
SMF_070 DRLRS070 RMF CPU activity
SMF_071 DRLRS071 RMF paging activity
SMF_072_1 DRLRS072 RMF workload activity
SMF_072_2 DRLRSX72 RMF storage data
SMF_072_3 DRLRS072 RMF goal mode workload activity
SMF_072_4 DRLRSX72 RMF goal mode delay and storage frame data
SMF_073 DRLRS073 RMF channel path activity
SMF_074_1 DRLRS074 RMF device activity
SMF_074_2 DRLRS074 RMF XCF activity
SMF_074_3 DRLRSX74 RMF Device OMVS activity
SMF_074_4 DRLRSX74 RMF XES/CF activity
SMF_074_6 DRLRX74 File system statistics
SMF_075 DRLRS075 RMF page/swap data set activity
SMF_076 DRLRS076 RMF trace activity
SMF_077 DRLRS077 RMF enqueue activity
SMF_078_1 DRLRS078 RMF I/O queueing activity for the 308x, 908x, and 4381 processors
SMF_078_2 DRLRS078 RMF virtual storage activity
SMF_078_3 DRLRS078 RMF I/O queueing activity for the 3090, 9021, 9121, and 9221 processors
SMF_079 DRLRS079 RMF Monitor II activity
SMF_080 DRLRS080 RACF processing
SMF_081 DRLRS081 RACF initialization
SMF_082_1 DRLRS082 PCF record
SMF_082_2 DRLRS082 CUSP record
SMF_083 DRLRS083 RACF audit record for data sets
SMF_084_1 DRLRS084 JMF - FCT analysis
SMF_084_2 DRLRS084 JMF - FCT summary and highlights
SMF_084_3 DRLRS084 JMF - spool data management
SMF_084_4 DRLRS084 JMF - resqueue cellpool, JCT and control block utilization
SMF_084_5 DRLRS084 JMF - job analysis
SMF_084_6 DRLRS084 JMF - JES3 hot spot analysis
SMF_084_7 DRLRS084 JMF - JES internal reader DSP analysis

SMF_084_8 DRLRS084 JMF - JES3 SSI response time analysis
SMF_084_9 DRLRS084 JMF - JES3 SSI destination queue analysis
SMF_085 DRLRS085 OAM record
SMF_088 DRLRS088 System logger
SMF_089 DRLRS089 Product Usage Data
SMF_090 DRLRS090 System status
SMF_092 DRLRS092 z/OS UNIX activity
SMF_094 DRLRS094 3494, 3495 Tape library data server statistics
SMF_099 DRLRS099 SMS System Resource Manager decisions
SMF_100_0 DRLRS100 Db2 statistics, system services
SMF_100_1 DRLRS100 Db2 statistics, database services
SMF_100_2 DRLRS100 Db2 statistics, dynamic ZPARMs
SMF_100_3 DRLRS100 Db2 statistics, Buffer, Manager Group Buffer Pool
SMF_101 DRLRS101 Db2 accounting
SMF_101_1 DRLRS101 Db2 accounting, Packages extension
SMF_102 DRLRS102 Db2 system initialization parameters
SMF_110_0 DRLRS110 CICS/ESA journaling record
SMF_110_0_V2 DRLRS110 CICS/MVS monitoring record
SMF_110_1 DRLRS110 CICS/ESA monitoring record
SMF_110_1_1 DRLRS110 CICS/TS <3.2 record
SMF_110_1_5 DRLR110T CICS transaction resource - expanded
SMF_110_2 DRLR1102 CICS/ESA and CICS/TS statistics record
SMF_110_3 DRLR1103 CICS/TS statistics record
SMF_110_4 DRLR1103 CICS/TS CF statistics record
SMF_110_5 DRLR1103 CICS/TS NC statistics record
SMF_110_1_C DRLRS110 CICS/TS 3.2+ - may be compressed
SMF_110_1_CO DRLRS110 CICS/TS 3.2+ - expanded
SMF_110_E DRLRS110 CICS/ESA exception record - expanded
SMF_112_203_C DRLRS112 OMEGAMON® XE for CICS file and database usage - compressed
SMF_112_203 DRLRS112 OMEGAMON XE for CICS file and database usage - expanded
SMF_114_1 DRLRS114 System Automation Tracking
SMF_115 DRLRS115 WebSphere MQ for z/OS statistics
SMF_116 DRLRS116 WebSphere MQ for z/OS statistics
SMF_117 DRLRS117 Websphere Message Broker
SMF_118_1 DRLRS118 TCP/IP API calls record
SMF_118_3 DRLRS118 TCP/IP FTP client calls record
SMF_118_4 DRLRS118 TCP/IP TELNET client calls record
SMF_118_20 DRLRS118 TCP/IP TELNET server record

SMF_118_5 DRLRS118 TCP/IP general statistics record
SMF_118_70 DRLRS118 TCP/IP FTP server record
SMF_119_1 DRLRS119 TCP connection initiation
SMF_119_2 DRLRS119 TCP connection termination
SMF_119_3 DRLRS119 FTP client transfer completion
SMF_119_4 DRLRS119 TCP/IP Profile Information record
SMF_119_5 DRLRS119 TCP/IP statistics
SMF_119_6 DRLRS119 Interface statistics
SMF_119_7 DRLRS119 Server port statistics
SMF_119_8 DRLRS119 TCP/IP stack start/stop
SMF_119_10 DRLRS119 UDP socket close
SMF_119_20 DRLRS119 TN3270 server SNA session initiation
SMF_119_21 DRLRS119 TN3270 server SNA session termination
SMF_119_22 DRLRS119 TSO telnet client connection initiation
SMF_119_23 DRLRS119 TSO telnet client connection termination
SMF_119_70 DRLRS119 FTP server transfer completion
SMF_119_72 DRLRS119 FTP server logon failure
SMF_119_73 DRLRS119 IPSec IKE Tunnel Activation/Refresh record
SMF_119_74 DRLRS119 IPSec IKE Tunnel Deactivation/Expire record
SMF_119_75_80 DRLRS119 IPSec Dynamic Tunnel Activation/Refresh
SMF_119_75_80 DRLRS119 IPSec Dynamic Tunnel Deactivation record
SMF_119_75_80 DRLRS119 IPSec Dynamic Tunnel Added record
SMF_119_75_80 DRLRS119 IPSec Dynamic Tunnel Removed record
SMF_119_75_80 DRLRS119 IPSec Manual Tunnel Activation record
SMF_119_75_80 DRLRS119 IPSec Manual Tunnel Deactivation record
SMF_120_1 DRLRS121 Server activity record
SMF_120_2 DRLRS122 WebSphere Application Server container activity record
SMF_120_3 DRLRS123 Server interval record
SMF_120_4 DRLRS124 WebSphere Application Server container interval record
SMF_120_5 DRLRSJWA J2EE container activity record
SMF_120_6 DRLRSJWI J2EE container interval record
SMF_120_7 DRLRSJWA Web container activity record
SMF_120_8 DRLRSJWI Web container interval record
SMF_120_9 DRLRS129 Request Activity record
SMF_120_10 DRLRS12A Outbound Request record
SMF_120_11 DRLRSRWL Liberty request record
SMF_120_12 DRLRSJWL Liberty Java batch job record
SMF_123_1 DRLR123 z/OS Connect EE Record

SMF_123_1_V2 DRLR123 z/OS Connect EE Record V2
SMF_123_2_V2 DRLR123 z/OS Connect EE Record ST2 V2
SMF_194 DRLRS194 TS7700 Virtualization Engine statistics record
SMF_IXFP_01 DRLRIXFP IXFP subsystem performance
SMF_IXFP_02 DRLRIXFP IXFP channel interface statistics
SMF_IXFP_03 DRLRIXFP IXFP functional device performance
SMF_IXFP_04 DRLRIXFP IXFP device module performance
SMF_IXFP_05 DRLRIXFP IXFP deleted data space release
SMF_IXFP_06 DRLRIXFP IXFP snapshot event data
SMF_IXFP_07 DRLRIXFP IXFP space utilization record
SMF_IXFP_08 DRLRIXFP IXFP snapshot extended event data record

These records are user-defined; that is, they are not part of the standard IBM records in the range 0-127.
However, they are written by IBM licensed programs.
The default record numbers are provided within parentheses.

Record name Member name Description


SMF_CACHE_03 DRLRS245 Cache RMF Reporter, 3990 model 03 (245)
SMF_CACHE_06 DRLRS245 Cache RMF Reporter, 3990 model 06 (245)
SMF_CACHE_13 DRLRS245 Cache RMF Reporter, 3880 model 13 (245)
SMF_CACHE_23 DRLRS245 Cache RMF Reporter, 3880 model 23 (245)
SMF_FTP DRLRSFTP NetView File Transfer Program (FTP) log record (252)

DFSMS/RMM records
Record name Member name Description
DFRMM_VOLUME DRLRRMMV Extract file volume record
DFRMM_RACK DRLRRMMR Extract file rack number record
DFRMM_SLBIN DRLRRMMS Extract file storage location bin record
DFRMM_PRODUCT DRLRRMMP Extract file product record
DFRMM_VRS DRLRRMMK Extract file VRS record
DFRMM_OWNER DRLRRMMO Extract file owner record
DFRMM_DATASET DRLRRMMD Extract file data set record

IMS SLDS records


These records come from the IMS recovery log.
No reliable release indicators exist in the IMS records, so one log definition exists for each IMS release
supported. The log and record names contain Vnn, where nn identifies the IMS version and release: B1 for
IMS V11, C1 for IMS V12, D1 for IMS V13, E1 for IMS V14, and F1 for IMS V15.1.
The records are described in IMS mapping macros.


Record name Member name Description


IMS_Vnn0_01 DRLRInnS Message Queue record (message received from a CNT)
IMS_Vnn0_02 DRLRInnS IMS command record
IMS_Vnn0_03 DRLRInnS Message Queue record (message received from an SMB or IMS)
IMS_Vnn0_06 DRLRInnS IMS event accounting record
IMS_Vnn0_07 DRLRInnS Program termination accounting record
IMS_Vnn0_08 DRLRInnS Program schedule record
IMS_Vnn0_10 DRLRInnS Security violation record
IMS_Vnn0_11 DRLRInnS Start of conversation record
IMS_Vnn0_12 DRLRInnS End of conversation record
IMS_Vnn0_13 DRLRInnS SPA insert record
IMS_Vnn0_16 DRLRInnS Sign on/off record
IMS_Vnn0_18 DRLRInnS Extended checkpoint record
IMS_Vnn0_20 DRLRInnS Database open record
IMS_Vnn0_21 DRLRInnS Database close record
IMS_Vnn0_24 DRLRInnS Database error record
IMS_Vnn0_30 DRLRInnS Message queue prefix changed record
IMS_Vnn0_31 DRLRInnS Message queue GU record
IMS_Vnn0_32 DRLRInnS Message queue reject record
IMS_Vnn0_33 DRLRInnS Message queue DRRN free record
IMS_Vnn0_34 DRLRInnS Message queue cancel record
IMS_Vnn0_35 DRLRInnS Message queue enqueue record
IMS_Vnn0_36 DRLRInnS Message queue dequeue record
IMS_Vnn0_37 DRLRInnS Message queue syncpoint transfer record
IMS_Vnn0_38 DRLRInnS Message queue syncpoint fail record
IMS_Vnn0_4C DRLRInnS Program/Database start/stop record
IMS_Vnn0_400D DRLRInnS Checkpoint CCB record
IMS_Vnn0_400E DRLRInnS Checkpoint SPA record
IMS_Vnn0_4001 DRLRInnS Checkpoint begin
IMS_Vnn0_4002 DRLRInnS Checkpoint message queue record
IMS_Vnn0_4003 DRLRInnS Checkpoint CNT record
IMS_Vnn0_4004 DRLRInnS Checkpoint SMB record
IMS_Vnn0_4005 DRLRInnS Checkpoint CTB record
IMS_Vnn0_4006 DRLRInnS Checkpoint DMB record
IMS_Vnn0_4007 DRLRInnS Checkpoint PSB record
IMS_Vnn0_4008 DRLRInnS Checkpoint CLB record
IMS_Vnn0_4014 DRLRInnS Checkpoint SPA QB record
IMS_Vnn0_4015 DRLRInnS Checkpoint EQE record
IMS_Vnn0_4020 DRLRInnS Checkpoint CIB record

IMS_Vnn0_4021 DRLRInnS Checkpoint VTCB record
IMS_Vnn0_4070 DRLRInnS Checkpoint MSDB begin
IMS_Vnn0_4071 DRLRInnS Checkpoint MSDB ECNT record
IMS_Vnn0_4072 DRLRInnS Checkpoint MSDB header
IMS_Vnn0_4073 DRLRInnS Checkpoint MSDB pagefixed
IMS_Vnn0_4074 DRLRInnS Checkpoint MSDB pageable
IMS_Vnn0_4079 DRLRInnS Checkpoint MSDB end
IMS_Vnn0_4080 DRLRInnS Checkpoint Fast Path begin
IMS_Vnn0_4081 DRLRInnS Checkpoint Fast Path ECNT record
IMS_Vnn0_4082 DRLRInnS Checkpoint Fast Path EMHB record
IMS_Vnn0_4083 DRLRInnS Checkpoint Fast Path RCTE record
IMS_Vnn0_4084 DRLRInnS Checkpoint Fast Path DMCB/DMAC record
IMS_Vnn0_4085 DRLRInnS Checkpoint Fast Path MTO buffer record
IMS_Vnn0_4086 DRLRInnS Checkpoint Fast Path DMHR/DEDB buffer record
IMS_Vnn0_4087 DRLRInnS Checkpoint Fast Path ADSC record
IMS_Vnn0_4088 DRLRInnS Checkpoint Fast Path IEEQE record
IMS_Vnn0_4089 DRLRInnS Checkpoint Fast Path end
IMS_Vnn0_4098 DRLRInnS Checkpoint end blocks record
IMS_Vnn0_4099 DRLRInnS Checkpoint end queues record
IMS_Vnn0_41 DRLRInnS Checkpoint batch record
IMS_Vnn0_42 DRLRInnS Log buffer control record
IMS_Vnn0_43 DRLRInnS Log data set control record
IMS_Vnn0_45FF DRLRInnS End of statistics
IMS_Vnn0_450A DRLRInnS Statistics latch record
IMS_Vnn0_450B DRLRInnS Statistics dispatch storage record
IMS_Vnn0_450C DRLRInnS Statistics DFSCBT00 storage record
IMS_Vnn0_450D DRLRInnS Statistics RecAny pool record
IMS_Vnn0_450E DRLRInnS Statistics fixed pools storage record
IMS_Vnn0_450F DRLRInnS Dispatcher statistics record
IMS_Vnn0_4502 DRLRInnS Statistics queue pool record
IMS_Vnn0_4503 DRLRInnS Statistics format buffer pool record
IMS_Vnn0_4504 DRLRInnS Statistics database buffer pool
IMS_Vnn0_4505 DRLRInnS Statistics main pools record
IMS_Vnn0_4506 DRLRInnS Statistics scheduling stats record
IMS_Vnn0_4507 DRLRInnS Statistics logger record
IMS_Vnn0_4508 DRLRInnS Statistics VSAM subpool record
IMS_Vnn0_4509 DRLRInnS Statistics program isolation record
IMS_Vnn0_47 DRLRInnS Statistics active region record

IMS_Vnn0_48 DRLRInnS OLDS padding record
IMS_Vnn0_5050 DRLRInnS Full function database update undo/redo successful record
IMS_Vnn0_5051 DRLRInnS Full function database update unsuccessful record
IMS_Vnn0_5052 DRLRInnS Full function database update undo KSDS insert record
IMS_Vnn0_5501FE00 DRLRInnS External sub-system Db2 snap in doubt record
IMS_Vnn0_56 DRLRInnS External sub-system record
IMS_Vnn0_5901 DRLRInnS EMH input record
IMS_Vnn0_5903 DRLRInnS EMH output record
IMS_Vnn0_5920 DRLRInnS Fast path MSDB change record
IMS_Vnn0_5921 DRLRInnS Fast path DEDB area data set open record
IMS_Vnn0_5922 DRLRInnS Fast path DEDB area data set close record
IMS_Vnn0_5923 DRLRInnS Fast path DEDB area data set status record
IMS_Vnn0_5924 DRLRInnS Fast path DEDB area data set EQE creation record
IMS_Vnn0_5936 DRLRInnS EMH dequeue record
IMS_Vnn0_5937 DRLRInnS EMH FP syncpoint record
IMS_Vnn0_5938 DRLRInnS EMH FP syncpoint failure record
IMS_Vnn0_5950 DRLRInnS Fast Path database update record
IMS_Vnn0_5953 DRLRInnS Fast Path database update (utilities) record
IMS_Vnn0_5954 DRLRInnS Fast Path database DEDB open record
IMS_Vnn0_5955 DRLRInnS Fast Path sequential dependent syncpoint record
IMS_Vnn0_5957 DRLRInnS Fast Path database DMAC record
IMS_Vnn0_5970 DRLRInnS Fast Path hot standby MSDB relocation record
IMS_Vnn0_67 DRLRInnS Communications trace, DMHR on I/O error and snap trace records
IMS_Vnn0_67FA DRLRInnS Trace table log record
IMS_Vnn0_7201 DRLRInnS ETO user create record
IMS_Vnn0_7202 DRLRInnS ETO user delete record
IMS_Vnn0_7203 DRLRInnS ETO user modify record
IMS_Vnn0_7204 DRLRInnS ETO lterm addition record

DCOLLECT records
These records are produced by the DFP DCOLLECT utility.
For a description of these records, refer to z/OS DFSMS: Access Method Services for Catalog.

Record name Member name Description


DCOLLECT_A DRLRDCOA VSAM base cluster association name
DCOLLECT_AG DRLRDCAG Aggregate Group information
DCOLLECT_B DRLRDCOB Data set backup version information
DCOLLECT_BC DRLRDCBC Base Configuration information
DCOLLECT_C DRLRDCOC DASD capacity planning information

DCOLLECT_D DRLRDCOD Active data set information
DCOLLECT_DC DRLRDCDC Data Class construct information
DCOLLECT_DR DRLRDCDR Optical Drive information
DCOLLECT_LB DRLRDCLB Optical Library information
DCOLLECT_M DRLRDCOM Migration data set information
DCOLLECT_MC DRLRDCMC Management Class construct information
DCOLLECT_SC DRLRDCSC Storage Class construct information
DCOLLECT_SG DRLRDCSG Storage Group construct information
DCOLLECT_T DRLRDCOT Tape capacity planning information
DCOLLECT_V DRLRDCOV Volume information
DCOLLECT_VL DRLRDCVL SMS Volume information

EREP records
For a description of these records, refer to the Environmental Record Editing and Printing Program (EREP)
User's Guide and Reference.

Record name Member name Description


EREP_30 DRLRE030 DASD long outboard record
EREP_36 DRLER036 VTAM long outboard record
EREP_50 DRLER050 IPL system initialization record

Linux on zSeries records


These records are produced by the zLinux programs on your zLinux nodes.

Record name Member name Description


ZLINUX_CPU DRLRZPCP zLinux CPU performance record
ZLINUX_DISK_FS DRLRZPDI zLinux disk space performance record
ZLINUX_DISKIO DRLRZPIO zLinux disk I/O performance record
ZLINUX_PAGING DRLRZPPA zLinux paging space performance record
ZLINUX_HARDCONF DRLRZCNF zLinux hardware configuration record
ZLINUX_SOFTCONF DRLRZCNF zLinux software configuration record
ZLINUX_USR_CMD DRLRZACO zLinux process/command accounting record
ZLINUX_WTMP_INFO DRLRZMTP zLinux connect accounting record
ZLINUX_REC_PI DRLRLNX1 PI log record reformatted to fixed layout
ZLINUX_REC_DF DRLRLNX1 DF log record reformatted to fixed layout
ZLINUX_REC_WW DRLRLNX1 WW log record reformatted to fixed layout
ZLINUX_REC_TO DRLRLNX1 TO log record reformatted to fixed layout

RACF records
These records come from the RACF Database Unload utility output that contains RACF configuration data.


For a description of these records, refer to RACF Macros and Interfaces.

Record name Member name Description


RACF_100 DRLRR100 Group basic data
RACF_200 DRLRR200 User basic data
RACF_205 DRLRR205 User connect data
RACF_400 DRLRR400 Data set basic data
RACF_402 DRLRR402 Data set conditional access
RACF_404 DRLRR404 Data set access
RACF_500 DRLRR500 General resource basic data
RACF_505 DRLRR505 General resource access
RACF_507 DRLRR507 General resource conditional access

Tivoli Workload Scheduler for z/OS (OPC) records


These records come from the OPC track log.
For a description of these records, refer to the Tivoli Workload Scheduler: Diagnosis Guide and Reference.

Record name Member name Description


OPC_03_P DRLROP03 OPC current plan operation
OPC_03_C DRLROP03 OPC current plan occurrence
OPC_03_3 DRLROP03 OPC current plan system automation
OPC_04 DRLROP04 OPC current plan job name table
OPC_23 DRLROP23 OPC operation event
OPC_24 DRLROP24 OPC MCP event
OPC_27 DRLROP27 OPC missed feedback
OPC_29 DRLROP29 OPC auto tracked event

VM accounting records
For a description of these records, refer to z/VM: CP Planning and Administration.

Record name Member name Description


VMACCT_01 DRLRVA01 Virtual machine resource use
VMACCT_02 DRLRVA02 Dedicated devices
VMACCT_03 DRLRVA03 Temporary disk space
VMACCT_04 DRLRVA04 LOGON or AUTOLOG with invalid password
VMACCT_05 DRLRVA05 Successful LINK to protected minidisk
VMACCT_06 DRLRVA06 LINK with invalid password
VMACCT_07 DRLRVA07 Log off from VSCS-controlled device
VMACCT_08 DRLRVA08 Disconnect or log off

VMPRF records
For a description of these records, refer to the VMPRF User's Guide and Reference.


Record name Member name Description


VMPRF_01 DRLRVM01 VMPRF system data
VMPRF_02 DRLRVM02 VMPRF processor data
VMPRF_11 DRLRVM11 VMPRF configuration data
VMPRF_41 DRLRVM41 VMPRF user data
VMPRF_61 DRLRVM61 VMPRF DASD data

z/VM Performance Toolkit records


For a description of these records, refer to the z/VM Performance Toolkit manual.

Record name Member name Description


VMPERFT_00 DRLRPT00 System configuration data
VMPERFT_01 DRLRPT01 General system load data
VMPERFT_02 DRLRPT02 Processor load data
VMPERFT_03 DRLRPT03 Logical processor load data (LPAR only)
VMPERFT_04 DRLRPT04 Minidisk cache data
VMPERFT_05 DRLRPT05 CP services activity data
VMPERFT_06 DRLRPT06 Channel busy (HF sampling)
VMPERFT_07 DRLRPT07 Channel measurement facility data
VMPERFT_08 DRLRPT08 Extended channel measurement facility data
VMPERFT_3A DRLRPT3A Overall user transaction data
VMPERFT_3C DRLRPT3C Shared segment data
VMPERFT_3E DRLRPT3E Shared data spaces
VMPERFT_41 DRLRPT41 User resource usage and wait states
VMPERFT_42 DRLRPT42 User class resource usage and wait states (same layout as FC41)
VMPERFT_43 DRLRPT43 System totals for user resource usage and wait states (same layout as FC41)
VMPERFT_44 DRLRPT44 User transactions and response time
VMPERFT_45 DRLRPT45 User class transactions and response time data (same layout as FC44)
VMPERFT_46 DRLRPT46 System totals for user transactions and response time data
VMPERFT_51 DRLRPT51 I/O processor activity data
VMPERFT_55 DRLRPT55 Virtual switch records
VMPERFT_61 DRLRPT61 General DASD data
VMPERFT_65 DRLRPT65 DASD cache data
VMPERFT_68 DRLRPT68 DASD CP owned (system areas)
VMPERFT_6F DRLRPT6F SCSI device records
VMPERFT_6D DRLRPT6D Queued Direct Input Output (QDIO) support
VMPERFT_71 DRLRPT71 DASD SEEKS data
VMPERFT_A2 DRLRPTA2 SFS and BFS server data
VMPERFT_A4 DRLRPTA4 Multitasking users data
VMPERFT_A6 DRLRPTA6 TCP/IP server data

VMPERFT_A7 DRLRPTA7 TCP/IP links data
VMPERFT_A8 DRLRPTA8 Reusable server kernel summary data
VMPERFT_A9 DRLRPTA9 Linux application data

Administration dialog options and commands


This chapter describes actions you can access from primary windows in the IBM Z Performance and
Capacity Analytics administration dialog. These actions include dialog window pull-downs and commands
you issue from the command line. These sections describe the actions:
• “IBM Z Performance and Capacity Analytics dialog options” on page 301
• “IBM Z Performance and Capacity Analytics commands” on page 307

IBM Z Performance and Capacity Analytics dialog options


This section describes the menu bar options for the IBM Z Performance and Capacity Analytics windows.
Under each menu bar option, there is a list of pull-down options available, with references to where the
pull-down options are described.

Administration window

Options Other Utilities Help


------------------------------------------------------------------------------
IBM Z Performance and Capacity Analytics Administration
Option ===> __________________________________________________________________

1 System Perform system tasks System ID . . : AUS1


2 Components Install components Db2 Subsystem : DEC1
3 Logs Show installed log objects Db2 plan name : DRLPLAN
4 Tables Show installed data tables System tables : DRLSYSYY
5 Reports Run reports Data tables . : DRLYY

F1=Help F2=Split F3=Exit F9=Swap F10=Actions F12=Cancel

The menu bar options of the Administration window are:


Options

_ 1. Dialog parameters...
2. Reporting dialog defaults...
3. Exit

Dialog parameters
See “Dialog parameters - variables and fields” on page 115.
Reporting dialog defaults
Refer to the Guide to Reporting for more information.
Exit
Returns to the previous window.


Other

_ 1. QMF
2. DB2I
3. ISPF/PDF
4. Process IZPCA statements...
5. Messages...

QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.
Process IZPCA statements
See “Working with fields in a record definition” on page 196.
Messages
Refer to the Guide to Reporting for more information.
Utilities

_ 1. Network
2. Workstation interface
3. Generate problem records...
4. System Diagnostics
5. TPM Extract
6. Search installed objects

Network
Refer to the Network Performance Feature Installation and Administration manual.
Generate problem records
See “Administering problem records” on page 166.
System Diagnostics
Refer to the topic "System Diagnostics" in the Messages and Problem Determination manual.
TPM Extract
Extracts usage data from IBM Z Performance and Capacity Analytics data tables; the extracted data
can then be imported into Tivoli Performance Modeller.
Search installed objects
Utility for searching installed component objects such as table columns, table comments, records,
updates, and reports.
Help

_ 1. Using help
2. General help
3. Keys help
4. Product information

Using help
Refer to the Guide to Reporting for more information.
General help
Refer to the Guide to Reporting for more information.
Keys help
Refer to the Guide to Reporting for more information.


Product information
Displays IBM Z Performance and Capacity Analytics copyright and release information.

Components window

Component Space Other Help


-------------------------------------------------------------------------
Components Row 1 to 32 of 92
Command ===> ____________________________________________________________

Select one or more components. Then press Enter to Open component.

/ Components Status Date


_ z/OS Performance Management (MVSPM) Installed 2019-08-29

The menu bar options of the Components window are:


Component
New
See “Creating a component” on page 182.
Open component
See “Viewing objects in a component” on page 179.
Install
See “Installing a component” on page 169.
Uninstall
See “Uninstalling a component” on page 176.
Delete
See “Deleting a component” on page 182.
Print list
See “Printing a list of IBM Z Performance and Capacity Analytics tables” on page 231 for a
description of a similar action, printing a list of tables.
Show user objects
See “Controlling objects that you have modified” on page 178.
Show excluded
See “Controlling objects that you have modified” on page 178.
Exit
Saves changes and returns to the previous window.
Space
Table spaces
See “Installing a component” on page 169.
Indexes
See “Installing a component” on page 169.
Other
QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.
Process IBM Z Performance and Capacity Analytics statements
See “Working with fields in a record definition” on page 196.


Messages
Refer to the Guide to Reporting for more information.
Help
As for Help on the Administration window (see “Help” on page 302).

Logs window

Log Utilities View Other Help


--------------------------------------------------------------------------
Logs Row 1 to 3 of 3
Command ===> ____________________________________________________________

Select a log. Then press Enter to display record definitions.

/ Logs Description
_ DCOLLECT DFSMS DCOLLECT log

The menu bar options of the Logs window are:


Log
New
See “Creating a log definition” on page 193.
Open log definition
See “Viewing and modifying a log definition” on page 191.
Open record definitions
See “Viewing and modifying a record definition” on page 194.
Open collected log data sets
See “Viewing a list of log data sets collected” on page 184.
Open Log Data Manager
See “Working with the log data manager option” on page 238.
Delete
See “Deleting a log definition” on page 193.
Save definition
See “Saving a table definition in a data set” on page 232 for a description of a similar action,
saving definitions for tables.
Print list
See “Printing a list of IBM Z Performance and Capacity Analytics tables” on page 231 for a
description of a similar action, printing a list of tables.
Exit
Saves changes and returns to the previous window.
Utilities
Collect
See “Collecting data from a log into Db2 tables” on page 186.
Display log
See “Displaying the contents of a log” on page 188.
Show log statistics
See “Displaying log statistics” on page 187.
View
All
Lists all logs in the Logs window.
Some
Restricts the list of logs displayed in the Logs window when you specify selection criteria.


Other
QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.
Process IBM Z Performance and Capacity Analytics statements
See “Working with fields in a record definition” on page 196.
Messages
Refer to the Guide to Reporting for more information.
Help
As for Help on the Administration window (see “Help” on page 302).

Tables window

Table Maintenance Utilities Edit View Other Help


--------------------------------------------------------------------------
Tables Row 1 to 33 of 161
Command ===> ____________________________________________________________

/ Tables Prefix Type


_ AGGR_VALUE DRLSYSYY TABLE
_ GENERATE_KEYS DRLSYSYY TABLE
_ GENERATE_PROFILES DRLSYSYY TABLE
_ KPMZ_CPUMF_PT_T DRLYY TABLE

The menu bar options of the Tables window are:


Table
New
See “Creating a table” on page 232.
Open table definition
See “Opening a table to display columns” on page 216.
Open updates
See “Displaying and modifying update definitions of a table” on page 220.
Open purge conditions
See “Displaying and editing the purge condition of a table” on page 225.
Open table space
See “Displaying and modifying a table or index space” on page 227.
Delete
See “Deleting a table or view” on page 234.
Save definition
See “Saving a table definition in a data set” on page 232.
Print list
See “Printing a list of IBM Z Performance and Capacity Analytics tables” on page 231.
Exit
Saves changes and returns to the previous window.
Maintenance
Table space
See “Displaying and modifying a table or index space” on page 227.


Index and index space


See “Displaying and modifying a table or index space” on page 227.
Utilities
Display
See “Displaying the contents of a table” on page 203.
Show size
See “Showing the size of a table” on page 206.
Import
See “Importing the contents of an IXF file to a table” on page 209. If your installation does not use
QMF, this item is not selectable.
Export
See “Exporting table data to an IXF file” on page 210. If your installation does not use QMF, this
item is not selectable.
Grant
See “Administering user access to tables” on page 236.
Revoke
See “Administering user access to tables” on page 236.
Recalculate
See “Recalculating the contents of a table” on page 207.
Purge
See “Purging a table” on page 210.
Unload
See “Unloading and loading tables” on page 211.
Load
See “Unloading and loading tables” on page 211.
Edit
Add rows
See “Editing the contents of a table” on page 204. If your installation does not use QMF, this item
is not selectable.
Change rows
See “Editing the contents of a table” on page 204. If your installation does not use QMF, this item
is not selectable.
ISPF editor
See “Editing the contents of a table” on page 204.
View
All
See “Listing a subset of tables in the Tables window” on page 232.
Some
See “Listing a subset of tables in the Tables window” on page 232.
Other
QMF
Refer to the Guide to Reporting for more information. If your installation does not use QMF, this
item is not selectable.
DB2I
See “Using available tools to work with the IBM Z Performance and Capacity Analytics database”
on page 159.
ISPF/PDF
Displays the ISPF/PDF primary menu.


Process IBM Z Performance and Capacity Analytics statements


See “Working with fields in a record definition” on page 196.
Messages
Refer to the Guide to Reporting for more information.
Help
As for Help on the Administration window (see “Help” on page 302).

IBM Z Performance and Capacity Analytics commands


You can immediately execute an action anywhere in an IBM Z Performance and Capacity Analytics
dialog by typing these commands on the command line (uppercase letters show the abbreviation for the
command):
COMPonen (see Note)
Displays the Components window.
DB2I
Starts a DATABASE 2 Interactive (DB2I) facility session and displays its primary menu.
DISPLay RECORD record_type (see Note)
Lets you identify a log data set in the Record Selection window from which IBM Z Performance and
Capacity Analytics displays records of the specified type in the Record Data window.
DISPLay report_ID
Displays the specified report from the Reports window.
DISPLay REPort report_ID
Displays the specified report. By default, report IDs are listed in the IBM Z Performance and Capacity
Analytics Report window next to their corresponding report descriptions. You can toggle the display to
show either the report IDs or the report types and owners by pressing F11.
If you do not use a prefix for the report ID (prefix.report_ID), IBM Z Performance and Capacity
Analytics assumes the report is public. Otherwise, the prefix must be the owner of the private report.
DISPLay TABle table_name (see Note)
Displays the specified table. If you do not qualify table_name with a prefix, IBM Z Performance and
Capacity Analytics assumes the prefix given in the Other table prefix field of the Dialog Parameters
window. For example, if that prefix is DRL, these two commands are equivalent:

DISPL TAB MVS_SYSTEM_H
DISPL TAB DRL.MVS_SYSTEM_H

You can also specify the prefix explicitly, for example:

DISPL TAB DRLSYS.DRLTABLES

DISPLay table_name (see Note)
Displays the specified table from the Tables window.
DRLESTRA
Displays the Set/Reset Trace Options window.
HELP
Displays general help or, if a message appears, help for the message.
ISPF
Displays the ISPF primary menu.
LOcate argument
In an IBM Z Performance and Capacity Analytics window, locates the first row that starts with
argument in the column that was last sorted.
LOGS (see Note)
Displays the Logs window.
PDF
Displays the ISPF primary menu.


QMF
If your installation uses QMF, this command starts QMF and displays either its SQL primary window or
its prompted query primary menu.
REPORTs
Starts the reporting dialog.
SOrt column_name|position ASC|DES
Sorts an IBM Z Performance and Capacity Analytics list by the column you specify as column_name in
either ascending or descending order. (You can also sort by column number by specifying the number
of the column instead of the name. The first column after the selection field column on the left is
column 1.)
SYStem (see Note)
Displays the System window.
TABle (see Note)
Displays the Tables window.
Note: This command is not available in end-user mode from the reporting dialog.

Administration reports
This chapter describes the administration reports that are created when you create or update the IBM Z
Performance and Capacity Analytics system tables. The reports listed in this chapter are the following:

3270 Reports
• “PRA001 - Indexspace Cross-Reference” on page 309
• “PRA002 - Actual Tablespace Space Allocation” on page 310
• “PRA003 - Table Purge Condition” on page 311
• “PRA004 - Table Structure with Comments” on page 312
• “PRA005 - Table Names with Comments” on page 313
• “PRA006 - Object Change Level” on page 313
• “PRA007 - Collected Log Data Sets” on page 314
• “PRA008 - Components and Subcomponents” on page 315
• “PRA009 - Tablespace Allocation” on page 316
• “PRA010 - Update Definitions” on page 317
• “PRA011 - Update Details” on page 318
• “PRA012 - Table Name to Tablespace Cross-Reference ” on page 319
• “PRA013 - Tablespace to Table Name Cross-Reference ” on page 320
• “PRA014 - System Tables” on page 321
• “PRA015 - Non-System Tables Installed” on page 322

Cognos Reports
• “PRA001 - Indexspace Cross-Reference” on page 323
• “PRA002 - Actual Tablespace Space Allocation” on page 324
• “PRA003 - Table Purge Condition” on page 325
• “PRA004 - Table Structure with Comments” on page 326
• “PRA005 - Table Names with Comments” on page 327
• “PRA006 - Object Change Level” on page 328
• “PRA007 - Collected Log Data Sets” on page 330
• “PRA008 - Components and Subcomponents” on page 331

• “PRA009 - Tablespace Allocation” on page 332
• “PRA010 - Update Definitions” on page 333
• “PRA011 - Update Details” on page 335
• “PRA012 - Table Name to Tablespace Cross-Reference ” on page 336
• “PRA013 - Tablespace to Table Name Cross-Reference ” on page 337
• “PRA014 - System Tables” on page 339
• “PRA015 - Non-System Tables Installed” on page 340

3270 reports
PRA001 - Indexspace Cross-Reference
The PRA001 report provides a cross-reference between index spaces and indexes that are present in the
IBM Z Performance and Capacity Analytics environment at the time of running the report. This report
enables you to extract the real name of an index, so that you can locate the index in the administration
dialog and adjust its space allocation if required.
This information identifies the report:
Report ID
PRA001
Report group
ADMIN
Reports Source
DRLINDEXES

Figure 141. Indexspace Cross-Reference report

The report contains the following information:

Chapter 7. Administration reference 309


PRA002 - Actual Tablespace Allocation

Indexspace
The name of the index space whose index name has been extracted. This is either the name
associated with a single index space or the complete cross reference between index and index space
names for all indexes.
Index Name
The name of the index associated with the indexspace.
For information about:
• The DRLINDEXES system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index
space” on page 227.

PRA002 - Actual Tablespace Space Allocation


The PRA002 report shows the actual space allocated to tables. Use the information in this report,
together with the information in PRA003, to estimate future space requirements.
This information identifies the report:
Report ID
PRA002
Report group
ADMIN
Reports Source
DRLTABLESPACE

Figure 142. Actual Tablespace Space Allocation report

The report contains the following information:


Tablespace Name
The name of the table space whose space allocation has been extracted.


Space Allocated
The SPACE value as reported in the Db2 catalog (SYSIBM.SYSTABLESPACE table). The SPACE column
contains data only if the STOSPACE utility has been run.
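
If the report shows no space figures, you can check the state of the catalog directly before running
STOSPACE. The following query is a sketch that assumes the IBM Z Performance and Capacity Analytics
database is named DRLDB at your site; substitute your database name:

  SELECT NAME, SPACE               -- SPACE is in kilobytes; 0 until STOSPACE has run
    FROM SYSIBM.SYSTABLESPACE
   WHERE DBNAME = 'DRLDB'
   ORDER BY SPACE DESC;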
For information about:
• The DRLTABLESPACE system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index space”
on page 227.
• The SYSTABLESPACE table, refer to the Db2 for z/OS: SQL Reference.

PRA003 - Table Purge Condition


This report shows a printable list of current purge conditions. It enables you to review purge criteria and
decide which adjustments to make without the need to use the online dialog.
This information identifies the report:
Report ID
PRA003
Report group
ADMIN
Reports Source
DRLPURGECOND
Variables
TABLE_NAME. Optional. You can select the purge condition associated with a single table, or accept
the default setting to obtain a complete list of current purge conditions.


Figure 143. Table Purge Condition report

The report contains the following information:


Table Prefix
The prefix of the table.
Table Name
The name of the table to which the purge condition applies.
Latest Change
The latest change level recorded in the SDRLDEFS.
PURGE CONDITION
The purge condition that applies to the table.


Date Installed
The date the purge condition was installed.
Creator
The ID of the person who installed the purge condition.
For information about:
• The DRLPURGCOND system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or edit purge conditions, see “Displaying and editing the purge condition of a table” on
page 225.
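
For reference, a purge condition is an SQL search condition that selects the rows to be deleted when the
table is purged. A hypothetical condition for a table keyed on DATE might look like this:

  -- Hypothetical purge condition: remove rows older than 30 days
  DATE < CURRENT DATE - 30 DAYS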

PRA004 - Table Structure with Comments


This report shows the column remarks for the selected table.
This information identifies the report:
Report ID
PRA004
Report group
ADMIN
Reports Source
DRLCOLUMNS
Variables
TABLE NAME.

Figure 144. Table Structure with Comments report

The report contains the following information:


Table Name
The table name.
ColNo
Column number within the table.
Keyseq
The column's numeric position within the table's primary key, or 0 if the column is not part of the
primary key.
Column Name
Table column name.
Nulls
Y if the column can contain nulls.


Length
Column length.
Comments
Column comment (if defined for the table column). The comment can be up to 255 characters long.

PRA005 - Table Names with Comments


This report lists all the tables with comments.
This information identifies the report:
Report ID
PRA005
Report group
ADMIN
Reports Source
DRLCOLUMNS

Figure 145. Table Names with Comments report

The report contains the following information:


Table Name
Name of the table.
Comments
Table comment. The comment can be up to 255 characters long.

PRA006 - Object Change Level


The PRA006 report displays a list of all objects together with their current change level.
This information identifies the report.
Report ID
PRA006
Report group
ADMIN
Reports Source
DRLCOMP_OBJECTS, DRLRECORDS, DRLRECORDPROCS, DRLLOGS, DRLUPDATES, DRLREPOSTS
Variables
COMPONENT. Optional. Type a component name if you want the user-modified objects for a single
component. If you do not specify a value, the complete list of user-modified objects is displayed for
all components.


Figure 146. Object Change Level report

The report contains the following information:


Component
Name of the component which the objects belong to.
Object Type
Type of object (Record, Update, Log…).
Object Name
Name of the object.
Member Name
Name of the member in the IBM Z Performance and Capacity Analytics libraries where the object
definition is stored.
Subcomponent
Subcomponent name, if any.
Latest Change
The latest change level of the object. You modify this field when you change any objects. It indicates
whether an object has been modified.
For information about:
• The DRLCOMP_OBJECTS, DRLRECORDS, DRLRECORDPROCS, DRLLOGS, DRLUPDATES, DRLREPOSTS
system tables, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.

PRA007 - Collected Log Data Sets


The PRA007 report provides the list of all the data sets that have been collected.
This information identifies the report.
Report ID
PRA007
Report group
ADMIN
Reports Source
DRLLOGDATASETS


Figure 147. Collected Log Data Sets report

The report contains the following information:


Log Name
Name of the log type.
Data Set Name
The name of the data set collected.
Complete
Y if the log was completely collected.
Number of Collects
The number of times this data set has been collected.
Time Collected
Date and time of the last collect of this data set.
First Timestamp
First timestamp located on the log data set.
Last Timestamp
Last timestamp located on the log data set.
User ID
The ID of the person who collected the log data sets.

PRA008 - Components and Subcomponents


The PRA008 report provides a list of all the components and subcomponents available. The report gives a
status of INSTALLED for each component or subcomponent that is installed and the date it was installed.
This information identifies the report.
Report ID
PRA008
Report group
ADMIN
Reports Source
DRLCOMPONENTS, DRLCOMP_PARTS


Figure 148. Components and Subcomponents report

The report contains the following information:


Component
Name of the component.
Component Status
The status of the component (INSTALLED or blank).
Subcomponent
Name of the subcomponent.
Subcomponent Status
The status of the subcomponent (INSTALLED or blank).
Date Installed
The date the component and/or subcomponent was installed.

PRA009 - Tablespace Allocation


The PRA009 report provides a list of all table spaces and the tables they contain.
This information identifies the report.
Report ID
PRA009
Report group
ADMIN
Reports Source
DRLTABLESPACE, DRLTABLES


Figure 149. Tablespace Allocation report

The report contains the following information:


Tablespace
Name of the tablespace.
Table Name
The table name(s) contained within the tablespace.
Space KB
Space occupied in KB.
Number of Partitions
The number of partitions in the tablespace.
Lock Rule
The lock size for the tablespace.
Page Size
The page size of the tablespace.
Max Lock
The maximum number of locks (LOCKMAX) for the tablespace.
Number of Tables
The number of tables in this tablespace.
Seg Size
The segment size.

PRA010 - Update Definitions


The PRA010 report provides a list of update definitions for all installed components.
This information identifies the report.
Report ID
PRA010
Report group
ADMIN
Reports Source
DRLUPDATES


Figure 150. Update Definitions report

The report contains the following information:


Update Name
Name of the update definition.
Latest Change
The last change level for the update definition.
Source Prefix
The prefix of the source when the source is another Db2 table.
Source Name
The name of the source. This is either a record definition or a Db2 table.
Target Prefix
The prefix of the target Db2 table.
Target name
The name of the target Db2 table.
Date Installed
The date the definition was installed.
Creator
The name of the installer.

PRA011 - Update Details


The PRA011 report provides a list of field update criteria for each field of an update definition.
This information identifies the report.
Report ID
PRA011
Report group
ADMIN
Reports Source
DRLUPDATECOLS, DRLEXPRESSIONS


Figure 151. Update Details report

The report contains the following information:


Update Name
Name of the update definition.
Update Col No
The relative position of the field in the update definition.
Column Name
The column name.
Column No
The column number in the target Db2 table.
Function
The function used to accumulate data into the column.
Expression No
The expression number in the DRLEXPRESSIONS table relative to the update name.
Expression
The update expression.

PRA012 - Table Name to Tablespace Cross-Reference


The PRA012 report provides a cross-reference list of Db2 table names to the tablespace the table is
defined in.
This information identifies the report.
Report ID
PRA012
Report group
ADMIN
Reports Source
DRLTABLES



Figure 152. Table Name to Tablespace Cross-Reference report

The report contains the following information:


Table Name
Name of the Db2 table.
Tablespace
The name of the tablespace that contains the table.
No Columns
The number of columns in the table.
No Pages
The number of pages the table occupies.
Creator
The name of the user who installed the table.
Space
The space occupied by the table.

PRA013 - Tablespace to Table Name Cross-Reference


The PRA013 report provides a cross-reference list of tablespaces to the Db2 table names that they contain.
This information identifies the report.
Report ID
PRA013
Report group
ADMIN
Reports Source
DRLTABLES



Figure 153. Tablespace to Table Name Cross-Reference report

The report contains the following information:


Tablespace
The name of the tablespace that contains the table.
Table Name
Name of the Db2 table.
No Columns
The number of columns in the table.
No Pages
The number of pages the table occupies.
Creator
The name of the user who installed the table.
Space
The space occupied by the table.

PRA014 - System Tables


The PRA014 report provides a list of all the system tables.
This information identifies the report.
Report ID
PRA014
Report group
ADMIN
Reports Source
DRLTABLES


Figure 154. System Tables report

The report contains the following information:


System Table Name
Name of the system table.
Prefix
The prefix used for the system tables.
Type
The type of table (table or view).

PRA015 - Non-System Tables Installed


The PRA015 report provides a list of all installed Db2 tables.
This information identifies the report.
Report ID
PRA015
Report group
ADMIN
Reports Source
DRLTABLES


Figure 155. Non-System Tables Installed report

The report contains the following information:


Table Name
Name of the Db2 table.
Prefix
The prefix used for the system tables.
Type
The type of table (table or view).

Cognos reports
PRA001 - Indexspace Cross-Reference
The PRA001 report provides a cross-reference between index spaces and indexes that are present in the
IBM Z Performance and Capacity Analytics environment at the time of running the report. This report
enables you to extract the real name of an index, so that you can locate the index in the administration
dialog and adjust its space allocation if required.
This information identifies the report:
Report ID
PRA001
Report group
ADMIN
Reports Source
DRLINDEXES


Variables
Indexspace. Optional.

Figure 156. Indexspace Cross-Reference report

The report contains the following information:


Indexspace
The name of the index space whose index name has been extracted. This is either the name
associated with a single index space or the complete cross reference between index and index space
names for all indexes.
Index Name
The name of the index associated with the indexspace.
For information about:
• The DRLINDEXES system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index
space” on page 227.

PRA002 - Actual Tablespace Space Allocation


The PRA002 report shows the actual space allocated to tables. Use the information in this report,
together with the information in PRA003, to estimate future space requirements.
This information identifies the report:
Report ID
PRA002
Report group
ADMIN
Reports Source
DRLTABLESPACE


Variables
Table space. Optional.

Figure 157. Actual Tablespace Space Allocation report

The report contains the following information:


Tablespace Name
The name of the table space whose space allocation has been extracted.
Space Allocated
The SPACE value as reported in the Db2 catalog (SYSIBM.SYSTABLESPACE table). The column
SPACE contains data only if the STOSPACE utility has been run.
For information about:
• The DRLTABLESPACE system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or modify tables or index spaces, see “Displaying and modifying a table or index space”
on page 227.
• The SYSTABLESPACE table, refer to the Db2 for z/OS: SQL Reference.

PRA003 - Table Purge Condition


This report shows a printable list of current purge conditions. It enables you to review purge criteria and
decide which adjustments to make without the need to use the online dialog.
This information identifies the report:
Report ID
PRA003
Report group
ADMIN


Reports Source
DRLPURGECOND
Variables
Table Name, Latest Changes and Creator are optional. You can select the purge condition associated
with a single table or accept the default setting to obtain a complete list of current purge conditions.


Figure 158. Table Purge Condition report

The report contains the following information:


Table Prefix
The prefix of the table.
Table Name
The name of the table to which the purge condition applies.
Latest Change
The latest change level recorded in the SDRLDEFS library.
Purge Condition
The purge condition that applies to the table.
Date Installed
The date the purge condition was installed.
Creator
The ID of the person who installed the purge condition.
For information about:
• The DRLPURGECOND system table, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.
• How to display or edit purge conditions, see “Displaying and editing the purge condition of a table” on
page 225.

PRA004 - Table Structure with Comments


This report shows the column remarks for the selected table.
This information identifies the report:


Report ID
PRA004
Report group
ADMIN
Reports Source
DRLCOLUMNS
Variables
Table Name. Optional.

Figure 159. Table Structure with Comments report

The report contains the following information:


Table Name
The table name.
ColNo
Column number within the table.
Keyseq
The column's numeric position within the table's primary key.
Column Name
Table column name.
Nulls
Y - the column may contain NULLS.
Length
Column length.
Comments
Column comment (if defined for the table column). It can be up to 255 characters long.

PRA005 - Table Names with Comments


This report lists all the tables with comments.
This information identifies the report:
Report ID
PRA005
Report group
ADMIN


Reports Source
DRLCOLUMNS
Variables
Table Name. Optional.

Figure 160. Table Names with Comments report

The report contains the following information:


Table Name
The table name.
Comments
Table comment. It can be up to 255 characters long.

PRA006 - Object Change Level


The PRA006 report displays a list of all objects and their current change level.
This information identifies the report.
Report ID
PRA006
Report group
ADMIN
Reports Source
DRLCOMP_OBJECTS, DRLRECORDS, DRLRECORDPROCS, DRLLOGS, DRLUPDATES, DRLREPOSTS
Variables
Component Name. Optional. Select a component name if you want the user-modified objects for
a single component. If you do not specify any value, the complete list of user-modified objects is
displayed for each installed component.


Figure 161. Object Change Level report

The report contains the following information:


Component
Name of the component which the objects belong to.
Object Type
Type of object (Record, Update, Log…).
Object Name
Name of the object.
Member Name
Name of the member in the IBM Z Performance and Capacity Analytics libraries where the object
definition is stored.
Subcomponent
Subcomponent name, if any.
Latest Change
The latest change level of the object. Modify this field when you change an object; it indicates
whether the object has been modified.
For information about:


• The DRLCOMP_OBJECTS, DRLRECORDS, DRLRECORDPROCS, DRLLOGS, DRLUPDATES, DRLREPOSTS
system tables, see “Views on Db2 and QMF tables” on page 273.
• How to run reports, see “Administering reports” on page 160.

PRA007 - Collected Log Data Sets


The PRA007 report provides the list of all the data sets that have been collected.
This information identifies the report.
Report ID
PRA007
Report group
ADMIN
Reports Source
DRLLOGDATASETS
Variables
Log Name and Data Set Name are optional.


Figure 162. Collected Log Data Sets report

The report contains the following information:


Log Name
Name of the log type.
Data Set Name
The name of the data set collected.
Complete
Y if the log was completely collected.
Number of Collects
The number of times this data set has been collected.
Time Collected
Date and time of the last collect of this data set.
First Timestamp
First timestamp located on the log data set.


Last Timestamp
Last timestamp located on the log data set.
User ID
The ID of the person who collected the log data sets.

PRA008 - Components and Subcomponents


The PRA008 report provides a list of all the components and subcomponents available. The report gives a
status of INSTALLED for each component or subcomponent that is installed and the date it was installed.
This information identifies the report.
Report ID
PRA008
Report group
ADMIN
Reports Source
DRLCOMPONENTS, DRLCOMP_PARTS
Variables
Component Name and Status are optional.

Figure 163. Components and Subcomponents report


The report contains the following information:


Component
Name of the component.
Component Status
The status of the component (INSTALLED or NOT INSTALLED).
Subcomponent
Name of the subcomponent.
Subcomponent Status
The status of the subcomponent (INSTALLED or NOT INSTALLED).
Date Installed
The date the component and/or subcomponent was installed.

PRA009 - Tablespace Allocation


The PRA009 report provides a list of all table spaces and the tables they contain.
This information identifies the report.
Report ID
PRA009
Report group
ADMIN
Reports Source
DRLTABLESPACE, DRLTABLES
Variables
Table Name. Optional.


Figure 164. Tablespace Allocation report

The report contains the following information:


Tablespace
Name of the tablespace.
Table Name
The table name(s) contained within the tablespace.
Space KB
Space occupied in KB.
Number of Partitions
The number of partitions in the tablespace.
Lock Rule
The lock size for the tablespace.
Page Size
The page size of the tablespace.
Max Lock
The maximum number of locks (LOCKMAX) allowed for the tablespace.
Number of Tables
The number of tables in this tablespace.
Seg Size
The segment size.

PRA010 - Update Definitions


The PRA010 report provides a list of update definitions for all installed components.
This information identifies the report.


Report ID
PRA010
Report group
ADMIN
Reports Source
DRLUPDATES
Variables
Source Prefix, Source Name, Target Name, Target Prefix and Creator are optional.


Figure 165. Update Definitions report

The report contains the following information:


Update Definition
Name of the update definition.
Latest Change
The last change level for the update definition.
Source Prefix
The prefix of the source when the source is another Db2 table.
Source Name
The name of the source. This is either a record definition or a Db2 table.
Target Prefix
The prefix of the target Db2 table.
Target name
The name of the target Db2 table.
Date Installed
The date the definition was installed.


Creator
The name of the installer.

PRA011 - Update Details


The PRA011 report provides a list of field update criteria for each field of an update definition.
This information identifies the report.
Report ID
PRA011
Report group
ADMIN
Reports Source
DRLUPDATECOLS, DRLEXPRESSIONS
Variables
Update Name. Required.

Figure 166. Update Details report

The report contains the following information:


Update Column Number
The relative position of the field in the update definition.
Column Name
The column name.
Column Number
The column number in the target Db2 table.
Function
The function used to accumulate data into the column.
Expression Number
The expression number in the DRLEXPRESSIONS table relative to the update name.
Expression
The update expression.


PRA012 - Table Name to Tablespace Cross-Reference


The PRA012 report provides a cross-reference list of Db2 table names to the tablespace the table is
defined in.
This information identifies the report.
Report ID
PRA012
Report group
ADMIN
Reports Source
DRLTABLES
Variables
Tablespace. Optional.

Figure 167. Table Name to Tablespace Cross-Reference report


The report contains the following information:


Table Name
Name of the Db2 table.
Tablespace
The name of the tablespace that contains the table.
No Columns
The number of columns in the table.
No Pages
The number of pages the table occupies.
Creator
The name of the user who installed the table.
Space
The space occupied by the table.

PRA013 - Tablespace to Table Name Cross-Reference


The PRA013 report provides a cross-reference list of tablespaces to the Db2 table names that they contain.
This information identifies the report.
Report ID
PRA013
Report group
ADMIN
Reports Source
DRLTABLES


Figure 168. Tablespace to Table Name Cross-Reference report

The report contains the following information:


Table Name
Name of the Db2 table.
Tablespace
The name of the tablespace that contains the table.
Number of Columns
The number of columns in the table.
Number of Pages
The number of pages the table occupies.
Creator
The name of the user who installed the table.
Space
The space occupied by the table.


PRA014 - System Tables


The PRA014 report provides a list of all the system tables.
This information identifies the report.
Report ID
PRA014
Report group
ADMIN
Reports Source
DRLTABLES
Variables
System Table Name. Optional.

Figure 169. System Tables report


The report contains the following information:


System Table Name
Name of the system table.
Prefix
The prefix used for the system tables.
Type
The type of table (table or view).

PRA015 - Non-System Tables Installed


The PRA015 report provides a list of all installed Db2 tables.
This information identifies the report.
Report ID
PRA015
Report group
ADMIN
Reports Source
DRLTABLES
Variables
Table Name. Optional.


Figure 170. Non-System Tables Installed report

The report contains the following information:


Table Name
Name of the Db2 table.
Prefix
The prefix used for the system tables.
Type
The type of table (table or view).


Using the REXX-SQL interface


This chapter contains General-use Programming Interface and Associated Guidance Information.
IBM Z Performance and Capacity Analytics provides a REXX-SQL interface through the DRL1SQLX module,
which supports:
• Loading a Db2 table into an array of REXX variables
• Using SQL EXECUTE IMMEDIATE to execute an argument string that is a valid SQL statement
For more information about Db2 terms and statements mentioned in this chapter, refer to the Db2 for
z/OS: SQL Reference.

Calling the DRL1SQLX module


The module derives its input data from the argument on the CALL instruction and from predefined REXX
variables. There are reserved REXX variables that the calling REXX exec defines before calling the module.
If a REXX exec passes an SQL SELECT statement as the argument, DRL1SQLX executes the SELECT and
returns table data in an array of REXX variables. The module can return any Db2 data type except
graphic strings.
The module return code result, set in the variable RESULT, is available to the calling REXX program.
The syntax for running the DRL1SQLX module is:

CALL DRL1SQLX {'INIT' | sql-statement | 'TERM'}

where:
INIT
Establishes a call attachment facility (CAF) connection to Db2 that leaves the connection open until
a DRL1SQLX TERM statement is executed. There is no implied COMMIT until the DRL1SQLX TERM
statement.
If the REXX program passes INIT as the argument for the CALL DRL1SQLX statement, the
connection remains open for each SQL statement call. The connection does not terminate until a
CALL DRL1SQLX TERM statement closes it.
If the REXX program does not pass INIT as the argument for the CALL DRL1SQLX statement,
the connection is opened at the beginning of each CALL DRL1SQLX sql-statement and closed at its
conclusion, which makes SQL ROLLBACK impossible.
If you are making more than three calls to DRL1SQLX, it is more efficient to use the CALL DRL1SQLX
INIT statement first.
sql-statement
An SQL SELECT or another SQL statement that can be executed with an EXECUTE IMMEDIATE
statement. DRL1SQLX appends the SQL statement to SQL EXECUTE IMMEDIATE and executes it.
TERM
Terminates an existing connection to Db2 and performs an implied COMMIT.
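For example, here is a minimal REXX sketch of the INIT/TERM pattern. The subsystem name DB2T and
the table DRL.MY_WORK_TABLE are illustrative assumptions, not supplied objects:

/* REXX - keep one CAF connection open across several calls      */
db2subs = 'DB2T'                  /* Db2 subsystem (assumed name) */

Call DRL1SQLX 'INIT'              /* connect; unit of work stays open    */
Call DRL1SQLX "DELETE FROM DRL.MY_WORK_TABLE" /* hypothetical table      */
If result > 4 Then                /* an SQL or other error occurred      */
  Call DRL1SQLX 'ROLLBACK'        /* possible only because INIT was used */
Call DRL1SQLX 'TERM'              /* disconnect; implied COMMIT          */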

Input REXX variables


The calling program can define these variables before calling DRL1SQLX:
DB2SUBS
The Db2 subsystem that DRL1SQLX addresses.


There is no default for this variable; it must be defined.


DB2PLAN
The name of the Db2 application plan. This variable should be coded only if the installation changed
the default plan name DRLPLAN when the IBM Z Performance and Capacity Analytics bind job was
run.
SQLSTEM
The stem of the REXX array that DRL1SQLX uses to return table values when the argument is an SQL
SELECT statement.
The stem has an initial value of SQLDATA.
SQLMAX
The maximum number of rows to fetch when the argument is an SQL SELECT statement.
SQLMAX has a default value of 5000. Pick an SQLMAX limit that protects you from runaway queries.
The maximum supported value is 99999999.

Output REXX variables


DRL1SQLX always sets these variables:
RESULT
The DRL1SQLX return code.
When the argument is an SQL SELECT, DRL1SQLX sets RESULT to 4 if the number of rows in the table
is greater than the value of SQLMAX. It issues a message, DRL1007W, to warn you of the condition but
completes the select, returning the number of rows specified in SQLMAX.
DRL1SQLX sets these return codes in RESULT:
0
Successful execution.
4
SQLCODE > 0, SQLMAX invalid or the SQLMAX limit was reached. The error message is in SQLMSG.
8
SQLCODE < 0 indicates an SQL error. The error message is in SQLMSG.
12
An error that is not an SQL error. The error message is in SQLMSG.
16
There was either insufficient REXX storage or a REXX variable that could not be set. The error
appears in SQLMSG, if possible.
20
The REXX communication routine IRXEXCOM could not be loaded. There is no indication of the
error in SQLMSG.
SQLCODE
The SQL return code.
This value is positive when there is an SQL warning and negative when there is an SQL error. It is
returned in combination with a RESULT of 4 or 8, exclusively.
SQLMSG.0
The number of different message values returned when RESULT > 0.
SQLMSG.1
The value of the first message returned when RESULT > 0.
Up to 5 messages can be returned.
SQLMSG.n
The value of the last message returned when RESULT > 0.
The value of n is the value of SQLMSG.0.


These variables are set by DRL1SQLX after a successful execution of an SQL SELECT statement. For each
variable below, sqlstem is the value of the SQLSTEM input variable, y is the column number, and z is the
row number:
sqlstem.NAME.0
The number of selected columns.
sqlstem.NAME.y
The names of the selected columns.
The column name of an expression is blank. Each value of y is a whole number from 1 through
sqlstem.NAME.0.
sqlstem.LENGTH.y
The maximum length of the value of the selected columns.
A column name can be longer than the value. Each value of y is a whole number from 1 through
sqlstem.NAME.0.
sqlstem.TYPE.y
The data types of the selected columns.
Each type is copied from the SQLTYPE field in the SQL descriptor area (SQLDA) and is a number
ranging from 384 to 501. Each value of y is a whole number from 1 through sqlstem.NAME.0.
sqlstem.0
The number of rows in the result table.
sqlstem.y.z
The value of the column.
Each value of y is a whole number from 1 through sqlstem.NAME.0.
Each value of z is a whole number from 1 through sqlstem.0.

Reserved REXX variable


DRL1SQLX always sets the variable SQLHANDLE on the INIT statement. It must not be reset except by
the TERM statement, which must be able to read the value set by the last INIT statement.
SQLHANDLE contains the handle returned when DRL1SQLX connects to Db2 with the INIT statement.


REXX example of calling DRL1SQLX


/**REXX**********************************************************/
/* Execute an SQL SELECT statement and display output           */
/****************************************************************/

sqlstmt = "SELECT *",
          "FROM DRL.MVS_SYSTEM_H",
          "WHERE DATE = '2000-05-02'"

db2subs = 'DB2T'             /* subsystem name        */
sqlstem = 'RES'              /* name of stem          */
sqlmax  = 100                /* limit on nbr of rows  */

Call DRL1SQLX sqlstmt        /* execute SQL statement */

Say 'DRL1SQLX return code:' result
Say 'SQL return code SQLCODE:' sqlcode

If sqlmsg.0 > 0 Then
  Do n = 1 To sqlmsg.0       /* up to 5 error msgs */
    Say sqlmsg.n
  End

If res.name.0 > 0 Then       /* number of columns */
  /**************************************************************/
  /* Display column names and values for all rows               */
  /**************************************************************/
  If res.0 > 0 Then          /* number of rows */
    Do z = 1 To res.0
      Say ' '
      Say 'Following values were returned for row 'z':'
      Do y = 1 To res.name.0
        Say res.name.y': 'res.y.z
      End
    End
  Else
    Say 'No rows were returned'
Exit

Figure 171. Example of REXX-SQL interface call

Using the IBM Db2 Analytics Accelerator


The IBM® Db2 Analytics Accelerator is a high-performance appliance that integrates business insights into
operational processes.

IBM Z Performance and Capacity Analytics includes Analytics Components that are designed to support
the IBM Db2 Analytics Accelerator. These components are based on existing non-Analytics components
that are modified to allow for the following functions:
• Store data directly to an IBM Db2 Analytics Accelerator, removing the need to store the data in Db2 for
z/OS®.
• Allow for more detailed timestamp level records to be stored.
• Allow for more CPU work to move from z/OS to the IBM Db2 Analytics Accelerator appliance.
• Reporting that makes use of the high query speeds of the IBM Db2 Analytics Accelerator.
The System Data Engine component of the IBM Z Common Data Provider is used to convert SMF log
data into data sets that contain the IBM Z Performance and Capacity Analytics components tables in Db2
internal format. The IBM Db2 Analytics Accelerator Loader for z/OS is then used to load the Db2 internal
format data sets directly into the IBM Db2 Analytics Accelerator.
The Analytics components comprise the following items:
• Analytics - z/OS Performance
• Analytics - Db2
• Analytics - KPM CICS®


• Analytics - KPM Db2
• Analytics - KPM z/OS

Relationship of Analytics Components to non-Analytics Components

The Analytics components are based on the following existing non-Analytics components:
Table 9. Relationship of Analytics components to non-Analytics components
Analytics                     Non-Analytics
Analytics - Db2               Db2
Analytics - KPM CICS          Key Performance Metrics - CICS
Analytics - KPM Db2           Key Performance Metrics - Db2
Analytics - KPM z/OS          Key Performance Metrics - z/OS
Analytics - z/OS Performance  z/OS Performance Management (MVSPM)

The Analytics components include Lookup tables that must be customized as per their equivalent Lookup
tables in the non-Analytics components:
Table 10. Relationship of Analytics Lookup table to non-Analytics Lookup table
Member name  Analytics Lookup table  non-Analytics Lookup table
DRLTA2AP     A_DB2_APPLICATION       DB2_APPLICATION
DRLTA2AC     A_DB2_ACCUMAC           DB2_ACCUMAC
DRLTALUG     A_USER_GROUP            USER_GROUP
DRLTALKP     A_KPM_THRESHOLDS_L      KPM_THRESHOLDS
DRLTALW2     A_WORKLOAD2_L           MVS_WORKLOAD2_TYPE
DRLTALDA     A_DEVICE_ADDR_L         MVSPM_DEVICE_ADDR
DRLTALUT     A_UNIT_TYPE_L           MVSPM_UNIT_TYPE
DRLTALMI     A_MIPS_L                MVS_MIPS_T
DRLTALSP     A_SYSPLEX_L             MVS_SYSPLEX
DRLTALWL     A_WORKLOAD_L            MVS_WORKLOAD_TYPE
DRLTALTR     A_TIME_RES_L            MVSPM_TIME_RES

The following table lists all the reports per Analytics component, and their equivalent non-Analytics
component reports.


Table 11. Relationship of Analytics component report to non-Analytics component report
Report name                                 Analytics Report ID  non-Analytics Report ID
Db2 Buffer Pool Exceptions                  ADB219               DB219
Db2 Buffer Pool Statistics, Detail          ADB215               DB215
Db2 Buffer Pool Statistics, Overview        ADB216               DB216
Db2 DBRMs Class 7,8 Times, Overview         ADB222               DB222
Db2 General Measure by Profile, Overview    ADB212               DB212
Db2 IDAA Statistics by Transaction, Detail  ADB244               DB244
Db2 Packages Class 7,8 Times, Overview      ADB221               DB221
Db2 Transaction Statistics, Detail          ADB204               DB204
Db2 Transaction Statistics, Overview        ADB205               DB205

The following table lists all the tables per Analytics component, and their equivalent non-Analytics
component tables.


Table 12. Relationship of Analytics component table to non-Analytics component table
Component  Type  Analytics component table  Equivalent to non-Analytics component table
Analytics - z/OS Table A_PM_CF_I MVSPM_CF_H
Performance A_PM_CF_LINK_I MVSPM_CF_LINK_H
A_PM_CF_PROC_I MVSPM_CF_PROC_H
A_PM_CF_REQ_I MVSPM_CF_REQUEST_H
A_PM_CF_CF_I MVSPM_CF_TO_CF_H
A_PM_XCF_MEMBER_I MVSPM_XCF_MEMBER_H
A_PM_XCF_PATH_I MVSPM_XCF_PATH_H
A_PM_XCF_SYS_I MVSPM_XCF_SYS_H
A_PM_OMVS_BUF_I MVSPM_OMVS_BUF_H
A_PM_OMVS_FILE_I MVSPM_OMVS_FILE_H
A_PM_OMVS_GHFS_I MVSPM_OMVS_GHFS_H
A_PM_OMVS_HFS_I MVSPM_OMVS_HFS_H
A_PM_OMVS_KERN_I MVSPM_OMVS_KERN_H
A_PM_OMVS_MOUNT_I MVSPM_OMVS_MOUNT_H
A_PM_SYS_CLUST_I MVSPM_CLUSTER_H
A_PM_SYS_CPU_I MVSPM_CPU_H
A_PM_SYS_CPUMT_I MVSPM_CPUMT_H
A_PM_SYS_ENQ_I MVSPM_ENQUEUE_H
A_PM_SYS_LPAR_I MVSPM_LPAR_H
A_PM_SYS_SYS_I MVSPM_SYSTEM_H
A_PM_SYS_PROD_I MVSPM_PROD_T
A_PM_SYS_PRDINT_I MVSPM_PROD_INT_T
A_PM_SYS_MSU_I MVSPM_LPAR_MSU_T
A_PM_WL_GOAL_I MVSPM_GOAL_ACT_H
A_PM_WL_SERVED_I MVSPM_WLM_SERVED_H
A_PM_WL_STATE_I MVSPM_WLM_STATE_H
A_PM_WL_WKLD_I MVSPM_WORKLOAD_H
A_PM_WL_WKLD2_I MVSPM_WORKLOAD2_H
A_PM_IO_DATASET_I MVSPM_DATASET_H
A_PM_IO_VOLUME_I MVSPM_VOLUME_H
A_PM_IO_LCU_I MVSPM_LCU_IO_H
A_PM_GS_BMF_I MVSPM_BMF_H
A_PM_GS_CACHE_I MVSPM_CACHE_H
A_PM_GS_PAGEDS_I MVSPM_PAGE_DS_H
A_PM_GS_PAGING_I MVSPM_PAGING_H
A_PM_GS_STORAGE_I MVSPM_STORAGE_H
A_PM_GS_STORCLS_I MVSPM_STORCLASS_H
A_PM_GS_SWAP_I MVSPM_SWAP_H
A_PM_GS_CACHESS_I MVSPM_CACHE_ESS_H
A_PM_VS_VLF_I MVSPM_VLF_H
A_PM_VS_CSASQA_I MVSPM_VS_CSASQA_H
A_PM_VS_PRIVATE_I MVSPM_VS_PRIVATE_H
A_PM_VS_SUBPOOL_I MVSPM_VS_SUBPOOL_H
A_PM_DEV_CHAN_I MVSPM_CHANNEL_H
A_PM_DEV_HSCHAN_I MVSPM_HS_CHAN_H
A_PM_DEV_AP_I MVSPM_DEVICE_AP_H
A_PM_DEV_DEVICE_I MVSPM_DEVICE_H
A_PM_DEV_FICON_I MVSPM_FICON_H
A_PM_DEV_RAID_I MVSPM_RAID_RANK_H
A_PM_DEV_ESSLNK_I MVSPM_ESSLINK_H
A_PM_DEV_ESSEXT_I MVSPM_ESS_EXTENT_H
A_PM_DEV_ESSRNK_I MVSPM_ESS_RANK_H
A_PM_DEV_PCIE_I MVSPM_PCIE_H
A_PM_CRYP_PCI_I MVSPM_CRYPTO_PCI_H
A_PM_CRYP_CCF_I MVSPM_CRYPTO_CCF_H
A_PM_APP_APPL_I MVSPM_APPL_H


Analytics - Db2 Table A_DB2_SYS_PARM_I DB2_SYS_PARAMETER
A_DB2_DB_I DB2_DATABASE_T
A_DB2_DB_BIND_I DB2_DATABASE_T
A_DB2_DB_QIST_I DB2_DATABASE_T
A_DB2_DB_SYS_I DB2_SYSTEM_T
A_DB2_BP_I DB2_BUFFER_POOL_T
A_DB2_USERTRAN_I DB2_USER_TRAN_H
A_DB2_UT_BP_I DB2_USER_TRAN_H
A_DB2_UT_SACC_I DB2_USER_TRAN_H
A_DB2_UT_IDAA_I DB2_USER_TRAN_H
A_DB2_IDAA_STAT_I DB2_IDAA_STAT_H
A_DB2_IDAA_ACC_I DB2_IDAA_ACC_H
A_DB2_IDAA_ST_A_I DB2_IDAA_STAT_A_H
A_DB2_IDAA_ST_S_I DB2_IDAA_STAT_S_H
A_DB2_PACK_I DB2_PACKAGE_H
A_DB2_SHR_BP_I DB2_BP_SHARING_T
A_DB2_SHR_BPAT_I DB2_BPATTR_SHR_T
A_DB2_SHR_LOCK_I DB2_LOCK_SHARING_T
A_DB2_SHR_INIT_I DB2_SHARING_INIT
A_DB2_SHR_TRAN_I DB2_US_TRAN_SHAR_H
A_DB2_DDF_I DB2_USER_DIST_H
A_DB2_SYSTEM_I DB2_SYSTEM_DIST_T
A_DB2_STORAGE_I DB2_STORAGE_T

View A_DB2_TRAN_IV DB2_TRANSACTION_D


A_DB2_DATABASE_IV DB2_DATABASE_T

Analytics - KPM Db2 Table A_KD_UT_I KPM_DB2_USERTRAN_H


A_KD_UT_BP_I KPM_DB2_USERTRAN_H
A_KD_EU_I KPM_DB2_ENDUSER_H
A_KD_EU_BP_I KPM_DB2_ENDUSER_H
A_KD_PACKAGE_I KPM_DB2_PACKAGE_H
A_KD_SYS_IO_I KPM_DB2_SYSTEM_T
A_KD_SYS_TCBSRB_I KPM_DB2_SYSTEM_T
A_KD_SYS_LATCH_I KPM_DB2_LATCH_T
A_KD_SYS_BP_I KPM_DB2_BP_T
A_KD_SYS_BP_SHR_I KPM_DB2_BP_SHR_T
A_KD_SYS_ST_DBM_I KPM_DB2_STORAGE_T
A_KD_SYS_ST_DST_I KPM_DB2_STORAGE_T
A_KD_SYS_ST_COM_I KPM_DB2_STORAGE_T
A_DB_SYS_DB_WF_I KPM_DB2_DATABASE_T
A_DB_SYS_DB_EDM_I KPM_DB2_DATABASE_T
A_DB_SYS_DB_SET_I KPM_DB2_DATABASE_T
A_DB_SYS_DB_LOCK_I KPM_DB2_LOCK_T

Analytics - KPM CICS Table A_KC_MON_TRAN_I KPMC_MON_TRAN_H


Analytics - KPM z/OS Table A_KPM_EXCEPTION_I KPM_EXCEPTION_T
A_KZ_JOB_INT_I KPMZ_JOB_INT_T
A_KZ_JOB_STEP_I KPMZ_JOB_STEP_T
A_KZ_LPAR_I KPMZ_LPAR_T
A_KZ_STORAGE_I KPMZ_STORAGE_T
A_KZ_WORKLOAD_I KPMZ_WORKLOAD_T
A_KZ_CHANNEL_I KPMZ_CHANNEL_T
A_KZ_CF_I KPMZ_CF_T
A_KZ_CF_STRUC_I KPMZ_CF_STRUCTR_T
A_KZ_CPUMF_I KPMZ_CPUMF_T
A_KZ_CPUMF1_I KPMZ_CPUMF1_T
A_KZ_CPUMF_PT_I KPMZ_CPUMF_PT_T
A_KZ_CPUMF1_PT_I KPMZ_CPUMF1_PT_T
A_KZ_SRM_WKLD_I KPMZ_SRM_WKLD_T

There are cases where multiple tables from an Analytics component are combined into a single view.
In these cases, the resulting view matches an existing table from an IBM Z Performance and Capacity
Analytics non-Analytics component. See the following table for views in the Analytics components that
are based on multiple tables from non-Analytics components.
Table 13. Relationship of Analytics component tables used in view to non-Analytics component tables used in view
Component  View  Analytics component tables used in view  Equivalent to non-Analytics component table
Analytics - Db2 A_DB2_USERTRAN_IV A_DB2_USERTRAN_I DB2_USER_TRAN_H
A_DB2_UT_BP_I
A_DB2_UT_SACC_I
A_DB2_UT_IDAA_I

Analytics - Db2 A_DB2_DATABASE_IV A_DB2_DB_I DB2_DATABASE_T


A_DB2_DB_BIND_I
A_DB2_DB_QIST_I

Analytics - KPM Db2 A_KD_USERTRAN_IV A_KD_UT_I KPM_DB2_USERTRAN_H


A_KD_UT_BP_I

Analytics - KPM Db2 A_KD_ENDUSER_IV A_KD_EU_I KPM_DB2_ENDUSER_H


A_KD_EU_BP_I

Analytics - KPM Db2 A_KD_SYSTEM_IV A_KD_SYS_IO_I KPM_DB2_SYSTEM_T


A_KD_SYS_TCBSRB_I

Analytics - KPM Db2 A_KD_STORAGE_IV A_KD_SYS_ST_DBM_I KPM_DB2_STORAGE_T


A_KD_SYS_ST_DST_I
A_KD_SYS_ST_COM_I

Analytics - KPM Db2 A_KD_DATABASE_IV A_DB_SYS_DB_WF_I KPM_DB2_DATABASE_T


A_DB_SYS_DB_EDM_I
A_DB_SYS_DB_SET_I


Configuring Analytics Components for use with IBM Db2 Analytics Accelerator

About this task


You can complete the following steps that are required for IBM Z Performance and Capacity Analytics to
use the IBM Db2 Analytics Accelerator to contain the data for the tables of the Analytics components.
Tables that are created on an IBM Db2 Analytics Accelerator (IDAA) can be loaded without loading data
into Db2, which requires the following items:
• The System Data Engine (SDE) component of the IBM Common Data Provider for z Systems® to collect
the SMF data instead of using IBM Z Performance and Capacity Analytics Collect. The PTFs for APARs
OA52196 and OA52200 must be applied.
• The Db2 Analytics Accelerator Loader for z/OS V2.1 that uses IDAA-Only load mode to load the data
that is created by the SDE into the IDAA.
The Analytics components comprise the following items:
• Analytics - z/OS Performance
• Analytics - Db2
• Analytics - KPM CICS
• Analytics - KPM Db2
• Analytics - KPM z/OS
The Analytics components allow for tables to be created as any of the following kinds of tables:
• Db2 for z/OS tables
• IBM Db2 Analytics Accelerator Accelerator-shadow tables
• IBM Db2 Analytics Accelerator Accelerator-only tables

Procedure
1. Ensure that the PTFs for APAR PI70968 are applied to the IBM Z Performance and Capacity Analytics system.
2. Bind the Db2 plan that is used by IBM Z Performance and Capacity Analytics by specifying the
BIND option QUERYACCELERATION(ELIGIBLE) or QUERYACCELERATION(ENABLE). For example,
assuming the default plan name to be DRLPLAN, the BIND PACKAGE to set ELIGIBLE for the query
acceleration register is as follows:

//SYSTSIN DD *
DSN SYSTEM(DSN)
BIND PACKAGE(DRLPLAN) OWNER(authid) MEMBER(DRLPSQLX) -
ACTION(REPLACE) ISOLATION(CS) ENCODING(EBCDIC) -
QUERYACCELERATION(ELIGIBLE)
BIND PLAN(DRLPLAN) OWNER(authid) PKLIST(*.DRLPLAN.*) -
ACTION(REPLACE) RETAIN
RUN PROGRAM(DSNTIAD) PLAN(DSNTIAxx) -
LIB('xxxx.RUNLIB.LOAD')
END

For more information about the sample instructions to BIND with QUERYACCELERATION specified, see
SDRLCNTL(DRLJDBIN).
3. Modify the DRLFPROF data set to reflect the settings to apply when installing Analytics components.
DRLFPROF is the IBM Z Performance and Capacity Analytics data set that contains user modified
parameters. The following parameters in DRLFPROF provide support for the IBM Db2 Analytics
Accelerator:
def_useaot = "YES" | "NO"
"YES": Tables are created as Accelerator Only Tables.


"NO": Tables are created in Db2 and are suitable for use either as Db2 tables or as IDAA_ONLY
tables. The default value is "NO".
def_accelerator = "xxxxxxxx"
"xxxxxxxx": The name of the Accelerator where the tables reside. Required only if using
Accelerator Only Tables.
def_timeint = "H" | "S" | "T"
"H": The timestamp for records is rounded to hourly intervals that is similar to non-Analytics tables
with a suffix of "_H" in other components.
"S": The timestamp for records is rounded to intervals of a second that is similar to non-Analytics
tables with time field instead of timestamp in other components.
"T": The timestamp for tables is the actual timestamp in the SMF log record that is similar to
non-Analytics tables with suffix "_T". The default value is "T".
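For example, a DRLFPROF sketch that creates Accelerator Only Tables and keeps full timestamp
resolution might contain the following settings. The accelerator name ACCEL01 is an illustrative
assumption; substitute the name defined at your installation:

def_useaot      = "YES"      /* create Accelerator Only Tables */
def_accelerator = "ACCEL01"  /* assumed accelerator name       */
def_timeint     = "T"        /* keep the actual SMF timestamps */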
4. Important: This step is required only if you use IBM Z Performance and Capacity Analytics to collect
and populate the component tables on Db2 for z/OS, or if you use IBM Z Performance and Capacity
Analytics reporting. If you only collect data into the IBM Db2 Analytics Accelerator and do not have
the data reside on Db2 for z/OS, configure the lookup tables in IBM Z Common Data Provider. See the
information about collecting data for direct load to the Accelerator in the IBM Z Common Data Provider
V1.1.0 User's Guide (SC27-4624-01).
Customize each lookup table in the Analytics components as per the existing IBM Z Performance and
Capacity Analytics non-Analytics lookup tables.
For example, insert the same rows that are currently in DB2_APPLICATION into
A_DB2_APPLICATION.
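As a sketch, assuming the default DRL table prefix and that the Analytics lookup table has the same
column layout as its non-Analytics equivalent, a single SQL statement can copy the rows:

INSERT INTO DRL.A_DB2_APPLICATION
  SELECT * FROM DRL.DB2_APPLICATION;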
5. Install the desired Analytics component(s).
6. Add tables to the Accelerator.
If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), then the DRLFPROF
setting for def_useaot is "YES", and Db2 creates the tables on the IBM Db2 Analytics Accelerator when
the Analytics components are being installed.
If IBM Z Performance and Capacity Analytics doesn't use AOTs, the tables need to be added to the
IBM Db2 Analytics Accelerator. Tables can be added by using the Data Studio Eclipse application,
or by using stored procedures. To use stored procedures to add the tables to an IBM Db2 Analytics
Accelerator, modify and submit the SDRLCNTL members in the following table:

Table 14. Relationship of SDRLCNTL member name to Analytics component
SDRLCNTL member name  Component
DRLJA2DA              Analytics - Db2
DRLJAPMA              Analytics - z/OS Performance
DRLJAKCA              Analytics - KPM CICS
DRLJAKDA              Analytics - KPM Db2
DRLJAKZA              Analytics - KPM z/OS

7. Load data into lookup tables on the Accelerator.


If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), the lookup tables
are populated on the IBM Db2 Analytics Accelerator when the Analytics components are being
installed.
If IBM Z Performance and Capacity Analytics doesn't use AOTs, the contents of the lookup tables need
to be loaded into the IBM Db2 Analytics Accelerator. Modify and submit the SDRLCNTL members in the
following table to move the contents of the lookup tables into the Accelerator.
Note: Not all components have lookup tables.


Table 15. Relationship of SDRLCNTL member name to Analytics component
SDRLCNTL member name  Component
DRLJAPMK              Analytics - z/OS Performance
DRLJAKDK              Analytics - KPM Db2
DRLJAKZK              Analytics - KPM z/OS

Collecting data for direct load to the Accelerator

To collect data for direct load to tables on an IBM Db2 Analytics Accelerator, the following items are
required:
• The System Data Engine (SDE) component of the IBM Common Data Provider for z Systems to collect
the SMF data instead of using IBM Z Performance and Capacity Analytics Collect. The PTFs for APARs
OA52196 and OA52200 must be applied.
• The Db2 Analytics Accelerator Loader for z/OS V2.1 by using IDAA-Only load mode to load the data that
is created by the SDE into the IDAA.
See the information about collecting data for direct load to the Accelerator in the IBM Z Common Data
Provider V1.1.0 User's Guide (SC27-4624-01).
After the data has been collected, it can be loaded directly to the IBM Db2 Analytics Accelerator.

Loading data into the Accelerator

About this task


The Db2 Analytics Accelerator Loader for z/OS V2.1 (Loader) is used to load the Db2 internal format data
sets, which are created by the System Data Engine (SDE), directly into the Db2 Analytics Accelerator
(Accelerator) without the data residing in Db2 for z/OS.
The Loader is invoked via the Db2 LOAD utility with the following amendments:
• A DD statement that indicates the Loader is to intercept the Db2 LOAD utility:

//HLODUMMY DD DUMMY

• A statement that tells the Loader to load data into the Accelerator. This statement specifies that the
data is to reside only on the Accelerator (IDAA_ONLY), the name of the Accelerator, the schema, and
the table name:

//SYSIN DD *
LOAD DATA RESUME YES LOG NO INDDN input_data_set_ddname
IDAA_ONLY ON accelerator-name
INTO TABLE DRLxx.table-name FORMAT INTERNAL;
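Putting these amendments together, a complete load step might look like the following sketch. The
libraries, input data set, accelerator name (ACCEL01), and table name are illustrative assumptions;
the supplied SDRLCNTL members listed in the following procedure are the starting point for real jobs:

//* All names below (libraries, ACCEL01, the SDE output data set) are assumed
//LOADIDAA EXEC PGM=DSNUTILB,PARM='DSN,DRLLOAD'
//STEPLIB  DD DISP=SHR,DSN=loader.loadlib
//         DD DISP=SHR,DSN=db2.sdsnload
//HLODUMMY DD DUMMY
//SYSPRINT DD SYSOUT=*
//INFILE   DD DISP=SHR,DSN=DRL.SDE.A_PM_SYS_SYS_I
//SYSIN    DD *
 LOAD DATA RESUME YES LOG NO INDDN INFILE
 IDAA_ONLY ON ACCEL01
 INTO TABLE DRL.A_PM_SYS_SYS_I FORMAT INTERNAL;
/*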

Procedure
1. To load the data that is created by the System Data Engine, modify and submit the SDRLCNTL
members in the following table based on the installed components:


Table 16. Relationship of SDRLCNTL member name to Analytics component
SDRLCNTL member name  Component
DRLJA2DD              Analytics - Db2
DRLJAPMD              Analytics - z/OS Performance
DRLJAKCD              Analytics - KPM CICS
DRLJAKDD              Analytics - KPM Db2
DRLJAKZD              Analytics - KPM z/OS

2. Enable acceleration of tables after first data load.


If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), then the DRLFPROF
setting for def_useaot is "YES", and you don't need to enable the tables on the IBM Db2 Analytics
Accelerator.
If IBM Z Performance and Capacity Analytics doesn't use AOTs and the Db2 LOAD is the first load for
an IDAA_ONLY Accelerator table, after the load has been completed, the table must be enabled for
acceleration in the Accelerator. Tables can be enabled by using the Data Studio Eclipse application,
or by using stored procedures. To use stored procedures to enable the tables, modify and submit the
SDRLCNTL members in the following table:

Table 17. Relationship of SDRLCNTL member name to Analytics component
SDRLCNTL member name  Component
DRLJA2DE              Analytics - Db2
DRLJAPME              Analytics - z/OS Performance
DRLJAKCE              Analytics - KPM CICS
DRLJAKDE              Analytics - KPM Db2
DRLJAKZE              Analytics - KPM z/OS

Uninstalling components used with an IBM Db2 Analytics Accelerator

About this task


To uninstall Analytics components that have been configured for use with an IBM Db2 Analytics
Accelerator, perform the following steps:

Procedure
1. Remove tables from the Accelerator.
If IBM Z Performance and Capacity Analytics uses Accelerator Only Tables (AOTs), then the DRLFPROF
setting for def_useaot is "YES", and you don't need to remove tables on the IBM Db2 Analytics
Accelerator because the next step will automatically remove them.
If IBM Z Performance and Capacity Analytics doesn't use AOTs, the tables must be removed from
the Accelerator prior to uninstalling the component. Modify and submit the SDRLCNTL members in the
following table according to the components to be uninstalled.


Table 18. Relationship of SDRLCNTL member name to Analytics component
SDRLCNTL member name  Component
DRLJA2DR              Analytics - Db2
DRLJAPMR              Analytics - z/OS Performance
DRLJAKCR              Analytics - KPM CICS
DRLJAKDR              Analytics - KPM Db2
DRLJAKZR              Analytics - KPM z/OS

2. Uninstall the Analytics component(s) by using IBM Z Performance and Capacity Analytics menus.


Chapter 8. Installing the Usage and Accounting Collector

The CIMS Lab Mainframe collector is incorporated into IBM Z Performance and Capacity Analytics and
called the Usage and Accounting Collector.
For a description of the Usage and Accounting Collector, see the System Overview section in the Usage
and Accounting Collector User Guide.
To install the Usage and Accounting Collector, follow these steps:
• “Step 1: Customizing the Usage and Accounting Collector” on page 357
• “Step 2: Allocating and initializing Usage and Accounting files” on page 360
• To verify your installation, follow these steps:
– “Step 3: Processing SMF data using DRLNJOB2 (DRLCDATA and DRLCACCT)” on page 360
– “Step 4: Running DRLNJOB3 (DRLCMONY) to create invoices and reports” on page 363
– “Step 5: Processing Usage and Accounting Collector subsystems” on page 364
To support programs such as CICS, Db2, IDMS, IMS, VM/CMS, VSE, DASD Space Chargeback, and Tape
Storage Accounting, edit and run the appropriate jobs. Examples of member names are DRLNCICS,
DRLNDB2, DRLNDISK.

Step 1: Customizing the Usage and Accounting Collector


About this task
Installation job DRLNINIT invokes the REXX program DRLCINIT. This program is a utility that customizes
Usage and Accounting Collector jobs to your specifications. DRLCINIT inserts job cards, adds high level
nodes to all Usage and Accounting Collector data sets, changes VOLSER numbers, and specifies DSCB
model names.
Run job DRLNINIT and follow these steps:

Procedure
1. Replace sample job card with user job card.
2. Insert or replace data set name high-level qualifiers.
3. Insert serial numbers on the VOLUME parameter.
4. Insert DSCB model names.
Note: If you do not run DRLCINIT, you must change each job member manually as you use it.

DRLNINIT

About this task


To execute job DRLNINIT, follow these instructions:

Procedure
1. DRL.SDRLCNTL (DRLMFLST) contains the list of Usage and Accounting Collector jobs that are used in
this utility.



2. The SMP/E process allocates &HLQ.LOCAL.CNTL. This DSN stores the customized jobs. The Usage and
Accounting Collector JCL is copied to this library and changes are made in this library. The first step in
DRLNINIT performs the copy. This makes it possible to execute DRLNINIT repeatedly until the desired
result is achieved.
Replace the two occurrences of &HLQ.LOCAL.CNTL in DRLNINIT with the filename that was allocated
during the SMP/E install.
3. Job card replacement.
A standard job card can be inserted with a unique jobname. The following parameters in STEP020
control the job card replacement:
JCDSN=
Specifies the file containing the standard job card.
For example: JCDSN=DRL.SDRLCNTL(JBCARD)
The contents in member JBCARD is used as the job card.
JCLINES=
The number of lines to use from JCDSN.
For example: JCLINES=2
The first two lines in the JCDSN member are used as a job card.
JCMASK=
A unique job name can be generated for the execution jobs. The JCMASK is used to specify the
common part of the jobname and the position of a sequential number. After the first character,
you must enter a sequence of '*' (asterisk) characters to indicate where to insert the job sequence
number. The sequence mask is from 2 to 6 characters in length:
Examples:
JCMASK     Jobnames generated
DRL****    DRL0001, DRL0002, DRL0003...
P******Q   P000001Q, P000002Q, P000003Q...
DRL**DRL   DRL01DRL, DRL02DRL, DRL03DRL...
JCSKIP=
Specify any non-blank character and the Job card replacement process will be skipped.
For example: JCSKIP=Y
No job card customization of the Usage and Accounting Collector execution jobs is done.
4. Insert or replace data set name high level qualifiers. The default filenames used for the Usage and
Accounting Collector files start with the high-level qualifier of 'DRL'. The HLQ process in the DRLCINIT
utility allows this default to be replaced or an additional high-level qualifier to be inserted. The
following parameters in STEP020 control the HLQ processing:
HLQACT=
Specifies the action to perform: R=Replace, I=Insert.
For example: HLQACT=R
Every occurrence of a filename with the high-level qualifier of 'DRL.', will be replaced with the value
in HLQDSN.
HLQDSN=
The new value to use for the high-level qualifier.
For example: HLQDSN=DRL.IZPCAUAC

The default filenames are changed to start with 'DRL.IZPCAUAC'.
HLQSKIP=
Specify any non-blank character and the HLQ processing is skipped.
For example: HLQSKIP=Y
No customization of the Usage and Accounting Collector data set names is done.
5. Insert VOLSER numbers. At various places within the Usage and Accounting Collector jobs, volume
serial numbers are needed. The DRLCINIT job allows you to replace them all globally. The default
volume serial numbers are “??????” throughout the JCL. The default volume serial appears in IDCAMS
processing as VOL(??????) and VOL=SER=?????? and is used for VSAM file allocation. The JCL also uses
VOL=SER=?????? for temporary space allocations. The following parameters in STEP020 control the
VOLSER processing:
VOL=
The replacement volume serial to use instead of “??????”
VSSKIP=
Specify any non-blank character and the VOLSER processing is skipped.
For example: VSSKIP=Y
No customization of the Usage and Accounting Collector VOL or VOL=SER parameters is done.
6. Insert DSCB model names.
A model DSCB parameter is used for the proper functioning of Generation Data Groups (GDGs). The
Usage and Accounting Collector JCL is distributed with all model DSCB references set to 'MODELDSCB'.
If your installation does not require the use of this parameter, you can delete it manually from the
JCL. The DSCB processing can be used to change the default to a value used at your installation. The
following parameters in STEP020 control the DSCB processing:
MDDSCB=
The replacement model DSCB to use instead of MODELDSCB.
MDSKIP=
Specify any non-blank character and the model DSCB processing will be skipped.
For example: MDSKIP=Y
No customization of the Usage and Accounting Collector model DSCB will be done.
The DRLCINIT utility produces statistics for the execution. If any exceptions are noted, they can be
found listed in the DRLMXCEP member of &HLQ.LOCAL.CNTL. These exceptions might or might not be
severe enough to cause a JCL error; check DRLMXCEP if exceptions are reported.

Statistic report DDNAME SYSTSPRT

Processing......

Completed SYSTSIN

69 Files
0 Exceptions

JobCard : 68 Replacements
HLQ : 1389 Replacements
Volume : 30 Replacements
ModelDSCB: 207 Replacements

Normal completion



Step 2: Allocating and initializing Usage and Accounting files
About this task
DRLNJOB1 is a member in DRL310.SDRLCNTL. This job creates four permanent files and four Generation
Data Groups (GDGs). The permanent files are:
Usage and Accounting Collector client
Member DRLMCLNT contains sample client records. For information about client records, see “Client
Identification and Budget Reporting - DRLCCLNT and DRLCBDGT” in the Usage and Accounting
Collector User Guide.
Rate
Members DRLMRATE, DRLMRT01, and DRLMRT02 contain sample Rate records. For information about
rate records, see “Computer Center Chargeback Program - DRLCMONY” in the Usage and Accounting
Collector User Guide.
Dictionary
Members DRLKxxxx contain the default record definitions for the Usage and Accounting Collector
Dictionary. For more information about the Usage and Accounting Collector Dictionary, see “Dictionary
- CIMSDTVS” in the Usage and Accounting Collector User Guide.
Status and Statistics VSAM
The Status and Statistics file is a VSAM file that should be allocated so that checkpoint and statistical
information can be recorded for program DRLCEXTR. Use the default values to create the VSAM files.
Note: You do not need to set rates or identify clients at this time.
For the JCL, see member DRLNJOB1 in DRL310.SDRLCNTL.

Step 3: Processing SMF data using DRLNJOB2 (DRLCDATA and DRLCACCT)
About this task
This job, which is divided into two steps, runs programs DRLCDATA and DRLCACCT. These programs
interface with the z/OS-SMF data set and create the DRL.DRLCACCT.DAILY batch chargeback file.
The DRLNJOB2 job is the basis for daily processing and is the only job required on a daily basis for
batch chargeback. Logically, it is run immediately after the SMF data set is unloaded to disk or tape.
After DRLNJOB2 processing is finished, data set DRL.DRLCACCT.DAILY contains z/OS batch and TSO
accounting records, and data set DRL.SMF.HISTORY contains reformatted SMF records.
Note: It is recommended that you read “SMF Interface Program - DRLCDATA” and “Accounting File
Creation Program - DRLCACCT” in the Usage and Accounting Collector User Guide before you start
changing the default control statements.

Procedure
1. JOB STEP DRLC2A
This executes program DRLCDATA. For more information, see “SMF Interface Program - DRLCDATA” in
the Usage and Accounting Collector User Guide.

Table 19. Explanation of Program DRLCDATA


Input/output DDNAME Description
INPUT SMFIN This is the SMF DUMP data set.

Table 19. Explanation of Program DRLCDATA (continued)
Input/output DDNAME Description
INPUT CIMSCNTL Data set DRL310.SDRLCNTL (DATAINPT)
Contains input control statements. For more
information, see the Control Statement Table in
Chapter 2. “SMF Interface Program - DRLCDATA”
in the Usage and Accounting Collector User Guide.
OUTPUT CIMSSMF Usage and Accounting Collector reformatted
SMF data set. Contains each SMF record from
the input data set unless limited by a records
statement. This data set is designed as a backup
data set of reformatted SMF Records. Depending
on installation requirements, you might choose
to DD DUMMY this data set, or to COMMENT the
statement.
OUTPUT CIMSACCT This data set contains selected SMF chargeback
records (6, 30, 101, 110). This data set is used as
input in step DRLC2B.
OUTPUT CIMSCICS This data set contains CICS records (SMF Type
110). This record is used by the Usage and
Accounting Collector CICS interface programs.
OUTPUT CIMSDB2 This data set contains Db2 records (SMF Type
101). This record is used by the Usage and
Accounting Collector Db2 interface programs.
2. SMF Merge
It is recommended that you insert a merge between steps DRLC2A and DRLC2B to create a history
data set, DRL.SMF.HISTORY (see member DRLNSMFM in DRL310.SDRLCNTL). The merge field is position
7 for one character. Use a cartridge tape and block the output data set to 32K (BLKSIZE = 32760).
The Usage and Accounting Collector merge, a sample SORT/MERGE set of JCL that creates a sorted
history data set of Usage and Accounting Collector accounting records, can be found in data set
DRL310.SDRLCNTL, member DRLNMERG. This job should be run daily after the batch and online Usage
and Accounting Collector jobs have been executed.
If DRLNMERG is done on a daily basis, at the end of the month, the Usage and Accounting Collector
master file is in account code sort sequence.
You should maintain the history data sets on tape. Leave the daily files on disk for daily reports and set
up generation data sets to tape for the history file.
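As a sketch, the merge control statement for this field would be as follows (member DRLNSMFM
contains the supplied version; SORTIN01 and SORTIN02 are the standard merge input DD names and
are assumptions here):

//SYSIN    DD *
  MERGE FIELDS=(7,1,CH,A)
/*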
3. JOB STEP DRLC2B
This executes program DRLCACCT, which processes the data set created by program DRLCDATA
(DDNAME CIMSACCT) and generates the Usage and Accounting Collector batch chargeback data set.
For details, see “Accounting File Creation Program - DRLCACCT” in the Usage and Accounting Collector
User Guide.

Table 20. Explanation of Program DRLCACCT

Input/output  DDNAME    Description
INPUT         CIMSDATA  Reformatted SMF records. These records are created by
                        DDNAME CIMSACCT in program DRLCDATA. The Usage and
                        Accounting Collector suspense file for unmatched job
                        step and print records is appended to DDNAME CIMSDATA.
INPUT         CIMSCNTL  Control statements.
INPUT         CIMSTABL  Optional user-supplied table to convert job names
                        and/or job card account codes to a new format. For
                        more information, see Chapter 3, "Accounting File
                        Creation Program - DRLCACCT" in the Usage and
                        Accounting Collector User Guide.
INPUT         CIMSDTVS  Usage and Accounting Collector Dictionary VSAM file.
INPUT         CIMSPDS   Control statements. This data set is used by DRLCACCT
                        when the PROCESS CIMS SERVER RESOURCE RECORDS control
                        statement is specified. Member DRLMALSA in this data
                        set contains the control members for the different
                        records.
OUTPUT        CIMSACT2  Usage and Accounting Collector batch chargeback file
                        containing the 79x accounting records. This data set
                        is used by DRLCEXTR and DRLCMONY.
OUTPUT        CIMSUSPN  Suspense file. This data set contains Step and Print
                        records that have not been matched with a Job Start
                        or Job Stop record.
OUTPUT        CIMSEXCP  This data set contains records that have not been
                        matched with entries in the CIMSTABL data set.
OUTPUT        CIMSPRNT  This data set contains the runtime parameters and the
                        results of the run.
OUTPUT        CIMSMSG   This data set contains informational messages.
OUTPUT        CIMSSEL   Usage and Accounting Collector accounting records.
                        This data set contains the records that failed date
                        selection when the PROCESS CIMS MAINTENANCE and
                        NON-SELECTED FILE PROCESSING ON control statements
                        are specified.
OUTPUT        CIMSUNSP  Unsupported CSR records. This data set contains all
                        CSR records that did not have a definition within
                        CIMSDTVS.

Note: For JCL information, see member DRLNJOB2 in DRL310.SDRLCNTL.
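As a complement to DRLNJOB2, a minimal sketch of the DRLC2B step follows
(the control member name ACCTCNTL and all data set names are placeholders;
DRLNJOB2 contains the definitive statements):

//* Sketch of step DRLC2B: build the batch chargeback file from the
//* records selected by DRLC2A plus the current suspense file.
//* ACCTCNTL is a placeholder control member name.
//DRLC2B   EXEC PGM=DRLCACCT,REGION=0M
//STEPLIB  DD  DISP=SHR,DSN=DRL310.SDRLLOAD
//CIMSDATA DD  DISP=SHR,DSN=DRL.CIMSACCT.DAILY
//         DD  DISP=SHR,DSN=DRL.CIMSUSPN.SUSPENSE
//CIMSCNTL DD  DISP=SHR,DSN=DRL310.SDRLCNTL(ACCTCNTL)
//CIMSDTVS DD  DISP=SHR,DSN=DRL.CIMSDTVS
//CIMSACT2 DD  DSN=DRL.DRLCACCT.DAILY,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(20,5),RLSE)
//CIMSUSPN DD  DSN=DRL.CIMSUSPN.DAILY,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(5,1),RLSE)
//CIMSPRNT DD  SYSOUT=*
//CIMSMSG  DD  SYSOUT=*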

Step 4: Running DRLNJOB3 (DRLCMONY) to create invoices and reports
About this task
DRLNJOB3 contains the JCL to run program DRLCMONY, which creates invoices and zero-cost invoices
(rate determination).
Billing control statements are contained in member DRLMMNY. Edit these statements to customize Usage
and Accounting Collector for your installation.
You can use the Usage and Accounting Collector defaults as distributed until you decide on client
information, billing rates, and control information.
To run DRLNJOB3, follow these steps:

Procedure
1. Run DRLC3A.
This step converts the 79x accounting records into CSR+ records. DRLCMONY supports only CSR+
records.
2. Run DRLC3B.
This step sorts the data set created by step DRLC3A into account code, job name, and job log number
sequence.
3. Run DRLC3C.
This step executes the Computer Center Billing System program, DRLCMONY.

Input/output  DDNAME    Description
INPUT         CIMSACCT  Integrated chargeback data set.
INPUT         CIMSCLVS  Client records.
INPUT         CIMSCNTL  Control statements.
INPUT         CIMSRTVS  Billing rates.
INPUT         CIMSCLDR  Usage and Accounting Collector calendar file.
INPUT         CIMSNCPU  CPU normalization statements.
INPUT         CIMSSCPU  CPU job class and priority surcharge statements.
OUTPUT        SYSOUT    Messages.
OUTPUT        CIMSPRNT  Processing results.
OUTPUT        CIMSINVC  Invoices.
OUTPUT        CIMSMSG   Informational messages.
OUTPUT        CIMSSUM   Summary records by account. One record per account
                        and billable item (Rate Code).
OUTPUT        CIMSIDNT  Identifier data that can be loaded into an IBM Tivoli
                        Usage and Accounting Manager database. This file is
                        produced by DRLCMONY in server mode.
OUTPUT        CIMSDETL  Detail data that can be loaded into an IBM Tivoli
                        Usage and Accounting Manager database. This file is
                        produced by DRLCMONY in server mode.
OUTPUT        CIMSUMRY  Summary data that can be loaded into an IBM Tivoli
                        Usage and Accounting Manager database. This file is
                        produced by DRLCMONY in server mode.

For record descriptions, refer to “Accounting File Record Descriptions” in the Usage and Accounting
Collector User Guide.
For JCL information, see member DRLNJOB3 in DRL310.SDRLCNTL.
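A minimal sketch of the DRLC3C step, assuming the sorted CSR+ records from
DRLC3B are passed in a temporary data set (all data set names other than the
DRLMMNY control member are placeholders; DRLNJOB3 contains the definitive
statements):

//* Sketch of step DRLC3C: rate determination and invoice creation.
//* Billing control statements come from member DRLMMNY; &&CSRSORT is
//* a placeholder for the sorted output of step DRLC3B.
//DRLC3C   EXEC PGM=DRLCMONY,REGION=0M
//STEPLIB  DD  DISP=SHR,DSN=DRL310.SDRLLOAD
//CIMSCNTL DD  DISP=SHR,DSN=DRL310.SDRLCNTL(DRLMMNY)
//CIMSACCT DD  DISP=(OLD,DELETE),DSN=&&CSRSORT
//CIMSRTVS DD  DISP=SHR,DSN=DRL.CIMSRTVS
//CIMSCLDR DD  DISP=SHR,DSN=DRL.CIMSCLDR
//CIMSPRNT DD  SYSOUT=*
//CIMSINVC DD  SYSOUT=*
//CIMSMSG  DD  SYSOUT=*
//CIMSSUM  DD  DSN=DRL.CIMSSUM.DAILY,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(5,1),RLSE)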

Step 5: Processing Usage and Accounting Collector subsystems


About this task
Note: This step is optional.
Usage and Accounting Collector is now installed and ready to be customized for batch chargeback.
After you are comfortable with the results you are receiving from the Usage and Accounting Collector
z/OS batch system, you can start integrating data from the wide range of subsystems that Usage and
Accounting Collector supports.
To integrate a Usage and Accounting Collector subsystem, perform the following steps:

Procedure
1. Edit the appropriate JCL member. For example, DRLNCICS.
2. Create an account code conversion table.
3. Process the job.
4. Merge the output with the input to program DRLCMONY (DRLNJOB3); a sketch follows this list.
5. Run DRLNJOB3 to generate the integrated invoices.
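The merge in step 4 can be a DD concatenation or a SORT/MERGE such as
DRLNMERG, depending on the sequence DRLCMONY expects. A hypothetical
concatenation sketch (the DDNAME and both data set names are placeholders;
see DRLNJOB3 for the actual input DD statement):

//* Concatenate the CICS subsystem output behind the batch
//* chargeback file before it is read by DRLNJOB3.
//* DDNAME and data set names are placeholders only.
//CIMSACCT DD  DISP=SHR,DSN=DRL.DRLCACCT.DAILY
//         DD  DISP=SHR,DSN=DRL.DRLCCICS.DAILY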

Results
The following table lists member names for some of the most commonly used
Usage and Accounting Collector subsystems.

Table 21. Usage and Accounting Collector Subsystem Member Names (Partial List)

Member name  Description
DRLNCICS     CICS Support
DRLNDB2      Db2
DRLNMQSR     MQSeries®
DRLNDISK     DASD Space
DRLNTAPE     Tape Storage
DRLNIMS      IMS
DRLNUNIV     ROSCOE, ADABAS/SMF, IDMS/SMF, RJE, WYLBUR, Oracle, MEMO,
             Control-T, BETA

Appendix A. Support information

If you have a problem with your IBM software, you want to resolve it quickly. IBM provides a number of
ways for you to obtain the support you need.
• Searching knowledge bases: You can search across a large collection of known problems and
workarounds, Technotes, and other information.
• Obtaining fixes: You can locate the latest fixes that are already available for your product.
• Contacting IBM Software Support: If you still cannot solve your problem, and you need to work with
someone from IBM, you can use a variety of ways to contact IBM Support.

Contacting IBM Support


This topic describes how to contact IBM Support if you have been unable to resolve a problem with IBM Z
Performance and Capacity Analytics.
Before contacting IBM Support, your company must have an active IBM software maintenance contract,
and you must be authorized to submit problems to IBM. The type of software maintenance contract
that you need depends on the type of product you have. For more information, refer to the IBM Support
website at the following links:
IBM Support
https://www.ibm.com/mysupport/s/
IBM Z Support
https://www.ibm.com/support/pages/ibm-enterprise-support-and-preferred-care-options-ibm-z
To contact IBM Support to report a problem (open a case), follow these steps:
1. Determine the business impact.
2. Describe the problem and gather information.
3. Submit the problem report.

Determining the business impact


When you report a problem to IBM, you are asked to supply a severity level. Therefore, you need to
understand and assess the business impact of the problem that you are reporting. Use the following
criteria:
Severity 1
The problem has a critical business impact. You are unable to use the program, resulting in a critical
impact on operations. This condition requires an immediate solution.
Severity 2
The problem has a significant business impact. The program is usable, but it is severely limited.
Severity 3
The problem has some business impact. The program is usable, but less significant features (not
critical to operations) are unavailable.
Severity 4
The problem has minimal business impact. The problem causes little impact on operations, or a
reasonable circumvention to the problem was implemented.

Describing the problem and gathering information


When describing a problem to IBM, be as specific as possible. Include all relevant background
information so that IBM Support specialists can help you solve the problem efficiently. To save time,
know the answers to the following questions:

• What software versions were you running when the problem occurred?
• Do you have logs, traces, and messages that are related to the problem symptoms? IBM Support is
likely to ask for this information.
• Can you re-create the problem? If so, what steps were performed to re-create the problem?
• Did you make any changes to the system? For example, did you make changes to the hardware,
operating system, networking software, product-specific customization, and so on.
• Are you currently using a workaround for the problem? If so, be prepared to explain the workaround
when you report the problem.

Submitting the problem


You can submit your problem to IBM Support in either of the following ways:
Online
Go to https://www.ibm.com/mysupport/s/, click on Open a case, and enter the relevant details into
the online form.
By email or phone
For the contact details in your country, go to the IBM Support website at https://www.ibm.com/
support/. Look for the tab on the right and click Contact and feedback > Directory of worldwide
contacts for a list of countries by geographic region. Select your country to find the contact details for
general inquiries, technical support, and customer support.
If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM
Support creates an Authorized Program Analysis Report (APAR). The APAR describes the problem in
detail. Whenever possible, IBM Support provides a workaround that you can implement until the APAR is
resolved and a fix is delivered. IBM publishes resolved APARs on the IBM Support website, so that other
users who experience the same problem can benefit from the same resolution.

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing


Legal and Intellectual Property Law
IBM Japan, Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore,
this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who want to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases
payment of a fee.
The licensed program described in this information and all licensed material available for it are provided
by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or
any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be
the same on generally available systems. Furthermore, some measurements may have been estimated
through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. For a current list of IBM trademarks, refer to the Copyright and
trademark information at https://www.ibm.com/legal/copytrade.

Bibliography

IBM Z Performance and Capacity Analytics publications


The IBM Z Performance and Capacity Analytics library contains the following publications and related
documents.
The publications are available online in the IBM Documentation at the following link, from where you can
also download the associated PDF:
https://www.ibm.com/docs/en/zp-and-ca/3.1.0
• Administration Guide and Reference, SC28-3211
Provides information about initializing the IBM Z Performance and Capacity Analytics database and
customizing and administering IBM Z Performance and Capacity Analytics.
• Capacity Planning Guide and Reference, SC28-3213
Provides information about the capacity planning, forecasting, and modeling feature of IBM Z
Performance and Capacity Analytics, intended for those who are responsible for monitoring system
capacity and key performance metrics to help ensure that sufficient resources are available to run the
business and meet expected service levels.
• CICS Performance Feature Guide and Reference, SC28-3214
Provides information for administrators and users about collecting and reporting performance data
generated by Customer Information Control System (CICS).
• Distributed Systems Performance Feature Guide and Reference, SC28-3215
Provides information for administrators and users about collecting and reporting performance data
generated by operating systems and applications running on a workstation.
• Guide to Reporting, SC28-3216
Provides information for users who display existing reports, for users who create and modify reports,
and for administrators who control reporting dialog default functions and capabilities.
• IBM i System Performance Feature Guide and Reference, SC28-3212
Provides information for administrators and users about collecting and reporting performance data
generated by IBM i systems.
• IMS Performance Feature Guide and Reference, SC28-3217
Provides information for administrators and users about collecting and reporting performance data
generated by Information Management System (IMS).
• Language Guide and Reference, SC28-3218
Provides information for administrators, performance analysts, and programmers who are responsible
for maintaining system log data and reports.
• Messages and Problem Determination, GC28-3219
Provides information to help operators and system programmers understand, interpret, and respond to
IBM Z Performance and Capacity Analytics messages and codes.
• Network Performance Feature Installation and Administration, SC28-3221
Provides information for network analysts or programmers who are responsible for setting up the
network reporting environment.
• Network Performance Feature Reference, SC28-3222
Provides reference information for network analysts or programmers who use the Network Performance
Feature.

• Network Performance Feature Reports, SC28-3223
Provides information for network analysts or programmers who use the Network Performance Feature
reports.
• Resource Accounting for z/OS, SC28-3224
Provides information for users who want to use IBM Z Performance and Capacity Analytics to collect
and report performance data generated by Resource Accounting.
• System Performance Feature Guide, SC28-3225
Provides information for performance analysts and system programmers who are responsible for
meeting the service-level objectives established in your organization.
• System Performance Feature Reference Volume I, SC28-3226
Provides information for administrators and users with a variety of backgrounds who want to use IBM Z
Performance and Capacity Analytics to analyze performance data from z/OS, z/VM, zLinux, and their
subsystems.
• System Performance Feature Reference Volume II, SC28-3227
Provides information for administrators and users with a variety of backgrounds who want to use IBM Z
Performance and Capacity Analytics to analyze performance data from z/OS, z/VM, zLinux, and their
subsystems.
• Usage and Accounting Collector User Guide, SC28-3228
Provides information about the functions and features of the Usage and Accounting Collector.

Glossary
A
administration
An IBM Z Performance and Capacity Analytics task that includes maintaining the database, updating
environment information, and ensuring the accuracy of data collected.
administration dialog
A set of host windows used to administer IBM Z Performance and Capacity Analytics.
C
collect
A process used by IBM Z Performance and Capacity Analytics to read data from input log data sets,
interpret records in the data set, and store the data in Db2 tables in the IBM Z Performance and
Capacity Analytics database.
compatibility mode
A mode of processing, in which the IEAIPSxx and IEAICSxx members of SYS1.PARMLIB determine
system resource management.
component
An optionally installable part of an IBM Z Performance and Capacity Analytics feature. Specifically in
IBM Z Performance and Capacity Analytics , a component refers to a logical group of objects used
to collect log data from a specific source, to update the IBM Z Performance and Capacity Analytics
database using that data, and to create reports from data in the database.
control table
A predefined IBM Z Performance and Capacity Analytics table that controls results returned by some
log collector functions.
D
data table
An IBM Z Performance and Capacity Analytics table that contains performance data used to create
reports.
DFHSM
In this book, DFHSM is referred to by its new product name. See DFSMShsm.
DFSMShsm
Data Facility Storage Management Subsystem hierarchical storage management facility. A functional
component of DFSMS/MVS used to back up and recover data, and manage space on volumes in the
storage hierarchy.
DFSMS
Data Facility Storage Management Subsystem. An IBM licensed program that consists of DFSMSdfp,
DFSMSdss, and DFSMShsm.
E
environment information
All of the information that is added to the log data to create reports. This information can include data
such as performance groups, shift periods, installation definitions, and so on.
G
goal mode
A mode of processing where the active service policy determines system resource management.
H
host
The MVS system where IBM Z Performance and Capacity Analytics runs collect and where the IBM Z
Performance and Capacity Analytics database is installed.
K

key columns
The columns of a Db2 table that together constitute the key.
L
log collector
An IBM Z Performance and Capacity Analytics program that processes log data sets and provides
other IBM Z Performance and Capacity Analytics services.
log data set
Any sequential data set that is used as input to IBM Z Performance and Capacity Analytics.
log definition
The description of a log data set processed by the log collector.
log procedure
A program module that is used to process all record types in certain log data sets.
lookup table
An IBM Z Performance and Capacity Analytics Db2 table that contains grouping, conversion, or
substitution information.
P
IBM Z Performance and Capacity Analytics database
A set of Db2 tables that includes data tables, lookup tables, system tables, and control tables.
R
record definition
The description of a record type contained in the log data sets used by IBM Z Performance and
Capacity Analytics, including detailed record layout and data formats.
record procedure
A program module that is called to process some types of log records.
record type
The classification of records in a log data set.
report group
A collection of IBM Z Performance and Capacity Analytics reports that can be referred to by a single
name.
reporting dialog
A set of host or workstation windows used to request reports.
resource group
A collection of network resources that are identified as belonging to a particular department or
division. Resources are organized into groups to reflect the structure of an organization.
S
section
A structure within a record that contains one or more fields and may contain other sections.
service class
A group of work which has the same performance goals, resource requirements, or business
importance. For workload management, you assign a service goal and optionally a resource group,
to a service class.
source
In an update definition, the record or Db2 table that contains the data used to update an IBM Z
Performance and Capacity Analytics Db2 table.
sysplex
A set of MVS systems communicating and cooperating with each other through certain multisystem
hardware components and software services to process customer workloads.
system table
A Db2 table that stores information that controls log collector processing, IBM Z Performance and
Capacity Analytics dialogs, and reporting.

372 IBM Z Performance and Capacity Analytics : Administration Guide and Reference
T
target
In an update definition, the Db2 table in which IBM Z Performance and Capacity Analytics stores data
from the source record or table.
threshold
The maximum or minimum acceptable level of usage. Usage measurements are compared with
threshold levels.
U
update definition
Instructions for entering data into Db2 tables from records of different types or from other Db2 tables.
V
view
An alternative representation of data from one or more tables. A view can include all or some of the
columns contained in the table on which it is defined.

Index

A B
abbreviation backing up the IBM Z Performance and Capacity Analytics
adding to an update definition 223 database 154
deleting from an update definition 224 backup, incremental-image or full-image 155
adding a column to a table 217 base, IBM Z Performance and Capacity Analytics 28
adding a log data set for collection 246 batch
adding a log ID and collect statements data set 243 generating problem records in 167
adding an abbreviation to an update definition 223 running reports 160
adding an object to a component 180 batch mode
ADMDEFS nickname ddname, description 160 collect 136
ADMGDF ddname, graphic reports data set 160 installing component 173
administering lookup and control tables 159 installing component in 173
administering problem records 166 reporting 166
administering reports 160 batch print SYSOUT class dialog parameter 117
administering the IBM Z Performance and Capacity Analytics books xix
database 145 books for IBM Z Performance and Capacity Analytics 369
administering user access to tables 236
administration dialog
collecting data 135
C
collecting log data 137 calculating and monitoring table space requirements 147
commands 307 calculating table space requirements
introduction 7 monitoring, and 147
options Catcher DataMover 62
Administration window 301 CFRM policy update
Components window 301 UPDPOL 36
Components window options changes in this edition xxi
Help pull-down 301 changing or deleting rows using the QMF table editor 205
Logs window 301 changing space definitions 30
Primary Menu 301 changing the collect statements data set name 244
Tables window 301 changing the retention period of information about a log data
ADMINISTRATION parameter 21 set 251
Administration window options CICS control tables
Other pull-down CICS_DICTIONARY 277
DB2I option 159 CICS_FIELD 278
ISPF/PDF option 301 clear.properties 42, 81
AGGR_VALUE control table 283 Collator
allocating libraries in the generic logon procedure introduction 10
SYSPROC collect
DRL.LOCAL.EXEC 21 batch mode 137
allocation overview, ddname 124 COLLECT log collector language statement 136
APAR (Authorized Program Analysis Report) 365 deciding which data sets to 247
APF-authorized data sets improving performance 144
SDRLEXTR 45 log collector messages 140
APPLY SCHEDULE clause monitoring activity 140
modifying 224 network configuration data collect job 139
authorization ID, Db2 secondary 17 sample collect job 137
authorized data sets vital product data collect job 139
SDRLEXTR 45 collect activity
Authorized Program Analysis Report (APAR) 365 monitoring 140
automated data gathering 33 collect messages
AVAILABILITY_D, _W, _M 279 sample 140
AVAILABILITY_PARM lookup table 281 using 141
AVAILABILITY_T 279 collect performance
improving 144
collect process 4
collect statements

collect statements (continued) Components window options (continued)
changing data set name 244 Space pull-down 30
editing 242 concatenation
IBM Z Performance and Capacity Analytics supplied, of log data sets 247
modifying 243 concatenation of log data sets 247
collect statements data set configuration
adding 243 DataMover 86
collect statements for log data manager DataMover, data splitting 96
listing data sets containing 242 DataMover, SMF filtering 95
Collect Statistics window 142 configuration options
Collect window 28 clear.properties 42, 81
collected data sets DASD-only log stream 35
viewing information about 251 DataMover, advanced options 95
collecting data DataMover.sh 39, 79
through administration dialog 135 DRLJSMFO 37
collecting data from a log into Db2 tables 186 DRLJSMFX 38
collecting data from IMS 139 hub DataMover 49, 52
collecting data from Tivoli Information Management for z/OS hub.properties 40
139 log stream on a coupling facility 35
collecting log data 135 publication.properties 80
collecting multiple log data sets 143 sample members 34, 79
collecting network configuration data 139 spoke data mover 49, 52
column spoke.properties 41
adding to a table 217 UPDPOL 36
column definition considerations when running DRLJTBSR 149
displaying and modifying 217 continuous collector
commands and options, administration dialog 301 configuration options 42
common data tables hub and spoke configuration 43
AVAILABILITY_D, _W, _M 279 modifying 253
AVAILABILITY_T 279 stand-alone configuration 42
EXCEPTION_T 280 stopping 253
MIGRATION_LOG 281 working with 253
retention periods 279 Continuous Collector
summarization level 278 CRLOGRDS 47
communications prerequisites 44, 99, 104, 105, 109 DRLJCCOL 38, 79
COMPonen command 307 hub and spoke 56
component implementing 56
adding an object 180 installing 33, 47
creating 182 stand-alone 56
deleting 182 control and common tables
deleting object 180 AGGR_VALUE 283
installation AVAILABILITY_PARM 281
definition members 125 CICS control tables
installing and uninstalling 169 CICS_DICTIONARY 277
installing online 171 CICS_FIELD 278
Sample component common data tables 279
definition member 125 DAY_OF_WEEK 275
description 283 description, lookup tables 281
uninstalling 176 PERIOD_PLAN 275
viewing objects 179 SCHEDULE 276
component definition SPECIAL_DAY 277
working with 178 TIME_RES 282
component installation USER_GROUP 282
excluding and object from 181 control and lookup tables
including an object 181 administering 159
components controlling objects that you have modified 178
installing 31 conventions
Components window 169 typeface xx
Components window options correcting corrupted data in the IBM Z Performance and
Component pull-down Capacity Analytics database 156
Print list option 303 correcting out-of-space condition in table or index space 156
Other pull-down corrupted data in the IBM Z Performance and Capacity
DB2I option 159 Analytics database
ISPF/PDF option 302 correcting 156

coupling facilityRETPD data tables, common (continued)
defining log stream 35 retention periods 279
create invoices and reports summarization level 278
running DRLNJOB3 363 database
create system tables 25 access 158
creating a component 182 administration 145
creating a log definition 193 backing up 154, 155
creating a record definition 198 error recovery 156
creating a record procedure definition 200 initialization 20
creating a report on a record 189 introduction 6
creating a table monitoring size 157
deleting a column using the administration dialog 234 name dialog parameter 116
using an existing table as a template 233 security 17, 158
creating a table space 234 tools 159
creating an update definitions 235 database access
creating and updating system tables with a batch job 26 monitoring 158
creating report groups 166 database backup
CRLOGRDS determining when 155
Continuous Collector, installing 47 database security
customer support maintaining 158
contacting IBM Support 365 DataMover
customizing additional features 84
DRLEINI1 21 Catcher 62
generic logon procedure 21 configuration 86
JCL sample jobs 27 configuration options, advanced 95
customizing Usage and Accounting Collector data splitting configuration 96
execute DRLNINIT 357 hub and spoke configuration 43
hub configuration options 49, 52
installing 33, 49
D memory management 85
DASD-only log streamRETPD record formats 85
defining 35 remote 62
data SMF filtering configuration 95
collecting from IMS 139 spoke configuration options 49, 52
collecting from Tivoli Information Management for z/OS stages 86
139 TCP/IP 43
data backup, incremental-image or full-image 155 tips 51
data collecting DataMover.sh 39, 79
batch collect 139 Datasplitter and SMF Extractor
IMS 139 introduction 11
data from Tivoli Information Management for date set
z/OS viewing dump 251
collecting 139 DAY_OF_WEEK control table 275
data in tables Db2
working with 203 data sets prefix dialog parameter 118
Data Publication 62, 64 Db2 plan name for IZPCA 116
data security how IBM Z Performance and Capacity Analytics uses
controlling 145 146
initializing 20 locking and concurrency 157
Data Selection window 28 messages
data set during system table creation 25
prefix dialog parameter 118 performance 20
saving a table definition in 232 statistics 157
data sets subsystem name dialog parameter 116
deciding which to collect 247 tools 159
data sets collected Db2 concepts
viewing list 184 understanding 145
data splitting configuration Db2 High Performance Unload
DataMover 96 integration 214
data tables, common Db2 High Performance Unload utility
AVAILABILITY_D, _W, _M 279 running 214
AVAILABILITY_T 279 sample control statement 215
EXCEPTION_T 280 Db2 tables
MIGRATION_LOG 281 collecting data from a log into 186

Db2 utility dialog (continued)
RUNSTATS 206 language options 118
DB2I parameters 23, 25
concepts 146 preparing 21
DB2I Primary Option Menu 159 dialog parameters
IBM Z Performance and Capacity Analytics interaction variables and fields 115
146 Dialog Parameters
secondary authorization IDs 20 window 23
statistics 157 Dialog Parameters window
tools 159 overview 110, 114
DB2I command 307 QMF not used 114
DCOLLECT records 297 when QMF is used 114
DEBUG parameter 21 DISPLay RECORD record_type command 307
deciding which log data sets to collect 247 DISPLay REPort report_ID command 307
DEFINE LOG log collector language statement 3, 128 DISPLay report_ID command 307
DEFINE RECORD DISPLay TABLE table_name command 307
log collector language statement 3 DISPLay table_name command 307
DEFINE RECORD log collector language statement 128 displaying a view definition 231
DEFINE UPDATE displaying and adding a table index 218
log collector language statement 4 displaying and editing the purge condition of a table 225
DEFINE UPDATE log collector language statement 129 displaying and modifying a column definition 217
defining objects, overview 125 displaying and modifying a table or index space 227
defining reports 131 displaying and modifying update definitions of a table 220
defining table spaces and indexes using the GENERATE displaying log statistics 187
statement 129 displaying the contents of a log 188
defining triggers 131 displaying the contents of a table 203
defining updates and views 131 displaying update definitions associated with a record 198
definition members distribution clause
component definitions 125 modifying 224
DRLxxxx.SDRLDEFS library 127 documentation
feature 127, 128 IBM Z Performance and Capacity Analytics 369
installation order 127 DRL.LOCAL.CHARTS 160
log 128 DRL.LOCAL.DEFS definitions library 119
record 128 DRL.LOCAL.EXEC, allocating 21
report 131 DRL.LOCAL.REPORTS 160
Sample component definition member 127 DRL.LOCAL.USER.DEFS definitions library 119
table and update definition members 129 DRLCHARTS system table 265
table space 128 DRLCOLUMNS view on Db2 catalog 274
deleting a column from a table being created 234 DRLCOMP_OBJECTS system table 267
deleting a component 182 DRLCOMP_PARTS system table 267
deleting a log data set 186 DRLCOMPONENTS system table 267
deleting a log definition 193 DRLEINI1 listing 113
deleting a record definition 199 DRLELDMC
deleting a record procedure definition 201 sample job 247
deleting a table index 220 DRLESTRA command 307
deleting a table or view 234 DRLEXPRESSIONS
deleting an abbreviation from an update definition 224 system table 257
deleting an object from a component 180 DRLFIELDS
deleting an update definition 236 system table 257
deleting information about a log data set 246 DRLGROUP_REPORTS system table 268
deleting or changing rows using the QMF table editor 205 DRLGROUPS system table 268
deleting the information about a log data set 251, 253 DRLINDEXES view on Db2 catalog 274
detail tables DRLINDEXPART view on Db2 catalog 274
AVAILABILITY_T 279 DRLJBATR batch reporting job 161
EXCEPTION_T 280 DRLJCCOL
MIGRATION_LOG 281 Continuous Collector JCL for started task 38, 79
determining partitioning mode and keys 25 DRLJCOIM
determining when to back up the IBM Z Performance and IMS collect job 139
Capacity Analytics database 155 DRLJCOIN collect job 139
dialog DRLJCOLL sample collect job 137
commands 307 DRLJCOVP network VPD collect job 139
Dialog Parameters window DRLJEXCE problem record job 167
when QMF is not used 114 DRLJLDMC
DRLEINI1 initialization exec 21 setting the parameters for job 249

DRLJLDMC collect job DRLUSER_REPORTS 274
parameters it uses 247 DRLUSER_REPORTTEXT 274
DRLJLDML DRLUSER_REPORTVARS 274
job step, using 239 DRLUSER_SEARCHATTR 274
sample job 240 DRLUSER_SEARCHES 274
setting the parameters for 241 DRLVIEWS view on Db2 catalog 274
DRLJLDML sample job 240 DRLvrm.SDRLDEFS
DRLJPURG purge job 152 naming convention for members 132
DRLJRFT report format table 139 DRLvrm.SDRLRENU
DRLJRUNS job naming convention for members 133
RUNSTATS utility 157 DRLxxx.SDRLDEFS definitions library
DRLJSMFO definition members 127
SMF Extractor control file 37 DRLxxx.SDRLEXEC, allocating 21
DRLJSMFX DRLxxx.SDRLLOAD, allocating in the logon procedure 21
SMF Extractor startup procedure 38 DSNxxx.SDSNLOAD, allocating in the logon procedure 21
DRLKEYS view on Db2 catalog 274 dump data set
DRLLDM_COLLECTSTMT viewing 251, 252
system table 258 DYNAMNBR
DRLLDM_LOGDATASETS setting the value 249
system table 258
DRLLOGDATASETS
system table 259
E
DRLLOGDATASETS system table 142 editing
DRLLOGS object definition 180
system table 260 editing a table using the ISPF editor 205
DRLNINIT editing the collect statements 242
executing 357 editing the contents of a table 204
DRLNJOB2 EREP
processing SMF data, using 360 records shipped with IBM Z Performance and Capacity
DRLOBJECT_DATA view on Q.OBJECT_DATA 274 Analytics 298
DRLPURGECOND errors, recovering from database 156
system table 261 EXCEPTION_T detail table 280
DRLRECORDPROCS exceptions
system table 261 reviewing 167
DRLRECORDS exceptions and problem records 167
system table 262 excluding an object from a component installation 181
DRLREP ddname, tabular reports data set 160 exporting table data to an IXF file 210
DRLREPORT_ATTR system table 269
DRLREPORT_COLUMNS system table 270
DRLREPORT_QUERIES system table 270 F
DRLREPORT_TEXT system table 270
features, IBM Z Performance and Capacity Analytics
DRLREPORT_VARS system table 271
performance
DRLREPORTS system table 268
definition member description
DRLRPROCINPUT system table 262
record 128
DRLSEARCH_ATTR system table 271
update and view 131
DRLSEARCHES system table 271
installation with base 16
DRLSECTIONS
flow of IBM Z Performance and Capacity Analytics 4
system table 263
DRLTABAUTH view on Db2 catalog 274
DRLTABLEPART view on Db2 catalog 274 G
DRLTABLES view on Db2 catalog 274
DRLTABLESPACE view on Db2 catalog 274 GDDM
DRLUPDATECOLS allocating load library 21
system table 263 GDDM-PGF
DRLUPDATEDISTR formats data set 120
system table 263 local formats data set 120
DRLUPDATELETS nicknames, ADMDEFS ddname 160
system table 264 GDDM.SADMMOD, allocating in the logon procedure 21
DRLUPDATES system table 264 GENERATE statement
DRLUSER_GROUPREPS 274 defining table spaces and indexes 129
DRLUSER_GROUPS 274 GENERATE_KEYS 273
DRLUSER_REPORTATTR 274 GENERATE_PROFILES 272
DRLUSER_REPORTCOLS 274 generating problem records in batch 167
DRLUSER_REPORTQRYS 274 generic logon procedure, customizing 21

graphic reports IBM Z Performance and Capacity Analytics administration dialog windows (
data set ddname, ADMGDF system window 23
allocation overview 124 IBM Z Performance and Capacity Analytics definition
dialog parameter description 120 members
naming convention 132
IBM Z Performance and Capacity Analytics dialog options
H 301
hardware IBM Z Performance and Capacity Analytics performance
prerequisites 13 features
Hardware and Network Considerations 76 introduction 2
header fields IBM Z Performance and Capacity Analytics Primary Menu 23
working with 192 IBM Z Performance and Capacity Analytics tables
HELP command 307 printing list 231
hub and spoke IBM Z Performance and Capacity Analytics Version variable
installing 33 format 126
hub and spoke configuration importing the contents of an IXF file to a table 209
continuous collector 43 improving collect performance 144
DataMover 43 including an object in a component installation 181
hub DataMover index space
configuration options 49, 52 displaying and modifying 227
hub system making changes 228
Continuous Collector, implementing 56 out of space 156
Continuous Collector, installing 47 index space definitions 30
DataMover, installing 49 index space, out of space condition 156
hub.properties 40 Indexes window options
hub.properties configuration file options 49, 52 Utilities pull-down
Run Db2 REORG utility 146
install and configure
I for ELK reporting 62, 64
for Splunk reporting 62, 64
IBM Documentation
installation
publications xix
base product and feature installation 16, 28
IBM Support 365
DRLEINI1 listing, variables 113
IBM Z Performance and Capacity Analytics
hardware prerequisites 13
administration dialog windows
software prerequisites 14
System Tables window 25
installation prerequisites 13
Administration window options 301
installation reference 113
component installation 125
installing
data flow 4
Continuous Collector 33
data sets 16
Data Mover 33
data sets prefix dialog parameter 119
hub and spoke 33
database administration 145
Publication DataMover 33
database, introduction to 6
SMF Extractor 33
installation
Usage and Accounting Collector 357
data sets 16
installing a component 169
database security 17
installing and uninstalling a component 169
Db2 database initialization 20
installing components 31
personal dialog parameters 23
installing IBM Z Performance and Capacity Analytics
QMF setup 24
determining partitioning mode and keys 25
test 28
installing components 31
introducing 1
reviewing results of SMP/E installation 16
migration 13
installing other IBM Z Performance and Capacity Analytics
objects overview 125
systems 31
performance features 2
installing the component in batch mode 173
Primary Menu options 301
installing the component online 171
record definitions shipped with IBM Z Performance and
installing Usage and Accounting Collector
Capacity Analytics 288
DRLNJOB1 360
IBM Z Performance and Capacity Analytics administration
DRLNJOB3 (DRLCMONY), running 363
dialog windows
JCL
Administration window 23
customizing Usage and Accounting Collector 357
Data Selection window 28
processing Usage and Accounting Collector subsystems
Dialog Parameters window 114
364
Logs window 28
integration with Db2 High Performance Unload 214
Primary Menu 23
introduction to the Key Performance Metrics components 8

introduction to the SMF Log Records component 9 log data manager (continued)
invoking the log data manager 239 summary of use of 238
ISPF log data manager option
ISPF.PROFILE 21 working with 238
ISPF command 307 log data set
ISPF editor adding for collection 246
editing a table 205 changing the retention period of information 251
IXF deleting 186
exporting table data to 210 deleting information about 246, 251, 253
IXF file modifying log ID 245
importing contents to a table 209 recording for re-collection 246
recording to be collected again 253
viewing unsuccessfully collected 252
J log data sets
JCL sample jobs concatenation of 247
customizing 27 listing sets to be collected 244
job statement information dialog parameter 120 modifying list of successfully collected 250
job step for recording a log data set for collection 239 modifying the list of unsuccessfully collected 252
viewing information about successfully collected 251
viewing list collected 184
K log definition
creating 193
Key Performance Metrics components
deleting 193
introduction 8
viewing and modifying 191
log definitions
L defining a log 128
introduction 3
language-dependent data sets 16 working with 191
library allocation in the generic logon procedure log ID
STEPLIB adding collect statements data set 243
DRLxxx.SDRLLOAD 21 log statistics
SYSEXEC 21 displaying 187
library definition members, DRLxxx.SDRLDEFS 127 log stream
listing coupling facility 35
subset of tables in the Tables window 232 define a DASD-only log stream 35
listing a subset of tables in the Tables window 232 log streams 44, 99, 104, 105, 109
listing the log data sets to be collected 244 logon procedure, customizing 21
loading tables 211 logs
local data sets 16 working with the contents of 184
local definitions data set dialog parameter 119 LOGS command 307
local messages data set dialog parameter 120 Logs window options
local user definitions data set dialog parameter 119 Log pull-down
LOcate argument command 307 Exit option 304
locking and concurrency 157 Print list option 304
log and record definitions Save definition option 304
working with 183 Other pull-down
log and record procedures 4 DB2I option 159
log collector ISPF/PDF option 303
introduction 3 View pull-down
modifying statements 241 All option 304
system tables 257 Some option 304
log collector language LOGSTAT, log data set statistics 142, 187
COLLECT 136 lookup and control tables
DEFINE LOG 3, 128 administering 159
DEFINE RECORD 128 AGGR_VALUE 283
DEFINE UPDATE 129 AVAILABILITY_PARM 281
SQL 125 CICS control tables
SQL CREATE 129 CICS_DICTIONARY 277
log data CICS_FIELD 278
collecting 135 common data tables 279
log data manager DAY_OF_WEEK 275
invoking 239 description, lookup tables 281
listing log data sets containing collect statements 242 PERIOD_PLAN 275
modifying list of log data sets to be collected 244 SCHEDULE 276

lookup and control tables (continued) object (continued)
SPECIAL_DAY 277 excluding from a component installation 181
TIME_RES 282 including in a component installation 181
USER_GROUP 282 object definition
viewing or editing 180
object definitions 125
M objects
maintaining database security 158 controlling 178
making changes to an index space 228 overview 125
making table space parameter changes that do not require viewing in a component 179
offline or batch action 230 OPC/ESA
manuals records shipped 298
IBM Z Performance and Capacity Analytics 369 opening a table to display columns 216
marking objects user-modified 178 operating routines
memory management setting up 135
DataMover 85 options and commands, administration dialog 301
messages out of space condition
collect 140 correcting 156
Db2 output options for reports 160
system table creation 25 overview of defining objects 125, 137
migration instructions 13 overview of Dialog Parameters window 110, 114
migration of objects, using VERSION variable 126 overview of IBM Z Performance and Capacity Analytics data
MIGRATION_LOG detail table 281 flow 4
modifiable area of DRLEINI1 113
modifying a distribution clause 224 P
modifying an apply schedule clause 224
modifying IBM Z Performance and Capacity Analytics parameters
supplied collect statements 243 setting for job DRLJLDMC 249
modifying log collector statements 241 table space reporting 148
modifying the continuous collector 253 parameters for table space reporting 148
modifying the list of successfully collected log data sets 250 partitioning mode and keys
modifying the list of unsuccessfully collected log data sets determining 25
252 PDF command 307
modifying the log ID for a log data set 245 performing routine data collection 139
monitoring collect activity 140 PERIOD_PLAN control table 275
monitoring database access 158 policy
monitoring size of the IBM Z Performance and Capacity update CFRM policy 36
Analytics database 157 UPDPOL 36
monitoring table space requirements ports 44, 99, 104, 105, 109
calculating, and 147 prefix for all other tables dialog parameter 116
multiple IBM Z Performance and Capacity Analytics , prefix for system tables dialog parameter 116
installing 31 prerequisites
multiple log data sets software 14
collecting 143 Primary Menu options
Options pull-down
Dialog parameters option 115
N printed reports from batch 161
naming convention for IBM Z Performance and Capacity Printer line count per page dialog parameter 118
Analytics definition members 132 printing a list of tables IBM Z Performance and Capacity
naming convention for members of DRLvrm.SDRLDEFS 132 Analytics tables 231
naming convention for members of DRLvrm.SDRLRENU 133 problem determination, IBM Support
navigation-administration dialog options and commands 301 determining business impact 365
network problem records
collecting configuration data 139 administering 166
network configuration data generating 167
collecting 139 procedures
Network Considerations 76 log and record 4
network data collect job 139 processing SMF data
nonsummarized data tables 279–281 using DRLNJOB2 (DRLCDATA and DRLCACCT) 360
processing Usage and Accounting Collector subsystems 364
Publication DataMover
O installing 33
Publication Mechanism 64
object
publication.properties 80

publications record definitions (continued)
accessing online xix record definitions shipped with IBM Z Performance and Capacity Analy
IBM Documentation xix SMF 288
IBM Z Performance and Capacity Analytics 369 Tivoli Workload Scheduler for z/OS (OPC) 299
Publishing Historical Data 76 VM accounting 299
pull-down options VMPRF 299, 300
Administration window 301 record definitions in a log
Components window 301 working with 194
Logs window 301 Record Definitions window options
Primary Menu 301 Other pull-down
Tables window 301 DB2I option 159
purge condition ISPF/PDF option 303
displaying and editing 225 record formats
Purge utility 152 DataMover 85
purging a table 210 record procedure definition
purging data 152 creating 200
viewing and modifying 199
record procedure definitions
Q deleting 201
Q.OBJECT_DATA QMF control table, view of 274 recording a log data set
QMF job step for 239
batch reporting 166 recording a log data set for re-collection 246
data sets prefix dialog parameter 118 recording a log data set to be collected again 253
initialization 24 records
language option dialog parameter 117 Linux on zSeries 298
query 24 recovering from database errors 156
query, importing 24 reference, installation 113
setup 24 remote DataMover 62
view on objects table 274 remote receiver
QMF command 308 ELK 62
QMF table editor Splunk 62
changing or deleting rows 205 Reorg/Discard utility 150
QMFxxx.SDSQLOAD, allocating in the logon procedure 21 report
query creating on a record 189
modifying to eliminate report variables 161 report definition language, defining report groups 131
typical report 161 report format table, DRLJRFT 139
reporting dialog
introduction 7
R reporting dialog mode dialog parameter 118
reports
RACF
PRA002 310
records shipped 298
PRA003 311
recalculating the content of a table 207
PRA004 312
Receiving raw SMF records from the SMF Extractor
reports and report groups
introduction 11
adding to report group 166
record
administering 160
creating a report 189
administration 161, 165
record definition
batch creation 163
creating 198
creating groups 166
deleting 199
customizing for batch processing 161
viewing and modifying 194
defining 131
working with sections 197
examples 131
record definition fields
graphic reports 160
working with 196
output options 160
record definitions
print options 160
DEFINE RECORD log collector language statement 128
printing or saving in batch 161
definition members 128, 131
QMF batch reporting 166
introduction 3, 128
query example 161
record definitions shipped with IBM Z Performance and
Reports window 28
Capacity Analytics
running in batch 160, 163
DCOLLECT 297
saved reports 160
EREP 298
saving in batch 161, 165
IMS SLDS 294
REPORTs command 308
OPC 298
REPORTS parameter 21

requirements saving a table definition in a data set 232
software 14 SCHEDULE control table 276
RESET parameter 21 secondary authorization IDs
retention periods, common data tables 279 security without 19
RETPD sections in a record definition
DASD-only log stream 35 working with 197
log stream on a coupling facility 35 security
Review the SID parameter 33 without secondary authorization IDs 19
Review your SYS settings 34 security without secondary authorization IDs 19
reviewing Db2 parameters 30 security, database
reviewing exceptions and generating problem records 167 secondary authorization IDs 17
reviewing table space profiles prior to installation 177 setting the DYNAMNBR value 249
reviewing the GENERATE statements for table spaces, setting the parameters for job DRLJLDMC 249
tables, and indexes 178 setting the parameters for job DRLJLDML 241
reviewing the results of the SMP/E installation 16 severity
routine data collection contacting IBM Support 365
performing 139 show IZPCA environment data 117
routines show IZPCA environment data 117
performing data collection 139 showing the size of the table 206
running Db2 High Performance Unload utility 214 SLDS records 294
running DRLJTBSR SMF Configuration 33
considerations 149 SMF Extractor
running DRLNJOB3 to create invoices and reports 363 DRLJSMFO 37
running reports in batch 160 DRLJSMFX 38
RUNSTATS utility installing 33, 45
DRLJRUNS job 157 introduction 11
tips 46
SMF filtering configuration
S DataMover 95
sample collect messages 140 SMF Log Records component
Sample component introduction 9
component definition member 125 SMF records 288
description 283 SMF_VPD data collect 139
object definition members 125 SMP/E installation
Sample Report 1 285 reviewing results 16
Sample Report 2 286 software
Sample Report 3 287 prerequisites 14
SAMPLE_H, _M tables 284 SOrt column_name|position ASC|DES command 308
SAMPLE_USER lookup table 284 SPECIAL_DAY control table 277
Sample component reports spoke and hub
introduction 285 installing 33
sample configuration members 34, 79 spoke DataMover
sample configurations configuration options 49, 52
clear.properties 42, 81 spoke.properties 41
DASD-only log stream 35 spoke.properties configuration file options 49, 52
DataMover.sh 39, 79 SQL ID to use (in QMF) dialog parameter 117
DRLJSMFO 37 SQL log collector language statement 125
DRLJSMFX 38 SQLMAX dialog parameter 118
hub.properties 40 stage keywords
log stream on a coupling facility 35 common 87
publication.properties 80 input 88
spoke.properties 41 output 92
UPDPOL 36 process 90
sample JCL stages
DRLJCCOL 38, 79 DataMover 86
sample JCL jobs 27 stand-alone configuration
sample job continuous collector 42
DRLELDMC 247 stand-alone system
DRLJLDML 240 Continuous Collector, implementing 56
SAMPLE log type Continuous Collector, installing 47
collecting log data 135 DataMover, implementing 49
saved charts data set dialog parameter 120 statistics, status monitoring 82
saved reports data set dialog parameter 120 status monitoring commands 82
saved reports, batch creation 165 stop the continuous collector 253

storage group default dialog parameter 116
streaming data
   remote DataMover 62
subset of tables
   listing in the Tables window 232
successfully collected data sets
   modifying list of 250
summary of changes xxi
Support
   contacting IBM 365
   describing problems 365
   determining business impact 365
   submitting problems 365
support information 365
supported products and releases
   software prerequisites 14
SYSOUT class (in QMF) dialog parameter 117
SYSPROC
   DRL.LOCAL.EXEC 21
SYStem command 308
system table
   DRLEXPRESSIONS 257
   DRLFIELDS 257
system tables
   log collector 257
system tables and views
   creating system tables 25
   dialog system tables 265
   DRLCHARTS 265
   DRLCOLUMNS 274
   DRLCOMP_OBJECTS 267
   DRLCOMP_PARTS 267
   DRLCOMPONENTS 267
   DRLGROUP_REPORTS 268
   DRLGROUPS 268
   DRLINDEXES 274
   DRLINDEXPART 274
   DRLKEYS 274
   DRLLDM_COLLECTSTMT 258
   DRLLDM_LOGDATASETS 258
   DRLLOGDATASETS 142, 259
   DRLLOGS 260
   DRLOBJECT_DATA 274
   DRLPURGECOND 261
   DRLRECORDPROCS 261
   DRLRECORDS 262
   DRLREPORT_ATTR 269
   DRLREPORT_COLUMNS 270
   DRLREPORT_QUERIES 270
   DRLREPORT_TEXT 270
   DRLREPORT_VARS 271
   DRLREPORTS 268
   DRLRPROCINPUT 262
   DRLSEARCH_ATTR 271
   DRLSEARCHES 271
   DRLSECTIONS 263
   DRLTABAUTH 274
   DRLTABLEPART 274
   DRLTABLES 274
   DRLTABLESPACE 274
   DRLUPDATECOLS 263
   DRLUPDATEDISTR 263
   DRLUPDATELETS 264
   DRLUPDATES 264
   DRLUSER_GROUPREPS 274
   DRLUSER_GROUPS 274
   DRLUSER_REPORTATTR 274
   DRLUSER_REPORTCOLS 274
   DRLUSER_REPORTQRYS 274
   DRLUSER_REPORTS 274
   DRLUSER_REPORTTEXT 274
   DRLUSER_REPORTVARS 274
   DRLUSER_SEARCHATTR 274
   DRLUSER_SEARCHES 274
   DRLVIEWS 274
   GENERATE_KEYS 273
   GENERATE_PROFILES 272
System Tables window 25
system window
   IBM Z Performance and Capacity Analytics administration dialog windows
      system window 23
systems, installing other IBM Z Performance and Capacity Analytics 31

T

table
   adding a column 217
   creating 232
   deleting 234
   deleting index 220
   displaying and adding index 218
   displaying and editing purge condition of 225
   displaying and modifying 220
   displaying contents of 203
   editing contents of 204
   opening to display columns 216
   purging 210
   recalculating contents of 207
table access 237
table and update definitions
   creating
      system tables 25
   definition members 129
   introduction 129
   IXF files, importing 209
   lookup and control tables 159
   modifying an APPLY SCHEDULE clause 224
TABle command 308
table data
   exporting to an IXF file 210
table definition
   saving in a data set 232
table definitions
   introduction 4
table space
   backing up 155
   creating 234
   definition members 128
   displaying and modifying 227
   introduction 7
   making parameter changes that do not require offline or batch action 230
   out of space 156
table space definitions 30

table space profiles
   reviewing prior to installation 177
   working with 177
table space reporting
   parameters 148
table space, out of space condition 156
table spaces
   understanding 146
table spaces, tables and indexes
   reviewing the GENERATE statements 178
table summarization levels, common 278
tables
   administering user access to 236
   unloading and loading 211
tables and update definitions
   working with 201
tables and views
   GENERATE_PROFILES 272
tables and views, system
   creating system tables 25
   dialog system tables 265
   DRLCHARTS 265
   DRLCOLUMNS 274
   DRLCOMP_OBJECTS 267
   DRLCOMP_PARTS 267
   DRLCOMPONENTS 267
   DRLGROUP_REPORTS 268
   DRLGROUPS 268
   DRLINDEXES 274
   DRLINDEXPART 274
   DRLKEYS 274
   DRLLOGDATASETS 142
   DRLOBJECT_DATA 274
   DRLREPORT_ATTR 269
   DRLREPORT_COLUMNS 270
   DRLREPORT_QUERIES 270
   DRLREPORT_TEXT 270
   DRLREPORT_VARS 271
   DRLREPORTS 268
   DRLSEARCH_ATTR 271
   DRLSEARCHES 271
   DRLSECTIONS 263
   DRLTABAUTH 274
   DRLTABLEPART 274
   DRLTABLES 274
   DRLTABLESPACE 274
   DRLUPDATECOLS 263
   DRLUPDATEDISTR 263
   DRLUPDATELETS 264
   DRLUPDATES 264
   DRLUSER_GROUPREPS 274
   DRLUSER_GROUPS 274
   DRLUSER_REPORTATTR 274
   DRLUSER_REPORTCOLS 274
   DRLUSER_REPORTQRYS 274
   DRLUSER_REPORTS 274
   DRLUSER_REPORTTEXT 274
   DRLUSER_REPORTVARS 274
   DRLUSER_SEARCHATTR 274
   DRLUSER_SEARCHES 274
   DRLVIEWS 274
   GENERATE_KEYS 273
tables naming standard, common 278
Tables Window
   listing a subset of tables 232
Tables window options
   Other pull-down
      ISPF/PDF option 305
   Table pull-down
      Exit option 305
tables, control and common
   AGGR_VALUE 283
   AVAILABILITY_PARM 281
   CICS control tables
      CICS_DICTIONARY 277
      CICS_FIELD 278
   common data tables 279
   DAY_OF_WEEK 275
   description, lookup tables 281
   PERIOD_PLAN 275
   SCHEDULE 276
   SPECIAL_DAY 277
   TIME_RES 282
   USER_GROUP 282
tables, control and lookup
   administering 159
tabular reports, DRLREP ddname 160
TCP/IP
   DataMover 43
TCP/IP ports 44, 99, 104, 105, 109
tech note
   migration instructions 13
temporary data sets prefix dialog parameter 119
testing component
   verify proper installation 176
testing the component to verify its proper installation 176
the DRLJLDMC collect job and the parameters it uses 247
TIME_RES lookup table 282
timestamp tables
   AVAILABILITY_T 279
   EXCEPTION_T 280
Tivoli Workload Scheduler for z/OS (OPC)
   records shipped 299
trigger definitions
   definition member 131
trigger, IBM Z Performance and Capacity Analytics performance
   definition member description
      update and view 131
typeface conventions xx

U

understanding Db2 concepts 145
understanding how IBM Z Performance and Capacity Analytics uses Db2 146
understanding how IBM Z Performance and Capacity Analytics uses Db2 locking and concurrency 157
understanding table spaces 146
uninstalling
   component 176
uninstalling a component 176
unloading and loading tables 211
unloading tables 211

update definition
   adding an abbreviation 223
   creating 235
   deleting 236
   deleting an abbreviation 224
Update Definition window 223
update definitions
   APPLY SCHEDULE clause 224
   definition member 131
   displaying 198
   displaying and modifying 220
   introduction 4, 129
UPDPOL
   update CFRM policy 36
Usage and Accounting Collector
   installing 357
   introduction 12
USER_GROUP lookup table 282
using collect messages 141
using the DRLJLDML job step 239
utility
   Purge 152
   Reorg/Discard 150

V

variables and fields
   dialog parameters 115
variables, eliminating report 161
verify installation
   testing the component 176
VERSION
   IBM Z Performance and Capacity Analytics variable format 126
VERSION variable 126
view
   deleting 234
view definition
   changing a comment 231
   displaying 231
view definitions
   definition member 131
viewing
   object definition 180
viewing a list of log data sets collected 184
viewing and modifying a log definition 191
viewing and modifying a record definition 194
viewing and modifying a record procedure definition 199
viewing objects in a component 179
viewing or editing an object definition 180
viewing the dump data set 251, 252
viewing the information about successfully collected log data sets 251
viewing the unsuccessfully collected log data set 252
views and tables, system
   creating system tables 25
   dialog system tables 265
   DRLCHARTS 265
   DRLCOLUMNS 274
   DRLCOMP_OBJECTS 267
   DRLCOMP_PARTS 267
   DRLCOMPONENTS 267
   DRLGROUP_REPORTS 268
   DRLGROUPS 268
   DRLINDEXES 274
   DRLINDEXPART 274
   DRLKEYS 274
   DRLLOGDATASETS 142
   DRLOBJECT_DATA 274
   DRLREPORT_ATTR 269
   DRLREPORT_COLUMNS 270
   DRLREPORT_QUERIES 270
   DRLREPORT_TEXT 270
   DRLREPORT_VARS 271
   DRLREPORTS 268
   DRLSEARCH_ATTR 271
   DRLSEARCHES 271
   DRLSECTIONS 263
   DRLTABAUTH 274
   DRLTABLEPART 274
   DRLTABLES 274
   DRLTABLESPACE 274
   DRLUPDATECOLS 263
   DRLUPDATEDISTR 263
   DRLUPDATELETS 264
   DRLUPDATES 264
   DRLUSER_GROUPREPS 274
   DRLUSER_GROUPS 274
   DRLUSER_REPORTATTR 274
   DRLUSER_REPORTCOLS 274
   DRLUSER_REPORTQRYS 274
   DRLUSER_REPORTS 274
   DRLUSER_REPORTTEXT 274
   DRLUSER_REPORTVARS 274
   DRLUSER_SEARCHATTR 274
   DRLUSER_SEARCHES 274
   DRLVIEWS 274
   GENERATE_PROFILES 272
views on Db2 catalog tables 273
views on IBM Z Performance and Capacity Analytics system tables 274
VM accounting records 299
VMPRF
   record definitions 299, 300
VPD data collecting 139

W

what's new xxi
windows, administration dialog windows
   Administration window 7, 23
   Collect Statistics window 142
   Collect window 28
   Data Selection window 28
   Logs window 28
   Primary Menu 23
   Reports window 28
   System Tables window 25
   system window 23
   System window 24
working with a component definition 178
working with abbreviations 223
working with components 168
working with data in tables 203
working with fields in a record definition 196
working with header fields 192
working with log and record definitions 183
working with log definitions 191

working with record definitions in a log 194
working with sections in a record definition 197
working with table space profiles 177
working with tables and update definitions 201, 216
working with the contents of logs 184
working with the continuous collector 253
working with the log data manager option 238

Z
z/VM Performance Toolkit
   record definitions 300

IBM®

SC28-3211-01
