Db2 11 for z/OS
Administration Guide
IBM
SC19-4050-07
Notes
Before using this information and the product it supports, be sure to read the general information under “Notices” at the
end of this information.
Subsequent editions of this PDF will not be delivered in IBM Publications Center. Always download the latest edition from
Db2 11 for z/OS Product Documentation.
iv Administration Guide
Dropping and re-creating a table space to change its attributes
Redistributing data in partitioned table spaces
Increasing partition size
Altering a page set to contain Db2-defined extents
| Converting partitioned (non-UTS) table spaces to partition-by-range universal table spaces
| Converting table spaces to use table-controlled partitioning
Altering Db2 tables
Adding a column to a table
Specifying a default value when altering a column
Altering the data type of a column
Altering a table for referential integrity
Adding or dropping table check constraints
Adding partitions
Altering partitions
Adding XML columns
| Altering tables for hash access (deprecated)
| Altering the size of your hash spaces (deprecated)
Adding a system period and system-period data versioning to an existing table
Adding an application period to a table
Manipulating data in a system-period temporal table
Altering materialized query tables
Altering the assignment of a validation routine
Altering a table to capture changed data
Changing an edit procedure or a field procedure
Altering the subtype of a string column
Altering the attributes of an identity column
Changing data types by dropping and re-creating the table
Moving a table to a table space of a different page size
Altering Db2 views
Altering views by using the INSTEAD OF trigger
| Changing data by using views that reference temporal tables
Altering Db2 indexes
Alternative method for altering an index
Adding columns to an index
Altering how varying-length index columns are stored
Altering the clustering of an index
Dropping and redefining a Db2 index
Reorganizing indexes
| Pending data definition changes
| Materializing pending definition changes
| Restrictions for changes to objects that have pending data definition changes
Altering stored procedures
Altering user-defined functions
Altering implicitly created XML objects
Changing the high-level qualifier for Db2 data sets
Defining a new integrated catalog alias
Changing the qualifier for system data sets
Changing qualifiers for other databases and user data sets
Tools for moving Db2 data
Moving Db2 data
Moving a Db2 data set
Destinations for command output messages
Unsolicited Db2 messages
Altering buffer pools
Monitoring buffer pools
Controlling user-defined functions
Starting user-defined functions
Monitoring user-defined functions
Stopping user-defined functions
Controlling Db2 utilities
Starting online utilities
Monitoring and changing online utilities
Controlling Db2 stand-alone utilities
Controlling the IRLM
z/OS commands that operate on IRLM
Starting the IRLM
Stopping the IRLM
Monitoring threads
Types of threads
Output of the DISPLAY THREAD command
Displaying information about threads
Monitoring all DBMSs in a transaction
Controlling connections
Controlling TSO connections
Controlling CICS connections
Controlling IMS connections
Controlling RRS connections
Controlling connections to remote systems
Controlling traces
Diagnostic traces for attachment facilities
Controlling the Db2 trace
Diagnostic trace for the IRLM
Setting the priority of stored procedures
| Setting special registers with profiles
Chapter 9. Managing the log and the bootstrap data set
How database changes are made
Units of recovery and points of consistency
How Db2 rolls back work
How the initial Db2 logging environment is established
How Db2 creates log records
How Db2 writes the active log
How Db2 writes (offloads) the archive log
How Db2 retrieves log records
Managing the log
Quiescing activity before offloading
Archiving the log
| Adding an active log data set to the active log inventory with the SET LOG command
Dynamically changing the checkpoint frequency
Setting limits for archive log tape units
Monitoring the system checkpoint
Displaying log information
| What to do before RBA or LRSN limits are reached
| Converting the BSDS to the 10-byte RBA and LRSN
| Converting page sets to the 10-byte RBA or LRSN format
| Resetting the log RBA value in a data sharing environment (6-byte format)
| Resetting the log RBA value in a non-data sharing environment (6-byte format)
Canceling and restarting an offload
Displaying the status of an offload
Discarding archive log records
Locating archive log data sets
Management of the bootstrap data set
Restoring dual-BSDS mode
BSDS copies with archive log data sets
Recommendations for changing the BSDS log inventory
Chapter 13. Recovering from different Db2 for z/OS problems
Recovering from IRLM failure
Recovering from z/OS or power failure
Recovering from disk failure
Recovering from application errors
Backing out incorrect application changes (with a quiesce point)
Backing out incorrect application changes (without a quiesce point)
Recovering from IMS-related failures
Recovering from IMS control region failure
Recovering from IMS indoubt units of recovery
Recovering from IMS application failure
Recovering from a Db2 failure in an IMS environment
Recovering from CICS-related failure
Recovering from CICS application failures
Recovering Db2 when CICS is not operational
Recovering Db2 when the CICS attachment facility cannot connect to Db2
Recovering CICS indoubt units of recovery
Recovering from CICS attachment facility failure
Recovering from a QMF query failure
Recovering from subsystem termination
Recovering from temporary resource failure
Recovering from active log failures
Recovering from being out of space in active logs
Recovering from a write I/O error on an active log data set
Recovering from a loss of dual active logging
Recovering from I/O errors while reading the active log
Recovering from archive log failures
Recovering from allocation problems with the archive log
Recovering from write I/O errors during archive log offload
Recovering from read I/O errors on an archive data set during recovery
Recovering from insufficient disk space for offload processing
Recovering from BSDS failures
Recovering from an I/O error on the BSDS
Recovering from an error that occurs while opening the BSDS
Recovering from unequal timestamps on BSDSs
Recovering the BSDS from a backup copy
Recovering from BSDS or log failures during restart
Recovering from failure during log initialization or current status rebuild
Recovering from a failure during forward log recovery
Recovering from a failure during backward log recovery
Recovering from a failure during a log RBA read request
Recovering from unresolvable BSDS or log data set problem during restart
Recovering from a failure resulting from total or excessive loss of log data
Resolving inconsistencies resulting from a conditional restart
Recovering from Db2 database failure
Recovering a Db2 subsystem to a prior point in time
Recovering from a down-level page set problem
Recovering from a problem with invalid LOBs
Recovering from table space I/O errors
Recovering from Db2 catalog or directory I/O errors
Recovering from integrated catalog facility failure
Recovering VSAM volume data sets that are out of space or destroyed
Recovering from out-of-disk-space or extent limit problems
Recovering from referential constraint violation
Recovering from distributed data facility failure
Recovering from conversation failure
Recovering from communications database failure
Recovering from database access thread failure
Recovering from VTAM failure
| Recovering from VTAM ACB OPEN problems
Recovering from TCP/IP failure
Recovering from remote logical unit failure
Recovering from an indefinite wait condition
Recovering database access threads after security failure
Performing remote-site disaster recovery
Recovering from a disaster by using system-level backups
Restoring data from image copies and archive logs
Recovering from disasters by using a tracker site
Using data mirroring for disaster recovery
Scenarios for resolving problems with indoubt threads
Scenario: Recovering from communication failure
Scenario: Making a heuristic decision about whether to commit or abort an indoubt thread
Scenario: Recovering from an IMS outage that results in an IMS cold start
Scenario: Recovering from a Db2 outage at a requester that results in a Db2 cold start
Scenario: What happens when the wrong Db2 subsystem is cold started
Scenario: Correcting damage from an incorrect heuristic decision about an indoubt thread
Field-definition for field procedures
Specifying field procedures
When field procedures are taken
Control blocks for execution of field procedures
Field-definition (function code 8)
Field-encoding (function code 0)
Field-decoding (function code 4)
Log capture routines
Specifying log capture routines
When log capture routines are invoked
Parameter list for log capture routines
Routines for dynamic plan selection in CICS
General guidelines for writing exit routines
Coding rules for exit routines
Modifying exit routines
Execution environment for exit routines
Registers at invocation for exit routines
Parameter list for exit routines
Row formats for edit and validation routines
Column boundaries for edit and validation procedures
Null values for edit procedures, field procedures, and validation routines
Fixed-length rows for edit and validation routines
Varying-length rows for edit and validation routines
Varying-length rows with nulls for edit and validation routines
EDITPROCs and VALIDPROCs for handling basic and reordered row formats
Converting basic row format table spaces with edit and validation routines to reordered row format
Dates, times, and timestamps for edit and validation routines
Parameter list for row format descriptions
Db2 codes for numeric data in edit and validation routines
Information resources for Db2 11 for z/OS and related products
Notices
Programming interface information
Trademarks
Terms and conditions for product documentation
Privacy policy considerations
Glossary
Index
Throughout this information, “Db2” means “Db2 11 for z/OS”. References to other
Db2 products use complete names or specific abbreviations.
Important: To find the most up-to-date content, always use IBM® Knowledge
Center, which is continually updated as soon as changes are ready. PDF manuals
are updated only when new editions are published, on an infrequent basis.
This information assumes that Db2 11 is running in new-function mode, and that
your application is running with the application compatibility value of 'V11R1'.
Availability of new function in Db2 11
The behavior of data definition statements such as CREATE, ALTER, and
DROP, which embed data manipulation SQL statements that contain new
capabilities, depends on the application compatibility value that is in effect
for the application. An application compatibility value of 'V11R1' must be
in effect for applications to use new capability in embedded statements
such as SELECT, INSERT, UPDATE, DELETE, MERGE, CALL, and SET
assignment-statement. Otherwise, an application compatibility value of
'V10R1' can be used for data definition statements.
Generally, new SQL capabilities, including changes to existing language
elements, functions, data manipulation statements, and limits, are available
only in new-function mode with applications set to an application
compatibility value of 'V11R1'.
Optimization and virtual storage enhancements are available in conversion
mode unless stated otherwise.
SQL statements can continue to run with the same expected behavior as in
DB2® 10 new-function mode with an application compatibility value of
'V10R1'.
Db2 11 utilities can use the DFSORT program regardless of whether you purchased
a license for DFSORT on your system. For more information, see the following
informational APARs:
v II14047
v II14213
v II13495
Db2 utilities can use IBM Db2 Sort for z/OS (5655-W42) as an alternative to
DFSORT for utility SORT and MERGE functions. Use of Db2 Sort for z/OS
requires the purchase of a Db2 Sort for z/OS license. For more information about
Db2 Sort for z/OS, see Db2 Sort for z/OS.
Related concepts:
Db2 utilities packaging (Db2 Utilities)
| About the Db2 brand change: IBM is rebranding DB2 to Db2. As such, there will
| be changes to all the Db2 offerings. For example, “DB2 for z/OS” is now referred
| to as “Db2 for z/OS,” beginning with Db2 11. While IBM implements the change
| across the Db2 family of products, you might see references to the original name
| “DB2 for z/OS” or “DB2” in different IBM web pages and documents. “DB2 for
| z/OS” and “Db2 for z/OS” refer to the same product, when the PID, Entitlement
| Entity, version, modification, and release information match. For more information,
| see Revised naming for IBM Db2 family products.
Accessibility features
The following list includes the major accessibility features in z/OS products,
including Db2 11 for z/OS. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size.
Tip: The IBM Knowledge Center (which includes information for Db2 for z/OS)
and its related publications are accessibility-enabled for the IBM Home Page
Reader. You can operate all features using the keyboard instead of the mouse.
Keyboard navigation
For information about navigating the Db2 for z/OS ISPF panels using TSO/E or
ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User's Guide, and the z/OS
ISPF User's Guide. These guides describe how to navigate each interface, including
the use of keyboard shortcuts or function keys (PF keys). Each guide includes the
default settings for the PF keys and explains how to modify their functions.
Apply the following rules when reading the syntax diagrams that are used in Db2
for z/OS documentation:
v Read the syntax diagrams from left to right, from top to bottom, following the
path of the line.
v Required items appear on the horizontal line (the main path).

►►──required_item───────────────────────────────────────►◄

v Optional items appear below the main path.

►►──required_item──┬────────────────┬───────────────────►◄
                   └─optional_item──┘

If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.

                   ┌─optional_item──┐
►►──required_item──┴────────────────┴───────────────────►◄

v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.

►►──required_item──┬─required_choice1─┬─────────────────►◄
                   └─required_choice2─┘

If choosing one of the items is optional, the entire stack appears below the
main path.

►►──required_item──┬───────────────────┬────────────────►◄
                   ├─optional_choice1──┤
                   └─optional_choice2──┘

If one of the items is the default, it appears above the main path and the
remaining choices are shown below.

                   ┌─default_choice──┐
►►──required_item──┼─────────────────┼──────────────────►◄
                   ├─optional_choice─┤
                   └─optional_choice─┘

v An arrow returning to the left, above the main line, indicates an item that
can be repeated.

                   ┌──────────────────┐
                   ▼                  │
►►──required_item────repeatable_item──┴─────────────────►◄

If the repeat arrow contains a comma, you must separate repeated items with a
comma.

                   ┌─,────────────────┐
                   ▼                  │
►►──required_item────repeatable_item──┴─────────────────►◄

A repeat arrow above a stack indicates that you can repeat the items in the
stack.

v Sometimes a diagram must be split into fragments. The syntax fragment is
shown separately from the main syntax diagram, but the contents of the
fragment should be read as if they are on the main path of the diagram.

►►──required_item──┤ fragment-name ├────────────────────►◄

fragment-name:

├──required_item──┬────────────────┬──┤
                  └─optional_name──┘
In logical data modeling, you design a model of the data without paying attention
to specific functions and capabilities of the DBMS that will store the data. In fact,
you could even build a logical data model without knowing which DBMS you will
use.
Designing and implementing a successful database, one that satisfies the needs of
an organization, requires a logical data model. Logical data modeling is the process
of documenting the comprehensive business information requirements in an
accurate and consistent format. Analysts who do data modeling define the data
items and the business rules that affect those data items. The process of data
modeling acknowledges that business data is a vital asset that the organization
needs to understand and carefully manage. This section contains information that
was adapted from Handbook of Relational Database Design.
These are all business facts that a manufacturing company's logical data model
needs to include. Many people inside and outside the company rely on
information that is based on these facts. Many reports include data about these
facts.
Any business, not just manufacturing companies, can benefit from the task of data
modeling. Database systems that supply information to decision makers,
customers, suppliers, and others are more successful if their foundation is a sound
data model.
To model data:
1. Build critical user views.
a. Carefully examine a single business activity or function.
b. Develop a user view, which is the model or representation of critical
information that the business activity requires.
This initial stage of the data modeling process is highly interactive. Because
data analysts cannot fully understand all areas of the business that they are
modeling, they work closely with the actual users. Working together,
analysts and users define the major entities (significant objects of interest)
and determine the general relationships between these entities.
In a later stage, the analyst combines each individual user view with all the
other user views into a consolidated logical data model.
2. Add keys to user views.
Key business rules affect insert, update, and delete operations on the data. For
example, a business rule might require that each customer entity have at least
one unique identifier. Any attempt to insert or update a customer identifier that
matches another customer identifier is not valid. In a data model, a unique
identifier is called a primary key.
3. Add detail to user views and validate them.
a. Add other descriptive details that are less vital.
b. Associate these descriptive details, called attributes, to the entities.
For example, a customer entity probably has an associated phone number.
The phone number is a non-key attribute of the customer entity.
c. Validate all the user views.
To validate the views, analysts use the normalization process and process
models. Process models document the details of how the business will use
the data.
4. Determine additional business rules that affect attributes.
a. Clarify the data-driven business rules.
Data-driven business rules are constraints on particular data values. These
constraints need to be true, regardless of any particular processing
requirements.
The advantage of defining data-driven business rules during the data
design stage, rather than during application design, is that programmers of
many applications don't need to write code to enforce these business rules.
For example, assume that a business rule requires that a customer entity
have a phone number, an address, or both. If this rule doesn't apply to the
data itself, programmers must develop, test, and maintain applications that
verify the existence of one of these attributes. Data-driven business
requirements have a direct relationship with the data, thereby relieving
programmers from extra work.
5. Integrate user views.
a. Combine the newly created user views into a consolidated logical data
model.
b. Integrate other data models that already exist in the organization with the
new consolidated logical data model.
At this stage, analysts also strive to make their data model flexible so that it
can support the current business environment and possible future changes. For
example, assume that a retail company operates in a single country and that
business plans include expansion to other countries. Armed with knowledge of
these plans, analysts can build the model so that it is flexible enough to
support expansion into other countries.
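The customer rule described in step 4 can be expressed as a declarative constraint in the data definition. The following sketch uses Python's built-in sqlite3 module purely for a self-contained illustration; Db2 for z/OS DDL differs in details, and the CUSTOMER table and its columns are hypothetical names chosen for this example:

```python
import sqlite3

# In-memory database for illustration only; a real design would target the
# DBMS chosen during physical design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE CUSTOMER (
        CUSTOMER_ID  INTEGER PRIMARY KEY,   -- unique identifier (step 2)
        PHONE        TEXT,
        ADDRESS      TEXT,
        -- Data-driven business rule (step 4): a customer must have a phone
        -- number, an address, or both.
        CHECK (PHONE IS NOT NULL OR ADDRESS IS NOT NULL)
    )
""")

conn.execute("INSERT INTO CUSTOMER VALUES (1, '555-0100', NULL)")  # valid

try:
    # Rejected: neither phone nor address is present.
    conn.execute("INSERT INTO CUSTOMER VALUES (2, NULL, NULL)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Because the rule lives in the data definition rather than in application code, every program that inserts customers gets the check enforced automatically.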
You begin by defining your entities, the significant objects of interest. Entities are
the things about which you want to store information. For example, you might
want to define an entity, called EMPLOYEE, for employees because you need to
store information about everyone who works for your organization. You might also
define an entity, called DEPARTMENT, for departments.
Next, you define primary keys for your entities. A primary key is a unique
identifier for an entity. In the case of the EMPLOYEE entity, you probably need to
store a large amount of information. However, most of this information (such as
gender, birth date, address, and hire date) would not be a good choice for the
primary key. In this case, you could choose a unique employee ID or number
(EMPLOYEE_NUMBER) as the primary key. In the case of the DEPARTMENT
entity, you could use a unique department number (DEPARTMENT_NUMBER) as
the primary key.
After you have decided on the entities and their primary keys, you can define the
relationships that exist between the entities. The relationships are based on the
primary keys. If you have an entity for EMPLOYEE and another entity for
DEPARTMENT, the relationship that exists is that employees are assigned to
departments. You can read more about this topic in the next section.
After defining the entities, their primary keys, and their relationships, you can
define additional attributes for the entities. In the case of the EMPLOYEE entity,
you might define the following additional attributes:
v Birth date
v Hire date
v Home address
v Office phone number
v Gender
v Resume
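Putting these pieces together, the entity, its primary key, and its attributes might eventually be rendered in SQL as a table definition along the following lines. The names and data types are illustrative assumptions, not part of the logical model:

```sql
-- Illustrative sketch only: one possible rendering of the EMPLOYEE entity.
-- RESUME is omitted here because it is modeled as a separate entity.
CREATE TABLE EMPLOYEE
  (EMPLOYEE_NUMBER     CHAR(6)     NOT NULL,
   EMPLOYEE_LAST_NAME  VARCHAR(15) NOT NULL,
   BIRTH_DATE          DATE,
   HIRE_DATE           DATE,
   HOME_ADDRESS        VARCHAR(60),
   OFFICE_PHONE_NUMBER CHAR(10),
   GENDER              CHAR(1),
   PRIMARY KEY (EMPLOYEE_NUMBER));
```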
Subsections:
v “One-to-one relationships”
v “One-to-many and many-to-one relationships”
v “Many-to-many relationships” on page 7
v “Business rules for relationships” on page 7
The type of a given relationship can vary, depending on the specific environment.
If employees of a company belong to several departments, the relationship
between employees and departments is many-to-many.
You need to define separate entities for different types of relationships. When
modeling relationships, you can use diagram conventions to depict relationships
by using different styles of lines to connect the entities.
One-to-one relationships
When you are doing logical database design, one-to-one relationships are
bidirectional relationships, which means that they are single-valued in both
directions. For example, an employee has a single resume; each resume belongs to
only one person. The following figure illustrates that a one-to-one relationship exists
between the two entities. In this case, the relationship reflects the rules that an
employee can have only one resume and that a resume can belong to only one
employee.
Figure: One-to-one relationship between employees and resumes. An employee
has a resume; a resume is owned by an employee.
One-to-many and many-to-one relationships
Figure: One-to-many relationship between departments and employees. Many
employees work for one department; one department can have many employees.
Many-to-many relationships
A many-to-many relationship is a relationship that is multivalued in both
directions. The following figure illustrates this kind of relationship. An employee
can work on more than one project, and a project can have more than one
employee assigned.
Figure: Many-to-many relationship between employees and projects. Employees
work on many projects; projects are worked on by many employees.
When you define relationships, you have a big influence on how smoothly your
business runs. If you don't do a good job at this task, your database and associated
applications are likely to have many problems, some of which may not manifest
themselves for years.
Entity attributes
When you define attributes for the entities, you generally work with the data
administrator to decide on names, data types, and appropriate values for the
attributes.
Attribute names
Most organizations have naming guidelines. In addition to following these
guidelines, data analysts also base attribute definitions on class words.
A class word is a single word that indicates the nature of the data that the attribute
represents.
The class word NUMBER indicates an attribute that identifies the number of an
entity. Therefore, attribute names that identify the numbers of entities should
include the class word NUMBER. Some examples are EMPLOYEE_NUMBER,
PROJECT_NUMBER, and DEPARTMENT_NUMBER.
When an organization does not have well-defined guidelines for attribute names,
data analysts try to determine how the database designers have historically named
attributes.
Data types
You might use the following data types for attributes of the EMPLOYEE entity:
v EMPLOYEE_NUMBER: CHAR(6)
v EMPLOYEE_LAST_NAME: VARCHAR(15)
v EMPLOYEE_HIRE_DATE: DATE
v EMPLOYEE_SALARY_AMOUNT: DECIMAL(9,2)
The data types that you choose are business definitions of the data type. During
physical database design, you might need to change data type definitions or use a
subset of these data types. The database or the host language might not support all
of these definitions, or you might make a different choice for performance reasons.
For example, you might need to represent monetary amounts, but Db2 and many
host languages do not have a data type MONEY. In the United States, a natural
choice for the SQL data type in this situation is DECIMAL(10,2) to represent
dollars. But you might also consider the INTEGER data type for fast, efficient
performance.
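As a sketch, the two alternatives for a salary attribute might look like the following; the table and column names are hypothetical:

```sql
-- DECIMAL(10,2) stores dollars and cents directly; INTEGER could store
-- cents for faster arithmetic at the cost of readability.
CREATE TABLE SALARY_DECIMAL
  (EMPLOYEE_NUMBER        CHAR(6) NOT NULL,
   EMPLOYEE_SALARY_AMOUNT DECIMAL(10,2));

CREATE TABLE SALARY_INTEGER
  (EMPLOYEE_NUMBER        CHAR(6) NOT NULL,
   EMPLOYEE_SALARY_CENTS  INTEGER);
```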
Related concepts:
Data types (Introduction to Db2 for z/OS)
Related reference:
CREATE TABLE (Db2 SQL)
SQL data type attributes (Db2 Programming for ODBC)
For example, you would not want to allow numeric data in an attribute for a
person's name. The data types that you choose limit the values that apply to a
given attribute, but you can also use other mechanisms. These other mechanisms
are domains, null values, and default values.
Subsections:
v “Domain”
v “Null values” on page 9
v “Default values” on page 9
Domain
A domain describes the conditions that an attribute value must meet to be a valid
value. Sometimes the domain identifies a range of valid values. By defining the
domain for a particular attribute, you apply business rules to ensure that the data
will make sense.
Example 1: A domain might state that a phone number attribute must be a 10-digit
value that contains only numbers. You would not want the phone number to be
incomplete, nor would you want it to contain alphabetic or special characters and
thereby be invalid. You could choose to use either a numeric data type or a
character data type. However, the domain states the business rule that the value
must be a 10-digit value that consists of numbers.
Example 2: A domain might state that a month attribute must be a 2-digit value
from 01 to 12. Again, you could choose to use datetime, character, or numeric data
types for this value, but the domain demands that the value must be in the range
of 01 through 12. In this case, incorporating the month into a datetime data type is
probably the best choice. This decision should be reviewed again during physical
database design.
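During physical design, one way (among several) to enforce a domain such as the month rule is a check constraint. The following sketch assumes a hypothetical table that stores the month as a small integer:

```sql
-- Hypothetical: the check constraint enforces the 01-12 month domain.
CREATE TABLE MONTHLY_SALES
  (SALES_YEAR  SMALLINT NOT NULL,
   SALES_MONTH SMALLINT NOT NULL
     CONSTRAINT MONTH_RANGE CHECK (SALES_MONTH BETWEEN 1 AND 12));
```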
Null values
When you are designing attributes for your entities, you will sometimes find that
an attribute does not have a value for every instance of the entity. For example,
you might want an attribute for a person's middle name, but you can't require a
value because some people have no middle name. For these occasions, you can
define the attribute so that it can contain null values.
A null value is a special indicator that represents the absence of a value. The value
can be absent because it is unknown, not yet supplied, or nonexistent. The DBMS
treats the null value as an actual value, not as a zero value, a blank, or an empty
string.
Just as some attributes should be allowed to contain null values, other attributes
should not contain null values.
Example: For the EMPLOYEE entity, you might not want to allow the attribute
EMPLOYEE_LAST_NAME to contain a null value.
Default values
In some cases, you may not want a given attribute to contain a null value, but you
don't want to require that the user or program always provide a value. In this case,
a default value might be appropriate.
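The following sketch shows how both decisions might look in SQL for a hypothetical employee table: the middle name is nullable, the last name is required, and the hire date defaults to the current date when no value is supplied:

```sql
CREATE TABLE EMPLOYEE_SKETCH
  (EMPLOYEE_NUMBER CHAR(6)     NOT NULL,
   LAST_NAME       VARCHAR(15) NOT NULL,   -- null values not allowed
   MIDDLE_NAME     VARCHAR(15),            -- null values allowed
   HIRE_DATE       DATE NOT NULL WITH DEFAULT CURRENT DATE);
```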
Entity normalization
After you define entities and decide on attributes for the entities, you normalize
entities to avoid redundancy.
The rules for normal form are cumulative. In other words, for an entity to satisfy
the rules of second normal form, it also must satisfy the rules of first normal form.
An entity that satisfies the rules of fourth normal form also satisfies the rules of
first, second, and third normal form.
In this section, you will see many references to the word instance. In the context of
logical data modeling, an instance is one particular occurrence. An instance of an
entity is a set of data values for all of the attributes that correspond to that entity.
Example: The following figure shows one instance of the EMPLOYEE entity.
A relational entity satisfies the requirement of first normal form if every instance of
an entity contains only one value, never multiple repeating attributes. Repeating
attributes, often called a repeating group, are different attributes that are inherently
the same. In an entity that satisfies the requirement of first normal form, each
attribute is independent and unique in its meaning and its name.
Example: An inventory entity records quantities of specific parts that are stored at
particular warehouses. The following figure shows the attributes of the inventory
entity.
Here, the primary key consists of the PART and the WAREHOUSE attributes
together. Because the attribute WAREHOUSE_ADDRESS depends only on the
value of WAREHOUSE, the entity violates the rule for second normal form. This
design causes several problems:
v Each instance for a part that this warehouse stores repeats the address of the
warehouse.
v If the address of the warehouse changes, every instance referring to a part that is
stored in that warehouse must be updated.
v Because of the redundancy, the data might become inconsistent. Different
instances could show different addresses for the same warehouse.
v If at any time the warehouse has no stored parts, the address of the warehouse
might not exist in any instances in the entity.
To satisfy second normal form, the information in the previous figure would be in
two entities, as the following figure shows.
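In SQL terms, the split might look like the following sketch; names and data types are illustrative:

```sql
-- PART_STOCK depends on the full key (PART, WAREHOUSE);
-- the warehouse address is stored exactly once, in WAREHOUSE.
CREATE TABLE PART_STOCK
  (PART      CHAR(8) NOT NULL,
   WAREHOUSE CHAR(4) NOT NULL,
   QUANTITY  INTEGER,
   PRIMARY KEY (PART, WAREHOUSE));

CREATE TABLE WAREHOUSE
  (WAREHOUSE         CHAR(4) NOT NULL PRIMARY KEY,
   WAREHOUSE_ADDRESS VARCHAR(60));
```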
An entity is in third normal form if each nonprimary key attribute provides a fact
that is independent of other non-key attributes and depends only on the key. A
violation of the third normal form occurs when a nonprimary attribute is a fact
about another non-key attribute.
Example: The first entity in the previous figure contains the attributes
EMPLOYEE_NUMBER and DEPARTMENT_NUMBER. Suppose that a program or
user adds an attribute, DEPARTMENT_NAME, to the entity. The new attribute
depends on DEPARTMENT_NUMBER, whereas the primary key is on the
EMPLOYEE_NUMBER attribute. The entity now violates third normal form.
Figure 8. Employee and department entities that satisfy the third normal form
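A sketch of that third-normal-form design, with illustrative names and data types, might be:

```sql
-- DEPARTMENT_NAME depends only on DEPARTMENT_NUMBER, so it belongs
-- in DEPARTMENT rather than in the employee entity.
CREATE TABLE EMPLOYEE_DEPT
  (EMPLOYEE_NUMBER   CHAR(6) NOT NULL PRIMARY KEY,
   DEPARTMENT_NUMBER CHAR(3));

CREATE TABLE DEPARTMENT
  (DEPARTMENT_NUMBER CHAR(3) NOT NULL PRIMARY KEY,
   DEPARTMENT_NAME   VARCHAR(36));
```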
Example: Consider the EMPLOYEE entity. Each instance of EMPLOYEE could have
both SKILL_CODE and LANGUAGE_CODE. An employee can have several skills
and know several languages. Two relationships exist, one between employees and
skills, and one between employees and languages. An entity is not in fourth
normal form if it represents both relationships, as the previous figure shows.
Instead, you can avoid this violation by creating two entities that represent both
relationships, as the following figure shows.
If, however, the facts are interdependent (that is, the employee applies certain
languages only to certain skills), you should not split the entity.
You can put any data into fourth normal form. A good rule to follow when doing
logical database design is to arrange all the data in entities that are in fourth
normal form. Then decide whether the result gives you an acceptable level of
performance. If the performance is not acceptable, denormalizing your design is a
good approach to improving performance.
Related concepts:
Practical examples of data modeling
Denormalization of tables
The Object Management Group is a consortium that created the UML standard.
UML modeling is based on object-oriented programming principles. The basic
difference between the entity-relationship model and the UML model is that,
instead of designing entities, you model objects. UML defines a standard set of
modeling diagrams for all stages of developing a software system. Conceptually,
UML diagrams are like the blueprints for the design of a software development
project.
v Sequence diagrams show object interactions in a time-based sequence
that establishes the roles of objects and helps determine class
responsibilities and interfaces.
v Collaboration diagrams show associations between objects that define
the sequence of messages that implement an operation or a transaction.
Component
Shows the dependency relationships between components, such as main
programs and subprograms.
The logical data model provides an overall view of the captured business
requirements as they pertain to data entities. The data model diagram graphically
represents the physical data model. The physical data model applies the logical
data model's captured requirements to specific DBMS languages. Physical data
models also capture the lower-level detail of a DBMS database.
Database designers can customize the data model diagram from other UML
diagrams, which allows them to work with concepts and terminology, such as
columns, tables, and relationships, with which they are already familiar.
Developers can also transform a logical data model into a physical data model.
Because the data model diagram includes diagrams for modeling an entire system,
it allows database designers, application developers, and other development team
members to share and track business requirements throughout development. For
example, database designers can capture information, such as constraints, triggers,
and indexes, directly on the UML diagram. Developers can also transfer between
object and data models and use basic transformation types such as many-to-many
relationships.
During physical design, you transform the entities into tables, the instances into
rows, and the attributes into columns. You and your colleagues must decide on
many factors that affect the physical design, such as:
v How to translate entities into physical tables
v What attributes to use for columns of the physical tables
v Which columns of the tables to define as keys
v What indexes to define on the tables
v What views to define on the tables
v How to denormalize the tables
v How to resolve many-to-many relationships
The task of building the physical design is a job that never ends. You need to
continually monitor the performance and data integrity characteristics of a
database as time passes. Many factors necessitate periodic refinements to the
physical design.
Db2 lets you change many of the key attributes of your design with ALTER SQL
statements. For example, assume that you design a partitioned table so that it will
store 36 months of data. Later you discover that you need to extend that design to
hold 84 months of data. You can add or rotate partitions for the current 36 months
to accommodate the new design.
The remainder of this chapter includes some valuable information that can help
you build and refine your database's physical design.
Denormalization of tables
During physical design, analysts transform the entities into tables and the
attributes into columns.
The warehouse address column first appears as part of a table that contains
information about parts and warehouses. To further normalize the design of the
table, analysts remove the warehouse address column from that table. Analysts
also define the column as part of a table that contains information only about
warehouses.
Example: Consider the design in which both tables have a column that contains
the addresses of warehouses. If this design makes join operations unnecessary, it
could be a worthwhile redundancy. Addresses of warehouses do not change often,
and if one does change, you can use SQL to update all instances fairly easily.
Tip: Do not automatically assume that all joins take too much time. If you join
normalized tables, you do not need to keep the same data values synchronized in
multiple tables. In many cases, joins are the most efficient access method, despite
the overhead they require. For example, some applications achieve 44-way joins in
subsecond response time.
When you are building your physical design, you and your colleagues need to
decide whether to denormalize the data. Specifically, you need to decide whether
to combine tables or parts of tables that are frequently accessed by joins that have
high performance requirements. This is a complex decision about which this book
cannot give specific advice. To make the decision, you need to assess the
performance requirements, different methods of accessing the data, and the costs of
denormalizing the data. You need to consider the trade-off: is duplication, in
several tables, of often-requested columns less expensive than the time for
performing joins?
Recommendations:
v Do not denormalize tables unless you have a good understanding of the data
and the business transactions that access the data. Consult with application
developers before denormalizing tables to improve the performance of users'
queries.
v When you decide whether to denormalize a table, consider all programs that
regularly access the table, both for reading and for updating. If programs
frequently update a table, denormalizing the table affects performance of update
programs because updates apply to multiple tables rather than to one table.
Example: Employees work on many projects. Projects have many employees. In the
logical database design, you show this relationship as a many-to-many relationship
between project and employee. To resolve this relationship, you create a new
associative table, EMPLOYEE_PROJECT. For each combination of employee and
project, the EMPLOYEE_PROJECT table contains a corresponding row. The
primary key for the table would consist of the employee number (EMPNO) and
the project number (PROJNO).
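A minimal sketch of such an associative table (the data types are assumptions) is:

```sql
-- One row per employee-project combination; the composite key
-- resolves the many-to-many relationship.
CREATE TABLE EMPLOYEE_PROJECT
  (EMPNO  CHAR(6) NOT NULL,
   PROJNO CHAR(6) NOT NULL,
   PRIMARY KEY (EMPNO, PROJNO));
```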
Example: Assume that a heavily used transaction requires the number of wires that
are sold by month in a given year. Performance factors might justify changing a
table so that it violates the rule of first normal form by storing repeating groups. In
this case, the repeating group would be: MONTH, WIRE. The table would contain
a row for the number of sold wires for each month (January wires, February wires,
March wires, and so on).
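Such a deliberately denormalized table might be sketched as follows; the names are hypothetical:

```sql
-- Violates first normal form on purpose: the MONTH/WIRE repeating group
-- is flattened into twelve columns so one row answers the whole query.
CREATE TABLE WIRE_SALES_BY_YEAR
  (SALES_YEAR SMALLINT NOT NULL PRIMARY KEY,
   JAN_WIRES INTEGER, FEB_WIRES INTEGER, MAR_WIRES INTEGER,
   APR_WIRES INTEGER, MAY_WIRES INTEGER, JUN_WIRES INTEGER,
   JUL_WIRES INTEGER, AUG_WIRES INTEGER, SEP_WIRES INTEGER,
   OCT_WIRES INTEGER, NOV_WIRES INTEGER, DEC_WIRES INTEGER);
```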
Some users might find that no single table contains all the data they need; rather,
the data might be scattered among several tables. Furthermore, one table might
contain more data than users want to see, or more than you want to authorize
them to see. For those situations, you can create views.
You can create a view any time after the underlying tables exist. The owner of a
set of tables implicitly has the authority to create a view on them. A user with
administrative authority at the system or database level can create a view for any
owner on any set of tables. If they have the necessary authority, other users can
also create views on a table that they did not create.
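For example, a view that limits users to a few non-sensitive columns of a hypothetical EMPLOYEE table might be defined like this:

```sql
-- Users query the view as if it were a table; the underlying
-- salary and birth date columns are not exposed.
CREATE VIEW EMPLOYEE_PHONEBOOK AS
  SELECT EMPLOYEE_NUMBER, EMPLOYEE_LAST_NAME, OFFICE_PHONE_NUMBER
  FROM EMPLOYEE;
```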
Related concepts:
Db2 views (Introduction to Db2 for z/OS)
Indexes on table columns
If you are involved in the physical design of a database, you will be working with
other designers to determine what columns you should index.
You will use process models that describe how different applications are going to
be accessing the data. This information is important when you decide on indexing
strategies to ensure adequate performance.
In general, users of the table are unaware that an index is in use. Db2 decides
whether to use the index to access the table.
Related concepts:
Creation of indexes (Introduction to Db2 for z/OS)
Index access (ACCESSTYPE is 'I', 'IN', 'I1', 'N', 'MX', or 'DX') (Db2
Performance)
Related tasks:
Designing indexes for performance (Db2 Performance)
Related information:
Implementing Db2 indexes
Introductory concepts
| Db2 hash spaces (deprecated) (Introduction to Db2 for z/OS)
If you are involved in the physical design of a database, you work with other
designers to determine when to enable hash access on tables.
The main purpose of hash access is to optimize data access. If your programs
regularly access a single row in a table and the table has a unique identifier for
| Introductory concepts
| Archive-enabled tables and archive tables (Introduction to Db2 for z/OS)
Chapter 2. Implementing your database design
Implementing your database design involves implementing Db2 objects, loading
and managing data, and altering your design as necessary.
Tip:
You can simplify your database implementation by letting Db2 implicitly create
certain objects for you. For example, if you omit the IN clause in a CREATE
TABLE statement, Db2 creates a table space and database for the table, and creates
other required objects such as:
v The primary key enforcing index and the unique key index
v The ROWID index (if the ROWID column is defined as GENERATED BY
DEFAULT)
v LOB table spaces and auxiliary tables and indexes for LOB columns
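For instance, a CREATE TABLE statement that omits the IN clause, as sketched below with hypothetical names, triggers this implicit creation:

```sql
-- No IN clause: Db2 implicitly creates a database and table space
-- for the table, with Db2-generated names.
CREATE TABLE PROJECT_NOTES
  (NOTE_ID INTEGER NOT NULL PRIMARY KEY,
   NOTE    VARCHAR(100));
```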
Related concepts:
Altering your database design
Related tasks:
Designing databases for performance (Db2 Performance)
Compressing your data (Db2 Performance)
Related reference:
CREATE TABLE (Db2 SQL)
You have the following options for creating storage groups and managing Db2
data sets:
v You can let Db2 manage the data sets. This option means less work for Db2
database administrators.
v You can let SMS manage some or all of the data sets, either when you use Db2
storage groups or when you use data sets that you have defined yourself. This
option offers a reduced workload for Db2 database administrators and storage
administrators. For more information, see “Enabling SMS to control Db2 storage
groups” on page 25.
v You can define and manage your own data sets using VSAM Access Method
Services. This option gives you the most control over the physical storage of
tables and indexes.
Related tasks:
Altering Db2 storage groups
The following list describes some of the things that Db2 does for you in managing
your auxiliary storage requirements:
v When a table space is created, Db2 defines the necessary VSAM data sets using
VSAM Access Method Services. After the data sets are created, you can process
them with access method service commands that support VSAM control-interval
(CI) processing (for example, IMPORT and EXPORT).
Exception: You can defer the allocation of data sets for table spaces and index
spaces by specifying the DEFINE NO clause on the associated statement
(CREATE TABLESPACE and CREATE INDEX), which also must specify the
USING STOGROUP clause.
v When a table space is dropped, Db2 automatically deletes the associated data
sets.
v When a data set in a segmented or simple table space reaches its maximum size
of 2 GB, Db2 might automatically create a new data set. The primary data set
allocation is obtained for each new data set.
v When needed, Db2 can extend individual data sets.
v When you create or reorganize a table space that has associated data sets, Db2
deletes and then redefines them, reclaiming fragmented space. However, when
you run REORG with the REUSE option and SHRLEVEL NONE, REORG resets
and reuses Db2-managed data sets without deleting and redefining them. If the
size of your table space is not changing, using the REUSE parameter could be
more efficient.
Exception: When reorganizing a LOB table space with the SHRLEVEL NONE
option, Db2 does not delete and redefine the first data set that was allocated for
the table space. If the REORG results in empty data sets beyond the first data
set, Db2 deletes those empty data sets.
v When you want to move data sets to a new volume, you can alter the volumes
list in your storage group. Db2 automatically relocates your data sets during the
utility operations that build or rebuild a data set (LOAD REPLACE, REORG,
REBUILD, and RECOVER).
Restriction: If you use the REUSE option, Db2 does not delete and redefine the
data sets and therefore does not move them.
For a LOB table space, you can alter the volumes list in your storage group, and
Db2 automatically relocates your data sets during the utility operations that
build or rebuild a data set (LOAD REPLACE and RECOVER).
To move user-defined data sets, you must delete and redefine the data sets in
another location.
Related concepts:
Managing your own data sets
Related information:
Managing Db2 data sets with DFSMShsm
Db2 page sets are defined as VSAM linear data sets. Db2 can define data sets with
variable VSAM control intervals. One of the biggest benefits of variable VSAM
control intervals is an improvement in query processing performance.
The following table shows the default and compatible control interval sizes for
each table space page size. For example, a table space with pages that are 16 KB in
size can have a VSAM control interval of 4 KB or 16 KB. Control interval sizing
has no impact on indexes. Index pages are always 4 KB in size.
Table 1. Default and compatible control interval sizes

Table space page size    Default control interval size    Compatible control interval sizes
4 KB                     4 KB                             4 KB
8 KB                     8 KB                             4 KB, 8 KB
16 KB                    16 KB                            4 KB, 16 KB
32 KB                    32 KB                            4 KB, 32 KB
Results
After you define a storage group, Db2 stores information about it in the Db2
catalog. (This catalog is not the same as the integrated catalog facility catalog that
describes Db2 VSAM data sets). The catalog table SYSIBM.SYSSTOGROUP has a
row for each storage group, and SYSIBM.SYSVOLUMES has a row for each
volume. With the proper authorization, you can retrieve the catalog information
about Db2 storage groups by using SQL statements.
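For example, a query along the following lines (column names as documented in the Db2 catalog reference) lists each storage group with its volumes:

```sql
SELECT SG.NAME, SG.VCATNAME, VOL.VOLID
  FROM SYSIBM.SYSSTOGROUP SG
  LEFT JOIN SYSIBM.SYSVOLUMES VOL
    ON VOL.SGNAME = SG.NAME
  ORDER BY SG.NAME;
```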
What to do next
If you use Db2 to allocate data to specific volumes, you must assign an SMS
storage class with guaranteed space, and you must manage free space for each
volume to prevent failures during the initial allocation and extension. Using
guaranteed space reduces the benefits of SMS allocation, requires more time for
space management, and can result in more space shortages. You should only use
guaranteed space when space needs are relatively small and do not change.
Related tasks:
Migrating to DFSMShsm
Related reference:
CREATE STOGROUP (Db2 SQL)
For example, you might be installing a software program that requires that many
table spaces be created, but your company might not need to use some of those
table spaces. You might prefer not to allocate data sets for the table spaces that you
will not be using.
Restriction: The DEFINE NO clause is not allowed for table spaces in a work file
database, or for user-defined data sets. (In the case of user-defined data sets, the
table space is created with the USING VCAT clause of the CREATE TABLESPACE
statement).
Do not use the DEFINE NO clause on a table space if you plan to use a tool
outside of Db2 to propagate data into a data set in the table space. When you use
DEFINE NO, the Db2 catalog indicates that the data sets have not yet been
allocated for that table space. Then, if data is propagated from a tool outside of
Db2 into a data set in the table space, the Db2 catalog information does not reflect
the fact that the data set has been allocated. The resulting inconsistency causes Db2
to deny application programs access to the data until the inconsistency is resolved.
Results
The table space is created, but Db2 does not allocate (that is, define) the associated
data sets until a row is inserted or loaded into a table in that table space. The Db2
catalog table SYSIBM.SYSTABLEPART contains a record of the created table space
and an indication that the data sets are not yet allocated.
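A minimal sketch of such a deferred-allocation definition (the database, table space, and storage group names are hypothetical):

```sql
-- DEFINE NO defers data set allocation until the first row arrives;
-- it requires Db2-managed storage (USING STOGROUP).
CREATE TABLESPACE DEFERTS IN MYDB
  USING STOGROUP MYSTOGRP
  DEFINE NO;
```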
If new extensions reach the end of the volume, Db2 accesses all candidate volumes
from the Db2 storage group and issues the Access Method Services command
ALTER ADDVOLUMES to add these volumes to the integrated catalog facility
(ICF) catalog as candidate volumes for the data set. Db2 then makes a request to
allocate a secondary extent on any one of the candidate volumes that has space
available. After the allocation is successful, Db2 issues the command ALTER
REMOVEVOLUMES to remove all candidate volumes from the ICF catalog for the
data set.
Db2 extends data sets when either of the following conditions occurs:
v The requested space exceeds the remaining space in the data set.
v 10% of the secondary allocation space (but not over 10 allocation units, based on
either tracks or cylinders) exceeds the remaining space.
If Db2 fails to extend a data set with a secondary allocation space because of
insufficient available space on any single candidate volume of a Db2 storage
group, Db2 tries again to extend with the requested space if the requested space is
smaller than the secondary allocation space. Typically, Db2 requests only one
additional page. In this case, a small amount of two units (tracks or cylinders, as
determined by DFSMS based on the SECQTY value) is allocated. To monitor data
set extension activity, use IFCID 258 in statistics class 3.
Nonpartitioned spaces
For a nonpartitioned table space or a nonpartitioned index space, Db2 defines the
first piece of the page set starting with a primary allocation space, and extends that
piece by using secondary allocation spaces. When the end of the first piece is
reached, Db2 defines a new piece (which is a new data set) and extends that new
piece starting with a primary allocation space.
Exception: When a table space requires a new piece, the primary allocation
quantity of the new piece is determined as follows:
1. If the value of subsystem parameter MGEXTSZ is NO, the primary quantity is
the PRIQTY value for the table space. If PRIQTY is not specified, the default for
PRIQTY is used.
2. If the value of MGEXTSZ is YES, the primary quantity is the maximum of the
following values:
– The quantity that is calculated through sliding scale methodology
– The primary quantity from rule 1
– The specified SECQTY value
Partitioned spaces
For a partitioned table space or a partitioned index space, each partition is a data
set. Therefore, Db2 defines each partition with the primary allocation space and
extends each partition's data set by using a secondary allocation space, as needed.
Extension failures
If a data set uses all possible extents, Db2 cannot extend that data set. For a
partitioned page set, the extension fails only for the particular partition that Db2 is
trying to extend. For nonpartitioned page sets, Db2 cannot extend to a new data
set piece, which means that the extension for the entire page set fails.
To avoid extension failures, allow Db2 to use the default value for primary space
allocation and to use a sliding scale algorithm for secondary extent allocations.
Db2 might not be able to extend a data set if the data set is in an SMS data class
that constrains the number of extents to less than the number that is required to
reach full size. To prevent extension failures, make sure that the SMS data class
setting for the number of allowed extents is large enough to accommodate 128 GB
and 256 GB data sets.
Related concepts:
Primary space allocation
Secondary space allocation
Related tasks:
Avoiding excessively small extents (Db2 Performance)
If the secondary space allocation is too small, the data set might have to be
extended more times to satisfy those activities that need a large space.
To indicate that you want Db2 to use the default values for primary space
allocation of table spaces and indexes, specify a value of 0 for the following
parameters on installation panel DSNTIP7, as shown in the following table.
Table 2. DSNTIP7 parameter values for managing space allocations
Installation panel DSNTIP7 parameter Recommended value
TABLE SPACE ALLOCATION 0
INDEX SPACE ALLOCATION 0
Thereafter:
v On CREATE TABLESPACE and CREATE INDEX statements, do not specify a
value for the PRIQTY option.
v On ALTER TABLESPACE and ALTER INDEX statements, specify a value of -1
for the PRIQTY option.
For those situations in which the default primary quantity value is not large
enough, you can specify a larger value for the PRIQTY option when creating or
altering table spaces and indexes. Db2 always uses a PRIQTY value if one is
explicitly specified.
If you want to prevent Db2 from using the default value for primary space
allocation of table spaces and indexes, specify a non-zero value for the TABLE
SPACE ALLOCATION and INDEX SPACE ALLOCATION parameters on
installation panel DSNTIP7.
Secondary space allocation
Db2 can calculate the amount of space to allocate to secondary extents by using a
sliding scale algorithm.
The first 127 extents are allocated in increasing size, and the remaining extents are
allocated based on the initial size of the data set:
v For 32 GB, 64 GB, 128 GB, and 256 GB data sets, each extent is allocated with a
size of 559 cylinders.
v For data sets that range in size from less than 1 GB to 16 GB, each extent is
allocated with a size of 127 cylinders.
If Db2 is installed on z/OS Version 1 Release 7 or later, you can modify the
Extent Constraint Removal option. If you set the Extent Constraint Removal
option to YES in the SMS data class, the maximum number of
extents can be up to 7257. However, the limits of 123 extents per volume and a
maximum volume count of 59 per data set remain valid. For more information, see
Using VSAM extent constraint removal (DFSMS Using the New Functions).
Maximum allocation is shown in the following table. This table assumes that the
initial extent that is allocated is one cylinder in size.
Table 3. Maximum allocation of secondary extents
Maximum data set size, in GB    Maximum allocation, in cylinders    Extents required to reach full size
1 127 54
2 127 75
4 127 107
8 127 154
16 127 246
32 559 172
64 559 255
128 559 414
256 559 740
Db2 uses a sliding scale for secondary extent allocations of table spaces
and indexes when:
v You do not specify a value for the SECQTY option on a CREATE
TABLESPACE or CREATE INDEX statement.
v You specify a value of -1 for the SECQTY option on an ALTER
TABLESPACE or ALTER INDEX statement.
Otherwise, Db2 always uses a SECQTY value for secondary extent allocations, if
one is explicitly specified.
Exception: For those situations in which the calculated secondary quantity value
is not large enough, you can specify a larger value for the SECQTY option when
creating or altering table spaces and indexes. However, in the case where the
OPTIMIZE EXTENT SIZING parameter is set to YES and you specify a value for
the SECQTY option, Db2 uses the value of the SECQTY option to allocate a
secondary extent only if the value of the option is larger than the value that is
derived from the sliding scale algorithm. The calculation that Db2 uses to make
this determination is:
Actual secondary extent size = max ( min ( ss_extent, MaxAlloc ), SECQTY )
In this calculation, ss_extent represents the value that is derived from the sliding
scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the
maximum potential data set size. This approach allows you to reach the maximum
page set size faster. Otherwise, Db2 uses the value that is derived from the sliding
scale algorithm.
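This rule can be sketched in Python. The function below is an illustrative model only, not Db2 code; ss_extent stands for the value derived from the sliding scale algorithm, and max_alloc is the 127- or 559-cylinder cap described above.

```python
def actual_secondary_extent(ss_extent, max_alloc, secqty):
    """Secondary extent size, in cylinders, when OPTIMIZE EXTENT SIZING
    is YES and an explicit SECQTY value is specified.

    ss_extent -- value derived from the sliding scale algorithm
    max_alloc -- 127 or 559, depending on maximum potential data set size
    secqty    -- explicitly specified SECQTY value
    """
    # SECQTY is honored only when it exceeds the sliding-scale value.
    return max(min(ss_extent, max_alloc), secqty)

print(actual_secondary_extent(ss_extent=40, max_alloc=127, secqty=100))  # 100
print(actual_secondary_extent(ss_extent=40, max_alloc=127, secqty=20))   # 40
```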
If you do not provide a value for the secondary space allocation quantity, Db2
calculates a secondary space allocation value equal to 10% of the primary space
allocation value and subject to the following conditions:
v The value cannot be less than 127 cylinders for data sets that range in initial size
from less than 1 GB to 16 GB, and cannot be less than 559 cylinders for 32 GB
and 64 GB data sets.
v The value cannot be more than the value that is derived from the sliding scale
algorithm.
The calculation that Db2 uses for the secondary space allocation value is:
Actual secondary extent size = max ( 0.1 × PRIQTY, min ( ss_extent, MaxAlloc ) )
In this calculation, ss_extent represents the value that is derived from the sliding
scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the
maximum potential data set size.
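For comparison, the default calculation can be sketched the same way. This is again an illustrative model, with all quantities assumed to be expressed in cylinders:

```python
def default_secondary_extent(priqty, ss_extent, max_alloc):
    """Secondary extent size when no SECQTY value is provided:
    10% of the primary quantity, bounded by the sliding-scale value."""
    return max(0.1 * priqty, min(ss_extent, max_alloc))

print(default_secondary_extent(priqty=100, ss_extent=50, max_alloc=127))  # 50
```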
If you do not want Db2 to extend a data set, you can specify a value of 0 for the
SECQTY option. Specifying 0 is a useful way to prevent DSNDB07 work files from
growing out of proportion.
If you want to prevent Db2 from using the sliding scale for secondary extent
allocations of table spaces and indexes, specify a value of NO for the OPTIMIZE
EXTENT SIZING parameter on installation panel DSNTIP7.
Related concepts:
How Db2 extends data sets
Example of primary and secondary space allocation
Primary and secondary space allocation quantities are affected by a CREATE
statement and two subsequent ALTER statements.
This example assumes a maximum data set size of less than 32 GB, and the
following parameter values on installation panel DSNTIP7:
v TABLE SPACE ALLOCATION = 0
v INDEX SPACE ALLOCATION = 0
v OPTIMIZE EXTENT SIZING = YES
Table 4. Example of specified and actual space allocations
                   Specified  Actual primary      Specified  Actual secondary
Action             PRIQTY     quantity allocated  SECQTY     quantity allocated
CREATE TABLESPACE  100 KB     100 KB              1000 KB    2 cylinders
ALTER TABLESPACE   -1         1 cylinder          2000 KB    3 cylinders
ALTER TABLESPACE              1 cylinder          -1         1 cylinder
You can also use DFSMShsm to move data sets that have not been recently used to
slower, less expensive storage devices. Moving the data sets helps to ensure that
disk space is managed efficiently.
Related concepts:
Managing your own data sets
Advantages of storage groups
Migrating to DFSMShsm
If you decide to use DFSMShsm for your Db2 data sets, you should develop a
migration plan with your system administrator.
With user-managed data sets, you can specify DFSMS classes on the Access
Method Services DEFINE command. With Db2 storage groups, you can specify
SMS classes in the CREATE STOGROUP statement, develop automatic class
selection routines, or both.
Procedure
3. Define the SMS classes for your table space data sets and index data sets.
4. Code the SMS automatic class selection (ACS) routines to assign indexes to one
SMS storage class and to assign table spaces to a different SMS storage class.
Example
Related tasks:
Letting SMS manage your Db2 storage groups
Enabling SMS to control Db2 storage groups
For processes that read more than one archive log data set, such as the RECOVER
utility, Db2 anticipates a DFSMShsm recall of migrated archive log data sets. When
a Db2 process finishes reading one data set, it can continue with the next data set
without delay, because the data set might already have been recalled by
DFSMShsm.
If you accept the default value YES for the RECALL DATABASE parameter on the
Operator Functions panel (DSNTIPO), Db2 also recalls migrated table spaces and
index spaces. At data set open time, Db2 waits for DFSMShsm to perform the
recall. You can specify the amount of time Db2 waits while the recall is being
performed with the RECALL DELAY parameter, which is also on panel DSNTIPO.
If RECALL DELAY is set to zero, Db2 does not wait, and the recall is performed
asynchronously.
You can use System Managed Storage (SMS) to archive Db2 subsystem data sets,
including the Db2 catalog, Db2 directory, active logs, and work file databases
(DSNDB07 in a non-data-sharing environment). However, before starting Db2, you
should recall these data sets by using DFSMShsm. Alternatively, you can avoid
migrating these data sets by assigning them to a management class that prevents
migration.
If a volume has a STOGROUP specified, you must recall that volume only to
volumes of the same device type as others in the STOGROUP.
In addition, you must coordinate the DFSMShsm automatic purge period, the Db2
log retention period, and MODIFY utility usage. Otherwise, the image copies or
logs that you might need during a recovery could already have been deleted.
The RECOVER utility runs this command if the point of recovery is defined by an
image copy that was taken by using the CONCURRENT option of the COPY
utility.
The DFSMSdss RESTORE command extends a data set differently than Db2, so
after this command runs, you must alter the page set to contain extents that are
defined by Db2.
Related tasks:
Altering a page set to contain Db2-defined extents
Related reference:
RECOVER (Db2 Utilities)
The BACKUP SYSTEM utility uses copy pools. A copy pool is a named set of
storage groups that can be backed up and restored as a unit; DFSMShsm processes
the storage groups collectively for fast replication. Each Db2 subsystem has up to
two copy pools, one for databases and one for logs.
Copy pools are also referred to as source storage groups. Each source storage
group contains the name of an associated copy pool backup storage group, which
contains eligible volumes for the backups. The storage administrator must define
both the source and target storage groups. Use the following Db2 naming
convention for a copy pool:
DSN$locn-name$cp-type
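As a sketch, the convention can be expressed as a simple string template. The cp-type values DB (database copy pool) and LG (log copy pool) are assumed here for illustration:

```python
def copy_pool_name(locn_name, cp_type):
    """Build a copy pool name of the form DSN$locn-name$cp-type."""
    if cp_type not in ("DB", "LG"):
        raise ValueError("cp_type must be DB or LG")
    return f"DSN${locn_name}${cp_type}"

print(copy_pool_name("DSNCAT", "DB"))  # DSN$DSNCAT$DB
```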
The Db2 BACKUP SYSTEM and RESTORE SYSTEM utilities invoke DFSMShsm to
back up and restore the copy pools. DFSMShsm interacts with DFSMSsms to
determine the volumes that belong to a given copy pool so that the volume-level
backup and restore functions can be invoked.
Tip: The BACKUP SYSTEM utility can dump the copy pools to tape automatically
if you specify the options that enable that function.
Related tasks:
Managing DFSMShsm default settings when using the BACKUP SYSTEM,
RESTORE SYSTEM, and RECOVER utilities
Related reference:
BACKUP SYSTEM (Db2 Utilities)
RESTORE SYSTEM (Db2 Utilities)
The first time that you use the ESTABLISH FCINCREMENTAL keyword in an
invocation of the BACKUP SYSTEM utility, the persistent incremental FlashCopy
relationship is established. The incremental FlashCopy relationship exists until you
withdraw it by specifying the END FCINCREMENTAL keyword in the utility
control statement.
For the first invocation of BACKUP SYSTEM that specifies the ESTABLISH
FCINCREMENTAL keyword, all of the tracks of each source volume are copied to
their corresponding target volumes. For subsequent BACKUP SYSTEM requests,
only the changed tracks are copied to the target volumes.
If you keep more than one DASD FlashCopy version of the database copy pool,
you need to create full-copy backups for versions other than the incremental
version.
For example, you decide to keep two DASD FlashCopy versions of your database
copy pool. You invoke the BACKUP SYSTEM utility with the ESTABLISH
FCINCREMENTAL keyword. A full copy of each volume is created, because the
incremental FlashCopy relationship is established for the first time. You invoke the
BACKUP SYSTEM utility the next day. This request creates the second version of
the backup. This version is a full-copy backup, because the incremental FlashCopy
relationship is established with the target volumes in the first version. The
following day you run the BACKUP SYSTEM utility again, but without the
ESTABLISH FCINCREMENTAL keyword. The incremental version is the oldest
version, so the incremental version is used for the FlashCopy backup. This time
only the tracks that have changed are copied. The result is a complete copy of the
source volume.
Tip: As table spaces and index spaces expand, you might need to provide
additional data sets. To take advantage of parallel I/O streams when doing certain
read-only queries, consider spreading large table spaces over different disk
volumes that are attached on separate channel paths.
Related concepts:
Advantages of storage groups
You must define a data set for each of the following items:
v A simple or segmented table space
v A partition of a partitioned table space
v A partition of a partitioned index
You must define the data sets before you can issue the CREATE TABLESPACE,
CREATE INDEX, or ALTER TABLE ADD PARTITION SQL statements.
If you create a partitioned table space, you must create a separate data set for each
partition, or you must allocate space for each partition by using the PARTITION
option of the NUMPARTS clause in the CREATE TABLESPACE statement.
If you create a partitioned secondary index, you must create a separate data set for
each partition. Alternatively, for Db2 to manage your data sets, you must allocate
space for each partition by using the PARTITIONED option of the CREATE INDEX
statement.
If you create a partitioning index that is partitioned, you must create a separate
data set for each partition. Alternatively, for Db2 to manage your data sets, you
must allocate space for each partition by using the PARTITIONED option or the
PARTITION ENDING AT clause of the CREATE INDEX statement in the case of
index-controlled partitioning.
Procedure
Example
DEFINE CLUSTER -
(NAME(DSNCAT.DSNDBC.DSNDB06.SYSUSER.I0001.A001) -
LINEAR -
REUSE -
VOLUMES(DSNV01) -
RECORDS(100 100) -
SHAREOPTIONS(3 3) ) -
DATA -
(NAME(DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A001)) -
CATALOG(DSNCAT)
For user-managed data sets, you must pre-allocate shadow data sets prior to
running the following against the table space:
v REORG with SHRLEVEL CHANGE
v REORG with SHRLEVEL REFERENCE
v CHECK INDEX with SHRLEVEL CHANGE
v CHECK DATA with SHRLEVEL CHANGE
v CHECK LOB with SHRLEVEL CHANGE
You can specify the MODEL option for the DEFINE CLUSTER command so that
the shadow is created like the original data set, as shown in the following example
code.
DEFINE CLUSTER -
(NAME('DSNCAT.DSNDBC.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBC.DSNDB06.SYSUSER.y0001.A001')) -
DATA -
(NAME('DSNCAT.DSNDBD.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBD.DSNDB06.SYSUSER.y0001.A001'))
In the previous example, the instance qualifiers x and y are distinct and are equal
to either I or J. You must determine the correct instance qualifier to use for a
shadow data set by querying the Db2 catalog for the database and table space.
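Because x and y are always the opposite members of the I/J pair, the shadow name can be derived mechanically once the current qualifier is known from the catalog. The helper below is hypothetical and only illustrates the toggle:

```python
def shadow_dsname(dsname):
    """Swap the I/J instance qualifier (the fifth qualifier, for example
    I0001) in a Db2 VSAM data set name to form the shadow name."""
    parts = dsname.split(".")
    instance = parts[4]                      # e.g. "I0001"
    parts[4] = {"I": "J", "J": "I"}[instance[0]] + instance[1:]
    return ".".join(parts)

print(shadow_dsname("DSNCAT.DSNDBC.DSNDB06.SYSUSER.I0001.A001"))
# DSNCAT.DSNDBC.DSNDB06.SYSUSER.J0001.A001
```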
What to do next
The DEFINE CLUSTER command has many optional parameters that do not apply
when Db2 uses the data set. If you use the parameters SPANNED,
EXCEPTIONEXIT, BUFFERSPACE, or WRITECHECK, VSAM applies them to your
data set, but Db2 ignores them when it accesses the data set.
The value of the OWNER parameter for clusters that are defined for storage
groups is the first SYSADM authorization ID specified at installation.
When you drop indexes or table spaces for which you defined the data sets, you
must delete the data sets unless you want to reuse them. To reuse a data set, first
commit, and then create a new table space or index with the same name. When
Db2 uses the new object, it overwrites the old information with new information,
which destroys the old data.
Likewise, if you delete data sets, you must drop the corresponding table spaces
and indexes; Db2 does not drop these objects automatically.
Related concepts:
Advantages of storage groups
Related reference:
Data set naming conventions
When you define a data set, you must give each data set a name that is in the
correct format.
message. If the size of the data set for a simple or a segmented table space
approaches the maximum limit, define another data set with the same
name as the first data set, but with the number 002 in its final qualifier.
The next data set is numbered 003, and so on.
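Following the naming convention shown in the DEFINE CLUSTER examples elsewhere in this information, successive pieces differ only in the final qualifier (A001, A002, and so on). A hypothetical sketch:

```python
def next_piece_name(dsname):
    """Given a data set name that ends in a piece qualifier such as A001,
    return the name of the next data set piece (A002, and so on)."""
    head, piece = dsname.rsplit(".", 1)       # piece is e.g. "A001"
    return f"{head}.A{int(piece[1:]) + 1:03d}"

print(next_piece_name("DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A001"))
# DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A002
```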
You can reach the VSAM extent limit for a data set before you reach the
size limit for a partitioned or a nonpartitioned table space. If this happens,
Db2 does not extend the data set.
Related reference:
CREATE INDEX (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
Procedure
Exceptions:
| v If you specify the USING VCAT clause for indexes that are not created on the
| Db2 catalog, you create and manage the data sets yourself.
v If you specify the DEFINE NO clause on a CREATE INDEX statement with the
USING STOGROUP clause, Db2 defers the allocation of the data sets for the
index space.
Procedure
Results
Information about space allocation for the index is stored in the Db2 catalog table
SYSIBM.SYSINDEXPART. Other information about the index is in the
SYSIBM.SYSINDEXES table.
Related reference:
CREATE INDEX (Db2 SQL)
You must use EA-enabled table spaces or index spaces if you specify a DSSIZE that
is larger than 4 GB in the CREATE TABLESPACE statement.
Procedure
Creating partitioned table spaces that are enabled for EA
Introductory concepts
Db2 table spaces (Introduction to Db2 for z/OS)
You can let Db2 create table spaces for your tables, or you can create them
explicitly before you create the tables. It is best to create partition-by-growth or
partition-by-range universal table spaces in most cases. Other table space types are
deprecated. That is, they are supported in Db2 11, but support might be removed
in the future.
Related tasks:
Converting table spaces to use table-controlled partitioning
Related reference:
CREATE TABLE (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
Universal table spaces (UTS) combine the benefits of data partitions and segmented
organization. A UTS always contains only a single table.
| Deprecated function: Non-UTS table spaces for base tables are deprecated and
| likely to be unsupported in the future.
Partitioned (non-UTS) table spaces use partitions based on ranges of data values,
like partition-by-range table spaces, but they do not use segmented organization.
Segmented (non-UTS) table spaces store the data from separate tables in different
segments, but they cannot be partitioned. Simple table spaces are not partitioned
or segmented.
Tip: For best results, convert all simple and other non-UTS table spaces to UTS
table space types as soon as possible. For more information about altering table
spaces, see Altering table spaces and ALTER TABLESPACE (Db2 SQL).
Table 6. Comparison of table space types (continued)
Type                               Segmented?  Partitioned?                     Remarks
Partitioned (non-UTS) table space  No          Yes, based on data value ranges  This type is deprecated.
Segmented (non-UTS) table space    Yes         No                               This type is deprecated.
Simple table space                 No          No                               This type is deprecated.1
Notes:
1. Db2 11 does not support creating simple table spaces. Existing simple table
spaces remain supported, but they are likely to be unsupported in the future.
The TYPE column of the SYSIBM.SYSTABLESPACE catalog table indicates the type
of each table space.
blank The table space was created without the LOB or MEMBER CLUSTER
options. If the DSSIZE column is zero, the table space is not greater than 64
gigabytes.
G Partition-by-growth table space.
L The table space can be greater than 64 gigabytes.
O The table space was defined with the LOB option (the table space is a LOB
table space).
P Implicit table space created for XML columns.
R Partition-by-range table space.
Related tasks:
Converting table spaces to use table-controlled partitioning
Related information:
In a partition-by-range table space, the partitions are based on the boundary values
that are defined for specific data columns. For example, the following figure
illustrates the data pages for a table with two partitions.
Figure: Data pages for a table with two partitions. Partition 1 holds key range
A-L, and partition 2 holds key range M-Z.
Utilities and SQL statements can run concurrently on each partition. For example, a
utility job can work on part of the data while allowing other applications to
concurrently access data on other partitions. In that way, several concurrent utility
jobs can, for example, load all partitions of a table space concurrently. Because you
can work on part of your data, some of your operations on the data might require
less time.
You can create an index of any type on a table in a partition-by-range table space.
| Tip: Partition-by-range table spaces are the suggested alternative for partitioned
| (non-UTS) table spaces, which are deprecated.
Partition-by-growth table spaces are best used when a table is expected to exceed
64 GB, or when a table does not have a suitable partitioning key.
Partition-by-growth table spaces can grow up to 128 TB, depending on the buffer
pool page size used, and the MAXPARTITIONS and DSSIZE values specified when
the table space is created.
| Tip: Partition-by-growth table spaces are the suggested alternative for single-table
| Db2-managed segmented (non-UTS) table spaces, which are deprecated.
Restrictions for partition-by-growth table spaces:
LOB objects can do more than store large object data. If you define your LOB
columns for infrequently accessed data, a table space scan on the remaining data in
the base table is potentially faster because the scan generally includes fewer pages.
A LOB table space always has a direct relationship with the table space that
contains the logical LOB column values. The table space that contains the table
with the LOB columns is, in this context, the base table space. LOB data is logically
associated with the base table, but it is physically stored in an auxiliary table that
resides in a LOB table space. Only one auxiliary table can exist in a large object
table space. A LOB value can span several pages. However, only one LOB value is
stored per page.
You must have a LOB table space for each LOB column that exists in a table. For
example, if your table has LOB columns for both resumes and photographs, you
need one LOB table space (and one auxiliary table) for each of those columns. If
the base table space is a partitioned table space, you need one LOB table space for
each LOB in each partition.
If the base table space is not a partitioned table space, each LOB table space is
associated with one LOB column in the base table. If the base table space is a
partitioned table space, each partition of the base table space is associated with a
LOB table space. Therefore, if the base table space is a partitioned table space, you
can store more LOB data for each LOB column.
The following table shows the approximate amount of LOB data that you can store
for a LOB column in each of the different types of base table spaces.
Recommendations:
v Consider defining long string columns as LOB columns when a row does not fit
in a 32 KB page. Use the following guidelines to determine if a LOB column is a
good choice:
– Defining a long string column as a LOB column might be better if the
following conditions are true:
- Table space scans are normally run on the table.
- The long string column is not referenced often.
- Removing the long string column from the base table is likely to improve
the performance of table space scans.
– LOBs are physically stored in another table space. Therefore, performance for
inserting, updating, and retrieving long strings might be better for non-LOB
strings than for LOB strings.
v Consider specifying a separate buffer pool for large object data.
Related concepts:
Creation of large objects (Introduction to Db2 for z/OS)
Related tasks:
Choosing data page sizes for LOB data (Db2 Performance)
Choosing data page sizes (Db2 Performance)
Related reference:
CREATE AUXILIARY TABLE (Db2 SQL)
CREATE LOB TABLESPACE (Db2 SQL)
An XML table space is implicitly created when an XML column is added to a base
table. If the base table is partitioned, one partitioned table space exists for each
XML column of data. An XML table space is always associated with the table space
that contains the logical XML column value. In this context, the table space that
contains the table with the XML column is called the base table space.
Related concepts:
XML table space implicit creation (Introduction to Db2 for z/OS)
Related tasks:
Creating table spaces explicitly
Choosing data page sizes (Db2 Performance)
| Deprecated function: Non-UTS table spaces for base tables are deprecated and
| likely to be unsupported in the future.
The partitions are based on the boundary values that are defined for specific data
columns. Utilities and SQL statements can run concurrently on each partition.
Figure: A partitioned (non-UTS) table space with two partitions. Partition 1 holds
key range A-L, and partition 2 holds key range M-Z.
Related tasks:
Creating table spaces explicitly
Choosing data page sizes (Db2 Performance)
Related reference:
CREATE INDEX (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
| Deprecated function: Non-UTS table spaces for base tables are deprecated and
| likely to be unsupported in the future.
Table space pages can be 4 KB, 8 KB, 16 KB, or 32 KB in size. The pages hold
segments, and each segment holds records from only one table. Each segment
contains the same number of pages, and each table uses only as many segments as
it needs.
When you run a statement that searches all the rows for one table, Db2 does not
need to scan the entire table space. Instead, Db2 can scan only the segments of the
table space that contain that table. The following figure shows a possible
organization of segments in a segmented table space.
Figure: A possible organization of segments in a segmented table space. Segments
1 through 5 each contain rows from only one of tables A, B, and C, and segments
of the same table need not be adjacent.
When you use an INSERT statement, a MERGE statement, or the LOAD utility to
insert records into a table, records from the same table are stored in different
segments. You can reorganize the table space to move segments of the same table
together.
If you want to leave pages of free space in a segmented (non-UTS) table space, you
must have at least one free page in each segment. Specify the FREEPAGE clause
with a value that is less than the SEGSIZE value.
Example: If you use FREEPAGE 30 with SEGSIZE 20, Db2 interprets the value of
FREEPAGE as 19, and you get one free page in each segment.
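The adjustment in this example can be modeled as a simple cap. The function below illustrates the rule and is not Db2 code:

```python
def effective_freepage(freepage, segsize):
    """Db2 interprets FREEPAGE values of SEGSIZE or greater as
    SEGSIZE - 1, so that each segment keeps one free page."""
    return min(freepage, segsize - 1)

print(effective_freepage(30, 20))  # 19
```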
Restriction: If you are creating a segmented table (non-UTS) space for use by
declared temporary tables, you cannot specify the FREEPAGE or LOCKSIZE
clause.
| Deprecated function: Non-UTS table spaces for base tables are deprecated and
| likely to be unsupported in the future.
You cannot create new simple table spaces, but you can alter and update or
retrieve data from existing simple table spaces. If you implicitly create a table
space or explicitly create a table space without specifying the SEGSIZE,
NUMPARTS, or MAXPARTITIONS clauses, the result is a segmented table space
instead of a simple table space. By default, the segmented table space has a
SEGSIZE value of 4 and a LOCKSIZE value of ROW.
Tip: If you have any simple table spaces in your database, alter them to another
type of table space as soon as possible with the ALTER TABLESPACE statement. If
a simple table space contains only one table, alter it to a partition-by-range or
partition-by-growth universal table space.
Related concepts:
Segmented (non-UTS) table spaces (deprecated)
Related tasks:
Dropping and re-creating a table space to change its attributes
Creating table spaces explicitly
Choosing data page sizes (Db2 Performance)
Related reference:
CREATE TABLESPACE (Db2 SQL)
ALTER TABLESPACE (Db2 SQL)
When Db2 defines a table space implicitly, it completes the following actions:
v Generates a table space for you.
v Derives a table space name from the name of your table, according to the
following rules:
– The table space name is the same as the table name if the following
conditions apply:
- No other table space or index space in the database already has that name.
- The table name has no more than eight characters.
- The characters are all alphanumeric, and the first character is not a digit.
– If another table space in the database already has the same name as the table,
Db2 assigns a name of the form xxxxnyyy, where xxxx is the first four
characters of the table name, and nyyy is a single digit and three letters that
guarantee uniqueness.
| v If the IN database clause is not specified, Db2 generates a database for you with
| the name DSNxxxxx, where xxxxx is a five-digit number.
v Uses default values for space allocation.
| v Uses the buffer pool for the specified database. However, Db2 chooses a suitable
| buffer pool for the table space from the subsystem parameter values TBSBPOOL,
| TBSBP8K, TBSBP16K, and TBSBP32K if any of the following conditions apply:
| – The IN database-name clause is not specified.
| – The IN database-name clause is specified, and the table record length does not
| fit in the database buffer pool page size.
| – The IN database-name clause is specified, and the table space is for a
| hash-organized table and it has a calculated optimal page size that is not the
| same as the database buffer pool page size.
v Creates the required LOB objects and XML objects.
v Creates unique indexes for UNIQUE constraints.
v Creates the primary key index.
v Creates the ROWID index, if the ROWID column is defined as GENERATED BY
DEFAULT.
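The name-derivation rules in the list above can be sketched as a predicate. The helper name is hypothetical, and it covers only whether the table name itself qualifies as the implicit table space name:

```python
def table_name_usable_as_ts_name(table_name, existing_names):
    """True if Db2 can use the table name as the implicit table space
    name: eight characters or fewer, all alphanumeric, not starting
    with a digit, and not already used in the database."""
    return (len(table_name) <= 8
            and table_name.isalnum()
            and not table_name[0].isdigit()
            and table_name not in existing_names)

print(table_name_usable_as_ts_name("EMP", set()))        # True
print(table_name_usable_as_ts_name("EMPLOYEES", set()))  # False
```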
Each XML column has its own table space. The XML table space does not have
limit keys. The XML data resides in the partition number that corresponds to the
partition number of the base row. Tables that contain XML columns also have the
following implicitly created objects:
v A hidden column to store the document ID.
The document ID is a Db2 generated value that uniquely identifies a row. The
document ID is used to identify documents within the XML table. The document
ID is common for all XML columns, and its value is unique within the table.
v A unique index on the document ID (document ID index).
The document ID index points to the base table RID. If the base table space is
partitioned, the document ID index is a non-partitioned secondary index (NPSI).
v The base table has an indicator column for each XML column containing a null
bit, invalid bit, and a few reserved bytes.
The XML table space inherits several attributes from the base table space, such as
the LOG, CCSID, and LOCKMAX clauses.
If an edit procedure is defined on the base table, the XML table inherits the edit
procedure.
For partition-by-growth table spaces, the DSSIZE value depends on the DSSIZE
value of the base table space. If the DSSIZE value of the base table space is less
than 1 GB, the DSSIZE value of the XML table space is 2 GB. Otherwise, the XML
table space inherits the DSSIZE value of the base table.
For more information, see Storage structure for XML data (Db2 Programming for
XML).
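The DSSIZE inheritance rule for partition-by-growth table spaces reduces to a one-line function (illustrative sketch, sizes in GB):

```python
def xml_ts_dssize(base_dssize_gb):
    """DSSIZE of the XML table space for a partition-by-growth base:
    2 GB when the base DSSIZE is under 1 GB, otherwise inherited."""
    return 2 if base_dssize_gb < 1 else base_dssize_gb

print(xml_ts_dssize(0.5))  # 2
print(xml_ts_dssize(4))    # 4
```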
Related reference:
ALTER TABLE (Db2 SQL)
CREATE TABLE (Db2 SQL)
| Before you begin
| For information about how Db2 can create table spaces for you, see Implicitly
| defined table spaces.
| Tip: You can alter table spaces after they are created, but the application of some
| statements, such as ALTER TABLESPACE with MAXPARTITIONS, prevents access
| to the database until the alterations complete. Consider future growth when you
| define new table spaces.
| Procedure
| Issue a CREATE TABLESPACE statement and specify the type of table space to
| create and other attributes.
| 1. Specify the table space type to create. For instructions for creating each
| recommended type, see Creating partition-by-range table spaces and Creating
| partition-by-growth table spaces. The following table summarizes the table
| space attributes that determine the type of the resulting table space:
| Table 9. Table space types and related clauses
| Table space type to create  Clauses to specify
| Partition-by-growth         Any of the following combinations:
|                             v MAXPARTITIONS and NUMPARTS
|                             v MAXPARTITIONS and SEGSIZE n1
|                             v MAXPARTITIONS
| Partition-by-range          NUMPARTS and SEGSIZE n1
| Segmented (non-UTS)2        One of the following combinations:
|                             v SEGSIZE n1
|                             v Omit MAXPARTITIONS, NUMPARTS, and SEGSIZE
| Partitioned (non-UTS)2      NUMPARTS and SEGSIZE 0
| Notes:
| 1. Where n is a non-zero value. The DPSEGSZ subsystem parameter determines the default
| value. For more information, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ
| subsystem parameter) (Db2 Installation and Migration).
| 2. Non-UTS table spaces for base tables are deprecated and likely to be unsupported in the
| future.
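Table 9 can be read as a small decision procedure. The sketch below is illustrative only; it assumes a DPSEGSZ default of 32 for the case where NUMPARTS is specified without SEGSIZE:

```python
def created_table_space_type(maxpartitions=None, numparts=None,
                             segsize=None, dpsegsz=32):
    """Classify the table space type that CREATE TABLESPACE produces,
    following Table 9 (None means the clause is omitted)."""
    if maxpartitions is not None:
        return "partition-by-growth"
    if numparts is not None:
        seg = dpsegsz if segsize is None else segsize
        return "partition-by-range" if seg > 0 else "partitioned (non-UTS)"
    return "segmented (non-UTS)"

print(created_table_space_type(maxpartitions=256))       # partition-by-growth
print(created_table_space_type(numparts=4, segsize=32))  # partition-by-range
print(created_table_space_type(numparts=4, segsize=0))   # partitioned (non-UTS)
print(created_table_space_type())                        # segmented (non-UTS)
```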
| MAXROWS, the default number of rows is 255. Do not use MAXROWS for
| a LOB table space or a table space in a work file database.
| MEMBER CLUSTER
| Specifies that data that is inserted by an INSERT operation is not clustered
| by the implicit clustering index (the first index), or the explicit clustering
| index. Db2 locates the data in the table space based on available space. You
| can use the MEMBER CLUSTER keyword on partition-by-range table
| spaces and partition-by-growth table spaces. For details, see Member
| affinity clustering (Db2 Data Sharing Planning and Administration).
| DSSIZE
| Specifies the maximum size in GB for each partition. The size of the table
| space depends on how many partitions are in the table space and the size
| of each partition. For a partition-by-growth table space, the maximum
| number of partitions depends on the value that is specified for the
| MAXPARTITIONS clause.
| Examples
| The following examples illustrate how to use SQL statements to create different
| types of table spaces.
| Creating partition-by-growth table spaces
| The following example CREATE TABLE statement implicitly creates a
| partition-by-growth table space.
| CREATE TABLE TEST02TB(
| C1 SMALLINT,
| C2 DECIMAL(9,2),
| C3 CHAR(4))
| PARTITION BY SIZE EVERY 4G
| IN TEST02DB;
| COMMIT;
| What to do next
| Generally, when you use the CREATE TABLESPACE statement with the USING
| STOGROUP clause, Db2 allocates data sets for the table space. However, if you
| also specify the DEFINE NO clause, you can defer the allocation of data sets until
| data is inserted or loaded into a table in the table space.
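For illustration, a sketch of deferring data set allocation with the DEFINE NO clause follows; the table space, database, and storage group names are assumptions for this example:

```sql
CREATE TABLESPACE TEST03TS IN TEST01DB
  USING STOGROUP SG1      -- Db2 manages the data sets in storage group SG1
  MAXPARTITIONS 12        -- partition-by-growth table space
  DEFINE NO;              -- defer data set allocation until data arrives
```

With DEFINE NO, the underlying VSAM data sets are created only when data is first inserted or loaded into a table in the table space.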
|
|
| Related concepts:
| Managing your own data sets
| Related tasks:
| Altering table spaces
| Choosing data page sizes (Db2 Performance)
| Choosing data page sizes for LOB data (Db2 Performance)
| Creating EA-enabled table spaces and index spaces
| Related reference:
| CREATE TABLESPACE (Db2 SQL)
| CREATE LOB TABLESPACE (Db2 SQL)
| SYSTABLESPACE catalog table (Db2 SQL)
| DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter) (Db2
| Installation and Migration)
| A partition-by-range table space is a universal table space (UTS) that has partitions
| based on ranges of data values. It holds data pages for a single table, and has
| segmented space management capabilities within each partition.
| In a partition-by-range table space, the partitions are based on the boundary values
| that are defined for specific data columns. For example, the following figure
| illustrates the data pages for a table with two partitions.
|
|
Partition 1
Key range A-L
Partition 2
Key range M-Z
|
| Figure 16. Pages in a range-partitioned table space
|
| Utilities and SQL statements can run concurrently on each partition. For example, a
| utility job can work on part of the data while allowing other applications to
| concurrently access data on other partitions. In that way, several concurrent utility
| jobs can, for example, load all partitions of a table space concurrently. Because you
| can work on part of your data, some of your operations on the data might require
| less time. Also, you can use separate jobs for mass update, delete, or insert
| operations instead of using one large job; each smaller job can work on a different
| partition. Separating the large job into several smaller jobs that run concurrently
| can reduce the elapsed time for the whole task.
| You can create an index of any type on a table in a partition-by-range table space.
| Tip: Partition-by-range table spaces are the suggested alternative for partitioned
| (non-UTS) table spaces, which are deprecated.
| Procedure
| A partition-by-growth table space is a universal table space (UTS) that has partitions
| that Db2 manages automatically based on data growth. It holds data pages for
| only a single table, and has segmented space management capabilities within each
| partition.
| Partition-by-growth table spaces are best used when a table is expected to exceed
| 64 GB, or when a table does not have a suitable partitioning key.
| Partition-by-growth table spaces can grow up to 128 TB, depending on the buffer
| pool page size used, and the MAXPARTITIONS and DSSIZE values specified when
| the table space is created.
| Tip: Partition-by-growth table spaces are the suggested alternative for single-table
| Db2-managed segmented (non-UTS) table spaces, which are deprecated.
| Procedure
| v Issue a CREATE TABLESPACE statement and specify one of the following
| combinations of the MAXPARTITIONS, NUMPARTS, and SEGSIZE clauses:
| – Specify MAXPARTITIONS without NUMPARTS, for example:
| CREATE TABLESPACE TEST01TS IN TEST01DB USING STOGROUP SG1
| DSSIZE 2G
| MAXPARTITIONS 24
| LOCKSIZE ANY
| SEGSIZE 4;
| COMMIT;
| – Specify both MAXPARTITIONS and NUMPARTS.
| – Specify MAXPARTITIONS and SEGSIZE n.
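For illustration, the MAXPARTITIONS and NUMPARTS combination might look like the following sketch; the table space, database, and storage group names are assumptions:

```sql
CREATE TABLESPACE TEST02TS IN TEST01DB USING STOGROUP SG1
  NUMPARTS 4          -- create four partitions initially
  MAXPARTITIONS 24;   -- allow growth to as many as 24 partitions
COMMIT;
```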
| Related concepts:
| Partition-by-growth table spaces
| Related reference:
| CREATE TABLESPACE (Db2 SQL)
| CREATE TABLE (Db2 SQL)
You must use EA-enabled table spaces or index spaces if you specify a maximum
partition size (DSSIZE) that is larger than 4 GB in the CREATE TABLESPACE
statement.
Both EA-enabled and non-EA-enabled partitioned table spaces can have only one
table and up to 4096 partitions. The following table summarizes the differences.
Table 10. Differences between EA-enabled and non-EA-enabled table spaces

EA-enabled table spaces:
v Hold up to 4096 partitions with DSSIZE 64G
v Can be created with any valid value of DSSIZE
v Data sets are managed by SMS
v Require setup

Non-EA-enabled table spaces:
v Hold up to 4096 partitions with DSSIZE 4G
v DSSIZE cannot exceed 4G
v Data sets are managed by VSAM or SMS
v Require no additional setup
Related tasks:
Creating EA-enabled table spaces and index spaces
Creating table spaces explicitly
Related reference:
CREATE TABLESPACE (Db2 SQL)
Creating a table does not store the application data. You can put data into the table
by using several methods, such as the LOAD utility or the INSERT statement.
Procedure
Example
The following CREATE TABLE statement creates the EMP table, which is in a
database named MYDB and in a table space named MYTS:
CREATE TABLE EMP
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
DEPT CHAR(3) ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDL SMALLINT ,
SALARY DECIMAL(9,2) ,
COMM DECIMAL(9,2) ,
PRIMARY KEY (EMPNO))
IN MYDB.MYTS;
Related reference:
CREATE TABLE (Db2 SQL)
The table name is an identifier of up to 128 characters. You can qualify the table
name with an SQL identifier, which is a schema. When you define a table that is
based directly on an entity, these factors also apply to the table names.
About this task
Deprecated function:
| Hash-organized tables are deprecated. Beginning in Db2 12, packages that are
| bound with APPLCOMPAT(V12R1M504) or higher cannot create hash-organized
| tables or alter existing tables to use hash-organization. Existing hash-organized
| tables remain supported, but they are likely to be unsupported in the future.
When you create new tables in universal table spaces, you can enable hash access
to that table by adding the organization-clause to your CREATE TABLE statement.
Procedure
Results
After you organize a table for hash access, Db2 is likely but not certain to select
hash access for statements that access the table.
Example
In this example the user creates a table named EMP in an explicitly defined table
space, sets the EMPNO column as the unique identifier for hash access, and
specifies a HASH SPACE size of 64 with the modifier M for megabytes.
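A statement matching that description might look like the following sketch; the column list is abbreviated, and the table space name MYDB.MYTS is assumed from the earlier EMP example:

```sql
CREATE TABLE EMP
  (EMPNO    CHAR(6)      NOT NULL,
   LASTNAME VARCHAR(15)  NOT NULL,
   SALARY   DECIMAL(9,2))
  IN MYDB.MYTS                     -- explicitly defined table space
  ORGANIZE BY HASH UNIQUE (EMPNO)  -- EMPNO is the unique hash key
  HASH SPACE 64 M;                 -- 64 megabytes of hash space
```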
What to do next
You can monitor the real-time-statistics information about your table to verify
whether the hash access path is used regularly and to verify that the use of disk
space is optimized.
Related tasks:
| Organizing tables for hash access to individual rows (deprecated) (Db2
Performance)
Managing space and page size for hash-organized tables (Db2 Performance)
| Monitoring hash access (deprecated) (Db2 Performance)
| Altering tables for hash access (deprecated)
| Altering the size of your hash spaces (deprecated)
Related reference:
CREATE TABLE (Db2 SQL)
Use a created temporary table when you need a permanent, sharable description of
a table, and you need to store data only for the life of an application process. Use a
declared temporary table when you need to store data for the life of an application
process, but you don't need a permanent, sharable description of the table.
Procedure
Creating created temporary tables
If you need a permanent, sharable description of a table but need to store data
only for the life of an application process, you can define and use a created
temporary table.
Db2 does not log operations that it performs on created temporary tables;
therefore, SQL statements that use created temporary tables can execute more
efficiently. Each application process has its own instance of the created temporary
table.
Procedure
Example
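For illustration, a created temporary table might be defined as follows; this is a sketch, and the table and column names are assumptions:

```sql
CREATE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIALNO    CHAR(8)     NOT NULL,
   DESCRIPTION VARCHAR(60),
   MFGCOST     DECIMAL(8,2));
```

The description is stored in the catalog, but each application process that references TEMPPROD gets its own empty instance of the table.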
Related tasks:
Setting default statistics for created temporary tables (Db2 Performance)
Related reference:
CREATE GLOBAL TEMPORARY TABLE (Db2 SQL)
Procedure
Issue the DECLARE GLOBAL TEMPORARY TABLE statement. Unlike other Db2
DECLARE statements, DECLARE GLOBAL TEMPORARY TABLE is an executable
statement that you can embed in an application program or issue interactively. You
can also dynamically prepare the statement.
When a program in an application process issues a DECLARE GLOBAL
TEMPORARY TABLE statement, Db2 creates an empty instance of the table. You
can populate the declared temporary table by using INSERT statements, modify
the table by using searched or positioned UPDATE or DELETE statements, and
query the table by using SELECT statements. You can also create indexes on the
declared temporary table. The definition of the declared temporary table exists as
long as the application process runs.
At the end of an application process that uses a declared temporary table, Db2
deletes the rows of the table and implicitly drops the description of the table.
Example
If specified explicitly, the qualifier for the name of a declared temporary table
must be SESSION. If the qualifier is not specified, it is implicitly defined to be
SESSION.
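For illustration, a declared temporary table might be defined as follows; this is a sketch, and the table and column names are assumptions:

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_EMP
  (EMPNO  CHAR(6)      NOT NULL,
   SALARY DECIMAL(9,2))
  ON COMMIT PRESERVE ROWS;  -- keep rows across commits within the process
```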
Related reference:
DECLARE GLOBAL TEMPORARY TABLE (Db2 SQL)
The following table summarizes important distinctions between base tables, created
temporary tables, and declared temporary tables.
Table 11. Important distinctions between Db2 base tables and Db2 temporary tables

Creation, persistence, and ability to share table descriptions
Base tables: The CREATE TABLE statement puts a description of the table in
catalog table SYSTABLES. The table description is persistent and is shareable
across application processes. The name of the table in the CREATE statement can
be a two-part or three-part name. If the table name is not qualified, Db2
implicitly qualifies the name using the standard Db2 qualification rules applied
to the SQL statements.
Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement
puts a description of the table in catalog table SYSTABLES. The table description
is persistent and is shareable across application processes. The name of the table
in the CREATE statement can be a two-part or three-part name. If the table name
is not qualified, Db2 implicitly qualifies the name using the standard Db2
qualification rules applied to the SQL statements. The table space that is used by
created temporary tables is reset by the following commands: START DB2, START
DATABASE, and START DATABASE(dbname) SPACENAM(tsname), where dbname is the
name of the database and tsname is the name of the table space.
Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE
statement does not put a description of the table in catalog table SYSTABLES.
The table description is not persistent beyond the life of the application process
that issued the DECLARE statement, and the description is known only to that
application process. Thus, each application process could have its own possibly
unique description of the same table. The name of the table in the DECLARE
statement can be a two-part or three-part name. If the table name is qualified,
SESSION must be used as the qualifier for the owner (the second part in a
three-part name). If the table name is not qualified, Db2 implicitly uses SESSION
as the qualifier. The table space that is used by declared temporary tables is reset
by the following commands: START DB2, START DATABASE, and START
DATABASE(dbname) SPACENAM(tsname), where dbname is the name of the database
and tsname is the name of the table space.

Table instantiation and ability to share data
Base tables: The CREATE TABLE statement creates one empty instance of the
table, and all application processes use that one instance of the table. The table
and data are persistent.
Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement
does not create an instance of the table. The first implicit or explicit reference to
the table in an OPEN, SELECT, INSERT, or DELETE operation that is executed by
any program in the application process creates an empty instance of the given
table. Each application process has its own unique instance of the table, and the
instance is not persistent beyond the life of the application process.
Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE
statement creates an empty instance of the table for the application process. Each
application process has its own unique instance of the table, and the instance is
not persistent beyond the life of the application process.

Table space and database operations
Base tables: Table space and database operations do apply.
Created temporary tables: Table space and database operations do not apply.
Declared temporary tables: Table space and database operations do apply.

Table space requirements and table size limitations
Base tables: The table can be stored in implicitly created table spaces and
databases. The table cannot span table spaces. Therefore, the size of the table is
limited by the table space size (as determined by the primary and secondary
space allocation values that are specified for the table space's data sets) and the
shared usage of the table space among multiple users. When the table space is
full, an error occurs for the SQL operation.
Created temporary tables: The table is stored in table spaces in the work file
database. The table can span work file table spaces. Therefore, the size of the
table is limited by the number of available work file table spaces, the size of
each table space, and the number of data set extents that are allowed for the
table spaces. Unlike the other types of tables, created temporary tables do not
reach size limitations as easily.
Declared temporary tables: The table is stored in a table space in the work file
database. The table cannot span table spaces. Therefore, the size of the table is
limited by the table space size (as determined by the primary and secondary
space allocation values that are specified for the table space's data sets) and the
shared usage of the table space among multiple users. When the table space is
full, an error occurs for the SQL operation.
Related concepts:
Temporary tables (Db2 Application programming and SQL)
Related tasks:
Creating temporary tables
Setting default statistics for created temporary tables (Db2 Performance)
Related reference:
CREATE GLOBAL TEMPORARY TABLE (Db2 SQL)
DECLARE GLOBAL TEMPORARY TABLE (Db2 SQL)
The system period consists of a pair of columns with system-maintained values that
indicate the period of time when a row is valid. The begin column contains the
timestamp value for when a row is created. The end column contains the
timestamp value for when a row is updated or deleted.
When you use the application period, determine the need for Db2 to enforce
uniqueness across time. You can create a UNIQUE index that is unique over a
period of time.
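For illustration, such a time-aware uniqueness rule might be defined with an index like the following sketch; the table, column, and index names are assumptions:

```sql
-- No two rows for the same policy_id may have overlapping BUSINESS_TIME periods
CREATE UNIQUE INDEX ix_policy
  ON policy_info (policy_id, BUSINESS_TIME WITHOUT OVERLAPS);
```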
Bitemporal tables
v For point-in-time recovery, to keep the data in the system-period temporal table
and the data in the history table synchronized, you must recover the table
spaces for both tables as a set. You can recover the table spaces individually only
if you specify the VERIFYSET NO option in the RECOVER utility statement.
v You cannot run a utility operation that deletes data from a system-period
temporal table. These utilities include LOAD REPLACE, REORG DISCARD, and
CHECK DATA DELETE YES.
v You cannot run the CHECK DATA utility with the options LOBERROR
INVALIDATE, AUXERROR INVALIDATE, or XMLERROR INVALIDATE on a
system-period temporal table. The CHECK DATA utility will fail with return
code 8 and message DSNU076.
v You cannot alter the schema (data type, check constraint, referential constraint,
and so on) of a system-period temporal table or history table; however, you can
add a column to a system-period temporal table.
v You cannot drop the history table or its table space.
v You cannot define a clone table on the system-period temporal table or the
history table.
v You cannot create another table in the table space for either the system-period
temporal table or the history table.
v On the history table, you cannot use the UPDATE, DELETE, or SELECT
statement syntax that specifies the application period.
v You cannot rename a column or table name of a system-period temporal table or
a history table.
Related concepts:
Temporal tables and data versioning
Related reference:
CHECK DATA (Db2 Utilities)
You can also alter existing tables to use system-period data versioning. For more
information, see Adding a system period and system-period data versioning to an
existing table.
The row-begin column of the system period contains the timestamp value for when
a row is created. The row-end column contains the timestamp value for when a row
is removed. A transaction-start-ID column contains a unique timestamp value that
Db2 assigns per transaction, or the null value.
For a list of restrictions that apply to tables that use system-period data versioning,
see Restrictions for system-period data versioning.
To create a temporal table with a system period and define system-period data
versioning on the table:
1. Issue a CREATE TABLE statement with a SYSTEM_TIME clause. The created
table must have the following attributes:
v A row-begin column that is defined as TIMESTAMP(12) NOT NULL with the
GENERATED ALWAYS AS ROW BEGIN attribute.
v A row-end column that is defined as TIMESTAMP(12) NOT NULL with the
GENERATED ALWAYS AS ROW END attribute.
v A system period (SYSTEM_TIME) defined on two timestamp columns. The
first column is the row-begin column and the second column is the row-end
column.
| v A transaction-start-ID column that is defined as TIMESTAMP(12) NOT NULL
| with the GENERATED ALWAYS AS TRANSACTION START ID attribute.
v The only table in the table space
v The table definition is complete
It cannot have a clone table defined on it, and it cannot have the following
attributes:
v Column masks
v Row permissions
v Security label columns
2. Issue a CREATE TABLE statement to create a history table that receives the old
rows from the system-period temporal table. The history table must have the
following attributes:
v The same number of columns as the system-period temporal table that it
corresponds to
v Columns with the same names, data types, null attributes, CCSIDs, subtypes,
hidden attributes, and field procedures as the corresponding system-period
temporal table. However, the history table cannot have any GENERATED
ALWAYS columns unless the system-period temporal table has a ROWID
GENERATED ALWAYS or ROWID GENERATED BY DEFAULT column. In
that case, the history table must have a corresponding ROWID GENERATED
ALWAYS column.
v The only table in the table space
v The table definition is complete
| A history table cannot be a materialized query table, an archive-enabled table,
| or an archive table, cannot have a clone table defined on it, and cannot have
| the following attributes:
v Identity columns or row change timestamp columns
v ROW BEGIN, ROW END, or TRANSACTION START ID columns
v Column masks
v Row permissions
v Security label columns
v System or application periods
3. Issue the ALTER TABLE ADD VERSIONING statement with the USE HISTORY
TABLE clause to define system-period data versioning on the table. By defining
system-period data versioning, you establish a link between the system-period
temporal table and the history table.
Example
The following examples show how you can create a temporal table with a system
period, create a history table, and then define system-period data versioning on the
table. Also, a final example shows how to insert data.
The following example shows a CREATE TABLE statement for creating a temporal
table with a SYSTEM_TIME period. In the example, the sys_start column is the
row-begin column, sys_end is the row-end column, and create_id is the
transaction-start-ID column. The SYSTEM_TIME period is defined on the ROW
BEGIN and ROW END columns:
CREATE TABLE policy_info
(policy_id CHAR(10) NOT NULL,
coverage INT NOT NULL,
sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
sys_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
create_id TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
PERIOD SYSTEM_TIME(sys_start,sys_end));
This example shows a CREATE TABLE statement for creating a history table:
CREATE TABLE hist_policy_info
(policy_id CHAR(10) NOT NULL,
coverage INT NOT NULL,
sys_start TIMESTAMP(12) NOT NULL,
sys_end TIMESTAMP(12) NOT NULL,
create_id TIMESTAMP(12));
To define versioning, issue the ALTER TABLE statement with the ADD
VERSIONING clause and the USE HISTORY TABLE clause, which establishes a
link between the system-period temporal table and the history table:
ALTER TABLE policy_info
ADD VERSIONING USE HISTORY TABLE hist_policy_info;
The following example shows how to insert data in the POLICY_ID and
COVERAGE columns of the POLICY_INFO table:
INSERT INTO POLICY_INFO (POLICY_ID, COVERAGE)
VALUES('A123', 12000);
| If you want to use temporal tables to track auditing information, see the example
| in “Scenario for tracking auditing information” on page 74.
Related concepts:
Temporal tables and data versioning
Related reference:
CREATE TABLE (Db2 SQL)
ALTER TABLE (Db2 SQL)
Related information:
Managing Ever-Increasing Amounts of Data with IBM Db2 for z/OS: Using
Temporal Data Management, Archive Transparency, and the IBM Db2 Analytics
Accelerator for z/OS (IBM Redbooks)
When you create an application-period temporal table, you define begin and end
columns to indicate the application period, or period of time when the row is
valid. The begin column contains the time from which a row is valid. The end
column contains the time when a row stops being valid.
Procedure
Example
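For illustration, an application-period temporal table might be created as follows; this is a sketch, with names assumed to follow the policy_info examples elsewhere in this chapter:

```sql
CREATE TABLE policy_info
  (policy_id CHAR(4) NOT NULL,
   coverage  INT     NOT NULL,
   bus_start DATE    NOT NULL,  -- time from which the row is valid
   bus_end   DATE    NOT NULL,  -- time when the row stops being valid
   PERIOD BUSINESS_TIME(bus_start, bus_end));
```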
Related concepts:
Temporal tables and data versioning
Related reference:
CREATE TABLE (Db2 SQL)
CREATE INDEX (Db2 SQL)
About this task
For a list of restrictions that apply to tables that use system-period data versioning,
see Restrictions for system-period data versioning.
Procedure
To create a bitemporal table and define system-period data versioning on the table:
1. Issue a CREATE TABLE statement with both the SYSTEM_TIME clause and the
BUSINESS_TIME clause. For more information about the requirements for the
history table, see Creating a system-period temporal table and Creating an
application-period temporal table.
2. Issue a CREATE TABLE statement to create a history table that receives the old
rows from the bitemporal table.
3. Issue the ALTER TABLE ADD VERSIONING statement with the USE HISTORY
TABLE clause to define system-period data versioning and establish a link
between the bitemporal table and the history table.
Example
The following examples show how you can create a bitemporal table, create a
history table, and then define system-period data versioning.
This example shows a CREATE TABLE statement with the SYSTEM_TIME and
BUSINESS_TIME clauses for creating a bitemporal table:
CREATE TABLE policy_info
(policy_id CHAR(4) NOT NULL,
coverage INT NOT NULL,
bus_start DATE NOT NULL,
bus_end DATE NOT NULL,
sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
sys_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
create_id TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
PERIOD BUSINESS_TIME(bus_start, bus_end),
PERIOD SYSTEM_TIME(sys_start, sys_end));
This example shows a CREATE TABLE statement for creating a history table:
CREATE TABLE hist_policy_info
(policy_id CHAR(4) NOT NULL,
coverage INT NOT NULL,
bus_start DATE NOT NULL,
bus_end DATE NOT NULL,
sys_start TIMESTAMP(12) NOT NULL,
sys_end TIMESTAMP(12) NOT NULL,
create_id TIMESTAMP(12));
This example shows the ALTER TABLE ADD VERSIONING statement with the
USE HISTORY TABLE clause that establishes a link between the bitemporal table
and the history table to enable system-period data versioning. Also, a unique index
is added to the bitemporal table.
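For illustration, the two statements described above might look like the following sketch; the index name is an assumption:

```sql
ALTER TABLE policy_info
  ADD VERSIONING USE HISTORY TABLE hist_policy_info;

CREATE UNIQUE INDEX ix_policy
  ON policy_info (policy_id, BUSINESS_TIME WITHOUT OVERLAPS);
```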
Related concepts:
Temporal tables and data versioning
Related tasks:
Adding a system period and system-period data versioning to an existing table
Adding an application period to a table
Related reference:
CREATE TABLE (Db2 SQL)
ALTER TABLE (Db2 SQL)
| To track when the data was modified, define your table as a system-period
| temporal table. When data in a system-period temporal table is modified,
| information about the changes is recorded in its associated history table.
| To track who and what SQL statement modified the data, you can use
| non-deterministic generated expression columns. These columns can contain values that
| are helpful for auditing purposes, such as the value of the CURRENT SQLID
| special register at the time that the data was modified. You can define several
| variations of generated expression columns by using the appropriate CREATE
| TABLE or ALTER TABLE syntax. Each variation of generated expression column
| results in a different type of generated values.
| Suppose that you issue the following statement to create a system-period temporal
| table called STT:
| CREATE TABLE STT (balance INT,
| user_id VARCHAR(128) GENERATED ALWAYS AS ( SESSION_USER ) ,
| op_code CHAR(1)
| GENERATED ALWAYS AS ( DATA CHANGE OPERATION )
| ... PERIOD SYSTEM_TIME (SYS_START, SYS_END));
| The user_id column is to store who modified the data. This column is defined as a
| non-deterministic generated expression column that will contain the value of the
| SESSION_USER special register at the time of a data change operation.
| The op_code column is to store the SQL operation that modified that data. This
| column is also defined as a non-deterministic generated expression column.
| Suppose that you then issue the following statements to create a history table for
| STT and to associate that history table with STT:
| CREATE TABLE STT_HISTORY (balance INT, user_id VARCHAR(128) , op_code CHAR(1) ... );
|
| ALTER TABLE STT ADD VERSIONING
| USE HISTORY TABLE STT_HISTORY ON DELETE ADD EXTRA ROW;
| In the ALTER TABLE statement, the ON DELETE ADD EXTRA ROW clause
| indicates that when a row is deleted from STT, an extra row is to be inserted into
| the history table. This extra row in the history table is to contain values for the
| non-deterministic generated expression columns (user_id and op_code) at the time
| of the delete operation.
| Now, consider what happens as the STT table is modified. For simplicity, date
| values are used instead of time stamps for the period columns in this scenario.
| Assume that on 15 June 2010, user KWAN issues the following statement to insert
| a row into STT:
| INSERT INTO STT (balance) VALUES (1)
| Later, on 1 December 2011, user HAAS issues the following statement to update
| the row:
| UPDATE STT SET balance = balance + 9;
| Row 2 and row 3 are identical for user data (the value of the balance column). The
| difference is the auditing columns: the new generated expression columns that
| record who initiated the action and which data change operation the row
| represents.
If you know the name of the system-period temporal table, you can find the name
of the corresponding history table.
Procedure
SELECT VERSIONING_SCHEMA, VERSIONING_TABLE FROM SYSIBM.SYSTABLES WHERE
NAME = 'table-name' AND CREATOR = 'creator-name'
Procedure
The following example shows how you can request data, based on time criteria,
from a system-period temporal table.
SELECT policy_id, coverage FROM policy_info
FOR SYSTEM_TIME AS OF '2009-01-08-00.00.00.000000000000';
Likewise, the following example shows how you can request data, based on
time criteria, from an application-period temporal table.
SELECT policy_id, coverage FROM policy_info
FOR BUSINESS_TIME AS OF '2008-06-01';
If you are requesting historical data from a system-period temporal table that is
defined with system-period data versioning, Db2 rewrites the query to include
data from the history table.
| v Specify the time criteria by using special registers: The advantage of this
| method is that you can change the time criteria later and not have to modify the
| SQL and then rebind the application.
| 1. Write the SELECT statement without any time criteria specified.
| 2. When you bind the application, ensure that the appropriate bind options are
| set as follows:
| – If you are querying a system-period temporal table, ensure that
| SYSTIMESENSITIVE is set to YES.
| – If you are querying an application-period temporal table, ensure that
| BUSTIMESENSITIVE is set to YES.
| For example, assume that you have system-period temporal table STT
| with the column POLICY_ID and you want to retrieve data from one year ago.
| You can set the CURRENT TEMPORAL SYSTEM_TIME period as follows:
| SET CURRENT TEMPORAL SYSTEM_TIME = CURRENT TIMESTAMP - 1 YEAR ;
|
Related concepts:
Temporal tables and data versioning
Related reference:
from-clause (Db2 SQL)
table-reference (Db2 SQL)
BIND and REBIND options for packages, plans, and services (Db2
Commands)
CURRENT TEMPORAL BUSINESS_TIME (Db2 SQL)
CURRENT TEMPORAL SYSTEM_TIME (Db2 SQL)
Db2 uses a materialized query table to precompute the results of data that is
derived from one or more tables. When you submit a query, Db2 can use the
results that are stored in a materialized query table rather than compute the results
from the underlying source tables on which the materialized query table is defined.
Procedure
Example
The following CREATE TABLE statement defines a materialized query table named
TRANSCNT. TRANSCNT summarizes the number of transactions in table TRANS
by account, location, and year.
CREATE TABLE TRANSCNT (ACCTID, LOCID, YEAR, CNT) AS
(SELECT ACCOUNTID, LOCATIONID, YEAR, COUNT(*)
FROM TRANS
GROUP BY ACCOUNTID, LOCATIONID, YEAR )
DATA INITIALLY DEFERRED
REFRESH DEFERRED
MAINTAINED BY SYSTEM
ENABLE QUERY OPTIMIZATION;
The fullselect, together with the DATA INITIALLY DEFERRED clause and the
REFRESH DEFERRED clause, defines the table as a materialized query table.
Related tasks:
Using materialized query tables to improve SQL performance (Db2
Performance)
Creating a materialized query table (Db2 Performance)
Registering an existing table as a materialized query table (Db2 Performance)
| Procedure
|
|
| To create tables that are partitioned based on ranges of data values:
| Issue a CREATE TABLE statement and specify the partitioning key column in the
| PARTITION BY clause and the limit key values in the PARTITION ENDING AT
| clauses.
| Example
| Assume that you create a large transaction table that includes the date of the
| transaction in a column named POSTED. You can specify that the transactions for
| each month are placed in separate partitions. To create the table, you can issue the
| following CREATE TABLE statement:
| CREATE TABLE TRANS
| (ACCTID ...,
| STATE ...,
| POSTED ...,
| ... , ...)
| PARTITION BY (POSTED)
| (PARTITION 1 ENDING AT ('01/31/2003'),
| PARTITION 2 ENDING AT ('02/28/2003'),
| ...
| PARTITION 13 ENDING AT ('01/31/2004'));
With table-controlled partitioning, Db2 can restrict the insertion of null values into
a table with nullable partitioning columns, depending on the order of the
partitioning key:
| v If the partitioning key is ascending, Db2 prevents the INSERT of a row with a
| null value for the key column, unless a partition is created that specifies
| MAXVALUE, which will hold the null values.
| v If the partitioning key is descending, Db2 prevents the INSERT of a row with a
| null value for the key column, unless a partition is created that specifies
| MINVALUE, which will hold the null values.
Example 1:
Assume that a partitioned table space is created with the following SQL
statements:
CREATE TABLESPACE TS IN DB
USING STOGROUP SG
NUMPARTS 4 BUFFERPOOL BP0;
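The CREATE TABLE statement for this example is not included in this excerpt. Based on the surrounding discussion (table TB, nullable partitioning column C01, four partitions, and no partition that specifies MAXVALUE), it presumably resembled the following sketch; the column definitions and limit key values are illustrative assumptions:

```sql
CREATE TABLE TB
  (C01 CHAR(5),
   C02 CHAR(7))
  IN DB.TS
  PARTITION BY (C01)
  (PARTITION 1 ENDING AT ('00100'),
   PARTITION 2 ENDING AT ('00200'),
   PARTITION 3 ENDING AT ('00300'),
   PARTITION 4 ENDING AT ('00400'));
```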
| Because the CREATE TABLE statement does not specify the order in which to put
| entries, Db2 puts them in ascending order by default. Db2 subsequently prevents
| any INSERT into the TB table of a row with a null value for partitioning column
| C01, because no partition specifies MAXVALUE. If the CREATE TABLE statement
| had specified the key as descending and the first partition specified MINVALUE,
| Db2 would subsequently have allowed an INSERT into the TB table of a row with
| a null value for partitioning column C01. Db2 would have inserted the row into
| partition 1.
With index-controlled partitioning, Db2 does not restrict the insertion of null
values into a table with nullable partitioning columns.
Example 2: Assume that a partitioned table space is created with the following
SQL statements:
CREATE TABLESPACE TS IN DB
USING STOGROUP SG
NUMPARTS 4 BUFFERPOOL BP0;
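The CREATE TABLE and CREATE INDEX statements for this example are likewise absent from this excerpt. With index-controlled partitioning, the partition boundaries are carried on a clustering partitioned index rather than in the table definition; a hedged sketch of that deprecated form, with illustrative names and limit key values, might look like this:

```sql
CREATE TABLE TB
  (C01 CHAR(5),
   C02 CHAR(7))
  IN DB.TS;

CREATE INDEX IX1 ON TB (C01)
  CLUSTER (PART 1 VALUES ('00100'),
           PART 2 VALUES ('00200'),
           PART 3 VALUES ('00300'),
           PART 4 VALUES ('00400'));
```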
Regardless of the entry order, Db2 allows an INSERT into the TB table of a row
with a null value for partitioning column C01. If the entry order is ascending, Db2
inserts the row into partition 4; if the entry order is descending, Db2 inserts the
row into partition 1. Only if the table space is created with the LARGE keyword
does Db2 prevent the insertion of a null value into the C01 column.
Although the ALTER TABLE syntax is used to create a clone table, the
authorization that is granted as part of the clone creation process is the same as
you would get during regular CREATE TABLE processing. The schema for the
clone table is the same as for the base table.
Procedure
Issue the ALTER TABLE statement with the ADD CLONE option.
Creating or dropping a clone table does not impact applications that are accessing
base table data. No base object quiesce is necessary, and this process does not
invalidate packages or the dynamic statement cache.
Example
The following example shows how to create a clone table by issuing the ALTER
TABLE statement with the ADD CLONE option:
ALTER TABLE base-table-name ADD CLONE clone-table-name
Related tasks:
Exchanging data between a base table and clone table
Related reference:
ALTER TABLE (Db2 SQL)
Procedure
Issue the EXCHANGE statement, specifying the names of the base table and the clone table.
Example
EXCHANGE DATA BETWEEN TABLE table-name1 AND table-name2
What to do next
After a data exchange, the base and clone table names remain the same as they
were prior to the data exchange. No data movement actually takes place. The
instance numbers in the underlying VSAM data sets for the objects (tables and
indexes) do change, and this has the effect of changing the data that appears in the
base and clone tables and their indexes. For example, a base table exists with the
data set name *.I0001.*. The table is cloned and the clone's data set is initially
named *.I0002.*. After an exchange, the base objects are named *.I0002.* and the
clones are named *.I0001.*. Each time that an exchange happens, the instance
numbers that represent the base and the clone objects change, which immediately
changes the data contained in the base and clone tables and indexes.
Related tasks:
Creating a clone table
Related reference:
EXCHANGE (Db2 SQL)
| Check that the table for which you want to create an archive table meets the
| requirements that are specified in the description of the ENABLE ARCHIVE clause
| in ALTER TABLE (Db2 SQL).
| Procedure
Before you create different column names for your view, remember the naming
conventions that you established when designing the relational database.
Procedure
To define a view, issue the CREATE VIEW statement.
Example
Example of defining a view on a single table: Assume that you want to create a
view on the DEPT table. Of the four columns in the table, the view needs only
three: DEPTNO, DEPTNAME, and MGRNO. The order of the columns that you
specify in the SELECT clause is the order in which they appear in the view:
CREATE VIEW MYVIEW AS
SELECT DEPTNO,DEPTNAME,MGRNO
FROM DEPT;
In this example, no column list follows the view name, MYVIEW. Therefore, the
columns of the view have the same names as those of the DEPT table on which it
is based. You can execute the following SELECT statement to see the view
contents:
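The SELECT statement referred to above does not appear in this excerpt; a minimal query of the view contents would be:

```sql
SELECT * FROM MYVIEW;
```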
Example of defining a view that combines information from several tables: You
can create a view that combines information from more than one table. Db2
provides two types of joins: an outer join and an inner join. An outer join
includes rows in which the values in the join columns match, and rows in which
they do not match. An inner join includes only rows in which the values in the
join columns match.
The following example is an inner join of columns from the DEPT and EMP tables.
The WHERE clause limits the view to just those columns in which the MGRNO in
the DEPT table matches the EMPNO in the EMP table:
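The CREATE VIEW statement for this join example is not shown in the excerpt. A sketch consistent with the description follows; the view name and the selected column list are assumptions:

```sql
CREATE VIEW MYVIEW2 AS
  SELECT DEPT.DEPTNO, DEPT.DEPTNAME, EMP.EMPNO, EMP.LASTNAME
  FROM DEPT, EMP
  WHERE DEPT.MGRNO = EMP.EMPNO;
```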
The result of executing this CREATE VIEW statement is an inner join view of the
two tables.
Use the CREATE VIEW statement to define and name a view. Unless you
specifically list different column names after the view name, the column names of
the view are the same as those of the underlying table. When you create different
column names for your view, remember the naming conventions that you
established when designing the relational database.
| Procedure
| To query a view that references a temporal table, use one of the following
| methods:
| v Specify a period specification (either a SYSTEM_TIME period or
| BUSINESS_TIME period) following the name of a view in the FROM clause of a
| query.
| v Use the CURRENT TEMPORAL SYSTEM_TIME or CURRENT TEMPORAL
| BUSINESS_TIME special registers. In this case, you do not need to include a
| period specification in the query. For instructions on how to use these special
| registers instead of a period specification, see “Querying temporal tables” on
| page 77.
| Example
| The following example shows how you can create a view that references a
| system-period temporal table (stt), a bitemporal table (btt), and a regular base table
| (rt). Then you can query the view based on a point in time.
| CREATE VIEW v0 (col1, col2, col3)
| AS SELECT stt.coverage, rt.id, btt.bus_end
| FROM stt, rt, btt WHERE stt.id = rt.id AND rt.id = btt.id;
|
| SELECT * FROM v0
| FOR SYSTEM_TIME AS OF TIMESTAMP '2013-01-10 10:00:00';
To ensure that the insert or update conforms to the view definition, specify the
WITH CHECK OPTION clause. The following example illustrates some
undesirable results of omitting that check.
Example 1: Assume that view V1 is defined on the EMP table without the WITH
CHECK OPTION clause. A user with the SELECT privilege on view V1 can see the
information from the EMP table for employees in departments whose IDs begin
with D. The EMP table has only one department (D11) with an ID that satisfies
the condition.
Assume that a user has the INSERT privilege on view V1. A user with both
SELECT and INSERT privileges can insert a row for department E01, perhaps
erroneously, but cannot select the row that was just inserted.
Example 2: You can avoid the situation in which a value that does not match the
view definition is inserted into the base table. To do this, instead define view V1 to
include the WITH CHECK OPTION clause:
CREATE VIEW V1 AS SELECT * FROM EMP
WHERE DEPT LIKE 'D%' WITH CHECK OPTION;
With the new definition, any insert or update to view V1 must satisfy the predicate
that is contained in the WHERE clause: DEPT LIKE 'D%'. The check can be
valuable, but it also carries a processing cost; each potential insert or update must
be checked against the view definition. Therefore, you must weigh the advantage
of protecting data integrity against the disadvantage of performance degradation.
Procedure
To drop a view:
Procedure
Example
If you issue a SELECT of the PROJNO, STDATE, EMPNO, and FIRSTNAME columns
against the table on which this index resides, all of the required data can be
retrieved from the index without reading data pages. This option might improve
performance.
Related tasks:
Dropping and redefining a Db2 index
Related reference:
CREATE INDEX (Db2 SQL)
Index names
The name for an index is an identifier of up to 128 characters. You can qualify this
name with an identifier, or schema, of up to 128 characters.
The index space name is an eight-character name, which must be unique among
names of all index spaces and table spaces in the database.
You can use indexes on tables with LOBs the same way that you use them on
other tables, but consider the following facts:
v A LOB column cannot be a column in an index.
Creation of an index
If the table that you are indexing is empty, Db2 creates the index. However, Db2
does not actually create index entries until the table is loaded or rows are inserted.
If the table is not empty, you can choose to have Db2 build the index when the
CREATE INDEX statement is executed. Alternatively, you can defer the index build
until later. Optimally, you should create the indexes on a table before loading the
table. However, if your table already has data, choosing the DEFER option is
preferred; you can build the index later by using the REBUILD INDEX utility.
Copies of an index
If your index is fairly large and needs the benefit of high availability, consider
copying it for faster recovery. Specify the COPY YES clause on a CREATE INDEX
or ALTER INDEX statement to allow the indexes to be copied. Db2 can then track
the ranges of log records to apply during recovery, after the image copy of the
index is restored. (The alternative to copying the index is to use the REBUILD
INDEX utility, which might increase the amount of time that the index is
unavailable to applications.)
When you execute a CREATE INDEX statement with the USING STOGROUP
clause, Db2 generally defines the necessary VSAM data sets for the index space. In
some cases, however, you might want to define an index without immediately
allocating the data sets for the index space.
To defer the physical allocation of Db2-managed data sets, use the DEFINE NO
clause of the CREATE INDEX statement. When you specify the DEFINE NO
clause, Db2 defines the index but defers the allocation of data sets. The Db2
catalog table contains a record of the created index and an indication that the data
sets are not yet allocated. Db2 allocates the data sets for the index space as needed
when rows are inserted into the table on which the index is defined.
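For illustration, a CREATE INDEX statement that defers data set allocation might look like the following sketch; the index, table, column, and storage group names are hypothetical:

```sql
CREATE INDEX MYINDEX ON MYTABLE (COL1)
  USING STOGROUP SG1
  DEFINE NO;
```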
Related concepts:
Naming conventions (Db2 SQL)
Related reference:
CREATE INDEX (Db2 SQL)
v When the PRIMARY KEY or UNIQUE clause is specified in the CREATE TABLE
statement and the CREATE TABLE statement is processed by the schema
processor
v When the table space that contains the table is implicitly created
In these cases, Db2 creates the index with a statement of the form CREATE
UNIQUE INDEX xxx ON table-name (column1,...), where:
v xxx is the name of the index that Db2 generates.
v table-name is the name of the table that is specified in the CREATE TABLE
statement.
v (column1,...) is the list of column names that were specified in the UNIQUE or
PRIMARY KEY clause of the CREATE TABLE statement, or the column is a
ROWID column that is defined as GENERATED BY DEFAULT.
In addition, if the table space that contains the table is implicitly created, Db2 will
check the DEFINE DATA SET subsystem parameter to determine whether to define
the underlying data set for the index space of the implicitly created index on the
base table.
If DEFINE DATA SET is NO, the index is created as if the following CREATE
INDEX statement is issued:
CREATE UNIQUE INDEX xxx ON table-name (column1,...) DEFINE NO
When you create a table and specify the organization-clause of the CREATE TABLE
statement, Db2 implicitly creates an index for hash overflow rows. This index
contains index entries for overflow rows that do not fit in the fixed hash space. If
the hash space that is specified in the organization-clause is adequate, the hash
overflow index should have no entries, or very few entries. The hash overflow
index for a table in a partition-by-range table space is a partitioned index. The
hash overflow index for a table in a partition-by-growth table space is a
non-partitioned index.
Db2 determines how much space to allocate for the hash overflow index. Because
this index will be sparsely populated, the size is relatively small compared to a
normal index.
Creating a schema by using the CREATE SCHEMA statement is also supported for
compliance testing.
Outside of the schema processor, the order of statements is important. They must
be arranged so that all referenced objects have been previously created. This
restriction is relaxed when the statements are processed by the schema processor if
the object table is created within the same CREATE SCHEMA. The requirement
that all referenced objects have been previously created is not checked until all of
the statements have been processed. For example, within the context of the schema
processor, you can define a constraint that references a table that does not exist yet
or GRANT an authorization on a table that does not exist yet.
Procedure
To create a schema:
1. Write a CREATE SCHEMA statement.
2. Use the schema processor to execute the statement.
Example
The following example shows schema processor input that includes the definition
of a schema.
CREATE SCHEMA AUTHORIZATION SMITH
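The statement above is truncated in this excerpt. Complete CREATE SCHEMA input for the schema processor typically embeds the object definitions and grants, without statement terminators between them, as in this sketch; the table, column, index, and privilege names are illustrative:

```sql
CREATE SCHEMA AUTHORIZATION SMITH

  CREATE TABLE TESTSTUFF
    (TESTNO   CHAR(4),
     RESULT   CHAR(4),
     TESTTYPE CHAR(3))

  CREATE INDEX TESTNDX ON TESTSTUFF (TESTNO)

  GRANT ALL ON TESTSTUFF TO PUBLIC
```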
Processing schema definitions
You must use the schema processor to process CREATE SCHEMA statements.
The schema processor sets the current SQLID to the value of the schema
authorization ID before executing any of the statements in the schema definition.
Therefore, that ID must have SYSADM or SYSCTRL authority, or it must be the
primary or one of the secondary authorization IDs of the process that executes the
schema processor. The same ID must have all the privileges that are needed to
execute all the statements in the schema definition.
Procedure
The most common method for loading data into most of your tables is to use the
LOAD utility. This utility loads data into Db2 persistent tables from sequential data
sets by using BSAM. You can also use a cursor that is declared with an EXEC SQL
utility control statement to load data from another SQL table with the Db2 UDB
family cross-loader function. The LOAD utility cannot be used to load data into
Db2 temporary tables or system-maintained materialized query tables.
You can also use an SQL INSERT statement to copy all or selected rows of another
table, in any of the following methods:
v Using the INSERT statement in an application program
v Interactively through SPUFI
v With the command line processor
Before using the LOAD utility, make sure that you complete all of the prerequisite
activities for your situation.
Procedure
To load data:
Run the LOAD utility control statement with the options that you need.
What to do next
Reset the restricted status of the table space that contains the loaded data.
Related concepts:
Before running LOAD (Db2 Utilities)
Row format conversion for table spaces
Related tasks:
Collecting statistics by using Db2 utilities (Db2 Performance)
Related reference:
LOAD (Db2 Utilities)
The LOAD utility loads records into the tables and builds or extends any indexes
defined on them. If the table space already contains data, you can choose whether
you want to add the new data to the existing data or replace the existing data.
The LOAD and UNLOAD utilities can accept or produce a delimited file, which is
a sequential BSAM file with row delimiters and column delimiters. You can unload
data from other systems into one or more files that use a delimited file format and
then use these delimited files as input for the LOAD utility. You can also unload
Db2 data into delimited files by using the UNLOAD utility and then use these files
as input into another Db2 database.
INCURSOR option
The INCURSOR option of the LOAD utility specifies a cursor for the input data
set. Use the EXEC SQL utility control statement to declare the cursor before
running the LOAD utility. You define the cursor so that it selects data from another
Db2 table. The column names in the SELECT statement must be identical to the
column names of the table that is being loaded. The INCURSOR option uses the
Db2 cross-loader function.
CCSID option
You can load input data into ASCII, EBCDIC, or Unicode tables. The ASCII,
EBCDIC, and UNICODE options on the LOAD utility statement let you specify
whether the format of the data in the input file is ASCII, EBCDIC, or Unicode. The
CCSID option of the LOAD utility statement lets you specify the CCSIDs of the
data in the input file. If the CCSID of the input data does not match the CCSID of
the table space, the input fields are converted to the CCSID of the table space
before they are loaded.
For nonpartitioned table spaces, data for other tables in the table space (that is,
data that is not part of the table that is being loaded) is unavailable to other
application programs during the load operation, with the exception of LOAD
SHRLEVEL CHANGE. For
partitioned table spaces, data that is in the table space that is being loaded is also
unavailable to other application programs during the load operation with the
exception of LOAD SHRLEVEL CHANGE. In addition, some SQL statements, such
as CREATE, DROP, and ALTER, might experience contention when they run
against another table space in the same Db2 database while the table is being
loaded.
When you load a table and do not supply a value for one or more of the columns,
the action Db2 takes depends on the circumstances.
v If the column is not a ROWID or identity column, Db2 loads the default value of
the column, which is specified by the DEFAULT clause of the CREATE or
ALTER TABLE statement.
v If the column is a ROWID column that uses the GENERATED BY DEFAULT
option, Db2 generates a unique value.
v If the column is an identity column that uses the GENERATED BY DEFAULT
option, Db2 generates a value.
v With XML columns, if there is an implicitly created DOCID column in the table,
it is created with the GENERATED ALWAYS attribute.
For ROWID or identity columns that use the GENERATED ALWAYS option, you
cannot supply a value because this option means that Db2 always provides a
value.
XML columns
You can load XML documents from input records if the total input record length is
less than 32 KB. For input record length greater than 32 KB, you must load the
data from a separate file. (You can also use a separate file if the input record length
is less than 32 KB.)
The XML tables are loaded when the base table is loaded. You cannot specify the
name of the auxiliary XML table to load.
LOB columns
The LOAD utility treats LOB columns as varying-length data. The length value for
a LOB column must be 4 bytes. The LOAD utility can be used to load LOB data if
the length of the row, including the length of the LOB data, does not exceed 32 KB.
The auxiliary tables are loaded when the base table is loaded. You cannot specify
the name of the auxiliary table to load.
You can use LOAD REPLACE to replace data in a single-table table space or in a
multiple-table table space. You can replace all the data in a table space (using the
REPLACE option), or you can load new records into a table space without
destroying the rows that are already there (using the RESUME option).
Procedure
What to do next
Procedure
To insert a single row:
1. Issue an INSERT INTO statement.
2. Specify the table name, the columns into which the data is to be inserted, and
the data itself.
Example
For example, suppose that you create a test table, TEMPDEPT, with the same
characteristics as the department table:
CREATE TABLE SMITH.TEMPDEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) NOT NULL,
ADMRDEPT CHAR(3) NOT NULL)
IN DSN8D91A.DSN8S91D;
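The INSERT statement for this example is not present in the excerpt; to load one row into the test table, it would take a form like the following, where the column values are illustrative:

```sql
INSERT INTO SMITH.TEMPDEPT
  VALUES ('X05', 'EDUCATION', '000631', 'A01');
```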
What to do next
If you write an application program to load data into tables, you use that form of
INSERT, probably with host variables instead of the actual values shown in this
example.
Procedure
Procedure
Introductory concepts
Procedures (Introduction to Db2 for z/OS)
A stored procedure is a compiled program that can execute SQL statements and is
stored at a local or remote Db2 server. You can invoke a stored procedure from an
application program or from the command line processor. A single call to a stored
procedure from a client application can access the database at the server several
times.
A typical stored procedure contains two or more SQL statements and some
manipulative or logical processing in a host language or SQL procedure
statements. You can call stored procedures from other applications or from the
command line. Db2 provides some stored procedures, but you can also create your
own.
Procedure
Related tasks:
Creating an external stored procedure (Db2 Application programming and
SQL)
Creating an external SQL procedure (Db2 Application programming and SQL)
You might want to drop a stored procedure for a number of reasons. You might
not use a particular stored procedure any longer, or you might want to drop a
stored procedure and re-create it. For example, you might want to migrate an
external SQL procedure to a native SQL procedure, because native SQL procedures
typically perform better and have more functionality than external SQL procedures.
Procedure
Issue the DROP PROCEDURE statement, and specify the name of the stored
procedure that you want to drop.
Example
For example, to drop the stored procedure MYPROC in schema SYSPROC, issue
the following statement:
DROP PROCEDURE SYSPROC.MYPROC;
Related tasks:
Migrating an external SQL procedure to a native SQL procedure (Db2
Application programming and SQL)
Related reference:
DROP (Db2 SQL)
Procedure
Issue the CREATE FUNCTION statement, and specify the type of function that you
want to create. You can specify the following types of functions:
v External scalar
v External table
v Sourced
v SQL scalar
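As an illustration of the SQL scalar type, the following hedged sketch defines a tangent function in terms of the built-in SIN and COS functions; the schema and function names are assumptions:

```sql
CREATE FUNCTION MYSCHEMA.TAN (X DOUBLE)
  RETURNS DOUBLE
  LANGUAGE SQL
  DETERMINISTIC
  RETURN SIN(X) / COS(X);
```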
Procedure
Example
For example, drop the user-defined function ATOMIC_WEIGHT from the schema
CHEM:
DROP FUNCTION CHEM.ATOMIC_WEIGHT;
Related concepts:
User-defined functions (Db2 SQL)
Related reference:
DROP (Db2 SQL)
Estimating the space requirements for Db2 objects is easier if you collect and
maintain a statistical history of those objects. The accuracy of your estimates
depends on the currentness of the statistical data.
Procedure
Ensure that the statistics history is current by using the MODIFY STATISTICS
utility to delete outdated statistical data from the catalog history tables.
Related concepts:
General approach to estimating storage
The accuracy of your estimates depends on the currentness of the statistical data.
To ensure that the statistics history is current, use the MODIFY STATISTICS utility
to delete outdated statistical data from the catalog history tables.
The amount of disk space you need for your data is not just the number of bytes
of data; the true number is some multiple of that. That is,
space required = M × (number of bytes of data)
Whether you use extended address volumes (EAV) is also a factor in estimating
storage. Although the EAV factor is not a multiplier, you need to add 10 cylinders
for each object in the cylinder-managed space of an EAV. Db2 data sets might take
more space or grow faster on EAV compared to non-extended address volumes.
The reason is that the allocation unit in the extended addressing space (EAS) of
EAV is a multiple of 21 cylinders, and every allocation is rounded up to this
multiple. If you use EAV, the data set space estimation for an installation must take
this factor into account. The effect is more pronounced for smaller data sets.
For more accuracy, you can calculate M as the product of the following factors:
Record overhead
Allows for eight bytes of record header and control data, plus space
wasted for records that do not fit exactly into a Db2 page. The factor can
range from about 1.01 (for a careful space-saving design) to as great as 4.0.
A typical value is about 1.10.
Free space
Allows for space intentionally left empty to allow for inserts and updates.
You can specify this factor on the CREATE TABLESPACE statement. The
factor can range from 1.0 (for no free space) to 200 (99% of each page
left free, and a free page following each used page). With default values,
the factor is about 1.05.
Unusable space
Track lengths in excess of the nearest multiple of page lengths. The
following table shows the track size, number of pages per track, and the
value of the unusable-space factor for several different device types.
The following table shows calculations of the multiplier M for three different
database designs:
v The tight design is carefully chosen to save space and allows only one index on
a single, short field.
v The loose design allows a large value for every factor, but still well short of the
maximum. Free space adds 30% to the estimate, and indexes add 40%.
v The medium design has values between the other two. You might want to use
these values in an early stage of database design.
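As a worked sketch of the multiplier, the script below multiplies the factors described above; the record-overhead and free-space values are the typical figures quoted in the text, while the unusable-space factor and the data volume are assumptions, since the device table is not reproduced in this excerpt:

```python
# Sketch: space required = M x (number of bytes of data), where M is the
# product of the factors described above. Factor values are illustrative.
record_overhead = 1.10   # typical value quoted in the text
free_space = 1.05        # approximate factor with default free-space values
unusable_space = 1.15    # assumption; the actual value depends on device type

M = record_overhead * free_space * unusable_space

data_bytes = 1_000_000_000          # example: 1 GB of raw data (assumption)
space_required = M * data_bytes
print(M, space_required)
```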
In addition to the space for your data, external storage devices are required for:
v Image copies of data sets, which can be on tape
v System libraries, system databases, and the system log
v Temporary work files for utility and sort jobs
A rough estimate of the additional external storage needed is three times the
amount calculated for disk storage.
Records are stored within pages that are 4 KB, 8 KB, 16 KB, or 32 KB. Generally,
you cannot create a table in which the maximum record size is greater than the
page size.
Also, consider:
v Normalizing your entities
v Using larger page sizes
v Using LOB data types if a single column in a table is greater than 32 KB
In addition to the bytes of actual data in the row (not including LOB and XML
data, which is not stored in the base row or included in the total length of the
row), each record has:
v A 6-byte prefix
v One additional byte for each column that can contain null values
v Two additional bytes for each varying-length column or ROWID column
v Six bytes of descriptive information in the base table for each LOB column
| v Six bytes of descriptive information in the base table for each XML column. Or,
| if the column can contain multiple versions of an XML document, then 14 bytes
| of descriptive information for each XML column.
The sum of each column's length is the record length, which is the length of data
that is physically stored in the table. You can retrieve the value of the
AVGROWLEN column in the SYSIBM.SYSTABLES catalog table to determine the
average length of rows within a table. The logical record length can be longer, for
example, if the table contains LOBs.
Furthermore, the page size of the table space in which the table is defined limits
the record length. If the table space is 4 KB, the record length of each record cannot
be greater than 4056 bytes. Because of the eight-byte overhead for each record, the
sum of column lengths cannot be greater than 4048 bytes (4056 minus the
eight-byte overhead for a record).
Db2 provides three larger page sizes to allow for longer records. You can improve
performance by using pages for record lengths that best suit your needs.
As shown in the following table, the maximum record size for each page size
depends on the size of the table space, whether the table is enabled for hash
access, and whether you specified the EDITPROC clause.
Table 17. Maximum record size (in bytes)
Table type                                   4 KB page   8 KB page   16 KB page   32 KB page
Non-hash table                               4056        8138        16330        32714
Non-hash table with EDITPROC                 4046        8128        16320        32704
Hash table (hash home page)                  3817        7899        16091        32475
Hash table with EDITPROC (hash home page)    3807        7889        16081        32465
Creating a table using CREATE TABLE LIKE in a table space of a larger page size
changes the specification of LONG VARCHAR to VARCHAR and LONG
VARGRAPHIC to VARGRAPHIC. You can also use CREATE TABLE LIKE to create
a table with a smaller page size in a table space if the maximum record size is
within the allowable record size of the new table space.
Related concepts:
XML versions (Db2 Programming for XML)
General approach to estimating storage
Tables with LOBs can store byte strings up to 2 GB. A base table can be defined
with one or more LOB columns. The LOB columns are logically part of the base table.
Procedure
To estimate the storage required for LOB table spaces, complete the following
steps:
1. Begin with your estimates from other table spaces.
2. Round the figure up to the next page size.
3. Multiply the figure by 1.1.
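The three steps can be expressed as a small helper function; the round-up behavior and the 1.1 multiplier follow the steps above, while the starting estimate and page size in the usage line are assumed inputs:

```python
import math

def estimate_lob_space_kb(base_estimate_kb: float, page_size_kb: int) -> int:
    """Rough LOB table space estimate: round the starting figure up to a
    whole number of pages, then add 10% (multiply by 1.1)."""
    pages = math.ceil(base_estimate_kb / page_size_kb)  # round up to next page
    return math.ceil(pages * page_size_kb * 1.1)

# Example (illustrative values): a 10,000 KB estimate with 32 KB LOB pages
print(estimate_lob_space_kb(10_000, 32))
```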
What to do next
An auxiliary table resides in a LOB table space. There can be only one auxiliary
table in a LOB table space. An auxiliary table can store only one LOB column of a
base table and there must be one and only one index on this column.
One page never contains more than one LOB. When a LOB value is deleted, the
space occupied by that value remains allocated as long as any application might
access that value.
When a LOB table space grows to its maximum size, no more data can be inserted
into the table space or its associated base table.
For a table to be loaded by the LOAD utility, assume the following values:
v Let FLOOR be the operation of discarding the decimal portion of a real number.
v Let CEILING be the operation of rounding a real number up to the next highest
integer.
v Let number of records be the total number of records to be loaded.
v Let average record size be the sum of the lengths of the fields in each record,
using an average value for varying-length fields, and including the following
amounts for overhead:
– 8 bytes for the total record
– 1 byte for each field that allows nulls
– 2 bytes for each varying-length field
v Let percsave be the percentage of kilobytes saved by compression (as reported by
the DSN1COMP utility in message DSN1940I)
v Let compression ratio be percsave/100
Procedure
To calculate the storage required when using the LOAD utility, complete the
following steps:
1. Calculate the usable page size.
Usable page size is the page size minus a number of bytes of overhead (that is, 4
KB - 40 for 4 KB pages, 8 KB - 54 for 8 KB pages, 16 KB - 54 for 16 KB pages,
or 32 KB - 54 for 32 KB pages) multiplied by (100-p) / 100, where p is the value
of PCTFREE.
If your average record size is less than 16, then usable page size is 255
(maximum records per page) multiplied by average record size multiplied by
(100-p) / 100.
2. Calculate the records per page.
Records per page is MIN(MAXROWS, FLOOR(usable page size / average record
size)), but cannot exceed 255 and cannot exceed the value you specify for
MAXROWS.
3. Calculate the pages used.
Pages used is 2 + CEILING(number of records / records per page).
4. Calculate the total pages used.
Total pages is FLOOR(pages used × (1 + fp) / fp), where fp is the (nonzero) value
of FREEPAGE. If FREEPAGE is 0, then total pages is equal to pages used.
If you are using data compression, you need additional pages to store the
dictionary.
5. Estimate the number of kilobytes required for a table.
v If you do not use data compression, the estimated number of kilobytes is
total pages × page size (4 KB, 8 KB, 16 KB, or 32 KB).
v If you use data compression, the estimated number of kilobytes is total
pages × page size (4 KB, 8 KB, 16 KB, or 32 KB) × (1 - compression ratio).
Example
For example, consider a table space containing a single table with the following
characteristics:
v Number of records = 100000
v Average record size = 80 bytes
v Page size = 4 KB
v PCTFREE = 5 (5% of space is left free on each page)
v FREEPAGE = 20 (one page is left free for each 20 pages used)
v MAXROWS = 255
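Applying steps 1 through 5 to this example can be sketched in a short script. The 40-byte page overhead for 4 KB pages comes from step 1; the final figure is the script's output rather than a value quoted in this excerpt:

```python
import math

# Example characteristics from the text
page_size_kb = 4
page_size_bytes = 4096
overhead = 40            # bytes of page overhead for 4 KB pages (step 1)
pctfree = 5
freepage = 20
maxrows = 255
num_records = 100_000
avg_record_size = 80

# Step 1: usable page size (average record size >= 16, so the 255-row
# special case does not apply)
usable = (page_size_bytes - overhead) * (100 - pctfree) / 100

# Step 2: records per page, capped at 255 and at MAXROWS
records_per_page = min(maxrows, math.floor(usable / avg_record_size))

# Step 3: pages used
pages_used = 2 + math.ceil(num_records / records_per_page)

# Step 4: total pages (FREEPAGE is nonzero here)
total_pages = math.floor(pages_used * (1 + freepage) / freepage)

# Step 5: kilobytes required (no compression)
kb = total_pages * page_size_kb
print(records_per_page, total_pages, kb)  # 48 2190 8760
```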
Index pages that point directly to the data in tables are called leaf pages and are
said to be on level 0. In addition to data pointers, leaf pages contain the key and
record-ID (RID).
If an index has more than one leaf page, it must have at least one nonleaf page
that contains entries that point to the leaf pages. If the index has more than one
nonleaf page, then the nonleaf pages whose entries point to leaf pages are said to
be on level 1. If an index has a second level of nonleaf pages whose entries point to
nonleaf pages on level 1, then those nonleaf pages are said to be on level 2, and so
on. The highest level of an index contains a single page, which Db2 creates when it
first builds the index. This page is called the root page. The root page is a 4-KB
index page. The following figure shows, in schematic form, a typical index.
[Figure: schematic of a typical index, showing the root page, nonleaf pages on
levels 1 and above, leaf pages on level 0 that contain key and record-ID (RID)
pairs, and the table rows that the leaf pages point to.]
If you insert data with a constantly increasing key, Db2 adds the new highest key
to the top of a new page. Be aware, however, that Db2 treats nulls as the highest
value. When the existing high key contains a null value in the first column that
An index key on an auxiliary table used for LOBs is 19 bytes and uses the same
formula as other indexes. The RID value stored within the index is 5 bytes, the
same as for large table spaces (defined with DSSIZE greater than or equal to 4 GB).
In general, the length of the index key is the sum of the lengths of all the columns
of the key, plus the number of columns that allow nulls. The length of a
varying-length column is the maximum length if the index is padded. Otherwise, if
an index is not padded, estimate the length of a varying-length column to be the
average length of the column data, and add a two-byte length field to the estimate.
You can retrieve the value of the AVGKEYLEN column in the
SYSIBM.SYSINDEXES catalog table to determine the average length of keys within
an index.
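For example, consider a hypothetical non-padded index (not from the sample
database) whose key consists of a CHAR(8) NOT NULL column and a
VARCHAR(20) column that allows nulls, where the average length of the
VARCHAR data is 12 bytes. Following the rules above, the estimated key length
is:

```
  8         (CHAR(8) column)
+ (12 + 2)  (average VARCHAR data length, plus a 2-byte length field)
+ 1         (one column allows nulls)
= 23 bytes
```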
The following index calculations are intended only to help you estimate the storage
required for an index. Because there is no way to predict the exact number of
duplicate keys that can occur in an index, the results of these calculations are not
absolute. It is possible, for example, that for a nonunique index, more index entries
than the calculations indicate might be able to fit on an index page.
Example
In the following example of the entire calculation, assume that an index is defined
with these characteristics:
v The index is unique.
v The table it indexes has 100000 rows.
v The key is a single column defined as CHAR(10) NOT NULL.
v The value of PCTFREE is 5.
v The value of FREEPAGE is 4.
v The page size is 4 KB.
v Length of key (k) = 10
v Average number of duplicate keys (n) = 1
v PCTFREE (f) = 5
v FREEPAGE (p) = 4
v Total nonleaf pages (level 2 pages + level 3 pages + ... + level x pages, until
x = 1) = 3
To alter the database design, you need to change the definitions of Db2 objects.
If possible, use SQL ALTER statements to change those definitions.
When you cannot make changes with ALTER statements, you typically must use
the following process:
1. Use the DROP statement to remove the object.
Attention: The DROP statement has a cascading effect. Objects that are
dependent on the dropped object are also dropped. For example, all authorities
for those objects disappear, and packages that reference deleted objects are
marked invalid by Db2.
2. Use the COMMIT statement to commit the changes to the object.
3. Use the CREATE statement to re-create the object.
Related concepts:
Implementing your database design
Related reference:
Statements (Db2 SQL)
DROP (Db2 SQL)
COMMIT (Db2 SQL)
For a list of Db2 catalog tables and descriptions of the information that they
contain, see Db2 catalog tables (Db2 SQL).
The information in the catalog is vital to normal Db2 operation. You can retrieve
catalog information, but changing it can have serious consequences. Therefore, you
cannot execute insert or delete operations that affect the catalog, and you can
update only a limited number of columns. Exceptions to these restrictions are
the SYSIBM.SYSSTRINGS, SYSIBM.SYSCOLDIST, and SYSIBM.SYSCOLDISTSTATS
catalog tables, into which you can insert rows, and in which you can update and
delete rows.
To retrieve information from the catalog, you need at least the SELECT privilege
on the appropriate catalog tables.
Note: Some catalog queries can result in long table space scans.
Procedure
To obtain information about Db2 storage groups and the volumes in those storage
groups, query the SYSIBM.SYSSTOGROUP and SYSIBM.SYSVOLUMES catalog
tables.
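For example, a query along the following lines lists each storage group with its
integrated catalog name and its volumes (an illustrative sketch; see the
SYSSTOGROUP and SYSVOLUMES catalog table descriptions for the full set of
columns):

```sql
SELECT SG.NAME, SG.VCATNAME, VOL.VOLID
  FROM SYSIBM.SYSSTOGROUP SG, SYSIBM.SYSVOLUMES VOL
  WHERE VOL.SGNAME = SG.NAME
  ORDER BY SG.NAME, VOL.VOLID;
```

For SMS-managed storage groups, the VOLID value is an asterisk ('*').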
Related reference:
SYSSTOGROUP catalog table (Db2 SQL)
SYSVOLUMES catalog table (Db2 SQL)
The SYSIBM.SYSTABLES table contains a row for every table, view, and alias in
your Db2 system. Each row tells you whether the object is a table, a view, or an
alias, its name, who created it, what database it belongs to, what table space it
belongs to, and other information. The SYSTABLES table also has a REMARKS
column in which you can store your own information about the table in question.
Procedure
Query the SYSIBM.SYSTABLES table. The following example query displays all the
information for the project activity sample table:
SELECT *
FROM SYSIBM.SYSTABLES
WHERE NAME = 'PROJACT'
AND CREATOR = 'DSN8B10';
Procedure
Related reference:
SYSTABLEPART catalog table (Db2 SQL)
You can use the SYSIBM.SYSTABLES table to find information about aliases by
referencing the following three columns:
v LOCATION contains your subsystem's location name for the remote system, if
the object on which the alias is defined resides at a remote subsystem.
v TBCREATOR contains the schema of the table or view.
v TBNAME contains the name of the table or the view.
You can also find information about aliases by using the following user-defined
functions:
v TABLE_NAME returns the name of a table, view, or undefined object found
after resolving aliases for a user-specified object.
v TABLE_SCHEMA returns the schema name of a table, view, or undefined object
found after resolving aliases for a user-specified object.
The NAME and CREATOR columns of the SYSTABLES table contain the name and
schema of the alias, and three other columns contain the following information for
aliases:
v TYPE is A.
v DBNAME is DSNDB06.
v TSNAME is SYSTSTAB.
If similar tables at different locations have names with the same second and third
parts, you can retrieve the aliases for them with a query like this one:
SELECT LOCATION, CREATOR, NAME
FROM SYSIBM.SYSTABLES
WHERE TBCREATOR = 'DSN8B10' AND TBNAME = 'EMP'
AND TYPE = 'A';
Related reference:
SYSTABLES catalog table (Db2 SQL)
TABLE_NAME (Db2 SQL)
TABLE_SCHEMA (Db2 SQL)
TABLE_LOCATION (Db2 SQL)
Procedure
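Query the SYSIBM.SYSCOLUMNS table. For example, a query along these lines
(a sketch using the sample department table; adjust the names for your own
tables) retrieves information about each column:

```sql
SELECT NAME, TBNAME, COLTYPE, LENGTH, NULLS, DEFAULT
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = 'DEPT'
AND TBCREATOR = 'DSN8B10';
```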
The result is shown below; the following information is given for each column:
v The column name
v The name of the table that contains it
v Its data type
v Its length attribute. For LOB columns, the LENGTH column shows the length of
the pointer to the LOB.
v Whether it allows nulls
v Whether it allows default values
Related tasks:
Retrieving catalog information about LOBs
Related reference:
SYSCOLUMNS catalog table (Db2 SQL)
Procedure
A table can have more than one index. To display information about all the indexes
of a table:
SELECT *
FROM SYSIBM.SYSINDEXES
WHERE TBNAME = 'EMP'
AND TBCREATOR = 'DSN8B10';
Related reference:
SYSINDEXES catalog table (Db2 SQL)
The following actions occur in the catalog after the execution of CREATE VIEW:
Procedure
Query the SYSIBM.SYSTABAUTH table. The following query retrieves the names
of all users who have been granted access to the DSN8B10.DEPT table.
SELECT GRANTEE
FROM SYSIBM.SYSTABAUTH
WHERE TTNAME = 'DEPT'
AND GRANTEETYPE <> 'P'
AND TCREATOR = 'DSN8B10';
GRANTEE is the name of the column that contains authorization IDs for users of
tables. The TTNAME and TCREATOR columns specify the DSN8B10.DEPT table.
The clause GRANTEETYPE <> 'P' ensures that you retrieve the names only of
users (not application plans or packages) that have authority to access the table.
Procedure
Related reference:
SYSCOLUMNS catalog table (Db2 SQL)
SYSINDEXES catalog table (Db2 SQL)
To obtain information about referential constraints and the columns of the foreign
key that defines the constraint:
To find information about the foreign keys of tables for which the project table is
a parent:
SELECT A.RELNAME, A.CREATOR, A.TBNAME, B.COLNAME, B.COLNO
FROM SYSIBM.SYSRELS A, SYSIBM.SYSFOREIGNKEYS B
WHERE A.REFTBCREATOR = 'DSN8B10'
AND A.REFTBNAME = 'PROJ'
AND A.RELNAME = B.RELNAME
ORDER BY A.RELNAME, B.COLNO;
Related reference:
SYSRELS catalog table (Db2 SQL)
SYSFOREIGNKEYS catalog table (Db2 SQL)
Procedure
Query the SYSIBM.SYSTABLESPACE table. Alternatively, to list all table spaces
whose use is restricted for any reason, issue this Db2 command:
-DISPLAY DATABASE (*) SPACENAM(*) RESTRICT
Related reference:
SYSTABLESPACE catalog table (Db2 SQL)
Procedure
Related reference:
SYSAUXRELS catalog table (Db2 SQL)
SYSCOLUMNS catalog table (Db2 SQL)
Procedure
You can query the SYSIBM.SYSROUTINES catalog table to retrieve information
about user-defined functions:
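For example, a query along these lines lists user-defined functions (an
illustrative sketch; the column names are taken from the SYSROUTINES catalog
table description, and ROUTINETYPE 'F' selects functions rather than stored
procedures):

```sql
SELECT SCHEMA, NAME, FUNCTION_TYPE, ORIGIN
FROM SYSIBM.SYSROUTINES
WHERE ROUTINETYPE = 'F';
```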
Related tasks:
Preparing a client program that calls a remote stored procedure (Db2
Application programming and SQL)
Related reference:
SYSROUTINES catalog table (Db2 SQL)
Procedure
Issue this query to determine which trigger packages must be rebound because
they were invalidated after objects were dropped or altered:
SELECT COLLID, NAME
FROM SYSIBM.SYSPACKAGE
WHERE TYPE = 'T'
AND (VALID = 'N' OR OPERATIVE = 'N');
Related reference:
SYSTRIGGERS catalog table (Db2 SQL)
Issue this query to determine the privileges that user USER1B has on sequences:
SELECT GRANTOR, NAME, DATEGRANTED, ALTERAUTH, USEAUTH
FROM SYSIBM.SYSSEQUENCEAUTH
WHERE GRANTEE = 'USER1B';
Related reference:
SYSSEQUENCES catalog table (Db2 SQL)
SYSSEQUENCEAUTH catalog table (Db2 SQL)
You can create comments about tables, views, indexes, aliases, packages, plans,
distinct types, triggers, stored procedures, and user-defined functions. You can
store a comment about the table or the view as a whole, and you can also include
a comment for each column. A comment must not exceed 762 bytes.
A comment is especially useful if your names do not clearly indicate the contents
of columns or tables. In that case, use a comment to describe the specific contents
of the column or table.
After you execute a COMMENT statement, your comments are stored in the
REMARKS column of SYSIBM.SYSTABLES or SYSIBM.SYSCOLUMNS. (Any
comment that is already present in the row is replaced by the new one.) You can
retrieve stored comments by querying the REMARKS column of the appropriate
catalog table.
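For example, the following statements (a sketch using the sample department
table) store a comment and then retrieve it:

```sql
COMMENT ON TABLE DSN8B10.DEPT
  IS 'Departments of the sample company';

SELECT REMARKS
  FROM SYSIBM.SYSTABLES
  WHERE NAME = 'DEPT' AND CREATOR = 'DSN8B10';
```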
Procedure
To verify that you have created the objects in your database and that your
CREATE statements contain no errors:
Query the catalog tables to verify that your tables are in the correct table space,
your table spaces are in the correct storage group, and so on.
Related reference:
Db2 catalog tables (Db2 SQL)
Procedure
To change clauses that were used to create a database, issue the ALTER
DATABASE statement.
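For example, a statement such as the following changes the default buffer pool
and storage group of a database (the object names are from the sample database
and are illustrative only):

```sql
ALTER DATABASE DSN8D91A
  BUFFERPOOL BP1
  STOGROUP DSN8G910;
```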
Procedure
What to do next
If you want to migrate to another device type or change the catalog name of the
integrated catalog facility, you need to move the data.
Related concepts:
Moving Db2 data
Moving a Db2 data set
Related reference:
ALTER STOGROUP (Db2 SQL)
Related information:
Implementing Db2 storage groups
Procedure
To let SMS manage the storage needed for the objects that the storage
group supports:
1. Issue an ALTER STOGROUP statement. You can specify SMS classes when you
alter a storage group.
2. Specify ADD VOLUMES ('*') and REMOVE VOLUMES (current-vols) where
current-vols is the list of the volumes that are currently assigned to the storage
group. For example,
ALTER STOGROUP DSN8G910
REMOVE VOLUMES (VOL1)
ADD VOLUMES ('*');
Example
The following example shows how to alter a storage group to SMS-managed using
the DATACLAS, MGMTCLAS, or STORCLAS keywords.
ALTER STOGROUP SGOS5001
MGMTCLAS REGSMMC2
DATACLAS REGSMDC2
STORCLAS REGSMSC2;
What to do next
SMS manages every new data set that is created after the ALTER STOGROUP
statement is executed. SMS does not manage data sets that are created before the
execution of the statement.
Related tasks:
Migrating to DFSMShsm
Related reference:
ALTER STOGROUP (Db2 SQL)
Also, when a storage group is used to extend a data set, the volumes must have
the same device type as the volumes that were used when the data set was
defined.
The changes that you make to the volume list by using the ALTER STOGROUP
statement have no effect on existing storage. Changes take effect when new objects
are defined or when the REORG, RECOVER, or LOAD REPLACE utilities are used
on those objects. For example, if you use the ALTER STOGROUP statement to
remove volume 22222 from storage group DSN8G910, the Db2 data on that volume
remains intact. However, when a new table space is defined by using DSN8G910,
volume 22222 is not available for space allocation.
Procedure
2. Make an image copy of each table space. For example, run the COPY utility
with a statement such as COPY TABLESPACE dbname.tsname DEVT SYSDA.
3. Ensure that the table space is not being updated in such a way that the data set
might need to be extended. For example, you can stop the table space with the
Db2 command STOP DATABASE (dbname) SPACENAM (tsname).
4. Use the ALTER STOGROUP statement to remove the volume that is associated
with the old storage group and to add the new volume:
For user-managed data sets, you are responsible for defining and copying the data
sets to an SSD. However, whether the data sets are Db2-managed or user-managed,
all volumes that can contain secondary extents should have the same drive type as
the drive type of the primary extent volume. In addition, you must define all of
the pieces of a multi-piece data set on volumes that have the same drive type.
Procedure
Pending definition changes are changes that are not immediately materialized. For
detailed information about pending definition changes, how to materialize them,
and related restrictions, see Pending data definition changes.
Immediate definition changes are changes that are materialized immediately. Most
immediate definition changes are restricted while pending definition changes exist
for an object. For a list of such restrictions, see Restrictions for changes to objects
that have pending data definition changes.
However, depending on the type of table space and the attributes that you want to
change, you might instead need to drop the table space and create it again with
the new attributes. Far fewer types of changes are supported by ALTER
TABLESPACE statements for the deprecated non-UTS table space types. In such
cases, it is best to first convert the table space to a partition-by-range or
partition-by-growth table space, and then use ALTER TABLESPACE statements
with pending definition changes to make the changes.
Procedure
To change the attributes of a table space, use any of the following approaches:
v Use the ALTER TABLESPACE statements to change the table space type and
attributes, or to enable or disable MEMBER CLUSTER. For example, you might
make the following changes:
– Use the MAXPARTITIONS attribute of the ALTER TABLESPACE statement to
change the maximum number of partitions for partition-by-growth table
spaces. You can also use this attribute to convert a simple table space, or a
single-table segmented (non-UTS) table space, to a partition-by-growth table
space.
– Use the SEGSIZE attribute of the ALTER TABLESPACE statement to convert a
| partitioned (non-UTS) table space to a partition-by-range table space. For
| more information, see Converting partitioned (non-UTS) table spaces to
| partition-by-range universal table spaces.
v Drop the table space and create it again with the new attributes, as described in
Dropping and re-creating a table space to change its attributes. For example,
some changes are not supported by ALTER TABLESPACE statements, such as
the following changes:
– Changing the CCSID to an incompatible value
– Moving the table space to a different database
– Converting a multi-table segmented (non-UTS) table space to a UTS table
space type
What to do next
Important: Limit the use of the NOT LOGGED attribute. Logging is not generally
a performance bottleneck, given that in an average environment logging accounts
for less than 5% of the central processing unit (CPU) utilization. Therefore, you
should use the NOT LOGGED attribute only when data is being used by a single
task, where the table space can be recovered if errors occur.
Procedure
Results
The change in logging applies to all tables in this table space and also applies to all
indexes on those tables, as well as associated LOB and XML table spaces.
Related tasks:
Altering table spaces
Loading data by using the INSERT statement
Related reference:
ALTER TABLESPACE (Db2 SQL)
You should use the NOT LOGGED attribute only for situations where the data is
in effect being duplicated. If the data is corrupted, you can re-create it from its
original source, rather than from an image copy and the log. For example, you
could use NOT LOGGED when you are inserting large volumes of data with the
INSERT statement.
Restrictions: If you use the NOT LOGGED logging attribute, you can use image
copies for recovery, with certain restrictions.
v The logging attribute applies to all partitions of a table space. NOT LOGGED
suppresses only the logging of undo and redo information; control records of the
table space continue to be logged.
v You can take full and incremental SHRLEVEL REFERENCE image copies even
though the table space has the NOT LOGGED attribute. You cannot take
SHRLEVEL CHANGE copies because the NOT LOGGED attribute suppresses
the logging of changes necessary for recovery.
v System-level backups taken with the BACKUP SYSTEM utility will contain NOT
LOGGED objects, but they cannot be used for object-level recovery of NOT
LOGGED objects.
You can set the NOT LOGGED attribute when creating or altering table spaces.
Consider using the NOT LOGGED attribute in the following specific situations:
v For tables that summarize information in other tables, including materialized
query tables, where the data can be easily re-created.
v When you are inserting large volumes of data with the INSERT statement.
v When you are using LOAD RESUME.
To use table spaces that are not logged, when using LOAD RESUME, complete
the following steps:
1. Alter the table space to not logged before the load. Altering the logging
attribute requires exclusive use of the table space.
2. Run the LOAD utility with the RESUME option.
3. Before normal update processing, alter the table space back to logged, and
make an image copy of the table space.
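The steps above can be sketched as follows (the object names are illustrative; the
LOAD and COPY steps are utility control statements, shown here as comments
rather than SQL):

```sql
-- Step 1: suspend logging before the load (requires exclusive use of the table space)
ALTER TABLESPACE DSN8D91A.TS1 NOT LOGGED;

-- Step 2 (utility job):  LOAD DATA RESUME YES INTO TABLE ...

-- Step 3: restore logging, then establish a recoverable point with an image copy
ALTER TABLESPACE DSN8D91A.TS1 LOGGED;
-- (utility job):  COPY TABLESPACE DSN8D91A.TS1 SHRLEVEL REFERENCE
```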
Restriction: Online LOAD RESUME against a table space that is not logged is
not recoverable if the load fails. If an online load attempt fails and rollback is
necessary, the not logged table space is placed in LPL RECOVER-pending status.
If this happens, you must terminate the LOAD job, recover the data from a prior
image copy, and restart the online LOAD RESUME.
Altering the logging attribute of a table space from LOGGED to NOT LOGGED
establishes a recoverable point for the table space. Indexes automatically inherit the
logging attribute of their table spaces. For the index, the change establishes a
recoverable point that can be used by the RECOVER utility.
Altering the logging attribute of a table space from NOT LOGGED to LOGGED
marks the table space as COPY-pending (a recoverable point must be established
before logging resumes). The indexes on the tables in the table space that have the
COPY YES attribute are unchanged.
Related concepts:
Recovery implications for objects that are not logged
Procedure
To change the space allocation for user-managed data sets, complete the following
steps:
1. Run the REORG TABLESPACE utility, and specify the UNLOAD PAUSE
option.
2. Make the table space unavailable with the STOP DATABASE command and the
SPACENAM option after the utility completes the unload and stops.
3. Delete and redefine the data sets.
4. Resubmit the utility job with the RESTART(PHASE) parameter specified on the
EXEC statement.
What to do next
The job now uses the new data sets when reloading.
Use of the REORG utility to extend data sets causes the newly acquired free space
to be distributed throughout the table space rather than to be clustered at the end.
For best results, use this procedure only to change attributes of a table space that
cannot be changed with ALTER TABLESPACE statements. The techniques
described here are intended for changing the attributes of non-UTS table spaces.
The compression dictionary for the table space is dropped, if one exists. All
tables in TS1 are dropped automatically.
6. Commit the DROP statement. You must commit the DROP TABLESPACE
statement before creating a table space or index with the same name. When
you drop a table space, all entries for that table space are dropped from
SYSIBM.SYSCOPY. This makes recovery for that table space impossible from
previous image copies.
7. Create the new table space, TS1, and grant the appropriate user privileges.
You can also create a partitioned table space. For example, use a statement
such as:
CREATE TABLESPACE TS1
IN DSN8D91A
USING STOGROUP DSN8G910
PRIQTY 4000
SECQTY 130
ERASE NO
NUMPARTS 95
(PARTITION 45 USING STOGROUP DSN8G910
PRIQTY 4000
SECQTY 130
11. Drop table space TS2. If a table in the table space has been created with
RESTRICT ON DROP, you must alter that table to remove the restriction
before you can drop the table space.
12. Re-create any dependent objects on the new tables TA1, TA2, TA3, ....
13. REBIND any packages that were invalidated as a result of dropping the table
space.
Related concepts:
Implications of dropping a table
Procedure
To redistribute data in partitioned table spaces, use one of the following two
methods:
v Change the partition boundaries.
v Redistribute the data across partitions by using the REORG TABLESPACE utility.
Example
Suppose that after some time, because of the popularity of certain products, you
want to redistribute the data across certain partitions. You want the third partition
to contain values 200 through 249, the fourth partition to contain values 250
through 279, and the fifth partition to contain values 280 through 299.
To change the boundary for these partitions, issue the following statements:
ALTER TABLE PRODUCTS ALTER PARTITION 3
ENDING AT ('249');
ALTER TABLE PRODUCTS ALTER PARTITION 4
ENDING AT ('279');
ALTER TABLE PRODUCTS ALTER PARTITION 5
ENDING AT ('299');
| In this case, Db2 determines the appropriate limit key changes and redistributes
| the data accordingly.
Related tasks:
Increasing partition size
Creating tables partitioned by data value ranges
Related reference:
Syntax and options of the REORG TABLESPACE control statement (Db2
Utilities)
ALTER INDEX (Db2 SQL)
ALTER TABLE (Db2 SQL)
Advisory or restrictive states (Db2 Utilities)
You can increase the maximum partition size of a partitioned table space to 128 GB
or 256 GB. Depending on the partition size and page size, increasing the maximum
Procedure
This step is required because the DFSMSdss RESTORE command extends a data
set differently than Db2.
Procedure
What to do next
Using the RECOVER utility again does not resolve the extent definition.
For user-defined data sets, define the data sets with larger primary and secondary
space allocation values.
Related concepts:
The RECOVER utility and the DFSMSdss RESTORE command
Related reference:
ALTER TABLESPACE (Db2 SQL)
| Procedure
| To prevent the creation of any new tables that use index-controlled partitioning, set
| the PREVENT_NEW_IXCTRL_PART subsystem parameter to YES. For more
| information, see PREVENT INDEX PART CREATE field
| (PREVENT_NEW_IXCTRL_PART subsystem parameter) (Db2 Installation and
| Migration).
| Non-UTS table spaces for base tables are deprecated and likely to be unsupported
| in the future.
| Table 19. Differences between table-controlled and index-controlled partitioning
| Table-controlled partitioning:
| v Requires no partitioning index or clustering index.
| v Multiple partitioned indexes can be created in a table space.
| v A table space partition is identified by both a physical partition number and
| a logical partition number.
| v The high-limit key is always enforced, which means that key values that are
| out of range might result in errors or discarded data, depending on the
| operation involved.
| Index-controlled partitioning:
| v Requires a partitioning index and a clustering index.
| v Only one partitioned index can be created in a table space.
| v A table space partition is identified by a physical partition number.
| v The high-limit key is not enforced if the table space is non-large.
| The conversion is also required before you can convert a partitioned (non-UTS)
| table space to a partition-by-range table space.
| Procedure
| Db2 issues SQLCODE +20272 to indicate that the associated table space no
| longer uses index-controlled partitioning.
| ALTER INDEX index-name CLUSTER;
| Results
| Db2 also invalidates any packages that depend on the table in the converted table
| space.
| Db2 places the last partition of the table space into a REORG-pending (REORP)
| state in the following situations:
| Adding or rotating a new partition
| Db2 stores the original high-limit key value instead of the default
| high-limit key value. Db2 puts the last partition into a REORP state, unless
| the high-limit key value is already being enforced, or the last partition is
| empty.
| Altering a partition results in the conversion to table-controlled partitioning
| Db2 changes the existing high-limit key to the highest value that is
| possible for the data types of the limit key columns. After the conversion
| to table-controlled partitioning, Db2 changes the high-limit key value back
| to the user-specified value and puts the last partition into a REORP state.
| Example
| For example, assume that you have a very large transaction table named TRANS
| that contains one row for each transaction. The table includes the following
| columns:
| v ACCTID is the customer account ID
| v POSTED is the date of the transaction
| The table space that contains TRANS is divided into 13 partitions, each containing
| one month of data. Two existing indexes are defined as follows:
|
|
| v A partitioning index is defined on the transaction date by the following CREATE
| INDEX statement with a PARTITION ENDING AT clause: The partitioning index
| is the clustering index, and the data rows in the table are in order by the
| transaction date. The partitioning index controls the partitioning of the data in
| the table space.
| v A nonpartitioning index is defined on the customer account ID:
| CREATE INDEX IX2 ON TRANS(ACCTID);
| Db2 usually accesses the transaction table through the customer account ID by
| using the nonpartitioning index IX2. The partitioning index IX1 is not used for
| data access and is wasting space. In addition, you have a critical requirement for
| availability on the table, and you want to be able to run an online REORG job at
| the partition level with minimal disruption to data availability.
|
|
| 1. Drop the partitioning index IX1.
| DROP INDEX IX1;
| When you drop the partitioning index IX1, Db2 converts the table space from
| index-controlled partitioning to table-controlled partitioning. Db2 changes the
| high limit key value that was originally specified to the highest value for the
| key column.
| 2. Create a partitioned clustering index IX3 on the customer account ID that
| matches the 13 data partitions in the table, as a replacement for IX2.
| CREATE INDEX IX3 ON TRANS(ACCTID)
| CLUSTER
| (PARTITION 1 ENDING AT ('01/31/2002'),
| PARTITION 2 ENDING AT ('02/28/2002'),
| ...
| PARTITION 13 ENDING AT ('01/31/2003'));
| When you create the index IX3, Db2 creates a partitioned index with 13
| partitions that match the 13 data partitions in the table. Each index partition
| contains the account numbers for the transactions during that month, and those
| account numbers are ordered within each partition. For example, partition 11 of
| the index matches the table partition that contains the transactions for
| November, 2002, and it contains the ordered account numbers of those
| transactions.
| 3. Drop index IX2, which was replaced by IX3.
| DROP INDEX IX2;
| COMMIT;
|
|
| What to do next
| The result of this procedure is a partitioned (non-UTS) table space, which is also a
| deprecated table space type. For best results, also convert the table space to a
| partition-by-range table space, as described in Converting partitioned (non-UTS)
| table spaces to partition-by-range universal table spaces.
| Related tasks:
| Creating tables partitioned by data value ranges
| Related reference:
| CREATE INDEX (Db2 SQL)
| CREATE TABLE (Db2 SQL)
| Related information:
| +20272 (Db2 Codes)
Procedure
To alter a table:
Issue the ALTER TABLE statement. With the ALTER TABLE statement, you can:
v Add a new column
v Rename a column
| v Drop a column
v Change the data type of a column, with certain restrictions
v Add or drop a parent key or a foreign key
v Add or drop a table check constraint
v Add a new partition to a table space, including adding a new partition to a
partition-by-growth table space, by using the ADD PARTITION clause
v Change the boundary between partitions, extend the boundary of the last
partition, rotate partitions, or instruct Db2 to insert rows at the end of a table or
appropriate partition
v Register an existing table as a materialized query table, change the attributes of
a materialized query table, or change a materialized query table to a base table
v Change the VALIDPROC clause
v Change the DATA CAPTURE clause
v Change the AUDIT clause by using the options ALL, CHANGES, or NONE
v Add or drop the restriction on dropping the table and the database and table
space that contain the table
v Alter the length of a VARCHAR column using the SET DATA TYPE VARCHAR
clause
v Add or drop a clone table
v Alter APPEND attributes
v Drop the default value for a column
v Activate or deactivate row-level or column-level access control for the table
Tip: When designing row-level or column-level access control for a table, first
create the row permissions or column masks to avoid multiple invalidations to
packages and dynamically cached statements. After you create row permissions
or column masks, use the ALTER TABLE statement to activate row-level or
column-level access control for the table. If you must drop or alter a column
mask, first activate row-level access control to prevent access to the table, and
then drop or alter the column mask. Otherwise, the rows are accessible, but the
column values inside the rows are not protected.
Also, the new column might become the rightmost column of the table, depending
on whether you use basic row format or reordered row format.
The physical records are not actually changed until values are inserted in the new
column. When you use the ALTER TABLE ADD COLUMN statement, packages are
not invalidated, unless the following criteria are true:
v The data type of the new column is DATE, TIME, or TIMESTAMP.
v You specify the DEFAULT keyword.
v You do not specify a constant (that is, you use the system default value).
However, to use the new column in a program, you need to modify and recompile
the program and bind the plan or package again. You also might need to modify
any program that contains a static SQL statement SELECT *, which returns the new
column after the plan or package is rebound. You also must modify any INSERT
statement that does not contain a column list.
Access time to the table is not affected immediately, unless the record was
previously fixed length. If the record was fixed length, the addition of a new
column causes Db2 to treat the record as variable length, and access time is
affected immediately.
Procedure
Results
Tip: Inserting values in the new column might degrade performance by forcing
rows onto another physical page. You can avoid this situation by creating the table
space with enough free space to accommodate normal expansion. If you already
have this problem, run REORG on the table space to fix it.
If the new column is a ROWID column, Db2 returns the same, unique row ID
value for a row each time you access that row. Reorganizing a table space does not
affect the values of a ROWID column. You cannot use the DEFAULT clause for
ROWID columns.
If the new column is an identity column (a column that is defined with the AS
IDENTITY clause), Db2 places the table space in REORG-pending (REORP) status,
and access to the table space is restricted until the table space is reorganized. When
the REORG utility is run, Db2
v Generates a unique value for the identity column of each existing row
v Physically stores these values in the database
v Removes the REORP status
If the new column is a short string column, you can specify a field procedure for
it. If you do specify a field procedure, you cannot also specify NOT NULL.
Example
The following example adds a column to the table DSN8910.DEPT, which contains
a location code for the department. The column name is LOCATION_CODE, and
its data type is CHAR (4).
ALTER TABLE DSN8910.DEPT
ADD LOCATION_CODE CHAR (4);
Related concepts:
Row format conversion for table spaces
Restrictions:
v You cannot alter a column to specify a default value if the table is referenced by
a view.
v If the column is part of a unique constraint or unique index, the new default
value should not be the same as a value that already exists in the column.
v The new default value applies only to new rows.
Procedure
Example
Use the following statement to drop the default value from column JOB:
ALTER TABLE MYEMP ALTER COLUMN JOB DROP DEFAULT
In general, Db2 can alter a data type if the data can be converted from the old type
to the new type without truncation and without losing arithmetic precision.
When you alter the data type of a column in a table, Db2 creates a new version for
the table space that contains the data rows of the table.
Procedure
Results
When you change the data type of a column by using the ALTER TABLE
statement, the new definition of the column is stored in the catalog.
When you retrieve table rows, the columns are retrieved in the format that is
indicated by the catalog, but the data is not saved in that format. When you
change or insert a row, the entire row is saved in the format that is indicated by
the catalog. When you reorganize the table space (or perform a load replace), Db2
reloads the data in the table space according to the format of the current
definitions in the catalog.
Example:
Assume that a table contains basic account information for a small bank. The initial
account table was created many years ago in the following manner:
CREATE TABLE ACCOUNTS (
ACCTID DECIMAL(4,0) NOT NULL,
NAME CHAR(20) NOT NULL,
ADDRESS CHAR(30) NOT NULL,
BALANCE DECIMAL(10,2) NOT NULL)
IN dbname.tsname;
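The ALTER statements that widened these columns do not appear in this excerpt.
A plausible set, assuming VARCHAR targets for the character columns and a wider
precision for BALANCE (the specific lengths are illustrative), is:

```sql
-- Widen NAME and ADDRESS to varying-length strings, and extend BALANCE.
-- The target lengths shown here are assumptions for illustration.
ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
```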
The NAME and ADDRESS columns can now handle longer values without
truncation, and the shorter values are no longer padded. The BALANCE column is
extended to allow for larger dollar amounts. Db2 saves these new formats in the
catalog and stores the inserted row in the new formats.
Recommendation: If you change both the length and the type of a column from
fixed-length to varying-length by using one or more ALTER statements, issue the
ALTER statements within the same unit of work. Reorganize immediately so that
the format is consistent for all of the data rows in the table.
Related concepts:
Table space versions
Related tasks:
Altering the attributes of an identity column
Related reference:
ALTER TABLE (Db2 SQL)
Example: Assume that the following indexes are defined on the ACCOUNTS table:
CREATE INDEX IX1 ON ACCOUNTS(ACCTID);
CREATE INDEX IX2 ON ACCOUNTS(NAME);
When the data type of the ACCTID column is altered from DECIMAL(4,0) to
INTEGER, the IX1 index is placed in a REBUILD-pending (RBDP) state.
REBUILD INDEX with the SHRLEVEL CHANGE option allows read and write
access to the data for most of the rebuild operation.
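For example, to rebuild the IX1 index from the preceding example while
permitting concurrent reads and writes (a sketch; substitute your own index
name), run the REBUILD INDEX utility with a control statement such as:

```sql
REBUILD INDEX (IX1) SHRLEVEL CHANGE
```

This is a utility control statement rather than SQL; run it through your normal
utility job.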
In certain situations, when an index is inaccessible, Db2 can bypass the index to
allow applications access to the underlying data. In these situations, Db2 offers
accessibility at the expense of performance. In making its determination of the best
access path, Db2 can bypass an index under the following circumstances:
v Dynamic PREPAREs
Db2 avoids choosing an index that is in an RBDP state. Bypassing the index
typically degrades performance, but provides availability that would not be
possible otherwise.
v Cached PREPAREs
Db2 avoids choosing an index that is both in an RBDP state and within a cached
PREPARE statement, because the dynamic cache for the table is invalidated
whenever an index is put into an RBDP state.
In the case of static BINDs, Db2 might choose an index that is in an RBDP state as
the best access path. Db2 does so by making the optimistic assumption that the
index will be available by the time it is actually used. (If the index is not available
at that time, an application can receive a resource unavailable message.)
Padding
When an index is not padded, the value of the PADDED column of the
SYSINDEXES table is set to N. An index is only considered not padded when it is
created with at least one varying length column and either:
v The NOT PADDED keyword is specified.
v The default padding value is NO.
When an index is padded, the value of the PADDED column of the SYSINDEXES
table is set to Y. An index is padded if it is created with at least one varying length
column and either:
v The PADDED keyword is specified.
v The default padding is YES.
In the example of the ACCOUNTS table, the IX2 index retains its padding
attribute. The padding attribute of an index is altered only if the value is
inconsistent with the current state of the index. The value can be inconsistent, for
example, if you change the value of the PADDED column in the SYSINDEXES
table after creating the index.
| Whether indexes are padded by default depends on the Db2 release in which the
| index was created and the release in which the system was originally installed:
| v Indexes that were created in a pre-DB2 Version 8 release are padded by default.
| In this case, the value of the PADDED column of the SYSINDEXES catalog table
| is blank (PADDED = ' '). The PADDED column is also blank when there are no
| varying length columns.
Related concepts:
Indexes that are padded or not padded (Introduction to Db2 for z/OS)
Related tasks:
Saving disk space by using non-padded indexes (Db2 Performance)
Related reference:
SYSINDEXES catalog table (Db2 SQL)
Procedure
| Run the REORG TABLESPACE utility. If the table space contains one table, REORG
| TABLESPACE updates the data format for the table to the format of the current
| table space version. If the table space contains more than one table, REORG
| TABLESPACE updates the data format for all tables that are not in version 0
| format to the format of the current table space version. The current table space
| version is the value of CURRENT_VERSION in the SYSIBM.SYSTABLESPACE
| catalog table.
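As a sketch, assuming the dbname.tsname names used elsewhere in this
information, you could check the current version with SQL and then reorganize
with a utility control statement:

```sql
-- SQL: check the current table space version
SELECT CURRENT_VERSION
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'dbname' AND NAME = 'tsname';

-- Utility control statement: reload the data in the current version format
REORG TABLESPACE dbname.tsname SHRLEVEL CHANGE
```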
Db2 uses table space versions to maximize data availability. Table space versions
enable Db2 to keep track of schema changes and, simultaneously, provide users
with access to data in altered table spaces. When users retrieve rows from an
altered table, the data is displayed in the format that is described by the most
recent schema definition, even though the data is not currently stored in this
format. The most recent schema definition is associated with the current table
space version.
Although data availability is maximized by the use of table space versions,
performance might suffer because Db2 does not automatically reformat the data in
the table space to conform to the most recent schema definition. Db2 defers any
reformatting of existing data until you reorganize the table space with the REORG
TABLESPACE utility. The more ALTER statements that you commit between
reorganizations, the more table space versions Db2 must track, and the more
performance can suffer.
| Versioning is always done at the table space level. The version of a table matches
| the table space version that it corresponds with. For example, consider that you
| have two tables in one table space, which is defined with DEFINE YES. The tables
| are named TABLE1 and TABLE2. The version for both tables and the table space is 0
| (zero). If TABLE1 is altered, the version for TABLE1 becomes SYSTABLES.VERSION = 1,
| and the table space version becomes SYSTABLESPACE.CURRENT_VERSION = 1. At this
| point, the version for TABLE2 is still SYSTABLES.VERSION = 0. Now, when the
| changes for TABLE1 are committed, and TABLE2 is altered, the version for TABLE2
| becomes SYSTABLES.VERSION = 2, which corresponds with the table space version
| of SYSTABLESPACE.CURRENT_VERSION = 2. The version of TABLE2 skips from 0 to 2,
| because SYSTABLESPACE.CURRENT_VERSION = 1 was already used by TABLE1.
The following schema changes might result in Db2 creating a table space version:
v Extending the length of a character (CHAR data type) or graphic (GRAPHIC
data type) column
v Changing the type of a column within character data types (CHAR, VARCHAR)
v Changing the type of a column within graphic data types (GRAPHIC,
VARGRAPHIC)
v Changing the type of a column within numeric data types (SMALLINT,
INTEGER, FLOAT, REAL, FLOAT8, DOUBLE, DECIMAL)
v Adding a column to a table
v Extending the length of a varying character (VARCHAR data type) or varying
graphic (VARGRAPHIC data type) column, if the table already has a version
number that is greater than 0
v Altering the maximum length of a LOB column, if the table already has a
version number that is greater than 0
v Altering the inline length of a LOB column
v Extending the precision of the TIMESTAMP column
In general, Db2 creates only one table space version if you make multiple schema
changes in the same unit of work. If you make these same schema changes in
separate units of work, each change results in a new table space version. For
example, the first three ALTER TABLE statements in the following example are all
associated with the same table space version. The scope of the first COMMIT
statement encompasses all three schema changes. The last ALTER TABLE statement
is associated with the next table space version. The scope of the second COMMIT
statement encompasses a single schema change.
-- First unit of work: three schema changes, one table space version.
-- (The first three statements do not appear in this excerpt; the target
-- data types shown here are illustrative.)
ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
COMMIT;
-- Second unit of work: one schema change, the next table space version.
ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID SET DATA TYPE INTEGER;
COMMIT;
Db2 does not create a table space version under the following circumstances:
v You add a column to a table in the following situations:
– You created the table space with DEFINE NO, the current version is 0, and
you add a column before any data is added to the table. If you commit the
change and add another column, the version is still 0.
– You created the table space with DEFINE YES. After adding a column or
altering a column, committing the change, and adding no data to the table,
you add another column.
– A non-partitioned table space and a table that it contains are not in version 0
format. No data is in the current committed version format. You add a
column to the table.
v You extend the length of a varying character (VARCHAR data type) or varying
graphic (VARGRAPHIC data type) column, and the table does not have a
version number yet.
v You specify the same data type and length that a column currently has, so that
its definition does not actually change.
v You alter the maximum length of a LOB column and the table does not have a
version number yet.
Related tasks:
Altering the data type of a column
Altering the attributes of an identity column
Db2 can store up to 256 table space versions, numbered sequentially from 0 to 255.
The next consecutive version number after 255 is 1. Version number 0 is never
reused; it is reserved for the original version of the table space. The versions
that are associated with schema changes that have not yet been applied are
considered to be in use. The range of used versions is stored in the catalog.
| If a table space has multiple tables, and one table is at version number 0, the oldest
| table space version is 0. When the current table space version reaches 255, you
| need to perform special processing to allow removal of table space versions.
Index versions
Db2 uses index versions to maximize data availability. Index versions enable Db2
to keep track of schema changes and, simultaneously, provide users with access to
data in altered columns that are contained in indexes.
When users retrieve rows from a table with an altered column, the data is
displayed in the format that is described by the most recent schema definition,
even though the data is not currently stored in this format. The most recent
schema definition is associated with the current index version.
Db2 creates an index version each time you commit one of the following schema
changes:
Table 20. Situations when Db2 creates an index version
v When you commit an ALTER TABLE statement that changes the data type of a
non-numeric column that is contained in one or more indexes, Db2 creates a
new index version for each index that is affected by the operation.
v When you commit an ALTER TABLE statement that changes the length of a
VARCHAR column that is contained in one or more PADDED indexes, Db2
creates a new index version for each index that is affected by the operation.
Exceptions: Db2 does not create an index version under the following
circumstances:
v When the index was created with DEFINE NO
v When you extend the length of a varying-length character (VARCHAR data
type) or varying-length graphic (VARGRAPHIC data type) column that is
contained in one or more indexes that are defined with the NOT PADDED
option
v When you specify the same data type and length that a column (which is
contained in one or more indexes) currently has, such that its definition does not
actually change
Db2 creates only one index version if, in the same unit of work, you make multiple
schema changes to columns that are contained in the same index. If you make
these same schema changes in separate units of work, each change results in a new
index version.
Related tasks:
Recycling index version numbers
Reorganizing indexes
Db2 can store up to 16 index versions, numbered sequentially from 0 to 15. The next
consecutive version number after 15 is 1. Version number 0 is never reused,
because it is reserved for the original version of the index. The versions that are
associated with schema changes that have not been applied yet are considered to
be “in use,” and the range of used versions is stored in the catalog. In use versions
can be recovered from image copies of the table space, if necessary.
Version numbers are considered to be unused if the schema changes that are
associated with them have been applied and no image copies contain data at those
versions.
1. Determine the range of version numbers that are currently in use for an index
by querying the OLDEST_VERSION and CURRENT_VERSION columns of the
SYSIBM.SYSINDEXES catalog table.
2. Next, run the appropriate utility to recycle unused index version numbers.
v For indexes that are defined as COPY YES, run the MODIFY RECOVERY
utility.
If all reusable version numbers (1 - 15) are currently in use, reorganize the
index by running REORG INDEX or REORG TABLESPACE before you
recycle the version numbers.
v For indexes that are defined as COPY NO, run the REORG TABLESPACE,
REORG INDEX, LOAD REPLACE, or REBUILD INDEX utility. These utilities
recycle the version numbers as they perform their primary functions.
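As a sketch of step 1, the following query checks the in-use version range for the
IX1 index from the earlier example (the creator name here is an assumption):

```sql
SELECT NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE NAME = 'IX1'
    AND CREATOR = 'DSN8910';
```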
Related concepts:
Index versions
If you plan to let Db2 enforce referential integrity in a set of tables, see Referential
constraints (Db2 Application programming and SQL) for a description of the
requirements for referential constraints. Db2 does not enforce informational
referential constraints.
Related concepts:
Creation of relationships with referential constraints (Introduction to Db2 for
z/OS)
Related tasks:
Creating tables for data integrity (Db2 Application programming and SQL)
Related reference:
ALTER TABLE (Db2 SQL)
Assume that the tables in the sample application (the Db2 sample activity table,
project table, project activity table, employee table, and department table) already
exist, have the appropriate column definitions, and are already populated.
You can build the same referential structure in several different ways; however, the
following process might be the simplest to understand.
Procedure
Introductory concepts
Creation of relationships with referential constraints (Introduction to Db2 for
z/OS)
Application of business rules to relationships (Introduction to Db2 for z/OS)
When you add parent keys and foreign keys to an existing table, you must
consider certain restrictions and implications.
v If you add a primary key, the table must already have a unique index on the key
columns. If multiple unique indexes include the primary key columns, the index
that was most recently created on the key columns becomes the primary index.
Because of the unique index, no duplicate values of the key exist in the table;
therefore you do not need to check the validity of the data.
v If you add a unique key, the table must already have a unique index with a key
that is identical to the unique key. If multiple unique indexes include the
primary key columns, Db2 arbitrarily chooses a unique index on the key
columns to enforce the unique key. Because of the unique index, no duplicate
values of the key exist in the table; therefore you do not need to check the
validity of the data.
v You can use only one FOREIGN KEY clause in each ALTER TABLE statement; if
you want to add two foreign keys to a table, you must execute two ALTER
TABLE statements.
v If you add a foreign key, the parent key and unique index of the parent table
must already exist. Adding the foreign key requires the ALTER privilege on the
dependent table and either the ALTER or REFERENCES privilege on the parent
table.
v Adding a foreign key establishes a referential constraint relationship. Db2 does
not validate the data when you add the foreign key. Instead, if the table is
populated (or, in the case of a nonsegmented table space, if the table space has
ever been populated), the table space that contains the table is placed in
CHECK-pending status, just as if it had been loaded with ENFORCE NO. In this
case, you need to execute the CHECK DATA utility to clear the CHECK-pending
status.
v You can add a foreign key with the NOT ENFORCED option to create an
informational referential constraint. This action does not leave the table space in
CHECK-pending status, and you do not need to execute CHECK DATA.
Procedure
Adding a primary key
To add a primary key to an existing table, use the PRIMARY KEY clause in
an ALTER TABLE statement. For example, if the department table and its
index XDEPT1 already exist, create its primary key by issuing the
following statement:
ALTER TABLE DSN8910.DEPT
ADD PRIMARY KEY (DEPTNO);
Related tasks:
Creating indexes to improve referential integrity performance for foreign keys
(Db2 Performance)
Related reference:
ALTER TABLE (Db2 SQL)
Before you drop a foreign key or a parent key, consider carefully the effects on
your application programs. The primary key of a table serves as a permanent,
unique identifier of the occurrences of the entities it describes. Application
programs often depend on that identifier. The foreign key defines a referential
relationship and a delete rule. Without the key, your application programs must
enforce the constraints.
Procedure
Adding partitions
You can use ALTER TABLE statements to add partitions to all types of partitioned
table spaces.
You do not need to allocate extra partitions for expected growth when you create
partitioned table spaces because you can add partitions as needed.
You can add a partition as the last logical partition of any table in any type of
partitioned table space.
When you add partitions, Db2 always uses the next physical partition that is not
already in use, until you reach the maximum number of partitions for the table
space.
Procedure
To add partitions:
Add a partition after the last existing logical partition by issuing an ALTER TABLE
statement. In the ADD PARTITION clause, specify an ENDING AT value beyond
the existing limit of the last logical partition. If the table space is a large table
space, you can use the new partition immediately after the ALTER statement
completes. In this case, the partition is not placed in REORG-pending (REORP)
status because it extends the high-range values that were not previously used. For
non-large table spaces, the partition is placed in REORP status because the last
partition boundary was not previously enforced.
Examples
For example, consider a table space that contains a transaction table named
TRANS. The table is divided into 10 partitions, and each partition contains one
year of data. Partitioning is defined on the transaction date, and the limit key
value is the end of each year. The following table shows a representation of the
table space.
Table 21. An example table space with 10 partitions
Limit value Physical partition number Data set name that backs the partition
12/31/2010 1 catname.DSNDBx.dbname.psname.I0001.A001
12/31/2011 2 catname.DSNDBx.dbname.psname.I0001.A002
12/31/2012 3 catname.DSNDBx.dbname.psname.I0001.A003
12/31/2013 4 catname.DSNDBx.dbname.psname.I0001.A004
12/31/2014 5 catname.DSNDBx.dbname.psname.I0001.A005
12/31/2015 6 catname.DSNDBx.dbname.psname.I0001.A006
12/31/2016 7 catname.DSNDBx.dbname.psname.I0001.A007
12/31/2017 8 catname.DSNDBx.dbname.psname.I0001.A008
12/31/2018 9 catname.DSNDBx.dbname.psname.I0001.A009
12/31/2019 10 catname.DSNDBx.dbname.psname.I0001.A010
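For example, to add an eleventh partition for the year 2020 to the TRANS table
shown above (a sketch), issue:

```sql
ALTER TABLE TRANS ADD PARTITION ENDING AT ('12/31/2020');
```

Because this table uses ascending limit values, the new ENDING AT value must be
beyond 12/31/2019, the limit of the last existing logical partition.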
What to do next
After you add partitions, you might need to complete any of the following actions.
Alter the attributes of added partitions
You might need to alter the attributes of the added partition. The attributes
of the new partition are either inherited or calculated. If it is necessary to
change specific attributes for the new partition, you must issue separate
ALTER TABLESPACE and ALTER INDEX statements after you add the
partition. Examine the catalog to determine whether the inherited values
require changes.
The added partition inherits most attributes from the previous last logical
partition.
If you want to specify the space attributes for a new partition, use the
ALTER TABLESPACE and ALTER INDEX statements. For example, suppose
that the new partition is PARTITION 11 for the table
space and the index. Issue the following statements to specify quantities
for the PRIQTY, SECQTY, FREEPAGE, and PCTFREE attributes:
ALTER TABLESPACE tsname ALTER PARTITION 11
USING STOGROUP stogroup-name
PRIQTY 200 SECQTY 200
FREEPAGE 20 PCTFREE 10;
Altering partitions
You can use the ALTER TABLE statement to alter the partitions of table spaces.
Procedure
To alter a partition:
Issue the ALTER TABLE statement and specify the options that you want to
change.
Alternatively, you can let Db2 determine any appropriate limit key changes to
more evenly distribute the data across partitions. If you want Db2 to determine
any limit key changes, follow the instructions in Redistributing data across
partitions by using REORG (Db2 Utilities).
Procedure
Rotating partitions
You can use the ALTER TABLE statement to rotate any logical partition to become
the last partition. Rotating partitions is supported for partitioned (non-UTS) table
spaces and partition-by-range table spaces, but not for partition-by-growth table
spaces.
Recommendation:
When you create a partitioned table space, you do not need to allocate extra
partitions for expected growth. Instead, you can use the ALTER TABLE ADD
PARTITION statement to add partitions as needed. If rotating partitions is
appropriate for your application, use the ALTER TABLE ROTATE PARTITION statement.
Nullable partitioning columns: Db2 lets you use nullable columns as partitioning
columns. But with table-controlled partitioning, Db2 can restrict the insertion of
null values into a table with nullable partitioning columns, depending on the order
of the partitioning key. After a rotate operation, if the partitioning key is ascending,
Db2 prevents an INSERT of a row with a null value for the key column. If the
partitioning key is descending, Db2 allows an INSERT of a row with a null value
for the key column. The row is inserted into the first partition.
Procedure
Example
For example, assume that the partition structure of the table space is sufficient
through the year 2006. The following table shows a representation of the table
space through the year 2006. When another partition is needed for the year 2007,
you determined that the data for 1996 is no longer needed. You want to recycle the
partition for the year 1996 to hold the transactions for the year 2007.
Table 23. An excerpt of a partitioned table space
Partition Limit value Data set name that backs the partition
P008 12/31/2004 catname.DSNDBx.dbname.psname.I0001.A008
P009 12/31/2005 catname.DSNDBx.dbname.psname.I0001.A009
P010 12/31/2006 catname.DSNDBx.dbname.psname.I0001.A010
To rotate the first partition for table TRANS to be the last partition, issue the
following statement:
ALTER TABLE TRANS ROTATE PARTITION FIRST TO LAST
ENDING AT ('12/31/2007') RESET;
For a table with limit values in ascending order, the value in the ENDING AT
clause must be higher than the limit value of any previous partition. Db2 chooses
the first partition to be the partition with the lowest limit value.
For a table with limit values in descending order, the value must be lower than the
limit value of any previous partition. Db2 chooses the first partition to be the
partition with the highest limit value.
The following table shows a representation of the table space after the first
partition is rotated to become the last partition.
Table 24. Rotating the first partition to be the last partition
Partition Limit value Data set name that backs the partition
P002 12/31/1997 catname.DSNDBx.dbname.psname.I0001.A002
P003 12/31/1998 catname.DSNDBx.dbname.psname.I0001.A003
P004 12/31/1999 catname.DSNDBx.dbname.psname.I0001.A004
P005 12/31/2000 catname.DSNDBx.dbname.psname.I0001.A005
P006 12/31/2001 catname.DSNDBx.dbname.psname.I0001.A006
P007 12/31/2002 catname.DSNDBx.dbname.psname.I0001.A007
P008 12/31/2003 catname.DSNDBx.dbname.psname.I0001.A008
P009 12/31/2004 catname.DSNDBx.dbname.psname.I0001.A009
P010 12/31/2005 catname.DSNDBx.dbname.psname.I0001.A010
P011 12/31/2006 catname.DSNDBx.dbname.psname.I0001.A011
P001 12/31/2007 catname.DSNDBx.dbname.psname.I0001.A001
Procedure
Issue the ALTER TABLE statement with the ALTER PARTITION clause to specify a
new boundary for the last partition.
For more details on this process, see “Changing the boundary between partitions”
on page 164.
Example
The following table shows a representation of a table space through the year 2007.
You rotated the first partition to be the last partition. Now, you want to extend the
last partition so that it includes the year 2008.
To extend the boundary of the last partition to include the year 2008, issue the
following statement:
ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT ('12/31/2008');
You can use the partition immediately after the ALTER statement completes. The
partition is not placed in any restrictive status, because it extends the high-range
values that were not previously used.
Related tasks:
Creating tables partitioned by data value ranges
Related reference:
ALTER TABLE (Db2 SQL)
Advisory or restrictive states (Db2 Utilities)
The results of truncating the last partition in a partitioned table space depend on
the table space type, whether there is any possibility that data could fall outside
the truncated partition, and whether the limit key value of the last partition before
truncation is MAXVALUE, MINVALUE, less than MAXVALUE, or greater than
MINVALUE.
| If the partition that you truncate is empty or there is no possibility that data could
| fall outside of the new boundary, and the last partition and the previous partition
| have no pending definition changes, the definition change occurs immediately. No
| restrictive or pending status is necessary.
| The following steps assume that the data is in ascending order. The process is
| similar if the columns are in descending order.
Procedure
| v To split a partition into two when the limit key of the last partition is less than
| MAXVALUE:
| 1. Suppose that p1 is the limit key for the last partition. Issue the ALTER
| TABLE statement with the ADD PARTITION clause to add a partition with a
| limit key that is greater than p1.
| 2. Issue the ALTER TABLE statement with the ALTER PARTITION clause to
| specify a limit key that is less than p1 for the partition that is now the
| second-to-last partition. For more details on this process, see Changing the
| boundary between partitions.
| 3. Issue the ALTER TABLE statement with the ALTER PARTITION clause to
| specify p1 for the limit key of the new last partition.
| 4. Issue the REORG TABLESPACE utility on the new second-to-last and last
| partitions to remove the REORG-pending status on the last partition, and
| materialize the changes and remove the advisory REORG-pending status on
| the second-to-last partition.
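Applied to a date-partitioned table such as TRANS, the steps above might look
like the following sketch (the partition numbers and limit keys are illustrative):

```sql
-- Step 1: add a new last partition with a limit key greater than p1 (12/31/2015)
ALTER TABLE TRANS ADD PARTITION ENDING AT ('12/31/2016');
-- Step 2: truncate the partition that is now second-to-last
ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT ('06/30/2015');
-- Step 3: restore p1 as the limit key of the new last partition
ALTER TABLE TRANS ALTER PARTITION 12 ENDING AT ('12/31/2015');
```

Step 4, running REORG TABLESPACE on the last two partitions, is a utility
operation rather than SQL.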
Example
For example, the following table shows a representation of a table space through
the year 2015, where each year of data is saved in separate partitions. Assume that
you want to split the data for 2015 into two partitions.
| You want to create a partition to include the data for the last six months of 2015
| (from 07/01/2015 to 12/31/2015). You also want partition P001 to include only the
| data for the first six months of 2015 (through 06/30/2015).
Table 27. Table space with each year of data in a separate partition
Partition Limit value Data set name that backs the partition
P002 12/31/2005 catname.DSNDBx.dbname.psname.I0001.A002
P003 12/31/2006 catname.DSNDBx.dbname.psname.I0001.A003
P004 12/31/2007 catname.DSNDBx.dbname.psname.I0001.A004
P005 12/31/2008 catname.DSNDBx.dbname.psname.I0001.A005
P006 12/31/2009 catname.DSNDBx.dbname.psname.I0001.A006
P007 12/31/2010 catname.DSNDBx.dbname.psname.I0001.A007
P008 12/31/2011 catname.DSNDBx.dbname.psname.I0001.A008
P009 12/31/2012 catname.DSNDBx.dbname.psname.I0001.A009
P010 12/31/2013 catname.DSNDBx.dbname.psname.I0001.A010
P011 12/31/2014 catname.DSNDBx.dbname.psname.I0001.A011
P001 12/31/2015 catname.DSNDBx.dbname.psname.I0001.A001
| To truncate partition P001 to include data only through 06/30/2015, issue the
| following statement:
| ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT ('06/30/2015');
| To preserve the last partition key limit of 12/31/2015, issue the following
| statement:
| ALTER TABLE TRANS ALTER PARTITION 12 ENDING AT ('12/31/2015');
Related reference:
ALTER TABLE (Db2 SQL)
Advisory or restrictive states (Db2 Utilities)
Procedure
Issue a CREATE TABLE or ALTER TABLE statement and specify the APPEND
option. The APPEND option has the following settings:
YES Requests data rows to be placed into the table by disregarding the
clustering during SQL INSERT and online LOAD operations. Rather than
attempting to insert rows in cluster-preserving order, rows are appended at
the end of the table or appropriate partition.
NO Requests standard behavior of SQL INSERT and online LOAD operations,
namely that they attempt to place data rows in a well clustered manner
with respect to the value in the row's cluster key columns. NO is the
default option.
After populating a table with the APPEND option in effect, you can achieve
clustering by running the REORG utility.
Restriction: You cannot specify the APPEND option for tables created in XML or
work file table spaces.
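For example, to request append processing on an existing table (a sketch,
assuming the TRANS table from earlier examples), issue:

```sql
ALTER TABLE TRANS APPEND YES;
```

After the table is populated, run the REORG utility if you need to restore
clustering order.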
When you add an XML column to a table, an XML table and XML table space are
implicitly created to store the XML data. If the new XML column is the first XML
column that you created for the table, Db2 also implicitly creates a BIGINT DOCID
column to store a unique document identifier for the XML columns of a row.
Procedure
Issue the ALTER TABLE statement and specify the ADD column-name XML option.
Example
ALTER TABLE orders ADD shipping_info XML;
Related tasks:
Altering implicitly created XML objects
Related reference:
ALTER TABLE (Db2 SQL)
Deprecated function:
Hash organization is only available for universal (UTS) table space types. If you
want to enable hash access on a table space of another type, you must first alter
the table space to a UTS type.
Enabling hash access requires a table space reorganization, and disables some
features such as index clustering.
Procedure
Example
In this example, the user alters the EMP table to ADD ORGANIZE BY
HASH, sets the EMPNO column as the unique identifier, and specifies a HASH
SPACE of 64 with the modifier M for megabytes.
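A statement matching this description (a sketch of the deprecated hash-organization syntax; the statement itself is not reproduced above) is:

```sql
-- Sketch (deprecated function): add hash organization to EMP,
-- keyed on EMPNO, with a 64 MB hash space
ALTER TABLE EMP
  ADD ORGANIZE BY HASH UNIQUE (EMPNO)
  HASH SPACE 64 M;
```

As noted above, hash access is not enabled until the table space is reorganized.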
What to do next
Monitor the real-time-statistics information about your table to verify that the hash
access path is used regularly and to verify that the use of disk space is optimized.
Related tasks:
| Organizing tables for hash access to individual rows (deprecated) (Db2
Performance)
Managing space and page size for hash-organized tables (Db2 Performance)
| Monitoring hash access (deprecated) (Db2 Performance)
| Altering the size of your hash spaces (deprecated)
| Creating tables that use hash organization (deprecated)
Related reference:
ALTER TABLE (Db2 SQL)
REORG TABLESPACE (Db2 Utilities)
Deprecated function:
| Hash-organized tables are deprecated. Beginning in Db2 12, packages that are
| bound with APPLCOMPAT(V12R1M504) or higher cannot create hash-organized
| tables or alter existing tables to use hash-organization. Existing hash-organized
| tables remain supported, but they are likely to be unsupported in the future.
When you tune the performance of tables that are organized by hash, you can alter
the size of the hash space with the ALTER TABLE statement.
Procedure
To alter the size of the hash space for a table, use one of the following approaches:
v Run the REORG TABLESPACE utility on the table space and specify
AUTOESTSPACE YES in the REORG TABLESPACE statement. Db2
automatically estimates a size for the hash space based on information from the
real-time statistics tables. If you specify AUTOESTSPACE NO in the REORG
TABLESPACE statement, Db2 uses the hash space that you explicitly specified
for the table space.
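Alternatively, as the introduction notes, you can change the hash space directly with the ALTER TABLE statement; a sketch follows (the table name and size value are assumptions):

```sql
-- Sketch (deprecated function): set an explicit hash space size
ALTER TABLE EMP
  ALTER ORGANIZATION SET HASH SPACE 128 M;
```

The new size takes effect when you next reorganize the table space (with AUTOESTSPACE NO, so that your explicit value is used).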
What to do next
Monitor the real-time-statistics information about your table to ensure that the
hash access path is used regularly and that your disk space is used efficiently.
Related tasks:
| Organizing tables for hash access to individual rows (deprecated) (Db2
Performance)
Managing space and page size for hash-organized tables (Db2 Performance)
| Monitoring hash access (deprecated) (Db2 Performance)
| Altering tables for hash access (deprecated)
Related reference:
ALTER TABLE (Db2 SQL)
REORG TABLESPACE (Db2 Utilities)
The row-begin column of the system period contains the timestamp value for when
a row is created. The row-end column contains the timestamp value for when a row
is removed. A transaction-start-ID column contains a unique timestamp value that
Db2 assigns per transaction, or the null value.
For a list of restrictions that apply to tables that use system-period data versioning,
see Restrictions for system-period data versioning.
Procedure
Example
For example, consider that you created a table named policy_info by issuing the
following CREATE TABLE statement:
CREATE TABLE policy_info
(policy_id CHAR(10) NOT NULL,
coverage INT NOT NULL);
To create a history table for this system-period temporal table, issue the following
CREATE TABLE statement:
CREATE TABLE hist_policy_info
(policy_id CHAR(10) NOT NULL,
coverage INT NOT NULL,
sys_start TIMESTAMP(12) NOT NULL,
sys_end TIMESTAMP(12) NOT NULL,
trans_id TIMESTAMP(12));
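To complete system-period data versioning for this pair of tables, the base table must also be given the generated timestamp columns, the SYSTEM_TIME period, and the versioning relationship. A sketch of those remaining steps (column names follow the history table above):

```sql
-- Sketch: add generated columns and the system period to the base table
ALTER TABLE policy_info
  ADD COLUMN sys_start TIMESTAMP(12) NOT NULL
      GENERATED ALWAYS AS ROW BEGIN;
ALTER TABLE policy_info
  ADD COLUMN sys_end TIMESTAMP(12) NOT NULL
      GENERATED ALWAYS AS ROW END;
ALTER TABLE policy_info
  ADD COLUMN trans_id TIMESTAMP(12)
      GENERATED ALWAYS AS TRANSACTION START ID;
ALTER TABLE policy_info
  ADD PERIOD SYSTEM_TIME (sys_start, sys_end);

-- Link the base table to its history table
ALTER TABLE policy_info
  ADD VERSIONING USE HISTORY TABLE hist_policy_info;
```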
Related concepts:
Temporal tables and data versioning
Related information:
Managing Ever-Increasing Amounts of Data with IBM Db2 for z/OS: Using
Temporal Data Management, Archive Transparency, and the IBM Db2 Analytics
Accelerator for z/OS (IBM Redbooks)
Procedure
Issue the ALTER TABLE statement with the ADD PERIOD BUSINESS_TIME
clause. The table becomes an application-period temporal table.
Example
For example, consider a table named policy_info that includes the columns
bus_start and bus_end.
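A table definition consistent with the ALTER TABLE statement that follows (the column data types are assumptions; DATE is one valid choice for the period columns, TIMESTAMP(6) is another):

```sql
-- Sketch: a policy_info table with begin and end columns
-- for the BUSINESS_TIME period
CREATE TABLE policy_info
  (policy_id CHAR(10) NOT NULL,
   coverage INT NOT NULL,
   bus_start DATE NOT NULL,
   bus_end DATE NOT NULL);
```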
You can add an application period to this table by issuing the following ALTER
TABLE statement:
ALTER TABLE policy_info ADD PERIOD BUSINESS_TIME(bus_start, bus_end);
You also can add a unique index to the table by issuing the following CREATE
INDEX statement:
CREATE UNIQUE INDEX ix_policy
ON policy_info (policy_id, BUSINESS_TIME WITHOUT OVERLAPS);
Restriction: You cannot issue the ALTER INDEX statement with ADD
BUSINESS_TIME WITHOUT OVERLAPS. Db2 issues SQL error code -104 with
SQLSTATE 20522.
Procedure
Issue INSERT, UPDATE, DELETE, or MERGE statements to make the changes that
you want. Timestamp information is stored in the timestamp columns, and
historical rows are moved to the history table.
Restriction: You cannot issue SELECT FROM DELETE or SELECT FROM UPDATE
statements when the FOR PORTION OF option is specified for either the UPDATE
statement or the DELETE statement. Db2 issues an error in both of these cases
(SQL error code -104 with SQLSTATE 20522).
Example
The following example shows how you can insert data in the POLICY_INFO table
by specifying the DEFAULT keyword in the VALUES clause for each of the
generated columns:
INSERT INTO POLICY_INFO
VALUES ('A123', 12000, DEFAULT, DEFAULT, DEFAULT);
Materialized query tables enable Db2 to use automatic query rewrite to optimize
queries. Automatic query rewrite is a process that Db2 uses to examine a query
and, if appropriate, to rewrite the query so that it executes against a materialized
query table that has been derived from the base tables in the submitted query.
| You can also use the ALTER TABLE statement to register an existing table as a
| materialized query table. For more information, see Registering an existing table as
| a materialized query table (Db2 Performance).
Related tasks:
Altering an existing materialized query table (Db2 Performance)
Procedure
Issue an ALTER TABLE statement and specify the DROP MATERIALIZED QUERY
option. For example,
ALTER TABLE TRANSCOUNT DROP MATERIALIZED QUERY;
What to do next
After you issue this statement, Db2 can no longer use the table for query
optimization, and you cannot populate the table by using the REFRESH TABLE
statement.
Option Description
Enable or disable automatic query rewrite.
By default, when you create or register a materialized query table, Db2
enables it for automatic query rewrite. To disable automatic query rewrite,
issue the following statement:
ALTER TABLE TRANSCOUNT DISABLE QUERY OPTIMIZATION;
Switch between system-maintained and user-maintained.
By default, a materialized query table is system-maintained; the only way
you can change the data is by using the REFRESH TABLE statement. To
change to a user-maintained materialized query table, issue the following
statement:
ALTER TABLE TRANSCOUNT SET MAINTAINED BY USER;
Change back to a system-maintained materialized query table.
Specify the MAINTAINED BY SYSTEM option.
Procedure
To change the definition of an existing materialized query table, use one of the
following approaches:
v Optional: Drop and re-create the materialized query table with a different
definition.
v Optional: Use ALTER TABLE statement to change the materialized query table
into a base table. Then, change it back to a materialized query table with a
different but equivalent definition (that is, with a different but equivalent
SELECT for the query).
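The second approach can be sketched as follows (the fullselect shown is a placeholder assumption; an equivalent definition for your own table is required):

```sql
-- Sketch: change the materialized query table into a base table ...
ALTER TABLE TRANSCOUNT DROP MATERIALIZED QUERY;

-- ... then register it again with a different but equivalent definition
ALTER TABLE TRANSCOUNT ADD MATERIALIZED QUERY
  (SELECT ACCTID, COUNT(*) AS CNT   -- placeholder fullselect
     FROM TRANS
     GROUP BY ACCTID)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM;
```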
To ensure that the rows of a table conform to a new validation routine, you must
run the validation routine against the old rows. One way to accomplish this is to
use the REORG and LOAD utilities.
Procedure
To ensure that the rows of a table conform to a new validation routine by using
the REORG and LOAD utilities:
1. Use REORG to reorganize the table space that contains the table with the new
validation routine. Specify UNLOAD ONLY, as in this example:
REORG TABLESPACE DSN8D91A.DSN8S91E
UNLOAD ONLY
This step creates a data set that is used as input to the LOAD utility.
2. Run LOAD with the REPLACE option, and specify a discard data set to hold
any invalid records. For example,
LOAD INTO TABLE DSN8910.EMP
REPLACE
FORMAT UNLOAD
DISCARDDN SYSDISC
The EMPLNEWE validation routine validates all rows after the LOAD step has
completed. Db2 copies any invalid rows into the SYSDISC data set.
Procedure
You can retrieve the log by using a program such as the log apply feature of the
Remote Recovery Data Facility (RRDF) program offering, or Db2 DataPropagator.
LOB values are not available for DATA CAPTURE CHANGES. To return a table
back to normal logging, use DATA CAPTURE NONE.
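For example, logging behavior is controlled with the DATA CAPTURE clause of the ALTER TABLE statement (the table name here is an assumption):

```sql
-- Capture full before- and after-images of changed rows in the log
ALTER TABLE DSN8910.EMP DATA CAPTURE CHANGES;

-- Return the table to normal logging
ALTER TABLE DSN8910.EMP DATA CAPTURE NONE;
```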
Procedure
To change an edit procedure or a field procedure for a table space in which the
maximum record length is less than 32 KB, use the following procedure:
1. Run the UNLOAD utility or run the REORG TABLESPACE utility with the
UNLOAD EXTERNAL option to unload the data and decode it using the
existing edit procedure or field procedure.
These utilities generate a LOAD statement in the data set (specified by the
PUNCHDDN option of the REORG TABLESPACE utility) that you can use to
reload the data into the original table space.
If you are using the same edit procedure or field procedure for many tables,
unload the data from all the table spaces that have tables that use the
procedure.
2. Modify the code of the edit procedure or the field procedure.
3. After the unload operation is completed, stop Db2.
4. Link-edit the modified procedure, using its original name.
5. Start Db2.
6. Use the LOAD utility to reload the data. LOAD then uses the modified
procedure or field procedure to encode the data.
What to do next
To change an edit procedure or a field procedure for a table space in which the
maximum record length is greater than 32 KB, use the DSNTIAUL sample program
to unload the data.
For example:
ALTER TABLE table-name ALTER COLUMN column-name
SET DATA TYPE altered-data-type
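A concrete instance of this statement might look like the following sketch (the table, column, and target data type are assumptions):

```sql
-- Sketch: widen a decimal column in place
ALTER TABLE DSN8910.EMP ALTER COLUMN SALARY
  SET DATA TYPE DECIMAL(11,2);
```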
Related reference:
ALTER TABLE (Db2 SQL)
Procedure
What to do next
Changing the data type of an identity column, like changing some other data
types, requires that you drop and then re-create the table.
Related concepts:
Identity columns (Db2 Application programming and SQL)
Table space versions
Related tasks:
Altering the data type of a column
Changing data types by dropping and re-creating the table
Related reference:
ALTER TABLE (Db2 SQL)
For example, you must make the following changes by redefining the column (that
is, dropping the table and then re-creating the table with the new definitions):
v An original specification of CHAR (25) to CHAR (20)
Procedure
The DROP TABLE statement deletes a table. For example, to drop the project table,
run the following statement:
DROP TABLE DSN8910.PROJ;
The statement deletes the row in the SYSIBM.SYSTABLES catalog table that
contains information about DSN8910.PROJ. This statement also drops any other
objects that depend on the project table. This action results in the following
implications:
v The column names of the table are dropped from SYSIBM.SYSCOLUMNS.
v If the dropped table has an identity column, the sequence attributes of the
identity column are removed from SYSIBM.SYSSEQUENCES.
v If triggers are defined on the table, they are dropped, and the corresponding
rows are removed from SYSIBM.SYSTRIGGERS and SYSIBM.SYSPACKAGES.
v Any views based on the table are dropped.
v Packages that involve the use of the table are invalidated.
v Cached dynamic statements that involve the use of the table are removed from
the cache.
If a table has a partitioning index, you must drop the table space or use LOAD
REPLACE when loading the redefined table. If the CREATE TABLE that is used to
redefine the table creates a table space implicitly, commit the DROP statement
before re-creating a table by the same name. You must also commit the DROP
statement before you create any new indexes with the same name as the original
indexes.
Related tasks:
Dropping and re-creating a table space to change its attributes
A query against the SYSIBM.SYSVIEWDEP catalog table lists the views, with
their creators, that are affected if you drop the project table.
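A sketch of such a query, in the same style as the package and plan examples that follow (the SYSVIEWDEP column names are assumed):

```sql
-- Views (and their creators) that depend on the project table
SELECT DNAME, DCREATOR
  FROM SYSIBM.SYSVIEWDEP
  WHERE BNAME = 'PROJ'
    AND BCREATOR = 'DSN8910'
    AND BTYPE = 'T';
```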
The next example lists the packages, identified by the package name, collection ID,
and consistency token (in hexadecimal representation), that are affected if you drop
the project table:
SELECT DNAME, DCOLLID, HEX(DCONTOKEN)
FROM SYSIBM.SYSPACKDEP
WHERE BNAME = 'PROJ'
AND BQUALIFIER = 'DSN8910'
AND BTYPE = 'T';
The next example lists the plans, identified by plan name, that are affected if you
drop the project table:
SELECT DNAME
FROM SYSIBM.SYSPLANDEP
WHERE BNAME = 'PROJ'
AND BCREATOR = 'DSN8910'
AND BTYPE = 'T';
In addition, the SYSIBM.SYSINDEXES table tells you what indexes currently exist
on a table. From the SYSIBM.SYSTABAUTH table, you can determine which users
are authorized to use the table.
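Queries in the same style as the preceding examples can retrieve that information; the column choices below are a sketch:

```sql
-- Indexes that currently exist on the project table
SELECT NAME, CREATOR
  FROM SYSIBM.SYSINDEXES
  WHERE TBNAME = 'PROJ'
    AND TBCREATOR = 'DSN8910';

-- Users authorized to use the project table
SELECT GRANTEE
  FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'PROJ'
    AND TCREATOR = 'DSN8910';
```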
Re-creating a table
You can re-create a Db2 table to decrease the length attribute of a string column or
the precision of a numeric column.
Procedure
INSERT INTO T2
SELECT * FROM T1;
b. Load data from your old table into the new table by using the INCURSOR
option of the LOAD utility. This option uses the Db2 UDB family
cross-loader function.
4. Issue the statement DROP TABLE T1. If T1 is the only table in an explicitly
created table space, and you do not mind losing the compression dictionary, if
one exists, you can drop the table space instead. By dropping the table space,
the space is reclaimed.
5. Commit the DROP statement.
6. Use the statement RENAME TABLE to rename table T2 to T1.
7. Run the REORG utility on the table space that contains table T1.
8. Notify users to re-create any synonyms, indexes, views, and authorizations they
had on T1.
What to do next
If you want to change a data type from string to numeric or from numeric to
string (for example, INTEGER to CHAR or CHAR to INTEGER), use the CHAR
and DECIMAL scalar functions in the SELECT statement to do the conversion.
Another alternative is to use the following method:
1. Use UNLOAD or REORG UNLOAD EXTERNAL (if the data to unload is less
than 32 KB) to save the data in a sequential file, and then
2. Use the LOAD utility to repopulate the table after re-creating it. When you
reload the table, make sure you edit the LOAD statement to match the new
column definition.
This method is particularly appealing when you are trying to re-create a large
table.
Procedure
Procedure
What to do next
Attention: When you drop a view, Db2 invalidates packages that are dependent
on the view and revokes the privileges of users who are authorized to use it. Db2
attempts to rebind the package the next time it is executed, and you receive an
error if you do not re-create the view.
To determine how much rebinding and reauthorizing is needed if you drop a
view, check the following table.
Table 29. Catalog tables to check after dropping a view
Catalog table What to check
SYSIBM.SYSPACKDEP Packages dependent on the view
SYSIBM.SYSVIEWDEP Views dependent on the view
SYSIBM.SYSTABAUTH Users authorized to use the view
Related tasks:
Creating Db2 views
Dropping Db2 views
Related reference:
DROP (Db2 SQL)
COMMIT (Db2 SQL)
CREATE VIEW (Db2 SQL)
Unlike other forms of triggers that are defined only on tables, INSTEAD OF
triggers are defined only on views. If you use the INSTEAD OF trigger, the
requested update operation against the view is replaced by the trigger logic, which
performs the operation on behalf of the view.
Procedure
Issue the CREATE TRIGGER statement and specify the INSTEAD OF trigger for
insert, update, and delete operations on the view.
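A minimal sketch of an INSTEAD OF trigger follows; the view, base table, and column names are hypothetical:

```sql
-- Sketch: route inserts against a view to its underlying base table
CREATE TRIGGER new_policy
  INSTEAD OF INSERT ON v7
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  INSERT INTO att (coverage, bus_start, bus_end)
    VALUES (n.col1, n.col2, n.col3);
```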
Related reference:
CREATE TRIGGER (Db2 SQL)
| Procedure
| To issue a data change operation on a view that references a temporal table:
| Example
|
|
| The following example shows how you can create a view that references an
| application-period temporal table (att), and then specify a period clause for an
| update operation on the view.
| CREATE VIEW v7 (col1, col2, col3)
| AS SELECT coverage, bus_start, bus_end FROM att;
|
| UPDATE v7
| FOR PORTION OF BUSINESS_TIME FROM '2013-01-01' TO '2013-06-01'
| SET col1 = col1 + 1.10;
For other changes, you must drop and re-create the index.
When you add a new column to an index, change how varying-length columns are
stored in the index, or change the data type of a column in the index, Db2 creates
a new version of the index.
Restrictions:
v If the padding of an index is changed, the index is placed in REBUILD-pending
(RBDP) status and a new version of the index is not created.
v Any alteration to use index compression places the index in RBDP status.
v You cannot add a column with the DESC attribute to an index if the column is a
VARBINARY column or a column with a distinct type that is based on the
VARBINARY type.
Procedure
Issue the ALTER INDEX statement. The ALTER INDEX statement can be
embedded in an application program or issued interactively.
Related concepts:
Indexes that are padded or not padded (Introduction to Db2 for z/OS)
Related tasks:
Designing indexes for performance (Db2 Performance)
Related reference:
ALTER INDEX (Db2 SQL)
Related information:
Implementing Db2 indexes
If pending changes exist at the table space level, you can materialize the pending
changes that are associated with the table space (including the pending changes for
the index) by running REORG TABLESPACE with SHRLEVEL CHANGE or
SHRLEVEL REFERENCE.
Restriction: You cannot add columns to IBM-defined indexes on the Db2 catalog.
Procedure
Results
If the column that is being added to the index is already part of the table on which
the index is defined, the index is left in a REBUILD-pending (RBDP) status.
However, if you add a new column to a table and to an existing index on that
table within the same unit of work, the index is left in advisory REORG-pending
(AREO*) status and can be used immediately for data access.
If you add a column to an index and to a table within the same unit of work, this
will cause table and index versioning.
Example
For example, assume that you created a table named TRANS with columns that
include ACCTID, STATE, and POSTED, and an index named STATE_IX on the
STATE column.
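Definitions consistent with the statements below might look like the following sketch (the data types are assumptions):

```sql
-- Sketch: assumed original table and index
CREATE TABLE TRANS
  (ACCTID CHAR(8) NOT NULL,
   STATE CHAR(2) NOT NULL,
   POSTED DATE NOT NULL);

CREATE INDEX STATE_IX ON TRANS (STATE);
```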
To add a ZIPCODE column to the table and the index, issue the following
statements:
ALTER TABLE TRANS ADD COLUMN ZIPCODE CHAR(5);
ALTER INDEX STATE_IX ADD COLUMN (ZIPCODE);
COMMIT;
Because the ALTER TABLE and ALTER INDEX statements are executed within the
same unit of work, Db2 immediately can use the new index with the key STATE,
ZIPCODE for data access.
Related reference:
ALTER INDEX (Db2 SQL)
Restriction: You cannot add columns to IBM-defined indexes on the Db2 catalog.
If you want to add a column to a unique index to allow index-only access of the
data, you first must determine whether existing indexes on a unique table are
being used to query the table. You can use the RUNSTATS utility, real-time
statistics, or the EXPLAIN statement to find this information. Those indexes with
the unique constraint in common are candidates for consolidation. Other
non-unique indexes might be candidates for consolidation, depending on their
frequency of use.
Procedure
To specify that additional columns be appended to the set of index key columns of
a unique index:
1. Issue the ALTER INDEX statement with the INCLUDE clause. Any column that
is included with the INCLUDE clause is not used to enforce uniqueness. These
included columns might improve the performance of some queries through
index only access. Using this option might eliminate the need to access data
pages for more queries and might eliminate redundant indexes.
2. Commit the alter procedure. As a result of this alter procedure, the index is
placed into page set REBUILD-pending (PSRBD) status, because the additional
columns preexisted in the table.
3. To remove the PSRBD status from the index, complete one of the following
options:
Procedure
To alter how varying-length column values are stored in an index, complete the
following steps:
1. Choose the padding attribute for the columns.
2. Issue the ALTER INDEX SQL statement.
v Specify the NOT PADDED clause if you do not want column values to be
padded to their maximum length. This clause specifies that VARCHAR and
VARGRAPHIC columns of an existing index are stored as varying-length
columns.
v Specify the PADDED clause if you want column values to be padded to the
maximum lengths of the columns. This clause specifies that VARCHAR and
VARGRAPHIC columns of an existing index are stored as fixed-length
columns.
3. Commit the alter procedure.
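For example, step 2 might look like this (the index name is an assumption):

```sql
-- Store VARCHAR and VARGRAPHIC key columns as true varying-length values
ALTER INDEX USER1.IX1 NOT PADDED;
```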
Results
The ALTER INDEX statement is successful only if the index has at least one
varying-length column.
What to do next
When you alter the padding attribute of an index, the index is placed into a
restricted REBUILD-pending (RBDP) state. When you alter the padding attribute of
a nonpartitioned secondary index (NPSI), the index is placed into a page set
REBUILD-pending (PSRBD) state. In both cases, the indexes cannot be accessed
until they are rebuilt from the data.
Procedure
Restriction: You can only specify CLUSTER if there is not already another
clustering index. In addition, an index on a table that is organized by hash
cannot be altered to a clustering index.
v CLUSTER indicates that the index is to be used as the clustering index of the
table. The change takes effect immediately. Any subsequently inserted rows
use the new clustering index. Existing data remains clustered by the previous
clustering index until the table space is reorganized.
v NOT CLUSTER indicates that the index is not to be used as the clustering
index of the table. However, if the index was previously defined as the
clustering index, it continues to be used as the clustering index until you
explicitly specify CLUSTER for a different index.
If you specify NOT CLUSTER for an index that is not a clustering index, that
specification is ignored.
3. Commit the alter procedure.
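For example (the index names are assumptions):

```sql
-- Make IX2 the clustering index; existing data stays in the old order
-- until the table space is reorganized
ALTER INDEX USER1.IX2 CLUSTER;

-- Remove the clustering attribute from an index
ALTER INDEX USER1.IX1 NOT CLUSTER;
```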
Related reference:
ALTER INDEX (Db2 SQL)
Any primary key, unique key, or referential constraints associated with a unique
index must be dropped before you drop the unique index. However, you can drop
a unique index for a unique key without dropping the unique constraint if the
unique key was created before Version 9.
Commit the drop before you create any new table spaces or indexes by the same
name.
If you drop an index and then run an application program that used that index
(so that the program is automatically rebound), the rebound program no longer
uses the dropped index. If, at a later time, you re-create the index and the
application program is not rebound, the application program cannot take
advantage of the new index.
Related tasks:
Creating Db2 indexes
Related reference:
DROP (Db2 SQL)
CREATE INDEX (Db2 SQL)
Reorganizing indexes
A schema change that affects an index might cause performance degradation. In
this case, you might need to reorganize indexes to correct any performance
degradation.
Procedure
To reorganize an index:
Run the REORG INDEX utility as soon as possible after a schema change that
affects an index. You can also run the REORG TABLESPACE utility.
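A control statement in this style can be used (the index name is an assumption):

```sql
-- Rebuild the index in place while allowing concurrent access
REORG INDEX USER1.IX1 SHRLEVEL CHANGE
```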
Related concepts:
Index versions
Related reference:
REORG INDEX (Db2 Utilities)
REORG TABLESPACE (Db2 Utilities)
| ALTER statements with certain options can cause pending changes to the
| definition of database objects. When an ALTER statement is issued that causes
| pending changes to the definition of an object, semantic validation and
| authorization checking are performed. However, changes to the table definition
| and data are not applied and the object is placed in advisory REORG-pending state
| (AREOR), until the REORG utility is run to resolve the pending changes.
| Most pending data definition changes are supported only for universal table
| spaces, with the following exceptions:
| v Converting single-table simple or segmented (non-UTS) table spaces to
| partition-by-growth table spaces, with the MAXPARTITIONS attribute.
| v Converting partitioned (non-UTS) table spaces to partition-by-range table spaces,
| with the SEGSIZE attribute.
| v Changing partition boundaries for partitioned (non-UTS) table spaces.
| ALTER TABLESPACE, ALTER TABLE and ALTER INDEX statements that result in
| pending definition changes are not supported in the following cases:
| v Options that cause pending changes cannot be specified with options that take
| effect immediately
| v Options that cause pending changes cannot be specified for the following
| objects:
| – The Db2 catalog
| – System objects
| – Objects in a work file database
| v The DROP PENDING CHANGES clause cannot be specified for a catalog table
| space
| v If the table space, or any table it contains is in an incomplete state, you cannot
| specify options that cause pending changes
| v For ALTER INDEX, if the definition of the table space or table on which the
| index is defined is not complete.
| Most immediate definition changes are restricted while pending definition changes
| exist for an object. For a list of such restrictions, see Restrictions for changes to
| objects that have pending data definition changes.
| Related concepts:
| Table space types and characteristics in Db2 for z/OS
| Related tasks:
| Altering table spaces
| Related reference:
| ALTER TABLE (Db2 SQL)
| ALTER TABLESPACE (Db2 SQL)
| Pending definition changes are data definition changes that do not take effect
| immediately. When definition changes are pending, the affected objects are
| available until it is convenient to implement the changes.
| Most pending data definition changes are supported only for universal table
| spaces, with the following exceptions:
| v Converting single-table simple or segmented (non-UTS) table spaces to
| partition-by-growth table spaces, with the MAXPARTITIONS attribute.
| v Converting partitioned (non-UTS) table spaces to partition-by-range table spaces,
| with the SEGSIZE attribute.
| v Changing partition boundaries for partitioned (non-UTS) table spaces.
| Tip: Try to run REORG at a time when the data is not heavily accessed.
| Otherwise, application outages might occur, as described in Reorganization with
| pending definition changes (Db2 Utilities).
| Procedure
| Examples
| Example: The following example provides a scenario that shows how you can use
| the ALTER TABLESPACE statement to generate pending definition changes, and
| then use the REORG TABLESPACE utility with SHRLEVEL REFERENCE to
| materialize pending definition changes at the table space level.
|
|
| Consider the following scenario:
| This statement results in one entry being inserted into the SYSPENDINGDDL
| table with OBJTYPE = 'S', for table space. This ALTER statement has not
| changed the current definition or data, so the SEGSIZE in SYSTABLESPACE is
| still 0.
| 4. Next, you issue the following ALTER statement with one pending option at the
| time of 2012-12-14-07.20.10.405008:
| ALTER INDEX USER1.IX1 BUFFERPOOL BP16K0;
| This statement results in the index being placed in AREOR state, and an entry
| is inserted into the SYSPENDINGDDL table with OBJTYPE = 'I', for index. This
| ALTER statement has not changed the current definition or data, so the buffer
| pool in SYSINDEXES still indicates BP0 for the index.
| 5. You issue another ALTER statement that is exactly the same as the previous
| one, at the time of 2012-12-20-04.10.10.605058. This statement results in another
| entry being inserted into the SYSPENDINGDDL table with OBJTYPE = 'I', for
| index.
| However, because pending definition changes exist for the table space, the
| REORG utility proceeds without materializing the pending definition changes
| for the index, and issues warning DSNU275I with RC = 4 to indicate that no
| materialization has been done on the index, because there are pending
| definition changes for the table space. After the REORG utility runs, all the
| SYSPENDINGDDL entries still exist, and the AREOR state remains the same.
| 8. Now, you run the REORG TABLESPACE utility with SHRLEVEL REFERENCE
| on the entire table space. For example:
| REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE
| The REORG utility materializes all of the pending definition changes for the
| table space and the associated index, applying the changes in the catalog and
| data. After the REORG utility runs, the AREOR state is cleared and all entries
| in the SYSPENDINGDDL table for the table space and the associated index are
| removed. The catalog and data now reflect a buffer pool of BP8K0,
| MAXPARTITIONS of 20, and SEGSIZE of 64.
| Related concepts:
| Reorganization with pending definition changes (Db2 Utilities)
| Table space types and characteristics in Db2 for z/OS
| Related reference:
| ALTER TABLE (Db2 SQL)
| ALTER TABLESPACE (Db2 SQL)
| ALTER INDEX (Db2 SQL)
| The following table lists immediate data definition changes that are restricted until
| any pending data definition changes are materialized for specific types of objects.
| Db2 issues SQLCODE -20385 for statements that cannot be processed because of
| pending data definition changes.
| For details about these restrictions and how to resolve them, see Recovering to a
| point in time before pending definition changes were materialized.
| Related concepts:
| Reorganization with pending definition changes (Db2 Utilities)
| Related tasks:
| Materializing pending definition changes
| Related reference:
| ALTER TABLE (Db2 SQL)
| ALTER TABLESPACE (Db2 SQL)
| ALTER INDEX (Db2 SQL)
| SYSPENDINGDDL catalog table (Db2 SQL)
| Related information:
| -20385 (Db2 Codes)
Procedure
Example
Example of changing the WLM environment: The following example changes the
stored procedure SYSPROC.MYPROC to run in the WLM environment PARTSEC:
ALTER PROCEDURE SYSPROC.MYPROC
WLM ENVIRONMENT PARTSEC;
Related tasks:
Implementing Db2 stored procedures
Related reference:
WLM_REFRESH stored procedure (Db2 SQL)
ALTER PROCEDURE (external) (Db2 SQL)
ALTER PROCEDURE (SQL - external) (Db2 SQL)
ALTER PROCEDURE (SQL - native) (Db2 SQL)
Procedure
Results
Example
Example 1: In the following example, two functions named CENTER exist in the
SMITH schema. The first function has two input parameters with INTEGER and
FLOAT data types, respectively. The specific name for the first function is FOCUS1.
The second function has three parameters with CHAR(25), DEC(5,2), and
INTEGER data types.
Using the specific name to identify the function, change the WLM environment in
which the first function runs from WLMENVNAME1 to WLMENVNAME2:
ALTER SPECIFIC FUNCTION SMITH.FOCUS1
WLM ENVIRONMENT WLMENVNAME2;
Example 2: The following example changes the second function when any
arguments are null:
ALTER FUNCTION SMITH.CENTER (CHAR(25), DEC(5,2), INTEGER)
RETURNS ON NULL CALL;
Procedure
Determine the restrictions on the XML object that you want to change. The
following table provides information about the properties that you can or cannot
change for a particular XML object.
Option Description
XML table space
You can alter the following properties:
| v BUFFERPOOL (16 KB buffer pools only)
| v COMPRESS
| v PRIQTY
| v SECQTY
| v MAXROWS
| v GBPCACHE
| v USING STOGROUP
| v ERASE
| v LOCKSIZE (The only possible values are
| XML and TABLESPACE.)
| v SEGSIZE
| v DSSIZE
| v MAXPARTITIONS
Related tasks:
Adding XML columns
To concentrate on Db2-related issues, this procedure assumes that the catalog alias
resides in the same user catalog as the one that is currently used. If the new
catalog alias resides in a different user catalog, see DFSMS Access Method Services
Commands for information about planning such a move.
If the data sets are managed by the Storage Management Subsystem (SMS), make
sure that automatic class selection routines are in place for the new data set name.
You cannot change the high-level qualifier for Db2 data sets by using the Db2
installation or migration update process. You must use other methods to change
this qualifier for both system data sets and user data sets.
Changing the high-level qualifier for Db2 data sets is a complex task. You should
have experience with both Db2 and managing user catalogs.
Related concepts:
Moving Db2 data
Related information:
DFSMS Access Method Services Commands
Procedure
Procedure
5. Run the print log map utility (DSNJU004) to identify the current active log data
set and the last checkpoint RBA.
6. Run DSN1LOGP with the SUMMARY (YES) option, using the last checkpoint
RBA from the output of the print log map utility you ran in the previous step.
The report headed DSN1157I RESTART SUMMARY identifies active units of
recovery or pending writes. If either situation exists, do not attempt to
continue. Start Db2 with ACCESS(MAINT), use the necessary commands to
correct the problem, and repeat steps 4 through 6 until all activity is complete.
Access method services does not allow ALTER where the new name does not
match the existing catalog structure for an SMS-managed VSAM data set. If the
data set is not managed by SMS, the rename succeeds, but Db2 cannot allocate it.
Db2 table spaces are defined as linear data sets with DSNDBC as the second node
of the name for the cluster and DSNDBD for the data component. The examples
shown here assume the normal defaults for Db2 and VSAM data set names. Use
access method services statements with a generic name (*) to simplify the process.
Access method services allows only one generic name per data set name string.
Procedure
If these catalog entries or data sets will not be available in the future, copy all the
table spaces in the Db2 subsystem to establish a new recovery point. You can
optionally delete the entries from the BSDS. If you do not delete the entries, they
will gradually be replaced by newer entries.
Procedure
Procedure
To start Db2 with the new xxxxMSTR cataloged procedure and load module:
1. Issue a START DB2 command with the module name as shown in the following
example.
-START DB2 PARM(new_name)
2. Optional: If you stopped DSNDB01 or DSNDB06 in Stopping Db2 when no
activity is outstanding, you must explicitly start them in this step.
You can change the databases in the following list that apply to your environment:
v DSNDB07 (work file database)
v DSNDB04 (system default database)
v DSNDDF (communications database)
v DSNRLST (resource limit facility database)
v DSNRGFDB (the database for data definition control)
v Any other application databases that use the old high-level qualifier
Important: Table spaces and indexes that span more than one data set require
special procedures. Partitioned table spaces can have different partitions allocated
to different Db2 storage groups. Nonpartitioned table spaces or indexes only have
the additional data sets to rename (those with the lowest level name of A002, A003,
and so on).
The method that you use depends on whether you have a new installation or a migrated
installation of Db2 for z/OS.
You can change the high-level qualifier for your work database if you have a new
installation of Db2 for z/OS.
Procedure
You can change the high-level qualifier for your work database if you have a
migrated installation of Db2 for z/OS.
Procedure
What to do next
You can rename the data sets while Db2 is down. These steps are included here
because the names must be generated for each database, table space, and index
space that is to change.
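Each piece of a table space or index space has its own cluster (DSNDBC) and data (DSNDBD) component names, as described earlier. The following sketch generates those names under the default naming convention; the function and parameter names are illustrative only, not part of Db2.

```python
def db2_dataset_names(catalog, dbname, spacename, instance="I", pieces=1):
    """Generate VSAM cluster and data component names for a Db2 linear
    data set, under the default naming convention:
        catalog.DSNDBC|DSNDBD.dbname.spacename.y0001.Annn
    where y is the instance qualifier (I or J) and Annn numbers the
    pieces of a multi-data-set table space or index space."""
    if instance not in ("I", "J"):
        raise ValueError("instance qualifier must be I or J")
    for n in range(1, pieces + 1):
        piece = "A{:03d}".format(n)
        for node in ("DSNDBC", "DSNDBD"):
            yield ".".join([catalog, node, dbname, spacename,
                            instance + "0001", piece])
```

For a table space that spans several data sets, a larger pieces value yields the additional A002, A003, and later names that the earlier "Important" note says must also be renamed.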
Procedure
Important: Before copying any Db2 data, resolve any data that is in an
inconsistent state. Use the DISPLAY DATABASE command to determine whether any
inconsistent state exists.
Although Db2 data sets are created using VSAM access method services, they are
specially formatted for Db2 and cannot be processed by services that use VSAM
record processing. They can be processed by VSAM utilities that use
control-interval (CI) processing and, if they are linear data sets (LDSs), also by
utilities that recognize the LDS type.
Furthermore, copying the data might not be enough. Some operations require
copying Db2 object definitions. And when copying from one subsystem to another,
you must consider internal values that appear in the Db2 catalog and the log, for
example, the Db2 object identifiers (OBIDs) and log relative byte addresses (RBAs).
You might also want to use the following tools to move Db2 data:
v The Db2 DataPropagator is a licensed program that can extract data from Db2
tables, DL/I databases, VSAM files, and sequential files.
v DFSMS, which contains the following functional components:
– Data Set Services (DFSMSdss)
Use DFSMSdss to copy data between disk devices. You can use online panels
to control this, through the Interactive Storage Management Facility (ISMF)
that is available with DFSMS.
– Data Facility Product (DFSMSdfp)
This is a prerequisite for Db2. You can use access method services EXPORT
and IMPORT commands with Db2 data sets when control interval processing
(CIMODE) is used.
– Hierarchical Storage Manager (DFSMShsm)
With the MIGRATE, HMIGRATE, or HRECALL commands, which can specify
specific data set names, you can move data sets from one disk device type to
another within the same Db2 subsystem. Do not migrate the Db2 directory,
Db2 catalog, and the work file database (DSNDB07). Do not migrate any data
sets that are in use frequently, such as the bootstrap data set and the active
log data sets.
The following table shows which tools are applicable to specific operations.
Table 35. Tools applicable to data-moving operations

Tool                    Moving a data set   Copying a database   Copying an entire subsystem
REORG and LOAD          Yes                 Yes                  No
UNLOAD                  Yes                 No                   No
COPY and RECOVER        Yes                 No                   No
DSNTIAUL                Yes                 Yes                  No
DSN1COPY                Yes                 Yes                  No
DataRefresher™ or DXT   Yes                 Yes                  No
DFSMSdss                Yes                 No                   Yes
DFSMSdfp                Yes                 No                   Yes
DFSMShsm                Yes                 No                   No
Some of the listed tools rebuild the table space and index space data sets, and they
therefore generally require longer to execute than the tools that merely copy them.
The tools that rebuild are REORG and LOAD, RECOVER and REBUILD,
DSNTIAUL, and DataRefresher. The tools that merely copy data sets are
DSN1COPY, DFSMSdss, DFSMSdfp EXPORT and IMPORT, and DFSMShsm.
DSN1COPY is fairly efficient in use, but somewhat complex to set up. It requires a
separate job step to allocate the target data sets, one job step for each data set to
copy the data, and a step to delete or rename the source data sets. DFSMSdss,
DFSMSdfp, and DFSMShsm all simplify the job setup significantly.
You can move data within Db2 in several ways: copying a database, copying a Db2
subsystem, or by moving data sets within a particular Db2 subsystem.
Copying your relational database involves not only copying data, but also finding
or generating, and executing, SQL statements to create storage groups, databases,
table spaces, tables, indexes, views, synonyms, and aliases.
You can copy a database by using the DSN1COPY utility. As with the other
operations, DSN1COPY is likely to execute faster than the other applicable tools. It
copies directly from one data set to another, while the other tools extract input for
LOAD, which then loads table spaces and builds indexes. But again, DSN1COPY is
more difficult to set up. In particular, you must know the internal Db2 object
identifiers, which other tools translate automatically.
Although you can have two Db2 subsystems on the same z/OS system, one cannot
be a copy of the other.
Only two of the tools listed are applicable: DFSMSdss DUMP and RESTORE, and
DFSMSdfp EXPORT and IMPORT.
Related concepts:
Moving a Db2 data set
Related tasks:
Changing the high-level qualifier for Db2 data sets
Related reference:
DSN1COPY (Db2 Utilities)
Both the Db2 utilities and the non-Db2 tools can be used while Db2 is running, but
the space to be moved should be stopped to prevent users from accessing it.
If you use storage groups, then you can change the storage group definition to
include the new volumes.
The following procedures differ mainly in that the first procedure assumes that
you do not want to reorganize or recover the data. Generally, this means that the
first procedure is faster. In all cases, make sure that there is enough space on the
target volume to accommodate the data set.
Procedure
Related reference:
DSN1COPY (Db2 Utilities)
Procedure
To create a new storage group that uses the correct volumes and the new alias:
1. Execute the CREATE STOGROUP SQL statement to create the new storage
group.
For example:
CREATE STOGROUP stogroup-name
VOLUMES (VOL1,VOL2)
VCAT (newcat);
2. Issue the STOP DATABASE command on the database that contains the table
spaces or index spaces whose data sets you plan to move, to prevent access to
those data sets.
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to
use the high-level qualifier for the new storage group. Also, be sure to specify
the instance qualifier of your data set, y, which can be either I or J. If you have
run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE on any
table spaces or index spaces, the fifth-level qualifier might be J0001.
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 -
NEWNAME newcat.DSNDBC.dbname.*.y0001.A001
ALTER oldcat.DSNDBD.dbname.*.y0001.A001 -
NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
5. Issue the START DATABASE command to start the database for utility
processing only.
6. Run the REORG utility or the RECOVER utility on the table space or index
space, or run the REBUILD utility on the index space.
7. Issue the START DATABASE command to start the database for full processing.
Before you can issue commands, you must have the required authorities
and privileges. For descriptions of the authorities and privileges that are required
for particular commands, see the “Authorities” sections in the topics for each
command under About Db2 and related commands (Db2 Commands).
For more information about specific privileges and authorities, see Privileges and
authorities (Managing Security).
Introductory concepts
Commands for controlling Db2 and related facilities (Introduction to Db2 for
z/OS)
Db2 attachment facilities (Introduction to Db2 for z/OS)
You can control most aspects of the operational environment by using the DSN
command of TSO and its subcommands and Db2 commands. However, you might
also use any of the following types of commands to control connections from
various attachment facilities, the z/OS internal resource lock manager (IRLM),
and the administrative task scheduler:
v The TSO command DSN and its subcommands
v Db2 commands
v CICS attachment facility commands
v IMS commands
v Administrative task scheduler commands
v z/OS IRLM commands
v TSO CLISTs
For more information about the different types of commands that you can use to
control Db2 operations, see Command types and environments in Db2 (Db2
Commands).
Within the z/OS environment, you can issue most types of commands from
different interactive contexts, including the following consoles and terminals:
v z/OS consoles
v TSO terminals, by any of the following methods:
– Issuing the DSN command from the TSO READY prompt
– Entering commands in the DB2 Commands panel in DB2I
v IMS terminals
You might notice the similarities between the types of commands and the types of
consoles and terminals. However, do not confuse the types of commands with the
types of consoles and terminals. Although they are related, you can issue many of
the different command types, and receive output messages, from many of the
different consoles or terminals.
Notes:
1. Does not apply to START DB2. Commands that are issued from IMS must have
the prefix /SSR. Commands that are issued from CICS must have the prefix
DSNC.
2. This applies when using outstanding WTOR.
3. The “Attachment facility unsolicited output” does not include “Db2 unsolicited
output.”
4. Use the z/OS command MODIFY jobname CICS command. The z/OS console
must already be defined as a CICS terminal.
5. Specify the output destination for the unsolicited output of the CICS
attachment facility in the RDO.
You can issue many commands from the background within batch programs, such
as the following types of programs:
v z/OS application programs
v Authorized CICS programs
v IMS programs
Related tasks:
Submitting work to Db2
Related reference:
Executing the terminal monitor program (TSO/E Customization)
Writing JCL for command execution (TSO/E Customization)
Related information:
About Db2 and related commands (Db2 Commands)
More than one Db2 subsystem can run under z/OS. From the console, you
must prefix a Db2 command with the characters that identify the subsystem to
which the command is directed. This 1- to 8-character prefix is called the
command prefix.
Procedure
Specify the command prefix for the Db2 subsystem before the command. For
example, the following command starts the Db2 subsystem that uses -DSN1 as
the command prefix:
-DSN1 START DB2
Related reference:
COMMAND PREFIX field (Db2 Installation and Migration)
Related information:
About Db2 and related commands (Db2 Commands)
Introductory concepts
Common ways to interact with Db2 for z/OS (Introduction to Db2 for z/OS)
TSO attachment facility (Introduction to Db2 for z/OS)
Procedure
To issue commands from a TSO terminal, take one of the following actions:
v Issue a DSN command to start an explicit DSN session. The DSN command can
be issued in the foreground or background, when running under the TSO
terminal monitor program (TMP).
Examples:
Invoking a DSN session with five retries at 30-second intervals
For example, the following TSO command invokes a DSN session,
requesting five retries at 30-second intervals:
DSN SYSTEM (DB2) RETRY (5)
Displaying information about threads from a TSO session
The TSO terminal displays:
READY
You enter:
DSN SYSTEM (subsystem-name)
You enter:
-DISPLAY THREAD
Figure 18. The ISPF panel for the DB2I Primary Option Menu
When you complete operations by using the DB2I panels, DB2I invokes CLISTs,
which start the DSN session and invoke appropriate subcommands.
Related concepts:
DSN command processor (Db2 Application programming and SQL)
Related tasks:
Running TSO application programs
Controlling TSO connections
Related reference:
DSN (TSO) (Db2 Commands)
The DB2I primary option menu (Introduction to Db2 for z/OS)
Related information:
About Db2 and related commands (Db2 Commands)
Procedure
Use the DSNC transaction. CICS can attach to only one Db2 subsystem at a time,
so it does not use the Db2 command prefix. Instead, each command that is entered
through the CICS attachment facility must be preceded by a hyphen (-). The CICS
attachment facility routes the commands to the connected Db2 subsystem and
obtains the command responses.
Example
Related information:
Issuing commands to Db2 using the DSNC transaction (CICS Transaction
Server for z/OS)
An IMS subsystem can attach to more than one Db2 subsystem, so a prefix is
needed. Commands that are directed from IMS to Db2 must include a special
character that identifies which subsystem to direct the command to. That
character is called the command recognition character (CRC). You specify it when
you define Db2 to IMS, in the subsystem member entry in IMS.PROCLIB.
Tip: You can use the same character for the CRC and the command prefix for a
single Db2 subsystem. However, to do that, you must specify a one-character
command prefix. Otherwise you cannot match these identifiers.
Most examples in this information assume that both the command prefix and the
CRC are the hyphen (-). However, if IMS can attach to more than one Db2
subsystem, you must issue your commands by using the appropriate CRC. In the
following example, the CRC is a question mark character:
Example
Related reference:
COMMAND PREFIX field (Db2 Installation and Migration)
Related information:
IMS commands
APF-authorized programs
As with IMS, Db2 commands (including START DB2) can be passed from
an APF-authorized program to multiple Db2 subsystems by the MGCRE
(SVC 34) z/OS service. Thus, the value of the command prefix identifies
the particular subsystem to which the command is directed. The subsystem
command prefix is specified, as in IMS, when Db2 is installed (in the
SYS1.PARMLIB member IEFSSNxx). Db2 supports the z/OS WTO
command and response token (CART) to route individual Db2 command
response messages to the invoking application program. Use of the CART
is necessary if multiple Db2 commands are issued from a single application
program.
For example, to issue DISPLAY THREAD to the default Db2 subsystem
from an APF-authorized program that runs as a batch job, use the
following code:
MODESUPV DS    0H
         MODESET MODE=SUP,KEY=ZERO
SVC34    SR    0,0
         MGCRE CMDPARM
         EJECT
CMDPARM  DS    0F
CMDFLG1  DC    X'00'
CMDLENG  DC    AL1(CMDEND-CMDPARM)
CMDFLG2  DC    X'0000'
CMDDATA  DC    C'-DISPLAY THREAD'
CMDEND   DS    0C
Related concepts:
Submitting commands from monitor programs (Db2 Performance)
Related tasks:
Submitting work to Db2
For APF-authorized programs that run in batch jobs, command responses are
returned to the master console and to the system log if hardcopy logging is
available. Hardcopy logging is controlled by the z/OS system command VARY.
Related reference:
z/OS VARY command (MVS System Commands)
-DISPLAY THREAD (Db2) (Db2 Commands)
-START DATABASE (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
Some Db2 messages that are sent to the z/OS console are marked as critical with
the WTO descriptor code (11). This code signifies “critical eventual action
requested” by Db2. Preceded by an at sign (@) or an asterisk (*), critical Db2
messages remain on the screen until they are specifically deleted. This prevents the
messages from being missed by the operator, who is required to take a specific
action.
Related concepts:
How to interpret message numbers (Db2 Messages)
Related information:
Troubleshooting for CICS Db2 (CICS Db2 Guide)
Before Db2 is stopped, the system takes a shutdown checkpoint. This checkpoint
and the recovery log give Db2 the information it needs to restart.
You can limit access to data at startup and startup after an abend.
Starting Db2
You must start Db2 to make it active and available to TSO applications and to
other subsystems, such as IMS™ and CICS®.
Procedure
Issue the START DB2 command by using one of the following methods:
v Issue the START DB2 command from a z/OS console that is authorized to issue
system control commands (z/OS command group SYS).
The command must be entered from the authorized console and cannot be
submitted through JES or TSO.
You cannot start Db2 by a JES batch job or a z/OS START command. Such an
attempt is likely to start an address space for Db2 that abends (most likely
with reason code X'00E8000F').
v Start Db2 from an APF-authorized program by passing a START DB2 command
to the MGCRE (SVC 34) z/OS service.
Related tasks:
Installation step 14: Start the Db2 subsystem (Db2 Installation and Migration)
Migration step 17: Start Db2 11 (Db2 Installation and Migration)
Starting the Db2 subsystem (Db2 Installation and Migration)
Related reference:
Messages at start
Db2 issues a variety of messages when you start Db2. The specific messages vary
based on the parameters that you specify.
If any of the nnnn values in message DSNR004I are not zero, message DSNR007I is
issued to provide the restart status table.
For example, the module contains the name of the IRLM to connect to. In addition,
it indicates whether the distributed data facility (DDF) is available and, if it is,
whether it should be automatically started when Db2 is started. You can specify
PARM (module-name) on the START DB2 command to provide a parameter module
other than the one that is specified at installation.
The START DB2 command starts the system services address space, the database
services address space, and, depending on specifications in the load module for
subsystem parameters (DSNZPARM by default), the distributed data facility
address space. Optionally, another address space, for the internal resource lock
manager (IRLM), can be started automatically.
Procedure
Issue the START DB2 command with one of the following options:
ACCESS(MAINT)
To limit access to users who have installation SYSADM or installation
SYSOPR authority.
Users with those authorities can do maintenance operations such as
recovering a database or taking image copies. To restore access to all users,
stop Db2 and then restart it, either omitting the ACCESS keyword or
specifying ACCESS(*).
ACCESS(*)
To allow all authorized users to connect to Db2.
Procedure
Cancel the system services address space and the distributed data facility address
space from the console.
After Db2 stops, check the start procedures of all three Db2 address spaces for
correct JCL syntax.
To accomplish this check, compare the expanded JCL in the SYSOUT output with
the correct JCL provided in MVS JCL Reference. Then, take the member name of
the erroneous JCL procedure, which is also provided in the SYSOUT data set, to
the system programmer who maintains your procedure libraries. After finding out
which PROCLIB contains the JCL in question, locate the procedure and correct it.
After the STOP DB2 command, Db2 finishes its work in an orderly way and takes
a shutdown checkpoint before stopping. When Db2 is restarted, it uses information
from the system checkpoint and recovery log to determine the system status at
shutdown.
When a power failure occurs, Db2 abends without being able to finish its work or
take a shutdown checkpoint. When Db2 is restarted after an abend, it refreshes its
knowledge of its status at termination by using information on the recovery log.
Db2 then notifies the operator of the status of various units of recovery.
You can indicate that you want Db2 to postpone some of the backout work that is
traditionally performed during system restart. You can delay the backout of
long-running units of recovery by using installation options LIMIT BACKOUT and
BACKOUT DURATION on panel DSNTIPL.
Normally, the restart process resolves all inconsistent states. In some cases, you
have to take specific steps to resolve inconsistencies. There are steps you can take
to prepare for those actions. For example, you can limit the list of table spaces that
are recovered automatically when Db2 is started.
Related tasks:
Restarting Db2 after termination
Related reference:
DSNTIPL: Active log data set parameters (Db2 Installation and Migration)
Stopping Db2
Before Db2 stops, all Db2-related write to operator with reply (WTOR) messages
must receive replies.
Procedure
If the STOP DB2 command is not issued from a z/OS console, messages DSNY002I
and DSN9022I are not sent to the IMS or CICS master terminal operator. They are
routed only to the z/OS console that issued the START DB2 command.
What to do next
Before restarting Db2, the following message must also be returned to the z/OS
console that is authorized to enter the START DB2 command:
DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
Related concepts:
Normal termination
Related reference:
-START DB2 (Db2) (Db2 Commands)
-STOP DB2 (Db2) (Db2 Commands)
Application programs must meet certain conditions to embed SQL statements and
to authorize the use of Db2 resources and data. These conditions vary based on the
environment of the application program.
All application programming default values, including the subsystem name that
the programming attachment facilities use, are in the DSNHDECP load module.
Make sure that your JCL specifies the proper set of program libraries.
Related tasks:
Controlling Db2 operations by using commands
Related reference:
Executing the terminal monitor program (TSO/E Customization)
Introductory concepts
Common ways to interact with Db2 for z/OS (Introduction to Db2 for z/OS)
Procedure
Results
The terminal monitor program (TMP) attaches the Db2-supplied DSN command
processor, which in turn attaches the application program.
Example
Db2 checks the sources in the order that they are listed. If the first source is
unavailable, Db2 checks the second source, and so on.
1. RACF USER parameter supplied at logon
2. TSO logon user ID
3. Site-chosen default authorization ID
4. IBM-supplied default authorization ID
You can modify either the RACF USER parameter or the TSO user ID by a locally
defined authorization exit routine.
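The fallback order above amounts to a first-available search over the four sources. The following sketch models that order; the function name and the placeholder default ID are illustrative, not actual Db2 internals.

```python
def resolve_auth_id(racf_user=None, tso_logon_id=None,
                    site_default=None, ibm_default="DEFAULTID"):
    """Return the first available authorization ID, checking the sources
    in the documented order. DEFAULTID is a stand-in for the
    IBM-supplied default; the real value depends on the installation."""
    for candidate in (racf_user, tso_logon_id, site_default, ibm_default):
        if candidate is not None:
            return candidate
```

An authorization exit routine that modifies the RACF USER parameter or the TSO user ID would, in this model, change the first two candidates before the search runs.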
The program must be link-edited with the IMS language interface module
(DFSLI000). It can write to and read from other database management systems
using the distributed data facility, in addition to accessing DL/I and Fast Path
resources.
Db2 checks whether the authorization ID that IMS provides is valid. For
message-driven regions, IMS uses the SIGNON-ID or LTERM as the authorization
ID. For non-message-driven regions and batch regions, IMS uses the ASXBUSER
field (if RACF or another security package is active). The ASXBUSER field is
defined by z/OS as seven characters. If the ASXBUSER field contains binary zeros
or blanks (which indicates that RACF or another security package is not active),
IMS uses the PSB name instead.
You can run batch DL/I jobs to access Db2 resources; Db2-DL/I batch support uses
the IMS attachment facility.
Related tasks:
Loading and running a batch program (Db2 Application programming and
SQL)
Related reference:
DSNAIMS stored procedure (Db2 SQL)
DSNAIMS2 stored procedure (Db2 SQL)
Related information:
Application programming design
CICS transactions that issue SQL statements must be link-edited with the CICS
attachment facility language interface module, DSNCLI, and the CICS command
language interface module. CICS application programs can issue SQL, DL/I, or
CICS commands. After CICS connects to Db2, any authorized CICS transaction can
issue SQL requests that can write to and read from multiple Db2 instances using
the distributed data facility. The application programs run as CICS applications.
For batch work that runs in the TSO background, the input stream can invoke TSO
command processors, particularly the DSN command processor for Db2. This input
stream can include DSN subcommands, such as RUN.
Example
In this example:
v IKJEFT01 identifies an entry point for TSO TMP invocation. Alternative entry
points that are defined by TSO are also available to provide additional return
code and abend termination processing options. These options permit the user to
select the actions to be taken by the TMP on completion of command or
program execution.
Because invocation of the TSO TMP using the IKJEFT01 entry point might not be
suitable for all user environments, refer to the TSO publications to determine
which TMP entry point provides the termination processing options that are best
suited to your batch execution environment.
v USER=SYSOPR identifies the user ID (SYSOPR in this case) for authorization
checks.
v DYNAMNBR=20 indicates the maximum number of data sets (20 in this case)
that can be dynamically allocated concurrently.
Procedure
Either link-edit or make available a load module known as the call attachment
language interface, or DSNALI. Alternatively, you can link-edit with the Universal
Language Interface program (DSNULI).
When the language interface is available, your program can use CAF to connect to
Db2 in the following ways:
v DSNALI only: Implicitly, by including SQL statements or IFI calls in your
program just as you would any program.
v DSNALI or DSNULI: Explicitly, by writing CALL DSNALI or CALL DSNULI
statements.
Related concepts:
Call attachment facility (Db2 Application programming and SQL)
Before you can run an RRSAF application, z/OS RRS must be started. RRS runs in
its own address space and can be started and stopped independently of Db2.
Procedure
To use RRSAF:
Either link-edit or make available a load module known as the RRSAF language
interface, or DSNRLI. Alternatively, you can link-edit with the Universal Language
Interface program (DSNULI).
When the language interface is available, your program can use RRSAF to connect
to Db2 in the following ways:
v DSNRLI only: Implicitly, by including SQL statements or IFI calls in your
program just as you would any program.
v DSNRLI or DSNULI: Explicitly, by using CALL DSNRLI or CALL DSNULI
statements to invoke RRSAF functions. Those functions establish a connection
between Db2 and RRS and allocate Db2 resources.
Related concepts:
Resource Recovery Services attachment facility (Db2 Application programming
and SQL)
Related tasks:
Controlling RRS connections
You manage the task list of the administrative task scheduler through Db2 stored
procedures that add and remove tasks. You can monitor the task list and the status
of executed tasks through user-defined functions that are provided as part of Db2.
When the administrative task scheduler detects that a task is due, it drives the
task execution according to the work described in the task definition, with no
user interaction. The scheduler delegates the execution of the task to one of
its execution threads, which executes the stored procedure or the JCL job that
the work definition of the task describes. The execution thread waits for the
end of the execution and notifies the administrative task scheduler, which
stores the execution status of the task in its redundant task lists, together
with the task itself.
Adding a task
Use the stored procedure ADMIN_TASK_ADD to define new scheduled tasks. The
parameters that you use when you call the stored procedure define the schedule
and the work for each task.
At the same time, the administrative task scheduler analyzes the task to schedule
its next execution.
Related reference:
ADMIN_TASK_ADD stored procedure (Db2 SQL)
Five parameters define the scheduling behavior of the task, in one of four ways:
v interval: elapsed time between regular executions
v point-in-time: specific times for execution
v trigger-task-name alone: specific task to trigger execution
v trigger-task-name with trigger-task-cond and trigger-task-code: specific task with
required result to trigger execution
Only one of these definitions can be specified for any single task. The other
parameters must be null.
Table 37. Relationship of null and non-null values for scheduling parameters

Parameter specified                          Required null parameters
interval                                     point-in-time, trigger-task-name,
                                             trigger-task-cond, trigger-task-code
point-in-time                                interval, trigger-task-name,
                                             trigger-task-cond, trigger-task-code
trigger-task-name alone                      interval, point-in-time,
                                             trigger-task-cond, trigger-task-code
trigger-task-name with trigger-task-cond     interval, point-in-time
  and trigger-task-code
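The mutual-exclusion rules in Table 37 can be expressed as a small validation routine. This is a sketch of the rules only, not code from Db2; the parameter names mirror the stored procedure parameters, with None standing in for SQL null.

```python
def validate_schedule(interval=None, point_in_time=None,
                      trigger_task_name=None,
                      trigger_task_cond=None, trigger_task_code=None):
    """Raise ValueError unless exactly one scheduling style is used
    and all other scheduling parameters are left null (None)."""
    if (trigger_task_cond is None) != (trigger_task_code is None):
        raise ValueError("trigger-task-cond and trigger-task-code "
                         "must be specified together")
    if trigger_task_cond is not None and trigger_task_name is None:
        raise ValueError("trigger-task-cond and trigger-task-code "
                         "require trigger-task-name")
    styles = [interval, point_in_time, trigger_task_name]
    if sum(s is not None for s in styles) != 1:
        raise ValueError("specify exactly one of interval, "
                         "point-in-time, or trigger-task-name")
```

The fourth style in Table 37 (trigger-task-name with trigger-task-cond and trigger-task-code) passes because the condition and code merely refine the trigger-task-name style.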
You can restrict scheduled executions either by defining a window of time during
which execution is permitted or by specifying how many times a task can execute.
Three parameters control restrictions:
v begin-timestamp: earliest permitted execution time
v end-timestamp: latest permitted execution time
v max-invocations: maximum number of executions
For repetitive or triggered tasks, the number of executions can be limited using
the max-invocations parameter. In this case, the task executes no more than the
number of times indicated by the parameter, even if the schedule and the window
of time allow more executions.
Procedure
To define a task that executes only one time, set max-invocations to 1.
Procedure
Specify the associated Db2 subsystem ID in the db2-ssid parameter when you
schedule the task.
The cron format has five time and date fields separated by at least one blank.
There can be no blank within a field value. Scheduled tasks are executed when the
minute, hour, and month of year fields match the current time and date, and at
least one of the two day fields (day of month, or day of week) match the current
date.
Ranges of numbers are allowed. A range is two numbers separated by a hyphen,
and the specified range is inclusive.
Example: The range 8-11 for an hour entry specifies execution at hours 8, 9, 10 and
11.
Examples:
1,2,5,9
0-4,8-12
Unrestricted range
A field can contain an asterisk (*), which represents all possible values in the field.
The day of a command's execution can be specified by two fields: day of month
and day of week. If both fields are restricted by the use of a value other than the
asterisk, the command runs when either field matches the current date.
Example: The value 30 4 1,15 * 5 causes a command to run at 4:30 AM on the 1st
and 15th of each month, plus every Friday.
Step values
Step values can be used in conjunction with ranges. The syntax range/step defines
the range and an execution interval.
If you specify first-last/step, execution takes place at first, then at all successive
values that are distant from first by step, until last.
Example: To specify command execution every other hour, use 0-23/2. This
expression is equivalent to the value 0,2,4,6,8,10,12,14,16,18,20,22.
Example: As an alternative to 0-23/2 for execution every other hour, use */2.
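The field rules above (lists, ranges, steps, the asterisk, and the OR-ing of the two day fields) can be sketched as a matcher. This is an illustrative Python sketch of standard five-field cron semantics, not scheduler code; the function names are assumptions:

```python
def field_matches(spec, value, minimum, maximum):
    """Check whether one cron field matches a value.
    Supports '*', lists (1,2,5,9), ranges (8-11), and steps (0-23/2, */2)."""
    for part in spec.split(","):
        step = 1
        if "/" in part:
            part, step_text = part.split("/")
            step = int(step_text)
        if part == "*":
            first, last = minimum, maximum
        elif "-" in part:
            first_text, last_text = part.split("-")
            first, last = int(first_text), int(last_text)
        else:
            first = last = int(part)
        if first <= value <= last and (value - first) % step == 0:
            return True
    return False


def cron_matches(expr, minute, hour, dom, month, dow):
    """Five-field match. Day of month and day of week are OR-ed when
    both are restricted (neither is '*'), as described above."""
    f_min, f_hour, f_dom, f_mon, f_dow = expr.split()
    if f_dom != "*" and f_dow != "*":
        day_ok = (field_matches(f_dom, dom, 1, 31) or
                  field_matches(f_dow, dow, 0, 7))
    else:
        day_ok = (field_matches(f_dom, dom, 1, 31) and
                  field_matches(f_dow, dow, 0, 7))
    return (field_matches(f_min, minute, 0, 59) and
            field_matches(f_hour, hour, 0, 23) and
            field_matches(f_mon, month, 1, 12) and
            day_ok)
```

With this sketch, `cron_matches("30 4 1,15 * 5", 30, 4, 10, 6, 5)` is true (the 10th is not the 1st or 15th, but the day of week is Friday), matching the example above.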
Procedure
Connect to the Db2 subsystem with sufficient authorization to call the function
ADMIN_TASK_LIST. The function contacts the administrative task scheduler to
update the Db2 task list in the table SYSIBM.ADMIN_TASKS, if necessary, and
then reads the tasks from the Db2 task list. The parameters that were used to
create the task are column values of the returned table. The table also includes the
authorization ID of the task creator, in the CREATOR column, and the time that
the task was created, in the LAST_MODIFIED column.
Related reference:
ADMIN_TASK_LIST (Db2 SQL)
Before a task is first scheduled, all columns of its execution status contain null
values, as returned by the ADMIN_TASK_STATUS() table function. Afterwards, at
least the TASK_NAME, USERID, DB2_SSID, STATUS, NUM_INVOCATIONS and
START_TIMESTAMP columns contain non-null values. This information indicates
when and under which user ID the task status last changed and the number of
times this task was executed. You can interpret the rest of the execution status
according to the different values of the STATUS column.
Procedure
Results
If task execution has never been attempted, because the execution criteria have not
been met, the STATUS column contains a null value.
If the administrative task scheduler was not able to start executing the task, the
STATUS column contains NOTRUN. The START_TIMESTAMP and
END_TIMESTAMP columns are the same, and the MSG column indicates why the
task execution could not be started. All JCL job execution status columns are
NULL, but the Db2 execution status columns contain values if the reason for the
failure is related to Db2. (For example, a Db2 connection could not be established.)
If the administrative task scheduler started executing the task but the task has not
yet completed, the STATUS column contains RUNNING. All other execution status
columns contain null values.
If the task execution has completed, the STATUS column contains COMPLETED.
The START_TIMESTAMP and END_TIMESTAMP columns contain the actual start
and end times. The MSG column might contain informational or error messages.
The Db2 and JCL columns are filled with values when they apply.
If the administrative task scheduler was stopped during the execution of a task,
the status remains RUNNING until the administrative task scheduler is restarted.
When the administrative task scheduler starts again, the status is changed to
UNKNOWN, because the administrative task scheduler cannot determine if the
task was completed.
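The STATUS values described above can be summarized in a small lookup sketch. This is a hypothetical Python helper for reference, not part of the Db2 interface:

```python
def describe_status(status):
    """Map the STATUS column of ADMIN_TASK_STATUS() to the meaning
    described above. A None status means execution was never attempted."""
    meanings = {
        None: "Execution criteria not yet met; task never attempted.",
        "NOTRUN": "Scheduler could not start the task; see the MSG column.",
        "RUNNING": "Task started but has not yet completed.",
        "COMPLETED": "Task finished; timestamps and MSG hold the results.",
        "UNKNOWN": "Scheduler was restarted mid-execution; outcome unknown.",
    }
    return meanings.get(status, "Unrecognized status value.")
```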
Related tasks:
Listing multiple execution statuses of scheduled tasks
Related reference:
ADMIN_TASK_STATUS (Db2 SQL)
Procedure
Tip: You can relate the task execution status to the task definition by joining
the output tables from the ADMIN_TASK_LIST and
ADMIN_TASK_STATUS(MAX_HISTORY) table functions on the TASK_NAME
column.
Related tasks:
Listing the last execution status of scheduled tasks
Related reference:
ADMIN_TASK_STATUS (Db2 SQL)
To call this user-defined table function, you must have MONITOR1 privilege.
Procedure
Procedure
Call the ADMIN_TASK_UPDATE stored procedure. If the task that you want to
update is running, the changes go into effect after the current execution finishes.
Related reference:
ADMIN_TASK_UPDATE stored procedure (Db2 SQL)
Not all tasks can be canceled as requested. Only the administrative task scheduler
that currently executes the task can cancel a JCL task or a stored procedure task.
Procedure
Even if a task has finished all of its executions and will never be executed again, it
remains in the task list until it is explicitly removed through a call to the
ADMIN_TASK_REMOVE stored procedure.
Restrictions:
Procedure
Procedure
Use one of the following commands to start the administrative task scheduler:
v To start an administrative task scheduler that is named admtproc from the
operator's console using the default tracing option, issue the MVS system
command:
start admtproc
v To start an administrative task scheduler that is named admtproc from the
operator's console with tracing enabled, issue the MVS system command:
start admtproc,trace=on
v To start an administrative task scheduler that is named admtproc from the
operator's console with tracing disabled, issue the MVS system command:
start admtproc,trace=off
Results
When the administrative task scheduler starts, message DSNA671I displays on the
console.
All administrative task schedulers of the data sharing group access this task list
once per minute to check for new tasks. The administrative task scheduler that
adds a task does not have to check the list, and can execute the task immediately.
Any other administrative task scheduler can execute a task only after finding it in
the updated task list. Any administrative task scheduler can remove a task without
waiting.
Use the MVS system command MODIFY to enable or disable tracing. When tracing
is enabled, the trace goes to SYSOUT in the job output of the started task of the
administrative task scheduler. The trace output helps IBM Support to troubleshoot
problems.
Procedure
One copy of the task list is a shared VSAM data set, by default
DSNC910.TASKLIST, where DSNC910 is the Db2 catalog prefix. The other copy is
stored in the table ADMIN_TASKS in the SYSIBM schema. Include these redundant
copies as part of your backup and recovery plan.
Tip: If Db2 is offline, message DSNA679I displays on the console. As soon as Db2
starts, the administrative task scheduler performs an autonomic recovery of the
ADMIN_TASKS table using the contents of the VSAM task list. When the recovery
is complete, message DSNA695I displays on the console to indicate that both task
lists are again available. (By default, message DSNA679I displays on the console
once per minute. You can change the frequency of this message by modifying the
ERRFREQ parameter. You can modify this parameter as part of the started task or
with a console command.)
Procedure
Symptoms
Important: The task status is overwritten as soon as the next execution of the task
starts.
Symptoms
An SQL code is returned. When SQLCODE is -443, the error message cannot be
read directly, because only a few characters are available.
Symptoms
An SQL code is returned.
Understanding the source of the error should be enough to correct the cause of the
problem. Most problems result from incorrect usage of the stored procedure or
from an invalid configuration.
Correct the underlying problem and resubmit the call to the stored procedure to
add or remove the task.
The administrative task scheduler is part of Db2 for z/OS. When properly
configured, it is available and operable with the first Db2 start. The administrative
task scheduler starts as a task on the z/OS system during Db2 startup. The
administrative task scheduler has its own address space, named after the started
task name.
Each Db2 subsystem has its own distinct administrative task scheduler connected
to it. Db2 is aware of the administrative task scheduler whose name is defined in
the subsystem parameter ADMTPROC. The administrative task scheduler is aware
of Db2 by the subsystem name that is defined in the DB2SSID parameter of the
started task.
The administrative task scheduler executes the tasks according to their defined
schedules. The status of the last execution is stored in the task lists as well, and
you can access it through the SQL interface.
The following figure shows the architecture of the administrative task scheduler.
[Figure: The Db2 subsystem (DB2AMSTR) starts the scheduler started task (DB2AADMT). The subsystem parameter ADMTPROC = DB2AADMT associates Db2 with the administrative task scheduler, and the started task parameter DB2SSID = DB2A associates the scheduler with the Db2 subsystem, which the scheduler calls with SSID = DB2A.]
Related reference:
ADMIN_TASK_ADD stored procedure (Db2 SQL)
ADMIN_TASK_REMOVE stored procedure (Db2 SQL)
When Db2 terminates, the administrative task scheduler remains active so that
scheduled JCL jobs can run. When Db2 starts again, it connects to the
administrative task scheduler.
[Figure: At Db2 start, if a scheduler with the name in ADMTPROC is already running, Db2 connects to it through an RRSAF start event; otherwise, Db2 starts the scheduler first. While connected to Db2, the scheduler executes both JCL jobs and stored procedures; after an RRSAF stop event, it executes JCL jobs only.]
If you want the administrative task scheduler to terminate when Db2 is stopped,
you can specify the STOPONDB2STOP parameter in the started task before
restarting the administrative task scheduler. This parameter has no value. You
specify this parameter by entering STOPONDB2STOP without an equal sign (=) or
a value. When you specify this parameter, the administrative task scheduler
terminates after it finishes executing the tasks that are running and after executing
the tasks that are triggered by Db2 stopping. When Db2 starts again, the
administrative task scheduler is restarted.
The two task lists are the Db2 table SYSIBM.ADMIN_TASKS and the VSAM data
set that is indicated in the data definition ADMTDD1 of the started task of the
administrative task scheduler. The administrative task scheduler maintains the
consistency between the two task lists.
The administrative task scheduler works with and updates both task lists
redundantly, and remains operable as long as at least one of the task lists is
available. Therefore, the administrative task scheduler continues working when
Db2 is offline. If one task list becomes unavailable, the administrative task
scheduler continues to update the other task list. When both task lists are
available again, the administrative task scheduler automatically synchronizes them.
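A minimal sketch of this redundant-update behavior, assuming a simplified in-memory model (the real copies are a VSAM data set and the SYSIBM.ADMIN_TASKS table, and the class and method names here are assumptions):

```python
class RedundantTaskList:
    """Sketch of the dual task-list behavior described above: updates go
    to every available copy, and a copy that becomes available again is
    resynchronized from the surviving one. Illustrative only."""

    def __init__(self):
        self.copies = {"VSAM": {}, "ADMIN_TASKS": {}}
        self.available = {"VSAM": True, "ADMIN_TASKS": True}

    def add_task(self, name, definition):
        if not any(self.available.values()):
            raise RuntimeError("no task list available; scheduler inoperable")
        # Update every copy that is currently reachable.
        for copy, up in self.available.items():
            if up:
                self.copies[copy][name] = definition

    def restore(self, copy):
        """A copy becomes available again: resynchronize it from the other."""
        other = "ADMIN_TASKS" if copy == "VSAM" else "VSAM"
        self.copies[copy] = dict(self.copies[other])
        self.available[copy] = True
```

For example, tasks added while the ADMIN_TASKS copy is offline (Db2 down) land only in the VSAM copy, and the restore step brings the two lists back into agreement.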
The following figure shows a data sharing group with two Db2 members and their
associated administrative task schedulers.
[Figure: Two Db2 members (DB2AMSTR and DB2BMSTR) each have an associated administrative task scheduler started task (DB2AADMT and DB2BADMT), each with its own security settings (DFLTUID) and a shared external task list (ADMTDD1 = prefix.TASKLIST, a VSAM data set). Tasks are added, listed, and removed through the SQL interface of stored procedures and user-defined functions.]
Tasks are not localized to an administrative task scheduler. They can be added,
removed, or executed in any of the administrative task schedulers in the data
sharing group with the same result. However, you can force the task to execute on
a given administrative task scheduler by specifying the associated Db2 subsystem
ID in the DB2SSID parameter when you schedule the task. The tasks that have no
affinity to a given Db2 subsystem are executed among all administrative task
schedulers. Their distribution cannot be predicted.
When executing a stored procedure task, the administrative task scheduler sets
task-identifying information, such as the name of the task, in special registers.
This information enables the stored procedure to relate its own outputs to the task
execution status in the administrative task scheduler. For example, the stored
procedure can save a log in a table together with the name of the task that
executed it, so that you can relate the task execution status to this log.
The following figure shows all of the security checkpoints that are associated with
using the administrative task scheduler.
[Figure: Security checkpoints include task passwords, access to the task lists, and the users granted execution rights.]
Related tasks:
Installation step 20: Set up Db2-supplied routines (Db2 Installation and
Migration)
Migration step 23: Set up Db2-supplied routines (Db2 Installation and
Migration)
Installation step 22: Set up the administrative task scheduler (Db2 Installation
and Migration)
Related reference:
z/OS UNIX System Services Planning
Users with EXECUTE rights on one of the stored procedures or user-defined table
functions of the administrative task scheduler interface are allowed to execute the
corresponding functionality: adding a scheduled task, removing a scheduled task,
or listing the scheduled tasks or their execution status. The entire interface is
configured by default with PUBLIC access rights during the installation.
Recommendations:
v Grant rights to groups or roles, rather than to individual authorization IDs.
v Restrict access to the ADMIN_TASK_ADD and ADMIN_TASK_REMOVE stored
procedures to users with a business need for their use. Access to the
user-defined table functions that list tasks and execution status can remain
unrestricted.
The authorization ID of the Db2 thread that called the stored procedure
ADMIN_TASK_ADD is passed to the administrative task scheduler and stored in
the task list with the task definition. The ADMIN_TASK_ADD stored procedure
gathers the authorities granted to this authorization ID from the subsystem
parameters and from the catalog table, and passes them over to the administrative
task scheduler. The same mechanism is used in ADMIN_TASK_REMOVE to verify
that the user is permitted to remove the task.
A task in the task list of the administrative task scheduler can be removed by the
owner of the task, or by any user that has SYSOPR, SYSCTRL, or SYSADM
privileges. The owner of a task is the CURRENT SQLID of the process at the time
the task was added to the task list.
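The removal rule above can be sketched as a simple permission check. This is an illustrative Python helper with assumed names; the real check is performed by the ADMIN_TASK_REMOVE stored procedure:

```python
# Privileges that allow removing any task, per the rule described above.
ADMIN_PRIVILEGES = {"SYSOPR", "SYSCTRL", "SYSADM"}

def may_remove_task(task_owner, user, user_privileges=()):
    """A task can be removed by its owner (the CURRENT SQLID at the time
    the task was added) or by any user holding SYSOPR, SYSCTRL, or
    SYSADM. Illustrative sketch only."""
    return user == task_owner or bool(ADMIN_PRIVILEGES & set(user_privileges))
```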
The VSAM resource (by default DSNCAT.TASKLIST, where DSNCAT is the Db2
catalog prefix) that stores the task list of the administrative task scheduler must be
protected against unauthorized access.
The first action that is taken by the administrative task scheduler when starting
task execution is to switch its security context to the context of the execution user.
The execution user can be explicitly identified by the user-ID parameter of the
ADMIN_TASK_ADD stored procedure, or can be the default user.
If the task is a stored procedure, and the stored procedure must be executed in
Db2 with the privileges and rights of the secondary authentication IDs of the
execution user, make sure that Db2 was started with a sign-on exit routine.
If the task must run under a certain authority, including an authority that is
different from the caller of the stored procedure, credentials are passed on to the
administrative task scheduler. These credentials are not stored anywhere. They are
validated by RACF to ensure that the caller of the stored procedure at the task
definition time has the right to assume the security context. Therefore, you can use
a PassTicket (encrypted single-use password) in the password parameter of the
ADMIN_TASK_ADD stored procedure. If no credentials are provided, then the
administrative task scheduler executes the tasks under its default execution user.
The administrative task scheduler generates and uses PassTickets for executing the
tasks under the corresponding authority. Each task executes after switching to the
requested security context using the user ID and the generated PassTicket.
| The started task module (DSNADMT0) of the administrative task scheduler uses
| the pthread_security_np() function to switch users. To set a thread-level security
| environment for the scheduler, follow the steps in Additional steps for enabling the
| administrative task scheduler and administrative enablement routines (Db2
| Installation and Migration).
The default execution user has no authority, except to attach to Db2 and write to
the JES reader.
Related reference:
Processing of sign-on requests (Managing Security)
The work of the administrative task scheduler is based on scheduled tasks that you
define. Each task is associated with a unique task name. Up to 9999 tasks are
supported in the administrative task scheduler at one time.
A scheduled task consists of a schedule and a work definition. The schedule tells
the administrative task scheduler when to execute the task. You define a window
of time during which execution is permitted, and either a time-based or
event-based schedule that triggers the execution of the job during this window.
The work definition specifies what to execute, either a JCL job or a stored
procedure, and the authority (user) under which to execute the task.
[Figure: The SQL interface of the administrative task scheduler in DB2AMSTR. Applications call the stored procedures ADMIN_TASK_ADD() and ADMIN_TASK_REMOVE() to add a task or remove a task by name, and select from the user-defined functions ADMIN_TASK_LIST() and ADMIN_TASK_STATUS(), which refresh the task lists before returning results.]
The minimum permitted value for the MAXTHD parameter is 1, but this value
should not be lower than the maximum number of tasks that you expect to execute
simultaneously. If there are more tasks to be executed simultaneously than there
are available sub-threads, some tasks will not start executing immediately. The
administrative task scheduler tries to find an available sub-thread within one
minute of when the task is scheduled for execution. As a result, multiple short
tasks might be serialized in the same sub-thread, provided that their total
execution time does not exceed this one-minute window.
The parameters of the started task are not positional. Place parameters in a single
string separated by blank spaces.
If the execution of a task still cannot start one minute after it should have started,
the execution is skipped, and the last execution status of this task is set to the
NOTRUN state. The following message displays on the operator's console.
DSNA678I csect-name THE NUMBER OF TASKS TO BE CONCURRENTLY
EXECUTED BY THE ADMIN SCHEDULER proc-name EXCEEDS max-threads
If you receive this message, increase the MAXTHD parameter value and restart the
administrative task scheduler.
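The MAXTHD behavior described above can be sketched as follows. This is an illustrative Python model under a simplified wait scheme (the function name and inputs are assumptions, not scheduler internals):

```python
def dispatch(task_wait_seconds, maxthd, wait_limit=60):
    """Sketch of the MAXTHD rule described above. Each entry in
    task_wait_seconds is how long a task has already waited for a free
    sub-thread. With at most `maxthd` concurrent sub-threads, a task that
    cannot get a sub-thread within one minute is skipped and its last
    execution status is set to NOTRUN. Illustrative only."""
    statuses = []
    running = 0
    for waited in task_wait_seconds:
        if running < maxthd:
            running += 1
            statuses.append("RUNNING")
        elif waited > wait_limit:
            statuses.append("NOTRUN")      # triggers message DSNA678I
        else:
            statuses.append("WAITING")
    return statuses
```

In this model, `dispatch([0, 0, 30, 70], maxthd=2)` lets two tasks run, keeps one waiting, and skips the task that has already waited more than a minute.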
Related tasks:
Installation step 22: Set up the administrative task scheduler (Db2 Installation
and Migration)
Procedure
interval
The stored procedure is to execute at the specified regular interval.
point-in-time
The stored procedure is to execute at the specified times.
trigger-task-name
The stored procedure is to execute when the specified task occurs.
trigger-task-name with trigger-task-cond and trigger-task-code
The stored procedure is to execute when the specified task and task result occur.
Optionally, you can also use one or more of the following parameters to control
when the stored procedure runs:
begin-timestamp
Earliest permitted execution time
end-timestamp
Latest permitted execution time
max-invocations
Maximum number of executions
When the specified time or event occurs for the stored procedure to run, the
administrative task scheduler calls the stored procedure in Db2.
2. Optional: After the task finishes execution, check the status by using the
ADMIN_TASK_STATUS function. This function returns a table with one row
that indicates the last execution status for each scheduled task. If the scheduled
task is a stored procedure, the JOB_ID, MAXRC, COMPLETION_TYPE,
SYSTEM_ABENDCD, and USER_ABENDCD fields contain null values. In the
case of a Db2 error, the SQLCODE, SQLSTATE, SQLERRMC, and SQLERRP
fields contain the information that Db2 returned from calling the stored
procedure.
Related tasks:
Adding a task
Related reference:
In the case of an error, the error is written into the last execution status of the task.
Otherwise, the job is submitted to the internal JES reader. Depending on the job_wait
parameter, the sub-thread either waits for the execution to complete or returns
immediately. When the sub-thread waits for completion, the last execution status
includes the complete return values provided by the JES reader. Otherwise, it
contains the JCL job ID and a success message.
v If job_wait is set to NO, the sub-thread does not wait until the job completes
execution and returns immediately after the job submission. The task execution
status is set to the submission status; the result of the job execution itself is not
available.
v If job_wait is set to YES, the sub-thread simulates a synchronous execution of the
JCL job. It waits until the job execution completes, gets the job status from the JES
reader, and fills in the last execution status of the task.
v If job_wait is set to PURGE, the sub-thread purges the job output from the JES
reader after execution. Execution is otherwise the same as for job_wait=YES.
The JCL job execution status always contains null values in the SQLCODE,
SQLSTATE, SQLERRMC, and SQLERRP fields. If the job can be submitted
successfully to the JES reader, the JOB_ID field contains the ID of the job in the JES
reader. If the job is executed asynchronously, the MAXRC, COMPLETION_TYPE,
SYSTEM_ABENDCD, and USER_ABENDCD fields also contain null values,
because the sub-thread does not wait for job completion before writing the status.
If the job was executed synchronously, those fields contain the values returned by
the JES reader.
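These field-filling rules can be sketched as follows. The field names follow the ADMIN_TASK_STATUS() columns, but the function itself and its inputs are illustrative assumptions, not scheduler code:

```python
def jcl_status(job_wait, job_id, jes_result=None):
    """Sketch of the JCL execution-status rules described above: the SQL
    fields are always null for JCL jobs, JOB_ID is filled on successful
    submission, and the JES fields (MAXRC, COMPLETION_TYPE, abend codes)
    are filled only for synchronous execution (job_wait YES or PURGE)."""
    status = {"SQLCODE": None, "SQLSTATE": None,
              "SQLERRMC": None, "SQLERRP": None,
              "JOB_ID": job_id,
              "MAXRC": None, "COMPLETION_TYPE": None,
              "SYSTEM_ABENDCD": None, "USER_ABENDCD": None}
    if job_wait in ("YES", "PURGE") and jes_result is not None:
        status.update(jes_result)   # values returned by the JES reader
    return status
```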
However, if a task has a member affinity (for example, if the db2-ssid parameter
specifies the name of a Db2 member), only the administrative task scheduler that is
associated with that Db2 member will execute the task. Therefore, if this
administrative task scheduler is unavailable, the task will not be executed.
When a task has no member affinity (if the db2-ssid parameter is a null value), the
first administrative task scheduler that is available executes the task. If the task
execution can complete on this administrative task scheduler, other administrative
task schedulers will not try executing this task. However, if the administrative task
scheduler cannot start the execution, other administrative task schedulers try to
start the task until one succeeds or all fail. An
administrative task scheduler is unavailable if its associated Db2 member is not
running, or if all of its execution threads are busy.
You cannot predict which administrative task scheduler will execute a task.
Successive executions of the same task can be done on different administrative task
schedulers.
By default, the administrative task scheduler works in the time zone that is defined
for the z/OS operating system as the local time, provided that the z/OS
environment variable TZ is not set. If the TZ environment variable is set, the
administrative task scheduler works in the time zone that is defined for this
variable. You can set the TZ environment variable either in the z/OS Language
Environment® default settings, or in the CEEOPTS definition of the started task of
the administrative task scheduler.
When you add a task, you must specify the values for the begin-timestamp,
end-timestamp, and point-in-time parameters of the stored procedure in the time
zone that the administrative task scheduler works in. When listing scheduled tasks,
the BEGIN_TIMESTAMP, END_TIMESTAMP, and POINT_IN_TIME columns are
returned with values in the time zone of the administrative task scheduler.
Likewise, when listing the status of a task, the LAST_MODIFIED,
START_TIMESTAMP, and END_TIMESTAMP columns are returned with values
in the time zone of the administrative task scheduler.
Important: You must configure all of the administrative task schedulers for a data
sharing environment in the same time zone. Otherwise, the scheduled tasks might
run at a time when they are not intended to execute.
The following example shows how to set the time zone for Germany.
//CEEOPTS DD *
ENVAR("TZ=MEZ-1MESZ,M3.5.0,M10.5.0")
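The same POSIX TZ string can be examined on a Unix system with Python's time module. This is a demonstration of the TZ format only, not of the scheduler itself; time.tzset() is not available on Windows:

```python
import os
import time

# The TZ string from the CEEOPTS example above: standard zone MEZ is one
# hour east of UTC, with summer time MESZ starting the last Sunday of
# March (M3.5.0) and ending the last Sunday of October (M10.5.0).
os.environ["TZ"] = "MEZ-1MESZ,M3.5.0,M10.5.0"
time.tzset()

print(time.tzname)    # standard and DST zone names
print(time.timezone)  # seconds west of UTC for the standard zone
```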
Related tasks:
Using the CEEOPTS DD statement (z/OS Language Environment
Customization)
Related reference:
ENVAR (z/OS Language Environment Customization)
Procedure
Related tasks:
Monitoring databases
Starting databases
Making objects unavailable
Related reference:
Starting databases
Issue the START DATABASE (*) command to start all databases for which you
have the STARTDB privilege.
The START DATABASE (*) command does not start the Db2 directory (DSNDB01),
the Db2 catalog (DSNDB06), or the Db2 work file database (called DSNDB07,
except in a data sharing environment). Start these databases explicitly by using the
SPACENAM option. Also, the START DATABASE (*) command does not start table
spaces or index spaces that have been explicitly stopped by the STOP DATABASE
command.
Use the PART keyword of the START DATABASE command to start the individual
partitions of a table space. It can also be used to start individual partitions of a
partitioning index or logical partitions of a nonpartitioning index. The started or
stopped state of other partitions is unchanged.
The START DATABASE and STOP DATABASE commands can be used with the
SPACENAM and PART options to control table spaces, index spaces, or partitions.
Example
For example, the following command starts two partitions of table space
DSN8S11E in the database DSN8D11A:
-START DATABASE (DSN8D11A) SPACENAM (DSN8S11E) PART (1,2)
Related reference:
-START DATABASE (Db2) (Db2 Commands)
-STOP DATABASE (Db2) (Db2 Commands)
Databases, table spaces, and index spaces are started with RW status when they
are created. You can make any of them unavailable by using the STOP DATABASE
command. Db2 can also make them unavailable when it detects an error.
Procedure
In cases when the object was explicitly stopped, you can make it available again by
issuing the START DATABASE command.
Example
For example, the following command starts all table spaces and index spaces in
database DSN8D11A for read-only access:
-START DATABASE (DSN8D11A) SPACENAM(*) ACCESS(RO)
Related reference:
-START DATABASE (Db2) (Db2 Commands)
-STOP DATABASE (Db2) (Db2 Commands)
Related information:
DSN9022I (Db2 Messages)
Important: These restrictions are a necessary part of protecting the integrity of the
data. If you start an object that has restrictions, the data in the object might not be
reliable.
Procedure
Example
For example:
-START DATABASE (DSN8D11A) SPACENAM (DSN8S11E) ACCESS(FORCE)
To reset these restrictive states, you must start the release of Db2 that originally ran
the utility and terminate the utility from that release.
Related tasks:
Resolving postponed units of recovery
Related reference:
-START DATABASE (Db2) (Db2 Commands)
Monitoring databases
You can use the DISPLAY DATABASE command to obtain information about the
status of databases and the table spaces and index spaces within each database. If
applicable, the output also includes information about physical I/O errors for
those objects.
To monitor databases:
1. Issue the DISPLAY DATABASE command as follows:
-DISPLAY DATABASE (dbname)
D1 TS RW,UTRO
D2 TS RW
D3 TS STOP
D4 IX RO
D5 IX STOP
D6 IX UT
LOB1 LS RW
******* DISPLAY OF DATABASE dbname ENDED **********************
11:45:15 DSN9022I - DSNTDDIS ’DISPLAY DATABASE’ NORMAL COMPLETION
In the preceding messages:
v report_type_list indicates which options were included when the DISPLAY
DATABASE command was issued.
v dbname is an 8-byte character string that indicates the database name. The
pattern-matching character, *, is allowed at the beginning, middle, and end of
dbname.
v STATUS is a combination of one or more status codes, delimited by commas.
The maximum length of the string is 17 characters. If the status exceeds 17
characters, those characters are wrapped onto the next status line. Anything
that exceeds 17 characters on the second status line is truncated.
2. Optional: Use the pattern-matching character, *, in the DISPLAY DATABASE,
START DATABASE, and STOP DATABASE commands. You can use the
pattern-matching character in the beginning, middle, and end of the database
and table space names.
3. Use additional keywords to tailor the DISPLAY DATABASE command so that
you can monitor what you want:
v The keyword ONLY can be added to the command DISPLAY DATABASE.
When ONLY is specified with the DATABASE keyword but not the
SPACENAM keyword, all other keywords except RESTRICT, LIMIT, and
AFTER are ignored. Use DISPLAY DATABASE ONLY as follows:
-DISPLAY DATABASE(*S*DB*) ONLY
This command results in the following messages:
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT361I - * DISPLAY DATABASE SUMMARY
11:44:32 * GLOBAL
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT362I - DATABASE = DSNDB01 STATUS = RW
DBD LENGTH = 8066
11:44:32 DSNT360I - ****************************************************
TS486A TS 0004
IX486A IX L0004
IX486B IX 0004
TS486C TS
IX486C IX
******* DISPLAY OF DATABASE DB486A ENDED *********************
DSN9022I = DSNTDDIS ’DISPLAY DATABASE’ NORMAL COMPLETION
The display indicates that five objects are in database DB486A: two table
spaces and three indexes. Table space TS486A has four parts, and table space
TS486C is nonpartitioned. Index IX486A is a nonpartitioning index for table
space TS486A, and index IX486B is a partitioned index with four parts for
table space TS486A. Index IX486C is a nonpartitioned index for table space
TS486C.
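The 17-character STATUS wrapping rule described in step 1 above can be sketched as follows. This is an illustrative Python sketch; the actual DISPLAY DATABASE formatting may differ in detail (for example, in where it breaks a status code):

```python
def format_status(status_codes, width=17, max_lines=2):
    """Sketch of the STATUS display rule: status codes are joined with
    commas, wrapped onto a second line after 17 characters, and anything
    beyond the second status line is truncated."""
    text = ",".join(status_codes)
    lines = [text[i:i + width] for i in range(0, len(text), width)]
    return lines[:max_lines]
```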
Procedure
Example
For example, you can issue a command that names partitions 2, 3, and 4 in table
space TPAUGF01 in database DBAUGF01:
-DISPLAY DATABASE (DBAUGF01) SPACENAM (TPAUGF01) PART (2:4) USE
Procedure
Example
For example, issue a command that names table space TSPART in database DB01:
-DISPLAY DATABASE(DB01) SPACENAM(TSPART) LOCKS
A page is logically in error if its problem can be fixed without redefining new disk
tracks or volumes. For example, if Db2 cannot write a page to disk because of a
connectivity problem, the page is logically in error. Db2 inserts entries for pages
that are logically in error in a logical page list (LPL).
A page is physically in error if physical errors exist, such as device errors. Such
errors appear on the write error page range (WEPR). The range has a low and high
page, which are the same if only one page has errors.
If the cause of the problem is undetermined, the error is first recorded in the LPL.
If recovery from the LPL is unsuccessful, the error is then recorded on the error
page range.
Write errors for large object (LOB) table spaces that are defined with LOG NO
cause the unit of work to be rolled back. Because the pages are written during
normal deferred write processing, they can appear in the LPL and WEPR. The LOB
data pages for a LOB table space with the LOG NO attribute are not written to
LPL or WEPR. The space map pages are written during normal deferred write
processing and can appear in the LPL and WEPR.
A program that tries to read data from a page that is listed on the LPL or WEPR
receives an SQLCODE for “resource unavailable.” To access the page (or pages in
the error range), you must first recover the data from the existing database copy
and the log.
Procedure
Issue the DISPLAY DATABASE command with the LPL option. The ONLY option
restricts the output to objects that have LPL pages.
For example:
-DISPLAY DATABASE(DBFW8401) SPACENAM(*) LPL ONLY
The display indicates that the pages that are listed in the LPL PAGES column are
unavailable for access.
Related reference:
-DISPLAY DATABASE (Db2) (Db2 Commands)
You can run only the following utilities on an object with pages in the LPL:
v LOAD with the REPLACE option
v MERGECOPY
v REBUILD INDEX
v RECOVER, except for:
  – RECOVER...PAGE
  – RECOVER...ERROR RANGE
v REPAIR with the SET statement
v REPORT
Procedure
When an object has pages on the LPL, to manually remove those pages and make
them available for access when Db2 is running:
Related reference:
-START DATABASE (Db2) (Db2 Commands)
LOAD (Db2 Utilities)
REBUILD INDEX (Db2 Utilities)
RECOVER (Db2 Utilities)
DROP (Db2 SQL)
Related information:
DSNI006I (Db2 Messages)
DSNI021I (Db2 Messages)
DSNI022I (Db2 Messages)
DSNI051I (Db2 Messages)
Procedure
For example:
-DISPLAY DATABASE (DBPARTS) SPACENAM (TSPART01) WEPR
Related reference:
-DISPLAY DATABASE (Db2) (Db2 Commands)
Related information:
DSNT361I (Db2 Messages)
When you issue the STOP DATABASE command for a table space, the data sets
that contain that table space are closed and deallocated.
You also can stop Db2 subsystem databases (catalog, directory, and work file).
After the directory is stopped, installation SYSADM authority is required to restart
it.
Related reference:
-STOP DATABASE (Db2) (Db2 Commands)
-START DATABASE (Db2) (Db2 Commands)
Related information:
DSNI003I (Db2 Messages)
The following examples illustrate ways to use the STOP DATABASE command:
-STOP DATABASE (*)
Stops all databases for which you have STOPDB authorization, except the
Db2 directory (DSNDB01), the Db2 catalog (DSNDB06), or the Db2 work
file database (called DSNDB07, except in a data sharing environment), all
of which must be stopped explicitly.
The data sets containing a table space are closed and deallocated by the preceding
commands.
Related reference:
-STOP DATABASE (Db2) (Db2 Commands)
Buffer pool attributes, including buffer pool sizes, sequential steal thresholds,
deferred write thresholds, and parallel sequential thresholds, are initially defined
during the Db2 installation process.
Introductory concepts
Buffer pools (Introduction to Db2 for z/OS)
The role of buffer pools in caching data (Introduction to Db2 for z/OS)
Procedure
Buffer pools are areas of virtual storage that temporarily store pages of table spaces
or indexes.
Introductory concepts
Buffer pools (Introduction to Db2 for z/OS)
The role of buffer pools in caching data (Introduction to Db2 for z/OS)
Procedure
Example
For example:
-DISPLAY BUFFERPOOL(BP0)
Related concepts:
Obtaining information about group buffer pools (Db2 Data Sharing Planning
and Administration)
Related tasks:
Tuning database buffer pools (Db2 Performance)
Monitoring and tuning buffer pools by using online commands (Db2
Performance)
Introductory concepts
User-defined functions (Introduction to Db2 for z/OS)
Procedure
Issue the appropriate command for the action that you want to take.
START FUNCTION SPECIFIC
Activates an external function that is stopped.
DISPLAY FUNCTION SPECIFIC
Displays statistics about external user-defined functions accessed by Db2
applications.
STOP FUNCTION SPECIFIC
Prevents Db2 from accepting SQL statements with invocations of the
specified functions.
Related concepts:
Sample user-defined functions (Db2 SQL)
Function resolution (Db2 SQL)
Related tasks:
Monitoring and controlling stored procedures
Defining a user-defined function (Managing Security)
Related reference:
-START FUNCTION SPECIFIC (Db2) (Db2 Commands)
-DISPLAY FUNCTION SPECIFIC (Db2) (Db2 Commands)
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
You cannot start built-in functions or user-defined functions that are sourced on
another function.
Procedure
Example
For example, assume that you want to start functions USERFN1 and USERFN2 in
the PAYROLL schema. Issue the following command:
START FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN2)
Related reference:
-START FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information:
DSNX973I (Db2 Messages)
Example
For example, to display information about functions in the PAYROLL schema and
the HRPROD schema, issue this command:
-DISPLAY FUNCTION SPECIFIC(PAYROLL.*,HRPROD.*)
Related reference:
-DISPLAY FUNCTION SPECIFIC (Db2) (Db2 Commands)
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information:
DSNX975I (Db2 Messages)
You cannot stop built-in functions or user-defined functions that are sourced on
another function.
Example
For example, issue a command like the following one, which stops functions
USERFN1 and USERFN3 in the PAYROLL schema:
STOP FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN3)
Related reference:
-STOP FUNCTION SPECIFIC (Db2) (Db2 Commands)
Related information:
DSNX974I (Db2 Messages)
The online utilities require Db2 to be running and can be controlled in several
different ways. The stand-alone utilities do not require Db2 to be running, and
they can be controlled only by means of JCL.
Related concepts:
Db2 online utilities (Db2 Utilities)
Db2 stand-alone utilities (Db2 Utilities)
Procedure
Prepare an appropriate set of JCL statements for a utility job. The input stream for
that job must include Db2 utility control statements.
If a utility is not running, you need to determine whether the type of utility access
is allowed on an object of a specific status. The following table shows the
compatibility of utility types and object status.
Table 38. Compatibility of utility types and object status

Utility type   Object access
Read-only      RO
All            RW (default access type for an object)
Db2            UT
Procedure
Related concepts:
Db2 online utilities (Db2 Utilities)
Related reference:
-START DATABASE (Db2) (Db2 Commands)
Procedure
Stand-alone utilities
Some stand-alone utilities can be run only by means of JCL.
Most of the stand-alone utilities can be used while Db2 is running. However, for
consistency of output, the table spaces and index spaces must be stopped first
because these utilities do not have access to the Db2 buffer pools. In some cases,
Db2 must be running or stopped before you invoke the utility.
Stand-alone utility job streams require that you code specific data set names in the
JCL. To determine the fifth qualifier in the data set name, query the Db2 catalog
tables SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART for the IPREFIX
column value that corresponds to the required data set.
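A hedged sketch of such a catalog query (the database and table space names are illustrative):

```sql
SELECT DBNAME, TSNAME, PARTITION, IPREFIX
  FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'DBFW8401'
   AND TSNAME = 'TSPART01';
```

The IPREFIX value (I or J) supplies the first character of the fifth qualifier of the data set name.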
The change log inventory utility (DSNJU003) enables you to change the contents of
the bootstrap data set (BSDS). This utility cannot be run while Db2 is running
because inconsistencies could result. Use the STOP DB2 MODE(QUIESCE)
command to stop the Db2 subsystem, run the utility, and then restart Db2 with the
START DB2 command.
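The sequence just described might look like the following from the console (a sketch; the middle step submits your DSNJU003 batch job):

```
-STOP DB2 MODE(QUIESCE)
   ...submit the DSNJU003 job and wait for it to complete...
-START DB2
```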
The print log map utility (DSNJU004) enables you to print the bootstrap data set
contents. The utility can be run when Db2 is active or inactive; however, when it is
run with Db2 active, the user's JCL and the Db2 started task must both specify
DISP=SHR for the BSDS data sets.
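A minimal JCL sketch for DSNJU004 (the load library and BSDS data set name prefixes are illustrative); note DISP=SHR on the BSDS, as required when Db2 is active:

```jcl
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=prefix.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*
```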
Each IMS and Db2 subsystem must use a separate instance of IRLM.
In a data sharing environment, the IRLM handles global locking, and each Db2
member has its own corresponding IRLM.
Procedure
Issue the appropriate z/OS command for the action that you want to take.
In each command description, irlmproc is the IRLM procedure name and irlmnm is
the IRLM subsystem name.
MODIFY irlmproc,ABEND,DUMP
Abnormally terminates the IRLM and generates a dump.
MODIFY irlmproc,ABEND,NODUMP
Abnormally terminates the IRLM but does not generate a dump.
MODIFY irlmproc,DIAG
Initiates diagnostic dumps for IRLM subsystems in a data sharing group
when a delay occurs.
MODIFY irlmproc,SET
Dynamically sets the maximum amount of private virtual (PVT) storage or
the number of trace buffers that are used for this IRLM.
MODIFY irlmproc,STATUS
Displays the status for the subsystems on this IRLM.
START irlmproc
Starts the IRLM.
STOP irlmproc
Stops the IRLM normally.
TRACE CT,OFF,COMP=irlmnm
Stops IRLM tracing.
TRACE CT,ON,COMP=irlmnm
Starts IRLM tracing for all subtypes (DBM, SLM, XIT, and XCF).
TRACE CT,ON,COMP=irlmnm,SUB=(subname)
Starts IRLM tracing for a single subtype.
MODIFY irlmproc,SET,PVT=nnn
Sets the maximum amount of private virtual (PVT) storage that this IRLM
can use for lock control structures.
MODIFY irlmproc,SET,DEADLOCK=nnnn
Sets the time for the local deadlock detection cycle.
MODIFY irlmproc,SET,LTE=nnnn
Sets the number of LOCK HASH entries that this IRLM can use on the
next connect to the XCF LOCK structure. Use this command only for data
sharing.
MODIFY irlmproc,SET,TIMEOUT=nnnn,subsystem-name
Sets the timeout value for the specified Db2 subsystem. Display the
subsystem-name by using MODIFY irlmproc,STATUS.
MODIFY irlmproc,SET,TRACE=nnn
Sets the maximum number of trace buffers that are used for this IRLM.
MODIFY irlmproc,STATUS,irlmnm
Displays the status of a specific IRLM.
MODIFY irlmproc,STATUS,ALLD
Displays the status of all subsystems known to this IRLM in the data
sharing group.
MODIFY irlmproc,STATUS,ALLI
Displays the status of all IRLMs known to this IRLM in the data sharing
group.
MODIFY irlmproc,STATUS,MAINT
Displays the maintenance levels of IRLM load module CSECTs for the
specified IRLM instance.
MODIFY irlmproc,STATUS,STOR
Displays the current and high-water allocation for private virtual (PVT)
storage, as well as storage that is above the 2-GB bar.
MODIFY irlmproc,STATUS,TRACE
Displays information about trace types of IRLM subcomponents.
When Db2 is installed, you normally specify that the IRLM be started
automatically. Then, if the IRLM is not available when Db2 is started, Db2 starts it,
and periodically checks whether it is up before attempting to connect. If the
attempt to start the IRLM fails, Db2 terminates.
Consider starting the IRLM manually if you are having problems starting Db2 for
either of the following reasons:
v An IDENTIFY or CONNECT to a data sharing group fails.
v Db2 experiences a failure that involves the IRLM.
When you start the IRLM manually, you can generate a dump to collect diagnostic
information because IRLM does not stop automatically.
Procedure
To stop IRLM:
Results
Your Db2 subsystem will abend. An IMS subsystem that uses the IRLM does not
abend and can be reconnected.
IRLM uses the z/OS Automatic Restart Manager (ARM) services. However, it
de-registers from ARM for normal shutdowns. IRLM registers with ARM during
initialization and provides ARM with an event exit routine. The event exit routine
must be in the link list. It is part of the IRLM DXRRL183 load module. The event
exit routine ensures that the IRLM name is defined to z/OS when ARM restarts
IRLM on a target z/OS system that is different from the failing z/OS system. The
IRLM element name that is used for the ARM registration depends on the IRLM
mode. For local-mode IRLM, the element name is a concatenation of the IRLM
subsystem name and the IRLM ID. For global-mode IRLM, the element name is a
concatenation of the IRLM data sharing group name, the IRLM subsystem name,
and the IRLM ID.
IRLM de-registers from ARM when one of the following events occurs:
v PURGE irlmproc is issued.
v MODIFY irlmproc,ABEND,NODUMP is issued.
v Db2 automatically stops IRLM.
Monitoring threads
You monitor threads by using the Db2 DISPLAY THREAD command, which
displays current information about the status of threads.
Types of threads
Threads are an important resource within a Db2 subsystem. A thread is a structure
that describes a connection made by an application and traces its progress in the
Db2 subsystem.
Allied threads
Allied threads are threads that are connected to Db2 from TSO, batch, IMS,
CICS, CAF, or RRSAF.
Database access threads
Distributed database access threads (sometimes called DBATs) are threads
that are connected through a network to access data at a Db2 server on
behalf of distributed requesting systems. Database access threads are created
in the following situations:
| v When new connections are accepted from remote requesters
| v When Db2 is configured in INACTIVE mode, and a new request is
| received from a remote requester and no pooled database access thread
| is available to service the new request
Database access threads can operate in ACTIVE or INACTIVE mode. The
mode used for database access threads is controlled by the value of the
CMTSTAT subsystem parameter.
INACTIVE mode
When the value of the CMTSTAT subsystem parameter is
INACTIVE, a database access thread can be active or pooled. When
a database access thread is active, it is processing requests from
client connections within units of work. When a database access
thread is pooled, it is waiting for the next request from a client to
start a new unit of work.
INACTIVE mode database access threads are terminated under any
of the following conditions:
v After processing 200 units of work.
| v After being idle in the pool for the amount of time specified by
| the value of the POOLINAC subsystem parameter.
However, the termination of an INACTIVE mode thread does not
prevent another database access thread from being created to meet
processing demand, as long as the value of the MAXDBAT
subsystem parameter has not been reached.
ACTIVE mode
When the value of the CMTSTAT subsystem parameter is ACTIVE, a
database access thread is always active from initial creation to
termination.
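To observe how database access threads are currently being used, you can issue the DISPLAY DDF command with the DETAIL option (a sketch; the exact output fields depend on your configuration):

```
-DISPLAY DDF DETAIL
```

The detailed output includes counts of active and pooled database access threads, which you can compare against the MAXDBAT setting.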
The DISPLAY THREAD command allows you to select which type of information
you want to include in the display by using one or more of the following:
v Active, indoubt, postponed-abort, or pooled threads
v Allied threads that are associated with the address spaces whose
connection-names are specified
v Allied threads
v Distributed threads
v Distributed threads that are associated with a specific remote location
v Detailed information about connections with remote locations
v A specific logical unit of work ID (LUWID)
To use the TYPE, LOCATION, DETAIL, and LUWID keywords, you must have
SYSOPR authority or higher.
More information about how to interpret this output can be found in the topics
describing the individual connections and in the description of message DSNV408I,
which is part of message DSNV401I.
Related tasks:
Archiving the log
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the LOCATION option, followed by a
list of location names.
Example
For example, you can specify an asterisk (*) after the THREAD and LOCATION
options:
-DISPLAY THREAD(*) LOCATION(*) DETAIL
When you issue this command, Db2 returns messages like the following:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME     ST A   REQ ID           AUTHID   PLAN     ASID TOKEN
SERVER   RA *  2923 DB2BP        ADMF001  DISTSERV 0036    20
V437-WORKSTATION=ARRAKIS, USERID=ADMF001,
APPLICATION NAME=DB2BP
V436-PGM=NULLID.SQLC27A4, SEC=201, STMNT=210
V445-09707265.01BE.889C28200037=203 ACCESSING DATA FOR
( 1)2002:91E:610:1::5
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
The LUNAME is enclosed by the less-than (<) and greater-than (>) symbols. The IP
address can be in the dotted decimal or colon hexadecimal format.
Db2 uses one of the following formats in messages that display information
about non-Db2 requesters:
v LUNAME notation
v Dotted decimal format
v Colon hexadecimal format
Example
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
Procedure
Issue the DISPLAY THREAD command with the LOCATION and DETAIL options:
-DISPLAY THREAD(*) LOCATION(*) DETAIL
Db2 returns the following messages, which show a local-site application that is
waiting for a conversation to be allocated in Db2, and a Db2 server that is
accessed by a DRDA client through TCP/IP.
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
TSO TR * 3 SYSADM SYSADM DSNESPRR 002E 2
V436-PGM=DSNESPRR.DSNESM68, SEC=1, STMNT=116
V444-DB2NET.LUND0.A238216C2FAE=2 ACCESSING DATA AT
( 1)USIBMSTODB22-LUND1
V447--INDEX SESSID A ST TIME
V448--( 1) 0000000000000000 N A1 9015816504776
TSO RA * 11 SYSADM SYSADM DSNESPRR 001A 15
V445-STLDRIV.SSLU.A23555366A29=15 ACCESSING DATA FOR
( 1)123.34.101.98
V447--INDEX SESSID A ST TIME
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
You can use an asterisk (*) in an LUWID keyword, like you can use an
asterisk in a LOCATION name. For example, issue the command DISPLAY THREAD
TYPE(INDOUBT) LUWID(NET1.*) to display all the indoubt threads whose LUWID has
a network name of NET1. The command DISPLAY THREAD TYPE(INDOUBT)
LUWID(IBM.NEW*) displays all indoubt threads whose LUWID has a network name
of "IBM" and whose LUNAME begins with "NEW."
Use the DETAIL keyword with the DISPLAY THREAD LUWID command to show
the status of every conversation that is connected to each displayed thread, and to
indicate whether a conversation is using DRDA access.
Procedure
Issue the DISPLAY THREAD command with the TYPE keyword. For example:
-DISPLAY THREAD(*) TYPE(PROC)
This output indicates that the application is waiting for data to be returned by the
server at USIBMSTODB22.
This output indicates that the server at USIBMSTODB22 is waiting for data to be
returned by the secondary server at USIBMSTODB24.
The secondary server at USIBMSTODB23 is accessing data for the primary server
at USIBMSTODB22. If you enter the DISPLAY THREAD command with the
DETAIL keyword from USIBMSTODB23, you receive the following output:
-DISPLAY THREAD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
BATCH RA * 2 BKH2C SYSADM YW1019C 0006 1
V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR
( 1)USIBMSTODB22-SSURLU
V447--INDEX SESSID A ST TIME
V448--( 1) 0000000600000002 W R1 9015611252369
DISPLAY ACTIVE REPORT COMPLETE
11:27:25 DSN9022I - DSNVDT ’-DISPLAY THREAD’ NORMAL COMPLETION
This output indicates that the secondary server at USIBMSTODB23 is not currently
active.
The secondary server at USIBMSTODB24 is also accessing data for the primary
server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the
DETAIL keyword from USIBMSTODB24, you receive the following output:
-DISPLAY THREAD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
BATCH RA * 2 BKH2C SYSADM YW1019C 0006 1
V436-PGM=*.BKH2C, SEC=1, STMNT=1
V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR
( 1)USIBMSTODB22-SSURLU
V447--INDEX SESSID A ST TIME
V448--( 1) 0000000900000005 S1 9015611253075
DISPLAY ACTIVE REPORT COMPLETE
11:27:32 DSN9022I - DSNVDT ’-DISPLAY THREAD’ NORMAL COMPLETION
The conversation status might not change for a long time. The conversation could
be hung, or the processing could be taking a long time. To determine whether the
conversation is hung, issue the DISPLAY THREAD command again and compare
the new timestamp to the timestamps from previous output messages. If the
timestamp is changing, but the status is not changing, the job is still processing. If
you need to terminate a distributed job, perhaps because it is hung and has been
holding database locks for a long time, you can use the CANCEL DDF THREAD
command if the thread is in Db2 (whether active or suspended). If the thread is in
VTAM, you can use the VARY NET TERM command.
Controlling connections
The method that you use to control connections between Db2 and another
subsystem or environment depends on the subsystem or environment that is
involved.
The following table summarizes how the output for the DISPLAY THREAD
command differs for a TSO online application, a TSO batch application, a QMF™
session, and a call attachment facility application.
Table 39. Differences in DISPLAY THREAD information for different environments

Connection        Name  AUTHID    Corr-ID (1)  Plan (1)
DSN (TSO online)  TSO   Logon ID  Logon ID     RUN .. Plan(x)
Notes:
1. After the application connects to Db2 but before a plan is allocated, this field is
blank.
The name of the connection can have one of the following values:
Name Connection to
TSO Program that runs in TSO foreground
BATCH
Program that runs in TSO background
DB2CALL
Program that uses the call attachment facility and that runs in the same
address space as a program that uses the TSO attachment facility
The following command displays information about TSO and CAF threads,
including those threads that process requests to or from remote locations:
-DISPLAY THREAD(BATCH,TSO,DB2CALL)
Figure 25. DISPLAY THREAD output that shows TSO and CAF connections
Related tasks:
Monitoring threads
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
TSO displays:
READY
You enter:
DSN SYSTEM (DSN)
DSN displays:
DSN
You enter:
RUN PROGRAM (MYPROG)
DSN displays:
DSN
You enter:
END
TSO displays:
READY
DSNC DISCONNECT
Terminates threads using a specific Db2 plan.
DSNC DISPLAY
Displays thread information or statistics.
DSNC MODIFY
Modifies the maximum number of threads for a transaction or group.
DSNC STOP
Disconnects CICS from Db2.
DSNC STRT
Starts the CICS attachment facility.
Related information:
Overview of the CICS Db2 interface (CICS Db2 Guide)
Command types and environments in Db2 (Db2 Commands)
Procedure
Restarting CICS
One function of the CICS attachment facility is to keep data synchronized between
the two systems.
If Db2 completes phase 1 but does not start phase 2 of the commit process, the
units of recovery that are being committed are termed indoubt. An indoubt unit of
recovery might occur if Db2 terminates abnormally after completing phase 1 of the
commit process. CICS might commit or roll back work without Db2 knowing
about it.
Db2 cannot resolve those indoubt units of recovery (that is, commit or roll back the
changes made to Db2 resources) until the connection to CICS is restarted.
Procedure
To restart CICS:
You must auto-start CICS (START=AUTO in the DFHSIT table) to obtain all
necessary information for indoubt thread resolution that is available from its log.
Do not perform a cold start. You specify the START option in the DFHSIT table.
If CICS has requests active in Db2 when a Db2 connection terminates, the
corresponding CICS tasks might remain suspended even after CICS is reconnected
to Db2. Purge those tasks from CICS by using a CICS-supplied transaction such as:
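The transaction intended here is most likely the CICS master terminal transaction CEMT; a hedged sketch (the task number is illustrative):

```
CEMT SET TASK(46) FORCEPURGE
```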
The DSNC STRT command starts the CICS Db2 attachment facility, which allows
CICS application programs to access Db2 databases.
Threads are created at the first Db2 request from the application if one is not
already available for the specific Db2 plan.
Related concepts:
CICS Transaction Server for z/OS Db2 Guide
Any authorized CICS user can monitor the threads and change the connection
parameters as needed. Operators can use the following CICS attachment facility
commands to monitor the threads:
Procedure
v DSNC DISPLAY PLAN plan-name destination
v DSNC DISPLAY TRANSACTION transaction-id destination
These commands display the threads that the resource or transaction is using.
The following information is provided for each created thread:
Related concepts:
Resolution of CICS indoubt units of recovery
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
Procedure
To disconnect a CICS application from Db2, use one of the following methods:
v The Db2 command CANCEL THREAD can be used to cancel a particular thread.
CANCEL THREAD requires that you know the token for any thread that you
want to cancel. Enter the following command to cancel the thread that is
identified by the token as indicated in the display output.
-CANCEL THREAD(46)
When you issue the CANCEL THREAD command for a thread, that thread is
scheduled to be terminated in Db2.
v The command DSNC DISCONNECT terminates the threads allocated to a plan
ID, but it does not prevent new threads from being created. This command frees
Db2 resources that are shared by the CICS transactions and allows exclusive
access to them for special-purpose processes such as utilities or data definition
statements.
The thread is not canceled until the application releases it for reuse, either at
SYNCPOINT or end-of-task.
Related concepts:
CICS Transaction Server for z/OS Db2 Guide
Forced termination is not recommended, but at times you might need to force the
connection to end. A forced termination of the connection can abnormally
terminate CICS transactions that are connected to Db2. Therefore, indoubt units of
recovery can exist at reconnect.
Procedure
v To disconnect CICS with an orderly termination, use one of the following
methods:
– Enter the DSNC STOP QUIESCE command. CICS and Db2 remain active.
For example, the following command stops the Db2 subsystem in QUIESCE mode,
allows the currently identified tasks to continue normal execution, and does
not allow new tasks to identify themselves to Db2:
-STOP DB2 MODE (QUIESCE)
IMS command responses are sent to the terminal from which the corresponding
command was entered. Authorization to enter IMS commands is based on IMS
security.
Related information:
Command types and environments in Db2 (Db2 Commands)
IMS commands
The order of starting IMS and Db2 is not vital. If IMS is started first, when Db2
comes up, Db2 posts the control region MODIFY task, and IMS again tries to
reconnect.
If Db2 is stopped by the STOP DB2 command, the /STOP SUBSYS command, or a
Db2 abend, IMS cannot reconnect automatically. You must make the connection by
using the /START SUBSYS command.
The following messages can be produced when IMS attempts to connect a Db2
subsystem. In each message, imsid is the IMS connection name.
v If Db2 is active, these messages are sent:
– To the z/OS console:
DFS3613I ESS TCB INITIALIZATION COMPLETE
– To the IMS master terminal:
DSNM001I IMS/TM imsid CONNECTED TO SUBSYSTEM ssnm
v If Db2 is not active, this message is sent to the IMS master terminal:
DSNM003I IMS/TM imsid FAILED TO CONNECT TO SUBSYSTEM ssnm
RC=00 imsid
RC=00 means that a notify request has been queued. When Db2 starts, IMS is
also notified.
No message goes to the z/OS console.
Execution of the first SQL statement of the program causes the IMS attachment
facility to create a thread and allocate a plan, whose name is associated with the
IMS application program module name. Db2 sets up control blocks for the thread
and loads the plan.
Two threads can have the same correlation ID (pst#.psbname) if all of these
conditions occur:
v Connections have been broken several times.
v Indoubt units of recovery were not recovered.
v Applications were subsequently scheduled in the same region.
The NID is shown in a condensed form on the messages that are issued by the
Db2 DISPLAY THREAD command processor. The IMS subsystem name (imsid) is
displayed as the net_node. The net_node is followed by the 8-byte OASN, which is
displayed in hexadecimal format (16 characters), with all leading zeros omitted.
The net_node and the OASN are separated by a period.
For example, if the net_node is IMSA, and the OASN is 0003CA670000006E, the
NID is displayed as IMSA.3CA670000006E on the Db2 DISPLAY THREAD
command output.
If two threads have the same corr-id, use the NID instead of corr-id on the
RECOVER INDOUBT command. The NID uniquely identifies the work unit.
The OASN is a 4-byte number that represents the number of IMS schedulings since
the last IMS cold start. The OASN is occasionally found in an 8-byte format, where
the first 4 bytes contain the scheduling number, and the last 4 bytes contain the
number of IMS sync points (commits) during this schedule. The OASN is part of
the NID.
The NID is a 16-byte network ID that originates from IMS. The NID contains the
4-byte IMS subsystem name, followed by four bytes of blanks, followed by the
8-byte version of the OASN. In communications between IMS and Db2, the NID
serves as the recovery token.
Notes:
1. After the application connects to Db2 but before sign-on processing completes,
this field is blank.
2. After sign-on processing completes but before a plan is allocated, this field is
blank.
The following command displays information about IMS threads, including those
accessing data at remote locations:
-DISPLAY THREAD(imsid)
One function of the thread that connects Db2 to IMS is to keep data in
synchronization between the two systems. If the application program requires it, a
change to IMS data must also be made to Db2 data. If Db2 abends while connected
to IMS, IMS might commit or back out work without Db2 being aware of it. When
Db2 restarts, that work is termed indoubt. Typically, some decision must be made
about the status of the work.
Procedure
Results
One of the following messages might be issued after you issue the RECOVER
command:
DSNV414I - THREAD pst#.psbname COMMIT SCHEDULED
DSNV415I - THREAD pst#.psbname ABORT SCHEDULED
Related tasks:
Resolving indoubt units of recovery
Procedure
In this command, imsid is the connection name. The command produces messages
similar to these:
Related tasks:
Restarting Db2 after termination
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
At certain times, IMS builds a list of residual recovery entries (RREs). RREs are units
of recovery about which Db2 might be in doubt. RREs occur in the following
situations:
v If Db2 is not operational, IMS has RREs that cannot be resolved until Db2 is
operational. Those are not a problem.
v If Db2 is operational and connected to IMS, and if IMS rolled back the work that
Db2 has committed, the IMS attachment facility issues message DSNM005I. If
the data in the two systems must be consistent, this is a problem situation.
v If Db2 is operational and connected to IMS, RREs can still exist, even though no
messages have informed you of this problem. The only way to recognize this
problem is to issue the IMS /DISPLAY OASN SUBSYS command after the Db2
connection to IMS has been established.
Procedure
These commands reset the status of IMS; they do not result in any communication
with Db2.
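The commands referred to are IMS commands; for example, an RRE can be reset with the /CHANGE command (a sketch; SSTR is an illustrative subsystem name):

```
/CHANGE SUBSYS SSTR RESET
```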
Related information:
Recovering from IMS indoubt units of recovery
The IMS attachment facility that is used in the control region is also loaded into
dependent regions. A connection is made from each dependent region to Db2. This
connection is used to pass SQL statements and to coordinate the commitment of
Db2 and IMS work.
If Db2 is not active, or if resources are not available when the first SQL statement
is issued from an application program, the action taken depends on the error
option specified on the SSM user entry. The options are:
Option Action
R The appropriate return code is sent to the application, and the SQL code is
returned.
Q The application abends. This is a PSTOP transaction type; the input
transaction is re-queued for processing, and new transactions are queued.
A The application abends. This is a STOP transaction type; the input
transaction is discarded, and new transactions are not queued.
The region error option can be overridden at the program level by using the
resource translation table (RTT).
Related concepts:
IMS attachment facility macro (DSNMAPN) (Db2 Installation and Migration)
However, they might want to change values in the SSM member of IMS.PROCLIB.
To do that, they can issue /STOP REGION, update the SSM member, and issue /START
REGION.
Procedure
Results
The connection between IMS and Db2 is shown as one of the following states:
v CONNECTED
v NOT CONNECTED
v CONNECT IN PROGRESS
v STOPPED
v STOP IN PROGRESS
v INVALID SUBSYSTEM NAME=name
v SUBSYSTEM name NOT DEFINED BUT RECOVERY OUTSTANDING
The thread status from each dependent region is shown as one of the following
states:
v CONN
v CONN, ACTIVE (includes LTERM of user)
Example
The following four examples show the output that might be generated when you
issue the IMS /DISPLAY SUBSYS command.
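The command itself takes a Db2 subsystem name, or ALL (a sketch; SSTR is an illustrative subsystem name):

```
/DISPLAY SUBSYS SSTR
```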
The following figure shows the output that is returned for a DSN subsystem that is
not connected. The IMS attachment facility issues message DSNM003I in this
example.
The following figure shows the output that is returned for a DSN subsystem that is
connected. The IMS attachment facility issues message DSNM001I in this example.
The following figure shows the output that is returned for a DSN subsystem that is
in a stopped status. The IMS attachment facility issues message DSNM002I in this
example.
Figure 29. Example of output from the IMS /DISPLAY SUBSYS command
The following figure shows the output that is returned for a DSN subsystem that is
connected, with region 1 included. You can use the values from the REGID and the
PROGRAM fields to correlate the output of the command to the LTERM that is
involved.
Figure 30. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected and
the region ID (1) that is included.
That command sends the following message to the terminal that entered it, usually
the master terminal operator (MTO):
DFS058I STOP COMMAND IN PROGRESS
If an application attempts to access Db2 after the connection ended and before a
thread is established, the attempt is handled according to the region error option
specification (R, Q, or A).
If RRS is not started, an IDENTIFY request fails with reason code X'00F30091'.
Related tasks:
Invoking the Resource Recovery Services attachment facility (Db2 Application
programming and SQL)
Programming for concurrency (Db2 Performance)
Db2 cannot resolve those indoubt units of recovery (that is, commit or roll back the
changes made to Db2 resources) until Db2 restarts with RRS.
If any unit of work is indoubt when a failure occurs, Db2 and RRS automatically
resolve the unit of work when Db2 restarts with RRS.
Procedure
For RRSAF connections, a network ID is the z/OS RRS unit of recovery ID (URID),
which uniquely identifies a unit of work. A z/OS RRS URID is a 32-character
number.
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Chapter 8. Monitoring and controlling Db2 and its connections 333
Related information:
DSNV401I (Db2 Messages)
Procedure
Results
If you recover a thread that is part of a global transaction, all threads in the global
transaction are recovered.
The following messages might be issued when you issue the RECOVER INDOUBT
command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED
DSNV415I - THREAD correlation-id ABORT SCHEDULED
If this message is issued, use the following NID option of RECOVER INDOUBT:
-RECOVER INDOUBT(RRSAF) ACTION(action) NID(nid)
where nid is the 32-character field that is displayed in the DSNV449I message.
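The command text above can be assembled mechanically. The following minimal sketch (the helper name is invented, not part of Db2) builds the RECOVER INDOUBT command shown in this section and enforces the constraint that an RRSAF network ID, the z/OS RRS URID, is a 32-character value:

```python
def build_recover_indoubt(action: str, nid: str) -> str:
    """Build the -RECOVER INDOUBT command text described in this section.

    The NID value must be the 32-character network ID displayed in
    message DSNV449I.
    """
    if action not in ("COMMIT", "ABORT"):
        raise ValueError("action must be COMMIT or ABORT")
    if len(nid) != 32:
        raise ValueError("an RRSAF network ID (z/OS RRS URID) is 32 characters")
    return f"-RECOVER INDOUBT(RRSAF) ACTION({action}) NID({nid})"

# Example: cmd -> "-RECOVER INDOUBT(RRSAF) ACTION(COMMIT) NID(000...0)"
cmd = build_recover_indoubt("COMMIT", "0" * 32)
```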
Related concepts:
Multiple system consistency
Related tasks:
Resolving indoubt units of recovery
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
-RECOVER INDOUBT (Db2) (Db2 Commands)
For RRSAF connections, a network ID is the z/OS RRS unit of recovery ID (URID),
which uniquely identifies a unit of work. A z/OS RRS URID is a 32-character
number.
Related reference:
-DISPLAY THREAD (Db2) (Db2 Commands)
Related information:
DSNV401I (Db2 Messages)
RRSAF uses the RRS Switch Context (CTXSWCH) service to do this. Only
authorized programs can execute CTXSWCH.
Db2 stores information in an RRS CONTEXT about an RRSAF thread so that Db2
can locate the thread later. An application or application monitor can then invoke
CTXSWCH to disassociate the CONTEXT from the current TCB and then associate
the CONTEXT with the same TCB or a different TCB.
The command produces output similar to the output in the following figure:
Procedure
When you issue the CANCEL THREAD command, Db2 schedules the thread for
termination.
An allied thread is a thread that is connected locally to your Db2 subsystem, that is
from TSO, CICS, IMS, or a stored procedures address space. A database access thread
is a thread that is initiated by a remote DBMS to your Db2 subsystem.
Related concepts:
Db2 commands for monitoring connections to other systems
Related tasks:
Resolving indoubt units of recovery
Related information:
Recovering from database access thread failure
Starting DDF
You can start the distributed data facility (DDF) if you have at least SYSOPR
authority.
When you install Db2, you can request that the DDF start automatically when Db2
starts.
Procedure
If the DDF is not properly installed, the START DDF command fails, and message
DSN9032I - REQUESTED FUNCTION IS NOT AVAILABLE is issued. If the DDF has
already started, the START DDF command fails, and message DSNL001I - DDF IS
ALREADY STARTED is issued. Use the DISPLAY DDF command to display the status
of DDF.
Related concepts:
Maintaining consistency across multiple systems
Related reference:
-START DDF (Db2) (Db2 Commands)
Suspending DDF server threads frees all resources that are held by the server
threads and lets the following operations complete:
v CREATE
v ALTER
v DROP
v GRANT
v REVOKE
When you issue the STOP DDF MODE(SUSPEND) command, Db2 waits for all
active DDF database access threads to become pooled or to terminate. Two
optional keywords on this command, WAIT and CANCEL, let you control how
long Db2 waits and what action Db2 takes after a specified time period.
Related reference:
-STOP DDF (Db2) (Db2 Commands)
Procedure
Related reference:
-START DDF (Db2) (Db2 Commands)
To issue the DISPLAY DDF command, you must have SYSOPR authority or higher.
Tip: You can use the optional DETAIL keyword to receive additional configuration
and statistical information.
The DISPLAY DDF DETAIL command is especially useful because it displays new
inbound connections that other commands do not indicate. For example, new
inbound connections might not yet be reflected in the DISPLAY THREAD report
when DDF is in INACTIVE MODE (denoted by a DT value of I in message
DSNL090I) and DDF is stopped with mode SUSPEND, or when the maximum
number of active database access threads has been reached. These new connections
are displayed in the DISPLAY DDF DETAIL report. However, specific details
regarding the origin of the connections, such as the client system IP address or LU
name, are not available until the connections are associated with a database access
thread.
Procedure
Related reference:
-DISPLAY DDF (Db2) (Db2 Commands)
Related information:
DSNL080I (Db2 Messages)
By issuing certain Db2 commands, you can generate information about the status
of distributed threads.
DISPLAY DDF
Displays information about the status and configuration of the distributed
data facility (DDF), and about the connections or threads controlled by
DDF.
DISPLAY LOCATION
Displays statistics about threads and conversations between a remote Db2
subsystem and the local subsystem.
DISPLAY THREAD
Displays information about Db2, distributed subsystem connections, and
parallel tasks.
You can specify location names, SNA LU names, or IP addresses, and the DETAIL
keyword is supported.
Procedure
You can use an asterisk (*) in place of the end characters of a location name. For
example, you can use DISPLAY LOCATION(SAN*) to display information about all
active connections between your Db2 subsystem and a remote location that begins
with “SAN”. The results include the number of conversations and conversation
role, requester, or server.
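The trailing asterisk acts as a prefix wildcard. The following sketch illustrates the matching behavior that DISPLAY LOCATION(SAN*) implies; it is an illustration only, not Db2's actual implementation:

```python
def location_matches(pattern: str, location: str) -> bool:
    """Prefix-wildcard match: a trailing '*' matches any ending characters."""
    if pattern.endswith("*"):
        return location.startswith(pattern[:-1])
    return location == pattern

# Hypothetical location names, chosen for illustration only.
locations = ["SANJOSE", "SANTATERESA", "STLOUIS"]
matched = [loc for loc in locations if location_matches("SAN*", loc)]
# matched -> ["SANJOSE", "SANTATERESA"]
```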
When Db2 connects with a remote location, information about that location
persists in the report even if no active connections exist, including:
v The physical address of the remote location (LOCATION)
v The product identifier (PRDID)
Example
For example, suppose two threads are at location USIBMSTODB21. One thread is a
database access thread that is related to a non-Db2 requester system, and the other
is an allied thread that goes from USIBMSTODB21 to the non-Db2 server system.
Both threads use SNA connections. The DISPLAY LOCATION command that is
issued at USIBMSTODB21 displays the following output:
DSNL200I @ DISPLAY LOCATION REPORT FOLLOWS-
LOCATION PRDID T ATT CONNS
LUND1 R 1
LULA DSN09010 S 1
DISPLAY LOCATION REPORT COMPLETE
The following output shows the result of the DISPLAY LOCATION(*) command when
Db2 is connected to the following DRDA partners:
v TCP/IP for DRDA connections to ::FFFF:124.38.54.16
v SNA connections to LULA.
DSNL200I @ DISPLAY LOCATION REPORT FOLLOWS-
LOCATION PRDID T ATT CONNS
::FFFF:124.38.54.16..446 DSN10015 R 2
::FFFF:124.38.54.16 DSN10015 S 1
LULA DSN09010 R 1
LULA DSN09010 S 2
LULA
DISPLAY LOCATION REPORT COMPLETE
The DISPLAY LOCATION command displays information for each remote location
that currently is, or once was, in contact with Db2. If a location is displayed with
zero conversations, one of the following conditions exists:
v Sessions currently exist with the partner location, but no active conversations are
allocated to any of the sessions.
v Sessions no longer exist with the partner, because contact with the partner has
been lost.
When you specify the DETAIL option in the DISPLAY LOCATION command, the
report might include additional message lines for each remote location, including:
v Information in the CONNS column about the number of conversations with each
remote location that have particular attributes, which are identified by a value in
the ATT column. For example, connections that use a trusted
context are identified by TRS.
v Information about conversations that are owned by Db2 system threads, such as
those that are used for resynchronization of indoubt units of work.
| There are several types of IBM data server drivers available from Db2 for Linux,
| UNIX, and Windows. You can cancel SQL from a remote client running:
| v IBM Data Server Driver for ODBC and CLI
| v IBM Data Server Driver for JDBC and SQLJ type 4 connectivity
| Related information:
| IBM data server client and driver types
| You can configure the cancel behavior by setting configuration properties for your
| client driver:
| v For C-based drivers, configure the InterruptProcessingMode property.
| v For Java-based drivers, configure the interruptProcessingMode and
| queryTimeoutInterruptProcessingMode properties.
Procedure
To cancel SQL statements that are running on a remote Db2 for z/OS server:
| Note: A client driver can implicitly cancel an SQL statement when the query
| timeout interval is reached.
| Results
When you cancel an SQL statement from a client application, you do not eliminate
the original connection to the remote server. The original connection remains active
to process additional SQL requests. Any cursor that is associated with the canceled
statement is closed, and the Db2 server returns an SQLCODE of -952 to the client
application when you cancel a statement by using this method.
From a client application, you can cancel only dynamic SQL statements.
Transaction-level statements (CONNECT, COMMIT, ROLLBACK) and bind
statements cannot be canceled.
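A client application typically inspects the returned SQLCODE to decide what to do next. The following hedged sketch (the function name is invented, not a driver API) classifies the SQLCODE value that this section describes; the key point is that -952 means the statement was canceled and its cursor closed, but the connection remains usable:

```python
def classify_cancel_sqlcode(sqlcode: int) -> str:
    """Map SQLCODEs discussed in this section to a client-side interpretation."""
    if sqlcode == 0:
        return "success"
    if sqlcode == -952:
        # Statement was canceled; cursor closed; the original connection
        # remains active for additional SQL requests.
        return "statement canceled; connection still active"
    return "other error; see Db2 Codes"

# Example: outcome -> "statement canceled; connection still active"
outcome = classify_cancel_sqlcode(-952)
```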
Related reference:
SQLCancel function (CLI) - Cancel statement
Configuration of Sysplex workload balancing for non-Java clients
InterruptProcessingMode IBM Data Server Driver configuration keyword
Driver support for JDBC APIs
Canceling threads
You can use the CANCEL THREAD command to terminate threads that are active
or suspended in Db2.
The command has no effect if the thread is not active or suspended in Db2.
You can use the DISPLAY THREAD command to determine whether a thread is hung
in Db2 or VTAM. If the thread is hung in VTAM, there is no reason to use the
CANCEL THREAD command.
Procedure
Results
The token is a 1-character to 5-character number that identifies the thread.
When Db2 schedules the thread for termination, the following message for a
distributed thread is issued:
DSNL010I - DDF THREAD token or luwid HAS BEEN CANCELED
A database access thread can be in the prepared state, waiting for the commit
decision from the coordinator. When you issue the CANCEL THREAD command
for a database access thread that is in the prepared state, the thread is converted
from active to indoubt status.
The conversation with the coordinator and all conversations with downstream
participants are terminated, and message DSNL450I is returned. The resources that
are held by the thread are not released until the indoubt state is resolved. This is
accomplished automatically by the coordinator or by using the command
RECOVER INDOUBT.
When the command is entered at the Db2 subsystem that has a database access
thread servicing requests from a Db2 subsystem that owns the allied thread, the
database access thread is terminated. Any active SQL request (and all later
requests) from the allied thread result in a "resource not available" return code.
Related tasks:
Resolving indoubt units of recovery
Use the DISPLAY PROCEDURE command and the DISPLAY THREAD command
to obtain information about a stored procedure while it is running.
Because native SQL procedures do not run in WLM-established address spaces, the
best way to monitor native SQL procedures is by using the START TRACE
command and specifying accounting class 10, which activates IFCID 239.
Related concepts:
Db2 trace output (Db2 Performance)
Related reference:
-START TRACE (Db2) (Db2 Commands)
-DISPLAY THREAD (Db2) (Db2 Commands)
-DISPLAY PROCEDURE (Db2) (Db2 Commands)
This command can display the following information about stored procedures:
v Status (started, stop-queue, stop-reject, or stop-abend)
v Number of requests that are currently running and queued
v Maximum number of threads that are running a stored procedure load module
and queued
v Count of timed-out SQL CALLs
Procedure
To display information about all stored procedures in all schemas that have been
accessed by Db2 applications:
For example:
-DISPLAY PROCEDURE
Note: To display information about a native SQL procedure, you must run the
procedure in DEBUG mode. If you do not run the native SQL procedure in
DEBUG mode (for example, in a production environment), the DISPLAY
PROCEDURE command will not return output for the procedure.
If you do run the procedure in DEBUG mode, the WLM environment column in
the output contains the WLM ENVIRONMENT FOR DEBUG that you specified
when you created the native SQL procedure. The DISPLAY PROCEDURE output
shows the statistics of native SQL procedures as '0' if the native SQL procedures
are under the effect of a STOP PROCEDURE command.
The following example shows two schemas (PAYROLL and HRPROD) that have
been accessed by Db2 applications. You can also display information about specific
stored procedures.
DSNX940I csect - DISPLAY PROCEDURE REPORT FOLLOWS-
------ SCHEMA=PAYROLL
PROCEDURE STATUS ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
PAYRPRC1
STARTED 0 0 1 0 0 PAYROLL
PAYRPRC2
STOPQUE 0 5 5 3 0 PAYROLL
PAYRPRC3
STARTED 2 0 6 0 0 PAYROLL
USERPRC4
STOPREJ 0 0 1 0 1 SANDBOX
------ SCHEMA=HRPROD
PROCEDURE STATUS ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
HRPRC1
STARTED 0 0 1 0 1 HRPROCS
HRPRC2
STOPREJ 0 0 1 0 0 HRPROCS
DISPLAY PROCEDURE REPORT COMPLETE
DSN9022I = DSNX9COM '-DISPLAY PROC' NORMAL COMPLETION
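The report pairs each procedure name with a statistics line under a schema heading. The following sketch parses a report body of this shape into records; it is an illustration based on the sample output above, not a supported Db2 interface:

```python
def parse_display_procedure(report: str) -> list:
    """Parse DISPLAY PROCEDURE report lines into per-procedure records."""
    rows, schema, pending = [], None, None
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("------ SCHEMA="):
            schema, pending = line.split("=", 1)[1], None
        elif line.startswith("PROCEDURE ") or not line:
            continue  # skip the column-heading line and blanks
        elif pending is None:
            pending = line  # procedure name appears on its own line
        else:
            status, active, qued, maxq, timeout, fail, wlm_env = line.split()
            rows.append({"schema": schema, "procedure": pending,
                         "status": status, "active": int(active),
                         "qued": int(qued), "maxq": int(maxq),
                         "timeout": int(timeout), "fail": int(fail),
                         "wlm_env": wlm_env})
            pending = None
    return rows

# Body lines taken from the sample report in this section.
sample = """------ SCHEMA=PAYROLL
PROCEDURE STATUS ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
PAYRPRC1
STARTED 0 0 1 0 0 PAYROLL
PAYRPRC2
STOPQUE 0 5 5 3 0 PAYROLL"""
rows = parse_display_procedure(sample)
# rows[1] -> the PAYRPRC2 record, with 5 queued requests
```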
Related reference:
-DISPLAY PROCEDURE (Db2) (Db2 Commands)
Issue the DISPLAY THREAD command to display thread information about stored
procedures.
Procedure
The SP status indicates that the thread is executing within the stored procedure. An
SW status indicates that the thread is waiting for the stored procedure to be
scheduled.
Example 2: This example of output from the DISPLAY THREAD command shows
a thread that is executing a native SQL procedure. If you do not specify the
DETAIL option, the output will not include information that is specific to the
stored procedure.
Use the z/OS command DISPLAY WLM to determine the status of an application
environment in which a stored procedure runs.
The output from the DISPLAY WLM command lets you determine whether a
stored procedure can be scheduled in an application environment.
For example, you can issue this command to determine the status of application
environment WLMENV1:
D WLM,APPLENV=WLMENV1
Results
The output indicates that WLMENV1 is available, so WLM can schedule stored
procedures for execution in that environment.
When you make certain changes to a stored procedure or to the JCL startup
procedure for a WLM application environment, you need to refresh the WLM
application environment.
Before you refresh a WLM application environment, ensure that WLM is operating
in goal mode.
Refreshing the WLM environment starts a new instance of each address space that
is active for the WLM environment. Existing address spaces stop when the current
requests that are executing in those address spaces complete.
Refresh the WLM application environment if any of the following situations are
true:
v For external procedures (including external SQL procedures), you changed the
stored procedure logic, the load module, or the CREATE PROCEDURE
definition.
v For a Java stored procedure, you changed the properties that are pointed to by
the JAVAENV data set.
v You changed the JCL startup procedure for the WLM application environment.
Procedure
The address spaces stop when the current requests that are executing in those
address spaces complete.
This command puts the WLM application environment in QUIESCED state.
When the WLM application environment is in QUIESCED state, the stored
procedure requests are queued. If the WLM application environment is
restarted within a certain time, the stored procedures are executed. If a stored
procedure cannot be executed, the CALL statement returns SQL code -471 with
reason code 00E79002.
2. To restart all stored procedures address spaces that are associated with WLM
application environment name, use the following z/OS command:
VARY WLM,APPLENV=name,RESUME
New address spaces start when all JCL changes are established. Until that time,
work requests that use the new address spaces are queued.
Also, you can use the VARY WLM command with the RESUME option when
the WLM application environment is in the STOPPED state due to a failure.
This state might be the result of a failure when starting the address space, or
because WLM detected five abnormal terminations within 10 minutes. When an
application environment is in the STOPPED state, WLM does not schedule
stored procedures for execution in it. If you try to call a stored procedure when
the WLM application environment is in the STOPPED state, the CALL
statement returns SQL code -471 with reason code 00E7900C. After correcting
the condition that caused the failure, you need to restart the application
environment.
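The two SQLCODE -471 reason codes named above call for different operator actions. This sketch (the helper is invented for illustration) maps each reason code to the response the text recommends:

```python
def sqlcode_471_action(reason_code: str) -> str:
    """Map SQLCODE -471 reason codes from this section to operator actions."""
    actions = {
        # WLM application environment is quiesced; requests are queued.
        "00E79002": "wait or restart the environment; queued calls run on restart",
        # WLM application environment is STOPPED (for example, after five
        # abnormal terminations within 10 minutes).
        "00E7900C": "correct the failure, then issue VARY WLM,APPLENV=name,RESUME",
    }
    return actions.get(reason_code, "see Db2 Codes for this reason code")

# Example: action -> the RESUME guidance for a STOPPED environment
action = sqlcode_471_action("00E7900C")
```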
Related concepts:
WLM management of stored procedures (Db2 Installation and Migration)
Related tasks:
Setting up a WLM application environment for stored procedures during
installation (Db2 Installation and Migration)
You can enable automatic refreshes for WLM environments for stored procedures.
Procedure
Add the following DD statement to the startup procedure for the WLM-established
stored procedure address space:
//AUTOREFR DD DUMMY
You have several options for obtaining diagnostic information and debugging
stored procedures, depending on the type of stored procedure.
Procedure
Take the appropriate action, depending on the type of stored procedure that you
use.
What to do next
After developing and testing a stored procedure, you can migrate it from a test
environment to production.
The process that you follow to migrate a stored procedure from a test environment
to production depends on the type of stored procedure that you want to migrate.
The process that you follow also depends on the change management policy of
your site. You can migrate a native SQL procedure, an external SQL procedure, or
an external stored procedure. You also can choose to recompile to create new object
code and a new package on the production server, or you can choose not to
recompile.
Related tasks:
Implementing Db2 stored procedures
Because native SQL procedures do not contain load modules that need to be
link-edited or WLM address spaces that need to be refreshed, you do not need to
determine whether you want to recompile to create new object code and a new
package on the production server.
Procedure
Use IBM Data Studio to migrate external SQL procedures from a test environment
to production.
External SQL procedures are usually built by using IBM Data Studio.
Procedure
Use IBM Data Studio to deploy the stored procedure. The binary deploy capability
in IBM Data Studio promotes external SQL procedures without recompiling. The
binary deploy capability copies all necessary components from one environment to
another and performs the bind in the target environment.
For Java stored procedures, you also can use IBM Data Studio to do a binary
deploy of the stored procedure. The binary deploy capability promotes Java stored
procedures without recompiling. The binary deploy capability copies all necessary
components from one environment to another and performs the bind in the target
environment.
Procedure
Note: Make sure that the schema, collection ID and WLM application
environment correspond to the new environment or code level.
d. Copy the DBRM and bind the DBRM to produce a Db2 package.
Note: Make sure that the collection ID of the BIND statement and
collection ID of the CREATE PROCEDURE statement are the same.
e. Copy the load module and refresh the WLM application environment.
v To migrate the stored procedure and recompile to create new object code and
a new package on the production server:
a. Copy the CREATE PROCEDURE statement.
b. Modify the CREATE PROCEDURE statement to reflect the new schema,
new collection ID and new WLM application environment.
c. Define the stored procedure with the new CREATE PROCEDURE
statement. You can use the IBM-supplied programs DSNTIAD or
DSNTEP2.
Note: Make sure that the schema, collection ID and WLM application
environment correspond to the new environment or code level.
d. Copy the source code.
e. Precompile, compile, and link-edit. This step produces a DBRM and a
load module.
f. Bind the DBRM to produce a Db2 package.
Note: Make sure that the collection ID of the BIND statement and
collection ID of the CREATE PROCEDURE statement are the same.
g. Refresh the WLM application environment.
| Procedure
| To control autonomous procedures:
| 1. Issue a DISPLAY THREAD command to find the status of the autonomous
| procedure and the token for the invoking thread. The token for the thread is
| Results
To see the recommended action for solving a particular problem, enter the selection
number, and then press Enter. This displays the Recommended Action for Selected
Event panel, shown in the following figure.
Figure 32. Recommended action for selected event panel in NetView. In this example, the AR
(USIBMSTODB21) reports the problem, which affects the AS (USIBMSTODB22).
Key Description
▌1▐ The system that is reporting the error. The system that is reporting the
error is always on the left side of the panel. That system name appears
first in the messages. Depending on who is reporting the error, either the
LUNAME or the location name is used.
▌2▐ The system that is affected by the error. The system that is affected by the
error is always displayed to the right of the system that is reporting the
error. The affected system name appears second in the messages.
Depending on what type of system is reporting the error, either the
LUNAME or the location name is used.
If no other system is affected by the error, this system does not appear on
the panel.
▌3▐ Db2 reason code.
Related concepts:
Obtaining Db2 Diagnosis Guide and Reference
Related reference:
IBM Tivoli NetView for z/OS User's Guide
DDF alerts:
Alerts for DDF are displayed on NetView Hardware Monitor panels and are
logged in the hardware monitor database. The following figure is an example of
the Alerts-Static panel in NetView.
Figure 33. Alerts-static panel in NetView. DDF errors are denoted by the resource name AS
(server) and AR (requester). For Db2-only connections, the resource names are RS (server)
and RQ (requester).
Stopping DDF
You can stop the distributed data facility (DDF) if you have SYSOPR authority or
higher.
Procedure
Results
If the distributed data facility has already been stopped, the STOP DDF command
fails and message DSNL002I - DDF IS ALREADY STOPPED appears.
Stop DDF by using the QUIESCE option of the STOP DDF command whenever
possible. This option is the default.
With the QUIESCE option, the STOP DDF command does not complete
until all VTAM or TCP/IP requests have completed. In this case, no
resynchronization work is necessary when you restart DDF. If any indoubt units of
work require resynchronization, the QUIESCE option produces message DSNL035I.
Procedure
Stop DDF by using the FORCE option of the STOP DDF command only when you
must stop DDF quickly.
When DDF is stopped with the FORCE option, and DDF has indoubt
thread responsibilities with remote partners, message DSNL432I, DSNL433I, or
both are issued.
DSNL432I shows the number of threads that DDF has coordination responsibility
over with remote participants who could have indoubt threads. At these
participants, database resources that are unavailable because of the indoubt threads
remain unavailable until DDF is started and resolution occurs.
DSNL433I shows the number of threads that are indoubt locally and need
resolution from remote coordinators. At the DDF location, database resources that
are unavailable because of the indoubt threads remain unavailable until DDF is
started and resolution occurs.
To force the completion of outstanding VTAM or TCP/IP requests, use the FORCE
option, which cancels the threads that are associated with distributed requests.
When the FORCE option is specified with STOP DDF, database access threads in
the prepared state that are waiting for the commit or abort decision from the
coordinator are logically converted to the indoubt state. The conversation with the
coordinator is terminated. If the thread is also a coordinator of downstream
participants, these conversations are terminated. Automatic indoubt resolution is
initiated when DDF is restarted.
Procedure
One way to force DDF to stop is to issue the VTAM VARY NET,INACT command.
This command makes VTAM unavailable and terminates DDF. VTAM forces the
completion of any outstanding VTAM requests immediately.
Procedure
Issue the following VTAM command:
VARY NET,INACT,ID=db2lu
where db2lu is the VTAM LU name for the local Db2 system.
When DDF has stopped, you must issue the following command before you can
issue the START DDF command:
VARY NET,ACT,ID=db2lu
Controlling traces
Several traces are available for problem determination.
To use the trace commands, you must have one of the following types of authority:
v SYSADM or SYSOPR authority
v Authorization to issue start and stop trace commands (the TRACE privilege)
v Authorization to issue the display trace command (the DISPLAY privilege)
Procedure
Select the appropriate command for the action that you want to take. The trace
commands include:
START TRACE
Invokes one or more different types of trace.
DISPLAY TRACE
Displays the trace options that are in effect.
STOP TRACE
Stops any trace that was started by either the START TRACE command or
as a result of the parameters that were specified during installation or
migration.
MODIFY TRACE
Changes the trace events (IFCIDs) that are being traced for a specified
active trace.
You can specify several parameters to further qualify the scope of a trace. You can
trace specific events within a trace type as well as events within specific Db2 plans,
authorization IDs, resource manager IDs, and locations. You can also control where
trace data is sent.
When you install Db2, you can request that any trace type and class start
automatically when Db2 starts.
Related concepts:
Recommendations:
v Do not use the external component trace writer to write traces to the data set.
v Activate all traces during IRLM startup. Use the following command to activate
all traces:
START irlmproc,TRACE=YES
Related information:
Command types and environments in Db2 (Db2 Commands)
For example, if you call a stored procedure from CICS, it runs at CICS priority. If
you call a stored procedure from batch, it runs at batch priority. If you call a stored
procedure from DDF, it runs at DDF priority.
| Before you can use profiles to set special registers, you must create a set of profile
| tables on the Db2 subsystem.
| The profile tables and related indexes are created when you run job DSNTIJSG
| during Db2 installation or migration.
| A complete set of profile tables and related indexes includes the following objects:
| v SYSIBM.DSN_PROFILE_TABLE
| v SYSIBM.DSN_PROFILE_HISTORY
| v SYSIBM.DSN_PROFILE_ATTRIBUTES
| v SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
| v SYSIBM.DSN_PROFILE_TABLE_IX_ALL
| v SYSIBM.DSN_PROFILE_TABLE_IX2_ALL
| v SYSIBM.DSN_PROFILE_ATTRIBUTES_IX_ALL
| You can use profile tables to set the value of certain Db2 special registers that are
| listed in Rules for setting special registers in the profile table.
| Procedure
| To use profiles that modify special register values that can affect the behavior of
| dynamic SQL statements:
| 1. Create a profile by inserting rows in DSN_PROFILE_TABLE. The row defines
| the scope of the profile for the special register setting.
Db2 writes each log record to a disk data set called the active log. When the active
log is full, Db2 copies its contents to a disk or tape data set called the archive log.
That process is called offloading.
| At migration to Db2 12, you cannot start Db2 12 until the BSDS is converted to use
| the 10-byte RBA and LRSN formats. You can convert the BSDS before or during the
| Db2 12 migration process.
Related information:
Reading log records
A unit of recovery is the work that changes Db2 data from one point of consistency
to another. This work is done by a single Db2 DBMS for an application. The point
of consistency (also referred to as sync point or commit point) is a time when all
recoverable data that an application program accesses is consistent with other data.
Figure: A unit of recovery shown on a time line that contains SQL transaction 1
and SQL transaction 2.
For example, a bank transaction might transfer funds from account A to account B.
First, the program subtracts the amount from account A. Next, it adds the amount
to account B. After subtracting the amount from account A, the two accounts are
inconsistent. These accounts are inconsistent until the amount is added to account
B. When both steps are complete, the program can announce a point of consistency
and thereby make the changes visible to other application programs.
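The funds-transfer reasoning above can be sketched in a few lines. This is a minimal in-memory illustration of a unit of recovery, not database code: both updates become visible together at the point of consistency, or neither does.

```python
def transfer(accounts: dict, src: str, dst: str, amount: float) -> None:
    """Move funds between accounts as a single unit of recovery (sketch)."""
    snapshot = dict(accounts)      # point of consistency before the work begins
    try:
        accounts[src] -= amount    # accounts are now inconsistent...
        if accounts[src] < 0:
            raise ValueError("insufficient funds")
        accounts[dst] += amount    # ...until the second update completes
    except Exception:
        accounts.clear()
        accounts.update(snapshot)  # back out to the point of consistency
        raise

accounts = {"A": 100.0, "B": 50.0}
transfer(accounts, "A", "B", 30.0)
# accounts -> {"A": 70.0, "B": 80.0}
```

A failed transfer backs out the partial update, so other work never sees an account that was debited but not credited.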
For a partition-by-growth table space, if a new partition was added in the unit of
recovery, any uncommitted updates can be backed out, but the physical partition is
not deleted.
Figure: A time line showing database updates, a "Begin rollback" event, and the
backing out of the updates.
The possible events that trigger "Begin rollback" in this figure include:
v SQL ROLLBACK statement
v Deadlock (reported as SQLCODE -911)
v Timeout (reported as SQLSTATE 40001)
The effects of inserts, updates, and deletes to large object (LOB) values are backed
out along with all the other changes that were made during the unit of work that
is being rolled back, even if the LOB values that were changed reside in a LOB
table space that has the LOG NO attribute.
An operator or an application can issue the CANCEL THREAD command with the
NOBACKOUT option to cancel long-running threads without backing out data
changes. Db2 backs out changes to catalog and directory tables regardless of the
NOBACKOUT option. As a result, Db2 does not read the log records and does not
write or apply the compensation log records. After CANCEL THREAD
NOBACKOUT processing, Db2 marks all objects that are associated with the
thread as refresh-pending (REFP) and puts the objects in a logical page list (LPL).
The NOBACKOUT request might fail for either of the following two reasons:
v Db2 does not completely back out updates of the catalog or directory (message
DSNI032I with reason 00C900CC).
v The thread is part of a global transaction (message DSNV439I).
Related reference:
REFRESH-pending status (Db2 Utilities)
Installation panels enable you to specify options, such as whether to have dual
active logs (strongly recommended), what media to use for archive log volumes,
and how many log buffers to have.
Related reference:
DSNTIPH: System resource data set names panel (Db2 Installation and
Migration)
Chapter 9. Managing the log and the bootstrap data set 369
How Db2 creates log records
Log records typically go through a standard life cycle.
1. Db2 registers changes to data and significant events in recovery log records.
2. Db2 processes recovery log records and breaks them into segments if necessary.
| 3. Log records are placed sequentially in output log buffers, which are formatted as
| VSAM control intervals (CIs). Each log record is identified by a continuously
| increasing RBA. For the basic 6-byte RBA format, the range is 0 to 2^48 - 1. For
| the extended 10-byte RBA format, the range is 0 to 2^80 - 1. (In a data sharing
| environment, a log record sequence number (LRSN) is also used to identify log
| records.)
4. The CIs are written to a set of predefined disk active log data sets, which are
used sequentially and recycled.
5. As each active log data set becomes full, its contents are automatically offloaded
to a new archive log data set.
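As an illustration of the RBA ranges in step 3, the limits for both formats can be checked with simple arithmetic (this Python sketch is illustrative only and is not part of Db2):

```python
# RBA ranges for the two log formats described above.
BASIC_RBA_BITS = 48       # basic 6-byte RBA: 6 bytes = 48 bits
EXTENDED_RBA_BITS = 80    # extended 10-byte RBA: 10 bytes = 80 bits

basic_max = 2**BASIC_RBA_BITS - 1        # highest basic-format RBA
extended_max = 2**EXTENDED_RBA_BITS - 1  # highest extended-format RBA

# The limits expressed as hex strings, as they appear in Db2 messages:
print(format(basic_max, 'X'))     # FFFFFFFFFFFF
print(format(extended_max, 'X'))  # FFFFFFFFFFFFFFFFFFFF
```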
If you change or create data that is compressed, the data logged is also
compressed. Changes to compressed rows that result from inserts, updates, and
deletes are also logged as compressed data. Updates to compressed indexes are not
logged as compressed data.
Db2 also writes the log buffers to an active log data set when they become full, or
when the write threshold is reached.
When Db2 forces the log buffer to be written (such as at commit time), the same
control interval can be written several times to the same location.
Be sure to set your subsystem parameters so that there are enough log buffers to
avoid the need to wait for a buffer to become available (DSN6LOGP OUTBUFF
parameter). Switching log data sets may also cause a temporary performance
impact when the switch takes place and the associated recovery checkpoint is
taken. This can be minimized by ensuring that the active log data sets are large
enough to avoid frequent switching. In addition, some events can cause log buffers
to be written before the ZPARM-defined threshold has been reached. These events
include, but are not limited to:
v Phase 1 commit
v Abort processing
v GBP-dependent index split
v Mass delete in a data-sharing environment
v Use of GBPCACHE NO
v All log buffers being filled
Consider the probable frequency of these events when you determine how often to
commit changes.
When Db2 is initialized, the active log data sets that are named in the BSDS are
dynamically allocated for exclusive use by Db2 and remain allocated exclusively to
[Figure: The offload process. A triggering event causes Db2 to write to the active log; the offload process then writes the log data to the archive log and records the archive log in the BSDS.]
During the process, Db2 determines which data set to offload. Using the last log
relative byte address (RBA) that was offloaded, as registered in the BSDS, Db2
calculates the log RBA at which to start. Db2 also determines the log RBA at which
to end, from the RBA of the last log record in the data set, and registers that RBA
in the BSDS.
When all active logs become full, the Db2 subsystem runs an offload and halts
processing until the offload is completed. If the offload processing fails when the
active logs are full, Db2 cannot continue doing any work that requires writing to
the log.
Related information:
Recovering from active log failures
The value of the WRITE TO OPER field of the DSNTIPA installation panel
determines whether the request is received. If the value is YES, the request is
preceded by a WTOR (message number DSNJ008E) informing the operator to
prepare an archive log data set for allocating.
The operator can respond by canceling the offload. In that case, if the allocation is
for the first copy of dual archive data sets, the offload is merely delayed until the
next active log data set becomes full. If the allocation is for the second copy, the
archive process switches to single-copy mode, but for the one data set only.
When Db2 switches active logs and finds that the offload task has been active since
the last log switch, it issues the following message to notify the operator of a
possible outstanding tape mount or some other problem that prevents the offload
of the previous active log data set.
DSNJ017E - csect-name WARNING - OFFLOAD TASK HAS BEEN ACTIVE SINCE
date-time AND MAY HAVE STALLED
Db2 continues processing. The operator can cancel and then restart the offload.
Archive logs are always read by using BSAM. The block size of an archive log data
set is a multiple of 4 KB.
Output archive log data sets are dynamically allocated, with names chosen by Db2.
The data set name prefix, block size, unit name, and disk sizes that are needed for
allocation are specified when Db2 is installed, and recorded in the DSNZPxxx
module. You can also choose, at installation time, to have Db2 add a date and time
to the archive log data set name.
Restrictions: Consider the following restrictions for archive log data sets and
volumes:
v You cannot specify specific volumes for new archive logs. If allocation errors
occur, offloading is postponed until the next time offloading is triggered.
v Do not use partitioned data set extended (PDSE) for archive log data. PDSEs are
not supported for archive logs.
Related reference:
DSNTIPA: Archive log data set parameters panel (Db2 Installation and
Migration)
DSNTIPH: System resource data set names panel (Db2 Installation and
Migration)
Archiving to disk offers faster recoverability but is more expensive than archiving
to tape. If you use dual logging, installation panel DSNTIPA enables you to
specify that the primary copy of the archive log goes to disk and the secondary
copy goes to tape.
Dual archive logging increases recovery speed without using as much disk. The
second tape is intended as a backup, or it can be sent to a remote site in
preparation for disaster recovery. To make recovering from the COPY2 archive tape
faster at the remote site, use the installation parameter ARC2FRST to specify that
COPY2 archive log should be read first. Otherwise, Db2 always attempts to read
the primary copy of the archive log data set first.
If you choose to archive to tape, following certain tips can help you avoid
problems.
If the unit name reflects a tape device, Db2 can extend to a maximum of twenty
volumes. Db2 passes a file sequence number of 1 on the catalog request for the
first file on the next volume. Although a file sequence number of 1 might appear
to be an error in the integrated catalog facility catalog, be aware that this situation
causes no problems in Db2 processing.
If you choose to offload to tape, consider adjusting the size of your active log data
sets so that each data set contains the amount of space that can be stored on a
nearly full tape volume. That adjustment minimizes tape handling and volume
mounts, and it maximizes the use of tape resources. However, such an adjustment
is not always necessary.
If you want the active log data set to fit on one tape volume, consider placing a
copy of the BSDS on the same tape volume as the copy of the active log data set.
Adjust the size of the active log data set downward to offset the space that is
required for the BSDS.
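The sizing adjustment described above is simple subtraction. A hypothetical sketch (the capacities and the function name are invented for illustration; they are not Db2 defaults):

```python
def active_log_size_for_tape(tape_capacity_bytes, bsds_copy_bytes=0):
    """Size an active log data set so that one copy of it, plus an
    optional BSDS copy, fills a nearly full tape volume (sketch only)."""
    if bsds_copy_bytes >= tape_capacity_bytes:
        raise ValueError("BSDS copy cannot exceed tape capacity")
    return tape_capacity_bytes - bsds_copy_bytes

# Example with made-up capacities:
tape = 40 * 1024**3   # assume 40 GB of usable space on one tape volume
bsds = 512 * 1024**2  # assume a 512 MB BSDS copy on the same volume
print(active_log_size_for_tape(tape, bsds))
```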
If you choose to archive to disk, following certain tips can help you avoid
problems.
All archive log data sets that are allocated on disk must be cataloged. If you
choose to archive to disk, the field CATALOG DATA of installation panel DSNTIPA
must contain YES. If this field contains NO, and you decide to place archive log
data sets on disk, you receive message DSNJ072E each time an archive log data set
is allocated, although the Db2 subsystem still catalogs the data set.
You can use z/OS DFSMS (Data Facility Storage Management Subsystem) to
manage archive log data sets.
When archiving to disk, Db2 uses the number of online storage volumes for the
specified unit to determine a count of candidate volumes, up to a maximum of 15
volumes. If you are using SMS to direct archive log data set allocation, override
this candidate volume count by specifying YES for the SVOLARC subsystem
parameter. This enables SMS to manage the allocation volume count appropriately
when creating multi-volume disk archive log data sets.
Because SMS requires disk data sets to be cataloged, ensure that the value of the
CATALOG subsystem parameter is YES. If it is not, message DSNJ072E is
returned, and Db2 forces the data set to be cataloged.
Db2 uses the basic sequential access method (BSAM) to read archive logs from
disk, and BSAM supports the use of extended format (EF) data sets.
Ensure that z/OS DFSMS does not alter the LRECL, BLKSIZE, or RECFM of the
archive log data sets. Altering these attributes could result in read errors when Db2
attempts to access the log data.
Attention: Db2 does not issue an error or a warning if you write or alter archive
data to an unreadable format.
Related concepts:
Db2 and DFSMS (Introduction to Db2 for z/OS)
Related reference:
SINGLE VOLUME field (SVOLARC subsystem parameter) (Db2 Installation
and Migration)
CATALOG DATA field (CATALOG subsystem parameter) (Db2 Installation
and Migration)
The length of the retention period (in days), which is passed to the management
system in the JCL parameter RETPD, is determined by the RETENTION PERIOD
field on the DSNTIPA installation panel.
The default for the retention period keeps archive logs forever. Any other retention
period must be long enough to contain as many recovery cycles as you plan for.
For example, if your operating procedures call for a full image copy every sixty
days of the least frequently-copied table space, and you want to keep two
complete image copy cycles on hand at all times, you need an archive log retention
period of at least 120 days. For more than two cycles, you need a correspondingly
longer retention period.
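The retention-period arithmetic in this example can be sketched as follows (the function name and inputs are illustrative, not Db2 parameters):

```python
def archive_retention_days(copy_interval_days, cycles_to_keep):
    """Minimum archive log retention (in days) needed to keep the given
    number of complete image copy cycles on hand, per the example above."""
    return copy_interval_days * cycles_to_keep

# A full image copy of the least frequently copied table space every 60
# days, keeping two complete cycles, needs at least 120 days of archives:
print(archive_retention_days(60, 2))  # 120
```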
Tip: In case of data loss or corruption, retain all Db2 recovery log and image copy
data sets until you can identify the source of the data loss or corruption. You might
need these data sets to aid in nonstandard recovery procedures. On at least a
weekly basis, take an image copy backup of every Db2 table space object that was
updated and retain the Db2 recovery log data sets for a minimum of seven days. If
you detect data loss or corruption, prevent older Db2 recovery log and image copy
data sets from being discarded until the issue is assessed and resolved.
If archive log data sets or tapes are deleted automatically, the operation does not
update the archive log data set inventory in the BSDS. You can update the BSDS
with the change log inventory utility. This update is not required; although
recording old archive logs in the BSDS wastes space, it does no harm, because the
archive log data set inventory wraps and the oldest entries are deleted
automatically.
Related concepts:
Management of the bootstrap data set
Recommendations for changing the BSDS log inventory
Related reference:
DSNTIPA: Archive log data set parameters panel (Db2 Installation and
Migration)
With this option, Db2 work is quiesced after a commit point, and the resulting
point of consistency is captured in the current active log before it is offloaded.
Unlike the QUIESCE utility, ARCHIVE LOG MODE(QUIESCE) does not force all
changed buffers to be written to disk and does not record the log RBA in
SYSIBM.SYSCOPY. It does record the log RBA in the bootstrap data set.
The MODE(QUIESCE) option suspends all new update activity on Db2 up to the
maximum period of time that is specified on the installation panel DSNTIPA. If the
time needed to quiesce is less than the time that is specified, the command
completes successfully; otherwise, the command fails when the time period
expires. This time amount can be overridden when you issue the command, by
using the TIME option:
-ARCHIVE LOG MODE(QUIESCE) TIME(60)
Important: Use of this option during prime time, or when time is critical, can
cause a significant disruption in Db2 availability for all jobs and users that use Db2
resources.
By default, the command is processed asynchronously from the time you submit
the command. (To process the command synchronously with other Db2 commands,
use the WAIT(YES) option with QUIESCE; the z/OS console is then locked from
Db2 command input for the entire QUIESCE period.)
v Jobs and users that only read data can be affected, because they can be waiting
for locks that are held by jobs or users that were suspended.
v New tasks can start, but they are not allowed to update data.
As shown in the following example, the DISPLAY THREAD output issues message
DSNV400I to indicate that a quiesce is in effect:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV400I - ARCHIVE LOG QUIESCE CURRENTLY ACTIVE
DSNV402I - ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
BATCH T * 20 TEPJOB SYSADM MYPLAN 0012 12
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
When all updates are quiesced, the quiesce history record in the BSDS is updated
with the date and time that the active log data sets were truncated, and with the
last-written RBA in the current active log data sets. Db2 truncates the current
active log data sets, switches to the next available active log data sets, and issues
message DSNJ311E, stating that offload started.
If updates cannot be quiesced before the quiesce period expires, Db2 issues
message DSNJ317I, and archive log processing terminates. The current active log
data sets are not truncated and not switched to the next available log data sets,
and offload is not started.
Regardless of whether the quiesce is successful, all suspended users and jobs are
then resumed, and Db2 issues message DSNJ312I, stating that the quiesce is ended
and update activity is resumed.
If ARCHIVE LOG is issued when the current active log is the last available active
log data set, the command is not processed, and Db2 issues this message:
DSNJ319I - csect-name CURRENT ACTIVE LOG DATA SET IS THE LAST
AVAILABLE ACTIVE LOG DATA SET. ARCHIVE LOG PROCESSING WILL
BE TERMINATED.
Related reference:
DSNTIPA: Archive log data set parameters panel (Db2 Installation and
Migration)
-ARCHIVE LOG (Db2) (Db2 Commands)
You must have either SYSADM authority or have been granted the ARCHIVE
privilege.
Procedure
Issue the command -ARCHIVE LOG.
When you issue the preceding command, Db2 truncates the current active log data
sets, runs an asynchronous offload, and updates the BSDS with a record of the
offload. The RBA that is recorded in the BSDS is the beginning of the last complete
log record that is written in the active log data set that is being truncated.
Example
You can use the ARCHIVE LOG command as follows to capture a point of
consistency for the MSTR01 and XUSR17 databases:
-STOP DATABASE (MSTR01,XUSR17)
-ARCHIVE LOG
-START DATABASE (MSTR01,XUSR17)
In this simple example, the STOP command stops activity for the databases before
archiving the log.
In some cases, the offload of an active log might be suspended when something
goes wrong with the offload process, such as a problem with allocation or tape
mounting. If the active logs cannot be offloaded, the Db2 active log data sets
become full and Db2 stops logging.
When you enter the command, Db2 restarts the offload, beginning with the oldest
active log data set and proceeding through all active log data sets that need
offloading. If the offload fails again, you must fix the problem that is causing the
failure before the command can work.
| Adding an active log data set to the active log inventory with
| the SET LOG command
| You can use the SET LOG command to add a new active log data set to the active
| log inventory without stopping and starting Db2.
| Before you begin
| Before you issue the SET LOG command, define and format the new active log
| data sets.
| 1. Use the Access Method Services DEFINE command to define new active log
| data sets.
| 2. Use the DSNJLOGF utility to preformat the new active log data sets.
| If you do not preformat the active logs with the DSNJLOGF utility, the Db2
| database manager needs to preformat them the first time that they are used,
| which might have a performance impact.
| Procedure
| Issue the SET LOG command with the NEWLOG and COPY keywords. If the Db2
| database manager can open the newly defined log data set, the log data set is
| added to the active log inventory in the bootstrap data set (BSDS). The new active
| log data set is immediately available for use without stopping and starting the
| database manager.
| Currently, the database manager uses active log data sets in the reverse order from
| the order in which you add them to the active log inventory with the SET LOG
| command. For example, suppose that the active log inventory contains data sets
| DS01, DS02, and DS03, and you add data set DS04 and then data set DS05. If data
| set DS03 is active, and you issue the ARCHIVE LOG command, the new active log
| becomes DS05. This behavior might change in the future, so schemes for adding
| and switching active logs should not depend on this order.
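The DS01 through DS05 example can be modeled with a short sketch. Per the warning above, this order is an implementation detail that might change, so the model is illustrative only:

```python
def next_active_log(added_in_order):
    """Model the behavior described above: after ARCHIVE LOG, the most
    recently added data set becomes the new active log (sketch only)."""
    return added_in_order[-1] if added_in_order else None

# Inventory DS01-DS03 is in use; DS04 and then DS05 are added with the
# SET LOG command, so DS05 is used first after the next log switch:
print(next_active_log(["DS04", "DS05"]))  # DS05
```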
| Related reference:
| -SET LOG (Db2) (Db2 Commands)
| DSNJLOGF (preformat active log) (Db2 Utilities)
The LOGLOAD value specifies the number of log records that Db2 writes between
checkpoints. The CHKTIME value specifies the number of minutes between
checkpoints.
You specify the initial LOGLOAD and CHKTIME parameter values at installation
time. Also at installation time, you specify which of these values controls when a
checkpoint occurs, or whether both do. If you specify both the LOGLOAD and
CHKTIME options together, when the threshold for one is reached, a system
checkpoint is taken, and the counters for both thresholds are reset. In the following
example, the SET LOG command changes checkpoint scheduling to use both log
records and time:
-SET LOG BOTH CHKTIME(10) LOGLOAD(500000)
You also can use either the LOGLOAD option or the CHKTIME option to initiate
an immediate system checkpoint. For example:
-SET LOG LOGLOAD(0)
-SET LOG CHKTIME(0)
The CHKFREQ value that is altered by the SET LOG command persists only while
Db2 is active. On restart, Db2 uses the CHKFREQ value in the Db2 subsystem
parameter load module.
Related reference:
-SET LOG (Db2) (Db2 Commands)
This command overrules the values that are specified during installation, or in a
previous invocation of the SET ARCHIVE command. The changes that are initiated
by the SET ARCHIVE command are temporary. At restart, Db2 uses the values that
are set during installation.
Related reference:
-SET ARCHIVE (Db2) (Db2 Commands)
If Db2 switches active logs and finds that there has not been a system checkpoint
since the last log switch, it issues the following message to notify the operator that
the system checkpoint processor might not be functioning.
DSNJ016E - csect-name WARNING - SYSTEM CHECKPOINT PROCESSOR MAY
HAVE STALLED. LAST CHECKPOINT WAS TAKEN date-time
Db2 continues processing. This situation can result in a very long restart if logging
continues without a system checkpoint. If Db2 continues logging beyond the
defined checkpoint frequency, quiesce activity and terminate Db2 to minimize the
restart time.
Procedure
To display the most recent checkpoint, use one of the following approaches:
v Issue the DISPLAY LOG command.
v Run the print log map utility (DSNJU004).
Related tasks:
Displaying log information
Related reference:
DSNJU004 (print log map) (Db2 Utilities)
-DISPLAY LOG (Db2) (Db2 Commands)
The checkpoint frequency can be either the number of log records or the minutes
between checkpoints.
Procedure
Issue the DISPLAY LOG command, or use the print log map utility (DSNJU004).
Related reference:
-DISPLAY LOG (Db2) (Db2 Commands)
-SET LOG (Db2) (Db2 Commands)
DSNJU004 (print log map) (Db2 Utilities)
| The log limits are expressed as RBA values in non-data-sharing environments and
| as LRSN timestamps in data-sharing environments. Approximately one year before
| the end of the LRSN is reached, message DSNJ034I is issued to inform you that the
| LRSN is approaching the log limit.
| The rate at which the log RBA value increases through this range depends on the
| logging rate of the Db2 subsystem. When a heavy logging rate is sustained over a
| period of years, the log RBA value can begin to approach the end of the range.
| Important: If the RBA for a non data sharing Db2 subsystem reaches or exceeds
| the hard or soft limit, you must either convert all table spaces and indexes to the
| 10-byte format or use the RBA reset procedure. Enabling data sharing is not
| sufficient to resolve the problem.
| Db2 might issue message DSNB233I to remind you to convert page sets to the
| 10-byte format.
| Procedure
| 1. Determine when the 6-byte RBA and LRSN limits are likely to be reached, by
| using one of the following methods:
| v Message DSNJ032I is issued at the active log switch when the RBA threshold
| is reached. If the RBA exceeds x'F00000000000' for the 6-byte RBA format or
| x'FFFFFFFF000000000000' for the 10-byte RBA format, the message is issued
| with the keyword WARNING and processing continues. If the RBA exceeds
| x'FFFF00000000' for the 6-byte RBA format or x'FFFFFFFFFF0000000000' for
| the 10-byte RBA format, the message is issued with the keyword CRITICAL,
| and Db2 is stopped. To resolve any outstanding units of work, Db2 restarts
| automatically in restart-light mode. Then, Db2 stops again. In this situation,
| you need to restart Db2 in ACCESS(MAINT) mode.
| Important: After the BSDS data sets have been converted to the 10-byte RBA
| or LRSN format, the DSNJ032I message is no longer issued, even if reaching
| the 6-byte RBA or 6-byte LRSN soft limit for table spaces or index spaces in
| basic format is imminent.
| Important: The rate of RBA usage is faster after the BSDS has been
| converted to the 10-byte format.
| v Calculate how much space is left in the log. You can use the print log map
| (DSNJU004) utility to obtain the highest written RBA value in the log.
| Subtract this RBA from x'FFFFFFFFFFFF' for the 6-byte RBA format or
| x'FFFFFFFFFFFFFFFFFFFF' for the 10-byte RBA format to determine how
| much space is left in the log.
| You can use the output for the print log map utility to determine how many
| archive logs are created on an average day. This number multiplied by the
| RBA range of the archive log data sets (ENDRBA minus STARTRBA)
| provides the average number of bytes that are logged per day. Divide this
| value into the space remaining in the log to determine approximately how
| much time is left before the end of the log RBA range is reached. If there is
| less than one year remaining before the end of the log RBA range is reached,
| start planning to reset the log RBA value. If less than three months remain
| before the end of the log RBA range is reached, you need to take immediate
| action to reset the log RBA value.
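The space-remaining estimate described in the second method can be sketched as follows. The hex limits and thresholds are the documented 6-byte values; the workload figures are hypothetical:

```python
BASIC_LOG_END = 0xFFFFFFFFFFFF   # end of the 6-byte RBA range
WARNING_RBA = 0xF00000000000     # DSNJ032I WARNING threshold (6-byte format)
CRITICAL_RBA = 0xFFFF00000000    # DSNJ032I CRITICAL threshold (6-byte format)

def days_until_log_end(highest_written_rba, archives_per_day, avg_archive_span):
    """Estimate days left in the 6-byte RBA range from print log map
    (DSNJU004) output; archive span is ENDRBA minus STARTRBA (sketch)."""
    space_left = BASIC_LOG_END - highest_written_rba
    bytes_logged_per_day = archives_per_day * avg_archive_span
    return space_left / bytes_logged_per_day

# Hypothetical subsystem at the WARNING threshold, creating 10 archive
# logs per day, each spanning 4 GB of log:
days = days_until_log_end(WARNING_RBA, 10, 4 * 1024**3)
print(round(days))  # well under a year: start planning to reset the RBA
```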
| 2. Convert the BSDS to the extended 10-byte format. For instructions, see
| Converting the BSDS to the 10-byte RBA and LRSN.
| In data sharing, if any Db2 member is approaching the logging limit for the
| 6-byte RBA but the LRSN is not approaching the limit of the 6-byte range,
| converting the BSDS of just that member is sufficient to resolve the immediate
| problem and prevent outages. However, if the LRSN is also approaching the
| end of the 6-byte range, you must continue and convert page sets to use the
| 10-byte format before the limit is reached.
| Before you can convert the BSDS, the Db2 subsystem or data sharing member must
| be started in Db2 11 new-function mode. For instructions, see Migrating from
| enabling-new-function mode to new-function mode (Db2 Installation and
| Migration).
| Attention: In Db2 subsystems that are not data sharing members, if Db2 is
| already at risk of reaching the 6-byte RBA limit, it is strongly recommended that
| you first convert all catalog and directory objects, then convert all user objects to
| the 10-byte RBA format, before you convert the BSDS.
| Attention: After the BSDS is converted to the 10-byte format, Db2 stops issuing
| messages to warn you about the risk of reaching the 6-byte RBA or LRSN limits.
| The increased size of all log records also accelerates progress toward the 6-byte
| RBA logging limit.
| In Db2 subsystems that are not data sharing members, always convert all Db2
| catalog, directory, and user objects to use the extended 10-byte RBA format before
| you convert the BSDS, especially if Db2 is close to reaching the logging limit for
| the 6-byte RBA. Failure to convert page sets to the 10-byte RBA format before Db2
| reaches the 6-byte logging limit results in failed updates with reason code
| 00C2026D. No updates are allowed for any object that is still in the 6-byte format.
| In data sharing, if any Db2 member is approaching the logging limit for the 6-byte
| RBA but the LRSN is not approaching the limit of the 6-byte range, converting the
| BSDS of just that member is sufficient to resolve the immediate problem and prevent
| outages. However, if the LRSN is also approaching the end of the 6-byte range,
| you must continue and convert page sets to use the 10-byte format before the limit
| is reached.
| At migration to Db2 12, you cannot start Db2 12 until the BSDS is converted to use
| the 10-byte RBA and LRSN formats. You can convert the BSDS before or during the
| Db2 12 migration process.
| Procedure
| To convert the BSDS to use the extended 10-byte RBA and LRSN format, complete
| the following steps:
| 1. Stop Db2.
| 2. Run job DSNTIJCB. The DSNTIJCB job for each Db2 subsystem or data sharing
| member is located in the prefix.NEW.SDSNSAMP sample library that is
| generated when you run the installation CLIST in MIGRATE mode. Db2 runs
| the DSNJCNVT conversion utility to convert the bootstrap data set records to
| support 10-byte RBA and LRSN fields. For more information, see DSNJCNVT
| (Db2 Utilities).
| 3. Start Db2.
| What to do next
| If you have not already done so, first convert all page sets for the Db2 catalog
| and directory, as described in Convert the BSDS, Db2 catalog, and directory to
| 10-byte RBA and LRSN format (Optional) (Db2 Installation and Migration). Then
| convert all user objects to support the 10-byte RBA and LRSN, as described in
| Converting page sets to the 10-byte RBA or LRSN format. After Db2 reaches the
| hard logging limit, you might not be able to convert the catalog and directory page
| sets.
| You must continuously monitor the RBA and LRSN values until all catalog,
| directory, and user objects are converted to the 10-byte RBA or LRSN format.
| Failure to convert page sets before the 6-byte soft logging limit is reached results in
| failed updates with reason code 00C2026D, and any objects still in the 6-byte
| format become read-only. RBA or LRSN values greater than x'F00000000000'
| indicate that your system is at risk of reaching the 6-byte logging limit.
| For more information about the RBA and LRSN logging limits, see What to do before
| RBA or LRSN limits are reached. For instructions for converting page sets to the
| 10-byte format, see Converting page sets to the 10-byte RBA or LRSN format.
| Related concepts:
| Management of the bootstrap data set
| At migration to Db2 12, you cannot start Db2 12 until the BSDS is converted to use
| the 10-byte RBA and LRSN formats. You can convert the BSDS before or during the
| Db2 12 migration process.
| Tip: In Db2 data sharing, you can get an incremental performance benefit by
| converting the BSDS to the extended 10-byte format before converting the page
| sets for Db2 catalog, directory, and user objects. However, for non-data sharing
| Db2 subsystems, it is best to convert the BSDS at the end of the conversion
| process.
| For instructions, see Converting the BSDS to the 10-byte RBA and LRSN.
| If the RBA or LRSN has already reached the hard limit in a Db2 subsystem, the RBA
| or LRSN reset procedure must be used first, unless all the necessary tables and
| indexes have previously been converted to the extended 10-byte format. If
| database objects are in the basic 6-byte format, they cannot be updated after the
| hard limit is reached.
| Tip: Data pages and space map pages for the work file database use the 10-byte
| format as soon as they are first accessed in Db2 11 (in any migration mode),
| regardless of whether the Db2 subsystem is migrated from DB2 10 or is a new
| installation. However, for migrated subsystems, the Db2 catalog is not updated to
| reflect the format of the work files.
| For more information about how Db2 uses 10-byte format values at migration
| to Db2 11, see The extended 10-byte RBA and LRSN in Db2 11 (Db2 for z/OS
| What's New?).
| Db2 might issue message DSNB233I to remind you to convert page sets to the
| 10-byte format.
| To convert the RBA and LRSN to extended 10-byte format, complete the following
| steps:
| 1. Identify the objects to convert by querying the RBA_FORMAT columns of the
| SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART catalog tables.
| RBA_FORMAT='B', and blank values in those columns, indicate objects in the
| 6-byte format.
| 2. Enable Db2 to create new table spaces and indexes with the RBA or LRSN in
| extended 10-byte format, and convert the RBA for existing table spaces and
| indexes to extended 10-byte format:
| a. Run the installation CLIST and on panel DSNTIPA1 specify UPDATE for
| INSTALL TYPE. On panel DSNTIP7, set the OBJECT CREATE FORMAT
| and UTILITY OBJECT CONVERSION fields to EXTENDED. If any existing
| objects are in extended 10-byte format, you can set the UTILITY OBJECT
| CONVERSION subsystem parameter to NOBASIC to prevent the utilities
| from converting those table spaces back to basic 6-byte format.
| b. Run the updated DSNTIJUZ job to rebuild the subsystem parameter
| (DSNZPxxx) module.
| c. Issue the -SET SYSPARM command or restart Db2.
| 3. Run LOAD REPLACE, REBUILD, or REORG with the
| RBALRSN_CONVERSION keyword specified with the EXTENDED option.
| During utility processing, any objects processed by the utility that are in basic
| 6-byte format are converted to extended 10-byte format. To verify the
| conversion, check the RBA_FORMAT column value of the
| SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART catalog tables. The
| value is 'E' for converted objects.
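The RBA_FORMAT checks above (identify 'B' or blank before conversion, verify 'E' afterward) can be sketched as a small helper; the rows below are stand-ins for SYSIBM.SYSTABLEPART or SYSIBM.SYSINDEXPART query results:

```python
def needs_conversion(rba_format):
    """Per the catalog columns described above: 'B' or blank means the
    object is still in the basic 6-byte format; 'E' means converted."""
    return rba_format in ('B', ' ', '')

# Hypothetical (name, RBA_FORMAT) pairs from a catalog query:
rows = [("TS1", "B"), ("TS2", ""), ("TS3", "E")]
to_convert = [name for name, fmt in rows if needs_conversion(fmt)]
print(to_convert)  # ['TS1', 'TS2']
```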
| Results
| New table spaces and indexes are created in extended format with 10-byte RBA or
| LRSN values, and existing table spaces and indexes are converted to extended
| format with 10-byte RBA or LRSN values.
| What to do next
| After the page sets for all Db2 catalog, directory, and user objects are
| converted, convert the BSDS to the extended 10-byte format. For instructions,
| see Converting the BSDS to the 10-byte RBA and LRSN.
| Related concepts:
| “How RBA and LRSN values are displayed” on page 679
| Related reference:
| DSNTIP7: SQL OBJECT DEFAULTS PANEL 1 (Db2 Installation and Migration)
| DSNJCNVT (Db2 Utilities)
| OBJECT CREATE FORMAT field (OBJECT_CREATE_FORMAT subsystem
| parameter) (Db2 Installation and Migration)
| UTILITY OBJECT CONVERSION field (UTILITY_OBJECT_CONVERSION
| subsystem parameter) (Db2 Installation and Migration)
| Syntax and options of the LOAD control statement (Db2 Utilities)
| Syntax and options of the REBUILD INDEX control statement (Db2 Utilities)
| Syntax and options of the REORG INDEX control statement (Db2 Utilities)
| Syntax and options of the REORG TABLESPACE control statement (Db2
| Utilities)
| Related information:
| DSNB233I (Db2 Messages)
| The recommended response before your Db2 data sharing group reaches the end
| of the 6-byte log RBA range is to convert the BSDS and page sets to use the
| 10-byte format. For details see What to do before RBA or LRSN limits are reached
| and The extended 10-byte RBA and LRSN in Db2 11 (Db2 for z/OS What's New?).
| At migration to Db2 12, you cannot start Db2 12 until the BSDS is converted to use
| the 10-byte RBA and LRSN formats. You can convert the BSDS before or during the
| Db2 12 migration process.
| However, you can continue to use the following procedure in Db2 11 until you
| complete the conversion.
| Procedure
| Db2 11 introduces extended 10-byte RBA and LRSN formats. The recommended
| response before your Db2 subsystem reaches the end of the 6-byte log RBA range
| is to convert the BSDS and page sets to use the 10-byte format. For details see
| What to do before RBA or LRSN limits are reached and The extended 10-byte RBA
| and LRSN in Db2 11 (Db2 for z/OS What's New?).
| At migration to Db2 12, you cannot start Db2 12 until the BSDS is converted to use
| the 10-byte RBA and LRSN formats. You can convert the BSDS before or during the
| Db2 12 migration process.
| Procedure
| To reset the log RBA value in a non-data sharing environment by using the COPY
| utility:
| 1. Drop any user-created indexes on the SYSIBM.SYSCOPY catalog table.
| 2. Alter all of the indexes on the catalog tables so that they have the COPY YES
| attribute by issuing the ALTER INDEX COPY YES statement, and commit the
| changes after every 30 ALTER statements.
| Restriction: Because FlashCopy technology does not reset the log RBA, you
| cannot use the COPY FLASHCOPY option in this situation.
| 16. Re-create any user-created indexes on the SYSIBM.SYSCOPY table that were
| dropped in step 1 of this procedure.
| 17. Verify that the log RBA values were reset:
| a. Run a query against the tables SYSIBM.SYSCOPY,
| SYSIBM.SYSTABLEPART, and SYSIBM.SYSINDEXPART to verify that all
| objects were copied.
| b. Use the DSN1PRNT utility with the FORMAT option to print several
| pages from some of the objects so that you can verify that the PGLOGRBA
| field in those pages is reset to zero. The COPY utility updates the
| PGLOGRBA field and other RBA fields in header pages (page zero), so
| those fields contain non-zero values.
| 18. Stop Db2, and disable the reset RBA function in the COPY utility by following
| the instructions in step 10 and setting SPRMRRBA to '0'.
| 19. Restart Db2 for normal access.
| 20. Alter the Db2 catalog indexes and user-created indexes to have the COPY NO
| attribute by issuing the ALTER INDEX COPY NO statement. Commit the
| changes after every 30 ALTER statements. However, issue these ALTER
| statements over several days, because during this process SYSCOPY and
| SYSLGRNX records are deleted and contention might occur.
| Note: If the RBA fields for an object are not reset, abend 04E RC00C200C1 is
| returned during SQL update, delete, and insert operations. The object also is
| placed in STOPE status. You can use the DSN1COPY utility with the RESET
| option to reset the log RBA values. This two-step process requires copying the
| data out and then back into the specified data sets. Before using DSN1COPY
| with the RESET option, make sure that the object is stopped by issuing the
| command -STOP DB(...) SPACENAM(...).
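The batching rule in steps 2 and 20 (a COMMIT after every 30 ALTER INDEX statements) can be sketched as follows. This is a minimal illustration only; the index names generated here are hypothetical placeholders, not real catalog objects.

```python
# Sketch: emit ALTER INDEX statements with a COMMIT after every 30
# statements, as steps 2 and 20 recommend. Index names are hypothetical.
def alter_index_script(indexes, copy_attr="NO", batch=30):
    lines = []
    for i, ix in enumerate(indexes, start=1):
        lines.append(f"ALTER INDEX {ix} COPY {copy_attr};")
        if i % batch == 0:
            lines.append("COMMIT;")
    if lines and lines[-1] != "COMMIT;":
        lines.append("COMMIT;")  # commit the final partial batch
    return "\n".join(lines)

# 64 hypothetical indexes produce commits after statements 30, 60, and 64.
script = alter_index_script([f"ADMF001.IX{n:03d}" for n in range(1, 65)])
```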
| Related concepts:
| Expanded RBA and LRSN log records (Db2 for z/OS What's New?)
| Related tasks:
| What to do before RBA or LRSN limits are reached
| Related reference:
| COPY (Db2 Utilities)
Procedure
Related reference:
-DISPLAY LOG (Db2) (Db2 Commands)
To recover units of recovery, you need log records at least until all current actions
are completed. If Db2 terminates abnormally, restart requires all log records since
the previous checkpoint or the beginning of the oldest UR that was active at the
abend, whichever is first on the log.
To tell whether all units of recovery are complete, read the status counts in the Db2
restart messages. If all counts are zero, no unit-of-recovery actions are pending. If
To recover databases, you need log records and image copies of table spaces. How
long you keep log records depends on how often you make those image copies. If
you do not already know what records you want to keep, see Backing up and
recovering your data for suggestions about recovery cycles.
You can discard archive log data sets based on their log RBA ranges. The earliest
log record that you need to retain is identified by a log RBA. You can discard any
archive log data sets that contain only records with log RBAs that are lower than
that RBA.
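The RBA comparison described above can be sketched as follows. The data set names and RBA ranges are hypothetical examples; real values come from the print log map (DSNJU004) report.

```python
# Sketch: an archive log data set is discardable when every log record it
# contains has an RBA lower than the earliest RBA that must be retained.
# Names and RBA ranges below are hypothetical.
def discardable(archives, min_needed_rba):
    needed = int(min_needed_rba, 16)      # RBAs are printed as hex strings
    return [name for name, start_rba, end_rba in archives
            if int(end_rba, 16) < needed]

archives = [
    ("DSNCAT.ARCHLOG1.A0000001", "000000000000", "00000000FFFF"),
    ("DSNCAT.ARCHLOG1.A0000002", "000000010000", "00000001FFFF"),
    ("DSNCAT.ARCHLOG1.A0000003", "000000020000", "00000002FFFF"),
]
old = discardable(archives, "000000018000")
```

Only the first data set ends below the minimum needed RBA, so only it can be discarded.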
Procedure
c. Reissue the DISPLAY THREAD TYPE(INDOUBT) command to ensure that the
indoubt units have been recovered. When no indoubt units of recovery
remain, continue with step 2.
2. Find the startup log RBA. Keep at least all log records with log RBAs greater
than the one that is given in this message, which is issued at restart:
DSNR003I RESTART...PRIOR CHECKPOINT RBA=xxxxxxxxxxxx
If you suspended Db2 activity while performing step 1, restart Db2 now.
3. Find the minimum log RBA that is needed. Suppose that you have determined
to keep some number of complete image copy cycles of your least-frequently
copied table space. You now need to find the log RBA of the earliest full image
copy that you want to keep.
a. If you have any table spaces that were created so recently that no full image
copies of them have ever been taken, take full image copies of them. If you
do not take image copies of them, and you discard the archive logs that log
their creation, Db2 can never recover them.
The following SQL statement generates a list of the table spaces for
which no full image copy is available:
SELECT X.DBNAME, X.NAME, X.CREATOR, X.NTABLES, X.PARTITIONS
  FROM SYSIBM.SYSTABLESPACE X
  WHERE NOT EXISTS (SELECT * FROM SYSIBM.SYSCOPY Y
                    WHERE X.NAME = Y.TSNAME
                    AND X.DBNAME = Y.DBNAME
                    AND Y.ICTYPE = 'F')
  ORDER BY 1, 3, 2;
The statement generates a list of all databases and the table spaces within
them, ordered by database name, creator, and table space name.
c. Find the START_RBA value for the earliest full image copy (ICTYPE=F) that
you intend to keep. If your least-frequently copied table space is partitioned,
and you take full image copies by partition, use the earliest date for all the
partitions.
If you plan to discard records from SYSIBM.SYSCOPY and
SYSIBM.SYSLGRNX, note the date of the earliest image copy that you want
to keep.
4. Use job DSNTIJIC to copy all catalog and directory table spaces. Doing so
ensures that copies of these table spaces are included in the range of log
records that you plan to keep.
5. Locate and discard archive log volumes. Now that you know the minimum log
RBA, from step 3, suppose that you want to find archive log volumes that
contain only log records earlier than that. Proceed as follows:
a. Execute the print log map utility (DSNJU004) to print the contents of the
BSDS.
The BSDS is defined with access method services when Db2 is installed and is
allocated by a DD statement in the Db2 startup procedure. It is deallocated when
Db2 terminates.
The active logs are first registered in the BSDS by job DSNTIJID, during Db2
| installation. To add a new active log data set or to delete an active log data set, you
| can use DSNJU003, the change log inventory utility. To use DSNJU003, you
| must first stop Db2 and then restart it after the DSNJU003 job completes.
| Alternatively, you can add a new active log data set without stopping and starting
| Db2 by using the SET LOG command. Note that the SET LOG command does not
| support deletion of an active log data set.
Archive log data sets are dynamically allocated. When one is allocated, the data set
name is registered in the BSDS in separate entries for each volume on which the
archive log resides. The list of archive log data sets expands as archives are added,
and the list wraps around when a user-determined number of entries is reached.
The maximum number of archive log data sets that Db2 keeps in the BSDS
depends on the value of the MAXARCH subsystem parameter. The allowable
values for the MAXARCH subsystem parameter are between 10 and 10,000
(inclusive). If two copies of the archive log are being created, the BSDS will contain
records for both copies, resulting in between 20 and 20,000 entries.
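The entry-count arithmetic above can be sketched as a small helper. This is an illustration of the stated bounds only, not a description of the BSDS record layout.

```python
# Sketch: upper bound on the archive log entries that the BSDS keeps.
# MAXARCH must be between 10 and 10,000; dual archiving doubles the count.
def max_bsds_archive_entries(maxarch, dual_copy=False):
    if not 10 <= maxarch <= 10_000:
        raise ValueError("MAXARCH must be between 10 and 10,000")
    return maxarch * (2 if dual_copy else 1)
```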
You can manage the inventory of archive log data sets with the change log
inventory utility (DSNJU003).
A wide variety of tape management systems exist, along with the opportunity for
external manual overrides of retention periods. Because of that, Db2 does not have
an automated method to delete the archive log data sets from the BSDS inventory
of archive log data sets. Thus, the information about an archive log data set can be
in the BSDS long after the archive log data set is scratched by a tape management
system following the expiration of the retention period of the data set.
Conversely, the maximum number of archive log data sets might be exceeded, and
the data from the BSDS might be dropped long before the data set reaches its
expiration date.
If you specified at installation that archive log data sets are to be cataloged when
allocated, the BSDS points to the integrated catalog facility catalog for the
information that is needed for later allocations. Otherwise, the BSDS entries for
each volume register the volume serial number and unit information that is needed
for later allocation.
| The BSDS can be converted to use extended 10-byte RBA and LRSN format records
| any time after migration to Db2 11 new function mode. The BSDS must be
| converted to use extended 10-byte RBA and LRSN format records before Db2 is
| started in Db2 12.
Related concepts:
Automatic archive log deletion
Related tasks:
Convert the BSDS, Db2 catalog, and directory to 10-byte RBA and LRSN
format (Optional) (Db2 Installation and Migration)
Related reference:
| DSNJCNVT (Db2 Utilities)
| DSNJCNVB (Db2 Utilities)
DSNJU003 (change log inventory) (Db2 Utilities)
-SET LOG (Db2) (Db2 Commands)
Procedure
If the archive log is on tape, the BSDS is the first file on the first output volume. If
the archive log is on disk, the BSDS copy is a separate file, which could reside on a
separate volume.
The data set names of the BSDS copy and the archive log are the same, except that
the first character of the last data set name qualifier in the BSDS name is B instead
of A, as in the following example:
Archive log name
DSNCAT.ARCHLOG1.A0000001
BSDS copy name
DSNCAT.ARCHLOG1.B0000001
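The naming rule above can be sketched as a short helper that derives the BSDS copy name from an archive log data set name. A minimal sketch:

```python
# Sketch: the BSDS copy name is the archive log name with the first
# character of the last data set name qualifier changed from A to B.
def bsds_copy_name(archive_name):
    quals = archive_name.split(".")
    if not quals[-1].startswith("A"):
        raise ValueError("expected the last qualifier to start with 'A'")
    quals[-1] = "B" + quals[-1][1:]
    return ".".join(quals)
```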
If a read error occurs while copying the BSDS, the copy is not created. Message
DSNJ125I is issued, and the offload to the new archive log data set continues
without the BSDS copy.
The utility DSNJU004, print log map, lists the information that is stored in the
BSDS.
Related reference:
DSNJU004 (print log map) (Db2 Utilities)
v Copy active log data sets to newly allocated data sets, as when providing larger
active log allocations.
v Move log data sets to other devices.
v Recover a damaged BSDS.
v Discard outdated archive log data sets.
v Create or cancel control records for conditional restart.
v Add or change the DDF communication record.
You can change the BSDS by running the Db2 batch change log inventory
(DSNJU003) utility. You can only run this utility when Db2 is inactive.
You can copy an active log data set using the access method services IDCAMS
REPRO statement. The copy can be performed only when Db2 is down, because
Db2 allocates the active log data sets as exclusive (DISP=OLD) at Db2 startup.
Related reference:
DSNJU003 (change log inventory) (Db2 Utilities)
Related information:
REPRO command (DFSMS Access Method Services for Catalogs)
The term object, used in any discussion of restarting Db2 after termination, refers to
any database, table space, or index space.
Related concepts:
Maintaining consistency across multiple systems
Related tasks:
Backing up and recovering your data
Methods of restarting
Db2 can restart in several different ways. Some options are based on how Db2
terminated or what your environment is.
Types of termination
Db2 terminates normally in response to the STOP DB2 command. If Db2 stops for
any other reason, the termination is considered abnormal.
Normal termination
In a normal termination, Db2 stops all activity in an orderly way.
You can use either STOP DB2 MODE (QUIESCE) or STOP DB2 MODE (FORCE).
The effects of each command are compared in the following table.
Table 42. Termination using QUIESCE and FORCE
Thread type        QUIESCE             FORCE
Active threads     Run to completion   Roll back
New threads        Permitted           Not permitted
New connections    Not permitted       Not permitted
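Table 42 can be read as a simple lookup from thread type and stop mode to the resulting behavior; a minimal sketch:

```python
# Sketch: the effect of each STOP DB2 mode on each thread type,
# encoded directly from Table 42.
STOP_MODE_EFFECT = {
    ("Active threads", "QUIESCE"): "Run to completion",
    ("Active threads", "FORCE"): "Roll back",
    ("New threads", "QUIESCE"): "Permitted",
    ("New threads", "FORCE"): "Not permitted",
    ("New connections", "QUIESCE"): "Not permitted",
    ("New connections", "FORCE"): "Not permitted",
}
```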
You can use either command to prevent new applications from connecting to Db2.
When you issue the STOP DB2 MODE(QUIESCE) command, current threads can
run to completion, and new threads can be allocated to an application that is
running.
With IMS and CICS, STOP DB2 MODE(QUIESCE) allows a current thread to run
only to the end of the unit of recovery, unless either of the following conditions is
true:
v Open, held cursors exist.
v Special registers are not in their original state.
With CICS, QUIESCE mode stops the CICS attachment facility, so an active task
might not necessarily run to completion.
For example, assume that a CICS transaction opens no cursors that are declared
WITH HOLD and modifies no special registers, as follows:
EXEC SQL
.
. ← -STOP DB2 MODE(QUIESCE) issued here
.
.
SYNCPOINT
.
.
.
EXEC SQL ← This receives an AETA abend
When you issue the command STOP DB2 MODE(FORCE), no new threads are
allocated, and work on existing threads is rolled back.
A data object might be left in an inconsistent state, even after a shutdown with
mode QUIESCE, if it was made unavailable by the command STOP DATABASE, or
if Db2 recognized a problem with the object. MODE (QUIESCE) does not wait for
asynchronous tasks that are not associated with any thread to complete before it
stops Db2. This can result in data commands such as STOP DATABASE and
START DATABASE having outstanding units of recovery when Db2 stops. These
outstanding units of recovery become inflight units of recovery when Db2 is
restarted; then they are returned to their original states.
An abend can leave data in an inconsistent state for any of the following reasons:
v Units of recovery might be interrupted before reaching a point of consistency.
v Committed data might not be written to external media.
v Uncommitted data might be written to external media.
After Db2 is initialized, the restart process goes through four phases, which are
described in the following sections:
“Phase 1: Log initialization” on page 401
“Phase 2: Current status rebuild” on page 402
“Phase 3: Forward log recovery” on page 403
“Phase 4: Backward log recovery” on page 404
At the end of the fourth phase of recovery, a checkpoint is taken and committed
changes are reflected in the data.
Application programs that do not commit often enough cause long-running units
of recovery (URs). These long-running URs might be inflight after a Db2 failure.
Inflight URs can extend Db2 restart time. You can restart Db2 more quickly by
postponing the backout of long-running URs. Installation options LIMIT
BACKOUT and BACKOUT DURATION establish what work to delay during
restart.
If your Db2 subsystem has the UR checkpoint count option enabled, Db2 generates
console message DSNR035I and trace records for IFCID 0313 to inform you about
long-running URs. The UR checkpoint count option is enabled at installation time,
through field UR CHECK FREQ on panel DSNTIPL.
If your Db2 subsystem has the UR log threshold option enabled, Db2 generates
console message DSNB260I when an inflight UR writes more than the
installation-defined number of log records. Db2 also generates trace records for
IFCID 0313 to inform you about these long-running URs. The UR log threshold
option is established at installation time, through field UR LOG WRITE CHECK on
panel DSNTIPL.
Restart of large object (LOB) table spaces is like restart of other table spaces. LOB
table spaces that are defined with LOG NO do not log LOB data, but they log
enough control information (and follow a force-at-commit policy) so that they can
restart without loss of data integrity.
After Db2 has gone through a group or normal restart that involves group buffer
pool (GBP) failure, group buffer pool recovery pending (GRECP) can be
automatically initiated for all objects except the object that is explicitly deferred
during restart (ZPARM defer), or the object that is associated with the indoubt or
postponed-abort UR.
Related reference:
DSNTIPL: Active log data set parameters (Db2 Installation and Migration)
In phase 1:
1. Db2 compares the high-level qualifier of the integrated catalog facility catalog
name that is in the BSDS with the corresponding qualifier of the name in the
current subsystem parameter module (DSNZPxxx).
v If they are equal, processing continues with step 2 on page 402.
v If they are not equal, Db2 terminates with this message:
DSNJ130I ICF CATALOG NAME IN BSDS
DOES NOT AGREE WITH DSNZPARM.
BSDS CATALOG NAME=aaaaa,
DSNZPARM CATALOG NAME=bbbbb
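The phase-1 comparison above can be sketched as follows. The catalog names in the example are hypothetical, and the message text is abbreviated from the DSNJ130I format shown above.

```python
# Sketch: compare the high-level qualifier of the ICF catalog name in the
# BSDS with the one in the DSNZPxxx module; a mismatch ends restart with
# message DSNJ130I. Catalog names here are hypothetical.
def check_catalog_hlq(bsds_catalog, zparm_catalog):
    if bsds_catalog.split(".")[0] == zparm_catalog.split(".")[0]:
        return None                      # qualifiers agree; restart continues
    return ("DSNJ130I ICF CATALOG NAME IN BSDS DOES NOT AGREE WITH "
            f"DSNZPARM. BSDS CATALOG NAME={bsds_catalog}, "
            f"DSNZPARM CATALOG NAME={zparm_catalog}")
```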
In phase 2:
1. Db2 checks the BSDS to find the log RBA of the last complete checkpoint before
termination.
During phase 2, no database changes are made, nor are any units of recovery
completed. Db2 determines what processing is required by phase 3, forward log
recovery, before access to databases is allowed.
Related reference:
-SET LOG (Db2) (Db2 Commands)
DSNTIPL: Active log data set parameters (Db2 Installation and Migration)
Restart time is longer when lock information needs to be recovered during a group
restart, because Db2 needs to go back to the earliest begin_UR for an inflight UR
belonging to that subsystem. This is necessary to rebuild the locks that the
member obtained during the inflight UR. (A normal restart goes back only as far as the
earliest RBA that is needed for database writes or is associated with the begin_UR
of indoubt units of recovery.)
If Db2 encounters a problem while applying log records to an object during phase
3, the affected pages are placed in the logical page list. Message DSNI001I is issued
once per page set or partition, and message DSNB250E is issued once per page.
Restart processing continues.
In phase 4:
1. Db2 scans the log backward, starting at the current end. The scan continues
until the earliest “Begin Unit of Recovery” record for any outstanding inflight
or in-abort unit of recovery.
If Db2 encounters a problem while applying a log record to an object during phase
4, the affected pages are placed in the logical page list. Message DSNI001I is issued
once per page set or partition, and message DSNB250E is issued once per page.
Restart processing continues.
Automatic restart
If you run Db2 in a sysplex, you can have the automatic restart function of z/OS
automatically restart Db2 or IRLM after a failure.
When Db2 or IRLM stops abnormally, z/OS determines whether z/OS failed too,
and where Db2 or IRLM should be restarted. It then restarts Db2 or IRLM.
You must have Db2 installed with a command prefix scope of S to take advantage
of automatic restart.
When certain critical resources are lost, restart includes additional processing to
recover and rebuild those resources. This process is called group restart.
Related concepts:
Group restart phases (Db2 Data Sharing Planning and Administration)
Therefore, you still need to perform frequent commits to limit the distance that
undo processing might need to go backward on the log to find the beginning of
the unit of recovery. If a transaction that is not logged does not commit frequently,
and it persists long enough to qualify as a long-running unit of recovery, it is
reported in message DSNR035I.
If, during restart, you need to undo work that has not been logged because of the
NOT LOGGED attribute, the table space loses its data integrity, and therefore must
be recovered. Recovery can be accomplished by using the RECOVER utility or by
reinserting the appropriate data. For example, a summary table can be re-created
by one or more INSERT statements; a materialized query table can be rebuilt by
using a REFRESH TABLE SQL statement.
To mark the need for recovery, the table space or partition is marked with
RECOVER-pending status. To prevent any access to the corrupt data, the table
space or partition is placed in the LPL. When undo processing places a table space
or partition in the logical page list (LPL) and marks it with RECOVER-pending
status, it also places all of the updated indexes on all tables in the table space in
the LPL. The corresponding partitions of data-partitioned secondary indexes
(DPSIs) are placed in the LPL, which prevents other processes that use index-only
access from seeing data whose integrity is in doubt. These indexes are also marked
with REBUILD-pending status.
After restart, when Db2 is operational, if undo processing is needed for a unit of
recovery in which modifications were made to the table space that was not logged,
the entire table space or partition is placed in the LPL, and the table space is
marked with RECOVER-pending status. This can happen, for example, as a result
of a rollback, abort, trigger error, or duplicate key or referential constraint
violation. The LPL ensures that no concurrently running agent can see any of the
data whose integrity is in doubt.
Avoid duplicate key or referential constraint violations in table spaces that are not
logged because the result is an unavailable table space that requires manual action.
When a table space or partition is placed in the LPL because undo processing is
needed for a table space that is not logged, either at restart time or during rollback
Conditional restart
A conditional restart is a Db2 restart that is directed by a user-defined conditional
restart control record (CRCR).
If you want to skip some portion of the log processing during Db2 restart, you can
use a conditional restart. However, if a conditional restart skips any database
change log records, data in the associated objects becomes inconsistent, and any
attempt to process them for normal operations might cause unpredictable results.
The only operations that can safely be performed on the objects are recovery to a
prior point of consistency, total replacement, or dropping.
In unusual cases, you might choose to make inconsistent objects available for use
without recovering them. For example, the only inconsistent object might be a table
space that is dropped as soon as Db2 is restarted, or the Db2 subsystem might be
used only for testing application programs that are still under development. In
cases like those, where data consistency is not critical, normal recovery operations
can be partially or fully bypassed by using conditional restart control records in
the BSDS.
Cold starts and conditional restarts that skip forward recovery can cause additional
data inconsistency within identity columns and sequence objects. After such
restarts, Db2 might assign duplicate identity column values and create gaps in
identity column sequences.
Related concepts:
Restarting a member with conditions (Db2 Data Sharing Planning and
Administration)
Related reference:
DSNJU003 (change log inventory) (Db2 Utilities)
Procedure
Restarting automatically
You control how automatic restart works by using automatic restart policies.
When the automatic restart function is active, the default action is to restart the
subsystems when they fail. If this default action is not what you want, then you
must create a policy that defines the action that you want taken.
Procedure
To create a policy:
When you defer restart of an object, Db2 puts pages that are necessary for restart
of the object in the logical page list (LPL). Only those pages are inaccessible; the
rest of the object can still be accessed after restart.
Procedure
For example, create the conditional restart record by using the data complete LRSN
as the CRESTART ENDRBA, ENDLRSN, or SYSPITR log truncation point for the
change log inventory utility, DSNJU003. The data complete LRSN is externalized in
the DSNU1614I message that is issued by the BACKUP SYSTEM utility when a
system-level backup completes.
You can use the print log map utility, DSNJU004, to print the information for a
system-level backup. This information includes the data complete LRSN. Using the
DSNJU004 utility is beneficial if you did not preserve the job output or the log that
contains the DSNU1614I message.
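Building the DSNJU003 control statement from the data complete LRSN can be sketched as follows. The LRSN value in the example is a hypothetical placeholder; the real value comes from message DSNU1614I or the DSNJU004 report.

```python
# Sketch: build a DSNJU003 CRESTART control statement that uses the data
# complete LRSN as the SYSPITR log truncation point. The LRSN value used
# in the example below is hypothetical.
def crestart_statement(data_complete_lrsn):
    return f"CRESTART CREATE,SYSPITR={data_complete_lrsn}"

stmt = crestart_statement("00C9FA44D7A2")
```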
A conditional restart record that specifies left truncation of the log causes any
postponed-abort units of recovery that began earlier than the truncation RBA to
end without resolution. The combination of unresolved postponed-abort units of
recovery can cause more records than requested by the BACKODUR system
parameter to be processed. The left truncation RBA takes precedence over
BACKODUR in this case.
Be careful about doing a conditional restart that discards log records. If the
discarded log records contain information from an image copy of the Db2
directory, a future execution of the RECOVER utility on the directory fails.
You can use the utility DSN1LOGP to read information about checkpoints and
conditional restart control records.
Related reference:
DSN1LOGP (Db2 Utilities)
If you specify LBACKOUT = YES or LIGHT, you must use the RECOVER
POSTPONED command to resolve postponed units of recovery.
Procedure
Use the RECOVER POSTPONED command. You cannot specify a single unit of
work for resolution. This command might take several hours to complete
depending on the content of the long-running job.
In some circumstances, you can elect to use the CANCEL option of the RECOVER
POSTPONED command. This option leaves the objects in an inconsistent state
(REFP) that you must resolve before using the objects. However, you might choose
the CANCEL option for the following reasons:
v You determine that the complete recovery of the postponed units of recovery
will take more time to complete than you have available. You also determine it
is faster to either recover the objects to a prior point in time or run the LOAD
utility with the REPLACE option.
v You want to replace the existing data in the object with new data.
v You decide to drop the object. To drop the object successfully, complete the
following steps:
1. Issue the RECOVER POSTPONED command with the CANCEL option.
2. Issue the DROP TABLESPACE statement.
v You do not have the Db2 logs to successfully recover the postponed units of
recovery.
Example
Procedure
If data in more than one subsystem is to be consistent, all update operations at all
subsystems for a single logical unit of work must either be committed or backed
out.
When these actions update recoverable resources, the commit process ensures that
either all the effects of the logical unit of work persist, or none of the effects
persist. The commit process ensures this outcome despite component, system, or
communications failures.
The following figure illustrates the two-phase commit process. Events in the
coordinator (IMS, CICS, or Db2) are shown on the upper line, events in the
participant on the lower line.
Figure 38. Time line illustrating a commit that is coordinated with another subsystem
The numbers below are keyed to the timeline in the figure. The resultant state of
the update operations at the participant is shown between the two lines.
1. The data in the coordinator is at a point of consistency.
2. An application program in the coordinator calls the participant to update
some data, by executing an SQL statement.
3. This starts a unit of recovery in the participant.
4. Processing continues in the coordinator until an application synchronization
point is reached.
5. The coordinator then starts commit processing. IMS can do that by using a
DL/I CHKP call, a fast path SYNC call, a GET UNIQUE call to the I/O PCB,
or a normal application termination. CICS uses a SYNCPOINT command or a
normal application termination. A Db2 application starts commit processing
by an SQL COMMIT statement or by normal termination. Phase 1 of commit
processing begins.
6. The coordinator informs the participant that it is to prepare for commit. The
participant begins phase 1 processing.
7. The participant successfully completes phase 1, writes this fact in its log, and
notifies the coordinator.
8. The coordinator receives the notification.
9. The coordinator successfully completes its phase 1 processing. Now both
subsystems agree to commit the data changes because both have completed
phase 1 and could recover from any failure. The coordinator records on its log
the instant of commit—the irrevocable decision of the two subsystems to make
the changes.
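The outcome of the exchange described in steps 1 through 9 can be sketched as a much-simplified model. Real subsystems log every step so that they can recover from a failure at any point; this sketch shows only the voting logic.

```python
# Sketch: simplified two-phase commit outcome. In phase 1 every
# participant votes on whether it can commit; in phase 2 the
# coordinator's decision is propagated to all participants.
def two_phase_commit(votes):
    # Phase 1: the unit of work commits only if all participants prepared.
    decision = "commit" if all(votes.values()) else "rollback"
    # Phase 2: every participant applies the same decision.
    return {name: decision for name in votes}
```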
The coordinator of a unit of work that involves two or more other DBMSs must
ensure that all systems remain consistent. After the first phase of the two-phase
commit process, the Db2 coordinator waits for the other participants to indicate
that they can commit the unit of work. If all systems are able, the Db2 coordinator
sends the commit decision and each system commits the unit of work.
If even one system indicates that it cannot commit, the Db2 coordinator
communicates the decision to roll back the unit of work at all systems. This
process ensures that data among multiple DBMSs remains consistent. When Db2 is
the participant, it follows the decision of the coordinator, whether the coordinator
is another Db2 or another DBMS.
Db2 is always the participant when interacting with IMS, CICS, or WebSphere
Application Server systems. However, Db2 can also serve as the coordinator for
other DBMSs or for other Db2 subsystems in the same unit of work. For example,
if Db2 receives a request from a coordinating system that also requires data
manipulation on another system, Db2 propagates the unit of work to the other
system and serves as the coordinator for that system.
In the following figure, DB2A is the participant for an IMS transaction, but Db2
becomes the coordinator for the two database servers (AS1 and AS2), for DB2B,
and for its respective Db2 servers (DB2C, DB2D, and DB2E).
[Figure: DB2A as participant for IMS/CICS, and coordinator for servers AS1 and
AS2, DB2B, and Db2 servers DB2C, DB2D, and DB2E]
If the connection between DB2A and the coordinating IMS system fails, the
connection becomes an indoubt thread. However, DB2A connections to the other
systems are still waiting and are not considered indoubt. Automatic recovery
occurs to resolve the indoubt thread. When the thread is recovered, the unit of
work commits or rolls back, and this action is propagated to the other systems that
are involved in the unit of work.
The following figure shows an example of a multi-site update with one coordinator
and two participants.
[Figure: Multi-site update with one coordinator and two participants; Phase 1
(Prepare) and Phase 2 (Commit) are shown on a time line]
The following process describes each action that the figure illustrates.
Phase 1
1. When an application commits a logical unit of work, it signals the Db2
coordinator. The coordinator starts the commit process by sending
messages to the participants to determine whether they can commit.
2. A participant (Participant 1) that is willing to let the logical unit of work
be committed, and which has updated recoverable resources, writes a
Important: If you try to resolve any indoubt threads manually, you need to know
whether the participants committed or rolled back their units of work. With this
information, you can make an appropriate decision regarding processing at your
site.
For certain units of recovery, Db2 has enough information to make the decision.
For others, Db2 must get information from the coordinator after the connection is
re-established.
Figure 41. Time line illustrating a commit that is coordinated with another subsystem
The phases of restart and recovery deal with that information as follows:
Phase 1: Log initialization
This phase proceeds as described in “Phase 1: Log initialization” on page
401.
Phase 2: Current status rebuild
While reading the log, Db2 identifies:
v The coordinator and all participants for every unit of recovery.
v All units of recovery that are outstanding and their statuses (indoubt,
in-commit, in-abort, or inflight, as described under “Consistency after
termination or failure” on page 420).
Phase 3: Forward log recovery
Db2 makes all database changes for each indoubt unit of recovery and
locks the data to prevent access to it after restart. Later, when an indoubt
unit of recovery is resolved, processing is completed in one of these ways:
v For the ABORT option of the RECOVER INDOUBT command, Db2
reads and processes the log, reversing all changes.
v For the COMMIT option of the RECOVER INDOUBT command, Db2
reads the log but does not process the records because all changes have
been made.
At the end of this phase, indoubt activity is reflected in the database as
though the decision was made to commit the activity, but the activity is
not yet committed. The data is locked and cannot be used until Db2
recognizes and acts on the indoubt decision. (For a description of indoubt
units of recovery, see “Resolving indoubt units of recovery” on page 422.)
Phase 4: Backward log recovery
This phase reverses changes that are performed for inflight or in-abort
units of recovery. At the end of this phase, interrupted inflight and in-abort
changes are removed from the database (the data is consistent and can be
used) or removal of the changes is postponed (the data is inconsistent and
unavailable).
If conditional restart is performed when Db2 is acting together with other systems,
the following actions occur:
1. All information about another coordinator and other participants that are
known to Db2 is displayed by messages DSNL438I and DSNL439I.
2. This information is purged. Therefore, the RECOVER INDOUBT command must
be used at the local Db2 subsystem when the local location is a participant, and
at another Db2 subsystem when the local location is the coordinator.
3. Indoubt database access threads continue to appear as indoubt, and no
resynchronization with either a coordinator or a participant is allowed.
Related tasks:
Resolving inconsistencies resulting from a conditional restart
If you commit or roll back a unit of work and your decision differs from the
other system's decision, data inconsistency occurs. This type of damage is called
heuristic damage.
If this situation should occur, and your system then updates any data that is
involved with the previous unit of work, your data is corrupted and is extremely
difficult to correct.
To make a correct decision, you must be absolutely sure that the action you
take on indoubt units of recovery is the same as the action that the coordinator
takes. Validate your decision with the administrators of the other systems that
are involved in the logical unit of work.
Check the console for message DSNR036I for unresolved units of recovery
encountered during a checkpoint. This message might occur to remind operators of
existing indoubt threads.
Important: If the TCP/IP address that is associated with a DRDA server is subject
to change, the domain name of each DRDA server must be defined in the CDB.
This allows Db2 to recover from situations where the server's IP address changes
prior to successful resynchronization.
Related information:
DSNR036I (Db2 Messages)
When IMS restarts, it automatically commits or backs out incomplete DL/I work,
based on whether the commit decision was recorded on the IMS log. The existence
of indoubt units of recovery does not imply that DL/I records are locked until Db2
connects.
During the current status rebuild phase of Db2 restart, the Db2 participant makes a
list of indoubt units of recovery. IMS builds its own list of residual recovery entries
(RREs). The RREs are logged at IMS checkpoints until all entries are resolved.
When indoubt units of recovery are recovered, the following steps occur:
1. IMS either passes an RRE to the IMS attachment facility to resolve the entry or
informs the attachment facility of a cold start. The attachment facility passes the
required information to Db2.
2. If Db2 recognizes that an entry is marked by Db2 for commit and by IMS for
roll back, it issues message DSNM005I. Db2 issues this message for
inconsistencies of this type between Db2 and IMS.
3. The IMS attachment facility passes a return code to IMS, indicating that it
should either destroy the RRE (if it was resolved) or keep it (if it was not
resolved). The procedure is repeated for each RRE.
4. Finally, if Db2 has any remaining indoubt units of recovery, the attachment
facility issues message DSNM004I.
The IMS attachment facility writes all the records that are involved in indoubt
processing to the IMS log tape as type X'5501FE'.
For all resolved units of recovery, Db2 updates databases as necessary and releases
the corresponding locks. For threads that access offline databases, the resolution is
logged and acted on when the database is started.
Db2 maintains locks on indoubt work that was not resolved. This can create a
backlog for the system if important locks are being held. You can use the DISPLAY
DATABASE LOCKS command to find out which tables and table spaces are locked
by indoubt units of recovery. The connection remains active so that you can clean
up the IMS RREs. You can then recover the indoubt threads.
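For example, you might issue a command like the following to list the objects that are locked by indoubt units of recovery (the DATABASE and SPACENAM values shown here are illustrative; they request all databases and all spaces):
-DISPLAY DATABASE(*) SPACENAM(*) LOCKS LIMIT(*)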
In the first case, SQL processing is prevented in all dependent regions until the
indoubt resolution is completed. IMS does not allow connections between IMS
dependent regions and Db2 before the indoubt units of recovery are resolved.
Related tasks:
Controlling IMS connections
For all resolved units of recovery, Db2 updates databases as necessary and releases
the corresponding locks. For threads that access offline databases, the resolution is
logged and acted on when the database is started. Unresolved units of work can
remain after restart; you can then resolve them.
Related tasks:
Monitoring CICS threads and recovering CICS-Db2 indoubt units of recovery
This is a state in which a failure occurs after the participant (Db2 for a thread that
uses RRSAF, or RRS for a stored procedure) has completed phase 1 of commit
processing but before it receives the phase 2 decision.
Normally, automatic resolution of indoubt units of recovery occurs when Db2 and
RRS re-establish communication with each other. If something prevents this, you
can manually resolve an indoubt unit of recovery. This process is not
recommended because it might lead to inconsistencies in recoverable resources.
Both Db2 and RRS can display information about indoubt units of recovery. Both
also provide techniques for manually resolving these indoubt units of recovery.
In Db2, the DISPLAY THREAD command provides information about indoubt Db2
threads. The display output includes RRS unit of recovery IDs for those Db2 threads
that have RRS either as a coordinator or as a participant. If Db2 is a participant,
you can use the RRS unit of recovery ID that is displayed to determine the
outcome of the RRS unit of recovery. If Db2 is the coordinator, you can determine
the outcome of the unit of recovery from the DISPLAY THREAD output.
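For example, you might display indoubt threads with a command like the following, where the asterisks request all locations and all threads:
-DISPLAY THREAD(*) LOCATION(*) TYPE(INDOUBT)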
In Db2, the RECOVER INDOUBT command lets you manually resolve a Db2
indoubt thread. You can use RECOVER INDOUBT to commit or roll back a unit of
recovery after you determine what the correct decision is.
If a communication failure occurs between the first phase (prepare) and the second
phase (commit decision) of a commit, an indoubt transaction is created on the
resource manager that experienced the failure. When an indoubt transaction is
created, a message like this is displayed on the console of the resource manager:
DSNL405I = THREAD
G91E1E35.GFA7.00F962CC4611.0001=217
PLACED IN INDOUBT STATE BECAUSE OF
COMMUNICATION FAILURE WITH COORDINATOR ::FFFF:9.30.30.53.
INFORMATION RECORDED IN TRACE RECORD WITH IFCID=209
AND IFCID SEQUENCE NUMBER=00000001
If automatic recovery is not possible, Db2 alerts you to any indoubt units of
recovery that you need to resolve. If releasing locked resources and bypassing the
normal recovery process is imperative, you can resolve indoubt situations
manually.
Procedure
To resolve units of recovery manually, you must use the following approaches:
v Commit changes that were made by logical units of work that were committed
by the other system.
v Roll back changes that were made by logical units of work that were rolled back
by the other system.
Procedure
To ascertain the status of indoubt units of work, use one of the following
approaches:
v Use a NetView program. Write a program that analyzes NetView alerts for each
involved system, and returns the results through the NetView system.
v Use an automated z/OS console to ascertain the status of the indoubt threads at
the other involved systems.
This command does not erase the indoubt status of the thread. The status still
appears as an indoubt thread until the systems go through the normal
resynchronization process. An indoubt thread can be identified by its LUWID,
LUNAME, or IP address. You can also use the LUWID token with the command.
Example
Assume that you need to recover two indoubt threads. The first has
LUWID=DB2NET.LUNSITE0.A11A7D7B2057.0002, and the second has a token of
442. To commit the LUWs, enter the following command:
-RECOVER INDOUBT ACTION(COMMIT) LUWID(DB2NET.LUNSITE0.A11A7D7B2057.0002,442)
Related concepts:
Scenarios for resolving problems with indoubt threads
Procedure
To delete indoubt thread information for a thread whose reset status is set to YES,
issue the RESET INDOUBT command.
Example
Assume that you need to purge information about two indoubt threads. The first
has an LUWID=DB2NET.LUNSITE0.A11A7D7B2057.0002 and a resync port number
of 123, and the second has a token of 442. Use the following command to purge
the information:
-RESET INDOUBT LUWID(DB2NET.LUNSITE0.A11A7D7B2057.0002:123,442)
You can also use a LUNAME or IP address with the RESET INDOUBT command.
You can use the IPADDR keyword in place of the LUNAME or LUWID keywords
when the partner uses TCP/IP instead of SNA. The resync port number portion of
the parameter is required when you use the IP address; the DISPLAY THREAD
output lists the resync port number. You can also specify a location instead of a
particular thread, and reset all of the threads that are associated with that location,
by using the (*) option.
Use this method of resolving an indoubt UR only when the Db2 logs are not
available. All database updates for the indoubt UR are committed after Db2
restarts.
Procedure
Run the DSNJU003 utility and specify the CRESTART control statement with the
following options:
v STARTRBA, where the value is the first log RBA that is available after the
indoubt UR
v FORWARD=YES to allow forward-log recovery
v BACKOUT=YES to allow backward-log recovery
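As a sketch, assuming that the first log RBA that is available after the indoubt UR is X'7429000', the CRESTART control statement might look like this:
CRESTART CREATE,STARTRBA=7429000,FORWARD=YES,BACKOUT=YES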
Related reference:
DSNJU003 (change log inventory) (Db2 Utilities)
Tip: For best results, check the consistency of the Db2 catalog and directory
regularly, even outside of the migration process. For detailed instructions, see
Migration step 2: Verify the integrity of Db2 table spaces (optional) (Db2
Installation and Migration) and Migration step 4: Check for consistency between
catalog tables (optional) (Db2 Installation and Migration).
The following utilities are the principal tools for Db2 recovery:
v QUIESCE
v REPORT
v COPY
v COPYTOCOPY
v RECOVER
v MERGECOPY
v BACKUP SYSTEM
v RESTORE SYSTEM
This information provides an overview of these utilities to help you with your
backup and recovery planning. Note that the term page set in this information can
refer to a table space, index space, or any combination of these.
Related concepts:
How the initial Db2 logging environment is established
Related reference:
Implications of moving data sets after a system-level backup
At a Db2 server, the entire unit of work is either committed or rolled back. It is not
processed if it violates referential constraints that are defined within the target
system. Whatever changes it makes to data are logged. A set of related table spaces
can be quiesced at the same point in the log, and image copies can be made of
them simultaneously. If that is done, and if the log is intact, any data can be
recovered after a failure and be internally consistent.
Point-in-time recovery (to the last image copy or to a relative byte address (RBA))
presents other challenges. You cannot control a utility in one subsystem from
another subsystem. In practice, you cannot quiesce two sets of table spaces, or
make image copies of them, in two different subsystems at exactly the same
instant. Neither can you recover them to exactly the same instant, because two
different logs are involved, and an RBA does not mean the same thing for both of
them.
In planning, the best approach is to consider carefully what the QUIESCE, COPY,
and RECOVER utilities do for you, and then plan not to place data that must be
closely coordinated on separate subsystems. After that, recovery planning is a
matter of agreement among database administrators at separate locations.
Db2 is responsible for recovering Db2 data only; it does not recover non-Db2 data.
Non-Db2 systems do not always provide equivalent recovery capabilities.
| Plans for recovering the Db2 tables and indexes used to support Db2
| query acceleration
| The process of recovering the Db2 tables and indexes that are created specifically
| for use with IBM Db2 Analytics Accelerator for z/OS is different than the process
| of recovering Db2 catalog tables and indexes.
| To support Db2 query acceleration with IBM Db2 Analytics Accelerator, certain
| Db2 tables and indexes are created and then used by both Db2 and IBM Db2
| Analytics Accelerator. These Db2 tables and indexes have the qualifier SYSACCEL
| and are not created in the Db2 catalog table spaces. Instead, they are created
| independently in separate table spaces by running DDL that is provided by IBM.
| Because these Db2 SYSACCEL objects are not part of the Db2 catalog space, they
| must be backed up and recovered separately, as you do with your user data. For
| these SYSACCEL objects, follow the recommended backup and recovery steps and
| strategies that are provided for user data.
| For more information about the SYSACCEL objects that are used to support Db2
| query acceleration and how they are created, see Tables that support query
| acceleration (Db2 SQL) and Creating database objects that support query
| acceleration (Db2 Installation and Migration).
All Db2-owned data sets (executable code, the Db2 catalog, and user databases)
must be on a disk that is shared between the primary and alternate XRF
processors. In the event of an XRF recovery, Db2 must be stopped on the primary
processor and started on the alternate. For CICS, that can be done automatically, by
using the facilities provided by CICS, or manually, by the system operator. For
IMS, that is a manual operation and must be done after the coordinating IMS
system has completed the processor switch. In that way, any work that includes
SQL can be moved to the alternate processor with the remaining non-SQL work.
Other Db2 work (interactive or batch SQL and Db2 utilities) must be completed or
terminated before Db2 can be switched to the alternate processor. Consider the
effect of this potential interruption carefully when planning your XRF recovery
scenarios.
Plan carefully to prevent Db2 from being started on the alternate processor until
the Db2 system on the active, failing processor terminates. A premature start can
cause severe integrity problems in data, the catalog, and the log. The use of global
resource serialization (GRS) helps avoid the integrity problems by preventing
simultaneous use of Db2 on the two systems. The bootstrap data set (BSDS) must
be included as a protected resource, and the primary and alternate XRF processors
must be included in the GRS ring.
The REBUILD INDEX utility reconstructs the indexes by reading the appropriate
rows in the table space, extracting the index keys, sorting the keys, and then
loading the index keys into the index. The RECOVER utility recovers indexes by
restoring an image copy or system-level backup and then applying the log. It can
also recover indexes to a prior point in time.
You can use the REBUILD INDEX utility to recover any index, and you do not
need to prepare image copies or system-level backups of those indexes.
To use the RECOVER utility to recover indexes, you must include the following
actions in your normal database operation:
v Create or alter indexes by using the SQL statement ALTER INDEX with the
COPY YES option before you back up and recover them by using image copies
or system-level backups.
v Create image copies of all indexes that you plan to recover or take system-level
backups by using the BACKUP SYSTEM utility.
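For example, assuming a hypothetical index DSN8B10.XEMP1, the two actions might look like this (the DD statement for the copy data set is omitted):
ALTER INDEX DSN8B10.XEMP1 COPY YES;
COPY INDEX DSN8B10.XEMP1 SHRLEVEL REFERENCE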
The COPY utility makes full image copies or concurrent copies of indexes.
Incremental copies of indexes are not supported. If full image copies of the index
are taken at timely intervals, recovering a large index might be faster than
rebuilding the index.
Db2 can recover a page set by using an image copy or system-level backup, the
recovery log, or both. The Db2 recovery log contains a record of all changes that
are made to the page set. If Db2 fails, it can recover the page set by restoring the
image copy or system-level backup and applying the log changes to it from the
point of the image copy or system-level backup.
The Db2 catalog and directory page sets must be copied at least as frequently as
the most critical user page sets. Moreover, you are responsible for periodically
copying the tables in the communications database (CDB), the application
registration table, the object registration table, and the resource limit facility
(governor), or for maintaining the information that is necessary to re-create them.
Plan your backup strategy accordingly.
The following backup scenario suggests how Db2 utilities might be used when
taking object level backups with the COPY utility:
Imagine that you are the database administrator for DBASE1. Table space TSPACE1
in DBASE1 has been available all week. On Friday, a disk write operation for
TSPACE1 fails. You need to recover the table space to the last consistent point
before the failure occurred. You can do that because you have regularly followed a
cycle of preparations for recovery. The most recent cycle began on Monday
morning:
Monday morning
You start the DBASE1 database and make a full image copy of TSPACE1
and all indexes immediately. That gives you a starting point from which to
recover. Use the COPY utility with the SHRLEVEL CHANGE option to
improve availability.
Tuesday morning
You run the COPY utility again. This time you make an incremental image
copy to record only the changes that have been made since the last full
image copy that you took on Monday. You also make a full index copy.
TSPACE1 can be accessed and updated while the image copy is being
made. For maximum efficiency, however, you schedule the image copies
when online use is minimal.
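The Tuesday copy might be coded like the following utility statement, where FULL NO requests an incremental image copy (DD statements for the copy data sets are omitted):
COPY TABLESPACE DBASE1.TSPACE1 FULL NO SHRLEVEL CHANGE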
Wednesday morning
You make another incremental image copy, and then create a full image
copy by using the MERGECOPY utility to merge the incremental image
copy with the full image copy.
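The merge might be coded like this, where NEWCOPY YES creates a new full image copy from the existing full and incremental copies:
MERGECOPY TABLESPACE DBASE1.TSPACE1 NEWCOPY YES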
Thursday and Friday mornings
You make another incremental image copy and a full index copy each
morning.
Friday afternoon
This scenario is somewhat simplistic. You might not have taken daily incremental
image copies on just the table space that failed. You might not ordinarily recover
an entire table space. However, it illustrates this important point: with proper
preparation, recovery from a failure is greatly simplified.
Related reference:
COPY (Db2 Utilities)
RECOVER (Db2 Utilities)
v Where the BACKUP SYSTEM utility is used, the RECOVER utility uses
information in the BSDS and in the SYSIBM.SYSCOPY catalog table to:
– Restore the page set from the most recent backup; in this case, a
system-level backup (shown in Figure 43 at X'80000').
– Apply all changes to the page set that are registered in the log, beginning at
the log RBA of the most recent system-level backup.
Figure 43. Overview of Db2 recovery from BACKUP SYSTEM utility. The figure shows one
complete cycle of image copies. The SYSIBM.SYSCOPY catalog table can record many
complete cycles.
In deciding how often to take image copies or system-level backups, consider the
time needed to recover a table space. The time is determined by all of the
following factors:
v The amount of log to traverse
v The time that it takes an operator to mount and remove archive tape volumes
v The time that it takes to read the part of the log that is needed for recovery
v The time that is needed to reprocess changed pages
However, if you run REORG or LOAD REPLACE with the COPYDDN keyword or
the FCCOPYDDN keyword, Db2 creates a full image copy of a table space during
execution of the utility, so Db2 does not place the table space in COPY-pending
status. Inline copies of indexes during LOAD and REORG are not supported.
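For example, an inline copy during a reorganization might be requested like this (the COPYDDN ddname is illustrative):
REORG TABLESPACE DBASE1.TSPACE1 COPYDDN(SYSCOPY) SHRLEVEL REFERENCE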
If you use LOG YES and log all updates for table spaces, an image copy of the
table space is not required for data integrity. However, taking an image copy
makes the recovery process more efficient. The process is even more efficient if you
use MERGECOPY to merge incremental image copies with the latest full image
copy. You can schedule the MERGECOPY operation at your own convenience,
whereas the need for a recovery can arise unexpectedly. The MERGECOPY
operation does not apply to indexes.
Even if the BACKUP SYSTEM utility is used, it is important to take full image
copies (inline copies) during REORG and LOAD to avoid the COPY-pending status
on the table space.
Recommendation: Copy your indexes after the associated utility has run.
Optionally, you can take a FlashCopy image copy when running the REBUILD
INDEX or REORG INDEX utilities. Indexes are placed in informational
COPY-pending (ICOPY) status after running the LOAD TABLESPACE, REORG
TABLESPACE, REBUILD INDEX, or REORG INDEX utilities. Only structural
modifications of the index are logged when these utilities are run, so the log does
not have enough information to recover the index.
Use the CHANGELIMIT option of the COPY utility to let Db2 determine when an
image copy should be performed on a table space and whether a full or
incremental copy should be taken. Use the CHANGELIMIT and REPORTONLY
options together to let Db2 recommend what types of image copies to make. When
you specify both CHANGELIMIT and REPORTONLY, Db2 makes no image copies.
The CHANGELIMIT option does not apply to indexes.
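For example, the following statement reports what type of image copy, if any, Db2 recommends for the table space, without making a copy:
COPY TABLESPACE DBASE1.TSPACE1 CHANGELIMIT REPORTONLY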
In determining how many complete copy and log cycles to keep, you are guarding
against damage to a volume that contains an important image copy or a log data
set. A retention period of at least two full cycles is recommended. For enhanced
security, keep records for three or more copy cycles.
Table 43 on page 440 suggests how often a user group with 10 locally defined table
spaces (one table per table space) might take image copies, based on frequency of
updating. Their least-frequently copied table is EMPSALS, which contains
employee salary data. If the group chooses to keep two complete image copy
cycles on hand, then each time EMPSALS is copied, records made before its
previous copy (two months earlier) can be deleted. The group always has between
two and four months of log records on hand.
If you are using the BACKUP SYSTEM utility, you should schedule the frequency
of system-level backups based on your most critical data.
If you do a full recovery, you do not need to recover the indexes unless they are
damaged. If you recover to a prior point in time, you do need to recover the
indexes. See “Plans for recovery of indexes” on page 433 for information about
indexes.
Restriction: Because DFSMShsm can migrate data sets to different disk volumes,
you cannot use DFSMShsm in conjunction with the BACKUP SYSTEM utility. The
RECOVER utility requires that the data sets reside on the volumes where they had
been at the time of the system-level backup.
DFSMShsm manages your disk space efficiently by moving data sets that have not
been used recently to less-expensive storage. It also makes your data available for
recovery by automatically copying new or changed data sets to tape or disk. It can
delete data sets or move them to another device. Its operations occur daily, at a
specified time, and they allow for keeping a data set for a predetermined period
before deleting or moving it.
DFSMShsm:
v Uses cataloged data sets
v Operates on user tables, image copies, and logs
If a volume has a Db2 storage group specified, the volume should be recalled only
to like devices of the same VOLSER as defined by CREATE or ALTER STOGROUP.
Db2 can recall user page sets that have been migrated. Whether DFSMShsm recall
occurs automatically is determined by the values of the RECALL DATABASE and
RECALL DELAY fields of installation panel DSNTIPO. If the value of the RECALL
DATABASE field is NO, automatic recall is not performed and the page set is
considered an unavailable resource. It must be recalled explicitly before it can be
used by Db2. If the value of the RECALL DATABASE field is YES, DFSMShsm is
invoked to recall the page sets automatically. The program waits for the recall for
the amount of time that is specified by the RECALL DELAY parameter. If the recall
is not completed within that time, the program receives an error message
indicating that the page set is unavailable but that recall was initiated.
The deletion of DFSMShsm-migrated data sets and the Db2 log retention period
must be coordinated with use of the MODIFY utility. Otherwise, you might need
image copies or logs that have already been deleted.
Related tasks:
Discarding archive log records
Related information:
DFSMShsm Managing Your Own Data
Consider the following factors when you develop and implement your plan:
v “Decide on the level of availability you need”
v “Practice for recovery” on page 442
v “Minimize preventable outages” on page 442
v “Determine the required backup frequency” on page 442
v “Minimize the elapsed time of RECOVER jobs” on page 442
v “Minimize the elapsed time for COPY jobs” on page 443
v “Determine the right characteristics for your logs” on page 443
v “Minimize Db2 restart time” on page 443
Start by determining the primary types of outages you are likely to experience.
Then, for each of those types of outages, decide on the maximum amount of time
that you can spend on recovery. Consider the trade-off between cost and
availability. Recovery plans for continuous availability are very costly, so you need
to think about what percentage of the time your systems really need to be
available.
You cannot know whether a backup and recovery plan is workable unless you
practice it. In addition, the pressure of a recovery situation can cause mistakes. The
best way to minimize mistakes is to practice your recovery scenario until you
know it well. The best time to practice is outside of regular working hours, when
fewer key applications are running.
For example, if you use image copies and if the maximum acceptable recovery
time after you lose a volume of data is two hours, your volumes typically hold
about 4 GB of data, and you can read about 2 GB of data per hour, you should
make copies after every 4 GB of data that is written. You can use the COPY option
SHRLEVEL CHANGE or DFSMSdss concurrent copy to make copies while
transactions and batch jobs are running. You should also make a copy after
running jobs that make large numbers of changes. In addition to copying your
table spaces, you should also consider copying your indexes.
You can take system-level backups using the BACKUP SYSTEM utility. Because the
FlashCopy technology is used, the entire system is backed up very quickly with
virtually no data unavailability.
You can make additional backup image copies from a primary image copy by
using the COPYTOCOPY utility. This capability is especially useful when the
backup image is copied to a remote site that is to be used as a disaster recovery
site for the local site. Applications can run concurrently with the COPYTOCOPY
utility. Only utilities that write to the SYSCOPY catalog table cannot run
concurrently with COPYTOCOPY.
When recovering system-level backups from disk, the RECOVER utility restores
data sets serially by the main task. When recovering system-level backups from
tape, the RECOVER utility creates multiple subtasks to restore the image copies
and system-level backups for the objects.
If you are using system-level backups, be sure to have recent system-level backups
on disk to reduce the recovery time.
If you are recovering to a non-quiesce point, the following factors can have an
impact on performance:
v The duration of URs that were active at the point of recovery.
v The number of Db2 members that have active URs to be rolled back.
You can use the COPY utility to make image copies of a list of objects in parallel.
Image copies can be made to either disk or tape.
Also, you can take FlashCopy image copies with the COPY utility. FlashCopy can
reduce both the unavailability of data during the copy operation and the amount
of time that is required for recovery operations.
Consider the following criteria when determining the right characteristics for your
logs:
v If you have enough disk space, use more and larger active logs. Recovery from
active logs is quicker than from archive logs.
v To speed recovery from archive logs, consider archiving to disk.
v If you archive to tape, be sure that you have enough tape drives so that Db2
does not have to wait for an available drive on which to mount an archive tape
during recovery.
v Make the buffer pools and the log buffers large enough to be efficient.
Many recovery processes involve restart of Db2. You need to minimize the time
that Db2 shutdown and startup take.
You can limit the backout activity during Db2 system restart. You can postpone the
backout of long-running units of recovery until after the Db2 system is operational.
Use the installation options LIMIT BACKOUT and BACKOUT DURATION to
determine what backout work will be delayed during restart processing.
The REPORT utility provides information necessary for recovering a page set. The
REPORT utility displays:
v Recovery information from the SYSIBM.SYSCOPY catalog table
v Log ranges of the table space from the SYSIBM.SYSLGRNX directory
v Archive log data sets from the bootstrap data set
v The names of all members of a table space set
v Recovery information about system-level backup copies retrieved from the
bootstrap data sets of each member in the data sharing group
You can also use the REPORT utility to obtain recovery information about the
catalog and directory.
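For example, the recovery information for a single table space might be requested like this:
REPORT RECOVERY TABLESPACE DBASE1.TSPACE1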
Use the output from the DFSMShsm LIST COPYPOOL command with the
ALLVOLS option, in conjunction with the Db2 system-level backup information in
the PRINT LOG MAP (DSNJU004) utility output, to determine whether the
system-level backups of your database copy pool still reside on DASD or if they
have been dumped to tape. For a data sharing system, you should run the print
log map utility with the MEMBER * option to obtain system-level backup
information from all members.
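For example, assuming the standard database copy pool naming convention and a location name of DSNCAT, the DFSMShsm command might look like this:
LIST COPYPOOL(DSN$DSNCAT$DB) ALLVOLS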
Related concepts:
REPORT output (Db2 Utilities)
Related reference:
LIST command (z/OS DFSMShsm Storage Administration Reference)
REPORT (Db2 Utilities)
To keep a table space and its indexes synchronized, the MODIFY utility deletes the
SYSCOPY and SYSLGRNX records for the table space and its indexes that are
defined with the COPY YES option.
Important: The earliest image copies and log data sets that you need for
recovery to the present date are not necessarily the earliest ones that you want
to keep. If you foresee resetting the Db2 subsystem to its status at any earlier
date, you also need the image copies and log data sets that allow you to
recover to that date.
If the most recent image copy of an object is damaged, the RECOVER utility
seeks a backup copy. If no backup copy is available, or if the backup is lost or
damaged, RECOVER uses a previous image copy. It continues searching until it
finds an undamaged image copy or no more image copies exist. The process
has important implications for keeping archive log data sets. At the very least,
you need all log records since the most recent image copy; to protect against
loss of data from damage to that copy, you need log records as far back as the
earliest image copy that you keep.
2. Run the MODIFY RECOVERY utility to clean up obsolete entries in
SYSIBM.SYSCOPY and SYSIBM.SYSLGRNX.
Pick one of the following MODIFY strategies, based on keyword options, that is
consistent with your overall backup and recovery plan:
DELETE DATE
With the DELETE DATE keyword, you can specify the date of the earliest
entry that you want to keep.
DELETE AGE
With the DELETE AGE keyword, you can specify the age of the earliest
entry you want to keep.
RETAIN LAST
With the RETAIN LAST keyword, you can specify the number of image
copy entries you want to keep.
RETAIN GDGLIMIT
If GDG data sets are used for your image copies, you can specify the
RETAIN GDGLIMIT keyword to keep the number of image copies that
matches your GDG definition.
RETAIN LOGLIMIT
With the RETAIN LOGLIMIT keyword, you can clean up all of the
obsolete entries that are older than the oldest archive log in your
bootstrap data set (BSDS).
The DELETE DATE option removes records that were written earlier than the
given date. You can also specify the DELETE AGE option to remove records
that are older than a specified number of days, or the RETAIN LAST option
to specify a minimum number of image copies to keep.
For example, you could enter one of the following commands:
MODIFY RECOVERY TABLESPACE dbname.tsname
DELETE DATE date
MODIFY RECOVERY TABLESPACE dbname.tsname
RETAIN LAST( n )
The RETAIN LAST( n ) option keeps the n most recent records and removes the
older ones.
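The RETAIN GDGLIMIT and RETAIN LOGLIMIT keywords take no value. For example, assuming a table space dbname.tsname whose image copies are GDG data sets, the following statements sketch their use:
MODIFY RECOVERY TABLESPACE dbname.tsname
RETAIN GDGLIMIT
MODIFY RECOVERY TABLESPACE dbname.tsname
RETAIN LOGLIMIT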
You can provide shorter restart times after system failures by using the installation
options LIMIT BACKOUT and BACKOUT DURATION. These options postpone
the backout processing of long-running URs during Db2 restart.
For data sharing, you need to consider whether you want the Db2 group to use
light mode at the recovery site. A light start might be desirable if you have
configured only minimal resources at the remote site. If this is the case, you might
run a subset of the members permanently at the remote site. The other members
are restarted and then immediately shut down.
| It is important that the disaster recovery process does not convert any objects to or
| from the 10-byte extended RBA or LRSN format during the recovery and rebuild
| process. If some objects are in the extended 10-byte format, temporarily change the
| UTILITY_OBJECT_CONVERSION subsystem parameter to NONE before you
| begin a disaster recovery and do not specify the RBALRSN_CONVERSION
| keyword in the control statements. After the disaster recovery is complete, change
| the UTILITY_OBJECT_CONVERSION subsystem parameter to its original value.
Figure 44. Preparing for disaster recovery. The information that you need to recover is
contained in the copies of data (including the Db2 catalog and directory) and the archive log
data sets.
Restriction: Do not rename your data sets if you take system-level backups.
You can use the PARALLEL keyword on the RECOVER utility to support the
recovery of a list of objects in parallel. For those objects in the list that can be
processed independently, multiple subtasks are created to restore the image copies
for the objects. The parallel function can be used for either disk or tape.
If an image copy on tape was taken at the table space level and not at the partition or
data set level, the PARALLEL keyword cannot enable the RECOVER utility on
different parts (RECOVER TABLESPACE name DSNUM 1 TABLESPACE name DSNUM 2, and so on)
to restore the parts in parallel. RECOVER must read the appropriate part of the
data set for every DSNUM specification, which means that for a table space level
copy on tape, the tape is always read from the beginning. For image copies to
tape, copy a partitioned table space at the partition level and, if practical, to
different tape stacks. You can use LISTDEF with PARTLEVEL to simplify your work.
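For example, the following sketch (with a hypothetical list name COPYLIST) uses LISTDEF with PARTLEVEL so that each partition is copied as a separate list item; a TEMPLATE for the output image copy data sets is assumed and is not shown:
LISTDEF COPYLIST INCLUDE TABLESPACE dbname.tsname PARTLEVEL
COPY LIST COPYLIST SHRLEVEL REFERENCE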
When you use one utility statement to recover indexes and table spaces, the logs
for all indexes and table spaces are processed in one pass. This approach results
in a significant performance advantage, especially when the required archive log
data is on tape, when the fast log apply function is enabled, or when both of
these conditions apply.
This parallel log processing for fast log apply does not depend on whether you
specify RECOVER TABLESPACE name DSNUM 1 TABLESPACE name DSNUM 2, and so on, or only
RECOVER TABLESPACE name, because the log apply is always done at the
partition level. Also note that if you have copies at the partition level,
you cannot specify RECOVER TABLESPACE dbname.tsname; you must specify RECOVER
TABLESPACE dbname.tsname DSNUM 1 TABLESPACE dbname.tsname DSNUM 2, and so on. You
can simplify this specification by using LISTDEF with PARTLEVEL if all parts
must be recovered.
You can schedule concurrent RECOVER jobs that process different partitions. The
degree of parallelism in this case is limited by contention for both the image copies
and the required log data.
Typically, RECOVER restores an object to its current state by applying all image
copies or a system-level backup and log records. It can also restore the object to a
prior state, which is one of the following points in time:
v A specified point on the log (use the TOLOGPOINT or TORBA keyword)
v A particular image copy (use the TOCOPY, TOLASTCOPY, or
TOLASTFULLCOPY keywords)
You can use the RECOVER utility with the BACKOUT YES keyword to recover
data to a prior state by backing out committed work from the current state of an
object.
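For example, the following control statements sketch the two directions of prior point-in-time recovery, where log-point is a placeholder for your own RBA or LRSN value:
RECOVER TABLESPACE dbname.tsname
TOLOGPOINT log-point
RECOVER TABLESPACE dbname.tsname
TOLOGPOINT log-point BACKOUT YES
In the first form, RECOVER restores a backup and applies log records forward to the log point; in the second, it backs out committed work from the current state of the object to the log point.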
With z/OS Version 1.8, the RECOVER utility can use system-level backups for
object level recovery. The RECOVER utility chooses the most recent backup, which
can be an image copy, a concurrent copy, or a system-level backup, to restore. The most
recent backup determination is based on the point of recovery (current or prior
point in time) for the table spaces or indexes (with the COPY YES attribute) being
recovered.
The RECOVER utility can use image copies for the local site or the recovery site,
regardless of where you invoke the utility. The RECOVER utility locates all full
and incremental image copies.
The RECOVER utility first attempts to use the primary image copy data set. If an
error is encountered (allocation, open, or I/O), RECOVER attempts to use the
backup image copy. If Db2 encounters an error in the backup image copy or no
backup image copy exists, RECOVER falls back to an earlier full copy and
attempts to apply incremental copies and log records. If an earlier full copy is not
available, RECOVER attempts to apply log records only.
In addition, you can use the COPY utility to create a FlashCopy image copy in
VSAM format from a page set. Fast replication makes the copy process virtually
instantaneous. FlashCopy image copies are always full copies.
Use the COPYTOCOPY utility to make additional image copies from a primary
image copy that you made with the COPY utility.
You can use the CONCURRENT option of the COPY utility to make a copy, with
DFSMSdss concurrent copy, that is recorded in the Db2 catalog.
Use the MERGECOPY utility to merge several image copies. MERGECOPY does
not apply to indexes.
The CHANGELIMIT option of the COPY utility causes Db2 to make an image
copy automatically when a table space has changed past a default limit or a limit
that you specify. Db2 determines whether to make a full or incremental image
copy based on the values specified for the CHANGELIMIT option.
v If the percent of changed pages is greater than the low CHANGELIMIT value
and less than the high CHANGELIMIT value, then Db2 makes an incremental
image copy.
v If the percentage of changed pages is greater than or equal to the high
CHANGELIMIT value, then Db2 makes a full image copy.
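For example, the following sketch uses illustrative percentages of 5 and 20 as the low and high CHANGELIMIT values:
COPY TABLESPACE dbname.tsname
CHANGELIMIT(5,20)
With this statement, Db2 takes an incremental image copy if more than 5 percent but less than 20 percent of the pages have changed, and a full image copy if 20 percent or more of the pages have changed.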
If you want Db2 to recommend what image copies should be made but not to
make the image copies, use the CHANGELIMIT and REPORTONLY options of the
COPY utility. If you specify the parameter DSNUM(ALL) with CHANGELIMIT
and REPORTONLY, Db2 reports information for each partition of a partitioned
table space or each piece of a nonpartitioned table space. For partitioned objects, if
you want only the partitions in COPY-pending status or informational
COPY-pending status to be copied, then you must specify a list of partitions. You
can do this by invoking COPY on a LISTDEF list built with the PARTLEVEL
option. An output image copy data set is created for each partition that is in
COPY-pending or informational COPY-pending status.
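For example, the following sketch reports the change status of each partition without taking any copies:
COPY TABLESPACE dbname.tsname
DSNUM ALL CHANGELIMIT REPORTONLY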
If you want to copy objects that are in COPY-pending (COPY) or informational
COPY-pending (ICOPY) status, you can use the SCOPE PENDING option of the COPY
utility. If you specify the parameter DSNUM(ALL) with SCOPE PENDING for partitioned
objects, and if one or more of the partitions are in COPY or ICOPY status, the copy
is taken of the entire table space or index space.
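For example, the following sketch copies a table space only if it is in COPY-pending or informational COPY-pending status:
COPY TABLESPACE dbname.tsname
SCOPE PENDING SHRLEVEL REFERENCE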
You can add conditional code to your jobs so that an incremental image copy, a
full image copy, or some other step is performed depending on how much the table space has
changed. When you use the COPY utility with the CHANGELIMIT option to
display image copy statistics, the COPY utility uses the following return codes to
indicate the degree that a table space or list of table spaces has changed:
Code Meaning
1 Successful; no CHANGELIMIT value is met. No image copy is
recommended or taken.
2 Successful; the percentage of changed pages is greater than the low
CHANGELIMIT value and less than the high CHANGELIMIT value. An
incremental image copy is recommended or taken.
3 Successful; the percentage of changed pages is greater than or equal to the
high CHANGELIMIT value. A full image copy is recommended or taken.
When you use generation data groups (GDGs) and need to make an incremental
image copy, you can take the following steps to prevent an empty image copy
output data set from being created if no pages have been changed. You can
perform either of the following actions:
v Make a copy of your image copy step, but add the REPORTONLY and
CHANGELIMIT options to the new COPY utility statement. The REPORTONLY
keyword specifies that you want only image copy information to be displayed.
Change the SYSCOPY DD card to DD DUMMY so that no output data set is
allocated. Run this step to visually determine the change status of your table
space.
v Add the report-only step that is described in the previous bullet before your
existing image copy step, and add a JCL conditional statement that examines the
return code and executes the image copy step only if the table space changes
meet either of the CHANGELIMIT values.
You can use the COPY utility with the CHANGELIMIT option to determine
whether any space map pages are broken. You can also use the COPY utility to
identify any other problems that might prevent an image copy from being taken.
You can also make a full image copy when you run the LOAD or REORG utility.
This technique is better than running the COPY utility after the LOAD or REORG
utility because it decreases the time that your table spaces are unavailable.
However, only the COPY utility makes image copies of indexes.
In addition, you can make a FlashCopy image copy when you run the LOAD,
REORG TABLESPACE, REORG INDEX, or REBUILD INDEX utilities.
Related concepts:
Plans for recovery of indexes
FlashCopy image copies (Db2 Utilities)
Related reference:
COPY (Db2 Utilities)
MERGECOPY (Db2 Utilities)
The following Db2 utilities support the creation of FlashCopy image copies:
v COPY
v LOAD
v REBUILD INDEX
v REORG INDEX
v REORG TABLESPACE
FlashCopy image copies are output to VSAM data sets. The following utilities
accept the VSAM data sets that are produced by FlashCopy as input:
v COPYTOCOPY
v DSN1COMP
v DSN1COPY
v DSN1PRNT
v RECOVER
Procedure
Specify the FlashCopy option by using Db2 subsystem parameters, utility control
statement parameters, or both.
You can use the FlashCopy subsystem parameters to define the FlashCopy option
as the default behavior for each of the utilities that support the FlashCopy option.
When the FlashCopy subsystem parameters are enabled as the default behavior,
you do not need to specify the FlashCopy options in the utility control statement.
If you specify the FlashCopy options in both the subsystem parameters and the
utility control statement parameters, the specifications in the utility control
statement override the subsystem parameter specifications.
The two ways to use the concurrent copy function of DFSMS are:
v Run the COPY utility with the CONCURRENT option. Db2 records the resulting
image copies in SYSIBM.SYSCOPY. To recover with these DFSMS copies, you
can run the RECOVER utility to restore those image copies and apply the
necessary log records to them to complete recovery.
v Make copies using DFSMS outside of Db2 control. To recover with these copies,
you must manually restore the data sets, and then run RECOVER with the
LOGONLY option to apply the necessary log records.
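For example, the following sketches show the two approaches; the RECOVER LOGONLY form assumes that you have already restored the data sets outside of Db2:
COPY TABLESPACE dbname.tsname
CONCURRENT SHRLEVEL REFERENCE
RECOVER TABLESPACE dbname.tsname
LOGONLY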
You can use RVAs, PPRC, and the RVA fast copy function, SnapShot, to create
entire Db2 subsystem backups to a point in time on a hot stand-by remote site
without interruption of any application process. Another option is to use the
Enterprise Storage Server FlashCopy function to create point-in-time backups of
entire Db2 subsystems.
After the remote copy is created, use the RESUME option on the SET LOG
command to return to normal logging activities.
Related reference:
-SET LOG (Db2) (Db2 Commands)
Related information:
RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy (IBM
Redbooks)
RAMAC Virtual Array (IBM Redbooks)
IBM TotalStorage Enterprise Storage Server Introduction and Planning Guide
If DFSMShsm cannot restore the data sets, message DSNU1522I with reason code 8
is issued. If OPTIONS EVENT(ITEMERROR,SKIP) was specified, the object is
skipped and the recovery proceeds on the rest of the objects. Otherwise, the
RECOVER utility terminates.
Use the output from LISTCOPY POOL and PRINT LOG MAP to see the
system-level backup information.
You can take system-level backups using the BACKUP SYSTEM utility. However, if
any of the following utilities were run since the system-level backup that was
chosen as the recovery base, then the use of the system-level backup is prohibited
for object level recoveries to a prior point in time:
v REORG TABLESPACE
v REORG INDEX
v REBUILD INDEX
v LOAD REPLACE
v RECOVER from image copy or concurrent copy
To restore data to a prior point in time, use the methods that are described in the
following topics:
v Options for restoring data to a prior point in time
v Restoring data by using DSN1COPY
v Backing up and restoring data with non-Db2 dump and restore
The following terms apply to the subject of recovery to a prior point in time:
Term Meaning
DBID Database identifier
An inline copy that is made during LOAD REPLACE can produce unpredictable
results if that copy is used later in a RECOVER TOCOPY operation. Db2 makes the
copy during the RELOAD phase of the LOAD operation. Therefore, the copy does
not contain corrections for unique index violations and referential constraint
violations because those corrections occur during the INDEXVAL and ENFORCE
phases.
You can use the QUIESCE utility to establish an RBA or LRSN to recover to. The
RBA or LRSN can be used in point-in-time recovery.
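For example, the following sketches establish a quiesce point for a single table space and, with the TABLESPACESET option, for all table spaces in its table space set:
QUIESCE TABLESPACE dbname.tsname
QUIESCE TABLESPACESET TABLESPACE dbname.tsname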
For partitioned table spaces, if the data was redistributed across partitions by
running REORG (either REORG REBALANCE or REORG to materialize limit key
changes), additional recovery implications apply:
v If you recover to a point in time and use an image copy or system level-backup
that was taken before the redistributing REORG, the affected partitions are
placed in REORG-pending (REORP) status.
v If you recover to a point in time after you used ALTER to modify the limit keys
but before running REORG to redistribute the data, the affected partitions are
also placed in REORP status.
If you use the REORG TABLESPACE utility with the SHRLEVEL REFERENCE or
SHRLEVEL CHANGE option on only some partitions of a table space, you must
recover that table space at the partition level. When you take an image copy of
such a table space, the COPY utility issues the informational message DSNU429I.
You can take system-level backups using the BACKUP SYSTEM utility.
The BACKUP SYSTEM utility requires z/OS Version 1 Release 5 or later data
structures called copy pools. Because these data structures are implemented in
z/OS, Db2 cannot generate copy pools automatically. Before you invoke the
BACKUP SYSTEM utility, copy pools must be allocated in z/OS.
The BACKUP SYSTEM utility invokes the DFSMShsm fast replication function to
take volume level backups using FlashCopy.
You can use the BACKUP SYSTEM utility to ease the task of managing data
recovery. Choose either DATA ONLY or FULL, depending on your recovery needs.
Choose FULL if you want to back up both your Db2 data and your Db2 logs.
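For example, the two forms of the control statement are:
BACKUP SYSTEM DATA ONLY
BACKUP SYSTEM FULL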
Because the BACKUP SYSTEM utility does not quiesce transactions, the
system-level backup is a fuzzy copy, which might not contain committed data and
might contain uncommitted data. The RESTORE SYSTEM utility uses these
backups to restore databases to a given point in time. The Db2 data is made
consistent by Db2 restart processing and the RESTORE SYSTEM utility. Db2 restart
processing determines which transactions were active at the given recovery point,
and writes the compensation log records for any uncommitted work that needs to
be backed out. The RESTORE SYSTEM utility restores the database copy pool, and
then applies the log records to bring the Db2 data to consistency. During the
LOGAPPLY phase of the RESTORE SYSTEM utility, log records are applied to redo
the committed work that is missing from the system-level backup, and log records
are applied to undo the uncommitted work that might have been contained in the
system-level backup.
Data-only system backups
The BACKUP SYSTEM DATA ONLY utility control statement creates
system-level backups that contain only databases.
The RESTORE SYSTEM utility uses these backups to restore databases to a
given point in time. In this type of recovery, you lose only a few seconds
of data, or none, based on the given recovery point. However, recovery
time varies and might be extended due to the processing of the Db2 logs
during Db2 restart and during the LOGAPPLY phase of the RESTORE
SYSTEM utility. The number of logs to process depends on the amount of
activity on your Db2 system between the time of the system-level backup
and the given recovery point.
Full system backups
The BACKUP SYSTEM FULL utility control statement creates system-level
backups that contain both logs and databases. With these backups, you can
recover your Db2 system to the point in time of a backup by using normal
Db2 restart recovery, or to a given point in time by using the RESTORE
SYSTEM utility.
You can use the BACKUP SYSTEM utility to manage system-level backups on tape.
Choose either DUMP or DUMPONLY to dump to tape.
Restriction: The DUMP and DUMPONLY options require z/OS Version 1.8.
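For example, the following sketches create a full system-level backup and dump it to tape, and dump the most recent existing backup to tape without creating a new one:
BACKUP SYSTEM FULL DUMP
BACKUP SYSTEM DUMPONLY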
The BACKUP SYSTEM utility invokes the DFSMShsm fast replication function to
take volume level backups using FlashCopy. The RESTORE phase of the RECOVER
utility is faster if you use these backups, because Db2 table spaces and index
spaces can be restored by using FlashCopy. You might need to change some utility
control statements to use FlashCopy technology for the LOAD utility, the REORG
utility, and the REBUILD INDEX utility.
If any of the following utilities were run since the system-level backup that was
chosen as the recovery base, the use of the system-level backup by the RECOVER
utility is prohibited:
v REORG TABLESPACE
v REORG INDEX
v REBUILD INDEX
v LOAD REPLACE
v RECOVER from image copy or concurrent copy
In these cases, the recovery terminates with message DSNU1528I and return code
8.
Note: For z/OS Version 1 Release 11 and later, the RECOVER utility can use a
system-level backup, even if the REBUILD INDEX, RECOVER, REORG, and LOAD
utilities ran after the system-level backup was created. The RECOVER utility has
been modified so that you can use system-level backups, even if a data set has
moved since the backup was created.
| If a REORG that removes dropped columns has run since the system-level backup
| that was chosen as the recovery base, the use of the system-level backup by the
| RECOVER utility is prohibited, and the recovery terminates with message
| DSNU556I and return code 8.
If the image copy data set is cataloged when the image copy is made, the entry for
that copy in SYSIBM.SYSCOPY does not record the volume serial numbers of the
data set. You can identify that copy by its name by using TOCOPY data set name. If
the image copy data set was not cataloged when it was created, you can identify
the copy by its volume serial identifier by using TOVOLUME volser.
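For example, the following sketch recovers to a specific cataloged image copy, where data-set-name is a placeholder for your own value:
RECOVER TABLESPACE dbname.tsname
TOCOPY data-set-name
For a copy that was not cataloged when it was created, you would add TOVOLUME volser after the TOCOPY specification.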
With TOLOGPOINT, the RECOVER utility restores the object from the most recent
of either a full image copy or system-level backup taken prior to the recovery
point. If a full image copy is restored, the most recent set of incremental copies
that occur before the specified log point are restored. The logged changes are
applied up to, and including, the record that contains the log point. If no full
image copy or system-level backup exists before the chosen log point, recovery is
attempted entirely from the log. The log is applied from the log point at which the
page set was created or the last LOAD or REORG TABLESPACE utility was run to
a log point that you specify. You can apply the log only if you have not used the
MODIFY RECOVERY utility to delete the SYSIBM.SYSLGRNX records for the log
range your recovery requires.
You can use the TOLOGPOINT option in both data sharing and non-data-sharing
environments. In a non-data-sharing environment, TOLOGPOINT and TORBA are
interchangeable keywords that identify an RBA on the log at which recovery is to
stop. TORBA can be used in a data sharing environment only if the TORBA value
is before the point at which data sharing was enabled.
Tip: If you take SHRLEVEL CHANGE image copies and need to recover to a prior
point in time, you can use the RBA or LRSN (the START_RBA column value in
SYSIBM.SYSCOPY) that is associated with the image copy as the TOLOGPOINT value.
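For example, assuming a table space TS1 in database DB1 (hypothetical names), the following query lists the START_RBA values of its full image copies, most recent first:
SELECT DBNAME, TSNAME, ICTYPE, SHRLEVEL, START_RBA, TIMESTAMP
FROM SYSIBM.SYSCOPY
WHERE DBNAME = 'DB1'
AND TSNAME = 'TS1'
AND ICTYPE = 'F'
ORDER BY TIMESTAMP DESC;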
All page sets must be restored to the same level; otherwise the data is inconsistent.
To avoid setting CHECK-pending status, you must perform both of the following
tasks:
v Recover the table space set to a quiesce point.
If you do not recover each table space of the table space set to the same quiesce
point, and if any of the table spaces are part of a referential integrity structure:
– All dependent table spaces that are recovered are placed in CHECK-pending
status with the scope of the whole table space.
– All table spaces that are dependent on the table spaces that are recovered are
placed in CHECK-pending status with the scope of the specific dependent
tables.
v Establish a quiesce point or take an image copy after you add check constraints
or referential constraints to a table.
If you recover each table space of a table space set to the same quiesce point, but
referential constraints were defined after the quiesce point, the CHECK-pending
status is set for the table space that contains the table with the referential
constraint.
The RECOVER utility sets various states on table spaces. The following
point-in-time recoveries set these states:
v When the RECOVER utility finds an invalid column during the LOGAPPLY
phase on a LOB table space, it sets the table space to auxiliary-warning (AUXW)
status.
v When you recover a LOB or XML table space to a point in time that is not a
quiesce point or to an image copy that is produced with SHRLEVEL CHANGE,
the LOB or XML table space is placed in CHECK-pending (CHKP) status.
v When you recover the LOB or XML table space, but not the base table space, to
any previous point in time, the base table space is placed in auxiliary
CHECK-pending (ACHKP) status, and the index space that contains an index on
the auxiliary table is placed in REBUILD-pending (RBDP) status.
v When you recover only the base table space to a point in time, the base table
space is placed in CHECK-pending (CHKP) status.
v When you recover only the index space that contains an index on the auxiliary
table to a point in time, the index space is placed in CHECK-pending (CHKP)
status.
v When you recover partitioned table spaces with the RECOVER utility to a point
in time that is prior to the redistribution of data across partitions, all affected
partitions are placed in REORG-pending (REORP) status.
Important: The RECOVER utility does not back out CREATE or ALTER
statements. After a recovery to a previous point in time, all previous alterations to
identity column attributes remain unchanged. Because these alterations are not
backed out, a recovery to a point in time might put identity column tables out of
sync with the SYSIBM.SYSSEQUENCES catalog table. You might need to modify
identity column attributes after a recovery to resynchronize identity columns with
the catalog table.
You can use the RECOVER utility with the TOLOGPOINT option to recover a data
sharing Db2 subsystem.
The following figure shows a data sharing system with three Db2 members (A, B,
and C). Table spaces TS1 and TS2 are being recovered to time TR by using the
RECOVER TOLOGPOINT option; they are listed in the same RECOVER job. UR1
was inflight at time TR, running on member A; its start time was T1, it updated
TS1 at time T2, and it updated TS2 at time T3. T2 is earlier than T3. UR2 was
aborting at time TR, running on member C; its start time was T4, and it updated
TS2 at time T5. There was no active UR on member B at time TR.
The RECOVER utility takes the following actions and provides the following
messages during recovery in a data sharing system.
LOGCSR phase
After the RECOVER LOGAPPLY phase, the RECOVER utility enters the
log analysis phase, known as the LOGCSR phase. The following messages
are issued during this phase:
DSNU1550I
Indicates the start of log analysis on member A.
DSNU1551I
Indicates the end of log analysis on member A.
DSNU1550I
Indicates the start of log analysis on member C.
DSNU1551I
Indicates the end of log analysis on member C.
DSNU1552I
Indicates the end of LOGCSR phase of RECOVER utility.
DSNU1553I
Issued after the end of the LOGCSR phase. The following
information is shown in the message:
v UR1 on member A modified TS1 at T2 time.
v UR1 on member A modified TS2 at T3 time.
v UR2 on member C modified TS2 at T5 time.
You can use the RECOVER utility with the TOLOGPOINT option to recover a
non-data sharing Db2 subsystem.
Figure 47 shows a non-data sharing system. Table spaces TS1 and TS2 are being
recovered to time TR by using the RECOVER TORBA option. TS1 and TS2 are listed
in the same RECOVER job. UR1 was inflight at time TR; its start time was T1, it
updated TS1 at time T2, and it updated TS2 at time T3. T2 is earlier than T3. UR2
was aborting at time TR; its start time was T4, and it updated TS2 at time T5.
Figure 47. Using the RECOVER TOLOGPOINT option in a non-data sharing system
The RECOVER utility takes the following actions and provides the following
messages during recovery in a non-data sharing system.
LOGCSR phase
After the LOGAPPLY phase, the RECOVER utility enters the log analysis
phase, known as the LOGCSR phase. The following messages are issued
during this phase:
DSNU1550I
Indicates the start of log analysis.
DSNU1551I
Indicates the end of log analysis.
DSNU1552I
Indicates the end of the LOGCSR phase of the RECOVER utility.
| Procedure
| Restrictions: After you complete this step, and before you complete the next
| step, you cannot perform any of the following actions:
| v Execute any of the following statements on the table space, on any objects in
| the table space, on indexes that are related to tables in the table space, or on
| auxiliary objects that are associated with the table space:
| – CREATE TABLE
| – CREATE AUXILIARY TABLE
| – CREATE INDEX
| – ALTER TABLE
| – ALTER INDEX
| – RENAME
| – DROP TABLE
| v Execute SQL statements that result in pending definition changes on any of
| the following objects:
| – The table space
| – Tables in the table space
| – Auxiliary table spaces that are related to the table space
| – Indexes on tables in the table space
| v Run any utilities that are not in this list:
| Example
| The following example provides a scenario that shows how you can recover a table
| space to a point in time before pending definition changes were materialized, and
| then use the REORG TABLESPACE utility with SHRLEVEL REFERENCE to
| complete recovery.
|
|
| 1. In DB2 10 new-function mode or later, you execute the following ALTER
| TABLESPACE statement to change the buffer pool page size. This change is a
| pending definition change.
| ALTER TABLESPACE DB1.TS1 BUFFERPOOL BP8K0 MAXPARTITIONS 20 ;
| 2. In Db2 11 new-function mode or later, you run REORG to materialize the
| pending definition change.
| 3. You run the following RECOVER control statement to recover the table space to
| the point in time 2012-10-09-07.15.22.216020:
| RECOVER TABLESPACE DB1.TS1
| TOLOGPOINT X'00000551BE7D'
| When this statement runs, the table space is placed in REORG-pending
| (REORP) state, and an entry is inserted into the SYSPENDINGDDL table with
| OBJTYPE = 'S', for table space.
| 4. You run the following SELECT statement to query the
| SYSIBM.SYSPENDINGDDL catalog table:
| SELECT DBNAME, TSNAME, OBJSCHEMA, OBJNAME, OBJTYPE, OPTION_SEQNO,
| OPTION_KEYWORD, OPTION_VALUE, CREATEDTS
| FROM SYSIBM.SYSPENDINGDDL
| WHERE DBNAME = 'DB1'
| AND TSNAME = 'TS1'
| ;
| This query results in the following output:
| Table 44. Output from the SELECT statement for the SYSPENDINGDDL catalog table after
| RECOVER to a point in time before materialization of pending definition changes
| DBNAME TSNAME OBJSCHEMA OBJNAME OBJTYPE
| DB1 TS1 DB1 TS1 S
|
| 5. You run the REORG TABLESPACE utility with SHRLEVEL REFERENCE on the
| entire table space to complete the point-in-time recovery. After the REORG utility
| runs, the REORG-pending (REORP) state is cleared, and all entries in the
| SYSPENDINGDDL table for the table space are removed.
| Related reference:
| RECOVER (Db2 Utilities)
| REORG TABLESPACE (Db2 Utilities)
| SYSPENDINGDDL catalog table (Db2 SQL)
Note: For z/OS Version 1 Release 11 and later, to recover data sets from a
system-level backup, the data sets do not need to reside on the same volume as
they were on when the backup was made. The RECOVER utility has been
modified so that you can use system-level backups, even if a data set has moved
since the backup was created.
You can still recover the objects if you have an object-level backup such as an
image copy or concurrent copy. Take object-level backups to augment system-level
backups, or take new system-level backups whenever a data set is moved.
You can force the RECOVER utility to ignore system-level backups and use the
latest applicable object-level backup by setting the SYSTEM-LEVEL-BACKUPS
parameter on installation panel DSNTIP6 to NO. This subsystem parameter can be
updated online.
Activities that can affect the ability to recover an object to a prior point in time
from a system-level backup include:
v Migrating to new disk storage or redistributing data sets for performance
reasons
The movement of data sets for disk management should be restricted or limited
when system-level backups are taken. When movement of data sets does occur,
a new system-level backup or object-level backup should immediately be taken.
v Using the DFSMShsm migrate and recall feature
Do not use the DFSMShsm migrate and recall feature if you take system-level
backups.
If you operate on z/OS Version 1 Release 10 or earlier, these activities prevent the
recovery at the object level to a point in time that would use the system-level
backup. This restriction happens because the current volume allocation information
in the ICF catalog for the data set (or data sets) differs from the volume allocation
at the time of the system-level backup.
When you recover table spaces to a prior point of consistency, you need to
consider:
v How partitioned table spaces, segmented table spaces, LOB table spaces, XML
table spaces, and table space sets can restrict recovery.
v If you take system-level backups, how certain utility events can prohibit
recovery to a prior point in time.
Related reference:
Implications of moving data sets after a system-level backup
If you recover to a point in time that is prior to the addition of a partition, Db2
cannot roll back the definition of the partition. In such a recovery, Db2 clears all
data from the partition, and the partition remains part of the database.
If you recover a table space partition to a point in time that is before the data was
redistributed across table space partitions, you must include all partitions that are
affected by that redistribution in your recovery list.
See the information about point-in-time recovery for more details on these
restrictions.
Related concepts:
Point-in-time recovery (Db2 Utilities)
If you use the Db2 RECOVER utility, the DBD is updated dynamically to match
the restored table space on the next non-index access of the table. The table space
must be in write access mode.
If you use a method outside of Db2 control, such as DSN1COPY to restore a table
space to a prior point in time, run the REPAIR utility with the LEVELID option to
force Db2 to accept the down-level data. Then, run the REORG utility on the table
space to correct the DBD.
If you use the RECOVER utility to recover a LOB table space to a prior point of
consistency, the RECOVER utility might place the table space in a pending state.
Related concepts:
Options for restoring data to a prior point in time
For example, in the Db2 sample application, a column in the EMPLOYEE table
identifies the department to which each employee belongs. The departments are
described by records in the DEPARTMENT table, which is in a different table
space. If only that table space is restored to a prior state, a row in the unrestored
EMPLOYEE table might identify a department that does not exist in the restored
DEPARTMENT table.
Run the CHECK INDEX utility to validate the consistency of indexes with their
associated table data. Run the CHECK DATA utility to validate the consistency of
base table space data with related table spaces. If LOB columns exist, run the
CHECK LOB utility on any related LOB table spaces to validate the integrity of
each LOB table space within itself.
You can use the REPORT utility with the TABLESPACESET option to determine all
of the page sets that belong to a single table space set. Then, you can restore those
page sets that are related. However, if page sets are logically related outside of Db2
in application programs, you are responsible for identifying all of those page sets
on your own.
To determine a valid quiesce point for a table space set, determine a RECOVER
TOLOGPOINT value.
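A sketch of this approach, using hypothetical database, table space, and log point values (replace them with the names and RBA or LRSN that apply to your table space set):

REPORT TABLESPACESET TABLESPACE DBNAME1.TS1

RECOVER TABLESPACE DBNAME1.TS1
        TABLESPACE DBNAME1.TS2
        TOLOGPOINT X'00000551BE7D'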
Related concepts:
Creation of relationships with referential constraints (Introduction to Db2 for
z/OS)
LOB table spaces
XML table spaces
Temporal tables and data versioning
Archive-enabled tables and archive tables (Introduction to Db2 for z/OS)
Related reference:
CHECK DATA (Db2 Utilities)
CHECK INDEX (Db2 Utilities)
CHECK LOB (Db2 Utilities)
RECOVER (Db2 Utilities)
REPORT (Db2 Utilities)
Recovery of indexes
When you recover indexes to a prior point of consistency, some rules apply.
More specifically, you must consider how indexes on altered tables and indexes on
tables in partitioned table spaces can restrict recovery.
You cannot use the RECOVER utility to recover an index to a point in time that
existed before you issued any of the following ALTER statements on that index.
These statements place the index in REBUILD-pending (RBDP) status:
v ALTER INDEX PADDED
v ALTER INDEX NOT PADDED
v ALTER TABLE SET DATA TYPE on an indexed column for numeric data type
changes
v ALTER TABLE ADD COLUMN and ALTER INDEX ADD COLUMN that are not
issued in the same commit scope
v ALTER INDEX REGENERATE
When you recover a table space to a prior point in time and the table space uses
indexes that were set to RBDP at any time after the point of recovery, you must
use the REBUILD INDEX utility to rebuild these indexes.
If you use the COPY utility at the partition level, you need to use the RECOVER
utility at the partition level, too. If you use the COPY utility at the partition level
and then try to recover the index with the RECOVER utility, an error occurs. If the
COPY utility is used at the index level, you can use the RECOVER utility at either
the index level or the partition level.
You cannot recover an index space to a point in time that is prior to rotating
partitions. After you rotate a partition, you cannot recover the contents of that
partition to a point in time that is before the rotation.
If you recover to a point in time that is prior to the addition of a partition, Db2
cannot roll back the addition of that partition. In such a recovery, Db2 clears all
data from the partition, and it remains part of the database.
To provide a recovery base for media failures, create one or more additional
sequential format image copies when you create a FlashCopy image copy. If the
FlashCopy image copy has been migrated or deleted, the recovery proceeds from
the sequential image copies if available. The following utilities can create
additional sequential format image copies in a single execution:
v COPY
v LOAD with the REPLACE option specified
v REORG TABLESPACE
You can recover an object to a specific FlashCopy image copy by specifying the
RECOVER utility with the TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY
options. If the object is partitioned, you must specify the data set number on the
DSNUM parameter in the RECOVER utility control statement for each partition
that is being recovered.
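For example, to recover one partition of a hypothetical partitioned table space to its most recent full image copy (a FlashCopy image copy in this scenario):

RECOVER TABLESPACE DBNAME1.TS1 DSNUM 3 TOLASTFULLCOPY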
For recovery to a log point or full recovery, if uncommitted work was backed out
from the FlashCopy image copy during consistency processing, recovery requires
Procedure
To identify the objects that must be recovered to the same point in time:
Procedure
In one operation, you can copy the data and establish a point of consistency for a
list of objects by using the COPY utility with the option SHRLEVEL REFERENCE.
That operation allows only read access to the data while it is copied. The data is
consistent at the time when copying starts and remains consistent until copying
ends. The advantage of this operation is that the data can be restarted at a point of
consistency by restoring the copy only, with no need to read log records. The
disadvantage is that updates cannot be made while the data is being copied.
You can use the CONCURRENT option of the COPY utility to make a backup,
with DFSMSdss concurrent copy, that is recorded in the Db2 catalog.
Ideally, you should copy data without allowing updates. However, restricting
updates is not always possible. To allow updates while the data is being copied,
you can take either of the following actions:
v Use the COPY utility with the SHRLEVEL CHANGE option.
v Use an offline program to copy the data, such as DSN1COPY, DFSMShsm, or
disk dump.
You can copy all of the data in your system, including the Db2 catalog and
directory data by using the BACKUP SYSTEM utility. Since the BACKUP SYSTEM
utility allows updates to the data, the system-level backup can include
uncommitted data.
Related reference:
COPY (Db2 Utilities)
Procedure
To establish a single point of consistency, or a quiesce point, for one or more page
sets:
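For example, the following statement establishes a quiesce point for a table space and all table spaces in its table space set (names are hypothetical):

QUIESCE TABLESPACESET TABLESPACE DBNAME1.TS1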
Example
Procedure
Alternate method: Alternatively, you can use an offline method to copy the
data. In that case, stop Db2 first; that is, do the next step before doing this step.
If you do not stop Db2 before copying, you might have trouble restarting after
restoring the system. If you do a volume restore, verify that the restored data is
cataloged in the integrated catalog facility catalog. Use the access method
services LISTCAT command to get a listing of the integrated catalog.
3. Stop Db2 with the command STOP DB2 MODE (QUIESCE).
Procedure
To create essential disaster recovery elements:
1. Make image copies:
a. Make copies of your data sets and Db2 catalogs and directories.
Use the COPY utility to make copies for the local subsystem and additional
copies for disaster recovery. You can also use the COPYTOCOPY utility to
make additional image copies from the primary image copy made by the
COPY utility. Install your local subsystem with the LOCALSITE option of
the SITE TYPE field on installation panel DSNTIPO. Use the
RECOVERYDDN option when you run COPY to make additional copies for
disaster recovery. You can use those copies on any Db2 subsystem that you
have installed using the RECOVERYSITE option.
Tip: You can also use these copies on a subsystem that is installed with the
LOCALSITE option if you run RECOVER with the RECOVERYSITE option.
Alternatively, you can use copies that are prepared for the local site on a
recovery site if you run RECOVER with the option LOCALSITE.
However, if you take precautions when using dual logging, such as making
another copy of the first archive log, you can send the second copy to the
recovery site. If recovery is necessary at the recovery site, specify YES for
the READ COPY2 ARCHIVE field on installation panel DSNTIPO. Using
this option causes Db2 to request the second archive log first.
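A sketch of a COPY statement that writes both local copies and recovery-site copies in one execution (the DD names and object names are hypothetical; each DD must be allocated in the job JCL):

COPY TABLESPACE DBNAME1.TS1
     COPYDDN(LOCALP,LOCALB)
     RECOVERYDDN(RECOVP,RECOVB)
     SHRLEVEL REFERENCE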
b. Optional: Catalog the archive logs if you want to track them.
You will probably need some way to track the volume serial numbers and
data set names. One way of doing this is to catalog the archive logs to
create a record of the necessary information. You can also create your own
tracking method and do it manually.
c. Use the print log map utility to create a BSDS report.
d. Send the archive copy, the BSDS report, and any additional information
about the archive log to the recovery site.
e. Record this activity at the recovery site when the archive copy and the
report are received.
3. Choose a consistent system time:
Important: After you establish a consistent system time, do not alter the
system clock. Any manual change in the system time (forward or backward)
can affect how Db2 writes and processes image copies and log records.
a. Choose a consistent system time for all Db2 subsystems.
Db2 utilities, including the RECOVER utility, require system clocks that are
consistent across all members of a Db2 data-sharing group. To prevent
inconsistencies during data recovery, ensure that all system times are
consistent with the system time at the failing site.
b. Ensure that the system clock remains consistent.
4. Back up integrated catalog facility catalogs:
a. Back up all Db2-related integrated catalog facility catalogs with the VSAM
EXPORT command on a daily basis.
b. Synchronize the backups with the cataloging of image copies and archives.
c. Use the VSAM LISTCAT command to create a list of the Db2 entries.
d. Send the EXPORT backup and list to the recovery site.
e. Record this activity at the recovery site when the EXPORT backup and list
are received.
5. Back up Db2 libraries:
a. Back up Db2 libraries to tape when they are changed. Include the SMP/E,
load, distribution, and target libraries, as well as the most recent user
applications and DBRMs.
b. Back up the DSNTIJUZ job that builds the ZPARM and DECP modules.
c. Back up the data set allocations for the BSDS, logs, directory, and catalogs.
d. Document your backups.
e. Send backups and corresponding documentation to the recovery site.
f. Record activity at the recovery site when the library backup and
documentation are received.
For disaster recovery to be successful, all copies and reports must be updated and
sent to the recovery site regularly. Data is up to date through the last archive that
is sent.
Related concepts:
Multiple image copies (Db2 Utilities)
Related tasks:
Archiving the log
Related information:
Performing remote-site disaster recovery
Procedure
To resolve problems:
1. Issue the following Db2 command:
-STOP DATABASE (DSNDB07)
Note: If you are adding or deleting work files, you do not need to stop
database DSNDB07.
2. Use the DELETE and DEFINE functions of access method services to redefine a
user work file on a different volume, and reconnect it to Db2.
3. Issue the following Db2 command:
-START DATABASE (DSNDB07)
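The redefinition in step 2 might look like the following access method services sketch. The catalog name, data set name, space quantities, and volume are hypothetical; match them to your installation's naming convention for DSNDB07 data sets:

DELETE 'DSNCAT.DSNDBC.DSNDB07.DSN4K01.I0001.A001' CLUSTER
DEFINE CLUSTER (NAME('DSNCAT.DSNDBC.DSNDB07.DSN4K01.I0001.A001') -
       LINEAR CYLINDERS(100 50) VOLUMES(VOL002) -
       SHAREOPTIONS(3 3))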
Procedure
Note: If you are adding or deleting work files, you do not need to stop
database DSNDB07.
3. Issue the following SQL statement to drop the table space that has the problem:
DROP TABLESPACE DSNDB07.tsname;
4. Re-create the table space. You can use the same storage group, because the
problem volume has been removed, or you can use an alternate volume.
Procedure
If the error range is reset while the disk error still exists, and if Db2 has an I/O
error when using the work file table space again, Db2 sets the error range again.
| If the unavailable portion includes information that is needed for internal Db2
| processing, an attempt to use the RECOVER utility to restore directory table spaces
| DBD01 or SYSUTILX, or catalog table space SYSTSCPY fails with ABEND04E
| RC00E40119.
Procedure
Instead of using the RECOVER utility, use the following procedure to recover those
table spaces and their indexes:
However, you can use the same RECOVER control statement to recover a catalog
or directory table space along with its corresponding IBM-defined indexes. After
these logically dependent objects are restored to an undamaged state, you can
recover the remaining catalog and directory objects in a single RECOVER utility
control statement. These restrictions apply regardless of the type of recovery that
you perform on the catalog.
You can use the REPORT utility to report on recovery information about the
catalog and directory.
To avoid restart processing of any page sets before attempts are made to recover
any of the members of the list of catalog and directory objects, you must set
subsystem parameters DEFER and ALL. You can do this by setting the values
DEFER in field 1 and ALL in field 2 of installation panel DSNTIPS.
Important: Recovering the Db2 catalog and directory to a prior point in time is
strongly discouraged.
Related concepts:
How to report recovery information
Copying catalog and directory objects (Db2 Utilities)
Related tasks:
Deferring restart processing
Recovering catalog and directory objects (Db2 Utilities)
Related reference:
RECOVER (Db2 Utilities)
Procedure
If you recover to a point in time at which the identity column existed, you might
create a gap in the sequence of identity column values. When you insert a row
after this recovery, Db2 produces an identity value for the row as if all previously
added rows still exist.
To prevent a gap in identity column values, use the following ALTER TABLE
statement to modify the attributes of the identity column before you insert rows
after the recovery:
ALTER TABLE table-name
ALTER COLUMN identity-column-name
RESTART WITH next-identity-value
Tip: To determine the last value in an identity column, issue the MAX column
function for ascending sequences of identity column values, or the MIN column
function for descending sequences of identity column values. This method works
only if the identity column does not use CYCLE.
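For example, for an ascending identity column (the table, column, and restart value are hypothetical; the restart value should be one greater than the maximum that the query returns):

SELECT MAX(ORDER_ID) FROM ADMF001.ORDERS;

ALTER TABLE ADMF001.ORDERS
  ALTER COLUMN ORDER_ID
  RESTART WITH 100001;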
If you recover to a point in time at which the identity column was not yet defined,
that identity column remains part of the table. The resulting identity column no
longer contains values.
For the log point, you can identify a quiesce point or a common SHRLEVEL
REFERENCE copy point. This action avoids placing indexes in the
CHECK-pending or RECOVER-pending status. If the log point is not a common
quiesce point or SHRLEVEL REFERENCE copy point for all objects, use the
following procedure, which ensures that the table spaces and indexes are
synchronized and eliminates the need to run the CHECK INDEX utility.
With recovery to a point in time with consistency, which is the default recovery
type, you do not need to identify a quiesce point or a common SHRLEVEL
REFERENCE copy point. This recovery might be faster because inconsistencies do
not have to be resolved.
Procedure
To recover to a log point:
1. Use the RECOVER utility to recover table spaces to the log point.
2. Use concurrent REBUILD INDEX jobs to rebuild the indexes for each table
space.
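These two steps might look like the following utility statements, with hypothetical object names and log point:

RECOVER TABLESPACE DBNAME1.TS1 TOLOGPOINT X'00000551BE7D'
REBUILD INDEX (ALL) TABLESPACE DBNAME1.TS1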
Objects that are not logged include the table space, the index, and the index space.
Recovery can be to any recoverable point. A recoverable point is established when:
v A table space is altered from logged to not logged.
v An image copy is taken against an object that is not logged.
v An ALTER TABLE statement is issued with the ADD PARTITION clause, against
a table in a table space that has the NOT LOGGED attribute.
v Db2 adds a new partition in response to insertion of data into a
partition-by-growth table space.
If a base table space is altered so that it is not logged, and its associated LOB table
spaces already have the NOT LOGGED attribute, then the point where the table
space is altered is not a recoverable point.
If Db2 restart recovery determines that a table space that is not logged might have
been in the process of being updated at the time of the failure, then the table space
or partition is placed in the LPL and is marked RECOVER-pending.
Related concepts:
The NOT LOGGED attribute
Related reference:
ALTER TABLE (Db2 SQL)
Procedure
The following tables show how the logging attribute of the utility and the logging
attribute of the table space interact:
The following table shows the possible table space statuses for non-LOB table
spaces that are not logged:
Table 48. Status of non-LOB table spaces that are not logged, after LOAD or REORG with
LOG NO keyword

Inline copy   Records discarded   Table space status
Yes           No                  No pending status
Yes           Yes                 ICOPY-pending
No            not applicable      ICOPY-pending
If Db2 restart recovery determines that a table space that is not logged might have
been in the process of being updated at the time of the failure, then the table space
partition is placed in the LPL and is marked RECOVER-pending. You have several
options for removing a table space from the LPL and resetting the
RECOVER-pending status:
When a job fails and a rollback begins, the undo records are not available for table
spaces that are not logged during the back-out. Therefore, the rows that are in the
table space after recovery might not be the correct rows. You can issue the
appropriate SQL statements to re-create the intended rows.
Use the REFRESH TABLE statement to repopulate a materialized query table, but
only if the materialized query table is alone in its table space. If the table is not
alone in its table space, a utility must be used to reset the table space and remove
it from RECOVER-pending status.
You can run the RECOVER utility against a table space with the NOT LOGGED
logging attribute. To do so, the current logging attribute of the table space must
match the logging attribute of the recovery base (that is, the logging attribute of
the table space when the image copy was taken). If no changes have been made to
the table space since the last point of recovery, the utility completes successfully. If
changes have been made, the utility completes with message DSNU1504I.
Use the LOAD REPLACE utility or the LOAD REPLACE PART utility in the
following situations:
v With an input data set to empty the table space and repopulate the table.
v Without an input data set to empty the table space to prepare for one or more
INSERT statements to repopulate the table.
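For example, to empty a table space without supplying input data, you can run LOAD REPLACE with the SYSREC DD coded as DUMMY (the table name is hypothetical):

LOAD DATA REPLACE
     INTO TABLE ADMF001.ORDERS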
Use the DELETE statement without a WHERE clause to empty the table when the
table space is segmented or universal, the table is alone in its table space, and the
table does not have:
v A VALIDPROC
v Referential constraints
v Delete triggers
v A SECURITY LABEL column (or it does have such a column, but multilevel
security with row level granularity is not in effect)
Use the TRUNCATE TABLE statement to empty the table when the table space is
segmented, the table is alone in its table space, and the table does not have:
v A VALIDPROC
v Referential constraints
v A SECURITY LABEL column (or it does have such a column, but multilevel
security with row level granularity is not in effect)
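For example (the table name is hypothetical):

-- Segmented or universal table space:
DELETE FROM ADMF001.ORDERS;

-- Segmented table space:
TRUNCATE TABLE ADMF001.ORDERS;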
Removing various pending states from LOB and XML table spaces
You can remove various pending states from a LOB table space or an XML table
space by using a collection of utilities in a specific order.
Procedure
To remove pending states from a LOB table space or an XML table space:
1. Use the REORG TABLESPACE utility to remove the REORP status.
2. If the table space status is auxiliary CHECK-pending status:
a. Use CHECK LOB for all associated LOB table spaces.
b. Use CHECK INDEX for all LOB indexes, as well as the document ID, node
ID, and XML indexes.
3. Use the CHECK DATA utility to remove the CHECK-pending status.
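The sequence might look like the following utility control statements, run in this order (base and LOB table space names are hypothetical):

REORG TABLESPACE DBNAME1.TS1
CHECK LOB TABLESPACE DBNAME1.LOBTS1
CHECK INDEX (ALL) TABLESPACE DBNAME1.LOBTS1
CHECK DATA TABLESPACE DBNAME1.TS1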
You cannot use the DSN1COPY utility to restore data that was backed up with the
DFSMSdss concurrent copy facility.
Be careful when creating backups with the DSN1COPY utility. You must ensure
that the data is consistent, or you might produce faulty backup copies. One
advantage of using COPY to create backups is that it does not allow you to copy
data that is in CHECK-pending or RECOVER-pending status. You can use COPY
to prepare an up-to-date image copy of a table space, either by making a full
image copy or by making an incremental image copy and merging that
incremental copy with the most recent full image copy.
Keep access method services LISTCAT listings of table space data sets that
correspond to each level of retained backup data.
Related reference:
DSN1COPY (Db2 Utilities)
Even though Db2 data sets are defined as VSAM data sets, Db2 data cannot be
read or written by VSAM record processing because it has a different internal
format. The data can be accessed by VSAM control interval (CI) processing. If you
manage your own data sets, you can define them as VSAM linear data sets (LDSs),
and access them through services that support data sets of that type.
Access method services for CI and LDS processing are available in z/OS. IMPORT
and EXPORT use CI processing; PRINT and REPRO do not, but they do support
LDSs.
DFSMS Data Set Services (DFSMSdss) is available on z/OS and provides dump
and restore services that can be used on Db2 data sets. Those services use VSAM
CI processing.
When you recover a dropped object, you essentially recover a table space to a
point in time. If you want to use log records to perform forward recovery on the
table space, you need the IBM Db2 Log Analysis Tool for z/OS.
Procedure
To recover the object:
1. Run regular catalog reports that include a list of all OBIDs in the subsystem.
When a table has been created with the clause WITH RESTRICT ON DROP, no
one can drop the table, or the table space or database that contains the table, until
the restriction on the table is removed. The ALTER TABLE statement includes a
clause to remove the restriction, as well as one to impose it.
To recover a dropped table, you need a full image copy or a DSN1COPY file that
contains the data from the dropped table.
For segmented or universal table spaces, the image copy or DSN1COPY file must
contain the table when it was active (that is, created). Because of the way space is
reused for segmented table spaces, this procedure cannot be used if the table was
not active when the image copy or DSN1COPY was made. For nonsegmented table
spaces, the image copy or DSN1COPY file can contain the table when it was active
or not active.
Procedure
Stopping the table space is necessary to ensure that all changes are written out
and that no data updates occur during this procedure.
4. Find the OBID for the table that you created in step 2 by querying the
SYSIBM.SYSTABLES catalog table.
The following statement returns the object ID (OBID) for the table:
SELECT NAME, OBID FROM SYSIBM.SYSTABLES
WHERE NAME='table_name'
AND CREATOR='creator_name';
This value is returned in decimal format, which is the format that you need
for DSN1COPY.
5. Run DSN1COPY with the OBIDXLAT and RESET options to perform the
OBID translation and to copy data from the dropped table into the original
data set. You must specify a previous full image copy data set, inline copy
data set, or DSN1COPY file as the input data set SYSUT1 in the control
statement. Specify each of the input records in the following order in the
SYSXLAT file to perform OBID translations:
a. The DBID that you recorded in step 1 on page 495 as both the translation
source and the translation target
b. The PSID that you recorded in step 1 on page 495 as both the translation
source and the translation target
c. The original OBID that you recorded in step 1 on page 495 for the dropped
table as the translation source and the OBID that you recorded in step 4 as
the translation target
d. OBIDs of all other tables in the table space that you recorded in step 2 as
both the translation sources and translation targets
Be sure that you have named the VSAM data sets correctly by checking
messages DSN1998I and DSN1997I after DSN1COPY completes.
6. Use DSN1COPY with the OBIDXLAT and RESET options to apply any
incremental image copies. You must apply these incremental copies in
sequence, and specify the same SYSXLAT records that step 5 specifies.
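Steps 5 and 6 might use JCL like the following sketch. All data set names and the DBID, PSID, and OBID values are hypothetical; each SYSXLAT record is a source,target pair, and the FULLCOPY keyword tells DSN1COPY that SYSUT1 is a full image copy:

//DSN1CPY  EXEC PGM=DSN1COPY,PARM='FULLCOPY,OBIDXLAT,RESET'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=HLQ.IMAGCOPY.TS1,DISP=SHR
//SYSUT2   DD DSN=DSNCAT.DSNDBD.DBNAME1.TS1.I0001.A001,DISP=OLD
//SYSXLAT  DD *
260,260
2,2
15,28
/*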
When a table space is dropped, Db2 loses all information about the image copies
of that table space. Although the image copy data set is not lost, locating it might
require examination of image copy job listings or manually recorded information
about the image copies.
The recovery procedures for user-managed data sets and for Db2-managed data
sets are slightly different.
6. Run DSN1COPY with the OBIDXLAT and RESET options to translate the
OBID and to copy data from a previous full image copy data set, inline copy
data set, or DSN1COPY file. Use one of these copies as the input data set
SYSUT1 in the control statement. Specify each of the input records in the
following order in the SYSXLAT file to perform OBID translations:
a. The DBID that you recorded in step 1 as both the translation source and
the translation target
b. The PSID that you recorded in step 1 as the translation source and the
PSID that you recorded in step 5 as the translation target
c. The OBIDs that you recorded in step 1 as the translation sources and the
OBIDs that you recorded in step 5 as the translation targets
Be sure that you name the VSAM data sets correctly by checking messages
DSN1998I and DSN1997I after DSN1COPY completes.
7. Use DSN1COPY with the OBIDXLAT and RESET options to apply any
incremental image copies to the recovered table space. You must apply these
incremental copies in sequence, and specify the same SYSXLAT records that the
previous step specifies.
To copy the data sets, use the OBID-translate function of the DSN1COPY utility.
Procedure
Find the target identifiers of the objects that you created in step 4 on page 499
(which consist of a PSID for the table space and the OBIDs for the tables
within that table space) by querying the SYSIBM.SYSTABLESPACE and
SYSIBM.SYSTABLES catalog tables.
The following statement returns the object ID for a table space; this is the
PSID.
SELECT DBID, PSID FROM SYSIBM.SYSTABLESPACE
WHERE NAME='tablespace_name' AND DBNAME='database_name'
AND CREATOR='creator_name';
The following statement returns the object ID for a table:
SELECT NAME, OBID FROM SYSIBM.SYSTABLES
WHERE NAME='table_name'
AND CREATOR='creator_name';
These values are returned in decimal format, which is the format that you
need for the DSN1COPY utility.
7. Run DSN1COPY with the OBIDXLAT and RESET options to perform the
OBID translation and to copy the data from the renamed VSAM data set that
contains the dropped table space to the newly defined VSAM data set. Specify
the VSAM data set that contains data from the dropped table space as the
input data set SYSUT1 in the control statement. Specify each of the input
records in the following order in the SYSXLAT file to perform OBID
translations:
a. The DBID that you recorded in step 1 on page 499 as both the translation
source and the translation target
b. The PSID that you recorded in step 1 on page 499 as the translation source
and the PSID that you recorded in step 6 as the translation target
c. The original OBIDs that you recorded in step 1 on page 499 as the
translation sources and the OBIDs that you recorded in step 6 as the
translation targets
Be sure that you have named the VSAM data sets correctly by checking
messages DSN1998I and DSN1997I after DSN1COPY completes.
8. Use DSN1COPY with the OBIDXLAT and RESET options to apply any
incremental image copies to the recovered table space. You must apply these
incremental copies in sequence, and specify the same SYSXLAT records that
step 7 specifies.
Important: After you complete this step, you have essentially recovered the
table space to the point in time of the last image copy. If you want to use log
records to perform forward recovery on the table space, you must use the IBM
Db2 Log Analysis Tool for z/OS.
For more information about point-in-time recovery, see “Recovery of data to a
prior point in time” on page 460.
9. Start the table space for normal use by using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
10. Rebuild all indexes on the table space.
The RESTORE SYSTEM utility uses system-level backups that contain only Db2
objects to restore your Db2 system to a given point in time. The following
prerequisites apply:
v Before you can use the RESTORE SYSTEM utility, you must use the BACKUP
SYSTEM utility to create system-level backups. Choose either DATA ONLY or
FULL, depending on your recovery needs. Choose FULL if you want to back up
both your Db2 data and your Db2 logs.
v When a system-level backup on tape is the input for the RESTORE SYSTEM
utility, the user who submits the job must have the following two RACF
authorities:
– Operations authority, as in ATTRIBUTES=OPERATIONS
– DASDVOL authority, which you can set in the following way:
SETROPTS GENERIC(DASDVOL)
RDEFINE DASDVOL * UACC(ALTER)
SETROPTS CLASSACT(DASDVOL)
SETROPTS GENERIC(DASDVOL) REFRESH
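A minimal sketch of the utility statements involved: BACKUP SYSTEM runs periodically at the local site, and RESTORE SYSTEM runs later, in a separate job, when recovery is needed:

BACKUP SYSTEM FULL

RESTORE SYSTEM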
Procedure
Results
After the RESTORE SYSTEM utility completes successfully, your Db2 system has
been recovered to the given point in time with consistency.
Related concepts:
Backup and recovery involving clone tables
Related tasks:
Recovering a Db2 subsystem to a prior point in time
Related reference:
RESTORE SYSTEM (Db2 Utilities)
Related information:
Tape Authorization for DB2 RESTORE SYSTEM Utility
Procedure
If you choose to recover catalog and directory tables to a prior point in time, you
need to first shut down the Db2 subsystem cleanly and then restart in
ACCESS(MAINT) mode before the recovery.
2. Execute the following SELECT statements to find a list of table space
and table definitions in the Db2 catalog:
SELECT NAME, DBID, PSID FROM SYSIBM.SYSTABLESPACE;
SELECT NAME, TSNAME, DBID, OBID FROM SYSIBM.SYSTABLES;
3. For each table space name in the catalog, look for a data set with a
corresponding name. If a data set exists, take the following additional actions:
a. Find the field HPGOBID in the header page section of the DSN1PRNT
output. This field contains the DBID and PSID for the table space. Check if
the corresponding table space name in the Db2 catalog has the same DBID
and PSID.
b. If the DBID and PSID do not match, execute DROP TABLESPACE and
CREATE TABLESPACE statements to replace the incorrect table space entry
in the Db2 catalog with a new entry. Be sure to make the new table space
definition exactly like the old one. If the table space is segmented, SEGSIZE
must be identical for the old and new definitions.
You can drop a LOB table space only if it is empty (that is, it does not
contain auxiliary tables). If a LOB table space is not empty, you must first
drop the auxiliary table before you drop the LOB table space. To drop
auxiliary tables, you can perform one of the following actions:
v Drop the base table.
v Delete all rows that reference LOBs from the base table, and then drop
the auxiliary table.
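As a sketch, the replacement might look like the following statements; the database name, table space name, and SEGSIZE value are hypothetical and must match the original definition exactly:

   DROP TABLESPACE dbname.tsname;
   COMMIT;
   CREATE TABLESPACE tsname
     IN dbname
     SEGSIZE 32;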
c. Find the PGSOBD fields in the data page sections of the DSN1PRNT output.
These fields contain the OBIDs for the tables in the table space. For each
OBID that you find in the DSN1PRNT output, search the Db2 catalog for a
table definition with the same OBID.
d. If any of the OBIDs in the table space do not have matching table
definitions, examine the DSN1PRNT output to determine the structure of
the tables that are associated with these OBIDs. If you find a table whose
structure matches a definition in the catalog, but the OBIDs differ, proceed
to the next step. The OBIDXLAT option of DSN1COPY corrects the
mismatch. If you find a table for which no table definition exists in the
catalog, re-create the table definition by using the CREATE TABLE
statement. To re-create a table definition for a table that has had columns
added, first use the original CREATE TABLE statement, and then use
ALTER TABLE to add columns, which makes the table definition match the
current structure of the table.
e. Use the DSN1COPY utility with the OBIDXLAT option to copy the existing
data to the new tables in the table space, and translate the DBID, PSID, and
OBIDs.
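A DSN1COPY job with the OBIDXLAT option might look like the following sketch; all data set names and ID pairs are placeholders. By convention, the first SYSXLAT record maps the old DBID to the new DBID, the second maps the PSID, and subsequent records map table OBIDs:

   //COPY     EXEC PGM=DSN1COPY,PARM='OBIDXLAT'
   //SYSUT1   DD DISP=SHR,DSN=catname.DSNDBD.dbname.oldts.I0001.A001
   //SYSUT2   DD DISP=OLD,DSN=catname.DSNDBD.dbname.newts.I0001.A001
   //SYSXLAT  DD *
   260,280
   2,2
   3,10
   /*
   //SYSPRINT DD SYSOUT=*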
Important: For objects involved in cloning, rename the base and clone
objects at the same time.
Example: Assume that the catalog specifies an IPREFIX of J for an object, but
the VSAM data set that corresponds to this object is:
catname.DSNDBC.dbname.spname.I0001.A001
You must rename this data set to:
catname.DSNDBC.dbname.spname.J0001.A001
7. Delete the VSAM data sets that are associated with table spaces that were
created with the DEFINE NO option and that reverted to an unallocated state.
After you delete the VSAM data sets, you can insert or load rows into these
unallocated table spaces to allocate new VSAM data sets.
Related concepts:
Recovery of tables that contain identity columns
Related reference:
DROP (Db2 SQL)
CREATE TABLESPACE (Db2 SQL)
DSN1COPY (Db2 Utilities)
DSN1PRNT (Db2 Utilities)
Procedure
a. Run the change log inventory utility (DSNJU003) with the following control
statement:
CRESTART CREATE,SYSPITRT=log-truncation-timestamp
In this control statement, substitute log-truncation-timestamp with the
timestamp of the point to which you want to recover.
b. Start Db2.
c. Run the RESTORE SYSTEM utility by issuing the RESTORE SYSTEM control
statement.
This utility control statement performs a recovery to the current time (or to
the time of the last log transmission from the local site).
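The conditional restart step above can be sketched as the following DSNJU003 job; the BSDS data set names are placeholders:

   //STEP1    EXEC PGM=DSNJU003
   //SYSUT1   DD DISP=OLD,DSN=prefix.BSDS01
   //SYSUT2   DD DISP=OLD,DSN=prefix.BSDS02
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
   CRESTART CREATE,SYSPITRT=log-truncation-timestamp
   /*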
Procedure
To recover your Db2 subsystem without using the BACKUP SYSTEM utility:
1. Prepare for recovery.
a. Issue the Db2 command SET LOG SUSPEND to suspend logging and
update activity, and to quiesce 32 KB page writes and data set extensions.
For data sharing systems, issue the command to each member of the data
sharing group.
b. Use the FlashCopy function to copy all Db2 volumes. Include any ICF
catalogs that are used by Db2, as well as active logs and BSDSs.
c. Issue the Db2 command SET LOG RESUME to resume normal Db2 activity.
d. Use DFSMSdss to dump the disk copies that you just created to tape, and
then transport this tape to the remote site. You can also use other methods
to transmit the copies that you make to the remote site.
2. Recover your Db2 subsystem.
a. Use DFSMSdss to restore the FlashCopy data sets to disk.
b. Run the DSNJU003 utility by using the CRESTART CREATE,
SYSPITR=log-truncation-point control statement.
The log-truncation-point is the RBA or LRSN of the point to which you want
to recover.
c. Restore any logs on tape to disk.
d. Start Db2.
e. Run the RESTORE SYSTEM utility using the RESTORE SYSTEM LOGONLY
control statement to recover to the current time (or to the time of the last
log transmission from the local site).
Introductory concepts
Clone table spaces and index spaces are stored in separate physical data sets. You
must copy them and recover them separately. Output from the REPORT utility
includes information about clone tables, if they exist.
The QUIESCE command and the COPY and RECOVER utilities each use the
CLONE keyword to function on clone objects.
v Running QUIESCE with the CLONE keyword establishes a quiesce point for a
clone object.
v Running the COPY utility with the CLONE keyword takes a copy of a clone
object.
v Running the RECOVER utility with the CLONE keyword recovers a clone object.
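For example, assuming a clone exists for the hypothetical table space DB1.TS1, the corresponding utility statements might look like this sketch:

   QUIESCE TABLESPACE DB1.TS1 CLONE
   COPY TABLESPACE DB1.TS1 CLONE COPYDDN(SYSCOPY)
   RECOVER TABLESPACE DB1.TS1 CLONE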
The RESTORE SYSTEM utility restores the databases in the volume copies that the
BACKUP SYSTEM utility provided. After restoring the data, the RESTORE
SYSTEM utility can then recover a Db2 subsystem to a given point in time.
Symptoms
The IRLM waits, loops, or abends. The following message might be issued:
DXR122E irlmnm ABEND UNDER IRLM TCB/SRB IN MODULE xxxxxxxx
ABEND CODE zzzz
Environment
If the IRLM abends, Db2 terminates. If the IRLM waits or loops, the IRLM
terminates, and Db2 terminates automatically.
Symptoms
No processing is occurring.
Symptoms
No I/O activity occurs for the affected disk address. Databases and tables that
reside on the affected unit are unavailable.
| 7. For user-managed table spaces, define the VSAM cluster and data components
| for the new volume by issuing the access method services DEFINE CLUSTER
| command with the same data set name as in the previous step, in the
| following format:
| catname.DSNDBx.dbname.tsname.y0001.znnn
| where nnn is the data set or partition number, left padded by 0 (zero).
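A DEFINE CLUSTER command for this step might look like the following sketch; the catalog name, database and table space names, volume, and space values are placeholders:

   //DEFINE   EXEC PGM=IDCAMS
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     DEFINE CLUSTER -
       ( NAME(catname.DSNDBC.dbname.tsname.I0001.A001) -
         LINEAR -
         REUSE -
         VOLUMES(newvol) -
         CYLINDERS(10 10) ) -
       DATA -
         ( NAME(catname.DSNDBD.dbname.tsname.I0001.A001) )
   /*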
Chapter 13. Recovering from different Db2 for z/OS problems 511
z/OS MVS System Commands
z/OS MVS Initialization and Tuning Reference
Symptoms
Unexpected data is returned from an SQL SELECT statement, even though the
SQLCODE that is associated with the statement is 0.
Causes
An SQLCODE of 0 indicates that Db2 and SQL did not cause the problem, so the
cause of the incorrect data in the table is the application.
Procedure
To back out the incorrect changes:
1. Run the REPORT utility twice, once using the RECOVERY option and once
using the TABLESPACESET option. On each run, specify the table space that
contains the inaccurate data. If you want to recover to the last quiesce point,
specify the option CURRENT when running REPORT RECOVERY.
2. Examine the REPORT output to determine the RBA of the quiesce point.
3. Run RECOVER TOLOGPOINT with the RBA that you found, specifying the
names of all related table spaces.
Results
Recovering all related table spaces to the same quiesce point prevents violations of
referential constraints.
Procedure
To back out the incorrect changes:
1. Run the DSN1LOGP stand-alone utility on the log scope that is available at
Db2 restart, using the SUMMARY(ONLY) option.
2. Determine the RBA of the most recent checkpoint before the first bad update
occurred, from one of the following sources:
v Message DSNR003I on the operator's console, which looks similar to this
message:
DSNR003I RESTART ..... PRIOR CHECKPOINT RBA=000007425468
The required RBA in this example is X'7425468'.
This technique works only if no checkpoints have been taken since the
application introduced the bad updates.
v Output from the print log map utility. You must know the time that the first
bad update occurred. Find the last BEGIN CHECKPOINT RBA before that time.
3. Run DSN1LOGP again, using SUMMARY(ONLY), and specify the checkpoint
RBA as the value of RBASTART. The output lists the work in the recovery log,
including information about the most recent complete checkpoint, a summary
of all processing, and an identification of the databases that are affected by each
active user.
4. Find the unit of recovery in which the error was made. One of the messages in
the output (identified as DSN1151I or DSN1162I) describes the unit of recovery
in which the error was made. To find the unit of recovery, use your knowledge
of the time that the program was run (START DATE= and TIME=), the
connection ID (CONNID=), authorization ID (AUTHID=), and plan name
(PLAN=). In that message, find the starting RBA as the value of START=.
5. Run the Db2 RECOVER utility with the TOLOGPOINT option, and specify the
starting RBA that you found in the previous step.
6. Recover any related table spaces or indexes to the same point in time.
Related concepts:
DSN1LOGP summary report
Related reference:
DSN1LOGP (Db2 Utilities)
Symptoms
Problems that occur in a Db2-IMS environment can result in a variety of
symptoms:
v An IMS wait, loop, or abend is accompanied by a Db2 message that goes to the
IMS console. This symptom indicates an IMS control region failure.
v When IMS connects to Db2, Db2 detects one or more units of recovery that are
indoubt.
v When IMS connects to Db2, Db2 detects that it has committed one or more units
of recovery that IMS indicates should be rolled back.
v Messages are issued to the IMS master terminal, to the logical terminal, or both
to indicate that some sort of IMS or Db2 abend has occurred.
Environment
Db2 can be used in an XRF (Extended Recovery Facility) recovery environment
with IMS.
Symptoms
v IMS waits, loops, or abends.
v Db2 attempts to send the following message to the IMS master terminal during
an abend:
DSNM002 IMS/TM xxxx DISCONNECTED FROM SUBSYSTEM yyyy RC=RC
This message cannot be sent if the failure prevents messages from being
displayed.
v Db2 does not send any messages for this problem to the z/OS console.
Environment
v Db2 detects that IMS has failed.
v Db2 either backs out or commits work that is in process.
v Db2 saves indoubt units of recovery, which need to be resolved at reconnection
time.
Symptoms
If Db2 has indoubt units of recovery that IMS did not resolve, the following
message is issued at the IMS master terminal, where xxxx is the subsystem
identifier:
DSNM004I RESOLVE INDOUBT ENTRY(S) ARE OUTSTANDING FOR SUBSYSTEM xxxx
Environment
v The connection remains active.
v IMS applications can still access Db2 databases.
v Some Db2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues might start to back
up. If the IMS queues fill to capacity, IMS terminates. Be aware of this potential
difficulty, and monitor IMS until the indoubt units of work are fully resolved.
Enter the following Db2 command, choosing to commit or roll back, and
specify the correlation ID:
-RECOVER INDOUBT (imsid) ACTION(COMMIT|ABORT) NID (nid)
If the command is rejected because of associated network IDs, use the same
command again, substituting the recovery ID for the network ID.
Related concepts:
Duplicate IMS correlation IDs
Recovering IMS indoubt units of work that need to be rolled back
When units of recovery between IMS and Db2 are indoubt at restart time, Db2 and
IMS sometimes handle the indoubt units of recovery differently. When this
situation happens, you might need to roll back the changes.
Symptoms
The following messages are issued after a Db2 restart:
DSNM005I IMS/TM RESOLVE INDOUBT PROTOCOL PROBLEM WITH SUBSYSTEM xxxx
Causes
The reason that these messages are issued is that indoubt units of work exist for a
Db2-IMS application, and the way that Db2 and IMS handle these units of work
differs.
At restart time, Db2 attempts to resolve any units of work that are indoubt. Db2
might commit some units and roll back others. Db2 records the actions that it takes
for the indoubt units of work. At the next connect time, Db2 verifies that the
actions that it took are consistent with the IMS decisions. If the Db2 RECOVER
INDOUBT command is issued prior to an IMS attempt to reconnect, Db2 might
decide to commit the indoubt units of recovery, whereas IMS might decide to roll
back the units of recovery. This inconsistency results in the DSNM005I message
being issued. Because Db2 tells IMS to retain the inconsistent entries, the DFS3602I
message is issued when the attempt to resolve the indoubt units of recovery ends.
Environment
v The connection between Db2 and IMS remains active.
v Db2 and IMS continue processing.
v No Db2 locks are held.
v No units of work are in an incomplete state.
Causes
The problem might be caused by a usage error in the application or by a Db2
problem.
Environment
v The failing unit of recovery is backed out by both DL/I and Db2.
v The connection between IMS and Db2 remains active.
Symptoms
Db2 fails or is not running, and one of the following status situations exists:
v If you specified error option Q, the program terminates with a U3051 user abend
completion code.
v If you specified error option A, the program terminates with a U3047 user abend
completion code.
In either of these situations, the IMS master terminal receives IMS message
DFS554, and the terminal that is involved in the problem receives IMS message
DFS555.
Symptoms
Problems that occur in a Db2-CICS environment can result in a variety of
symptoms, such as:
v Messages that indicate an abend in CICS or the CICS attachment facility
v A CICS wait or a loop
v Indoubt units of recovery between CICS and Db2
Environment
Db2 can be used in an XRF (Extended Recovery Facility) recovery environment
with CICS.
Symptoms
The following message is issued at the user's terminal:
DFH2206 TRANSACTION tranid ABEND abcode BACKOUT SUCCESSFUL
In this message, tranid represents the transaction that abnormally terminated, and
abcode represents the specific abend code.
Environment
v The failing unit of recovery is backed out in both CICS and Db2.
v The connection between CICS and Db2 remains active.
Symptoms
Any of the following symptoms might occur:
v CICS waits or loops.
v CICS abends, as indicated by messages or dump output.
Environment
Db2 performs each of the following actions:
v Detects the CICS failure.
v Backs out inflight work.
If you receive messages that indicate a CICS abend, examine the messages and
dump output for more information.
If threads are connected to Db2 when CICS terminates, Db2 issues message
DSN3201I. The message indicates that Db2 end-of-task (EOT) routines have
cleaned up and disconnected any connected threads.
Symptoms
Any of the following symptoms might occur:
v CICS remains operational, but the CICS attachment facility abends.
v The CICS attachment facility issues a message that indicates the reason for the
connection failure, or it requests a X'04E' dump.
v The reason code in the X'04E' dump indicates the reason for failure.
v CICS issues message DFH2206 that indicates that the CICS attachment facility
has terminated abnormally with the DSNC abend code.
v CICS application programs that try to access Db2 while the CICS attachment
facility is inactive are abnormally terminated. The code AEY9 is issued.
Environment
CICS backs out the abnormally terminated transaction and treats it like an
application abend.
Symptoms
One of the following messages is sent to the user-named CICS destination that is
specified for the MSGQUEUEn(name) attribute in the RDO (resource definition
online): DSN2001I, DSN2034I, DSN2035I, or DSN2036I.
Causes
For CICS, a Db2 unit of recovery might be indoubt if the forget entry (X'FD59') of
the task-related installation exit routine is absent from the CICS system journal.
The indoubt condition applies only to the Db2 unit of recovery in this case because
CICS already committed or backed out any changes to its resources.
A Db2 unit of recovery is indoubt for Db2 if an End Phase 1 is present and the
Begin Phase 2 is absent.
Environment
The following table summarizes the situations that can exist when CICS units of
recovery are indoubt.
Table 49. Situations that involve CICS abnormal indoubt units of recovery
Message ID Meaning
DSN2001I The named unit of recovery cannot be resolved by CICS because CICS
was cold started. The CICS attachment facility continues the startup
process.
DSN2034I The named unit of recovery is not indoubt for Db2, but it is indoubt
according to CICS log information. The reason is probably a CICS restart
with the wrong tape. The problem might also be caused by a Db2 restart
to a prior point in time.
DSN2035I The named unit of recovery is indoubt for Db2, but it is not in the CICS
indoubt list. This is probably due to an incorrect CICS restart. The CICS
attachment facility continues the startup process and provides a
transaction dump. The problem might also be caused by a Db2 restart to
a prior point in time.
CICS retains details of indoubt units of recovery that were not resolved during
connection startup. An entry is purged when it no longer appears in the list that
Db2 presents or, if the entry is still in the list, when Db2 resolves it.
Obtain a list of the indoubt units of recovery from Db2 by issuing the following
command:
-DISPLAY THREAD (connection-name) TYPE (INDOUBT)
Messages like these are then issued:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV406I - INDOUBT THREADS -
COORDINATOR STATUS RESET URID AUTHID
coordinator_name status yes/no urid authid
DISPLAY INDOUBT REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
The corr_id (correlation ID) for CICS Transaction Server for z/OS 1.2 and
subsequent releases of CICS consists of:
Bytes 1 - 4
Thread type: COMD, POOL, or ENTR
Bytes 5 - 8
Transaction ID
Bytes 9 - 12
Unique thread number
Two threads can sometimes have the same correlation ID when the connection
has been broken several times and the indoubt units of recovery have not been
resolved. In this case, use the network ID (NID) instead of the correlation ID to
uniquely identify indoubt units of recovery.
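For example, a hypothetical correlation ID of ENTRPAY10001 breaks down as follows:

   ENTRPAY10001
   bytes 1 - 4:  ENTR (thread type: entry thread)
   bytes 5 - 8:  PAY1 (transaction ID)
   bytes 9 - 12: 0001 (unique thread number)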
The network ID consists of the CICS connection name and a unique number
that is provided by CICS at the time that the syncpoint log entries are written.
This unique number is an 8-byte store clock value that is stored in records that
are written to both the CICS system log and to the Db2 log at syncpoint
processing time. This value is referred to in CICS as the recovery token.
2. Scan the CICS log for entries that are related to a particular unit of recovery.
Look for a PREPARE record (JCRSTRID X'F959'), for the task-related installation
where the recovery token field (JCSRMTKN) equals the value that is obtained
from the network-ID. The network ID is supplied by Db2 in the DISPLAY
THREAD command output.
You can find the CICS task number by locating the prepare log record in the
CICS log for indoubt units of recovery. Using the CICS task number, you can
locate all other entries on the log for this CICS task.
You can use the CICS journal print utility DFHJUP to scan the log.
3. Use the change log inventory utility (DSNJU003) to scan the Db2 log for entries
that are related to a particular unit of recovery. Locate the End Phase 1 record
with the required network ID. Then use the URID from this record to obtain
the rest of the log records for this unit of recovery.
When scanning the Db2 log, note that the Db2 startup message DSNJ099I
provides the start log RBA for this session.
4. If needed, do indoubt resolution in Db2.
To invoke Db2 to take the recovery action for an indoubt unit of recovery, issue
the Db2 RECOVER INDOUBT command, where the correlation_id is unique:
DSNC -RECOVER INDOUBT (connection-name)
ACTION (COMMIT|ABORT)
ID (correlation_id)
If the transaction is a pool thread, use the value of the correlation ID (corr_id)
that is returned by DISPLAY THREAD for thread#.tranid in the RECOVER INDOUBT
command. In this case, the first letter of the correlation ID is P. The transaction
ID is in characters five through eight of the correlation ID.
If the transaction is assigned to a group (group is a result of using an entry
thread), use thread#.groupname instead of thread#.tranid. In this case, the first
letter of the correlation ID is a G, and the group name is in characters five
through eight of the correlation ID. The groupname is the first transaction that is
listed in a group.
Where the correlation ID is not unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name)
ACTION (COMMIT|ABORT)
NID (network-id)
When two threads have the same correlation ID, use the NID keyword instead
of the ID keyword. The NID value uniquely identifies the work unit.
To recover all threads that are associated with connection-name, omit the ID
option.
The command results in either of the following messages, which indicate
whether the thread is committed or rolled back:
DSNV414I - THREAD thread#.tranid COMMIT SCHEDULED
DSNV414I - THREAD thread#.tranid ABORT SCHEDULED
When you resolve indoubt units of work, note that CICS and the CICS
attachment facility are not aware of the commands to Db2 to commit or abort
indoubt units of recovery because only Db2 resources are affected. However,
CICS keeps details about the indoubt threads that could not be resolved by
Db2. This information is purged either when the presented list is empty or
when the list does not include a unit of recovery that CICS remembers.
Investigate any inconsistencies that you found in the preceding steps.
Related reference:
Symptoms
The symptoms depend on whether the CICS attachment facility or one of its thread
subtasks terminated:
v If the main CICS attachment facility subtask abends, an abend dump is
requested. The contents of the dump indicate the cause of the abend. When the
dump is issued, shutdown of the CICS attachment facility begins.
v If a thread subtask terminates abnormally, a X'04E' dump is issued, and the
CICS application abends with a DSNC dump code. The X'04E' dump generally
indicates the cause of the abend. The CICS attachment facility remains active.
Symptoms
One of the following QMF messages is issued:
v DSQ10202
v DSQ10205
v DSQ11205
v DSQ12105
v DSQ13005
v DSQ14152
v DSQ14153
v DSQ14154
v DSQ15805
v DSQ16805
v DSQ17805
v DSQ22889
v DSQ30805
v DSQ31805
v DSQ32029
v DSQ35805
v DSQ36805
Causes
Key QMF installation jobs were not run.
Environment
The Db2 for z/OS subsystem was migrated to a new release or to new-function
mode after completion of an installation or migration to a new QMF release.
Symptoms
When a Db2 subsystem terminates, the specific failure is identified in one or
more messages. The following messages might be issued at the z/OS console:
DSNV086E - DB2 ABNORMAL TERMINATION REASON=XXXXXXXX
DSN3104I - DSN3EC00 -TERMINATION COMPLETE
DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
The following message might be issued to the CICS transient data error
destination, which is defined in the RDO:
DSNC2025I - THE ATTACHMENT FACILITY IS INACTIVE
Environment
v IMS and CICS continue.
v In-process IMS and CICS applications receive SQLCODE -923 (SQLSTATE
'57015') when accessing Db2.
In most cases, if an IMS or CICS application program is running when a -923
SQLCODE is returned, an abend occurs. This is because the application program
generally terminates when it receives a -923 SQLCODE. To terminate, some
synchronization processing occurs (such as a commit). If Db2 is not operational
when synchronization processing is attempted by an application program, the
application program abends. In-process applications can abend with an abend
code X'04F'.
v IMS applications that begin to run after subsystem termination begins are
handled according to the error options.
Symptoms
Db2 issues messages for the access failure for each log data set. These messages
provide information that is needed to resolve the access error. For example:
DSNJ104I ( DSNJR206 RECEIVED ERROR STATUS 00000004
FROM DSNPCLOC FOR DSNAME=DSNC710.ARCHLOG1.A0000049
Causes
Db2 might experience a problem when it attempts to allocate or open archive log
data sets during the rollback of a long-running unit of recovery. These temporary
failures can be caused by:
v A temporary problem with DFHSM recall
v A temporary problem with the tape subsystem
v Uncataloged archive logs
v Archive tape mount requests being canceled
If the problem persists, quiesce other work in the system before replying N, which
terminates Db2.
Recovering from active log failures
A variety of active log failures might occur, but you can recover from them.
Symptoms
Most active log failures are accompanied by or preceded by error messages to
inform you of out-of-space conditions, write or read I/O errors, or loss of dual
active logging.
If you receive message DSNJ103I at startup time, the active log is experiencing
dynamic allocation problems. If you receive message DSNJ104I, the active log is
experiencing open-close problems. In either case, you should follow procedures in
“Recovering from BSDS or log failures during restart” on page 537.
Symptoms
The following warning message is issued when the last available active log data
set is 5% full:
DSNJ110E - LAST COPY n ACTIVE LOG DATA SET IS nnn PERCENT FULL
The Db2 subsystem reissues the message after each additional 5% of the data set
space is filled. Each time the message is issued, the offload process is started.
IFCID trace record 0330 is also issued if statistics class 3 is active.
If the active log fills to capacity, after having switched to single logging, Db2 issues
the following message, and an offload is started.
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
Causes
The active log is out of space.
Environment
An out-of-space condition on the active log has very serious consequences.
Corrective action is required before Db2 can continue processing. When the active
log becomes full, the Db2 subsystem cannot do any work that requires writing to
the log until an offload is completed. Until that offload is completed, Db2 waits for
an available active log data set before resuming normal Db2 processing. Normal
shutdown, with either a QUIESCE or FORCE command, is not possible because the
shutdown sequence requires log space to record system events that are related to
shutdown (for example, checkpoint records).
This command causes Db2 to restart the offload task, which might solve the
problem.
3. If issuing this command does not solve the problem, determine and resolve the
cause of the problem, and then reissue the command. If the problem cannot be
resolved quickly, have the system programmer define additional active logs
until you can resolve the problem.
System programmer response: Define additional active log data sets so that Db2
can continue its normal operation while the problem that is causing the offload
failures is corrected.
1. Use the z/OS command CANCEL to stop Db2.
2. Use the access method services DEFINE command to define new active log data
sets.
3. Run utility DSNJLOGF to initialize the new active log data sets.
4. Define the new active log data sets in the BSDS by using the change log
inventory utility (DSNJU003).
5. Restart Db2. Offload is started automatically during startup, and restart
processing occurs.
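Steps 3 and 4 above might look like the following sketch; the active log and BSDS data set names are placeholders:

   //PREFMT   EXEC PGM=DSNJLOGF
   //SYSUT1   DD DISP=SHR,DSN=prefix.LOGCOPY1.DS04
   //SYSPRINT DD SYSOUT=*
   //ADDLOG   EXEC PGM=DSNJU003
   //SYSUT1   DD DISP=OLD,DSN=prefix.BSDS01
   //SYSUT2   DD DISP=OLD,DSN=prefix.BSDS02
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
   NEWLOG DSNAME=prefix.LOGCOPY1.DS04,COPY1
   /*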
Recommendation: To minimize the number of offloads that are taken per day in
your installation, consider increasing the size of the active log data sets.
Related reference:
DSNJLOGF (preformat active log) (Db2 Utilities)
DSNJU003 (change log inventory) (Db2 Utilities)
Symptoms
The following message is issued:
DSNJ105I - csect-name LOG WRITE ERROR DSNAME=..., LOGRBA=...,
ERROR STATUS= ccccffss
Causes
Although this problem can be caused by several problems, one possible cause is a
CATUPDT failure.
Environment
When a write error occurs on an active log data set, the following characteristics
apply:
v Db2 marks the failing Db2 log data set TRUNCATED in the BSDS.
v Db2 goes on to the next available data set.
v If dual active logging is used, Db2 truncates the other copy at the same point.
v The data in the truncated data set is offloaded later, as usual.
v The data set is not stopped; it is reused on the next cycle. However, if a
DSNJ104 message indicates a CATUPDT failure, the data set is marked
STOPPED.
System programmer response: If the DSNJ104 message indicates a CATUPDT
failure, use access method services and the change log inventory utility
(DSNJU003) to add a replacement data set. In this case, you need to stop Db2. The
timing of when you should take this action depends on how widespread the
problem is.
v If the additional problem is localized and does not affect your ability to recover
from any other problems, you can wait until the earliest convenient time.
v If the problem is widespread (perhaps affecting an entire set of active log data
sets), stop Db2 after the next offload.
Related reference:
DSNJU003 (change log inventory) (Db2 Utilities)
Symptoms
The following message is issued:
DSNJ004I - ACTIVE LOG COPY n INACTIVE, LOG IN SINGLE MODE,
ENDRBA=...
Causes
This problem occurs when Db2 completes one active log data set and then finds
that the subsequent copy (COPY n) data sets have not been offloaded and are
marked STOPPED.
Environment
Db2 continues in single mode until offloading completes and then returns to dual
mode. If the data set is marked STOPPED, however, intervention is required.
Symptoms
The following message is issued:
DSNJ106I - LOG READ ERROR DSNAME=..., LOGRBA=...,
ERROR STATUS=ccccffss
Environment
additional I/O errors prevent you from copying the entire data set, a gap
occurs in the log and restart might fail, although the data still exists and is
not overlaid. If this occurs, see “Recovering from BSDS or log failures during
restart” on page 537.
4. Stop Db2, and use the change log inventory utility to update information in
the BSDS about the data set with the error.
a. Use DELETE to remove information about the bad data set.
b. Use NEWLOG to name the new data set as the new active log data set and
to give it the RBA range that was successfully copied.
The DELETE and NEWLOG operations can be performed by the same job step;
put DELETE before NEWLOG in the SYSIN input data set. This step clears the
stopped status, and Db2 eventually archives it.
c. Delete the data set that is in error by using access method services.
d. Redefine the data set so that you can write to it. Use the access method
services DEFINE command to define the active log data sets. If you use
dual logs and have a good copy of the log, use the REPRO command to
copy the contents to the new data set that you just created. If you do not
use dual logs, initialize the new data set by using the DSNJLOGF utility.
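The change log inventory input for this step might look like the following sketch, with DELETE before NEWLOG; the data set names and RBA range are placeholders:

   //SYSIN    DD *
   DELETE DSNAME=prefix.LOGCOPY2.DS02
   NEWLOG DSNAME=prefix.LOGCOPY2.DS05,COPY2,STARTRBA=startrba,ENDRBA=endrba
   /*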
Related reference:
PRIMARY QUANTITY field (PRIQTY subsystem parameter) (Db2
Installation and Migration)
DSNJU004 (print log map) (Db2 Utilities)
Symptoms
Archive log failures can result in a variety of Db2 and z/OS messages that identify
problems with archive log data sets.
One specific symptom that might occur is message DSNJ104I, which indicates an
open-close problem on the archive log.
Symptoms
The following message is issued:
DSNJ103I - csect-name LOG ALLOCATION ERROR DSNAME=dsname,
ERROR STATUS=eeeeiiii, SMS REASON CODE=ssssssss
Causes
Archive log allocation problems can occur when various Db2 operations fail; for
example:
v The RECOVER utility executes and requires an archive log. If neither archive log
can be found or used, recovery fails.
Symptoms
No specific Db2 message is issued for write I/O errors. Only a z/OS error
recovery program message is issued.
If Db2 message DSNJ128I is issued, an abend in the offload task occurred, in which
case you should follow the instructions for this message.
Environment
v Offload abandons that output data set (no entry in BSDS).
v Offload dynamically allocates a new archive and restarts offloading from the
point at which it was previously triggered. For dual archiving, the second copy
waits.
v If an error occurs on the new data set, these additional actions occur:
– For dual archive mode, the following DSNJ114I message is generated, and the
offload processing changes to single mode.
DSNJ114I - ERROR ON ARCHIVE DATA SET, OFFLOAD CONTINUING
WITH ONLY ONE ARCHIVE DATA SET BEING GENERATED
– For single mode, the offload process abandons the output data set. Another
attempt to offload this RBA range is made the next time offload is triggered.
– The active log does not wrap around; if no more active logs are available,
data is not lost.
Symptoms
No specific Db2 message is issued; only the z/OS error recovery program message
is issued.
Chapter 13. Recovering from different Db2 for z/OS problems 531
Environment
v If a second copy of the archive log exists, the second copy is allocated and used.
v If a second copy of the archive log does not exist, recovery fails.
System programmer response: Recover to the last image copy or to the RBA of the
last quiesce point.
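For example, a point-in-time recovery of a single table space to the RBA of the last quiesce point might look like the following RECOVER utility statement; the object name and RBA value are illustrative:

   RECOVER TABLESPACE DSN8D11A.DSN8S11E TORBA X'7425468'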
Related reference
RECOVER (Db2 Utilities)
Symptoms
Prior to the failure, z/OS issues abend message IEC030I, IEC031I, or IEC032I.
Offload processing terminates unexpectedly. Db2 issues the following message:
DSNJ128I - LOG OFFLOAD TASK FAILED FOR ACTIVE LOG nnnnn
Causes
The following situations can cause problems with insufficient disk space during
Db2 offload processing:
v The size of the archive log data set is too small to contain the data from the
active log data sets during offload processing. All secondary space allocations
have been used.
v All available space on the disk volumes to which the archive data set is being
written has been exhausted.
v The primary space allocation for the archive log data set (as specified in the load
module for subsystem parameters) is too large to allocate to any available online
disk device.
| Environment
| The archive data sets that are allocated to the offload task in which the error
| occurred are deallocated and deleted. Another attempt to offload the RBA range of
| the active log data sets is made the next time offload is invoked.
Symptoms
If a BSDS is damaged, Db2 issues one of the following message numbers:
DSNJ126I, DSNJ100I, or DSNJ120I.
Related concepts
Management of the bootstrap data set
Symptoms
The following message is issued:
DSNJ126I - BSDS ERROR FORCED SINGLE BSDS MODE
Causes
A write I/O error occurred on a BSDS.
Environment
If Db2 is in a dual-BSDS mode and one copy of the BSDS is damaged by an I/O
error, the BSDS mode changes from dual-BSDS mode to single-BSDS mode. If Db2
is in a single-BSDS mode when the BSDS is damaged by an I/O error, Db2
terminates until the BSDS is recovered.
2. Issue the Db2 command RECOVER BSDS to make a copy of the good BSDS in the
newly allocated data set and to reinstate dual-BSDS mode.
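For example, assuming that the subsystem command prefix is the hyphen, the command is:

   -RECOVER BSDS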
Related tasks
Recovering the BSDS from a backup copy
Symptoms
The following message is issued:
DSNJ100I - ERROR OPENING BSDS n DSNAME=..., ERROR STATUS=eeii
Symptoms
The following message is issued:
DSNJ120I - DUAL BSDS DATA SETS HAVE UNEQUAL TIMESTAMPS,
BSDS1 SYSTEM=..., UTILITY=..., BSDS2 SYSTEM=..., UTILITY=...
Causes
Unequal timestamps can occur for the following reasons:
v One of the volumes that contains the BSDS has been restored. All information of
the restored volume is outdated. If the volume contains any active log data sets
or Db2 data, their contents are also outdated. The outdated volume has the
lower timestamp.
v Dual BSDS mode has degraded to single BSDS mode, and you are trying to start
without recovering the bad copy of the BSDS.
v The Db2 subsystem abended after updating one copy of the BSDS, but prior to
updating the second copy.
Db2 stops and does not restart until dual-BSDS mode is restored in the following
situations:
v Db2 is operating in single-BSDS mode, and the BSDS is damaged.
v Db2 is operating in dual-BSDS mode, and both BSDSs are damaged.
Procedure
4. Use the access method services REPRO command to copy the BSDS from the
archive log to one of the replacement BSDSs that you defined in the prior step.
Do not copy any data to the second replacement BSDS; data is placed in the
second replacement BSDS in a later step in this procedure.
a. Use the print log map utility (DSNJU004) to print the contents of the
replacement BSDS. You can then review the contents of the replacement
BSDS before continuing your recovery work.
b. Update the archive log data set inventory in the replacement BSDS.
Examine the print log map output, and note that the replacement BSDS
does not obtain a record of the archive log from which the BSDS was
copied. If the replacement BSDS is a particularly old copy, it is missing all
archive log data sets that were created later than the BSDS backup copy.
Therefore, you need to update the BSDS inventory of the archive log data
sets to reflect the current subsystem inventory.
Use the change log inventory utility (DSNJU003) NEWLOG statement to
update the replacement BSDS, adding a record of the archive log from
which the BSDS was copied. Ensure that the CATALOG option of the
NEWLOG statement is properly set to CATALOG = YES if the archive log
data set is cataloged. Also, use the NEWLOG statement to add any
additional archive log data sets that were created later than the BSDS copy.
c. Update DDF information in the replacement BSDS. If the Db2 subsystem for
your installation is part of a distributed network, the BSDS contains the
DDF control record. You must review the contents of this record in the
output of the print log map utility. If changes are required, use the change
log inventory DDF statement to update the BSDS DDF record.
d. Update the active log data set inventory in the replacement BSDS.
In unusual circumstances, your installation might have added, deleted, or
renamed active log data sets since the BSDS was copied. In this case, the
replacement BSDS does not reflect the actual number or names of the active
log data sets that your installation has currently in use.
If you must delete an active log data set from the replacement BSDS log
inventory, use the change log inventory utility DELETE statement.
If you need to add an active log data set to the replacement BSDS log
inventory, use the change log inventory utility NEWLOG statement. Ensure
that the RBA range is specified correctly on the NEWLOG statement.
If you must rename an active log data set in the replacement BSDS log
inventory, use the change log inventory utility DELETE statement, followed
by the NEWLOG statement. Ensure that the RBA range is specified correctly
on the NEWLOG statement.
e. Update the active log RBA ranges in the replacement BSDS. Later, when a
restart is performed, Db2 compares the RBAs of the active log data sets that
are listed in the BSDS with the RBAs that are found in the actual active log
data sets. If the RBAs do not agree, Db2 does not restart. The problem is
magnified when a particularly old copy of the BSDS is used. To resolve this
problem, use the change log inventory utility to change the RBAs that are
found in the BSDS to the RBAs in the actual active log data sets. Take the
appropriate action, described below, to change RBAs in the BSDS:
v If you are not certain of the RBA range of a particular active log data set,
use DSN1LOGP to print the contents of the active log data set. Obtain the
logical starting and ending RBA values for the active log data set from
the DSN1LOGP output. The STARTRBA value that you use in the change
log inventory utility must be at the beginning of a control interval.
Similarly, the ENDRBA value that you use must be at the end of a control interval.
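The change log inventory updates that steps b and d describe might look like the following DSNJU003 job. All data set names, volume serials, unit names, and RBA ranges shown here are illustrative; take the actual values from your print log map and DSN1LOGP output:

   //CHGLOG   EXEC PGM=DSNJU003
   //SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=OLD
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
    NEWLOG DSNAME=DSNCAT.ARCHLOG1.A0000009,COPY1VOL=5B225,
      UNIT=TAPE,STARTRBA=000007400000,ENDRBA=000007428FFF,CATALOG=YES
    DELETE DSNAME=DSNCAT.LOGCOPY1.DS03
    NEWLOG DSNAME=DSNCAT.LOGCOPY1.DS03A,COPY1,
      STARTRBA=000007380000,ENDRBA=0000073FFFFF
   /*

The first NEWLOG statement adds the archive log from which the BSDS was copied; the DELETE and second NEWLOG pair removes an active log data set from the inventory and re-adds it under a new name with its correct RBA range.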
If the problem is discovered at restart, begin with one of the following recovery
procedures:
v “Recovering from active log failures” on page 526
v “Recovering from archive log failures” on page 530
v Recovering from BSDS failures
When Db2 recovery log damage terminates restart processing, Db2 issues messages to the console to identify the damage and issues an abend reason code. (The SVC
dump title includes a more specific abend reason code to assist in problem
diagnosis.) If the explanations for the reason codes indicate that restart failed
because of some problem that is not related to a log error, contact IBM Software
Support.
To minimize log problems during restart, the system requires two copies of the
BSDS. Dual logging is also recommended.
Basic approaches to recovery: The two basic approaches to recovery from problems
with the log are:
v Restart Db2, bypassing the inaccessible portion of the log and rendering some
data inconsistent. Then recover the inconsistent objects by using the RECOVER
utility, or re-create the data by using REPAIR. Use the methods that are
described following this procedure to recover the inconsistent data.
v Restore the entire Db2 subsystem to a prior point of consistency. The method
requires that you have first prepared such a point; for suggestions, see Preparing
to recover to a prior point of consistency. Methods of recovery are described
under “Recovering from unresolvable BSDS or log data set problem during
restart” on page 561.
Even if the log is damaged, and Db2 is started by circumventing the damaged
portion, the log is the most important source for determining what work was lost
and what data is inconsistent.
Bypassing a damaged portion of the log generally proceeds with the following
steps:
1. Db2 restart fails. A problem exists on the log, and a message identifies the
location of the error. The following abend reason codes, which appear only in
the dump title, can be issued for this type of problem. This is not an exhaustive
list; other codes might occur.
(Figure: timeline showing the inaccessible log RBA range between X and Y.)
2. Db2 cannot skip over the damaged portion of the log and continue restart
processing. Instead, you restrict processing to only a part of the log that is error
free. For example, the damage shown in the preceding figure occurs in the log
RBA range between X and Y. You can restrict restart to all of the log before X;
then changes later than X are not made. Alternatively, you can restrict restart to
all of the log after Y; then changes between X and Y are not made. In either
case, some amount of data is inconsistent.
3. You identify the data that is made inconsistent by your restart decision. With
the SUMMARY option, the DSN1LOGP utility scans the accessible portion of
the log and identifies work that must be done at restart, namely, the units of
recovery that are to be completed and the page sets that they modified.
Because a portion of the log is inaccessible, the summary information might not
be complete. In some circumstances, your knowledge of work in progress is
needed to identify potential inconsistencies.
4. You use the CHANGE LOG INVENTORY utility to identify the portion of the
log to be used at restart, and to tell whether to bypass any phase of recovery.
You can choose to do a cold start and bypass the entire log.
5. You restart Db2. Data that is unaffected by omitted portions of the log is
available for immediate access.
6. Before you allow access to any data that is affected by the log damage, you
resolve all data inconsistencies. That process is described under “Resolving
inconsistencies resulting from a conditional restart” on page 567.
Where to start
The specific procedure depends on the phase of restart that was in control when
the log problem was detected. On completion, each phase of restart writes a
message to the console. You must find the last of those messages in the console
log. The next phase after the one that is identified is the one that was in control
when the log problem was detected. Accordingly, start at:
v “Recovering from failure during log initialization or current status rebuild” on
page 540
v “Recovering from a failure during forward log recovery” on page 552
v “Recovering from a failure during backward log recovery” on page 557
Message ID Procedure to use
DSNJ107 “Recovering from unresolvable BSDS or log data set problem during
restart” on page 561
DSNJ119I “Recovering from unresolvable BSDS or log data set problem during
restart” on page 561
DSNR002I None. Normal restart processing can be expected.
DSNR004I “Recovering from a failure during forward log recovery” on page 552
DSNR005I “Recovering from a failure during backward log recovery” on page 557
DSNR006I None. Normal restart processing can be expected.
Other “Recovering from failure during log initialization or current status
rebuild”
Symptoms
An abend was issued, indicating that restart failed. In addition, either the last
restart message that was received was a DSNJ001I message that indicates a failure
during current status rebuild, or none of the following messages was issued:
v DSNJ001I
v DSNR004I
v DSNR005I
If none of the preceding messages was issued, the failure occurred during the log
initialization phase of restart.
Environment
The following figure illustrates the timeline of events that exist when a failure
occurs during the log initialization phase.
(Figure: timeline of events for a failure during the log initialization phase; the log between RBAs X and Y is inaccessible.)
The portion of the log between log RBAs X and Y is inaccessible. For failures that
occur during the log initialization phase, the following activities occur:
1. Db2 allocates and opens each active log data set that is not in a stopped state.
2. Db2 reads the log until the last log record is located.
3. During this process, a problem with the log is encountered, preventing Db2
from locating the end of the log. Db2 terminates and issues an abend reason
code. Some of the abend reason codes that might be issued include:
v 00D10261
v 00D10262
v 00D10263
v 00D10264
v 00D10265
v 00D10266
v 00D10267
v 00D10268
v 00D10329
v 00D1032A
v 00D1032B
v 00D1032C
v 00E80084
During its operations, Db2 periodically records in the BSDS the RBA of the last log
record that was written. This value is displayed in the print log map report as
follows:
HIGHEST RBA WRITTEN: 00000742989E
Because this field is updated frequently in the BSDS, the “highest RBA written”
can be interpreted as an approximation of the end of the log. The field is updated
in the BSDS when any one of a variety of internal events occurs. In the absence of
these internal events, the field is updated each time a complete cycle of log buffers
is written. A complete cycle of log buffers occurs when the number of log buffers
that are written equals the value of the OUTPUT BUFFER field of installation
panel DSNTIPL. The value in the BSDS is, therefore, relatively close to the end of
the log.
To find the actual end of the log at restart, Db2 reads the log forward sequentially,
starting at the log RBA that approximates the end of the log and continuing until
the actual end of the log is located.
Because the end of the log is inaccessible in this case, some information is lost:
v Units of recovery might have successfully committed or modified additional
page sets past point X.
v Additional data might have been written, including data for page sets that are
identified as having writes pending in the accessible portion of the log.
v New units of recovery might have been created, and these might have modified
data.
The following figure illustrates the timeline of events that exist when a failure
occurs during current status rebuild.
(Figure: timeline of events for a failure during the current status rebuild phase; the log between RBAs X and Y is inaccessible.)
The portion of the log between log RBAs X and Y is inaccessible. For failures that
occur during the current status rebuild phase, the following activities occur:
1. Log initialization completes successfully.
2. Db2 locates the last checkpoint. (The BSDS contains a record of its location on
the log.)
3. Db2 reads the log, beginning at the checkpoint and continuing to the end of the
log.
4. Db2 reconstructs the status of the subsystem as it existed at the prior
termination of Db2.
5. During this process, a problem with the log is encountered, preventing Db2
from reading all required log information. Db2 terminates and issues an abend
reason code. Some of the abend reason codes that might be issued include:
v 00D10261
v 00D10262
v 00D10263
v 00D10264
v 00D10265
v 00D10266
v 00D10267
v 00D10268
v 00D10329
v 00D1032A
v 00D1032B
v 00D1032C
v 00E80084
Because the end of the log is inaccessible in this case, some information is lost:
v Units of recovery might have successfully committed or modified additional
page sets past point X.
v Additional data might have been written, including data for page sets that are
identified as having writes pending in the accessible portion of the log.
v New units of recovery might have been created, and these might have modified
data.
When a failure occurs during the log initialization or current status rebuild
phase, you cannot determine precisely what units of recovery failed to complete,
what page sets had been modified, and what page sets have writes pending. You
need to gather that information, and restart Db2.
Task 1: Find the log RBA after the inaccessible part of the log:
The first task in restarting Db2 by truncating the log is to locate the log RBA after
the inaccessible part of the log.
The range of the log between RBAs X and Y is inaccessible to all Db2 processes.
Procedure
To find the RBA after the inaccessible part of the log, take the action that is
associated with the message number that you received (DSNJ007I, DSNJ012I,
DSNJ103I, DSNJ104I, DSNJ106I, and DSNJ113E):
v When message DSNJ007I is issued:
The problem is that an operator canceled a request for archive mount. Reason
code 00D1032B is associated with this situation and indicates that an entire data
set is inaccessible.
For example, the following message indicates that the archive log data set
DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a
request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE
DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the value of X, run the print log map utility (DSNJU004) to list the
log inventory information. The output of this utility provides each log data set
name and its associated log RBA range, the values of X and Y.
v When message DSNJ012I is issued:
The problem is that a log record is logically damaged. Message DSNJ012I
identifies the log RBA of the first inaccessible log record that Db2 detects. The
following reason codes are associated with this situation:
– 00D10261
– 00D10262
– 00D10263
– 00D10264
– 00D10265
– 00D10266
– 00D10267
– 00D10268
– 00D10348
For example, the following message indicates a logical error in the log record at
log RBA X'7429ABA'.
DSNJ012I ERROR D10265 READING RBA 000007429ABA
IN DATA SET DSNCAT.LOGCOPY2.DS01
CONNECTION-ID=DSN,
CORRELATION-ID=DSN
A given physical log record is actually a set of logical log records (the log
records that are generally spoken of) and the log control interval definition
(LCID). Db2 stores logical records in blocks of physical records to improve
efficiency. When this type of error on the log occurs during log initialization
or current status rebuild, all log records within the physical log record, and
beyond it to the end of the log data set, are inaccessible to the log
initialization or current status rebuild phase of restart. Therefore, the value
of X is the log RBA that was reported in the message, rounded down to a 4-KB
boundary. (For the example message above, the rounded 4-KB boundary value
would be X'7429000'.)
v When message DSNJ113E is issued:
The problem is that the log RBA could not be found in the BSDS. Message
DSNJ113E identifies the log RBA of the inaccessible log record. This log RBA is
not registered in the BSDS. Reason code 00D1032B is associated with this
situation.
For example, the following message indicates that the log RBA X'7429ABA' is
not registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
Use the print log map utility (DSNJU004) to list the contents of the BSDS.
A given physical log record is actually a set of logical log records (the log
records that are generally spoken of) and the log control interval definition
(LCID). When this type of an error on the log occurs during log initialization or
current status rebuild, all log records within the physical log record are
inaccessible.
Using the print log map output, locate the RBA that is closest to, but less than,
X'7429ABA' for the value of X. If you do not find an RBA that is less than
X'7429ABA', a considerable amount of log information has been lost. If this is
the case, continue with “Recovering from a failure resulting from total or
excessive loss of log data” on page 563. Otherwise, continue with the next topic.
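Several of the preceding actions use the print log map utility. A minimal DSNJU004 job might look like the following; the BSDS data set name is illustrative:

   //PRTMAP   EXEC PGM=DSNJU004
   //SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=SHR
   //SYSPRINT DD SYSOUT=*

The report that is written to SYSPRINT lists each active and archive log data set with its RBA range, from which you can read the values of X and Y.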
Related concepts
Description of failure during current status rebuild
Failure during log initialization phase
Related reference
DSNJU004 (print log map) (Db2 Utilities)
Related information
DSNJ007I (Db2 Messages)
DSNJ012I (Db2 Messages)
DSNJ103I (Db2 Messages)
DSNJ104I (Db2 Messages)
DSNJ106I (Db2 Messages)
DSNJ113E (Db2 Messages)
In certain recovery situations (such as when you recover by truncating the log),
you need to identify what work was lost and what data is inconsistent.
The following figure is an example of a DSN1LOGP job that obtains summary
information for the checkpoint that was described previously.
Figure 51. Sample JCL for obtaining DSN1LOGP summary output for restart
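A job of the kind that Figure 51 shows might look like the following sketch. The load library, BSDS name, and RBA range are illustrative; substitute your own values:

   //LOGPRT   EXEC PGM=DSN1LOGP
   //STEPLIB  DD DSN=DSNC10.SDSNLOAD,DISP=SHR
   //SYSPRINT DD SYSOUT=*
   //SYSSUMRY DD SYSOUT=*
   //BSDS     DD DSN=DSNCAT.BSDS01,DISP=SHR
   //SYSIN    DD *
    RBASTART(7425468) RBAEND(7428FFF) SUMMARY(ONLY)
   /*

SUMMARY(ONLY) limits the output to the restart summary; the summary report is written to the SYSSUMRY DD.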
The following figure shows an excerpt from the restart summary in a sample
DSN1LOGP summary report. The report is described after the figure.
Those units of recovery with a START log RBA equal to, or prior to, the point Y
cannot be completed at restart. All page sets that were modified by these units
of recovery are inconsistent after completion of restart.
All page sets that are identified in message DSN1160I with a START log RBA value
equal to, or prior to, the point Y have database changes that cannot be written to
disk. As in the previously described case, all of these page sets are inconsistent
after completion of restart.
At this point, you need to identify only the page sets in preparation for restart.
After restart, you need to resolve the problems in the page sets that are
inconsistent.
Because the end of the log is inaccessible, some information is lost; therefore, the
information is inaccurate. Some of the units of recovery that appear to be inflight
might have successfully committed, or they might have modified additional page
sets beyond point X. Additional data might have been written, including those
page sets that are identified as having pending writes in the accessible portion of
the log. New units of recovery might have been created, and these might have
modified data. Db2 cannot detect that these events occurred.
From this and other information (such as system accounting information and
console messages), you might be able to determine what work was actually
outstanding and which page sets are likely to be inconsistent after you start Db2.
This is because the record of each event contains the date and time to help you
determine how recent the information is. In addition, the information is displayed
in chronological sequence.
The third task in restarting Db2 by truncating the log is to determine what status
information has been lost.
Procedure
The fourth task in restarting Db2 by truncating the log is to truncate the log at the
point of error.
No Db2 process, including the RECOVER utility, allows a gap in the log RBA
sequence. You cannot process up to point X, skip over points X through Y, and
continue after Y.
Procedure
Create a conditional restart control record (CRCR) in the BSDS by using the change
log inventory utility. Specify the following options:
ENDRBA=endrba
The endrba value is the RBA at which Db2 begins writing new log records.
If point X is X'7429000', specify ENDRBA=7429000 on the CRESTART
control statement.
At restart, Db2 discards the portion of the log beyond X'7429000' before
processing the log for completing work (such as units of recovery and
database writes). Unless otherwise directed, Db2 performs normal restart
processing within the scope of the log. Because log information is lost, Db2
errors might occur. For example, a unit of recovery that has actually been
committed might be rolled back. Also, some changes that were made by
that unit of recovery might not be rolled back because information about
data changes is lost.
FORWARD=NO
Terminates forward-log recovery before log records are processed. This
option and the BACKOUT=NO option minimize errors that might result
from normal restart processing.
BACKOUT=NO
Terminates backward-log recovery before log records are processed. This
option and the FORWARD=NO option minimize errors that might result
from normal restart processing.
Results
Recovering and backing out units of recovery with lost information might
introduce more inconsistencies than the incomplete units of recovery.
Example
The following example is a CRESTART control statement for the ENDRBA value of
X'7429000':
CRESTART CREATE,ENDRBA=7429000,FORWARD=NO,BACKOUT=NO
The final task in restarting Db2 by truncating the log is to restart Db2 and resolve
inconsistencies.
You must have a CRESTART control statement with the correct ENDRBA value
and the FORWARD and BACKOUT options set to NO.
Procedure
Phase 4: Backward log recovery
Phase 3: Forward log recovery
Related tasks
Resolving inconsistencies resulting from a conditional restart
Symptoms
A Db2 abend occurred, indicating that restart had failed. In addition, the last
restart message that was received was a DSNR004I message, which indicates that
log initialization completed; therefore, the failure occurred during forward log
recovery.
Environment
Db2 terminates because a portion of the log is inaccessible, and Db2 is therefore
unable to guarantee the consistency of the data after restart.
The following figure illustrates the events surrounding a failure during the
forward-log recovery phase of Db2 restart.
(Figure 53: timeline of events for a failure during the forward-log-recovery phase; the log between RBAs X and Y is inaccessible.)
The earliest log record for each unit of recovery is identified on the log line in
Figure 53 on page 552. In order for Db2 to complete each unit of recovery, Db2
requires access to all log records from the beginning point for each unit of recovery
to the end of the log.
The error on the log prevents Db2 from guaranteeing the completion of any
outstanding work that began prior to point Y on the log. Consequently, database
changes that are made by URID1 and URID2 might not be fully committed or
backed out. Writes that were pending for page set A (from points in the log prior
to Y) are lost.
You must determine which page sets are involved because after this procedure is
used, the page sets will contain inconsistencies that you must resolve. In addition,
using this procedure results in the completion of all database writes that are
pending.
Related concepts
Write operations (Db2 Performance)
Task 1: Find the log RBA after the inaccessible part of the log:
The first task in restarting Db2 by limiting restart processing is to locate the log
RBA that is after the inaccessible part of the log.
The range of the log between RBAs X and Y is inaccessible to all Db2 processes.
Procedure
To find the RBA after the inaccessible part of the log, take the action that is
associated with the message number that you received (DSNJ007I, DSNJ012I,
DSNJ103I, DSNJ104I, DSNJ106I, and DSNJ113E):
v When message DSNJ007I is issued:
The problem is that an operator canceled a request for archive mount. Reason
code 00D1032B is associated with this situation and indicates that an entire data
set is inaccessible.
For example, the following message indicates that the archive log data set
DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a
request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE
DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the value of X, run the print log map utility (DSNJU004) to list the
log inventory information. The output of this utility provides each log data set
name and its associated log RBA range, the values of X and Y.
v When message DSNJ012I is issued:
The problem is that a log record is logically damaged. Message DSNJ012I
identifies the log RBA of the first inaccessible log record that Db2 detects. The
following reason codes are associated with this situation:
– 00D10261
– 00D10262
– 00D10263
– 00D10264
– 00D10265
– 00D10266
– 00D10267
– 00D10268
– 00D10348
For example, the following message indicates a logical error in the log record at
log RBA X'7429ABA'.
DSNJ012I ERROR D10265 READING RBA 000007429ABA
IN DATA SET DSNCAT.LOGCOPY2.DS01
CONNECTION-ID=DSN,
CORRELATION-ID=DSN
A given physical log record is actually a set of logical log records (the log
records that are generally spoken of) and the log control interval definition
(LCID). Db2 stores logical records in blocks of physical records to improve
efficiency. When this type of an error on the log occurs during forward log
recovery, all log records within the physical log record are inaccessible.
Therefore, the value of X is the log RBA that was reported in the message,
rounded down to a 4-KB boundary. (For the example message above, the
rounded 4-KB boundary value would be X'7429000'.)
v When message DSNJ103I or DSNJ104I is issued:
For message DSNJ103I, the underlying problem depends on the reason code that
is issued:
– For reason code 00D1032B, an allocation error occurred for an archive log
data set.
– For reason code 00E80084, an active log data set that is named in the BSDS
could not be allocated during log initialization.
For message DSNJ104I, the underlying problem is that an open error occurred
for an archive and active log data set.
DSNJ007I (Db2 Messages)
DSNJ012I (Db2 Messages)
DSNJ103I (Db2 Messages)
DSNJ104I (Db2 Messages)
DSNJ106I (Db2 Messages)
DSNJ113E (Db2 Messages)
Units of recovery that cannot be fully processed are considered incomplete units of
recovery. Page sets that will be inconsistent after completion of restart are
considered inconsistent page sets.
Procedure
Task 3: Restrict restart processing to the part of the log after the damage:
The third task in restarting Db2 by limiting restart processing is to restrict restart
processing to the part of the log that is after the damage.
To restrict restart processing to the part of the log after the damage:
1. Create a conditional restart control record (CRCR) in the BSDS by using the
change log inventory utility.
2. Identify the accessible portion of the log beyond the damage by using the
STARTRBA specification, which will be used at the next restart.
3. Specify the value Y+1 (that is, if Y is X'7429FFF', specify STARTRBA=742A000).
Restart restricts its processing to the portion of the log beginning with the
specified STARTRBA and continuing to the end of the log. For example:
CRESTART CREATE,STARTRBA=742A000
The final task in restarting Db2 by limiting restart processing is to start Db2 and
resolve problems with inconsistent data.
At the end of restart, the conditional restart control record (CRCR) is marked
“Deactivated” to prevent its use on a subsequent restart. Until the restart
completes successfully, the CRCR is in effect.
Procedure
1. Start Db2. Until data is consistent or page sets are stopped, start Db2 with
the ACCESS(MAINT) option.
2. Resolve all data inconsistency problems.
Related tasks
Resolving inconsistencies resulting from a conditional restart
Symptoms
An abend is issued to indicate that restart failed because of a log problem. In
addition, the last restart message that is received is a DSNR005I message,
indicating that forward log recovery completed and that the failure occurred
during backward log recovery.
Environment
Because a portion of the log is inaccessible, Db2 needs to roll back some database
changes during restart.
Operations management response: To start Db2, choose one of the following
approaches:
v Read the information about relevant messages and codes that you received to
determine if this approach is possible. The explanations of the messages and
codes identify any corrective action that you can take to resolve the problem. If
it is possible, correct the problem that made the log inaccessible, and start Db2
again.
v Restore the Db2 log and all data to a prior consistent point and start Db2. This
procedure is described in “Recovering from unresolvable BSDS or log data set
problem during restart” on page 561.
v Start Db2 without completing some database changes. Do this only if the exact
changes cannot be identified; all that can be determined is which page sets
might have incomplete changes. The procedure for determining which page sets
contain incomplete changes is described in “Bypassing backout before
restarting.” Other related topics might help you better understand the problem.
When a failure occurs during the backward log recovery phase, certain
characteristics of the situation are evident, as the following figure shows.
(Figure: a time line of the log, showing RBAs X and Y with the portion of the log between them inaccessible, a checkpoint, and units of recovery URID5, URID6, and URID7 active at the end of the log.)
The portion of the log between log RBA X and Y is inaccessible. The restart process
was reading the log in a backward direction, beginning at the end of the log and
continuing backward to the point marked by Begin URID5 in order to back out the
changes that were made by URID5, URID6, and URID7. You can assume that Db2
determined that these units of recovery were inflight or in-abort. The portion of the
log from point Y to the end of the log has been processed. However, the portion of
the log from Begin URID5 to point Y has not been processed and cannot be
processed by restart. Consequently, database changes that were made by URID5
and URID6 might not be fully backed out. All database changes made by URID7
have been fully backed out, but these database changes might not have been
written to disk. A subsequent restart of Db2 causes these changes to be written to
disk during forward recovery.
Related concepts
Recommendations for changing the BSDS log inventory
3. Start Db2. At the end of restart, the CRCR is marked “Deactivated” to prevent
its use on a subsequent restart. Until the restart is complete, the CRCR is in
effect. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are
stopped.
4. Resolve all inconsistent data problems. After the successful start of Db2, resolve
all data inconsistency problems. “Resolving inconsistencies resulting from a
conditional restart” on page 567 describes how to do this. At this time, make all
other data available for use.
Related concepts
DSN1LOGP summary report
Symptoms
Abend code 00D1032A is issued, and message DSNJ113E is displayed:
DSNJ113E RBA log-rba NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=aaaaaaaa, CORRELATION-ID=aaaaaaaa
Causes
The BSDS wrapped around when log RBA read requests were submitted: when the last archive log data sets were added to the BSDS, the maximum allowable number of log data sets in the BSDS was exceeded, which caused the earliest entries to be displaced by the new ones. As a result, the requested RBA, which resides in a displaced log data set, cannot be read after the wrap occurs.
Symptoms
The following messages are issued:
v DSNJ100I
v DSNJ107I
v DSNJ119I
Causes
Any of the following problems might affect the BSDS or log data sets during
restart:
v A log data set is physically damaged.
v Both copies of a log data set are physically damaged in the case of dual logging
mode.
v A log data set is lost.
v An archive log volume was reused even though it was still needed.
v A log data set contains records that are not recognized by Db2 because they are
logically broken.
Environment
Db2 cannot be started until this procedure is performed.
Symptoms
This situation is generally accompanied by messages or abend reason codes that
indicate that an excessive amount of log information, or the entire log, has been
lost.
are inconsistent, you must decide whether you are willing to assume the risk that
is involved in restarting Db2 under those conditions.
All system and user table spaces must be intact, and you must have a recent copy
of the BSDS. Other VSAM clusters on disk, such as the system databases
DSNDB01, DSNDB04, and DSNDB06, as well as user databases, are assumed to exist.
You can recover from an excessive loss of active log data in one of two ways.
If your site experiences an excessive loss of active log data, you can recover by
creating a gap in the active log.
Procedure
v If you want to minimize the number of potential read operations on the
archive log data sets, use the access method services REPRO command to copy
the data from each archive log data set into the corresponding active log data
set. Ensure that you copy the proper RBA range into the active log data set.
Ensure that the active log data set is large enough to hold all the data from
the archive log data set. When Db2 does an archive operation, it copies the
log data from the active log data set to the archive log data set, and then
pads the archive log data set with binary zeros to fill a block. In order for the
access method services REPRO command to be able to copy all of the data
from the archive log data set to a recently defined active log data set, the
new active log data set might need to be larger than the original one.
For example, if the block size of the archive log data set is 28 KB, and the
active log data set contains 80 KB of data, Db2 copies the 80 KB and pads
the archive log data set with 4 KB of nulls to fill the last block. Thus, the
archive log data set now contains 84 KB of data instead of 80 KB. In order
for the access method services REPRO command to complete successfully, the
active log data set must be able to hold 84 KB, rather than just 80 KB of data.
v If you are not concerned about read operations against the archive log data
sets, complete steps 5a and 5b on page 565 (as though the archive data sets
did not exist).
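As a sketch of the REPRO approach described above, an access method services job might look like the following; the archive and active log data set names are placeholders:

```jcl
//COPYLOG  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(prefix.ARCHLOG1.A0000007) -
        OUTDATASET(prefix.LOGCOPY1.DS01)
/*
```

Remember that the target active log data set must be large enough to hold the padded archive log data, as explained above.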
6. Choose the appropriate point for Db2 to start logging. To do this, determine the
highest possible log RBA of the prior log. From previous console logs that were
written when Db2 was operational, locate the last DSNJ001I message. When
Db2 switches to a new active log data set, this message is written to the
console, identifying the data set name and the highest potential log RBA that
can be written for that data set. Assume that this is the value X'8BFFF'. Add
one to this value (X'8C000'), and create a conditional restart control record that
specifies the following change log inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
When Db2 starts, all phases of restart are bypassed, and logging begins at log
RBA X'8C000'. If you choose this method, you do not need to use the RESET
option of the DSN1COPY utility, and you can save a lot of time.
7. To restart Db2 without using any log data, create a conditional restart control
record for the change log inventory utility (DSNJU003).
8. Start Db2. Use the START DB2 ACCESS(MAINT) command until data is consistent
or page sets are stopped.
9. After restart, resolve all inconsistent data as described in “Resolving
inconsistencies resulting from a conditional restart” on page 567.
Results
This procedure causes all phases of restart to be bypassed and logging to begin at
the point in the log RBA that you identified in step 6 (X'8C000' in the example
given in this procedure). This procedure creates a gap in the log between the
highest RBA kept in the BSDS and, in this example, X'8C000', and that portion of
the log is inaccessible.
What to do next
Because no Db2 process, including the RECOVER utility, can tolerate a gap, you
need to take image copies of all data after a cold start, even data that you know is consistent.
Related reference
You can do a cold start without creating a gap in the log. Although this approach
does eliminate the gap in the physical log record, you cannot use a cold start to
resolve the logical inconsistencies.
A cold status is associated with a cold start, which is a process by which a Db2
subsystem restarts without processing any log records. Db2 has no memory of
previous connections with its partner. The general process that occurs with a cold
start in a distributed environment is as follows:
1. The partner (for example CICS) accepts the cold start connection and
remembers the recovery log name of the Db2 subsystem that experienced the
cold start.
2. If the partner has indoubt thread resolution requirements with the cold-starting
Db2 subsystem, those requirements cannot be satisfied.
3. The partner terminates its indoubt resolution responsibility with the
cold-starting Db2 subsystem. However, as a participant, the partner has indoubt
logical units of work that must be resolved manually.
4. Because the Db2 subsystem has an incomplete record of resolution
responsibilities, Db2 attempts to reconstruct as much resynchronization
information as possible.
5. Db2 displays the information that it was able to reconstruct in one or more
DSNL438 or DSNL439 messages.
6. Db2 then discards the synchronization information that it was able to
reconstruct and removes any restrictive states that are maintained on the object.
Resolving inconsistencies
In some problem situations, you need to determine what you must do in order to
resolve any data inconsistencies that exist.
Procedure
To resolve inconsistencies:
1. Determine the scope of any inconsistencies that are introduced by the situation.
a. If the situation is either a cold start that is beyond the current end of the log
or a conditional restart that skips backout or forward log recovery, use the
DSN1LOGP utility to determine what units of work have not been backed
out and which objects are involved. For a cold start that is beyond the end
of the log, you can also use DSN1LOGP to help identify any restrictive
object states that have been lost.
b. If a conditional restart truncates the log in a non-data sharing environment,
recover all data and indexes to the new current point in time, and rebuild
the data and indexes as needed. You need to recover or rebuild (or both
recover and rebuild) the data and indexes because data and index updates
might exist without being reflected in the Db2 log. When this situation
occurs, a variety of inconsistency errors might occur, including Db2 abends
with reason code 00C200C1.
2. Decide what approach to take to resolve inconsistencies by reading related
topics about the approaches:
v Recovery to a prior point of consistency
v Restoration of a table space
v Use of the REPAIR utility on the data
The first two approaches are less complex than, and therefore preferred over,
the third approach.
3. If one or more of the following conditions are applicable, take image copies of
all Db2 table spaces:
v You did a cold start.
v You did a conditional restart that altered or truncated the log.
v The log is damaged.
v Part of the log is no longer accessible.
When a portion of the Db2 recovery log becomes inaccessible, all Db2 recovery
processes have difficulty operating successfully, including restart, RECOVER,
and deferred restart processing. Conditional restart allows circumvention of the
problem during the restart process. To ensure that RECOVER does not attempt
to access the inaccessible portions of the log, secure a copy (either full or
incremental) that does not require such access. A failure occurs any time a Db2
process (such as the RECOVER utility) attempts to access an inaccessible
portion of the log. You cannot be sure which Db2 processes must use that
portion of the recovery log. Therefore, you need to assume that all data
recovery activity requires that portion of the log.
A cold start might cause down-level page set errors, which you can find out
about in different ways:
v Message DSNB232I is sometimes displayed during Db2 restart, once for each
down-level page set that Db2 detects. After you restart Db2, check the
console log for down-level page set messages.
– If a small number of those messages exist, run DSN1COPY with the
RESET option to correct the errors to the data before you take image
copies of the affected data sets.
– If a large number of those messages exist, the actual problem is not that
page sets are down-level but that the conditional restart inadvertently
caused a high volume of DSNB232I messages. In this case, temporarily
turn off down-level detection by setting the DLDFREQ subsystem parameter to 0.
In either case, continue with step 4.
v If you run the COPY utility with the SHRLEVEL REFERENCE option to
make image copies, the COPY utility sometimes issues message DSNB232I
about down-level page sets that Db2 does not detect during restart. If any of
those messages were issued when you are making image copies, correct the
errors, and continue making image copies of the affected data sets.
v If you use some other method to make image copies, you will find out about
down-level page set errors during normal operation. In this case, you need to
correct the errors by using the information in “Recovering from a down-level
page set problem” on page 575.
4. For any Db2 (catalog and directory) system table spaces that are inconsistent,
recover them in the proper order. You might need to recover to a prior point in
time, prior to the conditional restart.
5. For any objects that you suspect might be inconsistent, resolve the database
inconsistencies before proceeding.
v First, resolve inconsistencies in Db2 system databases DSNDB01 and
DSNDB06. Catalog and directory inconsistencies need to be resolved before
inconsistencies in other databases because the subsystem databases describe
all other databases, and access to other databases requires information from
DSNDB01 and DSNDB06.
v If you determine that the existing inconsistencies involve indexes only (not
data), use the REBUILD INDEX utility to rebuild the affected indexes.
Alternatively, you can use the RECOVER utility to recover the index if
rebuilding the indexes is not possible.
v For a table space that cannot be recovered (and is thus inconsistent),
determine the importance of the data and whether it can be reloaded. If the
data is not critical or can be reloaded, drop the table after you restart Db2,
and reload the data rather than trying to resolve the inconsistencies.
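Step 1a above refers to the DSN1LOGP stand-alone utility. A summary run might be sketched as follows; the BSDS name and RBA range are assumptions, and SUMMARY(ONLY) limits the output to the summary report of incomplete units of recovery:

```jcl
//LOGPRT   EXEC PGM=DSN1LOGP
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//BSDS     DD DISP=SHR,DSN=prefix.BSDS01
//SYSIN    DD *
 RBASTART (7425000) RBAEND (742A000) SUMMARY (ONLY)
/*
```

The summary report identifies the units of recovery that were not completed and the page sets that they modified.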
Related concepts:
Recovery of data to a prior point in time
Related tasks:
Recovering catalog and directory objects (Db2 Utilities)
Related reference:
LEVELID UPDATE FREQ field (DLDFREQ subsystem parameter) (Db2
Installation and Migration)
DSN1COPY (Db2 Utilities)
RECOVER (Db2 Utilities)
You can restore the table space by reloading data into it or by re-creating the table
space, which requires advance planning. Either of these methods is easier than
using REPAIR.
Reloading the table space is the preferred approach, when it is possible, because
reloading is easier and requires less advance planning than re-creating a table
space. Re-creating a table space involves dropping and then re-creating the table
space and associated tables, indexes, authorities, and views, which are implicitly
dropped when the table space is dropped. Therefore, re-creating the objects means
that you need to plan ahead so that you will be prepared to re-establish indexes,
views, authorizations, and the data content itself.
Restriction:
You cannot drop Db2 system tables, such as the catalog and directory. For these
system tables, follow one of these procedures instead of this one:
v Recovery of data to a prior point in time
v “Using the REPAIR utility on inconsistent data” on page 571
You can resolve inconsistencies with the REPAIR utility. However, using REPAIR is
not recommended unless the inconsistency is limited to a small number of data or
index pages.
Db2 does not provide a mechanism to automatically inform users about data that
is physically inconsistent or damaged. When you use SQL to access data that is
physically damaged, Db2 issues messages to indicate that data is not available due
to a physical inconsistency.
However, Db2 includes several utilities that can help you identify data that is
physically inconsistent before you try to access it. These utilities are:
v CHECK DATA
v CHECK INDEX
v CHECK LOB
v COPY with the CHECKPAGE option
v DSN1COPY with the CHECK option
Attention: If you decide to use this method to resolve data inconsistencies, use
extreme care. Use of the REPAIR utility to correct inconsistencies requires in-depth
knowledge of Db2 data structures. Incorrect use of the REPAIR utility can cause
further corruption and loss of data. Read this topic carefully because it contains
information that is important to the successful resolution of the inconsistencies.
Restrictions:
v Although the DSN1LOGP utility can identify page sets that contain
inconsistencies, this utility cannot identify the specific data modifications that
are involved in the inconsistencies within a given page set.
v Any pages that are on the logical page list (perhaps caused by this restart)
cannot be accessed by using the REPAIR utility.
Procedure
START DATABASE(dbase) SPACENAM(space) ACCESS(FORCE)
In this command, space identifies the table space that is involved.
2. If any system data is inconsistent, use the REPAIR utility to resolve those
inconsistencies. Db2 system data (such as data that is in the catalog and
directory) exists in interrelated tables and table spaces. Data in Db2 system
databases cannot be modified with SQL, so use of the REPAIR utility is
necessary to resolve the inconsistencies that are identified.
3. Determine if you have any structural violations in data pages. Db2 stores data
in data pages. The structure of data in a data page must conform to a set of
rules for Db2 to be able to process the data accurately. Using a conditional
restart process does not cause violations to this set of rules; but, if violations
existed prior to conditional restart, they continue to exist after conditional
restart.
4. Use the DSN1COPY utility with the CHECK option to identify any violations
that you detected in the previous step, and then resolve the problems, possibly
by recovering or rebuilding the object or by dropping and re-creating it.
5. Examine the various types of pointers that Db2 uses to access data (indexes,
hashes, and links), and identify inconsistencies that need to be manually
corrected.
Hash and link pointers exist in the Db2 directory database; link pointers also
exist in the catalog database. Db2 uses these pointers to access data. During a
conditional restart, data pages might be modified without update of the
corresponding pointers. When this occurs, one of the following actions might
occur:
v If a pointer addresses data that is nonexistent or incorrect, Db2 abends the
request. If SQL is used to access the data, a message that identifies the
condition and the page in question is issued.
v If data exists but no pointer addresses it, that data is virtually invisible to all
functions that attempt to access it by using the damaged hash or link pointer.
The data might, however, be visible and accessible by some functions, such
as SQL functions that use another pointer that was not damaged. This
situation can result in inconsistencies.
If a row that contains a varying-length field is updated, it can increase in size.
If the page in which the row is stored does not contain enough available space
to store the additional data, the row is placed in another data page, and a
pointer to the new data page is stored in the original data page. After a
conditional restart, one of the following conditions might exist.
v The row of data exists, but the pointer to that row does not exist. In this
case, the row is invisible, and the data cannot be accessed.
v The pointer to the row exists, but the row itself no longer exists. Db2 abends
the requester when any operation (for instance, a SELECT) attempts to access
the data. If termination occurs, one or more messages are issued to identify
the condition and the page that contains the pointer.
6. Use the REPAIR utility to resolve any inconsistencies that you detected in the
previous step.
7. Reset the log RBA in every data and index page set that is to be corrected
with this procedure by using the DSN1COPY RESET option. This step is
necessary for the following reason: if the log was truncated, changing data by
using the REPAIR utility can cause problems. Each data and index page
contains the log RBA of the last recovery log record that was applied against
the page. Db2 does not allow modification of a page that contains a log RBA
that is higher than the current end of the log.
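Steps 4 and 7 both run the DSN1COPY stand-alone utility in batch. A sketch of a check-only pass might look like the following; the data set name is a placeholder, and for step 7 you would specify PARM='RESET' with a real output data set instead of DUMMY:

```jcl
//CHECK    EXEC PGM=DSN1COPY,PARM='CHECK'
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=DSNC110.DSNDBC.dbname.tsname.I0001.A001
//SYSUT2   DD DUMMY
```

Stop the table space or index space before you run DSN1COPY against its underlying VSAM data set.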
Symptoms
The symptoms vary based on whether the failure was an allocation or an open
problem:
Allocation problem
The following message indicates an allocation problem:
DSNB207I - DYNAMIC ALLOCATION OF DATA SET FAILED.
REASON=rrrr DSNAME=dsn
Environment
When this type of problem occurs:
v The table space is automatically stopped.
v Programs receive a -904 SQLCODE (SQLSTATE '57011').
v If the problem occurs during restart, the table space is marked for deferred
restart, and restart continues. The changes are applied later when the table space
is started.
In this recovery procedure, you create and populate a table that contains data that
is both valid and invalid. You need to restore your Db2 subsystem to a point in
time before the invalid data was inserted into the table, but after the point in time
when the valid data was inserted. Also, you create an additional table space and
table that Db2 will re-create during the log-apply phase of the restore process.
Procedure
Determine the point in time that you want to recover to, and then recover the
Db2 subsystem to that prior point in time:
1. Issue the START DB2 command to start Db2 and all quiesced members of the
data sharing group. Quiesced members are ones that you removed from the
data sharing group either temporarily or permanently. Quiesced members
remain dormant until you restart them.
2. Issue SQL statements to create a database, a table space, and two tables with
one index for each table.
3. Issue the BACKUP SYSTEM DATA ONLY utility control statement to create a
backup copy of only the database copy pool for a Db2 subsystem or data
sharing group.
4. Issue an SQL statement to first insert rows into one of the tables, and then
update some of the rows.
5. Use the LOAD utility with the LOG NO attribute to load the second table.
6. Issue SQL statements to create an additional table space, table, and index in
an existing database. Db2 will re-create the additional table space and table
during the log-apply phase of the restore process.
7. Issue the SET LOG SUSPEND command or the SET LOG RESUME command
to obtain a log truncation point, logpoint1, which is the point you want to
recover to. For a non-data sharing group, use the RBA value. For a data
sharing group, use the lowest log record sequence number (LRSN) from the
active members.
Symptoms
The following message is issued:
DSNB232I csect-name - UNEXPECTED DATA SET LEVEL ID ENCOUNTERED
The message also contains the level ID of the data set, the level ID that Db2
expects, and the name of the data set.
Causes
A down-level page set can be caused by:
v A Db2 data set is inadvertently replaced by an incorrect or outdated copy.
Usually this happens in conjunction with use of a stand-alone or non-Db2 utility,
such as DSN1COPY or DFSMShsm.
v A cold start of Db2 occurs.
v A VSAM high-used RBA of a table space becomes corrupted.
Db2 associates a level ID with every page set or partition. Most operations detect a
down-level ID and return an error condition when the page set or partition is first
opened for mainline or restart processing. The exceptions are the following
operations, which do not use the level ID data:
v LOAD REPLACE
v RECOVER
v REBUILD INDEX
v DSN1COPY
v DSN1PRNT
Environment
v If the error was reported during mainline processing, Db2 sends a "resource
unavailable" SQLCODE and a reason code to the application to explain the error.
v If the error was detected while a utility was processing, the utility generates a
return code 8.
Related system programmer actions: Consider taking the following actions, which
might help you minimize or deal with down-level page set problems in the future:
v To control how often the level ID of a page set or partition is updated, specify a
value between 0 and 32767 on the LEVELID UPDATE FREQ field of panel
DSNTIPL.
Unless your LOBs are fairly small, specifying LOG NO for LOB objects is
recommended for the best performance, because the performance cost of logging
typically exceeds the benefit of logging such large amounts of data. However, to
maximize recoverability, specify LOG YES. If no changes are made to LOB data,
the logging cost is not an issue. In either case, make image copies of the LOB
table space to prepare for failures. Base the frequency with which you make
image copies on how often you update LOB data.
Procedure
To recover LOB data from a LOB table space that is defined with LOG NO:
1. Run the RECOVER utility as you do for other table spaces:
RECOVER TABLESPACE dbname.lobts
If changes were made after the image copy, Db2 puts the table space in
auxiliary warning status, which indicates that some of your LOBs are invalid.
Applications that try to retrieve the values of those LOBs receive SQLCODE
-904. Applications can still access other LOBs in the LOB table space.
2. Get a report of the invalid LOBs by running CHECK LOB on the LOB table
space:
CHECK LOB TABLESPACE dbname.lobts
3. Fix the invalid LOBs by updating the LOBs or setting them to the null value.
For example, suppose that you determine from the CHECK LOB utility that the
row of the EMP_PHOTO_RESUME table with ROWID
X'C1BDC4652940D40A81C201AA0A28' has an invalid value for column
RESUME. If host variable hvlob contains the correct value for RESUME, you can
use this statement to correct the value:
UPDATE DSN8B10.EMP_PHOTO_RESUME
SET RESUME = :hvlob
WHERE EMP_ROWID = ROWID(X'C1BDC4652940D40A81C201AA0A28');
Symptoms
The following message is issued, where dddddddd is a table space name:
DSNU086I DSNUCDA1 READ I/O ERRORS ON SPACE=dddddddd.
DATA SET NUMBER=nnn.
I/O ERROR PAGE RANGE=aaaaaa, bbbbbb.
Any table spaces that are identified in DSNU086I messages must be recovered.
Follow the steps later in this topic.
Environment
Db2 remains active.
Symptoms
The following message is issued, where dddddddd is the name of the table space
from the catalog or directory that failed (for example, SYSIBM.SYSCOPY):
DSNU086I DSNUCDA1 READ I/O ERRORS ON SPACE=dddddddd.
DATA SET NUMBER=nnn.
I/O ERROR PAGE RANGE=aaaaaa, bbbbbb
This message can indicate either read or write errors. You might also receive a
DSNB224I or DSNB225I message, which indicates an input or output error for the
catalog or directory.
Environment
Db2 remains active.
If the Db2 directory or any catalog table is damaged, only user IDs with the
RECOVERDB privilege in DSNDB06, or an authority that includes that privilege,
can perform the recovery. Furthermore, until the recovery takes place, only those
IDs can do anything with the subsystem. If an ID without proper authorization
attempts to recover the catalog or directory, message DSNU060I is displayed. If the
authorization tables are unavailable, message DSNT500I is displayed to indicate
that the resource is unavailable.
The dddddddd is the name of the table space that failed.
– If the table space is in the Db2 directory, the data set name format is:
DSNC110.DSNDBC.DSNDB01.dddddddd.I0001.A001
The dddddddd is the name of the table space that failed.
If you do not use the default (IBM-supplied) formats, the formats for data set
names might be different.
3. Use the access method services DELETE command to delete the data set,
specifying the fully qualified data set name.
4. After the data set is deleted, use the access method services DEFINE command
with the REUSE parameter to redefine the same data set, again specifying the
same fully qualified data set name. Use the JCL for installing Db2 to determine
the appropriate parameters.
5. Issue the command START DATABASE ACCESS(UT), naming the table space that is
involved.
6. Use the RECOVER utility to recover the table space that failed.
7. Issue the command START DATABASE, specifying the table space name and RO or
RW access, whichever is appropriate.
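Steps 3 and 4 use access method services. A sketch of the job might look like the following; the data set name, volume serial, and space quantities are placeholders, and the actual DEFINE parameters should be taken from the Db2 installation JCL:

```jcl
//DELDEF   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE 'DSNC110.DSNDBC.DSNDB01.dddddddd.I0001.A001' CLUSTER
  DEFINE CLUSTER -
         ( NAME('DSNC110.DSNDBC.DSNDB01.dddddddd.I0001.A001') -
           LINEAR REUSE -
           VOLUMES(vvvvvv) -
           CYLINDERS(50 50) -
           SHAREOPTIONS(3 3) ) -
         DATA -
         ( NAME('DSNC110.DSNDBD.DSNDB01.dddddddd.I0001.A001') )
/*
```

The REUSE parameter allows the RECOVER utility in step 6 to reuse the redefined data set.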
Related tasks
Recovering catalog and directory objects (Db2 Utilities)
Symptoms
The symptoms for integrated catalog facility problems vary according to the
underlying problems.
Symptoms
Db2 sends the following message to the master console:
DSNP012I - DSNPSCT0 - ERROR IN VSAM CATALOG LOCATE FUNCTION
FOR data_set_name
CTLGRC=50
CTLGRSN=zzzzRRRR
CONNECTION-ID=xxxxxxxx,
CORRELATION-ID=yyyyyyyyyyyy
LUW-ID=logical-unit-of-work-id=token
In the accompanying VSAM message, a reason code of 28, 30, or 32 indicates an
out-of-space condition. Any other reason code indicates a damaged VVDS.
Symptoms
The symptoms vary based on the specific situation. The following messages and
codes might be issued:
v DSNP007I
v DSNP001I
v -904 SQL return code (SQLSTATE '57011')
Environment
For a demand request failure during restart, the object that is supported by the
data set (an index space or a table space) is stopped with deferred restart pending.
Otherwise, the state of the object remains unchanged.
Procedure
1. Use the SQL statement ALTER TABLESPACE or ALTER INDEX with a USING
clause. (You do not need to stop the table space before you use ALTER
TABLESPACE.) You can give new values of PRIQTY and SECQTY in either the
same or a new Db2 storage group.
2. Use one of the following procedures. No movement of data occurs until this
step is completed.
v For indexes: If you have taken full image copies of the index, run the
RECOVER INDEX utility. Otherwise, run the REBUILD INDEX utility.
v For table spaces other than LOB table spaces: Run one of the following
utilities on the table space: REORG, RECOVER, or LOAD REPLACE.
v For LOB table spaces that are defined with LOG YES: Run the RECOVER
utility on the table space.
v For LOB table spaces that are defined with LOG NO:
a. Start the table space in read-only (RO) mode to ensure that no updates
are made during this process.
b. Make an image copy of the table space.
c. Run the RECOVER utility on the table space.
d. Start the table space in read-write (RW) mode.
Enlarging a fully extended data set for the work file database
If you have an out-of-disk-space or extent limit problem with the work file
database (DSNDB07), you need to add space to the data set.
Procedure
To enlarge a fully extended data set for the work file database:
Add space to the Db2 storage group, choosing one of the following approaches:
v Use SQL to create more table spaces in database DSNDB07.
v Execute these steps:
1. Use the command STOP DATABASE(DSNDB07) to ensure that no users are
accessing the database.
2. Use SQL to alter the storage group, adding volumes as necessary.
3. Use the command START DATABASE(DSNDB07) to allow access to the database.
Symptoms
One of the following messages is issued at the end of utility processing, depending
on whether the table space is partitioned:
DSNU561I csect-name - TABLESPACE=tablespace-name PARTITION=partnum
IS IN CHECK PENDING
DSNU563I csect-name - TABLESPACE=tablespace-name IS IN CHECK PENDING
Causes
Db2 detected one or more referential constraint violations.
Environment
The table space is still generally available. However, it is not available to the
COPY, REORG, and QUIESCE utilities, or to SQL select, insert, delete, or update
operations that involve tables in the table space.
Symptoms
The symptoms for DDF failures vary based on the precise problems. The
symptoms include messages, SQL return codes, and apparent wait states.
Symptoms
VTAM or TCP/IP returns a resource-unavailable condition along with the
appropriate diagnostic reason code and message. A DSNL500 or DSNL511
(conversation failed) message is sent to the console for the first failure to a location
for a specific logical unit (LU) mode or TCP/IP address. All other threads that
detect a failure from that LU mode or IP address are suppressed until
communications to the LU that uses that mode are successful.
Db2 returns messages DSNL501I and DSNL502I. Message DSNL501I usually means
that the other subsystem is not operational. When the error is detected, it is
reported by a console message, and the application receives an SQL return code.
Environment
The application can choose to request rollback or commit, both of which deallocate
all but the first conversation between the allied thread and the remote database
access thread. A commit or rollback message is sent over this remaining
conversation.
Errors during the deallocation process of the conversation are reported through
messages, but they do not stop the commit or rollback processing. If the
conversation that is used for the commit or rollback message fails, the error is
reported. If the error occurred during a commit process and if the remote database
access was read-only, the commit process continues. Otherwise the commit process
is rolled back.
Symptoms
A DSNL700I message, which indicates that a resource-unavailable condition exists,
is sent to the console. Other messages that describe the cause of the failure are also
sent to the console.
Environment
If the distributed data facility (DDF) has already started when an individual CDB
table becomes unavailable, DDF does not terminate. Depending on the severity of
the failure, threads are affected as follows:
v The threads receive a -904 SQL return code (SQLSTATE '57011') with resource
type 1004 (CDB).
v The threads continue using VTAM default values.
The only threads that receive a -904 SQL return code are those that access locations
that have not had any prior threads. Db2 and DDF remain operational.
Resolving the problem
Operator response:
1. Examine the messages to determine the source of the error.
2. Correct the error, and then stop and restart DDF.
Symptoms
A DSNL701I, DSNL702I, DSNL703I, DSNL704I, or DSNL705I message is issued to
identify the problem. Other messages that describe the cause of the failure are also
sent to the console.
Environment
DDF fails to start. Db2 continues to run.
Symptoms
In the event of a failure of a database access thread, the Db2 server terminates the
database access thread only if a unit of recovery exists. The server deallocates the
database access thread and then deallocates the conversation with an abnormal
indication (a negative SQL code), which is subsequently returned to the requesting
application. The returned SQL code depends on the type of remote access:
v DRDA access
For a database access thread or non-Db2 server, a DDM error message is sent to
the requesting site, and the conversation is deallocated normally. The SQL error
status code is a -30020 with a resource type 1232 (agent permanent error
received from the server).
Environment
Normal Db2 error recovery mechanisms apply, with the following exceptions:
v Errors that are encountered in the functional recovery routine are automatically
converted to rollback situations. The allied thread experiences conversation
failures.
v Errors that occur during commit, rollback, and deallocate within the DDF
function do not normally cause Db2 to abend. Conversations are deallocated,
and the database access thread is terminated. The allied thread experiences
conversation failures.
Symptoms
VTAM messages and Db2 messages are issued to indicate that distributed data
facility (DDF) is terminating and to explain why.
Causes
Environment
DDF terminates. An abnormal VTAM failure or termination causes DDF to issue a
STOP DDF MODE(FORCE) command. The VTAM commands Z NET,QUICK and Z
NET,CANCEL cause an abnormal VTAM termination. A Z NET,HALT causes a STOP DDF
MODE(QUIESCE) to be issued by DDF.
| Symptoms
| Db2 messages, such as DSNL013I and DSNL004I, are issued to indicate the
| problem.
| DSNL013I
| Contains the error field value that is returned from the VTAM ACB OPEN.
| For information about possible values, see OPEN macroinstruction error
| fields(z/OS Communications Server: IP and SNA Codes).
| DSNL004I
| Normally specifies a fully qualified LU name, network-name.luname. The
| absence of the network-name indicates a problem.
| v The LU name is defined, but the LU did not start automatically during
| VTAM startup.
| v The LU name that is displayed in the DSNL004I message is not valid. In this
| case, stop Db2, run a DSNJU003 utility job, and restart Db2.
| To diagnose the problem within VTAM:
| a. Issue the following command: DISPLAY NET,ID=luname,SCOPE=ALL, where
| luname is displayed in message DSNL004I, and examine the output of the
| DISPLAY command.
| v If the output suggests that the LU name does not exist, display each
| APPL node by issuing the DISPLAY APPLS command. For command
| details, see DISPLAY APPLS command (z/OS Communications Server:
| SNA Operation).
| v If you do not find an active APPL node that contains the LU name or a
| model LU name (a wildcard LU name) that Db2 can use, take one of the
| following actions:
| – If no active APPL node exists and no APPL major node definition
| exists in any library that is referenced by the VTAM started procedure
| VTAMLST DD specification, define an APPL node to VTAM, start it,
| and then start DDF.
| – If no active APPL node exists, but an APPL major node definition
| exists in a library that is referenced by the VTAM started procedure
| VTAMLST DD specification, start the major node, and then start DDF.
| The cause of this situation is that the major node was not
| automatically started during VTAM startup.
| b. Work with your VTAM administrator to ensure that the required APPL
| node or APPL major node is started automatically during VTAM startup in
| the future. For more information, see The APPL statement (Db2 Installation
| and Migration).
| 3. If the previous steps do not resolve the problem, examine the reason code from
| the DSNL013I message. If the reason code is X'24', the PRTCT value in the
| APPL definition does not match the password value that was defined to Db2 in
| the DSNJU003 DDF statement. If the password specification to Db2 is missing
| or incorrect, with Db2 stopped, run a DSNJU003 utility job, specifying a DDF
| statement with the correct password, and restart Db2.
| 4. If none of the previous steps resolves the issue, contact your VTAM
| administrator for additional help in resolving the problem.
Symptoms
TCP/IP messages and Db2 messages are issued to indicate that TCP/IP is
unavailable.
Environment
Distributed data facility (DDF) periodically attempts to reconnect to TCP/IP. If the
TCP/IP listener fails, DDF automatically tries to re-establish the TCP/IP listener
for the DRDA SQL port or the resync port every three minutes. TCP/IP
connections cannot be established until the TCP/IP listener is re-established.
Symptoms
Message DSNL501I is issued when a CNOS request to a remote LU fails. The
CNOS request is the first attempt to connect to the remote site and must be
negotiated before any conversations can be allocated. Consequently, if the remote
LU is not active, message DSNL501I is displayed to indicate that the CNOS request
cannot be negotiated. Message DSNL500I is issued only once for all the SQL
conversations that fail as a result of a remote LU failure.
Message DSNL502I is issued for system conversations that are active to the remote
LU at the time of the failure. This message contains the VTAM diagnostic
information about the cause of the failure.
Environment
Any application communications with a failed LU receives a message to indicate a
resource-unavailable condition. Any attempt to establish communication with such
an LU fails.
Symptoms
An application is in an indefinitely long wait condition. This can cause other Db2
threads to fail due to resources that are held by the waiting thread. Db2 sends an
error message to the console, and the application program receives an SQL return
code.
Environment
Db2 does not respond.
Operator response:
1. Use the DISPLAY THREAD command with the LOCATION and DETAIL options to
identify the LUWID and the session allocation for the waiting thread.
2. Use the CANCEL DDF THREAD command to cancel the waiting thread.
3. If the CANCEL DDF THREAD command fails to break the wait (because the thread
is not suspended in Db2), try using VTAM commands such as VARY
NET,TERM,SID=xxx or use the TCP/IP DROP command. For instructions on how to
use the VTAM commands and TCP/IP commands, see -CANCEL THREAD
(Db2) (Db2 Commands).
Related tasks
Canceling threads
Symptoms
Message DSNL500I is issued at the requester for VTAM conversations (if the
requester is a Db2 subsystem) with return codes RTNCD=0, FDBK2=B, RCPRI=4, and RCSEC=5.
These return codes indicate that a security violation has occurred. The server has
deallocated the conversation because the user is not allowed to access the server.
For conversations that use DRDA access, LU 6.2 communications protocols present
specific reasons for why the user access failed, and these reasons are
communicated to the application. If the server is a Db2 database access thread,
message DSNL030I is issued to describe what caused the user to be denied access
into Db2 through DDF. No message is issued for TCP/IP connections.
Causes
This problem is caused by a remote user who attempts to access Db2 through DDF
without the necessary security authority.
Symptoms
The specific symptoms of a disaster that affects your local system hardware vary,
but when this happens, the affected Db2 subsystem is not operational.
Causes
Your local system hardware has suffered physical damage.
Procedure
For a remote site recovery procedure where tape volumes that contain system data
are sent from the production site, specify the dump class that is available at the
remote site by using the following installation options on installation panel
DSNTIP6:
v Either RESTORE FROM DUMP or RECOVER FROM DUMP
v DUMP CLASS NAME
Procedure
1. If an integrated catalog facility catalog does not already exist, run job
DSNTIJCA to create a user catalog.
2. Use the access method services IMPORT command to import the integrated
catalog facility catalog.
3. Restore Db2 libraries. Some examples of libraries that you might need to
restore include:
v Db2 SMP/E libraries
v User program libraries
v User DBRM libraries
v Db2 CLIST libraries
v Db2 libraries that contain customized installation jobs
v JCL for creating user-defined table spaces
4. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects. (Because
step 2 imports a user ICF catalog, the catalog reflects data sets that do not
exist on disk.)
5. Obtain a copy of installation job DSNTIJIN, which creates Db2 VSAM and
non-VSAM data sets. Change the volume serial numbers in the job to volume
serial numbers that exist at the recovery site. Comment out the steps that
create Db2 non-VSAM data sets, if these data sets already exist. Run
DSNTIJIN. However, do not run DSNTIJID.
6. Recover the BSDS:
a. Use the access method services REPRO command to restore the contents of
one BSDS data set (allocated in step 5). You can find the most recent BSDS
image in the last file (archive log with the highest number) on the latest
archive log tape.
b. Determine the RBA range for this archive log by using the print log map
utility (DSNJU004) to list the current BSDS contents. Find the most recent
archive log in the BSDS listing, and add 1 to its ENDRBA value. Use this
as the STARTRBA. Find the active log in the BSDS listing that starts with
this RBA, and use its ENDRBA as the ENDRBA.
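The add-1-and-match rule in this step can be sketched as follows. This is an illustrative Python sketch, not part of any Db2 utility; the sample hex values are modeled on the DSNJU004 output shown later in this chapter.

```python
def archive_rba_range(archive_endrba, active_logs):
    """Compute the RBA range for the latest archive log.

    STARTRBA is the archive log's ENDRBA plus 1; ENDRBA comes from the
    active log that starts at that RBA. Values are hex strings as they
    appear in a BSDS listing.
    """
    startrba = int(archive_endrba, 16) + 1
    # Find the active log whose STARTRBA matches the computed value.
    for log_startrba, log_endrba in active_logs:
        if int(log_startrba, 16) == startrba:
            return format(startrba, "X").zfill(len(archive_endrba)), log_endrba
    raise ValueError("no active log starts at the computed STARTRBA")

# Hypothetical BSDS values: the archive log ends at ...31FFFF, and an
# active log picks up at the next RBA, ...320000.
print(archive_rba_range("0000000007A5DB31FFFF",
                        [("0000000007A5DB320000", "0000000007A5F12DFFFF")]))
```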
c. Delete the oldest archive log from the BSDS.
d. Register this latest archive log tape data set in the archive log inventory of
the BSDS that you just restored by using the change log inventory utility
(DSNJU003). This step is necessary because the BSDS image on an archive
log tape does not reflect the archive log data set that resides on that tape.
After these archive logs are registered, use the print log map utility
(DSNJU004) to list the contents of the BSDS.
e. Adjust the active logs in the BSDS by using the change log inventory
utility (DSNJU003), as necessary:
1) To delete all active logs in the BSDS, use the DELETE option of
DSNJU003. Use the BSDS listing that is produced in step 6d to
determine the active log data set names.
2) To add the active log data sets to the BSDS, use the NEWLOG
statement of DSNJU003. Do not specify a STARTRBA or ENDRBA in
the NEWLOG statement. This specification indicates to Db2 that the
new active logs are empty.
f. If you are using the Db2 distributed data facility, update the LOCATION
and the LUNAME values in the BSDS by running the change log inventory
utility with the DDF statement.
e. Optional: Specify which archive log to use by selecting OPERATOR
FUNCTIONS from panel DSNTIPB. Panel DSNTIPO opens. From panel
DSNTIPO, type YES in the READ COPY2 ARCHIVE field if you are using
dual archive logging and want to use the second copy of the archive logs.
Press Enter to continue.
f. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST
started in the first step of this procedure).
At this point, you have the log, but the table spaces have not been
recovered. With DEFER ALL, Db2 assumes that the table spaces are
unavailable but does the necessary processing to the log. This step also
handles the units of recovery that are in process.
11. Create a conditional restart control record by using the change log inventory
utility with one of the following forms of the CRESTART statement:
v CRESTART CREATE,ENDRBA=nnnnnnnnn000
The nnnnnnnnn000 equals a value that is one more than the ENDRBA of the
latest archive log.
v CRESTART CREATE,ENDTIME=nnnnnnnnnnnn
The nnnnnnnnnnnn is the end time of the log record. Log records with a
timestamp later than nnnnnnnnnnnn are truncated.
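For the ENDRBA form, the value can be derived directly from the latest archive log's ENDRBA. A minimal Python sketch (the input value is hypothetical; the suffix check reflects the nnnnnnnnn000 form shown above):

```python
def crestart_endrba(archive_endrba):
    """ENDRBA for CRESTART: one more than the latest archive log's ENDRBA.

    Archive logs end on an RBA boundary (...FFF), so adding 1 yields a
    value that ends in 000, matching the form that CRESTART requires.
    """
    value = int(archive_endrba, 16) + 1
    rba = format(value, "X").zfill(len(archive_endrba))
    if not rba.endswith("000"):
        raise ValueError("CRESTART ENDRBA must end in 000")
    return rba

print(crestart_endrba("0000000007A5DB31FFFF"))  # 0000000007A5DB320000
```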
12. Enter the command START DB2 ACCESS(MAINT).
You must enter this command, because real-time statistics are active and
enabled; otherwise, errors or abends could occur during Db2 restart
processing and recovery processing (for example, GRECP recovery, LPL
recovery, or the RECOVER utility).
Even though Db2 marks all table spaces for deferred restart, log records are
written so that in-abort and inflight units of recovery are backed out.
In-commit units of recovery are completed, but no additional log records are
written at restart to complete them; instead, the original redo log records
are applied later by the RECOVER utility.
At the primary site, Db2 probably committed or aborted the inflight units of
recovery, but you have no way of knowing.
During restart, Db2 accesses two table spaces that result in DSNT501I,
DSNT500I, and DSNL700I resource unavailable messages, regardless of DEFER
status. The messages are normal and expected, and you can ignore them.
The following return codes can accompany the message. Other codes are also
possible.
00C90081
This return code is issued for activity against the object that occurs
during restart as a result of a unit of recovery or pending writes. In
this case, the status that is shown as a result of DISPLAY is STOP,DEFER.
00C90094
Because the table space is currently only a defined VSAM data set, it
is in a state that Db2 does not expect.
00C90095
Db2 cannot access the page, because the table space or index space
has not been recovered yet.
00C900A9
An attempt was made to allocate a deferred resource.
13. Resolve the indoubt units of recovery. The RECOVER utility, which you run in
a subsequent step, fails on any table space that has indoubt units of recovery.
Because of this, you must resolve them first. Determine the proper action to
progress at time of failure” on page 606. You cannot restart a utility at the
recovery site that was interrupted at the disaster site. Use the TERM UTILITY
command to terminate any utilities that are running against user table spaces
or index spaces.
a. To determine which, if any, of your table spaces or index spaces are
user-managed, perform the following queries for table spaces and index
spaces.
v Table spaces:
SELECT * FROM SYSIBM.SYSTABLEPART WHERE STORTYPE='E';
v Index spaces:
SELECT * FROM SYSIBM.SYSINDEXPART WHERE STORTYPE='E';
To allocate user-managed table spaces or index spaces, use the access
method services DEFINE CLUSTER command. To find the correct IPREFIX for
the DEFINE CLUSTER command, perform the following queries for table
spaces and index spaces.
v Table spaces:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX FROM SYSIBM.SYSTABLEPART
WHERE DBNAME=dbname AND TSNAME=tsname
ORDER BY PARTITION;
v Index spaces:
SELECT IXNAME, PARTITION, IPREFIX FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR=ixcreator AND IXNAME=ixname
ORDER BY PARTITION;
Now you can perform the DEFINE CLUSTER command with the correct
IPREFIX (I or J) in the data set name:
catname.DSNDBx.dbname.psname.y0001.znnn
The y can be either I or J, x is C (for VSAM clusters) or D (for VSAM data
components), and psname is either the table space or index space name.
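The data set name construction can be sketched in Python. This is an illustrative sketch only: DSNCAT, MYDB, and MYTS are hypothetical names, and the final znnn qualifier is rendered as Annn on the assumption that the first group of pieces uses z = A.

```python
def page_set_dsname(catname, dbname, psname, iprefix, piece=1):
    """Build the VSAM cluster name for a Db2 page set.

    iprefix ("I" or "J") comes from SYSTABLEPART/SYSINDEXPART; the fifth
    qualifier is y0001, and the last is znnn for the piece number
    (assumed here to be Annn for the first group of pieces).
    """
    if iprefix not in ("I", "J"):
        raise ValueError("IPREFIX must be I or J")
    # DSNDBC names the cluster; DSNDBD would name the data component.
    return f"{catname}.DSNDBC.{dbname}.{psname}.{iprefix}0001.A{piece:03d}"

print(page_set_dsname("DSNCAT", "MYDB", "MYTS", "J", 1))
```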
b. If your user table spaces or index spaces are STOGROUP-defined, and if
the volume serial numbers at the recovery site are different from those at
the local site, use the SQL statement ALTER STOGROUP to change them
in the Db2 catalog.
c. Recover all user table spaces and index spaces from the appropriate image
copies. If you do not copy your indexes, use the REBUILD INDEX utility
to reconstruct the indexes.
d. Start all user table spaces and index spaces for read-write processing by
issuing the command START DATABASE with the ACCESS(RW) option.
e. Resolve any remaining CHECK-pending states that would prevent COPY
execution.
f. Run queries for which the results are known.
24. Make full image copies of all table spaces and indexes with the COPY YES
attribute.
25. Finally, compensate for work that was lost since the last archive was created
by rerunning online transactions and batch jobs.
What to do next
Determine what to do about any utilities that were in progress at the time of
failure.
Related concepts
Preparations for disaster recovery
Additional recovery procedures for data sharing environments are also available.
Procedure
Connections for the SCA are not held at termination; therefore you do not
need to force off any SCA connections.
c. Delete all the Db2 coupling facility structures that have a STATUS of
ALLOCATED by using the following command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
This step is necessary to remove old information that exists in the coupling
facility from your practice startup when you installed the group.
2. If an integrated catalog facility catalog does not already exist, run job
DSNTIJCA to create a user catalog.
3. Use the access method services IMPORT command to import the integrated
catalog facility catalog.
4. Restore Db2 libraries. Some examples of libraries that you might need to
restore include:
v Db2 SMP/E libraries
v User program libraries
v User DBRM libraries
v Db2 CLIST libraries
v Db2 libraries that contain customized installation jobs
v JCL for creating user-defined table spaces
5. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects. (Because
step 3 on page 599 imports a user ICF catalog, the catalog reflects data sets
that do not exist on disk.)
6. Obtain a copy of the installation job DSNTIJIN, which creates Db2 VSAM and
non-VSAM data sets, for the first data sharing member. Change the volume
serial numbers in the job to volume serial numbers that exist at the recovery
site. Comment out the steps that create Db2 non-VSAM data sets, if these data
sets already exist. Run DSNTIJIN on the first data sharing member. However,
do not run DSNTIJID.
For subsequent members of the data sharing group, run the DSNTIJIN that
defines the BSDS and logs.
7. Recover the BSDS by following these steps for each member in the data
sharing group:
a. Use the access method services REPRO command to restore the contents of
one BSDS data set (allocated in step 6) on each member. You can find the
most recent BSDS image in the last file (archive log with the highest
number) on the latest archive log tape.
b. Determine the RBA and LRSN ranges for this archive log by using the
print log map utility (DSNJU004) to list the current BSDS contents. Find
the most recent archive log in the BSDS listing, and add 1 to its ENDRBA
value. Use this as the STARTRBA. Find the active log in the BSDS listing
that starts with this RBA, and use its ENDRBA as the ENDRBA. Use the
STARTLRSN and ENDLRSN of this active log data set as the LRSN range
(STARTLRSN and ENDLRSN) for this archive log.
c. Delete the oldest archive log from the BSDS.
d. Register this latest archive log tape data set in the archive log inventory of
the BSDS that you just restored by using the change log inventory utility
(DSNJU003). This step is necessary because the BSDS image on an archive
log tape does not reflect the archive log data set that resides on that tape.
Running DSNJU003 is critical for data sharing groups. Include the group
buffer pool checkpoint information that is stored in the BSDS from the
most recent archive log.
After these archive logs are registered, use the print log map utility
(DSNJU004) with the GROUP option to list the contents of all BSDSs. You
receive output that includes the start and end LRSN and RBA values for
the latest active log data sets (shown as NOTREUSABLE). If you did not
save the values from the DSNJ003I message, you can get those values by
running DSNJU004, which creates output as shown below.
The following sample DSNJU004 output shows the (partial) information
for the archive log member DB1G.
| ACTIVE LOG COPY 1 DATA SETS
| START RBA/LRSN/TIME END RBA/LRSN/TIME DATE/LTIME DATA SET INFORMATION
| ---------------------- ---------------------- ---------- --------------------
| 0000000007A5C5360000 0000000007A5DB31FFFF 2005.034 DSN=DSNT3LOG.DT31.LOGCOPY1.DS01
| 00CAC6509C994A000000 00CAC650C5EDD8000000 20:22 PASSWORD=(NULL) STATUS=REUSABLE
| 2013.015 14:41:16.4 2013.015 14:41:59.7
| 0000000007A5DB320000 0000000007A5F12DFFFF 2007.051 DSN=DSNT3LOG.DT31.LOGCOPY1.DS04
| 00CAC650C5EDD8000000 00CAC650EA3857000000 13:27 PASSWORD=(NULL) STATUS=REUSABLE
| 2013.015 14:41:59.7 2013.015 14:42:37.7
The following sample DSNJU004 output shows the (partial) information
for the archive log member DB2G.
10. Examine the DSN1LOGP output for each data sharing member, and identify
any utilities that were executing at the end of the last archive log. Determine
the appropriate recovery action to take on each table space that is involved in
a utility job. If DSN1LOGP output showed that utilities are inflight
(PLAN=DSNUTIL), examine SYSUTILX to identify the utility status and
determine the appropriate recovery approach.
11. Modify DSNZPxxx parameters for each member of the data sharing group:
a. Run the DSNTINST CLIST in UPDATE mode.
b. To defer processing of all databases, select DATABASES TO START
AUTOMATICALLY from panel DSNTIPB. Panel DSNTIPS opens. On panel
DSNTIPS, type DEFER in the first field and ALL in the second field; then
press Enter. You are returned to panel DSNTIPB.
c. To specify where you are recovering, select OPERATOR FUNCTIONS from
panel DSNTIPB. Panel DSNTIPO opens. From panel DSNTIPO, type
RECOVERYSITE in the SITE TYPE field. Press Enter to continue.
d. Optional: Specify which archive log to use by selecting OPERATOR
FUNCTIONS from panel DSNTIPB. Panel DSNTIPO opens. From panel
DSNTIPO, type YES in the READ ARCHIVE COPY2 field if you are using
dual archive logging and want to use the second copy of the archive logs.
Press Enter to continue.
e. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST
started in the first step of this procedure).
At this point, you have the log, but the table spaces have not been
recovered. With DEFER ALL, Db2 assumes that the table spaces are
unavailable but does the necessary processing to the log. This step also
handles the units of recovery that are in process.
12. Create a conditional restart control record for each data sharing member by
using the change log inventory utility with one of the following forms of the
CRESTART statement:
v CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn
The nnnnnnnnnnnn is the LRSN of the last log record that is to be used
during restart.
v CRESTART CREATE,ENDTIME=nnnnnnnnnnnn
The nnnnnnnnnnnn is the end time of the log record. Log records with a
timestamp later than nnnnnnnnnnnn are truncated.
Use the same LRSN or system time-of-day clock timestamp value for all
members in a data sharing group. Determine the ENDLRSN value by using
one of the following methods:
v Use the DSN1LOGP summary utility. In the “Summary of Completed
Events” section, find the lowest LRSN value that is listed in the DSN1213I
message for the data sharing group. Use this value for the ENDLRSN in the
CRESTART statement.
v Use the print log map utility (DSNJU004) to list the BSDS contents. Find the
ENDLRSN of the last log record that is available for each active member of
the data sharing group. Subtract 1 from the lowest ENDLRSN in the data
sharing group. Use this value for the ENDLRSN in the CRESTART
statement. (In the sample output that is shown in step 7d on page 600, the
value is AE3C45273A77 - 1, which is AE3C45273A76.)
v If only the console logs are available, use the archive offload message
(DSNJ003I) to obtain the ENDLRSN. Compare the ending LRSN values for
the archive logs of all members. Subtract 1 from the lowest LRSN in the
data sharing group. Use this value for the ENDLRSN in the CRESTART
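The group-wide ENDLRSN computation described above (the lowest ENDLRSN across members, minus 1) can be sketched as follows; the first value comes from the example in this step, and the second member's value is hypothetical.

```python
def group_endlrsn(member_endlrsns):
    """ENDLRSN for a group-wide CRESTART: lowest member ENDLRSN minus 1.

    All members must truncate at the same LRSN, so take the minimum
    across the data sharing group and subtract 1 (hex string values).
    """
    width = max(len(v) for v in member_endlrsns)
    lowest = min(int(v, 16) for v in member_endlrsns)
    return format(lowest - 1, "X").zfill(width)

# AE3C45273A77 - 1 = AE3C45273A76, as in the example in this step.
print(group_endlrsn(["AE3C45273A77", "AE3C45273B02"]))  # AE3C45273A76
```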
15. Recover the catalog and directory. The RECOVER function includes:
RECOVER TABLESPACE, RECOVER INDEX, or REBUILD INDEX. If you
have an image copy of an index, use RECOVER INDEX. If you do not have
an image copy of an index, use REBUILD INDEX to reconstruct the index
from the recovered table space.
a. Recover DSNDB01.SYSUTILX. This must be a separate job step.
b. Recover all indexes on SYSUTILX. This must be a separate job step.
c. Determine whether a utility was running at the time the latest archive log
was created by entering the DISPLAY UTILITY(*) command, and record the
name and current phase of any utility that is running. (You cannot restart a
utility at the recovery site that was interrupted at the disaster site. To
terminate a utility at the recovery site that was interrupted at the disaster
site, you must use the TERM UTILITY command.)
d. Run the DIAGNOSE utility with the DISPLAY SYSUTIL option. The output
consists of information about each active utility, including the table space
name (in most cases). This is the only way to correlate the object name
with the utility. Message DSNU866I gives information about the utility,
and DSNU867I gives the database and table space name in USUDBNAM
and USUSPNAM, respectively.
e. Use the TERM UTILITY command to terminate any utilities that are in
progress on catalog or directory table spaces.
f. Recover the rest of the catalog and directory objects, starting with DBD01,
in the order shown in the description of the RECOVER utility.
16. Define and initialize the work file database:
a. Define temporary work files. Use installation job DSNTIJTM as a model.
b. Issue the command START DATABASE(work-file-database) to start the work file
database.
17. Use any method that you want to verify the integrity of the Db2 catalog and
directory. Use the catalog queries in member DSNTESQ of data set
DSN1110.SDSNSAMP after the work file database is defined and initialized.
18. If you use data definition control support, recover the objects in the data
definition control support database.
19. If you use the resource limit facility, recover the objects in the resource limit
control facility database.
20. Modify DSNZPxxx to restart all databases on each member of the data sharing
group:
a. Run the DSNTINST CLIST in UPDATE mode. For more information, see
Tailoring Db2 11 installation and migration jobs with the CLIST (Db2
Installation and Migration).
b. From panel DSNTIPB, select DATABASES TO START AUTOMATICALLY.
Panel DSNTIPS opens. Type RESTART in the first field and ALL in the
second field, and press Enter. You are returned to DSNTIPB.
c. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST
started in step 4 on page 599).
21. Stop Db2.
22. Start Db2.
23. Make a full image copy of the catalog and directory.
24. Recover user table spaces and index spaces. If utilities were running on any
table spaces or index spaces, see “What to do about utilities that were in
progress at time of failure” on page 606. You cannot restart a utility at the
What to do next
Determine what to do about any utilities that were in progress at the time of
failure.
Related concepts
Preparations for disaster recovery
What to do about utilities that were in progress at time of failure
Recovering data (Db2 Data Sharing Planning and Administration)
Related tasks
Migration step 1: Actions to complete before migration (Db2 Installation
and Migration)
Recovering catalog and directory objects (Db2 Utilities)
Related reference
DSN1LOGP (Db2 Utilities)
You might need to take additional steps if any utility jobs were running after the
last time that the log was offloaded before the disaster.
After restarting Db2, only certain utilities need to be terminated with the TERM
UTILITY command.
Allowing the RECOVER utility to reset pending states is preferable. However, you
might occasionally need to use the REPAIR utility to reset them. Do not start the
table space with ACCESS(FORCE) because FORCE resets any page set exception
conditions described in “Database page set controls.”
To avoid extra loss of data in a future disaster situation, run the QUIESCE
utility on table spaces before invoking the LOAD utility. This enables you
to recover a table space by using the TOLOGPOINT option instead of
TOCOPY.
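For example, with a hypothetical table space DSN8D11A.DSN8S11E, you might
establish the quiesce point and later recover to it with utility control
statements such as the following:
QUIESCE TABLESPACE DSN8D11A.DSN8S11E
RECOVER TABLESPACE DSN8D11A.DSN8S11E TOLOGPOINT X'00000551BE7D'
The table space name and the log point are illustrative only; as the
TOLOGPOINT value, use the RBA or LRSN that the QUIESCE utility reports in its
output messages.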
REORG
For a user table space, find the options that you specified in the following
table, and perform the specified actions.
If the RELOAD phase of the REORG job on that table space had not
completed when the disaster occurred, recover the table space to the
current time. Because REORG does not generate any log records prior
to the RELOAD phase for catalog and directory objects, a recovery to
the current time restores the data to the state that it was in before the
REORG job. If the RELOAD phase completed, perform the following
actions:
a. Run the DSN1LOGP utility against the archive log data sets from
the disaster site.
b. Find the begin-UR log record for the REORG job that failed in the
DSN1LOGP output.
c. Run the RECOVER utility with the TOLOGPOINT option on the
table space that was being reorganized. Use the URID of the
begin-UR record as the TOLOGPOINT value.
3. Recover or rebuild all indexes.
If you have image copies from immediately before the REORG job failed,
run the RECOVER utility with the TOCOPY option to recover the catalog
and directory, in the correct order.
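As a sketch, after you locate the begin-UR record for the failed REORG job in
the DSN1LOGP output (for example, by running DSN1LOGP with SUMMARY(ONLY)), the
recovery in step c might look like the following statement. The table space
name and the URID value are hypothetical:
RECOVER TABLESPACE DSN8D11A.DSN8S11P TOLOGPOINT X'00000551BE7D'
Here X'00000551BE7D' stands for the URID of the begin-UR log record.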
Related tasks
Recovering catalog and directory objects (Db2 Utilities)
Using a tracker site for disaster recovery is somewhat similar to other
data-mirroring methods. A tracker site is a separate Db2 subsystem or data
sharing group that keeps shadow copies of the data at your primary site. From
the primary site, you transfer the BSDS and the archive logs, and the tracker
site runs periodic LOGONLY recoveries to keep the shadow data up-to-date. If a
disaster occurs at the primary site, the tracker site becomes the takeover site.
Because the tracker site has been shadowing the activity on the primary site, you
do not need to constantly ship image copies; the takeover time for the tracker site
might be faster because Db2 recovery does not need to use image copies.
Because the tracker site must use only the primary site logs for recovery, you must
not update the catalog and directory or the data at the tracker site. The Db2
subsystem at the tracker site disallows updates.
v The following SQL statements are not allowed at a tracker site:
– GRANT or REVOKE
– DROP, ALTER, or CREATE
– UPDATE, INSERT, or DELETE
Procedure
What to do next
Important: Do not attempt to start the tracker site when you are setting it up. You
must follow the procedure described in “Establishing a recovery cycle by using
RESTORE SYSTEM LOGONLY” on page 611.
Related reference
BACKUP SYSTEM (Db2 Utilities)
You need to migrate or convert the tracker site after you perform any of the
following operations on the production site:
v Migrate from DB2 10 new-function mode to Db2 11 conversion mode
v Convert from Db2 11 conversion mode to Db2 11 enabling-new-function mode
v Convert from Db2 11 enabling-new-function mode to Db2 11 new-function mode
v Revert from Db2 11 new-function mode to Db2 11 conversion mode
You do not need to perform fallback of the tracker site after you perform fallback
of a production site from Db2 11 conversion mode to DB2 10 new-function mode,
unless the fallback process restores the catalog and directory to a point in time
prior to the migration to Db2 11.
Procedure
To migrate or convert the tracker site to Db2 11, use one of the following
procedures:
v If you use RESTORE SYSTEM for recovery cycles, follow the procedure in
Establishing a recovery cycle by using RESTORE SYSTEM LOGONLY.
v If you use the RECOVER utility for recovery cycles, follow these steps.
1. If the tracker site is a data sharing group, issue the z/OS SETXCF FORCE
command to delete the shared communications area (SCA) structure.
2. Recover the BSDS and active logs by using copies of the production site's
BSDS and active logs.
3. Create a conditional restart control record (CRCR) in the BSDS by using the
change log inventory utility.
4. Start Db2 on the tracker site. The subsystem parameter module for starting
Db2 needs to include TRKRSITE=YES.
5. Recover the DSNDB01.SYSUTILX, DSNDB01.DBD01, and
DSNDB01.SYSDBDXA table spaces, and rebuild their indexes.
6. Stop Db2.
7. Perform steps 1, 2, and 3 again.
8. Start Db2 on the tracker site with the TRKRSITE=YES option again, to cause
Db2 to read the new database descriptor information from the
DSNDB01.DBD01 table space.
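The conditional restart control record in step 3 might be created with a
change log inventory (DSNJU003) control statement like the following, where
nnnnnnnnn000 is a placeholder for the end RBA of the latest production archive
log plus 1:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=NO,BACKOUT=NO
For a data sharing group, specify ENDLRSN instead of ENDRBA, and use the same
ENDLRSN value on every member.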
Full image copies of all the data at the primary site must be available at the tracker
site.
Using the LOGONLY option of the RESTORE SYSTEM utility enables you to
periodically apply the active log, archive logs, and the BSDS from the primary site
at the tracker site.
Procedure
To establish a recovery cycle at your tracker site by using the RESTORE SYSTEM
utility:
1. While your primary site continues its usual workload, send a copy of the
primary site active log, archive logs, and BSDS to the tracker site. Send full
image copies for the following objects:
v Table spaces or partitions that are reorganized, loaded, or repaired with the
LOG NO option after the latest recovery cycle
v Objects that, after the latest recovery cycle, have been recovered to a point
in time
3. Use the change log inventory utility (DSNJU003) with the following
CRESTART control statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnn000 equals the RBA at which the latest
archive log ends, plus 1. Do not specify the RBA at which the archive log
begins, because you cannot cold start or skip logs in tracker mode.
Data sharing
If you are recovering a data sharing group, you must use the
following CRESTART control statement on all members of the data
sharing group. The ENDLRSN value must be the same for all
members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
Procedure
v Register the primary site active log in the new BSDS by using the change
log inventory utility (DSNJU003).
3. Use the change log inventory utility (DSNJU003) with the following
CRESTART control statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=NO,BACKOUT=NO
In this control statement, nnnnnnnnn000 equals the value of the ENDRBA of
the latest archive log plus 1. Do not specify STARTRBA because you cannot
cold start or skip logs in a tracker system.
Data sharing
If you are recovering a data sharing group, you must use the
following CRESTART control statement on all members of the data
sharing group. The ENDLRSN value must be the same for all
members.
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
The ENDLRSN or ENDRBA value indicates the end log point for data
recovery and for truncating the archive log. With ENDLRSN, the missing log
records between the lowest and highest ENDLRSN values for all the members
are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all Db2 coupling facility
structures before restarting the tracker members.
5. At the tracker site, restart Db2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
6. At the tracker site, submit RECOVER utility jobs to recover database objects.
Run the RECOVER utility with the LOGONLY option on all database objects
that do not require recovery from an image copy.
You must recover database objects as the following procedure specifies:
a. Restore the full image copy or DSN1COPY of SYSUTILX.
If you are doing a LOGONLY recovery on SYSUTILX from a previous
DSN1COPY backup, make another DSN1COPY copy of that table space
after the LOGONLY recovery is complete and before recovering any other
catalog or directory objects.
After you recover SYSUTILX and either recover or rebuild its indexes, and
before you recover other system and user table spaces, determine what
utilities were running at the primary site.
b. Recover the catalog and directory in the correct order.
If you have user-defined catalog indexes, rebuilding them is optional until
the tracker Db2 site becomes the takeover Db2 site. (You might want to
rebuild them sooner if you require them for catalog query performance.)
However, if you are recovering user-defined catalog indexes, do the
recovery in this step.
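As a sketch, a LOGONLY recovery of the first directory object in this
procedure might use the following RECOVER control statement:
RECOVER TABLESPACE DSNDB01.SYSUTILX LOGONLY
The LOGONLY keyword tells RECOVER to skip restoring an image copy and to
apply only the shipped log records to the existing shadow data sets.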
If an entire volume is corrupted and you are using Db2 storage groups, you cannot
use the ALTER STOGROUP statement to remove the corrupted volume and add
another. (This is possible, however, for a non-tracker system.) Instead, you must
remove the corrupted volume and re-initialize another volume with the same
volume serial number before you invoke the RECOVER utility for all table spaces
and indexes on that volume.
Procedure
Results
When restarting a data sharing group, the first member that starts during a
recovery cycle puts the ENDLRSN value in the shared communications area (SCA)
of the coupling facility. If an SCA failure occurs during a recovery cycle, you must
go through the recovery cycle again, using the same ENDLRSN value for your
conditional restart.
Procedure
One way that you can make the tracker site be the takeover site is by using the
RESTORE SYSTEM utility with the LOGONLY option in the recovery cycles at the
tracker site.
Procedure
To make the tracker site be the takeover site by using the RESTORE SYSTEM
utility with the LOGONLY option:
1. If log data for a recovery cycle is en route or is available but has not yet been
used in a recovery cycle, perform the procedure in “Establishing a recovery
cycle by using RESTORE SYSTEM LOGONLY” on page 611.
2. Ensure that the TRKSITE NO subsystem parameter is specified.
3. For scenarios other than data sharing, continue with step 4.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
4. Start Db2 at the same RBA or ENDLRSN that you used in the most recent
tracker site recovery cycle. Specify FORWARD=YES and BACKOUT=YES in the
CRESTART statement; this takes care of uncommitted work.
5. Restart the objects that are in GRECP or LPL status by issuing the START
DATABASE(*) SPACENAM(*) command.
6. If you used the DSN1COPY utility to create a copy of SYSUTILX in the last
recovery cycle, use DSN1COPY to restore that copy.
7. Terminate any in-progress utilities by using the following procedure:
a. Enter the DISPLAY UTILITY(*) command.
b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of
objects on which utilities are being run.
c. Terminate in-progress utilities in the correct order by using the TERM
UTILITY(*) command.
8. Rebuild indexes, including IBM and user-defined indexes on the Db2 catalog
and user-defined indexes on table spaces.
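The conditional restart in step 4 might be sketched with the following change
log inventory (DSNJU003) control statement, where nnnnnnnnnnnn is a
placeholder for the ENDLRSN value from your most recent tracker site recovery
cycle:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
For a non-data-sharing subsystem, specify ENDRBA with the equivalent RBA
value instead of ENDLRSN.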
Related tasks
Restoring data from image copies and archive logs
Recovering at a tracker site that uses the RECOVER utility
One way that you can make the tracker site be the takeover site is by using the
RECOVER utility in the recovery cycles at your tracker site.
Procedure
To make the tracker site be the takeover site by using the RECOVER utility:
1. Restore the BSDS, and register the archive log from the last archive log that you
received from the primary site.
2. For environments that do not use data sharing, continue with step 3.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
3. Ensure that the DEFER ALL and TRKSITE NO subsystem parameters are
specified.
4. Take the appropriate action, which depends on whether you received more logs
from the primary site. If this is a non-data-sharing Db2 subsystem, the log
truncation point varies depending on whether you have received more logs
from the primary site since the last recovery cycle:
v If you did not receive more logs from the primary site:
Start Db2 using the same ENDRBA that you used on the last tracker cycle.
Specify FORWARD=YES and BACKOUT=YES; this takes care of
uncommitted work. If you have fully recovered the objects during the
previous cycle, they are current except for any objects that had outstanding
units of recovery during restart. Because the previous cycle specified NO for
both FORWARD and BACKOUT and you have now specified YES, affected
data sets are placed in the LPL. Restart the objects that are in LPL status by
using the following command:
START DATABASE(*) SPACENAM(*)
After you issue the command, all table spaces and indexes that were
previously recovered are now current. Remember to rebuild any indexes that
were not recovered during the previous tracker cycle, including user-defined
indexes on the Db2 catalog.
v If you received more logs from the primary site:
Start Db2 using the truncated RBA nnnnnnnnn000, which equals the value of
the ENDRBA of the latest archive log plus 1. Specify FORWARD=YES and
BACKOUT=YES. Run your recoveries as you did during recovery cycles.
Data sharing
You must restart every member of the data sharing group; use the
following CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
In this statement, nnnnnnnnnnnn is the LRSN of the last log record that
is to be used during restart. Specify one of the following values for the
ENDLRSN:
v If you receive the ENDLRSN from the output of the print log map
utility (DSNJU004) or from message DSNJ003I in the console logs, use
ENDLRSN - 1 as the input to the conditional restart.
v If you receive the ENDLRSN from the output of the DSN1LOGP
utility (DSN1213I message), use the displayed value.
The ENDLRSN or ENDRBA value indicates the end log point for data
recovery and for truncating the archive log. With ENDLRSN, the
missing log records between the lowest and highest ENDLRSN values
for all the members are applied during the next recovery cycle.
The takeover Db2 sites must specify conditional restart with a common
ENDLRSN value to allow all remote members to logically truncate the
logs at a consistent point.
5. As described for a tracker recovery cycle, recover SYSUTILX from an image
copy from the primary site, or from a previous DSN1COPY copy that was
taken at the tracker site.
6. Terminate any in-progress utilities by using the following procedure:
a. Enter the command DISPLAY UTILITY(*).
b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of
objects on which utilities are being run.
c. Terminate in-progress utilities by using the command TERM UTILITY(*).
Follow the appropriate procedure for recovering from a disaster by using data
mirroring.
To use data mirroring for disaster recovery, you must mirror data from your local
site with a method that does not reproduce a rolling disaster at your recovery site.
To recover a Db2 subsystem and data with data integrity, you must use volumes
that end at a consistent point in time for each Db2 subsystem or data sharing
group. Mirroring a rolling disaster causes volumes at your recovery site to end
over a span of time rather than at one single point.
The following figure shows how a rolling disaster can cause data to become
inconsistent between two subsystems.
[Figure: primary and secondary database devices during a rolling disaster; a
disk at the primary site fails at 12:00.]
Example: In a rolling disaster, the following events at the primary site cause data
inconsistency at your recovery site. This data inconsistency example follows the
same scenario that the preceding figure depicts.
1. Some time prior to 12:00: A table space is updated in the buffer pool.
2. 12:00: The log record is written to disk on logical storage subsystem 1.
3. 12:01: Logical storage subsystem 2 fails.
4. 12:02: The update to the table space is externalized to logical storage subsystem
2 but is not written because subsystem 2 failed.
5. 12:03: The log record that indicates that the table space update was made is
written to disk on logical storage subsystem 1.
6. 12:03: Logical storage subsystem 1 fails.
Because the logical storage subsystems do not fail at the same point in time, they
contain inconsistent data. In this scenario, the log indicates that the update is
applied to the table space, but the update is not applied to the data volume that
holds this table space.
Important: Any disaster recovery solution that uses data mirroring must guarantee
that all volumes at the recovery site contain data for the same point in time.
A consistency group, which is a collection of related data, can span logical storage
subsystems and disk subsystems. For Db2 specifically, a consistency group contains
an entire Db2 subsystem or an entire Db2 data sharing group.
Additionally, all objects within a consistency group must represent the same point
in time in at least one of the following situations:
v At the time of a backup
v After a normal Db2 restart
When a rolling disaster strikes your primary site, consistency groups guarantee
that all volumes at the recovery site contain data for the same point in time. In a
data mirroring environment, you must perform both of the following actions for
each consistency group that you maintain:
v Mirror data to the secondary volumes in the same sequence that Db2 writes data
to the primary volumes.
In many processing situations, Db2 must complete one write operation before it
begins another write operation on a different disk group or a different storage
server. A write operation that depends on a previous write operation is called a
dependent write. Do not mirror a dependent write if you have not mirrored the
write operation on which the dependent write depends. If you mirror data out
of sequence, your recovery site will contain inconsistent data that you cannot
use for disaster recovery.
v Temporarily suspend and queue write operations to create a group point of
consistency when an error occurs between any pair of primary and secondary
volumes.
When an error occurs that prevents the update of a secondary volume in a
single-volume pair, this error might mark the beginning of a rolling disaster. To
prevent your secondary site from mirroring a rolling disaster, you must suspend
and queue data mirroring by taking the following steps after a write error
between any pairs:
1. Suspend and queue all write operations in the volume pair that experiences
a write error.
2. Invoke automation that temporarily suspends and queues data mirroring to
all your secondary volumes.
3. Save data at the secondary site at a point of consistency.
4. If a rolling disaster does not strike your primary site, resume normal data
mirroring after some amount of time that you define. If a rolling disaster
does strike your primary site, follow the recovery procedure in Recovering in
a data mirroring environment.
About this task
This procedure applies to all Db2 data mirroring scenarios except those that use
Extended Remote Copy (XRC). This general procedure is valid only if you have
established and maintained consistency groups before the disaster struck the
primary site. If you use data mirroring to recover, you must recover your entire
Db2 subsystem or data sharing group with data mirroring.
You do not need to restore Db2 image copies or apply Db2 logs to bring Db2 data
to the current point in time when you use data mirroring. However, you might
need image copies at the recovery site if the LOAD, UNLOAD, or RECOVER
utility was active at the time of the disaster.
Procedure
If utilities are pending, record the output from this command, and continue to
the next step. You cannot restart utilities at a recovery site. You will terminate
these utilities in step 8. If no utilities are pending, continue to step 9.
7. Use the DIAGNOSE utility to access the SYSUTIL directory table. You cannot
access this directory table by using normal SQL statements (as you can with
most other directory tables). You can access SYSUTIL only by using the
DIAGNOSE utility, which is normally intended to be used under the direction
of IBM Software Support.
Use the following control statement to run the DIAGNOSE utility job:
DIAGNOSE DISPLAY SYSUTIL
To stop the utility, issue this control statement:
END DIAGNOSE
Examine the output. Record the phase in which each pending utility was
interrupted, and record the object on which each utility was operating.
8. Terminate all pending utilities with the following command:
-TERM UTILITY(*)
9. For environments that do not use data sharing, continue to step 10.
Data sharing
For data sharing groups, use the following START DATABASE command
on each database that contains objects that are in LPL status:
-START DATABASE(database) SPACENAM(*)
When you use the START DATABASE command to recover objects, you
do not need to provide Db2 with image copies.
v If the object was a target of a LOAD utility control statement that specified
SHRLEVEL NONE and the LOAD job was interrupted during or after the
RELOAD phase, recover this object to a point in time that is before this
utility ran.
v Otherwise, recover the object to a point in time that is before the LOAD job
ran.
12. For each object that the REORG utility places in a restrictive status, take one
of the following actions:
v When the object was a target of a REORG utility control statement that
specified SHRLEVEL NONE:
– If the REORG job was interrupted before the RELOAD phase, no further
action is required. This object contains valid data, and the indexes on this
object are valid.
– If the REORG job was interrupted during the RELOAD phase, recover
this object to a point in time that is before this utility ran.
– If the REORG job was interrupted after the RELOAD phase, rebuild the
indexes on the object.
v When the object was a target of a REORG utility control statement that does
not specify SHRLEVEL NONE:
– If the REORG job was interrupted before the SWITCH phase, no further
action is required. This object contains valid data, and the indexes on this
object are valid.
– If the REORG job was interrupted during the SWITCH phase, no further
action is required. This object contains valid data, and the indexes on this
object are valid.
– If the REORG job was interrupted after the SWITCH phase, you might
need to rebuild non-partitioned secondary indexes.
For example, consider that the source volumes in the SMS storage groups for your
database or log copy pools are mirrored, or that the target volumes in the SMS
backup storage groups for your copy pools are mirrored. You can use IBM Remote
Pair FlashCopy (Preserve Mirror) for Peer-to-Peer Remote Copy (PPRC). Also, you
can allow FlashCopy to PPRC primary volumes. However, you might need to set
or override the DFSMShsm default settings for the BACKUP SYSTEM, RESTORE
SYSTEM, and RECOVER utilities.
Procedure
This procedure assumes that you are familiar with basic use of XRC.
Procedure
The recovery scenarios for indoubt threads are based on a sample environment,
which this topic describes. System programmer, operator, and database
administrator actions are indicated for the examples as appropriate. In these
descriptions, the term “administrator” refers to the database administrator (DBA) if
not otherwise specified.
Configuration
The configuration includes four systems at three geographic locations:
Seattle (SEA), San Jose (SJ), and Los Angeles (LA). The system descriptions
are as follows:
v Db2 subsystem at Seattle, Location name = IBMSEADB20001, Network
name = IBM.SEADB21
v Db2 subsystem at San Jose, Location name = IBMSJ0DB20001, Network
name = IBM.SJDB21
v Db2 subsystem at Los Angeles, Location name = IBMLA0DB20001,
Network name = IBM.LADB21
v IMS subsystem at Seattle, Connection name = SEAIMS01
Applications
The following IMS and TSO applications run at Seattle and access both
local and remote data.
v IMS application, IMSAPP01, at Seattle, accesses local data and remote
data by DRDA access at San Jose, which accesses remote data on behalf
of Seattle by Db2 private protocol access at Los Angeles.
v TSO application, TSOAPP01, at Seattle, accesses data by DRDA access at
San Jose and at Los Angeles.
Threads
The following threads are described and keyed to Figure 56 on page 627.
Database access threads (DBAT) access data on behalf of a thread (either
allied or DBAT) at a remote requester.
v Allied IMS thread A at Seattle accesses data at San Jose by DRDA access.
– DBAT at San Jose accesses data for Seattle by DRDA access 1 and
requests data at Los Angeles by Db2 private protocol access 2.
– DBAT at Los Angeles accesses data for San Jose by Db2 private
protocol access 2.
v Allied TSO thread B at Seattle accesses local data and remote data at San
Jose and Los Angeles, by DRDA access.
– DBAT at San Jose accesses data for Seattle by DRDA access 3.
– DBAT at Los Angeles accesses data for Seattle by DRDA access 4.
[Figure: Db2 at SEA (IBMSEADB20001) runs allied thread A from IMS
(CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01, NID=A5, LUWID=15, TOKEN=1) and
allied thread B from TSO (CONNID=BATCH, CORRID=abc, PLAN=TSOAPP01, LUWID=16,
TOKEN=2). Db2 at SJ (IBMSJ0DB20001) runs DBAT 1 (CONNID=SEAIMS01, CORRID=xyz,
PLAN=IMSAPP01, LUWID=15, TOKEN=8) and DBAT 3 (CONNID=BATCH, CORRID=abc,
PLAN=TSOAPP01, LUWID=16, TOKEN=6). Db2 at LA (IBMLA0DB20001) runs DBAT 2
(CONNID=SERVER, CORRID=xyz, PLAN=IMSAPP01, LUWID=15, TOKEN=4) and DBAT 4
(CONNID=BATCH, CORRID=abc, PLAN=TSOAPP01, LUWID=16, TOKEN=5).]
Figure 56. Resolution of indoubt threads. Results of issuing DISPLAY THREAD TYPE(ACTIVE) at
each Db2 subsystem.
Read one or more of the scenarios to learn how best to handle problems with
indoubt threads in your own environment.
Symptoms
A communication failure occurred between Seattle (SEA) and Los Angeles (LA)
after the database access thread (DBAT) at LA completed phase 1 of commit
processing. At SEA, the TSO thread B (LUWID=16, TOKEN=2) cannot complete
the commit with DBAT 4 at LA.
At SEA, NetView alert A006 is generated, and message DSNL406 is displayed,
indicating that a thread at LA is indoubt because of a communication failure. At
LA, alert A006 is generated, and message DSNL405 is displayed, to indicate that a
thread is in an indoubt state because of a communication failure with SEA.
Causes
A communication failure caused the indoubt thread.
Environment
The following figure illustrates the environment for this scenario.
[Figure: same thread configuration as Figure 56.]
Figure 57. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
At SEA, an IFCID 209 trace record is written. After the alert is generated and the
message is displayed, the thread completes the commit, which includes DBAT 3 at
SJ. Concurrently, the thread is added to the list of threads for which the SEA
Db2 subsystem has an indoubt resolution responsibility. The thread shows up in a
DISPLAY THREAD report for indoubt threads. The thread also shows up in a DISPLAY
THREAD report for active threads until the application terminates.
The TSO application is informed that the commit succeeded. If the application
continues and processes another SQL request, it is rejected with an SQL code to
indicate that it must roll back before any more SQL requests can be processed. This
is to ensure that the application does not proceed with an assumption based on
data that is retrieved from LA, or with the expectation that cursor positioning at
LA is still intact.
The Db2 subsystems, at both SEA and LA, periodically attempt to reconnect and
automatically resolve the indoubt thread. If the communication failure affects only
the session that is being used by the TSO application, and other sessions are
available, automatic resolution occurs in a relatively short time. At this time,
message DSNL407 is displayed by both Db2 subsystems.
Symptoms
In this scenario, an indoubt thread at Los Angeles (LA) holds database resources
that are needed by other applications. The organization makes a heuristic decision
about whether to commit or abort an indoubt thread.
Environment
The following figure illustrates the environment for this scenario.
[Figure: same thread configuration as Figure 56.]
Figure 58. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
Symptoms
When IMS is cold started and later reconnects with the SEA Db2 subsystem, IMS is
not able to resolve the indoubt thread with Db2. Message DSNM004I is displayed
at the IMS master terminal.
Environment
The following figure illustrates the environment for this scenario.
[Figure: same thread configuration as Figure 56.]
Figure 59. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
The abnormal termination of IMS has left one allied thread A at the SEA Db2
subsystem indoubt. This is the thread with LUWID=15. Because the SEA Db2
subsystem still has effective communication with the Db2 subsystem at SJ, the
LUWID=15 DBAT 1 at that subsystem is waiting for the SEA Db2 subsystem to
communicate the final decision and is not aware that IMS has failed. The
LUWID=15 DBAT 2 at LA, which is connected to SJ, is likewise waiting for SJ to
communicate the final decision. SJ cannot communicate the decision until SEA
communicates it to SJ.
v The connection remains active.
v IMS applications can still access Db2 databases.
v Some Db2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues can start to back up.
If the IMS queues fill to capacity, IMS terminates. Therefore, users must be aware
of this potential difficulty and must monitor IMS until the indoubt units of work
are fully resolved.
Symptoms
The Db2 subsystem at SEA is started with a conditional restart record in the BSDS
to indicate a cold start:
v When the IMS subsystem reconnects, it attempts to resolve the indoubt thread
that is identified in IMS as NID=A5. IMS has a resource recovery element (RRE)
Causes
An abnormal termination of the SEA Db2 subsystem caused the outage.
Environment
The following figure illustrates the environment for this scenario.
Db2 at SJ
IBMSJ0DB20001
DBAT 1
CONNID=SEAINS01
CORRID=xyz
PLAN=IMSAPP01
LUWID=15,TOKEN=8
Db2 at SEA
IBMSEADB20001 DBAT 3
CONNID=BATCH
Allied Thread A CORRID=abc
CONNID=SEAIMS01 PLAN=TSOAPP01
CORRID=xyz LUWID=16,TOKEN=6
IMS
PLAN=IMSAPP01
NID=A5
LUWID=15,TOKEN=1 Db2 at LA
IBMLA0DB20001
Allied Thread B
CONNID=BATCH DBAT 2
TSO CORRID=abc CONNID=SERVER
PLAN=TSOAPP01 CORRID=xyz
LUWID=16,TOKEN=2 PLAN=IMSAPP01
LUWID=15,TOKEN=4
DBAT 4
CONNID=BATCH
CORRID=abc
PLAN=TSOAPP01
LUWID=16,TOKEN=5
Figure 60. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
Chapter 13. Recovering from different Db2 for z/OS problems 633
The abnormal termination of the SEA Db2 subsystem has left the two DBATs at SJ
(1 and 3) and the LUWID=16 DBAT at LA (4) indoubt. The LUWID=15 DBAT at LA (2),
connected to SJ, is waiting for the SJ Db2 subsystem to communicate the final
decision.
The IMS subsystem at SEA is operational and has the responsibility of resolving
indoubt units with the SEA Db2 subsystem.
The Db2 subsystems at both SJ and LA accept the cold start connection from SEA.
Processing continues, waiting for the heuristic decision to resolve the indoubt
threads.
CICS. Knowing the NID enables correlation to the DSN3005 message, or to the
234 trace event, either of which provides the correct decision.
v If an incomplete entry is found, the NID might have been reported by
DSN1LOGP. If it was reported, use it as previously discussed.
v Determine if any of the following conditions exists:
– No NID is found.
– The SEA Db2 subsystem has not been started.
– Reconnecting to IMS has not occurred.
If any of these conditions exists, the administrator must use the correlation-id that
is used by IMS to correlate the IMS logical unit of work to the Db2 thread in a
search of the IMS recovery log. The SEA Db2 site provided this value to the SJ
Db2 subsystem when distributing the thread to SJ. The SJ Db2 site displays this
value in the report that is generated by the DISPLAY THREAD TYPE(INDOUBT)
command.
v For IMS, the correlation-id is:
pst#.psbname
v In CICS, the correlation-id consists of four parts:
Byte 1 - Connection type - G=Group, P=Pool
Byte 2 - Thread type - T=transaction, G=Group, C=Command
Bytes 3-4 - Thread number
Bytes 5-8 - Transaction-id
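As a sketch of how the four parts line up, the following Python helper splits an 8-character CICS correlation-id by the positions listed above. The helper and the sample value are hypothetical; only the field layout comes from the list.

```python
def parse_cics_correlation_id(corr_id: str) -> dict:
    """Split an 8-character CICS correlation-id into its four parts.

    Field positions follow the list above:
      byte 1    - connection type (G=Group, P=Pool)
      byte 2    - thread type (T=transaction, G=Group, C=Command)
      bytes 3-4 - thread number
      bytes 5-8 - transaction-id
    """
    if len(corr_id) != 8:
        raise ValueError("expected an 8-character correlation-id")
    return {
        "connection_type": corr_id[0],
        "thread_type": corr_id[1],
        "thread_number": corr_id[2:4],
        "transaction_id": corr_id[4:8],
    }

# Hypothetical sample value: pool thread 01 for transaction PAY1
parts = parse_cics_correlation_id("PT01PAY1")
```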
Related concepts
Scenario: What happens when the wrong Db2 subsystem is cold started
(Figure content: the allied threads at SEA and the DBATs at SJ and LA, labeled with their CONNID, CORRID, PLAN, and LUWID/TOKEN values.)
Figure 61. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
If the Db2 subsystem at SJ is cold started instead of the Db2 at SEA, the LA Db2
subsystem has the LUWID=15 thread (2) indoubt. The administrator can see that
this thread did not originate at SJ, but that it did originate at SEA. To determine
the commit or abort action, the LA administrator requests that DISPLAY THREAD
TYPE(INDOUBT) be issued at the SEA Db2 subsystem, specifying LUWID=15. IMS
does not have any indoubt status for this thread, because it completed the
two-phase commit process with the SEA Db2 subsystem.
The Db2 subsystem at SEA tells the application that the commit succeeded.
Scenario: Correcting damage from an incorrect heuristic
decision about an indoubt thread
If an incorrect heuristic decision is made regarding an indoubt thread, an
organization can recover from this incorrect decision.
Symptoms
When the Db2 subsystem at SEA reconnects with the Db2 at LA, indoubt
resolution occurs for LUWID=16. Both systems detect heuristic damage, and both
generate alert A004; each writes an IFCID 207 trace record. Message DSNL400 is
displayed at LA, and message DSNL403 is displayed at SEA.
Causes
This scenario is based on the conditions described in “Scenario: Recovering from
communication failure” on page 627.
Environment
The following figure illustrates the environment for this scenario.
(Figure content: the allied threads at SEA and the DBATs at SJ and LA, labeled with their CONNID, CORRID, PLAN, and LUWID/TOKEN values.)
Figure 62. Resolution of indoubt threads. Scenarios for resolving problems with indoubt threads contains a detailed
description of the scenario depicted in this figure.
Chapter 14. Reading log records
Reading Db2 log records is useful for diagnostic and recovery purposes.
This information discusses three approaches to writing programs that read log
records.
PSPI
The three main types of log records are unit of recovery, checkpoint, and database
page set control records.
Each log record has a header that indicates its type, the Db2 subcomponent that
made the record, and, for unit-of-recovery records, the unit-of-recovery identifier.
The log records can be extracted and printed by the DSN1LOGP utility.
The log relative byte address and log record sequence number
| For the basic 6-byte RBA format, the Db2 log can contain up to 2^48 bytes. For the
| extended 10-byte RBA format, the Db2 log can contain up to 2^80 bytes. Each byte
| is addressable by its offset from the beginning of the log. That offset is known as
| its relative byte address (RBA).
A log record is identifiable by the RBA of the first byte of its header; that RBA is
called the relative byte address of the record. The record RBA is like a timestamp
because it uniquely identifies a record that starts at a particular point in the
continuing log.
In the data sharing environment, each member has its own log. The log record
sequence number (LRSN) identifies the log records of a data sharing member. The
LRSN might not be unique on a data sharing member. The LRSN is a hexadecimal
value derived from a store clock timestamp. Db2 uses the LRSN for recovery in the
data sharing environment.
PSPI
This section describes changes to the Db2 database, the effects of these changes,
and the log records that correspond to the changes.
PSPI
The redo information is required if the work is committed and later must be
recovered. The undo information is used to back out work that is not committed.
If the work is rolled back, the undo/redo record is used to remove the change. At
the same time that the change is removed, a new redo/undo record is created that
contains information, called compensation information, that is used if necessary to
reverse the change. For example, if a value of 3 is changed to 5, redo compensation
information changes it back to 3.
If the work must be recovered, Db2 scans the log forward and applies the redo
portions of log records and the redo portions of compensation records, without
keeping track of whether the unit of recovery was committed or rolled back. If the
unit of recovery had been rolled back, Db2 would have written compensation redo
log records to record the original undo action as a redo action. Using this
technique, the data can be completely restored by applying only redo log records
on a single forward pass of the log.
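The interplay of undo/redo records, compensation records, and the single forward redo pass can be modeled in a few lines. This is an illustrative sketch only: the record layout and function names are invented, and only the technique (logging compensation redo records so that recovery never needs the undo portions) follows the text.

```python
# Minimal model of undo/redo and compensation logging.

def apply_change(log, page, key, new_value):
    # An update writes an undo/redo record: undo holds the old value,
    # redo holds the new value.
    old = page.get(key)
    log.append({"type": "undo/redo", "key": key, "undo": old, "redo": new_value})
    page[key] = new_value

def rollback(log, page):
    # Backing out uncommitted work removes each change and, at the same
    # time, writes a compensation record whose *redo* part re-applies
    # the backout (e.g. a value changed 3 -> 5 is set back to 3).
    for rec in [r for r in reversed(log) if r["type"] == "undo/redo"]:
        page[rec["key"]] = rec["undo"]
        log.append({"type": "compensation", "key": rec["key"], "redo": rec["undo"]})

def recover(log):
    # Single forward pass: apply only the redo portions of ordinary
    # records and of compensation records, never tracking commit status.
    page = {}
    for rec in log:
        page[rec["key"]] = rec["redo"]
    return page

log, page = [], {"A": 3}
apply_change(log, page, "A", 5)   # 3 -> 5
rollback(log, page)               # compensation redo sets A back to 3
assert recover(log) == {"A": 3}
```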
Db2 also logs the creation and deletion of data sets. If the work is rolled back, the
operations are reversed. For example, if a table space is created using
Db2-managed data sets, Db2 creates a data set; if rollback is necessary, the data set
is deleted. If a table space using Db2-managed data sets is dropped, Db2 deletes
the data set when the work is committed, not immediately. If the work is rolled
back, Db2 does nothing.
PSPI
PSPI
DBET log records also register exception information that is not related to units of
recovery.
DBET log records register whether any database, table space, index space, or
partition is in an exception state. To list all objects in a database that are in an
exception state, use the command DISPLAY DATABASE(database-name) RESTRICT.
PSPI
Table 53 shows the log records for processing and rolling back an insertion.
Table 53. Log records written for rolling back an insertion
Type of record Information recorded
1. Begin_UR Beginning of the unit of recovery.
2. Undo/Redo for data Insertion of data. Includes the database ID (DBID), page set
ID, page number, internal record identifier, and the data
inserted.
3. Begin_Abort Beginning of the rollback process.
4. Compensation Redo/Undo Backing-out of data. Includes the database ID (DBID), page
set ID, page number, internal record ID (RID), and data to
undo the previous change.
5. End_Abort End of the unit of recovery, with rollback complete.
PSPI
PSPI
The log record identifies the RID, the operation (insert, delete, or update), and the
data. Depending on the data size and other variables, Db2 can write a single log
record with both undo and redo information, or it can write separate log records
for undo and redo.
The following table summarizes the information logged for data and index
changes.
Table 54. Information logged for database changes
Operation Information logged
Insert data The new row.
v On redo, the row is inserted with its original RID.
v On undo, the row is deleted and the RID is made available for
another row.
Delete data The deleted row.
v On redo, the RID is made available for another row.
v On undo, the row is inserted again with its former RID.
Update data (note 1) The old and new values of the changed data.
v On redo, the new data is replaced.
v On undo, the old data is replaced.
Insert index entry The new key value and the data RID.
Delete index entry The deleted key value and the data RID.
Add column The information about the column being added, if the table was
defined with DATA CAPTURE(CHANGES).
Alter column The information about the column being altered, if the table was
defined with DATA CAPTURE(CHANGES).
Roll back to a savepoint Information about the savepoint.
Modify table space Information about the table space version.
LOAD SHRLEVEL NONE RESUME YES
The database ID (DBID) and the page set ID (PSID) of the table space on which the operation was run.
LOAD SHRLEVEL NONE RESUME NO REPLACE
The database ID (DBID) and the page set ID (PSID) of the table space on which the operation was run.
REORG TABLESPACE DISCARD
The database ID (DBID) and the page set ID (PSID) of the table space on which the operation was run.
CHECK DATA DELETE YES
The database ID (DBID) and the page set ID (PSID) of the table space on which the operation was run.
Note:
1. If an update occurs to a table defined with DATA CAPTURE(CHANGES), the
entire before-image of the data row is logged.
PSPI
PSPI
At a checkpoint, Db2 logs its current status and registers the log RBA of the
checkpoint in the bootstrap data set (BSDS). At restart, Db2 uses the information in
the checkpoint records to reconstruct its state when it terminated.
Many log records can be written for a single checkpoint. Db2 can write one to
begin the checkpoint; others can then be written, followed by a record to end the
checkpoint. The following table summarizes the information logged.
Table 55. Contents of checkpoint log records
Type of log record Information logged
Begin_Checkpoint Marks the start of the summary information. All later records in
the checkpoint have type X'0100' (in the LRH).
PSPI
Related reference:
DSNTIPL: Active log data set parameters (Db2 Installation and Migration)
Database page set control records
PSPI
PSPI
PSPI
The physical output unit written to the active log data set is a control interval (CI)
of 4096 bytes (4 KB). Each CI contains one VSAM record.
PSPI
|
PSPI
|
| One physical record can contain several logical records, one or more logical records
| and part of another logical record, or only part of one logical record. The physical
| record must also contain 37 bytes of Db2 control information if the log record is in
| 10-byte format, or 21 bytes of Db2 control information if the log record is in
| six-byte format. The control information is called the log control interval definition
| (LCID).
(Figure: layout of a physical VSAM record whose last segment is the beginning of log record 4. The LCID at the end of the CI holds, for data sharing, the LRSN of the last log record in this CI; the offset of the last segment in this CI (the beginning of log record 4); the total length of the spanned record that ends in this CI (log record 1); and the total length of the spanned record that begins in this CI (log record 4).)
The term log record refers to a logical record, unless the term physical log record is
used. A part of a logical record that falls within one physical record is called a
segment.
Related reference:
The log control interval definition (LCID)
PSPI
The first segment of a log record must contain the header and some bytes of data.
If the current physical record has too little room for the minimum segment of a
new record, the remainder of the physical record is unused, and a new log record
is written in a new physical record.
The log record can span many VSAM CIs. For example, a minimum of nine CIs are
required to hold the maximum size log record of 32815 bytes. Only the first
segment of the record contains the entire LRH; later segments include only the first
two fields. When a specific log record is needed for recovery, all segments are
retrieved and presented together as if the record were stored continuously.
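The segmentation scheme above (a logical record split across several physical records, then presented together at recovery) can be sketched as follows. The sizes and the absence of any continuation header are deliberate simplifications, not the real CI layout.

```python
# Sketch of log record segmentation across fixed-size physical records
# (control intervals) and reassembly into one logical record.

CI_PAYLOAD = 16  # toy CI payload size; a real CI is 4096 bytes

def segment(record: bytes):
    """Split a logical record into segments that each fit in one CI."""
    return [record[i:i + CI_PAYLOAD] for i in range(0, len(record), CI_PAYLOAD)]

def reassemble(segments):
    """Present the segments together as if stored continuously."""
    return b"".join(segments)

rec = bytes(range(40))          # a logical record spanning 3 toy CIs
segs = segment(rec)
assert len(segs) == 3
assert reassemble(segs) == rec
```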
| Table 56. Contents of the log record header for 10-byte format
| Hex offset Length Information
| 00 4 Length of this record or segment
| 04 2 Length of any previous record or segment in this CI; 0 if
| this is the first entry in the CI.
| 06 1 Flags
| 07 1 Release identifier
| 08 1 Resource manager ID (RMID) of the Db2 component that
| created the log record
| 09 1 Flags
| 0A 16 Unit of recovery ID, if this record relates to a unit of
| recovery; otherwise, 0
| 1A 16 Log RBA of the previous log record, if this record relates to
| a unit of recovery; otherwise, 0
| 2A 1 Length of header
| 2B 1 Available
| 2C 2 Type of log record
| 2E 2 Subtype of the log record
| 30 12 Undo next LSN
| 3C 14 LRHTIME
| 4A 6 Available
|
| Table 57. Contents of the log record header for 6-byte format
Hex offset Length Information
00 2 Length of this record or segment
PSPI
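The offsets and lengths in Table 56 can be turned directly into a parser. The sketch below uses Python's struct module with the field sizes from the table (which sum to X'50' bytes); the field names and the synthetic sample header are informal illustrations, not Db2-supplied definitions.

```python
import struct

# Big-endian, unpadded layout per Table 56 (10-byte format LRH):
# length(4), prev length(2), flags(1), release(1), RMID(1), flags(1),
# UR ID(16), prev log RBA(16), header length(1), available(1),
# type(2), subtype(2), undo next LSN(12), LRHTIME(14), available(6)
LRH10 = struct.Struct(">IHBBBB16s16sBBHH12s14s6s")
assert LRH10.size == 0x50  # table ends at offset X'4A' + 6 bytes

def parse_lrh10(raw: bytes) -> dict:
    (length, prev_length, flags1, release, rmid, flags2,
     ur_id, prev_rba, header_len, _avail1, rec_type, subtype,
     undo_next_lsn, lrhtime, _avail2) = LRH10.unpack(raw[:LRH10.size])
    return {"length": length, "prev_length": prev_length, "rmid": rmid,
            "type": rec_type, "subtype": subtype}

# Synthetic header: record length X'60', first entry in the CI, type X'0600'
raw = LRH10.pack(0x60, 0, 0, 0, 0x14, 0, b"\x00" * 16, b"\x00" * 16,
                 0x50, 0, 0x0600, 0x0001, b"\x00" * 12, b"\x00" * 14, b"\x00" * 6)
hdr = parse_lrh10(raw)
assert hdr["length"] == 0x60 and hdr["type"] == 0x0600
```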
Related concepts:
Unit of recovery log records
Related reference:
Log record type codes
Log record subtype codes
|
PSPI
|
| The following tables describe the contents of the LCID. You can determine the
| LCID format by testing the first bit of the next-to-last byte. If that bit is 1, the
| LCID is in the 10-byte format. If the bit is 0, the LCID is in the 6-byte format.
| Table 58. Contents of the log control interval definition for 10 byte RBA and LRSN
| Hex offset Length Information
| 00 1 An indication of whether the CI contains free space: X'00' =
| Yes, X'FF' = No
| 01 2 Reserved
| 03 4 Total length of a spanned record that begins in this CI
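The format test described above can be written as a one-line check. This sketch assumes that "first bit" means the high-order (X'80') bit of the next-to-last byte of the CI; the function name is illustrative.

```python
# Determine the LCID format by testing the next-to-last byte of the CI.

def lcid_format(ci: bytes) -> int:
    """Return 10 if the LCID is in 10-byte format, 6 otherwise."""
    return 10 if ci[-2] & 0x80 else 6

ci = bytearray(4096)
assert lcid_format(bytes(ci)) == 6      # bit off: 6-byte format
ci[-2] |= 0x80
assert lcid_format(bytes(ci)) == 10     # bit on: 10-byte format
```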
Each recovery log record consists of two parts: a header, which describes the record,
and data. The following illustration shows the format schematically; the following
list describes each field.
(Figure: schematic of the log record header. The fields shown, with their lengths in bytes, are: length of this record or segment (4); length of previous record or segment (2); flag bytes (1 each); resource manager ID (1); unit of recovery ID (6); and LINK (6).)
PSPI
Related reference:
Log record type codes
Log record subtype codes
PSPI
A single record can contain multiple type codes that are combined. For example,
0600 is a combined UNDO/REDO record; F400 is a combination of four
Db2-assigned types plus a REDO. A diagnostic log record for the TRUNCATE
PSPI
Log record type 0004 (SYSCOPY utility) has log subtype codes that correspond to
the page set ID values of the table spaces that have their SYSCOPY records in the
log (SYSIBM.SYSUTILX, SYSIBM.SYSCOPY, DSNDB01.DBD01, and
DSNDB01.SYSDBDXA).
Log record type 0800 (quiesce) does not have subtype codes.
Some log record types (1000 - 8000 assigned by Db2) can have proprietary log
record subtype codes assigned.
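Combined type codes behave as bit masks that can be decoded independently. In the sketch below, the individual bit values are assumptions chosen only to be consistent with the examples in the text (X'0600' as a combined UNDO/REDO, and types X'1000' through X'8000' as Db2-assigned); consult the log record type code table for the actual values.

```python
# Illustrative decoding of a combined log record type code as a bit mask.
ASSUMED_TYPE_BITS = {
    0x0200: "UR undo",          # assumed: one half of combined X'0600'
    0x0400: "UR redo",          # assumed: the other half of X'0600'
    0x1000: "Db2-assigned 1",   # the text says X'1000'-X'8000' are Db2-assigned
    0x2000: "Db2-assigned 2",
    0x4000: "Db2-assigned 3",
    0x8000: "Db2-assigned 4",
}

def decode_type(code: int):
    """Return the set of type names whose bits are on in the code."""
    return {name for bit, name in ASSUMED_TYPE_BITS.items() if code & bit}

assert decode_type(0x0600) == {"UR undo", "UR redo"}
# X'F400': four Db2-assigned types plus a REDO
assert len(decode_type(0xF400)) == 5
```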
PSPI
Related reference:
DSN1LOGP (Db2 Utilities)
PSPI
The macros are contained in the data set library prefix SDSNMACS and are
documented by comments in the macros themselves.
Log record formats for the record types and subtypes are detailed in the mapping
macro DSNDQJ00. DSNDQJ00 provides the mapping of specific data change log
records, UR control log records, and page set control log records that you need to
interpret data changes by the UR. DSNDQJ00 also explains the content and usage
of the log records.
PSPI
Related reference:
Log record subtype codes
PSPI
You can write a program that uses IFI to capture log records while Db2 is running.
You can read the records asynchronously, by starting a trace that reads the log
records into a buffer and then issuing an IFI call to read those records out of the
buffer. Alternatively, you can read those log records synchronously, by using an IFI
call that returns those log records directly to your IFI program.
PSPI
Related concepts:
Programming for the instrumentation facility interface (IFI) (Db2 Performance)
Related tasks:
Requesting data synchronously from a monitor program (Db2 Performance)
Requesting data asynchronously from a monitor program (Db2 Performance)
Related reference:
READA (Db2 Performance)
READS (Db2 Performance)
Procedure
PSPI
To start a trace that captures Db2 log buffers, issue a command such as:
-START TRACE(P) CLASS(30) IFCID(126) DEST(OPX)
where:
v P signifies to start a Db2 performance trace. Any of the Db2 trace types can be
used.
v CLASS(30) is a user-defined trace class (31 and 32 are also user-defined classes).
v IFCID(126) activates Db2 log buffer recording.
v DEST(OPX) starts the trace to the next available Db2 online performance (OP)
buffer. The size of this OP buffer can be explicitly controlled by the BUFSIZE
keyword of the START TRACE command. Valid sizes range from 256 KB to 16
MB. The number must be evenly divisible by 4.
When the START TRACE command takes effect, from that point forward until Db2
terminates, Db2 begins writing 4-KB log buffer VSAM control intervals (CIs) to the
OP buffer as well as to the active log. As part of the IFI COMMAND invocation,
the application specifies an ECB to be posted and a buffer threshold. When the OP
buffer fills to that threshold, Db2 posts the ECB, and the application issues an IFI
READA request to obtain the contents of the OP buffer.
PSPI
PSPI
To retrieve the log control interval, your program must initialize certain fields in
the qualification area:
WQALLTYP
This is a 3-byte field in which you must specify CI (with a trailing blank),
which stands for “control interval”.
If you specify a range of log CIs, but some of those records have not yet been
written to the active log, Db2 returns as many log records as possible. You can find
the number of CIs returned in field QWT02R1N of the self-defining section of the
record.
PSPI
Related concepts:
Log capture routines
Db2 trace output (Db2 Performance)
Related reference:
Qualification fields for READS requests (Db2 Performance)
Procedure
IFCID 0306 must appear in the IFCID area. IFCID 0306 returns complete log
records. Multi-segmented control interval log records are combined for a complete
log record.
Generally, catalog and directory objects cannot be in group buffer pool
RECOVER-pending (GRECP) status when an IFCID 0306 request accesses the
compression dictionary. Only log entries for tables that are defined with DATA
CAPTURE CHANGES enabled are decompressed.
PSPI
Related tasks:
Reading specific log records (IFCID 0129)
|
PSPI
|
| The return area for monitor programs that issue IFCID 0306 requests must reside
| either in ECSA key 7 storage or in the 64-bit common key 7 storage area above the
| 2-GB bar. The IFCARA64 processing flag in the IFCA controls the location of the
| return area. If the return area resides in 64-bit common storage, the first eight bytes
| of the ECSA key 7 return area must contain a pointer to the location of the return
| area in the 64-bit common storage.
The IFI application program must set the eye-catcher to 'I306' at offset 4 in the
return area before making the IFCID 0306 call. An additional 60-byte area must be
included after the 4-byte length indicator and the 'I306' eye-catcher. This area is
used by Db2 between successive application calls and must not be modified by the
application.
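The layout rules above (a 4-byte length indicator, the 'I306' eye-catcher at offset 4, then a 60-byte area reserved for Db2 between calls) can be sketched as follows. The function name and the total size chosen are arbitrary illustrations; only the offsets come from the text.

```python
import struct

def init_i306_return_area(total_size: int = 65536) -> bytearray:
    """Initialize an IFCID 0306 return area per the rules above:
    a 4-byte length indicator, the 'I306' eye-catcher at offset 4,
    then a 60-byte area that Db2 uses between successive calls and
    that the application must not modify."""
    if total_size < 4 + 4 + 60:
        raise ValueError("return area too small")
    area = bytearray(total_size)
    struct.pack_into(">I", area, 0, total_size)  # length indicator
    area[4:8] = b"I306"                          # eye-catcher at offset 4
    return area

area = init_i306_return_area()
assert area[4:8] == b"I306"
assert struct.unpack_from(">I", area, 0)[0] == 65536
```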
The IFI application program must run in supervisor state to request the key 7
return area. The storage size of the return area must be a minimum of the largest
Db2 log record returned plus the additional area defined in DSNDQW04. Minimize
the number of IFI calls required to get all log data but do not over use ECSA by
the IFI program. The other IFI storage areas can remain in user storage key 8. The
IFI application must be in supervisor state and key 0 when making IFCID 0306
calls.
IFCID 0306 has a unique return area format. The first section is mapped by
QW0306OF instead of the write header DSNDQWIN.
PSPI
Related concepts:
Programming for the instrumentation facility interface (IFI) (Db2 Performance)
PSPI
In the next F call for a data sharing environment, you specify either QW0306ES
or QW0306ES+1 as the input for WQALLRBA:
| WQALLMOD WQALLRBA
| -------- --------------------
| READS input: C6 00000000CAC5B606CB6C
|
| QW0306ES QW0306CT IFCARC1 IFCARC2 IFCABM
| -------------------- -------- ------- -------- --------
| READS output: 00000000CAC5B606CE91 008E 04 00E60812 00004B7B
| WQALLCRI
| In this 1-byte field, indicate what types of log records are to be returned:
| X'00' (WQALLCR0)
| Only log records for changed data capture and unit of recovery control.
| X'01' (WQALLCR1)
| Only log records for changed data capture and unit of recovery control
| from the proxy data sharing group in a GDPS® Continuous Availability
| with zero data loss environment. Records are returned until the
| end-of-scope log point is reached.
| X'02' (WQALLCR2)
| All types of log records from the proxy data sharing group in a GDPS
| Continuous Availability with zero data loss environment. Records are
| returned until the end-of-scope log point is reached.
| X'03' (WQALLCR3)
| Only log records for changed data capture and unit of recovery control
| from the proxy data sharing group in a GDPS Continuous Availability with
| zero data loss environment. Records are returned until the end-of-log point
| is reached for all members of the data sharing group.
| X'04' (WQALLCR4)
| All types of log records from the proxy data sharing group in a GDPS
| Continuous Availability with zero data loss environment. Records are
| returned until the end-of-log point is reached for all members of the data
| sharing group.
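The WQALLCRI values listed above can be collected into a small enumeration for use in a tooling script. The symbolic names (WQALLCR0 through WQALLCR4) and hexadecimal values come from the text; the Python form itself is a convenience, not a Db2-supplied mapping.

```python
from enum import IntEnum

class Wqallcri(IntEnum):
    """WQALLCRI: which types of log records to return."""
    WQALLCR0 = 0x00  # changed-data-capture and UR-control records only
    WQALLCR1 = 0x01  # same, from the proxy group, to the end-of-scope log point
    WQALLCR2 = 0x02  # all record types, proxy group, to the end-of-scope log point
    WQALLCR3 = 0x03  # CDC and UR-control, proxy group, to end-of-log (all members)
    WQALLCR4 = 0x04  # all record types, proxy group, to end-of-log (all members)

assert Wqallcri.WQALLCR2 == 2
```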
Important: To avoid data loss, before you issue an IFCID 0306 call with
WQALLMOD='H' to a proxy group in a GDPS Continuous Availability with
zero data loss environment, ensure that the Sysplex Timers for the source
Sysplex and the proxy Sysplex are set to within 150 milliseconds of each other.
WQALLMOD='F'
The WQALLRBA, WQALLCRI, and WQALLOPT fields need to be set. If reason code
00E60812 is returned, you have all the data for this scope. Wait a while before
issuing another WQALLMOD='F' call. In a data sharing environment, log buffers
are flushed when a WQALLMOD='F' request is issued.
WQALLMOD='N'
If reason code 00E60812 has not been returned, issue this call repeatedly until it
is. Then wait a while before issuing another WQALLMOD='F' call.
WQALLMOD='T'
Use this mode only if you do not want to continue issuing WQALLMOD='N'
calls before the end of the log is reached. It has no effect if no position is held
in the log.
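The F/N protocol described above amounts to a simple polling loop: one 'F' call to establish position, then 'N' calls until reason code 00E60812 indicates that all data for the scope has been returned. In this sketch, the read_log callback and its return values are invented stand-ins for the real IFI READS call.

```python
import time

END_OF_SCOPE = 0x00E60812  # reason code: all data for this scope returned

def drain_log(read_log, pause_seconds=0):
    """Issue one 'F' call, then 'N' calls until 00E60812 is returned."""
    records = []
    data, rc = read_log("F")           # first call: establish log position
    records.extend(data)
    while rc != END_OF_SCOPE:          # 'N' calls until 00E60812
        data, rc = read_log("N")
        records.extend(data)
    if pause_seconds:
        time.sleep(pause_seconds)      # wait a while before the next 'F'
    return records

# Fake reader returning two batches, then end-of-scope
batches = iter([(["r1"], 0), (["r2"], END_OF_SCOPE)])
assert drain_log(lambda mode: next(batches)) == ["r1", "r2"]
```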
| Reading complete log data for the GDPS Continuous Availability with
| zero data loss solution
| The GDPS Continuous Availability with zero data loss solution provides disaster
| recovery with continuous availability in a z/OS environment. When Db2
| participates in a GDPS Continuous Availability with zero data loss environment,
| IFI applications issue READS calls for IFCID 0306 to a proxy data sharing group,
| and the proxy data sharing group captures log records from a source data sharing
| group. Before the IFI applications can do that, you need to modify your Db2
| environment, and modify the IFI applications that capture log records.
| You need to make a number of changes to your Db2 environment if one of the
| following situations is true:
| v You are using the original release of the GDPS Continuous Availability with zero
| data loss solution, and you are ready to upgrade to the GDPS Continuous
| Availability with zero data loss solution.
| v You have not used the GDPS Continuous Availability with zero data loss
| solution before, and you are installing the GDPS Continuous Availability with
| zero data loss solution.
| To prepare Db2 data sharing groups to use the GDPS Continuous Availability with
| zero data loss solution, follow these steps:
| 1. Convert all members of your source and proxy data sharing groups to Db2 11
| new-function mode.
| 2. Convert the BSDS data sets to extended 10-byte format by running the
| DSNTIJCB job on all members of your source and proxy data sharing groups.
| 3. Choose a member of the source data sharing group that is not running a
| capture program to be the first member to be upgraded. Create the CDDS on
| that member.
| To minimize the possibility of an out-of-space condition, you should define an
| SMS data class for the CDDS with the following attributes enabled:
| v Extended addressability
| v Extended format
| v Extent constraint relief
| v CA reclaim
| Define the CDDS with a DEFINE CLUSTER command like the one below. In
| your DEFINE CLUSTER command, you need to specify the same values that
| are shown in the example for these parameters:
| v KEYS
| v RECORDSIZE
| v SPANNED
| v SHAREOPTIONS
| v CONTROLINTERVALSIZE
| DEFINE CLUSTER -
| ( NAME(prefix.CDDS) -
| KEYS(8 0) -
| RECORDSIZE(66560 66560) -
| SPANNED -
| SHAREOPTIONS(3 3)) -
| DATA -
| ( CYLINDERS(1000 1000) -
| CONTROLINTERVALSIZE(16384)) -
| INDEX -
| ( CYLINDERS(20 20) -
| CONTROLINTERVALSIZE(512))
| The CDDS name must be of the form prefix.CDDS.
| 4. Stop Db2 on the first source member that is being upgraded.
| 5. On the first source member that is being upgraded, apply all Db2 PTFs that
| provide GDPS Continuous Availability with zero data loss support.
| 6. On the first source member that is being upgraded, set the following
| subsystem parameters:
| v Set subsystem parameter CDDS_MODE to SOURCE.
| v Set subsystem parameter CDDS_PREFIX to the prefix value that you
| specified when you created the CDDS.
| 7. Start Db2 on the first source member that is being upgraded.
| 8. Upgrade each of the remaining members of the source group, one at a time.
| To do that, follow these steps.
| Important: This step is essential to ensure zero data loss, and to avoid
| extended data replication latency.
| Related tasks:
| Convert the BSDS, Db2 catalog, and directory to 10-byte RBA and LRSN
| format (Optional) (Db2 Installation and Migration)
| Related reference:
| CDDS_MODE in macro DSN6LOGP (Db2 Installation and Migration)
| CDDS_PREFIX in macro DSN6LOGP (Db2 Installation and Migration)
| Syntax and options of the REORG TABLESPACE control statement (Db2
| Utilities)
| Allocation of data sets with space constraint relief attributes (z/OS DFSMS
| Using Data Sets)
| Extended format VSAM data sets (z/OS DFSMS Using Data Sets)
| Defining data class attributes (z/OS DFSMSdfp Storage Administration)
| Reclaiming CA space for a KSDS (z/OS DFSMS Using Data Sets)
| Related information:
| DEFINE CLUSTER command (DFSMS Access Method Services for Catalogs)
| Defining volume and data set attributes for data classes (DFSMSdfp Storage
| Administration)
| Before you can capture log records in a GDPS Continuous Availability with zero
| data loss environment, you need to perform the tasks that are described in
| Modifying Db2 for the GDPS Continuous Availability with zero data loss
| solution.
| Procedure
| To modify an IFI READS call for IFCID 0306 to capture log records in a GDPS
| Continuous Availability with zero data loss environment:
| Specify one of the following values in the WQALLCRI field in the IFI qualification
| area to indicate that log records are being returned by the proxy data sharing
| group.
| X'01' (WQALLCR1)
| Only log records for changed data capture and unit of recovery control from
| the proxy data sharing group in a GDPS Continuous Availability with zero
| data loss environment. Records are returned until the end-of-scope log point is
| reached.
| X'02' (WQALLCR2)
| All types of log records from the proxy data sharing group in a GDPS
| Continuous Availability with zero data loss environment. Records are returned
| until the end-of-scope log point is reached.
| X'03' (WQALLCR3)
| Only log records for changed data capture and unit of recovery control from
| the proxy data sharing group in a GDPS Continuous Availability with zero
| data loss environment. Records are returned until the end-of-log point is
| reached for all members of the data sharing group.
| X'04' (WQALLCR4)
| All types of log records from the proxy data sharing group in a GDPS
| Continuous Availability with zero data loss environment. Records are returned
| until the end-of-log point is reached for all members of the data sharing group.
| PSPI
| Related tasks:
| Reading complete log data (IFCID 0306)
| Modifying Db2 for the GDPS Continuous Availability with zero data loss solution
| Related reference:
| Qualification fields for READS requests (Db2 Performance)
| Procedure
| To recover or rebuild a CDDS, follow these steps in the source data sharing group:
| 1. Issue the -STOP CDDS command to direct all members of the data sharing group
| to close and deallocate the CDDS.
| 2. Issue the DFSMSdss RESTORE command to restore the CDDS from the latest
| backup copy.
| If you do not have a backup copy, delete and redefine the CDDS. See
| Modifying Db2 for the GDPS Continuous Availability with zero data loss
| solution for an example of the CDDS definition.
| 3. Issue the -START CDDS command to direct all members of the data sharing
| group to allocate and open the CDDS.
| 4. Run REORG TABLESPACE with the INITCDDS option to repopulate the
| CDDS.
| You can specify the SEARCHTIME option with the INITCDDS option to allow
| REORG to populate the CDDS with an earlier dictionary than the dictionary
| that currently resides in the target table space.
| Related concepts:
| RESTORE command for DFSMSdss (z/OS DFSMSdss Storage Administration)
|
| Related tasks:
| Modifying Db2 for the GDPS Continuous Availability with zero data loss solution
| Related reference:
| Syntax and options of the REORG TABLESPACE control statement (Db2
| Utilities)
| -STOP CDDS (Db2) (Db2 Commands)
| -START CDDS (Db2) (Db2 Commands)
PSPI
Db2 provides the following stand-alone log services that user-written application
programs can use to read Db2 recovery log records and control intervals even
when Db2 is not running:
v The OPEN function initializes stand-alone log services.
v The GET function returns a pointer to the next log record or log record control
interval.
v The CLOSE function deallocates data sets and frees storage.
To invoke these services, use the assembler language DSNJSLR macro and specify
one of the preceding functions.
These log services use a request block, which contains a feedback area in which
information for all stand-alone log GET calls is returned. The request block is
created when a stand-alone log OPEN call is made. The request block must be
passed as input to all subsequent stand-alone log calls (GET and CLOSE). The
request block is mapped by the DSNDSLRB macro, and the feedback area is
mapped by the DSNDSLRF macro.
When you issue an OPEN request, you can indicate whether you want to get log
records or log record control intervals. Each GET request returns a single logical
record or control interval depending on which you selected with the OPEN
request. If neither is specified, the default, RECORD, is used. Db2 reads the log in
the forward direction of ascending relative byte addresses or log record sequence
numbers (LRSNs).
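The calling pattern of these services (OPEN builds the request block, each GET threads it through, CLOSE releases it) can be modeled in a short sketch. The following Python illustration models only the calling pattern, not the DSNJSLR assembler interface; the class names, mode strings, and the end-of-range code are invented for the example.

```python
# Illustrative model of the stand-alone log service calling pattern.
# The real interface is the assembler DSNJSLR macro; names here are invented.

class RequestBlock:
    """Stands in for the DSNDSLRB-mapped request block; the feedback
    area (mapped by DSNDSLRF in the real interface) is a plain dict."""
    def __init__(self, mode):
        self.mode = mode          # "RECORD" (the default) or "CI"
        self.feedback = {}
        self.cursor = 0

class StandaloneLog:
    def __init__(self, records):
        # records are assumed to be in ascending RBA/LRSN order
        self.records = records

    def open(self, mode="RECORD"):
        # OPEN initializes the services and creates the request block
        return RequestBlock(mode)

    def get(self, rb):
        # GET returns the next record (or CI) and fills the feedback area
        if rb.cursor >= len(self.records):
            rb.feedback["rc"] = 4   # end of range (invented code)
            return None
        rec = self.records[rb.cursor]
        rb.cursor += 1
        rb.feedback["rc"] = 0
        return rec

    def close(self, rb):
        # CLOSE deallocates data sets and frees storage
        rb.feedback.clear()

log = StandaloneLog(["rec1", "rec2"])
rb = log.open()                  # mode defaults to RECORD, as in the text
first = log.get(rb)
second = log.get(rb)
log.close(rb)
```

The same request block must be passed to every GET and to the final CLOSE, mirroring the requirement stated above.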
If a bootstrap data set (BSDS) is allocated before stand-alone services are invoked,
appropriate log data sets are allocated dynamically by z/OS. If the bootstrap data
set is not allocated before stand-alone services are invoked, the JCL for your
user-written application to read a log must specify and allocate the log data sets to
be read.
Important: Use one of the following methods to read active logs while the Db2
subsystem that owns the logs is active:
v IFCID 0129
v IFCID 0306
v Log capture exit
PSPI
PSPI
The following tables list and describe the JCL DD statements that are used by
stand-alone services.
Table 61. JCL DD statements for Db2 stand-alone log services in a data-sharing environment

JCL DD statement  Explanation
GROUP If you are reading logs from every member of a data sharing group in
LRSN sequence, you can use this statement to locate the BSDSs and log
data sets needed. You must include the data set name of one BSDS in
the statement. Db2 can find the rest of the information from that one
BSDS.
All members' logs and BSDS data sets must be available. If you use this
DD statement, you must also use the LRSN and RANGE parameters on
the OPEN request. The GROUP DD statement overrides any MxxBSDS
statements that are used.
(Db2 searches for the BSDS DD statement first, then the GROUP
statement, and then the MxxBSDS statements. If you want to use a
particular member's BSDS for your own processing, you must call that
DD statement something other than BSDS.)
MxxBSDS Names the BSDS data set of a member whose log must participate in the
read operation and whose BSDS is to be used to locate its log data sets.
Use a separate MxxBSDS DD statement for each Db2 member. xx can be
any two valid characters.
Use these statements if logs from selected members of the data sharing
group are required and the BSDSs of those members are available. These
statements are ignored if you use the GROUP DD statement.
For one MxxBSDS statement, you can use either RBA or LRSN values to
specify a range. If you use more than one MxxBSDS statement, you must
use the LRSN to specify the range.
MyyARCHV
MyyACTn You can use these statements if the BSDS data sets are unavailable or
if you want only some of the log data sets from selected members of the
group. yy can be any two valid characters.
The DD statements must specify the log data sets in ascending order of log RBA
(or LRSN) range. If both ARCHIVE and ACTIVEn DD statements are included, the
first archive data set must contain the lowest log RBA or LRSN value. If the JCL
specifies the data sets in a different order, the job terminates with an error return
code when a GET request tries to access the first record that breaks the sequence.
If the log ranges of the two data sets overlap, this is not considered an error;
instead, the GET function skips over the duplicate data in the second data set and
returns the next record. The distinction between out-of-order and overlap is as
follows:
v An out-of-order condition occurs when the log RBA or LRSN of the first record
in a data set is greater than that of the first record in the following data set.
v An overlap condition occurs when the out-of-order condition is not met but the
log RBA or LRSN of the last record in a data set is greater than that of the first
record in the following data set.
Gaps within the log range are permitted. A gap is created when one or more log
data sets containing part of the range to be processed are not available. This can
happen if the data set was not specified in the JCL or is not reflected in the BSDS.
When the gap is encountered, an exception return code value is set, and the next
complete record after the gap is returned.
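The distinction among the out-of-order, overlap, and gap conditions can be restated as a small predicate. This Python sketch works on integer RBA or LRSN values and uses invented function names; it follows the definitions above.

```python
# Classify two adjacent log data sets by the rules in the text.
# first_a/last_a are the RBA or LRSN of data set A's first and last
# records; first_b is the first record of the following data set B.

def classify(first_a, last_a, first_b):
    if first_a > first_b:
        # A's first record is greater than B's first record
        return "out-of-order"   # GET fails with an error return code
    if last_a > first_b:
        # not out of order, but A's last record passes B's first record
        return "overlap"        # GET skips the duplicate data in B
    if last_a < first_b - 1:
        # part of the log range is in neither data set
        return "gap"            # GET sets an exception return code
    return "contiguous"
```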
PSPI
Related reference:
Stand-alone log CLOSE request
Stand-alone log OPEN request
Stand-alone log GET request
PSPI
If you use the GROUP DD statement, the number of members whose logs are read
is the number of members in the group. Otherwise, it is the number of different
xxs and yys used in the Mxx and Myy type DD statements.
For example, assume you need to read log records from members S1, S2, S3, S4, S5
and S6.
v S1 and S2 locate their log data sets by their BSDSs.
v S3 and S4 need both archive and active logs.
v S4 has two active log data sets.
v S5 needs only its archive log.
v S6 needs only one of its active logs.
You then need the following DD statements to specify the required log data sets:
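The required statements follow mechanically from the rules above. In this Python sketch, the MyyARCHV and MyyACTn statement names for a member's archive and active log data sets are an assumption made for illustration (the text refers to them only as Mxx and Myy type DD statements).

```python
# Derive the DD statement names needed for each member in the scenario
# above. The MyyARCHV / MyyACTn names are assumed for illustration.

def dd_statements(member_id, use_bsds=False, archives=0, actives=0):
    stmts = []
    if use_bsds:
        # the member locates its own log data sets through its BSDS
        stmts.append(f"M{member_id}BSDS")
    if archives:
        stmts.append(f"M{member_id}ARCHV")
    # active log data sets are numbered: MyyACT1, MyyACT2, ...
    stmts += [f"M{member_id}ACT{n}" for n in range(1, actives + 1)]
    return stmts

plan = {
    "S1": dd_statements("S1", use_bsds=True),
    "S2": dd_statements("S2", use_bsds=True),
    "S3": dd_statements("S3", archives=1, actives=1),
    "S4": dd_statements("S4", archives=1, actives=2),
    "S5": dd_statements("S5", archives=1),
    "S6": dd_statements("S6", actives=1),
}
```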
PSPI
PSPI
The request macro invoking these services can be used by reentrant programs. The
macro requires that register 13 point to an 18-word save area at invocation. In
addition, registers 0, 1, 14, and 15 are used as work and linkage registers. A return
code is passed back in register 15 at the completion of each request. When the
return code is nonzero, a reason code is placed in register 0. Return codes identify
a class of errors, while the reason code identifies a specific error condition of that
class. The stand-alone log return codes are shown in the following table.
The stand-alone log services invoke executable macros that can execute only in
24-bit addressing mode and reference data below the 16-MB line. User-written
applications should be link-edited as AMODE(24), RMODE(24).
PSPI
PSPI
You can use the PMO option to retrieve log control intervals from archive log data
sets. DSNJSLR also retrieves log control intervals from the active log if the Db2
system is not active. During OPEN, if DSNJSLR detects that the control interval
range is not within the archive log range available (for example, the range purged
from BSDS), an error condition is returned.
Specify CI and use GET to retrieve the control interval you have chosen. The rules
remain the same regarding control intervals and the range specified for the OPEN
function. Control intervals must fall within the range specified on the RANGE
parameter.
PSPI
Related reference:
JCL DD statements for Db2 stand-alone log services
Registers and return codes
PSPI
A log record is available in the area pointed to by the request block until the next
GET request is issued. At that time, the record is no longer available to the
requesting program. If the program requires reference to a log record's content
after requesting a GET of the next record, the program must move the record into
a storage area that is allocated by the program.
The first GET request, after a FUNC=OPEN request that specified a RANGE
parameter, returns a pointer in the request feedback area. This points to the first
record with a log RBA value greater than or equal to the low log RBA value
specified by the RANGE parameter. If the RANGE parameter was not specified on
the FUNC=OPEN request, then the data to be read is determined by the JCL
specification of the data sets. In this case, a pointer to the first complete log record
in the data set that is specified by the ARCHIVE, or by ACTIVE1 if ARCHIVE is
omitted, is returned. The next GET request returns a pointer to the next record in
ascending log RBA order. Subsequent GET requests continue to move forward in
log RBA sequence until the function encounters the end of RANGE RBA value, the
end of the last data set specified by the JCL, or the end of the log as determined
by the bootstrap data set.
Reason codes 00D10261 - 00D10268 reflect a damaged log. In each case, the RBA of
the record or segment in error is returned in the stand-alone feedback block field
(SLRFRBA). A damaged log can impair Db2 restart; special recovery procedures are
required for these circumstances.
On return from this request, the first part of the request block contains the
feedback information that this function returns. Mapping macro DSNDSLRF
defines the feedback fields which are shown in the following table. The
information returned is status information, a pointer to the log record, the length
of the log record, and the 6-byte or 10-byte log RBA value of the record.
Table 63. Stand-alone log GET feedback area contents

Field name  Hex offset  Length (bytes)  Field contents
SLRFRC      00          2               Log request return code
SLRFINFO    02          2               Information code returned by dynamic
                                        allocation. Refer to the z/OS SPF job
                                        management publication for information
                                        code descriptions.
SLRFERCD    04          2               VSAM or dynamic allocation error code,
                                        if register 15 contains a nonzero value.
SLRFRG15    06          2               VSAM register 15 return code value.
SLRFFRAD    08          4               Address of area containing the log
                                        record or CI
| SLRFRCLL  0C          4               Length of the log record or CI
| SLRFRBA16 10          16              Log RBA of the log record
| SLRFDDNM  20          8               ddname of data set on which activity
                                        occurred
| SLRFMBID  28          8               Member identification of the current
                                        log record
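Given the offsets and lengths listed for the feedback area, it can be decoded mechanically. The following Python sketch uses struct as a stand-in for the DSNDSLRF mapping macro; the dictionary layout is invented for illustration.

```python
import struct

# Decode the stand-alone log GET feedback area laid out in Table 63.
# Field names match the table; the real mapping is the DSNDSLRF macro.
FMT = ">HHHHII16s8s8s"   # big-endian, matching the listed offsets/lengths

def decode_feedback(area):
    (rc, info, ercd, rg15, frad, rcll,
     rba16, ddnm, mbid) = struct.unpack(FMT, area)
    return {
        "SLRFRC": rc,        # log request return code
        "SLRFINFO": info,    # dynamic allocation information code
        "SLRFERCD": ercd,    # VSAM or dynamic allocation error code
        "SLRFRG15": rg15,    # VSAM register 15 return code value
        "SLRFFRAD": frad,    # address of the log record or CI
        "SLRFRCLL": rcll,    # length of the log record or CI
        "SLRFRBA16": rba16,  # 16-byte field holding the log RBA
        "SLRFDDNM": ddnm,    # ddname of the data set
        "SLRFMBID": mbid,    # member identification
    }
```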
PSPI
Related concepts:
X'D1......' codes (Db2 Codes)
Related tasks:
Recovering from different Db2 for z/OS problems
Related reference:
JCL DD statements for Db2 stand-alone log services
Registers and return codes
PSPI
PSPI
Related reference:
JCL DD statements for Db2 stand-alone log services
Registers and return codes
Related information:
00D10030 (Db2 Codes)
PSPI
For example:
Figure 66. Excerpts from a sample program using stand-alone log services
PSPI
PSPI This installation exit routine presents log data to a log capture exit routine
when the data is written to the Db2 active log. Do not use this exit routine for
general-purpose log auditing or tracking; the IFI interface is designed for that
purpose.
The log capture exit routine executes in an area of Db2 that is critical for
performance. As such, it is primarily intended as a mechanism to capture log data
for recovery purposes. In addition, the log capture exit routine operates in a very
restrictive z/OS environment, which severely limits its capabilities as a stand-alone
routine.
Procedure
You must write an exit routine (or use the one that is provided by the preceding
program offering) that can be loaded and called under the various processing
conditions and restrictions that are required by this exit routine. PSPI
Related concepts:
Contents of the log
Log capture routines
Related tasks:
Reading log records with IFI
Related reference:
The physical structure of the log
| Internally, the values that are kept in memory are all 10 bytes, except when they
| need to be externalized to structures that remain in the 6-byte format. The values
| are stored internally as 10 bytes even in conversion mode. The conversion from the
| 10-byte values to 6-byte format is done at end points, such as when a log record is
| written, or when the PGLOGRBA field in a data or index page is updated.
| Even before the BSDS is converted to Db2 11 format on all data sharing members,
| 10-byte LRSN values might be displayed with non-zero digits in the low order 3
| bytes. LRSN values captured before the BSDS is converted continue to be
| displayed as they were saved until they are no longer available for display (for
| example, deleted by MODIFY RECOVERY). This behavior is normal and to be
| expected, given the many ways LRSN values are generated, stored, and handled in
| Db2. If these LRSN values are specified as input to Db2, specify them as shown. If
| the LRSN value contains non-zero digits in the low order 3 bytes, do not remove
| them. Any conversion that might be required takes place inside Db2.
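As a rough illustration of the relationship between the 6-byte and 10-byte formats, the sketch below shows one plausible byte placement: the RBA right-aligned with four high-order zero bytes, and the LRSN with one byte prepended and three appended, as described for the extended formats in the What's New material. The function names are invented, and the exact placement should be confirmed against that reference.

```python
# Illustrative expansion of 6-byte RBA/LRSN values to the 10-byte
# extended format; function names are invented for this sketch.

def expand_rba(rba6: bytes) -> bytes:
    # the 6-byte RBA is right-aligned: four high-order zero bytes
    assert len(rba6) == 6
    return b"\x00" * 4 + rba6

def expand_lrsn(lrsn6: bytes) -> bytes:
    # the 6-byte LRSN lands in the middle: one zero byte before, three
    # low-order bytes after (which can become non-zero, as the text
    # notes for converted BSDSs)
    assert len(lrsn6) == 6
    return b"\x00" + lrsn6 + b"\x00" * 3
```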
| Related concepts:
| Expanded RBA and LRSN log records (Db2 for z/OS What's New?)
| The extended 10-byte RBA and LRSN in Db2 11 (Db2 for z/OS What's New?)
| Related tasks:
| What to do before RBA or LRSN limits are reached
Edit procedures
An edit procedure is assigned to a table by the EDITPROC clause of the CREATE
TABLE statement. An edit procedure receives the entire row of a base table in
internal Db2 format. It can transform the row when it is stored by an INSERT or
UPDATE SQL statement or by the LOAD utility.
PSPI
The edit-decoding function must be the exact inverse of the edit-encoding function.
For example, if a routine encodes 'ALABAMA' to '01', it must decode '01' to
'ALABAMA'. A violation of this rule can lead to an abend of the Db2 connecting
thread, or other undesirable effects.
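The exact-inverse rule is a round-trip property. The following toy Python sketch makes it concrete; the codebook and function names are invented, and a real edit procedure is an assembler routine such as DSN8EAE1.

```python
# Toy edit-encode/edit-decode pair; the decode must be the exact
# inverse of the encode, as the text requires. Codebook is invented.
CODEBOOK = {"ALABAMA": "01", "ALASKA": "02"}
DECODEBOOK = {v: k for k, v in CODEBOOK.items()}

def edit_encode(value):
    return CODEBOOK[value]

def edit_decode(code):
    return DECODEBOOK[code]

# round-trip property: decode(encode(x)) == x for every x
for state in CODEBOOK:
    assert edit_decode(edit_encode(state)) == state
```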
Your edit procedure can encode the entire row of the table, including any index
keys. However, index keys are extracted from the row before the encoding is done;
therefore, index keys are stored in the Db2 internal format.
The sample application contains a sample edit procedure, DSN8EAE1. To print it,
use ISPF facilities, IEBPTPCH, or a program of your own. Or, assemble it and use
the assembly listing.
There is also a sample routine, DSN8HUFF in library prefix.SDSNSAMP, that
performs Huffman data compression. The routine not only exemplifies the use of
the exit parameters; it also has some potential use for data compression. If you
intend to use the routine in any production application, pay particular attention to
the warnings and restrictions given as comments in the code. You might prefer to
let Db2 compress your data instead.
PSPI
Related concepts:
General guidelines for writing exit routines
Procedure
PSPI
Specify the EDITPROC clause of the CREATE TABLE statement, followed by the
name of the procedure. The procedure is loaded on demand during operation. You
can specify the EDITPROC clause on a table that is activated with row and column
access control. The rows of the table are passed to these procedures if your security
administrator determines that these procedures are allowed to access sensitive
data.
PSPI
PSPI
An edit routine is invoked after any date routine, time routine, or field procedure.
If there is also a validation routine, the edit routine is invoked after the validation
routine. Any changes made to the row by the edit routine do not change entries
made in an index.
The same edit routine is invoked to edit-decode a row whenever Db2 retrieves one.
On retrieval, it is invoked before any date routine, time routine, or field procedure.
If retrieved rows are sorted, the edit routine is invoked before the sort. An edit
routine is not invoked for a DELETE operation without a WHERE clause that
deletes an entire table in a segmented table space.
PSPI
At invocation, registers are set, and the edit procedure uses the standard exit
parameter list (EXPL). The following table shows the exit-specific parameter list, as
described by macro DSNDEDIT.
Table 64. Parameter list for an edit procedure

Name      Hex offset  Data type              Description
EDITCODE  0           Signed 4-byte integer  Edit code telling the type of function to
                                             be performed, as follows:
                                             0   Edit-encode row for insert or update
                                             4   Edit-decode row for retrieval
EDITROW   4           Address                Address of a row description. The value
                                             of this parameter is 0 (zero) if both of
                                             the following conditions are true:
                                             v The edit procedure is insensitive to
                                               the format in which Db2 stores the
                                               rows of the table.
                                             v The edit procedure is defined as
                                               WITHOUT ROW ATTRIBUTES in
                                               CREATE TABLE statements.
          8           Signed 4-byte integer  Reserved
EDITILTH  C           Signed 4-byte integer  Length of the input row
EDITIPTR  10          Address                Address of the input row
EDITOLTH  14          Signed 4-byte integer  Length of the output row. On entry, this
                                             is the size of the area in which to place
                                             the output row. The exit must not
                                             modify storage beyond this length.
EDITOPTR  18          Address                Address of the output row
PSPI
Columns for which no input field is provided and that are not in reordered row
format are always at the end of the row and are never defined as NOT NULL. In
this case, the columns allow nulls, are defined as NOT NULL WITH DEFAULT, or
are ROWID or DOCID columns.
Use macro DSNDEDIT to get the starting address and row length for edit exits.
Add the row length to the starting address to get the first invalid address beyond
the end of the input buffer; your routine must not process any address as large as
that.
The following diagram shows how the parameter list points to other row
information. The address of the nth column description is given by: RFMTAFLD +
(n-1)*(FFMTE-FFMT).
Figure 67. How the edit exit parameter list points to row information (diagram:
register 1 addresses the EXPL and the edit parameter list; the EXPL locates the
256-byte work area, its length, the return code, and the reason code; the parameter
list carries EDITCODE, the address of the row description, and the lengths and
addresses of the input and output rows; the row description locates the column
descriptions, each giving the column's length, data type, data attribute, and name)
PSPI
If EDITCODE contains 0, the input row is in decoded form. Your routine must
encode it.
In that case, the maximum length of the output area, in EDITOLTH, is 10 bytes
more than the maximum length of the record. In counting the maximum length
for a row in basic row format, “record” includes fields for the lengths of
varying-length columns and for null indicators. In counting the maximum
length for a row in reordered row format, “record” includes fields for the
offsets to the varying length columns and for null indicators. The maximum
length of the record does not include the 6-byte record header.
If EDITCODE contains 4, the input row is in coded form. Your routine must
decode it.
In that case, EDITOLTH contains the maximum length of the record. In
counting the maximum length for a row in basic row format, “record” includes
fields for the lengths of varying length columns and for null indicators. In
counting the maximum length for a row in reordered row format, “record”
includes fields for the offsets to the varying-length columns and for null
indicators. The maximum length of the record does not include the 6-byte
record header.
In either case, put the result in the output area, pointed to by EDITOPTR, and put
the length of your result in EDITOLTH. The length of your result must not be
greater than the length of the output area, as given in EDITOLTH on invocation,
and your routine must not modify storage beyond the end of the output area.
Required return code: Your routine must also leave a return code in EXPLRC1 with
the following meanings:
Table 65. Required return code in EXPLRC1
Value Meaning
0 Function performed successfully.
Nonzero Function failed.
If the function fails, the routine might also leave a reason code in EXPLRC2. Db2
returns SQLCODE -652 (SQLSTATE '23506') to the application program and puts
the reason code in field SQLERRD(6) of the SQL communication area (SQLCA).
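The contract described above (EDITCODE selects the direction, the result must fit the output area, and EXPLRC1 reports the outcome) can be sketched as follows. The Python code is an illustration only; the toy hex transform and the specific nonzero codes are invented.

```python
# Sketch of the edit exit contract: EDITCODE selects encode (0) or
# decode (4); the result must fit in EDITOLTH; EXPLRC1 reports success
# (0) or failure (nonzero). The hex transform is a toy stand-in.

def edit_exit(editcode, input_row, editolth):
    """Returns (output_row, output_length, explrc1)."""
    if editcode == 0:
        result = input_row.encode("utf-8").hex()           # toy "encode"
    elif editcode == 4:
        result = bytes.fromhex(input_row).decode("utf-8")  # toy "decode"
    else:
        return None, 0, 8                  # unexpected function (invented)
    if len(result) > editolth:
        # the exit must not modify storage beyond the output area
        return None, 0, 12                 # failure: EXPLRC1 nonzero
    return result, len(result), 0          # EXPLRC1 = 0: success
```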
PSPI
Validation routines
Validation routines are assigned to a table by the VALIDPROC clause of the
CREATE TABLE and ALTER TABLE statements. A validation routine receives an
entire row of a base table as input. The routine can return an indication of whether
to allow a subsequent INSERT, UPDATE, DELETE, FETCH, or SELECT operation.
PSPI
Typically, a validation routine is used to impose limits on the information that can
be entered in a table; for example, allowable salary ranges, perhaps dependent on
job category, for the employee sample table.
The return code from a validation routine is checked for a 0 value before any
insert, update, or delete is allowed.
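The salary-range example can be made concrete. In this Python sketch the ranges, field names, and nonzero return code are invented; a real validation routine receives the row through the DSNDRVAL parameter list.

```python
# Toy validation routine: allow the operation only if the salary is
# inside a range that depends on job category. Ranges are invented.
SALARY_RANGES = {"CLERK": (20000, 50000), "MANAGER": (40000, 120000)}

def validate_row(row):
    low, high = SALARY_RANGES[row["job"]]
    # return code 0 allows the insert, update, or delete; nonzero rejects it
    return 0 if low <= row["salary"] <= high else 8
```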
PSPI
Related concepts:
General guidelines for writing exit routines
You can add a validation routine to an existing table, but the routine is not
invoked to validate data that was already in the table.
Procedure
PSPI
Issue the CREATE TABLE or ALTER TABLE statement with the VALIDPROC
clause.
You can specify the VALIDPROC clause on a table that is activated with row and
column access control. The rows of the table are passed to these routines if your
security administrator determines that these routines are allowed to access
sensitive data.
You can cancel a validation routine for a table by specifying the VALIDPROC
NULL clause in an ALTER TABLE statement.
PSPI
PSPI
The routine is invoked for most delete operations, including a mass delete of all
the rows of a table. If there are other exit routines, the validation routine is
invoked before any edit routine, and after any date routine, time routine, or field
procedure.
PSPI
PSPI
The following diagram shows how the parameter list points to other information.
Figure 68. How a validation parameter list points to information (diagram:
register 1 addresses the EXPL and the validation parameter list; the EXPL locates
the 256-byte work area, its length, the return code, and the reason code; the
parameter list carries the address of the row description and the length and
address of the input row to be validated; the row description locates the column
descriptions, each giving the column's length, data type, data attribute, and
name). The address of the nth column description is given by: RFMTAFLD +
(n-1)*(FFMTE-FFMT).
The following table shows the exit-specific parameter list, described by macro
DSNDRVAL.
Table 66. Parameter list for a validation routine

Name      Hex offset  Data type              Description
          0           Signed 4-byte integer  Reserved
RVALROW   4           Address                Address of a row description
          8           Signed 4-byte integer  Reserved
RVALROWL  C           Signed 4-byte integer  Length of the input row to be validated
PSPI
PSPI
Columns for which no input field is provided and that are not in reordered row
format are always at the end of the row and are never defined as NOT NULL. In
this case, the columns allow nulls, are defined as NOT NULL WITH DEFAULT, or
are ROWID or DOCID columns.
Use macro DSNDRVAL to get the starting address and row length for validation
exits. Add the row length to the starting address to get the first invalid address
beyond the end of the input buffer; your routine must not process any address as
large as that.
PSPI
PSPI
If the operation is not allowed, the routine might also leave a reason code in
EXPLRC2. Db2 returns SQLCODE -652 (SQLSTATE '23506') to the application
program and puts the reason code in field SQLERRD(6) of the SQL communication
area (SQLCA).
PSPI
PSPI
Example: Suppose that you want to insert and retrieve dates in a format like
“September 21, 2006”. You can use a date routine that transforms the date to a
format that is recognized by Db2 on insertion, such as ISO: “2006-09-21”. On
retrieval, the routine can transform “2006-09-21” to “September 21, 2006”.
You can have a date routine, a time routine, or both. These routines do not
apply to timestamps. Special rules apply if you execute queries at a remote DBMS,
through the distributed data facility.
PSPI
Related concepts:
General guidelines for writing exit routines
PSPI
To specify date and time routines:
1. Set LOCAL DATE LENGTH or LOCAL TIME LENGTH to the length of the
longest field that is required to hold a date or time in your local format.
Allowable values range from 10 to 254. For example, if you intend to insert and
retrieve dates in the form “September 21, 2006”, you need an 18-byte field. You
would set LOCAL DATE LENGTH to 18.
2. Replace all of the IBM-supplied exit routines. Use CSECTs DSNXVDTX,
DSNXVDTA, and DSNXVDTU for a date routine, and DSNXVTMX,
DSNXVTMA, and DSNXVTMU for a time routine. The routines are loaded
when Db2 starts.
3. To make the local date or time format the default for retrieval, set DATE
FORMAT or TIME FORMAT to LOCAL when installing Db2.
This specification has the effect that Db2 always takes the exit routine when you
retrieve from a DATE or TIME column. For example, suppose that you want to
retrieve dates in your local format only occasionally; most of the time you use
the USA format. You would set DATE FORMAT to USA.
What to do next
The installation parameters for LOCAL DATE LENGTH, LOCAL TIME LENGTH,
DATE FORMAT, and TIME FORMAT can also be updated after Db2 is installed. If
you change a length parameter, you might need to rebind the applications.
PSPI
PSPI
Db2 checks that the value supplied by the exit routine represents a valid date or
time in some recognized format, and then converts it into an internal format for
storage or comparison. If the value is entered into a column that is a key column
in an index, the index entry is also made in the internal format.
On retrieval, a date or time routine can be invoked to change a value from ISO to
the locally-defined format when a date or time value is retrieved by a SELECT or
FETCH statement. If LOCAL is the default, the routine is always invoked unless
overridden by a precompiler option or by the CHAR function, as by specifying
CHAR(HIREDATE, ISO); that specification always retrieves a date in ISO format. If
LOCAL is not the default, the routine is invoked only when specifically called for
by CHAR, as in CHAR(HIREDATE, LOCAL); that always retrieves a date in the
format supplied by your date exit routine.
On retrieval, the exit is invoked after any edit routine or Db2 sort. A date or time
routine is not invoked for a DELETE operation without a WHERE clause that
deletes an entire table in a segmented table space.
PSPI
PSPI
The following diagram shows how the parameter list points to other information.
Figure 69. How a date or time parameter list points to other information (diagram:
register 1 addresses the EXPL and the parameter list; the EXPL locates the
512-byte work area, its length, and the return code; the parameter list carries the
addresses of the function code, the format length, and the LOCAL value)
PSPI
PSPI
If the function code is 4, the input value is in local format, in the area pointed to
by DTXPLOC. Your routine must change it to ISO, and put the result in the area
pointed to by DTXPISO.
If the function code is 8, the input value is in ISO, in the area pointed to by
DTXPISO. Your routine must change it to your local format, and put the result in
the area pointed to by DTXPLOC.
Your routine must also leave a return code in EXPLRC1, a 4-byte integer and the
third word of the EXPL area. The return code can have the following meanings:
Table 70. Required return code in EXPLRC1
Value Meaning
0 No errors; conversion was completed.
4 Invalid date or time value.
8 Input value not in valid format; if the function is insertion, and LOCAL
is the default, Db2 next tries to interpret the data as a date or time in one
of the recognized formats (EUR, ISO, JIS, or USA).
12 Error in exit routine.
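A date routine that implements both function codes can be sketched in Python, with strptime and strftime standing in for the conversion logic. The format strings are chosen for the "September 21, 2006" example, and the return codes follow Table 70.

```python
from datetime import datetime

LOCAL_FMT = "%B %d, %Y"   # local format, e.g. "September 21, 2006"
ISO_FMT = "%Y-%m-%d"      # ISO format, e.g. "2006-09-21"

def date_exit(function_code, value):
    """Sketch of a date exit: function code 4 converts local to ISO;
    function code 8 converts ISO to local. Returns (result, EXPLRC1)."""
    try:
        if function_code == 4:
            return datetime.strptime(value, LOCAL_FMT).strftime(ISO_FMT), 0
        if function_code == 8:
            return datetime.strptime(value, ISO_FMT).strftime(LOCAL_FMT), 0
    except ValueError:
        return None, 8    # input value not in valid format (Table 70)
    return None, 12       # error in exit routine (Table 70)
```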
Conversion procedures
A conversion procedure is a user-written exit routine that converts characters from
one coded character set to another coded character set.
PSPI
In most cases, any conversion that is needed can be done by routines provided by
IBM. The exit for a user-written routine is available to handle exceptions.
PSPI
Related concepts:
General guidelines for writing exit routines
Procedure
PSPI
PSPI
PSPI
A conversion procedure does not use an exit-specific parameter list. Instead, the
area pointed to by register 1 at invocation includes three words, which contain the
addresses of the following items:
1. The EXPL parameter list
2. A string value descriptor that contains the character string to be converted
3. A copy of a row from SYSIBM.SYSSTRINGS that names the conversion
procedure identified in TRANSPROC.
The length of the work area pointed to by the exit parameter list is generally 512
bytes. However, if the string to be converted is ASCII MIXED data (the value of
TRANSTYPE in the row from SYSSTRINGS is PM or PS), then the length of the
work area is 256 bytes, plus the length attribute of the string.
The string value descriptor: The descriptor has the following formats:
Table 71. Format of string value descriptor for a conversion procedure

Name      Hex offset  Data type              Description
FPVDTYPE  0           Signed 2-byte integer  Data type of the value:
                                             Code  Means
                                             20    VARCHAR
                                             28    VARGRAPHIC
FPVDVLEN  2           Signed 2-byte integer  The maximum length of the string
FPVDVALE  4           None                   The string. The first halfword is the
                                             string's actual length in characters. If
                                             the string is ASCII MIXED data, it is
                                             padded out to the maximum length by
                                             undefined bytes.
PSPI
PSPI
When converting MIXED data, your procedure must ensure that the result is
well-formed. In any conversion, if you change the length of the string, you must
set the length control field in FPVDVALE to the proper value. Overwriting storage
beyond the maximum length of the FPVDVALE causes an abend.
Your procedure must also set a return code in field EXPLRC1 of the exit parameter
list.
The following is a list of the codes for the converted string in FPVDVALE:
Table 72. Codes for the converted string in FPVDVALE
Code Meaning
0 Successful conversion
4 Conversion with substitution
For the following remaining codes, Db2 does not use the converted string:
Table 73. Remaining codes for the FPVDVALE
Code Meaning
8 Length exception
12 Invalid code point
16 Form exception
20 Any other error
24 Invalid CCSID
For an invalid code point (code 12), place the 1- or 2-byte code point in field
EXPLRC2 of the exit parameter list.
Return a form exception (code 16) for EBCDIC MIXED data when the source string
does not conform to the rules for MIXED data.
In the case of a conversion error, Db2 sets the SQLERRMC field of the SQLCA to
HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2).
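The return-code scheme can be illustrated with Python codecs standing in for CCSID conversion. The function name and its parameters are invented; codes 0, 4, and 8 follow Tables 72 and 73.

```python
# Sketch of a conversion procedure using Python codecs as a stand-in
# for CCSID conversion. Return codes follow Tables 72 and 73:
# 0 = successful conversion, 4 = conversion with substitution,
# 8 = length exception.

def convert(value, src_codec, dst_codec, max_len):
    """Returns (converted_bytes, return_code)."""
    text = value.decode(src_codec)
    try:
        out = text.encode(dst_codec)
        code = 0
    except UnicodeEncodeError:
        # substitution characters stand in for unmappable code points
        out = text.encode(dst_codec, errors="replace")
        code = 4
    if len(out) > max_len:
        return None, 8    # result would exceed the maximum string length
    return out, code
```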
The following diagram shows how the parameter list points to other information.
(Diagram: register 1 addresses the EXPL, the string value descriptor, and a copy of
the row from SYSIBM.SYSSTRINGS; the EXPL locates the work area, its length,
the return code, and the invalid-code field; the string value descriptor carries the
data type, maximum length, actual length, and value of the string.)
PSPI
Field procedures
A field procedure is a user-written exit routine that is used to transform values in a
single, short string column. You can assign field procedures to a table by specifying
the FIELDPROC clause of the CREATE TABLE or ALTER TABLE statement.
PSPI
When values in the column are changed or new values are inserted, the field
procedure is invoked for each value and can transform that value (encode it) in
any way. The encoded value is then stored. When values are retrieved from the
column, the field procedure is invoked for each encoded value and must decode it
back to the original string value.
Any indexes, including partitioned indexes, defined on a column that uses a field
procedure are built with encoded values. For a partitioned index, the encoded
value of the limit key is put into the LIMITKEY column of the SYSINDEXPART
table. Hence, a field procedure might be used to alter the sorting sequence of
values entered in a column. For example, telephone directories sometimes require
that names like “McCabe” and “MacCabe” appear next to each other, an effect that
the standard EBCDIC sorting sequence does not provide. Languages that do not
use the Roman alphabet have similar requirements. If a column is provided with a
suitable field procedure, it can be correctly ordered by ORDER BY.
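The telephone-directory effect can be sketched as an order-preserving encoding: the encoded value begins with a normalized sort key, so ORDER BY on the encoded column groups the names, and the original string is carried along so that decoding recovers it exactly. Everything here is invented for illustration; a real field procedure operates on the CVD and FVD in assembler.

```python
# Toy field procedure for directory ordering: the encoded value starts
# with a normalized key (Mc -> Mac, case-folded) so that "McCabe" and
# "MacCabe" sort together; the original string is kept for decoding.

def field_encode(name):
    key = name.lower()
    if key.startswith("mc"):
        key = "mac" + key[2:]
    return key + "|" + name     # the index and ORDER BY see this value

def field_decode(encoded):
    return encoded.split("|", 1)[1]

names = ["Mabry", "McCabe", "Zola", "MacCabe"]
ordered = [field_decode(e) for e in sorted(field_encode(n) for n in names)]
```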
PSPI
Related concepts:
General guidelines for writing exit routines
PSPI
The data type of the encoded value can be any valid SQL data type except DATE,
TIME, TIMESTAMP, LONG VARCHAR, or LONG VARGRAPHIC. The length,
precision, or scale of the encoded value must be compatible with its data type.
A user-defined data type can be a valid field if the source type of the data type is a
short string column that has a null default value. Db2 casts the value of the
column to the source type before it passes it to the field procedure.
PSPI
Related reference:
Value descriptor for field procedures
PSPI
Procedure
Issue the CREATE TABLE or ALTER TABLE statement with the FIELDPROC
clause.
The optional parameter list that follows the procedure name is a list of constants,
enclosed in parentheses, called the literal list. The literal list is converted by Db2
into a data structure called the field procedure parameter value list (FPPVL). That
structure is passed to the field procedure during the field-definition operation. At
that time, the procedure can modify it or return it unchanged. The output form of
the FPPVL is called the modified FPPVL. The modified FPPVL is stored in the Db2
catalog as part of the field description. The modified FPPVL is passed again to the
field procedure whenever that procedure is invoked for field-encoding or
field-decoding.
A field procedure is invoked in three general situations:
v For field-definition, when the CREATE TABLE or ALTER TABLE statement that
names the procedure is executed. During this invocation, the procedure is
expected to:
– Determine whether the data type and attributes of the column are valid.
– Verify the literal list, and change it if wanted.
– Provide the field description of the column.
– Define the amount of working storage needed by the field-encoding and
field-decoding processes.
v For field-encoding, when a column value is to be field-encoded.
v For field-decoding, when a stored value is to be field-decoded.
A field procedure is never invoked to process a null value, nor for a DELETE
operation without a WHERE clause on a table in a segmented table space.
The following diagram shows those areas. The FPPL and the areas are described
by the mapping macro DSNDFPPB.

FPPL:
  FPIB address   -->  Field procedure information block (FPIB)
  CVD address    -->  Column value descriptor (CVD)
  FVD address    -->  Field value descriptor (FVD)
  FPPVL address  -->  Field procedure parameter value list (FPPVL)
                      or literal list
The work area can be used by a field procedure as working storage. A new area is
provided each time the procedure is invoked. The size of the area that you need
depends on the way you program your field-encoding and field-decoding
operations.
At field-definition time, Db2 allocates a 512-byte work area and passes the value of
512 bytes as the work area size to your routine for the field-definition operation. If
subsequent field-encoding and field-decoding operations need a work area of 512
bytes or less, your field-definition operation does not need to change the value
provided by Db2. If those operations need a work area larger than 512 bytes (for
example, 1024 bytes), your field-definition operation must change the work area
size to the larger size and pass it back to Db2 for allocation.
The information block tells what operation is to be done, allows the field
procedure to signal errors, and gives the size of the work area. It has the following
formats:
Table 74. Format of FPIB, defined in copy macro DSNDFPPB

Name      Hex offset  Data type              Description
FPBFCODE  0           Signed 2-byte integer  Function code:
                                             0 = Field-encoding
                                             4 = Field-decoding
                                             8 = Field-definition
FPBWKLN   2           Signed 2-byte integer  Length of work area; the maximum
                                             is 32767 bytes.
FPBSORC   4           Signed 2-byte integer  Reserved
FPBRTNC   6           Character, 2 bytes     Return code set by field procedure
FPBRSNCD  8           Character, 4 bytes     Reason code set by field procedure
FPBTOKPT  C           Address                Address of a 40-byte area, within
                                             the work area or within the field
                                             procedure's static area, containing
                                             an error message
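Because the FPIB fields sit at fixed offsets, the block can be mapped with a short Python sketch. The sample bytes are fabricated for illustration; on z/OS the block is big-endian, and FPBTOKPT is a 4-byte address.

```python
import struct

def parse_fpib(block: bytes) -> dict:
    """Unpack the 16-byte FPIB header laid out by DSNDFPPB:
    function code, work-area length, reserved halfword, return code,
    reason code, and the error-message token address."""
    fcode, wklen, _rsvd, rtnc, rsncd, tokpt = struct.unpack(
        ">hhh2s4sI", block[:16]
    )
    return {
        "FPBFCODE": fcode,    # 0=encode, 4=decode, 8=definition
        "FPBWKLN": wklen,     # work-area length, maximum 32767
        "FPBRTNC": rtnc,      # return code set by the field procedure
        "FPBRSNCD": rsncd,    # reason code set by the field procedure
        "FPBTOKPT": tokpt,    # address of 40-byte error-message area
    }

# A fabricated field-definition call (function code 8, 512-byte work area):
sample = struct.pack(">hhh2s4sI", 8, 512, 0, b"  ", b"    ", 0)
fpib = parse_fpib(sample)
```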
At that time, the field procedure can reformat the FPPVL; it is the reformatted
FPPVL that is stored in SYSIBM.SYSFIELDS and communicated to the field
procedure during field-encoding and field-decoding as the modified FPPVL.
The column value descriptor (CVD) contains a description of a column value and, if
appropriate, the value itself. During field-encoding, the CVD describes the value to
be encoded. During field-decoding, it describes the decoded value to be supplied
by the field procedure. During field-definition, it describes the column as defined
in the CREATE TABLE or ALTER TABLE statement.
The field value descriptor (FVD) contains a description of a field value and, if
appropriate, the value itself. During field-encoding, the FVD describes the encoded
value to be supplied by the field procedure. During field-decoding, it describes the
value to be decoded. Field-definition must put into the FVD a description of the
encoded value.
Related reference:
Field-definition for field procedures
On entry
The input that is provided to the field-definition operation and the output that is
required are as follows:
The contents of all other registers, and of fields not listed in the following tables,
are unpredictable.
The FPVDVALE field is omitted. The FVD provided is 4 bytes long. The FPPVL
field has the information:
Table 80. Contents of the FPPVL on entry
Field Contains
FPPVLEN The length, in bytes, of the area containing the parameter value list. The
minimum value is 254, even if there are no parameters.
FPPVCNT The number of value descriptors that follow; zero if there are no
parameters.
FPPVVDS A contiguous set of value descriptors, one for each parameter in the
parameter value list, each preceded by a 4-byte length field.
On exit
Field FPVDVALE must not be set; the length of the FVD is 4 bytes only.
The FPPVL can be redefined to suit the field procedure, and returned as the
modified FPPVL, subject to the following restrictions:
v The field procedure must not increase the length of the FPPVL.
v FPPVLEN must contain the actual length of the modified FPPVL, or 0 if no
parameter list is returned.
The modified FPPVL is recorded in the catalog table SYSIBM.SYSFIELDS, and is
passed again to the field procedure during field-encoding and field-decoding. The
modified FPPVL need not have the format of a field procedure parameter list, and
it need not describe constants by value descriptors.
The input that is provided to the field-encoding operation, and the output that is
required, are as follows:
The contents of all other registers, and of fields not listed, are unpredictable.
The work area is contiguous, uninitialized, and of the length specified by the field
procedure during field-definition.
On exit
The FVD must contain the encoded (field) value in field FPVDVALE. If the value
is a varying-length string, the first halfword must contain its length.
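The length-prefix convention can be sketched in Python, assuming the big-endian halfword layout used on z/OS; the helper names are illustrative.

```python
import struct

def pack_varying(value: bytes) -> bytes:
    """Prefix a varying-length field value with a 2-byte big-endian
    length halfword, as FPVDVALE expects for varying-length strings."""
    return struct.pack(">H", len(value)) + value

def unpack_varying(field: bytes) -> bytes:
    """Recover the value from its length-prefixed form."""
    (length,) = struct.unpack(">H", field[:2])
    return field[2 : 2 + length]

encoded = pack_varying(b"MACCABE")  # halfword length 7, then the bytes
```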
On entry
The contents of all other registers, and of fields not listed, are unpredictable.
The work area is contiguous, uninitialized, and of the length specified by the field
procedure during field-definition.
On exit
The CVD must contain the decoded (column) value in field FPVDVALE. If the
value is a varying-length string, the first halfword must contain its length.
The routine receives data when Db2 writes data to the active log. Your local
specifications determine what the routine does with that data. The routine does not
enter or return data to Db2.
Performance factor: Your log capture routine receives control often. Design it with
care: a poorly designed routine can seriously degrade system performance.
Whenever possible, use the instrumentation facility interface (IFI), rather than a log
capture exit routine, to read data from the log.
The information in “General guidelines for writing exit routines” applies, but with
the following exception to the description of execution environments:
A log capture routine can execute in either TCB mode or SRB mode, depending
on the function that it is performing. When in SRB mode, it must not perform
any I/O operations or invoke any SVC services or ESTAE routines.
The module name for the log capture routine is DSNJL004, and its entry point is
DSNJW117. The module is loaded during Db2 initialization and deleted during
Db2 termination.
Procedure
A log control interval can be passed more than once. Use the timestamp to
determine the last occurrence of the control interval. This last occurrence should
replace all others. The timestamp is found in the control interval.
You can enable dynamic plan allocation by using one of the following techniques:
v Use Db2 packages and versioning to manage the relationship between CICS
transactions and Db2 plans. This technique can help minimize plan outage time,
processor time, and catalog contention.
v Use a dynamic plan exit routine to determine the plan to use for each CICS
transaction.
Related concepts:
Connection routines and sign-on routines (Managing Security)
Access control authorization exit routine (Managing Security)
Even though Db2 has functional recovery routines of its own, you can establish
your own functional recovery routine (FRR), specifying MODE=FULLXM and
With some exceptions, which are noted under “General Considerations” in the
description of particular types of routine, the execution environment is:
v Supervisor state
v Enabled for interrupts
v PSW key 7
v No MVS locks held
v For local requests, under the TCB of the application program that requested the
Db2 connection
v For remote requests, under a TCB within the Db2 distributed data facility
address space
v 31-bit addressing mode
v Cross-memory mode
In cross-memory mode, the current primary address space is not equal to the
home address space. Therefore, you cannot use some z/OS macro services at all,
and you can use others only with restrictions. For more information about
cross-memory restrictions for macro instructions, which macros can be used
fully, and the complete description of each macro, refer to the appropriate z/OS
publication.
The following registers are set when Db2 passes control to an exit routine:

Table 98. Contents of registers when Db2 passes control to an exit routine

Register  Contains
1         Address of a pointer to the exit parameter list. For a field
          procedure, the address is that of the field procedure parameter
          list.
13        Address of the register save area.
14        Return address.
15        Address of the entry point of the exit routine.
The parameter list for the log capture exit routine consists of two 64-bit pointers.
The parameter list for all other exit routines consists of two 31-bit pointers.
Register 1 points to the address of parameter list EXPL, described by macro
DSNDEXPL. The field that follows points to a second parameter list, which differs
for each type of exit routine.
Register 1 -->  Address of EXPL parameter list
                Address of exit-specific parameter list

Figure 72. Use of register 1 on invoking an exit routine. (Field procedures and
translate procedures do not use the standard exit-specific parameter list.)
The following is a list of the EXPL parameters. Its description is given by macro
DSNDEXPL:

Table 99. Contents of EXPL parameter list

Name      Hex offset  Data type              Description
EXPLWA    0           Address                Address of a work area to be used
                                             by the routine
EXPLWL    4           Signed 4-byte integer  Length of the work area. The value
                                             is:
                                             2048 for connection routines and
                                             sign-on routines
                                             512 for date and time routines and
                                             translate procedures (see Note 1)
                                             256 for edit, validation, and log
                                             capture routines
EXPLRSV1  8           Signed 2-byte integer  Reserved
EXPLRC1   A           Signed 2-byte integer  Return code
EXPLRC2   C           Signed 4-byte integer  Reason code
EXPLARC   10          Signed 4-byte integer  Used only by connection routines
                                             and sign-on routines
EXPLSSNM  14          Character, 8 bytes     Used only by connection routines
                                             and sign-on routines
EXPLCONN  1C          Character, 8 bytes     Used only by connection routines
                                             and sign-on routines
You cannot specify edit procedures for any table that contains a LOB column. You
cannot define edit procedures as WITH ROW ATTRIBUTES for any table that
contains a ROWID column. In addition, LOB values are not available to validation
procedures; indicator columns and ROWID columns represent LOB columns as
input to a validation procedure.
This byte is X'00' if the column value is not null; it is X'FF' if the value is null. This
extra byte is included in the column length attribute (parameter FFMTFLEN).
Example: The sample project activity table has five fixed-length columns. The first
two columns do not allow nulls; the last three do.

Table 100. A row in fixed-length format

Column 1  Column 2  Column 3  Column 4   Column 5
MA2100    10        00 0.5    00 820101  00 821101
The following table shows a row of the sample department table in basic row
format. The first value in the DEPTNAME column indicates the column length as a
hexadecimal value.

Table 101. A varying-length row in basic row format in the sample department table

DEPTNO  DEPTNAME               MGRNO      ADMRDEPT  LOCATION
C01     12 Information center  00 000030  A00       00 New York

Varying-length columns have no gaps after them. Hence, columns that appear after
varying-length columns are at variable offsets in the row. To get to such a column,
you must scan the columns sequentially after the first varying-length column. An
empty string has a length of zero with no data following.
ROWID and indicator columns are treated like varying-length columns. Row IDs
are VARCHAR(17). A LOB indicator column is VARCHAR(4), and an XML
indicator column is VARCHAR(6). An indicator column is stored in a base table in
place of a LOB or XML column, and indicates whether the LOB or XML value for
the column is null or zero length.
In reordered row format, if a table has any varying-length columns, all
fixed-length columns are placed at the beginning of the row, followed by the
offsets to the varying-length columns, followed by the values of the
varying-length columns.
The following table shows the same row of the sample department table, but in
reordered row format. The value in the offset column indicates the offset value as a
hexadecimal value.
Table 102. A varying-length row in reordered row format in the sample department table

DEPTNO  MGRNO      ADMRDEPT  LOCATION     Offset column  DEPTNAME
C01     00 000030  A00       00 New York  20             Information center
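As a rough illustration, the reordered layout can be sketched in Python. The 2-byte offset fields, the sample values, and the omission of null indicators are simplifications; this is not the exact on-disk encoding.

```python
import struct

def build_rrf_row(fixed, varying):
    """Illustrative reordered-row-format layout: all fixed-length
    values first, then one 2-byte offset per varying-length column,
    then the varying-length values themselves."""
    header = b"".join(fixed)
    offsets, values = [], b""
    pos = len(header) + 2 * len(varying)  # first varying value starts here
    for v in varying:
        offsets.append(struct.pack(">H", pos))
        pos += len(v)
        values += v
    return header + b"".join(offsets) + values

# Fixed columns (DEPTNO, MGRNO, ADMRDEPT), then varying (LOCATION, DEPTNAME):
row = build_rrf_row([b"C01", b"000030", b"A00"],
                    [b"New York", b"Information center"])
```

The point of the layout is that every fixed-length column, and every offset field, sits at a known position, so a column can be located without scanning earlier varying-length values.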
The following table shows how the row would look in storage if nulls were
allowed in DEPTNAME. The first value in the DEPTNAME column indicates the
column length as a hexadecimal value.
Table 103. A varying-length row in basic row format in the sample department table

DEPTNO  DEPTNAME                 MGRNO      ADMRDEPT  LOCATION
C01     0013 Information center  00 000030  A00       00 New York
An empty string has a length of one, a X'00' null indicator, and no data following.
If you write new edit and validation routines on tables with rows in basic row
format (BRF) or reordered row format (RRF), make sure that EDITPROCs and
VALIDPROCs are coded to check RFMTTYPE and handle both BRF and RRF
formats.
Procedure
To convert a table space to reordered row format, complete the following
steps for each table that has an edit routine:
1. Use the UNLOAD utility to unload data from the table or tables that have edit
routines.
2. Use the DROP statement to drop the table or tables that have edit routines.
3. Make any necessary modifications to the edit routines so that they can be used
with rows in reordered row format.
4. Use the REORG utility to reorganize the table space. Using the REORG utility
converts the table space to reordered row format.
5. Re-create tables with your modified edit routines. Also, re-create any additional
related objects, such as indexes and check constraints.
Procedure
To convert a table space to reordered row format, complete the following
steps for each table that has a validation routine:
1. Use the ALTER TABLE statement to alter the validation routine to NULL.
2. Run the REORG utility or the LOAD REPLACE utility to convert the table
space to reordered row format.
3. Make any necessary modifications to the validation routine so that it can be
used with rows in reordered row format.
4. Use the ALTER TABLE statement to add the modified validation routine to the
converted table.
If the Db2 subsystem parameter RRF is set to ENABLE, the table space is
converted from basic row format to reordered row format when you run the
LOAD REPLACE utility or the REORG TABLESPACE utility. (The default setting
for the RRF subsystem parameter is ENABLE.) If the RRF subsystem parameter is
set to DISABLE, the table space is not converted. Therefore, if the table space was
in basic row format before running the LOAD REPLACE utility or the REORG
TABLESPACE utility, the table space remains in basic row format. Likewise, if the
table space was in reordered row format before running either of these utilities, the
table space remains in reordered row format.
Exceptions:
v LOB table spaces and table spaces in the catalog and directory databases always
remain in basic row format, regardless of the RRF subsystem parameter setting,
or the setting of the ROWFORMAT keyword for the utility. (The ROWFORMAT
keyword specifies the output row format in a table space or partition. This
keyword overrides the existing RRF setting when specified.)
v XML table spaces always remain in reordered row format, regardless of the RRF
subsystem parameter setting or the utility keyword setting.
v For universal table spaces that are cloned, both the base table space and the
clone table space remain in the same format as when they were created,
regardless of the RRF subsystem parameter setting or the utility keyword
setting.
v When multiple data partitions are affected by the LOAD REPLACE utility or the
REORG TABLESPACE utility, and some of the partitions are in basic row format
and some are in reordered row format, the utilities convert every partition to
reordered row format. This behavior is the default, regardless of the RRF
subsystem parameter setting. Alternatively, you can specify ROWFORMAT BRF in
the utility statement to keep the affected partitions in basic row format.
Example
To convert an existing table space from reordered row format to basic row format,
run REORG TABLESPACE ROWFORMAT BRF against the table space. To keep the
table space in basic row format on subsequent executions of the LOAD REPLACE
utility or the REORG TABLESPACE utility, continue to specify ROWFORMAT BRF
in the utility statement. Alternatively, you can set the RRF subsystem parameter to
DISABLE.
Related reference:
REORDERED ROW FORMAT field (RRF subsystem parameter) (Db2
Installation and Migration)
REORG TABLESPACE (Db2 Utilities)
LOAD (Db2 Utilities)
The following table shows the TIMESTAMP format, which consists of 7 to 13 total
bytes.

Table 106. TIMESTAMP format

Year     Month   Day     Hours   Minutes  Seconds  Partial second
2 bytes  1 byte  1 byte  1 byte  1 byte   1 byte   0 to 6 bytes
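As an illustration, the field widths above can be decoded in Python, assuming each byte holds two packed-decimal (BCD) digits. The byte widths match the table, but treat the BCD assumption and the sample bytes as illustrative rather than as the authoritative internal encoding.

```python
def bcd(bs: bytes) -> int:
    """Interpret bytes as packed-decimal digits, two per byte."""
    n = 0
    for b in bs:
        n = n * 100 + ((b >> 4) * 10) + (b & 0x0F)
    return n

def decode_timestamp(raw: bytes) -> str:
    """Split a 7- to 13-byte TIMESTAMP value into the fields shown in
    Table 106, assuming packed-decimal digits (an assumption)."""
    year = bcd(raw[0:2])
    month, day, hh, mm, ss = (bcd(raw[i:i + 1]) for i in range(2, 7))
    frac = bcd(raw[7:])  # 0 to 6 bytes of partial seconds
    return f"{year:04d}-{month:02d}-{day:02d} {hh:02d}:{mm:02d}:{ss:02d}.{frac}"

ts = decode_timestamp(bytes([0x20, 0x24, 0x01, 0x15, 0x10, 0x30, 0x45, 0x12]))
```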
DSNDROW defines the columns in the order as they are defined in the CREATE
TABLE statement or possibly the ALTER TABLE statement. For rows in the
reordered row format, the new column order in DSNDROW does not necessarily
correspond to the order in which the columns are stored in the row. The following
is the general row description:
Table 107. Description of a row format

Name      Hex offset  Data type                Description
RFMTNFLD  0           Signed fullword integer  Number of columns in a row
RFMTAFLD  4           Address                  Address of a list of column
                                               descriptions
RFMTTYPE  8           Character, 1 byte        Row type:
                                               X'00' = row with fixed-length
                                               columns
                                               X'04' = row with varying-length
                                               columns in basic row format
                                               X'08' = row with varying-length
                                               columns in reordered row format
          9           Character, 3 bytes       Reserved
To retrieve numeric data in its original form, you must Db2-decode it according to
its data type:
Table 110. Db2 decoding procedure according to data type

Data type  Db2 decoding procedure
SMALLINT   Invert the sign bit (high-order bit).
           Value  Meaning
           8001   0001 (+1 decimal)
           7FF3   FFF3 (-13 decimal)
INTEGER    Invert the sign bit (high-order bit).
           Value     Meaning
           800001F2  000001F2 (+498 decimal)
           7FFFFF85  FFFFFF85 (-123 decimal)
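The decoding procedure in Table 110 can be sketched in Python; the sample values are the ones shown in the table.

```python
def decode_sortable_int(raw: bytes) -> int:
    """Undo the order-preserving encoding for SMALLINT and INTEGER:
    invert the sign (high-order) bit, then reinterpret the result as a
    two's-complement signed value of the same width."""
    width = len(raw)  # 2 for SMALLINT, 4 for INTEGER
    value = int.from_bytes(raw, "big") ^ (1 << (width * 8 - 1))
    if value >> (width * 8 - 1):          # high bit still set: negative
        value -= 1 << (width * 8)
    return value

# The examples from Table 110:
assert decode_sortable_int(bytes.fromhex("8001")) == 1
assert decode_sortable_int(bytes.fromhex("7FF3")) == -13
assert decode_sortable_int(bytes.fromhex("800001F2")) == 498
assert decode_sortable_int(bytes.fromhex("7FFFFF85")) == -123
```

Inverting the sign bit is what makes the encoded values sort correctly as unsigned byte strings, which is why the decoding step is needed to recover the original numbers.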
For the complete list of stored procedures that are provided with Db2 for z/OS,
see Procedures that are supplied with Db2 (Db2 SQL).
Related concepts:
Migration step 22: Configure Db2 for running stored procedures and
user-defined functions (optional) (Db2 Installation and Migration)
Supplied stored procedures for utility operations (Db2 Utilities)
Db2-supplied stored procedures for application programming (Db2
Application programming and SQL)
Related tasks:
Migration step 23: Set up Db2-supplied routines (Db2 Installation and
Migration)
Installing Db2-supplied routines during installation (Db2 Installation and
Migration)
The Common SQL API is a solution-level API that supports common tooling across
IBM data servers. This Common SQL API ensures that tooling does not break
when a data server is upgraded, and it notifies the caller when an upgrade to
tooling is available to capitalize on new data server functionality. Applications that
support more than one IBM data server will benefit from using the Common SQL
API, as it lowers the complexity of implementation. Such applications typically
perform a variety of common administrative functions. For example, you can use
these stored procedures to retrieve data server configuration information, return
system information about the data server, and return the short message text for an
SQLCODE.
The Common SQL API includes the following stored procedures that are supplied
with Db2:
v GET_CONFIG stored procedure (Db2 SQL)
v GET_MESSAGE stored procedure (Db2 SQL)
v GET_SYSTEM_INFO stored procedure (Db2 SQL)
v SET_PLAN_HINT stored procedure (Db2 SQL)
The version of all three of these documents remains in sync when you call a stored
procedure. For example, if you call the GET_SYSTEM_INFO stored procedure and
specify the major_version parameter as 1 and the minor_version parameter as 1, the
XML input, XML output, and XML message documents will be Version 1.1
documents.
If the XML input document in the xml_input parameter specifies the Document Type
Major Version and Document Type Minor Version keys, the value for those keys
must be equal to the values that you specified in the major_version and
minor_version parameters, or an error (+20458) is raised.
The XML input document consists of a set of entries that are common to all stored
procedures, and a set of entries that are specific to each stored procedure. The
XML input document has the following general structure:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Document Type Name</key><string>Data Server Message Input</string>
<key>Document Type Major Version</key><integer>1</integer>
<key>Document Type Minor Version</key><integer>0</integer>
<key>Document Locale</key><string>en_US</string>
<key>Complete</key><false/>
<!-- Document type specific data appears here. -->
</dict>
</plist>
The Document Type Name key varies depending on the stored procedure. This
example shows an XML input document for the GET_MESSAGE stored procedure.
In addition, the values of the Document Type Major Version and Document Type
Minor Version keys depend on the values that you specified in the major_version
and minor_version parameters for the stored procedure.
If the stored procedure is not running in Complete mode, you must specify the
Document Type Name key, the required parameters, and any optional parameters
that you want to specify. Specifying the Document Type Major Version and
Document Type Minor Version keys are optional. If you specify the Document Type
Major Version and Document Type Minor Version keys, the values must be the
same as the values that you specified in the major_version and minor_version
parameters. You must either specify both or omit both of the Document Type Major
Version and Document Type Minor Version keys. Specifying the Document Locale
key is optional. If you specify the Document Locale key, the value is ignored.
Important: XML input documents must be encoded in UTF-8 and contain only
English characters.
If the Complete key is included and you set the value to true, the stored procedure
will run in Complete mode, and all other entries in the XML input document will
be ignored. The following example shows the minimal XML input document that
is required for a stored procedure (here, GET_MESSAGE) to run in Complete
mode:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Document Type Name</key><string>Data Server Message Input</string>
<key>Complete</key><true/>
</dict>
</plist>
All entries in the returned XML input document can be rendered and changed in
ways that are independent of the operating system or data server. Subsequently,
the modified XML input document can be passed in the xml_input parameter in a
new call to the same stored procedure. This enables you to programmatically
create valid xml_input documents.
At a minimum, the XML output documents that are returned in the xml_output
parameter include the following key and value pairs, followed by information that
is specific to each stored procedure:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Document Type Name</key>
<string>Data Server Configuration Output</string>
<key>Document Type Major Version</key><integer>1</integer>
<key>Document Type Minor Version</key><integer>0</integer>
<key>Data Server Product Name</key><string>DSN</string>
<key>Data Server Product Version</key><string>9.1.5</string>
<key>Data Server Major Version</key><integer>9</integer>
<key>Data Server Minor Version</key><integer>1</integer>
<key>Data Server Platform</key><string>z/OS</string>
<key>Document Locale</key><string>en_US</string>
<!-- Document type specific data appears here. -->
</dict>
</plist>
The Document Type Name key varies depending on the stored procedure. This
example shows an XML output document for the GET_CONFIG stored procedure.
In addition, the values of the Document Type Major Version and Document Type
Minor Version keys depend on the values that you specified in the major_version
and minor_version parameters for the stored procedure.
Entries in the XML output document are grouped by using nested dictionaries.
Each entry in the XML output document describes a single piece of information. In
general, an entry consists of a Display Name, a Value, and a Hint, as shown in the
following example:
<key>SQL Domain</key>
<dict>
<key>Display Name</key>
XML output documents are generated in UTF-8 and contain only English
characters.
To filter the output, specify a valid XPath query string in the xml_filter parameter
of the stored procedure.
The following restrictions apply to the XPath expression that you specify:
v The XPath expression must reference a single value.
v The XPath expression must always be absolute from the root node. For example,
the following path expressions are allowed: /, nodename, ., and .. The following
expressions are not allowed: // and @
v The only predicates allowed are [path='value'] and [n].
v The only axis allowed is following-sibling.
v The XPath expression must end with one of the following, and, if necessary, be
appended with the predicate [1]: following-sibling::string,
following-sibling::data, following-sibling::date, following-sibling::real,
or following-sibling::integer.
v Unless the axis is found at the end of the XPath expression, it must be followed
by a ::dict, ::string, ::data, ::date, ::real, or ::integer, and if necessary, be
appended with the predicate [1].
v The only supported XPath operator is =.
v The XPath expression cannot contain a function, namespace, processing
instruction, or comment.
Tip: If the stored procedure operates in complete mode, do not apply filtering, or
an SQLCODE (+20458) is raised.
Example: The following XPath expression selects the value for the Data Server
Product Version key from an XML output document:
/plist/dict/key[.='Data Server Product Version']/following-sibling::string[1]
The stored procedure returns the string 9.1.5 in the xml_output parameter if the
value of the Data Server Product Version is 9.1.5. Therefore, the stored procedure
call returns a single value rather than an XML document.
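Most standard XPath libraries do not evaluate the following-sibling axis against a plist directly, but the documented expression can be emulated. The following Python sketch, in which the helper name and the trimmed sample document are illustrative, walks the children of the dict element and returns the string element that follows the matching key:

```python
import xml.etree.ElementTree as ET

DOC = """<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
<key>Data Server Product Name</key><string>DSN</string>
<key>Data Server Product Version</key><string>9.1.5</string>
</dict></plist>"""

def value_after_key(xml_text: str, key: str) -> str:
    """Emulate /plist/dict/key[.='...']/following-sibling::string[1]."""
    # Encode first: ElementTree rejects str input that carries an
    # encoding declaration.
    children = list(ET.fromstring(xml_text.encode("utf-8")).find("dict"))
    for i, elem in enumerate(children):
        if elem.tag == "key" and elem.text == key and i + 1 < len(children):
            return children[i + 1].text
    raise KeyError(key)

version = value_after_key(DOC, "Data Server Product Version")  # "9.1.5"
```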
An XML message document contains key and value pairs followed by details
about an SQL warning condition. The general structure of an XML message
document is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Document Type Name</key>
<string>Data Server Message</string>
<key>Document Type Major Version</key><integer>1</integer>
<key>Document Type Minor Version</key><integer>0</integer>
<key>Data Server Product Name</key><string>DSN</string>
<key>Data Server Product Version</key><string>9.1.5</string>
<key>Data Server Major Version</key><integer>9</integer>
<key>Data Server Minor Version</key><integer>1</integer>
<key>Data Server Platform</key><string>z/OS</string>
<key>Document Locale</key><string>en_US</string>
--- Details about an SQL warning condition are included here. ---
</dict>
</plist>
XML message documents are generated in UTF-8 and contain only English
characters.
Procedure
Current Db2 11 for z/OS publications are available from the following websites:
http://www-01.ibm.com/support/docview.wss?uid=swg27039165
Links to IBM Knowledge Center and the PDF version of each publication are
provided.
Db2 for z/OS publications are also available for download from the IBM
Publications Center (http://www.ibm.com/shop/publications/order).
In addition, books for Db2 for z/OS are available on a CD-ROM that is included
with your product shipment:
v Db2 11 for z/OS Licensed Library Collection, LK5T-8882, in English. The
CD-ROM contains the collection of books for Db2 11 for z/OS in PDF format.
Periodically, IBM refreshes the books on subsequent editions of this CD-ROM.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
The performance data and client examples cited are presented for illustrative
purposes only. Actual performance results may vary depending on specific
configurations and operating conditions.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as shown below:
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered marks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at "Copyright and
trademark information" at: http://www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
Applicability: These terms and conditions are in addition to any terms of use for
the IBM website.
Personal use: You may reproduce these publications for your personal,
noncommercial use provided that all proprietary notices are preserved. You may
not distribute, display or make derivative work of these publications, or any
portion thereof, without the express consent of IBM.
Commercial use: You may reproduce, distribute and display these publications
solely within your enterprise provided that all proprietary notices are preserved.
You may not make derivative works of these publications, or reproduce, distribute
or display these publications or any portion thereof outside your enterprise,
without the express consent of IBM.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
This Software Offering does not use cookies or other technologies to collect
personally identifiable information.
If the configurations deployed for this Software Offering provide you as customer
the ability to collect personally identifiable information from end users via cookies
and other technologies, you should seek your own legal advice about any laws
applicable to such data collection, including any requirements for notice and
consent.
For more information about the use of various technologies, including cookies, for
these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy, IBM’s
Online Privacy Statement at http://www.ibm.com/privacy/details (in the section
entitled “Cookies, Web Beacons and Other Technologies”), and the “IBM Software
Products and Software-as-a-Service Privacy Statement” at
http://www.ibm.com/software/info/product-privacy.
Glossary
The glossary is available in IBM Knowledge Center.
See the Glossary topic for definitions of Db2 for z/OS terms.
Printed in USA
SC19-4050-07