DB2 Universal Database for z/OS
Version 8
Utility Guide and Reference
SC18-7427-00
Note
Before using this information and the product it supports, be sure to read the general information under “Notices” on page
881.
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
| Chapter 6. CATENFM . . . . . . . . . . . . . . . . . . . . . . 51
Chapter 7. CATMAINT . . . . . . . . . . . . . . . . . . . . . . 53
Concurrency and compatibility for REORG TABLESPACE . . . . . . . . 464
Reviewing REORG TABLESPACE output . . . . . . . . . . . . . . . 468
After running REORG TABLESPACE . . . . . . . . . . . . . . . . 468
Effects of running REORG TABLESPACE . . . . . . . . . . . . . . 469
Sample REORG TABLESPACE control statements . . . . . . . . . . . 470
Chapter 42. DSN1PRNT . . . . . . . . . . . . . . . . . . . . . 747
Syntax and options of the DSN1PRNT control statement . . . . . . . . . 748
Before running DSN1PRNT . . . . . . . . . . . . . . . . . . . . 754
Sample DSN1PRNT control statements . . . . . . . . . . . . . . . 755
DSN1PRNT output . . . . . . . . . . . . . . . . . . . . . . . 756
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 885
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 919
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
About this book
This book contains usage information for the tasks of system administration,
database administration, and operation. It presents detailed information about using
utilities, specifying syntax (including keyword and parameter descriptions), and
starting, stopping, and restarting utilities. This book also includes job control
language (JCL) and control statements for each utility.
Important
In this version of DB2 UDB for z/OS, the DB2 Utilities Suite is available as an
optional product. You must separately order and purchase a license to such
utilities, and discussion of those utility functions in this publication is not
intended to otherwise imply that you have a license to them. See Chapter 2,
“DB2 utilities packaging,” on page 7 for packaging details.
Recommendation: Familiarize yourself with DB2 UDB for z/OS prior to using this
book.
When referring to a DB2 product other than DB2 UDB for z/OS, this information
uses the product’s full name to avoid ambiguity.
When you use a parameter for an object that is created by SQL statements (for
example, tables, table spaces, and indexes), identify the object by following the
SQL syntactical naming conventions. See the description for naming conventions in
DB2 SQL Reference.
See Part 4 (Volume 1) of DB2 Administration Guide for more information about
managing DB2 connections.
correlation-id
An identifier of 1 to 12 characters that identifies a process within an address
space connection. A correlation ID must begin with a letter.
A correlation ID can be one of the following values:
v The TSO logon identifier (for DSN processes that run in TSO foreground and
for CAF processes).
v The job name (for DSN processes that run in TSO batch).
v The PST#.PSBNAME (for IMS processes).
v The entry identifier.thread_number.transaction_identifier (for CICS
processes).
See Part 4 (Volume 1) of DB2 Administration Guide for more information about
correlation IDs.
If an optional item appears above the main path, that item has no effect on the
execution of the statement and is used only for readability.
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
If choosing one of the items is optional, the entire stack appears below the main
path.
If one of the items is the default, it appears above the main path and the
remaining choices are shown below.
v An arrow returning to the left, above the main line, indicates an item that can be
repeated.
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
v Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in all lowercase letters (for example, column-name).
They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols
are shown, you must enter them as part of the syntax.
Assistive technology products, such as screen readers, function with the DB2 UDB
for z/OS user interfaces. Consult the documentation for the assistive technology
products for specific information when you use assistive technology to access these
interfaces.
Online documentation for Version 8 of DB2 UDB for z/OS is available in the DB2
Information Center, which is an accessible format when used with assistive
technologies such as screen reader or screen magnifier software. The DB2
Information Center for z/OS solutions is available at the following Web site:
http://publib.boulder.ibm.com/infocenter/db2zhelp.
www.ibm.com/software/db2zos/library.html
This Web site has a feedback page that you can use to send comments.
v Print and fill out the reader comment form located at the back of this book. You
can give the completed form to your local IBM branch office or IBM
representative, or you can send it to the address printed on the reader comment
form.
DB2 UDB for z/OS, Version 8 changes to online utilities are included in the
following chapters:
Chapter 9, “CHECK INDEX,” on page 73
Chapter 10, “CHECK LOB,” on page 89
Chapter 11, “COPY,” on page 95
Chapter 16, “LOAD,” on page 183
Chapter 18, “MODIFY RECOVERY,” on page 285
Chapter 22, “REBUILD INDEX,” on page 321
Chapter 23, “RECOVER,” on page 341
Chapter 24, “REORG INDEX,” on page 375
Chapter 25, “REORG TABLESPACE,” on page 403
Chapter 26, “REPAIR,” on page 483
Chapter 27, “REPORT,” on page 509
Chapter 29, “RUNSTATS,” on page 535
Chapter 32, “UNLOAD,” on page 595
DB2 UDB for z/OS, Version 8 includes one new stand-alone utility, which is
described in the following chapter:
Chapter 34, “DSNJCNVB,” on page 655
DB2 UDB for z/OS, Version 8 changes to stand-alone utilities are included in the
following chapters:
Chapter 36, “DSNJU003 (change log inventory),” on page 659
Chapter 37, “DSNJU004 (print log map),” on page 679
Chapter 40, “DSN1COPY,” on page 709
Chapter 42, “DSN1PRNT,” on page 747
The following appendix is new for DB2 UDB for z/OS, Version 8:
Appendix F, “Delimited file format,” on page 875
The following appendixes have changed for DB2 UDB for z/OS, Version 8:
Appendix A, “Limits in DB2 UDB for z/OS,” on page 771.
Appendix B, “DB2-supplied stored procedures,” on page 777
Appendix C, “Resetting an advisory or restrictive status,” on page 831
Appendix D, “Running the productivity-aid sample programs,” on page 839
All technical changes to the text are indicated by vertical bars (|) in the left margin.
A process is represented to DB2 by a set of identifiers (IDs). What the process can
do with DB2 is determined by the privileges and authorities that can be held by its
identifiers. The phrase "privilege set of a process" means the entire set of privileges
and authorities that can be used by the process in a specific situation.
If you use the access control authorization exit routine, that exit routine might
control the authorization rules, rather than the rules that are documented for
each utility.
For detailed information about target object support, see the “Concurrency and
compatibility” section in each utility chapter.
You can populate table spaces whose data sets are not yet defined by using the
LOAD utility with the RESUME keyword, the REPLACE keyword, or both. Using
LOAD to populate these table spaces results in the following actions:
1. DB2 allocates the data sets.
2. DB2 updates the SPACE column in the catalog table to show that data sets
exist.
3. DB2 loads the specified table space.
For a partitioned table space, all partitions are allocated even if the LOAD utility is
loading only one partition. Avoid attempting to populate a partitioned table space
with concurrent LOAD PART jobs until after one of the jobs has caused all the data
sets to be created.
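For example, a minimal LOAD control statement that populates such a table space
by adding rows to the sample department table might look like the following sketch
(the table is the DB2 sample table; input records are read from the default SYSREC
data set):

LOAD DATA RESUME YES
  INTO TABLE DSN8810.DEPT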
| The following online utilities issue informational message DSNU185I when a table
| space or index space with the DEFINE NO attribute is encountered. The object is
| not processed.
| v CHECK DATA
| v CHECK INDEX
| v COPY
| v MERGECOPY
| v MODIFY RECOVERY
| v QUIESCE
| v REBUILD INDEX
Online utilities that encounter an undefined target object might issue informational
message DSNU185I, but processing continues.
You cannot use stand-alone utilities on objects whose data sets have not been
defined.
| However, running any of the following utilities on encrypted data might produce
| unexpected results:
| v CHECK DATA
| v LOAD
| v REBUILD INDEX
| v REORG TABLESPACE
| v REPAIR
| v RUNSTATS
| v UNLOAD
| v DSN1PRNT
| For more information about how each of these utilities processes encrypted data,
| see the individual chapters for each utility.
| All other utilities are available as a separate product called the DB2 Utilities Suite
| (5655-K61, FMIDs JDB881K and JDB881M), which includes the following utilities:
| v BACKUP SYSTEM
| v CHECK DATA
| v CHECK INDEX
| v CHECK LOB
| v COPY
| v COPYTOCOPY
| v EXEC SQL
| v LOAD
| v MERGECOPY
| v MODIFY RECOVERY
| v MODIFY STATISTICS
| v REBUILD INDEX
| v RECOVER
| v REORG INDEX
| v REORG TABLESPACE
| v RESTORE SYSTEM
| v RUNSTATS
| v STOSPACE
| v UNLOAD
All DB2 utilities operate on catalog, directory, and sample objects, without requiring
any additional products.
DB2 provides several jobs that invoke SMP/E. These jobs are on the tape or
cartridge that you received with the utility product. The job prologues in these jobs
contain directions on how to tailor the job for your site. Follow these directions
carefully to ensure that your DB2 UDB for z/OS SMP/E process works correctly. To
copy the jobs from the tapes, submit the copy job that is listed in DB2 Program
Directory.
The SMP/E APPLY job, DSNAPPLS, copies and link-edits the program modules,
macros, and procedures for both the DB2 Diagnostic and Recovery Utilities and the
DB2 Operational Utilities into the DB2 target libraries. Use job DSNAPPL1, which is
described in DB2 Installation Guide, as a guide to help you with the APPLY job.
The SMP/E ACCEPT job, DSNACCPS, copies the program modules, macros, and
procedures for both the DB2 Diagnostic and Recovery Utilities and the DB2
Operational Utilities into the DB2 distribution libraries. Use job DSNACEP1, which is
described in DB2 Installation Guide, as a guide to help you with the ACCEPT job.
With Version 7 and subsequent releases, each utility has separate load modules
and aliases. Table 1 lists the alias name and load module name or names for each
utility.
| Table 1. Relationship between utility names, aliases, and load modules
| Utility name                               Alias name   Load module name
| BACKUP SYSTEM and RESTORE SYSTEM           DSNU81AV     DSNU8RLV
| CATMAINT and CATENFM                       DSNU81AA     DSNU8CLA
| CHECK                                      DSNU81AB     DSNU8RLB
| COPY                                       DSNU81AC     DSNU8OLC or DSNU8RLC
| COPYTOCOPY                                 DSNU81AT     DSNU8RLT
| DIAGNOSE                                   DSNU81AD     DSNU8CLD
| EXEC SQL                                   DSNU81AU     DSNU8OLU
| LISTDEF                                    DSNU81AE     DSNU8CLE
| LOAD                                       DSNU81AF     DSNU8OLF
| MERGECOPY                                  DSNU81AG     DSNU8RLG
| MODIFY RECOVERY and MODIFY STATISTICS      DSNU81AH     DSNU8RLH
Creating utility control statements is the first step that is required to run an online
utility.
After creating the utility statements, use one of the following methods for invoking
the online utilities:
1. “Using the DB2 Utilities panel in DB2I” on page 25
2. “Using the DSNU CLIST command in TSO” on page 28
3. “Using the supplied JCL procedure (DSNUPROC)” on page 35
4. “Creating the JCL data set yourself by using the EXEC statement” on page 38
| 5. “Invoking utilities as a stored procedure (DSNUTILS)” on page 777 or
| “DSNUTILU stored procedure” on page 787
For the least involvement with JCL, use either the first or second method, and then
edit the generated JCL to alter or add necessary fields on the JOB or ROUTE cards
before submitting the job. Both of these methods require TSO, and the first method
also requires access to the DB2 Utilities Panel in DB2 Interactive (DB2I).
If you want to work with JCL or create your own JCL, choose the third or fourth
method.
To invoke online utilities from a DB2 application program, use the fifth method. For
more information about these stored procedures and other stored procedures that
are supplied by DB2, see Appendix B, “DB2-supplied stored procedures,” on page
777.
You can create the utility control statements with the ISPF/PDF edit function. After
they are created, save them in a sequential or partitioned data set.
The options that you can specify after the online utility name depend on which
online utility you use. You can specify more than one utility control statement in the
SYSIN stream. However, if any of the control statements returns a return code of 8
or greater, the subsequent statements in the job step are not executed.
To specify a utility option, specify the option keyword, followed by its associated
parameter or parameters, if any. The parameter value can be a keyword. You need
to enclose the values of some parameters in parentheses. The syntax diagrams for
utility control statements that are included in this book show parentheses where
they are required.
You can enter comments within the SYSIN stream. Comments must begin with two
hyphens (--) and are subject to the following rules:
v You must use two hyphens on the same line with no space between them.
v You can start comments wherever a space is valid, except within a delimiter
token.
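For example, the following SYSIN fragment shows a comment that precedes a
control statement (the table space is the sample table space that is used in the
COPY example later in this chapter):

-- Make an incremental image copy of the sample table space
COPY TABLESPACE DSN8D81A.DSN8S81D
     FULL NO
     SHRLEVEL CHANGE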
For input data sets, the online utilities use the logical record length (LRECL), the
record format (RECFM) and the block size (BLKSIZE) with which the data set was
created. Variable-spanned (VS) or variable-blocked-spanned (VBS) record formats
| are not allowed for utility input data sets. The only exception is for the LOAD utility,
| which accepts unloaded data in VBS format.
For both input and output data sets, the online utilities use the value that you
supply for the number of buffers (BUFNO), with a maximum of 99 buffers. The
default number of buffers is 20. The utilities set the number of channel programs
equal to the number of buffers. The parameters that specify the buffer size
(BUFSIZE) and the number of channel programs (NCP) are ignored. If you omit any
DCB parameters, the utilities choose default values.
Increasing the number of buffers (BUFNO) can result in an increase in real storage
utilization and page fixing below the 16-MB line.
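For example, to request 30 buffers for a LOAD input data set, you might code a DD
statement similar to the following sketch (the data set name is a placeholder):

//SYSREC   DD DSN=UTIL.LOAD.INPUT,DISP=OLD,DCB=BUFNO=30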
Restriction: DB2 does not support the undefined record format (RECFM=U) for any
data set.
Because you might need to restart a utility, take the following precautions when
defining the disposition of data sets:
v Use DISP=(NEW,CATLG,CATLG) or DISP=(MOD,CATLG) for data sets that you
want to retain.
v Use DISP=(MOD,DELETE,CATLG) for data sets that you want to discard after
utility execution.
v Use DISP=(NEW,DELETE) for DFSORT SORTWKnn data sets, or refer to
DFSORT Application Programming: Guide for alternatives.
v Do not use temporary data set names.
See Table 148 on page 778 and Table 149 on page 779 for information about the
default data dispositions that are specified for dynamically allocated data sets.
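The following JCL sketch applies these dispositions (the data set names are
placeholders):

//*  Image copy data set that is retained and cataloged
//SYSCOPY  DD DSN=UTIL.COPY1,DISP=(MOD,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)
//*  Work data set that is discarded after utility execution
//SYSUT1   DD DSN=UTIL.SYSUT1,DISP=(MOD,DELETE,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)
//*  DFSORT work data set
//SORTWK01 DD DSN=UTIL.SORTWK1,DISP=(NEW,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))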
| All other utilities ignore the row-level granularity. They check only for authorization
| to operate on the table space; they do not check row-level authorization. For more
| information about multilevel security, see Part 3 of DB2 Administration Guide.
| Restriction: You cannot use the DB2 Utilities panel in DB2I to submit a BACKUP
| SYSTEM job, a COPYTOCOPY job, a RESTORE SYSTEM job, or a COPY job for
| a list of objects (with or without the CONCURRENT keyword).
If your site does not have default JOB and ROUTE statements, you must edit the
JCL to define them. If you edit the utility job before submitting it, you must use the
ISPF editor and submit your job directly from the editor. Use the following
procedure:
1. Create the utility control statement for the online utility that you intend to
execute, and save it in a sequential or partitioned data set.
For example, the following utility control statement specifies that the COPY
utility is to make an incremental image copy of table space
DSN8D81A.DSN8S81D with a SHRLEVEL value of CHANGE:
COPY TABLESPACE DSN8D81A.DSN8S81D
FULL NO
SHRLEVEL CHANGE
For the rest of this example, suppose that you save the statement in the
default data set, UTIL.
2. From the ISPF Primary Option menu, select the DB2I menu.
3. On the DB2I menu, select the UTILITIES option. Items that you must specify
are highlighted on the DB2 Utilities panel, as shown in Figure 1. A note on the
panel indicates that the data set names panel is displayed when a utility
requires it.
4. Fill in field 1 with the function that you want to execute. In this example, you
want to submit the utility job, but you want to edit the JCL first, so specify
EDITJCL. After you edit the JCL, you do not need to return to this panel to
submit the job. Instead, type SUBMIT on the editor command line.
5. Ensure that Field 2 is a unique identifier for your utility job. The default value is
TEMP. In this example, that value is satisfactory; leave it as is.
6. Fill in field 3 with the utility that you want to run. To indicate REORG
TABLESPACE of a LOB table space, specify REORG LOB.
In this example, specify COPY.
7. Fill in field 4 if you want to use an input data set other than the default data
set. Unless you enclose the data set name between apostrophes, TSO adds
your user identifier as a prefix. In this example, specify UTIL, which is the
default data set.
8. Change field 5 if this job restarts a stopped utility or if you want to execute a
utility in PREVIEW mode. In this example, leave the default value, NO.
9. Specify in field 6 whether you are using LISTDEF statements or TEMPLATE
statements in this utility. If you specify YES for LISTDEF or TEMPLATE, DB2
displays the Control Statement Data Set Names panel, but the field entries are
optional.
10. Press Enter.
If LISTDEF YES or TEMPLATE YES is specified, the Control Statement Data Set
Names panel is displayed, as shown in Figure 3 on page 28.
The Data Set Names panel includes the following fields:

 Enter output data sets for local/current site for COPY, MERGECOPY,
 LOAD, or REORG:
  3 COPYDSN  ==> ABC
  4 COPYDSN2 ==>
 Enter output data sets for recovery site for COPY, LOAD, or REORG:
  5 RCPYDSN1 ==> ABC1
  6 RCPYDSN2 ==>
 Enter output data sets for REORG or UNLOAD:
  7 PUNCHDSN ==>
 PRESS: ENTER to process   END to exit   HELP for more information
If the Data Set Names panel is displayed, complete the following steps. If you do
not specify COPY, LOAD, MERGECOPY, REORG TABLESPACE, or UNLOAD in
field 3 of the DB2 Utilities panel, the Data Set Names panel is not displayed; skip
this procedure and continue with Figure 3 on page 28.
1. Fill in field 1 if you are running LOAD, REORG, or UNLOAD. For LOAD, you
must specify the data set name that contains the records that are to be loaded.
For REORG or UNLOAD, you must specify the unload data set. In this example,
you do not need to fill in field 1, because you are running COPY.
2. Fill in field 2 if you are running LOAD or REORG with discard processing, in
which case you must specify a discard data set. In this example, you do not
need to fill in field 2, because you are running COPY.
3. Fill in field 3 with the primary output data set name for the local site if you are
running COPY, LOAD, or REORG, or with the current site if you are running
MERGECOPY. The DD name that the panel generates for this field is
SYSCOPY. This is an optional field for LOAD and for REORG with SHRLEVEL
NONE; this field is required for COPY, for MERGECOPY, and for REORG with
SHRLEVEL REFERENCE or CHANGE. In this example, the primary output data
set name for the local site is ABC.
4. Fill in field 4 with the backup output data set name for the local site if you are
running COPY, LOAD, or REORG, or the current site if you are running
MERGECOPY. The DD name that the panel generates for this field is
SYSCOPY2. This is an optional field. In this example, you do not need to fill in
field 4.
5. Fill in field 5 with the primary output data set for the recovery site if you are
running COPY, LOAD, or REORG. The DD name that the panel generates for
this field is SYSRCOPY1. This is an optional field. In this example, the primary
output data set name for the recovery site is ABC1.
6. Fill in field 6 with the backup output data set for the recovery site if you are
running COPY, LOAD, or REORG. The DD name that the panel generates for
this field is SYSRCOPY2. This field is optional. In this example, you do not
need to fill in field 6.
7. Fill in field 7 with the output data set for the generated LOAD utility control
statements if you are running REORG UNLOAD EXTERNAL, REORG
DISCARD, or UNLOAD. The DD name that the panel generates for this field is
SYSPUNCH. In this example, you do not need to fill in field 7.
The Control Statement Data Set Names panel, which is shown in Figure 3, is
displayed if either LISTDEF YES or TEMPLATE YES is specified on the DB2
Utilities panel.
 Enter the data set name for the LISTDEF data set (SYSLISTD DD):
  1 LISTDEF DSN  ===>                            (OPTIONAL or IGNORED)
 Enter the data set name for the TEMPLATE data set (SYSTEMPL DD):
  2 TEMPLATE DSN ===>                            (OPTIONAL or IGNORED)
1. Fill in field 1 to specify the data set that contains a LISTDEF control statement.
The default is the SYSIN data set. This field is ignored if you specified NO in
the LISTDEF? field in the DB2 Utilities panel.
For information about using a LISTDEF control statement, see Chapter 15,
“LISTDEF,” on page 163.
2. Fill in field 2 to specify the data set that contains a TEMPLATE. The default is
the SYSIN data set. This field is ignored if you specified NO in the TEMPLATE?
field in the DB2 Utilities panel.
For information about using TEMPLATE, see Chapter 31, “TEMPLATE,” on page
575.
Restriction: You cannot use the DSNU CLIST command to submit a COPY job for
a list of objects (with or without the CONCURRENT keyword).
The CLIST command creates a job that performs only one utility operation.
However, you can invoke the CLIST command for each utility operation that you
need, and then edit and merge the outputs into one job or step.
You can execute the DSNU CLIST command from the TSO command processor or
from the DB2I Utilities panel.
The DSNU CLIST command keywords include the following (defaults are noted
where they apply):

 CONTROL(control-option)
 COPYDSN(data-set-name)
 COPYDSN2(data-set-name)
 RCPYDSN1(data-set-name)
 RCPYDSN2(data-set-name)
 RECDSN(data-set-name)
 PUNCHDSN(data-set-name)
 EDIT(NO, SPF, or TSO)                       The default is EDIT(NO).
 RESTART(NO, CURRENT, PHASE, or PREVIEW)     The default is RESTART(NO).
 UNIT(unit-name)                             The default is UNIT(SYSDA).
 VOLUME(vol-ser)
DB2 places the JCL in a data set that is named DSNUxxx.CNTL, where
DSNUxxx is a control file name. The control file contains the statements that
are necessary to invoke the DSNUPROC procedure which, in turn, executes the
utility. If you execute another job with the same utility name, the first job is
deleted. See “UID” on page 32 for a list of the online utilities and the control file
name that is associated with each utility.
INDSN(data-set-name(member-name))
Specifies the data set that contains the utility statements and control
statements. Do not specify a data set that contains double-byte character set
data.
(data-set-name)
Specifies the name of the data set. If you do not specify a data set
name, the default command procedure prompts you for the data set
name.
(member-name)
Specifies the member name. You must specify the member name if the
data set is partitioned.
CONTROL(control-option: ...)
Specifies whether to trace the CLIST command execution.
NONE Omits tracing. The default is NONE.
control-option
Lists one or more of the following options. Separate items in the list by
colons (:). To abbreviate, specify only the first letter of the option.
LIST Displays TSO commands after symbolic substitution and before
command execution.
CONLIST
Displays CLIST commands after symbolic substitution and
before command execution.
SYMLIST
Displays all executable statements (TSO commands and CLIST
statements) before the scan for symbolic substitution.
UNIT (unit-name)
Assigns a unit address, a generic device type, or a user-assigned group name
for a device on which a new temporary or permanent data set resides. When
the CLIST command generates the JCL, it places unit-name after the UNIT
clause of the generated DD statement. The default is SYSDA.
VOLUME (vol-ser)
Assigns the serial number of the volume on which a new temporary or
permanent data set resides. When the CLIST command generates the JCL, it
places vol-ser after the VOL=SER clause of the generated DD statement. If you
omit VOLUME, the VOL=SER clause is omitted from the generated DD
statement.
Figure 4. Control file DSNUCOP.CNTL. This is an example of the JCL data set before editing.
The following list describes the required JCL data set statements:
Statement Description
JOB The CLIST command uses any JOB statements that you saved
The CLIST command builds the necessary JCL DD statements. Those statements
vary depending on the utility that you execute. Data sets that might be required are
listed under “Data sets that online utilities use” on page 21. The following DD
statements are generated by the CLIST command:
SYSPRINT DD SYSOUT=A
Defines SYSPRINT as SYSOUT=A. Utility messages are sent to the
SYSPRINT data set. You can use the TSO command to control the disposition
of the SYSPRINT data set. For example, you can send the data set to your
terminal. For more information, see z/OS TSO/E Command Reference.
UTPRINT DD SYSOUT=A
Defines UTPRINT as SYSOUT=A. If any utility requires a sort, it executes
DFSORT. Messages from that program are sent to UTPRINT.
SYSIN DD *
Defines SYSIN. To build the SYSIN DD * job stream, DSNU copies the data set
that is named by the INDSN parameter. The INDSN data set does not change,
and you can reuse it when the DSNU procedure has finished running.
If you use a ddname that is not the default on a utility statement that you use, you
must change the ddname in the JCL that is generated by the DSNU procedure. For
example, in the REORG TABLESPACE utility, the default option for UNLDDN is
SYSREC, and DSNU builds a SYSREC DD statement for REORG TABLESPACE. If
you use a different value for UNLDDN, you must edit the JCL data set and change
SYSREC to the ddname that you used.
When you finish editing the data set, you can either save changes to the data set
(by issuing SAVE), or instruct the editor to ignore all changes.
The SUBMIT parameter specifies whether to submit the data set statement as a
background job. The temporary data set that holds the JCL statement is reused. If
you want to submit more than one job that executes the same utility, you must
rename the JCL data sets and submit them separately.
Examples
Example 1: The following CLIST command statement generates a data set that is
called authorization-id.DSNURGT.CNTL and that contains JCL statements that
invoke the DSNUPROC procedure.
Example 2: The following example shows how to invoke the CLIST command for
the COPY utility.
%DSNU
UTILITY (COPY)
INDSN (’MYCOPY(STATEMNT)’)
COPYDSN (’MYCOPIES.DSN8D81A.JAN1’)
EDIT (TSO)
SUBMIT (YES)
UID (TEMP)
RESTART (NO)
To execute the DSNUPROC procedure, write and submit a JCL data set like the
one that the DSNU CLIST command builds (an example is shown in Figure 4 on
page 33). In your JCL, the EXEC statement executes the DSNUPROC procedure.
The EXEC statement can be a procedure that contains the required JCL, or it can
be of the following form:
//stepname EXEC PGM=DSNUTILB,PARM=’system,[uid],[utproc]’
The brackets, [ ], indicate optional parameters. The parameters have the following
meanings:
DSNUTILB
Specifies the utility control program. The program must reside in an
APF-authorized library.
system
Specifies the DB2 subsystem.
uid The unique identifier for your utility job. Do not reuse the utility ID of a
stopped utility that has not yet been terminated. If you do use the same
utility ID to invoke a different utility, DB2 tries to restart the original stopped
utility with the information that is stored in the SYSUTIL directory table.
utproc The value of the UTPROC parameter in the DSNUPROC procedure.
Specify this option only when you want to restart the utility job. Specify:
'RESTART'
To restart at the most recent commit point. This option has the
same meaning as ’RESTART(CURRENT).’
'RESTART(CURRENT)'
To restart the utility at the most recent commit point. This option has
the same meaning as ’RESTART.’
'RESTART(PHASE)'
To restart at the beginning of the phase that executed most
recently.
'RESTART(PREVIEW)'
To restart the utility in preview mode. While in PREVIEW mode, the
utility checks for syntax errors in all utility control statements, but
normal utility execution does not take place.
For the example in Figure 5 on page 37, you can use the following EXEC
statement:
//stepname EXEC PGM=DSNUTILB,PARM=’DSN,TEMP’
Use the DB2 DISPLAY UTILITY command to check the current status of online
utilities. Figure 6 shows an example of the output that the DISPLAY UTILITY
command generates. In the example output, DB2 returns a message that indicates
the member name (A), utility identifier (B), utility name (C), utility phase (D),
the number of pages or records that are processed by the utility1 (E), the number
of objects in the list (F), the last object that started (G), and the utility status
(H). The output might also report additional information about an executing utility,
such as log phase estimates or utility subtask activity.
1. In a data sharing environment, the number of records is current when the command is issued from the same member on which the
utility is executing. When the command is issued from a different member, the count might lag substantially.
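For example, the following commands display the status of the utility whose utility
ID is TEMP and the status of all utilities, respectively (TEMP is the utility ID that is
used in the examples in this chapter):

-DISPLAY UTILITY(TEMP)
-DISPLAY UTILITY(*)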
To determine why a utility failed to complete, consider the following problems that
can cause a failure during execution of the utility:
v Problem: DB2 terminates the utility job step and any subsequent utility steps.
Solution: Submit a new utility job to execute the terminated steps. Use the same
utility identifier for the new job to ensure that no duplicate utility job is running.
v Problem: DB2 does not execute the particular utility function, but prior utility
functions are executed.
Solution: Submit a new utility step to execute the function.
v Problem: DB2 places the utility function in the stopped state.
Solution: Restart the utility job step at either the last commit point or the
beginning of the phase by using the same utility identifier. Alternatively, use a
TERM UTILITY (uid) command to terminate the job step and resubmit it.
v Problem: DB2 terminates the utility and issues return code 8.
Solution: One or more objects might be in a restrictive or advisory status. See
Appendix C, “Resetting an advisory or restrictive status,” on page 831 for more
information on resetting the status of an object.
Alternatively, a DEADLINE condition in online REORG might have terminated the
reorganization.
For more information about the DEADLINE condition, see the description of this
option in Chapter 24, “REORG INDEX,” on page 375 or in Chapter 25, “REORG
TABLESPACE,” on page 403.
If the utility supports parallelism, it can use additional threads to support the parallel
subtasking. Consider increasing the values of subsystem parameters that control
threads, such as MAX BATCH CONNECT and MAX USERS. These parameters are
on installation panel DSNTIPE and are described in DB2 Installation Guide.
See Part 5 (Volume 2) of DB2 Administration Guide for a description of the claim
classes and the use of claims and drains by online utilities.
Submitting online utility jobs: When you submit a utility job, you must specify the
name of the DB2 subsystem to which the utility is to attach or the group attach
name. If you do not use the group attach name, the utility job must run on the z/OS
system where the specified DB2 subsystem is running. Ensure that the utility job
runs on the appropriate z/OS system. You must use one of several z/OS
installation-specific statements to make sure this happens. These include:
v For JES2 multi-access spool (MAS) systems, insert the following statement into
the utility JCL:
/*JOBPARM SYSAFF=cccc
v For JES3 systems, insert the following statement into the utility JCL:
//*MAIN SYSTEM=(main-name)
The preceding JCL statements are described in z/OS MVS JCL Reference. Your
installation might have other mechanisms for controlling where batch jobs run, such
as by using job classes.
You can restart a utility only on a member that is running the same DB2 release
level as the member on which the utility job was originally submitted. The same
utility ID (UID) must be used to restart the utility. That UID is unique within a data
sharing group. However, if DB2 fails, you must restart DB2 on either the same or
another z/OS system before you restart the utility.
Use the TERM UTILITY command to terminate the execution of an active utility or
to release the resources that are associated with a stopped utility.
After you issue the TERM UTILITY command, you cannot restart the terminated
utility job. The objects on which the utility was operating might be left in an
indeterminate state.
In a data sharing environment, TERM UTILITY is effective for an active utility only
when the command is issued from the DB2 subsystem on which the utility is
running. You can terminate a stopped utility from any active member of the data
sharing group.
If the utility is active, TERM UTILITY terminates it at the next commit point. It then
performs any necessary cleanup operations.
You might choose to put TERM UTILITY in a conditionally executed job step; for
example, if you never want to restart certain utility jobs. Figure 7 shows a sample
job stream.
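A minimal sketch of such a step, assuming subsystem DSN and utility ID TEMP
and omitting the conditional logic, might look like this:

//TERM     EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 -TERM UTILITY(TEMP)
 END
/*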
Alternatively, consider specifying the TIMEOUT TERM parameter for some Online
REORG situations.
| Before you restart a job, correct the problem that caused the utility job to stop. Then
| resubmit the job. DB2 recognizes the utility ID and restarts the utility job if possible.
| DB2 retrieves information about the stopped utility from the SYSUTIL directory
| table.
| Do not reuse the utility ID of a stopped utility that has not yet been terminated,
| unless you want to restart that utility. If you do use the same utility ID to invoke a
| different utility, DB2 tries to restart the original stopped utility with the information
| that is stored in the SYSUTIL directory table.
| For each utility, DB2 uses the default RESTART value that is specified in Table 4.
| For a complete description of the restart behavior for an individual utility, including
| any phase restrictions, refer to the restart section for that utility.
| You can override the default RESTART value by specifying the RESTART
| parameter in the original JCL data set. DB2 ignores the RESTART parameter if you
| are submitting the utility job for the first time. For instructions on how to specify this
| parameter, see “Using the RESTART parameter” on page 44.
| Table 4. Default RESTART values for each utility
| Utility                Default RESTART value
| BACKUP SYSTEM          RESTART(CURRENT)
| CATMAINT               No restart
| CHECK DATA             RESTART(CURRENT)
| CHECK INDEX            RESTART(CURRENT)
| CHECK LOB              RESTART(CURRENT)
| COPY                   RESTART(CURRENT)
| COPYTOCOPY             RESTART(CURRENT)
| DIAGNOSE               Restarts from the beginning
| EXEC SQL               Restarts from the beginning
| LISTDEF                Restarts from the beginning
| LOAD                   RESTART(CURRENT) or RESTART(PHASE) (1)
| MERGECOPY              RESTART(PHASE)
| MODIFY RECOVERY        RESTART(CURRENT)
| MODIFY STATISTICS      RESTART(CURRENT)
| OPTIONS                Restarts from the beginning
| QUIESCE                RESTART(CURRENT)
| REBUILD INDEX          RESTART(PHASE)
| RECOVER                RESTART(CURRENT)
| REORG INDEX            RESTART(CURRENT) or RESTART(PHASE) (1)
| REORG TABLESPACE       RESTART(CURRENT) or RESTART(PHASE) (1)
| REPAIR                 No restart
| REPORT                 RESTART(CURRENT)
| RESTORE SYSTEM         RESTART(CURRENT)
| RUNSTATS               RESTART(CURRENT)
| STOSPACE               RESTART(CURRENT)
| TEMPLATE               Restarts from the beginning
If you cannot restart a utility job, you might have to terminate it to make the data
available to other applications. To terminate a utility job, issue the DB2 TERM
UTILITY command. Use the command only if you must start the utility from the
beginning.
To add the RESTART parameter, you can use one of the following three methods:
v Using DB2I. Add the RESTART parameter by following these steps:
1. Access the DB2 Utilities panel.
2. Fill in the panel fields, as documented in Figure 2 on page 27, except for field
5.
3. Change field 5 to CURRENT or PHASE, depending on the desired method of
restart.
4. Press Enter.
v Using the DSNU CLIST command. When you invoke the DSNU CLIST
command, as described in “Using the DSNU CLIST command in TSO” on page
28, change the value of the RESTART parameter by specifying either RESTART,
RESTART (CURRENT), or RESTART(PHASE).
| v Creating your own JCL. If you create your own JCL, you can specify RESTART
| (CURRENT) or RESTART(PHASE) to override the default RESTART value. You
must also check the DISP parameters on the DD statements. For example, for
DD statements that have DISP=NEW and need to be reused, change DISP to
OLD or MOD. If generation data groups (GDGs) are used and any (+1)
generations were cataloged, ensure that the JCL is changed to GDG (+0) for
such data sets.
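For example, if the original job used the EXEC statement that is shown earlier in
this chapter, a sketch of the restarted version might look like this:

//stepname EXEC PGM=DSNUTILB,PARM='DSN,TEMP,RESTART(PHASE)'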
Automatically generated JCL normally has DISP=MOD. DISP=MOD allows a data
set to be allocated during the first execution and then reused during a restart.
When restarting a job that involves templates, DB2 automatically changes the
disposition from NEW to MOD. Therefore, you do not need to change template
specifications for restart.
| Use caution when changing LISTDEF lists prior to a restart. When DB2 restarts list
| processing, it uses a saved copy of the list. Modifying the LISTDEF list that is
| referred to by the stopped utility has no effect. Only control statements that follow
| the stopped utility are affected.
Do not change the position of any other utilities that have been executed.
If the utility that you are restarting was processing a LIST, you will see a list size
that is greater than 1 on the DSNU100 or DSNU105 message. DB2 checkpoints the
expanded, enumerated list contents prior to executing the utility. DB2 uses this
checkpointed list to restart the utility at the point of failure. After a successful restart,
the LISTDEF is re-expanded before subsequent utilities in the same job step use it.
Restart is not always possible. The restrictions applying to the phases of each utility
are discussed under the description of each utility.
| The BACKUP SYSTEM utility uses copy pools, which are new constructs in z/OS
| DFSMShsm V1R5. A copy pool is a defined set of storage groups that contain data
| that DFSMShsm can back up and recover collectively. For more information about
| copy pools, see z/OS DFSMSdfp Storage Administration Reference.
| Each DB2 subsystem can have up to two copy pools, one for databases and one
| for logs. BACKUP SYSTEM copies the volumes that are associated with these copy
| pools at the time of the copy.
| Output: The output for BACKUP SYSTEM is the copy of the volumes on which the
| DB2 data and log information resides. The BACKUP SYSTEM history is recorded in
| the bootstrap data sets (BSDSs).
| Authorization required: To execute this utility, you must use a privilege set that
| includes SYSCTRL or SYSADM authority.
| When you specify BACKUP SYSTEM, you can specify only the following
| statements in the same step:
| v DIAGNOSE
| v OPTIONS PREVIEW
| v OPTIONS OFF
| v OPTIONS KEY
| v OPTIONS EVENT WARNING
| In addition, BACKUP SYSTEM must be the last statement in SYSIN.
| Syntax diagram
|
|   BACKUP SYSTEM
|       FULL          (the default)
|       DATA ONLY
|
| Option descriptions
| “Control statement coding rules” on page 19 provides general information about
| specifying options for DB2 utilities.
| FULL
| Indicates that you want to copy both the database copy pool and the log copy
| pool. The default is FULL.
| You must ensure that the database copy pool is set up to contain the volumes
| for the databases and the associated integrated catalog facility (ICF) catalogs.
| You must also ensure that the log copy pool is set up to contain the volumes for
| the BSDSs, the active logs, and the associated catalogs.
| Use BACKUP SYSTEM FULL to allow for recovery of both data and logs. You
| can use the RESTORE SYSTEM utility to recover the data. However,
| RESTORE SYSTEM does not restore the logs; the utility only applies the logs.
| If you want to restore the logs, you must use another method to restore them.
| DATA ONLY
| Indicates that you want to copy only the database copy pool. You must ensure
| that the database copy pool is set up to contain the volumes for the databases
| and the associated ICF catalogs.
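For example, the following control statements are sketches of a full system backup
and a data-only backup:

BACKUP SYSTEM FULL

BACKUP SYSTEM DATA ONLY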
|
| Instructions for running BACKUP SYSTEM
| To run BACKUP SYSTEM, you must:
| 1. Read “Before running BACKUP SYSTEM” on page 49.
| 2. Prepare the necessary data sets, as described in “Data sets that BACKUP
| SYSTEM uses” on page 49.
| 3. Create JCL statements by using one of the methods that are described in either
| “Using the supplied JCL procedure (DSNUPROC)” on page 35 or “Creating the
| JCL data set yourself by using the EXEC statement” on page 38.
| 4. Prepare a utility control statement that specifies the options for the tasks that
| you want to perform.
| 5. Check “Concurrency and compatibility for BACKUP SYSTEM” on page 50 if you
| want to run other jobs concurrently on the same target objects.
| 6. Plan for restarting BACKUP SYSTEM if the job doesn’t complete, as described
| in “Terminating or restarting BACKUP SYSTEM” on page 50.
| 7. Run BACKUP SYSTEM by using one of the methods that are described in
| Chapter 3, “Invoking DB2 online utilities,” on page 19.
| For information about defining copy pools and associated backup storage groups,
| see z/OS DFSMSdfp Storage Administration Reference. Use the following DB2
| naming convention when you define these copy pools:
| DSN$locn-name$cp-type
| The variables that are used in this naming convention have the following meanings:
| DSN The unique DB2 product identifier.
| $ A delimiter. You must use the dollar sign character ($).
| locn-name
| The DB2 location name.
| cp-type
| The copy pool type. Use DB for database and LG for log.
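For example, if the DB2 location name is DSNDB0G (a hypothetical name), the two
copy pools would be named as follows:

DSN$DSNDB0G$DB     (database copy pool)
DSN$DSNDB0G$LG     (log copy pool)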
| You can restart a BACKUP SYSTEM utility job, but it starts from the beginning
| again. For guidance in restarting online utilities, see “Restarting an online utility” on
| page 42.
|
| Concurrency and compatibility for BACKUP SYSTEM
| BACKUP SYSTEM can run concurrently with any other utility; however, it must wait
| for the following DB2 events to complete before the copy can begin:
| v Extending of data sets
| v Writing of 32-KB pages
| v Writing close page set control log records (PSCRs)
| v Creating data sets (for table spaces, indexes, and so forth)
| v Deleting data sets (for dropping table spaces, indexes, and so forth)
| v Renaming data sets (for online reorganizing of table spaces, indexes, and so
| forth during the SWITCH phase)
Run CHECK DATA after a conditional restart or a point-in-time recovery on all table
spaces where parent and dependent tables might not be synchronized or where
base tables and auxiliary tables might not be synchronized. You can run CHECK
DATA against a base table space only, not against a LOB table space.
| Restriction: Do not run CHECK DATA on encrypted data. Because CHECK DATA
| does not decrypt the data, the utility might produce unpredictable results.
For a diagram of CHECK DATA syntax and a description of available options, see
“Syntax and options of the CHECK DATA control statement” on page 56. For
detailed guidance on running this utility, see “Instructions for running CHECK DATA”
on page 61.
Output: CHECK DATA optionally deletes rows that violate referential or table check
constraints. CHECK DATA copies each row that violates one or more constraints to
an exception table. If a row violates two or more constraints, the row is copied only
once.
If the utility finds any violation of constraints, CHECK DATA puts the table space
that it is checking in the CHECK-pending status.
You can specify that the CHECK-pending status is to be reset when CHECK DATA
execution completes.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute CHECK DATA. However,
you cannot use SYSOPR authority to execute CHECK DATA on table space
SYSDBASE in database DSNDB06 or on any object except SYSUTILX in database
DSNDB01.
If you specify the DELETE option, the privilege set must include the DELETE
privilege on the tables that are being checked. If you specify the FOR EXCEPTION
option, the privilege set must include the INSERT privilege on any exception table
that is used. If you specify the AUXERROR INVALIDATE option, the privilege set
must include the UPDATE privilege on the base tables that contain LOB columns.
SCANTAB   Extracts foreign keys; uses the foreign key index if it matches exactly;
          otherwise scans the table
SORT      Sorts foreign keys if they are not extracted from the foreign key index
CHECKDAT  Looks in primary indexes for foreign key parents and issues messages
          to report detected errors
REPORTCK  Copies error rows into exception tables and deletes them from the
          source table if DELETE YES is specified
UTILTERM  Performs cleanup
Syntax diagram

The options that appear in the CHECK DATA syntax diagram include the following:

  FOR EXCEPTION IN table-name1 USE table-name2
  DELETE NO (the default) or DELETE YES, with LOG YES (the default) or LOG NO
  SORTDEVT device-type
  SORTNUM integer

table-space-spec:

  TABLESPACE database-name.table-space-name
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
| DATA Indicates that you want the utility to check referential and table
check constraints. CHECK DATA does not check informational
| referential constraints.
TABLESPACE database-name.table-space-name
Specifies the table space to which the data belongs.
database-name is the name of the database and is optional. The
default is DSNDB04.
table-space-name is the name of the table space.
PART integer Identifies which partition to check for constraint violations.
| integer is the number of the partition and must be in the range from
| 1 to the number of partitions that are defined for the table space.
| The maximum is 4096.
SCOPE Limits the scope of the rows in the table space that are to be
checked.
PENDING
Indicates that the only rows that are to be checked are
those that are in table spaces, partitions, or tables that are
in CHECK-pending status. The referential integrity check,
constraint check, and the LOB check are all performed.
If you specify this option for a table space that is not in
CHECK-pending status, the CHECK DATA utility does not
check the table space and does not issue an error
message.
The default is PENDING.
AUXONLY
Indicates that only the LOB column check is to be
performed for table spaces that have tables with LOB
columns. The referential integrity and constraint checks are
not performed.
ALL Indicates that all dependent tables in the specified table
spaces are to be checked. The referential integrity check,
constraint check, and the LOB check are performed.
REFONLY
Same as the ALL option, except that the LOB column
check is not to be performed.
AUXERROR Specifies the action that CHECK DATA is to perform when it finds a
LOB column check error.
REPORT A LOB column check error is reported with a
warning message. The base table space is set to
the auxiliary CHECK-pending (ACHKP) status.
The default is REPORT.
INVALIDATE A LOB column check error is reported with a
warning message. The base table LOB column is
set to an invalid status. A LOB column with invalid
status that is now correct is set valid. This action is
also reported with a message. The base table
space is set to the auxiliary warning (AUXW) status
if any LOB column remains in invalid status.
EXCEPTIONS integer
Specifies the maximum number of exceptions, which are reported
by messages only. CHECK DATA terminates in the CHECKDAT
phase when it reaches the specified number of exceptions; if
termination occurs, the error rows are not written to the
EXCEPTION table.
Only records that contain primary referential integrity errors or table
check constraint violations are applied toward the exception limit.
The number of records that contain secondary errors is not limited.
integer is the maximum number of exceptions. The default is 0,
which indicates no limit on the number of exceptions.
ERRDDN ddname
Specifies a DD statement for an error processing data set.
ddname is either a DD name or a TEMPLATE name specification
from a previous TEMPLATE control statement. If utility processing
detects that the specified name is both a DD name in the current
job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 575. The default is SYSERR.
WORKDDN (ddname1,ddname2)
Specifies the DD statements for the temporary work file for sort
input and the temporary work file for sort output. A temporary work
file for sort input and output is required.
You can use the WORKDDN keyword to specify either a DD name
or a TEMPLATE name specification from a previous TEMPLATE
control statement. If utility processing detects that the specified
name is both a DD name in the current job step and a TEMPLATE
name, WORKDDN uses the DD name. For more information about
TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on page
575.
ddname1 is the DD name of the temporary work file for sort input.
The default is SYSUT1.
ddname2 is the DD name of the temporary work file for sort output.
The default is SORTOUT.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be
dynamically allocated by DFSORT. You can specify any device type
that is acceptable to the DYNALLOC parameter of the SORT or
OPTION control statement for DFSORT, as described in DFSORT
Application Programming: Guide.
Do not use a TEMPLATE specification to dynamically allocate sort
work data sets. The presence of the SORTDEVT keyword controls
dynamic allocation of these data sets.
device-type is the device type. If you omit SORTDEVT and a sort is
required, you must provide the DD statements that the sort program
requires for the temporary data sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be
dynamically allocated by the sort program.
| The relationship between a base table with a LOB column and the LOB table space
| is shown in Figure 8 on page 62. The LOB column in the base table points to the
| auxiliary index on the LOB table space, as illustrated in the figure. For more
| information about LOBs and auxiliary tables, see Part 2 of DB2 Administration
| Guide.
|
Figure 8. Relationship between a base table with a LOB column and the LOB table space
Notes:
1. You can use CHAR(5) for any type of table space, but you must use it for table spaces that are defined with the
LARGE or DSSIZE options.
If you delete rows by using the CHECK DATA utility with SCOPE ALL, you must
create exception tables for all tables that are named in the table spaces and for all
their descendents. All descendents of any row are deleted.
An auxiliary table cannot be an exception table. A LOB column check error is not
included in the exception count. A row with only a LOB column check error does not
participate in exception processing.
You can create an exception table for the project activity table by using the following
SQL statements:
EXEC SQL
CREATE TABLE EPROJACT
LIKE DSN8810.PROJACT
IN DATABASE DSN8D81A
ENDEXEC
EXEC SQL
ALTER TABLE EPROJACT
ADD RID CHAR(4)
ENDEXEC
EXEC SQL
ALTER TABLE EPROJACT
ADD TIME TIMESTAMP NOT NULL WITH DEFAULT
ENDEXEC
Table EPROJACT has the same structure as table DSN8810.PROJACT, but it can
have two extra columns. The columns in EPROJACT are:
v Its first five columns mimic the columns of the project activity table; they have
exactly the same names and descriptions. Although the column names are the
same, they do not need to be. However, the rest of the column attributes for the
initial columns must be same as those of the table that is being checked.
v The next column, which is added by ALTER TABLE, is optional; CHECK DATA
uses it as an identifier. The name “RID” is an arbitrary choice; if the table already
has a column with that name, use a different name. The column description,
CHAR(4), is required.
v The final timestamp column is also optional. If you define the timestamp column,
a row identifier (RID) column must precede this column. You might define a
permanent exception table for each table that is subject to referential or table
check constraints. You can define it once and use it to hold invalid rows that
CHECK DATA detects. The TIME column allows you to identify rows that were
added by the most recent run of the utility.
Eventually, you correct the data in the exception tables, perhaps with an SQL
UPDATE statement, and transfer the corrections to the original tables by using
statements that are similar to those in the following example:
INSERT INTO DSN8810.PROJACT
SELECT PROJNO, ACTNO, ACSTAFF, ACSTDATE, ACENDATE
FROM EPROJACT
WHERE TIME > CURRENT TIMESTAMP - 1 DAY;
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space
Object that is to be checked. (If you want to check only one partition of a
table space, use the PART option in the control statement.)
Exception table
Table that stores rows that violate any referential constraints. For each table
in a table space that is checked, specify the name of an exception table in
the utility control statement. Any row that violates a referential constraint is
copied to the exception table.
Defining work data sets: Three sequential data sets are required during execution
of CHECK DATA. Two work data sets and one error data set are described by DD
statements in the WORKDDN and ERRDDN options.
Create the ERRDDN data set so that it is large enough to accommodate one error
entry (length=60 bytes) per violation that CHECK DATA detects.
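For example, if you expect no more than 100,000 violations, the ERRDDN data set
needs roughly 100,000 x 60 bytes, or about 6 MB. A sketch of the WORKDDN and
ERRDDN allocations, using the default DD names and placeholder data set names
and sizes, might look like this:

//SYSUT1  DD DSN=UTIL.CHKDATA.SYSUT1,DISP=(MOD,DELETE,CATLG),
//           UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//SORTOUT DD DSN=UTIL.CHKDATA.SORTOUT,DISP=(MOD,DELETE,CATLG),
//           UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//SYSERR  DD DSN=UTIL.CHKDATA.SYSERR,DISP=(MOD,DELETE,CATLG),
//           UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE)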
Whenever the scope information is in doubt, run the utility with the SCOPE ALL
option. The scope information is recorded in the DB2 catalog. The scope
information can become in doubt whenever you start the target table space with
ACCESS(FORCE), or when the catalog is recovered to a point in time.
If you want to check only the tables with LOB columns, specify the AUXONLY
option. If you want to check all dependent tables in the specified table spaces
except tables with LOB columns, specify the REFONLY option.
Finding violations
CHECK DATA issues a message for every row that contains a referential or table
check constraint violation. The violation is identified by:
v The RID of the row
v The name of the table that contains the row
v The name of the constraint that is being violated
| Figure 9. Example of messages that CHECK DATA issues
You can automatically delete rows that violate referential or table check constraints
by specifying CHECK DATA with DELETE YES. However, you should be aware of
the following possible problems:
v The violation might be created by a non-referential integrity error. For example,
the indexes on a table might be inconsistent with the data in a table.
v Deleting a row might cause a cascade of secondary deletes in dependent tables.
The cascade of deletes might be especially inconvenient within referential
integrity cycles.
v The error might be in the parent table.
CHECK DATA uses the primary key index and all indexes that exactly match a
foreign key. Therefore, before running CHECK DATA, ensure that the indexes are
consistent with the data by using the CHECK INDEX utility.
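For example, a control statement that checks the sample project activity table,
moves violating rows to the EPROJACT exception table that is created earlier in
this chapter, and deletes them from the source table might look like the following
sketch (the table space name is an assumption):

CHECK DATA TABLESPACE DSN8D81A.DSN8S81P
  FOR EXCEPTION IN DSN8810.PROJACT USE EPROJACT
  DELETE YES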
If you run CHECK DATA with the DELETE NO option and referential or table check
constraint violations are found, the table space or partition is placed in
CHECK-pending status.
Orphan LOBs: An orphan LOB column is a LOB that is found in the LOB table
space but that is not referenced by the base table space. An orphan can result from
the following situations:
v You recover the base table space to a point in time prior to the insertion of the
base table row.
v You recover the base table space to a point in time prior to the definition of the
LOB column.
v You recover the LOB table space to a point in time prior to the deletion of a base
table row.
Missing LOBs: A missing LOB column is a LOB that is referenced by the base
table space but that is not in the LOB table space. A missing LOB can result from
the following situations:
v You recover the LOB table space to a point in time prior to the first insertion of
the LOB into the base table.
v You recover the LOB table space to a point in time when the LOB column is null
or has a zero length.
Out-of-synch LOBs: An out-of-synch LOB error is a LOB that is found in both the
base table and the LOB table space, but the LOB in the LOB table space is at a
different level. A LOB column is also out-of-synch if the LOB column in the base
table is null or has a zero length, but the LOB is found in the LOB table space. An
out-of-synch LOB can
occur anytime you recover the LOB table space or the base table space to a prior
point in time.
Invalid LOBs: An invalid LOB is a LOB column error that was detected, but not yet
corrected, by a previous execution of CHECK DATA AUXERROR INVALIDATE.
Detecting LOB column errors: If you specify either CHECK DATA AUXERROR
REPORT or AUXERROR INVALIDATE and a LOB column check error is detected,
DB2 issues a message that identifies the table, row, column, and type of error. Any
additional actions depend on the option that you specify for the AUXERROR
parameter:
v When you specify the AUXERROR REPORT option, DB2 sets the base table
space to the auxiliary CHECK-pending (ACHKP) status. If CHECK DATA
encounters only invalid LOB columns and no other LOB column errors, the base
table space is set to the auxiliary warning (AUXW) status.
v When you specify the AUXERROR INVALIDATE option, DB2 sets the base
table LOB columns that are in error to an invalid status. DB2 resets the invalid
status of LOB columns that have been corrected. If any invalid LOB columns
remain in the base table, DB2 sets the base table space to auxiliary warning
(AUXW) status. You can use SQL to update a LOB column that is in the AUXW
status; however, any other attempt to access the column results in a -904 SQL
return code.
To remove the auxiliary CHECK-pending status, run CHECK DATA with one of the
following options; DB2 resets the status if it finds no inconsistencies (a sketch of
such a control statement follows this list):
v Use the SCOPE(ALL) option to check all dependent tables in the specified table
space. The checks include referential integrity constraints, table check
constraints, and the existence of LOB columns.
v Use the SCOPE(PENDING) option to check table spaces or partitions with CHKP
status. The checks include referential integrity constraints, table check
constraints, and the existence of LOB columns.
v Use the SCOPE(AUXONLY) option to check for LOB columns.
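The following is a minimal sketch of such a control statement; it assumes the sample
department table space DSN8D81A.DSN8S81D, and the SCOPE and AUXERROR values shown are
only illustrative:
CHECK DATA TABLESPACE DSN8D81A.DSN8S81D
  SCOPE ALL
  AUXERROR REPORT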
Claims and drains: Table 8 on page 70 shows which claim classes CHECK DATA
claims and drains and any restrictive status that the utility sets on the target object.
The legend for these claim classes is located at the bottom of the table.
Table 9 shows claim classes on a LOB table space and an index on the auxiliary
table.
Table 9. Claim classes of CHECK DATA operations on a LOB table space and index on the
auxiliary table
Target objects                   CHECK DATA DELETE NO   CHECK DATA DELETE YES
LOB table space                  DW/UTRO                DA/UTUT
Index on the auxiliary table     DW/UTRO                DA/UTUT
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v DA: Drain all claim classes, no concurrent SQL access
v UTRO: Utility restrictive state, read-only access allowed
v UTUT: Utility restrictive state, exclusive control
Compatibility: The following utilities are compatible with CHECK DATA and can run
concurrently on the same target object:
v DIAGNOSE
v MERGECOPY
v MODIFY
v REPORT
v STOSPACE
To run on DSNDB01.SYSUTILX, CHECK DATA must be the only utility in the job
step and the only utility that is running in the DB2 subsystem.
The index on the auxiliary table for each LOB column inherits the same
compatibility and concurrency attributes as a primary index.
Figure 10. Example of using the CHECK DATA utility to copy invalid data into exception
tables and to delete the invalid data from the original table.
You can create exception tables by using the LIKE clause in the CREATE TABLE
statement. For an example of creating an exception table, see “Example: creating
an exception table for the project activity table” on page 63.
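The control statement in Figure 10 is not reproduced here. A minimal sketch of such a
statement, which assumes that the project activity table resides in the sample table
space DSN8D81A.DSN8S81P and which uses the EPROJACT exception table from the earlier
SQL example, might look like the following:
CHECK DATA TABLESPACE DSN8D81A.DSN8S81P
  FOR EXCEPTION IN DSN8810.PROJACT USE EPROJACT
  DELETE YES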
| Example 2: Running CHECK DATA on a table space with LOBs. Before you run
| CHECK DATA on a table space that contains at least one LOB column, complete
| the steps that are listed in “For a table with LOB columns” on page 61.
v Inconsistencies between the base table space and the corresponding LOB table
space.
The AUXERROR INVALIDATE option indicates that if the CHECK DATA utility finds
a LOB column error in this table space, it is to perform the following actions:
v Issue a warning message
v Set the base table LOB column to an invalid status
v Set the base table space to auxiliary warning (AUXW) status
Figure 11. Example of running CHECK DATA on a table space with LOBs
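The control statement in Figure 11 is not reproduced here. A minimal sketch of a
statement that uses these options, assuming a hypothetical base table space
DBLOB01.TSLOBBAS that has at least one LOB column, might look like the following:
CHECK DATA TABLESPACE DBLOB01.TSLOBBAS
  SCOPE ALL
  AUXERROR INVALIDATE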
Run the CHECK INDEX utility after a conditional restart or a point-in-time recovery
on all table spaces whose indexes might not be consistent with the data.
Also run CHECK INDEX before running CHECK DATA, especially if you specify
DELETE YES. Running CHECK INDEX before CHECK DATA ensures that the
indexes that CHECK DATA uses are valid. When checking an auxiliary table index,
CHECK INDEX verifies that every LOB is represented by an index entry and that
each index entry corresponds to a LOB. For more information about running the CHECK
DATA utility on a table space that contains at least one LOB column, see “For a
table with LOB columns” on page 61.
For a diagram of CHECK INDEX syntax and a description of available options, see
“Syntax and options of the CHECK INDEX control statement” on page 74. For
detailed guidance on running this utility, see “Instructions for running CHECK
INDEX” on page 77.
Output: CHECK INDEX generates several messages that show whether the
indexes are consistent with the data. See Part 2 of DB2 Messages and Codes for
more information about these messages.
For unique indexes, any two null values are treated as equal values, unless the
index was created with the UNIQUE WHERE NOT NULL clause. In that case, if the
key is a single column, it can contain any number of null values, and CHECK
INDEX does not issue an error message.
CHECK INDEX issues an error message if it finds two or more null values and the
unique index was not created with the UNIQUE WHERE NOT NULL clause.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority.
An ID with installation SYSOPR authority can also execute CHECK INDEX, but only
on a table space in the DSNDB01 or DSNDB06 databases.
Syntax diagram
CHECK INDEX   LIST listdef-name
            | (index-name, ...)  [PART integer]
            | (ALL) TABLESPACE [database-name.]table-space-name  [PART integer]
   [SORTNUM integer]
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
INDEX Indicates that you are checking for index consistency.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. The
list should contain only index spaces. Do not specify the name of
an index or of a table space. DB2 groups indexes by their related
table space and executes CHECK INDEX once per table space.
You can specify only one LIST keyword for each CHECK INDEX
control statement. For more information about LISTDEF
specifications, see Chapter 15, “LISTDEF,” on page 163.
(index-name, ...)
Specifies the indexes that are to be checked. All indexes must
belong to tables in the same table space. If you omit this option,
you must use the (ALL) TABLESPACE option. Then CHECK INDEX
checks all indexes on all tables in the table space that you specify.
index-name is the name of an index, in the form creator-id.name. If
you omit the qualifier creator-id., the user identifier for the utility job is used.
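For example, a minimal control statement that checks all indexes on the tables in the
sample department table space might look like the following; the table space name is
taken from the sample database that is used elsewhere in this book:
CHECK INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81D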
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Index space
Object that is to be checked. (If you want to check only one partition of an
index, use the PART option in the control statement.)
Another method of estimating the size of the WORKDDN data set is to obtain the
high-used relative byte address (RBA) for each index from a VSAM catalog listing.
Then add the RBAs.
| Shadow data set names: Each shadow data set must have the following name:
| catname.DSNDBx.dbname.psname.y0001.Lnnn
| To determine the names of existing shadow data sets, execute one of the following
| queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
| SELECT DBNAME, TSNAME, IPREFIX
| FROM SYSIBM.SYSTABLEPART
| WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
| SELECT DBNAME, IXNAME, IPREFIX
| FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
| WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
| AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
| For a partitioned table space, DB2 returns rows from which you select the row for
| the partitions that you want to check.
| Defining shadow data sets: Consider the following actions when you preallocate
| the data sets:
| v Allocate the shadow data sets according to the rules for user-managed data sets.
| v Define the shadow data sets as LINEAR.
| v Use SHAREOPTIONS(3,3).
| v Define the shadow data sets as EA-enabled if the original table space or index
| space is EA-enabled.
| v Allocate the shadow data sets on the volumes that are defined in the storage
| group for the original table space or index space.
| If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
| the SECQTY value for the table space or index space.
| Recommendation: Use the MODEL option, which causes the new shadow data set
| to be created like the original data set. This method is shown in the following
| example:
| DEFINE CLUSTER +
| (NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
| MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
| DATA +
| (NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
| MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’) )
| Creating shadow data sets for indexes: When you preallocate shadow data sets
| for indexes, create the data sets as follows:
| v Create shadow data sets for the partition of the table space and the
| corresponding partition in each partitioning index and data-partitioned secondary
| index.
| v Create a shadow data set for logical partitions of nonpartitioned secondary
| indexes.
| Use the same naming scheme for these index data sets as you use for other data
| sets that are associated with the base index, except use J0001 instead of I0001.
| For more information about this naming scheme, see the information about the
| shadow data set naming convention at the beginning of this section, “Shadow data
| sets” on page 78.
| Estimating the size of shadow data sets: If you have not changed the value of
| FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
| comparable to the amount of required space for the original data set.
In this example, the keys are unique within each logical partition, but both logical
partitions contain the key, T; so for the index as a whole, the keys are not unique.
CHECK INDEX does not detect the duplicates.
v CHECK INDEX does not detect keys that are out of sequence between different
logical partitions. For example, the following keys are out of sequence:
1 7 5 8 9 10 12
| Figure 13 shows the flow of a CHECK INDEX job with a parallel index check for a
| nonpartitioned table space or a single partition of a partitioned table space.
Figure 13. Parallel index check for a nonpartitioned table space or a single partition of a
partitioned table space
| Figure 14 shows the flow of a CHECK INDEX job with a parallel index check for all
| partitioning indexes on a partitioned table space.
Figure 14. Parallel index check for all partitioning indexes on a partitioned table space
| Figure 15 shows the flow of a CHECK INDEX job with a parallel index check for a
| partitioned table space with a single nonpartitioned secondary index.
Figure 15. Parallel index check for a partitioned table space with a single nonpartitioned
secondary index
| Figure 16 shows the flow of a CHECK INDEX job with a parallel index check for all
| indexes on a partitioned table space.
Figure 16. Parallel index check for all indexes on a partitioned table space
You can restart a CHECK INDEX utility job, but it starts from the beginning again.
For guidance in restarting online utilities, see “Restarting an online utility” on page
42.
Claims and drains: Table 11 shows which claim classes CHECK INDEX claims
and drains and any restrictive state that the utility sets on the target object.
Table 11. Claim classes of CHECK INDEX operations
Target                                                  CHECK INDEX   CHECK INDEX PART
Table space or partition                                DW/UTRO       DW/UTRO
Partitioning index or index partition                   DW/UTRO       DW/UTRO
| Secondary index                                       DW/UTRO       none
| Data-partitioned secondary index or index partition   DW/UTRO       DW/UTRO
Logical partition of an index                           none          DW/UTRO
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v UTRO: Utility restrictive state, read-only access allowed
v none: Object not affected by this utility
CHECK INDEX does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Compatibility: Table 12 shows which utilities can run concurrently with CHECK
INDEX on the same target object. The first column lists the other utility and the
second column lists whether or not that utility is compatible with CHECK INDEX.
The target object can be a table space, an index space, or an index partition. If
compatibility depends on particular options of a utility, that information is also
documented in the table.
Table 12. Compatibility of CHECK INDEX with other utilities
Action                        Compatible with CHECK INDEX?
CHECK DATA No
CHECK INDEX Yes
CHECK LOB Yes
COPY INDEXSPACE Yes
COPY TABLESPACE Yes
DIAGNOSE Yes
LOAD No
MERGECOPY Yes
MODIFY Yes
QUIESCE Yes
REBUILD INDEX No
Example 2: Checking one index. The following control statement specifies that the
CHECK INDEX utility is to check the project-number index (DSN8810.XPROJ1) on
the sample project table. SORTDEVT SYSDA specifies that SYSDA is the device
type for temporary data sets that are to be dynamically allocated by DFSORT.
CHECK INDEX (DSN8810.XPROJ1)
SORTDEVT SYSDA
Example 3: Checking more than one index. The following control statement
specifies that the CHECK INDEX utility is to check the indexes
DSN8810.XEMPRAC1 and DSN8810.XEMPRAC2 on the employee-to-project-
activity sample table.
CHECK INDEX NAME (DSN8810.XEMPRAC1, DSN8810.XEMPRAC2)
Figure 18. CHECK INDEX output from a job that checks the third partition of all indexes.
For a diagram of CHECK LOB syntax and a description of available options, see
“Syntax and options of the CHECK LOB control statement” on page 90. For detailed
guidance on running this utility, see “Instructions for running CHECK LOB” on page
91.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
Syntax diagram
| CHECK LOB  lob-table-space-spec
|    [EXCEPTIONS integer]      (default: EXCEPTIONS 0)
|    [SORTDEVT device-type]
|    [SORTNUM integer]
lob-table-space-spec:
   TABLESPACE [database-name.]lob-table-space-name
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LOB Indicates that you are checking a LOB table space for defects.
TABLESPACE database-name.lob-table-space-name
Specifies the table space to which the data belongs.
database-name is the name of the database and is optional. The
default is DSNDB04.
lob-table-space-name is the name of the LOB table space.
EXCEPTIONS integer
Specifies the maximum number of exceptions, which are reported
by messages only. CHECK LOB terminates in the CHECKLOB
phase when it reaches the specified number of exceptions.
All defects that are reported by messages are applied to the
exception count.
integer is the maximum number of exceptions. The default is 0,
which indicates no limit on the number of exceptions.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be
dynamically allocated by DFSORT.
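A minimal sketch of a CHECK LOB control statement that uses these options follows; the
LOB table space name DBLOB01.TSLOB01 is hypothetical:
CHECK LOB TABLESPACE DBLOB01.TSLOB01
  EXCEPTIONS 3
  SORTDEVT SYSDA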
The following object is named in the utility control statement and does not require
DD statements in the JCL:
Table space
Object that is to be checked.
| Beginning in Version 8, the CHECK LOB utility does not require SYSUT1 and
| SORTOUT data sets. Work records are written to and processed from an
| asynchronous SORT phase. The WORKDDN keyword, which provided the DD
| names of the SYSUT1 and SORTOUT data sets in earlier versions of DB2, is not
| needed and is ignored. You do not need to modify existing control statements to
| remove the WORKDDN keyword.
1. Correct any defects that are found in the LOB table space by using the REPAIR
utility.
2. To reset CHECK-pending or auxiliary-warning status, run CHECK LOB again, or
run the REPAIR utility.
Use the REPAIR utility with care, as improper use can further damage the data. If
necessary, contact IBM Software Support for guidance on using the REPAIR utility.
Claims and drains: Table 14 shows which claim classes CHECK LOB claims and
drains and any restrictive state that the utility sets on the target object.
Table 14. Claim classes for CHECK LOB operations on a LOB table space and index on the
auxiliary table
Target objects CHECK LOB
LOB table space DW/UTRO
Index on the auxiliary table DW/UTRO
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v UTRO: Utility restrictive state, read-only access allowed
Compatibility: Any SQL operation or other online utility that attempts to update the
same LOB table space is incompatible.
The RECOVER utility uses these copies when recovering a table space or index
space to the most recent time or to a previous time. Copies can also be used by
the MERGECOPY, COPYTOCOPY, and UNLOAD utilities.
You can copy a list of objects in parallel to improve performance. Specifying a list of
objects along with the SHRLEVEL REFERENCE option creates a single recovery
point for that list of objects. Specifying the PARALLEL keyword allows you to copy a
list of objects in parallel, rather than serially.
To calculate the number of threads you need when you specify the PARALLEL
keyword, use the formula (n * 2 + 1), where n is the number of objects that are to
be processed in parallel, regardless of the total number of objects in the list. If you
do not use the PARALLEL keyword, n is one and COPY uses three threads for a
single-object COPY job.
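For example, a COPY job that processes four objects in parallel (PARALLEL(4)) needs
(4 * 2 + 1) = 9 threads.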
For a diagram of COPY syntax and a description of available options, see “Syntax
and options of the COPY control statement” on page 96. For detailed guidance on
running this utility, see “Instructions for running COPY” on page 106.
The COPY-pending status is set off for table spaces if the copy was a full image
copy. However, DB2 does not reset the COPY-pending status if you copy a single
piece of a multi-piece linear data set. If you copy a single table space partition, DB2
resets the COPY-pending status only for the copied partition and not for the whole
table space. DB2 resets the informational COPY-pending (ICOPY) status after you
copy an index space or index.
Related information: See Part 4 (Volume 1) of DB2 Administration Guide for uses
of COPY in the context of planning for database recovery. For information about
creating inline copies during LOAD, see “Using inline COPY with LOAD” on page
239. You can also create inline copies during REORG; see “Using inline copy with
REORG TABLESPACE” on page 455 for more information.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute COPY, but only on a
table space in the DSNDB01 or DSNDB06 database.
The batch user ID that invokes COPY with the CONCURRENT option must provide the
necessary authority to execute the DFSMSdss DUMP command.
Syntax diagram
Notes:
1 Use the copy-spec if you do not want to use the CONCURRENT option.
2 Use the concurrent-spec if you want to use the CONCURRENT option, but not the FILTERDDN
option.
3 Use the filterddn-spec if you want to use the CONCURRENT and FILTERDDN options.
copy-spec:

   [LIST listdef-name | table-space-spec | index-name-spec (1)]  data-set-spec
   [FULL YES (default) | FULL NO | changelimit-spec]
   [PARALLEL [(num-objects)]]  [TAPEUNITS (num-tape-units)]  [CHECKPAGE]
|  [SYSTEMPAGES YES (default) | SYSTEMPAGES NO]

Notes:
1 Not valid for nonpartitioning indexes.

concurrent-spec:

   [table-space-spec | index-name-spec (1)]  data-set-spec
   [DSNUM ALL (default) | DSNUM integer]

Notes:
1 Not valid for nonpartitioning indexes.

filterddn-spec:

   [table-space-spec | index-name-spec (1)]
   [DSNUM ALL (default) | DSNUM integer]

Notes:
1 Not valid for nonpartitioning indexes.

data-set-spec:

   COPYDDN (ddname1 [,ddname2])  [RECOVERYDDN (ddname3 [,ddname4])]   (1)
 | RECOVERYDDN (ddname3 [,ddname4])

Notes:
1 COPYDDN SYSCOPY is the default for the primary copy, but this default can only be used for one
  object in the list.

changelimit-spec:

   CHANGELIMIT [(percent_value1 [,percent_value2])]  [REPORTONLY]

table-space-spec:

   TABLESPACE [database-name.]table-space-name

index-name-spec:

   INDEXSPACE [database-name.]index-space-name   (1)
 | INDEX [creator-id.]index-name

Notes:
1 INDEXSPACE is the preferred specification.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. You can
specify only one LIST keyword for each COPY control statement.
Do not specify LIST with either the INDEX or the TABLESPACE
keyword. DB2 invokes COPY once for the entire list. For more
information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database it belongs
to) that is to be copied.
database-name is the name of the database that the table space
belongs to. The default is DSNDB04.
table-space-name is the name of the table space to be copied.
Specify the DSNDB01.SYSUTILX, DSNDB06.SYSCOPY, or
DSNDB01.SYSLGRNX table space by itself in a single COPY
statement. Alternatively, specify the DSNDB01.SYSUTILX,
DSNDB06.SYSCOPY, or DSNDB01.SYSLGRNX table space with
indexes over the table space that were defined with the COPY YES
attribute.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space that is to be copied;
the name is obtained from the SYSIBM.SYSINDEXES table. The
specified index space must be defined with the COPY YES
attribute.
database-name optionally specifies the name of the database that
the index space belongs to. The default is DSNDB04.
index-space-name specifies the name of the index space that is to
be copied.
INDEX creator-id.index-name
Specifies the index that is to be copied. Enclose the index name in
quotation marks if the name contains a blank.
creator-id optionally specifies the creator of the index. The default
is the user identifier for the utility.
In this format:
| catname Is the ICF catalog name or alias.
x Is C (for VSAM clusters) or D (for VSAM
data components).
dbname Is the database name.
spacename Is the table space or index space name.
y Is I or J, which indicates the data set name
used by REORG with FASTSWITCH.
nnn Is the data set integer.
Notes:
1. Required if you specify CONCURRENT and the SYSPRINT DD statement points to a
data set.
2. Required if you specify the FILTERDDN option.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or index space
Object that is to be copied. (If you want to copy only certain data sets in a
table space, you must use the DSNUM option in the control statement.)
DB2 catalog objects
Objects in the catalog that COPY accesses. The utility records each copy in
the DB2 catalog table SYSIBM.SYSCOPY.
Output data set size: Image copies are written to sequential non-VSAM data sets.
| Recommendation: Use a template for the image copy data set by specifying a
| TEMPLATE statement without the SPACE keyword. When you omit this keyword,
| the utility calculates the appropriate size of the data set for you.
Alternatively, you can find the approximate size of the image copy data set for a
table space, in bytes, by either executing COPY with the CHANGELIMIT
REPORTONLY option, or using the following procedure:
| 1. Find the high-allocated page number, either from the NACTIVEF column of
| SYSIBM.SYSTABLESPACE after running the RUNSTATS utility, or from
| information in the VSAM catalog data set.
2. Multiply the high-allocated page number by the page size.
Alternatively, you can determine the approximate size of the filter data set size that
is required, in bytes, by using the following formula, where n = the number of
specified objects in the COPY control statement:
(240 + (80 × n))
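For example, a COPY control statement that names five objects requires a filter data set
of approximately (240 + (80 × 5)) = 640 bytes.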
JCL parameters: You can specify a block size for the output by using the BLKSIZE
parameter on the DD statement for the output data set. Valid block sizes are
multiples of 4096 bytes. You can increase the number of buffers by using the BUFNO
parameter; for example, you might specify BUFNO=30, which creates 30 buffers.
See also “Data sets that online utilities use” on page 21 for information about using
BUFNO.
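A minimal sketch of such a DD statement follows; the data set name is hypothetical, the
block size is a multiple of 4096 bytes, and BUFNO=30 requests 30 buffers:
//SYSCOPY DD DSN=IMAGCOPY.DSN8S81E.FULL,UNIT=SYSDA,
//            SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG),
//            DCB=(BLKSIZE=24576,BUFNO=30)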
Cataloging image copies: To catalog your image copy data sets, use the
DISP=(MOD,CATLG,CATLG) parameter in the DD statement or TEMPLATE that is
named by the COPYDDN option. After the image copy is taken, the DSVOLSER
column of the row that is inserted into SYSIBM.SYSCOPY contains blanks.
Duplicate image copy data sets are not allowed. If a cataloged data set is already
recorded in SYSIBM.SYSCOPY with the same name as the new image copy data
set, the COPY utility issues a message and does not make the copy.
When RECOVER locates the SYSCOPY entry, it uses the operating system catalog
to allocate the required data set. If you have uncataloged the data set, the
allocation fails. In that case, the recovery can still go forward; RECOVER searches
for a previous image copy. But even if it finds one, RECOVER must use
correspondingly more of the log during recovery.
Recommendation: Keep the ICF catalog consistent with the information about
existing image copy data sets in the SYSIBM.SYSCOPY catalog table.
The following statement specifies that the COPY utility is to make a full image copy
of the DSN8S81E table space in database DSN8D81A:
COPY TABLESPACE DSN8D81A.DSN8S81E
The COPY utility writes pages from the table space or index space to the output
data sets. The JCL for the utility job must include DD statements or have a template
specification for the data sets. If the object consists of multiple data sets and all are
copied in one run, the copies reside in one physical sequential output data set.
Image copies should be made either by entire page set or by partition, but not by
both.
Recommendations:
v Take a full image copy after any of the following operations:
– CREATE or LOAD operations for a new object that is populated.
– REORG operation for an existing object.
– LOAD RESUME of an existing object.
v Copy the indexes over a table space whenever a full copy of the table space is
taken. More frequent index copies decrease the number of log records that need
to be applied during recovery. At a minimum, you should copy an index when it is
placed in informational COPY-pending (ICOPY) status. For more information
about the ICOPY status, see Appendix C, “Resetting an advisory or restrictive
status,” on page 831.
If you create an inline copy during LOAD or REORG, you do not need to execute a
separate COPY job for the table space. If you do not create an inline copy, and if
the LOG option is NO, the COPY-pending status is set for the table space. You
must then make a full image copy for any subsequent recovery of the data. An
incremental image copy is not allowed in this case.
If the LOG option is YES, the COPY-pending status is not set. However, your next
image copy must be a full image copy. Again, an incremental image copy is not
allowed.
The COPY utility automatically takes a full image copy of a table space if you
attempt to take an incremental image copy when it is not allowed.
Copy by partition or data set: You can make an incremental image copy by
partition or data set (specified by DSNUM) in the following situations:
v A full image copy of the table space exists.
v A full image copy of the same partition or data set exists and the COPY-pending
status is not on for the table space or partition.
In addition, the full image copy must have been made after the most recent use of
CREATE, REORG or LOAD, or it must be an inline copy that was made during the
most recent use of LOAD or REORG.
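For example, a minimal sketch of a statement that takes an incremental image copy of a
single partition follows; it assumes that partition 3 of the sample table space exists
and that a suitable full image copy has already been taken:
COPY TABLESPACE DSN8D81A.DSN8S81E
  DSNUM 3
  FULL NO
  SHRLEVEL REFERENCE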
Remote-site recovery: For remote site recovery, DB2 assumes that the system
and application libraries and the DB2 catalog and directory are identical at the local
site and recovery site. You can regularly transport copies of archive logs and
database data sets to a safe location to keep data for remote-site recovery
current. This information can be kept on tape until needed.
Naming the data sets for the copies: The COPYDDN option of COPY names the
output data sets that receive copies for local use. The RECOVERYDDN option
names the output data sets that receive copies that are intended for remote-site
recovery. The options have the following formats:
COPYDDN (ddname1,ddname2)
RECOVERYDDN (ddname3,ddname4)
The DD names for the primary output data sets are ddname1 and ddname3. The
ddnames for the backup output data sets are ddname2 and ddname4.
Sample control statement: The following statement makes four full image copies
of the table space DSN8S81E in database DSN8D81A. The statement uses
LOCALDD1 and LOCALDD2 as DD names for the primary and backup copies that
are used on the local system and RECOVDD1 and RECOVDD2 as DD names for
the primary and backup copies for remote-site recovery:
COPY TABLESPACE DSN8D81A.DSN8S81E
COPYDDN (LOCALDD1,LOCALDD2)
RECOVERYDDN (RECOVDD1,RECOVDD2)
You do not need to make copies for local use and for remote-site recovery at the
same time. COPY allows you to use either the COPYDDN or the RECOVERYDDN
option without the other. If you make copies for local use more often than copies for
remote-site recovery, a remote-site recovery could be performed with an older copy,
and more of the log, than a local recovery; hence, the recovery would take longer.
However, in your plans for remote-site recovery, that difference might be
acceptable. You can also use MERGECOPY RECOVERYDDN to create
recovery-site full image copies, and merge local incremental copies into new
recovery-site full copies.
Conditions for making multiple incremental image copies: DB2 cannot make
incremental image copies if any of the following conditions is true:
v The incremental image copy is requested only for a site other than the current
site (the local site from which the request is made).
v Incremental image copies are requested for both sites, but the most recent full
image copy was made for only one site.
v Incremental image copies are requested for both sites and the most recent full
image copies were made for both sites, but between the most recent full image
copy and current request, incremental image copies were made for the current
site only.
If you attempt to make incremental image copies under any of these conditions,
COPY terminates with return code 8, does not take the image copy or update the
SYSIBM.SYSCOPY table, and issues the following message:
DSNU404I csect-name
LOCAL SITE AND RECOVERY SITE INCREMENTAL
IMAGE COPIES ARE NOT SYNCHRONIZED
To proceed, and still keep the two sets of data synchronized, take another full
image copy of the table space for both sites, or change your request to make an
incremental image copy only for the site at which you are working.
DB2 cannot make an incremental image copy if the object that is being copied is an
index or index space.
Maintaining copy consistency: Make full image copies for both the local and
recovery sites:
v If a table space is in COPY-pending status
v After a LOAD or REORG procedure that did not create an inline copy
v If an index is in the informational COPY-pending status
This action helps to ensure correct recovery for both local and recovery sites. If the
requested full image copy is for one site only, but the history shows that copies
were made previously for both sites, COPY continues to process the image copy
and issues the following warning message:
DSNU406I FULL IMAGE COPY SHOULD BE TAKEN FOR BOTH LOCAL SITE AND
RECOVERY SITE.
The COPY-pending status of a table space is not changed for the other site when
you make multiple image copies at the current site for that other site. For example,
if a table space is in COPY-pending status at the current site, and you make copies
from there for the other site only, the COPY-pending status is still on when you
bring up the system at that other site.
If a nonpartitioned table space consists of more than one data set, you can copy
several or all of the data sets independently in separate jobs. To do so, run
simultaneous COPY jobs (one job for each data set) and specify SHRLEVEL
CHANGE on each job.
However, creating copies simultaneously does not provide you with a consistent
recovery point unless you subsequently run a QUIESCE for the table space.
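A minimal sketch of this approach follows; the table space name is hypothetical, and each
COPY statement would run in its own job:
COPY TABLESPACE DSN8D81A.TSNONPAR DSNUM 1 SHRLEVEL CHANGE
COPY TABLESPACE DSN8D81A.TSNONPAR DSNUM 2 SHRLEVEL CHANGE
After both jobs complete, a QUIESCE establishes a consistent recovery point:
QUIESCE TABLESPACE DSN8D81A.TSNONPAR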
– DB2 drains the write claim class on each table space and index in the
UTILINIT phase, which is held for the duration of utility processing.
– Utility processing inserts SYSCOPY rows for all of the objects in the list at the
same time, after all of the objects have been copied.
– All objects in the list have identical RBA or LRSN values for the START_RBA
column for the SYSCOPY rows; the START_RBA value is set to the current RBA or
LRSN at the end of the COPY phase.
v If you use COPY with the SHRLEVEL(CHANGE) option:
– If you specify OPTIONS EVENT(ITEMERROR,SKIP), each object in the list is
placed in UTRW status and the read claim class is held only while the object
is being copied. If you do not specify OPTIONS EVENT(ITEMERROR,SKIP),
all of the objects in the list are placed in UTRW status and the read claim
class is held on all objects for the entire duration of the COPY.
– Utility processing inserts a SYSCOPY row for each object in the list when the
copy of each object is complete.
– Objects in the list have different LRSN values for the START_RBA column for
the SYSCOPY rows; the START_RBA value is set to the current RBA or
LRSN at the start of copy processing for that object.
When you specify the PARALLEL keyword, DB2 supports parallelism for image
copies on disk or tape devices. You can control the number of tape devices to
allocate for the copy function by using TAPEUNITS with the PARALLEL keyword. If
you use JCL statements to define tape devices, the JCL controls the allocation of
the devices.
| When you explicitly specify objects with the PARALLEL keyword, the objects are not
| necessarily processed in the specified order. Objects that are to be written to tape
| and whose file sequence numbers have been specified in the JCL are processed in
| the specified order. If templates are used, you cannot specify file sequence
| numbers. In the absence of overriding JCL specifications, DB2 determines the
| placement and, thus, the order of processing for such objects. When only templates
| are used, objects are processed according to their size, with the largest objects
| processed first.
To calculate the number of threads that you need when you specify the PARALLEL
keyword, use the formula (n * 2 + 1), where n is the number of objects that are to
be processed in parallel, regardless of the total number of objects in the list. If you
do not use the PARALLEL keyword, n is 1 and COPY uses three threads for a
single-object COPY job.
The following table spaces cannot be included in a list of table spaces. You must
specify each one as a single object:
v DSNDB01.SYSUTILX
v DSNDB06.SYSCOPY
v DSNDB01.SYSLGRNX
The only exceptions to this restriction are the indexes over these table spaces that
were defined with the COPY YES attribute. You can specify such indexes along with
the appropriate table space.
If a job step that contains more than one COPY statement abends, do not use
TERM UTILITY. Restart the job from the last commit point by using RESTART
instead. Terminating COPY by using TERM UTILITY in this case creates
inconsistencies between the ICF catalog and DB2 catalogs.
Restrictions on using DFSMSdss concurrent copy: You cannot use a copy that
is made with DFSMSdss concurrent copy with the PAGE or ERROR RANGE options
of the RECOVER utility. If you specify PAGE or ERROR RANGE, RECOVER
bypasses any concurrent copy records when searching the SYSIBM.SYSCOPY
table for a recovery point.
| You can use the CONCURRENT option with SHRLEVEL CHANGE on a table
| space if the page size in the table space matches the control interval for the
| associated data set.
Also, you cannot run the following DB2 stand-alone utilities on copies that are made
by DFSMSdss concurrent copy:
DSN1COMP
DSN1COPY
DSN1PRNT
You cannot execute the CONCURRENT option from the DB2I Utilities panel or from
the DSNU TSO CLIST command.
| Table space availability: If you specify COPY SHRLEVEL REFERENCE with the
| CONCURRENT option, and if you want to copy all of the data sets for a list of table
| spaces to the same dump data set, specify FILTERDDN in your COPY statement to
| improve table space availability. If you do not specify FILTERDDN, COPY might
| force DFSMSdss to process the list of table spaces sequentially, which might limit
| the availability of some of the table spaces that are being copied.
You cannot use the CHANGELIMIT option for a table space or partition that is
defined with TRACKMOD NO. If you change the TRACKMOD option from NO to
YES, you must take an image copy before you can use the CHANGELIMIT option.
When you change the TRACKMOD option from NO to YES for a linear table space,
you must take a full image copy by using DSNUM ALL before you can copy using
the CHANGELIMIT option.
Obtaining image copy information about a table space: When you specify
COPY CHANGELIMIT REPORTONLY, COPY reports image copy information for
the table space and recommends the type of copy, if any, to take. The report
includes:
v The total number of pages in the table space. This value is the number of pages
that are to be copied if a full image copy is taken.
v The number of empty pages, if the table space is segmented.
v The number of changed pages. This value is the number of pages that are to be
copied if an incremental image copy is taken.
v The percentage of changed pages.
v The type of image copy that is recommended.
Adding conditional code to your COPY job: You can add conditional code to
your jobs so that an incremental or full image copy, or some other step, is
performed depending on how much the table space has changed. For example, you
can add a conditional MERGECOPY step to create a new full image copy if your
COPY job took an incremental copy. COPY CHANGELIMIT uses the following
return codes to indicate the degree that a table space or list of table spaces has
changed:
1 (informational)
No CHANGELIMIT value was met.
2 (informational)
The percentage of changed pages is greater than the low CHANGELIMIT value
and less than the high CHANGELIMIT value.
3 (informational)
The percentage of changed pages is greater than or equal to the high
CHANGELIMIT value.
If you specify multiple COPY control statements in one job step, that job step
reports the highest return code from all of the embedded statements. Essentially, the
statement with the highest percentage of changed pages determines the return
code and the recommended action for the entire list of COPY control statements
in that job step.
Using conditional copy with generation data groups (GDGs): When you use
generation data groups (GDGs) and need to make an incremental image copy, take
the following steps to prevent creating an empty image copy:
1. Include in your job a first step in which you run COPY with CHANGELIMIT
REPORTONLY. Set the SYSCOPY DD statement to DD DUMMY so that no
output data set is allocated. If you specify REPORTONLY and use a template,
DB2 does not dynamically allocate the data set.
2. Add a conditional JCL statement to examine the return code from the COPY
CHANGELIMIT REPORTONLY step.
3. Add a second COPY step without CHANGELIMIT REPORTONLY to copy the
table space or table space list, based on the return code from the COPY
CHANGELIMIT REPORTONLY step. A JCL sketch of this technique follows this list.
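The following is a minimal JCL sketch of this technique. The UID values, the GDG base
IMAGCOPY.DSN8S81E.GDG, and the reuse of the DSNUPROC procedure parameters from the earlier
examples are assumptions; the COND parameter bypasses the copy step unless the report step
ends with return code 2 or higher:
//REPORT  EXEC DSNUPROC,UID='IUJMU111.COPYRPT',UTPROC='',SYSTEM='DSN'
//SYSCOPY DD DUMMY
//SYSIN   DD *
  COPY TABLESPACE DSN8D81A.DSN8S81E CHANGELIMIT REPORTONLY
/*
//DOCOPY  EXEC DSNUPROC,UID='IUJMU111.COPYGDG',UTPROC='',SYSTEM='DSN',
//        COND=(2,GT,REPORT)
//SYSCOPY DD DSN=IMAGCOPY.DSN8S81E.GDG(+1),UNIT=SYSDA,
//            SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN   DD *
  COPY TABLESPACE DSN8D81A.DSN8S81E CHANGELIMIT
/*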
Even if you do not periodically merge multiple image copies into one copy when
you do not have enough tape units, RECOVER TABLESPACE can still attempt to
recover the object. RECOVER dynamically allocates the full image copy and
attempts to dynamically allocate all the incremental image copy data sets. If every
incremental copy can be allocated, recovery proceeds to merge pages to table
spaces and apply the log. If a point is reached where RECOVER TABLESPACE
cannot allocate an incremental copy, the log RBA of the last successfully allocated
data set is noted. Attempts to allocate incremental copies cease, and the merge
proceeds using only the allocated data sets. The log is applied from the noted RBA,
and the incremental image copies that were not allocated are simply ignored.
For LOB data, you should quiesce and copy both the base table space and the
LOB table space at the same time to establish a point of consistency, called a
recovery point. Be aware that QUIESCE does not create a recovery point
for a LOB table space that contains LOBs that are defined with LOG NO.
Setting and clearing the informational COPY-pending status: For an index that
was defined with the COPY YES attribute the following utilities can place the index
in the informational COPY-pending (ICOPY) status:
v REORG INDEX
v REORG TABLESPACE LOG YES or NO
v LOAD TABLE LOG YES or NO
v REBUILD INDEX
After the utility processing completes, take a full image copy of the index space so
that the RECOVER utility can recover the index space. If you need to recover an
index of which you did not take a full image copy, use the REBUILD INDEX utility to
rebuild the index from data in the table space.
Improving performance
You can merge a full image copy and subsequent incremental image copies into a
new full copy by running the MERGECOPY utility. After reorganizing a table space,
the first image copy must be a full image copy.
Do not base the decision of whether to run a full image copy or an incremental
image copy on the number of rows that are updated since the last image copy was
taken. Instead, base your decision on the percentage of pages that contain at least
one updated record (not the number of updated records). Regardless of the size of
the table, if more than 50% of the pages contain updated records, use full image
copy (this saves the cost of a subsequent MERGECOPY). To find the percentage of
changed pages, you can execute COPY with the CHANGELIMIT REPORTONLY
option. Alternatively, you can execute COPY CHANGELIMIT to allow COPY to
determine whether a full image copy or incremental copy is required; see
“Specifying conditional image copies” on page 115 for more information.
Using data compression can improve COPY performance because COPY does not
decompress data. The performance improvement is proportional to the amount of
compression.
Attention: Do not take incremental image copies when using generation data
groups unless data pages have changed. When you use generation data groups,
taking an incremental image copy when no data pages have changed causes the
following results:
v The new image copy data set is empty.
v No SYSCOPY record is inserted for the new image copy data set.
v Your oldest image copy is deleted.
See “Using conditional copy with generation data groups (GDGs)” on page 116 for
guidance on executing COPY with the CHANGELIMIT and REPORTONLY options
to ensure that you do not create empty image copy data sets when using GDGs.
If you plan to use SMS, catalog all image copies. Never maintain cataloged and
uncataloged image copies that have the same name.
Terminating COPY
This section explains the recommended way to terminate the COPY utility.
Recommendation: Do not stop a COPY job with the TERM UTILITY command. If
you issue TERM UTILITY while COPY is in the active or stopped state, DB2 inserts
an ICTYPE=T record in the SYSIBM.SYSCOPY catalog table for each object that
COPY had started processing, but not yet completed. For copies that are made with
SHRLEVEL REFERENCE, some objects in the list might not have an ICTYPE=T
record. For SHRLEVEL CHANGE, some objects might have a valid ICTYPE=F,
I, or T record, or no record at all. The COPY utility does not allow you to take an
incremental image copy if an ICTYPE=T record exists. To reset the status in this
case, you must make a full image copy.
DB2 uses the same image copy data set when you RESTART from the last commit
point. Therefore, specify DISP=(MOD,CATLG,CATLG) on your DD statements. You
cannot use RESTART(PHASE) for any COPY job. If you do specify
RESTART(PHASE), the request is treated as if you specified RESTART, also known
as RESTART(CURRENT).
Restarting COPY
| If you do not use the TERM UTILITY command, you can restart a COPY job.
| COPY jobs with the CONCURRENT option restart from the beginning, and other
| COPY jobs restart from the last commit point. You cannot use RESTART(PHASE)
| for any COPY job. For general instructions on restarting a utility job, see “Restarting
| an online utility” on page 42.
Restarting with a new data set: If you define a new output data set for a current
restart, complete the following actions before restarting the COPY job:
1. Copy the failed COPY output to the new data set.
2. Delete the old data set.
3. Rename the new data set to use the old data set name.
Restricted states: Do not copy a table space that is in any of the following states:
v CHECK-pending
v RECOVER-pending
v REFRESH-pending
v Logical error range
v Group buffer pool RECOVER-pending
v Stopped
v STOP-pending
Claims and drains: Table 16 shows which claim classes COPY claims and drains
and any restrictive status that the utility sets on the target object.
Table 16. Claim classes of COPY operations
Target                                     SHRLEVEL REFERENCE   SHRLEVEL CHANGE
Table space, index space, or partition     DW/UTRO              CR/UTRW (1)
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v CR: Claim the read claim class
v UTRO: Utility restrictive state, read-only access allowed
v UTRW: Utility restrictive state, read-write access allowed
Notes:
1. If the target object is a segmented table space, SHRLEVEL CHANGE does not allow you
to concurrently execute an SQL DELETE without the WHERE clause.
COPY does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Compatibility: Table 17 documents which utilities can run concurrently with COPY
on the same target object. The target object can be a table space, an index space,
or a partition of a table space or index space. If compatibility depends on particular
options of a utility, that information is also documented in the table.
Table 17. Compatibility of COPY with other utilities
                                        COPY         COPY         COPY         COPY
                                        INDEXSPACE   INDEXSPACE   TABLESPACE   TABLESPACE
                                        SHRLEVEL     SHRLEVEL     SHRLEVEL     SHRLEVEL
Action                                  REFERENCE    CHANGE       REFERENCE    CHANGE
BACKUP SYSTEM                           Yes          Yes          Yes          Yes
CHECK DATA                              Yes          Yes          No           No
CHECK INDEX                             Yes          Yes          Yes          Yes
CHECK LOB                               Yes          Yes          Yes          Yes
COPY INDEXSPACE                         No           No           Yes          Yes
COPY TABLESPACE                         Yes          Yes          No           No
COPYTOCOPY                              No           No           No           No
DIAGNOSE                                Yes          Yes          Yes          Yes
LOAD                                    No           No           No           No
MERGECOPY                               No           No           No           No
MODIFY                                  No           No           No           No
QUIESCE                                 Yes          No           Yes          No
REBUILD INDEX                           No           No           Yes          Yes
RECOVER INDEX                           No           No           Yes          Yes
RECOVER TABLESPACE                      Yes          Yes          No           No
REORG INDEX                             No           No           Yes          Yes
REORG TABLESPACE UNLOAD                 No           No           No           No
  CONTINUE or PAUSE
REORG TABLESPACE UNLOAD                 Yes          Yes          Yes          Yes
  ONLY or EXTERNAL
REPAIR LOCATE by KEY, RID, or           Yes          Yes          Yes          Yes
  PAGE DUMP or VERIFY
REPAIR LOCATE by KEY or RID             No           No           No           No
  DELETE or REPLACE
To run on DSNDB01.SYSUTILX, COPY must be the only utility in the job step. Also,
if SHRLEVEL REFERENCE is specified, the COPY job of DSNDB01.SYSUTILX
must be the only utility running in the Sysplex.
COPY on SYSUTILX is an “exclusive” job; such a job can interrupt another job
between job steps, possibly causing the interrupted job to time out.
Example 1: Making a full image copy. The following control statement specifies
that the COPY utility is to make a full image copy of table space
DSN8D81A.DSN8S81E. The copy is to be written to the data set that is defined by
the SYSCOPY DD statement in the JCL; SYSCOPY is the default.
//STEP1 EXEC DSNUPROC,UID=’IUJMU111.COPYTS’,
// UTPROC=’’,
// SYSTEM=’DSN’,DB2LEV=DB2A
//SYSCOPY DD DSN=COPY001F.IFDY01,UNIT=SYSDA,VOL=SER=CPY01I,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN DD *
COPY TABLESPACE DSN8D81A.DSN8S81E
/*
| Instead of defining the data sets in the JCL, you can use templates. In the following
| example, the preceding job is modified to use a template. In this example, the name
| of the template is LOCALDDN. The LOCALDDN template is identified in the COPY
| statement by the COPYDDN option.
| //STEP1 EXEC DSNUPROC,UID=’IUJMU111.COPYTS’,
| // UTPROC=’’,
| // SYSTEM=’DSN’,DB2LEV=DB2A
| //SYSIN DD *
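| The SYSIN stream for this job is not shown here. A minimal sketch of what it might
| contain follows; the data set name pattern in the TEMPLATE statement is an assumption
| that is modeled on the other template examples in this chapter:
|   TEMPLATE LOCALDDN DSN(&DB..&SP..LOCALCPY) UNIT SYSDA
|   COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(LOCALDDN)
| /*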
| Recommendation: When possible, use templates to allocate data sets. For more
| information about templates, see Chapter 31, “TEMPLATE,” on page 575.
| Example 2: Making full image copies for local site and recovery site. The
| following COPY control statement specifies that COPY is to make primary and
| backup full image copies of table space DSN8D81P.DSN8S81C at both the local
| site and the recovery site. The COPYDDN option specifies the output data sets for
| the local site, and the RECOVERYDDN option specifies the output data sets for the
| recovery site. The PARALLEL option indicates that up to 2 objects are to be
| processed in parallel.
| The OPTIONS statement at the beginning indicates that if COPY encounters any
| errors (return code 8) while making the requested copies, DB2 ignores that
| particular item. COPY skips that item and moves on to the next item. For example,
| if DB2 encounters an error copying the specified data set to the COPY1 data set,
| DB2 ignores the error and tries to copy the table space to the COPY2 data set.
| OPTIONS EVENT(ITEMERROR,SKIP)
| COPY TABLESPACE DSN8D81P.DSN8S81C
| COPYDDN(COPY1,COPY2)
| RECOVERYDDN(COPY3,COPY4)
| PARALLEL(2)
Example 3: Making full image copies of a list of objects. The control statement
in Figure 20 on page 124 specifies that COPY is to make local and recovery full
image copies (both primary and backup) of the following objects:
v Table space DSN8D81A.DSN8S81D, and its indexes:
– DSN8810.XDEPT1
– DSN8810.XDEPT2
– DSN8810.XDEPT3
v Table space DSN8D81A.DSN8S81E, and its indexes:
– DSN8810.XEMP1
– DSN8810.XEMP2
These copies are to be written to the data sets that are identified by the COPYDDN
and RECOVERYDDN options for each object. The COPYDDN option specifies the
data sets for the copies at the local site, and the RECOVERYDDN option specifies
the data sets for the copies at the recovery site. The first parameter of each of
these options specifies the data set for the primary copy, and the second parameter
specifies the data set for the backup copy. For example, the primary copy of table
space DSN8D81A.DSN8S81D at the recovery site is to be written to the data set
that is identified by the COPY3 DD statement.
SHRLEVEL REFERENCE specifies that no updates are allowed during the COPY
job. This option is the default and is recommended to ensure the integrity of the
data in the image copy.
| Figure 20. Example of making full image copies of multiple objects (Part 1 of 2)
//COPY16 DD DSN=C81A.S00004.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY17 DD DSN=C81A.S00005.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY18 DD DSN=C81A.S00005.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY19 DD DSN=C81A.S00005.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY20 DD DSN=C81A.S00005.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY21 DD DSN=C81A.S00006.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY22 DD DSN=C81A.S00006.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY23 DD DSN=C81A.S00006.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY24 DD DSN=C81A.S00006.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY25 DD DSN=C81A.S00007.D2003142.T155241.LP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY26 DD DSN=C81A.S00007.D2003142.T155241.LB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY27 DD DSN=C81A.S00007.D2003142.T155241.RP,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//COPY28 DD DSN=C81A.S00007.D2003142.T155241.RB,
// SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN DD *
COPY
TABLESPACE DSN8D81A.DSN8S81D
COPYDDN (COPY1,COPY2)
RECOVERYDDN (COPY3,COPY4)
INDEX DSN8810.XDEPT1
COPYDDN (COPY5,COPY6)
RECOVERYDDN (COPY7,COPY8)
INDEX DSN8810.XDEPT2
COPYDDN (COPY9,COPY10)
RECOVERYDDN (COPY11,COPY12)
INDEX DSN8810.XDEPT3
COPYDDN (COPY13,COPY14)
RECOVERYDDN (COPY15,COPY16)
TABLESPACE DSN8D81A.DSN8S81E
COPYDDN (COPY17,COPY18)
RECOVERYDDN (COPY19,COPY20)
INDEX DSN8810.XEMP1
COPYDDN (COPY21,COPY22)
RECOVERYDDN (COPY23,COPY24)
INDEX DSN8810.XEMP2
COPYDDN (COPY25,COPY26)
RECOVERYDDN (COPY27,COPY28)
PARALLEL(4)
SHRLEVEL REFERENCE
/*
| Figure 20. Example of making full image copies of multiple objects (Part 2 of 2)
| You can also write this COPY job so that it uses lists and templates, as shown in
| Figure 21 on page 126. In this example, the name of the template is COPY. Note
| that this TEMPLATE statement does not contain any space specifications for the
| dynamically allocated data sets. Instead, DB2 determines the space requirements.
| The COPY template is identified in the COPY statement by the COPYDDN and
| RECOVERYDDN options. The name of the list is COPYLIST. This list is identified in
| the COPY control statement by the LIST option.
| Figure 21. Example of using a list and template to make full image copies of multiple objects
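| The control statements in Figure 21 are not reproduced here. The following is a minimal
| sketch of what they might look like; the DSN pattern in the TEMPLATE statement is an
| assumption that is modeled on the template variables that are used elsewhere in this
| chapter:
|   TEMPLATE COPY DSN(&DB..&SP..&LR.&PB..D&DATE.)
|   LISTDEF COPYLIST INCLUDE TABLESPACE DSN8D81A.DSN8S81D
|                    INCLUDE INDEXSPACE DSN8D81A.XDEPT1
|                    INCLUDE INDEXSPACE DSN8D81A.XDEPT2
|                    INCLUDE INDEXSPACE DSN8D81A.XDEPT3
|                    INCLUDE TABLESPACE DSN8D81A.DSN8S81E
|                    INCLUDE INDEXSPACE DSN8D81A.XEMP1
|                    INCLUDE INDEXSPACE DSN8D81A.XEMP2
|   COPY LIST COPYLIST
|        COPYDDN(COPY,COPY)
|        RECOVERYDDN(COPY,COPY)
|        PARALLEL(4)
|        SHRLEVEL REFERENCE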
| Note that the DSN option of the TEMPLATE statement identifies the names of the
| data sets to which the copies are to be written. These names are similar to the data
| set names in the JCL in Figure 20 on page 124. For more information about using
| variable notation for data set names in TEMPLATE statements, see “Creating data
| set names” on page 587.
Each of the preceding COPY jobs creates a point of consistency for the table spaces
and their indexes. You can subsequently use the RECOVER utility with the
TOLOGPOINT option to recover all of these objects; see page 371 for an example.
| The TEMPLATE utility control statements define the templates A1 and A2. For more
| information about TEMPLATE control statements, see “Syntax and options of the
| TEMPLATE control statement” on page 575 in the TEMPLATE chapter.
//COPY2A EXEC DSNUPROC,SYSTEM=DSN
//SYSIN DD *
TEMPLATE A1 DSN(&DB..&SP..COPY1) UNIT CART STACK YES
TEMPLATE A2 DSN(&DB..&SP..COPY2) UNIT CART STACK YES
COPY PARALLEL 2 TAPEUNITS 2
TABLESPACE DSN8D81A.DSN8S81D COPYDDN(A1)
INDEXSPACE DSN8810.XDEPT COPYDDN(A1)
TABLESPACE DSN8D81A.DSN8S81E COPYDDN(A2)
INDEXSPACE DSN8810.YDEPT COPYDDN(A2)
Although use of templates is recommended, you can also define the output data
sets by coding JCL DD statements, as in Figure 22 on page 127. This COPY
The following table spaces are to be processed in parallel on two different tape
devices:
v DSN8D81A.DSN8S81D on the device that is defined by the DD1 DD statement
and the device that is defined by the DD5 DD statement
v DSN8D81A.DSN8S81E on the device that is defined by the DD2 DD statement
Copying of the following tables spaces must wait until processing has completed for
DSN8D81A.DSN8S81D and DSN8D81A.DSN8S81E:
v DSN8D81A.DSN8S81F on the device that is defined by the DD2 DD statement
after DSN8D81A.DSN8S81E completes processing
v DSN8D81A.DSN8S81G on the device that is defined by the DD1 DD statement
after DSN8D81A.DSN8S81D completes processing
Figure 22. Example of making full image copies of a list of objects in parallel on tape
This COPY JCL defines two data sets (DB1.TS1.CLP and DB2.TS2.CLB.BACKUP), and the
TEMPLATE utility control statements define two data sets that are to be dynamically
allocated (&DB..&SP..COPY1 and &DB..&SP..COPY2). For more information about
TEMPLATE control statements, see “Syntax and options of the TEMPLATE control
statement” on page 575 in the TEMPLATE chapter.
The COPYDDN options in the COPY control statement specify the data sets that
are to be used for the local primary and backup image copies of the specified table
spaces. For example, the primary copy of table space DSN8D81A.DSN8S81D is to
be written to the data set that is defined by the DD1 DD statement (DB1.TS1.CLP),
and the primary copy of table space DSN8D81A.DSN8S81E is to be written to the
data set that is defined by the A1 template (&DB..&SP..COPY1).
Four tape devices are allocated for this COPY job: the JCL allocates two tape
drives, and the TAPEUNITS 2 option in the COPY statement indicates that two tape
devices are to be dynamically allocated. Note that the TAPEUNITS option applies
only to those tape devices that are dynamically allocated by the TEMPLATE
statement.
Recommendation: Although this example shows how to use both templates and
DD statements, use only templates, if possible.
Figure 23. Example of using both JCL-defined and template-defined data sets to copy a list
of objects on a tape
In the preceding example, the utility determines the number of tape streams to use
by dividing the value for TAPEUNITS (8) by the number of output data sets (2), for a
total of four tape streams in this example. For each tape stream, the utility attaches
one subtask.
The list of objects is sorted by size and processed in descending order. The first
subtask to finish processes the next object in the list. In this example, the
PARALLEL(10) option limits the number of objects to be processed in parallel to 10
and attaches four subtasks. Each subtask copies the objects in the list in parallel to
two tape drives, one for the primary and one for the recovery output data sets.
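The control statement that this explanation refers to is not reproduced above. The
following sketch suggests its general form; the template names, the DSN patterns,
and the list pattern are illustrative assumptions:
TEMPLATE T1 UNIT CART STACK YES
         DSN(COPY.&DB..&TS..LP)
TEMPLATE T2 UNIT CART STACK YES
         DSN(COPY.&DB..&TS..RP)
LISTDEF COPYLIST INCLUDE TABLESPACE DBA906*.T*A906*
COPY LIST COPYLIST
     COPYDDN(T1)
     RECOVERYDDN(T2)
     PARALLEL(10)
     TAPEUNITS(8)
     SHRLEVEL REFERENCE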
For more information about LISTDEF control statements, see “Syntax and options
of the LISTDEF control statement” on page 163 in the LISTDEF chapter. For more
information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement” on page 575 in the TEMPLATE chapter.
All specified copies (local primary and backup copies and remote primary and
backup copies) are written to data sets that are dynamically allocated according to
the specifications of the COPYDS template. This template is defined in the
preceding TEMPLATE utility control statement. For more information about
templates, see Chapter 31, “TEMPLATE,” on page 575.
The SHRLEVEL CHANGE option in the following COPY control statement specifies
that updates can be made during the COPY job.
TEMPLATE COPYDS DSN &US.2.&SN..&LR.&PB..D&DATE.
LISTDEF NAME1 INCLUDE INDEXSPACE DSN8D81A.XEMP1
INCLUDE TABLESPACE DSN8D81A.DSN8S81D
COPY LIST NAME1 COPYDDN(COPYDS, COPYDS) RECOVERYDDN(COPYDS,COPYDS)
FULL NO SHRLEVEL CHANGE
|
| Figure 24. Example of invoking DFSMSdss concurrent copy with the COPY utility
|
| Example 11: Invoking DFSMSdss concurrent copy and using a filter data set.
| The control statement in Figure 25 specifies that DFSMSdss concurrent copy is to
| make full image copies of the objects in the TSLIST list (table spaces TS1, TS2,
| and TS3). The FILTERDDN option specifies that COPY is to use the filter data set
| that is defined by the FILT template. All output is sent to the SYSCOPY data set, as
| indicated by the COPYDDN(SYSCOPY) option. SYSCOPY is the default. This data
| set is defined in the preceding TEMPLATE control statement.
| LISTDEF TSLIST
| INCLUDE TABLESPACE TS1
| INCLUDE TABLESPACE TS2
| INCLUDE TABLESPACE TS3
| TEMPLATE SYSCOPY DSN &DB..&TS..COPY&IC.&LR.&PB..D&DATE..T&TIME.
| UNIT(SYSDA) DISP (MOD,CATLG,CATLG)
| TEMPLATE FILT DSN FILT.TEST1.&SN..D&DATE.
| UNIT(SYSDA) DISP (MOD,CATLG,DELETE)
| COPY LIST TSLIST
| FILTERDDN(FILT)
| COPYDDN(SYSCOPY)
| CONCURRENT
| SHRLEVEL REFERENCE
|
| Figure 25. Example of invoking DFSMSdss concurrent copy with the COPY utility and using a
| filter data set
Example 12: Copying LOB table spaces together with related objects. Assume
that table space TPIQUD01 is a base table space and that table spaces TLIQUDA1,
TLIQUDA2, TLIQUDA3, and TLIQUDA4 are LOB table spaces. The control
statement in Figure 26 specifies that COPY is to take the following actions:
v Take a full image copy of each specified table space if the percentage of
changed pages is equal to or greater than the highest decimal percentage value
for the CHANGELIMIT option for that table space. For example, if the percentage
of changed pages for table space TPIQUD01 is equal to or greater than 6.7%,
COPY is to take a full image copy.
v Take an incremental image copy of each specified table space if the percentage
of changed pages falls in the range between the specified decimal percentage
values for the CHANGELIMIT option for that table space. For example, if the
percentage of changed pages for table space TLIQUDA1 is greater than 7.9%
and less than 25.3%, COPY is to take an incremental image copy.
v Take no image copy of a specified table space if the percentage of changed
pages is equal to or less than the lowest decimal percentage value for the
CHANGELIMIT option for that table space. For example, if the percentage of
changed pages for table space TLIQUDA2 is equal to or less than 2.2%, COPY
is not to take an image copy.
v Take full image copies of index spaces IPIQUD01, IXIQUD02, IUIQUD03,
IXIQUDA1, IXIQUDA2, IXIQUDA3, and IXIQUDA4.
COPY
TABLESPACE DBIQUD01.TPIQUD01 DSNUM ALL CHANGELIMIT(3.3,6.7)
COPYDDN(COPYTB1)
TABLESPACE DBIQUD01.TLIQUDA1 DSNUM ALL CHANGELIMIT(7.9,25.3)
COPYDDN(COPYTA1)
TABLESPACE DBIQUD01.TLIQUDA2 DSNUM ALL CHANGELIMIT(2.2,4.3)
COPYDDN(COPYTA2)
TABLESPACE DBIQUD01.TLIQUDA3 DSNUM ALL CHANGELIMIT(1.2,9.3)
COPYDDN(COPYTA3)
TABLESPACE DBIQUD01.TLIQUDA4 DSNUM ALL CHANGELIMIT(2.2,4.0)
COPYDDN(COPYTA4)
INDEXSPACE DBIQUD01.IPIQUD01 DSNUM ALL
COPYDDN(COPYIX1)
INDEXSPACE DBIQUD01.IXIQUD02 DSNUM ALL
COPYDDN(COPYIX2)
INDEXSPACE DBIQUD01.IUIQUD03 DSNUM ALL
COPYDDN(COPYIX3)
INDEXSPACE DBIQUD01.IXIQUDA1 DSNUM ALL
COPYDDN(COPYIXA1)
INDEXSPACE DBIQUD01.IXIQUDA2 DSNUM ALL
COPYDDN(COPYIXA2)
INDEXSPACE DBIQUD01.IXIQUDA3 DSNUM ALL
COPYDDN(COPYIXA3)
INDEXSPACE DBIQUD01.IXIQUDA4 DSNUM ALL
COPYDDN(COPYIXA4)
SHRLEVEL REFERENCE
Figure 26. Example of copying LOB table spaces together with related objects
Example 13: Using GDGs to make a full image copy. The following control
statement specifies that the COPY utility is to make a full image copy of table space
DBLT2501.TPLT2501. The local copies are to be written to data sets that are
dynamically allocated according to the COPYTEM1 template. The remote copies
are to be written to data sets that are dynamically allocated according to the
COPYTEM2 template. For both of these templates, the DSN option indicates the
name of generation data group JULTU225 and the generation number of +1. (If a
GDG base does not already exist, DB2 creates one.) Both of these output data sets
are to be modeled after the JULTU225.MODEL data set (as indicated by the
MODELDCB option in the TEMPLATE statements).
//***********************************************************
//* COMMENT: MAKE A FULL IMAGE COPY OF THE TABLESPACE.
//* USE A TEMPLATE FOR THE GDG.
//***********************************************************
//STEP2 EXEC DSNUPROC,UID=’JULTU225.COPY’,
// UTPROC=’’,
// SYSTEM=’SSTR’
//SYSIN DD *
TEMPLATE COPYTEM1
UNIT SYSDA
DSN ’JULTU225.GDG.LOCAL.&PB.(+1)’
MODELDCB JULTU225.MODEL
TEMPLATE COPYTEM2
UNIT SYSDA
DSN ’JULTU225.GDG.REMOTE.&PB.(+1)’
MODELDCB JULTU225.MODEL
COPY TABLESPACE DBLT2501.TPLT2501
FULL YES
COPYDDN (COPYTEM1,COPYTEM1)
RECOVERYDDN (COPYTEM2,COPYTEM2)
SHRLEVEL REFERENCE
The RECOVER utility uses the copies when recovering a table space or index
space to the most recent time or to a previous time. These copies can also be used
by MERGECOPY, UNLOAD, and possibly a subsequent COPYTOCOPY execution.
The entries for SYSCOPY columns remain the same as the original entries in the
SYSCOPY row when the COPY utility recorded them. The COPYTOCOPY job
inserts values in the columns DSNAME, GROUP_MEMBER, JOBNAME, AUTHID,
DSVOLSER, and DEVTYPE.
Restrictions: COPYTOCOPY does not support the following catalog and directory
objects:
v DSNDB01.SYSUTILX, and its indexes
v DSNDB01.DBD01, and its indexes
v DSNDB06.SYSCOPY, and its indexes
An image copy from a COPY job with the CONCURRENT option cannot be
processed by COPYTOCOPY.
Related information: See Part 4 (Volume 1) of DB2 Administration Guide for uses
of COPYTOCOPY in the context of planning for database recovery.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute COPYTOCOPY, but only
on a table space in the DSNDB01 or DSNDB06 database.
Syntax diagram
The COPYTOCOPY syntax diagram is not reproduced here; its fragments and their
notes are summarized below. In these summaries, brackets indicate optional items,
and a vertical bar separates alternatives.
ts-num-spec:
TABLESPACE [database-name.]table-space-name [DSNUM ALL | DSNUM integer]
(DSNUM ALL is the default.)
index-name-spec: (fragment not reproduced)
Notes:
1 INDEXSPACE is the preferred specification.
2 Not valid for nonpartitioning indexes.
from-copy-spec:
FROMLASTCOPY | FROMLASTFULLCOPY | FROMLASTINCRCOPY (1) |
FROMCOPY dsn (2) [FROMVOLUME {CATALOG | volser} [FROMSEQNO n]]
Notes:
1 Not valid with the INDEXSPACE or INDEX keyword.
2 Not valid with the LIST keyword.
data-set-spec:
COPYDDN(ddname1[,ddname2] | ,ddname2) [RECOVERYDDN(ddname3[,ddname4] | ,ddname4)]
RECOVERYDDN(ddname3[,ddname4] | ,ddname4)
Notes:
1 Use this option if you want to make a local site primary copy from one of the recovery site copies.
2 You can specify up to three DD names for both the COPYDDN and RECOVERYDDN options
combined.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. The
utility allows one LIST keyword for each COPYTOCOPY control
statement. Do not specify LIST with either the INDEX or
TABLESPACE keywords. DB2 invokes COPYTOCOPY once for the
entire list. For more information about LISTDEF specifications, see
Chapter 15, “LISTDEF,” on page 163.
TABLESPACE
Specifies the table space (and, optionally, the database it belongs
to) that is to be copied.
database-name is the name of the database that the table space
belongs to. The default is DSNDB04.
table-space-name is the name of the table space to be copied.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space that is to be copied;
the name is obtained from the SYSIBM.SYSINDEXES table. Define
the index space with the COPY YES attribute.
database-name optionally specifies the name of the database that
the index space belongs to. The default is DSNDB04.
index-space-name specifies the name of the index space that is to
be copied.
INDEX creator-id.index-name
Specifies the index that is to be copied. Enclose the index name in
quotation marks if the name contains a blank.
creator-id optionally specifies the creator of the index. The default
is the user identifier for the utility.
index-name specifies the name of the index that is to be copied.
DSNUM Identifies a partition or data set, within the table space or the index
space, that is to be copied. The data sets of the object have names of the form
catname.DSNDBx.dbname.spacename.y0001.Annn.
In this format:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
spacename Is the table space or index space name.
y Is I or J.
nnn Is the data set integer.
version number. If the image copy data set is not a generation data
set and more than one image copy data set has the same data
set name, use the FROMVOLUME option to identify the data set
exactly.
FROMVOLUME
Identifies the image copy data set.
CATALOG
Identifies the data set as cataloged. Use this option only for an
image copy that was created as a cataloged data set. (Its
volume serial is not recorded in SYSIBM.SYSCOPY.)
COPYTOCOPY refers to the SYSIBM.SYSCOPY catalog table
during execution. If you use FROMVOLUME CATALOG, the
data set must be cataloged. If you remove the data set from the
catalog after creating it, you must catalog the data set again to
make it consistent with the record that appears in
SYSIBM.SYSCOPY for this copy.
vol-ser
Identifies the data set by an alphanumeric volume serial
identifier of its first volume. Use this option only for an image
copy that was created as a noncataloged data set. Specify the
first vol-ser in the SYSCOPY record to locate a data set that is
stored on multiple tape volumes. If an individual volume serial
number contains leading zeros, it must be enclosed in single
quotation marks.
FROMSEQNO n
Identifies the image copy data set by its file sequence number.
n is the file sequence number.
COPYDDN (ddname1,ddname2)
Specifies a DD name (ddname) or a TEMPLATE name for the
primary (ddname1) and backup (ddname2) copied data sets for the
image copy at the local site. If ddname2 is specified by itself,
COPYTOCOPY expects the local site primary image copy to exist.
If it does not exist, error message DSNU1401 is issued and the
process for the object is terminated.
Recommendation: Catalog all of your image copy data sets.
You cannot have duplicate image copy data sets. If the DD
statement identifies a noncataloged data set with the same name,
volume serial, and file sequence number as one that is already
recorded in SYSIBM.SYSCOPY, COPYTOCOPY issues a message
and no copy is made. If the DD statement identifies a cataloged
data set with only the same name, no copy is made. For cataloged
image copy data sets, you must specify CATLG for the normal
termination disposition in the DD statement; for example,
DISP=(MOD,CATLG,CATLG). The DSVOLSER field of the
SYSCOPY entry is blank.
When the image copy data set is going to a tape volume, specify
VOL=SER parameter in the DD statement.
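For example, a DD statement for a tape-resident backup copy might look similar to
the following sketch; the data set name, unit name, and volume serial are
hypothetical:
//COPY2 DD DSN=DH109003.COPY2.LB,DISP=(MOD,CATLG,CATLG),
//         UNIT=TAPE,VOL=SER=DB2T01,LABEL=(1,SL)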
The COPYDDN keyword specifies either a DD name or a
TEMPLATE name specification from a previous TEMPLATE control
statement. If utility processing detects that the specified name is
both a DD name in the current job step and a TEMPLATE name,
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or Index space
Object that is to be copied. (If you want to copy only certain partitions in a
partitioned table space, use the DSNUM option in the control statement.)
DB2 catalog objects
Objects in the catalog that COPYTOCOPY accesses. The utility records
each copy in the DB2 catalog table SYSIBM.SYSCOPY.
Input image copy data set
This information is accessed through the DB2 catalog. However, if you want
to preallocate your image copy data sets by using DD statements, see
“Retaining tape mounts” on page 143 for more information. COPYTOCOPY
retains all tape mounts for you.
Output data set size: Image copies are written to sequential non-VSAM data sets.
| Recommendation: Use a template for the image copy data set for a table space
| by specifying a TEMPLATE statement without the SPACE keyword. When you omit
| this keyword, the utility calculates the appropriate size of the data set for you.
Alternatively, you can find the approximate size, in bytes, of the image copy data
set for a table space by using the following procedure:
1. Find the high-allocated page number from the COPYPAGESF column of
SYSIBM.SYSCOPY or from information in the VSAM catalog data set.
2. Multiply the high-allocated page number by the page size.
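For example, using hypothetical values, if the high-allocated page number is 5000
and the page size is 4 KB, the image copy data set requires approximately
5000 × 4096 bytes, or about 20 MB.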
JCL parameters: You can specify a block size for the output by using the BLKSIZE
parameter on the DD statement for the output data set. Valid block sizes are
multiples of 4096 bytes.
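For example, BLKSIZE=28672 (seven 4096-byte pages) is a valid block size; the data
set name in this sketch is hypothetical:
//COPY1 DD DSN=DH109003.COPY1.LP,DISP=(NEW,CATLG,CATLG),
//         UNIT=SYSDA,SPACE=(CYL,(15,1)),DCB=BLKSIZE=28672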
Cataloging image copies: To catalog your image copy data sets, use the
DISP=(NEW,CATLG,CATLG) parameter in the DD statement or TEMPLATE that is
named by the COPYDDN or RECOVERYDDN option. After the image copy is
taken, the DSVOLSER column of the row that is inserted into SYSIBM.SYSCOPY
contains blanks.
Duplicate image copy data sets are not allowed. If a cataloged data set is already
recorded in SYSIBM.SYSCOPY with the same name as the new image copy data
set, a message is issued and the copy is not made.
When RECOVER locates the entry in SYSIBM.SYSCOPY, it uses the ICF catalog
to allocate the required data set. If you have uncataloged the data set, the
allocation fails. In that case, the recovery can still go forward; RECOVER searches
for a previous image copy. But even if RECOVER finds one, it must use
correspondingly more of the log to recover. You are responsible for keeping the
z/OS catalog consistent with SYSIBM.SYSCOPY with regard to existing image copy
data sets.
The COPYTOCOPY utility makes a copy from an existing image copy and writes
pages from the image copy to the output data sets. The JCL for the utility job must
include DD statements or a template for the output data sets. If the object consists
of multiple data sets and all are copied in one job, the copies reside in one physical
sequential output data set.
If a job step that contains more than one COPYTOCOPY statement abnormally
terminates, do not use TERM UTILITY. Restart the job from the last commit point by
using RESTART instead. Terminating COPYTOCOPY in this case might cause
inconsistencies between the ICF catalog and DB2 catalogs if generation data sets
are used.
If you specify the FROMCOPY keyword and the specified data set is not found in
SYSIBM.SYSCOPY, COPYTOCOPY issues message DSNU1401I. Processing for
the object then terminates.
If you use the FROMCOPY keyword, only the specified data set is used as the
input to the COPYTOCOPY job.
If you plan to use SMS, catalog all image copies. Never maintain cataloged and
uncataloged image copies with the same name.
If you use templates to allocate tape drives for the output data sets, the utility
dynamically allocates the tape drives according to the following algorithm:
v One tape drive if the input data set resides on tape.
v A tape drive for each template with STACK YES that references tape.
v Three tape drives, one for each of the local and remote output image copies, if
non-stacked templates reference tape.
Thus, COPYTOCOPY allocates a minimum of three tape drives. The utility allocates
four tape drives if the input data set resides on tape, and more tape drives if you
specified tape templates with STACK YES.
If input data sets to be copied are stacked on tape and output data sets are defined
by a template, the utility sorts the list of objects by the file sequence numbers (FSN)
of the input data sets and processes the objects serially.
For example, image copies of the following table spaces with their FSNs are
stacked on TAPE1:
v DB1.TS1 FSN=1
v DB1.TS2 FSN=2
v DB1.TS3 FSN=3
v DB1.TS4 FSN=4
In the following statements, COPYTOCOPY uses a template for the output data set:
//COPYTOCOPY EXEC DSNUPROC,SYSTEM=V71A
//SYSIN DD *
TEMPLATE A1 DSN &DB..&SP..COPY1.TAPE UNIT CART STACK YES
COPYTOCOPY
TABLESPACE DB1.TS4
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS1
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS2
FROMLASTFULLCOPY
RECOVERYDDN(A1)
TABLESPACE DB1.TS3
FROMLASTFULLCOPY
RECOVERYDDN(A1)
As a result, the utility sorts the objects by FSN and processes them in the following
order:
v DB1.TS1
v DB1.TS2
v DB1.TS3
v DB1.TS4
If the output data sets are defined by JCL, the utility gives stacking preference to
the output data sets over input data sets. If the input data sets are not stacked, the
utility sorts the objects by size in descending order.
Terminating COPYTOCOPY
You can use the TERM utility command to terminate a COPYTOCOPY job. For
instructions on terminating an online utility, see “Terminating an online utility with the
TERM UTILITY command” on page 41.
Restarting COPYTOCOPY
For instructions on restarting a utility job, see “Restarting an online utility” on page
42.
Claims: Table 19 shows which claim classes COPYTOCOPY claims on the target
object.
Table 19. Claim classes of COPYTOCOPY operations.
Target COPYTOCOPY
Table space or partition, or index space or partition UTRW
Legend:
v UTRW - Utility restrictive state - read-write access allowed
Example 2: Copying the most recent copy. The following control statement
specifies that COPYTOCOPY is to make a local site backup copy, a recovery site
primary copy, and a recovery site backup copy of table space
DBA90102.TPA9012C. The COPYDDN and RECOVERYDDN options also indicate
the data sets to which these copies should be written. For example, the recovery
site primary copy is to be written to the COPY3 data set. The FROMLASTCOPY
option specifies that the most recent full image copy or incremental image copy is
to be used as the input copy data set. This option is the default and is therefore not
required.
COPYTOCOPY TABLESPACE DBA90102.TPA9012C
FROMLASTCOPY COPYDDN(,COPY2)
RECOVERYDDN(COPY3,COPY4)
Example 3: Copying the most recent full image copy. The following control
statement specifies that COPYTOCOPY is to make primary and backup copies at
the recovery site of table space DBA90201.TPA9021C. The FROMLASTFULLCOPY
option specifies that the most recent full image copy is to be used as the input copy
data set.
COPYTOCOPY TABLESPACE DBA90201.TPA9021C
FROMLASTFULLCOPY
RECOVERYDDN(COPY3,COPY4)
Example 4: Specifying a copy data set for input. The following control statement
specifies that COPYTOCOPY is to make a local site backup copy, a recovery site
primary copy, and a recovery site backup copy from data set
DH109003.COPY1.STEP1.COPY3. This input data set is specified by the
FROMCOPY option. The output data sets (COPY2, COPY3, and COPY4) are
specified by the COPYDDN and RECOVERYDDN options.
COPYTOCOPY TABLESPACE DBA90301.TPA9031C
FROMCOPY DH109003.COPY1.STEP1.COPY3
COPYDDN(,COPY2)
RECOVERYDDN(COPY3,COPY4)
Example 5: Identifying a cataloged image copy data set. The following control
statement specifies that COPYTOCOPY is to make a local site backup copy from a
cataloged data set that is named DH109003.COPY1.STEP1.COPY4. This data set
is identified by the FROMCOPY and FROMVOLUME options. The FROMCOPY
option specifies the input data set name, and the FROMVOLUME CATALOG option
indicates that the input data set is cataloged. Use the FROMVOLUME option to
distinguish a data set from other data sets that have the same name.
COPYTOCOPY TABLESPACE DBA90302.TLA9032A
FROMCOPY DH109003.COPY1.STEP1.COPY4
FROMVOLUME CATALOG
COPYDDN(,COPY2)
TEMPLATE C2C1_T1
DSN(JUKQU2BP.C2C1.LB.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
TEMPLATE C2C1_T2
DSN(JUKQU2BP.C2C1.RP.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
TEMPLATE C2C1_T3
DSN(JUKQU2BP.C2C1.RB.&SN.)
DISP(NEW,CATLG,CATLG)
UNIT(SYSDA)
The OPTIONS PREVIEW statement before the LISTDEF statement is used to force
the CPY1 list contents to be included in the output. For long lists, using this
statement is not recommended, because it might cause the output to be too long.
The OPTIONS OFF statement ends the PREVIEW mode processing, so that the
following TEMPLATE and COPYTOCOPY jobs run normally.
OPTIONS PREVIEW
LISTDEF CPY1 INCLUDE TABLESPACES TABLESPACE DBA906*.T*A906*
INCLUDE INDEXSPACES COPY YES INDEXSPACE ADMF001.I?A906*
OPTIONS OFF
TEMPLATE TMP1 UNIT SYSDA
DSN (DH109006.COPY&LOCREM.&PRIBAC..&SN..T&TIME.)
DISP (MOD,CATLG,CATLG)
COPYTOCOPY LIST CPY1 COPYDDN(TMP1,TMP1)
For more information about LISTDEF control statements, see “Syntax and options
of the LISTDEF control statement” on page 163 in the LISTDEF chapter. For more
information about TEMPLATE control statements, see “Syntax and options of the
TEMPLATE control statement” on page 575 in the TEMPLATE chapter. For more
information about OPTIONS control statements, see “Syntax and options of the
OPTIONS control statement” on page 303 in the OPTIONS chapter.
Interpreting output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2 problems, you might need to refer to
licensed documentation to interpret output from this utility.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorizations:
v REPAIR privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSADM authority can execute the DIAGNOSE utility with
the WAIT statement option on any table space.
Syntax diagram
diagnose statement:
(Diagram not reproduced.) The statement accepts the following options, each of
which can take a list of values, and can be followed by display, wait, and abend
statements:
TYPE(integer, ...)
ALLDUMPS[(X'abend-code', ...)]
NODUMPS[(X'abend-code', ...)]
display statement:
wait statement:
abend statement:
(Diagrams not reproduced; see the option descriptions that follow.)
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
TYPE(integer, ...)
Specifies one or more types of diagnose that you want to perform.
integer is the number of types of diagnoses. The maximum number of types
is 32. IBM Software Support defines the types as needed to diagnose
problems with IBM utilities.
ALLDUMPS(X'abend-code', ...)
Forces a dump to be taken in response to any utility abend code.
X'abend-code' is a member of a list of abend codes to which the scope of
ALLDUMPS is limited.
abend-code is a hexadecimal value.
NODUMPS(X'abend-code', ...)
Suppresses the dump for any utility abend code.
X'abend-code' is a member of a list of abend codes to which the scope of
NODUMPS is limited.
abend-code is a hexadecimal value.
DISPLAY
Formats the specified database items using SYSPRINT.
OBD database-name.table-space-name
Formats the object descriptor (OBD) of the table space.
database-name is the name of the database in which the table space
belongs.
table-space-name is the name of the table space whose OBD is to be
formatted.
ALL Formats all OBDs of the table space. The OBD of any object
that is associated with the table space is also formatted.
TABLES
Formats the OBDs of all tables in the specified table spaces.
INDEXES
Formats the OBDs of all indexes in the specified table spaces.
SYSUTIL
Formats every record from SYSIBM.SYSUTIL. This directory table
stores information about all utility jobs.
MEPL
Dumps the module entry point lists (MEPLs) to SYSPRINT.
AVAILABLE
| Displays the utilities that are installed on this subsystem in both bitmap
| and readable format. The presence or absence of the utility product
| 5655-K61 (IBM DB2 Utilities Suite for z/OS) affects the results of this
| display. See message DSNU862I for the output of this display.
DBET
Dumps the contents of a database exception table (DBET) to
SYSPRINT.
DATABASE database-name
Dumps the DBET entry that is associated with the specified
database.
database-name is the name of the database.
TABLESPACE database-name.table-space-name
Dumps the DBET entry that is associated with the specified
table space.
database-name is the name of the database.
table-space-name is the name of the table space.
INDEX creator-name.index-name
Dumps the DBET entry that is associated with the specified
index.
creator-name is the ID of the creator of the index.
index-name is the name of the index.
Enclose the index name in quotation marks if the name
contains a blank.
WAIT Suspends utility execution when the specified utility message or utility trace
ID is encountered. DIAGNOSE issues a message to the console, and utility
execution does not resume until the operator replies to that message, the
utility job times out, or the utility job is canceled. This waiting period allows
you to time or synchronize events while you are diagnosing concurrency
problems.
If neither the utility message nor the trace ID is encountered, processing
continues.
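For illustration, the following sketch suspends the subsequent COPY job when the
specified message is issued; the message ID, instance count, and copied object are
hypothetical:
DIAGNOSE
   WAIT MESSAGE U304 INSTANCE 1
COPY TABLESPACE DSN8D81A.DSN8S81D
DIAGNOSE END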
ABEND
Forces an abend during utility execution if the specified utility message or
utility trace ID is issued.
If neither the utility message nor the trace ID is encountered, processing
continues.
NODUMP
Suppresses the dump that is generated by an abend of DIAGNOSE.
MESSAGE message-id
Specifies a DSNUxxx or DSNUxxxx message that causes a wait or an
abend to occur when that message is issued. For information about the
valid message IDs, see Part 2 of DB2 Messages and Codes.
message-id is the message, in the form of Uxxx or Uxxxx.
INSTANCE integer
Specifies that a wait or an abend is to occur when the specified message or
trace ID has been issued the number of times that integer indicates.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Database
Database about which DIAGNOSE is to gather diagnosis information.
Table space
Table space about which DIAGNOSE is to gather diagnosis information.
Index space
Index about which DIAGNOSE is to gather diagnosis information.
DIAGNOSE can force a utility to abend when a specific message is issued. To force
an abend when unique-index or referential-constraint violations are detected, you
must specify the message that is issued when the error is encountered. Specify this
message by using the MESSAGE option of the ABEND statement.
Instead of using a message, you can force an abend by using the TRACEID option
of the ABEND statement to specify a trace IFCID that is associated with the utility to
force an abend.
Use the INSTANCE keyword to specify the number of times that the specified
message or trace record is to be generated before the utility abends.
You can restart a DIAGNOSE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 42.
The following control statement forces a dump for any utility abend that occurs
during the execution of the specified COPY job. The DIAGNOSE END option ends
DIAGNOSE processing.
DIAGNOSE
ALLDUMPS
COPY TABLESPACE DSNDB06.SYSDBASE
DIAGNOSE END
The following control statement forces an abend of the specified LOAD job when
message DSNU311 is issued for the fifth time. The NODUMP option indicates that
the DIAGNOSE utility is not to generate a dump in this situation.
DIAGNOSE
ABEND MESSAGE U311 INSTANCE 5 NODUMP
LOAD DATA RESUME NO
INTO TABLE TABLE1
(NAME POSITION(1) CHAR(20))
DIAGNOSE END
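The control statement for the next example, which displays the utilities that are
installed on the subsystem, is not reproduced above; its general form would be
similar to the following sketch:
DIAGNOSE
   DISPLAY AVAILABLE
DIAGNOSE END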
The output from this utility job (which is not reproduced here) includes a MAP
output line that shows a "1" for each installed utility. Each position represents a
specific utility. In this example, the output shows that the core utilities and the
DB2 Utilities Suite are installed.
Output: The EXEC SQL control statement produces a result table when you specify
a cursor.
Execution phases of EXEC SQL: The EXEC SQL control statement executes
entirely in the EXEC phase. You can restart the EXEC phase if necessary.
Syntax diagram
declare-cursor-spec:
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
cursor-name Specifies the cursor name. The name must not identify a cursor that
is already declared within the same input stream. When using the
DB2 cross-loader function to load data from a remote server, you
must identify the cursor with a three-part name. Cursor names that
are specified with the EXEC SQL utility cannot be longer than
eight characters.
select-statement
Specifies the result table for the cursor. This statement can be any
valid SQL SELECT statement, including joins, unions, conversions,
aggregations, special registers, and user-defined functions. See
DB2 SQL Reference for a description of the SELECT statement.
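For illustration, a declared cursor of this kind might look similar to the following
sketch; the cursor name is arbitrary, and the columns are from the DSN8810.EMP
sample table:
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, LASTNAME, SALARY
    FROM DSN8810.EMP
ENDEXEC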
non-select dynamic SQL statement
Specifies a dynamic SQL statement that is to be used as input to
EXECUTE IMMEDIATE. You can specify the following dynamic SQL
statements in a utility statement:
ALTER RENAME
COMMENT ON REVOKE
COMMIT ROLLBACK
CREATE SET CURRENT DEGREE
DELETE SET CURRENT LOCALE LC_CTYPE
DROP SET CURRENT OPTIMIZATION HINT
EXPLAIN SET PATH
GRANT SET CURRENT PRECISION
INSERT SET CURRENT RULES
LABEL ON SET CURRENT SQLID
LOCK TABLE UPDATE
You can restart an EXEC SQL utility job, but it starts from the beginning again. If
you are restarting this utility as part of a larger job in which EXEC SQL completed
successfully, but a later utility failed, do not change the EXEC SQL utility control
statement, if possible. If you must change the EXEC SQL utility control statement,
use caution; any changes can cause the restart processing to fail. For guidance in
restarting online utilities, see “Restarting an online utility” on page 42.
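For example, the following sketch (the table, table space, and column names are
hypothetical) shows a non-select dynamic SQL statement in a utility input stream:
EXEC SQL
  CREATE TABLE MYUSER.MYTABLE
    (DEPTNO CHAR(3) NOT NULL,
     DEPTNAME VARCHAR(36) NOT NULL)
  IN MYDB.MYTS
ENDEXEC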
This type of statement can be used to create a mapping table. For an example of
creating and using a mapping table, see “Sample REORG TABLESPACE control
statements” on page 470 in the REORG TABLESPACE chapter.
Example 2: Inserting rows into a table: The following control statement specifies
that DB2 is to insert all rows from sample table EMP into table MYEMP.
EXEC SQL
INSERT INTO MYEMP SELECT * FROM DSN8810.EMP
ENDEXEC
You can use a declared cursor with the DB2 cross-loader function to load data from
a local server or from any DRDA-compliant remote server as part of the DB2
cross-loader function. For more information about using the cross-loader function,
see “Loading data by using the cross-loader function” on page 238.
You can use LISTDEF to standardize object lists and the utility control statements
that refer to them. This standardization reduces the need to customize or alter utility
job streams.
If you do not use lists and you want to run a utility on multiple objects, you must run
the utility multiple times or specify an itemized list of objects in the utility control
statement.
Restriction: Objects that are created with the DEFINE NO attribute are excluded
from all LISTDEF lists.
Output: Output from the LISTDEF control statement consists of a list with a name.
Authorization required: To execute the LISTDEF utility, you must have SELECT
authority on SYSIBM.SYSINDEXES, SYSIBM.SYSTABLES, and
SYSIBM.SYSTABLESPACE.
Additionally, you must have the authority to execute the utility that is used to
process the list, as currently documented in the “Authorization required” section of
each utility in this book.
Syntax diagram
LISTDEF list-name (The remainder of the main diagram, which consists of one or
more INCLUDE and EXCLUDE clauses, is not reproduced here. In the fragment
summaries below, brackets indicate optional items, and a vertical bar separates
alternatives.)
Notes:
1 You must specify type-spec if you specify DATABASE.
type-spec:
TABLESPACES
INDEXSPACES [COPY NO | COPY YES]
initial-object-spec:
DATABASE database-name
table-space-spec [PARTLEVEL[(n)]]
index-space-spec [PARTLEVEL[(n)]]
table-spec
index-spec
table-space-spec:
TABLESPACE [database-name.]table-space-name
index-space-spec:
INDEXSPACE [database-name.]index-space-name
table-spec:
TABLE [creator-id.]table-name
index-spec:
INDEX [creator-id.]index-name
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LISTDEF list-name
Defines a list of DB2 objects and assigns a name to the list. The list
name makes the list available for subsequent execution as the
object of a utility control statement or as an element of another
LISTDEF statement.
list-name is the name (up to 18 alphanumeric characters in length)
of the defined list.
You can put LISTDEF statements either in a separate LISTDEF
library data set or before a DB2 utility control statement that refers
to the list-name.
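For example, the following sketch (the list name and name pattern are illustrative)
defines a list immediately before the utility control statement that uses it:
LISTDEF PAYLIST INCLUDE TABLESPACE PAY1.*
QUIESCE LIST PAYLIST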
INCLUDE Specifies that the list of objects that results from the expression that
follows is to be added to the list. You must first specify an
INCLUDE clause. You can then specify subsequent INCLUDE or
EXCLUDE clauses in any order to add to or delete clauses from the
existing list.
For detailed information about the order of INCLUDE and
EXCLUDE processing, see “Including objects in a list” on page 171.
EXCLUDE Specifies, after the initial INCLUDE clause, that the list of objects
that results from the expression that follows is to be excluded from
the list if the objects are in the list. If the objects are not in the list,
they are ignored, and DB2 proceeds to the next INCLUDE or
EXCLUDE clause.
For detailed information about the order of INCLUDE and
EXCLUDE processing, see “Including objects in a list” on page 171.
TABLESPACES
Specifies that the INCLUDE or EXCLUDE object expression is to
create a list of related table spaces.
TABLESPACES is the default type for lists that use a table space or
a table for the initial search. For more information about specifying
these objects, see the descriptions of the TABLESPACE and TABLE
options.
No default type value exists for lists that use other lists for the initial
search. The list that is referred to by the LIST option is used unless
you specify TABLESPACES or INDEXSPACES. Likewise, no type
default value exists for lists that use databases for the initial search.
If you specify the DATABASE option, you must specify
INDEXSPACES or TABLESPACES. For more information about
specifying lists and databases, see the descriptions of the LIST and
DATABASE options.
The result of the TABLESPACES keyword varies depending on the
type of object that you specify in the INCLUDE or EXCLUDE
clause. These results are shown in Table 22.
Table 22. Result of the TABLESPACES keyword based on the object type that is specified in
the INCLUDE or EXCLUDE clause
DATABASE: Returns all table spaces that are contained within the database.
TABLESPACE: Returns the specified table space.
TABLE: Returns the table space that contains the table.
INDEXSPACE: Returns the table space that contains the related table.
INDEX: Returns the table space that contains the related table.
LIST of table spaces: Returns the table spaces from the expanded referenced list.
LIST of index spaces: Returns the related table spaces for the index spaces in the
expanded referenced list.
LIST of table spaces and index spaces: Returns the table spaces from the expanded
referenced list and the related table spaces for the index spaces in the same list.
INDEXSPACES
Specifies that the INCLUDE or EXCLUDE object expression is to
create a list of related index spaces.
INDEXSPACES is the default type for lists that use an index space
or an index for the initial search. For more information about
specifying these objects, see the descriptions of the INDEXSPACE
and INDEX options.
No default type value exists for lists that use other lists for the initial
search. The list that is referred to by the LIST option is used unless
you specify TABLESPACES or INDEXSPACES. Likewise, no type
default value exists for lists that use databases for the initial search.
If you specify the DATABASE option, you must specify
INDEXSPACES or TABLESPACES. For more information about
specifying lists and databases, see the descriptions of the LIST and
DATABASE options.
LOB indicator keywords: Use one of three LOB indicator keywords to direct
LISTDEF processing to follow auxiliary relationships to include related LOB objects
in the list. The auxiliary relationship can be followed in either direction. LOB objects
include the LOB table spaces, auxiliary tables, indexes on auxiliary tables, and their
containing index spaces.
| Incomplete LOB definitions cause seemingly related LOB objects not to be found.
| The auxiliary relationship does not exist until you create the AUX TABLE with the
| STORES keyword.
No default LOB indicator keyword exists. If you do not specify BASE, LOB, or ALL,
DB2 does not follow the auxiliary relationships and does not filter LOB from base
objects in the enumerated list.
ALL
Specifies that both related BASE and LOB objects are to be included in the list.
Auxiliary relationships are to be followed from all objects that result from the
initial object lookup, and both BASE and LOB objects are to remain in the final
enumerated list.
BASE
Specifies that only base table spaces (non-LOB) and index spaces are to be
included in this element of the list.
If the result of the initial search for the object is a base object, auxiliary
relationships are not followed. If the result of the initial search for the object is a
LOB object, the auxiliary relationship is applied to the base table space or index
space, and only those objects become part of the resulting list.
LOB
Specifies that only LOB table spaces and related index spaces that contain
indexes on auxiliary tables are to be included in this element of the list.
If the result of the initial search for the object is a LOB object, auxiliary
relationships are not followed. If the result of the initial search for the object is a
base object, the auxiliary relationship is applied to the LOB table space or index
space, and only those objects become part of the resulting list.
For a description of the elements that must be included in each INCLUDE and
EXCLUDE clause, see “Specifying objects to include or exclude.”
DB2 constructs the list, one clause at a time, by adding objects to or removing
objects from the list. If an EXCLUDE clause attempts to remove an object that is
not yet in the list, DB2 ignores the EXCLUDE clause of that object and proceeds to
the next INCLUDE or EXCLUDE clause. Be aware that a subsequent INCLUDE can
return a previously excluded object to the list.
You must include the following elements in each INCLUDE or EXCLUDE clause:
v The object that is to be used in the initial catalog lookup for each INCLUDE or
EXCLUDE clause. The search for objects can begin with databases, table
spaces, index spaces, tables, indexes, or other lists. You can explicitly specify
the names of these objects or, with the exception of other lists, use a pattern
matching expression. The resulting list contains only table spaces, only index
spaces, or both.
v The type of objects that the list contains, either TABLESPACES or
INDEXSPACES. You must explicitly specify the list type only when you specify a
database as the initial object by using the keyword DATABASE. Otherwise,
LISTDEF uses the default list type values shown in Table 24 on page 172. These
values depend on the type of object that you specified for the INCLUDE or
EXCLUDE clause.
Table 24. Default list type values that LISTDEF uses.
Specified object Default list type value
TABLESPACE TABLESPACES
TABLE TABLESPACES
INDEXSPACE INDEXSPACES
INDEX INDEXSPACES
| LIST Existing type value of the list
For example, the following INCLUDE clause specifies that table space
DBLT0301.TLLT031A is to be added to the LIST:
INCLUDE TABLESPACE DBLT0301.TLLT031A
The following example INCLUDE clause is similar to the preceding example, except
that it includes the INDEXSPACES keyword:
INCLUDE INDEXSPACES TABLESPACE DBLT0301.TLLT031A
In this example, the clause specifies that all index spaces over all tables in table
space DBLT0301.TLLT031A are to be added to the list.
Optionally, you can add related objects to the list by specifying keywords that
indicate a relationship, such as referentially related objects or auxiliary related
objects. Valid specifications include the following keywords:
v BASE (non-LOB objects)
v LOB (LOB objects)
v ALL (both BASE and LOB objects)
v TABLESPACES (related table spaces)
v INDEXSPACES (related index spaces)
| v RI (related by referential constraints, including informational referential
| constraints)
The preceding keywords perform two functions: they determine which objects are
related, and they then filter the contents of the list. The behavior of these keywords
varies depending on the type of object that you specify. For example, if your initial
object is a LOB object, the LOB keyword is ignored. If, however, the initial object is
not a LOB object, the LOB keyword determines which LOB objects are related, and
DB2 excludes non-LOB objects from the list. For more information about the
keywords that can be used to indicate relationships, see “Option descriptions” on
page 165.
DB2 processes each INCLUDE and EXCLUDE clause in the following order:
1. Perform the initial search for the object that is based on the specified
pattern-matching expression, including PARTLEVEL specification, if specified.
2. Add or remove related objects and filter the list elements based on the specified
list type, either TABLESPACES or INDEXSPACES (COPY YES or COPY NO).
3. Add or remove related objects depending on the presence or absence of the RI,
BASE, LOB, and ALL keywords.
For example, to generate a list of all table spaces in the ACCOUNT database but
exclude all LOB table spaces, you can specify the following LISTDEF statement:
LISTDEF ACCNT INCLUDE TABLESPACES DATABASE ACCOUNT BASE
In the preceding example, the name of the list is ACCNT. The TABLESPACES
keyword indicates that the list is to include table spaces that are associated with the
specified object. In this case, the table spaces to be included are those table
spaces in database ACCOUNT. Finally, the BASE keyword limits the objects to only
base table spaces.
If you want a list of only LOB index spaces in the ACCOUNT database, you can
specify the following LISTDEF statement:
LISTDEF ACLOBIX INCLUDE INDEXSPACES DATABASE ACCOUNT LOB
In the preceding example, the INDEXSPACES and LOB keywords indicate that the
INCLUDE clause is to add only LOB index spaces to the ACLOBIX list.
Although DB2 catalog and directory objects can appear in LISTDEF lists, these
objects might be invalid for a utility and result in an error message.
The following valid INCLUDE clauses contain catalog and directory objects:
v INCLUDE TABLESPACE DSNDB06.SYSDBASE
v INCLUDE TABLESPACES TABLESPACE DSNDB06.SYSDBASE
v INCLUDE INDEXSPACE DSNDB06.DSNDXX01
v INCLUDE INDEXSPACES INDEXSPACE DSNDB06.DSNDXX01
All LISTDEF lists automatically exclude work file databases, which consist of
DSNDB07 objects and user-defined work file objects, because DB2 utilities do not
process these objects.
Any data sets that are identified as part of a LISTDEF library must contain only
LISTDEF statements.
In the utility job that references those LISTDEF statements, include an OPTIONS
statement before the utility statement. In the OPTIONS statement, specify the DD
name of the LISTDEF library as LISTDEFDD ddname.
DB2 uses this LISTDEF library for any subsequent utility control statements, until
either the end of input or until you specify another OPTIONS LISTDEFDD ddname.
The default DD name for the LISTDEF definition library is SYSLISTD.
When DB2 encounters a reference to a list, DB2 first searches SYSIN. If DB2 does
not find the definition of the referenced list, DB2 searches the specified LISTDEF
library.
Any LISTDEF statement that is defined within the SYSIN DD statement overrides
another LISTDEF definition of the same name found in a LISTDEF library data set.
In general, utilities process the objects in the list in the order in which they are
specified. However, some utilities alter the list order for optimal processing as
follows:
v CHECK INDEX, REBUILD INDEX, and RUNSTATS INDEX process all index
spaces that are related to a given table space at one time, regardless of list
order.
v UNLOAD processes all specified partitions of a given table space at one time
regardless of list order.
| The LIST keyword is supported by the utilities that are listed in Table 26. When
| possible, utility processing optimizes the order of list processing as indicated in the
| table.
| Table 26. How specific utilities process lists
| CHECK INDEX: Items are grouped by related table space.
| COPY: Items are processed in the specified order on a single call to COPY;
| the PARALLEL keyword is supported.
| COPYTOCOPY: Items are processed in the specified order on a single call to
| COPYTOCOPY.
| MERGECOPY: Items are processed in the specified order.
| MODIFY RECOVERY: Items are processed in the specified order.
| MODIFY STATISTICS: Items are processed in the specified order.
| QUIESCE: All items are processed in the specified order on a single call to
| QUIESCE.
| REBUILD: Items are grouped by related table space.
| RECOVER: Items are processed in the specified order on a single call to RECOVER.
| REORG: Items are processed in the specified order.
| REPORT: Items are processed in the specified order.
| RUNSTATS INDEX: Items are grouped by related table space.
| RUNSTATS TABLESPACE: Items are processed in the specified order.
| UNLOAD: Items at the partition level are grouped by table space.
|
| Some utilities, such as COPY and RECOVER, can process a LIST without a
| specified object type. Object types are determined from the list contents. Other
| utilities, such as REPORT, RUNSTATS, and REORG INDEX, must know the object
| type that is to be processed before processing can begin. These utilities require that
| you specify an object type in addition to the LIST keyword (for example: REPORT
| RECOVERY TABLESPACE LIST, RUNSTATS INDEX LIST, and REORG INDEX
| LIST). See the syntax diagrams for an individual utility for details.
In some cases you can use traditional JCL DD statements with LISTDEF lists, but
this method is usually not practical unless you are processing small lists one object
at a time.
Together, the LISTDEF and TEMPLATE utilities enable faster development of utility
job streams and require fewer modifications when the underlying list of database
objects changes.
You can restart a LISTDEF utility job, but it starts from the beginning again. Use
caution when changing LISTDEF lists prior to a restart. When DB2 restarts list
processing, it uses a saved copy of the list. Modifying the LISTDEF list that is
referred to by the stopped utility has no effect. Only control statements that follow
the stopped utility are affected. For guidance in restarting online utilities, see
“Restarting an online utility” on page 42.
List processing limitations: Although DB2 does not limit the number of objects
that a list can contain, be aware that if your list is too large, the utility might fail with
an error or abend in either DB2 or another program. These errors or abends can be
caused by storage limitations, limitations of the operating system, or other
restrictions.
Assume that three table spaces qualify. Of these table spaces, two are partitioned
table spaces (PAY2.DEPTA and PAY2.DEPTF) that each have three partitions and
one is a nonpartitioned table space (PAY1.COMP). In this case, the EXAMPLE4 list
includes the following items:
v PAY2.DEPTA partition 1
v PAY2.DEPTA partition 2
v PAY2.DEPTA partition 3
v PAY2.DEPTF partition 1
v PAY2.DEPTF partition 2
v PAY2.DEPTF partition 3
v PAY1.COMP
Example 5: Defining a list of COPY YES indexes. The following control statement
defines a list (EXAMPLE5) that includes related index spaces from the referenced
list (EXAMPLE4) that have been defined or altered to COPY YES.
LISTDEF EXAMPLE5 INCLUDE LIST EXAMPLE4 INDEXSPACES COPY YES
Example 6: Defining a list that includes all table space partitions except for
one. The following control statement defines a list (EXAMPLE6) that includes all
partitions of table space X, except for partition 12. The INCLUDE clause adds an
entry for each partition, and the EXCLUDE clause removes the entry for partition
12.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X PARTLEVEL
EXCLUDE TABLESPACE X PARTLEVEL(12)
Note that if the PARTLEVEL keyword is not specified in both clauses, as in the
following two sample statements, the INCLUDE and EXCLUDE items do not
intersect. For example, in the following statement, table space X is included in
the list in its entirety, not at the partition level. Therefore, partition 12
cannot be excluded.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X
EXCLUDE TABLESPACE X PARTLEVEL(12)
In the following sample statement, the list includes only partition 12 of table space
X, so table space X in its entirety cannot be excluded.
LISTDEF EXAMPLE6 INCLUDE TABLESPACE X PARTLEVEL(12)
EXCLUDE TABLESPACE X
The LISTLIB DD statement (in the JCL for the QUIESCE job) defines a LISTDEF
library. When you define a LISTDEF library, you give a name to a group of data
sets that contain LISTDEF statements. In this case, the library is to include the
following data sets:
v The sequential data set JULTU103.TCASE.DATA2 (which includes the NAME1
list)
v The MEM1 member of the partitioned data set JULTU103.TCASE.DATA3 (which
includes the NAME2 list).
Defining such a library enables you to subsequently refer to a group of LISTDEF
statements with a single reference.
The OPTIONS utility control statement in this example specifies that the library that
is identified by the LISTLIB DD statement is to be used as the default LISTDEF
definition library. This declaration means that for any referenced lists, DB2 is to first
search SYSIN for the list definition. If DB2 does not find the list definition in SYSIN,
it is to search any data sets that are included in the LISTLIB LISTDEF library.
The last LISTDEF statement defines the NAME3 list. This list includes all objects in
the NAME1 and NAME2 lists, except for three table spaces (TSLT032B, TSLT031B,
TSLT032C). Because the NAME1 and NAME2 lists are not included in SYSIN, DB2
searches the default LISTDEF library (LISTLIB) to find them.
Finally, the QUIESCE utility control statement specifies this list of objects (NAME3)
for which DB2 is to establish a quiesce point.
Figure 32. Example of building a LISTDEF library and then running the QUIESCE utility (Part
1 of 2)
Figure 32. Example of building a LISTDEF library and then running the QUIESCE utility (Part
2 of 2)
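Because the figure is not reproduced here, the following sketch outlines the general
shape of such a job. The JCL details and the database qualifier on the excluded
table spaces are illustrative assumptions; only the data set names, list names, and
table space names come from the description above:
//STEP1    EXEC DSNUPROC,UID='JULTU103.QUIESCE',UTPROC='',SYSTEM='SSTR'
//LISTLIB  DD DSN=JULTU103.TCASE.DATA2,DISP=SHR
//         DD DSN=JULTU103.TCASE.DATA3(MEM1),DISP=SHR
//SYSIN    DD *
  OPTIONS LISTDEFDD LISTLIB
  LISTDEF NAME3 INCLUDE LIST NAME1
                INCLUDE LIST NAME2
                EXCLUDE TABLESPACE DBLT0301.TSLT032B
                EXCLUDE TABLESPACE DBLT0301.TSLT031B
                EXCLUDE TABLESPACE DBLT0301.TSLT032C
  QUIESCE LIST NAME3
/*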
Example 8: Defining a list that includes related objects. The following LISTDEF
control statement defines a list (EXAMPLE8) that includes table space
DBLT0101.TPLT011C and all objects that are referentially related to it. Only base
table spaces are included in the list. The subsequent RECOVER utility control
statement specifies that all objects in the EXAMPLE8 list are to be recovered.
//STEP2 EXEC DSNUPROC,UID=’JULTU101.RECOVE5’,
// UTPROC=’’,SYSTEM=’SSTR’
//SYSIN DD *
LISTDEF EXAMPLE8 INCLUDE TABLESPACE DBLT0101.TPLT011C RI BASE
RECOVER LIST EXAMPLE8
/*
For a diagram of LOAD syntax and a description of available options, see “Syntax
and options of the LOAD control statement” on page 185. For detailed guidance on
running this utility, see “Instructions for running LOAD” on page 221.
Output: LOAD DATA generates one or more of the following forms of output:
v A loaded table space or partition.
v A discard file of rejected records.
v A summary report of errors that were encountered during processing; this report
is generated only if you specify ENFORCE CONSTRAINTS or if the LOAD
involves unique indexes.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorizations:
v Ownership of the table
v LOAD privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
LOAD operates on a table space level, so you must have authority for all tables in
the table space when you perform LOAD.
To run LOAD STATISTICS, the privilege set must include STATS authority on the
database. To run LOAD STATISTICS REPORT YES, the privilege set must also
include the SELECT privilege on the tables required.
| If you use RACF access control with multilevel security and LOAD is to process a
| table space that contains a table that has multilevel security with row-level
| granularity, you must be identified to RACF and have an accessible valid security
| label. You must also meet the following authorization requirements:
| v To replace an entire table space with LOAD REPLACE, you must have the
| write-down privilege unless write-down rules are not in effect.
| v You must have the write-down privilege to specify values for the security label
| columns, unless write-down rules are not in effect. If these rules are in effect and
| you do not have write-down privilege, DB2 assigns your security label as the
| value for the security label column for the rows that you are loading.
| For more information about multilevel security and security labels, see Part 3 of
| DB2 Administration Guide.
Execution phases of LOAD: The LOAD utility operates in the phases that are
listed in Table 27 on page 184.
| A subtask is started at the beginning of the RELOAD phase to sort the keys.
The sort subtask initializes and waits for the main RELOAD phase to pass its
keys to SORT. RELOAD loads the data, extracts the keys, and passes them in
memory for sorting. At the end of the RELOAD phase, the last key is passed
to SORT, and record sorting completes.
Note that load partition parallelism starts subtasks. PREFORMAT for table
spaces occurs at the end of the RELOAD phase.
SORT Sorts temporary file records before creating indexes or validating referential
constraints, if indexes or foreign keys exist. The SORT phase is skipped if all
the following conditions apply for the data that is processed during the
RELOAD phase:
v Each table has no more than one key.
v All keys are the same type (index key only, indexed foreign key, or foreign
key only).
| v The data that is being loaded or reloaded is in key order (if a key exists). If
| the key is an index key only and the index is a data-partitioned secondary
| index, the data is considered to be in order if the data is grouped by
| partition and ordered within partition by key value. If the key in question is
| an indexed foreign key and the index is a data-partitioned secondary index,
| the data is never considered to be in order.
v The data that is being loaded or reloaded is grouped by table, and each
input record is loaded into one table only.
| SORT passes the sorted keys in memory to the BUILD phase, which builds
| the indexes.
BUILD Creates indexes from temporary file records for all indexes that are defined on
the loaded tables. Build also detects duplicate keys. PREFORMAT for indexes
occurs at the end of the BUILD phase.
SORTBLD Performs all activities that normally occur in both the SORT and BUILD
phases, if you specify a parallel index build.
INDEXVAL Corrects unique index violations from the information in SYSERR, if any exist.
| ENFORCE Checks referential constraints, except informational referential constraints, and
corrects violations. Information about violations of referential constraints is
stored in SYSERR.
DISCARD Copies records that cause errors from the input data set to the discard data
set.
REPORT Generates a summary report, if you specified ENFORCE CONSTRAINTS or if
load index validation is performed. The report is sent to SYSPRINT.
UTILTERM Performs cleanup.
Syntax diagram
[The LOAD syntax diagram appears here in railroad format and is not reproduced in this text
rendering. The portion shown on this page includes SORTKEYS integer (default 0), format-spec,
FLOAT(S390) or FLOAT(IEEE) (default S390), EBCDIC, ASCII, or UNICODE (default EBCDIC),
CCSID(integer,...), DISCARDS integer (default 0), SORTDEVT device-type, SORTNUM integer,
CONTINUEIF(start:end) = X'byte-string' or 'character-string', and the INTO-TABLE-spec.]
Notes:
1 The default is 0 if the input is on tape, a cursor, a PDS member, or for SYSREC DD *. For
sequential data sets, LOAD computes the default based upon the input data set size.
copy-spec:
[Syntax fragment not reproduced. It shows COPYDDN(ddname1,ddname2), with SYSCOPY as the
default primary copy, and RECOVERYDDN(ddname3,ddname4).]
statistics-spec:
[Syntax fragment not reproduced. It shows STATISTICS with the TABLE, SAMPLE, COLUMN,
UPDATE, HISTORY, and FORCEROLLUP options that are described below.]
correlation-stats-spec:
[Syntax fragment not reproduced.]
format-spec:
[Syntax fragment not reproduced. It shows FORMAT UNLOAD, FORMAT SQL/DS, or FORMAT
DELIMITED with the optional COLDEL, CHARDEL, and DECPT delimiters (defaults ',', '"', and '.').]
INTO-TABLE-spec:
For the syntax diagram and the option descriptions of the into-table specification,
see “INTO-TABLE-spec” on page 202.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
DATA Specifies that data is to be loaded. This keyword is optional and is used for
clarity only. You identify the data that you want to load by using table-name
on the INTO TABLE option. See “INTO-TABLE-spec” on page 202 for a
description of the statement.
INDDN ddname
Specifies the data definition (DD) statement or template that identifies the
| input data set for the partition. The record format for the input data set must
be fixed-length or variable-length. The data set must be readable by the
basic sequential access method (BSAM).
The ddname is the name of the input data set. The default is SYSREC.
INCURSOR cursor-name
Specifies the cursor for the input data set. You must declare the cursor
before it is used by the LOAD utility. Use the EXEC SQL utility control
statement to define the cursor. You cannot load data into the same table on
which you defined the cursor.
The specified cursor can be used with the DB2 UDB family cross-loader
function, which enables you to load data from any DRDA-compliant remote
server. For more information about using the cross-loader function, see
“Loading data by using the cross-loader function” on page 238.
cursor-name is the cursor name. Cursor names that are specified with the
LOAD utility cannot be longer than eight characters.
You cannot use the INCURSOR option with the following options:
v SHRLEVEL CHANGE
v NOSUBS
v FORMAT UNLOAD
v FORMAT SQL/DS
v CONTINUEIF
v WHEN
| In addition, you cannot specify field specifications or use discard processing
with the INCURSOR option.
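For example, a pair of control statements along these lines (the cursor, source table, and target
table names are illustrative) declares a cursor with the EXEC SQL utility control statement and then
names it on INCURSOR; the target table must be different from the table on which the cursor is
defined:
EXEC SQL
  DECLARE C1 CURSOR FOR SELECT * FROM DSN8810.EMP
ENDEXEC
LOAD DATA INCURSOR(C1) REPLACE
  INTO TABLE MYID.EMPCOPY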
PREFORMAT
Specifies that the remaining pages are preformatted up to the
high-allocated RBA in the table space and index spaces that are associated
with the table that is specified in table-name. The preformatting occurs after
the data has been loaded and the indexes are built.
PREFORMAT can operate on an entire table space and its index spaces, or
on a partition of a partitioned table space and on the corresponding
partitions of partitioned indexes, if any exist. Specifying LOAD
PREFORMAT (rather than PART integer PREFORMAT) tells LOAD to
serialize at the table space level, which can inhibit concurrent processing of
separate partitions. If you want to serialize at the partition level, specify
PART integer PREFORMAT. See “Option descriptions for INTO TABLE” on
page 204 for information about specifying PREFORMAT at the partition
level.
RESUME
Indicates whether records are to be loaded into an empty or non-empty
table space. For nonsegmented table spaces, space is not reused for rows
that have been marked as deleted or for rows of dropped tables.
Important: Specifying LOAD RESUME (rather than PART integer
RESUME) tells LOAD to serialize on the entire table space, which can
inhibit concurrent processing of separate partitions. If you want to process
other partitions concurrently, use “INTO-TABLE-spec” on page 202 to
specify PART integer RESUME.
NO
Loads records into an empty table space. If the table space is not
empty, and you have not used REPLACE, a message is issued and the
utility job step terminates with a job step condition code of 8.
For nonsegmented table spaces that contain deleted rows or rows of
dropped tables, using the REPLACE keyword provides increased
efficiency.
The default is NO, unless you override it with PART integer RESUME
YES.
YES
Loads records into a non-empty table space. If the table space is
empty, a warning message is issued, but the table space is loaded.
Loading begins at the current end of data in the table space. Space is
not reused for rows that are marked as deleted or for rows of dropped
tables.
CHANGE
Specifies that applications can concurrently read from and write to the
| table space or partition into which LOAD is loading data. If you specify
| SHRLEVEL CHANGE, you cannot specify the following parameters:
| INCURSOR, RESUME NO, REPLACE, KEEPDICTIONARY, LOG NO,
| ENFORCE NO, STATISTICS, COPYDDN, RECOVERYDDN,
| PREFORMAT, REUSE, or PART integer REPLACE.
For a partition-directed LOAD, if you specify SHRLEVEL CHANGE, only
RESUME YES can be specified or inherited from the LOAD statement.
LOAD SHRLEVEL CHANGE does not perform the SORT, BUILD,
SORTBLD, INDEXVAL, ENFORCE, or REPORT phases, and the
compatibility and concurrency considerations differ.
A LOAD SHRLEVEL CHANGE job functions like a mass INSERT.
Whereas a regular LOAD job drains the entire table space, LOAD
SHRLEVEL CHANGE uses claims when accessing an object, just as
an INSERT statement does.
Normally, a LOAD RESUME YES job loads the records at the end of
the already existing records. However, for a LOAD RESUME YES job
with the SHRLEVEL CHANGE option, the utility tries to insert the new
records in available free space as close to the clustering order as
possible. This LOAD job does not create any additional free pages. If
you insert a lot of records, these records are likely to be stored out of
clustering order. In this case, you should run the REORG TABLESPACE
utility after loading the records.
Recommendation: If you have loaded a lot of records, run RUNSTATS
SHRLEVEL CHANGE UPDATE SPACE and then a conditional REORG.
Log records that DB2 creates during LOAD SHRLEVEL CHANGE can
be used by DB2 DataPropagator, if the tables that are being loaded are
defined with DATA CAPTURE CHANGES.
Note that before and after row triggers are activated for SHRLEVEL
CHANGE but not for SHRLEVEL NONE. Statement triggers for each
row are also activated for SHRLEVEL CHANGE but not for SHRLEVEL
NONE.
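For example, a control statement along these lines (the table name is illustrative) adds records
while applications continue to read from and write to the table space:
LOAD DATA RESUME YES SHRLEVEL CHANGE
  INTO TABLE DSN8810.DEPT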
REPLACE
Indicates whether the table space and all its indexes need to be reset to
empty before records are loaded. With this option, the newly loaded rows
replace all existing rows of all tables in the table space, not just those of
the table that you are loading. For DB2 STOGROUP-defined data sets, the
data set is deleted and redefined with this option, unless you also specified
the REUSE option. You must have LOAD authority for all tables in the table
space where you perform LOAD REPLACE. If you attempt a LOAD
REPLACE without this authority, you get an error message.
You cannot use REPLACE with the PART integer REPLACE option of INTO
TABLE; you must either replace an entire table space by using the
REPLACE option or replace a single partition by using the PART integer
REPLACE option of INTO TABLE.
Specifying LOAD REPLACE (rather than PART integer REPLACE) tells
LOAD to serialize at the table space level. If you want to serialize at the
partition level, specify PART integer REPLACE. See the information about
specifying REPLACE at the partition level under the keyword descriptions
for INTO TABLE.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the primary (ddname1) and backup
(ddname2) copy data sets for the image copy.
ddname is the DD name.
The default is SYSCOPY for the primary copy. No default exists for the
backup copy.
The COPYDDN keyword can be specified only with REPLACE. A full image
copy data set (SHRLEVEL REFERENCE) is created for the table or
partitions that are specified when LOAD executes. The table space or
partition for which an image copy is produced is not placed in
COPY-pending status.
Image copies that are taken during LOAD REPLACE are not recommended
for use with RECOVER TOCOPY because these image copies might
contain unique index violations or referential constraint violations.
Using COPYDDN when loading a table with LOB columns does not create
a copy of any index or LOB table space. You must perform these tasks
separately.
The COPYDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 575.
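For example, a control statement along these lines (the ddnames and table name are illustrative,
and the corresponding //SYSCOPY and //SYSCOPY2 DD statements or TEMPLATE definitions must be
supplied in the job) replaces the data without logging and creates primary and backup inline image
copies:
LOAD DATA REPLACE LOG NO
  COPYDDN(SYSCOPY,SYSCOPY2)
  INTO TABLE DSN8810.DEPT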
RECOVERYDDN ddname3,ddname4
Specifies the DD statements for the primary (ddname3) and backup
(ddname4) copy data sets for the image copy at the recovery site.
ddname is the DD name.
You cannot have duplicate image copy data sets. The same rules apply for
RECOVERYDDN and COPYDDN.
The RECOVERYDDN keyword specifies either a DD name or a TEMPLATE
name specification from a previous TEMPLATE control statement. If utility
processing detects that the specified name is both a DD name in the
current job step and a TEMPLATE name, the utility uses the DD name. For
more information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 575.
STATISTICS
Specifies the gathering of statistics for a table space, index, or both; the
statistics are stored in the DB2 catalog.
If you specify the STATISTICS keyword with no other statistics-spec or
correlation-stats-spec options, DB2 gathers only table space statistics.
Statistics are collected on a base table space, but not on a LOB table
space.
| Restriction: If you specify STATISTICS for encrypted data, DB2 might not
| provide useful statistics on this data.
TABLE
Specifies the table for which column information is to be gathered. All tables
must belong to the table space that is specified in the TABLESPACE option.
(ALL)
Specifies that information is to be gathered for all columns of all tables
in the table space. The default is ALL.
(table-name)
Specifies the tables for which column information is to be gathered. If
you omit the qualifier, the user identifier for the utility job is used.
Enclose the table name in quotation marks if the name contains a
blank.
If you specify more than one table, you must repeat the TABLE option.
SAMPLE integer
Indicates the percentage of rows that LOAD is to sample when collecting
non-indexed column statistics. You can specify any value from 1 through
100. The default is 25.
COLUMN
Specifies the columns for which column information is to be gathered.
You can specify this option only if you specify the particular tables for which
statistics are to be gathered (TABLE (table-name)). If you specify particular
tables and do not specify the COLUMN option, the default, COLUMN(ALL),
is used. If you do not specify a particular table when using the TABLE
option, you cannot specify the COLUMN option; however, COLUMN(ALL) is
assumed.
(ALL)
Specifies that statistics are to be gathered for all columns in the table.
The default is ALL.
(column-name, ...)
Specifies the columns for which statistics are to be gathered.
You can specify a list of column names; the maximum is 10. If you
specify more than one column, separate each name with a comma.
INDEX
Specifies indexes for which information is to be gathered. Column
information is gathered for the first column of the index. All the indexes
must be associated with the same table space, which must be the table
space that is specified in the TABLESPACE option.
(ALL)
Specifies that the column information is to be gathered for all indexes
that are defined on tables in the table space.
(index-name)
Specifies the indexes for which information is to be gathered. Enclose
the index name in quotation marks if the name contains a blank.
KEYCARD
Requests the collection of all distinct values in all of the 1 to n key column
combinations for the specified indexes. n is the number of columns in the
index.
FREQVAL
Controls the collection of frequent-value statistics. If you specify FREQVAL,
it must be followed by two additional keywords:
NUMCOLS
Indicates the number of key columns that are to be concatenated
together when collecting frequent values from the specified index.
| SPACE
| Indicates that only space-related catalog statistics are to be
| updated in catalog history tables.
| NONE Indicates that no catalog history tables are to be updated with the
| collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even if some parts are empty. This keyword
enables the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some parts might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all parts.
If data is not available for all parts, DSNU623I message is issued if
the installation value for STATISTICS ROLLUP on panel DSNTIPO
is set to NO.
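For example, a control statement along these lines (the table name is illustrative) replaces the data
and collects table space, table, and index statistics in the same job step, sampling half of the rows
for non-indexed columns:
LOAD DATA REPLACE
  STATISTICS TABLE(ALL) SAMPLE 50 INDEX(ALL)
  INTO TABLE DSN8810.DEPT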
KEEPDICTIONARY
Prevents the LOAD utility from building a new compression dictionary.
LOAD retains the current compression dictionary and uses it for
compressing the input data. This option eliminates the cost that is
associated with building a new dictionary.
This keyword is valid only if the table space that is being loaded has the
COMPRESS YES attribute.
If the table space or partition is empty, DB2 performs one of these actions:
v DB2 builds a dictionary if a compression dictionary does not exist.
v DB2 keeps the dictionary if a compression dictionary exists.
If the table space or partition is not empty and RESUME YES is specified,
DB2 performs one of these actions:
v DB2 does not build a dictionary if a compression dictionary does not
exist.
v DB2 keeps the dictionary if a compression dictionary exists.
If a data set has multiple extents, the extents are not released if you specify
the REUSE parameter.
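For example, a control statement along these lines (the table name is illustrative) replaces the data
in a compressed table space but keeps the existing compression dictionary rather than building a
new one:
LOAD DATA REPLACE KEEPDICTIONARY
  INTO TABLE DSN8810.DEPT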
LOG Indicates whether logging is to occur during the RELOAD phase of the load
process.
YES
Specifies normal logging during the load process. All records that are
loaded are logged. The default is YES.
NO
Specifies no logging of data during the load process. The NO option
sets the COPY-pending restriction against the table space or partition
that the loaded table resides in. No table or partition in the table space
can be updated by SQL until the restriction is removed. For ways to
remove the restriction, see “Resetting COPY-pending status” on page
255.
| If you load a single partition of a partitioned table space and the table
| space has a secondary index, some logging might occur during the
| build phase as DB2 logs any changes to the index structure. This
| logging allows recoverability of the secondary index in case an abend
| occurs, and it also allows concurrency.
A LOB table space affects logging while DB2 loads a LOB column
regardless of whether the LOB table space was defined with LOG YES
or LOG NO. See Table 40 on page 249 for more information.
NOCOPYPEND
Specifies that LOAD is not to set the table space in the
COPY-pending status, even though LOG NO was specified. A
NOCOPYPEND specification does not turn on or change any
informational COPY-pending (ICOPY) status for indexes. A
NOCOPYPEND specification will not turn off any COPY-pending
status that was set prior to the LOAD. Normal completion of a
LOAD LOG NO NOCOPYPEND job returns a 0 code if no other
errors or warnings exist.
DB2 ignores a NOCOPYPEND specification if you also specified
COPYDDN to make a local-site inline image copy during the LOAD.
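For example, a control statement along these lines (the table name is illustrative) avoids logging
during the RELOAD phase without leaving the table space in COPY-pending status; take an image copy
afterward if the data must remain recoverable:
LOAD DATA RESUME YES LOG NO NOCOPYPEND
  INTO TABLE DSN8810.DEPT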
SORTKEYS integer
Specifies that index keys are to be sorted in parallel during the SORTBLD
phase to improve performance. Optionally, you can specify a value for
integer to provide an estimate of the number of index keys that are to be
sorted. Integer must be a positive integer between 0 and 2 147 483 647.
| The default is 0 if the input is on tape, a cursor, a PDS member, or for
| SYSREC DD *. For sequential data sets, LOAD computes an estimate
| based upon the input data set size.
For more information about sorting keys, see “Improved performance with
SORTKEYS” on page 241 and “Building indexes in parallel for LOAD” on
page 245.
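For example, if a sequential input data set contains roughly one million records and each record
contributes one index key and two foreign keys, a control statement along these lines (the estimate
and table name are illustrative) passes an estimate of three million keys to the sort:
LOAD DATA REPLACE SORTKEYS 3000000
  INTO TABLE DSN8810.EMP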
FORMAT
Identifies the format of the input record. If you use FORMAT UNLOAD or
FORMAT SQL/DS, it uniquely determines the format of the input, and no
field specifications are allowed in an INTO TABLE option.
If you omit FORMAT, the format of the input data is determined by the rules
for field specifications that are described in “Option descriptions for INTO
| TABLE” on page 204. If you specify FORMAT DELIMITED, the format of the
| input data is determined by the rules that are described in Appendix F,
| “Delimited file format,” on page 875.
UNLOAD
Specifies that the input record format is compatible with the DB2
unload format. (The DB2 unload format is the result of REORG with
the UNLOAD ONLY option.)
Input records that were unloaded by the REORG utility are loaded
into the tables from which they were unloaded, if an INTO TABLE
option specifies each table. Do not add columns or change column
definitions of tables between the time you run REORG UNLOAD
ONLY and LOAD FORMAT UNLOAD.
Any WHEN clause on the LOAD FORMAT UNLOAD statement is
ignored; DB2 reloads the records into the same tables from which
they were unloaded. Not allowing a WHEN clause with the
FORMAT UNLOAD clause ensures that the input records are
loaded into the proper tables. Input records that cannot be loaded
are discarded.
If the DCB RECFM parameter is specified on the DD statement for
the input data set, and the data set format has not been modified
since the REORG UNLOAD (ONLY) operation, the record format
must be variable (RECFM=V).
SQL/DS
Specifies that the input record format is compatible with the
SQL/DS unload format. The data type of a column in the table that
is to be loaded must be the same as the data type of the
corresponding column in the SQL/DS table.
If the SQL/DS input contains rows for more than one table, the
WHEN clause of the INTO TABLE option indicates which input
records are to be loaded into which DB2 table.
| COLDEL coldel
| Specifies the column delimiter that is used in the input file. The
| default is a comma (,). For ASCII and UTF-8 data this is X'2C',
| and for EBCDIC data it is X'6B'.
| CHARDEL chardel
| Specifies the character string delimiter that is used in the input
| file. The default is a double quotation mark (“). For ASCII and
| UTF-8 data this is X'22', and for EBCDIC data it is X'3F'.
| To delimit character strings that contain the character string
| delimiter, repeat the character string delimiter where it is used
| in the character string. LOAD interprets any pair of character
| delimiters that are found between the enclosing character
| delimiters as a single character. For example, the phrase “what
| a ““nice warm”” day” is interpreted as what a “nice warm”
| day. The LOAD utility recognizes these character delimiter pairs
| for only CHAR, VARCHAR, and CLOB fields.
| Character string delimiters are required only when the string
| contains the CHARDEL character. However, you can put the
| character string delimiters around other character strings. Data
| that has been unloaded in delimited format by the UNLOAD
| utility includes character string delimiters around all character
| strings.
| DECPT decpt
| Specifies the decimal point character that is used in the input
| file. The default is a period (.).
| The default decimal point character is a period in a delimited
| file, X'2E' in an ASCII or Unicode UTF-8 file.
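For example, a control statement along these lines (the delimiter and table name are illustrative)
loads a delimited input file that uses a semicolon as the column delimiter and keeps the default
character string delimiter and decimal point character:
LOAD DATA FORMAT DELIMITED COLDEL ';'
  INTO TABLE DSN8810.DEPT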
FLOAT
Specifies that LOAD is to expect the designated format for floating point
numbers.
(S390)
Specifies that LOAD is to expect that floating point numbers are
provided in System/390 hexadecimal floating point (HFP) format. (S390)
is the format that DB2 stores floating point numbers in. It is also the
default if you do not explicitly specify the FLOAT keyword.
(IEEE)
Specifies that LOAD is to expect that floating point numbers are
provided in IEEE binary floating point (BFP) format.
When you specify FLOAT(IEEE), DB2 converts the BFP data to HFP
format as the data is being loaded into the DB2 table. If a conversion
error occurs while DB2 is converting from BFP to HFP, DB2 places the
record in the discard file.
FLOAT(IEEE) is mutually exclusive with any specification of the
FORMAT keyword. If you specify both FLOAT(IEEE) and FORMAT, DB2
issues message DSNU070I.
BFP format is sometimes called IEEE floating point.
EBCDIC
Specifies that the input data file is EBCDIC. The default is EBCDIC.
ASCII Specifies that the input data file is ASCII. Numeric, date, time, and
timestamp internal formats are not affected by the ASCII option.
UNICODE
Specifies that the input data file is Unicode. The UNICODE option does not
affect the numeric, date, time, and timestamp formats.
CCSID
Specifies up to three coded character set identifiers (CCSIDs) for the input
file. The first value specifies the CCSID for SBCS data that is found in the
input file, the second value specifies the CCSID for mixed DBCS data, and
the third value specifies the CCSID for DBCS data. If any of these values is
specified as 0 or omitted, the CCSID of the corresponding data type in the
input file is assumed to be the same as the installation default CCSID. If the
input data is EBCDIC, the omitted CCSIDs are assumed to be the EBCDIC
CCSIDs that are specified at installation, and if the input data is ASCII, the
omitted CCSIDs are assumed to be the ASCII CCSIDs that are specified at
installation. If the CCSIDs of the input data file do not match the CCSIDs of
the table that is being loaded, the input data is converted to the table
CCSIDs before being loaded.
integer is any valid CCSID specification.
If the input data is Unicode, the default CCSID values are the Unicode
CCSIDs that are specified at system installation.
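For example, a control statement along these lines (the CCSID values and table name are
illustrative) identifies the input file as ASCII with an SBCS CCSID of 367; DB2 converts the input
data to the table CCSIDs as it is loaded:
LOAD DATA RESUME YES ASCII CCSID(367,0,0)
  INTO TABLE DSN8810.DEPT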
NOSUBS
Specifies that LOAD is not to accept substitution characters in a string.
A substitution character is sometimes placed in a string when that string is being converted
from ASCII to EBCDIC, or when the string is being converted from one
CCSID to another. For example, this substitution occurs when a character
(sometimes referred to as a code point) that exists in the source CCSID
(code page) does not exist in the target CCSID (code page).
When you specify the NOSUBS option and the LOAD utility determines that
a substitution character has been placed in a string as a result of a
conversion, it performs one of the following actions:
v If discard processing is active: DB2 issues message DSNU310I and
places the record in the discard file.
v If discard processing is not active: DB2 issues message DSNU334I,
and the utility abnormally terminates.
ENFORCE
| Specifies whether LOAD is to enforce check constraints and referential
| constraints, except informational referential constraints, which are not
| enforced.
| CONSTRAINTS
| Indicates that constraints are to be enforced. If LOAD detects a
| violation, it deletes the errant row and issues a message to identify
| it. If you specify this option and referential constraints exist, sort
| input and sort output data sets must be defined.
| The default is CONSTRAINTS.
| NO Indicates that constraints are not to be enforced. This option places
| the target table space in the CHECK-pending status if at least one
| referential constraint or check constraint is defined for the table.
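For example, a control statement along these lines (the table name is illustrative) loads the data
without checking constraints, which places the table space in CHECK-pending status if any
referential or check constraints are defined; the CHECK DATA utility can then be run to remove the
restriction:
LOAD DATA REPLACE ENFORCE NO
  INTO TABLE DSN8810.PROJ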
ERRDDN ddname
Specifies the DD statement for a work data set that is to be used during
error processing. The ddname is the name of the data set; the default is SYSERR.
DISCARDS integer
Specifies the maximum number of source records that can be written to the
discard data set. DISCARDS 0 specifies that you do not want to set a maximum
value; the entire input data set can be discarded.
The default is DISCARDS 0.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be dynamically
allocated by DFSORT. You can specify any device type that is acceptable to
the DYNALLOC parameter of the SORT or OPTION options for DFSORT.
If you omit SORTDEVT and a sort is required, you must provide the DD
statements that the sort application program needs for the temporary data
sets.
A TEMPLATE specification does not dynamically allocate sort work data
sets. The SORTDEVT keyword controls dynamic allocation of these data
sets.
SORTNUM integer
Indicates the number of temporary data sets that are to be dynamically
allocated by the sort application program.
If you omit SORTDEVT, SORTNUM is ignored. If you use SORTDEVT and
omit SORTNUM, no value is passed to DFSORT. In this case, DFSORT
uses its own default.
CONTINUEIF
Indicates that you want to be able to treat each input record as a portion of
a larger record. After CONTINUEIF, write a condition in one of the following
forms:
(start:end) = X’byte-string’
(start:end) = ’character-string’
If the condition is true in any record, the next record is concatenated with it
before loading takes place. You can concatenate any number of records
into a larger record, up to a maximum size of 32767 bytes.
Data in the input record can be in ASCII or Unicode format, but the utility
control statement always interprets character constants as EBCDIC. To use
CONTINUEIF with the ASCII or UNICODE option, you must code the
condition by using the hexadecimal form, not the character-string form. For
example, use (1:1)=X’31’ rather than (1:1)=’1’. As an alternative, you can
code the control statements in UTF-8.
(start:end)
Specifies column numbers in the input record; the first column of the
record is column 1. The two numbers tell the starting and ending
columns of a continuation field in the input record.
Other field position specifications (such as those for WHEN, POSITION,
or NULLIF) refer to the field position within the final assembled load
record, not within the input record.
The continuation field is removed from the input record and is not part
of the final load record.
If you omit :end, DB2 assumes that the length of the continuation field
is the length of the byte string or character string. If you use :end, and
the length of the resulting continuation field is not the same as the
length of the byte string or character string, the shorter string is padded.
Character strings are padded with blanks. Hexadecimal strings are
padded with zeros.
X'byte-string'
Specifies a string of hexadecimal characters. This byte-string value in
the continuation field indicates that the next input record is a
continuation of the current load record. Records with this byte-string
value are concatenated until the value in the continuation field changes.
For example, the following CONTINUEIF specification indicates that for
any input records that have a value of X'FF' in column 72, LOAD is to
concatenate that record with the next input record.
CONTINUEIF (72) = X’FF’
'character-string'
Specifies a string of characters that has the same effect as
X'byte-string'. For example, the following CONTINUEIF specification
indicates that for any input records that have the string CC in columns
99 and 100, LOAD is to concatenate that record with the next input
record.
CONTINUEIF (99:100) = ’CC’
INTO-TABLE-spec
More than one table or partition for each table space can be loaded with a single
invocation of the LOAD utility. At least one INTO TABLE statement is required for
each table that is to be loaded. Each INTO TABLE statement:
v Identifies the table that is to be loaded
v Describes fields within the input record
v Defines the format of the input data set
All tables that are specified by INTO TABLE statements must belong to the same
table space.
INTO-TABLE-spec:
[The INTO-TABLE-spec syntax diagram appears here in railroad format and is not reproduced in this
text rendering. It shows INTO TABLE table-name with the optional IGNOREFIELDS YES or NO,
PART integer with a resume-spec, PREFORMAT, INDDN ddname (default SYSREC), DISCARDDN ddname,
INCURSOR cursor-name, a WHEN clause with SQL/DS='table-name' or a field selection criterion, and a
parenthesized list of field specifications.]
resume-spec:
[Syntax fragment not reproduced. It shows RESUME NO (default), optionally followed by REPLACE,
KEEPDICTIONARY, REUSE, and a copy-spec, or RESUME YES.]
field selection criterion:
[Syntax fragment not reproduced. It shows field-name or (start:end) compared with X'byte-string',
'character-string', G'graphic-string', or N'graphic-string'.]
field specification:
[Syntax fragment not reproduced. It shows field-name, an optional POSITION(start:end), a data type
specification (CHAR, VARCHAR, GRAPHIC, GRAPHIC EXTERNAL, VARGRAPHIC, SMALLINT, INTEGER, INTEGER
EXTERNAL, DECIMAL PACKED, DECIMAL ZONED, DECIMAL EXTERNAL, FLOAT, FLOAT EXTERNAL, DATE EXTERNAL,
TIME EXTERNAL, TIMESTAMP EXTERNAL, ROWID, BLOB, CLOB, or DBCLOB, with the BIT, MIXED, STRIP, and
TRUNCATE options where they apply), and optional NULLIF or DEFAULTIF field selection criteria.]
PREFORMAT
Specifies that the remaining pages are to be preformatted up to the
high-allocated RBA in the partition and its corresponding partitioning index
space. The preformatting occurs after the data is loaded and the indexes
are built.
RESUME
Specifies whether records are to be loaded into an empty or non-empty
partition. For nonsegmented table spaces, space is not reused for rows that
have been marked as deleted or for rows of dropped tables. If
the RESUME option is specified at the table space level, the RESUME
option is not allowed in the PART clause.
If you want the RESUME option to apply to the entire table space, use the
LOAD RESUME option. If you want the RESUME option to apply to a
particular partition, specify it by using PART integer RESUME.
NO
Loads records into an empty partition. If the partition is not empty, and
you have not used REPLACE, a message is issued, and the utility job
step terminates with a job step condition code of 8.
For nonsegmented table spaces that contain deleted rows or rows of
dropped tables, using the REPLACE keyword provides increased
efficiency.
The default is NO.
YES
Loads records into a non-empty partition. If the partition is empty, a
warning message is issued, but the partition is loaded.
REPLACE
Indicates that you want to replace only the contents of the partition that is
cited by the PART option, rather than the entire table space.
You cannot use LOAD REPLACE with the PART integer REPLACE option
of INTO TABLE. If you specify the REPLACE option, you must either
replace an entire table space, using LOAD REPLACE, or a single partition,
using the PART integer REPLACE option of INTO TABLE. You can,
however, use PART integer REPLACE with LOAD RESUME YES.
REUSE
Specifies, when used with the REPLACE option, that LOAD should logically
reset and reuse DB2-managed data sets without deleting and redefining
them. If you do not specify REUSE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If you specify REUSE with REPLACE on the PART specification (and not
for LOAD at the table space level), only the specified partitions are logically
reset. If you specify REUSE for the table space and REPLACE for the
partition, data sets for the replaced parts are logically reset.
KEEPDICTIONARY
Specifies that the LOAD utility is not to build a new dictionary. LOAD retains
the current dictionary and uses it for compressing the input data. This
option eliminates the cost that is associated with building a new dictionary.
This keyword is valid only if a dictionary exists and the partition that is
being loaded has the COMPRESS YES attribute.
If the partition has the COMPRESS YES attribute, but no dictionary exists,
one is built and an error message is issued.
INDDN ddname
Specifies the data definition (DD) statement or template that identifies the
input data set for the partition. The record format for the input data set must
be fixed or variable. The data set must be readable by the basic sequential
access method (BSAM).
The ddname is the name of the input data set. The default is SYSREC.
INDDN can be a template name.
If you specify INDDN, with or without DISCARDDN, in one INTO TABLE
PART specification and you supply more than one INTO TABLE PART
clause, you must specify INDDN in all INTO TABLE PART specifications.
Specifying INDDN at the partition level and supplying multiple PART
clauses, each with their own INDDN, enables load partition parallelism,
which can significantly improve performance. Loading all partitions in a
single job with load partition parallelism is recommended instead of
concurrent separate jobs whenever one or more nonpartitioned secondary
indexes are on the table space.
The field specifications apply separately to each input file. Therefore, if
multiple INTO TABLE PART INDDN clauses are used, field specifications
are required on each one.
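For example, a control statement along these lines (the ddnames, table name, and field layout are
illustrative, and a //PART1IN and //PART2IN DD statement or TEMPLATE must be supplied for each
partition) provides a separate input data set for each partition and repeats the field
specifications on each clause, which allows LOAD to use partition parallelism within a single job:
LOAD DATA
  INTO TABLE MYDBASE.MYTABLE PART 1 REPLACE INDDN PART1IN
    (ITEMNO  POSITION(1:4)  INTEGER EXTERNAL(4),
     ITEMDSC POSITION(6:35) CHAR(30))
  INTO TABLE MYDBASE.MYTABLE PART 2 REPLACE INDDN PART2IN
    (ITEMNO  POSITION(1:4)  INTEGER EXTERNAL(4),
     ITEMDSC POSITION(6:35) CHAR(30))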
DISCARDDN ddname
Specifies the DD statement for a discard data set for the partition. The
discard data set holds copies of records that are not loaded (for example, if
they contain conversion errors). The discard data set also holds copies of
records that were loaded and then removed (due to unique index errors, or
referential or check constraint violations).
The utility flags input records for discarding during the RELOAD, INDEXVAL, and
ENFORCE phases. However, the utility does not write the discard data set
until the DISCARD phase, when it copies the flagged records from the
input data set to the discard data set.
The discard data set must be a sequential data set, and it must be
write-accessible by BSAM, with the same record format, record length, and
block size as the input data set.
The ddname is the name of the discard data set. DISCARDDN can be a
template name.
If you omit the DISCARDDN option, LOAD does not save discarded
records.
INCURSOR cursor-name
Specifies the cursor for the input data set. You must declare the cursor
before it is used by the LOAD utility. Use the EXEC SQL utility control
statement to define the cursor. You cannot load data into the same table on
which you defined the cursor.
The specified cursor can be used as part of the DB2 UDB family cross
loader function, which enables you to load data from any DRDA-compliant
remote server. For more information about using the cross loader function,
see “Loading data by using the cross-loader function” on page 238.
cursor-name is the cursor name. Cursor names that are specified with the
LOAD utility cannot be longer than eight characters.
You cannot use the INCURSOR option with the following options:
v SHRLEVEL CHANGE
v NOSUBS
v FORMAT UNLOAD
v FORMAT SQL/DS
v CONTINUEIF
v WHEN.
In addition, you cannot specify field specifications with the INCURSOR
option.
WHEN
Indicates which records in the input data set are to be loaded. If no WHEN
clause is specified (and if FORMAT UNLOAD was not used in the LOAD
statement), all records in the input data set are loaded into the specified
tables or partitions. (Data that is beyond the range of the specified partition
is not loaded.)
The option following WHEN describes a condition; input records that satisfy
the condition are loaded. Input records that do not satisfy any WHEN
clause of any INTO TABLE statement are written to the discard data set, if
one is being used.
Data in the input record can be in ASCII or Unicode, but LOAD always
interprets character constants that are specified in the utility control
statement as EBCDIC. To use WHEN where the ASCII or UNICODE option
is specified, code the condition by using the hexadecimal form, not the
character string form. For example, use (1:1)=X’31’ rather than (1:1)=’1’.
As an alternative, you can code the statement in UTF-8.
SQL/DS='table-name'
Is valid only when the FORMAT SQL/DS option is used on the LOAD
statement.
table-name is the name of a table that has been unloaded into the
unload data set. The table name after INTO TABLE tells which DB2
table the SQL/DS table is loaded into. Enclose the table name in
quotation marks if the name contains a blank.
If no WHEN clause is specified, input records from every SQL/DS table
are loaded into the table that is specified after INTO TABLE.
field-selection-criterion
Describes a field and a character constant. Only those records in which
the field contains the specified constant are to be loaded into the table
that is specified after INTO TABLE.
A field in a selection criterion must:
v Contain a character or graphic string. No data type conversions are
performed when the contents of the field in the input record are
compared to a string constant.
v Start at the same byte offset in each assembled input record. If any
record contains varying-length strings (stored with length fields) that
precede the selection field, those strings must be padded so that the
start of the selection field is always at the same offset.
The field and the constant do not need to be the same length. If they
are not, the shorter of the two is padded before a comparison is made.
Character and graphic strings are padded with blanks. Hexadecimal
strings are padded with zeros.
field-name
Specifies the name of a field that is defined by a field-specification.
If field-name is used, the start and end positions of the field are
given by the POSITION option of the field specification.
(start:end)
Identifies column numbers in the assembled load record; the first
column of the record is column 1. The two numbers indicate the
starting and ending columns of a selection field in the load record.
If :end is not used, the field is assumed to have the same length as
the constant.
X'byte-string'
Identifies the constant as a string of hexadecimal characters. For
example, the following WHEN clause specifies that a record is to be
loaded if it has the value X'FFFF' in columns 33 through 34.
WHEN (33:34) = X’FFFF’
'character-string'
Identifies the constant as a string of characters. For example, the
following WHEN clause specifies that a record is to be loaded if the
field DEPTNO has the value D11.
WHEN DEPTNO = ’D11’
G'graphic-string'
Identifies the constant as a string of double-byte characters. For
example, the following WHEN clause specifies that a record is to be
loaded if it has the specified value in columns 33 through 36.
WHEN (33:36) = G’<**>’
v LOB fields are varying length, and require a valid 4-byte binary length
field preceding the data; no intervening gaps are allowed between them
and the LOB fields that follow.
v Numeric data is assumed to be in the appropriate internal DB2 number
representation.
v The NULLIF or DEFAULTIF options cannot be used.
If any field specification is used for an input table, a field specification must
exist for each field of the table that does not have a default value. Any field
in the table with no corresponding field specification is loaded with its
default value.
If any column in the output table does not have a field specification and is
defined as NOT NULL, with no default, the utility job step is terminated.
Identity columns can appear in the field specification only if they were
defined with the GENERATED BY DEFAULT attribute.
field-name
Specifies the name of a field, which can be a name of your choice. If the
field is to be loaded, the name must be the name of a column in the table
that is named after INTO TABLE unless IGNOREFIELDS is specified. You
can use the field name as a vehicle to specify the range of incoming data.
See “Example 4: Loading data of different data types” on page 261 for an
example of loading selected records into an empty table space.
The starting location of the field is given by the POSITION option. If
POSITION is not used, the starting location is one column after the end of
the previous field.
LOAD determines the length of the field in one of the following ways, in the
order listed:
1. If the field has data type VARCHAR, VARGRAPHIC, or ROWID, the
length is assumed to be contained in a 2-byte binary field that precedes
the data. For VARCHAR fields, the length is in bytes; for VARGRAPHIC
fields, the length field identifies the number of double-byte characters.
If the field has data type CLOB, BLOB, or DBCLOB, the length is
assumed to be contained in a 4-byte binary field that precedes the data.
For BLOB and CLOB fields, the length is in bytes; for DBCLOB fields,
the length field identifies the number of double-byte characters.
2. If :end is used in the POSITION option, the length is calculated from
start and end. In that case, any length attribute after the CHAR,
GRAPHIC, INTEGER, DECIMAL, or FLOAT specifications is ignored.
3. The length attribute on the CHAR, GRAPHIC, INTEGER, DECIMAL, or
FLOAT specifications is used as the length.
4. The length is taken from the DB2 field description in the table definition,
or it is assigned a default value according to the data type. For DATE
and TIME fields, the length is defined during installation. For
variable-length fields, the length is defined from the column in the DB2
table definition, excluding the null indicator byte, if it is present. Table 28
shows the default length, in bytes, for each data type.
Table 28. Default length of each data type (in bytes)
Data type Default length in bytes
BLOB Varying
CHARACTER Length that is used in column definition
CLOB Varying
DATE 10 (or installation default)
DBCLOB Varying
DECIMAL EXTERNAL Decimal precision for output columns that are decimal,
otherwise the length that is used in column definition
DECIMAL PACKED Length that is used in column definition
DECIMAL ZONED Decimal precision for output columns that are decimal,
otherwise the length that is used in column definition
FLOAT (single precision) 4
FLOAT (double precision) 8
GRAPHIC 2 multiplied by (length that is used in column definition)
INTEGER 4
MIXED Mixed DBCS data
ROWID Varying
SMALLINT 2
TIME 8 (or installation default)
TIMESTAMP 26
VARCHAR Varying
VARGRAPHIC Varying
If a data type is not given for a field, its data type is assumed to be the
same as that of the column into which it is loaded, as given in the DB2
table definition.
POSITION(start:end)
Indicates where a field is in the assembled load record.
start and end are the locations of the first and last columns of the field; the
first column of the record is column 1. The option can be omitted.
Column locations can be specified as:
v An integer n, meaning an actual column number
v *, meaning one column after the end of the previous field
v *+n, where n is an integer, meaning n columns after the location that is
specified by *
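For example, a field list along these lines (the field names and layout are illustrative) combines
the three forms: an explicit column range, the column that immediately follows the previous field,
and a column that skips two bytes past the previous field:
(EMPNO    POSITION(1:6)  CHAR(6),
 HIREDATE POSITION(*)    DATE EXTERNAL(10),
 SALARY   POSITION(*+2)  DECIMAL EXTERNAL(9,2))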
Data types in a field specification: The data type of the field can be specified by
any of the keywords that follow. Except for graphic fields, length is the length in
bytes of the input field.
All numbers that are designated EXTERNAL are in the same format in the input
records.
CHAR(length)
Specifies a fixed-length character string. If you do not specify length, the length
of the string is determined from the POSITION specification. If you do not
specify length or POSITION, LOAD uses the default length for CHAR, which is
determined from the length of the column in the table. See Table 28 on page
210 for more information on the default length for CHAR. You can also specify
CHARACTER and CHARACTER(length).
| BIT
| Specifies that the input field contains BIT data. If BIT is specified, LOAD
| bypasses any CCSID conversions for the input data. If the target column
| has the BIT data type attribute, LOAD bypasses any code page translation
| for the input data.
MIXED
Specifies that the input field contains mixed SBCS and DBCS data. If
MIXED is specified, any required CCSID conversions use the mixed CCSID
for the input data. If MIXED is not specified, any such conversions use the
SBCS CCSID for the input data.
| STRIP
| Specifies that LOAD is to remove blanks (the default) or the specified
| characters from the beginning, the end, or both ends of the data. LOAD
| pads the CHAR field, so that it fills the rest of the column.
| LOAD applies the strip operation before performing any character code
| conversion or padding.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function. For details, see Chapter 5 of DB2 SQL Reference.
| BOTH
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning and end of the data. The default is
| BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning of the data.
| 'strip-char'
| Specifies a single-byte or double-byte character that LOAD is to strip
| from the data.
| Specify this character value in EBCDIC. Depending on the input
| encoding scheme, LOAD applies SBCS CCSID conversion to the
| strip-char value before it is used in the strip operation.
| If the subtype of the column to be loaded is BIT or you want to specify
| a strip-char value in an encoding scheme other than EBCDIC, use the
| hexadecimal form (X'strip-char'). LOAD does not perform any CCSID
| conversion if the hexadecimal form is used.
| X'strip-char'
| Specifies in hexadecimal form a single-byte or double-byte character
| that LOAD is to strip from the data. For single-byte characters, specify
| this value in the form X'hh', where hh is two hexadecimal characters.
| For double-byte characters, specify this value in the form X'hhhh',
| where hhhh is four hexadecimal characters.
| Use the hexadecimal form to specify a character in an encoding
| scheme other than EBCDIC. When you specify the character value in
| hexadecimal form, LOAD does not perform any CCSID conversion.
| If you specify a strip character in the hexadecimal format, you must
| specify the character in the input encoding scheme.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right if
| the string does not fit in the target column. LOAD performs the truncation
| operation after any CCSID translation.
| If the input data is BIT data, LOAD truncates the data at a byte boundary. If
| the input data is character type data, LOAD truncates the data at a
| character boundary. If a mixed-character type data is truncated to fit a
| column of fixed size, the truncated string can be shorter than the specified
| column size. In this case, blanks in the output CCSID are padded to the
| right.
GRAPHIC(length)
Specifies a fixed-length graphic type. You can specify both start and end for the
field specification.
If you use GRAPHIC, the input data must not contain shift characters. start and
end must indicate the starting and ending positions of the data itself.
length is the number of double-byte characters. The length of the field in bytes
is twice the value of length. If you do not specify length, the number of
double-byte characters is determined from the POSITION specification. If you
do not specify length or POSITION, LOAD uses the default length for
GRAPHIC, which is determined from the length of the column in the table. See
Table 28 on page 210 for more information on the default length for GRAPHIC.
For example, let *** represent three double-byte characters. Then, to describe
***, specify either POS(1:6) GRAPHIC or POS(1) GRAPHIC(3). A GRAPHIC field
that is described in this way cannot be specified in a field selection criterion.
| STRIP
| Specifies that LOAD is to remove blanks (the default) or the specified
| characters from the beginning, the end, or both ends of the data.
| LOAD applies the strip operation before performing any character code
| conversion or padding.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function. For details, see Chapter 5 of DB2 SQL Reference.
| BOTH
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning and end of the data. The default is
| BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning of the data.
| X'strip-char'
| Specifies the hexadecimal form of the double-byte character that LOAD
| is to strip from the data. Specify this value in the form X'hhhh', where
| hhhh is four hexadecimal characters.
| You must specify the character in the input encoding scheme.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right if
| the string does not fit in the target column. LOAD performs the truncation
| operation after any CCSID translation.
| LOAD truncates the data at a character boundary. Double-byte characters
| are not split.
GRAPHIC EXTERNAL(length)
Specifies a fixed-length field of the graphic type with the external format. You
can specify both start and end for the field specification.
If you use GRAPHIC EXTERNAL, the input data must contain a shift-out
character in the starting position, and a shift-in character in the ending position.
Other than the shift characters, this field must have an even number of bytes.
The first byte of any pair must not be a shift character.
length is the number of double-byte characters. length for GRAPHIC
EXTERNAL does not include the number of bytes that are represented by shift
characters. The length of the field in bytes is twice the value of length. If you do
not specify length, the number of double-byte characters is determined from the
POSITION specification. If you do not specify length or POSITION, LOAD uses
the default length for GRAPHIC, which is determined from the length of the
column in the table. See Table 28 on page 210 for more information on the
default length for GRAPHIC.
For example, let *** represent three double-byte characters, and let < and >
represent shift-out and shift-in characters. Then, to describe <***>, specify
either POS(1:8) GRAPHIC EXTERNAL or POS(1) GRAPHIC EXTERNAL(3).
| STRIP
| Specifies that LOAD is to remove blanks (the default) or the specified
| characters from the beginning, the end, or both ends of the data.
| LOAD applies the strip operation before performing any character code
| conversion or padding.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function. For details, see Chapter 5 of DB2 SQL Reference.
| BOTH
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning and end of the data. The default is
| BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning of the data.
| X'strip-char'
| Specifies the hexadecimal form of the double-byte character that LOAD
| is to strip from the data. Specify this value in the form X'hhhh', where
| hhhh is four hexadecimal characters.
| You must specify the character in the input encoding scheme.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right if
| the string does not fit in the target column. LOAD performs the truncation
| operation after any CCSID translation.
| LOAD truncates the data at a character boundary. Double-byte characters
| are not split.
VARGRAPHIC
Identifies a graphic field of varying length. The length, in double-byte
characters, must be specified in a 2-byte binary field preceding the data. (The
length does not include the 2-byte field itself.) The length field must start in the
column that is specified as start in the POSITION option. :end, if used, is
ignored.
VARGRAPHIC input data must not contain shift characters.
| STRIP
| Specifies that LOAD is to remove blanks (the default) or the specified
| characters from the beginning, the end, or both ends of the data. LOAD
| adjusts the VARGRAPHIC length field to the length of the stripped data (the
| number of DBCS characters).
| LOAD applies the strip operation before performing any character code
| conversion or padding.
| The effect of the STRIP option is the same as the SQL STRIP scalar
| function. For details, see Chapter 5 of DB2 SQL Reference.
| BOTH
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning and end of the data. The default is
| BOTH.
| TRAILING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the end of the data.
| LEADING
| Indicates that LOAD is to remove occurrences of blank or the specified
| strip character from the beginning of the data.
| X'strip-char'
| Specifies the hexadecimal form of the double-byte character that LOAD
| is to strip from the data. Specify this value in the form X'hhhh', where
| hhhh is four hexadecimal characters.
| You must specify the character in the input encoding scheme.
| TRUNCATE
| Indicates that LOAD is to truncate the input character string from the right if
| the string does not fit in the target column. LOAD performs the truncation
| operation after any CCSID translation.
| LOAD truncates the data at a character boundary. Double-byte characters
| are not split.
SMALLINT
Specifies a 2-byte binary number. Negative numbers are in two’s complement
notation.
INTEGER
Specifies a 4-byte binary number. Negative numbers are in two’s complement
notation. You can also specify INT.
INTEGER EXTERNAL(length)
A string of characters that represent a number. The format is that of an SQL
numeric constant, as described in Chapter 2 of DB2 SQL Reference. If you do
not specify length, the length of the string is determined from the POSITION
specification. If you do not specify length or POSITION, LOAD uses the default
length for INTEGER, which is 4 bytes. See Table 28 on page 210 for more
information on the default length for INTEGER. You can also specify INT
EXTERNAL.
DECIMAL PACKED
Specifies a number of the form ddd...ds, where d is a decimal digit that is
represented by four bits, and s is a 4-bit sign value. The plus sign (+) is
represented by A, C, E, or F, and the minus sign (-) is represented by B or D.
The maximum number of ds is the same as the maximum number of digits that
are allowed in the SQL definition. You can also specify DECIMAL, DEC, or DEC
PACKED.
DECIMAL ZONED
Specifies a number in the form znznzn...z/sn, where z, n, and s have the
following values:
n A decimal digit represented by the right 4 bits of a byte (called the
numeric bits)
z That digit’s zone, represented by the left 4 bits
s The right-most byte of the decimal operand; s can be treated as a zone
or as the sign value for that digit
The plus sign (+) is represented by A, C, E, or F, and the minus sign (-) is
represented by B or D. The maximum number of zns is the same as the
maximum number of digits that are allowed in the SQL definition. You can also
specify DEC ZONED.
DECIMAL EXTERNAL(length,scale)
Specifies a string of characters that represent a number. The format is that of
an SQL numeric constant, as described in Chapter 2 of DB2 SQL Reference.
length
Overall length of the input field, in bytes. If you do not specify length, the
length of the input field is determined from the POSITION specification. If
you do not specify length or POSITION, LOAD uses the default length for
DECIMAL EXTERNAL, which is determined by using decimal precision. See
Table 28 on page 210 for more information on the default length for
DECIMAL EXTERNAL.
scale
Specifies the number of digits to the right of the decimal point. scale must
be an integer greater than or equal to 0, and it can be greater than length.
The default is 0.
If scale is greater than length, or if the number of provided digits is less than
the specified scale, the input number is padded on the left with zeros until the
decimal point position is reached. If scale is greater than the target scale, the
source scale locates the implied decimal position. All fractional digits greater
than the target scale are truncated. If scale is specified and the target column
has a data type of small integer or integer, the decimal portion of the input
number is ignored. If a decimal point is present, its position overrides the field
specification of scale.
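As an illustration of how scale behaves, consider the following sketch; the table name,
column name, and position values are assumptions, and field specifications for the other
columns are omitted.
LOAD DATA INDDN SYSREC
  RESUME YES
  INTO TABLE MYSCHEMA.ORDERS
  ( AMOUNT POSITION (10:14) DECIMAL EXTERNAL(5,2) )
With this specification, the input characters 12345 are loaded as 123.45 because the
scale of 2 locates the implied decimal point. An input value that contains an explicit
decimal point, such as 1.5, overrides the scale specification.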
FLOAT(length)
Specifies either a 64-bit floating-point number or a 32-bit floating-point number.
If length is between 1 and 21 inclusive, the number is 32 bits in the S/390 (HFP)
format:
Bit 0 Represents a sign (0 for plus and 1 for minus)
Bits 1-7 Represent an exponent in excess-64 notation
Bits 8-31 Represent a mantissa
If length is between 1 and 24 inclusive, the number is 32 bits in the IEEE (BFP)
format:
Bit 0 Represents a sign (0 for plus and 1 for minus)
Bits 1-8 Represent an exponent
Bits 9-31 Represent a mantissa
You can also specify REAL for single-precision floating-point numbers and
DOUBLE PRECISION for double-precision floating-point numbers.
FLOAT EXTERNAL(length)
Specifies a string of characters that represent a number. The format is that of
an SQL floating-point constant, as described in Chapter 2 of DB2 SQL
Reference.
A specification of FLOAT(IEEE) or FLOAT(S390) does not apply for this format
(string of characters) of floating-point numbers.
If you do not specify length, the length of the string is determined from the
POSITION specification. If you do not specify length or POSITION, LOAD uses
the default length for FLOAT, which is 4 bytes for single precision and 8 bytes
for double precision. See Table 28 on page 210 for more information on the
default length for FLOAT.
DATE EXTERNAL(length)
Specifies a character string representation of a date. The length, if unspecified,
is the specified length on the LOCAL DATE LENGTH install option, or, if none
was provided, the default is 10 bytes. If you specify a length, it must be within
the range of 8 to 254 bytes.
Dates can be in any of the following formats. You can omit leading zeros for
month and day. You can include trailing blanks, but no leading blanks are
allowed.
v dd.mm.yyyy
v mm/dd/yyyy
v yyyy-mm-dd
v Any local format that your site defined at the time DB2 was installed
TIME EXTERNAL(length)
Specifies a character string representation of a time. The length, if unspecified,
is the specified length on the LOCAL TIME LENGTH install option, or, if none
was provided, the default is 8 bytes. If you specify a length, it must be within
the range of 4 to 254 bytes.
Times can be in any of the following formats:
v hh.mm.ss
v hh:mm AM
v hh:mm PM
v hh:mm:ss
v Any local format that your site defined at the time DB2 was installed
You can omit the mm portion of the hh:mm AM and hh:mm PM formats if mm is
equal to 00. For example, 5 PM is a valid time, and can be used instead of 5:00
PM.
TIMESTAMP EXTERNAL(length)
Specifies a character string representation of a timestamp. The default for length is 26
bytes. If you specify a length, it must be within the range of 19 to 26 bytes.
Timestamps can be in any of the following formats. Note that nnnnnn
represents the number of microseconds, and can be from 0 to 6 digits. You can
omit leading zeros from the month, day, or hour parts of the timestamp; you can
omit trailing zeros from the microseconds part of the timestamp.
v yyyy-mm-dd-hh.mm.ss
v yyyy-mm-dd-hh.mm.ss.nnnnnn
v yyyy-mm-dd hh:mm:ss.nnnnnn
See Chapter 2 of DB2 SQL Reference for more information about the
timestamp data type.
ROWID
Specifies a row ID. The input data must be a valid value for a row ID; DB2 does
not perform any conversions.
A field specification for a row ID column is not allowed if the row ID column was
created with the GENERATED ALWAYS option.
If the row ID column is part of the partitioning key, LOAD INTO TABLE PART is
not allowed; specify LOAD INTO TABLE instead.
BLOB
Specifies a BLOB field. You must specify the length in bytes in a 4-byte binary
field that precedes the data. (The length does not include the 4-byte field itself.)
The length field must start in the column that is specified as start in the
POSITION option. If :end is used, it is ignored.
CLOB
Specifies a CLOB field. You must specify the length in bytes in a 4-byte binary
field that precedes the data. (The length does not include the 4-byte field itself.)
The length field must start in the column that is specified as start in the
POSITION option. If :end is used, it is ignored.
MIXED
Specifies that the input field contains mixed SBCS and DBCS data. If
MIXED is specified, any required CCSID conversions use the mixed CCSID
for the input data; if MIXED is not specified, any such conversions use the
SBCS CCSID for the input data.
DBCLOB
Specifies a DBCLOB field. You must specify the length in double-byte
characters in a 4-byte binary field that precedes the data. (The length does not
include the 4-byte field itself.) The length field must start in the column that is
specified as start in the POSITION option. If :end is used, it is ignored.
DEFAULTIF field-selection-criterion
Describes a condition that causes the DB2 column to be loaded with its default
value. You can write the field-selection-criterion with the same options as
described under “field-selection-criterion” on page 208. If the contents of the
DEFAULTIF field match the provided character constant, the field that is
specified in field-specification is loaded with its default value.
If the DEFAULTIF field is defined by the name of a VARCHAR or VARGRAPHIC
field, DB2 takes the length of the field from the 2-byte binary field that appears
before the data portion of the VARCHAR or VARGRAPHIC field.
| Data in the input record can be in ASCII or Unicode, but the utility interprets
| character constants that are specified in the utility control statement as EBCDIC
| or Unicode. If the control statement is in the same encoding scheme as the
| input data, you can code character constants in the control statement.
| Otherwise, if the control statement is not in the same encoding scheme as the
| input data, you must code the condition with hexadecimal constants. For
| example, if the input data is in ASCII or Unicode and the control statement is in
| EBCDIC, use (1:1)=X'31' in the condition rather than (1:1)='1'.
You can use the DEFAULTIF attribute with the ROWID keyword. If the condition
is met, the column is loaded with a value that DB2 generates.
NULLIF field-selection-criterion
Describes a condition that causes the DB2 column to be loaded with NULL. You
can write the field-selection-criterion with the same options as described under
“field-selection-criterion” on page 208. If the contents of the NULLIF field match
the provided character constant, the field that is specified in field-specification is
loaded with NULL.
If the NULLIF field is defined by the name of a VARCHAR or VARGRAPHIC
field, DB2 takes the length of the field from the 2-byte binary field that appears
before the data portion of the VARCHAR or VARGRAPHIC field.
| Data in the input record can be in ASCII or Unicode, but the utility interprets
| character constants that are specified in the utility control statement as EBCDIC
| or Unicode. If the control statement is in the same encoding scheme as the
| input data, you can code character constants in the control statement.
| Otherwise, if the control statement is not in the same encoding scheme as the
| input data, you must code the condition with hexadecimal constants. For
| example, if the input data is in ASCII or Unicode and the control statement is in
| EBCDIC, use (1:1)=X'31' in the condition rather than (1:1)='1'.
The fact that a field in the output table is loaded with NULL does not change
the format or function of the corresponding field in the input record. The input
field can still be used in a field selection criterion. For example, assume that a
LOAD statement has the following field specification:
(FIELD1 POSITION(*) CHAR(4),
FIELD2 POSITION(*) CHAR(3) NULLIF(FIELD1='SKIP'),
FIELD3 POSITION(*) CHAR(5))
In this example, when FIELD1 contains 'SKIP', FIELD2 is loaded with NULL, but FIELD1
itself is still loaded with the value 'SKIP', and FIELD3 is loaded normally.
You cannot use the NULLIF parameter with the ROWID keyword because row
ID columns cannot be null.
Field selection criterion
Describes a condition that causes the DB2 column to be loaded with NULL or
with its default value.
2. Prepare the necessary data sets, as described in “Data sets that LOAD uses”
on page 223.
3. Create JCL statements, by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 19. (For examples of JCL for
LOAD, see “Sample LOAD control statements” on page 259.)
4. Prepare a utility control statement that specifies the options for the tasks that
you want to perform, as described in “Instructions for specific tasks” on page
226.
5. Check the compatibility table in “Concurrency and compatibility for LOAD” on
page 253 if you want to run other jobs concurrently on the same target objects.
6. Plan for restart if the LOAD job doesn’t complete, as described in “Terminating
or restarting LOAD” on page 250.
7. Read “After running LOAD” on page 255 in this section.
8. Run LOAD by using one of the methods described in Chapter 3, “Invoking DB2
online utilities,” on page 19.
When loading data into a segmented table space, sort your data by table to ensure
that the data is loaded in the best physical organization.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. As an alternative to specifying an input data set, you can specify a cursor with the
INCURSOR option. For more information about cursors, see “Loading data by using the
cross-loader function” on page 238.
3. Required if referential constraints exist and ENFORCE(CONSTRAINTS) is specified
(This option is the default).
4. Used for tables with indexes.
5. Required for discard processing when loading one or more tables that have unique
indexes.
6. Required if a sort is done.
7. If you omit the DD statement for this data set, LOAD creates the data set with the same
record format, record length, and block size as the input data set.
8. Required for inline copies.
9. Required if any indexes are to be built or if a sort is required for processing errors.
10. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table Table that is to be loaded. (If you want to load only one partition of a table,
you must use the PART option in the control statement.)
Defining work data sets: Use the formulas and instructions in Table 31 to calculate
the size of work data sets for LOAD. Each row in the table lists the DD name that is
used to identify the data set and either formulas or instructions that you should use
to determine the size of the data set. The key for the formulas is located at the
bottom of the table.
Table 31. Size of work data sets for LOAD jobs
  Work data set   Size
| SORTOUT         max(f,e)
| ST01WKnn        2 × (maximum record length × numcols × (count + 2) × number of indexes)
  SYSDISC         Same size as input data set
  SYSERR          e
  SYSMAP          v Simple table space for discard processing: m
                  v Partitioned or segmented table space without discard processing: max(m,e)
  SYSUT1          v Simple table space: max(k,e)
                  v Partitioned or segmented table space: max(k,e,m)
|                 v If you specify an estimate of the number of keys with the SORTKEYS
|                   option: max(f,e) for a simple table space; max(f,e,m) for a
|                   partitioned or segmented table space
Note: The variables in the formulas have the following meanings:
k Key calculation
f Foreign key calculation
m Map calculation
e Error calculation
max() Maximum value of the specified calculations
| numcols
| Number of key columns to concatenate when you collect frequent values from the
| specified index
| count Number of frequent values that DB2 is to collect
| maximum record length
| Maximum record length of the SYSCOLDISTSTATS record that is processed when
| collecting frequency statistics (You can obtain this value from the RECLENGTH
| column in SYSTABLES.)
2. Count 1 for each foreign key that is not exactly indexed (that is, where
foreign key and index definitions do not correspond identically).
3. For each foreign key that is exactly indexed (that is, where foreign key and
index definitions correspond identically):
| a. Count 0 for the first relationship in which the foreign key participates if
| the index is not a data-partitioned secondary index. Count 1 if the index
| is a data-partitioned secondary index.
b. Count 1 for subsequent relationships in which the foreign key
participates (if any).
4. Multiply count by the number of rows that are to be loaded.
Calculating the foreign key: f
| If a mix of data-partitioned secondary indexes and nonpartitioned indexes exists
| on the table that is being loaded or a foreign key exists that is exactly indexed
| by a data-partitioned secondary index, use this formula:
| max(longest foreign key + 15) × (number of extracted keys)
Otherwise, use this formula:
| max(longest foreign key + 13) × (number of extracted keys)
Calculating the map: m
The data set must be large enough to accommodate one map entry (length = 21
bytes) per table row that is produced by the LOAD job.
Calculating the error: e
| The data set must be large enough to accommodate one error entry (length =
| 560 bytes) per defect that is detected by LOAD (for example, conversion errors,
| unique index violations, violations of referential constraints).
Calculating the number of possible defects:
– For discard processing, if the discard limit is specified, the number of
possible defects is equal to the discard limit.
If the discard limit is the maximum, calculate the number of possible defects
by using the following formula:
number of input records +
(number of unique indexes × number of extracted keys) +
(number of relationships × number of extracted foreign keys)
– For nondiscard processing, the data set is not required.
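As a rough worked example, assume 100,000 input records, 2 unique indexes, 100,000
extracted keys, 1 relationship, 100,000 extracted foreign keys, and a discard limit
that is set to the maximum. All of these values are assumptions for illustration only.
number of possible defects = 100,000 + (2 × 100,000) + (1 × 100,000) = 400,000
e = 400,000 × 560 bytes = 224,000,000 bytes (approximately 224 MB)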
Allocating twice the space that is used by the input data sets is usually adequate for
the sort work data sets. Two or three large SORTWKnn data sets are preferable to
several small ones. For more information, see DFSORT Application Programming:
Guide.
For example, assume that you have a variable-length column that contains
X'42C142C142C2', which might be interpreted as either six single-byte characters
or three double-byte characters. With the two-byte length field, use:
v X'0006'X'42C142C142C2' to signify six single-byte characters in a VARCHAR
column
v X'0003'X'42C142C142C2' to signify three double-byte characters in a
VARGRAPHIC column
Because rows with duplicate key values for unique indexes fail to be loaded, any
records that are dependent on such rows either:
v Fail to be loaded because they would cause referential integrity violations (if you
specify ENFORCE CONSTRAINTS)
v Are loaded without regard to referential integrity violations (if you specify
ENFORCE NO)
As a result, violations of referential integrity might occur. Such violations can be
detected by LOAD (without the ENFORCE(NO) option) or by CHECK DATA.
| When you run a LOAD job with the REPLACE option but without the REUSE option
| and the data set that contains the data is not user-managed, DB2 deletes this data
| set before the LOAD and redefines a new data set with a control interval that
| matches the page size.
| See Appendix C, “Resetting an advisory or restrictive status,” on page 831 for more
| information.
Using LOAD REPLACE with LOG YES: The LOAD REPLACE or PART REPLACE
with LOG YES option logs only the reset and not each deleted row. If you need to
see what rows are being deleted, use the SQL DELETE statement.
LOAD DATA
REPLACE
INTO TABLE DSN8810.DEPT
( DEPTNO POSITION (1) CHAR(3),
DEPTNAME POSITION (5) VARCHAR,
MGRNO POSITION (37) CHAR(6),
ADMRDEPT POSITION (44) CHAR(3),
LOCATION POSITION (48) CHAR(16) )
ENFORCE NO
Figure 33. Example of using LOAD to replace one table in a single-table table space
1. Use LOAD REPLACE on the first table as shown in the control statement in
Figure 34. This option removes data from the table space and replaces just the
data for the first table.
Figure 34. Example of using LOAD REPLACE on the first table in a multiple-table table
space
2. Use LOAD with RESUME YES on the second table as shown in the control
statement in Figure 35. This option adds the records for the second table
without destroying the data in the first table.
Figure 35. Example of using LOAD with RESUME YES on the second table in a
multiple-table table space
If you need to replace just one table in a multiple-table table space, you need to
delete all the rows in the table, and then use LOAD with RESUME YES. For
example, assume that you want to replace all the data in DSN8810.TDSPTXT
without changing any data in DSN8810.TOPTVAL. To do this, follow these steps:
1. Delete all the rows from DSN8810.TDSPTXT by using the following SQL
DELETE statement:
EXEC SQL
DELETE FROM DSN8810.TDSPTXT
ENDEXEC
Hint: The mass delete works most quickly on a segmented table space.
2. Use the LOAD job that is shown in Figure 36 on page 230 to replace the rows
in that table.
Figure 36. Example of using LOAD with RESUME YES to replace one table in a
multiple-table table space
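For illustration, a minimal sketch of a LOAD statement of this form follows. The input
is assumed to come from the default SYSREC DD statement, and field specifications are
omitted on the assumption that the input records match the table definition.
LOAD DATA
  RESUME YES
  INTO TABLE DSN8810.TDSPTXT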
If you want to include the data from the identity column or ROWID column when
you load the unloaded data into a table, the identity column or ROWID column in
the target table must be defined with GENERATED BY DEFAULT. To use the
generated LOAD statement, remove the IGNOREFIELDS keyword and change the
dummy field names to the corresponding column names in the target table.
If RESUME NO is specified and the target table is not empty, no data is loaded.
If RESUME YES is specified and the target table is empty, data is loaded.
LOAD always adds rows to the end of the existing rows, but index entries are
placed in key sequence.
Loading partitions
If you use the PART clause of the INTO TABLE option, only the specified partitions
of a partitioned table are loaded. If you omit PART, the entire table is loaded.
You can specify the REPLACE and RESUME options separately by partition. The
control statement in Figure 37 specifies that DB2 is to load data into the first and
second partitions of the employee table. Records with '0' in column 1 replace the
contents of partition 1; records with '1' in column 1 are added to partition 2; all other
records are ignored. (The example control statement, which is simplified to illustrate
the point, does not list field specifications for all columns of the table.)
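A simplified sketch of a control statement of that form might look like the following;
field specifications are omitted, and the input is assumed to come from the default
SYSREC DD statement.
LOAD DATA
  INTO TABLE DSN8810.EMP PART 1 REPLACE
    WHEN (1:1) = '0'
  INTO TABLE DSN8810.EMP PART 2 RESUME YES
    WHEN (1:1) = '1'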
If you are not loading columns in the same order as in the CREATE TABLE
statement, you must code field specifications for each INTO TABLE statement.
The following example assumes that you have your data in separate input data
sets. That data is already sorted by partition, so you do not need to use the WHEN
clause of INTO TABLE. Placing the RESUME YES option before the PART option
inhibits concurrent partition processing while the utility is running.
LOAD DATA INDDN EMPLDS1 CONTINUEIF(72:72)='X'
RESUME YES
INTO TABLE DSN8810.EMP REPLACE PART 1
The following example allows partitioning independence when more than one
partition is being loaded concurrently.
LOAD DATA INDDN SYSREC LOG NO
INTO TABLE DSN8810.EMP PART 2 REPLACE
| When index-based partitioning is used, LOAD INTO PART integer is not allowed if
| an identity column is part of the partitioning index. When table-based partitioning is
| used, LOAD INTO PART integer is not allowed if an identity column is used in a
| partitioning-clause of the CREATE TABLE or ALTER TABLE statement.
Coding your LOAD job with SHRLEVEL CHANGE and using partition parallelism is
equivalent to concurrent, independent insert jobs. For example, in a large
partitioned table space that is created with DEFINE NO, the LOAD utility starts
three tasks. The first task tries to insert the first row, which causes an update to the
DBD. The other two tasks time out while they wait to access the DBD. The first task
holds the lock on the DBD while the data sets are defined for the table space.
| You are responsible for ensuring that the data in the file does not include the
| chosen delimiters. If the delimiters are part of the file’s data, unexpected errors can
| occur.
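For illustration, a minimal sketch of a LOAD statement that overrides the default
delimiters follows. The semicolon column delimiter is an arbitrary choice for this
sketch, and the sample department table is assumed to be the target.
LOAD DATA
  REPLACE
  FORMAT DELIMITED COLDEL ';' CHARDEL '"' DECPT '.'
  INTO TABLE DSN8810.DEPT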
| Table 32 lists the default hex values for the delimiter characters based on encoding
| scheme.
| Table 32. Default delimiter values for different encoding schemes
|                              EBCDIC   EBCDIC      ASCII/Unicode   ASCII/Unicode
| Character                    SBCS     DBCS/MBCS   SBCS            MBCS
| Character string delimiter   X'7F'    X'7F'       X'22'           X'22'
| Decimal point character      X'4B'    X'4B'       X'2E'           X'2E'
| Column delimiter             X'6B'    X'6B'       X'2C'           X'2C'
|
| Note: In most EBCDIC code pages, the hex values that are specified in Table 32
| are a double quotation mark (") for the character string delimiter, a period (.)
| for the decimal point character, and a comma (,) for the column delimiter.
| Table 33 lists the maximum allowable hex values for any delimiter character based
| on the encoding scheme.
| Table 33. Maximum delimiter values for different encoding schemes
| Encoding scheme Maximum allowable value
| EBCDIC SBCS None
| EBCDIC DBCS/MBCS X'3F'
| ASCII/Unicode SBCS None
| ASCII/Unicode MBCS X'7F'
|
| Table 34 identifies the acceptable data type forms for the delimited file format that
| the LOAD and UNLOAD utilities use.
| Table 34. Acceptable data type forms for delimited files. For each data type, the first
| entry is the acceptable form for loading a delimited file, and the second entry is the
| form that is created by unloading a delimited file.
| CHAR, VARCHAR
|   Form for loading: A delimited or non-delimited character string
|   Form created by unloading: Character data that is enclosed by character delimiters.
|   For VARCHAR, length bytes do not precede the data in the string.
| GRAPHIC (any type)
|   Form for loading: A delimited or non-delimited character stream
|   Form created by unloading: Data that is unloaded as a delimited character string. For
|   VARGRAPHIC, length bytes do not precede the data in the string.
| INTEGER (any type) (see note 1)
|   Form for loading: A stream of characters that represents a number in EXTERNAL format
|   Form created by unloading: Numeric data in external format
| DECIMAL (any type) (see note 2)
|   Form for loading: A character string that represents a number in EXTERNAL format
|   Form created by unloading: A string of characters that represents a number
| FLOAT (see note 3)
|   Form for loading: A representation of a number in the range -7.2E+75 to 7.2E+75 in
|   EXTERNAL format
|   Form created by unloading: A string of characters that represents a number in
|   floating-point notation
| BLOB, CLOB
|   Form for loading: A delimited or non-delimited character string
|   Form created by unloading: Character data that is enclosed by character delimiters.
|   Length bytes do not precede the data in the string.
| DBCLOB
|   Form for loading: A delimited or non-delimited character string
|   Form created by unloading: Character data that is enclosed by character delimiters.
|   Length bytes do not precede the data in the string.
| DATE
|   Form for loading: A delimited or non-delimited character string that contains a date
|   value in EXTERNAL format
|   Form created by unloading: Character string representation of a date
| TIME
|   Form for loading: A delimited or non-delimited character string that contains a time
|   value in EXTERNAL format
|   Form created by unloading: Character string representation of a time
| TIMESTAMP
|   Form for loading: A delimited or non-delimited character string that contains a
|   timestamp value in EXTERNAL format
|   Form created by unloading: Character string representation of a timestamp
|
| Note:
| 1. Field specifications of INTEGER or SMALLINT are treated as INTEGER
| EXTERNAL.
| 2. Field specifications of DECIMAL, DECIMAL PACKED, or DECIMAL
| ZONED are treated as DECIMAL EXTERNAL.
| 3. Field specifications of FLOAT, REAL, or DOUBLE are treated as FLOAT
| EXTERNAL.
LOAD requires access to the primary indexes on the parent tables of any loaded
tables. For simple, segmented, and partitioned table spaces, it drains all writers
from the parent table’s primary indexes. Other users cannot make changes to the
parent tables that result in an update to their own primary indexes. Concurrent
inserts and deletes on the parent tables are blocked, but updates are allowed for
columns that are not defined as part of the primary index.
| v The loaded table might lack primary key values that are values of foreign keys in
| dependent tables.
| The next few paragraphs describe how DB2 signals each of those errors and the
| means it provides for correcting them.
Duplicate values of a primary key: A primary index must be a unique index and
must exist if the table definition is complete. Therefore, when you load a parent
table, you build at least its primary index. You need an error data set, and probably
also a map data set and a discard data set.
Invalid foreign key values: A dependent table has the constraint that the values of
its foreign keys must be values of the primary keys of corresponding parent tables.
By default, LOAD enforces that constraint in much the same way as it enforces the
uniqueness of key values in a unique index. First, it loads all records to the table.
Subsequently, LOAD checks the validity of the records with respect to the
constraints, identifies any invalid record by an error message, and deletes the
record from the table. You can choose to copy this record to a discard data set.
Again you need at least an error data set, and probably also a map data set and a
discard data set.
However, the project table has a primary key, the project number. In this case, the
record that is rejected by LOAD defines a project number, and any row in the
project activity table that refers to the rejected number is also rejected. The
summary report identifies those as causing secondary errors. If you use a discard
data set, records for both types of errors are copied to it.
Missing primary key values: The deletion of invalid records does not cascade to
other dependent tables that are already in place. Suppose now that the project and
project activity tables exist in separate table spaces, and that they are both
currently populated and possess referential integrity. In addition, suppose that the
data in the project table is now to be replaced (using LOAD REPLACE) and that the
replacement data for some department was inadvertently not supplied in the input
data. Rows that reference that department number might already exist in the project
activity table. LOAD, therefore, automatically places the table space that contains
the project activity table (and all table spaces that contain dependent tables of any
table that is being replaced) into CHECK-pending status.
The CHECK-pending status indicates that the referential integrity of the table space
is in doubt; it might contain rows that violate a referential constraint. DB2 places
severe restrictions on the use of a table space in CHECK-pending status; typically,
you run the CHECK DATA utility to reset this status. For more information, see
“Resetting the CHECK-pending status” on page 256.
Consequences of ENFORCE NO: If you use the ENFORCE NO option, you tell
LOAD not to enforce referential constraints. Sometimes you have good reasons for
doing that, but the result is that the loaded table space might violate the constraints.
Hence, LOAD places the loaded table space in CHECK-pending status. If you use
REPLACE, all table spaces that contain any dependent tables of the tables that
were loaded are also placed in CHECK-pending status. You must reset the status of
each table space before you can use it.
For example, the violations might occur because parent rows do not exist. In this
case, correcting the parent tables is better than deleting the dependent rows, so
ENFORCE NO is more appropriate than ENFORCE CONSTRAINTS. After
you correct the parent table, you can use CHECK DATA to reset the
CHECK-pending status.
Compressing data
You can use LOAD with the REPLACE or RESUME NO options to build a
compression dictionary. If your table space, or a partition in a partitioned table
space, is defined with COMPRESS YES, the dictionary is created while records are
loaded. After the dictionary is completely built, the rest of the data is compressed as
it is loaded.
The data is not compressed until the dictionary is built. You must use LOAD
REPLACE or RESUME NO to build the dictionary. To save processing costs, an
initial LOAD does not go back to compress the records that were used to build the
dictionary.
The number of records that are required to build a dictionary is dependent on the
frequency of patterns in the data. For large data sets, the number of rows that are
required to build the dictionary is a small percentage of the total number of rows
that are to be compressed. For the best compression results, build a new dictionary
whenever you load the data.
Consider using KEEPDICTIONARY if the last dictionary was built by REORG; the
REORG utility’s sampling method can yield more representative dictionaries than
LOAD and can thus mean better compression. REORG with KEEPDICTIONARY is
efficient because the data is not decompressed in the process.
See page 403 for more information about using REORG to compress data, and see
Chapter 29, “RUNSTATS,” on page 535 for information about using RUNSTATS to
update catalog information about compression.
Use KEEPDICTIONARY if you want to try to compress all the records during LOAD,
and if you know that the data has not changed much in content since the last
dictionary was built. An example of LOAD with the KEEPDICTIONARY option is
shown in Figure 38.
LOAD DATA
REPLACE KEEPDICTIONARY
INTO TABLE DSN8810.DEPT
( DEPTNO POSITION (1) CHAR(3),
DEPTNAME POSITION (5) VARCHAR,
MGRNO POSITION (37) CHAR(6),
ADMRDEPT POSITION (44) CHAR(3),
LOCATION POSITION (48) CHAR(16) )
ENFORCE NO
You can also specify KEEPDICTIONARY for specific partitions of a partitioned table
space. In this case, each partition has its own dictionary.
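For illustration, a minimal sketch of a partition-level KEEPDICTIONARY specification
follows; the input data set is assumed to contain data for partition 1 only.
LOAD DATA INDDN SYSREC
  INTO TABLE DSN8810.EMP PART 1 REPLACE KEEPDICTIONARY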
IMS DPROP runs as a z/OS application and can extract data from VSAM and
physical sequential access method (SAM) files, as well as from DL/I databases. Using
IMS DPROP, you do not need to extract all the data in a database or data set. You
use a statement such as an SQL subselect to indicate which fields to extract and
which conditions, if any, the source records or segments must meet.
With JCL models that you edit, you can have IMS DPROP produce the statements
for a DB2 LOAD utility job. If you have more than one DB2 subsystem, you can
name the one that is to receive the output. IMS DPROP can generate LOAD control
statements in the job to relate fields in the extracted data to target columns in DB2
tables.
You have the following choices for how IMS DPROP writes the extracted data:
v 80-byte records, which are included in the generated job stream
v A separate physical sequential data set (which can be dynamically allocated by
IMS DPROP), with a logical record length that is long enough to accommodate
any row of the extracted data
In the first case, the LOAD control statements that are generated by IMS DPROP
include the CONTINUEIF option to describe the extracted data to DB2 LOAD.
In the second case, you can have IMS DPROP name the data set that contains the
extracted data in the SYSREC DD statement in the LOAD job. (In that case, IMS
DPROP makes no provision for transmitting the extracted data across a network.)
Normally, you do not need to edit the job statements that are produced by IMS
DPROP. However, in some cases you might need to edit; for example, if you want
to load character data into a DB2 column with INTEGER data type, you need to edit
the job statements. (DB2 LOAD does not consider CHAR and INTEGER data to be
compatible.)
IMS DPROP is a versatile tool that contains more control, formatting, and output
options than are described here. For more information about this tool, see IMS
DataPropagator: An Introduction.
To use the cross-loader function, you first need to declare a cursor by using the
EXEC SQL utility. Within the cursor definition, specify a SELECT statement that
identifies the result table that you want to use as the input data for the LOAD job.
The column names in the SELECT statement must be identical to the column
names in the table that is being loaded. You can use the AS clause in the SELECT
list to change the column names that are returned by the SELECT statement so
that they match the column names in the target table. The columns in the SELECT
list do not need to be in the same order as the columns in the target table. Also, the
SELECT statement needs to refer to any remote tables by their three-part name.
After you declare the cursor, specify the cursor name with the INCURSOR option in
the LOAD statement. You cannot load the input data into the same table on which
you defined the cursor. You can, however, use the same cursor to load multiple
tables.
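For illustration, a minimal sketch of the declare-and-load sequence follows. The
three-part name OTHERSITE.DSN8810.DEPT is an assumed remote source table whose column
names match those of the target table.
EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION
    FROM OTHERSITE.DSN8810.DEPT
ENDEXEC
LOAD DATA
  INCURSOR(C1)
  REPLACE
  INTO TABLE DSN8810.DEPT
Because the cursor selects from a different table, the restriction against loading into
the same table on which the cursor is defined does not apply.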
When you submit the LOAD job, DB2 parses the SELECT statement in the cursor
definition and checks for errors. If the statement is invalid, the LOAD utility issues
an error message and identifies the condition that prevented the execution. If the
statement syntax is valid but an error occurs during execution, the LOAD utility also
issues an error message. The utility terminates when it encounters an error.
If no errors occur, the utility loads the result table that is identified by the cursor into
the specified target table according to the following rules:
v LOAD matches the columns in the input data to columns in the target table by
name, not by sequence.
v If the number of columns in the cursor is less than the number of columns in the
table that is being loaded, DB2 loads the missing columns with their default
values. If the missing columns are defined as NOT NULL without defaults, the
LOAD job fails.
v If you specify IGNOREFIELDS YES, LOAD skips any columns in the input data
that do not exist in the target table.
v If the data types in the target table do not match the data types in the cursor,
DB2 tries to convert the data as much as possible. If the conversion fails, the
LOAD job fails. You might be able to avoid these conversion errors by using SQL
conversion functions in the SELECT statement of the cursor declaration.
v If the encoding scheme of the input data is different than the encoding scheme of
the target table, DB2 converts the encoding schemes automatically.
v The sum of the lengths of all of the columns cannot exceed 32 KB.
v If the SELECT statement in the cursor definition specifies a table with at least
one LOB column and a ROWID that was created with the GENERATED ALWAYS
clause, you cannot specify this ROWID column in the SELECT list of the cursor.
Also, although you do not need to specify casting functions for any distinct types in
the input data or target table, you might need to add casting functions to any
additional WHERE clauses in the SQL.
For examples of loading data from a cursor, see “Sample LOAD control statements”
on page 259.
To create an inline copy, use the COPYDDN and RECOVERYDDN keywords. You
can specify up to two primary and two secondary copies. Inline copies are produced
during the RELOAD phase of LOAD processing.
The SYSCOPY record that is produced by an inline copy contains ICTYPE=F and
SHRLEVEL=R. The STYPE column contains an R if the image copy was produced
by LOAD REPLACE LOG(YES). It contains an S if the image copy was produced
by LOAD REPLACE LOG(NO). The data set that is produced by the inline copy is
logically equivalent to a full image copy with SHRLEVEL REFERENCE, but the data
within the data set differs in the following ways:
v Data pages might be out of sequence and some might be repeated. If pages are
repeated, the last one is always the correct copy.
v Space map pages are out of sequence and might be repeated.
v If the compression dictionary is rebuilt with LOAD, the set of dictionary pages
occurs twice in the data set, with the second set being the correct one.
The total number of duplicate pages is small, with a negligible effect on the required
space for the data set.
You must specify LOAD REPLACE. If you specify RESUME YES or RESUME NO
but not REPLACE, an error message is issued and LOAD terminates.
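For illustration, a minimal sketch of a LOAD statement that requests an inline image
copy follows; MYCOPY1 is an assumed DD name that must be defined in the job JCL.
LOAD DATA INDDN SYSREC
  REPLACE
  COPYDDN(MYCOPY1)
  INTO TABLE DSN8810.DEPT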
Improving performance
To improve LOAD utility performance, you can take the following actions:
v Use one LOAD DATA statement when loading multiple tables in the same table
space. Follow the LOAD statement with multiple INTO TABLE WHEN clauses.
v Run LOAD concurrently against separate partitions of a partitioned table space.
Alternatively, specify the INDDN and DISCARDDN keywords in your utility control
statement to invoke partition parallelism, as shown in the sketch that follows this
list. This specification reduces the elapsed time that is required for loading large
amounts of data into partitioned table spaces.
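For the second item in the preceding list, a minimal sketch of partition parallelism
follows. The DD names EMP1REC, EMP2REC, EMP1DISC, and EMP2DISC are assumptions that
must match DD statements in the job JCL.
LOAD DATA
  INTO TABLE DSN8810.EMP PART 1 REPLACE INDDN EMP1REC DISCARDDN EMP1DISC
  INTO TABLE DSN8810.EMP PART 2 REPLACE INDDN EMP2REC DISCARDDN EMP2DISC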
Advantages of the SORTKEYS option: With SORTKEYS, index keys are passed
in memory rather than written to work files. Avoiding this I/O to the work files
improves LOAD performance.
You also reduce disk space requirements for the SYSUT1 and SORTOUT data
sets, especially if you provide an estimate of the number of keys to sort.
The SORTKEYS option reduces the elapsed time from the start of the RELOAD
phase to the end of the BUILD phase.
However, if the index keys are already in sorted order, or no indexes exist,
SORTKEYS does not provide any advantage.
You can reduce the elapsed time of a LOAD job for a table space or partition with
more than one defined index by specifying the parameters to invoke a parallel index
build. For more information, see “Building indexes in parallel for LOAD” on page
245.
Estimating the number of keys: You can specify an estimate of the number of
keys for the job to sort. If the estimate is omitted or specified as 0, LOAD writes the
extracted keys to the work data set, which reduces the performance improvement of
using SORTKEYS.
| If more than one table is being loaded, repeat the preceding steps for each table,
| and sum the results.
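For illustration, a minimal sketch of a LOAD statement that supplies such an estimate
follows; the value 75000 is an assumed total number of keys to be sorted.
LOAD DATA INDDN SYSREC
  REPLACE
  SORTKEYS 75000
  INTO TABLE DSN8810.EMP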
execution time. This technique might eliminate execution time delays but adds
setup time prior to the application’s execution. LOAD or REORG PREFORMAT
primes a new table space and prepares it for INSERT processing. When the
preformatted space is utilized and DB2 needs to extend the table space, normal
data set extending and preformatting occurs.
Preformatting for INSERT processing can be desirable for high-insert tables that
receive a predictable amount of data because all the required space can be
pre-allocated prior to the application’s execution. This benefit also applies to the
case of a table that acts as a repository for work items that come into a system and
that are subsequently used to feed a backend task that processes the work items.
Preformatting of a table space that contains a table that is used for query
processing can cause table space scans to read additional empty pages, extending
the elapsed time for these queries. LOAD or REORG PREFORMAT is not
recommended for tables that have a high ratio of reads to inserts if the reads result
in table space scans.
Preformatting boundaries: You can manage your own data sets or have DB2
manage the data sets. For user-managed data sets, DB2 does not delete and
reallocate them during utility processing. The size of the data set does not shrink
back to the original data set allocation size but either remains the same or
increases in size if additional space or data is added. This characteristic has
implications when LOAD or REORG PREFORMAT is used because of the
preformatting that is done for all free pages between the high-used RBA (or page)
to the high-allocated RBA. This preformatting includes secondary extents that have
been allocated.
For DB2-managed data sets, DB2 deletes and reallocates them if you specify
REPLACE on the LOAD or REORG job. As a result, the data sets are resized to
their original allocation size. They remain that size unless the data that is being
reloaded fills the primary allocation and forces a secondary allocation. This
means that the LOAD or REORG PREFORMAT option with DB2-managed data sets
causes at least the full primary allocation amount of a data set to be preformatted
after the data is reloaded into the table space.
For both user-managed and DB2-managed data sets, if the data set goes into
secondary extents during utility processing, the high-allocated RBA becomes the
end of the secondary extent, and that becomes the high value for preformatting.
Table space scans can also be elongated because empty preformatted pages are
read. Use the LOAD or REORG PREFORMAT option for table spaces that start out
empty and are filled through high insert activity before any query access is
performed against the table space. Mixing inserts and nonindexed queries against a
preformatted table space might have a negative impact on the query performance
without providing a compensating improvement in the insert performance. You will
see the best results where a high ratio of inserts to read operations exists.
Tables 35, 36, and 37 identify the compatibility of data types for assignments and
comparisons. Y indicates that the data types are compatible. N indicates that the
data types are not compatible. D indicates the defaults that are used when you do
not specify the input data type in a field specification of the INTO TABLE statement.
Notes:
1. Conversion applies when either the input data or the target table is Unicode.
Input fields with data types CHAR, CHAR MIXED, CLOB, DBCLOB, VARCHAR,
VARCHAR MIXED, GRAPHIC, GRAPHIC EXTERNAL, and VARGRAPHIC are
converted from the CCSIDs of the input file to the CCSIDs of the table space when
they do not match. For example:
v You specify the ASCII or UNICODE option for the input data, and the table space
is EBCDIC.
v You specify the EBCDIC or UNICODE option, and the table space is ASCII.
v You specify the ASCII or EBCDIC option, and the table space is Unicode.
v The CCSID option is specified, and the CCSIDs of the input data are not the
same as the CCSIDs of the table space.
CLOB, BLOB, and DBCLOB input field types cannot be converted to any other field
type.
Truncation of the decimal part of numeric data is not considered a conversion error.
| You can also remove a specified character from the beginning, end, or both ends of
| the data by specifying the STRIP option. This option is valid only with the CHAR,
| VARCHAR, GRAPHIC, and VARGRAPHIC data type options. If you specify both the
| TRUNCATE and STRIP options, LOAD performs the strip operation first. For
| example, if you specify both TRUNCATE and STRIP for a field that is to be loaded
| into a VARCHAR(5) column, LOAD alters the character strings as shown in
| Table 38 on page 245. In this table, an underscore represents a character that is to
| be stripped.
| Table 38. Results of specifying both TRUNCATE and STRIP for data that is to be loaded into
| a VARCHAR(5) column
| Specified STRIP option   Input string    String after strip operation   String that is loaded
| STRIP BOTH               '_ABCDEFG_'     'ABCDEFG'                      'ABCDE'
| STRIP LEADING            '_ABC_'         'ABC_'                         'ABC_'
| STRIP TRAILING           '_ABC_DEF_'     '_ABC_DEF'                     '_ABC_'
|
For unique indexes, any two null values are assumed to be equal, unless the index
was created with the UNIQUE WHERE NOT NULL clause. In that case, if the key is
a single column, it can contain any number of null values, although its other values
must be unique.
Neither the loaded table nor its indexes contain any of the records that might have
produced an error. Using the error messages, you can identify faulty input records,
correct them, and load them again. If you use a discard data set, you can correct
the records there and add them to the table with LOAD RESUME.
LOAD uses parallel index build if all of the following conditions are true:
v More than one index needs to be built.
| v The LOAD utility statement specifies a non-zero estimate of the number of keys
| on the SORTKEYS option.
For a diagram of parallel index build processing, see Figure 77 on page 457.
You can either allow the utility to dynamically allocate the data sets that the SORT
phase needs, or provide the necessary data sets yourself. Select one of the
following methods to allocate sort work and message data sets:
Method 1: LOAD determines the optimal number of sort work and message data
sets.
| 1. Specify the SORTDEVT keyword in the utility statement.
2. Allow dynamic allocation of sort work data sets by not supplying SORTWKnn
DD statements in the LOAD utility JCL.
3. Allocate UTPRINT to SYSOUT.
Method 2: You control allocation of sort work data sets, while LOAD allocates
message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: You have the most control over rebuild processing; you must specify
both sort work and message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
Data sets used: If you select Method 2 or 3 in the preceding information, use the
information provided here, along with “Determining the number of sort subtasks,”
“Allocation of sort subtasks,” and “Estimating the sort work file size” on page 247 to
define the necessary data sets.
Each sort subtask must have its own group of sort work data sets and its own print
message data set. Possible reasons to allocate data sets in the utility job JCL rather
than using dynamic allocation are:
v To control the size and placement of the data sets
v To minimize device contention
v To optimally utilize free disk space
v To limit the number of utility subtasks that are used to build indexes
The DD names SWnnWKmm define the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more data
sets that are to be used by that subtask pair. For example:
SW01WK01 The first sort work data set that is used by the subtask as it builds
the first index.
SW01WK02 The second sort work data set that is used by the subtask as it
builds the first index.
SW02WK01 The first sort work data set that is used by the subtask as it builds
the second index.
SW02WK02 The second sort work data set that is used by the subtask as it
builds the second index.
The DD names UTPRINnn define the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
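For illustration, the following JCL fragment allocates two subtask pairs under Method 3.
The space quantities are assumptions that you would size by using the formulas in this
section.
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(50,20))
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(50,20))
//SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(50,20))
//SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(50,20))
//UTPRIN01 DD SYSOUT=*
//UTPRIN02 DD SYSOUT=*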
LOAD determines the number of subtask pairs according to the following guidelines:
v The number of subtask pairs equals the number of sort work data set groups that
are allocated.
v The number of subtask pairs equals the number of message data sets that are
allocated.
v If you allocate both sort work and message data set groups, the number of
subtask pairs equals the smallest number of data sets that are allocated.
Allocation of sort subtasks: LOAD attempts to assign one sort subtask pair for
each index that is to be built. If LOAD cannot start enough subtasks to build one
index per subtask pair, it allocates any excess indexes across the pairs (in the order
that the indexes were created), so that one or more subtask pairs might build more
than one index.
During parallel index build processing, LOAD assigns all foreign keys to the first
utility subtask pair. Remaining indexes are then distributed among the remaining
subtask pairs according to the creation date of the index. If a table space does not
participate in any relationships, LOAD distributes all indexes among the subtask
pairs according to the index creation date, assigning the first created index to the
first subtask pair.
Refer to Table 39 for conceptual information about subtask pairing when the number
of indexes (seven indexes) exceeds the available number of subtask pairs (five
subtask pairs).
Table 39. LOAD subtask pairing for a relational table space
Subtask pair Assigned index
SW01WKmm Foreign keys, fifth created index
SW02WKmm First created index, sixth created index
SW03WKmm Second created index, seventh created index
SW04WKmm Third created index
SW05WKmm Fourth created index
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys in all of the indexes that are being
processed by the subtask in order to calculate each sort work file size. After you
determine which indexes are assigned to which subtask pairs, use one of the
following formulas to calculate the required space:
| v If the indexes being processed include a mixture of data-partitioned secondary
| indexes and nonpartitioned indexes, use the following formula:
| 2 × (longest index key + 15) × (number of extracted keys)
| v Otherwise, if only one type of index is being built, use the following formula:
| 2 × (longest index key + 13) × (number of extracted keys)
| longest index key The length of the longest key that is to be
| processed by the subtask. For the first subtask pair
| for LOAD, compare the length of the longest key
| and the length of the longest foreign key, and use
| the larger value. For nonpadded indexes, longest
| index key means the maximum possible length of a
| key with all varying-length columns, padded to their
| maximum lengths, plus 2 bytes for each
| varying-length column.
| number of extracted keys The number of keys from all indexes that are to be
sorted and that the subtask is to process.
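As a rough worked example, assume that a subtask pair builds a single nonpartitioned
index whose longest key is 40 bytes and that 1,000,000 keys are extracted; both values
are assumptions for illustration only. The second formula gives:
2 × (40 + 13) × 1,000,000 = 106,000,000 bytes (approximately 106 MB)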
When loading into a segmented table space, LOAD leaves free pages, and free
space on each page, in accordance with the current values of the FREEPAGE and
PCTFREE parameters. (You can set those values with the CREATE TABLESPACE,
ALTER TABLESPACE, CREATE INDEX, or ALTER INDEX statements.) LOAD
leaves one free page after reaching the FREEPAGE limit for each table in the table
space.
| If you are replacing a partition, these preceding restrictions are relaxed; the partition
| that is being replaced can be in the RECOVER-pending status, and its
| corresponding index partition can be in the REBUILD-pending status. However, all
| secondary indexes must not be in the page set REBUILD-pending status. See
| Appendix C, “Resetting an advisory or restrictive status,” on page 831 for more
| information about resetting a restrictive status.
See Table 169 on page 835 for information about resetting the RECOVER-pending
status, Table 168 on page 835 for information about resetting the REBUILD-pending
status, and “REORG-pending status” on page 836 for information about resetting
the REORG-pending status.
Any field specification that describes the data is checked before a field procedure is
executed. That is, the field specification must describe the data as it appears in the
input record.
ROWID generated by default: The LOAD utility can set columns that are defined as
ROWID GENERATED BY DEFAULT from the input data. The input field must be
specified as a ROWID. No conversions are allowed. The input data for a ROWID
column must be a unique, valid value for a row ID. If the value of the row ID is not
unique, a duplicate key violation occurs. If such an error occurs, the load fails. In
this case, you need to discard the duplicate value and re-run the LOAD job with a
new unique value, or allow DB2 to generate the value of the row ID.
You can use the DEFAULTIF attribute with the ROWID keyword. If the condition is
met, the column is loaded with a value that is generated by DB2. You cannot use
the NULLIF attribute with the ROWID keyword because row ID columns cannot be
null.
Table 40. LOAD LOG and REORG LOG impact for a LOB table space (continued)
Notes:
1. REORG LOG NO on a LOB table space sets COPY-pending status only if the LOB table space was changed by
the REORG utility.
Use either the STATISTICS option or the RUNSTATS utility to collect statistics so
that the DB2 catalog statistics contain information about the newly loaded data.
Recording these new statistics enables DB2 to select SQL paths with accurate
information. Then rebind any application plans that depend on the loaded tables to
update the path selection of any embedded SQL statements.
Collecting inline statistics for discarded rows: If you specify the DISCARDDN
and STATISTICS options and a row is found with check constraint errors or
conversion errors, the row is not loaded into the table and DB2 does not collect
inline statistics on it. However, the LOAD utility collects inline statistics prior to
discarding rows that have unique index violations or referential integrity violations.
In these cases, if the number of discarded rows is large enough to make the
statistics significantly inaccurate, run the RUNSTATS utility separately on the table
to gather the most accurate statistics.
Terminating LOAD
If you terminate LOAD by using the TERM UTILITY command during the reload
phase, the records are not erased. The table space remains in RECOVER-pending
status, and indexes remain in the REBUILD-pending status.
If you terminate LOAD by using the TERM UTILITY command during the sort or
build phases, the indexes that are not yet built remain in the REBUILD-pending
status.
| If the LOAD job terminates during the RELOAD, SORT, BUILD, or SORTBLD
phase, both RESTART and RESTART(PHASE) restart processing from the beginning
of the RELOAD phase. However, restart of LOAD RESUME YES or LOAD PART
RESUME YES in the BUILD or SORTBLD phase results in message DSNU257I.
Table 41 lists the LOAD phases and their effects on any pending states when the
utility is terminated in a particular phase.
Table 41. LOAD phases and their effects on pending states when terminated
Phase      Effect on pending status
Reload     v Places table space in RECOVER-pending status, then resets the status.
           v Places indexes in REBUILD-pending status.
           v Places table space in COPY-pending status.
|          v Places table space in CHECK-pending status.
| Build    v Resets REBUILD-pending status for nonunique indexes.
Indexval   v Resets REBUILD-pending status for unique indexes.
Enforce    v Resets CHECK-pending status for the table space.
Restarting LOAD
You can restart LOAD at its last commit point (RESTART(CURRENT)) or at the
beginning of the phase during which operation ceased (RESTART(PHASE)). LOAD
output messages identify the completed phases; use the DISPLAY command to
identify the specific phase during which operation stopped.
Notes:
1. SYSMAP and SYSERR data sets might not be required for all load jobs. See
Chapter 16, “LOAD,” on page 183 for exact requirements.
2. If the SYSERR data set is not required and has not been provided, LOAD uses
SYSUT1 as a work data set to contain error information.
3. You must not restart during the RELOAD phase if you specified SYSREC DD *. This
statement prevents internal commits from being taken, so RESTART performs like
RESTART(PHASE), except with no data backout. Also, you must not restart if your
SYSREC input consists of multiple, concatenated data sets.
4. The utility can be restarted with either RESTART or RESTART(PHASE). However,
because this phase does not take checkpoints, RESTART is always re-executed from
the beginning of the phase.
5. A LOAD RESUME YES job cannot be restarted in the BUILD or SORTBLD phase.
| 6. Use RESTART or RESTART(PHASE) to restart at the beginning of the RELOAD phase.
7. This utility can be restarted with either RESTART or RESTART(PHASE). However, the
utility is re-executed from the last internal checkpoint; this behavior depends on the
data sets that are used and whether any input data sets have been rewritten.
8. The SYSUT1 data set is required if the target table space is segmented or partitioned.
9. If a report is required and this is a load without discard processing, SYSMAP is required
to complete the report phase.
| 10. Any job that ended abnormally in the RELOAD, SORT, BUILD, or SORTBLD phase
| restarts from the beginning of the RELOAD phase.
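As a sketch (the utility ID, job step, and subsystem name are illustrative), you can restart a
stopped LOAD job at the beginning of the failing phase by resubmitting it with the restart value
on the UTPROC parameter of DSNUPROC:
//STEP1 EXEC DSNUPROC,UID='SAMPJOB.LOAD1',
//            UTPROC='RESTART(PHASE)',SYSTEM='DSN'
Specify UTPROC='RESTART' instead to restart from the last commit point where the phase allows it.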
Restarting after an out-of-space condition: See “Restarting after the output data
set is full” on page 45 for guidance in restarting LOAD from the last commit point
after receiving an out-of-space condition.
Claims and drains: Table 43 shows which claim classes LOAD drains and the
restrictive states the utility sets.
Table 43. Claim classes of LOAD operations
                                        LOAD        LOAD PART   LOAD        LOAD PART
                                        SHRLEVEL    SHRLEVEL    SHRLEVEL    SHRLEVEL
Target                                  NONE        NONE        CHANGE      CHANGE
Table space, index, or physical         DA/UTUT     DA/UTUT     CW/UTRW     CW/UTRW
  partition of a table space or
  index space
Nonpartitioned secondary index          DA/UTUT     DR          CW/UTRW     CW/UTRW
Data-partitioned secondary index        DA/UTUT     DA/UTUT     CW/UTRW     CW/UTRW
Index logical partition                 None        DA/UTUT     None        CW/UTRW
Primary index (with ENFORCE             DW/UTRO     DW/UTRO     CR/UTRW     CR/UTRW
  option only)
RI dependents                           CHKP (NO)   CHKP (NO)   CHKP (NO)   CHKP (NO)
Legend:
v CHKP (NO): Concurrently running applications do not see CHECK-pending status after
commit.
v CR: Claim the read claim class.
v CW: Claim the write claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
v None: Object is not affected by this utility.
v RI: Referential integrity
Compatibility: Table 44 on page 254 shows whether or not utilities are compatible
with LOAD and can run concurrently on the same target object. The target object
can be a table space, an index space, or a partition of a table space or index
space.
SQL operations and other online utilities on the same target partition are
incompatible.
You can also remove the restriction by using one of these operations:
v LOAD REPLACE LOG YES
v LOAD REPLACE LOG NO with an inline copy
v REORG LOG YES
v REORG LOG NO with an inline copy
v REPAIR SET with NOCOPYPEND
If you use LOG YES and do not make an image copy of the table space,
subsequent recovery operations are possible but take longer than if you had made
an image copy.
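For example, a REPAIR statement like the following sketch (the table space name is illustrative)
resets the COPY-pending restriction without creating a copy; be aware that DB2 then has no image
copy to use for recovery until you take one:
REPAIR SET TABLESPACE DSN8D81A.DSN8S81E NOCOPYPEND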
Although CHECK DATA is usually preferred, you can also reset the CHECK-pending
status by using any of the following operations:
v Drop tables that contain invalid rows.
v Replace the data in the table space, by using LOAD REPLACE and enforcing
check and referential constraints.
v Recover all members of the table space that were set to a prior quiesce point.
v Use REPAIR SET with NOCHECKPEND.
You want to run CHECK DATA against the table space that contains the project
activity table to reset the status. First, review the description of DELETE
YES and exception tables. Then, when you run the utility, ensure the availability of
all table spaces that contain either parent tables or dependent tables of any table in
the table spaces that are being checked.
DELETE YES: This option deletes invalid records and resets the status, but it is
not the default. Use DELETE NO, the default, to find out quickly how large your
problem is; you can choose to correct it by reloading, rather than correcting the
current situation.
Exception tables: With DELETE YES, you do not use a discard data set to
receive copies of the invalid records; instead, you use another DB2 table called an
exception table. This section assumes that you already have an exception table
available for every table that is subject to referential or table check constraints. (For
instructions on creating them, see “Create exception tables” on page 62.)
If you use DELETE YES, you must name an exception table for every descendent
of every table in every table space that is being checked. Deletes that are caused
by CHECK DATA are not subject to any of the SQL delete rules; they cascade
without restraint to the lowest-level descendent.
If table Y is the exception table for table X, name it with the following clause in the
CHECK DATA statement:
FOR EXCEPTION IN X USE Y
Example: In the following example, CHECK DATA is to be run against the table
space that contains the project activity table. Assume that the exception tables
DSN8810.EPROJACT and DSN8810.EEPA exist.
CHECK DATA TABLESPACE DSN8D81A.PROJACT
DELETE YES
FOR EXCEPTION IN DSN8810.PROJACT USE DSN8810.EPROJACT
IN DSN8810.EMPPROJACT USE DSN8810.EEPA
SORTDEVT SYSDA
SORTNUM 4
If the statement does not name error or work data sets, the JCL for the job must
contain DD statements similar to the following DD statements:
| //SYSERR DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SYSUT1 DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SORTOUT DD UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //UTPRINT DD SYSOUT=A
When the two jobs are complete, what table spaces are in CHECK-pending status?
v If you enforced constraints when loading the project table, the table space is not
in CHECK-pending status.
v Because you did not enforce constraints on the project activity table, the table
space is in CHECK-pending status.
v Because you used LOAD RESUME (not LOAD REPLACE) when loading the
project activity table, its dependents (the employee-to-project-activity table) are
not in CHECK-pending status. That is, the operation cannot delete any parent
rows from the project table, and therefore cannot violate the referential
integrity of its dependent. However, if you delete records from PROJACT when
checking, you still need an exception table for EMPPROJACT.
Therefore, you should check the data in the project activity table.
SCOPE PENDING: DB2 records the identifier of the first row of the table that
might violate referential or table check constraints. For partitioned table spaces, that
identifier is in SYSIBM.SYSTABLEPART; for nonpartitioned table spaces, that
identifier is in SYSIBM.SYSTABLES. The SCOPE PENDING option speeds the
checking by confining it to just the rows that might be in error.
Example: In the following example, CHECK DATA is to be run against the table
space that contains the project activity table after LOAD RESUME:
CHECK DATA TABLESPACE DSN8D81A.PROJACT
SCOPE PENDING
DELETE YES
FOR EXCEPTION IN DSN8810.PROJACT USE DSN8810.EPROJACT
IN DSN8810.EMPPROJACT USE DSN8810.EEPA
SORTDEVT SYSDA
SORTNUM 4
As before, the JCL for the job needs DD statements to define the error and sort
data sets.
To rebuild an index that is inconsistent with its data, use the REBUILD INDEX utility.
When LOAD inserts keys into an auxiliary index, free space within the index might be consumed and index page
splits might occur. Consider reorganizing an index on the auxiliary table after LOAD
completes to introduce free space into the index for future inserts and loads.
| When you run LOAD with the REPLACE option, the utility updates this range of
| used version numbers for indexes that are defined with the COPY NO attribute.
| LOAD REPLACE sets the OLDEST_VERSION column to the current version
| number, which indicates that only one version is active; DB2 can then reuse all of
| the other version numbers.
| Recycling of version numbers is required when all of the version numbers are being
| used. All version numbers are being used when one of the following situations is
| true:
| v The value in the CURRENT_VERSION column is one less than the value in the
| OLDEST_VERSION column.
| v The value in the CURRENT_VERSION column is 15, and the value in the
| OLDEST_VERSION column is 0 or 1.
| You can also run REBUILD INDEX, REORG INDEX, or REORG TABLESPACE to
| recycle version numbers for indexes that are defined with the COPY NO attribute.
| To recycle version numbers for indexes that are defined with the COPY YES
| attribute or for table spaces, run MODIFY RECOVERY.
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
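As a quick check, a catalog query similar to the following sketch (the index creator and name are
illustrative, and the version columns are assumed to be those of SYSIBM.SYSINDEXES that are
described for indexes above) shows how close an index that is defined with COPY NO is to
exhausting its version numbers:
SELECT NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE CREATOR = 'DSN8810'
    AND NAME = 'XDEPT1';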
Each POSITION clause specifies the location of a field in the input record. In this
example, LOAD accepts the input that is shown in Figure 40 on page 260 and
interprets it as follows:
v The first 3 bytes of each record are loaded into the DEPTNO column of the table.
v The next 36 bytes, including trailing blanks, are loaded into the DEPTNAME
column.
If this input column were defined as VARCHAR(36), the input data would need to
contain a 2-byte binary length field preceding the data. This binary field would
begin at position 4.
v The next three fields are loaded into columns that are defined as CHAR(6),
CHAR(3), and CHAR(16).
The RESUME YES clause specifies that the table space does not need to be
empty; new records are added to the end of the table.
LOAD DATA
RESUME YES
INTO TABLE DSN8810.DEPT
(DEPTNO POSITION (1:3) CHAR(3),
DEPTNAME POSITION (4:39) CHAR(36),
MGRNO POSITION (40:45) CHAR(6),
ADMRDEPT POSITION (46:48) CHAR(3),
LOCATION POSITION (49:64) CHAR(16))
LOAD
INTO TABLE DSN8810.DEPT PART 1 REPLACE
The POSITION clauses specify the location of the fields in the input data for the
DEPT table. For each source record that is to be loaded into the DEPT table:
v The characters in positions 7 through 9 are loaded into the DEPTNO column.
v The characters in positions 10 through 35 are loaded into the DEPTNAME
column.
v The characters in positions 36 through 41 are loaded into the MGRNO column.
v The characters in positions 42 through 44 are loaded into the ADMRDEPT
column.
Figure 41. Example LOAD statement that loads selected records into multiple tables
For each input record, data is loaded into the specified columns (that is, PROJNO,
PROJNAME, DEPTNO, and so on) to form a table row. Any other PROJ columns
that are not specified in the LOAD control statement are set to the default value.
The POSITION clauses define the starting positions of the fields in the input data
set. The ending positions of the fields in the input data set are implicitly defined
either by the length specification of the data type (CHAR length) or the length
specification of the external numeric data type (LENGTH).
The numeric data that is represented in SQL constant format (EXTERNAL format) is
converted to the correct internal format by the LOAD process and placed in the
indicated column names. The two dates (PRSTDATE and PRENDATE) are
assumed to be represented by eight digits and two separator characters, as in the
USA format (for example, 11/15/2001). The length of the date fields is given as 10
explicitly, although in many cases, the default is the same value.
| The COLDEL option indicates that the column delimiter is a comma (,). The
| CHARDEL option indicates that the character string delimiter is a double quotation
| mark ("). The DECPT option indicates that the decimal point character is a period
| (.). You are not required to explicitly specify these particular characters, because
| they are all defaults.
| //*
| //STEP3 EXEC DSNUPROC,UID=’JUQBU101.LOAD2’,TIME=1440,
| // UTPROC=’’,
| // SYSTEM=’SSTR’,DB2LEV=DB2A
| //SYSERR DD DSN=JUQBU101.LOAD2.STEP3.SYSERR,
| // DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
| // SPACE=(4096,(20,20),,,ROUND)
| //SYSDISC DD DSN=JUQBU101.LOAD2.STEP3.SYSDISC,
| // DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
| // SPACE=(4096,(20,20),,,ROUND)
| //SYSMAP DD DSN=JUQBU101.LOAD2.STEP3.SYSMAP,
| // DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
| // SPACE=(4096,(20,20),,,ROUND)
| //SYSUT1 DD DSN=JUQBU101.LOAD2.STEP3.SYSUT1,
| // DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
| // SPACE=(4096,(20,20),,,ROUND)
| //UTPRINT DD SYSOUT=*
| //SORTOUT DD DSN=JUQBU101.LOAD2.STEP3.SORTOUT,
| // DISP=(MOD,DELETE,CATLG),UNIT=SYSDA,
| // SPACE=(4096,(20,20),,,ROUND)
| //SYSIN DD *
| LOAD DATA
| FORMAT DELIMITED COLDEL ’,’ CHARDEL ’"’ DECPT ’.’
| INTO TABLE TBQB0103
| (FILENO CHAR,
| DATE1 DATE EXTERNAL,
| TIME1 TIME EXTERNAL,
| TIMESTMP TIMESTAMP EXTERNAL)
| /*
| //SYSREC DD *
| "001", 2000-02-16, 00.00.00, 2000-02-16-00.00.00.0000
| "002", 2001-04-17, 06.30.00, 2001-04-17-06.30.00.2000
| "003", 2002-06-18, 12.30.59, 2002-06-18-12.30.59.4000
| "004", 1991-08-19, 18.59.30, 1991-08-19-18.59.30.8000
| "005", 2000-12-20, 24.00.00, 2000-12-20-24.00.00.0000
| /*
|
| Figure 43. Example of loading data in delimited file format
|
Example 6: Concatenating multiple input records. The control statement in
Figure 44 on page 264 specifies that data from the SYSRECOV input data set is to
be loaded into table DSN8810.TOPTVAL. The input data set is identified by the
INDDN option. The table space that contains the TOPTVAL table is currently empty.
Some of the data that is to be loaded into a single row spans more than one input
record. In this situation, an X in column 72 indicates that the input record contains
fields that are to be loaded into the same row as the fields in the next input record.
In the LOAD control statement, CONTINUEIF(72:72)='X' indicates that LOAD is to
concatenate any input records that have an X in column 72 with the next record
before loading the data.
For each assembled input record (that is, after the concatenation), fields are loaded
into the DSN8810.TOPTVAL table columns (that is, MAJSYS, ACTION, OBJECT ...,
DSPINDEX) to form a table row. Any columns that are not specified in the LOAD
control statement are set to the default value.
The POSITION clauses define the starting positions of the fields in the assembled
input records. Starting positions are numbered from the first column of the internally
assembled input record, not from the start of the input records in the sequential
data set. The ending positions of the fields are implicitly defined by the length
specification of the data type (CHAR length).
No conversions are required to load the input character strings into their designated
columns, which are also defined to be fixed-length character strings. However,
because columns INFOTXT, HELPTXT, and PFKTXT are defined as 79 characters
in length and the strings that are being loaded are 71 characters in length, those
strings are padded with blanks as they are loaded.
Figure 44. Example of concatenating multiple input records before loading the data
| Example 7: Loading null values. The control statement in Figure 45 specifies that
| data from the SYSRECST data set is to be loaded into the specified columns in
| table SYSIBM.SYSSTRINGS. The input data set is identified by the INDDN option.
| The NULLIF option for the ERRORBYTE and SUBBYTE columns specifies that if
| the input field contains a blank, LOAD is to place a null value in the indicated
| column for that particular row. The DEFAULTIF option for the TRANSTAB column
| indicates that the utility is to load the default value for this column if the input field
| value is GG. The CONTINUEIF option indicates that LOAD is to concatenate any
| input records that have an X in column 80 with the next record before loading the
| data.
|
LOAD DATA INDDN(SYSRECST) CONTINUEIF(80:80)=’X’ RESUME(YES)
INTO TABLE SYSIBM.SYSSTRINGS
(INCCSID POSITION( 1) INTEGER EXTERNAL(5),
OUTCCSID POSITION( 7) INTEGER EXTERNAL(5),
TRANSTYPE POSITION( 13) CHAR(2),
ERRORBYTE POSITION( 16) CHAR(1) NULLIF(ERRORBYTE=’ ’),
SUBBYTE POSITION( 18) CHAR(1) NULLIF(SUBBYTE=’ ’),
TRANSPROC POSITION( 20) CHAR(8),
IBMREQD POSITION( 29) CHAR(1),
TRANSTAB POSITION( 31) CHAR(256) DEFAULTIF(TRANSTYPE=’GG’))
The CONTINUEIF option indicates that before loading the data LOAD is to
concatenate any input records that have an X in column 72 with the next record.
The POSITION clauses define the starting positions of the fields in the input data
set. The ending positions of the fields in the input data set are implicitly defined by
the length specification of the data type (CHAR length). In this case, the characters
in positions 1 through 3 are loaded into the ACTNO column, the characters in
positions 5 through 10 are loaded into the ACTKWD column, and the characters in
position 13 onward are loaded into the ACTDESC column. Because the ACTDESC
column is of type VARCHAR, the input data needs to contain a 2-byte binary field
that contains the length of the character field. This binary field begins at position 13.
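A rough sketch of such a control statement follows; the table name, the RESUME option, and the
field data types are assumptions based on the DSN8810.ACT sample table:
LOAD DATA INDDN(SYSREC) CONTINUEIF(72:72)='X' RESUME(YES)
  INTO TABLE DSN8810.ACT
    (ACTNO   POSITION(1)  INTEGER EXTERNAL(3),
     ACTKWD  POSITION(5)  CHAR(6),
     ACTDESC POSITION(13) VARCHAR)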
| Example 10: Loading data by using a parallel index build. The control statement
in Figure 48 on page 267 specifies that data from the SYSREC input data set is to
be loaded into table DSN8810.DEPT. Assume that 22 000 rows need to be loaded
into table DSN8810.DEPT, which has three indexes. In this example, the
SORTKEYS option is used to improve performance by forcing a parallel index build.
The SORTKEYS option specifies 66 000 as an estimate of the number of keys to sort
in parallel during the SORTBLD phase. (This estimate was computed by using the
calculation that is described in “Improved performance with SORTKEYS” on page
241.) Because more than one index needs to be built, LOAD builds the indexes in
parallel.
The CONTINUEIF option indicates that, before loading the data, LOAD is to
concatenate any input records that have a plus sign (+) in column 79 and a plus
sign (+) in column 80 with the next record.
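A sketch of such a statement follows; the REPLACE option and the DEPT field layout (borrowed from
Example 1) are assumptions:
LOAD DATA INDDN(SYSREC) REPLACE
  SORTKEYS 66000
  CONTINUEIF(79:80)='++'
  INTO TABLE DSN8810.DEPT
    (DEPTNO   POSITION (1:3)   CHAR(3),
     DEPTNAME POSITION (4:39)  CHAR(36),
     MGRNO    POSITION (40:45) CHAR(6),
     ADMRDEPT POSITION (46:48) CHAR(3),
     LOCATION POSITION (49:64) CHAR(16))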
| Example 11: Creating inline copies. The LOAD control statement in Figure 49 on
| page 268 specifies that the LOAD utility is to load data from the SYSREC data set
| into the specified columns of table ADMF001.TB0S3902. See “Example 1:
| Specifying field positions” on page 259 for an explanation of the POSITION
| clauses.
| COPYDDN(COPYT1) indicates that LOAD is to create inline copies and write the
| primary image copy to the data set that is defined by the COPYT1 template. This
| template is defined in one of the preceding TEMPLATE control statements. For
| more information about TEMPLATE control statements, see “Syntax and options of
| the TEMPLATE control statement” on page 575 of the TEMPLATE chapter. To
| create an inline copy, you must also specify the REPLACE option, which indicates
| that any data in the table space is to be replaced.
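A sketch of the relevant statements (the template data-set name pattern and the column
specifications are illustrative):
TEMPLATE COPYT1 UNIT(SYSDA)
  DSN(&DB..&TS..COPY&IC.&LOCREM.)
  DISP(NEW,CATLG,CATLG)
LOAD DATA INDDN(SYSREC) REPLACE
  COPYDDN(COPYT1)
  INTO TABLE ADMF001.TB0S3902
    (COL1 POSITION(1:4) CHAR(4),
     COL2 POSITION(6:9) CHAR(4))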
| distinct values in all of the key column combinations are to be collected. FREQVAL
| NUMCOLS 4 COUNT 20 indicates that 20 frequent values are to be collected on
| the concatenation of the first four key columns.
| REPORT YES indicates that the statistics are to be sent to SYSPRINT as output.
| UPDATE ALL and HISTORY ALL indicate that all collected statistics are to be
| updated in the catalog and catalog history tables.
|
| Example 13: Loading Unicode data. The following control statement specifies that
| Unicode data from the REC1 input data set is to be loaded into table
| ADMF001.TBMG0301. The UNICODE option specifies the type of input data. Only
| data that satisfies the condition that is specified in the WHEN clause is to be
| loaded. The CCSID option specifies the three coded character set identifiers for the
| input file: one for SBCS data, one for mixed data, and one for DBCS data. LOG
| YES indicates that logging is to occur during the LOAD job.
| LOAD DATA INDDN REC1 LOG YES REPLACE
| UNICODE CCSID(00367,01208,01200)
| INTO TABLE "ADMF001 "."TBMG0301"
| WHEN(00004:00005 = X’0003’)
Example 14: Loading data from multiple input data sets by using partition
parallelism. The LOAD control statement in Figure 51 on page 272 contains a
series of INTO TABLE statements that specify which data is to be loaded into which
partitions of table DBA01.TBLX3303. For each INTO TABLE statement:
v Data is to be loaded into the partition that is identified by the PART option. For
example, the first INTO TABLE statement specifies that data is to be loaded into
the first partition of table DBA01.TBLX3303.
v Data is to be loaded from the data set that is identified by the INDDN option. For
example, the data from the PART1 data set is to be loaded into the first partition.
v Any discarded rows are to be written to the data set that is specified by the
DISCARDDN option. For example, rows that are discarded during the loading of
data from the PART1 data set are written to the DISC1 data set.
v The data is loaded into the specified columns (EMPNO, LASTNAME, and
SALARY).
LOAD uses partition parallelism to load the data into these partitions.
The TEMPLATE utility control statement defines the data set naming convention for
the data set that is to be dynamically allocated during the following LOAD job. The
name of the template is ERR3. The ERRDDN option in the LOAD statement
specifies that any errors are to be written to the data set that is defined by this
ERR3 template. For more information about TEMPLATE control statements, see
“Syntax and options of the TEMPLATE control statement” on page 575 in the
TEMPLATE chapter.
TEMPLATE ERR3
DSN &UT..&JO..&ST..ERR3&MO.&DAY.
UNIT SYSDA DISP(NEW,CATLG,CATLG)
LOAD DATA
REPLACE
ERRDDN ERR3
INTO TABLE DBA01.TBLX3303
PART 1
INDDN PART1
DISCARDDN DISC1
(EMPNO POSITION(1) CHAR(6),
LASTNAME POSITION(8) VARCHAR(15),
SALARY POSITION(25) DECIMAL(9,2))
.
.
.
INTO TABLE DBA01.TBLX3303
PART 5
INDDN PART5
DISCARDDN DISC5
(EMPNO POSITION(1) CHAR(6),
LASTNAME POSITION(8) VARCHAR(15),
SALARY POSITION(25) DECIMAL(9,2))
/*
Example 15: Loading data from another table in the same system by using a
declared cursor. The following LOAD control statement specifies that all rows that
are identified by cursor C1 are to be loaded into table MYEMP. The INCURSOR
option is used to specify cursor C1, which is defined in the EXEC SQL utility control
statement. Cursor C1 points to the rows that are returned by executing the
statement SELECT * FROM DSN8810.EMP. In this example, the column names in
table DSN8810.EMP are the same as the column names in table MYEMP. Note that
the cursor cannot be defined on the same table into which DB2 is to load the data.
EXEC SQL
DECLARE C1 CURSOR FOR SELECT * FROM DSN8810.EMP
ENDEXEC
LOAD DATA
INCURSOR(C1)
REPLACE
INTO TABLE MYEMP
STATISTICS
Example 16: Loading data partitions in parallel from a remote site by using a
declared cursor. The LOAD control statement in Figure 52 on page 273 specifies
that for each specified partition of table MYEMPP, the rows that are identified by the
specified cursor are to be loaded. In each INTO TABLE statement, the PART option
specifies the partition number, and the INCURSOR option specifies the cursor. For
example, the rows that are identified by cursor C1 are to be loaded into the first
partition. The data for each partition is loaded in parallel.
Each cursor is defined in a separate EXEC SQL utility control statement and points
to the rows that are returned by executing the specified SELECT statement. These
SELECT statements are executed on a table at a remote server, so the
three-part name is used to identify the table. In this example, the column names in
table CHICAGO.DSN8810.EMP are the same as the column names in table
MYEMPP.
Figure 52. Example of loading data partitions in parallel using a declared cursor
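A rough sketch of such a job (the partition count and the predicates that split the data between
the cursors are illustrative):
EXEC SQL
  DECLARE C1 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
    WHERE EMPNO < '100000'
ENDEXEC
EXEC SQL
  DECLARE C2 CURSOR FOR SELECT * FROM CHICAGO.DSN8810.EMP
    WHERE EMPNO >= '100000'
ENDEXEC
LOAD DATA
INTO TABLE MYEMPP PART 1 REPLACE INCURSOR(C1)
INTO TABLE MYEMPP PART 2 REPLACE INCURSOR(C2)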
MERGECOPY operates on the image copy data sets of a table space, and not on
the table space itself.
Output: Output from the MERGECOPY utility consists of one of the following types
of copies:
v A new single incremental image copy
v A new full image copy
You can create the new image copy for the local or recovery site.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute MERGECOPY, but only
on a table space in the DSNDB01 or DSNDB06 database.
Syntax diagram
WORKDDN SYSUT1
WORKDDN ddname
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name that contains
only table spaces. You can specify one LIST keyword per MERGECOPY
control statement. Do not specify LIST with the TABLESPACE keyword.
MERGECOPY is invoked once for each table space in the list. For more
information about LISTDEF specifications, see Chapter 15, “LISTDEF,” on
page 163.
TABLESPACE database-name.table-space-name
Specifies the table space that is to be copied, and, optionally, the database
to which it belongs.
database-name
The name of the database that the table space belongs to. The default
is DSNDB04.
table-space-name
The name of the table space whose incremental image copies are to be
merged.
You cannot specify DSNUM and LIST in the same MERGECOPY control
statement. Use PARTLEVEL on the LISTDEF instead. If image copies were
taken by data set (rather than by table space), MERGECOPY must use the
copies by data set.
WORKDDN ddname
Specifies a DD statement for a temporary data set or template, which is to
be used for intermediate merged output. WORKDDN is optional.
ddname is the DD name. The default is SYSUT1.
Use the WORKDDN option if you are not able to allocate enough data sets
to execute MERGECOPY; in that case, a temporary data set is used to hold
intermediate output. If you omit the WORKDDN option, you might find that
only some of the image copy data sets are merged. When MERGECOPY
has ended, a message is issued that tells the number of data sets that exist
and the number of data sets that have been merged. To continue the
merge, repeat MERGECOPY with a new output data set.
NEWCOPY
Specifies whether incremental image copies are to be merged with the full
image copy. NEWCOPY is optional.
NO
Merges incremental image copies into a single incremental image copy
but does not merge them with the full image copy. The default is NO.
YES
Merges all incremental image copies with the full image copy to form a
new full image copy.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the output image copy data sets at the
local site. ddname1 is the primary output image copy data set. ddname2 is
the backup output image copy data set. COPYDDN is optional.
The default is COPYDDN(SYSCOPY), where SYSCOPY identifies the
primary data set.
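For example, a control statement like the following sketch (the table space name is illustrative)
merges all incremental copies with the full copy to form a new full image copy that is written to
the data set on the SYSCOPY DD statement:
MERGECOPY TABLESPACE DSN8D81A.DSN8S81E
  WORKDDN SYSUT1
  NEWCOPY YES
  COPYDDN(SYSCOPY)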
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Object whose copies are to be merged.
Data sets: The input data sets for the merge operation are dynamically allocated.
To merge incremental copies, allocate in the JCL a work data set (WORKDDN) and
up to two new copy data sets (COPYDDN) for the utility job. You can allocate the
data sets to tape or disk. If you allocate them to tape, you need an additional tape
drive for each data set.
With the COPYDDN option of MERGECOPY, you can specify the DD names for the
output data sets. The option has the format COPYDDN (ddname1,ddname2), where
ddname1 is the DD name for the primary output data set in the system that
currently runs DB2, and ddname2 is the DD name for the backup output data set in
the system that currently runs DB2. The default for ddname1 is SYSCOPY.
The RECOVERYDDN option of MERGECOPY lets you specify the output image
copy data sets at the recovery site. The option has the format RECOVERYDDN
(ddname3, ddname4), where ddname3 is the DD name for the primary output image
copy data set at the recovery site, and ddname4 is the DD name for the backup
output data set at the recovery site.
Defining the work data set: The work data set should be at least equal in size to
the largest input image copy data set that is being merged. Use the same DCB
attributes that are used for the image copy data sets.
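A sketch of such a DD statement (the data set name and space quantities are illustrative):
//SYSUT1 DD DSN=MERGE.WORK.SYSUT1,DISP=(MOD,DELETE,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))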
If NEWCOPY is YES, the utility inserts an entry for the new full image copy into the
SYSIBM.SYSCOPY catalog table.
In either case, if any of the input data sets cannot be allocated, or if you did not
specify a temporary work data set (WORKDDN), the utility performs a partial merge.
For large table spaces, consider using MERGECOPY to create full image copies.
With the NEWCOPY YES option, however, you can merge a full image copy of a
table space with incremental copies of the table space and of individual data sets to
make a new full image copy of the table space.
If the image copy data sets that you want to merge reside on tape, refer to “Retaining
tape mounts” on page 368 for general information about specifying the appropriate
parameters on the DD statements.
To delete all log information that is included in a copy that MERGECOPY makes,
perform the following steps:
1. Find the record of that copy in the catalog table SYSIBM.SYSCOPY. You can
find it by selecting database name, table space name, and date (columns
DBNAME, TSNAME, and ICDATE).
2. Column START_RBA contains the RBA of the last image copy that
MERGECOPY used. Find the record of the image copy that has the same value
of START_RBA.
3. In that record, find the date in column ICDATE. You can use MODIFY
RECOVERY to delete all copies and log records for the table space that were
made before that date.
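A sketch of the catalog query for steps 1 and 2 (the database and table space names are
illustrative):
SELECT ICDATE, ICTYPE, START_RBA
  FROM SYSIBM.SYSCOPY
  WHERE DBNAME = 'DSN8D81A'
    AND TSNAME = 'DSN8S81E'
  ORDER BY ICDATE;
The START_RBA value of the merged copy identifies the older image copy row whose ICDATE you can
then use on the MODIFY RECOVERY DELETE DATE option.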
RECOVER uses the LOG RBA of image copies to determine the starting point in
the log that is needed for recovery. Normally, a timestamp directly corresponds to a
LOG RBA. Because of this, and because MODIFY uses dates to clean up recovery
history, you might decide to use dates to delete old archive log tapes. This decision
might cause a problem if you use MERGECOPY. MERGECOPY inserts the LOG
RBA of the last incremental image copy into the SYSCOPY row that is created for
the new image copy. The date that is recorded in the ICDATE column of the SYSCOPY
row is the date on which MERGECOPY was executed.
See “Restarting after the output data set is full” on page 45 for guidance in
restarting MERGECOPY from the last commit point after receiving an out-of-space
condition.
Table 47 shows the restrictive state that the utility sets on the target object.
Table 47. Claim classes of MERGECOPY operations.
Target MERGECOPY
Table space or partition UTRW
Legend:
v UTRW: Utility restrictive state, read-write access allowed.
MERGECOPY can run concurrently on the same target object with any utility
except the following utilities:
v COPY TABLESPACE
v LOAD
v MERGECOPY
v MODIFY
v RECOVER
v REORG TABLESPACE
v UNLOAD (only when from the same image copy data set)
For each full and incremental SYSCOPY record that is deleted from
SYSIBM.SYSCOPY, the utility returns a message identifying the name of the copy
data set.
For information about deleting SYSLGRNX rows, see “Deleting SYSLGRNX and
SYSCOPY rows for a single partition or the entire table space” on page 289.
If MODIFY RECOVERY deletes at least one SYSCOPY record and the target table
space or partition is not recoverable, the target object is placed in COPY-pending
status.
| For table spaces and indexes that are defined with COPY YES, the MODIFY
| RECOVERY utility updates the OLDEST_VERSION column of the following catalog
| tables:
| v SYSIBM.SYSTABLESPACE
| v SYSIBM.SYSTABLEPART
| v SYSIBM.SYSINDEXES
| v SYSIBM.SYSINDEXPART
| For more information about how and when MODIFY RECOVERY updates these
| tables, see “The effect of MODIFY RECOVERY on version numbers” on page 291.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database to run MODIFY RECOVERY
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
Syntax diagram
MODIFY RECOVERY LIST listdef-name
MODIFY RECOVERY TABLESPACE database-name.table-space-name DSNUM integer
(The database-name qualifier and the DSNUM clause are optional. The default is DSNUM ALL.)
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name that contains
only table spaces. You can specify one LIST keyword per MODIFY RECOVERY
control statement. Do not specify LIST with the TABLESPACE keyword.
MODIFY is invoked once for each table space in the list. For more information
about LISTDEF specifications, see Chapter 15, “LISTDEF,” on page 163.
TABLESPACE database-name.table-space-name
Specifies the database and the table space for which records are to be deleted.
database-name
Specifies the name of the database to which the table space
belongs. database-name is optional. The default is DSNDB04.
table-space-name
Specifies the name of the table space.
DSNUM integer
Identifies a single partition or data set of the table space for which records are
to be deleted; ALL deletes records for the entire data set and table space.
integer is the number of a partition or data set.
The default is ALL.
| For a partitioned table space, integer is its partition number. The maximum is
| 4096.
For a nonpartitioned table space, use the data set integer at the end of the data
set name as cataloged in the VSAM catalog. If image copies are taken by
partition or data set and you specify DSNUM ALL, the table space is placed in
COPY-pending status if a full image copy of the entire table space does not
exist. The data set name has the following format, where y is either I or J, and
nnn is the data set integer.
catname.DSNDBx.dbname.tsname.y0001.Annn
If you specify DSNUM, MODIFY RECOVERY does not delete any SYSCOPY
records for the partitions that have an RBA greater than that of the earliest point
to which the entire table space could be recovered. That point might indicate a
full image copy, a LOAD operation with LOG YES or a REORG operation with
LOG YES.
See “Deleting SYSLGRNX and SYSCOPY rows for a single partition or the
entire table space” on page 289 for more information about specifying DSNUM.
DELETE
Indicates that records are to be deleted. See the DSNUM description for
restrictions on deleting partition statistics.
AGE integer
Deletes all SYSCOPY records that are older than a specified number of
days.
integer is the number of days, and can range from 0 to 32767. Records
that are created today are of age 0 and cannot be deleted by this
option.
(*) deletes all records, regardless of their age.
DATE integer
Deletes all records that are written before a specified date.
integer can be in eight- or six-character format. You must specify a year
(yyyy or yy), month (mm), and day (dd) in the form yyyymmdd or
yymmdd. DB2 checks the system clock and converts six-character
dates to the most recent, previous eight-character equivalent.
(*) deletes all records, regardless of the date on which they were
written.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Object for which records are to be deleted.
You can restart a MODIFY RECOVERY utility job, but it starts from the beginning
again. For guidance in restarting online utilities, see “Restarting an online utility” on
page 42.
Table 49 shows the restrictive state that the utility sets on the target object.
Table 49. Claim classes of MODIFY RECOVERY operations.
Target MODIFY RECOVERY
Table space or partition UTRW
Legend:
v UTRW: Utility restrictive state, read-write access allowed.
MODIFY RECOVERY can run concurrently on the same target object with any utility
except the following utilities:
v COPY TABLESPACE
v LOAD
v MERGECOPY
v MODIFY RECOVERY
v RECOVER TABLESPACE
v REORG TABLESPACE
| When you run MODIFY RECOVERY, the utility updates this range of used version
| numbers for table spaces and for indexes that are defined with the COPY YES
| attribute. MODIFY RECOVERY updates the OLDEST_VERSION column of the
| appropriate catalog table or tables with the version number of the oldest version
| that has not yet been applied to the entire object. DB2 can reuse any version
| numbers that are not in the range that is set by the values in the
| OLDEST_VERSION and CURRENT_VERSION columns.
| Recycling of version numbers is required when all of the version numbers are being
| used. All version numbers are being used when one of the following situations is
| true:
| v The value in the CURRENT_VERSION column is one less than the value in the
| OLDEST_VERSION column.
| v The value in the CURRENT_VERSION column is 255 for table spaces or 15 for
| indexes, and the value in the OLDEST_VERSION column is 0 or 1.
| To recycle version numbers for indexes that are defined with the COPY NO
| attribute, run LOAD REPLACE, REBUILD INDEX, REORG INDEX, or REORG
| TABLESPACE.
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
Example 2: Deleting SYSCOPY records that are older than a certain date. The
following control statement specifies that MODIFY RECOVERY is to delete all
SYSCOPY records that were written before 10 September 2002.
| MODIFY RECOVERY TABLESPACE DSN8D81A.DSN8S81D DELETE DATE(20020910)
Figure 56. Example MODIFY RECOVERY statements that delete SYSCOPY records for
partitions
Example 4: Deleting all SYSCOPY records for objects in a list and viewing the
results. In the following example job, the LISTDEF utility control statements define
three lists (L1, L2, L3). The first group of REPORT utility control statements then
specify that the utility is to report recovery information for the objects in these lists.
Next, the MODIFY RECOVERY control statement specifies that the utility is to
delete all SYSCOPY records for the objects in the L1 list. Finally, the second group
of REPORT control statements specify that the utility is to report the recovery
information for the same three lists. In this second report, no information will be
reported for the objects in the L1 list because all of the SYSCOPY records have
been deleted.
Figure 57. Example MODIFY RECOVERY statement that deletes all SYSCOPY records
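A rough sketch of the MODIFY RECOVERY portion of such a job (the list definition and database
name are illustrative; the LISTDEF and REPORT statements for the other lists are not shown):
LISTDEF L1 INCLUDE TABLESPACES DATABASE DBNAME1
MODIFY RECOVERY LIST L1 DELETE AGE(*)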
For more information about the LISTDEF utility control statements, see Chapter 15,
“LISTDEF,” on page 163. For more information about the REPORT utility control
statements, see Chapter 27, “REPORT,” on page 509.
Run MODIFY STATISTICS regularly to clear outdated information from the statistics
history catalog tables. By deleting outdated information from those tables, you can
improve performance for processes that access data from those tables.
Output: MODIFY STATISTICS deletes rows from the following catalog tables:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
v SYSIBM.SYSINDEXES_HIST
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSINDEXSTATS_HIST
v SYSIBM.SYSLOBSTATS_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSTABSTATS_HIST
v SYSIBM.SYSTABLES_HIST
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority.
Syntax diagram
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. You cannot
repeat the LIST keyword or specify it with TABLESPACE, INDEXSPACE, or
INDEX.
The list can contain index spaces, table spaces, or both. MODIFY STATISTICS
is invoked once for each object in the list.
TABLESPACE database-name.table-space-name
Specifies the database and the table space for which catalog history records
are to be deleted.
database-name
Specifies the name of the database to which the table space
belongs. database-name is optional. The default is DSNDB04.
table-space-name
Specifies the name of the table space for which statistics are to
be deleted.
INDEXSPACE database-name.index-space-name
Specifies the qualified name of the index space for which catalog history
information is to be deleted. The utility lists the name in the
SYSIBM.SYSINDEXES table.
database-name
Optionally specifies the name of the database to which the
index space belongs. The default is DSNDB04.
index-space-name
Specifies the name of the index space for which the statistics
are to be deleted.
INDEX creator-id.index-name
Specifies the index for which catalog history information is to be deleted.
creator-id
Optionally specifies the creator of the index. The default is the user identifier for the utility job.
index-name
Specifies the name of the index for which the statistics are to be
deleted. Enclose the index name in quotation marks if the name
contains a blank.
DELETE
Indicates that records are to be deleted.
ALL Deletes all statistics history rows that are related to the specified object
from all catalog history tables.
Rows from the following history tables are deleted only when you
specify DELETE ALL:
v SYSTABLES_HIST
v SYSTABSTATS_HIST
v SYSINDEXES_HIST
v SYSINDEXSTATS_HIST
ACCESSPATH
Deletes all access-path statistics history rows that are related to the
specified object from the following history tables:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
SPACE
Deletes all space-tuning statistics history rows that are related to the
specified object from the following history tables:
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSLOBSTATS_HIST
AGE (integer)
Deletes all statistics history rows that are related to the specified object and that
are older than a specified number of days.
(integer)
Specifies the number of days in a range from 0 to 32 767. This option
cannot delete records that are created today (age 0).
(*) Deletes all records, regardless of their age.
DATE (integer)
Deletes all records that are written before a specified date.
(integer)
Specifies the date in an eight-character format. Specify a year (yyyy),
month (mm), and day (dd) in the form yyyymmdd.
(*)
Deletes all records, regardless of the date on which they were written.
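For example, a statement like the following sketch (the table space name is illustrative) removes
space-tuning history rows that are more than 90 days old:
MODIFY STATISTICS TABLESPACE DSN8D81A.DSN8S81E
  DELETE SPACE AGE(90)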
3. Prepare a utility control statement that specifies the options for the tasks that
you want to perform, as described in “Instructions for specific tasks.”
4. Check the compatibility table in “Concurrency and compatibility for MODIFY
STATISTICS” on page 299 if you want to run other jobs concurrently on the
same target objects.
5. Restart a MODIFY STATISTICS utility job (it starts from the beginning again) or
terminate MODIFY STATISTICS by using the TERM UTILITY command. For
guidance in restarting online utilities, see “Restarting an online utility” on page
42.
6. Run MODIFY STATISTICS by using one of the methods described in Chapter 3,
“Invoking DB2 online utilities,” on page 19.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space or index space
Object for which records are to be deleted.
Be aware that when you manually insert, update, or delete catalog information, DB2
does not store the historical information for those operations in the historical catalog
tables.
You can choose to delete only the statistics rows that relate to access path
selection by specifying the ACCESSPATH option. Alternatively, you can delete the
rows that relate to space statistics by using the SPACE option. To delete rows in all
statistics history catalog tables, including the SYSIBM.SYSTABLES_HIST catalog
table, you must specify the DELETE ALL option in the utility control statement.
To delete statistics from the RUNSTATS history tables, you can either use the
MODIFY STATISTICS utility or issue SQL DELETE statements. The MODIFY
STATISTICS utility simplifies the purging of old statistics without requiring you to
write the SQL DELETE statements.
You can also delete rows that meet the age and date criteria by specifying the
corresponding keywords (AGE and DATE) for a particular object.
You can restart a MODIFY STATISTICS utility job, but it starts from the beginning
again. For guidance in restarting online utilities, see “Restarting an online utility” on
page 42.
Table 51 shows the restrictive state that the utility sets on the target object.
Table 51. Claim classes of MODIFY STATISTICS operations.
Target MODIFY STATISTICS
Table space, index, or index space UTRW
Legend:
v UTRW: Utility restrictive state, read-write access allowed.
Example 2: Deleting access path records for all objects in a list. The MODIFY
STATISTICS control statement in Figure 58 specifies that the utility is to delete
access-path statistics history rows that were created before 17 April 2000 for
objects in the specified list. The list, M1, is defined in the preceding LISTDEF
control statement and includes table spaces DB0E1501.TL0E1501 and
DSN8D81A.DSN8S81E. For more information about LISTDEF control statements,
see Chapter 15, “LISTDEF,” on page 163.
Figure 58. MODIFY STATISTICS control statement that specifies that access path history
records are to be deleted
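A sketch of such statements, using the list name, objects, and date from the description (the form
of the LISTDEF INCLUDE clauses is an assumption):
LISTDEF M1 INCLUDE TABLESPACE DB0E1501.TL0E1501
           INCLUDE TABLESPACE DSN8D81A.DSN8S81E
MODIFY STATISTICS LIST M1
  DELETE ACCESSPATH DATE(20000417)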
Figure 59. MODIFY STATISTICS control statement that specifies that space-tuning statistics
records are to be deleted
Example 4: Deleting all statistics history records for an index space. The
control statement in Figure 60 on page 301 specifies that MODIFY STATISTICS is
to delete all statistics history records for index space DBOE1501.IUOE1501. Note
that the deleted records are not limited by date because (*) is specified.
Figure 60. MODIFY STATISTICS control statement that specifies that all statistics history
records are to be deleted
See “Syntax and options of the OPTIONS control statement” for details.
Output: The OPTIONS control statement sets the specified processing options for
the duration of the job step, or until replaced by another OPTIONS control
statement within the same job step.
Syntax diagram
OPTIONS
    PREVIEW or OFF
    LISTDEFDD ddname
    TEMPLATEDD ddname
    event-spec
    KEY key-value

event-spec:
EVENT ( ITEMERROR,HALT or ITEMERROR,SKIP , WARNING,RC0 or WARNING,RC4 or WARNING,RC8 )
(The defaults are ITEMERROR,HALT and WARNING,RC4; OPTIONS OFF restores the default options.)
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
PREVIEW Specifies that the utility control statements that follow are to run in
PREVIEW mode. The utility checks for syntax errors in all utility
control statements, but normal utility execution does not take place.
If the syntax is valid, the utility expands all LISTDEF lists and
TEMPLATE DSNs that appear in SYSIN and prints results to the
SYSPRINT data set.
PREVIEW evaluates and expands all LISTDEF statements into an
actual list of table spaces or index spaces. It evaluates TEMPLATE
DSNs and uses variable substitution for actual data set names
when possible. It also expands lists from the SYSLISTD DD and
TEMPLATE DSNs from the SYSTEMPL DD that a utility invocation
references.
A definitive preview of TEMPLATE DSN values is not always
possible. Substitution values for some variables, such as &DATE.,
&TIME., &SEQ. and &PART., can change at execution time. In
some cases, PREVIEW generates approximate data set names.
The OPTIONS utility substitutes unknown character variables with
the character string 'UNKNOWN' and unknown integer variables
with zeroes.
Instead of OPTIONS PREVIEW, you can use a JCL PARM to
activate preview processing. Although the two functions are
identical, use JCL PARM to preview an existing set of utility control
statements. Use the OPTIONS PREVIEW control statement when
you invoke DB2 utilities through a stored procedure.
The JCL PARM is specified as the third JCL PARM of DSNUTILB
and on the UTPROC variable of DSNUPROC, as shown in the
following JCL:
//STEP1 EXEC DSNUPROC,UID=’JULTU106.RECOVE1’,
// UTPROC=’PREVIEW’,SYSTEM=’SSTR’
You can restart an OPTIONS utility job, but it starts from the beginning again. If you
are restarting this utility as part of a larger job in which OPTIONS completed
successfully, but a later utility failed, do not change the OPTIONS utility control
statement, if possible. If you must change the OPTIONS utility control statement,
use caution; any changes can cause the restart processing to fail. For example, if
you specify a valid OPTIONS statement in the initial invocation, and then on restart,
specify OPTIONS PREVIEW, the job fails. For guidance in restarting online utilities,
see “Restarting an online utility” on page 42.
OPTIONS PREVIEW
TEMPLATE COPYLOC UNIT(SYSDA)
DSN(&DB..&TS..D&JDATE..&STEPNAME..COPY&IC.&LOCREM.&PB.)
DISP(NEW,CATLG,CATLG) SPACE(200,20) TRK
VOLUMES(SCR03)
TEMPLATE COPYREM UNIT(SYSDA)
DSN(&DB..&TS..&UT..T&TIME..COPY&IC.&LOCREM.&PB.)
DISP(NEW,CATLG,CATLG) SPACE(100,10) TRK
LISTDEF CPYLIST INCLUDE TABLESPACES DATABASE DBLT0701
COPY LIST CPYLIST FULL YES
COPYDDN(COPYLOC,COPYLOC)
RECOVERYDDN(COPYREM,COPYREM)
SHRLEVEL REFERENCE
Figure 61. Example OPTIONS statement for checking syntax and previewing lists and
templates.
The first OPTIONS statement specifies that the LISTDEF definition library is
identified by the V1LIST DD statement and the TEMPLATE definition library is
identified by the V1TEMPL DD statement. These definition libraries apply to the
subsequent COPY utility control statement. Therefore, if DB2 does not find the
PAYTBSP list in SYSIN, it searches the V1LIST library, and if DB2 does not find the
PAYTEMP1 template in SYSIN, it searches the V1TEMPL library.
The second OPTIONS statement is similar to the first, but it identifies different
libraries and applies to the second COPY control statement. This second COPY
control statement looks similar to the first COPY job. However, this statement
processes a different list and uses a different template. Whereas the first COPY job
uses the PAYTBSP list from the V1LIST library, the second COPY job uses the
PAYTBSP list from the V2LIST library. Also, the first COPY job uses the PAYTEMP1
template from the V1TEMPL library, the second COPY job uses the PAYTEMP1
template from the V2TEMPL library.
OPTIONS LISTDEFDD V1LIST TEMPLATEDD V1TEMPL
COPY LIST PAYTBSP COPYDDN(PAYTEMP1,PAYTEMP1)
OPTIONS LISTDEFDD V2LIST TEMPLATEDD V2TEMPL
COPY LIST PAYTBSP COPYDDN(PAYTEMP1,PAYTEMP1)
Example 3: Forcing a return code 0. In the following example, the first OPTIONS
control statement forces a return code of 0 for the subsequent MODIFY
RECOVERY utility control statement. Ordinarily, this statement ends with a return
code of 4 because it specifies that DB2 is to delete all SYSCOPY records for table
space A.B. The second OPTIONS control statement restores the default options, so
that no return codes will be overridden for the second MODIFY RECOVERY control
statement.
OPTIONS EVENT(WARNING,RC0)
MODIFY RECOVERY TABLESPACE A.B DELETE AGE(*)
OPTIONS OFF
MODIFY RECOVERY TABLESPACE C.D DELETE AGE(30)
The second OPTIONS control statement specifies how DB2 is to handle return
codes of 8 in any subsequent utility statements that process a valid list. If
processing of a list item produces return code 8, DB2 skips that item, and continues
to process the rest of the items in the list, but DB2 does not process the next utility
control statement. Instead, the job ends with return code 8.
Figure 62. Example OPTIONS statements for checking syntax and skipping errors
Output: With the WRITE(YES) option, QUIESCE writes changed pages for the
table spaces and their indexes from the DB2 buffer pool to disk. The catalog table
SYSCOPY records the current RBA and the timestamp of the quiesce point. A row
with ICTYPE=’Q’ is inserted into SYSIBM.SYSCOPY for each table space that is
quiesced. DB2 also inserts a SYSCOPY row with ICTYPE=’Q’ for any indexes
(defined with the COPY YES attribute) over a table space that is being quiesced.
(Table spaces DSNDB06.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSUTILX
are an exception; their information is written to the log.)
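For example, after a quiesce you can verify the recorded quiesce points with a catalog query
similar to this sketch (the database name is illustrative):
SELECT DBNAME, TSNAME, ICDATE, START_RBA
  FROM SYSIBM.SYSCOPY
  WHERE ICTYPE = 'Q'
    AND DBNAME = 'DSN8D81A';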
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v IMAGCOPY privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute QUIESCE, but only on a
table space in the DSNDB01 or DSNDB06 database.
You can specify DSNDB01.SYSUTILX, but you cannot include it in a list with other
table spaces to be quiesced. Recovering the catalog and directory table spaces to
the current point in time is preferred and recommended. However, if a point-in-time recovery of the
catalog/directory table spaces is desired, a separate quiesce of
DSNDB06.SYSCOPY is required after a quiesce of the other catalog/directory table
spaces.
Syntax diagram
QUIESCE TABLESPACE database-name.table-space-name PART integer
QUIESCE TABLESPACESET TABLESPACE database-name.table-space-name
    WRITE YES or WRITE NO
(The database-name qualifier, the PART clause, the TABLESPACE keyword after TABLESPACESET, and
the WRITE clause are optional. The default is WRITE YES.)
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name that contains
only table spaces. The utility allows one LIST keyword for each QUIESCE
control statement. Do not specify LIST with the TABLESPACE or
TABLESPACESET keyword. QUIESCE is invoked once for the entire list.
For the QUIESCE utility, the related index spaces are considered to be list
items for the purposes of OPTIONS ITEMERROR processing. You can alter
the utility behavior during processing of related indexes with the OPTIONS
ITEMERROR statement. For more information about LISTDEF
specifications, see Chapter 15, “LISTDEF,” on page 163.
TABLESPACE database-name.table-space-name
For QUIESCE TABLESPACE, specifies the table space that is to be
quiesced.
For QUIESCE TABLESPACESET, specifies a table space in the table space
set that is to be quiesced. For QUIESCE TABLESPACESET, the
TABLESPACE keyword is optional.
database-name
Optionally specifies the name of the database to which the table space
belongs. The default is DSNDB04.
table-space-name
Specifies the name of the table space that is to be quiesced. You can
specify DSNDB01.SYSUTILX, but do not include that name in a list with
other table spaces that are to be quiesced. If a point-in-time recovery is
planned for the catalog and directory, DSNDB06.SYSCOPY must be
quiesced separately after all other catalog and directory table spaces.
PART integer
Identifies a partition that is to be quiesced.
| integer is the number of the partition and must be in the range from 1 to the
| number of partitions that are defined for the table space. The maximum is
| 4096.
TABLESPACESET
Indicates that all of the referentially related table spaces in the table space
set are to be quiesced. For the purposes of the QUIESCE utility, a table
space set is one of these:
v A group of table spaces that have a referential relationship
v A base table space with all of its LOB table spaces
WRITE
Specifies whether the changed pages from the table spaces and index
spaces are to be written to disk.
YES
Establishes a quiesce point and writes the changed pages from the
table spaces and index spaces to disk. The default is YES.
NO
Establishes a quiesce point but does not write the changed pages from
the table spaces and index spaces to disk.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Object that is to be quiesced. (If you want to quiesce only one partition of a
table space, you must use the PART option in the control statement.)
If you use QUIESCE TABLESPACE instead and do not include every member, you
might encounter problems when you run RECOVER on the table spaces in the table space set.
You should QUIESCE and RECOVER the LOB table spaces to the same point in
time as the associated base table space. A group of table spaces that have a
referential relationship should all be quiesced to the same point in time.
When you use QUIESCE WRITE YES on a table space, the utility inserts a
SYSCOPY row that specifies ICTYPE=’Q’ for each related index that is defined with
COPY=YES in order to record the quiesce point.
Figure 63. Termination messages when you run QUIESCE on a table space with pending
restrictions
When you run QUIESCE on a table space or index space that is in COPY-pending,
CHECK-pending, or RECOVER-pending status, you might also receive one or more
of the messages that are shown in Figure 64 on page 316.
If any of the preceding conditions is true, QUIESCE terminates with a return code of
4 and issues a DSNU473I warning message.
You can restart a QUIESCE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 42.
Table 53 shows which claim classes QUIESCE drains and any restrictive state that
the utility sets on the target object.
Table 53. Claim classes of QUIESCE operations.
Target                                              WRITE YES   WRITE NO
Table space or partition                            DW/UTRO     DW/UTRO
Partitioning index, data-partitioned secondary      DW/UTRO
  index, or partition
Nonpartitioned secondary index                      DW/UTRO
Legend:
v DW - Drain the write claim class - concurrent access for SQL readers
v UTRO - Utility restrictive state - read-only access allowed
Table 54 on page 317 shows which utilities can run concurrently with QUIESCE on
the same target object. The target object can be a table space, an index space, or a
partition of a table space or index space.
QUIESCE on SYSUTILX is an exclusive job; such a job can interrupt another job
between job steps, possibly causing the interrupted job to time out.
Figure 65 shows the output that the preceding command produces.
Figure 65. Example output from a QUIESCE job that establishes a quiesce point for three
table spaces
Figure 66 shows the output that the preceding command produces.
Figure 66. Example output from a QUIESCE job that establishes a quiesce point for a list of
objects
Example 3: Establishing a quiesce point for a table space set. The following
control statement specifies that QUIESCE is to establish a quiesce point for the
indicated table space set. In this example, the table space set includes table space
DSN8D81A.DSN8S81D and all table spaces that are referentially related to it. Run
REPORT TABLESPACESET to obtain a list of table spaces that are referentially
related. For more information about this option, see Chapter 27, “REPORT,” on
page 509.
QUIESCE TABLESPACESET TABLESPACE DSN8D81A.DSN8S81D
Figure 67 shows the output that the preceding command produces.
Figure 67. Example output from a QUIESCE job that establishes a quiesce point for a table
space set
The preceding command produces the output that is shown in Figure 68. Notice
that the COPY YES index EMPNOI is placed in informational COPY-pending
(ICOPY) status:
Figure 68. Example output from a QUIESCE job that establishes a quiesce point, without
writing the changed pages to disk.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
To run REBUILD INDEX STATISTICS REPORT YES, you must use a privilege set
| that includes the SELECT privilege on the catalog tables and the tables for which
| statistics are to be gathered.
Syntax diagram
The REBUILD syntax diagram is not reproduced here in railroad form. In summary, the
utility statement begins with one of the following specifications:
v REBUILD INDEX (creator-id.index-name PART integer, ...), where PART is optional
v REBUILD INDEX (ALL) table-space-spec
v REBUILD INDEXSPACE (database-name.index-space-name PART integer, ...), where
  database-name and PART are optional
v REBUILD INDEXSPACE (ALL) table-space-spec
v REBUILD INDEX LIST listdef-name
The specification can be followed by SCOPE ALL (the default) or SCOPE PENDING,
REUSE, SORTDEVT device-type, SORTNUM integer, and stats-spec.
table-space-spec:
TABLESPACE database-name.table-space-name PART integer, where database-name and
PART are optional.
stats-spec:
The statistics keywords, including HISTORY (ALL, ACCESSPATH, SPACE, or NONE) and
FORCEROLLUP (YES or NO); see the option descriptions that follow for the complete
set of statistics keywords and their defaults.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
INDEX creator-id.index-name
Indicates the qualified name of the index to be rebuilt. Use the form
creator-id.index-name to specify the name.
creator-id
Specifies the creator of the index. This qualifier is optional. If you omit the
qualifier creator-id, DB2 uses the user identifier for the utility job.
index-name
Specifies the qualified name of the index that is to be rebuilt. For an index,
you can specify either an index name or an index space name. Enclose the
index name in quotation marks if the name contains a blank.
To rebuild multiple indexes, separate each index name with a comma. All listed
indexes must reside in the same table space. If more than one index is listed
and the TABLESPACE keyword is not specified, DB2 locates the first valid
index name that is cited and determines the table space in which that index
resides. That table space is used as the target table space for all other valid
index names that are listed.
| INDEXSPACE database-name.index-space-name
| Specifies the qualified name of the index space that is obtained from the
| SYSIBM.SYSINDEXES table.
| database-name
| Specifies the name of the database that is associated with the index. This
| qualifier is optional.
| index-space-name
| Specifies the qualified name of the index space that is to be rebuilt. For an
| index, you can specify either an index name or an index space name.
| If you specify more than one index space, they must all be defined on the same
| table space.
(ALL)
Specifies that all indexes in the table space that is referred to by the
TABLESPACE keyword are to be rebuilt.
TABLESPACE database-name.table-space-name
Specifies the table space from which all indexes are to be rebuilt.
database-name
Identifies the database to which the table space belongs. The default is
DSNDB04.
table-space-name
Identifies the table space from which all indexes are to be rebuilt.
PART integer
| Specifies the physical partition of a partitioning index or a data-partitioned
| secondary index in a partitioned table that is to be rebuilt. When the target of
| the REBUILD operation is a nonpartitioned secondary index, the utility
| reconstructs logical partitions.
integer is the number of the partition and must be in the range from 1 to the
| number of partitions that are defined for the table space. The maximum is 4096.
You cannot specify PART with the LIST keyword. Use LISTDEF PARTLEVEL
instead.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The utility allows
one LIST keyword for each REBUILD INDEX control statement. The list must
contain either all index spaces or all table spaces. For a table space list,
REBUILD is invoked once per table space. For an index space list, DB2 groups
indexes by their related table space and executes the rebuild once per table
space. For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
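As an illustration only, the following sketch pairs a LISTDEF list with REBUILD INDEX;
the list name RBLIST and the use of the sample database DSN8D81A are assumptions:
LISTDEF RBLIST INCLUDE INDEXSPACES DATABASE DSN8D81A
REBUILD INDEX LIST RBLIST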
| SCOPE
| Indicates the scope of the rebuild of the specified index or indexes.
| ALL
| Indicates that you want the specified index or indexes to be rebuilt. The
| default is ALL.
| PENDING
| Indicates that you want the specified index or indexes with one or more
| partitions in REBUILD-pending (RBDP), REBUILD-pending star (RBDP*),
| page set REBUILD-pending (PSRBD), RECOVER-pending (RECP), or
| advisory REORG-pending (AREO*) state to be rebuilt.
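For example, the following sketch rebuilds only those indexes on the sample table space
that are in one of these pending states:
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E SCOPE PENDING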
REUSE
Specifies that REBUILD should logically reset and reuse DB2-managed data
sets without deleting and redefining them. If you do not specify REUSE, DB2
deletes and redefines DB2-managed data sets to reset them.
If you are rebuilding the index because of a media failure, do not specify
REUSE.
If a data set has multiple extents, the extents are not released if you use the
REUSE parameter.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be dynamically
allocated by DFSORT. For device-type, you can specify any device that is valid
on the DYNALLOC parameter of the SORT or OPTION options for DFSORT.
For more information about these options, see DFSORT Application
Programming: Guide.
device-type is the device type.
A TEMPLATE specification does not dynamically allocate sort work data sets.
The SORTDEVT keyword controls dynamic allocation of these data sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be dynamically
allocated by the sort program. If you omit SORTDEVT, SORTNUM is ignored. If
you use SORTDEVT and omit SORTNUM, no value is passed to DFSORT;
DFSORT uses its own default.
integer is the number of temporary data sets.
STATISTICS
Specifies that index statistics are to be collected.
If you specify the STATISTICS and UPDATE options, statistics are stored in the
DB2 catalog. You cannot collect inline statistics for indexes on the catalog and
directory tables.
| Restriction: If you specify STATISTICS for encrypted data, DB2 might not
| provide useful statistics on this data.
REPORT
Indicates whether a set of messages to report the collected statistics is to be
generated.
NO
Indicates that the set of messages is not to be sent as output to
SYSPRINT. The default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT.
The generated messages are dependent on the combination of keywords
(such as TABLESPACE, INDEX, TABLE, and COLUMN) that you specify
with the RUNSTATS utility. However, these messages are not dependent on
the specification of the UPDATE option. REPORT YES always generates a
report of SPACE and ACCESSPATH statistics.
KEYCARD
Specifies that all of the distinct values in all of the 1 to n key column
combinations for the specified indexes are to be collected. n is the number of
columns in the index.
FREQVAL
Controls the collection of frequent-value statistics. If you specify FREQVAL, it
must be followed by two additional keywords:
NUMCOLS
Indicates the number of key columns that are to be concatenated when
collecting frequent values from the specified index. If you specify 3, the
utility collects frequent values on the concatenation of the first three key
columns. The default is 1, which means that DB2 is to collect frequent
values only on the first key column of the index.
COUNT
Indicates the number of frequent values that are to be collected. If you
specify 15, the utility collects 15 frequent values from the specified key
columns. The default is 10.
| UPDATE
| Indicates whether the collected statistics are to be inserted into the catalog
| tables. UPDATE also allows you to select statistics that are used for access
| path selection or statistics that are used by database administrators.
| ALL Indicates that all collected statistics are to be updated in the catalog.
| The default is ALL.
| ACCESSPATH
| Indicates that the only catalog table columns that are to be updated are
| those that provide statistics that are used for access path selection.
| SPACE
| Indicates that the only catalog table columns that are to be updated are
| those that provide statistics to help the database administrator assess
| the status of a particular table space or index.
| NONE Indicates that catalog tables are not to be updated with the collected
| statistics. This option is valid only when REPORT YES is specified.
| HISTORY
Records all catalog table inserts or updates to the catalog history tables.
The default is supplied by the value that is specified in STATISTICS HISTORY
on panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that the only catalog history table columns that are to be
updated are those that provide statistics that are used for access path
selection.
SPACE
Indicates that only space-related catalog statistics are to be updated in
catalog history tables.
NONE Indicates that catalog history tables are not to be updated with the
| collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when you
execute RUNSTATS even if some indexes or index partitions are empty. This
keyword enables the optimizer to select the best access path.
The following options are available for the FORCEROLLUP keyword:
YES Indicates that forced aggregation or rollup processing is to be done,
even though some indexes or index partitions might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all indexes or index partitions.
If data is not available, the utility issues message DSNU623I if you have set the
installation value for STATISTICS ROLLUP on panel DSNTIPO to NO.
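The following statement is a sketch that combines these statistics keywords; it rebuilds
all indexes on the sample table space, collects frequent values on the concatenation of
the first two key columns, reports the statistics, and updates only the access path
statistics in the catalog. The NUMCOLS and COUNT values are assumptions chosen for
illustration:
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E
  STATISTICS KEYCARD FREQVAL NUMCOLS 2 COUNT 10
  REPORT YES UPDATE ACCESSPATH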
If you recover a table space to a prior point in time and do not recover all the
indexes to the same point in time, you must rebuild all of the indexes.
Some logging might occur if both of the following conditions are true:
v The index is a nonpartitioning index.
v The index is being concurrently accessed either by SQL on a different partition of
the same table space or by a utility that is run on a different partition of the same
table space.
| Notes:
| 1. Required when collecting inline statistics on at least one data-partitioned secondary
| index.
| 2. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
| data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Object whose indexes are to be rebuilt.
| Calculating the size of the work data sets: To calculate the approximate size (in
| bytes) of the SORTWKnn data set, use the following formula:
Using two or three large SORTWKnn data sets is preferable to using several small
ones.
| Calculating the size of the sort work data sets: To calculate the approximate
| size (in bytes) of the ST01WKnn data set, use the following formula:
| numcols
| Number of key columns to concatenate when you collect frequent values
| from the specified index.
| count
| Number of frequent values that DB2 is to collect.
When you run the REBUILD INDEX utility concurrently on separate partitions of a
| partitioned index (either partitioning or secondary), the sum of the processor time is
| approximately the time for a single REBUILD INDEX job to run against the entire
index. For partitioning indexes, the elapsed time for running concurrent REBUILD
INDEX jobs is a fraction of the elapsed time for running a single REBUILD INDEX
job against an entire index.
The subtasks that are used for the parallel REBUILD INDEX processing use DB2
connections. If you receive message DSNU397I that indicates that the REBUILD
INDEX utility is constrained, increase the number of concurrent connections by
using the MAX BATCH CONNECT parameter on panel DSNTIPE.
| Figure 69 shows the flow of a REBUILD INDEX job with a parallel index build. The
| same flow applies whether you rebuild a data-partitioned secondary index or a
| partitioning index. DB2 starts multiple subtasks to unload the entire partitioned table
| space. Subtasks then sort index keys and build the partitioning index in parallel. If
you specify STATISTICS, additional subtasks collect the sorted keys and update the
catalog table in parallel, eliminating the need for a second scan of the index by a
separate RUNSTATS job.
Figure 69. How a partitioning index is rebuilt during a parallel index build
Figure 70 on page 331 shows the flow of a REBUILD INDEX job with a parallel
index build. DB2 starts multiple subtasks to unload all partitions of a partitioned
table space and to sort index keys in parallel. The keys are then merged and
| passed to the build subtask, which builds the nonpartitioned secondary index. If you
specify STATISTICS, a separate subtask collects the sorted keys and updates the
catalog table.
Figure 70. How a nonpartitioned secondary index is rebuilt during a parallel index build
Sort work data sets for parallel index build: You can either allow the utility to
dynamically allocate the data sets that SORT needs, or provide the necessary data
sets yourself. Select one of the following methods to allocate sort work data sets
and message data sets:
Method 1: REBUILD INDEX determines the optimal number of sort work data sets
and message data sets.
| 1. Specify the SORTDEVT keyword in the utility statement.
2. Allow dynamic allocation of sort work data sets by not supplying SORTWKnn
DD statements in the REBUILD INDEX utility JCL.
3. Allocate UTPRINT to SYSOUT.
Method 2: You control allocation of sort work data sets, and REBUILD INDEX
allocates message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: You have the most control over rebuild processing; you must specify
both sort work data sets and message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
Data sets that are used: If you select Method 2 or 3, define the necessary data
sets by using the information provided here and in the following topics:
v “Determining the number of sort subtasks” on page 332
v “Allocation of sort subtasks” on page 332
v “Estimating the sort work file size” on page 333
Each sort subtask must have its own group of sort work data sets and its own print
| message data set. In addition, you need to allocate the merge message data set
| when you build a single nonpartitioned secondary index on a partitioned table
space.
Possible reasons to allocate data sets in the utility job JCL rather than using
dynamic allocation are to:
v Control the size and placement of the data sets
v Minimize device contention
v Optimally utilize free disk space
v Limit the number of utility subtasks that are used to build indexes
The DD names SWnnWKmm define the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more data
sets that are to be used by that subtask pair. For example:
SW01WK01 Is the first sort work data set that is used by the subtask that builds
the first index.
SW01WK02 Is the second sort work data set that is used by the subtask that
builds the first index.
SW02WK01 Is the first sort work data set that is used by the subtask that builds
the second index.
SW02WK02 Is the second sort work data set that is used by the subtask that
builds the second index.
The DD names UTPRINnn define the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
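A minimal JCL sketch of Method 3 allocations for one subtask pair follows; the unit name
and space quantities are assumptions, not requirements:
//* Sort work data sets for subtask pair 01
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//SW01WK03 DD UNIT=SYSDA,SPACE=(CYL,(50,10))
//* Sort message data set for subtask pair 01
//UTPRIN01 DD SYSOUT=*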
If you allocate the UTPRINT DD statement to SYSOUT in the job statement, the
sort message data sets and the merge message data set, if required, are
dynamically allocated. If you want the sort message data sets, merge message data
sets, or both, allocated to a disk or tape data set rather than to SYSOUT, you must
supply the UTPRINnn or the UTMERG01 DD statements (or both) in the utility JCL.
If you do not allocate the UTPRINT DD statement to SYSOUT, and you do not
supply a UTMERG01 DD statement in the job statement, partitions are not
unloaded in parallel.
If the number of indexes that are to be built exceeds the number of subtask pairs,
then after the utility starts enough subtasks to build one index per subtask, it
allocates any excess indexes across the pairs (in the order that the indexes were
created), so that one or more subtasks might build more than one index.
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys that are present in all of the indexes or
| index partitions that are being processed by the subtask in order to calculate each
| sort work file size. When you determine which indexes or index partitions are
| assigned to which subtask pairs, use the formula listed in “Data sets that REBUILD
| INDEX uses” on page 327 to calculate the required space.
| Overriding dynamic DFSORT allocation: DB2 estimates how many rows are to
| be sorted and passes this information to DFSORT on the parameter FILSZ.
| DFSORT then dynamically allocates the necessary sort work space.
| If the table space contains rows with VARCHAR columns, DB2 might not be able to
| accurately estimate the number of rows. If the estimated number of rows is too high
| and the sort work space is not available or if the estimated number of rows is too
| low, DFSORT might fail and cause an abend. Important: Run RUNSTATS UPDATE
| SPACE before the REBUILD INDEX utility so that DB2 calculates a more accurate
| estimate.
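A sketch of such a RUNSTATS step, using the sample table space as an assumed target, is:
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E UPDATE SPACE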
| You can override this dynamic allocation of sort work space in two ways:
| v Allocate the sort work data sets with SORTWKnn DD statements in your JCL.
| v Override the DB2 row estimate in FILSZ using control statements that are
| passed to DFSORT. However, using control statements overrides size estimates
| that are passed to DFSORT in all invocations of DFSORT in the job step,
| including any sorts that are done in any other utility that is executed in the same
| step. The result might be reduced sort efficiency or an abend due to an
| out-of-space condition.
You can reset the REBUILD-pending status for an index with any of these
operations:
v REBUILD INDEX
v REORG TABLESPACE SORTDATA
v REPAIR SET INDEX with NORBDPEND
v START DATABASE command with ACCESS FORCE
Important: Use the START DATABASE command with ACCESS FORCE only as a
means of last resort.
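For example, either of the following statements is a sketch of resetting the
REBUILD-pending status for a single index; the index name DSN8810.XEMP1 is taken from
the sample jobs in this chapter, and the exact REPAIR syntax shown is an assumption:
REBUILD INDEX (DSN8810.XEMP1)
REPAIR SET INDEX (DSN8810.XEMP1) NORBDPEND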
You must either make these table spaces available, or run the RECOVER
TABLESPACE utility on the catalog or directory, using an authorization ID with the
installation SYSADM or installation SYSOPR authority.
| Recommendation: Make a full image copy of the index to create a recovery point;
| this action also resets the ICOPY status.
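A sketch of such a copy statement, with the index name assumed from the sample jobs in
this chapter, is:
COPY INDEX DSN8810.XEMP1 SHRLEVEL REFERENCE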
If you restart a job that uses the STATISTICS keyword, inline statistics collection
does not occur. To update catalog statistics, run the RUNSTATS utility after the
restarted REBUILD INDEX job completes.
For more guidance about restarting online utilities, see “Restarting an online utility”
on page 42.
Table 56 on page 335 shows which claim classes REBUILD INDEX drains and any
restrictive state that the utility sets on the target object.
Table 57 shows which utilities can run concurrently with REBUILD INDEX on the
same target object. The target object can be an index space or a partition of an
index space. If compatibility depends on particular options of a utility, that
information is also shown. REBUILD INDEX does not set a utility restrictive state if
the target object is DSNDB01.SYSUTILX.
Table 57. Compatibility of REBUILD INDEX with other utilities
Action REBUILD INDEX
CHECK DATA No
CHECK INDEX No
CHECK LOB Yes
COPY INDEX No
COPY TABLESPACE SHRLEVEL CHANGE No
COPY TABLESPACE SHRLEVEL REFERENCE Yes
DIAGNOSE Yes
LOAD No
MERGECOPY Yes
MODIFY Yes
QUIESCE No
REBUILD INDEX No
RECOVER INDEX No
RECOVER TABLESPACE No
REORG INDEX No
REORG TABLESPACE UNLOAD CONTINUE or PAUSE No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL with cluster index No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL without cluster index Yes
REPAIR LOCATE by KEY No
REPAIR LOCATE by RID DELETE or REPLACE No
REPAIR LOCATE by RID DUMP or VERIFY Yes
| When you run REBUILD INDEX, the utility updates this range of used version
| numbers for indexes that are defined with the COPY NO attribute. REBUILD INDEX
| sets the OLDEST_VERSION column to the current version number, which indicates
| that only one version is active; DB2 can then reuse all of the other version
| numbers.
| Recycling of version numbers is required when all of the version numbers are being
| used. All version numbers are being used when one of the following situations is
| true:
| v The value in the CURRENT_VERSION column is one less than the value in the
| OLDEST_VERSION column.
| v The value in the CURRENT_VERSION column is 15, and the value in the
| OLDEST_VERSION column is 0 or 1.
| You can also run LOAD REPLACE, REORG INDEX, or REORG TABLESPACE to
| recycle version numbers for indexes that are defined with the COPY NO attribute.
| To recycle version numbers for indexes that are defined with the COPY YES
| attribute or for table spaces, run MODIFY RECOVERY.
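For example, the following sketch runs MODIFY RECOVERY against the sample table space;
the AGE value is an assumption:
MODIFY RECOVERY TABLESPACE DSN8D81A.DSN8S81E DELETE AGE(90)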
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
If sufficient virtual storage resources are available, DB2 starts one pair of utility sort
subtasks for each partition. This example does not require UTPRINnn DD
statements because it uses DSNUPROC to invoke utility processing. DSNUPROC
includes a DD statement that allocates UTPRINT to SYSOUT.
//SAMPJOB JOB ...
//STEP1 EXEC DSNUPROC,UID=’SAMPJOB.RBINDEX’,UTPROC=’’,SYSTEM=’DSN’
//SYSIN DD *
REBUILD INDEX (DSN8810.XEMP1 PART 2, DSN8810.XEMP1 PART 3)
SORTDEVT SYSWK
SORTNUM 4
/*
If sufficient virtual storage resources are available, DB2 starts one utility sort
subtask to build the partitioning index and another utility sort subtask to build the
nonpartitioning index. This example does not require UTPRINnn DD statements
because it uses DSNUPROC to invoke utility processing. DSNUPROC includes a
DD statement that allocates UTPRINT to SYSOUT.
//SAMPJOB JOB ...
//STEP1 EXEC DSNUPROC,UID=’SAMPJOB.RCVINDEX’,UTPROC=’’,SYSTEM=’DSN’
//SYSIN DD *
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E
SORTDEVT SYSWK
SORTNUM 4
/*
The largest unit of data recovery is the table space or index space; the smallest is
the page. You can recover a single object, or a list of objects. The RECOVER utility
recovers an entire table space, index space, a partition or data set, pages within an
error range, or a single page. You recover data from image copies of an object and
from log records that contain changes to the object. If the most recent full image
copy data set is unusable, and previous image copy data sets exist in the system,
RECOVER uses the previous image copy data sets.
Output: Output from RECOVER consists of recovered data (a table space, index,
partition or data set, error range, or page within a table space).
If you use the RECOVER utility to partially recover a referentially related table
space set or a base table space and LOB table space set, you must ensure that
you recover the entire set of table spaces. This task includes rebuilding or
recovering all indexes (including indexes on auxiliary tables for a base table space
and LOB table space set) to a common quiesce point or to a SHRLEVEL
REFERENCE copy. If you do not include every member of the set, or if you do not
recover the entire set to the same point in time, RECOVER sets the
CHECK-pending status on for all dependent table spaces, base table spaces, or
LOB table spaces in the set.
Recommendation: If you use the RECOVER utility to partially recover data and all
indexes on the data, recover these objects to a common quiesce point or to a
SHRLEVEL REFERENCE copy. Otherwise, RECOVER places all indexes in the
CHECK-pending status.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute RECOVER, but only on
a table space in the DSNDB01 or DSNDB06 database.
Syntax diagram
The RECOVER syntax diagram is not reproduced here in railroad form. In summary, the
statement names one or more objects. Each object specification can include DSNUM ALL
(the default) or DSNUM integer (1) and can be followed either by a recover-options-spec
or by PAGE page-number with optional CONTINUE. The statement can also include
LOCALSITE or RECOVERYSITE, LOGRANGES YES (the default) or LOGRANGES NO (2), and the
keywords that are shown under list-options-spec.
Notes:
1 Not valid for nonpartitioning indexes.
2 Use the LOGRANGES NO option only at the direction of IBM Software Support. This option can
cause the LOGAPPLY phase to run much longer and, in some cases, apply log records that
should not be applied.
object:
TABLESPACE database-name.table-space-name, INDEXSPACE database-name.index-space-name,
or INDEX creator-id.index-name; the database-name and creator-id qualifiers are
optional.
list-options-spec:
TORBA X’byte-string’ or TOLOGPOINT X’byte-string’, REUSE, CURRENTCOPYONLY, PARALLEL
(num-objects), TAPEUNITS (num-tape-units), and LOGONLY.
recover-options-spec:
TOCOPY data-set-name, optionally followed by TOVOLUME CATALOG, TOVOLUME vol-ser, or
TOSEQNO integer; TOLASTCOPY; or TOLASTFULLCOPY; each optionally followed by REUSE and
CURRENTCOPYONLY. ERROR RANGE can also be specified.
Option descriptions
You can specify a list of objects by repeating the TABLESPACE, INDEX, or
INDEXSPACE keywords. If you use a list of objects, the valid keywords are:
DSNUM, TORBA, TOLOGPOINT, LOGONLY, PARALLEL, and either LOCALSITE or
RECOVERYSITE.
INDEX creator-id.index-name
Specifies the index in the index space that is to be recovered. The RECOVER
utility can recover only indexes that were defined with the COPY YES attribute
and subsequently copied.
creator-id
Optionally specifies the creator of the index. The default is the user
identifier for the utility.
index-name
Specifies the name of the index in the index space that is to be recovered.
Enclose the index name in quotation marks if the name contains a blank.
For a partitioned table space or index space: The integer is its partition
number.
For a nonpartitioned table space: Find the integer at the end of the data set
name. The data set name has the following format:
catname.DSNDBx.dbname.tsname.y0001.Annn
where:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
tsname Is the table space name.
y Is I or J.
nnn Is the data set integer.
PAGE page-number
| Specifies a particular page that is to be recovered. You cannot specify this
| option if you are recovering from a concurrent copy.
page-number is the number of the page, in either decimal or hexadecimal
notation. For example, both 999 and X'3E7' represent the same page. PAGE is
invalid with the LIST specification.
CONTINUE
Specifies that the recovery process is to continue. Use this option only if an
earlier RECOVER operation was interrupted and you need to continue recovering a
page that was damaged during the LOGAPPLY phase.
PARALLEL
| Specifies that the objects in the list are to be processed in parallel during the
| RESTORE phase. The total number of tape drives that are allocated for the
| RECOVER job is the sum of the JCL-allocated tape drives and the number of
| dynamically allocated tape drives, which is determined as follows:
| v The specified value for TAPEUNITS.
| v The value that is determined by the RECOVER utility if you omit the
| TAPEUNITS keyword. The number of tape drives that RECOVER attempts to
| allocate is determined by the object in the list that requires the most tape
| drives.
(num-objects)
Specifies the number of objects in the list that are to be processed in
parallel. If storage constraints are encountered, you can adjust this
value to a smaller value.
| If you specify 0 or do not specify a value for num-objects, RECOVER
| determines the optimal number of objects to process in parallel.
TAPEUNITS
Specifies the number of tape drives that the utility should dynamically allocate
for the list of objects that are to be processed in parallel. If you omit this
keyword, the utility determines the number of tape drives to allocate for the
recovery function.
(num-tape-units)
Specifies the number of tape drives to allocate. If you specify 0 or do not
specify a value for num-tape-units, RECOVER determines the maximum
number of tape units to use at one time.
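As an illustration, the following sketch recovers two of the sample table spaces, restores
up to four objects in parallel, and limits dynamic tape allocation to two drives; the
PARALLEL and TAPEUNITS values are assumptions:
RECOVER TABLESPACE DSN8D81A.DSN8S81D
        TABLESPACE DSN8D81A.DSN8S81E
        PARALLEL(4) TAPEUNITS(2)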
LOGONLY
Specifies that the target objects are to be recovered from their existing data
sets by applying only log records to the data sets. DB2 applies all log records
that were written after a point that is recorded in the data set itself.
To recover an index space by using RECOVER LOGONLY, you must define the
index space with the COPY YES attribute.
Use the LOGONLY option when the data sets of the target objects have already
been restored to a point of consistency by another process offline, such as
DFSMSdss concurrent copy.
TOCOPY data-set-name
Specifies the particular image copy data set that DB2 is to use as a source for
recovery.
data-set-name is the name of the data set.
If the data set is a full image copy, it is the only data set that is used in the
recovery. If it is an incremental image copy, RECOVER also uses the previous
full image copy and any intervening incremental image copies.
If you specify the data set as the local backup copy, DB2 first tries to allocate
the local primary copy. If the local primary copy is unavailable, DB2 uses the
local backup copy.
If you use TOCOPY or TORBA to recover a single data set of a nonpartitioned
table space, DB2 issues message DSNU520I to warn that the table space can
become inconsistent following the RECOVER job. This point-in-time recovery
can cause compressed data to exist without a dictionary or can even overwrite
the data set that contains the current dictionary.
If you use TOCOPY with a particular partition or data set (identified with
DSNUM), the image copy must be for the same partition or data set, or for the
whole table space or index space. If you use TOCOPY with DSNUM ALL, the
image copy must be for DSNUM ALL. You cannot specify TOCOPY with a LIST
specification.
If the image copy data set is a z/OS generation data set, supply a fully qualified
data set name, including the absolute generation and version number.
If the image copy data set is not a generation data set and more than one
image copy data set with the same data set name exists, use one of the
following options to identify the data set exactly:
TOVOLUME
Identifies the image copy data set.
CATALOG
Indicates that the data set is cataloged. Use this option only for an image
copy that was created as a cataloged data set. (Its volume serial is not
recorded in SYSIBM.SYSCOPY.)
RECOVER refers to the SYSIBM.SYSCOPY catalog table during execution.
If you use TOVOLUME CATALOG, the data set must be cataloged. If you
remove the data set from the catalog after creating it, you must catalog the
data set again to make it consistent with the record for this copy that
appears in SYSIBM.SYSCOPY.
vol-ser
Identifies the data set by an alphanumeric volume serial identifier of its first
volume. Use this option only for an image copy that was created as a
noncataloged data set. Specify the first vol-ser in the SYSCOPY record to
locate a data set that is stored on multiple tape volumes.
TOSEQNO integer
Identifies the image copy data set by its file sequence number. integer
is the file sequence number.
TOLASTCOPY
Specifies that RECOVER is to restore the object to the last image copy that
was taken. If the last image copy is a full image copy, it is restored to the
object. If the last image copy is an incremental image copy, the most recent full
copy along with any incremental copies are restored to the object.
TOLASTFULLCOPY
Specifies that the RECOVER utility is to restore the object to the last full image
copy that was taken. Any incremental image copies that were taken after the full
image copy are not restored to the object.
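For illustration only, the following sketches show recovery to a named image copy data
set and to the last full image copy; the image copy data set name is an assumption:
RECOVER TABLESPACE DSN8D81A.DSN8S81E
        TOCOPY DSN8D81A.DSN8S81E.FCOPY01 TOVOLUME CATALOG
RECOVER TABLESPACE DSN8D81A.DSN8S81E TOLASTFULLCOPY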
ERROR RANGE
Specifies that all pages within the range of reported I/O errors are to be
recovered. Recovering an error range is useful when the range is small, relative
to the object that contains it; otherwise, recovering the entire object is preferred.
| You cannot specify this option if you are recovering from a concurrent copy.
In some situations, recovery using the ERROR RANGE option is not possible,
such as when a sufficient quantity of alternate tracks cannot be obtained for all
bad records within the error range. You can use the IBM Device Support
Facility, ICKDSF service utility to determine whether this situation exists. In such
a situation, redefine the error data set at a different location on the volume or
on a different volume, and then run the RECOVER utility without the ERROR
RANGE option.
You cannot specify ERROR RANGE with a LIST specification.
For additional information about the use of this keyword, see Part 4 (Volume 1)
of DB2 Administration Guide.
LOCALSITE
Specifies that RECOVER is to use image copies from the local site. If you
specify neither LOCALSITE nor RECOVERYSITE, RECOVER uses image copies
from the current site of invocation. (The current site is identified on the
installation panel DSNTIPO under SITE TYPE and in the macro DSN6SPRM
under SITETYP.)
RECOVERYSITE
Specifies that RECOVER is to use image copies from the recovery site. If you
specify neither LOCALSITE nor RECOVERYSITE, RECOVER uses image copies
from the current site of invocation. (The current site is identified on the
installation panel DSNTIPO under SITE TYPE and in the macro DSN6SPRM
under SITETYP.)
LOGRANGES YES
Specifies that RECOVER should use SYSLGRNX information for the
LOGAPPLY phase. This option is the default.
LOGRANGES NO
Specifies that RECOVER should not use SYSLGRNX information for the
LOGAPPLY phase. Use this option only under the direction of IBM Software
Support.
This option can cause RECOVER to run much longer. In a data sharing
environment, this option can result in the merging of all logs from all members
that were created since the last image copy.
This option can also cause RECOVER to apply logs that should not be applied.
For example, assume that you take an image copy of a table space and then
run REORG LOG YES on the same table space. Assume also that the REORG
utility abends and you then issue the TERM UTILITY command for the REORG
job. The SYSLGRNX records that are associated with the REORG job are
deleted, so a RECOVER job with the LOGRANGES YES option (the default)
skips the log records from the REORG job. However, if you run RECOVER
LOGRANGES NO, the utility applies these log records.
If you need to recover both the data and the indexes, and no image copies of the
indexes are available, use the following procedure:
1. Use RECOVER TABLESPACE to recover the data.
2. Run REBUILD INDEX on any related indexes to rebuild them from the data.
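A sketch of this two-step sequence, using the sample objects from this chapter, is:
RECOVER TABLESPACE DSN8D81A.DSN8S81E
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E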
If you have image copies of both the table spaces and the indexes, you can recover
both sets of objects in the same RECOVER utility statement. The objects are
recovered from the image copies and logs.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or index space Object that is to be recovered. If you want to
recover less than an entire table space:
v Use the DSNUM option to recover a partition or
data set.
v Use the PAGE option to recover a single page.
v Use the ERROR RANGE option to recover a
range of pages with I/O errors.
Image copy data set Copy that RECOVER is to restore. DB2 accesses
this information through the DB2 catalog. However,
if you want to preallocate your image copy data
sets by using DD statements, refer to “Retaining
tape mounts” on page 368 for more information.
To recover multiple table spaces, create a list of table spaces that are to be
recovered; repeat the TABLESPACE keyword before each specified table space.
The following RECOVER statement specifies that the utility is to recover partition 2
of the partitioned table space DSN8D81A.DSN8S81E, and recover the table space
DSN8D81A.DSN8S81D to the quiesce point (RBA X'000007425468').
RECOVER TABLESPACE DSN8D81A.DSN8S81E DSNUM 2
TABLESPACE DSN8D81A.DSN8S81D
TORBA X’000007425468’
Each table space that is involved is unavailable for most other applications until
recovery is complete. If you make image copies by table space, you can recover
the entire table space, or you can recover a data set or partition from the table
space. If you make image copies separately by partition or data set, you must
recover the partitions or data sets by running separate RECOVER operations. The
following example shows the RECOVER statement for recovering four data sets in
database DSN8D81A, table space DSN8S81E:
RECOVER TABLESPACE DSN8D81A.DSN8S81E DSNUM 1
TABLESPACE DSN8D81A.DSN8S81E DSNUM 2
TABLESPACE DSN8D81A.DSN8S81E DSNUM 3
TABLESPACE DSN8D81A.DSN8S81E DSNUM 4
You can schedule the recovery of these data sets in four separate jobs to run in
parallel. In many cases, the four jobs can read the log data concurrently.
If a table space or data set is in the COPY-pending status, recovering it might not
be possible. You can reset this status in several ways; for more information, see
“Resetting COPY-pending status” on page 255.
| RECOVER does not place dependent table spaces that are related by informational
| referential constraints into CHECK-pending status.
If referential integrity violations are not an issue, you can run a separate job to
recover each table space.
When you specify the PARALLEL keyword, DB2 supports parallelism during the
RESTORE phase and performs recovery as follows:
v During initialization and setup (the UTILINIT recover phase), the utility locates the
full and incremental copy information for each object in the list from
SYSIBM.SYSCOPY.
v The utility sorts the list of objects for recovery into lists to be processed in
parallel according to the number of tape volumes, file sequence numbers, and
sizes of each image copy.
v The number of objects that can be restored in parallel depends on the maximum
number of available tape devices and on how many tape devices the utility
requires for the incremental and full image copy data sets. You can control the
number of objects that are to be processed in parallel on the PARALLEL
keyword. You can control the number of dynamically allocated tape drives on the
TAPEUNITS keyword, which is specified with the PARALLEL keyword.
v If an object in the list requires a DB2 concurrent copy, the utility sorts the object
in its own list and processes the list in the main task, while the objects in the
other sorted lists are restored in parallel. If the concurrent copies that are to be
restored are on tape volumes, the utility uses one tape device and counts it
toward the maximum value that is specified for TAPEUNITS.
If image copies are taken at the data set level, RECOVER must be performed at
the data set level. To recover the whole table space, you must recover all of the
data sets individually.
Alternatively, if image copies are taken at the table space, index, or index space
level, you can recover individual data sets by using the DSNUM parameter.
Even if you do not periodically merge multiple image copies into one copy when
you do not have enough tape units, the utility can still perform the recovery.
RECOVER
dynamically allocates the full image copy and attempts to dynamically allocate all
the incremental image copy data sets. If RECOVER successfully allocates every
incremental copy, recovery proceeds to merge pages to table spaces and apply the
log. If a point is reached where an incremental copy cannot be allocated,
RECOVER notes the log RBA or LRSN of the last successfully allocated data set.
Attempts to allocate incremental copies cease, and the merge proceeds using only
the allocated data sets. The log is applied from the noted RBA or LRSN, and the
incremental image copies that were not allocated are ignored.
Recovering a page
Using RECOVER PAGE enables you to recover data on a page that is damaged. In
some situations, you can determine (usually from an error message) which page of
an object has been damaged. You can use the PAGE option to recover a single
page. You can use the CONTINUE option to continue recovering a page that was
damaged during the LOGAPPLY phase of a RECOVER operation.
Recovering a page by using PAGE and CONTINUE: Suppose that you start
RECOVER for table space TSPACE1. During processing, message DSNI012I
informs you of a problem that damages page number 5. RECOVER completes, but
the damaged page, number 5, is in a stopped state and is not recovered. When
RECOVER ends, message DSNU501I informs you that page 5 is damaged.
If more than one page is damaged during RECOVER, perform the preceding steps
for each damaged page.
The following RECOVER statement specifies that the utility is to recover any current
error range problems for table space TS1:
RECOVER TABLESPACE DB1.TS1 ERROR RANGE
Recovering an error range is useful when the range is small, relative to the object
containing it; otherwise, recovering the entire object is preferable.
Message DSNU086I indicates that I/O errors were detected on a table space and
that you need to recover it. Before you attempt to use the ERROR RANGE option
of RECOVER, you should run the ICKDSF service utility to correct the disk error. If
an I/O error is detected during RECOVER processing, DB2 issues message
DSNU538I to identify which target tracks are involved. The message provides
enough information to run ICKDSF correctly.
During the recovery of the entire table space or index space, DB2 might still
encounter I/O errors that indicate DB2 is still using a bad volume. For user-defined
data sets, you should use Access Method Services to delete the data sets and
redefine them with the same name on a new volume. If you use DB2 storage
groups, you can remove the bad volume from the storage group by using ALTER
STOGROUP.
Because the data sets are restored offline without DB2 involvement, RECOVER
LOGONLY checks that the data set identifiers match those that are in the DB2
catalog. If the identifiers do not match, message DSNU548I is issued, and the job
terminates with return code 8.
To ensure that no other transactions can access DB2 objects between the time that
you restore a data set and the time that you run RECOVER LOGONLY, follow these
steps:
1. Stop the DB2 objects that are being recovered by issuing the following
command:
-STOP DATABASE(database-name) SPACENAM(space-name)
2. Restore all DB2 data sets that are being recovered.
3. Start the DB2 objects that are being recovered by issuing the following
command:
-START DATABASE(database-name) SPACENAM(space-name) ACCESS(UT)
4. Run the RECOVER utility without the TORBA or TOLOGPOINT parameters and
with the LOGONLY parameter to recover the DB2 data sets to the current point
in time and to perform forward recovery using DB2 logs. If you want to recover
the DB2 data sets to a prior point in time, run the RECOVER utility with either
TORBA or TOLOGPOINT, and with the LOGONLY parameters.
5. If you did not recover related indexes in the same RECOVER control statement,
rebuild all indexes on the recovered object.
6. Issue the following command to allow access to the recovered object if the
recovery completes successfully:
-START DATABASE(database-name) SPACENAM(space-name) ACCESS(RW)
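For example, the RECOVER statement in step 4 for a current point-in-time recovery of a
single table space might look like the following sketch; the object name is an
assumption:
RECOVER TABLESPACE DSN8D81A.DSN8S81E LOGONLY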
With the LOGONLY option, when recovering a single piece of a multi-piece linear
page set, RECOVER opens the first piece of the page set. If the data set is
migrated by DFSMShsm, the data set is recalled by DFSMShsm. Without
LOGONLY, no data set recall is requested.
Backing up a single piece of a multi-piece linear page set is not recommended. This
action can cause a data integrity problem if the backup is used to restore the data
set at a later time.
6. DSNDB01.SYSLGRNX.
7. All indexes on SYSLGRNX.
| 8. DSNDB06.SYSALTER.
| 9. All indexes on SYSALTER.
10. DSNDB06.SYSDBAUT.
11. All indexes on SYSDBAUT. If no user-defined indexes that are
stogroup-managed are defined on SYSDBAUT, execute the following utility
statement to rebuild IBM-defined and any user-defined indexes on SYSDBAUT:
REBUILD INDEX (ALL) TABLESPACE DSNDB06.SYSDBAUT
For all catalog and directory table spaces, you can list the IBM-defined indexes that
have the COPY YES attribute in the same RECOVER utility statement.
| The catalog and directory objects that are listed in step 15 in the preceding list can
| be grouped together for recovery. You can specify them as a list of objects in a
| single RECOVER utility statement. When you specify all of these objects in one
| statement, the utility needs to make only one pass of the log for all objects during
| the LOGAPPLY phase and can use parallelism when restoring the image copies in
| the RESTORE phase. Thus, these objects are recovered faster.
Recovery of the items on the list can be done concurrently or included in the same
job step. However, some restrictions apply:
1. When you recover the following table spaces or indexes, the job step in which
the RECOVER statement appears must not contain any other utility statements.
No other utilities can run while the RECOVER utility is running.
v DSNDB01.SYSUTILX
v All indexes on SYSUTILX
v DSNDB01.DBD01
2. When you recover the following table spaces, no other utilities can run while the
RECOVER utility is running. Other utility statements can exist in the same job
step.
v DSNDB06.SYSCOPY
v DSNDB01.SYSLGRNX
v DSNDB06.SYSDBAUT
v DSNDB06.SYSUSER
v DSNDB06.SYSDBASE
If the logging environment requires adding or restoring active logs, restoring archive
logs, or performing any action that affects the log inventory in the BSDS, you
should recover the BSDS before catalog and directory objects. For information
about recovering the BSDS, see Part 4 (Volume 1) of DB2 Administration Guide. To
copy active log data sets, use the Access Method Services REPRO function. For
information about the JCL for the Access Method Services REPRO function, see
one of the following publications:
v DFSMS/MVS: Access Method Services for the Integrated Catalog
v z/OS DFSMS Access Method Services for Catalogs
Why the order is important: To recover one object, RECOVER must obtain
information about it from some other object. Table 59 lists the objects from which
RECOVER must obtain information.
Table 59. Objects that the RECOVER utility accesses
Object name Reason for access by RECOVER
DSNDB01.SYSUTILX Utility restart information. The object is not
accessed when it is recovered; RECOVER
for this object is not restartable, and no other
commands can be in the same job step.
SYSCOPY information for SYSUTILX is
obtained from the log.
DSNDB01.DBD01 Descriptors for the catalog database
(DSNDB06), the work file database
(DSNDB07), and user databases. RECOVER
for this object is not restartable, and no other
commands can be in the same job step.
SYSCOPY information for DBD01 is obtained
from the log.
DSNDB06.SYSCOPY Locations of image copy data sets.
SYSCOPY information for SYSCOPY itself is
obtained from the log.
DSNDB01.SYSLGRNX The RBA or LRSN of the first log record after
the most recent copy.
DSNDB06.SYSDBAUT, DSNDB06.SYSUSER Verification that the authorization ID is
authorized to run RECOVER.
Planning for point-in-time recovery for the catalog and directory: When you
recover the DB2 catalog and directory, consider the entire catalog and directory,
including all table spaces and index spaces, as one logical unit. Recover all objects
in the catalog and directory to the same point of consistency. If a point-in-time
recovery of the catalog and directory objects is planned, a separate quiesce of the
DSNDB06.SYSCOPY table space is required after a quiesce of the other catalog
and directory table spaces.
| You should be aware of some special considerations when you are recovering
| catalog and directory objects to a point in time in which the DB2 subsystem was in
| a different mode. For example, your DB2 subsystem might currently be in new-function
| mode, and you might need to recover to a point in time at which the subsystem was in
| compatibility mode. For details, see Part 4 of DB2 Administration Guide.
Recommendation: Recover the catalog and directory objects to the current state.
You can use sample queries and documentation, which are provided in DSNTESQ
in the SDSNSAMP sample library, to check the consistency of the catalog.
Indexes are rebuilt by REBUILD INDEX. If the only items you have recovered are
table spaces in the catalog or directory, you might need to rebuild their indexes.
Use the CHECK INDEX utility to determine whether an index is inconsistent with
the data it indexes. You can use the RECOVER utility to recover catalog and
directory indexes if the index was defined with the COPY YES attribute and if you
have a full index image copy.
You must recover the catalog and directory before recovering user table spaces.
Be aware that the following table spaces, along with their associated indexes, do
not have entries in SYSIBM.SYSLGRNX, even if they were defined with COPY
YES:
v DSNDB01.SYSUTILX
v DSNDB01.DBD01
v DSNDB01.SYSLGRNX
v DSNDB06.SYSCOPY
v DSNDB06.SYSGROUP
v DSNDB01.SCT02
v DSNDB01.SPT01
These objects are assumed to be open from the point of their last image copy, so
the RECOVER utility processes the log from that point forward.
Point-in-time recovery: Full recovery of the catalog and directory table spaces and
indexes is strongly recommended. However, if you need to plan for point-in-time
recovery of the catalog and directory, here is a way to create a point of consistency:
1. Quiesce all catalog and directory table spaces in a list, except for
DSNDB06.SYSCOPY and DSNDB01.SYSUTILX.
2. Quiesce DSNDB06.SYSCOPY.
Recommendation: Quiesce DSNDB06.SYSCOPY in a separate utility
statement; when you recover DSNDB06.SYSCOPY to its own quiesce point, it
contains the ICTYPE=’Q’ (quiesce) SYSCOPY records for the other catalog
and directory table spaces.
3. Quiesce DSNDB01.SYSUTILX in a separate job step.
If you need to recover to a point in time, recover DSNDB06.SYSCOPY and
DSNDB01.SYSUTILX to their own quiesce points, and recover other catalog and
directory table spaces to their common quiesce point. The catalog and directory
objects must be recovered in a particular order, as described in “Why the order is
important” on page 357.
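The following control statements sketch this sequence. Only a few catalog table spaces
are shown in the first statement, so treat it as an illustration rather than a complete
list, and run the QUIESCE of DSNDB01.SYSUTILX in a separate job step:
QUIESCE TABLESPACE DSNDB06.SYSDBASE
        TABLESPACE DSNDB06.SYSDBAUT
        TABLESPACE DSNDB06.SYSUSER
QUIESCE TABLESPACE DSNDB06.SYSCOPY
QUIESCE TABLESPACE DSNDB01.SYSUTILX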
The status of an object that is related to a LOB table space can change due to a
recovery operation, depending on the type of recovery that is performed. If all of the
following objects for all LOB columns are recovered in a single RECOVER utility
statement to the present point in time, a QUIESCE point, or a COPY
SHRLEVEL(REFERENCE) point, no pending status exists:
v Base table space
v Index on the auxiliary table
v LOB table space
Refer to Table 60 for information about the status of a base table space, index on
the auxiliary table, or LOB table space that was recovered without its related
objects.
Table 60. Object status after being recovered without its related objects
                                            Base table        Index on the       LOB table space
Object / Recovery type                      space status      auxiliary table    status
                                                              status
Base table space / Current RBA or LRSN      None              None               None
Base table space / Point-in-time            CHECK-pending (1) None               None
Index on the auxiliary table /
  Current RBA or LRSN                       None              None               None
Index on the auxiliary table /
  Point-in-time                             None              CHECK-pending (1)  None
LOB table space / Current RBA or LRSN,
  defined with LOG(YES)                     None              None               None
LOB table space / Current RBA or LRSN,
  defined with LOG(NO)                      None              None               Auxiliary warning (2)
LOB table space / TOCOPY, COPY was
  SHRLEVEL REFERENCE                        CHECK-pending (1) REBUILD-pending    None
LOB table space / TOCOPY, COPY was
  SHRLEVEL CHANGE                           CHECK-pending (1) REBUILD-pending    CHECK-pending or
                                                                                 auxiliary warning (1)
LOB table space / TOLOGPOINT or TORBA
  (not a quiesce point)                     CHECK-pending (1) REBUILD-pending    CHECK-pending or
                                                                                 auxiliary warning (1)
LOB table space / TOLOGPOINT or TORBA
  (at a quiesce point)                      CHECK-pending (1) REBUILD-pending    None
| Notes:
| 1. RECOVER does not place dependent table spaces that are related by informational referential constraints into
| CHECK-pending status.
| 2. If, at any time, a log record is applied to the LOB table space and a LOB is consequently marked invalid, the LOB
| table space is set to auxiliary warning status.
For information about resetting any of these statuses, see Appendix C, “Resetting
an advisory or restrictive status,” on page 831.
Because a point-in-time recovery of only the table space leaves data in a consistent
state and indexes in an inconsistent state, you must rebuild all indexes by using
REBUILD INDEX. For more information, see “Resetting the REBUILD-pending
status” on page 333.
| After an index has been altered to PADDED or NOT PADDED, you cannot recover
| that index to a prior point in time. Instead, you should rebuild the index.
The auxiliary CHECK-pending status (ACHKP) is set when the CHECK DATA utility
detects an inconsistency between a base table space with defined LOB columns
and a LOB table space. For information about how to reset the ACHKP status, see
Appendix C, “Resetting an advisory or restrictive status,” on page 831.
You can also use point-in-time recovery and the point-in-time recovery options to
recover all user-defined table spaces and indexes that are in refresh-pending status
(REFP).
For more information about recovering data to a prior point of consistency, see Part
4 (Volume 1) of DB2 Administration Guide.
If you run the REORG utility to turn off a REORG-pending status, and then recover
to a point in time before that REORG job, DB2 sets restrictive statuses on all
partitions that you specified in the REORG job, as follows:
v Sets REORG-pending (and possibly CHECK-pending) on for the data partitions
v Sets REBUILD-pending on for the associated index partitions
| v Sets REBUILD-pending on for the associated logical partitions of nonpartitioned
| secondary indexes
For information about resetting these restrictive statuses, see “REORG-pending
status” on page 836 and “REBUILD-pending status” on page 834.
| Actions that can affect recovery status: When you perform the following actions
| before you recover a table space, the recovery status is affected as described:
| v If you alter a table to rotate partitions:
| – You can recover the partition to the current time.
| – You can recover the partition to a point in time after the alter. The utility can
| use a recovery base (for example, a full image copy, a REORG LOG YES
| operation, or a LOAD REPLACE LOG YES operation) that occurred prior to
| the alter.
| – You cannot recover the partition to a point in time prior to the alter; the
| recovery fails with message DSNU556I and return code 8.
| When you perform the following actions before you recover an index to a prior point
| in time or to the current time, the recovery status is affected as described:
| v If you alter the data type of a column to a numeric data type, you cannot recover
| the index until you take a full image copy of the index. However, the index can
| be rebuilt.
| v If you alter an index to NOT PADDED or PADDED, you cannot recover the index
| until you take a full image copy of the index. However, the index can be rebuilt.
| For information about recovery status, see Appendix C, “Resetting an advisory or
| restrictive status,” on page 831.
To improve the performance of the recovery, take a full image copy of the table
space or set of table spaces, and then quiesce them by using the QUIESCE utility.
This action enables RECOVER TORBA to recover the table spaces to the quiesce
point with minimal use of the log.
If possible, specify a table space and all of its indexes (or a set of table spaces and
all related indexes) in the same RECOVER utility statement, and specify
TOLOGPOINT or TORBA to identify a QUIESCE point. This action avoids placing
indexes in the CHECK-pending or REBUILD-pending status. If the TOLOGPOINT is
not a common QUIESCE point for all objects, use the following procedure:
1. RECOVER table spaces to the value for TOLOGPOINT (either an RBA or
LRSN).
2. Use concurrent REBUILD INDEX jobs to recover the indexes over each table
space.
This procedure ensures that the table spaces and indexes are synchronized, and it
eliminates the need to run the CHECK INDEX utility.
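A sketch of this procedure, using sample object names from the examples in this
chapter and an arbitrary log point, might look like the following statements:
RECOVER TABLESPACE DSN8D81A.DSN8S81D
        TABLESPACE DSN8D81A.DSN8S81E
        TOLOGPOINT X'00000551BE7D'

REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81D
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E
To recover the indexes concurrently, submit the REBUILD INDEX statements as
separate jobs.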
RECOVER does not place dependent table spaces that are related by informational
referential constraints into CHECK-pending status.
| The TORBA and TOLOGPOINT options set the CHECK-pending status for table
| spaces when you perform any of the following actions:
| v Recover one or more members of a set of table spaces to a previous point in
| time that is not a common quiesce point or SHRLEVEL(REFERENCE) point.
| Dependent table spaces are placed in CHECK-pending status.
| v Recover all members of a set of table spaces that are to be recovered to the
| same quiesce point, but referential constraints were defined for a dependent
| table after that quiesce point. Table spaces that contain those dependent tables
| are placed in CHECK-pending status.
| v Recover table spaces with defined LOB columns without recovering their LOB
| table spaces.
To avoid setting CHECK-pending status, you must perform both of the following
steps:
v Recover the table space or the set of table spaces to a quiesce point or to an
image copy that was made with SHRLEVEL REFERENCE.
If you do not recover each table space to the same quiesce point, and if any of
the table spaces are part of a referential integrity structure, the following actions
occur:
– All dependent table spaces that are recovered are placed in CHECK-pending
status with the scope of the whole table space.
– All dependent table spaces of the recovered table spaces are placed in
CHECK-pending status with the scope of the specific dependent tables.
v Do not add table check constraints or referential constraints after the quiesce
point or image copy.
If you recover each table space of a table space set to the same quiesce point,
but referential constraints were defined after the quiesce point, the
CHECK-pending status is set for the table space that contains the table with the
referential constraint.
The TORBA and TOLOGPOINT options set the CHECK-pending status for indexes
when you perform either of the following actions:
v Recover one or more of the indexes to a previous point in time, but you do not
recover the related table space in the same RECOVER statement.
v Recover one or more of the indexes along with the related table space to a
previous point in time that is not a quiesce point or SHRLEVEL REFERENCE
point.
You can turn off CHECK-pending status for an index by using the TORBA and
TOLOGPOINT options. Recover indexes along with the related table space to the
same quiesce point or SHRLEVEL REFERENCE point. RECOVER processing
resets the CHECK-pending status for all indexes in the same RECOVER statement.
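For example, a single statement such as the following recovers two indexes
together with their table space to a common point and therefore leaves no
CHECK-pending status on the indexes; the object names are taken from the sample
list later in this chapter, and the RBA value is illustrative:
RECOVER TABLESPACE DSN8D81A.DSN8S81E
        INDEX DSN8810.XEMP1
        INDEX DSN8810.XEMP2
        TORBA X'000007425468'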
For information about resetting the CHECK-pending status of table spaces, see
Chapter 8, “CHECK DATA,” on page 55. For information about resetting the
CHECK-pending status for indexes, see “CHECK-pending status” on page 832.
Image copy on tape: If the image copy is on tape, messages IEF233D and
IEF455D request the tape for RECOVER, as shown in the following example:
IEF233D M BAB,COPY ,,R92341QJ,DSNUPROC,
OR RESPOND TO IEF455D MESSAGE
*42 IEF455D MOUNT COPY ON BAB FOR R92341QJ,DSNUPROC OR REPLY ’NO’
R 42,NO
IEF234E K BAB,COPY ,PVT,R92341QJ,DSNUPROC
By replying NO, you can initiate the fallback to the previous image copy. RECOVER
responds with messages DSNU030I and DSNU508I, as shown in the following
example:
DSNU030I csect-name - UNABLE TO ALLOCATE R92341Q.UTQPS001.FCOPY010
RC=4, CODE=X’04840000’
DSNU508I csect-name - IN FALLBACK PROCESSING TO PRIOR FULL IMAGE COPY
Reason code X'0484' means that the request was denied by the operator.
Image copy on disk: If the image copy is on disk, you can delete or rename the
image copy data set before RECOVER starts executing. RECOVER issues
messages DSNU030I and DSNU508I, as shown in the following example:
DSNU030I csect-name - UNABLE TO ALLOCATE R92341Q.UTQPS001.FCOPY010,
RC=4, CODE=X’17080000’
DSNU508I csect-name - IN FALLBACK PROCESSING TO PRIOR FULL IMAGE COPY
Reason code X'1708' means that the ICF catalog entry cannot be found.
Improving performance
To improve recovery time, consider enabling the Fast Log Apply function on the
DB2 subsystem. For more information about enabling this function, see the LOG
APPLY STORAGE field on panel DSNTIPL, in Part 2 of DB2 Installation Guide.
Use MERGECOPY to merge your table space image copies before recovering the
table space. If you do not merge your image copies, RECOVER automatically
merges them. If RECOVER cannot allocate all the incremental image copy data
sets when it merges the image copies, RECOVER uses the log instead.
Include a list of table spaces and indexes in your RECOVER utility statement to
apply logs in a single scan of the logs.
If you use RECOVER TOCOPY for full image copies, you can improve performance
by using data compression. The improvement is proportional to the degree of
compression.
Consider specifying the PARALLEL keyword to restore image copies from disk or
tape to a list of objects in parallel.
If possible, DB2 reads the required log records from the active log to provide the
best performance.
Any log records that are not found in the active logs are read from the archive log
data sets, which are dynamically allocated to satisfy the requests. The type of
storage that is used for archive log data sets is a significant factor in the
performance. Consider the following actions to improve performance:
v RECOVER a list of objects in one utility statement to take only a single pass of
the log.
v Keep archive logs on disk to provide the best possible performance.
v Control archive log data sets by using DFSMShsm to provide the next best
performance. DB2 optimizes recall of the data sets. After the data set is recalled,
DB2 reads it from disk.
v If the archive log must be read from tape, DB2 optimizes access by means of
ready-to-process and look-ahead mount requests. DB2 also permits delaying the
deallocation of a tape drive if subsequent RECOVER jobs require the same
archive log tape. Those methods are described in more detail in the subsequent
paragraphs.
The BSDS contains information about which log data sets to use and where they
reside. You must keep the BSDS information current. If the archive log data sets
are cataloged, the ICF catalog indicates where to allocate the required data set.
DFSMShsm data sets: The recall of the first DFSMShsm archive log data set
starts automatically when the LOGAPPLY phase starts. When the recall is complete
and the first log record is read, the recall for the next archive log data set starts.
This process is known as look-ahead recalling. Its purpose is to recall the next data
set while it reads the preceding one.
When a recall is complete, the data set is available to all RECOVER jobs that
require it. Reading proceeds in parallel.
Non-DFSMShsm tape data sets: DB2 reports on the console all tape volumes that
are required for the entire job. The report distinguishes two types of volumes:
v Any volume that is not marked with an asterisk (*) is required for the job to
complete. Obtain these volumes from the tape library as soon as possible.
v Any volume that is marked with an asterisk (*) contains data that is also
contained in one of the active log data sets. The volume might or might not be
required.
As tapes are mounted and read, DB2 makes two types of mount requests:
v Ready-to-process: The current job needs this tape immediately. As soon as the
tape is loaded, DB2 allocates and opens it.
v Look-ahead: This is the next tape volume that is required by the current job.
Responding to this request enables DB2 to allocate and open the data set before
it is needed, thus reducing overall elapsed time for the job.
You can dynamically change the maximum number of input tape units that are used
to read the archive log by specifying the COUNT option of the SET ARCHIVE
command. For example, use the following command to assign 10 tape units to your
DB2 subsystem:
-SET ARCHIVE COUNT (10)
The DISPLAY ARCHIVE READ command shows the currently mounted tape
volumes and their statuses.
Delayed deallocation: DB2 can delay deallocating the tape units used to read the
archive logs. This is useful when several RECOVER utility statements run in
parallel. By delaying deallocation, DB2 can re-read the same volume on the same
tape unit for different RECOVER jobs, without taking time to allocate it again.
You can dynamically change the amount of time that DB2 delays deallocation by
using the TIME option of the SET ARCHIVE command. For example, to specify a
60 minute delay, issue the following command:
-SET ARCHIVE TIME(60)
In a data sharing environment, you might want to specify zero (0) to avoid having
one member hold onto a data set that another member needs for recovery.
Performance summary:
1. Achieve the best performance by allocating archive logs on disk.
2. Consider staging cataloged tape data sets to disk before allocation by the log
read process.
3. If the data sets are read from tape, set both the COUNT and the TIME values to
the maximum allowable values within the system constraints.
For example, if the incremental image copies are on tape and an adequate number
of tape drives are not available, RECOVER does not use the remaining incremental
image copy data sets.
If one of the following actions occurs, the index remains untouched, and utility
processing terminates with return code 8:
v RECOVER processes an index for which no full copy exists.
v The copy cannot be used because of utility activity that occurred on the index or
on its underlying table space.
For more information, see “Setting and clearing the informational COPY-pending
status” on page 117.
If you always make multiple image copies, RECOVER should seldom fall back to an
earlier point. Instead, RECOVER relies on the backup copy data set if the primary
copy data set is unusable.
RECOVER does not perform parallel processing for objects that are in backup or
fallback recovery. Instead, the utility performs non-parallel image copy allocation
processing of the objects. RECOVER defers the processing of objects that require
backup or fallback processing until all other objects are recovered, at which time the
utility processes the objects one at a time.
If the RECOVER utility cannot complete because of severe errors that are caused
by the damaged media, you might need to use Access Method Services (IDCAMS)
with the NOSCRATCH option to delete the cluster for the table space or index. If
the table space or index is defined by using STOGROUP, the RECOVER utility
automatically redefines the cluster. For user-defined table spaces or indexes, you
must redefine the cluster before invoking the RECOVER utility.
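The following IDCAMS statement is a minimal sketch of this approach; the data set
name follows the DB2 VSAM naming convention and is illustrative only:
DELETE 'catname.DSNDBC.dbname.psname.I0001.A001' -
       CLUSTER NOSCRATCH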
Terminating RECOVER
Terminating a RECOVER job with the TERM UTILITY command leaves the table
space that is being recovered in RECOVER-pending status, and the index space
that is being recovered in the REBUILD-pending status. If you recover a table
space to a previous point in time, its indexes are left in the REBUILD-pending
status. The data or index is unavailable until the object is successfully recovered or
rebuilt.
Restarting RECOVER
You can restart RECOVER from the last commit point (RESTART(CURRENT)) or
| the beginning of the phase (RESTART(PHASE)). By default, DB2 uses
| RESTART(CURRENT).
In both cases, you must identify and fix the causes of the failure before performing
a current restart.
For instructions on restarting a utility job, see “Restarting an online utility” on page
42.
Table 61 shows which claim classes RECOVER claims and drains and any
restrictive state that the utility sets on the target object.
Table 61. Claim classes of RECOVER operations
                                          RECOVER        RECOVER TORBA   RECOVER PART       RECOVER
Target                                    (no option)    or TOCOPY       TORBA or TOCOPY    ERROR-RANGE
Table space or partition                  DA/UTUT        DA/UTUT         DA/UTUT CW/UTRW1   DA/UTUT
Partitioning index, data-partitioned      DA/UTUT        DA/UTUT         DA/UTUT CW/UTRW1   DA/UTUT
secondary index, or physical partition
| Nonpartitioned secondary index          DA/UTUT        DA/UTUT         DA/UTUT CW/UTRW1   DA/UTUT
RI dependents                             none           CHKP (YES)      CHKP (YES)         none
Legend:
v CW: Claim the write claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v UTUT: Utility restrictive state, exclusive control.
v UTRW: Utility restrictive state, read and write access allowed.
v none: Any claim, drain, or restrictive state for this object does not change.
RECOVER does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Table 62 shows which utilities can run concurrently with RECOVER on the same
target object. The target object can be a table space, an index space, or a partition
of a table space or index space. If compatibility depends on particular options of a
utility, that information is also documented in the table.
Table 62. Compatibility of RECOVER with other utilities
                                  Compatible with        Compatible with        Compatible with
                                  RECOVER                RECOVER TOCOPY         RECOVER
Action                            (no option)?           or TORBA?              ERROR-RANGE?
CHECK DATA                        No                     No                     No
CHECK INDEX                       No                     No                     No
CHECK LOB                         No                     No                     No
COPY INDEXSPACE                   No                     No                     No
COPY TABLESPACE                   No                     No                     No
DIAGNOSE                          Yes                    Yes                    Yes
LOAD                              No                     No                     No
MERGECOPY                         No                     No                     No
MODIFY                            No                     No                     No
QUIESCE                           No                     No                     No
REBUILD INDEX                     No                     No                     No
REORG INDEX                       Yes                    No                     Yes
REORG TABLESPACE                  No                     No                     No
REPAIR LOCATE INDEX               Yes                    No                     Yes
REPAIR LOCATE TABLESPACE          No                     No                     No
REPORT                            Yes                    Yes                    Yes
RUNSTATS INDEX                    No                     No                     No
RUNSTATS TABLESPACE               No                     No                     No
STOSPACE                          Yes                    Yes                    Yes
UNLOAD                            No                     No                     No
To run on DSNDB01.SYSUTILX, RECOVER must be the only utility in the job step
and the only utility running in the DB2 subsystem.
RECOVER on any catalog or directory table space is an exclusive job; such a job
can interrupt another job between job steps, possibly causing the interrupted job to
time out.
Example 3: Recovering a table space partition to the last image copy that was
taken. The following control statement specifies that the RECOVER utility is to
recover the first partition of table space DSN8D81A.DSN8S81D to the last image
copy that was taken. If the last image copy that was taken is a full image copy, this
full image copy is restored. If the last image copy that was taken is an incremental
image copy, the most recent full image copy, along with any incremental image
copies, are restored.
RECOVER TABLESPACE DSN8D81A.DSN8S81D DSNUM 1 TOLASTCOPY
Example 5: Recovering an index to the last full image copy that was taken
without deleting and redefining the data sets. The following control statement
specifies that the RECOVER utility is to recover index ADMF001.IADH082P to the
last full image copy. The REUSE option specifies that DB2 is to logically reset and
reuse DB2-managed data sets without deleting and redefining them.
RECOVER INDEX ADMF001.IADH082P REUSE TOLASTFULLCOPY
LISTDEF RCVR4_LIST
INCLUDE TABLESPACES TABLESPACE DBOL1002.TSOL1002
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1003 PARTLEVEL 3
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1003 PARTLEVEL 6
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1004 PARTLEVEL 5
INCLUDE TABLESPACES TABLESPACE DBOL1003.TPOL1004 PARTLEVEL 9
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IPOL1051 PARTLEVEL 22
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IPOL1061 PARTLEVEL 10
INCLUDE INDEXSPACES INDEXSPACE DBOL1003.IXOL1062
Figure 73. Example RECOVER control statement with the CURRENTCOPYONLY option
Figure 74. Example RECOVER control statement for a list of objects on tape
The following control statements specify that RECOVER is to recover the objects in
the RCVRLIST list in a single job. The PARALLEL option indicates that RECOVER
is to restore four objects at a time in parallel. If any of the image copies are on tape
(either stacked or not stacked), RECOVER determines the number of tape drives to
use to optimize the process.
LISTDEF RCVRLIST INCLUDE TABLESPACE DSN8D81A.DSN8S81D
INCLUDE INDEX DSN8810.XDEPT1
INCLUDE INDEX DSN8810.XDEPT2
INCLUDE INDEX DSN8810.XDEPT3
INCLUDE TABLESPACE DSN8D81A.DSN8S81E
INCLUDE INDEX DSN8810.XEMP1
INCLUDE INDEX DSN8810.XEMP2
RECOVER LIST RCVRLIST TOLOGPOINT X’00000551BE7D’ PARALLEL(4)
You can determine when to run REORG INDEX by using the LEAFDISTLIMIT
catalog query option. If you specify the REPORTONLY option, REORG INDEX
produces a report that indicates whether a REORG is recommended; in this case,
a REORG is not performed. These options are not available for indexes on the
directory.
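For example, a statement like the following, in which the index name and the
threshold are illustrative, produces only a report:
REORG INDEX DSN8810.XEMP1 LEAFDISTLIMIT 300 REPORTONLY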
For a diagram of REORG INDEX syntax and a description of available options, see
“Syntax and options of the REORG INDEX control statement” on page 376. For
detailed guidance on running this utility, see “Instructions for running REORG
INDEX” on page 387.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v REORG privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL authority
v SYSADM authority
To execute this utility on an index space in the catalog or directory, you must use a
privilege set that includes one of the following authorities:
v REORG privilege for the DSNDB06 (catalog) database
v DBADM or DBCTRL authority for the DSNDB06 (catalog) database
v Installation SYSOPR authority
v SYSCTRL authority
v SYSADM or Installation SYSADM authority
A user with an authority other than installation SYSADM or installation SYSOPR
might receive the following message while trying to reorganize an index space in
the catalog or directory:
DSNT500I "resource unavailable"
An ID with installation SYSOPR authority can also execute REORG INDEX, but
only on an index in the DSNDB06 database.
To run REORG INDEX STATISTICS REPORT YES, ensure that the privilege set
| includes the SELECT privilege on the catalog tables and on the tables for which
| statistics are to be gathered.
Execution phases of REORG INDEX: The REORG INDEX utility operates in these
phases:
Phase Description
UTILINIT Performs initialization and setup
UNLOAD Unloads index space and writes keys to a sequential data set.
BUILD Builds indexes. Updates index statistics.
LOG Processes log iteratively. Used only if you specify SHRLEVEL
CHANGE.
SWITCH Switches access between original and new copy of index space or
partition. Used only if you specify SHRLEVEL REFERENCE or
CHANGE.
UTILTERM Performs cleanup. For DB2-managed data sets and either
SHRLEVEL CHANGE or SHRLEVEL REFERENCE, the utility
deletes the original copy of the table space or index space.
Syntax diagram
The railroad syntax diagram for the REORG INDEX control statement appears here.
It shows the main options of the statement, including the defaults SHRLEVEL
NONE, FASTSWITCH YES, UNLOAD CONTINUE, and WORKDDN (SYSUT1), and
the following diagram fragments: index-name-spec (INDEX or INDEXSPACE),
deadline-spec (default DEADLINE NONE), drain-spec (defaults RETRY 0 and
RETRY_DELAY 300), change-spec (default TIMEOUT ABEND),
labeled-duration-expression, stats-spec, and correlation-stats-spec.
Notes:
1 You cannot use UNLOAD PAUSE with the LIST option.
2 You cannot specify any options in stats-spec with the UNLOAD ONLY option.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
INDEX creator-id.index-name
Specifies an index that is to be reorganized.
creator-id. specifies the creator of the index and is optional. If you omit the
qualifier creator-id, DB2 uses the user identifier for the utility job. index-name is
| the qualified name of the index that is to be reorganized. You can specify either an
| index name or an index space name. Enclose the index name in quotation
marks if the name contains a blank.
| INDEXSPACE database-name.index-space-name
| Specifies the qualified name of the index space that is obtained from the
| SYSIBM.SYSINDEXES table.
| database-name specifies the name of the database that is associated with the
| index and is optional. The default is DSNDB04.
| index-space-name specifies the qualified name of the index space that is to be
| reorganized; the name is obtained from the SYSIBM.SYSINDEXES table.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The INDEX
keyword is required to differentiate this REORG INDEX LIST from REORG
TABLESPACE LIST. The utility allows one LIST keyword for each control
statement of REORG INDEX. The list must not contain any table spaces.
REORG INDEX is invoked once for each item in the list. For more information
about LISTDEF specifications, see Chapter 15, “LISTDEF,” on page 163.
Do not specify STATISTICS INDEX index-name with REORG INDEX LIST. If
you want to collect inline statistics for a list of indexes, just specify STATISTICS.
You cannot specify DSNUM and PART with LIST on any utility.
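A sketch of a list-driven invocation might look like the following statements; the
LISTDEF INCLUDE clause and list name are illustrative:
LISTDEF REORG_INDX INCLUDE INDEXSPACES TABLESPACE DSN8D81A.DSN8S81E
REORG INDEX LIST REORG_INDX STATISTICS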
REUSE
When used with SHRLEVEL NONE, specifies that REORG is to logically reset
and reuse DB2-managed data sets without deleting and redefining them. If you
do not specify REUSE and SHRLEVEL NONE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If a data set has multiple extents and you use the REUSE parameter, the
extents are not released.
If you specify SHRLEVEL REFERENCE or CHANGE with REUSE, REUSE
does not apply.
PART integer
Identifies a partition that is to be reorganized. You can reorganize a single
partition of a partitioning index. You cannot specify PART with LIST. integer
must be in the range from 1 to the number of partitions that are defined for the
| partitioning index. The maximum is 4096.
integer designates a single partition.
If you omit the PART keyword, the entire index is reorganized.
SHRLEVEL
Specifies the method for performing the reorganization. The parameter following
SHRLEVEL indicates the type of access that is to be allowed during the
RELOAD phase of REORG.
NONE Specifies that reorganization is to operate by unloading from the area
that is being reorganized (while applications can read but cannot write
to the area), building into that area (while applications have no access),
and then allowing read-write access again. The default is NONE.
If you specify NONE (explicitly or by default), you cannot specify the
following parameters:
v MAXRO
v LONGLOG
v DELAY
v DEADLINE
v DRAIN_WAIT
v RETRY
v RETRY_DELAY
REFERENCE
Specifies that reorganization is to operate as follows:
v Unload from the area that is being reorganized while applications can
read but cannot write to the area.
v Build into a shadow copy of that area while applications can read but
cannot write to the original copy.
v Switch the future access of the applications from the original copy to
the shadow copy by exchanging the names of the data sets, and
then allowing read-write access again.
To determine which data sets are required when you execute REORG
SHRLEVEL REFERENCE, see “Data sets that REORG INDEX uses”
on page 389.
To determine which data sets are required when you execute REORG
SHRLEVEL CHANGE, see “Data sets that REORG INDEX uses” on
page 389.
CURRENT_TIMESTAMP
Specifies that the deadline is to be calculated based on the CURRENT
TIMESTAMP.
constant
Indicates a unit of time and is followed by one of the seven duration
keywords: YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS,
or MICROSECONDS. The singular form of these words is also
acceptable: YEAR, MONTH, DAY, HOUR, MINUTE, SECOND,
MICROSECOND.
If you specify DEFER, and DB2 determines that the actual time for an
iteration and the estimated time for the next iteration are both less than
5 seconds, DB2 adds a 5-second pause to the next iteration. This
pause reduces consumption of processor time. The first time this
situation occurs for a given execution of REORG, DB2 sends message
DSNU362I to the console. The message states that the number of log
records that must be processed is small and that the pause occurs. To
change the MAXRO value and thus cause REORG to finish, execute
the ALTER UTILITY command. DB2 adds the pause whenever the
situation occurs; however, DB2 sends the message only if 30 minutes
have elapsed since the last message was sent for a given execution of
REORG.
DRAIN
Specifies drain behavior at the end of the log phase after the MAXRO threshold
is reached and when the last iteration of the log is to be applied.
WRITERS
Specifies the current default action, in which DB2 drains only the writers
during the log phase after the MAXRO threshold is reached and
subsequently issues DRAIN ALL on entering the switch phase.
ALL Specifies that DB2 is to drain all readers and writers during the log
phase, after the MAXRO threshold is reached.
Consider specifying DRAIN ALL if the following conditions are both true:
v SQL update activity is high during the log phase.
v The default behavior results in a large number of -911 SQL error
messages.
LONGLOG
Specifies the action that DB2 is to perform, after sending a message to the
console, if the number of log records that the next iteration of log processing is to
process is not sufficiently lower than the number that the previous iterations
processed. This situation means that REORG INDEX is not reading the
application log quickly enough to keep pace with the writing of the application
log.
CONTINUE
Specifies that until the time on the JOB statement expires, DB2 is to
continue performing reorganization, including iterations of log
processing, if the estimated time to perform an iteration exceeds the
time that is specified with MAXRO.
A value of DEFER for MAXRO and a value of CONTINUE for
LONGLOG together mean that REORG INDEX is to continue allowing
access to the original copy of the area that is being reorganized and
does not switch to the shadow copy. The user can execute the ALTER
UTILITY command with a large value for MAXRO when the switching is
desired.
The default is CONTINUE.
TERM Specifies that DB2 is to terminate reorganization after the delay
specified by the DELAY parameter.
DRAIN
Specifies that DB2 is to drain the write claim class after the delay that is
specified by the DELAY parameter. This action forces the final iteration
of log processing to occur.
DELAY integer
Specifies the minimum interval between the time that REORG sends the
LONGLOG message to the console and the time that REORG performs the
action that is specified by the LONGLOG parameter.
integer is the number of seconds. The default is 1200.
TIMEOUT
Specifies the action that is to be taken if the REORG INDEX utility gets a
time-out condition while trying to drain objects in either the log or switch
phases.
ABEND
Indicates that if a time-out condition occurs, DB2 is to leave the objects in a
UTRO or UTUT state.
TERM
Indicates that DB2 is to behave as follows if you specify the TERM option
and a time-out condition occurs:
1. DB2 issues an implicit TERM UTILITY command, causing the utility to
end with a return code 8.
2. DB2 issues the DSNU590I and DSNU170I messages.
3. DB2 leaves the objects in a RW state.
FASTSWITCH
Specifies which switch methodology is to be used.
YES
Specifies that the fifth-level qualifier in the data set name is to alternate
between I0001 and J0001. This option is not allowed for the catalog
(DSNDB06) or directory (DSNDB01). The default is FASTSWITCH YES.
NO
Specifies that the SWITCH phase is to use IDCAMS RENAME.
LEAFDISTLIMIT integer
Specifies that the value for integer is to be compared to the LEAFDIST value
for the specified partitions of the specified index in SYSIBM.SYSINDEXPART. If
any LEAFDIST value exceeds the specified LEAFDISTLIMIT value, REORG is
performed or, if you specify REPORTONLY, recommended.
The default value is 200.
REPORTONLY
Specifies that REORG is only to be recommended, not performed. REORG
produces a report with one of the following return codes:
1 No limit met; no REORG performed or recommended.
2 REORG performed or recommended.
UNLOAD
Specifies whether the utility job is to continue processing or terminate after the
data is unloaded.
CONTINUE
Specifies that, after the data has been unloaded, the utility is to
continue processing. The default is CONTINUE.
PAUSE
Specifies that, after the data has been unloaded, processing is to end.
The utility stops and the RELOAD status is stored in SYSIBM.SYSUTIL
so that processing can be restarted with RELOAD RESTART(PHASE).
You cannot use UNLOAD PAUSE if you specify the LIST option.
ONLY Specifies that, after the data has been unloaded, the utility job ends and
the status in SYSIBM.SYSUTIL that corresponds to this utility ID is
removed.
STATISTICS
Specifies that statistics for the index are to be collected; the statistics are either
reported or stored in the DB2 catalog. You cannot collect inline statistics for
indexes on the catalog and directory tables.
| Restriction: If you specify STATISTICS for encrypted data, DB2 might not
| provide useful information on this data.
REPORT
Indicates whether a set of messages is to be generated to report the collected
statistics.
NO
Indicates that the set of messages is not to be sent as output to
SYSPRINT.
The default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT.
The generated messages are dependent on the combination of keywords
(such as TABLESPACE, INDEX, TABLE, and COLUMN) that are specified
with the RUNSTATS utility. However, these messages are not dependent
on the specification of the UPDATE option. REPORT YES always generates
a report of SPACE and ACCESSPATH statistics.
KEYCARD
Indicates that all of the distinct values in all of the 1 to n key column
combinations for the specified indexes are to be collected. n is the number of
columns in the index.
FREQVAL
Specifies that frequent value statistics are to be collected. If you specify
FREQVAL, you must also specify NUMCOLS and COUNT.
NUMCOLS
Indicates the number of key columns to concatenate together when you
collect frequent values from the specified index. Specifying 3 means that
DB2 is to collect frequent values on the concatenation of the first three key
columns. The default is 1, which means DB2 is to collect frequent values
on the first key column of the index.
COUNT
Indicates the number of frequent values that are to be collected. Specifying
15 means that DB2 is to collect 15 frequent values from the specified key
columns. The default is 10.
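For example, a statement like the following, in which the index name and values
are illustrative, collects inline statistics with frequent values on the concatenation of
the first two key columns:
REORG INDEX DSN8810.XDEPT1
   STATISTICS KEYCARD FREQVAL NUMCOLS 2 COUNT 15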
| SORTDEVT device-type
| Specifies the device type for temporary data sets that are to be dynamically
| allocated by DFSORT. For device-type, specify any device that is valid on the
| DYNALLOC parameter of the SORT or OPTION options for DFSORT. See
| DFSORT Application Programming: Guide for more information.
| SORTNUM integer
| Specifies the number of temporary data sets that are to be dynamically
| allocated when collecting statistics for a data-partitioned secondary index. If you
| omit SORTDEVT, SORTNUM is ignored. If you use SORTDEVT and omit
| SORTNUM, no value is passed to DFSORT; DFSORT uses its own default.
| integer is the number of temporary data sets.
| UPDATE
| Indicates whether the collected statistics are to be inserted into the catalog
| tables. UPDATE also allows you to select statistics that are used for access
| path selection or statistics that are used by database administrators.
| ALL Indicates that all collected statistics are to be updated in the catalog.
| The default is ALL.
| ACCESSPATH
| Indicates that only the catalog table columns that provide statistics that
| are used for access path selection are to be updated.
| SPACE
| Indicates that only the catalog table columns that provide statistics to
| help the database administrator to assess the status of a particular
| table space or index are to be updated.
| NONE Indicates that catalog tables are not to be updated with the collected
| statistics. This option is valid only when REPORT YES is specified.
| HISTORY
| Indicates that all catalog table inserts or updates to the catalog history tables
| are to be recorded.
| The default is supplied by the specified value in STATISTICS HISTORY on
| panel DSNTIPO.
| ALL Indicates that all collected statistics are to be updated in the catalog
| history tables.
| ACCESSPATH
| Indicates that only the catalog history table columns that provide
| statistics used for access path selection are to be updated.
| SPACE
| Indicates that only space-related catalog statistics are to be updated in
| catalog history tables.
| NONE Indicates that catalog history tables are not to be updated with the
| collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even when some parts are empty. This option enables
the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some parts might not contain data.
NO Indicates that aggregation or rollup is to be done only if data is
available for all parts.
If data is not available for all parts and if the installation value for STATISTICS
ROLLUP on panel DSNTIPO is set to NO, message DSNU623I is issued.
WORKDDN(ddname)
ddname specifies the DD statement for the unload data set.
ddname
Is the DD name of the temporary work file for build input. The default is
SYSUT1.
The WORKDDN keyword specifies either a DD name or a TEMPLATE
name from a previous TEMPLATE control statement. If utility processing
detects that the specified name is both a DD name in the current job
step and a TEMPLATE name, the utility uses DD name. For more
information about TEMPLATE specifications, see Chapter 31,
“TEMPLATE,” on page 575.
Data sharing considerations for REORG: You must not execute REORG on an
object if another DB2 subsystem holds retained locks on the object or has
long-running noncommitting applications that use the object. You can use the
DISPLAY GROUP command to determine whether a member’s status is ″FAILED.″
You can use the DISPLAY DATABASE command with the LOCKS option to
determine if locks are held.
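For example, the following commands, in which the database name is illustrative,
display the status of the group members and any locks that are held on spaces in
the database:
-DISPLAY GROUP
-DISPLAY DATABASE(DSN8D81A) SPACENAM(*) LOCKS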
CHECK-pending status: You cannot reorganize an index when the data is in the
CHECK-pending status. See Chapter 8, “CHECK DATA,” on page 55 for more
information about resetting the CHECK-pending status.
| Notes:
| 1. Required when collecting inline statistics on at least one data-partitioned secondary
| index.
| 2. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
| data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Index Object to be reorganized.
Calculating the size of the work data sets: When reorganizing an index space,
you need a non-DB2 sequential work data set. That data set is identified by the DD
statement that is named in the WORKDDN option. During the UNLOAD phase, the
index keys and the data pointers are unloaded to the work data set. This data set is
used to build the index. It is required only during the execution of REORG.
Use the following formula to calculate the approximate size (in bytes) of the
WORKDDN data set SYSUT1:
| size = number of keys x (key length + 8)
where
| Calculating the size of the sort work data sets: To calculate the approximate
| size (in bytes) of the ST01WKnn data set, use the following formula:
For user-managed data sets, you must preallocate the shadow data sets before you
execute REORG INDEX with SHRLEVEL REFERENCE or SHRLEVEL CHANGE. If
an index or partitioned index resides in DB2-managed data sets and shadow data
sets do not already exist when you execute REORG INDEX, DB2 creates the
shadow data sets. At the end of REORG processing, the DB2-managed shadow
| data sets are deleted. You can create the shadows ahead of time for DB2-managed
| data sets.
Shadow data set names: Each shadow data set must have the following name:
catname.DSNDBx.dbname.psname.y0001.Lnnn
To determine the names of existing shadow data sets, execute one of the following
queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
| SELECT DBNAME, TSNAME, IPREFIX
| FROM SYSIBM.SYSTABLEPART
| WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
| SELECT DBNAME, IXNAME, IPREFIX
| FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
| WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
| AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
Defining shadow data sets: Consider the following actions when you preallocate
the data sets:
v Allocate the shadow data sets according to the rules for user-managed data sets.
v Define the shadow data sets as LINEAR.
v Use SHAREOPTIONS(3,3).
v Define the shadow data sets as EA-enabled if the original table space or index
space is EA-enabled.
v Allocate the shadow data sets on the volumes that are defined in the storage
group for the original table space or index space.
If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
the SECQTY value for the table space or index space.
Recommendation: Use the MODEL option, which causes the new shadow data set
to be created like the original data set. This method is shown in the following
example:
DEFINE CLUSTER +
(NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
DATA +
(NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’) )
| Creating shadow data sets for indexes: When you preallocate data sets for
| indexes, create the shadow data sets as follows:
| v Create shadow data sets for the partition of the table space and the
| corresponding partition in each partitioning index and data-partitioned secondary
| index.
| v Create a shadow data set for logical partitions of nonpartitioned secondary
| indexes.
| Use the same naming scheme for these index data sets as you use for other data
| sets that are associated with the base index, except use J0001 instead of I0001.
| For more information about this naming scheme, see the information about the
| shadow data set naming convention at the beginning of this section.
Estimating the size of shadow data sets: If you do not change the value of
FREEPAGE or PCTFREE, the amount of space that is required for a shadow data
set is approximately comparable to the amount of space that is required for the
original data set. For more information about calculating the size of data sets, see
“Data sets that REORG INDEX uses” on page 389.
You can determine when to run REORG for indexes by using the LEAFDISTLIMIT
catalog query option. If you specify the REPORTONLY option, REORG produces a
report that indicates whether a REORG is recommended; in this case, a REORG is
not performed.
When you specify the catalog query options along with the REPORTONLY option,
REORG produces a report with one of the following return codes:
1 No limit met; no REORG performed or recommended.
2 REORG performed or recommended.
Alternatively, information from the SYSINDEXPART catalog table can tell you which
indexes qualify for reorganization.
Use the following query to identify user-created indexes and DB2 catalog indexes
that you should consider reorganizing with the REORG INDEX utility:
EXEC SQL
SELECT IXNAME, IXCREATOR
FROM SYSIBM.SYSINDEXPART
WHERE LEAFDIST > 200
ENDEXEC
For example, with FREEPAGE 0 and index page splitting, the LEAFDIST value can
climb sharply. In this case, a LEAFDIST value that exceeds 200 can be acceptable.
After you run RUNSTATS, issuing the following SQL statement provides the
average distance (multiplied by 100) between successive leaf pages during
sequential access of the specified index.
EXEC SQL
SELECT LEAFDIST
FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR = 'index_creator_name'
AND IXNAME = 'index_name'
ENDEXEC
v The number of log records that the next iteration will process is not sufficiently
lower than the number of log records that were processed in the previous
iteration. If this condition is met but the first two conditions are not, DB2 sends
message DSNU377I to the console. DB2 continues log processing for the length
of time that is specified by DELAY and then performs the action specified by
LONGLOG.
Operator actions: LONGLOG specifies the action that DB2 is to perform if log
processing is not occurring quickly enough. See “Option descriptions” on page 379
for a description of the LONGLOG options. If the operator does not respond to the
console message DSNU377I, the LONGLOG option automatically goes into effect.
You can take one of the following actions:
v Execute the START DATABASE(db) SPACENAM(ts)... ACCESS(RO) command
and the QUIESCE utility to drain the write claim class. DB2 performs the last
iteration, if MAXRO is not DEFER. After the QUIESCE, you should also execute
the ALTER UTILITY command, even if you do not change any REORG
parameters.
v Execute the START DATABASE(db) SPACENAM(ts)... ACCESS(RO) command
and the QUIESCE utility to drain the write claim class. Then, after reorganization
has made some progress, execute the START DATABASE(db) SPACENAM(ts)...
ACCESS(RW) command. This action increases the likelihood that log processing
can improve. After the QUIESCE, you should also execute the ALTER UTILITY
command, even if you do not change any REORG parameters.
v Execute the ALTER UTILITY command to change the value of MAXRO.
Changing it to a huge positive value, such as 9999999, causes the next iteration
to be the last iteration.
v Execute the ALTER UTILITY command to change the value of LONGLOG.
v Execute the TERM UTILITY command to terminate reorganization.
v Adjust the amount of buffer space that is allocated to reorganization and to
applications. This adjustment can increase the likelihood that log processing
improves. After adjusting the space, you should also execute the ALTER UTILITY
command, even if you do not change any REORG parameters.
v Adjust the scheduling priorities of reorganization and applications. This
adjustment can increase the likelihood that log processing improves. After
adjusting the priorities, you should also execute the ALTER UTILITY command,
even if you do not change any REORG parameters.
DB2 does not take the action specified in the LONGLOG phrase if any one of these
events occurs before the delay expires:
v An ALTER UTILITY command is issued.
v A TERM UTILITY command is issued.
v DB2 estimates that the time to perform the next iteration is likely to be less than
or equal to the time specified on the MAXRO keyword.
v REORG terminates for any reason (including the deadline).
For REORG with SHRLEVEL REFERENCE or CHANGE, you can use the ALTER
STOGROUP command to change the characteristics of a DB2-managed data set.
You can effectively change the characteristics of a user-managed data set by
specifying the desired new characteristics when creating the shadow data set; see
“Shadow data sets” on page 390 for more information about shadow data sets. In
particular, placing the original and shadow data sets on different disk volumes might
reduce contention and thus improve the performance of REORG and the
performance of applications during REORG execution.
The SYSIBM.SYSUTIL record for the REORG INDEX utility remains in ″stopped″
status until REORG is restarted or terminated.
While REORG is interrupted by PAUSE, you can redefine the table space
attributes for user-defined table spaces. PAUSE is not required for
STOGROUP-defined table spaces. Attribute changes are done automatically by a
REORG following an ALTER INDEX.
Improving performance
To improve REORG performance, run REORG concurrently on separate partitions
of a partitioned index space. The processor time for running REORG INDEX on
partitions of a partitioned index is approximately the same as the time for running a
single REORG INDEX job. The elapsed time is a fraction of the time for running a
single REORG job on the entire index.
By specifying a short delay time (less than the system timeout value, IRLMRWT),
you can reduce the impact on applications by reducing time-outs. You can use the
RETRY option to give the online REORG INDEX utility chances to complete
successfully. If you do not want to use RETRY processing, you can still use
DRAIN_WAIT to set a specific and more consistent limit on the length of drains.
RETRY allows an online REORG that is unable to drain the objects it requires to try
again after a set period (RETRY_DELAY). If the drain fails in the SWITCH phase,
the objects remain in their original state (read-only mode for SHRLEVEL
REFERENCE or read-write mode for SHRLEVEL CHANGE). Likewise, objects will
remain in their original state if the drain fails in the LOG phase.
Because application SQL statements can queue behind any unsuccessful drain that
the online REORG has tried, define a reasonable delay before you retry to allow
this work to complete; the default is 5 minutes.
When the default DRAIN WRITERS is used with SHRLEVEL CHANGE and RETRY,
multiple read-only log iterations can occur. Because online REORG might have to
more work when RETRY is specified, multiple or extended periods of restricted
access might occur. Applications that run with REORG must perform frequent
commits. During the interval between retries, the utility is still active; consequently,
other utility activity against the table space and indexes is restricted.
When you perform a table space REORG and specify both RETRY and SHRLEVEL
CHANGE, the size of the copy that REORG takes might increase.
Recommendation: Run online REORG during light periods of activity on the table
space or index.
If you terminate REORG with the TERM UTILITY command during the build phase,
the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the index is left in RECOVER-pending status. After you
recover the index, rerun the REORG job.
v For SHRLEVEL REFERENCE or CHANGE, the index keys are reloaded into a
shadow index, so the original index has not been affected by REORG. You can
rerun the job.
If you terminate REORG with the TERM UTILITY command during the log phase,
the index keys are reloaded into a shadow index, so the original index has not been
affected by REORG. You can rerun the job.
If you terminate REORG with the TERM UTILITY command during the switch
phase, all data sets that were renamed to their shadow counterparts are renamed
back, so the objects are left in their original state. You can rerun the job. If a
problem occurs in renaming to the original data sets, the objects are left in
RECOVER-pending status. You must recover the index.
The REORG-pending status is not reset until the UTILTERM execution phase. If the
REORG INDEX utility abnormally terminates or is terminated, the objects are left in
RECOVER-pending status. See Appendix C, “Resetting an advisory or restrictive
status,” on page 831 for information about resetting either status.
Table 64 lists any restrictive states that are set based on the phase in which
REORG INDEX terminated.
Table 64. Restrictive states set based on the phase in which REORG INDEX terminated
Phase Effect on restrictive status
UNLOAD No effect.
BUILD Sets REBUILD-pending (RBDP) status at the beginning of the build
phase, and resets RBDP at the end of the phase. SHRLEVEL NONE
places an index that was defined with the COPY YES attribute in
RECOVER pending (RECP) status.
LOG No effect.
SWITCH Under certain conditions, if TERM UTILITY is issued, it must complete
successfully; otherwise, objects might be placed in RECP status or
RBDP status. For SHRLEVEL REFERENCE or CHANGE, sets the RECP
status if the index was defined with the COPY YES attribute at the
beginning of the switch phase, and resets RECP at the end of the phase.
If the index was defined with COPY NO, this phase sets the index in
RBDP status at the beginning of the phase, and resets RBDP at the end
of the phase.
| If you restart REORG in the outlined phase, it re-executes from the beginning of the
| phase. DB2 always uses RESTART(PHASE) by default unless you restart the job in
| the UNLOAD phase. In this case, DB2 uses RESTART(CURRENT) by default.
For each phase of REORG and for each type of REORG INDEX (with SHRLEVEL
NONE, with SHRLEVEL REFERENCE, and with SHRLEVEL CHANGE), the table
indicates the types of restart that are allowed (CURRENT and PHASE). None
indicates that no restart is allowed. The ″Data sets required″ column lists the data
sets that must exist to perform the specified type of restart in the specified phase.
Table 65. REORG INDEX utility restart information
                  Type of restart        Type of restart        Type of restart
                  allowed for            allowed for            allowed for                                      Data sets
Phase             SHRLEVEL NONE          SHRLEVEL REFERENCE     SHRLEVEL CHANGE        required                  Notes
UNLOAD            CURRENT, PHASE         CURRENT, PHASE         None                   SYSUT1
BUILD             CURRENT, PHASE         CURRENT, PHASE         None                   SYSUT1                    1
LOG               Phase does not occur   Phase does not occur   None                   None
SWITCH            Phase does not occur   CURRENT, PHASE         CURRENT, PHASE         originals and shadows     1
Notes:
1. You can restart the utility with either RESTART or RESTART(PHASE). However, because this phase does not take
checkpoints, RESTART always re-executes from the beginning of the phase.
If you restart a REORG STATISTICS job that was stopped in the BUILD phase by
using RESTART CURRENT, inline statistics collection does not occur. To update
catalog statistics, run the RUNSTATS utility after the restarted job completes.
Restarting a REORG STATISTICS job with RESTART(PHASE) is conditional after
executing UNLOAD PAUSE. To determine if catalog table statistics are to be
updated when you restart a REORG STATISTICS job, see Table 66. This table lists
whether or not statistics are updated based on the execution phase and whether
the job is restarted with RESTART(CURRENT) or RESTART(PHASE).
Table 66. Whether statistics are updated when REORG INDEX STATISTICS jobs are
restarted in certain phases
Phase RESTART CURRENT RESTART PHASE
UTILINIT No Yes
UNLOAD No Yes
BUILD No Yes
For instructions on restarting a utility job, see Chapter 3, “Invoking DB2 online
utilities,” on page 19.
Table 67 shows which claim classes REORG INDEX drains and any restrictive state
the utility sets on the target object. The target is an index or index partition.
Table 67. Claim classes of REORG INDEX operations
                            REORG INDEX         REORG INDEX           REORG INDEX
Phase                       SHRLEVEL NONE       SHRLEVEL REFERENCE    SHRLEVEL CHANGE
UNLOAD                      DW/UTRO             DW/UTRO               CR/UTRW
BUILD                       DA/UTUT             none                  none
Last iteration of LOG (1)   n/a                 DA/UTUT               DW/UTRO
SWITCH                      n/a                 DA/UTUT               DA/UTUT
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTRO: Utility restrictive state, read only access allowed.
v UTUT: Utility restrictive state, exclusive control.
v none: Any claim, drain, or restrictive state for this object does not change in this phase.
Notes:
1. Applicable if you specified DRAIN ALL.
Table 68 on page 399 shows which utilities can run concurrently with REORG
INDEX on the same target object. The target object can be an index space or a
partition. If compatibility depends on particular options of a utility, that is also shown.
REORG INDEX does not set a utility restrictive state if the target object is an index
on DSNDB01.SYSUTILX.
When reorganizing an index, REORG leaves free pages and free space on each
page in accordance with the current values of the FREEPAGE and PCTFREE
parameters. (You can set those values by using the CREATE INDEX or ALTER
INDEX statement.) REORG leaves one free page after reaching the FREEPAGE
limit for each table in the index space.
| When you run REORG INDEX, the utility updates this range of used version
| numbers for indexes that are defined with the COPY NO attribute. REORG INDEX
| sets the OLDEST_VERSION column to the current version number, which indicates
| that only one version is active; DB2 can then reuse all of the other version
| numbers.
| Recycling of version numbers is required when all of the version numbers are being
| used. All version numbers are being used when one of the following situations is
| true:
| v The value in the CURRENT_VERSION column is one less than the value in the
| OLDEST_VERSION column.
| v The value in the CURRENT_VERSION column is 15 and the value in the
| OLDEST_VERSION column is 0 or 1.
| You can also run LOAD REPLACE, REBUILD INDEX, or REORG TABLESPACE to
| recycle version numbers for indexes that are defined with the COPY NO attribute.
| To recycle version numbers for indexes that are defined with the COPY YES
| attribute or for table spaces, run MODIFY RECOVERY.
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
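For example, a statement like the following, in which the object name and age are
illustrative, invokes MODIFY RECOVERY and, as noted above, recycles version
numbers for the table space and for its indexes that are defined with COPY YES:
MODIFY RECOVERY TABLESPACE DSN8D81A.DSN8S81D DELETE AGE(90)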
Example 3: Updating access path statistics in the catalog and catalog history
tables while reorganizing an index. The following control statement specifies that
while reorganizing index IUOE0801, REORG INDEX is to also collect statistics,
collect all of the distinct values in the key column combinations, and update access
path statistics in the catalog and catalog history tables. The utility is also to send
any output, including space and access path statistics, to SYSPRINT.
REORG INDEX IUOE0801
STATISTICS
KEYCARD
REPORT YES
UPDATE ACCESSPATH
HISTORY ACCESSPATH
The REORG INDEX statement specifies that the utility is to reorganize the indexes
that are included in the REORG_INDX list. The SHRLEVEL CHANGE option
indicates that during this processing, read and write access is allowed on the areas
that are being reorganized, with the exception of a 100-second period during the
last iteration of log processing. During this time, which is specified by the MAXRO
option, applications have read-only access. The WORKDDN option indicates that
REORG INDEX is to use the data set that is defined by the SUT1 template. If the
SWITCH phase does not begin by the deadline that is specified on the DEADLINE
option, processing terminates.
Figure 75. Example statements for job that reorganizes a list of indexes
You can determine when to run REORG for non-LOB table spaces by using the
OFFPOSLIMIT or INDREFLIMIT catalog query options. If you specify the
REPORTONLY option, REORG produces a report that indicates whether a REORG
is recommended without actually performing the REORG.
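For example, the following statement, in which the table space name and the
threshold values are illustrative, produces only a report:
REORG TABLESPACE DSN8D81A.DSN8S81D
      OFFPOSLIMIT 10 INDREFLIMIT 10 REPORTONLY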
Run the REORG TABLESPACE utility on a LOB table space to help increase the
effectiveness of prefetch. For a LOB table space, REORG TABLESPACE performs
these actions:
v Removes imbedded free space
v Attempts to make LOB pages contiguous
A REORG of a LOB table space does not reclaim physical space.
Do not execute REORG on an object if another DB2 subsystem holds retained locks on the
object or has long-running noncommitting applications that use the object. You can
use the DISPLAY GROUP command to determine whether a member’s status is
failed. You can use the DISPLAY DATABASE command with the LOCKS option to
determine if locks are held.
Output: If the table space or partition has the COMPRESS YES attribute, the data
is compressed when it is reloaded. If you specify the KEEPDICTIONARY option of
REORG, the current dictionary is used; otherwise a new dictionary is built.
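For example, a statement like the following, in which the table space name is
illustrative, reorganizes a compressed table space and keeps the existing
compression dictionary:
REORG TABLESPACE DSN8D81A.DSN8S81D KEEPDICTIONARY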
You can execute the REORG TABLESPACE utility on the table spaces in the DB2
catalog database (DSNDB06) and on some table spaces in the directory database
(DSNDB01). It cannot be executed on any table space in the DSNDB07 database.
Table 70 summarizes the results of REORG TABLESPACE according to the type of
REORG specified.
Table 70. Summary of REORG TABLESPACE output
Type of REORG specified           Results
| REORG TABLESPACE                Reorganizes all data and all indexes.
| REORG TABLESPACE PART n         Reorganizes data for PART n of the table space and
|                                 PART n of all partitioned indexes.
| REORG TABLESPACE PART n:m       Reorganizes data for PART n through PART m of the
|                                 table space and PART n through PART m of all
|                                 partitioned indexes.
Note: When SCOPE PENDING is also specified, the REORG TABLESPACE utility
reorganizes the specified table space only if it is in REORG-pending or advisory
REORG-pending status. For a partitioned table space, REORG TABLESPACE
SCOPE PENDING reorganizes only the partitions that are in REORG-pending or
advisory REORG-pending status.
Authorization required: To execute this utility on a user table space, you must use
a privilege set that includes one of the following authorities:
v REORG privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL authority
v SYSADM authority
To execute this utility on a table space in the catalog or directory, you must use a
privilege set that includes one of the following authorities:
v REORG privilege for the DSNDB06 (catalog) database
v DBADM or DBCTRL authority for the DSNDB06 (catalog) database
v Installation SYSOPR authority
v SYSCTRL authority
v SYSADM or Installation SYSADM authority
| If you use RACF access control with multilevel security and REORG TABLESPACE
| is to process a table space that contains a table that has multilevel security with
| row-level granularity, you must be identified to RACF and have an accessible valid
| security label. You must also meet the following authorization requirements:
| v For REORG statements that include the UNLOAD EXTERNAL option, each row
| is unloaded only if your security label dominates the data security label. If your
| security label does not dominate the data security label, the row is not unloaded,
| but DB2 does not issue an error message.
| v For REORG statements that include the DISCARD option, qualifying rows are
| discarded only if one of the following situations is true:
| – Write-down rules are in effect, you have write-down privilege, and your
| security label dominates the data’s security label.
| – Write-down rules are not in effect and your security label dominates the data’s
| security label.
| – Your security label is equivalent to the data security label.
| For more information about multilevel security and security labels, see Part 3 of
| DB2 Administration Guide.
If the LOB table space is defined with LOG NO, it is left in COPY-pending status
after REORG TABLESPACE completes processing.
Syntax diagram
The railroad syntax diagrams for the REORG TABLESPACE statement and its component
specifications (copy-spec, deadline-spec, drain-spec, table-change-spec,
labeled-duration-expression, statistics-spec, correlation-stats-spec,
FROM-TABLE-spec, selection-condition-spec, and the basic, BETWEEN, IN, LIKE, and
NULL predicates) are not reproduced here. The notes and defaults that accompany
the diagrams are as follows:
v SCOPE ALL, LOG YES, SHRLEVEL NONE, and FASTSWITCH YES are defaults on the
  main diagram. SORTDATA is also a default, but not if you specify UNLOAD ONLY
  or UNLOAD EXTERNAL.
v You cannot use UNLOAD PAUSE with the LIST option.
v On copy-spec, COPYDDN(SYSCOPY) is the default, except when you specify
  SHRLEVEL NONE and no partitions are in REORG-pending status.
v On deadline-spec, DEADLINE NONE is the default; on drain-spec, RETRY_DELAY 300
  is the default.
v OFFPOSLIMIT and INDREFLIMIT default to 10, DISCARDDN defaults to SYSDISC, and
  UNLDDN defaults to SYSREC.
v The forms !=, !<, and !> of the comparison operators are also supported in
  basic and quantified predicates. For details, see “comparison operators” on
  page 424.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database to which it belongs) that
is to be reorganized.
If you reorganize a table space, its indexes are also reorganized.
database-name
Is the name of the database to which the table space belongs. The
name cannot be DSNDB07. The default is DSNDB04.
table-space-name
Is the name of the table space that is to be reorganized. The name
cannot be SYSUTILX if the specified database name is DSNDB01.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The utility allows
one LIST keyword for each control statement of REORG TABLESPACE. The list
must contain only table spaces.
Do not specify FROM TABLE, STATISTICS TABLE table-name, or STATISTICS
INDEX index-name with REORG TABLESPACE LIST. If you want to collect
inline statistics for a list of table spaces, specify STATISTICS TABLE (ALL). If
you want to collect inline statistics for a list of indexes, specify STATISTICS
INDEX (ALL). Do not specify PART with LIST.
REORG TABLESPACE is invoked once for each item in the list.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
REUSE
When used with SHRLEVEL NONE, specifies that REORG is to logically reset
and reuse DB2-managed data sets without deleting and redefining them. If you
do not specify REUSE and SHRLEVEL NONE, DB2 deletes and redefines
DB2-managed data sets to reset them.
If a data set has multiple extents, the extents are not released if you use the
REUSE parameter.
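For example, assuming the sample table space name, the following control statement
reorganizes the data in place and reuses the existing DB2-managed data sets instead of
deleting and redefining them:
REORG TABLESPACE DSN8D81A.DSN8S81E
  SHRLEVEL NONE REUSE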
If you omit the PART keyword, the entire table space is reorganized.
If you specify the PART keyword for a LOB table space, DB2 issues an error
message, and utility processing terminates with return code 8.
| REBALANCE
| Specifies that REORG TABLESPACE is to set new partition boundaries so that
| pages are evenly distributed across the reorganized partitions. If the columns
| that are used in defining the partition boundaries have many duplicate values
| within the data rows, even balancing is not always possible. Specify
| REBALANCE for more than one partition; if you specify a single partition for
| rebalancing, REORG TABLESPACE ignores the specification.
| You can specify REBALANCE with SHRLEVEL NONE or SHRLEVEL
| REFERENCE. REBALANCE cannot be specified with SHRLEVEL CHANGE or
| SCOPE PENDING. Also, do not specify REBALANCE for partitioned table
| spaces with LOB columns. For additional restrictions, see “Restrictions when
| using REBALANCE” on page 436.
| At completion, DB2 invalidates plans, packages, and the dynamic statement cache.
LOG
Specifies whether records are to be logged during the RELOAD phase of
REORG. If the records are not logged, the table space is recoverable only after
an image copy is taken.
SORTDATA
Specifies that the data is to be unloaded by a table space scan and sorted in
clustering order.
| SORTDATA is ignored for some of the catalog and directory table spaces; see
| “Reorganizing the catalog and directory” on page 450.
NOSYSREC
Specifies that the output of sorting (if a clustering index exists) is the input to
reloading, without the REORG TABLESPACE utility using an unload data set.
| You can specify this option only if the REORG TABLESPACE job includes
| SHRLEVEL REFERENCE or SHRLEVEL NONE, and only if you do not specify
UNLOAD PAUSE or UNLOAD ONLY. See “Omitting the output data set” on
page 449 for additional information about using this option.
COPYDDN (ddname1,ddname2)
Specifies the DD statements for the primary (ddname1) and backup (ddname2)
copy data sets for the image copy.
ddname1 and ddname2 are the DD names.
The default is SYSCOPY for the primary copy. A full image copy data set is
created when REORG executes. The name of the data set is listed as a row in
the SYSIBM.SYSCOPY catalog table with ICTYPE=’R’ (as it is for the COPY utility).
v DELAY
v DEADLINE
v DRAIN_WAIT
v RETRY
v RETRY_DELAY
To determine which data sets are required when you execute REORG
SHRLEVEL REFERENCE, see “Data sets that REORG TABLESPACE
uses” on page 438.
To determine which data sets are required when you execute REORG
SHRLEVEL CHANGE, see “Data sets that REORG TABLESPACE
uses” on page 438.
If you specify CHANGE, you must create a mapping table and specify
the name of the mapping table with the MAPPINGTABLE option.
RETRY integer
Specifies the maximum number of retries that are to be attempted. Valid values
for integer are from 0 to 255. If the keyword is omitted, the utility does not
attempt a retry.
Specifying RETRY can lead to increased processing costs and can result in
multiple or extended periods of read-only access. For example, when you
specify RETRY and SHRLEVEL CHANGE, the size of the copy that is taken by
REORG might increase.
RETRY_DELAY integer
Specifies the minimum duration, in seconds, between retries. Valid values
for integer are from 1 to 1800. The default is 300 seconds.
MAPPINGTABLE table-name
Specifies the name of the mapping table that REORG TABLESPACE is to use
to map between the RIDs of data records in the original copy of the area and
the corresponding RIDs in the shadow copy. This parameter is required if you
specify SHRLEVEL CHANGE, and you must create a mapping table and an
index for it before running REORG TABLESPACE. See “Before running REORG
TABLESPACE” on page 435 for the columns and the index that the mapping
table must include. Enclose the table name in quotation marks if the name
contains a blank.
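A minimal sketch, assuming that a mapping table MYADM.MAP_TBL and its index already
exist with the required structure and that a SYSCOPY DD statement or TEMPLATE is
available for the inline image copy:
REORG TABLESPACE DSN8D81A.DSN8S81E
  SHRLEVEL CHANGE
  MAPPINGTABLE MYADM.MAP_TBL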
MAXRO integer
Specifies the maximum amount of time for the last iteration of log processing.
During that iteration, applications have read-only access.
The actual execution time of the last iteration might exceed the specified value
for MAXRO.
The ALTER UTILITY command can change the value of MAXRO.
The default is 300 seconds.
integer
integer is the number of seconds. Specifying a small positive value
reduces the length of the period of read-only access, but it might
increase the elapsed time for REORG to complete. If you specify a
huge positive value, the second iteration of log processing is probably
the last iteration.
DEFER
Specifies that the iterations of log processing with read-write access
can continue indefinitely. REORG never begins the final iteration with
read-only access, unless you change the MAXRO value with ALTER
UTILITY.
If you specify DEFER, you should also specify LONGLOG CONTINUE.
If you specify DEFER, and DB2 determines that the actual time for an
iteration and the estimated time for the next iteration are both less than
5 seconds, DB2 adds a 5 second pause to the next iteration. This
pause reduces consumption of processor time. The first time this
situation occurs for a given execution of REORG, DB2 sends message
DSNU362I to the console. The message states that the number of log
records that must be processed is small and that the pause occurs. To
change the MAXRO value and thus cause REORG to finish, execute
the ALTER UTILITY command. DB2 adds the pause whenever the situation occurs.
TIMEOUT
Specifies the action that DB2 is to take if a time-out condition occurs while the
utility is draining the objects.
ABEND
Indicates that, if a time-out condition occurs, DB2 is to leave the objects in
a UTRO or UTUT state.
TERM
Indicates that DB2 is to behave as follows if you specify the TERM option
and a time-out condition occurs:
1. DB2 issues an implicit TERM UTILITY command, causing the utility to
end with a return code 8.
2. DB2 issues the DSNU590I and DSNU170I messages.
3. DB2 leaves the objects in a read-write state.
FASTSWITCH
Specifies which switch methodology is to be used for a given reorganization.
YES
Enables the SWITCH phase to use the FASTSWITCH methodology. This
option is not allowed for the catalog (DSNDB06) or directory (DSNDB01).
The default is YES.
NO
Causes the SWITCH phase to use IDCAMS RENAME methodology.
OFFPOSLIMIT integer
Indicates that the specified value is to be compared to the value that DB2
calculates for the explicit clustering indexes of every table in the specified
partitions that are in SYSIBM.SYSINDEXPART. The value is calculated as follows:
(NEAROFFPOSF + FAROFFPOSF) × 100 / CARDF
integer is the value that is to be compared and can range from 0 to 65535. The
default value is 10.
INDREFLIMIT integer
Indicates that the specified value is to be compared to the value that DB2
calculates for the specified partitions in SYSIBM.SYSTABLEPART for the
specified table space. The value is calculated as follows:
(NEARINDREF + FARINDREF) × 100 / CARDF
integer is the value that is to be compared and can range from 0 to 65535. The
default value is 10.
REPORTONLY
Specifies that REORG is only to be recommended, not performed. REORG
produces a report with one of the following return codes:
1 No limit met; no REORG is to be performed or recommended.
2 A REORG is to be performed or recommended.
UNLOAD
Specifies whether the utility job is to continue processing or end after the data
is unloaded. You cannot use UNLOAD PAUSE if you specify the LIST option.
ONLY
Specifies that, after the data has been unloaded, the utility job ends and the
status that corresponds to this utility ID is removed from SYSIBM.SYSUTIL.
If you specify UNLOAD ONLY with REORG TABLESPACE, any edit routine
or field procedure is executed during record retrieval in the unload phase.
This option is not allowed for any table space in DSNDB01 or DSNDB06.
The DISCARD and WHEN options are not allowed with UNLOAD ONLY.
EXTERNAL
Specifies that, after the data has been unloaded, the utility job is to end and
the status that corresponds to this utility ID is removed.
The UNLOAD utility has more functions. If you specify UNLOAD
EXTERNAL with REORG TABLESPACE, rows are decompressed, edit
routines are decoded, field procedures are decoded, and SMALLINT,
INTEGER, FLOAT, DECIMAL, DATE, TIME, and TIMESTAMP columns are
converted to DB2 external format. Validation procedures are not invoked.
This option is not allowed for any table space in DSNDB01 or DSNDB06.
The DISCARD option is not allowed with UNLOAD EXTERNAL.
NOPAD
Specifies that the variable-length columns in the unloaded or discarded records
are to occupy the actual data length without additional padding. The unloaded
records can have varying lengths. If you do not specify NOPAD, default
REORG processing pads variable-length columns in the unloaded or discarded
records to their maximum length; the unloaded or discarded records have equal
lengths for each table.
You can specify the NOPAD option only with UNLOAD EXTERNAL or with
UNLOAD DISCARD.
Although the LOAD utility processes records with variable-length columns that
were unloaded or discarded with the NOPAD option, these records cannot be
processed by applications that process only fields that are in fixed positions.
For the generated LOAD statement to provide a NULLIF condition for fields that
are not in a fixed position, DB2 generates an input field definition with a name
in the form of DSN_NULL_IND_nnnnn, where nnnnn is the number of the
associated column.
For example, the LOAD statement that is generated for the EMP sample table
looks similar to the LOAD statement that is in Figure 76 on page 423.
Figure 76. Sample LOAD statement generated by REORG TABLESPACE with the NOPAD
keyword
FROM TABLE
Specifies the tables that are to be reorganized. The table space that is specified
in REORG TABLESPACE can store more than one table. All tables are
unloaded for UNLOAD EXTERNAL, and all tables might be subject to
DISCARD. If you want to qualify the rows that are to be unloaded or discarded,
you must use the FROM TABLE statement.
Do not specify FROM TABLE with REORG TABLESPACE LIST.
table-name
Specifies the name of the table that is to be qualified by the following
WHEN clause. The table must be described in the catalog and must not be
a catalog table. If the table name is not qualified by an authorization ID, the
authorization ID of the person who invokes the utility job step is used as the
qualifier of the table name. Enclose the table name in quotation marks if the
name contains a blank.
WHEN
Indicates which records in the table space are to be unloaded (for UNLOAD
EXTERNAL) or discarded (for DISCARD). If you do not specify a WHEN clause
for a table in the table space, all of the records are unloaded (for UNLOAD
EXTERNAL), or none of the records is discarded (for DISCARD).
The option following WHEN describes the conditions for UNLOAD or DISCARD
of records from a table and must be enclosed in parentheses.
selection condition
Specifies a condition that is true, false, or unknown about a specific row.
When the condition is true, the row qualifies for UNLOAD or DISCARD.
When the condition is false or unknown, the row does not qualify.
A selection condition consists of at least one predicate and any logical
operators (AND, OR, NOT). The result of a selection condition is derived by
applying the specified logical operators to the result of each specified
predicate. If logical operators are not specified, the result of the selection
condition is the result of the specified predicate.
Selection conditions within parentheses are evaluated first. If the order of
evaluation is not specified by parentheses, AND is applied before OR.
| If the control statement is in the same encoding scheme as the input data,
| you can code character constants in the control statement. Otherwise, if the
| control statement is not in the same encoding scheme as the input data,
| you must code the condition with hexadecimal constants. For example, if
| the table space is in EBCDIC and the control statement is in UTF-8, use
| (1:1)=X’F1’ in the condition rather than (1:1)=’1’.
| Restriction: REORG TABLESPACE cannot filter rows that contain
| encrypted data.
predicate
A predicate specifies a condition that is true, false, or unknown about a
given row or group.
basic predicate
Specifies the comparison of a column with a constant. If the value of
the column is null, the result of the predicate is unknown. Otherwise,
the result of the predicate is true or false.
Predicate Is true if and only if
column-name = constant The column is equal to the constant or
labeled duration expression.
column-name < > constant The column is not equal to the constant
or labeled duration expression.
column-name > constant The column is greater than the constant
or labeled duration expression.
column-name < constant The column is less than the constant or
labeled duration expression.
column-name > = constant The column is greater than or equal to
the constant or labeled duration
expression.
column-name < = constant The column is less than or equal to the
constant or labeled duration expression.
In certain code pages (for example, code page 850), the forms ¬=, ¬<, and ¬>
are also supported. All these product-specific forms of the comparison
operators are intended only to support existing REORG statements that use
these operators and are not recommended for use in new REORG statements.
A not sign (¬), or the character that must be used in its place in certain
countries, can cause parsing errors in statements that are passed from
one DBMS to another. The problem occurs if the statement undergoes
character conversion with certain combinations of source and target
CCSIDs. To avoid this problem, substitute an equivalent operator for
any operator that includes a not sign. For example, substitute ’< >’ for
’¬=’, ’<=’ for ’¬>’, and ’>=’ for ’¬<’.
BETWEEN predicate
Indicates whether a given value lies between two other given values
that are specified in ascending order. Each of the predicate’s two forms
(BETWEEN and NOT BETWEEN) has an equivalent search condition,
as shown in Table 71. If relevant, the table also shows any equivalent
predicates.
Table 71. BETWEEN predicates and their equivalent search conditions
Predicate Equivalent predicate Equivalent search condition
column BETWEEN value1 (column >= value1 AND
None
AND value2 column <= value2)
column NOT BETWEEN NOT(column BETWEEN (column < value1 OR column
value1 AND value2 value1 AND value2) > value2)
Note: The values can be constants or labeled duration expressions.
For example, the following predicate is true for any row when salary is
greater than or equal to 10 000 and less than or equal to 20 000:
SALARY BETWEEN 10000 AND 20000
labeled-duration-expression
Specifies an expression that begins with one of the following special registers:
v CURRENT DATE (CURRENT_DATE is acceptable.)
v CURRENT TIMESTAMP (CURRENT_TIMESTAMP is acceptable.)
Optionally, the expression contains the arithmetic operations of addition
or subtraction, expressed by a number followed by one of the seven
duration keywords:
v YEARS (or YEAR)
v MONTHS (or MONTH)
v DAYS (or DAY)
v HOURS (or HOUR)
v MINUTES (or MINUTE)
v SECONDS (or SECOND)
v MICROSECONDS (or MICROSECOND)
The order in which labeled date durations are added to and subtracted
from dates can affect the results. When you add labeled date durations
to a date, specify them in the order of YEARS + MONTHS + DAYS.
When you subtract labeled date durations from a date, specify them in
the order of DAYS - MONTHS - YEARS. For example, to add one year
and one day to a date, specify the following code:
CURRENT DATE + 1 YEAR + 1 DAY
To subtract one year, one month, and one day from a date, specify the
following code:
CURRENT DATE − 1 DAY − 1 MONTH − 1 YEAR
IN predicate
Indicates whether a given value is equal to at least one of the values in a
list. For example, the following predicate is true for any row whose
employee is in department D11, B01, or C01:
WORKDEPT IN (’D11’, ’B01’, ’C01’)
LIKE predicate
Qualifies strings that have a certain pattern. Specify the pattern by
using a string in which the underscore and percent sign characters can
be used as wildcard characters. The underscore character (_)
represents a single, arbitrary character. The percent sign (%) represents
a string of zero or more arbitrary characters.
In this description, let x denote the column that is to be tested and y
denote the pattern in the string constant.
The following rules apply to predicates of the form “x LIKE y...”. If NOT
is specified, the result is reversed.
v When x or y is null, the result of the predicate is unknown.
v When y is empty and x is not empty, the result of the predicate is
false.
v When x is empty and y is not empty, the result of the predicate is
false unless y consists only of one or more percent signs.
v When x and y are both empty, the result of the predicate is true.
v When x and y are both not null, the result of the predicate is true if x
matches the pattern in y and false if x does not match the pattern in
y.
The pattern string and the string that is to be tested must be of the
same type; that is, both x and y must be character strings, or both x
and y must be graphic strings. When x and y are graphic strings, a
character is a DBCS character. When x and y are character strings and
x is not mixed data, a character is an SBCS character, and y is
interpreted as SBCS data regardless of its subtype. The rules for
mixed-data patterns are described in “Strings and patterns” on page
429.
Within the pattern, a percent sign (%) or underscore character (_) can
represent the literal occurrence of a percent sign or underscore
character. To have a literal meaning, each character must be preceded
by an escape character.
The ESCAPE clause designates a single character. You can use that
character, and only that character, multiple times within the pattern as
an escape character. When the ESCAPE clause is omitted, no
character serves as an escape character and percent signs and
underscores in the pattern can only be used to represent arbitrary
characters; they cannot represent their literal occurrences.
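For example, assuming a hypothetical table and column, the following FROM TABLE
specification qualifies rows whose value begins with the literal characters AB_
followed by anything; the plus sign is designated as the escape character, so the
underscore is taken literally rather than as a wildcard:
FROM TABLE MYADM.PARTS
  WHEN (PART_CODE LIKE ’AB+_%’ ESCAPE ’+’)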
NULL predicate
Specifies a test for null values.
If the value of the column is null, the result is true. If the value is not
null, the result is false. If NOT is specified, the result is reversed.
KEEPDICTIONARY
Prevents REORG TABLESPACE from building a new compression dictionary
when unloading the rows. The efficiency of REORG increases with the
KEEPDICTIONARY option for the following reasons:
v The processing cost of building the compression dictionary is eliminated.
v Existing compressed rows do not need to be compressed again.
v Existing compressed rows do not need to be expanded, unless indexes
require it or SORTDATA is used.
However, consider omitting KEEPDICTIONARY in the following situations:
v If the data has changed significantly since the last dictionary was built,
rebuilding the dictionary might save a significant amount of space.
v If the current dictionary was built by using the LOAD utility, building it by
using REORG might produce a better compression dictionary.
For more information about specifying or omitting the KEEPDICTIONARY
option, see “Compressing data” on page 236.
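For example, assuming the sample table space name, the following control statement
reorganizes a compressed table space without rebuilding its compression dictionary:
REORG TABLESPACE DSN8D81A.DSN8S81E
  KEEPDICTIONARY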
SAMPLE integer
Indicates the percentage of rows that are to be sampled when collecting
column statistics. You can specify any value from 1 through 100. The default is
25. The SAMPLE option is not allowed for LOB table spaces.
COLUMN
Specifies columns for which column information is to be gathered.
You can specify this option only if you specify a particular table for which
statistics are to be gathered (TABLE (table-name)). If you specify particular
tables and do not specify the COLUMN option, the default, COLUMN(ALL), is
used. If you do not specify a particular table when using the TABLE option, you
cannot specify the COLUMN option; however, COLUMN(ALL) is assumed.
(ALL)
Specifies that statistics are to be gathered for all columns in the table.
(column-name, ...)
Specifies the columns for which statistics are to be gathered.
You can specify a list of column names; the maximum is 10. If you specify
more than one column, separate each name with a comma.
INDEX
Specifies indexes for which information is to be gathered. Column information is
gathered for the first column of the index. All the indexes must be associated
with the same table space, which must be the table space that is specified in
the TABLESPACE option.
Do not specify STATISTICS INDEX index-name with REORG TABLESPACE
LIST. Instead, specify STATISTICS INDEX (ALL).
(ALL) Specifies that the column information is to be gathered for all indexes
that are defined on tables that are contained in the table space.
(index-name)
Specifies the indexes for which information is to be gathered. Enclose
the index name in quotation marks if the name contains a blank.
KEYCARD
Indicates that all of the distinct values in all of the 1 to n key column
combinations for the specified indexes are to be collected. n is the number of
columns in the index.
FREQVAL
Specifies that frequent-value statistics are to be collected. If you specify
FREQVAL, you must also specify NUMCOLS and COUNT.
NUMCOLS
Indicates the number of key columns to concatenate together when you
collect frequent values from the specified index. Specifying 3 means that
DB2 is to collect frequent values on the concatenation of the first three key
columns. The default is 1, which means DB2 is to collect frequent values
on the first key column of the index.
COUNT
Indicates the number of frequent values that are to be collected. For
example, specifying 15 means that DB2 is to collect 15 frequent values
from the specified key columns. The default is 10.
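For example, assuming a sample index name, the following control statement collects,
during the reorganization, the cardinality of each leading combination of key columns
and the 15 most frequent values of the first two key columns:
REORG TABLESPACE DSN8D81A.DSN8S81E
  STATISTICS
    INDEX(DSN8810.XEMP1
          KEYCARD
          FREQVAL NUMCOLS 2 COUNT 15)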
REPORT
Specifies whether a set of messages is to be generated to report the collected
statistics.
NO
Indicates that the set of messages is not to be sent as output to
SYSPRINT. The default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT.
The generated messages are dependent on the combination of keywords
(such as TABLESPACE, INDEX, TABLE, and COLUMN) that are specified
with the RUNSTATS utility. However, these messages are not dependent
on the specification of the UPDATE option. REPORT YES always generates
a report of SPACE and ACCESSPATH statistics.
| UPDATE
Indicates whether the collected statistics are to be inserted into the catalog
tables. UPDATE also allows you to select statistics that are used for access
path selection or statistics that are used by database administrators.
ALL Indicates that all collected statistics are to be updated in the catalog.
The default is ALL.
ACCESSPATH
Indicates that only the catalog table columns that provide statistics that
are used for access path selection are to be updated.
SPACE
Indicates that only the catalog table columns that provide statistics to
help database administrators assess the status of a particular table
space or index are to be updated.
NONE Indicates that no catalog tables are to be updated with the collected
statistics. This option is valid only when REPORT YES is specified.
| HISTORY
Specifies that all catalog table inserts or updates to the catalog history tables
are to be recorded.
The default value is whatever value is specified in the STATISTICS HISTORY
field on panel DSNTIPO.
ALL Indicates that all collected statistics are to be updated in the catalog
history tables.
ACCESSPATH
Indicates that only the catalog history table columns that provide
statistics that are used for access path selection are to be updated.
SPACE
Indicates that only space-related catalog statistics are to be updated in
catalog history tables.
NONE Indicates that no catalog history tables are to be updated with the
collected statistics.
FORCEROLLUP
Specifies whether aggregation or rollup of statistics is to take place when
RUNSTATS is executed even if statistics have not been gathered on some
partitions; for example, partitions have not had any data loaded. Aggregate
statistics are used by the optimizer to select the best access path.
YES Indicates that forced aggregation or rollup processing is to be done,
even though some partitions might not contain data.
If data is not available for all partitions, message DSNU623I is issued if the
installation value for STATISTICS ROLLUP on panel DSNTIPO is set to NO.
PUNCHDDN ddname
Specifies the DD statement for a data set that is to receive the LOAD utility
control statements that are generated by REORG TABLESPACE UNLOAD
EXTERNAL or REORG TABLESPACE DISCARD FROM TABLE ... WHEN.
ddname is the DD name.
The default is SYSPUNCH.
PUNCHDDN is required if the limit key of the last partition of a partitioned table
space has been reduced.
The PUNCHDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility processing
detects that the specified name is both a DD name in the current job step and a
TEMPLATE name, the utility uses the DD name. For more information about
TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on page 575.
DISCARDDN ddname
Specifies the DD statement for a discard data set, which contains copies of
records that meet the DISCARD FROM TABLE ... WHEN specification.
ddname is the DD name.
If you omit the DISCARDDN option, the utility saves discarded records only if a
SYSDISC DD statement is in the JCL input.
The default is SYSDISC.
The DISCARDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility processing
detects that the specified name is both a DD name in the current job step and a
TEMPLATE name, the utility uses the DD name. For more information about
TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on page 575.
UNLDDN ddname
Specifies the name of the unload data set.
ddname is the DD name of the unload data set. The default is SYSREC.
The UNLDDN keyword specifies either a DD name or a TEMPLATE name
specification from a previous TEMPLATE control statement. If utility processing
detects that the specified name is both a DD name in the current job step and a
TEMPLATE name, the utility uses the DD name. For more information about
TEMPLATE specifications, see Chapter 31, “TEMPLATE,” on page 575.
SORTDEVT device-type
Specifies the device type for temporary data sets that are to be dynamically
allocated by DFSORT.
device-type is the device type; it can be any device that is acceptable to the
DYNALLOC parameter of the SORT or OPTION control statement for DFSORT.
If you omit SORTDEVT and require a sort of the index keys, you must provide
the DD statements that the sort program needs for the temporary data sets.
SORTDEVT is ignored for the catalog and directory table spaces that are listed
in “Reorganizing the catalog and directory” on page 450.
The utility does not allow a TEMPLATE specification to dynamically allocate sort
work data sets. The SORTDEVT keyword controls dynamic allocation of these
data sets.
SORTNUM integer
Specifies the number of temporary data sets that are to be dynamically
allocated for all sorts that REORG performs.
integer is the number of temporary data sets.
If you omit SORTDEVT, SORTNUM is ignored. If you use SORTDEVT and omit
SORTNUM, no value is passed to DFSORT. DFSORT uses its own default.
SORTNUM is ignored for the catalog and directory table spaces listed in
“Reorganizing the catalog and directory” on page 450.
PREFORMAT
Specifies that the remaining pages are to be preformatted up to the high RBA in
the table space and index spaces that are associated with the table that is
specified in FROM TABLE table-name option. The preformatting occurs after the
data is loaded and the indexes are built.
PREFORMAT can operate on an entire table space and its index spaces, or on
a partition of a partitioned table space and its corresponding partitioning index
space.
PREFORMAT is ignored if you specify UNLOAD ONLY or UNLOAD
EXTERNAL.
For more information about the PREFORMAT option, see “Improving
performance with LOAD or REORG PREFORMAT” on page 241.
DISCARD
Specifies that records that meet the specified WHEN conditions are to be
discarded during REORG TABLESPACE UNLOAD CONTINUE or UNLOAD
PAUSE. If you specify DISCARDDN or a SYSDISC DD statement in the JCL,
discarded records are saved in the associated data set.
| You can specify any SHRLEVEL option with DISCARD; however, if you specify
| SHRLEVEL CHANGE, modifications that are made during the reorganization to
| data rows that match the discard criteria are not permitted. In this case,
| REORG TABLESPACE terminates with an error.
If you specify DISCARD, rows are decompressed and edit routines are
decoded. If the discarded rows are also written to a discard data set, rows are
decoded by any field procedure, and the following columns are converted to
DB2 external format:
v SMALLINT
v INTEGER
v FLOAT
v DECIMAL
v TIME
v TIMESTAMP
Otherwise, edit routines or field procedures are bypassed on both the UNLOAD
and RELOAD phases for table spaces. Validation procedures are not invoked
during either phase.
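A minimal sketch, assuming the sample employee table and an illustrative discard
criterion; a SYSDISC DD statement or TEMPLATE saves the discarded rows, and a
SYSPUNCH DD statement or TEMPLATE receives the generated LOAD statement:
REORG TABLESPACE DSN8D81A.DSN8S81E
  UNLOAD CONTINUE
  DISCARD FROM TABLE DSN8810.EMP
    WHEN (WORKDEPT = ’D11’)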
Region size: The recommended minimum region size is 4096 KB. Region sizes
greater than 32 MB enable increased parallelism for index builds.
The number of rows in the mapping table should not exceed 110% of the number of
rows in the table space or partition that is to be reorganized. The mapping table
must have only the columns and the index that are created by the following SQL
statements:
CREATE TABLE table-name1
(TYPE CHAR(1) NOT NULL,
SOURCE_RID CHAR(5) NOT NULL,
TARGET_XRID CHAR(9) NOT NULL,
LRSN CHAR(6) NOT NULL);
CREATE UNIQUE INDEX index-name1 ON table-name1
(SOURCE_RID ASC, TYPE, TARGET_XRID, LRSN);
You must specify the TARGET_XRID column as CHAR(9), even though the RIDs
are 5 bytes long.
You must have DELETE, INSERT, SELECT, and UPDATE authorization on the
mapping table.
You can run more than one REORG SHRLEVEL CHANGE job concurrently, either
on separate table spaces or on different partitions of the same table space. When
you run concurrently with other jobs, each REORG job must have a separate
mapping table. The mapping tables do not need to reside in separate table spaces.
If only one mapping table exists, the REORG jobs must be scheduled to run
serially. If more than one REORG job tries to access the same mapping table at the
same time, one of the REORG jobs fails.
For a sample of using REORG with SHRLEVEL CHANGE and a sample mapping
table and index, see job sample DSNTEJ1 in DB2 Installation Guide.
| For example, assume that you create a table space with three partitions. Table 74
| shows the mapping that exists between the physical and logical partition numbers.
| Table 74. Mapping of physical and logical partition numbers when a table space with three
| partitions is created.
| Logical partition number Physical partition number
| 1 1
| 2 2
| 3 3
|
| Now assume that the partitions are later rotated and new partitions are added, so
| that physical partition 1 becomes logical partition 2 and physical partition 2
| becomes logical partition 4. If you then execute REORG TABLESPACE REBALANCE
| PART 1:2, the statement requests a reorganization and rebalancing of physical
| partitions 1 and 2; that is, the utility processes logical partitions 2 and 4. If
| during the course of rebalancing, the utility needs to move keys from logical
| partition 2 to logical partition 3, the job fails, because logical partition 3 is not within
| the specified physical partition range.
Notes:
1. Required when collecting inline statistics on at least one data-partitioned secondary
index.
2. Required if you specify DISCARDDN.
3. Required if you specify PUNCHDDN.
4. Required unless NOSYSREC or SHRLEVEL CHANGE is specified.
5. Required if a partition is in REORG-pending status or COPYDDN, RECOVERYDDN,
SHRLEVEL REFERENCE, or SHRLEVEL CHANGE is specified.
6. Required if NOSYSREC or SHRLEVEL CHANGE is specified, but SORTDEVT is not
specified.
7. Required if any indexes exist and SORTDEVT is not specified.
8. If the DYNALLOC parameter of the SORT program is not turned on, you need to allocate the
data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space
Object that is to be reorganized.
Calculating the size of the unload data set: The required size for the unload data
set varies depending on the options that you use for REORG.
1. If you use REORG with UNLOAD PAUSE or CONTINUE and you specify
KEEPDICTIONARY (assuming that a compression dictionary already exists), the
size of the unload data set, in bytes, is the VSAM high-allocated RBA for the
table space. You can obtain the high-allocated RBA from the associated VSAM
catalog.
For SHRLEVEL CHANGE, also add the result of the following calculation (in
bytes) to the VSAM high-used RBA:
number of records * 11
2. If you use REORG with UNLOAD ONLY, UNLOAD PAUSE, or CONTINUE and
you do not specify KEEPDICTIONARY, you can calculate the size of the unload
data set, in bytes, by using the following formula:
maximum row length * number of rows
The maximum row length is the row length, including the 6-byte record prefix,
plus the length of the longest clustering key. If multiple tables exist in the table
space, use the following formula to determine the maximum row length:
Sum over all tables (row length * number of rows)
For SHRLEVEL CHANGE, also add the result of the following formula to the
preceding result:
(21 * ((NEARINDREF + FARINDREF) * 1.1))
3. The accuracy of the data set size calculation depends on recent information in the SYSTABLEPART catalog table.
For certain table spaces in the catalog and directory, the unload data set for the
table spaces have a different format. The calculation for the size of this data set is
as follows:
data set size in bytes = (28 + longrow) * numrows
where longrow is the length of the longest row and numrows is the number of rows
in the table space.
See “Reorganizing the catalog and directory” on page 450 for more information
about reorganizing catalog and directory table spaces.
| Calculating the size of the work data sets: Allocating twice the space that is used
| by the input data sets is usually adequate for the sort work data sets. For
| compressed data, double again the amount of space that is allocated for the sort
| work data sets if you use either of the following REORG options:
| v UNLOAD PAUSE without KEEPDICTIONARY
| v UNLOAD CONTINUE without KEEPDICTIONARY
Using two or three large SORTWKnn data sets is preferable to using several small
ones. If adequate space is not available, you cannot run REORG.
Specifying a destination for DFSORT messages: The REORG utility job step
must contain a UTPRINT DD statement that defines a destination for messages that
are issued by DFSORT during the SORT phase of REORG. DB2I, the %DSNU
CLIST command, and the DSNUPROC procedure use the following default DD
statement:
//UTPRINT DD SYSOUT=A
| Calculating the size of the sort work data sets: To calculate the approximate
| size (in bytes) of the ST01WKnn data set, use the following formula:
| maximum record length
| Maximum record length of the SYSCOLDISTSTATS record that is
| processed when collecting frequency statistics (You can obtain this value
| from the RECLENGTH column in SYSTABLES.)
| numcols
| Number of key columns to concatenate when you collect frequent values
| from the specified index.
| count
| Number of frequent values that DB2 is to collect.
For user-managed data sets, you must preallocate the shadow data sets before you
execute REORG with SHRLEVEL REFERENCE or SHRLEVEL CHANGE. If a table
space, partition, or index resides in DB2-managed data sets and shadow data sets
do not already exist when you execute REORG, DB2 creates the shadow data sets.
At the end of REORG processing, the DB2-managed shadow data sets are deleted.
Shadow data set names: Each shadow data set must have the following name:
catname.DSNDBx.dbname.psname.y0001.Lnnn
To determine the names of existing shadow data sets, execute one of the following
queries against the SYSTABLEPART or SYSINDEXPART catalog tables:
| SELECT DBNAME, TSNAME, IPREFIX
| FROM SYSIBM.SYSTABLEPART
| WHERE DBNAME = ’dbname’ AND TSNAME = ’psname’;
| SELECT DBNAME, IXNAME, IPREFIX
| FROM SYSIBM.SYSINDEXES X, SYSIBM.SYSINDEXPART Y
| WHERE X.NAME = Y.IXNAME AND X.CREATOR = Y.IXCREATOR
| AND X.DBNAME = ’dbname’ AND X.INDEXSPACE = ’psname’;
For a partitioned table space, DB2 returns rows from which you select the row for
the partitions that you want to reorganize.
For example, assume that you have a ten-partition table space and you want to
determine a naming convention for the data set in order to successfully execute the
REORG utility with the SHRLEVEL CHANGE PART 2:6 options. The following
queries of the DB2 catalog tables SYSTABLEPART and SYSINDEXPART provide
the required information:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = ’DBDV0701’ AND TSNAME = ’TPDV0701’
ORDER BY PARTITION;
SELECT IXNAME, PARTITION, IPREFIX FROM SYSIBM.SYSINDEXPART
WHERE IXNAME = ’IXDV0701’
ORDER BY PARTITION;
The preceding queries produce the information that is shown in Table 79 and
Table 80.
To execute REORG SHRLEVEL CHANGE PART 2:6, you need to preallocate the
following shadow objects. The naming convention for these objects uses information
from the query results that are shown in Table 79 and Table 80.
vcatnam.DSNDBC.DBDV0701.TPDV0701.J0001.A002
vcatnam.DSNDBC.DBDV0701.TPDV0701.I0001.A003
vcatnam.DSNDBC.DBDV0701.TPDV0701.J0001.A004
vcatnam.DSNDBC.DBDV0701.TPDV0701.I0001.A005
vcatnam.DSNDBC.DBDV0701.TPDV0701.I0001.A006
vcatnam.DSNDBC.DBDV0701.IXDV0701.J0001.A002
vcatnam.DSNDBC.DBDV0701.IXDV0701.I0001.A003
vcatnam.DSNDBC.DBDV0701.IXDV0701.J0001.A004
vcatnam.DSNDBC.DBDV0701.IXDV0701.I0001.A005
vcatnam.DSNDBC.DBDV0701.IXDV0701.I0001.A006
Defining shadow data sets: Consider the following actions when you preallocate
the data sets:
v Allocate the shadow data sets according to the rules for user-managed data sets.
v Define the shadow data sets as LINEAR.
v Use SHAREOPTIONS(3,3).
v Define the shadow data sets as EA-enabled if the original table space or index
space is EA-enabled.
v Allocate the shadow data sets on the volumes that are defined in the storage
group for the original table space or index space.
If you specify a secondary space quantity, DB2 does not use it. Instead, DB2 uses
the SECQTY value for the table space or index space.
Recommendation: Use the MODEL option, which causes the new shadow data set
to be created like the original data set. This method is shown in the following
example:
DEFINE CLUSTER +
(NAME(’catname.DSNDBC.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBC.dbname.psname.y0001.L001’)) +
DATA +
(NAME(’catname.DSNDBD.dbname.psname.x0001.L001’) +
MODEL(’catname.DSNDBD.dbname.psname.y0001.L001’) )
| DB2 treats preallocated shadow data sets as DB2-managed data sets. For
| example, DB2 deletes a preallocated shadow data set for a nonpartitioning index at
| the end of REORG PART.
| Creating shadow data sets for indexes: When you preallocate data sets for
| indexes, create the shadow data sets as follows:
| v Create shadow data sets for the partition of the table space and the
| corresponding partition in each partitioning index and data-partitioned secondary
| index.
| v Create a shadow data set for logical partitions of nonpartitioned secondary
| indexes.
| Use the same naming scheme for these index data sets as you use for other data
| sets that are associated with the base index, except use J0001 instead of I0001.
| For more information about this naming scheme, see the information about the
| shadow data set naming convention at the beginning of this section, “Shadow data
| sets” on page 442.
Estimating the size of shadow data sets: If you have not changed the value of
FREEPAGE or PCTFREE, the amount of required space for a shadow data set is
comparable to the amount of required space for the original data set. However, for
REORG PART, the required space for the shadow data set of the logical partition of
a nonpartitioning index is proportional to the size of that logical partition rather
than to the size of the entire index.
For example, a partitioned table space with 100 partitions and data that is relatively
evenly balanced across the partitions needs a shadow data set for the logical
partition that is approximately 1% of the size of the original nonpartitioning index.
Preallocating shadow data sets for REORG PART: By creating the shadow data
sets before executing REORG PART, even for DB2-managed data sets, you prevent
possible over-allocation of the disk space during REORG processing. When
reorganizing a partition, you must create the shadow data sets for the partition of
the table space and for the partition of the partitioning index. In addition, before
executing REORG PART with SHRLEVEL REFERENCE or SHRLEVEL CHANGE
on partition mmm of a partitioned table space, you must create a shadow data set
for each nonpartitioning index that resides in user-defined data sets. Each shadow
data set is to be used for a copy of the logical partition of the index. For information
about naming conventions for shadow data sets, see the information about the
shadow data set naming convention at the beginning of this section, “Shadow data
sets” on page 442.
When reorganizing a range of partitions, you must allocate a single shadow data
set for each logical partition. Each logical partition within the range specified is
contained in the single shadow data set.
You can determine when to run REORG for non-LOB table spaces and indexes by
using the OFFPOSLIMIT and INDREFLIMIT catalog query options. If you specify
the REPORTONLY option, REORG produces a report that indicates whether a
REORG is recommended; a REORG is not performed.
When you specify the catalog query options along with the REPORTONLY option,
REORG produces a report with one of the following return codes:
1 No limit met; no REORG is performed or recommended.
2 REORG is performed or recommended.
Information from the SYSTABLEPART catalog table can also tell you how well disk
space is being used. If you want to find the number of varying-length rows that
were relocated to other pages because of an update, run RUNSTATS, and then
issue the following statement:
SELECT CARD, NEARINDREF, FARINDREF
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = 'XXX'
AND TSNAME = 'YYY';
A large number (relative to previous values that you have received) for FARINDREF
indicates that I/O activity on the table space is high. If you find that this number
increases over a period of time, you probably need to reorganize the table space to
improve performance, and increase PCTFREE or FREEPAGE for the table space
with the ALTER TABLESPACE statement.
To determine whether the rows of a table are stored in the same order as the entries
of its clustering index, query the NEAROFFPOSF and FAROFFPOSF columns of the
SYSIBM.SYSINDEXPART catalog table for that index.
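A sketch of such a query, with placeholder values for the index creator and index name:
SELECT NEAROFFPOSF, FAROFFPOSF, CARDF
  FROM SYSIBM.SYSINDEXPART
  WHERE IXCREATOR = ’index_creator’
  AND IXNAME = ’index_name’;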
Several indicators are available to signal a time for reorganizing table spaces. A
large value for FAROFFPOSF might indicate that clustering is deteriorating. In this
case, reorganizing the table space can improve query performance.
A large value for NEAROFFPOSF might indicate also that reorganization might
improve performance. However, in general NEAROFFPOSF is not as critical a
factor as FAROFFPOSF.
For any table, the REORG utility repositions rows into the sequence of the key of
the clustering index that is defined on that table.
Recommendation: Run RUNSTATS if the statistics are not current. If you have an
object that should also be reorganized, run REORG with STATISTICS and take
inline copies. If you run REORG PART and nonpartitioning indexes exist,
subsequently run RUNSTATS for each nonpartitioning index.
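For example, assuming the sample table space name and a SYSCOPY DD statement or
TEMPLATE, a single control statement can reorganize, collect inline statistics, and
take an inline image copy:
REORG TABLESPACE DSN8D81A.DSN8S81E
  COPYDDN(SYSCOPY)
  STATISTICS TABLE(ALL) INDEX(ALL)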
End of Product-sensitive Programming Interface
REORG with SHRLEVEL NONE, the default, reloads the reorganized data into the
original area that is being reorganized. Applications have read-only access during
| unloading and no access during reloading. For data-partitioned secondary indexes,
| the option rebuilds the index parts during the BUILD phase. (Rebuilding these
| indexes does not create contention between parallel REORG PART jobs.) For
| nonpartitioned secondary indexes, the option corrects the indexes. REORG with
| SHRLEVEL NONE is the only level of access that resets REORG-pending status.
REORG with SHRLEVEL REFERENCE reloads the reorganized data into a new
| (shadow) copy of the area that is being reorganized. Near the end of
| reorganization, DB2 switches the future access of the application from the original
| data to the shadow copy. For SHRLEVEL REFERENCE, applications have
| read-only access during unloading and reloading, and a brief period of no access
| during switching. For data-partitioned secondary indexes, nothing occurs during the
| BUILD phase. (Rebuilding these indexes does not create contention between
| parallel REORG PART jobs.) For nonpartitioned secondary indexes, the option
| corrects the indexes of the reorganized parts.
REORG with SHRLEVEL CHANGE reloads the reorganized data into a shadow
copy of the area that is being reorganized. For REORG TABLESPACE SHRLEVEL
CHANGE, a mapping table correlates RIDs in the original copy of the table space
or partition with RIDs in the shadow copy; see “Mapping table with SHRLEVEL
CHANGE” on page 435 for instructions on creating the mapping table.
Applications can read from and write to the original area, and DB2 records the
writing in the log. DB2 then reads the log and applies it to the shadow copy to bring
the shadow copy up to date. This step executes iteratively, with each iteration
processing a sequence of log records.
Near the end of reorganization, DB2 switches the future access of the application
from the original data to the shadow copy. Applications have read-write access
during unloading and reloading, a brief period of read-only access during the last
iteration of log processing, and a brief period of no access during switching.
| For data-partitioned secondary indexes, nothing occurs during the BUILD phase.
| (Rebuilding these indexes does not create contention between parallel REORG
| PART jobs.) For nonpartitioned secondary indexes, the option corrects the indexes
| of the reorganized parts.
Operator actions: LONGLOG specifies the action that DB2 performs if the pace of
processing log records between iterations is slow. See “Option descriptions” on
page 412 for a description of the LONGLOG options. If no action is taken after
message DSNU377I is sent to the console, the LONGLOG option automatically
goes into effect. Some examples of possible actions that you can take:
v Execute the START DATABASE(database) SPACENAM(tablespace) ...
ACCESS(RO) command and the QUIESCE utility to drain the write claim class.
DB2 performs the last iteration, if MAXRO is not DEFER. After the QUIESCE,
you should also execute the ALTER UTILITY command, even if you do not
change any REORG parameters.
v Execute the START DATABASE(database) SPACENAM(tablespace) ...
ACCESS(RO) command and the QUIESCE utility to drain the write claim class.
Then, after reorganization makes some progress, execute the START
DATABASE(database) SPACENAM(tablespace) ... ACCESS(RW) command. This
increases the likelihood that processing of log records between iterations can
continue at an acceptable rate. After the QUIESCE, you should also execute the
ALTER UTILITY command, even if you do not change any REORG parameters.
v Execute the ALTER UTILITY command to change the value of MAXRO.
Changing it to a huge positive value, such as 9999999, causes the next iteration
to be the last iteration.
v Execute the ALTER UTILITY command to change the value of LONGLOG.
v Execute the TERM UTILITY command to terminate reorganization.
v Adjust the amount of buffer space that is allocated to reorganization and to
applications. This adjustment can increase the likelihood that processing of log
records between iterations can continue at an acceptable rate. After adjusting the
space, you should also execute the ALTER UTILITY command, even if you do
not change any REORG parameters.
v Adjust the scheduling priorities of reorganization and applications. This
adjustment can increase the likelihood that processing of log records between
iterations can continue at an acceptable rate. After adjusting the priorities, you
should also execute the ALTER UTILITY command, even if you do not change
any REORG parameters.
DB2 does not take the action that is specified in the LONGLOG phrase if any one
of these events occurs before the delay expires:
v An ALTER UTILITY command is issued.
v A TERM UTILITY command is issued.
v DB2 estimates that the time to perform the next iteration is less than or equal to
the time that is specified in the MAXRO keyword.
v REORG terminates for any reason (including the deadline).
If you specify UNLOAD ONLY, REORG unloads data from the table space and then
ends. You can reload the data at a later date with the LOAD utility, specifying
FORMAT UNLOAD.
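A sketch of this pair of control statements, with illustrative object names; the LOAD
statement runs in a later job step and, depending on the state of the table space at
that time, might also need options such as RESUME or REPLACE:
REORG TABLESPACE DSN8D81A.DSN8S81E
  UNLOAD ONLY

LOAD DATA FORMAT UNLOAD
  INTO TABLE DSN8810.EMP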
Between unloading and reloading, you can add a validation routine to a table.
During reloading, all the rows are checked by the validation procedure.
| Do not use REORG UNLOAD ONLY to propagate data. When you specify the
| UNLOAD ONLY option, REORG unloads only the data that physically resides in the
| base table space; LOB columns are not unloaded. For purposes of data
| propagation, you should use UNLOAD or REORG UNLOAD EXTERNAL instead.
However, if you use REORG SHRLEVEL NONE LOG NO, RECOVER cannot
restore data from the log past the point at which the object was last reorganized
successfully. Therefore, you must take an image copy after running REORG with
LOG NO to establish a level of fallback recovery.
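For example, assuming the sample table space name, the COPY utility establishes that
recovery point:
COPY TABLESPACE DSN8D81A.DSN8S81E
  COPYDDN(SYSCOPY)
  FULL YES
  SHRLEVEL REFERENCE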
Attention: You must take a full image copy before and after reorganizing any
catalog or directory object. Otherwise, you cannot recover any catalog or directory
objects without the full image copies. When you reorganize the
DSNDB06.SYSCOPY table space with the LOG NO option and omit the COPYDDN
option, DB2 places the table space in COPY-pending status. Take a full image copy
of the table space to remove the COPY-pending status before continuing to
reorganize the catalog or directory table spaces.
The FASTSWITCH YES option is ignored for catalog and directory objects.
When to run REORG on the catalog and directory: You do not need to run
REORG TABLESPACE on the catalog and directory table spaces as often as you
do on user table spaces. RUNSTATS collects statistics about user table spaces
which you use to determine if a REORG is necessary. You can use the same
statistics to determine if a REORG is needed for catalog table spaces. The only
difference is the information in the columns NEAROFFPOSF and FAROFFPOSF in
table SYSINDEXPART. The values in these columns can be double the
recommended value for user table spaces before a reorganization is needed if the
table space is DSNDB06.SYSDBASE, DSNDB06.SYSVIEWS,
DSNDB06.SYSPLAN, DSNDB06.SYSGROUP, or DSNDB06.SYSDBAUT.
Reorganize the whole catalog before a catalog migration or once every couple of
years. Reorganizing the catalog is useful for reducing the size of the catalog table
space. To improve query performance, reorganize the indexes on the catalog
tables.
Associated directory table spaces: When certain catalog table spaces are
reorganized, you should also reorganize the associated directory table space. The
associated directory table spaces are listed in Table 81.
| – DSNDB06.SYSPLAN
| – DSNDB06.SYSVIEWS
| – DSNDB01.DBD01
v REORG TABLESPACE with STATISTICS cannot collect inline statistics on the
following catalog and directory table spaces:
– DSNDB06.SYSDBASE
– DSNDB06.SYSDBAUT
– DSNDB06.SYSGROUP
– DSNDB06.SYSPLAN
– DSNDB06.SYSVIEWS
– DSNDB06.SYSSTATS
– DSNDB06.SYSHIST
– DSNDB01.DBD01
For these table spaces, REORG TABLESPACE reloads the indexes (in addition to
the table space) during the RELOAD phase, rather than storing the index keys in a
work data set for sorting.
For all other catalog and directory table spaces, DB2 uses index build parallelism.
For REORG with SHRLEVEL REFERENCE or CHANGE, you can use the ALTER
STOGROUP command to change the characteristics of a DB2-managed data set.
To change the characteristics of a user-managed data set, specify the desired new
characteristics when you create the shadow data set; see page “Shadow data sets”
on page 442 for more information about user-managed data sets. For example,
placing the original and shadow data sets on different disk volumes might reduce
contention and thus improve the performance of REORG and the performance of
applications during REORG execution.
While REORG is interrupted by PAUSE, you can redefine the table space attributes
for user-defined table spaces. PAUSE is not required for STOGROUP-defined table
spaces. Attribute changes are done automatically by a REORG following an ALTER
TABLESPACE.
| If the table space contains rows with VARCHAR columns, DB2 might not be able to
| accurately estimate the number of rows. If the estimated number of rows is too high
| and the sort work space is not available or if the estimated number of rows is too
| low, DFSORT might fail and cause an abend. Important: Run RUNSTATS UPDATE
| SPACE before the REORG so that DB2 calculates a more accurate estimate.
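A minimal sketch, assuming the sample table space name:
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
  UPDATE SPACE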
You can override this dynamic allocation of sort work space in two ways:
v Allocate the sort work data sets with SORTWKnn DD statements in your JCL, as in
the sketch that follows this list.
v Override the DB2 row estimate in FILSZ using control statements that are
passed to DFSORT. However, using control statements overrides size estimates
that are passed to DFSORT in all invocations of DFSORT in the job step,
including sorting keys to build indexes, and any sorts that are done in any other
utility that is executed in the same step. The result might be reduced sort
efficiency or an abend due to an out-of-space condition.
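For the first of these approaches, a minimal JCL sketch might look like the
following; the device type and space values are placeholders to adjust for your
site:

//* Pre-allocated DFSORT work data sets (sizes are placeholders)
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(100),,CONTIG)
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(100),,CONTIG)
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(100),,CONTIG)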
If you use ALTER INDEX to modify the limit keys for partition boundaries, you must
subsequently use REORG TABLESPACE to redistribute data in the partitioned table
spaces based on the new key values and to reset the REORG-pending status. The
following example specifies options that help maximize performance while
performing the necessary rebalancing reorganization:
| REORG TABLESPACE DSN8S81E PART 2:3
| NOSYSREC
| COPYDDN SYSCOPY
| STATISTICS TABLE INDEX(ALL)
You can reorganize a range of partitions, even if the partitions are not in
REORG-pending status. If you specify the STATISTICS keyword, REORG collects
data about the specified range of partitions.
For more restrictions when using REBALANCE, see “Restrictions when using
REBALANCE” on page 436.
| Rebalancing partitions when the clustering index does not match the
| partitioning key: For a table that has a clustering index that does not match the
| partitioning key, you must run REORG TABLESPACE twice so that data is
| rebalanced and all rows are in clustering order. The first utility execution rebalances
| the data and the second utility execution sorts the data.
| For example, assume you have a table space that was created with the following
| SQL:
| ------------------------------------------
| SQL to create a table and index with
| separate columns for partitioning
| and clustering
| ------------------------------------------
|
| CREATE TABLESPACE TS IN DB
| USING STOGROUP SG
| NUMPARTS 4 BUFFERPOOL BP0;
|
| CREATE TABLE TB (C01 CHAR(5) NOT NULL,
| C02 CHAR(5) NOT NULL,
| C03 CHAR(5) NOT NULL)
| IN DB.TS
| PARTITION BY (C01)
| (PART 1 VALUES ('00001'),
| PART 2 VALUES ('00002'),
| PART 3 VALUES ('00003'),
| PART 4 VALUES ('00004'));
|
| CREATE INDEX IX ON TB(C02) CLUSTER;
| To rebalance the data across the four partitions, use the following REORG
| TABLESPACE control statement:
| REORG TABLESPACE DB.TS REBALANCE
| After the preceding utility job completes, the table space is placed in AREO* status
| to indicate that a subsequent reorganization is recommended to ensure that the
| rows are in clustering order. For this subsequent reorganization, use the following
| REORG TABLESPACE control statement:
| REORG TABLESPACE DB.TS
To create an inline copy, use the COPYDDN and RECOVERYDDN keywords. You
can specify up to two primary copies and two secondary copies. Inline copies are
produced during the RELOAD phase of REORG processing.
The total number of duplicate pages is small, with a negligible effect on the amount
of space that is required for the data set. One exception to this guideline is the case
of running REORG SHRLEVEL CHANGE, in which the number of duplicate pages
varies with the number of records that are applied during the LOG phase.
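For example, a control statement of the following general form produces a local
primary and backup copy and a recovery-site primary and backup copy during the
RELOAD phase; the DD names are placeholders for DD statements that you supply
in the job:

REORG TABLESPACE DSN8D81A.DSN8S81D
  COPYDDN(LOCPRIM,LOCBKUP)
  RECOVERYDDN(RECPRIM,RECBKUP)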
Improving performance
To improve REORG performance:
v Run REORG concurrently on separate partitions of a partitioned table space.
When you run REORG on partitions of a partitioned table space, the sum of each
job’s processor usage is greater than for a single REORG job on the entire table
space. However, the elapsed time of reorganizing the entire table in parallel can
be significantly less than it would be for a single REORG job.
v Use parallel index build for table spaces or partitions that have more than one
defined index. For more information, see “Building indexes in parallel for REORG
TABLESPACE” on page 457.
v Specify NOSYSREC on your REORG statement. See “Omitting the output data
set” on page 449 for restrictions.
v If you are using 3990 caching, and you have the nonpartitioning indexes on
RAMAC®, consider specifying YES on the UTILITY CACHE OPTION field of
installation panel DSNTIPE. This option allows DB2 to use sequential prestaging
when reading data from RAMAC for the following utilities:
– LOAD PART integer RESUME
– REORG TABLESPACE PART
For LOAD PART and REORG TABLESPACE PART utility jobs, prefetch reads
remain in the cache longer, which can lead to possible improvements in the
performance of subsequent writes.
v Use inline copy and inline statistics instead of running separate COPY and
RUNSTATS utilities.
When to use DRAIN_WAIT: The DRAIN_WAIT option gives you greater control
over the time that online REORG is to wait for drains. Also because the
DRAIN_WAIT is the aggregate time that online REORG is to wait to perform a drain
on a table space and associated indexes, the length of drains is more predictable
than if each partition and index has its own individual waiting time limit.
By specifying a short delay time (less than the system timeout value, IRLMRWT),
you can reduce the impact on applications by reducing time-outs. You can use the
RETRY option to give the online REORG more chances to complete successfully. If
you do not want to use RETRY processing, you can still use DRAIN_WAIT to set a
specific and more consistent limit on the length of drains.
RETRY allows an online REORG that is unable to drain the objects that it requires
to try again after a set period (RETRY_DELAY). During the RETRY_DELAY period,
all of the objects are available for read-write access in the case of SHRLEVEL
CHANGE. For SHRLEVEL REFERENCE, the objects retain the access that existed
before the attempted drain (that is, if the drain fails in the UNLOAD phase, the
objects remain in read-write access; if the drain fails in the SWITCH phase, the
objects remain in read-only access). Because application SQL statements can
queue behind any unsuccessful drain that the online REORG has attempted, allow
a reasonable delay before retrying so that this work can complete; the default is
5 minutes.
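For example, a control statement of the following general form limits each drain
attempt to 20 seconds and retries up to six times at 300-second intervals; the
values shown are illustrative, not recommendations:

REORG TABLESPACE DSN8D81A.DSN8S81D
  SHRLEVEL REFERENCE
  DRAIN_WAIT 20 RETRY 6 RETRY_DELAY 300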
When you specify DRAIN WRITERS (the default) with SHRLEVEL CHANGE and
RETRY, multiple read-only log iterations can occur. Generally, online REORG might
need to do more work when RETRY is specified, and this might result in multiple or
extended periods of restricted access. Applications that run alongside online
REORG need to perform frequent commits. During the interval between retries, the
utility is still active, and consequently other utility activity against the table space
and indexes is restricted.
Specifying both RETRY and SHRLEVEL CHANGE on a table space REORG can
increase the size of the inline copy that REORG takes.
Figure 77 shows the flow of a REORG TABLESPACE job that uses a parallel index
build. DB2 starts multiple subtasks to sort index keys and build indexes in parallel.
If you specify STATISTICS, additional subtasks collect the sorted keys and update
the catalog table in parallel, eliminating the need for a second scan of the index by
a separate RUNSTATS job.
Figure 77. How indexes are built during a parallel index build
| REORG TABLESPACE uses parallel index build if more than one index needs to be
| built (including the mapping index for SHRLEVEL CHANGE). You can either let the
| utility dynamically allocate the data sets that SORT needs for this parallel index
| build or provide the necessary data sets yourself.
Select one of the following methods to allocate sort work and message data sets:
Method 1: REORG TABLESPACE determines the optimal number of sort work data
sets and message data sets.
| 1. Specify the SORTDEVT keyword in the utility statement.
2. Allow dynamic allocation of sort work data sets by not supplying SORTWKnn
DD statements in the REORG TABLESPACE utility JCL.
3. Allocate UTPRINT to SYSOUT.
Method 2: Control allocation of sort work data sets, while REORG TABLESPACE
allocates message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Allocate UTPRINT to SYSOUT.
Method 3: Exercise the most control over rebuild processing; specify both sort work
data sets and message data sets.
| 1. Provide DD statements with DD names in the form SWnnWKmm.
2. Provide DD statements with DD names in the form UTPRINnn.
Data sets used: If you select Method 2 or 3 in the preceding information, define
the necessary data sets by using the information provided here, along with
“Determining the number of sort subtasks,” “Allocation of sort subtasks,” and
“Estimating the sort work file size” on page 459.
Each sort subtask must have its own group of sort work data sets and its own print
message data set. Possible reasons to allocate data sets in the utility job JCL rather
than using dynamic allocation are:
v To control the size and placement of the data sets
v To minimize device contention
v To optimally utilize free disk space
v To limit the number of utility subtasks that are used to build indexes
The DD name SWnnWKmm defines the sort work data sets that are used during
utility processing. nn identifies the subtask pair, and mm identifies one or more data
sets that are to be used by that subtask pair. For example:
SW01WK01 Is the first sort work data set that is used by the subtask that builds
the first index.
SW01WK02 Is the second sort work data set that is used by the subtask that
builds the first index.
SW02WK01 Is the first sort work data set that is used by the subtask that builds
the second index.
SW02WK02 Is the second sort work data set that is used by the subtask that
builds the second index.
The DD name UTPRINnn defines the sort work message data sets that are used by
the utility subtask pairs. nn identifies the subtask pair.
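A JCL sketch for two subtask pairs, each with two sort work data sets and its own
message data set, might look like the following; the space values are placeholders:

//* Sort work data sets for subtask pair 01 (first index)
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//* Sort work data sets for subtask pair 02 (second index)
//SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//* Sort message data sets, one per subtask pair (Method 3 only)
//UTPRIN01 DD SYSOUT=*
//UTPRIN02 DD SYSOUT=*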
During parallel index build processing, REORG distributes all indexes among the
subtask pairs according to the index creation date, assigning the first created index
to the first subtask pair. For SHRLEVEL CHANGE, the mapping index is assigned
last.
Estimating the sort work file size: If you choose to provide the data sets, you
need to know the size and number of keys that are present in all of the indexes that
are being processed by the subtask in order to calculate each sort work file size.
After you determine which indexes are assigned to which subtask pairs, use the
following formula to calculate the required space:
Do not count keys that belong to partitioning indexes in the sort work data set
size calculation. The space estimation formula might indicate that
0 bytes are required (because the only index that is processed by a task set is the
partitioning index). In this case, if you allocate your own sort work data set groups,
you still need to allocate sort work data sets for this task set, but you can use a
minimal allocation, such as 1 track.
If the error is on the unloaded data, or if you used the NOSYSREC option,
terminate REORG by using the TERM UTILITY command. Then recover the table
space, using RECOVER, and run the REORG job again.
For segmented table spaces, REORG does not normally need to reclaim space
from dropped tables. Space that is freed by dropping tables in a segmented table
space is immediately available if the table space can be accessed when DROP
TABLE is executed. If the table space cannot be accessed when DROP TABLE is
executed (for example, the disk device is offline), DB2 removes the table from the
catalog, but does not delete all table rows. In this case, the space for the dropped
table is not available until REORG reclaims it.
After you run REORG, the segments for each table are contiguous.
Table 40 on page 249 shows the logging output and the effect, if any, on the LOB
table space during the reorganization of a LOB column. SYSIBM.SYSCOPY is not updated.
Specify LOG YES and SHRLEVEL NONE when you reorganize a LOB table space
to avoid leaving the LOB table space in COPY-pending status after the REORG.
If you terminate REORG TABLESPACE with the TERM UTILITY command during
the RELOAD phase, the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the data records are not erased. The table space and
indexes remain in RECOVER-pending status. After you recover the table space,
rerun the REORG job.
v For SHRLEVEL REFERENCE or CHANGE, the data records are reloaded into
shadow objects, so the original objects have not been affected by REORG. You
can rerun the job.
If you terminate REORG with the TERM UTILITY command during the SORT,
BUILD, or LOG phases, the behavior depends on the SHRLEVEL option:
v For SHRLEVEL NONE, the indexes that are not yet built remain in
RECOVER-pending status. You can run REORG with the SORTDATA option, or
you can run REBUILD INDEX to rebuild those indexes.
v For SHRLEVEL REFERENCE or CHANGE, the records are reloaded into
shadow objects, so the original objects have not been affected by REORG. You
can rerun the job.
If you terminate a stopped REORG utility with the TERM UTILITY command during
the SWITCH phase, the following conditions apply:
v All data sets that were renamed to their shadow counterparts are renamed to
their original names, so that the objects remain in their original state, and you
can rerun the job.
v If a problem occurs in renaming the data sets to the original names, the objects
remain in RECOVER-pending status, and you cannot rerun the job.
If the SWITCH phase does not complete, the image copy that REORG created is
not available for use by the RECOVER utility. If you terminate an active REORG
utility during the SWITCH phase with the TERM UTILITY command, during the
rename process, the renaming occurs, and the SWITCH phase completes. The
image copy that REORG created is available for use by the RECOVER utility.
If you terminate REORG with the TERM UTILITY command during the BUILD2
phase, the logical partitions of nonpartitioned indexes remain in RECOVER-pending
status. After you run REBUILD INDEX for the logical partition, all objects have been
reorganized successfully.
The REORG-pending status is not reset until the UTILTERM execution phase. If the
REORG utility abnormally terminates or is terminated, the objects remain in
REORG-pending status and RECOVER-pending status, depending on the phase in
which the termination occurred.
Table 82 lists the restrictive states that REORG TABLESPACE sets according to the
phase in which the utility terminated.
Table 82. Restrictive states that REORG TABLESPACE sets.
Phase Effect on restrictive status
UNLOAD No effect.
RELOAD SHRLEVEL NONE:
v Places table space in RECOVER-pending status at the beginning of
the phase and resets the status at the end of the phase.
v Places indexes in RECOVER-pending status.
| v Places the table space in COPY-pending status. If COPYDDN is
| specified and SORTKEYS is ignored, the COPY-pending status is
| reset at the end of the phase. SORTKEYS is ignored for several
| catalog and directory table spaces. For a list of these table spaces,
| see “Reorganizing the catalog and directory” on page 450.
SHRLEVEL REFERENCE or CHANGE has no effect.
SORT No effect.
BUILD SHRLEVEL NONE resets RECOVER-pending status for indexes and, if
the utility job includes both COPYDDN and SORTKEYS, resets
COPY-pending status for table spaces at the end of the phase.
SHRLEVEL REFERENCE or CHANGE has no effect.
SORTBLD No effect during the sort portion of the SORTBLD phase. During the build
portion of the SORTBLD phase, the effect is the same as for the BUILD
phase.
LOG No effect.
SWITCH No effect. Under certain conditions, if TERM UTILITY is issued, it must
complete successfully; otherwise, objects might be placed in
RECOVER-pending status.
BUILD2 If TERM UTILITY is issued, the logical partitions for nonpartitioning
indexes are placed in logical RECOVER-pending status.
| – DSNDB06.SYSGROUP
| – DSNDB06.SYSPLAN
| – DSNDB06.SYSVIEWS
| – DSNDB01.DBD01
If you restart a REORG job of one or more of the catalog or directory table spaces
in the preceding list, you cannot use RESTART(CURRENT).
If you restart REORG in the UTILINIT phase, it re-executes from the beginning of
the phase. If REORG abnormally terminates or system failure occurs while it is in
the UTILTERM phase, you must restart the job with RESTART(PHASE).
For each phase of REORG and for each type of REORG TABLESPACE (with
SHRLEVEL NONE, with SHRLEVEL REFERENCE, and with SHRLEVEL
CHANGE), Table 83 indicates the types of restarts that are allowed (CURRENT and
PHASE). A value of None indicates that no restart is allowed. The ″Data Sets
Required″ column lists the data sets that must exist to perform the specified type of
restart in the specified phase.
Table 83. REORG TABLESPACE utility restart information for SHRLEVEL NONE, REFERENCE, and CHANGE
            Type of restart allowed for:
Phase       SHRLEVEL NONE    SHRLEVEL REFERENCE  SHRLEVEL CHANGE  Required data sets                    Notes
UNLOAD      CURRENT, PHASE   CURRENT, PHASE      None             SYSREC
RELOAD      CURRENT, PHASE   CURRENT, PHASE      None             SYSREC                                1, 2
SORT        CURRENT, PHASE   CURRENT, PHASE      None             None                                  2, 3
BUILD       CURRENT, PHASE   CURRENT, PHASE      None             None                                  2, 3, 4
SORTBLD     CURRENT, PHASE   CURRENT, PHASE      None             None                                  2
LOG         Phase does       Phase does          None             None
            not occur        not occur
SWITCH      Phase does       CURRENT, PHASE      CURRENT, PHASE   Originals and shadows                 3
            not occur
BUILD2      Phase does       CURRENT, PHASE      CURRENT, PHASE   Shadows for nonpartitioning indexes   3, 4
            not occur
Notes:
1. For SHRLEVEL NONE, if you specify NOSYSREC, restart is not possible, and you must execute the RECOVER
TABLESPACE utility for the table space or partition. For SHRLEVEL REFERENCE, if the REORG job includes both
SORTDATA and NOSYSREC, RESTART or RESTART(PHASE) restarts at the beginning of the UNLOAD phase.
2. If you specify SHRLEVEL NONE or SHRLEVEL REFERENCE, and the job includes the SORTKEYS option, use
RESTART or RESTART(PHASE) to restart at the beginning of the RELOAD phase.
3. You can restart the utility with RESTART or RESTART(PHASE). However, because this phase does not take
checkpoints, RESTART restarts from the beginning of the phase.
4. If you specify the PART option with REORG TABLESPACE, you cannot restart the utility at the beginning of the
BUILD or BUILD2 phase if any nonpartitioning index is in a page set REBUILD-pending (PSRBD) status.
For instructions on restarting a utility job, see Chapter 3, “Invoking DB2 online
utilities,” on page 19.
REORG of a LOB table space is not compatible with any other utility. The LOB
table space is unavailable to other applications during REORG processing.
This section includes a series of tables that show which claim classes REORG
drains and any restrictive state that the utility sets on the target object.
For SHRLEVEL NONE, Table 85 on page 465 shows which claim classes REORG
drains and any restrictive state that the utility sets on the target object. For each
column, the table indicates the claim or drain that is acquired and the restrictive
state that is set in the corresponding phase. UNLOAD CONTINUE and UNLOAD
PAUSE, unlike UNLOAD ONLY, include the RELOAD phase and thus include the
drains and restrictive states of that phase.
For SHRLEVEL REFERENCE, Table 86 shows which claim classes REORG drains
and any restrictive state that the utility sets on the target object. For each column,
the table indicates the claim or drain that is acquired and the restrictive state that is
set in the corresponding phase.
Table 86. Claim classes of REORG TABLESPACE SHRLEVEL REFERENCE operations
Target                         UNLOAD phase  SWITCH phase  UNLOAD phase   SWITCH phase   BUILD2 phase
                               of REORG      of REORG      of REORG PART  of REORG PART  of REORG PART
Table space or partition of
table space                    DW/UTRO       DA/UTUT       DW/UTRO        DA/UTUT        UTRW
Partitioning index, data-
partitioned secondary index,
or partition of either         DW/UTRO       DA/UTUT       DW/UTRO        DA/UTUT        UTRW
Nonpartitioned secondary
index                          DW/UTRO       DA/UTUT       None           DR             None
Logical partition of
nonpartitioning index          None          None          DW/UTRO        DA/UTUT        None
Legend:
v DA: Drain all claim classes, no concurrent SQL access.
v DDR: Dedrain the read claim class, concurrent SQL access.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v None: Any claim, drain, or restrictive state for this object does not change in this phase.
For REORG of an entire table space with SHRLEVEL CHANGE, Table 87 shows
which claim classes REORG drains and any restrictive state that the utility sets on
the target object.
Table 87. Claim classes of REORG TABLESPACE SHRLEVEL CHANGE operations
Target                 UNLOAD phase  Last iteration of LOG phase  SWITCH phase
Table space (note 1)   CR/UTRW       DW/UTRO                      DA/UTUT
Index (note 1)         CR/UTRW       DW/UTRO                      DA/UTUT
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
Notes:
1. If the target object is a segmented table space, SHRLEVEL CHANGE does not allow you to concurrently execute
an SQL searched DELETE without the WHERE clause.
For REORG of a partition with SHRLEVEL NONE, Table 88 shows which claim
classes REORG drains and any restrictive state that the utility sets on the target
object.
Table 88. Claim classes of REORG TABLESPACE SHRLEVEL NONE operations on a partition
Target                      UNLOAD phase  Last iteration of LOG phase  SWITCH phase  BUILD2 phase
Partition of table space    CR/UTRW       DW/UTRO or DA/UTUT (1)       DA/UTUT       UTRW
Partition of partitioning
index                       CR/UTRW       DW/UTRO or DA/UTUT (1)       DA/UTUT       UTRW
Nonpartitioning index       None          None                         DR            None
Logical partition of
nonpartitioning index       CR/UTRW       DW/UTRO or DA/UTUT (1)       DA/UTUT       None
Legend:
v CR: Claim the read claim class.
v DA: Drain all claim classes, no concurrent SQL access.
v DDR: Dedrain the read claim class, no concurrent access for SQL repeatable readers.
v DR: Drain the repeatable read class, no concurrent access for SQL repeatable readers.
v DW: Drain the write claim class, concurrent access for SQL readers.
v UTUT: Utility restrictive state, exclusive control.
v UTRO: Utility restrictive state, read-only access allowed.
v UTRW: Utility restrictive state, read-write access allowed.
v None: Any claim, drain, or restrictive state for this object does not change in this phase.
Notes:
1. DA/UTUT applies if you specify DRAIN ALL.
Table 89 on page 467 shows which utilities can run concurrently with REORG on
the same target object. The target object can be a table space, an index space, or
a partition of a table space or index space.
Table 90 shows which DB2 operations can be affected when reorganizing catalog
table spaces.
Table 90. DB2 operations that are affected by reorganizing catalog table spaces
Catalog table space                          Actions that might not run concurrently
Any table space except SYSCOPY and SYSSTR    CREATE, ALTER, and DROP statements
SYSCOPY, SYSDBASE, SYSDBAUT, SYSSTATS,       Utilities
SYSUSER, SYSHIST
SYSDBASE, SYSDBAUT, SYSGPAUT, SYSPKAGE,      GRANT and REVOKE statements
SYSPLAN, SYSUSER
SYSDBAUT, SYSDBASE, SYSGPAUT, SYSPKAGE,      BIND and FREE commands
SYSPLAN, SYSSTATS, SYSUSER, SYSVIEWS
SYSPKAGE, SYSPLAN                            Plan or package execution
When reorganizing a segmented table space, REORG leaves free pages and free
space on each page in accordance with the current values of the FREEPAGE and
PCTFREE parameters. (You can set those values by using the CREATE
TABLESPACE, ALTER TABLESPACE, CREATE INDEX, or ALTER INDEX
statements). REORG leaves one free page after reaching the FREEPAGE limit for
each table in the table space. When reorganizing a nonsegmented table space,
REORG leaves one free page after reaching the FREEPAGE limit, regardless of
whether the loaded records belong to the same or different tables.
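For example, SQL along these lines sets the free space that the next REORG or
LOAD leaves in the table space; the values shown are illustrative:

ALTER TABLESPACE DSN8D81A.DSN8S81D
  FREEPAGE 5
  PCTFREE 10;

The new values take effect the next time that the table space is loaded or
reorganized.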
– Provide a full image copy for recovery. This action prevents the need to
process the log records that are written during reorganization.
– Permit making incremental image copies later.
You might not need to take an image copy of a table space for which all the
following statements are true:
– The table space is relatively small.
– The table space is used only in read-only applications.
– The table space can be easily loaded again in the event of failure.
See Chapter 11, “COPY,” on page 95 for information about making image copies.
v If you use REORG SHRLEVEL NONE LOG NO on a LOB table space and DB2
determines that nothing needs to be done to the table space, no COPY-pending
status is set. However, if DB2 indicates that changes are needed, REORG places
the reorganized LOB table space or partition in COPY-pending status. In this
situation, perform a full image copy to reset the COPY-pending status and to
ensure that a backup is available for recovery.
You should also run the COPY utility if the REORG was performed to turn off
REORG-pending status (REORP), and an inline copy was not taken. You cannot
use an image copy that was created before turning off REORP.
v If you use COPYDDN, SHRLEVEL REFERENCE, or SHRLEVEL CHANGE, and
the object that you are reorganizing is not a catalog or directory table space for
which COPYDDN is ignored, you do not need to take an image copy.
v Use the RUNSTATS utility on the table space and its indexes if inline statistics
were not collected, so that the DB2 catalog statistics take into account the newly
reorganized data, and SQL paths can be selected with accurate information. You
need to run RUNSTATS on nonpartitioning indexes only if you reorganized a
subset of the partitions.
v If you use REORG TABLESPACE SHRLEVEL CHANGE, you can drop the
mapping table and its index.
v If you use SHRLEVEL REFERENCE or CHANGE, and a table space, partition, or
index resides in user-managed data sets, you can delete the user-managed
shadow data sets.
v If you specify DISCARD on a REORG of a table that is involved in a referential
integrity set, you need to run CHECK DATA for any affected referentially related
objects that were placed in CHECK-pending status.
| When you run REORG TABLESPACE, the utility sets all of the rows in the table or
| partition to the current object version. The utility also updates the range of used
| version numbers for indexes that are defined with the COPY NO attribute. REORG
| TABLESPACE sets the OLDEST_VERSION column equal to the
| CURRENT_VERSION column in the appropriate catalog table. These updated
| values indicate that only one version is active. DB2 can then reuse all of the other
| version numbers.
| Recycling of version numbers is required when all of the version numbers are being
| used. All version numbers are being used when one of the following situations is
| true:
| v The value in the CURRENT_VERSION column is one less than the value in the
| OLDEST_VERSION column.
| v The value in the CURRENT_VERSION column is 255 for table spaces or 15 for
| indexes, and the value in the OLDEST_VERSION column is 0 or 1.
| You can also run LOAD REPLACE, REBUILD INDEX, or REORG INDEX to recycle
| version numbers for indexes that are defined with the COPY NO attribute. To
| recycle version numbers for indexes that are defined with the COPY YES attribute
| or for table spaces, run MODIFY RECOVERY.
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
| Example 2: Reorganizing a table space and specifying the unload data set.
| The control statement in Figure 78 on page 471 specifies that REORG
| TABLESPACE is to reorganize table space DSN8D81A.DSN8S81D. The DD name
| for the unload data set is UNLD, as specified by the UNLDDN option.
|
Figure 78. Example REORG TABLESPACE control statement with the UNLDDN option
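A minimal sketch of a control statement with these options follows; it assumes
that a DD statement named UNLD is present in the job:

REORG TABLESPACE DSN8D81A.DSN8S81D
  UNLDDN UNLD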
Example 4: Reorganizing a table and using parallel index build. The control
statement in Figure 79 on page 472 specifies that REORG TABLESPACE is to
reorganize table space DSNDB04.DSN8S81D and to use a parallel index build to
rebuild the indexes. The indexes are built in parallel, because more than one index
| needs to be built and the job allocates the data sets that DFSORT needs. Note that
| you no longer need to specify SORTKEYS; it is the default.
The job allocates the sort work data sets in two groups, which limits the number of
pairs of utility subtasks to two. This example does not require UTPRINnn DD
statements because it uses DSNUPROC to invoke utility processing. DSNUPROC
includes a DD statement that allocates UTPRINT to SYSOUT.
LOG NO specifies that records are not to be logged during the RELOAD phase.
This option puts the table space in COPY-pending status.
Figure 79. Example REORG TABLESPACE control statement with LOG NO option
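A sketch along the lines described above follows. The step name, utility ID, and
data set names and space values are placeholders, and DSNUPROC supplies the
UTPRINT allocation:

//REORGTS  EXEC DSNUPROC,UID='SAMPREORG',UTPROC='',SYSTEM='DSN'
//* Two groups of sort work data sets limit index builds to two subtask pairs
//SW01WK01 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SW01WK02 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SW02WK01 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SW02WK02 DD UNIT=SYSDA,SPACE=(CYL,(50),,CONTIG)
//SYSREC   DD DSN=SAMPREORG.REORG.SYSREC,DISP=(NEW,DELETE,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(10,10))
//SYSIN    DD *
  REORG TABLESPACE DSNDB04.DSN8S81D
    LOG NO
/*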
In the following control statement, the DEADLINE option specifies that the
deadline for the start of the SWITCH phase is eight hours from the start of the REORG job. The
name of the mapping table is DSN8810.MAP_TBL. The maximum desired amount
of time for the log processing in the read-only (last) iteration of log processing is
240 seconds, as indicated by the MAXRO option. If DB2 is not reading the log
quickly enough after the applications write to the log, DB2 drains the write claim
class after sending the LONGLOG message to the operator. That draining takes
place at least 900 seconds after the LONGLOG message is sent, as indicated by
the DELAY option. DB2 is also to take inline image copies for the local site and
recovery site, as indicated by the COPYDDN and RECOVERYDDN options.
REORG TABLESPACE DSN8D81A.DSN8S81D COPYDDN(MYCOPY1)
RECOVERYDDN(MYCOPY2) SHRLEVEL CHANGE
DEADLINE CURRENT TIMESTAMP + 8 HOURS
MAPPINGTABLE DSN8810.MAP_TBL MAXRO 240 LONGLOG DRAIN DELAY 900
Example 10: Reorganizing a table space and reporting table space and index
statistics. The following control statement specifies that REORG TABLESPACE is
to reorganize table space DSN8D81A.DSN8S81E. The SORTDATA option indicates
that the data is to be unloaded and sorted in clustering order. This option is the
default and does not need to be specified. The STATISTICS, TABLE, INDEX, and
REPORT YES options indicate that the utility is also to report catalog statistics for
all tables in the table space and for all indexes that are defined on those tables.
The KEYCARD, FREQVAL, NUMCOLS, and COUNT options indicate that DB2 is to
collect 10 frequent values on the first key column of the index. UPDATE NONE
indicates that the catalog tables are not to be updated. This option requires that
REPORT YES also be specified.
REORG TABLESPACE DSN8D81A.DSN8S81E SORTDATA STATISTICS
TABLE
INDEX(ALL) KEYCARD FREQVAL NUMCOLS 1
COUNT 10 REPORT YES UPDATE NONE
|
| Figure 80. Example REORG TABLESPACE statement with REPORTONLY, OFFPOSLIMIT,
| and INDREFLIMIT options
Figure 81. Sample output showing that REORG limits have been met
| //******************************************************************
| //* COMMENT: UPDATE STATISTICS
| //******************************************************************
| //STEP1 EXEC DSNUPROC,UID='HUHRU252.REORG1',TIME=1440,
| // UTPROC='',
| // SYSTEM='DSN',DB2LEV=DB2A
| //SYSREC DD DSN=HUHRU252.REORG1.STEP1.SYSREC,DISP=(MOD,DELETE,CATLG),
| // UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SYSIN DD *
| RUNSTATS TABLESPACE DBHR5201.TPHR5201
| UPDATE SPACE
| /*
| //******************************************************************
| //* COMMENT: REORG THE TABLESPACE
| //******************************************************************
| //STEP2 EXEC DSNUPROC,UID='HUHRU252.REORG1',TIME=1440,
| // UTPROC='',
| // SYSTEM='DSN',DB2LEV=DB2A
| //SYSREC DD DSN=HUHRU252.REORG1.STEP1.SYSREC,DISP=(MOD,DELETE,CATLG),
| // UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SYSCOPY1 DD DSN=HUHRU252.REORG1.STEP1.SYSCOPY1,
| // DISP=(MOD,CATLG,CATLG),UNIT=SYSDA,
| // SPACE=(4000,(20,20),,,ROUND)
| //SYSIN DD *
| REORG TABLESPACE DBHR5201.TPHR5201
| SHRLEVEL CHANGE MAPPINGTABLE MAP1
| COPYDDN(SYSCOPY1)
| OFFPOSLIMIT 9 INDREFLIMIT 9
| /*
|
| Figure 82. Example of conditionally reorganizing a table
| On successful completion, DB2 returns output for the REORG TABLESPACE job
| that is similar to the output in Figure 83 on page 476.
|
DSNU348I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=36 FOR INDEX ADMF001.IPHR5201 PART 1
DSNU348I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=5 FOR INDEX ADMF001.IPHR5201 PART 2
...
DSNU349I = DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=6985 FOR INDEX ADMF001.IUHR5210
DSNU258I DSNURBXD - BUILD PHASE STATISTICS - NUMBER OF INDEXES=5
DSNU259I DSNURBXD - BUILD PHASE COMPLETE, ELAPSED TIME=00:00:18
DSNU386I DSNURLGD - LOG PHASE STATISTICS. NUMBER OF ITERATIONS = 1, NUMBER OF LOG
RECORDS = 194
DSNU385I DSNURLGD- LOG PHASE COMPLETE, ELAPSED TIME = 00:01:10
DSNU400I DSNURBID- COPY PROCESSED FOR TABLESPACE DBHR5201.TPHR5201
NUMBER OF PAGES=1073
AVERAGE PERCENT FREE SPACE PER PAGE = 14.72
PERCENT OF CHANGED PAGES =100.00
ELAPSED TIME=00:01:58
DSNU387I DSNURSWT - SWITCH PHASE COMPLETE, ELAPSED TIME = 00:01:05
DSNU428I DSNURSWT - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBHR5201.TPHR5201
Example 13: Reorganizing a table space after waiting for SQL statements to
complete. The control statement in Figure 84 on page 477 specifies that REORG
TABLESPACE is to reorganize the table space in the REORG_TBSP list, which is
defined in the preceding LISTDEF utility control statement. Before reorganizing the
table space, REORG TABLESPACE is to wait for 30 seconds for SQL statements to
finish adding or changing data. This interval is indicated by the DRAIN_WAIT
option. If the SQL statements do not finish, the utility is to retry up to four times, as
indicated by the RETRY option. The utility is to wait 10 seconds between retries, as
indicated by the RETRY_DELAY option.
The TEMPLATE utility control statements define the data set characteristics for the
data sets that are to be dynamically allocated during the REORG TABLESPACE
job. The OPTIONS utility control statement indicates that the TEMPLATE
statements and LISTDEF statement are to run in PREVIEW mode.
Figure 84. Example of reorganizing a table space by using DRAIN_WAIT, RETRY, and
RETRY_DELAY
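A sketch of the statements described above follows. The DRAIN_WAIT, RETRY, and
RETRY_DELAY values come from the description; the included table space, the
mapping table MAP1, the template SCOPY1 and its data set name pattern, and the
SHRLEVEL CHANGE options are placeholders:

OPTIONS PREVIEW
TEMPLATE SCOPY1 DSN(&DB..&TS..SCOPY1)
LISTDEF REORG_TBSP INCLUDE TABLESPACE DBHR5701.TPHR5701
REORG TABLESPACE LIST REORG_TBSP
  SHRLEVEL CHANGE MAPPINGTABLE MAP1
  COPYDDN(SCOPY1)
  DRAIN_WAIT 30 RETRY 4 RETRY_DELAY 10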
|
| Figure 85. Sample output of REORG TABLESPACE job with DRAIN_WAIT, RETRY, and RETRY_DELAY options (Part
| 1 of 2)
|
|
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5706
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5705
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5702 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5705 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.21.292235
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5703 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5706 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.22.288665
DSNU393I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IPHR5701 PART 11
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IPHR5701
DSNU394I = DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=331 FOR INDEX ADMF001.IXHR5704
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPI - SYSINDEXSTATS CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPD - SYSCOLDISTSTATS CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUPC - SYSCOLSTATS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IPHR5701 SUCCESSFUL
DSNU610I = DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU610I = DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU610I = DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR ADMF001.TBHR5701 SUCCESSFUL
DSNU610I = DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR ADMF001.IXHR5704 SUCCESSFUL
DSNU620I = DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2002-08-05-16.25.20.886803
DSNU391I DSNURPTB - SORTBLD PHASE STATISTICS. NUMBER OF INDEXES = 7
DSNU392I DSNURPTB - SORTBLD PHASE COMPLETE, ELAPSED TIME = 00:00:04
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
...
DSNU377I = DSNURLOG - IN REORG WITH SHRLEVEL CHANGE, THE LOG IS
BECOMING LONG, MEMBER= , UTILID=HUHRU257.REORG
DSNU1122I = DSNURLOG - JOB T3161108 PERFORMING REORG
WITH UTILID HUHRU257.REORG UNABLE TO DRAIN DBHR5701.TPHR5701.
RETRY 1 OF 4 WILL BE ATTEMPTED IN 10 SECONDS
DSNU1122I = DSNURLOG - JOB T3161108 PERFORMING REORG
WITH UTILID HUHRU257.REORG UNABLE TO DRAIN DBHR5701.TPHR5701.
RETRY 2 OF 4 WILL BE ATTEMPTED IN 10 SECONDS
DSNU386I DSNURLGD - LOG PHASE STATISTICS. NUMBER OF ITERATIONS = 32, NUMBER OF LOG RECORDS = 2288
DSNU385I DSNURLGD - LOG PHASE COMPLETE, ELAPSED TIME = 00:03:43
DSNU400I DSNURBID - COPY PROCESSED FOR TABLESPACE DBHR5701.TPHR5701
NUMBER OF PAGES=377
AVERAGE PERCENT FREE SPACE PER PAGE = 5.42
PERCENT OF CHANGED PAGES =100.00
ELAPSED TIME=00:04:02
DSNU387I DSNURSWT - SWITCH PHASE COMPLETE, ELAPSED TIME = 00:00:02
DSNU428I DSNURSWT - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBHR5701.TPHR5701
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
Figure 85. Sample output of REORG TABLESPACE job with DRAIN_WAIT, RETRY, and RETRY_DELAY options (Part
2 of 2)
Example 14: Using a mapping table: In the example in Figure 86 on page 480, a
mapping table and mapping table index are created. Then, a REORG
TABLESPACE job uses the mapping table, and finally the mapping table is dropped.
Some parts of this job use the EXEC SQL utility to execute dynamic SQL
statements.
The first EXEC SQL control statement contains the SQL statements that create a
mapping table that is named MYMAPPING_TABLE. The second EXEC SQL control
statement contains the SQL statements that create mapping index
MYMAPPING_INDEX on the table MYMAPPING_TABLE. For more information
about the CREATE TABLE and CREATE INDEX statements, see DB2 SQL
Reference.
The REORG TABLESPACE control statement then specifies that the REORG
TABLESPACE utility is to reorganize table space DSN8D81P.DSN8S81C and to use
mapping table MYMAPPING_TABLE.
Finally, the third EXEC SQL statement contains the SQL statements that drop
MYMAPPING_TABLE. For more information about the DROP TABLE statement,
see DB2 SQL Reference.
EXEC SQL
CREATE TABLE MYMAPPING_TABLE
(TYPE CHAR( 01 ) NOT NULL,
SOURCE_RID CHAR( 05 ) NOT NULL,
TARGET_XRID CHAR( 09 ) NOT NULL,
LRSN CHAR( 06 ) NOT NULL)
IN DSN8D81P.DSN8S81Q
CCSID EBCDIC
ENDEXEC
EXEC SQL
CREATE UNIQUE INDEX MYMAPPING_INDEX
ON MYMAPPING_TABLE
(SOURCE_RID ASC,
TYPE,
TARGET_XRID,
LRSN)
USING STOGROUP DSN8G710
PRIQTY 120 SECQTY 20
ERASE NO
BUFFERPOOL BP0
CLOSE NO
ENDEXEC
REORG TABLESPACE DSN8D81P.DSN8S81C
COPYDDN(COPYDDN)
SHRLEVEL CHANGE
DEADLINE CURRENT_TIMESTAMP+8 HOURS
MAPPINGTABLE MYMAPPING_TABLE
MAXRO 240 LONGLOG DRAIN DELAY 900
SORTDEVT SYSDA SORTNUM 4
STATISTICS TABLE(ALL)
INDEX(ALL)
EXEC SQL
DROP TABLE MYMAPPING_TABLE
ENDEXEC
Example 15: Discarding records from one table while reorganizing a table
space: The control statement in Figure 87 on page 481 specifies that REORG
TABLESPACE is to reorganize table space DSN8D51A.DSN8S51E. During
reorganization, records in table DSN8510.EMP are discarded if they have the value
D11 in the WORKDEPT field. This discard criteria is specified in the WHEN clause
that follows the DISCARD option. Because a SYSDISC DD statement is included in
the JCL, any discarded rows are to be written to the data set that is identified by
this DD statement.
The COPYDDN option specifies that during the REORG, DB2 is also to take an
inline copy of the table space. This image copy is to be written to the data set that
is identified by the SYSCOPY DD statement.
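A sketch of such a statement follows; the COPYDDN value SYSCOPY corresponds to
the SYSCOPY DD statement that is mentioned above:

REORG TABLESPACE DSN8D51A.DSN8S51E
  COPYDDN(SYSCOPY)
  DISCARD FROM TABLE DSN8510.EMP
    WHEN (WORKDEPT = 'D11')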
|
| Figure 88. Example REORG statement that specifies discard criteria for several tables
|
Example 17: Reorganizing only those partitions that are in REORG-pending
status. The control statement in Figure 89 specifies that REORG TABLESPACE is
to reorganize only those partitions of table space DBKQAA01.TPKQAA01 that are
in the range from 2 to 10 and are in REORG-pending status.
|
| Figure 89. Example REORG TABLESPACE statement with SCOPE PENDING
|
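A sketch of such a statement follows; SCOPE PENDING limits processing to the
partitions in the specified range that are in REORG-pending status:

REORG TABLESPACE DBKQAA01.TPKQAA01
  PART 2:10
  SCOPE PENDING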
You use REPAIR to replace invalid data with valid data. Be extremely careful when
using REPAIR. Improper use can damage the data even further.
For a diagram of REPAIR syntax and a description of available options, see “Syntax
and options of the REPAIR control statement” on page 484. For detailed guidance
on running this utility, see “Instructions for running REPAIR” on page 498.
Output: The output from the REPAIR utility can consist of one or more modified
pages in the specified DB2 table space or index and a dump of the contents.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v REPAIR privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute REPAIR, but only on a
table space in the DSNDB01 or DSNDB06 database.
To execute REPAIR with the DBD option, you must use a privilege set that includes
SYSADM, SYSCTRL, or installation SYSOPR authority.
REPAIR should be used only by a person who is knowledgeable about DB2 and your
data. Grant REPAIR authorization only to the appropriate people.
| REPAIR
level-id statement:
versions statement:
index-name-spec:
  INDEX index-name  (index-name can be qualified by creator-id.)
  INDEXSPACE index-space-name  (index-space-name can be qualified by database-name.)
OBJECT
Indicates that an object is to be repaired. This keyword is optional and is used for clarity only.
LOG
Indicates whether the changes that REPAIR makes are to be logged. If the
changes are to be logged, they are applied again if the data is recovered.
YES
Indicates that the changes are to be logged. The default is YES.
REPAIR LOG YES cannot override the LOG NO attribute of a table space.
NO
Indicates that the changes are not to be logged. You cannot use this option
with a DELETE statement.
REPAIR LOG NO can override the LOG YES attribute of a table space.
LEVELID
Indicates that the level identifier of the named table space, table space partition,
index, or index space partition is to be reset to a new identifier. Use LEVELID to
accept the use of a down-level data set. You cannot specify multiple LEVELID
keywords in the same REPAIR control statement.
You cannot use LEVELID with a table space, table space partition, index, or
index space partition that has outstanding indoubt log records or pages in the
logical page list (LPL).
Attention: Accepting the use of a down-level data set might cause data
inconsistencies. Problems with inconsistent data that result from resetting the
level identifier are the responsibility of the user.
TABLESPACE database-name.table-space-name
Specifies the table space (and, optionally, the database to which it belongs)
| whose level identifier is to be reset (if you specify LEVELID) or whose version
| identifier is to be updated (if you specify VERSIONS).
database-name
Specifies the name of the database to which the table space belongs.
The default is DSNDB04.
table-space-name
Specifies the name of the table space.
INDEX
Specifies the index whose level identifier is to be reset (if you specify LEVELID)
| or whose version identifier is to be updated (if you specify VERSIONS).
| creator-id.
| Specifies the creator of the index. Specifying this qualifier is optional.
index-name
Specifies the name of the index. Enclose the index name in quotation
marks if the name contains a blank.
| INDEXSPACE
| Specifies the index space whose level identifier is to be reset (if you specify
| LEVELID) or whose version identifier is to be updated (if you
| specify VERSIONS). You can obtain the index space name for an index from
| the SYSIBM.SYSINDEXES catalog table. The index space name must be
| qualified.
| database-name.
| Specifies the name of the database to which the index space belongs.
| index-space-name
| Specifies the name of the index space.
table-space-spec:
  TABLESPACE table-space-name  (table-space-name can be qualified by database-name.)
| (ALL) Specifies that all indexes in the table space will be processed.
PART integer
Specifies a particular partition whose COPY-pending, informational
COPY-pending, or RECOVER-pending status is to be reset. If you do not
specify PART, REPAIR resets the pending status of the entire table space
or index.
integer is the number of the partition and must be in the range from one to
| the number of partitions that are defined for the object. The maximum is
| 4096.
You can specify PART for NOCHECKPEND on a table space, and for
NORCVRPEND on indexes.
The PART keyword is not valid for a LOB table space or an index on the
auxiliary table.
NOCOPYPEND
Specifies that the COPY-pending status of the specified table space, or the
informational COPY-pending (ICOPY) status of the specified index is to be
reset.
NORCVRPEND
Specifies that the RECOVER-pending (RECP) status of the specified table
space or index is to be reset.
NORBDPEND
Specifies that the REBUILD-pending (RBDP) status, the page set
REBUILD-pending status (PSRBDP), or the RBDP* status of the specified
index is to be reset.
NOCHECKPEND
Specifies that the CHECK-pending (CHKP) status of the specified table
space or index is to be reset.
NOAUXWARN
Specifies that the auxiliary warning (AUXW) status of the specified table
space is to be reset. The specified table space must be a base table space
or a LOB table space.
NOAUXCHKP
Specifies that the auxiliary CHECK-pending (ACHKP) status of the specified
table space is to be reset. The specified table space must be a base table
space.
| NOAREORPENDSTAR
| Resets the advisory REORG-pending (AREO*) status of the specified table
| space or index.
In any LOCATE block, you can use VERIFY, REPLACE, or DUMP as often as you
like; you can use DELETE only once.
LOCATE statement:

  LOCATE table-space-spec table-options-spec
  LOCATE INDEX index-name index-options-spec
  LOCATE INDEXSPACE index-space-name index-options-spec
  LOCATE table-space-spec ROWID X'byte-string' VERSION X'byte-string'

Each of the first three forms is followed by one or more verify, replace, delete,
or dump statements; the ROWID form is followed by delete or dump statements.

table-space-spec:
  TABLESPACE table-space-name  (table-space-name can be qualified by database-name.)

table-options-spec (the options are described below):
  PART integer
  PAGE integer or PAGE X'byte-string'
  RID X'byte-string'
  KEY literal INDEX index-name

index-options-spec:
  PART integer
  PAGE integer or PAGE X'byte-string'
One LOCATE statement is required for each unit of data that is to be repaired.
Several LOCATE statements can appear after each REPAIR statement.
database-name
Is the name of the database to which the table space belongs and is
optional. The default is DSNDB04.
table-space-name
Is the name of the table space that contains the data that you want to
repair.
PART integer
Specifies the partition that contains the page that is to be located. Part is valid
only for partitioned table spaces.
integer is the number of the partition.
PAGE
Specifies the relative page number within the table space, partitioned table
space, or index that is to be operated on. The first page, in either case, is 0
(zero).
integer
integer is a decimal number from one to six digits in length.
X'byte-string'
Specifies that the data of interest is an entire page. The specified
offsets in byte-string and in subsequent statements are relative to the
beginning of the page. The first byte of the page is at offset 0.
byte-string is a hexadecimal value from one to eight characters in
length. You do not need to enter leading zeros. Enclose the byte-string
between apostrophes, and precede it with X.
RID X'byte-string'
Specifies that the data that is to be located is a single row. The specified offsets
in byte-string and in subsequent statements are relative to the beginning of the
row. The first byte of the stored row prefix is at offset 0.
byte-string can be a hexadecimal value from one to eight characters in length.
You do not need to enter leading zeros. Enclose the byte string between
apostrophes, and precede it with an X.
KEY literal
Specifies that the data that is to be located is a single row, identified by literal.
The specified offsets in subsequent statements are relative to the beginning of
the row. The first byte of the stored row prefix is at offset 0.
literal is any SQL constant that can be compared with the key values of the
named index.
Character constants that are specified within the LOCATE KEY option cannot
be specified as ASCII or Unicode character strings. No conversion of the values
is performed. To use this option when the table space is ASCII or Unicode, you
should specify the values as hexadecimal constants.
If more than one row has the value literal in the key column, REPAIR returns a
list of record identifiers (RIDs) for records with that key value, but does not
perform any other operations (verify, replace, delete, or dump) until the next
LOCATE TABLESPACE statement is encountered. To repair the proper data,
write a LOCATE TABLESPACE statement that selects the desired row, using
the RID option, the PAGE option, or a different KEY and INDEX option. Then
execute REPAIR again.
ROWID X'byte-string'
Specifies that the data that is to be located is a LOB in a LOB table space.
One LOCATE statement is required for each unit of data that is to be repaired.
Multiple LOCATE statements can appear after each REPAIR statement.
X'byte-string'
Specifies that the data of interest is an entire page. The specified
offsets in byte-string and in subsequent statements are relative to
the beginning of the page. The first byte of the page is at offset 0.
byte-string is a hexadecimal value from one to eight characters in
length. You do not need to enter leading zeros. Enclose the
byte-string between apostrophes, and precede it with X.
verify statement:
  VERIFY OFFSET integer DATA X'byte-string'
OFFSET can also be given as X'byte-string'; the default is 0. DATA can also be
given as 'character-string'.

replace statement:
  REPLACE RESET OFFSET integer DATA X'byte-string'
OFFSET and DATA take the same alternative forms as in the verify statement; the
default OFFSET is 0.
The DELETE statement operates without regard for referential constraints. If you
delete a parent row, its dependent rows remain unchanged in the table space.
However, in the DB2 catalog and directory table spaces, where links are used to
reference other tables in the catalog, deleting a parent row causes all child rows to
be deleted, as well. Moreover, deleting a child row in the DB2 catalog tables also
updates its predecessor and successor pointer to reflect the deletion of this row.
Therefore, if the child row has incorrect pointers, the DELETE might lead to an
unexpected result. See “Example 5: Repairing a table space with an orphan row” on
page 507 for a possible method of deleting a child row without updating its
predecessor and successor pointer.
In any LOCATE block, you can include no more than one DELETE option.
| If you have coded any of the following options, you cannot use DELETE:
| v The LOG NO option on the REPAIR statement
| v A LOCATE INDEX statement to begin the LOCATE block
| v The PAGE option on the LOCATE TABLESPACE statement in the same LOCATE
| block
| v A REPLACE statement for the same row of data
When you specify LOCATE ROWID for a LOB table space, the LOB that is
specified by ROWID is deleted with its index entry. All pages that are occupied by
the LOB are converted to free space. The DELETE statement does not remove
any reference to the deleted LOB from the base table space.
delete statement:
  DELETE
When you specify LOCATE ROWID for a LOB table space, one or more map or
data pages of the LOB are dumped. The DUMP statement dumps all of the LOB
column pages if you do not specify either the MAP or DATA keyword.
dump statement:
  DUMP OFFSET integer LENGTH integer PAGES integer MAP pages DATA pages
OFFSET can also be given as X'byte-string'; the default is 0. LENGTH can also be
given as X'byte-string' or *. PAGES can also be given as X'byte-string'. The
keywords other than DUMP are optional; MAP and DATA apply to LOB table spaces.
MAP pages
Specifies that only the LOB map pages are to be dumped.
pages specifies the number of LOB map pages that are to be dumped. If you
do not specify pages, all LOB map pages of the LOB that is specified by
ROWID and version are dumped.
DATA pages
Specifies that only the LOB data pages are to be dumped.
pages specifies the number of LOB data pages that are to be dumped. If you
do not specify pages, all LOB data pages of the LOB that is specified by
ROWID and version are dumped.
The REPAIR utility assumes that the links in table spaces DSNDB01.DBD01,
DSNDB06.SYSDBAUT, and DSNDB06.SYSDBASE are intact. Before executing
REPAIR with the DBD statement, run the DSN1CHKR utility on these table spaces
to ensure that the links are not broken. For more information about DSN1CHKR,
see Chapter 38, “DSN1CHKR,” on page 691.
The database on which REPAIR DBD is run must be started for access by utilities
only. For more information about using the DBD statement, see page “Using the
DBD statement” on page 500.
You can use REPAIR DBD on declared temporary tables, which must be created in
a database that is defined with the AS TEMP clause. No other DB2 utilities can be
used on a declared temporary table, its indexes, or its table spaces.
Attention: Use the DROP option with extreme care. Using DROP can cause
additional damage to your data. For more assistance, you can contact IBM
Software Support.
DATABASE database-name
Specifies the target database.
database-name is the name of the target database, which cannot be DSNDB01
(the DB2 directory) or DSNDB06 (the DB2 catalog).
If you use TEST, DIAGNOSE, or REBUILD, database-name cannot be
DSNDB07 (the work file database).
If you use DROP, database-name cannot be DSNDB04 (the default database).
DBID X'dbid'
Specifies the database descriptor identifier for the target database.
dbid is the database descriptor identifier.
TEST
Specifies that a DBD is to be built from information in the DB2 catalog, and is to
be compared with the DBD in the DB2 directory. If you specify TEST, DB2
reports significant differences between the two DBDs.
If the condition code is 0, the DBD in the DB2 directory is consistent with the
information in the DB2 catalog.
If the condition code is not 0, then the information in the DB2 catalog and the
DBD in the DB2 directory might be inconsistent. Run REPAIR DBD with the
DIAGNOSE option to gather information that is necessary for resolving any
possible inconsistency.
DIAGNOSE
Specifies that information that is necessary for resolving an inconsistent
database definition is to be generated. Like the TEST option, DIAGNOSE builds
a DBD that is based on the information in the DB2 catalog and compares it with
the DBD in the DB2 directory. In addition, DB2 reports any differences between
the two DBDs, and produces hexadecimal dumps of the inconsistent DBDs.
If the condition code is 0, the information in the DB2 catalog and the DBD in the
DB2 directory is consistent.
If the condition code is 8, the information in the DB2 catalog and the DBD in the
DB2 directory might be inconsistent.
For further assistance in resolving any inconsistencies, you can contact IBM
Software Support.
REBUILD
Specifies that the DBD that is associated with the specified database is to be
rebuilt from the information in the DB2 catalog.
Attention: Use the REBUILD option with extreme care, as you can cause more
damage to your data. For more assistance, you can contact IBM Software
Support.
OUTDDN ddname
Specifies the DD statement for an optional output data set. This data set
contains copies of the DB2 catalog records that are used to rebuild the DBD.
ddname is the name of the DD statement.
Attention: Be extremely careful when using the REPAIR utility to replace data.
Changing data to invalid values by using REPLACE might produce unpredictable
results, particularly when changing page header information. Improper use of
REPAIR can result in damaged data, or in some cases, system failure.
The following objects are named in the utility control statement and do not require a
DD statement in the JCL:
Table space or index
Object that is to be repaired.
Calculating output data set size: Use the following formula to estimate the size of
the output data set:
SPACE = (4096,(n,n))
In this formula, n = the total number of DB2 catalog records that relate to the
database on which REPAIR DBD is being executed.
You can calculate an estimate for n by summing the results of SELECT COUNT(*)
from all of the catalog tables in the SYSDBASE table space, where the name of the
database that is associated with the record matches the database on which
REPAIR DBD is being executed.
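For example, queries such as the following supply values that you sum to estimate n.
This is a sketch only; SYSIBM.SYSTABLESPACE, SYSIBM.SYSTABLES, and SYSIBM.SYSINDEXES are
among the catalog tables in SYSDBASE, and the database name DSN8D81A stands in for your
own database name.
SELECT COUNT(*) FROM SYSIBM.SYSTABLESPACE WHERE DBNAME = 'DSN8D81A';
SELECT COUNT(*) FROM SYSIBM.SYSTABLES     WHERE DBNAME = 'DSN8D81A';
SELECT COUNT(*) FROM SYSIBM.SYSINDEXES    WHERE DBNAME = 'DSN8D81A';
Repeat the count for each remaining catalog table in the SYSDBASE table space and add
the results to obtain n.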
To reset the auxiliary warning (AUXW) status for a LOB table space:
1. Update or correct the invalid LOB columns, then
2. Run the CHECK LOB utility with the AUXERROR INVALIDATE option if invalid
LOB columns were corrected.
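For example, a control statement like the following performs the check that is
described in step 2. This is a sketch only; DSN8D81L.TSLOB is a hypothetical LOB table
space name that you replace with your own.
CHECK LOB TABLESPACE DSN8D81L.TSLOB AUXERROR INVALIDATE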
Consider using the REBUILD INDEX or RECOVER INDEX utility on an index that is
in REBUILD-pending status, rather than running REPAIR SET INDEX
NORBDPEND. RECOVER uses DB2-controlled recovery information, whereas
REPAIR SET INDEX resets the REBUILD-pending status without considering the
recoverability of the index. Recoverability issues include the availability of image
copies, of rows in SYSIBM.SYSCOPY, and of log data sets.
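For example, a statement like the following rebuilds all indexes on the tables in the
table space and resets their REBUILD-pending status. This is a sketch only; the sample
table space name is illustrative.
REBUILD INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E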
DB2 reads each table space in the database during the REBUILD process to
gather information. If the data sets for the table spaces do not exist or are not
accessible to DB2, the REBUILD abnormally terminates.
REPAIR DBD REBUILD obtains environment information, such as the character
that is used for the decimal point, from the DSNHDECP module for the
subsystem.
6. If you suspect an inconsistency in the DBD of the work file database, run
REPAIR DBD DROP or DROP DATABASE (SQL), and then recreate it. If you
receive errors when you drop the work file database, contact IBM Software
Support for assistance.
| 2. If you are copying indexes that have not been altered in Version 8, check the
| SYSIBM.SYSINDEXES catalog table on both subsystems to ensure that the
| value in both the CURRENT_VERSION column and the OLDEST_VERSION
| column is 0.
| 3. If the object has been altered since its creation and has never been
| reorganized, run the REORG utility on the object. You can determine if an object
| has been altered but not reorganized by checking the values of the
| OLDEST_VERSION and CURRENT_VERSION columns in
| SYSIBM.SYSTABLESPACE or SYSIBM.SYSINDEXES. If OLDEST_VERSION is
| 0 and CURRENT_VERSION is greater than 0, run REORG.
| 4. Ensure that enough version numbers are available. For a table space, the
| combined active number of versions for the object on both the source and target
| subsystems must be less than 255. For an index, the combined active number
| of versions must be less than 16. Use the following guidelines to calculate the
| active number of versions for the object on both the source and target
| subsystems:
| v If the value in the CURRENT_VERSION column is less than the value in the
| OLDEST_VERSION column, add the maximum number of versions (255 for a
| table space or 16 for an index) to the value in the CURRENT_VERSION
| column.
| v Use the following formula to calculate the number of active versions:
| number of active_versions =
| MAX(target.CURRENT_VERSION,source.CURRENT_VERSION)
| - MIN(target.OLDEST_VERSION,source.OLDEST_VERSION) + 1
| If the number of active versions is too high, you must reduce the number of
| active versions by running REORG on both the source and target objects. Then,
| use the COPY utility to take a copy, and run MODIFY RECOVERY to recycle
| the version numbers. A query sketch for retrieving the catalog values that
| this calculation uses follows this list.
| 5. Run the DSN1COPY utility with the OBIDXLAT option. On the control statement,
| specify the proper mapping of table database object identifiers (OBIDs) for the
| table space or index from the source to the target subsystem.
| 6. Run REPAIR VERSIONS on the object on the target subsystem. For table
| spaces, the utility updates the following columns:
| v OLDEST_VERSION and CURRENT_VERSION in SYSTABLEPART
| v VERSION in SYSTABLES
| For indexes, the utility updates OLDEST_VERSION and CURRENT_VERSION
| in SYSINDEXES. DB2 uses the following formulas to update these columns in
| both SYSTABLEPART and SYSINDEXES:
| CURRENT_VERSION = MAX(target.CURRENT_VERSION,source.CURRENT_VERSION)
| OLDEST_VERSION = MIN(target.OLDEST_VERSION,source.OLDEST_VERSION)
| For more information about versions and how they are used by DB2, see Part 2 of
| DB2 Administration Guide.
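The following queries retrieve the CURRENT_VERSION and OLDEST_VERSION values that the
calculations in steps 3 and 4 use. This is a sketch only; run the queries on both the
source and the target subsystem, and substitute your own object names for the sample
names that are shown.
SELECT DBNAME, NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'DSN8D81A' AND NAME = 'DSN8S81E';

SELECT CREATOR, NAME, OLDEST_VERSION, CURRENT_VERSION
  FROM SYSIBM.SYSINDEXES
  WHERE CREATOR = 'DSN8810' AND NAME = 'XEMP1';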
Table 93 shows which claim classes REPAIR drains and any restrictive state that
the utility sets on the target object.
Table 93. Claim classes of REPAIR operations
Action                                       Table space or partition   Index or partition
REPAIR LOCATE KEY DUMP or VERIFY             DW/UTRO                    DW/UTRO
REPAIR LOCATE KEY DELETE or REPLACE          DA/UTUT                    DA/UTUT
REPAIR LOCATE RID DUMP or VERIFY             DW/UTRO                    None
REPAIR LOCATE RID DELETE                     DA/UTUT                    DA/UTUT
REPAIR LOCATE RID REPLACE                    DA/UTUT                    None
REPAIR LOCATE TABLESPACE DUMP or VERIFY      DW/UTRO                    None
REPAIR LOCATE TABLESPACE REPLACE             DA/UTUT                    None
REPAIR LOCATE INDEX PAGE DUMP or VERIFY      None                       DW/UTRO
REPAIR LOCATE INDEX PAGE DELETE              None                       DA/UTUT
Legend:
v DA - Drain all claim classes - no concurrent SQL access.
v DW - Drain the write claim class - concurrent access for SQL readers.
v UTUT - Utility restrictive state - exclusive control.
v UTRO - Utility restrictive state - read-only access allowed.
v None - Object is not affected by this utility.
REPAIR does not set a utility restrictive state if the target object is
DSNDB01.SYSUTILX.
Table 94 and Table 95 on page 504 show which utilities can run concurrently with
REPAIR on the same target object. The target object can be a table space, an
index space, or a partition of a table space or index space. If compatibility depends
on particular options of a utility, that information is also shown in the table.
Table 94 shows which utilities can run concurrently with REPAIR LOCATE by KEY
or RID.
Table 94. Utility compatibility with REPAIR, LOCATE by KEY or RID
Utility                                      DUMP or VERIFY   DELETE or REPLACE
CHECK DATA                                   No               No
CHECK INDEX                                  Yes              No
CHECK LOB                                    Yes              No
COPY INDEXSPACE                              Yes              No
COPY TABLESPACE                              Yes              No
DIAGNOSE                                     Yes              Yes
LOAD                                         No               No
MERGECOPY                                    Yes              Yes
MODIFY                                       Yes              Yes
QUIESCE                                      Yes              No
REBUILD INDEX                                No               No
RECOVER INDEX                                No               No
RECOVER TABLESPACE                           No               No
REORG INDEX                                  No               No
REORG TABLESPACE UNLOAD CONTINUE or PAUSE    No               No
REORG TABLESPACE UNLOAD ONLY or EXTERNAL     Yes              No
REPAIR DELETE or REPLACE                     No               No
REPAIR DUMP or VERIFY                        Yes              No
REPORT                                       Yes              Yes
RUNSTATS INDEX SHRLEVEL CHANGE               Yes              Yes
RUNSTATS INDEX SHRLEVEL REFERENCE            Yes              No
RUNSTATS TABLESPACE                          Yes              No
STOSPACE                                     Yes              Yes
UNLOAD                                       Yes              No
Notes:
1. REORG INDEX is compatible with LOCATE by RID, DUMP, VERIFY, or
REPLACE.
2. RECOVER INDEX is compatible with LOCATE by RID, DUMP, or VERIFY.
3. REPAIR LOCATE INDEX PAGE REPLACE is compatible with LOCATE by RID
or REPLACE.
Table 95 shows which utilities can run concurrently with REPAIR LOCATE by PAGE.
Table 95. Utility compatibility with REPAIR, LOCATE by PAGE
                     TABLESPACE        TABLESPACE   INDEX DUMP    INDEX
Utility or action    DUMP or VERIFY    REPLACE      or VERIFY     REPLACE
SQL read             Yes               No           Yes           No
SQL write            No                No           No            No
CHECK DATA           No                No           No            No
CHECK INDEX          Yes               No           Yes           No
CHECK LOB            Yes               No           Yes           No
COPY INDEXSPACE      Yes               Yes          Yes           No
COPY TABLESPACE      Yes               No           Yes           No
DIAGNOSE             Yes               Yes          Yes           Yes
LOAD                 No                No           No            No
Notes:
1. REPAIR LOCATE INDEX PAGE REPLACE is compatible with LOCATE TABLESPACE PAGE.
Error messages: At each LOCATE statement, the last data page and the new
page that are being located are checked for a few common errors, and messages
are issued.
Data checks: Although REPAIR enables you to manipulate both user and DB2 data
by bypassing SQL, it does perform some checking of data. For example, if REPAIR
tries to write a page with the wrong page number, DB2 abnormally terminates with
abend code 04E and reason code C200B0. If the page is broken because the broken
page bit is on or the incomplete page flag is set, REPAIR issues the following
message:
DSNU670I + DSNUCBRP - PAGE X'000004' IS A BROKEN PAGE
To resolve this error condition, submit the following control statement, which
specifies that REPAIR is to delete the nonindexed row and log the change. (The
LOG keyword is not required; the change is logged by default.) The RID option
identifies the row that REPAIR is to delete.
REPAIR
LOCATE TABLESPACE DSNDB04.TS1 RID (X'0000000503')
DELETE
Example 3: Reporting whether catalog and directory DBDs differ. The following
control statement specifies that REPAIR is to compare the DBD for DSN8D2AP in
the catalog with the DBD for DSN8D2AP in the directory.
REPAIR DBD TEST DATABASE DSN8D2AP
If the condition code is 0, the DBDs are consistent. If the condition code is not 0,
the DBDs might be inconsistent. In this case, run REPAIR DBD with the
DIAGNOSE option, as shown in example 4, to find out more detailed information
about any inconsistencies.
Setting the pointer to zeros prevents the next step from updating link pointers
while deleting the orphan. Updating the link pointers can cause DB2 to
abnormally terminate if the orphan’s pointers are incorrect.
2. Submit the following control statement, which deletes the orphan:
REPAIR OBJECT LOG YES
LOCATE TABLESPACE DSNDB06.SYSDBASE RID X'00002420'
VERIFY OFFSET X'06' DATA X'00002521'
DELETE
Output: The output from REPORT TABLESPACESET consists of the names of all
table spaces in the table space set that you specify. It also lists all tables in the
table spaces and all tables that are dependent on those tables.
The output from REPORT RECOVERY consists of the recovery history from the
SYSIBM.SYSCOPY catalog table, log ranges from the SYSIBM.SYSLGRNX
directory table, and volume serial numbers where archive log data sets from the
BSDS reside. In addition, REPORT RECOVERY output includes information about
any indexes on the table space that are in the informational COPY-pending status
because this information affects the recoverability of an index. For more information
about this situation, see “Setting and clearing the informational COPY-pending
status” on page 117.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v RECOVERDB privilege for the database
v DBADM or DBCTRL authority for the database
v SYSCTRL or SYSADM authority
An ID with DBCTRL or DBADM authority over database DSNDB06 can run the
REPORT utility on any table space in DSNDB01 (the directory) or DSNDB06 (the
catalog), as can any ID with installation SYSOPR, SYSCTRL, or SYSADM authority.
Syntax diagram
REPORT RECOVERY
    { TABLESPACE { table-space-name-spec | LIST listdef-name }
        [ INDEX NONE | INDEX ALL ]
    | index-list-spec }
    [ info options ]

REPORT TABLESPACESET [ TABLESPACE ] table-space-name-spec

index-list-spec:
    INDEXSPACE { [database-name.]index-space-name | LIST listdef-name }
  | INDEX { [creator-id.]index-name | LIST listdef-name }

info options:
    [ DSNUM ALL | DSNUM integer ] [ CURRENT ] [ SUMMARY ] [ LOCALSITE ] [ RECOVERYSITE ]
    [ ARCHLOG 1 | ARCHLOG 2 | ARCHLOG ALL ]

table-space-name-spec:
    [database-name.]table-space-name

The defaults are INDEX NONE and DSNUM ALL.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
RECOVERY
Indicates that recovery information for the specified table space or index is to
be reported.
TABLESPACE database-name.table-space-name
For REPORT RECOVERY, specifies the table space (and, optionally, the
database to which it belongs) that is being reported.
For REPORT TABLESPACESET, specifies a table space (and, optionally, the
database to which it belongs) in the table space set.
database-name
Optionally specifies the database to which the table space belongs.
table-space-name
Specifies the table space.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The
utility allows one LIST keyword for each REPORT control statement.
The list must contain only table spaces. Do not specify LIST with the
TABLESPACE...table-space-name specification. The TABLESPACE
keyword is required in order to validate the contents of the list.
REPORT RECOVERY TABLESPACE is invoked once per item in the
list.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
INDEXSPACE database-name.index-space-name
Specifies the index space that is being reported.
database-name
Optionally specifies the database to which the index space belongs.
index-space-name
Specifies the index space name for the index that is being reported.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list. The
utility allows one LIST keyword for each REPORT control statement.
The list must contain only index spaces. Do not specify LIST with the
INDEXSPACE...index-space-name specification. The INDEXSPACE
keyword is required in order to validate the contents of the list.
REPORT RECOVERY INDEXSPACE is invoked once for each item in
the list.
In this format:
catname Is the VSAM catalog name or alias.
x Is C or D.
dbname Is the database name.
tsname Is the table space name.
y Is I or J.
nnn Is the data set integer.
CURRENT
Specifies that only the SYSCOPY entries that were written after the last
recovery point of the table space are to be reported. The last recovery point
is the last full image copy, LOAD REPLACE LOG YES image copy, or
REORG LOG YES image copy. If you specify DSNUM ALL, the last
recovery point is a full image copy that was taken for the entire table space
or index space. However, if you specify the CURRENT option, but the last
recovery point does not exist on the active log, DB2 prompts you to mount
archive tapes until this point is found.
CURRENT also reports only the SYSLGRNX rows and archive log volumes
that were created after the last incremental image copy entry. If no
incremental image copies were created, only the SYSLGRNX rows and
archive log volumes that were created after the last recovery point are
reported.
If you do not specify CURRENT or if no last recovery point exists, all
SYSCOPY and SYSLGRNX entries for that table space or index space are
reported, including those on archive logs. If you do not specify CURRENT,
the entries that were written after the last recovery point are marked with an
asterisk (*) in the report.
SUMMARY
Specifies that only a summary of volume serial numbers is to be reported. It
reports the following volume serial numbers:
v Where the archive log data sets from the BSDS reside
v Where the image copy data sets from SYSCOPY reside
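For example, a control statement such as the following limits the report to entries that
were written after the last recovery point and summarizes the volume serial numbers.
This is a sketch only; the sample table space name is illustrative.
REPORT RECOVERY TABLESPACE DSN8D81A.DSN8S81E CURRENT SUMMARY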
1. Prepare the necessary data sets, as described in “Data sets that REPORT
uses.”
2. Create JCL statements, by using one of the methods that are described in
Chapter 3, “Invoking DB2 online utilities,” on page 19. (For examples of JCL for
REPORT, see “Sample REPORT control statements” on page 523.)
3. Prepare a utility control statement that specifies the options for the tasks that
you want to perform, as described in “Instructions for specific tasks.”
4. Check the compatibility table in “Concurrency and compatibility for REPORT” on
page 517 if you want to run other jobs concurrently on the same target objects.
5. Plan for restart if the REPORT job doesn’t complete, as described in
“Terminating or restarting REPORT” on page 516.
6. Run REPORT by using one of the methods that are described in Chapter 3,
“Invoking DB2 online utilities,” on page 19.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Object that is to be reported.
You can also use REPORT to obtain recovery information about the catalog and
directory. When doing so, use the CURRENT option to avoid unnecessary mounting
of archive tapes.
REPORT uses asterisks to denote any non-COPY entries that it finds in the
SYSIBM.SYSCOPY catalog table. For example, an entry that is added by the
QUIESCE utility is marked with asterisks in the REPORT output.
Recommendation: For image copies of partitioned table spaces that are taken with
the DSNUM ALL option, run REPORT RECOVERY DSNUM ALL. If you run
REPORT RECOVERY DSNUM ALL CURRENT, DB2 reports additional historical
information that dates back to the last full image copy that was taken for the entire
table space.
The REPORT RECOVERY utility output indicates whether any image copies are
unusable; image copies that were taken prior to REORG or LOAD events that reset
REORG-pending status are marked as unusable. In the REPORT RECOVERY
output, look at the IC TYPE and STYPE fields to help you determine which image
copies are unusable.
For example, in the sample REPORT RECOVERY output in Figure 92, the value in
the first IC TYPE field, *R*, indicates that a LOAD REPLACE LOG YES operation
occurred. The value in the second IC TYPE field, <F>, indicates that a full image
copy was taken.
Figure 92. Sample REPORT RECOVERY output before table space placed in REORG-pending status
After this image copy was taken, assume that an event occurred that put the table
space in REORG-pending status. Figure 93 shows the next several rows of
REPORT RECOVERY output for the same table space. The value in the first
IC TYPE field, *X*, indicates that a REORG LOG YES event occurred. In the same
SYSCOPY record, the value in the STYPE field, A, indicates that this REORG job
reset the REORG-pending status. Any image copies that are taken before this
status was reset are unusable. (Thus, the full image copy in the REPORT output in
Figure 92 on page 515 is unusable.) The next record contains an F in the IC TYPE
field and an X in the STYPE field, which indicates that a full image copy was taken
during the REORG job. This image copy is usable.
Figure 93. Sample REPORT RECOVERY output after REORG-pending status is reset
For a complete explanation of the SYSCOPY fields, see DB2 SQL Reference.
You can use REPORT TABLESPACESET on the DB2 catalog and directory table
spaces.
You can restart a REPORT utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 42.
REPORT can run concurrently on the same target object with any utility or SQL
operation.
The report contains three sections, which include the following types of information:
v Recovery history from the SYSIBM.SYSCOPY catalog table.
For a description of the fields in the SYSCOPY rows, see the table that describes
SYSIBM.SYSCOPY in Appendix D of DB2 SQL Reference.
v Log ranges from SYSIBM.SYSLGRNX.
v Volume serial numbers where archive log data sets from the BSDS reside.
If REPORT has no data to display for one or more of these topics, the
corresponding sections of the report contain the following message:
DSNU588I - NO DATA TO BE REPORTED
DSNU592I = DSNUPREC - REPORT RECOVERY INFORMATION FOR DATA SHARING MEMBER : V81A
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DBOS3002.TPOS3002
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
092502 08361462 00000DE3C9FA 00000DE5BC88 B848AEDD97F6 B848AF380D15 0001 0001
092502 08361646 00000DE3E91D 00000DE5BD88 B848AEDF5972 B848AF3828FE 0002 0001
092502 08361831 00000DE40804 00000DE5BE88 B848AEE11D01 B848AF382BAF 0003 0001
092502 08362021 00000DE426EB 00000DE5C000 B848AEE2ECAC B848AF382F61 0004 0001
092502 08362214 00000DE445D2 00000DE5C100 B848AEE4C45C B848AF3830BC 0005 0001
092502 08362404 00000DE464B9 00000DE5C200 B848AEE69324 B848AF38331C 0006 0001
092502 08362822 00000DE483A0 00000DE5C300 B848AEEA8F55 B848AF3834F7 0007 0001
092502 08363681 00000DE4A2AD 00000DE5C400 B848AEF2C02C B848AF3836FE 0008 0001
092502 08364286 00000DE4C194 00000DE5C500 B848AEF886EE B848AF383888 0009 0001
092502 08364946 00000DE4E0A1 00000DE5C600 B848AEFED236 B848AF3839DB 0010 0001
...
092502 08392880 00000DE83A3E 00000DF41788 B848AF96C610 B848B1752630 0015 0001
092502 08392883 00000DE83DA8 00000DF41526 B848AF96CD77 B848B174A721 0016 0001
DSNU592I = DSNUPREC - REPORT RECOVERY INFORMATION FOR DATA SHARING MEMBER : V81B
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DBOS3002.TPOS3002
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
092502 08502167 00000010981C 000000128AAB B848B20565F4 B848B206C7D0 0001 0002
092502 08502170 00000010B7CF 000000128BAB B848B2056F2B B848B206C976 0002 0002
092502 08502176 00000010D6B6 000000128CAB B848B2057C32 B848B206CAED 0003 0002
092502 08502182 00000010F5DA 000000128DAB B848B2058BE5 B848B206CC55 0004 0002
092502 08502188 0000001115CF 000000128EAB B848B2059AAD B848B206CDCA 0005 0002
092502 08502193 0000001134B6 000000129000 B848B205A5C3 B848B206CF53 0006 0002
...
092502 09064089 00000083C29F 0000009A2DB2 B848B5AB422E B848B6D42EBD 0015 0002
092502 09070293 00000089949A 0000009A3090 B848B5C04679 B848B6D4DEE7 0016 0002
DSNU584I = DSNUPPBS - REPORT RECOVERY TABLESPACE DBOS3002.TPOS3002 ARCHLOG1 BSDS VOLUMES
DSNU588I = DSNUPPBS - NO DATA TO BE REPORTED
DSNU584I = DSNUPPBS - REPORT RECOVERY TABLESPACE DBOS3002.TPOS3002 ARCHLOG2 BSDS VOLUMES
DSNU588I = DSNUPPBS - NO DATA TO BE REPORTED
Figure 96 on page 521 shows sample output for the statement REPORT
RECOVERY TABLESPACE ARCHLOG. Under message DSNU584I, the archive log
entries after the last recovery point are marked with an asterisk (*). If you code the
CURRENT option, the output from message DSNU584I would include only the
archive logs after the last recovery point and the asterisk (*) would not be included
in the report.
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DB580501.TS580501
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
091702 10025977 00001E4FD319 00001E4FEB91 00001E4FD319 00001E4FEB91 0000 0000 *
091702 10030124 00001E505B93 00001E58BC23 00001E505B93 00001E58BC23 0000 0000 *
091702 10032302 00001E59A637 00001E5A5258 00001E59A637 00001E5A5258 0000 0000 *
091702 10035391 00001E5B26AB 00001E6222F3 00001E5B26AB 00001E6222F3 0000 0000 *
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DB580501.TS580501
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
091702 10025977 00001E4FD319 00001E4FEB91 00001E4FD319 00001E4FEB91 0000 0000
091702 10030124 00001E505B93 00001E58BC23 00001E505B93 00001E58BC23 0000 0000
091702 10032302 00001E59A637 00001E5A5258 00001E59A637 00001E5A5258 0000 0000
091702 10035391 00001E5B26AB 00001E6222F3 00001E5B26AB 00001E6222F3 0000 0000
The preceding statement produces output similar to the output shown in Figure 97
on page 524.
|
| Figure 97. Example output for REPORT RECOVERY
|
Example 2: Reporting table spaces that are referentially related. The following
control statement specifies that REPORT is to provide a list of all table spaces that
are referentially related to table space DSN8D81A.DSN8S81E. The output also
includes a list of any related LOB table spaces and of all indexes on the tables in
those table spaces.
REPORT TABLESPACESET TABLESPACE DSN8D81A.DSN8S81E
The preceding statement produces output similar to the output shown in Figure 98.
TABLESPACE : DSN8D81A.DSN8S81D
TABLE : DSN8810.DEPT
INDEXSPACE : DSN8D81A.XDEPT1
INDEX : DSN8810.XDEPT1
INDEXSPACE : DSN8D81A.XDEPT2
INDEX : DSN8810.XDEPT2
INDEXSPACE : DSN8D81A.XDEPT3
INDEX : DSN8810.XDEPT3
DEP TABLE : DSN8810.DEPT
DSN8810.EMP
DSN8810.PROJ
TABLESPACE : DSN8D81A.DSN8S81E
TABLE : DSN8810.EMP
INDEXSPACE : DSN8D81A.XEMP1
INDEX : DSN8810.XEMP1
INDEXSPACE : DSN8D81A.XEMP2
INDEX : DSN8810.XEMP2
DEP TABLE : DSN8810.DEPT
DSN8810.EMPPROJACT
DSN8810.PROJ
TABLESPACE : DSN8D81A.DSN8S81P
TABLE : DSN8810.ACT
INDEXSPACE : DSN8D81A.XACT1
INDEX : DSN8810.XACT1
INDEXSPACE : DSN8D81A.XACT2
INDEX : DSN8810.XACT2
DEP TABLE : DSN8810.PROJACT
TABLE : DSN8810.EMPPROJACT
INDEXSPACE : DSN8D81A.XEMPPROJ
INDEX : DSN8810.XEMPPROJACT1
INDEXSPACE : DSN8D81A.KRZC1YHQ
INDEX : DSN8810.XEMPPROJACT2
TABLE : DSN8810.PROJ
INDEXSPACE : DSN8D81A.XPROJ1
INDEX : DSN8810.XPROJ1
INDEXSPACE : DSN8D81A.XPROJ2
INDEX : DSN8810.XPROJ2
DEP TABLE : DSN8810.PROJ
DSN8810.PROJACT
TABLE : DSN8810.PROJACT
INDEXSPACE : DSN8D81A.XPROJAC1
INDEX : DSN8810.XPROJAC1
DEP TABLE : DSN8810.EMPPROJACT
DSNU580I DSNUPORT - REPORT UTILITY COMPLETE - ELAPSED TIME=00:00:00
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
The preceding statement produces output similar to the output shown in Figure 99.
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR TABLESPACE DSN8D8
UCDATE UCTIME START RBA STOP RBA START LRSN STOP LRSN PARTITION MEMBER ID
111402 14582255 000001741BF8 000001742B91 000001741BF8 000001742B91 0004 0000
111402 14582699 00000177F09E 0000018798B8 00000177F09E 0000018798B8 0004 0000
111402 14584755 0000018EBEC5 0000019456AB 0000018EBEC5 0000019456AB 0004 0000
DSNU583I = DSNUPPLR - SYSLGRNX ROWS FROM REPORT RECOVERY FOR INDEX DSN8810.XDEP
DSNU588I = DSNUPPLR - NO DATA TO BE REPORTED
| The RESTORE SYSTEM utility can be run from any member in a data sharing
| group, even one that is normally quiesced when any backups are taken. Any
| member in the data sharing group that is active at or beyond the log truncation
| point must be restarted, and its logs are truncated to the SYSPITR LRSN point. You
| can specify the SYSPITR LRSN point in the CRESTART control statement of the
| DSNJU003 (Change Log Inventory) utility. Any data sharing group member that is
| normally quiesced at the time the backups are taken and is not active at or beyond
| the log truncation point does not need to be restarted.
| RESTORE SYSTEM does not restore logs; the utility only applies the logs. If you
| specified BACKUP SYSTEM FULL to create copies of both the data and the logs,
| you can restore the logs by another method. For more information about BACKUP
| SYSTEM FULL, see Chapter 5, “BACKUP SYSTEM,” on page 47.
| Output: Output for RESTORE SYSTEM is the recovered copy of the data volume
| or volumes.
| Related information: For more information about the use of RESTORE SYSTEM
| in system level point-in-time recovery, see Part 4 of DB2 Administration Guide.
| Authorization required: To run this utility, you must use a privilege set that
| includes SYSADM authority.
|
| Syntax and options of the RESTORE SYSTEM control statement
| The utility control statement defines the function that the utility job performs. Use
| the ISPF/PDF edit function to create a control statement and to save it in a
| sequential or partitioned data set. When you create the JCL for running the job, use
| the SYSIN DD statement to specify the name of the data set that contains the utility
| control statement.
| When you specify RESTORE SYSTEM, you can specify only the following
| statements in the same step:
| v DIAGNOSE
| v OPTIONS PREVIEW
| v OPTIONS OFF
| v OPTIONS KEY
| v OPTIONS EVENT WARNING
| In addition, RESTORE SYSTEM must be the last statement in SYSIN.
| Syntax diagram
|
|   RESTORE SYSTEM [ LOGONLY ]
|
| Option descriptions
| “Control statement coding rules” on page 19 provides general information about
| specifying options for DB2 utilities.
| LOGONLY
| Specifies that the database volumes have already been restored, so the
| RESTORE phase is skipped. Use this option when the database volumes have
| already been restored outside of DB2. If the subsystem is at a tracker site, you
| must specify the LOGONLY option. For more information about using a tracker
| site, see Part 4 (Volume 1) of DB2 Administration Guide.
| By default, RESTORE SYSTEM recovers the data from the database copy pool
| during the RESTORE phase and then applies logs to the point in time at which the
| existing logs were truncated during the LOGAPPLY phase. The RESTORE utility
| never restores logs from the log copy pool.
|
| Instructions for running RESTORE SYSTEM
| To run RESTORE SYSTEM, you must:
| 1. Read “Before running RESTORE SYSTEM” on page 531.
| 2. Prepare the necessary data sets, as described in “Data sets that RESTORE
| SYSTEM uses” on page 531.
| 3. Create JCL statements by either “Using the supplied JCL procedure
| (DSNUPROC)” on page 35 or “Creating the JCL data set yourself by using the
| EXEC statement” on page 38.
| 4. Prepare a utility control statement that specifies the options for the tasks that
| you want to perform, as described in “Instructions for specific tasks.”
| 5. Check “Concurrency and compatibility for RESTORE SYSTEM” on page 532 if
| you want to run other jobs concurrently on the same target objects.
| 6. Plan for restarting RESTORE SYSTEM if the job doesn’t complete, as described
| in “Terminating and restarting RESTORE SYSTEM” on page 532.
| 7. Run RESTORE SYSTEM by either “Using the supplied JCL procedure
| (DSNUPROC)” on page 35 or “Creating the JCL data set yourself by using the
| EXEC statement” on page 38.
| You can restart RESTORE SYSTEM at the beginning of a phase or at the current
| system checkpoint. A current system checkpoint occurs during the LOGAPPLY
| phase after log records are processed. By default, RESTORE SYSTEM restarts at
| the current system checkpoint.
| When you restart RESTORE SYSTEM for a data sharing group, the member on
| which the restart is issued must be the same member on which the original
| RESTORE SYSTEM was issued.
| For guidance in restarting online utilities, see “Restarting an online utility” on page
| 42.
|
| Concurrency and compatibility for RESTORE SYSTEM
| While RESTORE SYSTEM is running, no other utilities can run.
|
| After running RESTORE SYSTEM
| Complete the following steps after running RESTORE SYSTEM:
| 1. Stop and start each DB2 subsystem or member to remove it from access
| maintenance mode.
| 2. Use the DISPLAY UTIL command to see if any utilities are running. If other
| utilities are running, use the TERM UTIL command to end them.
| 3. Use the RECOVER utility to recover all objects in RECOVER-pending (RECP)
| or REBUILD-pending (RBDP) status, or use the REBUILD INDEX utility to
| rebuild objects. If a CREATE TABLESPACE, CREATE INDEX, or data set
| extension has failed, you can also recover or rebuild any objects in the logical
| page list (LPL).
|
| Sample RESTORE SYSTEM control statements
| RESTORE SYSTEM uses data that is copied by the BACKUP SYSTEM utility. For
| a complete list of all of the steps for system-level point-in-time recovery, see Part 4
| of DB2 Administration Guide.
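As an illustration of the syntax that is shown earlier in this chapter (a sketch only;
the documented samples appear in the referenced material), the control statement for a
standard restore, and the LOGONLY form that you use when the database volumes have
already been restored outside of DB2, look like the following statements:
RESTORE SYSTEM

RESTORE SYSTEM LOGONLY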
The two formats for the RUNSTATS utility are RUNSTATS TABLESPACE and
RUNSTATS INDEX. RUNSTATS TABLESPACE gathers statistics on a table space
and, optionally, on tables, indexes or columns; RUNSTATS INDEX gathers statistics
only on indexes.
| When you run RUNSTATS TABLESPACE, you can use the COLGROUP option to
| collect frequency and cardinality statistics on any column group. You can also
| collect frequency and cardinality statistics on any single column. When you run
| RUNSTATS INDEX, you can collect frequency statistics on the leading column of an
| index and multi-column frequency and cardinality statistics on the leading
| concatenated columns of an index.
Output: RUNSTATS updates the DB2 catalog with table space or index space
statistics, prints a report, or both. See “Reviewing RUNSTATS output” on page 554
for a list of all the catalog tables and columns that are updated by RUNSTATS.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STATS privilege for the database
v DBADM, DBCTRL, or DBMAINT authority for the database
v SYSCTRL or SYSADM authority
An ID with installation SYSOPR authority can also execute the RUNSTATS utility,
but only on a table space in the DSNDB06 database.
To use RUNSTATS with the REPORT YES option, you must have the SELECT
privilege on the reported tables. RUNSTATS does not report values from tables that
the user is not authorized to see.
| To gather statistics on a LOB table space, you must have SYSADM or DBADM
| authority for the LOB table space.
RUNSTATS TABLESPACE { [database-name.]table-space-name [ PART integer ] | LIST listdef-name }
    [ FORCEROLLUP NO | FORCEROLLUP YES ]
    [ TABLE (ALL) [ SAMPLE integer ]                                            (1)
    | TABLE (table-name) column-spec [ SAMPLE integer ] colgroup-spec ]
    [ INDEX ( ALL ) correlation-stats-spec
    | INDEX ( index-name [ PART integer ] correlation-stats-spec ,... ) ]
    [ SHRLEVEL REFERENCE | SHRLEVEL CHANGE ]
    [ REPORT NO | REPORT YES ]
    [ UPDATE ALL | UPDATE ACCESSPATH | UPDATE SPACE | UPDATE NONE ]
    [ HISTORY NONE | HISTORY ALL | HISTORY ACCESSPATH | HISTORY SPACE ]         (2)
    [ SORTDEVT device-type ] [ SORTNUM integer ]

The defaults are FORCEROLLUP NO, SAMPLE 25, SHRLEVEL REFERENCE, REPORT NO, UPDATE ALL, and
HISTORY NONE.
Notes:
1 The TABLE keyword is not valid for a LOB table space.
2 You can change the default HISTORY value by modifying the STATISTICS HISTORY subsystem
parameter. By default, this value is NONE.
column-spec:
    COLUMN (ALL) | COLUMN ( column-name ,... )
colgroup-spec:
    COLGROUP ( column-name ,... ) [ FREQVAL COUNT integer [ MOST | BOTH | LEAST ] ]
correlation-stats-spec:
    [ KEYCARD ] [ FREQVAL NUMCOLS integer COUNT integer [ MOST | BOTH | LEAST ] ]
The defaults are COLUMN (ALL), FREQVAL NUMCOLS 1 COUNT 10, and MOST.
If you specify LIST, you cannot specify the PART option. Instead, use
the PARTLEVEL option on the LISTDEF statement. The TABLESPACE
keyword is required in order to validate the contents of the list.
RUNSTATS TABLESPACE is invoked once for each item in the list.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
database-name
Identifies the name of the database to which the table space belongs.
The default is DSNDB04.
table-space-name
Identifies the name of the table space on which statistics are to be
gathered.
If the table space that is specified by the TABLESPACE keyword is a LOB table
space, you can specify only the following additional keywords: SHRLEVEL
REFERENCE or CHANGE, REPORT YES or NO, and UPDATE ALL or NONE.
PART integer
Identifies a table space partition on which statistics are to be collected.
integer is the number of the partition and must be in the range from 1 to the
| number of partitions that are defined for the table space. The maximum is 4096.
You cannot specify PART with LIST.
TABLE
Specifies the table on which column statistics are to be gathered. All tables
must belong to the table space that is specified in the TABLESPACE option.
You cannot specify the TABLE option for a LOB table space.
(ALL) Specifies that column statistics are to be gathered on all columns of all
tables in the table space. The default is ALL.
(table-name)
Specifies the tables on which column statistics are to be gathered. If
you omit the qualifier, RUNSTATS uses the user identifier for the utility
job as the qualifier. Enclose the table name in quotation marks if the
name contains a blank.
If you specify more than one table, you must repeat the TABLE option.
SAMPLE integer
Indicates the percentage of rows that RUNSTATS is to sample when collecting
statistics on non-indexed columns. You can specify any value from 1 through
100. The default is 25.
You cannot specify SAMPLE for LOB table spaces.
COLUMN
Specifies columns on which column statistics are to be gathered.
You can specify this option only if you specify a particular table on which
statistics are to be gathered. (Use the TABLE (table-name) option to specify a
particular table.) If you specify particular tables and do not specify the COLUMN
option, RUNSTATS uses the default, COLUMN(ALL). If you do not specify a
particular table when using the TABLE option, you cannot specify the COLUMN
option; however, in this case, COLUMN(ALL) is assumed.
(ALL)
Specifies that statistics are to be gathered on all columns in the table.
The COLUMN (ALL) option is not allowed for LOB table spaces.
(column-name, ...)
| Specifies the columns on which statistics are to be gathered. You can
| specify a list of column names. If you specify more than one column,
| separate each name with a comma.
The more columns that you specify, the longer the job takes to complete.
| COLGROUP (column-name, ...)
| Indicates that the specified set of columns are to be treated as a group. This
| option enables RUNSTATS to collect a cardinality value on the specified column
| group.
| When you specify the COLGROUP keyword, RUNSTATS collects correlation
| statistics for the specified column group. If you want RUNSTATS to also collect
| distribution statistics, specify the FREQVAL option with COLGROUP.
| (column-name, ...) specifies the names of the columns that are part of the
| column group.
| To specify more than one column group, repeat the COLGROUP option.
| FREQVAL
| Indicates, when specified with the COLGROUP option, that frequency statistics
| are also to be gathered for the specified group of columns. (COLGROUP
| indicates that cardinality statistics are to be gathered.) One group of statistics is
| gathered for each column. You must specify COUNT integer with COLGROUP
| FREQVAL.
| COUNT integer
| Indicates the number of frequently occurring values to be collected from the
| specified column group. For example, COUNT 20 means that DB2 collects
| 20 frequently occurring values from the column group. You must specify a
| value for integer; no default value is assumed.
| Be careful when specifying a high value for COUNT. Specifying a value of
| 1000 or more can increase the prepare time for some SQL statements.
| MOST
| Indicates that the utility is to collect the most frequently occurring values for
| the specified set of columns when COLGROUP is specified. The default is
| MOST.
| LEAST
| Indicates that the utility is to collect the least frequently occurring values for
| the specified set of columns when COLGROUP is specified.
| BOTH
| Indicates that the utility is to collect the most and the least frequently
| occurring values for the specified set of columns when COLGROUP is
| specified.
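For example, a control statement like the following collects cardinality statistics on
the column group and the ten most frequently occurring values of that group. This is a
sketch only; the sample table, columns, and count are illustrative.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
  TABLE(DSN8810.EMP)
  COLGROUP(WORKDEPT,JOB) FREQVAL COUNT 10 MOST
  SHRLEVEL CHANGE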
INDEX
Specifies indexes on which statistics are to be gathered. RUNSTATS gathers
column statistics for the first column of the index, and possibly additional index
columns depending on the options that you specify. All the indexes must be
associated with the same table space, which must be the table space that is
specified in the TABLESPACE option.
INDEX can be used on auxiliary tables to gather statistics on an index.
(ALL) Specifies that column statistics are to be gathered for all indexes that
are defined on tables that are contained in the table space. The default
is ALL.
(index-name, ...)
Specifies the indexes for which statistics are to be gathered. You can
specify a list of index names. If you specify more than one index,
separate each name with a comma. Enclose the index name in
quotation marks if the name contains a blank.
PART integer
Identifies an index partition on which statistics are to be collected.
integer is the number of the partition.
KEYCARD
Collects all of the distinct values in all of the 1 to n key column combinations for
the specified indexes. n is the number of columns in the index. For example,
suppose that you have an index defined on three columns: A, B, and C. If you
specify KEYCARD, RUNSTATS collects cardinality statistics for column A,
column set A and B, and column set A, B, and C.
FREQVAL
Controls, when specified with the INDEX option, the collection of frequent-value
statistics. If you specify FREQVAL with INDEX, this keyword must be followed
by the NUMCOLS and COUNT keywords.
| NUMCOLS integer
| Indicates the number of columns in the index for which RUNSTATS is to
| collect frequently occurring values. integer can be a number between 1 and
| the number of indexed columns. If you specify a number greater than the
| number of indexed columns, RUNSTATS uses the number of columns in
| the index.
| For example, suppose that you have an index defined on three columns: A,
| B, and C. If you specify NUMCOLS 1, DB2 collects frequently occurring
| values for column A. If you specify NUMCOLS 2, DB2 collects frequently
| occurring values for the column set A and B. If you specify NUMCOLS 3,
| DB2 collects frequently occurring values for the column set A, B, and C.
| The default is 1, which means that RUNSTATS is to collect frequently
| occurring values on the first key column of the index.
COUNT integer
Indicates the number of frequently occurring values that are to be collected
from the specified key columns. For example, specifying 15 means that
RUNSTATS is to collect 15 frequently occurring values from the specified
key columns. The default is 10.
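For example, a statement like the following collects cardinality statistics on the
key-column combinations of the index and the 15 most frequently occurring values of the
first two key columns. This is a sketch only; the sample table space and index are
illustrative, and the index is assumed to have at least two key columns.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81P
  INDEX(DSN8810.XEMPPROJACT1 KEYCARD FREQVAL NUMCOLS 2 COUNT 15)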
SHRLEVEL
Indicates whether other programs that access the table space while RUNSTATS
is running must use read-only access or can change the table space.
REFERENCE
Allows only read-only access by other programs. The default is
REFERENCE.
CHANGE
Allows other programs to change the table space or index. With
SHRLEVEL CHANGE, RUNSTATS might collect statistics on
uncommitted data.
REPORT
Specifies whether RUNSTATS is to generate a set of messages that report the
collected statistics.
NO
Indicates that RUNSTATS is not to generate the set of messages. The
default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT.
The messages that RUNSTATS generates are dependent on the
combination of keywords in the utility control statement. However, these
messages are not dependent on the value of the UPDATE option. REPORT
YES always generates a report of space and access path statistics.
| UPDATE
| Indicates which collected statistics are to be inserted into the catalog tables.
| ALL Indicates that all collected statistics are to be updated in the catalog.
| The default is ALL.
| ACCESSPATH
| Indicates that DB2 is to update the catalog with only those statistics that
| are used for access path selection.
| SPACE
| Indicates that DB2 is to update the catalog with only space-related
| statistics.
| NONE Indicates that no catalog tables are to be updated with the collected
| statistics.
| Executing RUNSTATS always invalidates the dynamic statement cache;
| however, when you specify UPDATE NONE REPORT NO, RUNSTATS
| invalidates statements in the dynamic statement cache without
| collecting statistics, updating catalog tables, or generating reports.
| HISTORY
| Indicates which statistics are to be recorded in the catalog history tables. The
| value that you specify for HISTORY does not depend on the value that you
| specify for UPDATE.
| The default is the value of the STATISTICS HISTORY subsystem parameter on
| the DSNTIPO installation panel. By default, this parameter value is NONE.
| ALL Indicates that all collected statistics are to be updated in the catalog
| history tables.
| ACCESSPATH
| Indicates that DB2 is to update the catalog history tables with only
| those statistics that are used for access path selection.
| SPACE
| Indicates that DB2 is to update the catalog history tables with only
| space-related statistics.
| NONE Indicates that no catalog history tables are to be updated with the
| collected statistics.
| SORTDEVT
| Specifies the device type that DFSORT uses to dynamically allocate the sort
| work data sets that are required.
| device-type
| Specifies any device type that is acceptable for the DYNALLOC parameter
| of the SORT or OPTIONS option of DFSORT. For information about valid
| device types, see DFSORT Application Programming Guide.
| If you omit SORTDEVT and a sort is required, you must provide the DD
| statements that the SORT program requires for the temporary data sets.
If the value for STATISTICS ROLLUP on panel DSNTIPO is NO and data is not
available for all partitions, DB2 issues message DSNU623I.
RUNSTATS INDEX
    { LIST listdef-name
    | ( index-name [ PART integer ] correlation-stats-spec ,... )
    | (ALL) TABLESPACE [database-name.]tablespace-name correlation-stats-spec }
    [ SHRLEVEL REFERENCE | SHRLEVEL CHANGE ]
    [ REPORT NO | REPORT YES ]
    [ UPDATE ALL | UPDATE ACCESSPATH | UPDATE SPACE | UPDATE NONE ]
    [ HISTORY NONE | HISTORY ALL | HISTORY ACCESSPATH | HISTORY SPACE ]         (1)
    [ FORCEROLLUP NO | FORCEROLLUP YES ]
    [ SORTDEVT device-type ] [ SORTNUM integer ]

The defaults are SHRLEVEL REFERENCE, REPORT NO, UPDATE ALL, HISTORY NONE, and
FORCEROLLUP NO.
Notes:
1 You can change the default HISTORY value by modifying the STATISTICS HISTORY subsystem
parameter. By default, this value is NONE.
correlation-stats-spec:
    [ KEYCARD ] [ FREQVAL NUMCOLS integer COUNT integer [ MOST | BOTH | LEAST ] ]
The default for correlation-stats-spec is FREQVAL NUMCOLS 1 COUNT 10 MOST.
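For example, a statement like the following collects statistics on all indexes over the
tables in the table space and allows concurrent updates while the statistics are
gathered. This is a sketch only; the sample table space name is illustrative.
RUNSTATS INDEX (ALL) TABLESPACE DSN8D81A.DSN8S81E
  SHRLEVEL CHANGE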
INDEX
Specifies the indexes on which statistics are to be gathered. Column statistics
are gathered on the first column of the index. All of the indexes must be
associated with the same table space.
LIST listdef-name
Specifies the name of a previously defined LISTDEF list name. You can specify
one LIST keyword for each RUNSTATS control statement. When you specify
LIST with RUNSTATS INDEX, the list must contain only index spaces. Do not
specify LIST with keywords from the INDEX...(index-name) specification.
RUNSTATS groups indexes by their related table space. RUNSTATS INDEX is
invoked once per table space. The INDEX keyword is required in order to
validate the contents of the LIST.
For more information about LISTDEF specifications, see Chapter 15,
“LISTDEF,” on page 163.
(index-name, ...)
Specifies the indexes on which statistics are to be gathered. You can specify a
list of index names. If you specify more than one index, separate each name
with a comma. Enclose the index name in quotation marks if the name contains
a blank.
PART integer
Identifies the index partition on which statistics are to be collected.
integer is the number of the partition.
(ALL)
Specifies that statistics are to be gathered on all indexes that are defined on all
tables in the specified table space.
TABLESPACE
Identifies the table space and, optionally, the database to which it belongs, for
which index statistics are to be gathered.
database-name
The name of the database to which the table space belongs. The
default is DSNDB04.
tablespace-name
The name of the table space for which index statistics are to be
gathered.
KEYCARD
Collects all of the distinct values in all of the 1 to n key column combinations for
the specified indexes. n is the number of columns in the index. For example,
suppose that you have an index defined on three columns: A, B, and C. If you
specify KEYCARD, RUNSTATS collects cardinality statistics for column A,
column set A and B, and column set A, B, and C.
FREQVAL
Controls, when specified with the INDEX option, the collection of frequent-value
statistics. If you specify FREQVAL with INDEX, this keyword must be followed
by the NUMCOLS and COUNT keywords.
| NUMCOLS integer
| Indicates the number of columns in the index for which RUNSTATS is to
| collect frequently occurring values. integer can be a number between 1 and
| the number of indexed columns. If you specify a number greater than the
| number of indexed columns, RUNSTATS uses the number of columns in
| the index.
| For example, suppose that you have an index defined on three columns: A,
| B, and C. If you specify NUMCOLS 1, DB2 collects frequently occurring
| values for column A. If you specify NUMCOLS 2, DB2 collects frequently
| occurring values for the column set A and B. If you specify NUMCOLS 3,
| DB2 collects frequently occurring values for the column set A, B, and C.
| The default is 1, which means that RUNSTATS is to collect frequently
| occurring values on the first key column of the index.
COUNT integer
Indicates the number of frequently occurring values that are to be collected
from the specified key columns. For example, specifying 15 means that
RUNSTATS is to collect 15 frequently occurring values from the specified
key columns. The default is 10.
SHRLEVEL
Indicates whether other programs that access the table space while RUNSTATS
is running must use read-only access or can change the table space.
REFERENCE
Allows only read-only access by other programs. The default is
REFERENCE.
CHANGE
Allows other programs to change the table space or index. With
SHRLEVEL CHANGE, RUNSTATS might collect statistics on
uncommitted data.
REPORT
Specifies whether RUNSTATS is to generate a set of messages that report the
collected statistics.
NO
Indicates that RUNSTATS is not to generate the set of messages. The
default is NO.
YES
Indicates that the set of messages is to be sent as output to SYSPRINT.
The messages that RUNSTATS generates are dependent on the
combination of keywords in the utility control statement. However, these
messages are not dependent on the value of the UPDATE option. REPORT
YES always generates a report of space and access path statistics.
| UPDATE
| Indicates which collected statistics are to be inserted into the catalog tables.
| ALL Indicates that all collected statistics are to be updated in the catalog.
| The default is ALL.
| ACCESSPATH
| Indicates that DB2 is to update the catalog with only those statistics that
| are used for access path selection.
| SPACE
| Indicates that DB2 is to update the catalog with only space-related
| statistics.
| NONE Indicates that no catalog tables are to be updated with the collected
| statistics.
| If you omit SORTDEVT and a sort is required, you must provide the DD
| statements that the SORT program requires for the temporary data sets.
If the value for STATISTICS ROLLUP on panel DSNTIPO is NO and data is not
available for all partitions, DB2 issues message DSNU623I.
| Notes:
| 1. Required when collecting distribution statistics for column groups.
| 2. Required when collecting statistics on at least one data-partitioned secondary index.
| 3. If the DYNALLOC parm of the SORT program is not turned on, you need to allocate the
| data set. Otherwise, DFSORT dynamically allocates the temporary data set.
The following objects are named in the utility control statement and do not require
DD statements in the JCL:
Table space or index
Object that is to be scanned.
| Calculating the size of the sort work data sets: Depending on the type of
| statistics that RUNSTATS collects, the utility uses the ST01WKnn data sets, the
| STATWK01 data set, both types of data sets, or neither.
| The ST01WKnn data sets are used when collecting statistics on at least one
| data-partitioned secondary index. To calculate the approximate size (in bytes) of the
| ST01WKnn data set, use the following formula:
| The STATWK01 data set is used when collecting distribution statistics. To calculate
| the approximate size (in bytes) of the STATWK01 data set, use the following
| formula:
option descriptions. See “Sample RUNSTATS control statements” on page 565 for
examples of RUNSTATS control statements.
| You should recollect frequency statistics when either of the following situations is
| true:
| v The distribution of the data changes
| v The values over which the data is distributed change
| One common situation in which old statistics can affect query performance is when
| a table has columns that contain data or ranges that are constantly changing (for
| example, dates and timestamps). These types of columns can result in old values in
| the HIGH2KEY and LOW2KEY columns in the catalog. You should periodically
| collect column statistics on these changing columns so that the values in
| HIGH2KEY and LOW2KEY accurately reflect the true range of data, and range
| predicates can obtain accurate filter factors.
| If you need to control the size or placement of the data sets, use the JCL
| statements to allocate STATWK01. To estimate the size of this sort work data set,
| use the formula for STATWK01 in “Data sets that RUNSTATS uses” on page 548.
| To let the work data set be dynamically allocated, remove the STATWK01 DD
| statements from the job and allocate the UTPRINT statement to SYSOUT. If you let
| the SORT program dynamically allocate this data set, you must specify the
| SORTDEVT option in the RUNSTATS control statement.
Figure 102. Example RUNSTATS output from a job on a catalog table space
DB2 uses the collected statistics on the catalog to determine the access path for
user queries of the catalog.
Improving performance
You can improve the performance of RUNSTATS on table spaces that are defined
with the LARGE option by specifying the SAMPLE option, which reduces the
number of rows that are scanned for statistics.
jobs is roughly equivalent to the processor time for running the single RUNSTATS
job. However, the total elapsed time for the concurrent jobs can be significantly less
than when you run RUNSTATS on an entire table space or index.
| Run RUNSTATS on only the columns or column groups that might be used as
| search conditions in a WHERE clause of queries. Use the COLGROUP option to
| identify the column groups. Collecting additional statistics on groups of columns that
| are used as predicates improves the accuracy of the filter factor estimate and leads
| to improved query performance. Collecting statistics on all columns of a table is
| costly and might not be necessary.
In some cases, you can avoid running RUNSTATS by specifying the STATISTICS
keyword in LOAD, REBUILD INDEX, or REORG utility statements. When you
specify STATISTICS in one of these utility statements, DB2 updates the catalog with
table space or index space statistics for the objects on which the utility is run.
| However, you cannot collect column group statistics with the STATISTICS keyword.
You can collect column group statistics only by running the RUNSTATS utility. If you
restart a LOAD or REBUILD INDEX job that uses the STATISTICS keyword, DB2
does not collect inline statistics. For these cases, you need to run the RUNSTATS
utility after the restarted utility job completes. For information about restarting a
REORG job that uses the STATISTICS keyword, see page “Restarting REORG
| STATISTICS” on page 464.
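For example, a REORG statement like the following collects inline table space and index
statistics while it reorganizes the data, which often makes a separate RUNSTATS job
unnecessary. This is a sketch only; the sample table space and the keyword combination
shown are illustrative.
REORG TABLESPACE DSN8D81A.DSN8S81E
  STATISTICS TABLE(ALL) INDEX(ALL)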
| If you need to control the size or placement of the data sets, use the JCL
| statements to allocate ST01WKnn. To estimate the size of this sort work data set,
| use the formula for ST01WKnn in “Data sets that RUNSTATS uses” on page 548.
| To let the sort work data sets be dynamically allocated, remove the ST01WKnn DD
| statements from the job and allocate the UTPRINT statement to SYSOUT. If you let
| the SORT program dynamically allocate these data sets, you must specify the
| SORTDEVT option in the RUNSTATS control statement to specify the device type for
| the temporary data sets. Optionally, you can also use the SORTNUM option to
| specify the number of temporary data sets to use.
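For example, a DD statement like the following allocates one sort work data set. This is
a sketch only; the unit name and space values are placeholders that you adjust by using
the size formula that is referenced above, and you can code additional ST01WKnn DD
statements as needed.
//ST01WK01 DD UNIT=SYSDA,SPACE=(CYL,(10,10))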
| parameter that you specify with HISTORY. The HISTORY option does not update
| the main catalog statistics that DB2 uses to select access paths. You can use the
| HISTORY option to monitor how statistics change over time without updating the
| main catalog statistics that DB2 uses to select access paths.
You can restart a RUNSTATS utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 42.
Table 99 shows which claim classes RUNSTATS claims and drains and any
restrictive state that the utility sets on the target object.
Table 99. Claim classes of RUNSTATS operations
                            RUNSTATS              RUNSTATS              RUNSTATS              RUNSTATS
                            TABLESPACE            TABLESPACE            INDEX                 INDEX
                            SHRLEVEL              SHRLEVEL              SHRLEVEL              SHRLEVEL
Target                      REFERENCE             CHANGE                REFERENCE             CHANGE
Table space or partition    DW/UTRO               CR/UTRW (1)           None                  None
Index or partition          None                  None                  DW/UTRO               CR/UTRW
Legend:
v DW - Drain the write claim class - concurrent access for SQL readers.
v CR - Claim the read claim class.
v UTRO - Utility restrictive state - read-only access allowed.
v UTRW - Utility restrictive state - read-write access allowed.
v None - Object is not affected by this utility.
Notes:
1. If the target object is a segmented table space, SHRLEVEL CHANGE does not allow you
to concurrently execute an SQL searched DELETE without the WHERE clause.
Table 100 shows which utilities can run concurrently with RUNSTATS on the same
target object. The target object can be a table space, an index space, or a partition
of a table space or index space. If compatibility depends on particular options of a
utility, that information is also shown in the table.
Table 100. Compatibility of RUNSTATS with other utilities
                            RUNSTATS              RUNSTATS              RUNSTATS              RUNSTATS
                            TABLESPACE            TABLESPACE            INDEX                 INDEX
                            SHRLEVEL              SHRLEVEL              SHRLEVEL              SHRLEVEL
Utility                     REFERENCE             CHANGE                REFERENCE             CHANGE
CHECK DATA DELETE NO        Yes                   Yes                   Yes                   Yes
RUNSTATS sets the following columns to -1 for table spaces that are defined as
LARGE:
v CARD in SYSTABLES
v CARD in SYSINDEXPART
v FAROFFPOS in SYSINDEXPART
v NEAROFFPOS in SYSINDEXPART
v FIRSTKEYCARD in SYSINDEXES
v FULLKEYCARD in SYSINDEXES
Index statistics and table space statistics: Table 101 shows the catalog tables
that RUNSTATS updates depending on the value of the UPDATE option, the value
of the HISTORY option, and the source of the statistics (table space, partition, index
or LOB table space).
Table 101. Catalog tables that RUNSTATS updates
Keyword      UPDATE option           HISTORY option       Catalog tables that RUNSTATS updates
TABLESPACE   UPDATE ALL              HISTORY ALL (4)      SYSTABLESPACE, SYSTABLEPART (1), SYSTABLES (1),
                                                          SYSTABSTATS (1,2), SYSLOBSTATS (3)
TABLESPACE   UPDATE ALL              HISTORY ACCESSPATH   SYSTABLESPACE, SYSTABLES (1), SYSTABSTATS (1,2)
TABLESPACE   UPDATE ALL              HISTORY SPACE        SYSTABLEPART (1), SYSLOBSTATS (3)
TABLESPACE   UPDATE ACCESSPATH (2)   HISTORY ALL (4)      SYSTABLESPACE, SYSTABLES, SYSTABSTATS (2)
TABLESPACE   UPDATE ACCESSPATH (2)   HISTORY ACCESSPATH   SYSTABLESPACE, SYSTABLES, SYSTABSTATS (2)
TABLESPACE   UPDATE ACCESSPATH (2)   HISTORY SPACE        none
TABLESPACE   UPDATE SPACE (2)        HISTORY ALL (4)      SYSTABLEPART, SYSLOBSTATS, SYSTABLES
TABLESPACE   UPDATE SPACE (2)        HISTORY ACCESSPATH   none
TABLESPACE   UPDATE SPACE (2)        HISTORY SPACE        SYSTABLEPART, SYSLOBSTATS, SYSTABLES
TABLE        UPDATE ALL              HISTORY ALL (4)      SYSCOLUMNS, SYSCOLSTATS (2)
TABLE        UPDATE ALL              HISTORY ACCESSPATH   SYSCOLUMNS, SYSCOLSTATS (2)
TABLE        UPDATE ALL              HISTORY SPACE        none
TABLE        UPDATE ACCESSPATH       HISTORY ALL (4)      SYSCOLUMNS, SYSCOLSTATS (2)
TABLE        UPDATE ACCESSPATH       HISTORY ACCESSPATH   SYSCOLUMNS, SYSCOLSTATS (2)
TABLE        UPDATE ACCESSPATH       HISTORY SPACE        none
INDEX        UPDATE ALL              HISTORY ALL (4)      SYSCOLUMNS, SYSCOLDIST, SYSCOLDISTSTATS (2),
                                                          SYSCOLSTATS (2), SYSINDEXES, SYSINDEXPART,
                                                          SYSINDEXSTATS (2)
Notes:
1. Not applicable if the specified table space is a LOB table space.
2. Only updated for partitioned objects. When you run RUNSTATS against single partitions of an object, RUNSTATS
uses the partition-level statistics to update the aggregate statistics for the entire object. These partition-level
statistics are contained in the following catalog tables:
v SYSCOLSTATS
v SYSCOLDISTSTATS
v SYSTABSTATS
v SYSINDEXSTATS
3. Applicable only when the specified table space is a LOB table space.
4. When HISTORY NONE is specified, none of the catalog history tables are updated.
5. Only the SPACEF and STATSTIME columns are updated.
These tables do not describe information about LOB columns because DB2 does
not use those statistics for access path selection. For information about what values
in these columns indicate for LOBs, see Appendix D of DB2 SQL Reference.
A value in the “Use” column indicates whether information about the DB2 catalog
column is General-use Programming Interface and Associated Guidance Information
(G) or Product-sensitive Programming Interface and Associated Guidance
Information (S), as defined in “Programming interface information” on page 882.
Table 102 lists the columns in SYSTABLES that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 102. SYSTABLES catalog columns that DB2 uses to select access paths
Column name   Use  Column description
CARDF         S    Total number of rows in the table.
NPAGES        S    Total number of pages on which rows of this table are included.
NPAGESF       S    Total number of pages that are used by the table.
PCTROWCOMP    S    Percentage of rows compressed within the total number of active
                   rows in the table.
Table 103 lists the columns in SYSTABSTATS that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 103. SYSTABSTATS catalog columns that DB2 uses to select access paths
Column name   Use  Column description
CARDF         S    Total number of rows in the partition.
NPAGES        S    Total number of pages on which rows of this partition are
                   included.
Table 104 lists the columns in SYSCOLUMNS that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 104. SYSCOLUMNS catalog columns that DB2 uses to select access paths
Column name   Use  Column description
COLCARDF      S    Estimated number of distinct values for the column. For an
                   indicator column, this value is the number of LOBs that are not
                   null and whose lengths are greater than zero. The value is -1 if
                   statistics have not been gathered. The value is -2 for columns
                   of an auxiliary table.
HIGH2KEY      S    Second highest value of the column. Blank if statistics have not
                   been gathered or if the column is an indicator column or a
                   column of an auxiliary table. If the column has a non-character
                   data type, the data might not be printable. This column can be
                   updated.
LOW2KEY       S    Second lowest value of the column. Blank if statistics have not
                   been gathered or if the column is an indicator column or a
                   column of an auxiliary table. If the column has a non-character
                   data type, the data might not be printable. This column can be
                   updated.
Table 105 lists the columns in SYSCOLDIST that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 105. SYSCOLDIST catalog columns that DB2 uses to select access paths
Column name    Use  Column description
CARDF          S    The number of distinct values for the column group. This number
                    is valid only for cardinality key column statistics. (A C in
                    the TYPE column indicates that cardinality statistics were
                    gathered.)
COLGROUPCOLNO  S    Identifies the set of columns that are associated with the key
                    column statistics.
COLVALUE       S    Actual index column value that is being counted for
                    distribution index statistics.
FREQUENCYF     S    Percentage of rows, multiplied by 100, that contain the values
                    that are specified in COLVALUE.
NUMCOLUMNS     G    The number of columns that are associated with the key column
                    statistics.
Table 106 lists the columns in SYSTABLESPACE that DB2 uses to select access
paths. These columns are updated by RUNSTATS with the UPDATE ACCESSPATH
or UPDATE ALL options.
Table 106. SYSTABLESPACE catalog columns that DB2 uses to select access paths
Column name          Use  Column description
NACTIVE or NACTIVEF  S    Number of active pages in the table space; shows the
                          number of pages that are accessed if a record cursor is
                          used to scan the entire file. The value is -1 if
                          statistics have not been gathered.
Table 107 lists the columns in SYSINDEXES that DB2 uses to select access paths.
These columns are updated by RUNSTATS with the UPDATE ACCESSPATH or
UPDATE ALL options.
Table 107. SYSINDEXES catalog columns that DB2 uses to select access paths
Column name    Use  Column description
CLUSTERRATIOF  S    A number between 0 and 1 that, when multiplied by 100, gives
                    the percentage of rows that are in clustering order. For
                    example, a value of 1 indicates that all rows are in clustering
                    order. A value of .87825 indicates that 87.825% of the rows are
                    in clustering order.
CLUSTERING     G    Indication of whether CLUSTER was specified when the index was
                    created.
FIRSTKEYCARDF  S    Number of distinct values of the first key column.
FULLKEYCARDF   S    Number of distinct values of the full key.
NLEAF          S    Number of leaf pages in the index.
NLEVELS        S    Number of levels in the index tree.
A value in the ″Use″ column indicates whether information about the DB2 catalog
column is General-use Programming Interface and Associated Guidance Information
(G) or Product-sensitive Programming Interface and Associated Guidance
Information (S), as defined in “Programming interface information” on page 882.
| Table 108 lists the columns in SYSTABLESPACE that are updated by RUNSTATS
| with the UPDATE SPACE or UPDATE ALL options
| Table 108. SYSTABLESPACE catalog columns that are updated by RUNSTATS with the
| UPDATE SPACE or UPDATE ALL options.
| SYSTABLESPACE
| Column name Column description Use
| AVGROWLEN Average length of rows for the tables in the table space. G
|
| Table 109 lists the columns in SYSTABLES that are updated by RUNSTATS with
| the UPDATE SPACE or UPDATE ALL options.
| Table 109. SYSTABLES catalog columns that are updated by RUNSTATS with the UPDATE
| SPACE or UPDATE ALL options
| SYSTABLES Column
| name Column description Use
| AVGROWLEN Average length of rows for the tables in the table space. G
|
| Table 110 lists the columns in SYSTABLES_HIST that are updated by RUNSTATS
| with the UPDATE SPACE or UPDATE ALL options.
Table 110. SYSTABLES_HIST catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
SYSTABLES_HIST
Column name Column description Use
AVGROWLEN Average length of rows for the tables in the table space. G
Table 111 lists the columns in SYSTABLEPART that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options.
Table 111. SYSTABLEPART catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
Column name   Use  Column description
| AVGROWLEN   G    Average length of rows for the tables in the table space.
CARDF         G    Total number of rows in the table space or partition, or number
                   of LOBs in the table space if the table space is a LOB table
                   space. The value is -1 if statistics have not been gathered.
DSNUM         G    Number of data sets.
EXTENTS       G    Number of data set extents.
NEARINDREF    S    Number of rows that are relocated near their original page.
PAGESAVE      S    Percentage of pages that are saved in the table space or
                   partition as a result of using data compression. For example, a
                   value of 25 indicates a savings of 25%, so that the required
                   pages are only 75% of what would be required without data
                   compression. The value is 0 if no savings from using data
                   compression are likely, or if statistics have not been gathered.
                   The value can be negative if using data compression causes an
                   increase in the number of pages in the data set.
| Table 112 on page 562 lists the columns in SYSTABLEPART_HIST that are updated
| by RUNSTATS with the UPDATE SPACE or UPDATE ALL options.
Table 112. SYSTABLEPART_HIST catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
Column name   Use  Column description
AVGROWLEN     G    Average length of rows for the tables in the table space.
| Table 113 lists the columns in SYSINDEXES that are updated by RUNSTATS with
| the UPDATE SPACE or UPDATE ALL options.
| Table 113. SYSINDEXES catalog columns that are updated by RUNSTATS with the UPDATE
| SPACE or UPDATE ALL options
| Column name   Use  Column description
| AVGKEYLEN     G    Average length of keys within the index. The value is -1 if
|                    statistics have not been gathered.
|
| Table 114 lists the columns in SYSINDEXES_HIST that are updated by RUNSTATS
| with the UPDATE SPACE or UPDATE ALL options.
| Table 114. SYSINDEXES_HIST catalog columns that are updated by RUNSTATS with the
| UPDATE SPACE or UPDATE ALL options
| Column name   Use  Column description
| AVGKEYLEN     G    Average length of keys within the index. The value is -1 if
|                    statistics have not been gathered.
|
Table 115 lists the columns in SYSINDEXPART that are updated by RUNSTATS
with the UPDATE SPACE or UPDATE ALL options.
Table 115. SYSINDEXPART catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
Column name   Use  Column description
| AVGKEYLEN   G    Average length of keys within the index. The value is -1 if
|                  statistics have not been gathered.
CARDF         S    Number of rows that the index or partition refers to.
DSNUM         G    Number of data sets.
EXTENTS       G    Number of data set extents.
FAROFFPOSF    S    Number of times that accessing a different, “far-off” page would
                   be necessary when accessing all the data records in index order.
NEAROFFPOSF   S    Number of times that accessing a different, “near-off” page
                   would be necessary when accessing all the data records in index
                   order.
Table 117 lists the columns in SYSLOBSTATS that are updated by RUNSTATS with
the UPDATE SPACE or UPDATE ALL options.
Table 117. SYSLOBSTATS catalog columns that are updated by RUNSTATS with the
UPDATE SPACE or UPDATE ALL options
Column name   Use  Column description
FREESPACE     S    The number of kilobytes of available space in the LOB table
                   space, up to the highest used RBA.
ORGRATIO      S    The ratio of organization in the LOB table space. A value of 1
                   indicates perfect organization of the LOB table space. The
                   greater the value exceeds 1, the more disorganized the LOB table
                   space is.
Example 5: Updating all statistics for a table space. The following control
statement specifies that RUNSTATS is to update all catalog statistics (table space,
tables, columns, and indexes) for table space DSN8D81P.DSN8S81C.
RUNSTATS TABLESPACE(DSN8D81P.DSN8S81C) TABLE INDEX
Example 6: Updating statistics that are used for access path selection and
generating a report. The following control statement specifies that RUNSTATS is
to update the catalog with only the statistics that are collected for access path
selection. The utility is to report all statistics for the table space and route the report
to SYSPRINT.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
REPORT YES
UPDATE ACCESSPATH
Example 7: Updating all statistics and generating a report. The following control
statement specifies that RUNSTATS is to update the catalog with all statistics
(access path and space) for table space DSN8D81A.DSN8S81E. The utility is also
to report the collected statistics and route the report to SYSPRINT.
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
REPORT YES
UPDATE ALL
Example 10: Updating catalog and history tables and reporting all statistics.
The following control statement specifies that RUNSTATS is to update the catalog
tables and history catalog tables with all statistics for table space
The KEYCARD option indicates that the utility is to collect cardinality statistics for
column NP1, for column set NP1 and NP2, for column set NP1, NP2, and NP3, and for
column set NP1, NP2, NP3, and NP4. The FREQVAL option and its associated
parameters indicate that RUNSTATS is also to collect the 5 most frequently
occurring values on column NP1 (the first key column of the index), and the 10
most frequently occurring values on the column set NP1 and NP2 (the first two key
columns of the index). The utility is to report the collected statistics and route the
statistics to SYSPRINT.
RUNSTATS INDEX (SYSADM.IXNPI)
KEYCARD
FREQVAL NUMCOLS 1 COUNT 5
FREQVAL NUMCOLS 2 COUNT 10
REPORT YES
| Example 16: Updating statistics for an index and retrieving the most and least
| frequently occurring values. The following control statement specifies that
| RUNSTATS is to collect the 10 most frequently occurring values and the 10 least
| frequently occurring values for the first key column of index ADMF001.IXMA0101.
| The KEYCARD option indicates that the utility is also to collect all the distinct
| values in all the key column combinations. A set of messages is sent to SYSPRINT
| and all collected statistics are updated in the catalog.
| RUNSTATS INDEX(ADMF001.IXMA0101)
| KEYCARD
| FREQVAL NUMCOLS 1 COUNT 10 BOTH
| REPORT YES UPDATE ALL
Output: The output from STOSPACE consists of new values in a number of catalog
tables. See “Reviewing STOSPACE output” on page 572 for a list of columns and
tables that STOSPACE updates.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v STOSPACE privilege
v SYSCTRL or SYSADM authority
Syntax diagram
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
STOGROUP
Identifies the storage groups that are to be processed.
(stogroup-name, ...)
Specifies the name of a storage group. You can use a list of storage group names.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Storage group
Object that is to be reported.
When DB2 storage groups are used in the creation of table spaces and indexes,
DB2 defines the data sets for them. The STOSPACE utility permits a site to monitor
the disk space that is allocated for the storage group.
STOSPACE does not accumulate information for more than one storage group. If a
partitioned table space or index space has partitions in more than one storage
group, the information in the catalog about that space comes from only the group
for which STOSPACE was run.
When you run the STOSPACE utility, the SPACEF column of the catalog represents
the high-allocated RBA of the VSAM linear data set. Use the value in the SPACEF
column to project space requirements for table spaces, table space partitions, index
spaces, and index space partitions over time. Use the output from the Access
Method Services LISTCAT command to determine which table spaces and index
spaces have allocated secondary extents. When you find these, increase the
primary quantity value for the data set, and run the REORG utility.
For information about space utilization in the DSN8S81E table space in the
DSN8D81A database, first run the STOSPACE utility, and then execute the
following SQL statement:
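A query along the following lines, which reads the space value that STOSPACE records
in SYSIBM.SYSTABLESPACE, illustrates the idea (the exact statement can differ; the
names here are those of the sample objects):
SELECT SPACE
  FROM SYSIBM.SYSTABLESPACE
  WHERE NAME = 'DSN8S81E'
  AND DBNAME = 'DSN8D81A';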
Alternatively, you can use TSO to look at data set and pack descriptions.
You can restart a STOSPACE utility job, but it starts from the beginning again. For
guidance in restarting online utilities, see “Restarting an online utility” on page 42.
STOSPACE can run concurrently with any utility on the same target object.
However, because STOSPACE updates the catalog, concurrent STOSPACE utility
jobs or other concurrent applications that update the catalog might cause timeouts
and deadlocks.
4. If the value is too large to fit in the SPACE column, the SPACEF column is updated.
| Example 2: Specifying a storage group name that contains spaces. If the name
| of the storage group that you want STOSPACE to process contains spaces, enclose
| the entire storage group name in single quotation marks. Parentheses are optional.
| The following statements are correct ways to specify a storage group with the name
| THIS IS STOGROUP.1.ONE:
| STOSPACE STOGROUP(’THIS IS STOGROUP.1.ONE’)
|
| STOSPACE STOGROUP ’THIS IS STOGROUP.1.ONE’
Example 3: Updating catalog SPACE columns for all storage groups. The
following control statement specifies that the STOSPACE utility is to update the
| catalog SPACE or SPACEF columns for all storage groups.
STOSPACE STOGROUP *
Example 4: Updating catalog SPACE columns for several storage groups. The
following control statement specifies that the STOSPACE utility is to update the
| catalog SPACE or SPACEF columns for storage groups DSN8G810 and
DSN8G81U.
STOSPACE STOGROUP(DSN8G810, DSN8G81U)
Templates enable you to standardize data set names across the DB2 subsystem
and to easily identify the data set type when you use variables in the data set
name. These variables are listed in “Option descriptions” on page 577.
The TEMPLATE control statement uses the z/OS DYNALLOC macro (SVC 99) to
perform data set allocation. Therefore, the facility is constrained by the limitations of
this macro and by the subset of DYNALLOC that is supported by TEMPLATE. See
z/OS MVS Programming: Assembler Services Guide for more details.
Syntax diagram
| name-expression:
(The railroad diagram is not reproduced here.) A name-expression consists of one or
more qualifier-expressions, optionally followed by a parenthetical-expression. The
entire name-expression represents one character string and cannot contain any blanks.
| qualifier-expression:
A qualifier-expression is a character-expression, a &variable (optionally with
substring notation (start) or (start,length)), or a combination of the two.
Notes:
1 If you use substring notation, the entire DSN operand must be enclosed in single
quotation marks. For example, the DSN operand 'P&PA(4,2).' uses substring notation,
so it is enclosed in single quotation marks.
2 The &PA. variable cannot be used more than once.
| common-options:
(The railroad diagram is not reproduced here.) The common options include MGMTCLAS
name, STORCLAS name, RETPD integer, EXPDL 'date', VOLUMES(volser,...), VOLCNT
integer, UNCNT integer, GDGLIMIT integer (the default is 99), and DISP with three
values: a status of NEW, OLD, SHR, or MOD; a normal-termination disposition of
DELETE, KEEP, CATLG, or UNCATLG; and an abnormal-termination disposition of DELETE,
KEEP, CATLG, or UNCATLG.
disk-options:
The disk options include SPACE(primary,secondary) with CYL, TRK, or MB, and
NBRSECND integer (the default is 10). These options are described below.
tape-options:
The tape options include TRTCH, which is described below.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
TEMPLATE template-name
Defines a data set allocation template and assigns to the template the name
template-name.
Parentheses around the DSN name operand are optional. They are
used in the following DSN specification:
DSN(&DB..&TS..D&DATE.)
character-expression
Specifies the data set name or part of the data set name by using
non-variable alphanumeric or national characters.
&variable. Specifies the data set name or part of the data set name by using
symbolic variables. See Table 120 on page 579, Table 121 on page
579, Table 122 on page 580, and Table 123 on page 580 for a list
of variables that can be used.
Each symbolic variable is substituted with its related value at
execution time to form a specific data set name. When used in a
DSN expression, substitution variables begin with an ampersand
sign (&) and end with a period (.), as in the following example:
DSN &DB..&TS..D&JDATE..COPY&ICTYPE.
| You can also use substring notation for the data set name. This
| notation can help you keep the data set name from exceeding the
| 44 character maximum. If you use substring notation, the entire
| DSN operand must be enclosed in single quotation marks. To
| specify a substring, use the form &variable(start). or
| &variable(start,length).
| start
| Specifies the substring’s starting byte location within the current
| variable base value at the time of execution. start must be an
| integer from 1 to 128.
| length
| Specifies the length of the substring. If you specify start but do
| not specify length, length, by default, is the number of
| characters from the start character to the last character of the
| variable value at the time of execution. For example, given a
| five-digit base value, &PART(4). specifies the fourth and fifth
| digits of the value. length must be an integer that does not
| cause the substring to extend beyond the end of the base
| value. For more examples of variable substring notation, see
| “Sample TEMPLATE control statements” on page 590.
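| For instance, the following sketch (the template name and the constant qualifiers
| are illustrative) uses substring notation to keep only the last two digits of the
| partition number; because substring notation is used, the entire DSN operand is
| enclosed in single quotation marks:
| TEMPLATE PARTTMP DSN 'PROD.&TS..D&JDATE..P&PART(4,2).'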
Table 123 contains a list of DATE and TIME variables and their descriptions.
Table 123. DATE and TIME variables
Variable Description
&DATE. or &DT. YYYYDDD
&TIME. or &TI. HHMMSS
&JDATE. or &JU. YYYYDDD
&YEAR. or &YE. YYYY portion of &DATE.
&MONTH. or &MO. MM
&DAY. or &DA. DD
&JDAY. or &JD. DDD portion of &DATE.
&HOUR. or &HO. HH portion of &TIME.
parenthetical-expression
Specifies part of the data set name by using non-variable
alphanumeric or national characters that are enclosed in
parentheses. For example, the expressions Q1.Q2.Q3(member) and
Q1.Q2.Q3(+1) use valid parenthetical expressions.
Default values for DISP vary, depending on the utility and the data
set that is being allocated. Defaults for restarted utilities also differ
from default values for new utility executions. Default values are
shown in Table 124 on page 583 and Table 125 on page 583.
Table 125. Data dispositions for dynamically allocated data sets on RESTART (continued)
ddname     Disposition by utility
SYSRCPY1   COPY, LOAD, MERGECOPY, REORG TABLESPACE: MOD CATLG CATLG.
           CHECK DATA, CHECK INDEX or CHECK LOB, COPYTOCOPY, REBUILD INDEX,
           REORG INDEX, UNLOAD: Ignored.
SYSRCPY2   COPY, LOAD, MERGECOPY, REORG TABLESPACE: MOD CATLG CATLG.
           CHECK DATA, CHECK INDEX or CHECK LOB, COPYTOCOPY, REBUILD INDEX,
           REORG INDEX, UNLOAD: Ignored.
SYSUT1     CHECK DATA, CHECK INDEX or CHECK LOB, LOAD, REBUILD INDEX,
           REORG TABLESPACE: MOD DELETE CATLG. REORG INDEX: MOD CATLG CATLG.
           COPY, COPYTOCOPY, MERGECOPY, UNLOAD: Ignored.
SORTOUT    CHECK DATA, LOAD, REORG INDEX, REORG TABLESPACE: MOD DELETE CATLG.
           All other utilities: Ignored.
SYSMAP     LOAD: MOD CATLG CATLG. All other utilities: Ignored.
SYSERR     CHECK DATA, LOAD: MOD CATLG CATLG. All other utilities: Ignored.
FILTERDDS  COPY: NEW DELETE DELETE. All other utilities: Ignored.
disk-options
SPACE (primary,secondary)
Specifies the z/OS disk space allocation parameters in the range
from 1 to 16777215. If you specify (primary,secondary) values, these
values are used instead of the DB2-calculated values. When you
specify primary and secondary quantities, you must either
specify both values or omit both values.
Use the MAXPRIME option to set an upper limit on the primary
quantity.
CYL
Specifies that allocation quantities, if present, are to be
expressed in cylinders and that allocation is to occur in
cylinders. If SPACE CYL is specified, without
(primary,secondary), the DB2-calculated quantities are allocated
in cylinders by using 3390 quantities for byte conversion.
The default is CYL.
TRK
Specifies that, in the absence of values for (primary,secondary),
the DB2-calculated quantities are to be allocated in tracks by
using 3390 disk drive quantities for byte conversion.
MB
Specifies that allocation quantities, if present, are to be
expressed in megabytes, and that allocation is to occur in
records. One megabyte is 1 048 576 bytes. If SPACE MB is
specified, without (primary,secondary), the DB2-calculated
quantities are allocated in records by using the average record
length for the data set.
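For illustration, a disk template that overrides the DB2-calculated quantities might
look like the following sketch (the template name, data set name pattern, and
quantities are examples):
TEMPLATE WORKTMP DSN(&DB..&TS..SORTOUT)
         SPACE(100,20) CYL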
tape-options
TRTCH Specifies the track recording technique for magnetic tape drives
that have improved data recording capability.
NONE
Specifies that the TRTCH specification is to be eliminated from
dynamic allocation. The default is NONE.
COMP
Specifies that data is to be written in compacted format.
NOCOMP
Specifies that data is to be written in standard format.
End of tape-options
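For illustration, a tape template that requests compacted output might look like the
following sketch (the template name, unit name, and data set name pattern are
examples):
TEMPLATE TAPETMP DSN(&DB..&TS..D&DATE.)
         UNIT(TAPE) TRTCH COMP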
As an alternative to using JCL to specify the data sets, you can use the TEMPLATE
utility control statement to dynamically allocate utility data sets. Options of the
TEMPLATE utility allow you to specify the following information:
v The data set naming convention
v DFSMS parameters
v Disk or tape allocation parameters
You can specify a template in the SYSIN data set, immediately preceding the utility
control statement that references it, or in one or more TEMPLATE libraries.
A TEMPLATE library is a data set that contains only TEMPLATE utility control
statements. You can specify a TEMPLATE data set DD name by using the
TEMPLATEDD option of the OPTIONS utility control statement. This specification
applies to all subsequent utility control statements until the end of input or until DB2
encounters a new OPTIONS TEMPLATEDD(ddname) specification.
Any template that is defined within SYSIN overrides another template definition of
the same name in a TEMPLATE data set.
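For example, if a TEMPLATE library is allocated to a DD name such as UTTMPL (the DD
name and template name here are illustrative), the utility input might reference it
as follows:
OPTIONS TEMPLATEDD(UTTMPL)
COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(LOCALTMP)
where LOCALTMP is a template that is defined in the library.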
TEMPLATE utility control statements enable you to standardize data set allocation
and the utility control statements that reference those data sets, which reduces the
need to customize and alter utility job streams.
The required TEMPLATE statement might look something like the following
TEMPLATE statement:
TEMPLATE tmp1 DSN(DB2.&TS..D&JDATE..COPY&ICTYPE.&LOCREM.&PRIBAC.)
VOLUMES(vol1,vol2,vol3)
LISTDEF payroll INCLUDE TABLESPACE PAYROLL.*
INCLUDE INDEXSPACE PAYROLL.*IX
EXCLUDE TABLESPACE PAYROLL.TEMP*
EXCLUDE INDEXSPACE PAYROLL.TMPIX*
COPY LIST payroll COPYDDN(tmp1,tmp1) RECOVERYDDN(tmp1,tmp1)
See “Syntax and options of the TEMPLATE control statement” on page 575 for
details.
DB2 usually estimates the size of a data set based on the size of other existing
data sets; however, if any of the required data sets are on tape, DB2 is unable to
estimate the size. When DB2 is able to calculate size, it calculates the maximum
size. This action can result in overly large data sets. DB2 always allocates data set
size with the RLSE (release) option so that unused space is released on
de-allocation. However, in some cases, the calculated size of required data sets is
too large for the DYNALLOC interface to handle. In this case, DB2 issues error
message DSNU1034I, and you must allocate the data set with a DD statement. If the
object is part of a LISTDEF list, you might need to remove it from the list and
process it individually.
Database administrators can check utility control statements without executing them
by using the PREVIEW function. In PREVIEW mode, DB2 expands all TEMPLATE
data set names in the SYSIN DD, in addition to any data set names from the
TEMPLATE DD that are referenced on a utility control statement. DB2 then prints
the information to the SYSPRINT data set and halts execution. You can specify
PREVIEW in one of two ways: as a JCL PARM or on the OPTIONS
PREVIEW utility control statement.
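For illustration, a job might preview only the template definition and then run the
utility that references it, similar to the following sketch (the template name is an
example):
OPTIONS PREVIEW
TEMPLATE CHKTMP DSN(&DB..&TS..D&DATE.)
OPTIONS OFF
COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(CHKTMP)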
If you omit the SPACE option quantities, the data set space estimation formulas
that are shown in the ″Data sets that the utility uses″ sections for each online
utility are used as default values for disk data sets.
| v REORG TABLESPACE
You can restart a TEMPLATE utility job, but it starts from the beginning again. If you
are restarting this utility as part of a larger job in which TEMPLATE completed
successfully, but a later utility failed, do not change the TEMPLATE utility control
statement, if possible. If you must change the TEMPLATE utility control statement,
use caution; any changes can cause the restart processing to fail. For example, if
you change the template name of a temporary work data set that was opened in an
earlier phase and closed but is to be used later, the job fails. For guidance in
restarting online utilities, see “Restarting an online utility” on page 42.
Example 2: Using variable substring notation to specify data set names. The
following control statement defines template CP2. Variable substring notation is
used in the DSN option to define the data set naming convention.
Assume that in the year 2003 you make a full image copy of partition 00004 of
table space DSN8S81D. Assume that you specify the template CP2 for the data set
for the local primary copy. DB2 gives the following name to the image copy data
set: DH173001.DSN8S81D.Y03.COPYLP.P004
Notice that every variable in the DSN option begins with an ampersand (&) and
ends with a period (.). These ampersands and periods are not included in the data
set name. Only periods that do not signal the end of a variable are included in the
data set name.
TEMPLATE CP2 DSN ’DH173001.&SN..Y&YEAR(3)..COPY&LR.&PB..P&PART(3,3).’
UNIT(SYSDA)
Example 3: Using COPY with TEMPLATE with variable substring notation. The
following TEMPLATE utility control statement defines template SYSCOPY. Variable
substring notation is used in the DSN option to define the data set naming
convention. The subsequent COPY utility control statement specifies that DB2 is to
make a local primary copy of the first partition of table space
DSN8D81A.DSN8S81E. COPY is to write this image copy to a data set that is
dynamically allocated according to the SYSCOPY template. In this case, the
resulting data set name is DSN8D81A.DSN8S81E.P001
TEMPLATE SYSCOPY DSN ’&DB..&TS..P&PA(3).’
Notice that you can change the part variable in the DSN operand from P&PA(3). to
P&PA(3,3). The resulting data set name is the same, because the length value of 3
is implied in the first specification.
Example 4: Specifying a template for tape data sets with an expiration date.
The following control statement defines the TAPEDS template. Any data sets that
are defined with this template are to be allocated on a device of type 3590-1, as
indicated by the UNIT option, and are to expire on 1 January 2100, as indicated by
the EXPDL option. The DSN option indicates that these data set names are to have
the following three parts: database name, table space name, and date.
TEMPLATE TAPEDS DSN(&DB..&TS..D&DATE.)
UNIT 3590-1 EXPDL ’2100001’
//************************************************************
//* COMMENT: Define a model data set. *
//************************************************************
//STEP1 EXEC PGM=IEFBR14
//SYSCOPX DD DSN=JULTU225.MODEL,DISP=(NEW,CATLG,CATLG),
// UNIT=SYSDA,SPACE=(4000,(20,20)),VOL=SER=SCR03,
// DCB=(RECFM=FB,BLKSIZE=4000,LRECL=100)
//***********************************************************
//* COMMENT: GDGLIMIT(6)
//***********************************************************
//STEP2 EXEC DSNUPROC,UID=’JULTU225.GDG’,
// UTPROC=’’,
// SYSTEM=’SSTR’
//SYSIN DD *
TEMPLATE COPYTEMP
UNIT SYSDA
DSN ’JULTU225.GDG(+1)’
MODELDCB JULTU225.MODEL
GDGLIMIT(6)
COPY TABLESPACE DBLT2501.TPLT2501
FULL YES
COPYDDN (COPYTEMP)
SHRLEVEL REFERENCE
/*
Figure 103. Example TEMPLATE and COPY statements for writing a local copy to a data set
that is dynamically allocated according to the characteristics of the template.
Example 9: Using a template to copy a GDG data set to tape. In the example in
Figure 104 on page 593, the OPTIONS utility control statement causes the
subsequent TEMPLATE statement to run in PREVIEW mode. In this mode, DB2
checks the syntax of the TEMPLATE statement. If DB2 determines that the syntax
is valid, it expands the data set names. The OPTIONS OFF statement ends
PREVIEW mode processing. The subsequent COPY utility control statement
executes normally. The COPY statement specifies that DB2 is to write a local image
copy of the table space DBLT4301.TPLT4301 to a data set that is dynamically
allocated according to the characteristics that are defined in the COPYTEMP
template. According to the COPYTEMP template, this data set is to be named
JULTU243.GDG(+1) (as indicated by the DSN option) and is to be stacked on the
tape volume 99543 (as indicated by the UNIT, STACK, and VOLUMES options).
The data set dispositions are specified by the DISP option. The GDGLIMIT option
specifies that 50 entries are to be created in a GDG base.
Figure 104. Example job that uses OPTIONS, TEMPLATE, and COPY statements to copy a
GDG data set to tape.
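The figure is not reproduced here; based on the description above, the control
statements in that job might look similar to the following sketch (the DISP values
and the COPY options other than COPYDDN are assumptions):
OPTIONS PREVIEW
TEMPLATE COPYTEMP
         DSN 'JULTU243.GDG(+1)'
         UNIT TAPE STACK YES VOLUMES(99543)
         DISP(NEW,CATLG,CATLG)
         GDGLIMIT(50)
OPTIONS OFF
COPY TABLESPACE DBLT4301.TPLT4301
     COPYDDN(COPYTEMP)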
The output records that the UNLOAD utility writes are compatible as input to the
LOAD utility; as a result, you can reload the original table or different tables.
Authorization required: To execute this utility, you must use a privilege set that
includes one of the following authorities:
v Ownership of the tables
v SELECT privilege on the tables
v DBADM authority for the database
v SYSADM authority
v SYSCTRL authority (catalog tables only)
| If you use RACF access control with multilevel security and UNLOAD is to process
| a table space that contains a table that has multilevel security with row-level
| granularity, you must be identified to RACF and have an accessible valid security
| label. Each row is unloaded only if your security label dominates the data security
| label. If your security label does not dominate the data security label, the row is not
| unloaded, but DB2 does not issue an error message. For more information about
| multilevel security and security labels, see Part 3 of DB2 Administration Guide.
Syntax diagram
(The railroad syntax diagram is not reproduced here.) The UNLOAD statement
identifies its source through a source-spec, a from-table-spec, or LIST
listdef-name. A source-spec names either TABLESPACE database-name.tablespace-name,
optionally with PART integer or PART int1:int2, or an image copy through FROMCOPY
data-set-name with optional FROMVOLUME CATALOG or FROMVOLUME vol-ser and
FROMSEQNO n (1), or through FROMCOPYDDN ddname. The unload-spec includes options
such as NOSUBS, NOPAD, CCSID(integer,...), FLOAT S390 or FLOAT IEEE (the default is
S390), and DELIMITED with COLDEL, CHARDEL, and DECPT delimiters (the defaults are
',', '"', and '.').
Notes:
1 The FROMSEQNO option is required if you are unloading an image copy from a tape
data set that is not cataloged.
FROM-TABLE-spec:
For the syntax diagram and the option descriptions of the FROM TABLE
specification, see “FROM-TABLE-spec” on page 604.
Option descriptions
“Control statement coding rules” on page 19 provides general information about
specifying options for DB2 utilities.
DATA Identifies the data that is to be selected for unloading with table-name in the
from-table-spec. The DATA keyword is mutually exclusive with
TABLESPACE, PART, and LIST keywords.
When you specify the DATA keyword, or when you omit both the TABLESPACE
keyword and the LIST keyword, you must also specify at least one FROM TABLE
clause.
TABLESPACE
Specifies the table space (and, optionally, the database to which it belongs)
from which the data is to be unloaded.
database-name
The name of the database to which the table space belongs. The name
cannot be DSNDB01 or DSNDB07. The default is DSNDB04.
tablespace-name
The name of the table space from which the data is to be unloaded.
The specified table space must not be a LOB table space.
PART
Identifies a partition or a range of partitions from which the data is to be
unloaded.
FROMCOPY data-set-name
Specifies that data is to be unloaded from an image copy data set. If the
specified image copy data set is a full image copy, either compressed or
uncompressed records can be unloaded.
FROMVOLUME
Identifies the volume on which the image copy data set resides.
CATALOG
Indicates that the data set is cataloged. Use this option only for an
image copy that was created as a cataloged data set (which means that
its volume serial is not recorded in SYSIBM.SYSCOPY).
vol-ser
Identifies the data set by an alphanumeric volume serial identifier of its
first volume. Use this option only for an image copy that was created as
a non-cataloged data set. To specify a data set that is stored on
multiple tape volumes, identify the first vol-ser in the SYSCOPY record.
| FROMSEQNO n
Identifies the image copy data set by its file sequence number. The
FROMSEQNO option is required if you are unloading an image
copy from a tape data set that is not cataloged.
| n Specifies the file sequence number.
FROMCOPYDDN ddname
Indicates that data is to be unloaded from one or more image copy data
sets that are associated with the specified ddname. Multiple image copy
data sets (primarily for the copy of pieces) can be concatenated under a
single DD name.
ddname
Identifies a DD name with which one or more image copy data sets are
associated.
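For illustration, a statement that unloads data from a cataloged image copy rather
than from the table space itself might look like the following sketch (the image
copy data set name is an example):
UNLOAD TABLESPACE DSN8D81A.DSN8S81E
       FROMCOPY JULTU225.FCOPY1 FROMVOLUME CATALOG
       PUNCHDDN SYSPUNCH UNLDDN SYSREC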
LIST listdef-name
Identifies the name of a list of objects that are defined by a LISTDEF utility
| control statement. The list can include table spaces, databases, tables, and
| partitions. The list cannot include index spaces, LOB table spaces, and
| directory objects. You cannot use the LIST
option to specify image copy data sets.
When you specify the LIST option, the referenced LISTDEF identifies:
v The table spaces from which the data is to be unloaded. You can use the
pattern-matching feature of LISTDEF.
v The partitions (if a table space is partitioned) from which the data is to be
unloaded (defined by the INCLUDE, EXCLUDE, and PARTLEVEL
keywords in the LISTDEF statement).
The UNLOAD utility associates a single table space with one output data
set, except when partition-parallelism is activated. When you use the LIST
option with a LISTDEF that represents multiple table spaces, you must also
define a data set TEMPLATE that corresponds to all of the table spaces
and specify the template-name in the UNLDDN option.
If you want to generate the LOAD statements, you must define another
TEMPLATE for the PUNCHDDN data set that is similar to UNLDDN. DB2
then generates a LOAD statement for each table space.
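For illustration, a list-driven job might pair the list with templates for both the
unload data sets and the generated LOAD statements, similar to the following sketch
(the list name, template names, and data set name patterns are examples):
LISTDEF PAYLIST INCLUDE TABLESPACE PAYROLL.*
TEMPLATE ULDDS DSN(&DB..&TS..UNLD)
TEMPLATE PNCDS DSN(&DB..&TS..PUNCH)
UNLOAD LIST PAYLIST UNLDDN(ULDDS) PUNCHDDN(PNCDS)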
PUNCHDDN
Specifies the DD name for a data set or a template name that defines one
or more data set names that are to receive the LOAD utility control
statements that the UNLOAD utility generates.
ddname
Specifies the DD name. The default is SYSPUNCH.
template-name
Identifies the name of a data set template that is defined by a
TEMPLATE utility control statement.
If the specified name is defined both as a DD name (in the JCL) and as a
template name (in a TEMPLATE statement), it is treated as the DD name.
When you run the UNLOAD utility for multiple table spaces and you want to
generate corresponding LOAD statements, you must have multiple output
data sets that correspond to the table spaces so that DB2 retains all of the
generated LOAD statements. In this case, you must specify an appropriate
template name to PUNCHDDN. If you omit the PUNCHDDN specification,
the LOAD statements are not generated.
If the specified name is defined both as a DD name (in the JCL) and as a
template name (in a TEMPLATE statement), it is treated as the DD name.
When you run the UNLOAD utility for a partitioned table space, the selected
partitions are unloaded in parallel if the following conditions are true:
1. You specify a template name for UNLDDN.
2. The template data set name contains the partition as a variable (&PART.
or &PA.) without substring notation. This template name is expanded
into multiple data sets that correspond to the selected partitions.
3. The TEMPLATE control statement does not contain all of the following
options:
v STACK(YES)
v UNIT(TAPE)
v An UNCNT value that is less than or equal to one.
If conditions 1 and 2 are true, but condition 3 is false, partition parallelism is
not activated and all output data sets are stacked on one tape.
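For illustration, the following sketch unloads a range of partitions in parallel by
including the partition variable in the template (the names and the partition range
are examples):
TEMPLATE UNLDPART DSN(&DB..&TS..P&PA.)
UNLOAD TABLESPACE DSN8D81A.DSN8S81E PART 1:4 UNLDDN(UNLDPART)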
Similarly, when you run the UNLOAD utility for multiple table spaces, the
output records are placed in data sets that correspond to the respective
table spaces. Therefore the output data sets must be physically distinctive,
and you must specify an appropriate template name to UNLDDN. If you
omit the UNLDDN specification, the SYSREC DD name is not used, and an
error occurs.
EBCDIC
Specifies that all output data of the character type is to be in EBCDIC. If a
different encoding scheme is used for the source data, the data (except for
bit strings) is converted into EBCDIC.
If you do not specify EBCDIC, ASCII, UNICODE, or CCSID, the encoding
scheme of the source data is preserved.
See the description of the CCSID option for this utility.
ASCII Specifies that all output data of the character type is to be in ASCII. If a
different encoding scheme is used for the source data, the data (except for
bit strings) is converted into ASCII.
If you do not specify EBCDIC, ASCII, UNICODE, or CCSID, the encoding
scheme of the source data is preserved.
See the description of the CCSID option for this utility.
UNICODE
Specifies that all output data of the character type (except for bit strings) is
to be in Unicode. If a different encoding scheme is used for the source
data, the data is converted into Unicode.
If you do not specify EBCDIC, ASCII, UNICODE, or CCSID, the encoding
scheme of the source data is preserved.
See the description of the CCSID option of this utility.
CCSID(integer1,integer2,integer3)
Specifies up to three coded character set identifiers (CCSIDs) that are to be
used for the data of character type in the output records, including data that
is unloaded in the external character formats.
integer1 specifies the CCSID for SBCS data. integer2 specifies the CCSID
for mixed data. integer3 specifies the CCSID for DBCS data. This option is
not applied to data with a subtype of BIT.
| If you specify both FORMAT DELIMITED and UNICODE, all output data is
| in CCSID 1208, UTF-8; any other specified CCSID is ignored.
The following specifications are also valid:
CCSID(integer1)
Indicates that only an SBCS CCSID is specified.
CCSID(integer1,integer2)
Indicates that an SBCS CCSID and a mixed CCSID are specified.
integer
Specifies either a valid CCSID or 0.
If you specify a value of 0 for one of the arguments or omit a value, the
encoding scheme that is specified by EBCDIC, ASCII, or UNICODE is
assumed for the corresponding data type (SBCS, MIXED, or DBCS). If you
do not specify EBCDIC, ASCII, or UNICODE:
v If the source data is of character type, the original encoding scheme is
preserved.
v For character strings that are converted from numeric, date, time, or
timestamp data, the default encoding scheme of the table is used. For
more information, see the CCSID option of the CREATE TABLE
statement in Chapter 5 of DB2 SQL Reference.
| You cannot specify the same character for more than one type of delimiter
| (COLDEL, CHARDEL, and DECPT).
| If you specify the FORMAT DELIMITED option, you cannot specify
| HEADER CONST or use any of the multiple FROM TABLE statements.
| Also, UNLOAD ignores any specified POSITION statements within the
| UNLOAD utility control statement.
| For delimited output, UNLOAD does not add trailing padded blanks to
| variable-length columns, even if you do not specify the NOPAD option. For
| fixed-length columns, the normal padding rules apply. For example, if a
| VARCHAR(10) field contains ABC, UNLOAD DELIMITED unloads the field as
| ″ABC″. However, for a CHAR(10) field that contains ABC, UNLOAD
| DELIMITED unloads it as ″ABC ″. For information about using
| delimited output and delimiter restrictions, see “Unloading delimited files” on
| page 636. For more information about delimited files see Appendix F,
| “Delimited file format,” on page 875.
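| For illustration, a statement that produces delimited output with a semicolon
| column delimiter and a comma decimal point might look like the following sketch
| (the object name and delimiter choices are examples):
| UNLOAD TABLESPACE DSN8D81A.DSN8S81E
|        DELIMITED COLDEL ';' CHARDEL '"' DECPT ','
|        PUNCHDDN SYSPUNCH UNLDDN SYSREC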
| COLDEL
| Specifies the column delimiter that is used in the output file. The default
| is a comma (,). For ASCII and UTF-8 data, this is X'2C'; for
| EBCDIC data, it is X'6B'.
| CHARDEL
| Specifies the character string delimiter that is used in the output file.
| The default is a double quotation mark (″). For ASCII and UTF-8 data
| this is X'22', and for EBCDIC data it is X'7F'.
| The UNLOAD utility adds the CHARDEL character before and after
| every character string. To delimit character strings that contain the
| character string delimiter, the UNLOAD utility repeats the character
| string delimiter where it is used in the character string. The LOAD utility
| then interprets any pair of character delimiters that are found between
| the enclosing character delimiters as a single character. For example,
| the phrase what a “nice warm” day is unloaded as “what a ““nice
| warm”” day”, and LOAD interprets it as what a “nice warm” day. The
| UNLOAD utility recognizes these character pairs for only CHAR,
| VARCHAR, and CLOB fields.
| DECPT
| Specifies the decimal point character that is used in the output file. The
| default is a period (.).
| The default decimal point character is a period in a delimited file, X'2E'
| in an ASCII data file, and X'4B' in an EBCDIC data file.
FLOAT
Specifies the output format of the numeric floating-point data. This option
applies to the binary output format only.
S390
Indicates that the binary floating point data is written to the output
records in the S/390 internal format (also known as the hexadecimal
floating point, or HFP).
The default is FLOAT S390.
IEEE
Indicates that the binary floating-point data is written to the output
records in the IEEE format (also known as the binary floating point, or
BFP).
MAXERR integer
Specifies the maximum number of records in error that are to be allowed;
the unloading process terminates when this value is reached.
integer
Specifies the number of records in error that are allowed. When the
error count reaches this number, the UNLOAD utility issues message
DSNU1219 and terminates with return code 8.
The default is 1, which indicates that UNLOAD stops when the first
error is encountered. If you specify 0 or any negative number, execution
continues regardless of the number of records that are in error.
If multiple table spaces are being processed, the number of records in error
is counted for each table space. If the LIST option is used, you can add an
OPTIONS utility control statement (EVENT option with ITEMERROR) before
the UNLOAD statement to specify that the table space in error is to be
skipped and the subsequent table spaces are to be processed.
SHRLEVEL
Specifies whether other processes can access or update the table space or
partitions while the data is being unloaded.
UNLOAD ignores the SHRLEVEL specification when the source object is an
image copy data set.
The default is SHRLEVEL CHANGE ISOLATION CS.
CHANGE
Specifies that rows can be read, inserted, updated, and deleted from
the table space or partition while the data is being unloaded.
ISOLATION
Specifies the isolation level with SHRLEVEL CHANGE.
CS
Indicates that the UNLOAD utility is to read rows in cursor
stability mode. With CS, the UNLOAD utility assumes
CURRENTDATA(NO).
UR
Indicates that uncommitted rows, if they exist, are to be
unloaded. The unload operation is performed with minimal
interference from the other DB2 operations that are applied to
the objects from which the data is being unloaded.
REFERENCE
Specifies that during the unload operation, rows of the tables can be
read, but cannot be inserted, updated, or deleted by other DB2
threads.
When you specify SHRLEVEL REFERENCE, the UNLOAD utility drains
writers on the table space from which the data is to be unloaded. When
data is unloaded from multiple partitions, the drain lock is obtained for
all of the selected partitions in the UTILINIT phase.
FROM-TABLE-spec
More than one table or partition for each table space can be unloaded with a single
invocation of the UNLOAD utility. One FROM TABLE statement for each table that
is to be unloaded is required to identify:
v A table name from which the rows are to be unloaded
v A field to identify the table that is associated with the rows that are to be
unloaded from the table by using the HEADER option
v Sampling options for the table rows
v A list of field specifications for the table that is to be used to select columns that
are to be unloaded
v Selection conditions, specified in the WHEN clause, that are to be used to qualify
rows that are to be unloaded from the table
All tables that are specified by FROM TABLE statements must belong to the same
table space. If rows from specific tables are to be unloaded, a FROM TABLE clause
must be specified for each source table. If you do not specify a FROM TABLE
clause for a table space, all the rows of the table space are unloaded.
If you omit a list of field specifications, all columns of the source table are unloaded
in the defined column order for the table. The default output field types that
correspond to the data types of the columns are used.
In a FROM TABLE clause, you can use parentheses in only two situations: to
enclose the entire field selection list, and in a WHEN selection clause. This usage
avoids potential conflict between the keywords and field-names that are used in the
field selection list. A valid sample of a FROM TABLE clause specification follows:
UNLOAD ...
FROM TABLE tablename SAMPLE x (c1,c2) WHEN (c3>0)
You cannot specify FROM TABLE if the LIST option is already specified.
FROM-TABLE-spec:
(The railroad syntax diagrams are not reproduced here.) A FROM TABLE clause names a
table (FROM TABLE table-name) and can specify a header option (HEADER OBID, which is
the default except for delimited files, HEADER NONE, or HEADER CONST 'string' or
X'hex-string'), SAMPLE decimal, LIMIT integer, a parenthesized list of field
specifications, and a WHEN (selection-condition) clause. A field specification names
a column (field-name), an optional POSITION(*) or POSITION(start), and an output
field type: CHAR, VARCHAR, GRAPHIC EXTERNAL, VARGRAPHIC, SMALLINT, INTEGER, INTEGER
EXTERNAL, DECIMAL (PACKED, ZONED, or EXTERNAL, with optional length and scale),
FLOAT, FLOAT EXTERNAL, DOUBLE, REAL, DATE EXTERNAL, TIME EXTERNAL, TIMESTAMP
EXTERNAL, CONSTANT 'string' or X'hex-string', ROWID, BLOB, CLOB, or DBCLOB. The
character and graphic types accept an optional length, the TRUNCATE option, and the
STRIP option (BOTH, TRAILING, or LEADING, with an optional 'strip-char' or
X'strip-char'). A selection-condition consists of predicates (basic, BETWEEN, IN,
LIKE, and NULL predicates, optionally negated with NOT) that are combined with AND
and OR and that can be nested within parentheses. A basic predicate compares a
column-name with a constant or a labeled-duration-expression by using =, <>, >, <,
>=, or <=.
OBID
Specifies that the object identifier (OBID) for the table (a 2-byte binary
value) is to be placed in the first 2 bytes of the output records that are
unloaded from the table.
| If you omit the HEADER option, HEADER OBID is the default, except
| for delimited files.
With HEADER OBID, the first 2 bytes of the output record cannot be
used by the unloaded data. For example, consider the following
UNLOAD statement:
UNLOAD ...
FROM TABLE table-name HEADER OBID ...
The sampling is applied for each individual table. If the rows from multiple
tables are unloaded with sampling enabled, the referential integrity between
the tables might be lost.
LIMIT integer
Specifies the maximum number of rows that are to be unloaded from a
table. If the number of unloaded rows reaches the specified limit, message
DSNU1201 is issued for the table, and no more rows are unloaded from the
table. The process continues to unload qualified rows from the other tables.
When partition parallelism is activated, the LIMIT option is applied to each
partition instead of to the entire table.
integer
Indicates the maximum number of rows that are to be unloaded from a
table. If the specified number is less than or equal to zero, no row is
unloaded from the table.
Like the SAMPLE option, if multiple tables are unloaded with the LIMIT
option, the referential integrity between the tables might be lost.
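For illustration, a FROM TABLE clause that combines sampling with a row limit might
look like the following sketch (the table name and the values are examples):
UNLOAD TABLESPACE DSN8D81A.DSN8S81E
       FROM TABLE DSN8810.EMP SAMPLE 25.0 LIMIT 1000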
field-name
Identifies a column name that must exist in the source table.
POSITION(start)
Specifies the field position in the output record. You can specify
the position parameter as follows:
* An asterisk, indicating that the field starts at the first byte after the
last position of the previous field.
start A positive integer that indicates the start column of the data field.
the item that is specified by the HEADER option is placed at the beginning
of all the records that are unloaded from the table. You must account for the
space for the record header:
v HEADER OBID (the default case): 2 bytes from position 1.
v HEADER CONST 'string' or X'hex-string' case: The length of the given
string from position 1.
If the source table column can be null, the utility places a NULL indicator
byte at the beginning of the data field in the output record. The start
parameter (or *) points to the position of the NULL indicator byte. In the
generated LOAD statement, start is shifted by 1 byte to the right (as
start+1) so that, in the LOAD statement, the start parameter of the
POSITION option points to the next byte past the NULL indicator byte.
For a varying-length field, a length field precedes the actual data field (after
the NULL indicator byte, if applicable). If the value cannot be null, the start
parameter (or *) points to the first byte of the length field. The size of the
length field is either 4 bytes (BLOB, CLOB, or DBCLOB) or 2 bytes
(VARCHAR or VARGRAPHIC).
When you explicitly specify the output field positions by using start
parameters (or using the * format) of the POSITION option, you must
consider the following items as a part of the output field:
v For a field whose value can be null, a space for the NULL indicator byte
v For varying-length data, a space for the length field (either 2 bytes or 4
bytes)
“Determining the layout of output fields” on page 633 illustrates the field
layout in conjunction with the POSITION option, NULL indicator byte, the
length field for a varying-length field, the length parameter, and the actual
data length.
The POSITION option is useful when the output fields must be placed at
desired positions in the output records. The use of the POSITION
parameters, however, can restrict the size of the output data fields. Use
care when explicitly specifying start parameters for nullable and
varying-length fields. The TRUNCATE option might be required, if
applicable, to fit a data item in a shorter space in an output record.
If you omit the POSITION option for the first field, the field starts from
position 1 if HEADER NONE is specified. Otherwise, the field starts from
the next byte position past the record header field. If POSITION is omitted
for a subsequent field, the field is placed next to the last position of the
previous field without any gap.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the output
data is encoded in the CCSID that corresponds to the specified option,
depending on the subtype of the source data (SBCS or MIXED). If the
subtype is BIT, no conversion is applied.
(length)
Specifies the size of the output data in bytes.
If the length parameter is omitted, the default is the maximum length
that is defined on the source table column. When the length parameter
is specified:
v If the length is less than the size of the table column, the data is
truncated to the length if the TRUNCATE keyword is present;
otherwise, a conversion error occurs.
v If the length is larger than the size of the table column, the output
field is padded by the default pad characters to the specified length.
TRUNCATE
Indicates that a character string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the field
in the output record. Truncation occurs at the character boundary. See
“Specifying TRUNCATE and STRIP options for output data” on page
638 for the truncation rules that are used in the UNLOAD utility. Without
TRUNCATE, an error occurs when the output field size is too small for
the data.
VARCHAR
Specifies that the output field type is character of varying length. A 2-byte
binary field indicating the length of data in bytes is prepended to the data
field. If the table column can be null, a NULL indicator byte is placed before
| this length field for a non-delimited output file.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the output
data is encoded in the CCSID corresponding to the specified option,
depending on the subtype of the source data (SBCS or MIXED). If the
subtype is BIT, no conversion is applied.
(length)
Specifies the maximum length of the actual data field in bytes. If you
also specify NOPAD, it indicates the maximum allowable space for the
data in the output records; otherwise, the space of the specified length
is reserved for the data.
If the length parameter is omitted, the default is the smaller of 255 and
the maximum length that is defined on the source table column.
STRIP
Specifies that UNLOAD is to remove blanks (the default) or the
specified characters from the beginning, the end, or both ends of the
data. UNLOAD adjusts the VARCHAR length field (for the output field)
to the length of the stripped data.
The STRIP option is applicable even if the subtype of the source data is
BIT. In this case, no CCSID conversion is performed on the specified strip
character (even if it is given in the form 'strip-char').
The effect of the STRIP option is the same as the SQL STRIP scalar
function. For details, see Chapter 5 of DB2 SQL Reference.
BOTH
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning and end of the data.
The default is BOTH.
TRAILING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the end of the data.
LEADING
Indicates that UNLOAD is to remove occurrences of blank or the
specified strip character from the beginning of the data.
'strip-char'
Specifies a single-byte character that is to be stripped. Specify this
character value in EBCDIC. Depending on the output encoding
scheme, UNLOAD applies SBCS CCSID conversion to the
strip-char value before it is used in the strip operation. If you want
to specify a strip-char value in an encoding scheme other than
EBCDIC, use the hexadecimal form. UNLOAD does not perform
CCSID conversion if the hexadecimal form is used.
X'strip-char'
Specifies a single-byte character that is to be stripped. It can be
specified in the hexadecimal form, X'hex-string', where hex-string is
two hexadecimal characters that represent a single SBCS
character. If the strip-char operand is omitted, the default is the
blank character, which is coded as follows:
v X'40' for the EBCDIC-encoded output case
v X'20' for the ASCII-encoded output case
v X'20' for the Unicode-encoded output case
TRUNCATE
Indicates that a graphic character string (encoded for output) is to be
truncated from the right, if the data does not fit in the available space
for the field in the output records. Truncation occurs at a character
(DBCS) boundary. Without TRUNCATE, an error occurs when the
output field size is too small for the data.
GRAPHIC EXTERNAL
Specifies that the data is to be written in the output records as a
fixed-length field of the graphic type with the external format; that is, the
shift-out (SO) character is placed at the starting position, and the shift-in
(SI) character is placed at the ending position. The byte count of the output
field is always an even number.
GRAPHIC EXTERNAL is supported only in the EBCDIC output mode (by
default or when the EBCDIC keyword is specified).
If the start parameter of the POSITION option is used to specify the output
column position, it points to the (inserted) shift-out character at the
beginning of the field. The shift-in character is placed at the next byte
position past the last double-byte character of the data.
(length)
Specifies a number of DBCS characters, excluding the shift characters
(as in the graphic type column definition that is used in a CREATE
TABLE statement) and excluding the NULL indicator byte if the source column can
be null. If the length parameter is omitted, the default output field size is
the length that is defined on the corresponding table column, plus two
bytes (shift-out and shift-in characters).
If the specified length is larger than the size of the data, the field is
padded on the right with the default DBCS padding character.
TRUNCATE
Indicates that a graphic character string is to be truncated from the right
by the DBCS characters, if the data does not fit in the available space
for the field in the output records. Without TRUNCATE, an error occurs
when the output field size is too small for the data. An error can also
occur with the TRUNCATE option if the available space is less than 4
bytes (4 bytes is the minimum size for a GRAPHIC EXTERNAL field;
shift-out character, one DBCS, and shift-in character); or fewer than 5
bytes if the field can be null (the 4 bytes plus the NULL indicator
byte).
VARGRAPHIC
Specifies that the output field is to be of the varying-length graphic type. A
2-byte binary length field is prepended to the actual data field. If the table
column can be null, a NULL indicator byte is placed before this length field
| for any non-delimited output file.
(length)
Specifies the maximum length of the actual data field in the number of
DBCS characters. If you also specify NOPAD, it indicates the maximum
allowable space for the data in the output records; otherwise, the space
of the specified length is reserved for the data.
If the length parameter is omitted, the default is the smaller of 127 and
the maximum defined length of the source table column.
STRIP
Indicates that UNLOAD is to remove DBCS blanks (the default) or the
specified DBCS character from the beginning, the end, or both ends of the data.
An INTEGER output field requires 4 bytes, and the length option is not
available.
INTEGER EXTERNAL
Specifies that the output field is to contain a character string that represents
an integer number.
(length)
Indicates the size of the output data in bytes, including a space for the
sign character. When the length is given and the character notation
does not fit in the space, an error occurs. The default is 11 characters
(including a space for the sign).
If the value is negative, a minus sign precedes the numeric digits. If the
output field size is larger than the length of the data, the output data is left
justified and blanks are padded on the right.
If you specify the output field size as less than the length of the data,
an error occurs. If the specified field size is greater than the length of
data, X'0' is padded on the left.
ZONED
Specifies that the output data is a number that is represented by the
zoned-decimal format. You can use DEC ZONED as an abbreviated
form of the keyword.
The zoned-decimal representation of a number is of the form
znznzn...z/sn, where n denotes a 4-bit decimal digit (called the numeric
bits); z is the digit's zone (the left 4 bits of a byte); and s, in the
right-most byte, can be either a zone (z) or a sign value (hexadecimal A,
C, E, or F for a positive number, or hexadecimal B or D for a negative
number).
length
Specifies the number of bytes (that is the number of decimal digits)
that are placed in the output field. The length must be between 1
and 31.
If the source data type is DECIMAL and the length parameter is
omitted, the default length is determined by the column attribute
that is defined on the table. Otherwise, the default length is 31
bytes.
scale
Specifies the number of digits to the right of the decimal point.
(Note that, in this case, a decimal point is not included in the output
field.) The number must be an integer greater than or equal to zero
and less than or equal to the length.
The default depends on the column attribute that is defined on the
table. If the source data type is DECIMAL, the defined scale value
is the default value; otherwise, the default is 0.
If you specify the output field size as less than the length of the data,
an error occurs. If the specified field size is greater than the length of
data, X'F0' is padded on the left.
EXTERNAL
Specifies that the output data is a character string that represents a
number in the form of ±dd...d.ddd...d, where d is a numeric character
0-9. (The plus sign for a positive value is omitted.)
length
Specifies the overall length of the output data (the number of
characters including a sign, and a decimal point if scale is
specified).
If the source data type is DECIMAL and the length parameter is
omitted, the default length is determined by the column attribute
that is defined on the table. Otherwise, the default length is 33 (31
numeric digits, plus a sign and a decimal point). The minimum
value of length is 3 to accommodate the sign, one digit, and the
decimal point.
scale
Specifies the number of digits to the right of the decimal point. The
number must be an integer that is greater than or equal to zero and
less than or equal to length - 2 (to allow for the sign character and
the decimal point).
is not available, an error occurs. If the specified length is larger than the
size of the data, blanks are padded on the right.
TIME EXTERNAL
Specifies that the output field is for a character string representation of a
time. The output format of time depends on the DB2 installation.
(length)
Specifies the size of the data field in bytes in the output record. A TIME
EXTERNAL field requires a space of at least eight characters. If the
space is not available, a conversion error occurs. If the specified length
is larger than the size of the data, blanks are padded on the right.
TIMESTAMP EXTERNAL
| Specifies that the output field is for a character string representation of a
| timestamp.
| (length)
| Specifies the size of the data field in bytes in the output record. A
| TIMESTAMP EXTERNAL field requires a space of at least 19
| characters. If the space is not available, an error occurs. The length
| parameter, if specified, determines the output format of the
| TIMESTAMP. If the specified length is larger than the size of the data,
| the field is padded on the right with the default padding character.
CONSTANT
Specifies that the output records are to have an extra field containing a
constant value. The field name that is associated with the CONSTANT
keyword must not coincide with a table column name (the field name is for
clarification purposes only). A CONSTANT field always has a fixed length
that is equal to the length of the given string.
'string'
Specifies the character string that is to be inserted in the output records
at the specified or default position. A string is the required operand of
the CONSTANT option. If the given string is in the form 'string', it is
assumed to be an EBCDIC SBCS string. However, the output string for
a CONSTANT field is in the specified or default encoding scheme. (That
is, if the encoding scheme used for output is not EBCDIC, the SBCS
CCSID conversion is applied to the given string before it is placed in
output records.)
X'hex-string'
Specifies the character string in hexadecimal form, X'hex-string', that is
to be inserted in the output records at the specified or default position.
If you want to specify a CONSTANT string value in an encoding
scheme other than SBCS EBCDIC, use the hexadecimal form. No
CCSID conversion is performed if the hexadecimal form is used.
ROWID
Specifies that the output data is of type ROWID. The field type ROWID can
be specified if and only if the column that is to be unloaded is of type
ROWID. The keyword is provided for consistency purposes.
ROWID fields have varying length and a 2-byte binary length field is
prepended to the actual data field.
For the ROWID type, no data conversion or truncation is applied. If the
output field size is too small to unload ROWID data, an error occurs.
If the source is an image copy and a ROWID column is selected, and if the
page set header page is missing in the specified data set, the UNLOAD
utility terminates with the error message DSNU1228I. This situation can
occur when the source is an image copy data set of DSNUM that is greater
than one for a nonpartitioned table space that is defined on multiple data
sets.
BLOB Indicates that the column is to be unloaded as a binary large object
(BLOB). No data conversion is applied to the field.
When you specify the BLOB field type, a 4-byte binary length field is placed
in the output record prior to the actual data field. If the source table column
can be null, a NULL indicator byte is placed before the length field.
(length)
Specifies the maximum length of the actual data field in bytes. If you
specify NOPAD, it indicates the maximum allowable space for the data
in the output records; otherwise, the space of the specified length is
reserved for the data.
The default is the maximum length that is defined on the source table
column.
TRUNCATE
Indicates that a BLOB string is to be truncated from the right, if the data
does not fit in the available space for the field in the output record. For
BLOB data, truncation occurs at a byte boundary. Without TRUNCATE,
an error occurs when the output field size is too small for the data.
CLOB Indicates that the column is to be unloaded as a character large object
(CLOB).
When you specify the CLOB field type, a 4-byte binary length field is placed
in the output record prior to the actual data field. If the source table column
can be null, a NULL indicator byte is placed before the length field.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the output
data is encoded in the CCSID corresponding to the specified option,
depending on the subtype of the source data (SBCS or MIXED). No
conversion is applied if the subtype is BIT.
(length)
Specifies the maximum length of the actual data field in bytes. If you
specify NOPAD, it indicates the maximum allowable space for the data
in the output records; otherwise, the space of the specified length is
reserved for the data.
The default is the maximum length that is defined on the source table
column.
TRUNCATE
Indicates that a CLOB string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the field
in the output record. For CLOB data, truncation occurs at a character
boundary. See “Specifying TRUNCATE and STRIP options for output
data” on page 638 for the truncation rules that are used in the UNLOAD
utility. Without TRUNCATE, an error occurs when the output field size is
too small for the data.
DBCLOB
Indicates that the column is to be unloaded as a double-byte character
large object (DBCLOB).
If you specify the DBCLOB field type, a 4-byte binary length field is placed
in the output record prior to the actual data field. If the source table column
can be null, a NULL indicator byte is placed before the length field.
If you specify the EBCDIC, ASCII, UNICODE, or CCSID options, the output
data is encoded in the CCSID corresponding to the specified option; DBCS
CCSID is used.
(length)
Specifies the maximum length of the actual data field in the number of
DBCS characters. If you specify NOPAD, it indicates the maximum
allowable space for the data in the output records; otherwise, the space
of the specified length is reserved for the data.
The default is the maximum length that is defined on the source table
column.
TRUNCATE
Indicates that a DBCS string (encoded for output) is to be truncated
from the right, if the data does not fit in the available space for the field
in the output record. For DBCLOB data, truncation occurs at a
character (DBCS) boundary. See “Specifying TRUNCATE and STRIP
options for output data” on page 638 for the truncation rules that are
used in the UNLOAD utility. Without TRUNCATE, an error occurs when
the output field size is too small for the data.
WHEN
Indicates which records in the table space are to be unloaded. If no WHEN
clause is specified for a table in the table space, all of the records are
unloaded.
The option following WHEN describes the conditions for unloading records
from a table.
selection condition
Specifies a condition that is true, false, or unknown about a given row.
When the condition is true, the row qualifies for UNLOAD. When the
condition is false or unknown, the row does not qualify.
The result of a selection condition is derived by application of the specified
logical operators (AND and OR) to the result of each specified predicate. If
logical operators are not specified, the result of the selection condition is
the result of the specified predicate.
Selection conditions within parentheses are evaluated first. If the order of
evaluation is not specified by parentheses, AND is applied before OR.
| If the control statement is in the same encoding scheme as the input data,
| you can code character constants in the control statement. Otherwise, if the
| control statement is not in the same encoding scheme as the input data,
| you must code the condition with hexadecimal constants. For example, if
| the table space is in EBCDIC and the control statement is in UTF-8, use
| (1:1) = X'F1' in the condition rather than (1:1) = '1'.
| Restriction: UNLOAD cannot filter rows that contain encrypted data.
predicate
Specifies a condition that is true, false, or unknown about a row.
basic predicate
Specifies the comparison of a column with a constant. If the value of
the column is null, the result of the predicate is unknown. Otherwise,
the result of the predicate is true or false.
column = constant     The column is equal to the constant or labeled duration expression.
column <> constant    The column is not equal to the constant or labeled duration expression.
column > constant     The column is greater than the constant or labeled duration expression.
column < constant     The column is less than the constant or labeled duration expression.
column >= constant    The column is greater than or equal to the constant or labeled duration expression.
column <= constant    The column is less than or equal to the constant or labeled duration expression.
For example, the following predicate is true for any row when salary is
greater than or equal to 10000 and less than or equal to 20000:
SALARY BETWEEN 10000 AND 20000
IN predicate
Specifies that a value is to be compared with a set of values. In the IN
predicate, the second operand is a set of one or more values that are
specified by constants. Each of the predicate’s two forms (IN and NOT
IN) has an equivalent search condition, as shown in Table 127.
Table 127. IN predicates and their equivalent search conditions
Predicate                                      Equivalent search condition
value1 IN (value2, value3, ..., valuen)        (value1 = value2 OR value1 = value3 OR ... OR value1 = valuen)
value1 NOT IN (value2, value3, ..., valuen)    (value1 ¬= value2 AND value1 ¬= value3 AND ... AND value1 ¬= valuen)
Note: The values can be constants or labeled duration expressions.
For example, the following predicate is true for any row whose
employee is in department D11, B01, or C01:
WORKDEPT IN (’D11’, ’B01’, ’C01’)
LIKE predicate
Specifies the qualification of strings that have a certain pattern.
Within the pattern, a percent sign or underscore can have a special
meaning, or it can represent the literal occurrence of a percent sign or
underscore. To have its literal meaning, it must be preceded by an
escape character. If it is not preceded by an escape character, it has its
special meaning. The underscore character (_) represents a single,
arbitrary character. The percent sign (%) represents a string of zero or
more arbitrary characters.
The ESCAPE clause designates a single character. That character, and
only that character, can be used multiple times within the pattern as an
escape character. When the ESCAPE clause is omitted, no character
serves as an escape character, so that percent signs and underscores
in the pattern always have their special meanings.
The following rules apply to the use of the ESCAPE clause:
v The ESCAPE clause cannot be used if x is mixed data.
v If x is a character string, the data type of the string constant must be
character string. If x is a graphic string, the data type of the string
constant must be graphic string. In both cases, the length of the
string constant must be 1.
v The pattern must not contain the escape character except when
followed by the escape character, '%' or '_'. For example, if '+' is the
escape character, any occurrence of '+' other than '++', '+_', or '+%'
in the pattern is an error.
Let x denote the column that is to be tested and y the pattern in the
string constant. The following rules apply to predicates of the form x
LIKE y. If NOT is specified, the result is reversed.
v When x and y are both neither empty nor null, the result of the
predicate is true if x matches the pattern in y and false if x does not
match the pattern in y.
v When x or y is null, the result of the predicate is unknown.
v When y is empty and x is not empty, the result of the predicate is
false.
v When x is empty and y is not empty, the result of the predicate is
false unless y consists only of one or more percent signs.
v When x and y are both empty, the result of the predicate is true.
The pattern string and the string that is to be tested must be of the
same type. That is, both x and y must be character strings, or both x
and y must be graphic strings. When x and y are graphic strings, a
character is a DBCS character. When x and y are character strings and
x is not mixed data, a character is an SBCS character and y is
interpreted as SBCS data regardless of its subtype. The rules for
mixed-data patterns are described under “Strings and patterns” on page
625.
NULL predicate
Specifies a test for null values.
If the value of the column is null, the result is true. If the value is not
null, the result is false. If NOT is specified, the result is reversed. (That
is, if the value is null, the result is false, and if the value is not null, the
result is true.)
labeled duration expression
Specifies an expression that begins with special register CURRENT
DATE or special register CURRENT TIMESTAMP (the forms
CURRENT_DATE and CURRENT_TIMESTAMP are also acceptable).
This special register can be followed by arithmetic operations of
addition or subtraction. These operations are expressed by using
numbers that are followed by one of the seven duration keywords:
YEARS, MONTHS, DAYS, HOURS, MINUTES, SECONDS, or MICROSECONDS.
To subtract one year, one month, and one day from a date, specify the
following code:
CURRENT DATE - 1 DAY - 1 MONTH - 1 YEAR
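For illustration, the following selection condition sketch uses a labeled duration
expression to unload only rows for employees who were hired more than ten years
ago. The table and column names are taken from the DB2 sample database
(DSN8810.EMP and its HIREDATE column), not from this section, and the clause is an
approximation rather than a complete example:
  FROM TABLE DSN8810.EMP
    WHEN (HIREDATE < CURRENT DATE - 10 YEARS)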
Notes:
1. Required if you request that UNLOAD generate LOAD statements by specifying
PUNCHDDN in the utility control statement.
The following object is named in the utility control statement and does not require a
DD statement in the JCL:
Table space
Table space that is to be unloaded. (If you want to unload only one partition
of a table space, you must specify the PART option in the control
statement.)
Unloading partitions
If the source table space is partitioned, use one of the following mutually exclusive
methods to select the partitions to unload:
v Use the LIST keyword with a LISTDEF that contains PARTLEVEL specifications.
Partitions can be either included or excluded by the use of the INCLUDE and the
EXCLUDE features of LISTDEF.
v Specify the PART keyword to select a single partition or a range of partitions.
With either method, the unloaded data can be stored in a single data set for all
selected partitions or in one data set for each selected partition. If you want to
unload to a single output data set, specify a DD name to UNLDDN. If you want to
unload into multiple output data sets, specify a template name that is associated
with the partitions. You can process multiple partitions in parallel if the TEMPLATE
definition contains the partition as a variable, for example &PA.
You cannot specify multiple output data sets with the FROMCOPY or the
FROMCOPYDDN option.
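As a sketch of the second method, the following statements select a range of
partitions and write each selected partition to its own data set through a
template that uses &PA. in the data set name. The object, template, and data set
names are hypothetical, and the exact PART and TEMPLATE notation should be checked
against the syntax diagrams:
  TEMPLATE UNLDDS DSN &USERID..UNLD.&TS..P&PA.
           UNIT SYSDA DISP(NEW,CATLG,CATLG)
  UNLOAD TABLESPACE DBTEST01.TSTEST01 PART 2:4
         UNLDDN UNLDDS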
Within a FROM TABLE clause, you can specify one or more of the following criteria:
v Row and column selection criteria by using the field specification list
v Row selection conditions by using the WHEN specification clause
v Row sampling specifications
Important: When an incremental image copy is taken of a table space, rows might
be updated or moved if the SHRLEVEL CHANGE option is specified. As a result,
data that is unloaded from such a copy might contain duplicates of these rows.
You can specify a format conversion option for each field in the field specification
list.
If you select a LOB column in a list of field specifications or select a LOB column by
default (by omitting a list of field specifications), LOB data is materialized in the
output. However, you cannot select LOB columns from image copy data sets.
Unload rows from a single image copy data set by specifying the FROMCOPY
option in the UNLOAD control statement. Specify the FROMCOPYDDN option to
unload data from one or more image copy data sets that are associated with the
specified DD name. Use an image copy that contains the page set header page
when you are unloading a ROWID column; otherwise the unload fails.
The source image copy data set must have been created by one of the following
utilities:
v COPY
v COPYTOCOPY
v LOAD inline image copy
v MERGECOPY
v REORG TABLESPACE inline image copy
v DSN1COPY
UNLOAD accepts full image copies, incremental image copies, and a copy of
pieces as valid input sources.
The UNLOAD utility supports image copy data sets for a single table space. The
table space name must be specified in the TABLESPACE option. The specified
table space must exist when you run the UNLOAD utility. (That is, the table space
cannot have been dropped since the image copy was taken.)
Use the FROMCOPYDDN option to concatenate the copy of table space partitions
under a DD name to form a single input data set image. When you use the
FROMCOPYDDN option, concatenate the data sets in the order of the data set
number; the first data set must be concatenated first. If the data sets are
concatenated in the wrong order or if different generations of image copies are
concatenated, the results might be unpredictable. For example, if the most recent
image copy data sets and older image copies are intermixed, the results might be
unpredictable.
You can use the FROMCOPYDDN option to concatenate a full image copy and
incremental image copies for a table space, a partition, or a piece, but duplicate
rows are also unloaded in this situation. Instead, consider using MERGECOPY to
generate an updated full image copy as the input to the UNLOAD utility.
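For example, a job of roughly the following shape concatenates the copies of the
individual partitions under one DD name, in data set number order, and names that
DD name in the FROMCOPYDDN option. The data set and object names are hypothetical,
and the other DD statements (SYSPRINT, SYSREC, SYSPUNCH, and so on) are omitted:
//INCOPY   DD DSN=TEST.TSTEST01.FCOPY.P00001,DISP=SHR
//         DD DSN=TEST.TSTEST01.FCOPY.P00002,DISP=SHR
//         DD DSN=TEST.TSTEST01.FCOPY.P00003,DISP=SHR
//SYSIN    DD *
  UNLOAD TABLESPACE DBTEST01.TSTEST01
         FROMCOPYDDN INCOPY
         PUNCHDDN SYSPUNCH UNLDDN SYSREC
/*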
You can select specific rows and columns to unload just as you would for a table
space. However, rows that contain LOB columns can be unloaded only when the
LOB columns are not included in a field specification list. If you use an image copy
that does not contain the page set header page when unloading a ROWID column,
the unload fails.
If you use the FROMCOPY or the FROMCOPYDDN option, you can specify only
one output data set.
If you specify a dropped table on the FROM TABLE option, the UNLOAD utility
| terminates with return code 4. If you do not specify a FROM TABLE option and if an
image copy contains rows from dropped tables, UNLOAD ignores these rows.
When you specify either a full or incremental copy of partitions of a segmented
table space that consists of multiple data sets in the FROMCOPY option, be careful
when applying a mass delete to a table in the table space before you create the
copy. If a mass delete of a table occurs, the utility unloads deleted rows if the space
map pages that indicate the mass delete are not included in the data set that
corresponds to the specified copy. Where possible, use the FROMCOPYDDN
option to concatenate the copy of table space partitions.
If an image copy contains a table to which ALTER ADD COLUMN was applied after
the image copy was taken, the UNLOAD utility sets the system or user-specified
default value for the added column when the data is unloaded from such an image
copy.
When you unload a floating-point type column, you can specify that the binary form
of the output be either the S/390 format (hexadecimal floating point, or HFP) or
the IEEE format (binary floating point, or BFP).
You can also convert a varying-length column to a fixed-length output field, with or
without padding characters. In either case, unless you explicitly specify a
fixed-length data type for the field, the data itself is treated as varying-length data,
and a length field is appended to the data.
For certain data types, you can unload data into fields with a smaller length by
using the TRUNCATE or STRIP options. In this situation, if a character code
conversion is applied, the length of the data in bytes might change due to the code
conversion. The truncation operation is applied after the code conversion.
You can perform character code conversion on a character type field, including
converting numeric columns to the external format and the CLOB type. Be aware
that when you apply a character code conversion for mixed-data fields, the length of
the result string in bytes can be shorter or longer than the length of the source
string. Character type data is always converted if you specify any of the character
code conversion options (EBCDIC, ASCII, UNICODE, or CCSID).
DATE, TIME, or TIMESTAMP column types are always converted into the external
formats based on the DATE, TIME, and TIMESTAMP formats of your installation.
If you specify a data type in the UNLOAD control statement, the field type
information is included in the generated LOAD utility statement. For specific data
type compatibility information, refer to Table 129, Table 130, and Table 131 on page
632. These tables show the compatibility of the data type of the source column
(input data type) with the data type of the output field (output data type). A Y
indicates that the input data type can be converted to the output data type.
Notes:
1. Subject to the CCSID conversion, if specified (EXTERNAL case). For more information
about CCSID, see “CCSID” on page 601.
2. Potential overflow (conversion error).
Notes:
1. Subject to the CCSID conversion, if specified.
2. Results in an error if the field length is too small for the data unless you specify the TRUNCATE option. Note that
a LOB has a 4-byte length field; any other varying-length type has a 2-byte length field.
3. Only in the EBCDIC output mode.
4. Not applicable to BIT subtype data.
Notes:
1. Subject to the CCSID conversion, if specified.
2. Zeros in the time portion.
3. DATE or TIME portion of the timestamp.
Use the POSITION option to specify field position in the output records. You can
also specify the size of the output data field by using the length parameter for a
particular data type. The length parameter must indicate the size of the actual data
field. The start parameter of the POSITION option indicates the starting position of
a field, including the NULL indicator byte (if the field can be null) and the length field
(if the field is varying length).
Using the POSITION parameter, the length parameter, or both can restrict the size
of the data field in the output records. Use care when specifying the POSITION and
length parameters, especially for nullable fields and varying length fields. If a
conflict exists between the length parameter and the size of the field in the output
record that is specified by the POSITION parameters, DB2 issues an error
message, and the UNLOAD utility terminates. If an error occurs, the count of the
number of records in error is incremented. See the description of the MAXERR
option on page 604 for more information.
If you specify a length parameter for a varying-length field and you also specify the
NOPAD option, length indicates the maximum length of data that is to be unloaded.
Without the NOPAD option, UNLOAD reserves a space of the given length instead
of the maximum data size.
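The following field list sketch shows these interactions: explicit start positions
with HEADER NONE, an explicit length for a fixed-length field, and a varying-length
field whose length value acts as a maximum because NOPAD is in effect. The table
and column names are hypothetical, and the columns are assumed to be NOT NULL:
  UNLOAD TABLESPACE DBTEST01.TSTEST01 NOPAD
    FROM TABLE TESTID.TB1 HEADER NONE
    (COL1 POSITION(1) CHAR(8),
     COL2 POSITION(9) VARCHAR(20) TRUNCATE,
     COL3 POSITION(*) DECIMAL EXTERNAL(11,2))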
If you explicitly specify start parameters for certain fields, they must be listed in
ascending order in the field selection list. Unless you specify HEADER NONE for
the table, a fixed-length record header is placed at the beginning of each record for
the table, and the start parameter must not overlap the record header area.
The TRUNCATE option is available for certain output field types. See
“FROM-TABLE-spec” on page 604 and “Specifying TRUNCATE and STRIP options
for output data” on page 638 for more information. For the output field types where
the TRUNCATE option is not applicable, enough space must be provided in the
output record for each field. The output field layouts are summarized in
“Determining the layout of output fields.”
For information about errors that can occur at the record level due to the field
specifications, see “Interpreting field specification errors” on page 639.
To determine the layout of a fixed-length field that can be null, see the layout
diagram in Figure 106 on page 634. This diagram shows that a null indicator byte is
stored before the data field, which begins at the specified position or at the next
byte position past the end of the previous data field.
If you are running UNLOAD with the NOPAD option and need to determine the
layout of a varying-length field that cannot be null, see the layout diagram in
Figure 107. This diagram shows that a length field, which contains the actual length
of the data, is stored before the data field. The length field begins at the specified
position or at the next byte position past the end of the previous data field.
Figure 107. Layout of a varying-length field (NOT NULL) with the NOPAD option
If you are running UNLOAD without the NOPAD option and need to determine the
layout of a varying-length field that cannot be null, see the layout diagram in
Figure 108 on page 635. This diagram shows that the length field is stored before
the data field and that the padding is after the data field.
Figure 108. Layout of a varying-length field (NOT NULL) without the NOPAD option
If you are running UNLOAD with the NOPAD option and need to determine the
layout of a varying-length field that can be null, see the layout diagram in
Figure 109. This diagram shows that the null indicator is stored before the length
field, which is stored before the data field. The length field begins at the specified
position or at the next byte position past the end of the previous data field.
Figure 109. Layout of a nullable varying-length field with the NOPAD option
If you are running UNLOAD without the NOPAD option and need to determine the
layout of a varying-length field that can be null, see the layout diagram in
Figure 110 on page 636. This diagram shows that the null indicator is stored before
the length field, which is stored before the data field, which has padding at the end.
Figure 110. Layout of a nullable varying-length field without the NOPAD option
| You are responsible for ensuring that the chosen delimiters are not part of the data
| in the file. If the delimiters are part of the file’s data, unexpected errors can occur.
| The utility overrides field data-type specifications according to the
| specifications of the delimited format. (For example, length values for CHAR,
| VARCHAR, GRAPHIC, VARGRAPHIC, CLOB, DBCLOB, and BLOB data are the delimited
| lengths of each field in the output data set, and the utility unloads all
| numeric types in external format.)
| v You cannot specify a binary 0 (zero) for any delimiter.
| v No null byte is present for a delimited output file. A null value is indicated by the
| absence of a cell value where one would normally occur. For example, two
| successive column delimiters or a missing column at the end of a record indicate
| a null value.
| v You cannot use the default decimal point as a character string delimiter
| (CHARDEL) or a column string delimiter (COLDEL).
| v Shift-in and shift-out characters cannot be specified as EBCDIC MBCS
| delimiters.
| v In the DBCS environment, the pipe character ( | ) is not supported.
| v If the output is coded in ASCII or Unicode, you cannot specify any of the
| following values for any delimiter: X'0A', X'0D', X'2E'.
| v If the output is coded in EBCDIC, you cannot specify any of the following values
| for any delimiter: X'15', X'0D', X'25'.
| v If the output is coded in EBCDIC DBCS or MBCS, you cannot specify any of the
| following values for character string delimiters: X'0D', X'15', X'25', X'4B'.
| Table 132 lists by encoding scheme the default hex values for the
| delimiter characters.
| Table 132. Default delimiter values for different encoding schemes
|                               EBCDIC     EBCDIC       ASCII/Unicode  ASCII/Unicode
| Character                     SBCS       DBCS/MBCS    SBCS           MBCS
| Character string delimiter    X'7F'      X'7F'        X'22'          X'22'
| Decimal point character       X'4B'      X'4B'        X'2E'          X'2E'
| Column delimiter              X'6B'      X'6B'        X'2C'          X'2C'
| In most EBCDIC code pages, the hex values in Table 132 represent a double
| quotation mark (") for the character string delimiter, a period (.) for the
| decimal point character, and a comma (,) for the column delimiter.
| Table 133 lists by encoding scheme the maximum allowable hex values
| for any delimiter character.
| Table 133. Maximum delimiter values for different encoding schemes
| Encoding scheme Maximum allowable value
| EBCDIC SBCS None
| EBCDIC DBCS/MBCS X'3F'
| ASCII/Unicode SBCS None
| ASCII/Unicode MBCS X'7F'
|
| Table 134 on page 638 identifies the acceptable data type forms for the delimited
| file format that the LOAD and UNLOAD utilities use.
For bit strings, truncation occurs at a byte boundary. For character type data,
truncation occurs at a character boundary (a multi-byte character is not split). If
mixed-character type data is truncated in an output field of fixed size, the truncated
string can be shorter than the specified field size. In this case, the field is padded
on the right with blanks in the output CCSID. If the output data is in EBCDIC for a
mixed-character type field, truncation preserves the SO (shift-out) and the SI
(shift-in) characters around a DBCS substring.
The TRUNCATE option of the UNLOAD utility truncates string data, and it has a
different purpose than the SQL TRUNCATE scalar function.
The STRIP option removes the leading blanks, the trailing blanks, or both. The strip operation is applied to the
encoded data for output. If both the TRUNCATE and STRIP options are specified,
the truncation operation is applied first, and then strip is applied. For example, the
output for an UNLOAD job in which you specify both the TRUNCATE and STRIP
options for a VARCHAR(5) output field is shown in Table 135. In this table, an
underscore represents a character that is to be stripped. In all cases, the source
string is first truncated to '_ABC_' (a five-character string to fit in the VARCHAR(5)
field), and then the strip operation is applied.
Table 135. Results of specifying both the TRUNCATE and STRIP options for UNLOAD
Specified STRIP option   Source string   Truncated string   Output string   Output length
STRIP BOTH               '_ABC_DEF'      '_ABC_'            'ABC'           3
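As a sketch of how these options are coded together in a field list, the
VARCHAR(5) case in Table 135 corresponds to a specification of roughly the
following form (the table and column names are hypothetical):
  FROM TABLE TESTID.TB1
    (COL1 POSITION(*) VARCHAR(5) STRIP BOTH TRUNCATE)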
The generated LOAD statement includes WHEN and INTO TABLE specifications
that identify the table where the rows are to be reloaded, unless the HEADER
NONE option was specified in the UNLOAD control statement. You need to edit the
generated LOAD statement if you intend to load the UNLOAD output data into
different tables than the original ones.
If multiple table spaces are to be unloaded and you want UNLOAD to generate
LOAD statements, you must specify a physically distinct data set for each table
space to PUNCHDDN by using a template that contains the table space as a
variable (&TS.).
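A sketch of this setup pairs a LISTDEF of several table spaces with templates
that include &TS., so that both the unloaded data and the generated LOAD
statements go to a separate data set for each table space. The list, template,
and object names are hypothetical:
  LISTDEF UNLLIST INCLUDE TABLESPACE DBTEST01.TSTEST01
                  INCLUDE TABLESPACE DBTEST01.TSTEST02
  TEMPLATE PUNCHDS DSN &USERID..PUNCH.&TS.
           UNIT SYSDA DISP(NEW,CATLG,CATLG)
  TEMPLATE UNLDDS  DSN &USERID..UNLD.&TS.
           UNIT SYSDA DISP(NEW,CATLG,CATLG)
  UNLOAD LIST UNLLIST
         PUNCHDDN PUNCHDS UNLDDN UNLDDS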
If PUNCHDDN is not specified and the SYSPUNCH DD name does not exist, the
LOAD statement is not generated.
If the image copy data set is an incremental copy or a copy of pieces that does not
contain a dictionary, the FROMCOPYDDN option can be used for a DD name to
concatenate the data set with the corresponding full image copy that contains the
dictionary. If SYSTEMPAGES YES is used, a dictionary will always be available in
the incremental copies or pieces. For more information, see “FROMCOPYDDN” on
page 599.
If you specify a MAXERR value that is greater than zero, the UNLOAD utility continues processing until the total number of the
records in error reaches the specified MAXERR number. DB2 issues one message
for each record in error and does not unload the record. For information about
specific error messages, see DB2 Messages and Codes.
For instructions on restarting a utility job, see “Restarting an online utility” on page
42. When the source is one or more table spaces, you can restart the UNLOAD job
at the partition level or at the table space level when data is unloaded from multiple
table spaces by using the LIST option. When you restart a terminated UNLOAD job,
processing begins with the table spaces or partitions that had not yet been
completed. For table spaces or partitions that were being processed at termination,
UNLOAD resets the output data sets and processes those table spaces or partitions
again.
When the source is one or more image copy data sets (when FROMCOPY or
FROMCOPYDDN is specified), UNLOAD always starts processing from the
beginning.
Claims and drains: Table 136 shows which claim classes UNLOAD drains and the
restrictive states that the utility sets.
Table 136. Claim classes of UNLOAD operations
Target                                                                        UNLOAD     UNLOAD PART
Table space or physical partition of a table space with SHRLEVEL REFERENCE   DW/UTRO    DW/UTRO
Table space or physical partition of a table space with SHRLEVEL CHANGE      CR/UTRW    CR/UTRW
Image copy*                                                                   CR/UTRW    CR/UTRW
Legend:
v DW: Drain the write claim class, concurrent access for SQL readers
v UTRO: Utility restrictive state, read-only access allowed
v CR: Claim read, concurrent access for SQL writers and readers
v UTRW: Utility restrictive state; read-write access allowed
Note: * If the target object is an image copy, the UNLOAD utility applies CR/UTRW to the
corresponding table space or physical partitions to prevent the table space from being
dropped while data is being unloaded from the image copy, even though the UNLOAD utility
does not access the data in the table space.
Compatibility: The compatibility of the UNLOAD utility with other utilities on the
same target objects is shown in Table 137 on page 641. If the SHRLEVEL
REFERENCE option is specified, only SQL read operations are allowed on the
same target objects; otherwise SQL INSERT, DELETE, and UPDATE are also
allowed. If the target object is an image copy, INSERT, DELETE, and UPDATE are
always allowed on the corresponding table space. In any case, DROP or ALTER
cannot be applied to the target object while the UNLOAD utility is running.
Table 137. Compatibility of UNLOAD with other utilities
Action                                        UNLOAD SHRLEVEL   UNLOAD SHRLEVEL   UNLOAD FROM
                                              REFERENCE         CHANGE            IMAGE COPY
CHECK DATA DELETE NO                          Yes               Yes               Yes
CHECK DATA DELETE YES                         No                No                Yes
CHECK INDEX                                   Yes               Yes               Yes
CHECK LOB                                     Yes               Yes               Yes
COPY INDEXSPACE                               Yes               Yes               Yes
COPY TABLESPACE                               Yes               Yes               Yes*
DIAGNOSE                                      Yes               Yes               Yes
LOAD SHRLEVEL CHANGE                          No                Yes               Yes
LOAD SHRLEVEL NONE                            No                No                Yes
MERGECOPY                                     Yes               Yes               No
MODIFY RECOVERY                               Yes               Yes               No
MODIFY STATISTICS                             Yes               Yes               Yes
QUIESCE                                       Yes               Yes               Yes
REBUILD INDEX                                 Yes               Yes               Yes
RECOVER (no options)                          No                No                Yes
RECOVER ERROR RANGE                           No                No                Yes
RECOVER TOCOPY or TORBA                       No                No                Yes
REORG INDEX                                   Yes               Yes               Yes
REORG TABLESPACE UNLOAD CONTINUE or PAUSE     No                No                Yes
REORG TABLESPACE UNLOAD ONLY or EXTERNAL      Yes               Yes               Yes
REPAIR DUMP or VERIFY                         Yes               Yes               Yes
REPAIR LOCATE INDEX PAGE REPLACE              Yes               Yes               Yes
REPAIR LOCATE KEY or RID DELETE or REPLACE    No                No                Yes
REPAIR LOCATE TABLESPACE PAGE REPLACE         No                No                Yes
The output from this example might look similar to the following output:
000060@@STERN# 32250.00
000150@@ADAMSON# 25280.00
000200@@BROWN# 27740.00
000220@@LUTZ# 29840.00
200220@@JOHN# 29840.00
In this output:
v '@@' before the last name represents the 2-byte binary field that contains the
length of the VARCHAR field LASTNAME (for example, X'0005' for STERN).
v '#' represents the NULL indicator byte for the nullable SALARY field.
v Because the SALARY column is declared as DECIMAL (9,2) on the table, the
default output length of the SALARY field is 11 (9 digits + sign + decimal point),
not including the NULL indicator byte.
v LASTNAME is unloaded as a variable-length field because the NOPAD option is
specified.
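Assuming the DB2 sample employee table (DSN8810.EMP in table space
DSN8D81A.DSN8S81E), a control statement of roughly the following shape would
produce output with this layout; it is a sketch, not the statement that was
actually used for this example:
  UNLOAD TABLESPACE DSN8D81A.DSN8S81E NOPAD
    FROM TABLE DSN8810.EMP
    (EMPNO, LASTNAME, SALARY DECIMAL EXTERNAL)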
| Example 3: Unloading data from an image copy. The FROMCOPY option in the
| following control statement specifies that data is to be unloaded from a single image
| copy data set, JUKWU111.FCOPY1.STEP1.FCOPY1.
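A sketch of the general form of such a statement follows; the table space and
table names are placeholders rather than the names that were used in the original
example:
  UNLOAD TABLESPACE DBKW1101.TSKW1101
         FROMCOPY JUKWU111.FCOPY1.STEP1.FCOPY1
         PUNCHDDN SYSPUNCH UNLDDN SYSREC
         FROM TABLE ADMF001.TBKW1101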
| Example 5: Unloading data from two tables in a segmented table space. The
| following control statement specifies that data from table ADMF001.TBKW1504 and
| table ADMF001.TBKW1505 is to be unloaded from the segmented table space
| DBKW1502.TSKW1502. The PUNCHDDN option indicates that UNLOAD is to
| generate LOAD utility control statements and write them to the SYSPUNCH data
| set, which is the default. The UNLDDN option specifies that the data is to be
| unloaded to the data set that is defined by the SYSREC DD statement, which is
| also the default.
| UNLOAD TABLESPACE DBKW1502.TSKW1502
| PUNCHDDN SYSPUNCH UNLDDN SYSREC
| FROM TABLE ADMF001.TBKW1504
| FROM TABLE ADMF001.TBKW1505
allocated for each table space partition. For more information about TEMPLATE
control statements, see “Syntax and options of the TEMPLATE control statement”
on page 575 in the TEMPLATE chapter.
Assume that table space TDB1.TSP1, which contains table TCRT.TTBL, has three
partitions. Because the table space is partitioned and each partition is associated
with an output data set that is defined by the UNLDDS template, the UNLOAD job
runs in parallel in a multi-processor environment. The number of parallel tasks is
determined by the number of available processors.
Figure 112. Example of unloading data in parallel from a partitioned table space
Assume that the user ID is USERID. This UNLOAD job creates the following three
data sets to store the unloaded data:
v USERID.SMPLUNLD.TSP1.P00001 ... contains rows from partition 1.
v USERID.SMPLUNLD.TSP1.P00002 ... contains rows from partition 2.
v USERID.SMPLUNLD.TSP1.P00003 ... contains rows from partition 3.
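A sketch of control statements that would produce these data sets follows; the
exact TEMPLATE and PART notation should be checked against the syntax diagrams:
  TEMPLATE UNLDDS DSN &USERID..SMPLUNLD.&TS..P&PA.
           UNIT SYSDA DISP(NEW,CATLG,CATLG)
  UNLOAD TABLESPACE TDB1.TSP1 PART 1:3
         UNLDDN UNLDDS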
The data is to be unloaded to data sets that are defined by the UNLDDS template.
For more information about TEMPLATE control statements, see “Syntax and options
of the TEMPLATE control statement” on page 575 in the TEMPLATE chapter.
Figure 113. Example of using a LISTDEF utility statement to specify partitions to unload
Assume that the user ID is USERID. This UNLOAD job creates the following two
data sets to store the unloaded data:
v USERID.SMPLUNLD.TSP1.P00001 ... contains rows from partition 1.
v USERID.SMPLUNLD.TSP1.P00003 ... contains rows from partition 3.
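The same result can be sketched with a LISTDEF that uses PARTLEVEL to include
only partitions 1 and 3; the list name is an assumption, and the template is the
same as in the previous sketch:
  LISTDEF SMPLLIST INCLUDE TABLESPACE TDB1.TSP1 PARTLEVEL(1)
                   INCLUDE TABLESPACE TDB1.TSP1 PARTLEVEL(3)
  TEMPLATE UNLDDS DSN &USERID..SMPLUNLD.&TS..P&PA.
           UNIT SYSDA DISP(NEW,CATLG,CATLG)
  UNLOAD LIST SMPLLIST
         UNLDDN UNLDDS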
The UNLDDN option specifies that the data is to be unloaded to data sets that are
defined by the UNLDDS template. The PUNCHDDN option specifies that UNLOAD
is to generate LOAD utility control statements and write them to the data sets that
are defined by the PUNCHDS template. For more information about TEMPLATE
control statements, see “Syntax and options of the TEMPLATE control statement”
on page 575 in the TEMPLATE chapter.
Assume that the user ID is USERID. This UNLOAD job creates the following two
data sets to store the unloaded data:
| The column delimiter is specified by the COLDEL option as a semicolon (;), the
| character string delimiter is specified by the CHARDEL option as a pound sign (#),
| and the decimal point character is specified by the DECPT option as an
| exclamation point (!).
The EBCDIC option indicates that all output character data is to be in EBCDIC.
| //*
| //STEP3 EXEC DSNUPROC,UID=’JUQBU105.UNLD1’,
| // UTPROC='',
| // SYSTEM='SSTR'
| //UTPRINT DD SYSOUT=*
| //SYSREC DD DSN=JUQBU105.UNLD1.STEP3.TBQB0501,DISP=(MOD,DELETE,CATLG),
| // UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SYSPUNCH DD DSN=JUQBU105.UNLD1.STEP3.SYSPUNCH,
| // DISP=(MOD,CATLG,CATLG),
| // UNIT=SYSDA,SPACE=(4000,(20,20),,,ROUND)
| //SYSIN DD *
| UNLOAD TABLESPACE DBQB0501.TSQB0501
| DELIMITED CHARDEL '#' COLDEL ';' DECPT '!'
| PUNCHDDN SYSPUNCH
| UNLDDN SYSREC EBCDIC
| FROM TABLE ADMF001.TBQB0501
| (RECID POSITION(*) CHAR,
| CHAR7SBCS POSITION(*) CHAR,
| CHAR7SBIT POSITION(*) CHAR(7),
| VCHAR20 POSITION(*) VARCHAR,
| VCHAR20SBCS POSITION(*) VARCHAR,
| VCHAR20BIT POSITION(*) VARCHAR)
| /*
Example 10: Converting character data. For this example, assume that table
DSN8810.DEMO_UNICODE contains character data in Unicode. The UNLOAD
control statement in Figure 116 on page 647 specifies that the utility is to unload the
data in this table as EBCDIC data.
Utility control statements and parameters define the function that a utility job
performs. Some stand-alone utilities read the control statements from an input
stream, and others obtain the function definitions from JCL EXEC PARM
parameters.
The following utilities read control statements from the input stream file of the
specified DD name:
Utility                              DD name
DSNJU003 (change log inventory)      SYSIN
DSNJU004 (print log map)             SYSIN (optional)
DSN1LOGP                             SYSIN
DSN1SDMP                             SDMPIN
Utility control statements are read from the DD name input stream. The statements
in that stream must conform to these rules:
v The logical record length (LRECL) must be 80 characters. Columns 73 through
80 are ignored.
v The records are concatenated into a single stream before they are parsed. No
concatenation character is necessary.
v The SYSIN stream can contain multiple utility control statements.
The parameters that you specify must obey these OS/390 JCL EXEC PARM
parameter specification rules:
v Enclose multiple subparameters in single quotation marks or parentheses and
separate the subparameters with commas, as in the following example:
//name EXEC PARM=’ABC,...,XYZ’
v Do not let the total length exceed 100 characters.
v Do not use blanks within the parameter specification.
To specify the parameter across multiple lines:
1. Enclose it in parentheses.
2. End the first line with a subparameter, followed by a comma.
3. Continue the subparameters on the next line, beginning before column 17.
The following example shows a parameter that spans multiple lines:
//stepname EXEC PARM=(ABC,...LMN,
OPQ,...,XYZ)
Environment
Execute the DSNJCNVB utility as a batch job only when DB2 is not running.
Authorization required
The authorization ID of the DSNJCNVB job must have the requisite RACF
authorization.
Prerequisite actions
If you have migrated to a new version of DB2, you need to create a larger BSDS
before converting it. See the DB2 Installation Guide for instructions on how to
create a larger BSDS. For a new installation, you do not need to create a larger
BSDS. DB2 provides a larger BSDS definition in installation job DSNTIJIN;
however, if you want to convert the BSDS, you must still run DSNJCNVB.
Control statement
See “Sample DSNJCNVB control statement” on page 656 for an example of using
DSNJCNVB to convert the BSDS.
Running DSNJCNVB
Use the following EXEC statement to execute this utility:
// EXEC PGM=DSNJCNVB
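A minimal job sketch follows. The data set names are taken from the sample output
below; the use of SYSUT2 for the second BSDS copy and the STEPLIB library name are
assumptions:
//CNVB     EXEC PGM=DSNJCNVB
//STEPLIB  DD DSN=DSN810.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC810.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DSNC810.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*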
DSNJCNVB output
The following example shows sample DSNJCNVB output:
CONVERSION OF BSDS DATA SET - COPY 1, DSN=DSNC810.BSDS01
SYSTEM TIMESTAMP - DATE=2003.199 LTIME= 9:40:58.74
UTILITY TIMESTAMP - DATE=2003.216 LTIME=14:26:02.21
PREVIOUS HIKEY - 04000053
NEW HIKEY - 040002F0
RECORDS ADDED - 669
DSNJ260I DSNJCNVB BSDS CONVERSION FOR DDNAME=SYSUT1 COMPLETED SUCCESSFULLY
DSNJ200I DSNJCNVB CONVERT BSDS UTILITY PROCESSING COMPLETED SUCCESSFULLY
Environment
Run DSNJLOGF as a z/OS job.
Control statement
See “Sample DSNJLOGF control statement” for an example of using DSNJLOGF to
preformat the active log data sets.
//JOBLIB DD DSN=DSN810.SDSNLOAD,DISP=SHR
//STEP1 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC810.LOGCOPY1.DS01,DISP=SHR
//STEP2 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC810.LOGCOPY1.DS02,DISP=SHR
//STEP3 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC810.LOGCOPY2.DS01,DISP=SHR
//STEP4 EXEC PGM=DSNJLOGF
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSUT1 DD DSN=DSNC810.LOGCOPY2.DS02,DISP=SHR
DSNJLOGF output
The following sample shows the DSNJLOGF output for the first data set in the
sample control statement in Figure 117.
DSNJ991I DSNJLOGF START OF LOG DATASET PREFORMAT FOR JOB LOGFRMT STEP1
DSNJ992I DSNJLOGF LOG DATA SET NAME = DSNC810.LOGCOPY1.DS01
DSNJ996I DSNJLOGF LOG PREFORMAT COMPLETED SUCCESSFULLY, 00015000
RECORDS FORMATTED
NEWLOG statement
The NEWLOG syntax diagram shows the following keywords: DSNAME=data-set-name,
COPY1, COPY2, COPY1VOL=vol-id, COPY2VOL=vol-id, STARTRBA=startrba,
ENDRBA=endrba, STRTLRSN=startlrsn, ENDLRSN=endlrsn, UNIT=unit-id, and
CATALOG=NO|YES.
DELETE statement
The DELETE syntax diagram shows the following keywords: DSNAME=data-set-name,
COPY1VOL=vol-id, and COPY2VOL=vol-id.
CCSIDS statement
CRESTART statement
The CRESTART create-spec shows the following keywords: STARTRBA=startrba,
ENDRBA=endrba, CHKPTRBA=chkptrba, ENDLRSN=endlrsn,
SYSPITR=log-truncation-point, FORWARD=YES|NO, BACKOUT=YES|NO, and CSRONLY.
NEWCAT statement
NEWCAT VSAMCAT=catalog-name
DDF statement
The DDF syntax diagram shows the following keywords: LOCATION=locname,
ALIAS=alias-name:alias-port, NOALIAS, LUNAME=luname, PASSWORD=password,
NOPASSWD, GENERIC=gluname, NGENERIC, PORT=port, and RESPORT=resport.
CHECKPT statement
HIGHRBA statement
Option descriptions
“Creating utility control statements” on page 653 provides general information about
specifying options for DB2 utilities.
NEWLOG Declares one of the following data sets:
v A VSAM data set that is available for use as an active log data
set.
Use only the keywords DSNAME=, COPY1, and COPY2.
YES Indicates that the archive log data set is to be cataloged. All
subsequent allocations of the data set are made using the
catalog.
DB2 requires that all archive log data sets on disk be
cataloged. Select CATALOG=YES if the archive log data set
is on disk.
STRTLRSN=startlrsn
On the NEWLOG statement, identifies the LRSN in the log record
header of the first complete log record on the new archive data set.
startlrsn is a hexadecimal number of up to 12 characters. If you use
fewer than 12 characters, leading zeros are added. In a data
sharing environment, run the print log map utility to find an archive
log data set and start and end RBAs and LRSNs.
ENDLRSN=endlrsn
endlrsn is a hexadecimal number of up to 12 characters. If you use
fewer than 12 characters, leading zeros are added. In a data
sharing environment, run the print log map utility to find an archive
log data set and start and end RBAs and LRSNs.
| For the NEWLOG and CHECKPT statements, the ENDLRSN option
| is valid only in a data sharing environment. For the CRESTART
| statement, the ENDLRSN option is valid in both data sharing and
| non-data sharing environments. This option cannot be specified with
| STARTRBA or ENDRBA.
On the NEWLOG statement, endlrsn is the LRSN in the log record
header of the last log record on the new archive data set.
| On the CRESTART statement, in a data sharing environment,
| endlrsn is an LRSN value that is to be used as the log truncation
| point. A valid log truncation point is any LRSN value for which there
| exists a log record with an LRSN that is greater than or equal to the
| specified LRSN value. Any log information in the bootstrap data set,
| the active logs, and the archive logs with an LRSN greater than
| endlrsn is discarded. If you omit ENDLRSN, DB2 determines the
| end of the log range.
| In a non-data sharing environment, endlrsn is the RBA value that
| matches the start of the last log record that is to be used during
| restart. Any log information in the bootstrap data set, the active
| logs, and the archive logs with an RBA that is greater than endlrsn
| is discarded. If the endlrsn RBA value does not match the start of a
| log record, DB2 restart fails. If you omit ENDLRSN, DB2
| determines the end of the log range.
On the CHECKPT statement, endlrsn is the LRSN of the end
checkpoint log record.
STARTIME=startime
Enables you to record the start time of the RBA in the BSDS. This
field is optional.
startime specifies the start time in the following timestamp format:
yyyydddhhmmsst
In this format:
yyyy Indicates the year (1989-2099).
ddd Indicates the day of the year (0-365; 366 in leap years).
| You cannot specify any other option with CREATE, SYSPITR. You
| can run this option of the utility only after new-function mode is
| enabled.
CANCEL On the CRESTART statement, deactivates the currently active
conditional restart control record. The record remains in the BSDS
as historical information.
No other keyword can be used with CANCEL on the CRESTART
statement.
On the CHECKPT statement, deletes the checkpoint queue entry
that contains a starting RBA that matches the parameter that is
specified by the STARTRBA keyword.
| be different than the values for the PORT and RESPORT options.
| Specify a value for alias-port when you want to identify a subset of
| data sharing members to which a distributed request can go. For
| more information about member-specific access, see DB2 Data
| Sharing: Planning and Administration.
| You can add or replace aliases by respecifying the ALIAS option.
| The new list of names replaces the existing list.
| NOALIAS Indicates that no alias names exist for the specified location. Any
| alias names that were specified in a previous DSNJU003 utility job
| are removed.
| You cannot specify any other keyword with NOALIAS.
LUNAME=luname
Changes the LUNAME value in the BSDS.
luname specifies the LUNAME value. The LUNAME in the BSDS
must always contain the value that identifies your local DB2
subsystem to the VTAM network.
PASSWORD= The DDF password follows VTAM convention, but DB2 restricts it to
one to eight alphanumeric characters. The first character must be
either a capital letter or an alphabetic extender. The remaining
characters can consist of alphanumeric characters and alphabetic
extenders.
password Optionally assigns a password to the distributed
data facility communication record that establishes
communications for a distributed data environment.
See VTAM for MVS/ESA Resource Definition
Reference for a description of the
PRTCT=password option on the APPL definition
statement that is used to define DB2 to VTAM.
NOPASSWD Removes the archive password protection for all archives that are
created after this operation. It also removes a previously existing
password from the DDF record. No other keyword can be used with
NOPASSWD.
GENERIC=gluname
Replaces the value of the DB2 GENERIC LUNAME subsystem
parameter in the BSDS.
gluname specifies the GENERIC LUNAME value.
NGENERIC Changes the DB2 GENERIC LUNAME to binary zeros in the BSDS,
indicating that no VTAM generic LU name support is requested.
PORT Identifies the TCP/IP port number that is used by DDF to accept
incoming connection requests. This value must be a decimal
number between 0 and 65535, including 65535; zero indicates that
DDF’s TCP/IP support is to be deactivated.
If DB2 is part of a data sharing group, all the members of the DB2
data sharing group must have the same value for PORT.
RESPORT Identifies the TCP/IP port number that is used by DDF to accept
incoming DRDA two-phase commit resynchronization requests. This
value must be a decimal number between 0 and 65535, including
65535; zero indicates that DDF’s TCP/IP support is to be deactivated.
Environment
Execute the change log inventory utility only as a batch job when DB2 is not
running. Changing a BSDS for a data-sharing member by using DSNJU003 might
cause a log read request from another data-sharing member to fail. The failure
occurs only if the second member tries to access the changed BSDS before the
first member is started.
Authorization required
The authorization ID of the DSNJU003 job must have the requisite RACF
authorization.
Control statement
See “Syntax and options of the DSNJU003 control statement” on page 659 for
DSNJU003 syntax and option descriptions.
Optional statements
The change log inventory utility provides the following statements:
v NEWLOG
v DELETE
v SYSTEMDB
v CRESTART
v NEWCAT
v DDF
v CHECKPT
v HIGHRBA
You can specify any statement one or more times. In each statement, separate the
operation name from the first parameter by one or more blanks. You can use
parameters in any order; separate them by commas with no blanks. Do not split a
parameter description across two SYSIN records.
If DSNJU003 encounters a statement that contains an error, the remaining
statements are checked for syntax errors only. Therefore, BSDS updates are not
made for any operation that is specified in the statement in error or in any
subsequent statements.
Using DSNJU003
This section describes the following tasks that are associated with running the
DSNJU003 utility:
“Running DSNJU003”
“Making changes for active logs”
“Making changes for archive logs” on page 673
“Creating a conditional restart control record” on page 673
“Deleting log data sets with errors” on page 673
“Altering references to NEWLOG and DELETE data sets” on page 674
“Specifying the NEWCAT statement” on page 674
“Renaming DB2 system data sets” on page 675
“Renaming DB2 active log data sets” on page 675
“Renaming DB2 archive log data sets” on page 676
Running DSNJU003
Execute the utility with the following statement, which can be included only in a
batch job:
// EXEC PGM=DSNJU003
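A complete job is sketched below; the library name prefix.SDSNLOAD and the
BSDS and log data set names are illustrative (they follow the examples later in
this section), and the single NEWLOG statement is only a placeholder for the
change log inventory statements that you need:
//DSNJU3 EXEC PGM=DSNJU003
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1 DD DSN=DSNC810.BSDS01,DISP=OLD
//SYSUT2 DD DSN=DSNC810.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 NEWLOG DSNAME=DSNC810.LOGCOPY2.DS05,COPY2
/*
SYSUT1 and SYSUT2 identify the two copies of the BSDS that are to be updated;
the SYSIN data set contains one or more of the statements that are described in
this chapter.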
To copy the contents of an old active log data set to the new one, you can also give
the RBA range and the starting and ending timestamp on the NEWLOG statement.
| To archive to disk when the size of your active logs has increased, you might find it
| necessary to increase the size of your archive data set primary and secondary
| space quantities in DSNZPARM.
Deleting: To delete information about an active log data set from the BSDS, you
might specify the following statements:
DELETE DSNAME=DSNC810.LOGCOPY1.DS01
DELETE DSNAME=DSNC810.LOGCOPY2.DS01
Recording: To record information about an existing active log data set in the
BSDS, you might specify the following statement:
NEWLOG DSNAME=DSNC810.LOGCOPY2.DS05,COPY2,STARTIME=19910212205198,
ENDTIME=19910412205200,STARTRBA=43F8000,ENDRBA=65F3FFF
You can insert a record of that information into the BSDS for any of several reasons.
Enlarging: When DB2 is inactive (down), use one of the following procedures.
If you can use the Access Method Services REPRO command, follow these steps:
1. Stop DB2. This step is required because DB2 allocates all active log data sets
when it is active.
2. Use the Access Method Services ALTER command with the NEWNAME option
to rename your active log data sets.
3. Use the Access Method Services DEFINE command to define larger active log
data sets. Refer to installation job DSNTIJIN to see the definitions that create
the original active log data sets. See DB2 Installation Guide.
By reusing the old data set names, you don’t need to run the change log
inventory utility to establish new names in the BSDSs. The old data set names
and the correct RBA ranges are already in the BSDSs.
4. Use the Access Method Services REPRO command to copy the old (renamed)
data sets into their respective new data sets.
5. Start DB2.
If you cannot use the Access Method Services REPRO command, follow this
procedure:
1. Ensure that all active log data sets except the current active log data sets have
been archived. Active log data sets that have been archived are marked
REUSABLE in print log map utility (DSNJU004) output.
2. Stop DB2.
3. Rename or delete the reusable active logs. Allocate new, larger active log data
sets with the same names as the old active log data sets.
4. Run the DSNJLOGF utility to preformat the new log data sets.
5. Run the change log inventory utility (DSNJU003) with the DELETE statement to
delete all active logs except the current active logs from the BSDS.
6. Run the change log inventory utility with the NEWLOG statement to add to the
BSDS the active logs that you just deleted. So that the logs are added as
empty, do not specify an RBA range.
7. Start DB2.
8. Issue the ARCHIVE LOG command to cause DB2 to truncate the current active
logs and switch to one of the new sets of active logs.
9. Repeat steps 2 through 7 to enlarge the active logs that were just archived.
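For example, for one pair of active log data sets, the SYSIN input for steps 5
and 6 might look like the following sketch; the data set names are illustrative:
DELETE DSNAME=DSNC810.LOGCOPY1.DS01
DELETE DSNAME=DSNC810.LOGCOPY2.DS01
NEWLOG DSNAME=DSNC810.LOGCOPY1.DS01,COPY1
NEWLOG DSNAME=DSNC810.LOGCOPY2.DS01,COPY2
Because no RBA range is specified on the NEWLOG statements, the data sets are
added to the BSDS as empty.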
Although all log data sets do not need to be the same size, from an operational
standpoint using the same size is more consistent and efficient. If the log data sets
are not the same size, tracking your system’s logs can be more difficult. Space can
be wasted if you are using dual data sets of different sizes because they fill only to
the size of the smallest, not using the remaining space on the larger one.
If you are archiving to disk and the size of your active logs has increased, you
might need to increase the size of your archive log data sets. However, because of
DFSMS disk management limits, you must specify less than 64 000 tracks for the
primary space quantity. See the PRIMARY QUANTITY and SECONDARY QTY
fields on installation panel DSNTIPA to modify the primary and secondary allocation
space quantities. See DB2 Installation Guide for more information.
Deleting: To delete an entire archive log data set from one or more volumes, you
might specify the following statement:
DELETE DSNAME=DSNC810.ARCHLOG1.D89021.T2205197.A0000015,COPY1VOL=DSNV04
To specify a cold start, make the values of STARTRBA and ENDRBA equal with a
statement similar to the following statement:
CRESTART CREATE,STARTRBA=4A000,ENDRBA=4A000
In most cases when doing a cold start, you should make sure that the STARTRBA
and ENDRBA are set to an RBA value that is greater than the highest used RBA.
An existing conditional restart control record governs any START DB2 operation
until one of these events occurs:
v A restart operation completes.
v A CRESTART CANCEL statement is issued.
v A new conditional restart control record is created.
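For example, to deactivate the currently active conditional restart control record
without replacing it, you might specify the following statement:
CRESTART CANCEL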
d. Specify NEWLOG to identify the new data set as the new active log. The
DELETE and NEWLOG operations can be performed by the same job step.
(The DELETE statement precedes the NEWLOG statement in the SYSIN
input data set.)
3. Delete the bad data set, using VSAM Access Method Services.
Use the print log map utility before and after running the change log inventory utility
to ensure correct execution and to document changes.
When using dual active logs, choose a naming convention that distinguishes
primary and secondary active log data sets. The naming convention should also
identify the log data sets within the series of primary or secondary active log data
sets. For example, the default naming convention that is established at DB2
installation time is:
prefix.LOGCOPYn.DSmm
In this convention, n=1 for all primary log data sets, n=2 for all secondary log data
sets, and mm is the data set number within each series.
If a naming convention such as the default convention is used, pairs of data sets
with equal mm values are usually used together. For example,
DSNC120.LOGCOPY1.DS02 and DSNC120.LOGCOPY2.DS02 are used together.
However, after you run the change log inventory utility with the DELETE and
NEWLOG statements, the primary and secondary series can become
unsynchronized, even if the NEWLOG data set name that you specify is the same
as the old data set name. To avoid this situation, always do maintenance on both
data sets of a pair in the same change log inventory execution:
v Delete both data sets together.
v Define both data sets together with NEWLOG statements.
To ensure consistent results, execute the change log inventory utility on the same
z/OS system on which the DB2 online subsystem executes.
If misused, the change log inventory utility can compromise the viability and integrity
of the DB2 subsystem. Only highly skilled people, such as the DB2 system
administrator, should use this utility, and then only after careful consideration.
Before initiating a conditional restart or cold restart, you should consider making
backup copies of all disk volumes that contain any DB2 data sets. This enables a
possible fallback. The backup data sets must be generated when DB2 is not active.
At startup, the DB2 system checks that the name that is recorded with NEWCAT in
the BSDS is the high-level qualifier of the DB2 system table spaces that are defined
in the load module for subsystem parameters.
NEWCAT is normally used only at installation time. See “Renaming DB2 system
data sets” for an additional function of NEWCAT.
When you change the high-level qualifier by using the NEWCAT statement, you
might specify the following statements:
//S2 EXEC PGM=DSNJU003
//SYSUT1 DD DSN=DSNC120.BSDS01,DISP=OLD
//SYSUT2 DD DSN=DSNC120.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 NEWCAT VSAMCAT=DBP1
/*
After you run the change log inventory utility with the NEWCAT statement, the utility
generates output similar to the following output:
| NEWCAT VSAMCAT=DBP1
| DSNJ210I OLD VSAM CATALOG NAME=DSNC120, NEW CATALOG NAME=DBP1
| DSNJ225I NEWCAT OPERATION COMPLETED SUCCESSFULLY
| DSNJ200I DSNJU003 CHANGE LOG INVENTORY UTILITY
| PROCESSING COMPLETED SUCCESSFULLY
To modify the high-level qualifier for archive log data sets, you need to reassemble
DSNZPARM.
Example 2: Deleting a data set. The following control statement specifies that
DSNJU003 is to delete data set DSNREPAL.A0001187 from the BSDS. The volume
serial number for the data set is DSNV04, as indicated by the COPY1VOL option.
DELETE DSNAME=DSNREPAL.A0001187,COPY1VOL=DSNV04
| Example 6: Adding multiple aliases and alias ports to the BSDS. The following
control statement specifies five alias names for the communication record in the
BSDS (MYALIAS1, MYALIAS2, MYALIAS3, MYALIAS4, and MYALIAS5). Only
MYALIAS2 and MYALIAS5 support subsets of a data sharing group. Any alias
names that were specified in a previous DSNJU003 utility job are removed.
| DDF ALIAS=MYALIAS1,MYALIAS2:8002,MYALIAS3,MYALIAS4,MYALIAS5:10001
| Example 7: Specifying a point in time for system recovery. The following control
| statement specifies that DSNJU003 is to create a new conditional restart control
| record. The SYSPITR option specifies an end RBA value as the point in time for
| system recovery for a non-data sharing system. For a data sharing system, use an
| end LRSN value instead of an end RBA value. This point in time is used by the
| RESTORE SYSTEM utility.
| //JOBLIB DD DSN=USER.TESTLIB,DISP=SHR
| // DD DSN=DSN810.SDSNLOAD,DISP=SHR
| //STEP01 EXEC PGM=DSNJU003
| //SYSUT1 DD DSN=DSNC810.BSDS01,DISP=OLD
| //SYSUT2 DD DSN=DSNC810.BSDS02,DISP=OLD
| //SYSPRINT DD SYSOUT=*
| //SYSIN DD *
| CRESTART CREATE,SYSPITR=04891665D000
| /*
Only use the CHECKPT statement if you have a good understanding of conditional
restart and checkpoint processing.
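For example, a sketch of a CHECKPT statement that removes one entry from the
checkpoint queue; the STARTRBA value is illustrative and must match the BEGIN
CHECKPOINT RBA of the entry that you want to delete:
CHECKPT STARTRBA=00000B2E33E5,CANCEL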
In a data sharing environment, the DSNJU004 utility can list information from any or
all BSDSs of a data sharing group.
MEMBER *
MEMBER DDNAME
MEMBER(member-name,...)
Option descriptions
The following keywords can be used in an optional control statement on the SYSIN
data set:
MEMBER
Specifies which member’s BSDS information to print.
* Prints the information from the BSDS of each member in
the data sharing group.
DDNAME Prints information from only those BSDSs that are pointed
to by the MxxBSDS DD statements.
(member-name)
Prints information for only the named group members.
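As a sketch, the optional SYSIN control statement might contain one of the
following specifications; the member names V81A and V81B are hypothetical:
MEMBER *
MEMBER(V81A,V81B)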
Environment
The DSNJU004 program runs as a batch job.
This utility can be executed when DB2 is running or when it is not running.
However, to ensure consistent results from the utility job, the utility and the DB2
online subsystem must both be executing under the control of the same operating
system.
Authorization required
The user ID of the DSNJU004 job must have requisite RACF authorization.
Control statement
See “DSNJU004 (print log map) syntax diagram” on page 679 for DSNJU004
syntax and option descriptions. See “Sample DSNJU004 control statement” on page
681 for an example of a control statement.
Recommendations
v For dual BSDSs, execute the print log map utility twice, once for each BSDS, to
compare their contents.
v To ensure consistent results for this utility, execute the utility job on the same
z/OS system on which the DB2 online subsystem executes.
v Execute the print log map utility regularly, possibly daily, to keep a record of
recovery log data set usage.
v Use the print log map utility to document changes that are made by the change
log inventory utility.
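A minimal DSNJU004 job is sketched below; the STEPLIB library and the BSDS
data set name are illustrative:
//PRTMAP EXEC PGM=DSNJU004
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1 DD DSN=DSNC810.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*
For dual BSDSs, run the job a second time with SYSUT1 pointing to the other
BSDS copy and compare the two reports.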
| The sample print log map utility output in Figure 118 is for a non-data-sharing
| subsystem.
|
| ****************************************************************************************
| * *
| * LOG MAP OF THE BSDS DATA SET BELONGING TO MEMBER ’NO NAME ’ OF GROUP ’NO NAME ’. *
| * *
| ****************************************************************************************
| DSNJCNVB CONVERSION PROGRAM HAS NOT RUN DDNAME=SYSUT1
| LOG MAP OF BSDS DATA SET COPY 1, DSN=DSNC810.BSDS01
| LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
| DATA SHARING MODE IS OFF
| SYSTEM TIMESTAMP - DATE=2003.346 LTIME= 8:36:35.35
| UTILITY TIMESTAMP - DATE=2003.346 LTIME= 8:18:10.10
| VSAM CATALOG NAME=DSNC810
| HIGHEST RBA WRITTEN 000004FD1B40 2003.346 16:36:26.1
| HIGHEST RBA OFFLOADED 0000031B1FFF
| RBA WHEN CONVERTED TO V4 000000000000
| THIS BSDS HAS MEMBER RECORDS FOR THE FOLLOWING MEMBERS:
| HOST MEMBER NAME:
| MEMBER ID: 0
| GROUP NAME:
| BSDS COPY 1 DATA SET NAME:
| BSDS COPY 2 DATA SET NAME:
| ENFM START RBA/LRSN: 0000035B8D5C
| ACTIVE LOG COPY 1 DATA SETS
| START RBA/TIME END RBA/TIME DATE LTIME DATA SET INFORMATION
| -------------------- -------------------- -------- ----- --------------------
| 0000048C4000 000004C47FFF 2001.045 14:39 DSN=DSNC810.LOGCOPY1.DS02
| 2003.346 16:33:11.8 2003.346 16:34:09.4 PASSWORD=(NULL) STATUS=REUSABLE
| 000004C48000 000004FC8FFF 2001.045 14:39 DSN=DSNC810.LOGCOPY1.DS03
| 2003.346 16:34:09.4 2003.346 16:35:47.8 PASSWORD=(NULL) STATUS=TRUNCATED, REUSABLE
| 000004FC9000 00000534CFFF 2001.045 14:39 DSN=DSNC810.LOGCOPY1.DS01
| 2003.346 16:35:47.8 ........ .......... PASSWORD=(NULL) STATUS=REUSABLE
|
| Figure 118. Sample print log map utility output for a non-data-sharing subsystem
|
| The sample print log map utility output in Figure 119 is for a member of a data
| sharing group.
|
| *****************************************************************************************
| * *
| * LOG MAP OF THE BSDS DATA SET BELONGING TO MEMBER ’V81A ’ OF GROUP ’DSNCAT ’. *
| * *
| *****************************************************************************************
| DSNJCNVB CONVERSION PROGRAM HAS NOT RUN DDNAME=SYSUT1
| LOG MAP OF BSDS DATA SET COPY 1, DSN=DSNC810.BSDS01
| LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
| DATA SHARING MODE IS ON
| SYSTEM TIMESTAMP - DATE=2003.346 LTIME=10:45:53.34
| UTILITY TIMESTAMP - DATE=2003.346 LTIME= 8:52:00.63
| VSAM CATALOG NAME=DSNC810
| HIGHEST RBA WRITTEN 0000068F92DE 2003.346 18:45:22.8
| HIGHEST RBA OFFLOADED 000000000000
| RBA WHEN CONVERTED TO V4 000001251868
| MAX RBA FOR TORBA 000001251868
| MIN RBA FOR TORBA 000000000000
| STCK TO LRSN DELTA 000000000000
|
| Figure 119. Sample print log map utility output for a member of a data sharing group
|
Timestamps in the output column LTIME are in local time. All other timestamps are
in Greenwich Mean Time (GMT).
Figure 118 on page 682 and Figure 119 on page 684 show example output from the
print log map utility. The following timestamps are included in the header section of
the reports:
System timestamp Reflects the date and time that the BSDS was last
updated. The BSDS can be updated by several
events:
v DB2 startup.
v During log write activities, whenever the write
threshold is reached.
Depending on the number of output buffers that
you have specified and the system activity rate,
the BSDS might be updated several times a
second, or it might not be updated for several
seconds, minutes, or even hours.
v Due to an error, DB2 might drop into
single-BSDS mode from its normal dual BSDS
mode. This action might occur when a request to
get, insert, point to, update, or delete a BSDS record is unsuccessful.
The following timestamps are included in the active and archive log data sets
portion of the reports:
Active log date The date on which the active log data set was
originally allocated on the DB2 subsystem.
Active log time The time at which the active log data set was
originally allocated on the DB2 subsystem.
Archive log date The date of creation (not allocation) of the archive
log data set.
Archive log time The time of creation (not allocation) of the archive
log data set.
The following timestamps are included in the conditional restart control record
portion of the report that is shown in Figure 123 on page 690:
Conditional restart control record
The current time and date. This data is reported for
information only and is not kept in the BSDS.
CRCR created The time and date of creation of the CRCR by the
CRESTART option in the change log inventory
utility.
Begin restart The time and date that the conditional restart was
attempted.
End restart The time and date that the conditional restart
ended.
STARTRBA (timestamp) The time at which the control interval was written.
ENDRBA (timestamp) The time at which the last control interval was
written.
Time of checkpoint The time and date that are associated with the
checkpoint record that was used during the
conditional restart process.
The following timestamps are included in the checkpoint queue and the DDF
communication record sections of the report that is shown in Figure 122 on page
689:
Checkpoint queue The current time and date. This data is reported for
information only and is not kept in the BSDS.
Time of checkpoint The time and date that the checkpoint was taken.
DDF communication record (heading)
The current time and date. This data is reported for
information only, and is not kept in the BSDS.
The status value for each active log data set is displayed in the print log map utility
output. The sample print log map output in Figure 120 shows how the status is
displayed.
Figure 120. Portion of print log map utility output that shows active log data set status
|
| Figure 121. Portion of print log map utility output that shows archive log command history
|
The values in the TIME column of the ARCHIVE LOG COMMAND HISTORY
section of the report in Figure 121 represent the time that the ARCHIVE LOG
command was issued. This time value is saved in the BSDS and is converted to
printable format at the time that the print log map utility is run. Therefore, this
value, when printed, can differ from other time values that were recorded
concurrently. Some time values are converted to printable format when they are
recorded and are then saved in the BSDS; those values remain the same each
time the report is printed.
CHECKPOINT QUEUE
15:54:57 FEBRUARY 04, 2003
TIME OF CHECKPOINT 15:54:37 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 0000400000EC
END CHECKPOINT RBA 00004000229A
TIME OF CHECKPOINT 15:53:44 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 00000B39E1EC
END CHECKPOINT RBA 00000B3A80A6
SHUTDOWN CHECKPOINT
TIME OF CHECKPOINT 15:49:40 FEBRUARY 04, 2003
BEGIN CHECKPOINT RBA 00000B2E33E5
END CHECKPOINT RBA 00000B2E9C88
...
TIME OF CHECKPOINT 21:06:01 FEBRUARY 03, 2003
BEGIN CHECKPOINT RBA 00000A7AA19C
END CHECKPOINT RBA 00000A82C998
Use DSN1CHKR on a regular basis to promptly detect any damage to the catalog
and directory.
PARM= one or more of the following options, separated by commas:
  DUMP | FORMAT
  HASH(hexadecimal-constant,...)
  RID(integer,hexadecimal-constant,...)
  HASH(hexadecimal-constant,integer,...)
  PAGE(integer,hexadecimal-constant,...)
Option descriptions
The following parameters are optional. Specify parameters on the EXEC statement
in any order after the required JCL parameter PARM=. If you specify more than one
parameter, separate them with commas but no blanks. If you do not specify any
parameters, DSN1CHKR scans all table space pages for broken links and for
records that are not part of any link or chain, and prints the appropriate diagnostic
messages.
DUMP Specifies that printed table space pages, if any, are to be in dump
format. If you specify DUMP, you cannot specify the FORMAT
parameter.
FORMAT Specifies that printed table space pages, if any, are to be formatted
on output. If you specify FORMAT, you cannot specify the DUMP
parameter.
HASH(hexadecimal-constant, ...)
Specifies a hash value for a hexadecimal database identifier (DBID).
DSN1CHKR is a diagnosis tool; it executes outside the control of DB2. You should
have detailed knowledge of DB2 data structures to make proper use of this service
aid.
Environment
Run the DSN1CHKR program as a z/OS job.
Do not run DSN1CHKR on a table space while it is active under DB2. While
DSN1CHKR runs, do not run other database operations for the database and table
space that are to be checked. Use the STOP DATABASE command for the
database and table space that are to be checked.
Authorization required
This utility does not require authorization. However, if RACF protects any of the
data sets, the authorization ID must also have the necessary RACF authority.
Control statement
Create the utility control statement for the DSN1CHKR job. See “Syntax and options
of the DSN1CHKR control statement” on page 691 for DSN1CHKR syntax and
option descriptions.
Required data sets: DSN1CHKR uses two data definition (DD) statements. Specify
the data set for the utility’s output with the SYSPRINT DD statement. Specify the
first data set piece of the table space that is to be checked with the SYSUT1 DD
statement.
SYSPRINT Defines the data set that contains output messages from the
DSN1CHKR program and all hexadecimal dump output.
SYSUT1 Defines the input data set. This data set can be a DB2 data set or a
copy that is created by the DSN1COPY utility. Specify disposition of
this data set as DISP=OLD to ensure that it is not in use by DB2.
Set the data set’s disposition as DISP=SHR only when the STOP
DATABASE command has stopped the table space you want to
check.
Restrictions
This section contains restrictions that you should be aware of before running
DSN1CHKR.
DSN1CHKR does not use full image copies that are created with the COPY utility. If
you create a full image copy with SHRLEVEL REFERENCE, you can copy it into a
VSAM data set with DSN1COPY and check it with DSN1CHKR.
DSN1CHKR cannot use full image copies that are created with DFSMSdss
concurrent copy. The DFSMSdss data set does not copy to a VSAM data set
because of incompatible formats.
Recommendation: First copy the stopped table space to a temporary data set by
using DSN1COPY. Use the DB2 naming convention for the copied data set. Run
DSN1CHKR on the copy; this frees the actual table space to be started again in DB2.
When you run DSN1COPY, use the CHECK option to examine the table space for
page integrity errors. Although DSN1CHKR does check for these errors, running
DSN1COPY with CHECK prevents an unnecessary invocation of DSN1CHKR.
DSN1CHKR prints the chains, beginning with the pointers on the RID option in the
MAP (maintenance analysis procedure) parameter. In this example, the first pointer
is on page 000002, at an offset of 6 bytes from record 1. The second pointer is on
page 00000B, at an offset of 6 bytes from record 1.
//YOUR JOBCARD
//*
//JOBCAT DD DSNAME=DSNCAT1.USER.CATALOG,DISP=SHR
//STEP1 EXEC PGM=IDCAMS
//********************************************************************
//* ALLOCATE A TEMPORARY DATA SET FOR SYSDBASE *
//********************************************************************
//SYSPRINT DD SYSOUT=A
//SYSUDUMP DD SYSOUT=A
//SYSIN DD *
DELETE -
(TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001) -
CATALOG(DSNCAT)
DEFINE CLUSTER -
( NAME(TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001) -
NONINDEXED -
REUSE -
CONTROLINTERVALSIZE(4096) -
VOLUMES(XTRA02) -
RECORDS(783 783) -
RECORDSIZE(4089 4089) -
SHAREOPTIONS(3 3) ) -
DATA -
( NAME(TESTCAT.DSNDBD.TEMPDB.TMPDBASE.I0001.A001)) -
CATALOG(DSNCAT)
/*
//STEP2 EXEC PGM=IKJEFT01,DYNAMNBR=20
//********************************************************************
//* STOP DSNDB06.SYSDBASE *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM(DSN)
-STOP DB(DSNDB06) SPACENAM(SYSDBASE)
END
/*
//STEP3 EXEC PGM=DSN1COPY,PARM=(CHECK)
//********************************************************************
//* CHECK SYSDBASE AND RUN DSN1COPY *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DSNDB06.SYSDBASE.I0001.A001,DISP=SHR
//SYSUT2 DD DSN=TESTCAT.DSNDBC.TEMPDB.TMPDBASE.I0001.A001,DISP=SHR
/*
Figure 124. Sample JCL for running DSN1CHKR on a temporary data set
//YOUR JOBCARD
//*
//STEP1 EXEC PGM=IKJEFT01,DYNAMNBR=20
//********************************************************************
//* EXAMPLE 2 *
//* *
//* STOP DSNDB06.SYSDBASE *
//********************************************************************
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM(DSN)
-STOP DB(DSNDB06) SPACENAM(SYSDBASE)
END
/*
Figure 125. Sample JCL for running DSN1CHKR on a stopped table space
DSN1CHKR output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
You can run this utility on the following types of data sets that contain
uncompressed data:
v DB2 full image copy data sets
v VSAM data sets that contain DB2 table spaces
v Sequential data sets that contain DB2 table spaces (for example, DSN1COPY
output)
DSN1COMP does not estimate savings for data sets that contain LOB table spaces
or for index spaces.
DSN1COMP accepts the following options, in any order, separated by commas:
  32K | PAGESIZE(4K | 8K | 16K | 32K)
  DSSIZE(integer G) | LARGE
  NUMPARTS(integer)
  FREEPAGE(integer)
  PCTFREE(integer)
  FULLCOPY
  REORG
  ROWLIMIT(integer)
  MAXROWS(integer)
Option descriptions
To run DSN1COMP, specify one or more of the following parameters on the EXEC
statement. If you specify more than one parameter, separate
each parameter by a comma. You can specify parameters in any order.
32K Specifies that the input data set, SYSUT1, has a 32-KB page size.
If you specify this option and the SYSUT1 data set does not have a
32-KB page size, DSN1COMP might produce unpredictable results.
The recommended way to specify a 32-KB page size is PAGESIZE(32K).
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If
you specify an incorrect page size, DSN1COMP might produce
unpredictable results.
+----------+----------+-----------+---------+
| DBNAME   | TSNAME   | PARTITION | IPREFIX |
+----------+----------+-----------+---------+
| DBMC0731 | TPMC0731 |         1 | J       |
| DBMC0731 | TPMC0731 |         2 | J       |
| DBMC0731 | TPMC0731 |         3 | J       |
| DBMC0731 | TPMC0731 |         4 | J       |
| DBMC0731 | TPMC0731 |         5 | J       |
+----------+----------+-----------+---------+
Figure 126. Result from query on the SYSTABLEPART catalog table to determine the value
in the IPREFIX column
The preceding output provides the current instance qualifier (J), which can be used
to code the data set name in the DSN1COMP JCL as follows.
//STEP1 EXEC PGM=DSN1COMP
//SYSUT1 DD DSN=vcatname.DSNDBC.DBMC0731.TPMC0731.J0001.A001,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
Environment
Run DSN1COMP as a z/OS job.
You can run DSN1COMP even when the DB2 subsystem is not operational. Before
you use DSN1COMP when the DB2 subsystem is operational, issue the DB2 STOP
DATABASE command. Issuing the STOP DATABASE command ensures that DB2
has not allocated the DB2 data sets.
Authorization required
DSN1COMP does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1COMP job. See “Syntax and
options of the DSN1COMP control statement” on page 699 for DSN1COMP syntax
and option descriptions.
Required data sets: DSN1COMP uses the following data definition (DD)
statements:
SYSPRINT Defines the data set that contains output messages from
DSN1COMP and all hexadecimal dump output.
SYSUT1 Defines the input data set, which can be a sequential data set or a
VSAM data set.
Specify the disposition for this data set as OLD (DISP=OLD) to
ensure that it is not in use by DB2. Specify the disposition for this
data set as SHR (DISP=SHR) only in circumstances where the DB2
STOP DATABASE command does not work.
The requested operation takes place only for the specified data set.
In the following situations, you must specify the correct data set:
v The input data set belongs to a linear table space.
v The index space is larger than 2 GB.
v The table space or index space is a partitioned space.
Recommendation
Before using DSN1COMP, be sure that you know the page size and data set size
(DSSIZE) for the table space. Use the following query on the DB2 catalog to get the
information you need:
SELECT T.CREATOR,
T.NAME,
S.PGSIZE,
CASE S.DSSIZE
WHEN 0 THEN
CASE S.TYPE
WHEN ’ ’ THEN 2097152
WHEN ’I’ THEN 2097152
WHEN ’L’ THEN 4194304
WHEN ’K’ THEN 4194304
ELSE NULL
END
ELSE S.DSSIZE
END
FROM SYSIBM.SYSTABLES T,
SYSIBM.SYSTABLESPACE S
WHERE T.DBNAME=S.DBNAME
AND T.TSNAME=S.NAME;
Using DSN1COMP
This section describes the following tasks that are associated with running the
DSN1COMP utility:
“Estimating compression savings achieved by REORG”
“Including free space in compression calculations” on page 704
“Running DSN1COMP on a table space with identical data” on page 704
DSN1COMP does not try to convert data to the latest version before it compresses
rows and derives a savings estimate.
Without the REORG option, DSN1COMP uses the first n rows to fill the
compression dictionary. DSN1COMP processes the remaining rows to provide the
compression estimate. If the number of rows that are used to build the dictionary is
a significant percentage of the data set rows, little savings result. With the REORG
option, DSN1COMP processes all the rows, including those that are used to build
the dictionary, which results in greater compression.
The fifth qualifier in the data set name can be either I0001 or J0001. This example
uses I0001. Note that because the input is a full image copy, the FULLCOPY option
must be specified.
//jobname JOB acct information
//COMPEST EXEC PGM=DSN1COMP,PARM=’FULLCOPY’
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DB254A.TS254A.I0001.A001,DISP=SHR
STEP2 specifies that DSN1COMP is to report the estimated space savings that are
to be achieved by compressing the data in the data set that is identified by the
SYSUT1 DD statement, DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A0001.
When providing the compression estimate, DSN1COMP is to evaluate no more than
20 000 rows, as indicated by the ROWLIMIT option. Specifying the maximum
number of rows to evaluate limits the elapsed time and processor time that
DSN1COMP requires.
Figure 128. Example DSN1COMP statements with PCTFREE, FREEPAGE, and ROWLIMIT
options
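For illustration, such a job step might look like the following sketch; the
PCTFREE and FREEPAGE values shown here are placeholders, and the ROWLIMIT value
and the input data set name are those described above:
//STEP2 EXEC PGM=DSN1COMP,
//         PARM=(PCTFREE(10),FREEPAGE(5),ROWLIMIT(20000))
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DSNC810.DSNDBD.DB254SP4.TS254SP4.I0001.A0001,DISP=OLD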
Example 3: Estimating space savings that are comparable to what the REORG
utility would achieve. The statement in Figure 129 on page 706 specifies that
DSN1COMP is to report the estimated space savings that are to be achieved by
compressing the data in the data set that is identified by the SYSUT1 DD
statement, DSNCAT.DSNDBD.DBJT0201.TPJTO201.I0001.A254. This input data
set is a table space that was defined with the LARGE option and has 254 partitions,
as indicated by the DSN1COMP options LARGE and NUMPARTS.
When calculating these estimates, DSN1COMP considers the values passed by the
PCTFREE and FREEPAGE options. The PCTFREE value indicates that 30% of
each page is to be left as free space. The FREEPAGE value indicates that every
thirtieth page is to be left as free space. This value must be the same value that
you specified for the FREEPAGE option of the SQL statement CREATE
TABLESPACE or ALTER TABLESPACE. DSN1COMP is to evaluate no more than
20 000 rows, as indicated by the ROWLIMIT option.
Figure 129. Example DSN1COMP statement with the LARGE, PCTFREE, FREEPAGE,
NUMPARTS, REORG, and ROWLIMIT options.
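For illustration, such a job step might look like the following sketch; the
option values and the input data set name are those described above:
//STEP3 EXEC PGM=DSN1COMP,PARM=(LARGE,NUMPARTS(254),PCTFREE(30),
//         FREEPAGE(30),REORG,ROWLIMIT(20000))
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DSNCAT.DSNDBD.DBJT0201.TPJTO201.I0001.A254,DISP=OLD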
DSN1COMP output
This section contains examples of output that is generated by the DSN1COMP
utility.
Message DSN1941
If you receive this message, use a data set with more rows as input, or specify a
larger ROWLIMIT.
Note: A DB2 VSAM data set is a single piece of a nonpartitioned table space or
index, or a single partition of a partitioned table space or index. The input
must be a single z/OS sequential or VSAM data set. Concatenation of input
data sets is not supported.
Using DSN1COPY, you can also print hexadecimal dumps of DB2 data sets and
databases, check the validity of data or index pages (including dictionary pages for
compressed data), translate database object identifiers (OBIDs) to enable moving
data sets between different systems, and reset to 0 the log RBA that is recorded in
each index page or data page.
You can use the DSN1COPY utility on LOB table spaces by specifying the LOB
keyword and omitting the SEGMENT and INLCOPY keywords.
DSN1COPY accepts the following options, in any order, separated by commas:
  CHECK
  32K | PAGESIZE(4K | 8K | 16K | 32K)
  FULLCOPY | INCRCOPY | SEGMENT | INLCOPY
  LARGE | LOB
  DSSIZE(integer G)
  PIECESIZ(integer K | M | G)
  NUMPARTS(integer)
  PRINT | PRINT(hexadecimal-constant,hexadecimal-constant)
  EBCDIC (see note 1) | ASCII | UNICODE
  VALUE(string | hexadecimal-constant)
  OBIDXLAT
  RESET
Notes:
1 EBCDIC is not necessarily the default if the first page of the input data set is a header page. If the
first page is a header page, DSN1COPY uses the format information in the header page as the
default format.
Option descriptions
To run DSN1COPY, specify one or more of the following parameters on the EXEC
statement. If you specify more than one parameter, separate each parameter by a
comma. You can specify parameters in any order.
CHECK Checks each page from the SYSUT1 data set for validity. The
validity checking operates on one page at a time and does not
include any cross-page checking. If an error is found, a message is
issued describing the type of error, and a dump of the page is sent
to the SYSPRINT data set. If you do not receive any messages, no
errors were found. If more than one error exists in a given page,
the check identifies only the first of the errors. However, the entire
| page is dumped. DSN1COPY does not check system pages for
| validity.
32K Specifies that the SYSUT1 data set has a 32-KB page size. If you
specify this option and the SYSUT1 data set does not have a
32-KB page size, DSN1COPY might produce unpredictable results.
The recommended way to specify a 32-KB page size is PAGESIZE(32K).
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If
you specify an incorrect page size, DSN1COPY might produce
unpredictable results.
If you do not specify the page size, DSN1COPY tries to determine
the page size from the input data set if the first page of the input
data set is a header page. DB2 issues an error message if
DSN1COPY cannot determine the input page size. This might
happen if the header page is not in the input data set, or if the page
size field in the header page contains an invalid page size.
FULLCOPY Specifies that a DB2 full image copy (not a DFSMSdss concurrent
copy) of your data is to be used as input. If this data is partitioned,
specify NUMPARTS to identify the total number of partitions. If you
specify FULLCOPY without NUMPARTS, DSN1COPY assumes that
your input file is not partitioned.
Specify FULLCOPY when using a full image copy as input. Omitting
the parameter can cause error messages or unpredictable results.
The FULLCOPY parameter requires SYSUT2 (output data set) to
be either a DB2 VSAM data set or a DUMMY data set.
INCRCOPY Specifies that an incremental image copy of the data is to be used
as input. DSN1COPY with the INCRCOPY parameter updates
existing data sets; do not redefine the existing data sets.
INCRCOPY requires that the output data set (SYSUT2) be a DB2
VSAM data set.
Before you apply an incremental image copy to your data set, you
must first apply a full image copy to the data set by using the
FULLCOPY parameter. Make sure that you apply the full image
copy in a separate execution step because you receive an error
message if you specify both the FULLCOPY and the INCRCOPY
parameters in the same step. Then, apply each incremental image
copy in a separate step, starting with the oldest incremental image
copy.
Specifying neither FULLCOPY nor INCRCOPY implies that the
input is not an image copy data set. Therefore, only a single output
data set is used.
SEGMENT Specifies that you want to use a segmented table space as input to
DSN1COPY. Pages with all zeros in the table space are copied, but
no error messages are issued. You cannot specify FULLCOPY or
INCRCOPY if you specify SEGMENT.
If you are using DSN1COPY with the OBIDXLAT option to copy a DB2
data set to another DB2 data set, the source and target table
spaces must have the same SEGSIZE attribute.
You cannot specify the SEGMENT option with the LOB parameter.
INLCOPY Specifies that the input data is an inline copy data set.
You cannot specify the INLCOPY option with the LOB parameter.
DSSIZE(integer G)
Specifies the data set size, in gigabytes, for the input data set. If
you omit the DSSIZE keyword or the LARGE keyword, DSN1COPY
assumes the appropriate default input data set size that is listed in
Table 139 on page 712.
integer must match the DSSIZE value that was specified when the
table space was defined.
If you omit DSSIZE and the data set is not one of the default sizes,
the results from DSN1COPY are unpredictable.
If you specify DSSIZE, you cannot specify LARGE.
LARGE Specifies that the input data set is a table space that was defined
with the LARGE option, or an index on such a table space. If you
specify the LARGE keyword, DB2 assumes that the data set has a
4-GB boundary. The recommended method of specifying a table
space that was defined with the LARGE option is DSSIZE(4G).
If you omit the LARGE or DSSIZE(4G) option when it is needed, or
if you specify LARGE for a table space that was not defined with
the LARGE option, the results from DSN1COPY are unpredictable.
If you specify LARGE, you cannot specify LOB or DSSIZE.
LOB Specifies that the SYSUT1 data set is a LOB table space. Empty pages
in the table space are copied, but no error messages are issued.
You cannot specify the SEGMENT and INLCOPY options with the
LOB parameter.
DB2 attempts to determine if the input data set is a LOB data set. If
you specify the LOB option but the data set is not a LOB data set,
or if you omit the LOB option for a data set that is a LOB data set,
DB2 issues an error message and DSN1COPY terminates.
If you specify LOB, you cannot specify LARGE.
NUMPARTS(integer)
Specifies the total number of partitions that are associated with the
data set that you are using as input or whose page range you are
| printing. When you use DSN1COPY to copy a data-partitioned
| secondary index, specify the number of partitions in the index.
| integer can range from 1 to 4096.
DSN1COPY uses this value to calculate the size of its output data
sets and to help locate the first page in a range that is to be
printed. If you omit NUMPARTS or specify it as 0, DSN1COPY
assumes that your input file is not partitioned. If you specify a
number greater than 64, DSN1COPY assumes that the data set is
for a partitioned table space that was defined with the LARGE
option, even if the LARGE keyword is not specified for DSN1COPY.
If you specify the number of partitions incorrectly, DSN1COPY can
copy the data to the wrong data sets, return an error message
indicating that an unexpected page number was encountered, or fail
to allocate the data sets correctly. In the last case, a VSAM PUT
error might be detected, resulting in a request parameter list (RPL)
error code of 24.
PRINT(hexadecimal-constant,hexadecimal-constant)
Causes the SYSUT1 data set to be printed in hexadecimal format
on the SYSPRINT data set. You can specify the PRINT parameter
with or without the page range specifications (hexadecimal-
constant,hexadecimal-constant). If you do not specify a range, all
pages of the SYSUT1 are printed. If you want to limit the range of
pages that are printed, indicate the beginning and ending page. If
you want to print a single page, supply only that page number. In
either case, your range specifications must be from one to eight
hexadecimal characters in length.
The following example shows how you code the PRINT parameter if
you want to begin printing at page X'2F0' and stop at page X'35C':
PRINT(2F0,35C)
Because the CHECK and RESET options and the copy function run
independently of the PRINT range, these options apply to the entire
input file, regardless of whether a range of pages is being printed.
| You can indicate the format of the row data in the PRINT output by
| specifying EBCDIC, ASCII, or UNICODE. For an example of the
| output that is affected by these options, see the DSN1PRNT
| FORMAT output in Figure 139 on page 752.
| EBCDIC
| Indicates that the row data in the PRINT output is to be
| displayed in EBCDIC. The default is EBCDIC if the first page
| of the input data set is not a header page.
| If the first page is a header page, DSN1COPY uses the format
| information in the header page as the default format. However,
| if you specify EBCDIC, ASCII, or UNICODE, that format
| overrides the format information in the header page. The
| unformatted header page dump is always displayed in EBCDIC,
| because most of the fields are in EBCDIC.
| ASCII
| Indicates that the row data in the PRINT output is to be
| displayed in ASCII. Specify ASCII when printing table spaces
| that contain ASCII data.
| UNICODE
| Indicates that the row data in the PRINT output is to be
| displayed in Unicode. Specify UNICODE when printing table
| spaces that contain Unicode data.
PIECESIZ(integer)
Specifies the maximum piece size (data set size) for nonpartitioned
indexes. The value that you specify must match the value that was
specified when the nonpartitioning index was created or altered.
The defaults for PIECESIZ are 2G (2 GB) for indexes that are
backed by non-large table spaces and 4G (4 GB) for indexes that
are backed by table spaces that were defined with the LARGE
| option. This option is required if the piece size is not one of the
| default values. If PIECESIZ is omitted and the index is backed by a
table space that was defined with the LARGE option, the LARGE
option is required for DSN1COPY.
The subsequent keyword K, M, or G indicates the unit of the value
that is specified in integer.
K Indicates that the integer value is to be multiplied by 1 KB
to specify the maximum piece size in bytes. integer must be
either 256 or 512.
M Indicates that the integer value is to be multiplied by 1 MB
to specify the maximum piece size in bytes. integer must be
a power of two, between 1 and 512.
G Indicates that the integer value is to be multiplied by 1 GB
to specify the maximum piece size in bytes. integer must be
1, 2, or 4.
Attention: Do not use DSN1COPY in place of COPY for both backup and
recovery. Improper use of DSN1COPY can result in unrecoverable damage and
loss of data.
Environment
Execute DSN1COPY as a z/OS job when the DB2 subsystem is either active or not
active.
If you execute DSN1COPY when DB2 is active, use the following procedure:
1. Start the table space as read-only by using START DATABASE.
2. Run the QUIESCE utility with the WRITE (YES) option to externalize all data
pages and index pages.
3. Run DSN1COPY with DISP=SHR on the data definition (DD) statement.
4. Start the table space as read-write by using START DATABASE to return to
normal operations.
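A sketch of the commands and utility statement for steps 1, 2, and 4, assuming
a hypothetical table space DSNDB04.TSEXAMPL (the QUIESCE statement runs as a
DB2 utility job):
-START DATABASE(DSNDB04) SPACENAM(TSEXAMPL) ACCESS(RO)
QUIESCE TABLESPACE DSNDB04.TSEXAMPL WRITE YES
-START DATABASE(DSNDB04) SPACENAM(TSEXAMPL) ACCESS(RW)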
Authorization required
DSN1COPY does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1COPY job. See “Syntax and options
of the DSN1COPY control statement” on page 710 for DSN1COPY syntax and
option descriptions.
To obtain the names, DBIDs, PSIDs, ISOBIDs, and OBIDs, run the
DSNTEP2 sample application on both the source and target
systems. The following SQL statements yield the preceding
information.
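As a sketch, catalog queries such as the following return these identifiers; the
object names DB1, TS1, TB1, IX1, and USER01 are hypothetical:
SELECT DBID, PSID FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'DB1' AND NAME = 'TS1';        -- table space DBID and PSID
SELECT DBID, OBID FROM SYSIBM.SYSTABLES
  WHERE CREATOR = 'USER01' AND NAME = 'TB1';    -- table DBID and OBID
SELECT DBID, ISOBID, OBID FROM SYSIBM.SYSINDEXES
  WHERE CREATOR = 'USER01' AND NAME = 'IX1';    -- index ISOBID and OBID
Run the same queries on the source and target systems and use the resulting
pairs of values as SYSXLAT input when you specify OBIDXLAT.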
DB2 allows input of only one DSN1COPY data set. DB2 does not permit the input
of concatenated data sets. For a table space that consists of multiple data sets,
ensure that you specify the correct data set. For example, if you specify the CHECK
option to validate pages of a partitioned table space’s second partition, code the
second data set of the table space for SYSUT1.
If you use full or incremental copies as input, specify the SYSUT2 data sets
according to the following guidelines:
v If SYSUT1 is an image copy of a single partition, SYSUT2 must list the data
set name for that partition of the table space. Specify the NUMPARTS parameter
to identify the number of partitions in the entire table space.
v If SYSUT1 is an image copy of an entire partitioned table space, SYSUT2
must list the name of the table space’s first data set. Important: All data sets in
the partitioned table space must use the same fifth-level qualifier, I0001 or J0001,
before DSN1COPY can run successfully on a partitioned table space.
DSN1COPY allocates all of the target data sets. However, you must previously
define the target data sets by using IDCAMS. Specify the NUMPARTS parameter
to identify the number of partitions in the whole table space.
v If SYSUT1 is an image copy of a nonpartitioned data set, SYSUT2 should be
the name of the actual output data set. Do not specify the NUMPARTS
parameter because this parameter is only for partitioned table spaces.
v If SYSUT1 is an image copy of all data sets in a linear table space with
multiple data sets, SYSUT2 should be the name of its first data set.
DSN1COPY allocates all target data sets. However, you must previously define
the target data sets by using IDCAMS.
Performing these steps resets the data set and causes normal extensions through
DB2.
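For example, restoring a full image copy of an entire four-partition table space
might look like the following sketch; the data set names are hypothetical, and
the target data sets must already be defined:
//RESTORE EXEC PGM=DSN1COPY,PARM=(FULLCOPY,NUMPARTS(4))
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=DB2IC.FULLCOPY.DB1.TS1,DISP=OLD
//SYSUT2 DD DSN=DSNCAT.DSNDBD.DB1.TS1.I0001.A001,DISP=OLD
SYSUT2 names the first data set of the table space; DSN1COPY fills in the data
sets for the remaining partitions.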
Restrictions
This section contains restrictions that you should know about when running
DSN1COPY.
DSN1COPY does not alter data set structure. For example, DSN1COPY does not
copy a partitioned or segmented table space into a simple table space. The output
data set is a page-for-page copy of the input data set. If the intended use of
DSN1COPY is to move or restore data, ensure that definitions for the source and
target table spaces, tables, and indexes are identical. Otherwise, unpredictable
results can occur.
DSN1COPY cannot copy DB2 recovery log data sets. The format of a DB2 log
page is different from that of a table or index page. If you try to use DSN1COPY to
recover log data sets, DSN1COPY will abnormally terminate.
Recommendations
This section contains recommendations that you should know about when running
the DSN1COPY utility.
Figure 131. Example catalog query that returns the page set size and data set size for the
page set.
| For more information about versions and how DB2 uses them, see Part 2 of DB2
| Administration Guide.
Using DSN1COPY
This section describes the following tasks that are associated with running the
DSN1COPY utility:
“Altering a table before running DSN1COPY” on page 722
“Checking for inconsistent data” on page 722
“Translating DB2 internal identifiers” on page 722
“Using an image copy as input to DSN1COPY” on page 722
“Resetting page log RBAs” on page 722
“Copying multiple data set table spaces” on page 723
“Restoring indexes with DSN1COPY” on page 723
“Restoring table spaces with DSN1COPY” on page 723
“Printing with DSN1COPY” on page 724
“Copying tables from one subsystem to another” on page 724
You must run a CHECK utility job on the table space that is involved to ensure that
no inconsistencies exist between data and indexes on that data:
v Before using DSN1COPY to save critical data that is indexed
v After using DSN1COPY to restore critical data that is indexed
The CHECK utility performs validity checking between pages.
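For example, a CHECK INDEX statement for a hypothetical table space DB1.TS1
might look like the following sketch:
CHECK INDEX(ALL) TABLESPACE DB1.TS1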
To protect against invalidating the OBIDs, specify the OBIDXLAT parameter for
DSN1COPY. The OBIDXLAT parameter translates OBID, DBID, or PSID before
DSN1COPY copies the data.
Do not specify the RESET parameter for page sets that are in group buffer pool
RECOVER-pending (GRECP) status.
The MODIFY utility might have removed the row in SYSIBM.SYSCOPY. If this has
happened, and if the image copy is a full image copy with SHRLEVEL
REFERENCE, DSN1COPY can restore the table space or data set.
DSN1COPY can restore the object to an incremental image copy, but first it needs
to have restored the previous full image copy and any intermediate incremental
image copies. These actions ensure data integrity. You are responsible for getting
the correct sequence of image copies. DB2 cannot help ensure the proper
sequence.
If you use DSN1COPY for point-in-time recovery, the table space is not recoverable
with the RECOVER utility. Because DSN1COPY executed outside of DB2’s control,
DB2 is not aware that you recovered to a point in time. To make the affected
table space recoverable again after the point-in-time recovery, perform the
following steps:
1. Remove old image copies by using MODIFY AGE.
2. Create one or more full image copies by using SHRLEVEL REFERENCE.
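A sketch of the corresponding utility statements, assuming a hypothetical table
space DB1.TS1 and an output image copy data set that is allocated under the
SYSCOPY DD name:
MODIFY RECOVERY TABLESPACE DB1.TS1 DELETE AGE(*)
COPY TABLESPACE DB1.TS1 COPYDDN(SYSCOPY) SHRLEVEL REFERENCE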
When you use DSN1COPY for printing, you must specify the PRINT parameter. The
requested operation takes place only for the specified data set. If the input data set
belongs to a linear table space or index space that is larger than 2 GB, specify the
correct data set. Alternatively, if it is a partitioned table space or partitioned index,
specify the correct data set. For example, to print a page range in the second
partition of a four-partition table space, specify NUMPARTS(4) and the data set
name of the second data set in the VSAM group (DSN=...A002).
To print a full image copy data set (rather than recovering a table space), specify a
DUMMY SYSUT2 DD statement, and specify the FULLCOPY parameter.
Be careful when you copy a table that contains an identity column from one DB2
subsystem to another:
1. Stop the table space on the source subsystem.
2. Issue a SELECT statement to query the SYSIBM.SYSSEQUENCES entry that
corresponds to the identity column for this table on the source subsystem. Add
the INCREMENT value to the MAXASSIGNEDVAL to determine the next value
(nv) for the identity column.
3. Create the table on the target subsystem. On the identity column specification,
specify nv for the START WITH value, and ensure that all of the other identity
column attributes are the same as for the source table.
4. Stop the table space on the target subsystem.
5. Copy the data by using DSN1COPY.
6. Start the table space on the source subsystem for read-write access.
7. Start the table space on the target subsystem for read-write access.
Example 1: Checking input data set before copying. The following statement
specifies that the DSN1COPY utility is to copy the data set that is identified by the
SYSUT1 DD statement to the data set that is identified by the SYSUT2 DD
statement. Before DSN1COPY copies this data, the utility is to check the validity of
the input data set.
//RUNCOPY EXEC PGM=DSN1COPY,PARM=’CHECK’
//* COPY VSAM TO SEQUENTIAL AND CHECK PAGES
//STEPLIB DD DSN=PDS CONTAINING DSN1COPY
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DSNDB01.SYSUTILX.I0001.A001,DISP=OLD
//SYSUT2 DD DSN=TAPE.DS,UNIT=TAPE,DISP=(NEW,KEEP),VOL=SER=UTLBAK
Example 2: Translating the DB2 internal identifiers. The statement in Figure 132
specifies that DSN1COPY is to copy the data set that is identified by the SYSUT1
DD statement to the data set that is identified by the SYSUT2 DD statement. The
OBIDXLAT option specifies that DSN1COPY is to translate the OBIDs before the
data set is copied. The OBIDs are provided as input on the SYSXLAT DD
statement. Because the OBIDXLAT option is specified, DSN1COPY also checks the
validity of the input data set, even though the CHECK option is not specified.
from page F0000 to page F000F, as indicated by the PRINT option. The maximum
data set size is 64 MB, as indicated by the PIECESIZ option.
//PRINT2 EXEC PGM=DSN1COPY,PARM=(PRINT(F0000,F000F),PIECESIZ(64M))
//* PRINT THE FIRST 16 PAGES IN THE 61ST PIECE OF AN NPI WITH PIECE SIZE OF 64M
//SYSUDUMP DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSUT2 DD DUMMY
//SYSUT1 DD DISP=OLD,DSN=DSNCAT.DSNDBD.MMRDB.NPI1.I0001.A061
DSN1COPY output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
You can specify the range of the log to process and select criteria within the range
to limit the records in the detail report. For example, you can specify:
v One or more units of recovery that are identified by URID
v A single database
By specifying a URID and a database, you can display recovery log records that
correspond to the use of one database by a single unit of recovery.
SYSCOPY(NO | YES)
DBID(hex-constant)  OBID(hex-constant)  PAGE(hex-constant)
RID(hex-constant)  URID(hex-constant)  LUWID(luwid)
TYPE(hex-constant)  SUBTYPE(hex-constant)  value/offset statement
SUMMARY(NO | YES | ONLY)  FILTER  CHECK(DATA)
value/offset statement:
  VALUE(hex-constant) OFFSET(hex-constant)
Option descriptions
To execute DSN1LOGP, construct a batch job. The utility name, DSN1LOGP, should
appear on the EXEC statement, as shown in “Sample DSN1LOGP control
statements” on page 737.
If you specify more than one keyword, separate them by commas. You can specify
the keywords in any order. You can include blanks between keywords, and also
between the keywords and the corresponding values.
RBASTART(hex-constant)
Specifies the hexadecimal log RBA from which to begin reading. If
the value does not match the beginning RBA of one of the log
records, DSN1LOGP begins reading at the beginning RBA of the
next record. For any given job, specify this keyword only once.
Alternative spellings: STARTRBA, ST.
hex-constant is a hexadecimal value consisting of 1 to 12
characters (6 bytes); leading zeros are not required.
The default is 0.
RBAEND(hex-constant)
Specifies the last valid hexadecimal log RBA to extract. If the
specified RBA is in the middle of a log record, DSN1LOGP
continues reading the log in an attempt to return a complete log
record.
To read to the last valid RBA in the log, specify
RBAEND(FFFFFFFFFFFF). For any given job, specify this keyword
only once. Alternative spellings: ENDRBA, EN.
hex-constant is a hexadecimal value consisting of 1 to 12
characters (6 bytes); leading zeros are not required.
The default is FFFFFFFFFFFF.
RBAEND can be specified only if RBASTART is specified.
LRSNSTART(hex-constant)
Specifies the log record sequence number (LRSN) from which to
begin the log scan. DSN1LOGP starts its processing on the first log
record that contains an LRSN value that is greater than or equal to
the LRSN value that is specified on LRSNSTART. The default
LRSN is the LRSN at the beginning of the data set. Alternative
spellings: STARTLRSN, STRTLRSN, and LRSNSTRT.
For any given job, specify this keyword only once.
You must specify this keyword to search the member BSDSs and to
locate the log data sets from more than one DB2 subsystem. You
can specify either the LRSNSTART keyword or the RBASTART
keyword to search the BSDS of a single DB2 subsystem and to
locate the log data sets.
LRSNEND(hex-constant)
Specifies the LRSN value of the last log record that is to be
scanned. When LRSNSTART is specified, the default is
X'FFFFFFFFFFFF'. Otherwise, it is the end of the data set.
Alternative spelling: ENDLRSN.
For any given job, specify this keyword only once.
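For instance, in a data sharing environment the scan boundaries are typically given
as LRSN values rather than RBAs; a minimal SYSIN sketch (the LRSN values are
illustrative placeholders only) might look like this:
//SYSIN  DD *
 LRSNSTART (A7951A001AE2) LRSNEND (A7951A0EA5CF) SUMMARY(YES)
/*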
DATAONLY Limits the log records in the detail report to those that represent
data changes (insert, page repair, update space map, and so on).
The default is DATAONLY(NO).
(YES) Extracts log records for data changes only.
The VALUE and OFFSET options must be used together. You can
specify a maximum of 10 VALUE-OFFSET pairs. The SUBTYPE
parameter is required when using the VALUE and OFFSET options.
VALUE(hex-constant)
Specifies a value that must appear in a log record that is to be
extracted.
hex-constant is a hexadecimal value consisting of a maximum
of 64 characters and must be an even number of characters.
The SUBTYPE keyword must be specified before the VALUE
option.
OFFSET(hex-constant)
Specifies an offset from the log record header at which the
value that is specified in the VALUE option must appear.
hex-constant is a hexadecimal value consisting of a maximum
of eight characters.
The SUBTYPE keyword must be specified before specifying the
OFFSET option.
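As an illustrative sketch only (the TYPE, SUBTYPE, VALUE, and OFFSET hexadecimal
values below are placeholders rather than codes taken from an actual log), a control
statement that qualifies records by content might be written as follows; note that
SUBTYPE appears before the VALUE and OFFSET options, as required:
//SYSIN  DD *
 RBASTART (AF000) RBAEND (B3000)
 TYPE(0600) SUBTYPE(0083) VALUE(C1C2C3C4) OFFSET(001C)
/*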
Environment
DSN1LOGP runs as a batch z/OS job.
DSN1LOGP runs on archive data sets, but not active data sets, when DB2 is
running.
Authorization required
DSN1LOGP does not require authorization. However, if any of the data sets is
RACF-protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1LOGP job. See “Syntax and options
of the DSN1LOGP control statement” on page 728 for DSN1LOGP syntax and
option descriptions.
DSN1LOGP identifies the recovery log by DD statements that are described in the
stand-alone log services. For a description of these services, see Appendix C
(Volume 2) of DB2 Administration Guide.
Data sharing requirements: When selecting log records from more than one DB2
subsystem, you must use all of the following DD statements to locate the log data
sets:
GROUP
MxxBSDS
MxxARCHV
MxxACTn
Using DSN1LOGP
This section describes the following tasks that are associated with running the
DSN1LOGP utility:
“Reading archive log data sets on tape”
“Locating table and index identifiers” on page 737
If you perform archiving on tape, the first letter of the lowest-level qualifier varies for
both the first and second data sets. The first letter of the first data set is B (for
BSDS), and the first letter of the second data set is A (for archive). Hence, the
archive log data set names all end in Axxxxxxx, and the DD statement identifies
each of them as the second data set on the corresponding tape:
LABEL=(2,SL)
When reading archive log data sets on tape (or copies of active log data sets on
tape), add one or more of the following Job Entry Subsystem (JES) statements:
//*MAIN HOLD=YES   Places the job in HOLD status until the operator is
ready to release the job.
TYPRUN=HOLD   Performs the same function as //*MAIN HOLD=YES; specify this
parameter on the JOB statement.
Alternatively, you can submit the job to a z/OS initiator that your operations center
has established for exclusive use by jobs that require tape mounts. In both JES2 and
JES3 environments, specify the initiator class by using the CLASS parameter on the
JOB statement.
For additional information on these options, refer to z/OS MVS JCL User's Guide or
z/OS MVS JCL Reference.
You can think of the DB2 recovery log as a large sequential file. When recovery log
records are written, they are written to the end of the log. A log RBA is the address
of a byte on the log. Because the recovery log is larger than a single data set, the
recovery log is physically stored on many data sets. DB2 records the RBA ranges
and their corresponding data sets in the BSDS. To determine which data set
contains a specific RBA, read the information about the DSNJU004 utility under
Chapter 37, “DSNJU004 (print log map),” on page 679 and see Part 4 (Volume 1) of
DB2 Administration Guide. During normal DB2 operation, messages are issued that
include information about log RBAs.
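For reference, a minimal DSNJU004 job such as the following sketch prints the log
map, which lists the RBA range of each log data set that is recorded in the BSDS;
the library and BSDS data set names are placeholders:
//PRTLOG  EXEC PGM=DSNJU004
//STEPLIB  DD DSN=DB2A.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*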
Example 2: Extracting information from the active log when the BSDS is not
available. The following example shows how to extract the information from the
active log when the BSDS is not available. The extraction includes log records that
apply to the table space or index space that is identified by the DBID of X'10A' and
the OBID of X'1F'. The only information that is extracted is information that relates
to page numbers X'3B' and X'8C', as identified by the PAGE options. You can omit
beginning and ending RBA values for ACTIVEn or ARCHIVE DD statements
because the DSN1LOGP search includes all specified ACTIVEn DD statements.
The DD statements ACTIVE1, ACTIVE2, and ACTIVE3 specify the log data sets in
ascending log RBA range. Use the DSNJU004 utility to determine what the log RBA
range is for each active log data set. If the BSDS is not available and you cannot
determine the ascending log RBA order of the data sets, you must run each log
data set individually.
//STEP1 EXEC PGM=DSN1LOGP
//STEPLIB DD DSN=PDS containing DSN1LOGP
//SYSPRINT DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//ACTIVE1 DD DSN=DSNCAT.LOGCOPY1.DS02,DISP=SHR RBA X’A000’ - X’BFFF’
//ACTIVE2 DD DSN=DSNCAT.LOGCOPY1.DS03,DISP=SHR RBA X’C000’ - X’EFFF’
//ACTIVE3 DD DSN=DSNCAT.LOGCOPY1.DS01,DISP=SHR RBA X’F000’ - X’12FFF’
//SYSIN DD *
DBID (10A) OBID(1F) PAGE(3B) PAGE(8C)
/*
Example 3: Extracting information from the archive log when the BSDS is not
available. The example in Figure 134 shows how to extract the information from
archive logs when the BSDS is not available. The extraction includes log records
that apply to a single unit of recovery (whose URID is X'61F321'). Because the
BEGIN UR is the first record for the unit of recovery and is at X'61F321', the
beginning RBA is specified to indicate that it is the first RBA in the range from which
to extract recovery log records. Also, because no ending RBA value is specified, all
specified archive logs are scanned for qualifying log records. The specification of
DBID(4) limits the scan to changes that the specified unit of recovery made to all
table spaces and index spaces in the database whose DBID is X'4'.
Figure 134. Example DSN1LOGP statement with RBASTART and URID options
The following example produces both a detail and a summary report that uses the
BSDS to identify the log data sets. The summary report summarizes all recovery
log information within the RBASTART and RBAEND specifications. You cannot limit
the output of the summary report with any of the other options, except by using the
FILTER option with a URID or LUWID specification. How you specify RBASTART and
RBAEND depends on whether a BSDS is used.
This example is similar to Example 1, in that it shows how to extract the information
from the recovery log when you have the BSDS available. However, this example
also shows you how to specify a summary report of all logged information between
the log RBA of X'AF000' and the log RBA of X'B3000'. This summary is generated
with a detail report, but it is printed to SYSSUMRY separately.
//STEP1 EXEC PGM=DSN1LOGP
//STEPLIB DD DSN=PDS containing DSN1LOGP
//SYSPRINT DD SYSOUT=A
//SYSSUMRY DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//BSDS DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSIN DD *
RBASTART (AF000) RBAEND (B3000)
DBID (10A) OBID(1F) SUMMARY(YES)
/*
DSN1LOGP output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
Figure 135 on page 741 shows a sample of the summary report. Figure 136 on
page 742 shows a sample of the detail report. Figure 137 on page 744 shows a
sample of data propagation information from a summary report. A description of the
output precedes each sample.
The first section lists all completed units of recovery (URs) and checkpoints within
the range of the log that is scanned. Events are listed chronologically, with URs
listed according to when they were completed and checkpoints listed according to
when the end of the checkpoint was processed. The page sets that are changed by
each completed UR are listed. If a log record that is associated with a UR is
unavailable, the attribute INFO=PARTIAL is displayed for the UR. Otherwise, the
UR is marked INFO=COMPLETE. A log record that is associated with a UR is
unavailable if the range of the scanned log is not large enough to contain all
records for a given UR.
| The DISP attribute can be one of the following values: COMMITTED, ABORTED,
| INFLIGHT, IN-COMMIT, IN-ABORT, POSTPONED ABORT, or INDOUBT. The DISP
| attributes COMMITTED and ABORTED are used in the first section; the remaining
| attributes are used in the second section.
The list in the second section shows the work that is required of DB2 at restart as it
is recorded in the log that you specified. If the log is available, the checkpoint that is
to be used is identified, as is each outstanding UR, together with the page sets it
changed. Each page set with pending writes is also identified, as is the earliest log
record that is required to complete those writes. If a log record that is associated
with a UR is unavailable, the attribute INFO=PARTIAL is displayed, and the
identification of modified page sets is incomplete for that UR.
================================================================================
DSN1150I SUMMARY OF COMPLETED EVENTS
================================================================================
DSN1157I RESTART SUMMARY
You can reduce the volume of the detail log records by specifying one or more of
the optional keywords that are listed under “Syntax and options of the DSN1LOGP
control statement” on page 728.
Figure 137. Sample data propagation information from the summary report
A detail report contains the following information for each page regression error:
v DBID
v OBID
v Page number
v Current LRSN or RBA
v Member name
v Previous level
v Previous update
v Date
v Time
A summary report contains the total number of page regressions that the utility
found as well as the following information for each table space in which it found
page regression errors:
v Database name
v Table space name
v DBID
v OBID
If no page regression errors are found, DSN1LOGP outputs a single message that
no page regression errors were found.
The sample output in Figure 138 shows detail and summary reports when page
regression errors are found.
DSN1191I:
-------------------------------------------------------------------------------------
DETAIL REPORT OF PAGE REGRESSION ERRORS
-------------------------------------------------------------------------------------
DBID OBID PAGE#    CURRENT      MEMBER PREV-LEVEL   PREV-UPDATE  DATE   TIME
---- ---- -------- ------------ ------ ------------ ------------ ------ --------
0001 00CF 0000132F B7A83F071892 0002   84A83BBEE81F B7A83C6042DF 02.140 15:29:20
0001 00CF 000086C2 B7A84BD4C3E5 0003   04A83BC42C58 B7A83C61D53E 02.140 18:01:13
0006 0009 00009DBF B7A8502A39F4 0002   04A83BC593B6 B7A83C669743 02.140 18:20:37
Figure 138. Sample DSN1LOGP detail and summary reports for page regression errors.
Note: A DB2 VSAM data set is a single piece of a nonpartitioned table space or
index, or a single partition of a partitioned table space or index. The input
must be a single z/OS sequential or VSAM data set. Concatenation of input
data sets is not supported.
Using DSN1PRNT, you can print hexadecimal dumps of DB2 data sets and
databases. If you specify the FORMAT option, DSN1PRNT formats the data and
indexes for any page that does not contain an error that would prevent formatting. If
DSN1PRNT detects such an error, it prints an error message just before the page
and dumps the page without formatting. Formatting resumes with the next page.
DSN1PRNT is especially useful when you want to identify the contents of a table
space or index. You can run DSN1PRNT on image copy data sets and on table
spaces and indexes. DSN1PRNT accepts an index image copy as input when you
specify the FULLCOPY option.
DSN1PRNT is compatible with LOB table spaces, when you specify the LOB
keyword and omit the INLCOPY keyword.
32K | PAGESIZE(4K | 8K | 16K | 32K)
FULLCOPY | INCRCOPY | INLCOPY
LARGE | DSSIZE(integer G) | LOB
PIECESIZ(integer K | M | G)   NUMPARTS(integer)
PRINT | PRINT(hexadecimal-constant,hexadecimal-constant)   EBCDIC | ASCII | UNICODE (1)
VALUE(string | hexadecimal-constant)
FORMAT   EXPAND | SWONLY   NODATA | NODATPGS
Notes:
1 EBCDIC is not necessarily the default if the first page of the input data set is a header page. If the
first page is a header page, DSN1PRNT uses the format information in the header page as the
default format.
Option descriptions
To run DSN1PRNT, specify one or more of the following parameters on the EXEC
statement.
PAGESIZE Specifies the page size of the input data set that is defined by
SYSUT1. Available page size values are 4K, 8K, 16K, or 32K. If
you specify an incorrect page size, DSN1PRNT might produce
unpredictable results.
If you do not specify the page size, DSN1PRNT tries to determine
the page size from the input data set if the first page of the input
data set is a header page. DB2 issues an error message if
DSN1PRNT cannot determine the input page size. This might
happen if the header page is not in the input data set, or if the page
size field in the header page contains an invalid page size.
FULLCOPY Specifies that a DB2 full image copy (not a DFSMSdss concurrent
copy) of your data is to be used as input. If this data is partitioned,
you also need to specify the NUMPARTS parameter to identify the
number and length of the partitions. If you specify FULLCOPY
without including a NUMPARTS specification, DSN1PRNT assumes
that the input file is not partitioned.
The FULLCOPY parameter must be specified when you use an
image copy as input to DSN1PRNT. Omitting the parameter can
cause error messages or unpredictable results.
INCRCOPY Specifies that an incremental image copy of the data is to be used
as input. If the data is partitioned, also specify NUMPARTS to
identify the number and length of the partitions. If you specify
INCRCOPY without NUMPARTS, DSN1PRNT assumes that the
input file is not partitioned.
The INCRCOPY parameter must be specified when you use an
incremental image copy as input to DSN1PRNT. Omitting the
parameter can cause error messages or unpredictable results.
INLCOPY Specifies that the input data is to be an inline copy data set.
When you use DSN1PRNT to print a page or a page range from an
inline copy that is produced by LOAD or REORG, DSN1PRNT
prints all instances of the pages. The last instance of the printed
page or pages is the last one that is created by the utility.
LARGE Specifies that the input data set is a table space that was defined
with the LARGE option, or an index on such a table space. If you
specify LARGE, DB2 assumes that the data set has a 4-GB
boundary. The recommended method of specifying a table space
that was defined with the LARGE option is DSSIZE(4G).
If you omit the LARGE or DSSIZE(4G) option when it is needed, or
if you specify LARGE for a table space that was not defined with
the LARGE option, the results from DSN1PRNT are unpredictable.
If you specify LARGE, you cannot specify LOB or DSSIZE.
LOB Specifies that the SYSUT1 data set is a LOB table space. You
cannot specify the INLCOPY option with the LOB parameter.
DB2 attempts to determine if the input data set is a LOB data set. If
you specify the LOB option but the data set is not a LOB data set,
or if you omit the LOB option but the data set is a LOB data set,
DB2 issues an error message and DSN1PRNT terminates.
If you specify LOB, you cannot specify LARGE.
DSSIZE(integer G)
Specifies the data set size, in gigabytes, for the input data set. If
you omit the DSSIZE keyword or the LARGE keyword, DSN1PRNT
assumes the appropriate default input data set size that is listed in
Table 140.
Table 140. Default input data set sizes
Object                                                    Default input data set size (in GB)
Non-LOB linear table space or index                       2
LOB                                                       4
Partitioned table space or index with NUMPARTS = 1-16     4
Partitioned table space or index with NUMPARTS = 17-32    2
Partitioned table space or index with NUMPARTS = 33-64    1
Partitioned table space or index with NUMPARTS > 64       4
integer must match the DSSIZE value that was specified when the
table space was defined.
If you omit DSSIZE and the data set is not one of the default sizes,
the results from DSN1PRNT are unpredictable.
If you specify DSSIZE, you cannot specify LARGE.
PIECESIZ(integer)
Specifies the maximum piece size (data set size) for nonpartitioned
indexes. The value that you specify must match the value that is
| specified when the secondary index was created or altered.
The defaults for PIECESIZ are 2G (2 GB) for indexes that are
backed by non-large table spaces and 4G (4 GB) for indexes that
are backed by table spaces that were defined with the LARGE
option. This option is required if a print range is specified and the
piece size is not one of the default values. If PIECESIZ is omitted
and the index is backed by a table space that was defined with the
LARGE option, the LARGE keyword is required for DSN1PRNT.
The subsequent keyword K, M, or G, indicates the units of the
value that is specified in integer.
K Indicates that the integer value is to be multiplied by 1 KB
to specify the maximum piece size in bytes. integer must be
either 256 or 512.
M Indicates that the integer value is to be multiplied by 1 MB
to specify the maximum piece size in bytes. integer must be
a power of 2, between 1 and 512.
G Indicates that the integer value is to be multiplied by 1 GB
to specify the maximum piece size in bytes. integer must be
1, 2, or 4.
The valid piece sizes are:
v 1 MB or 1 GB
v 2 MB or 2 GB
v 4 MB or 4 GB
v 8 MB
v 16 MB
v 32 MB
v 64 MB
v 128 MB
v 256 KB or 256 MB
v 512 KB or 512 MB
NUMPARTS(integer)
Specifies the number of partitions that are associated with the input
data set. NUMPARTS is required if the input data set is partitioned.
| When you use DSN1PRNT to copy a data-partitioned secondary
| index, specify the number of partitions in the index.
| Valid specifications range from 1 to 4096. DSN1PRNT uses this
value to help locate the first page in a range that is to be printed. If
you omit NUMPARTS or specify it as 0, DSN1PRNT assumes that
your input file is not partitioned. If you specify a number greater
than 64, DSN1PRNT assumes that the data set is for a partitioned
table space that was defined with the LARGE option, even if the
LARGE keyword is not specified for DSN1PRNT.
DSN1PRNT cannot always validate the NUMPARTS parameter. If
you specify it incorrectly, DSN1PRNT might print the wrong data
sets or return an error message that indicates that an unexpected
page number was encountered.
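For instance, to print and format a single partition of a 16-partition table space,
NUMPARTS might be specified as in the following sketch; the data set name and
values are illustrative only:
//PRNTPART EXEC PGM=DSN1PRNT,PARM='PRINT,FORMAT,NUMPARTS(16)'
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DISP=OLD,DSN=DSNCAT.DSNDBD.MMRDB.TS1.I0001.A003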
PRINT(hexadecimal-constant,hexadecimal-constant)
Causes the SYSUT1 data set to be printed in hexadecimal format
on the SYSPRINT data set. This option is the default for
DSN1PRNT.
You can specify the PRINT parameter with or without page range
specifications. If you do not specify a range, all pages of the
SYSUT1 are printed. If you want to limit the range of pages that are
printed, you can do so by indicating the beginning and ending page
numbers with the PRINT parameter or, if you want to print a single
page, by indicating only the beginning page. In either case, your
range specifications must be from one to eight hexadecimal
characters in length.
The following example shows how to code the PRINT parameter if
you want to begin printing at page X'2F0' and to stop at page
X'35C':
PRINT(2F0,35C)
| Note that the actual size of a 4-GB DB2 data set that is full is 4G -
| 256 x 4KB. This size also applies to data sets that are created with
| a DFSMS data class that has extended addressability. When
| calculating the print range of pages in a non-first data set of a
| multiple data set linear table space or index with 4G DSSIZE or
| PIECESIZ, use the actual data set size.
| The relationship between the page size and the number of pages in
| a 4-GB data set is shown in Table 141 on page 752.
| Table 141. Relationship between page size and the number of pages in a 4-GB data set
| Page size Number of pages
| 4 KB X'FFF00'
| 8 KB X'7FF80'
| 16 KB X'3FFC0'
| 32 KB X'1FFE0'
|
| For example, if PAGESIZE is 4 KB, the page number of the first
| page of the third data set is 2*FFF00 = 1FFE00.
| You can indicate the format of the row data in the PRINT output by
| specifying EBCDIC, ASCII, or UNICODE. The part of the output that
| is affected by these options is in bold in Figure 139.
|
RECORD: XOFFSET=’0014’X PGSFLAGS=’00’X PGSLTH=65 PGSLTH=’0041’X PGSOBD=’0003’X PGSBID=’01’X
C5C5F0F6 C1404040 40404040 F1F34040 40C1E2D6 F1F3F5E7 40404040 40404040 EE06A 13 ASO135X
C1C6F3F1 C587C6F0 01800000 14199002 01174522 00000080 000000 AF31E.F0...................
Figure 139. The part of the DSN1PRNT FORMAT output that is affected by the EBCDIC, ASCII, and UNICODE
options
| EBCDIC
| Indicates that the row data in the PRINT output is to be
| displayed in EBCDIC. The default is EBCDIC if the first page
| of the input data set is not a header page.
| If the first page is a header page, DSN1PRNT uses the format
| information in the header page as the default format. However,
| if you specify EBCDIC, ASCII, or UNICODE, that format
| overrides the format information in the header page. The
| unformatted header page dump is always displayed in EBCDIC,
| because most of the fields are in EBCDIC.
| ASCII
| Indicates that the row data in the PRINT output is to be
| displayed in ASCII. Specify ASCII when printing table spaces
| that contain ASCII data.
| UNICODE
| Indicates that the row data in the PRINT output is to be
| displayed in Unicode. Specify UNICODE when printing table
| spaces that contain Unicode data.
VALUE Causes each page of the input data set SYSUT1 to be scanned for
the character string that you specify in parentheses following the
VALUE parameter. Each page that contains that character string is
then printed in SYSPRINT. You can specify the VALUE parameter in
conjunction with any of the other DSN1PRNT parameters.
(string)
Can consist of from 1 to 20 alphanumeric EBCDIC characters.
For non-EBCDIC characters, use hexadecimal characters.
(hexadecimal-constant)
Consists of from 2 to 40 hexadecimal characters. You must
specify two apostrophe characters before and after the
hexadecimal character string.
If, for example, you want to search your input file for the string
'12345', your JCL should look like the following JCL:
//STEP1 EXEC PGM=DSN1PRNT,PARM=’VALUE(12345)’
Environment
Run DSN1PRNT as a z/OS job.
You can run DSN1PRNT even when the DB2 subsystem is not operational. If you
choose to use DSN1PRNT when the DB2 subsystem is operational, ensure that the
DB2 data sets that are to be printed are not currently allocated to DB2.
To make sure that a data set is not currently allocated to DB2, issue the DB2 STOP
DATABASE command, specifying the table spaces and indexes that you want to
print.
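For example, assuming a table space MMRDB.TS1 (the names are placeholders), a
command such as the following sketch stops access before DSN1PRNT is run:
-STOP DATABASE(MMRDB) SPACENAM(TS1)
After the DSN1PRNT job completes, access can be restored with the corresponding
-START DATABASE command, for example -START DATABASE(MMRDB) SPACENAM(TS1).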
Authorization required
No special authorization is required. However, if any of the data sets is RACF
protected, the authorization ID of the job must have RACF authority.
Control statement
Create the utility control statement for the DSN1PRNT job. See “Syntax and options
of the DSN1PRNT control statement” on page 748 for DSN1PRNT syntax and
option descriptions.
If you run the online REORG utility with the FASTSWITCH option,
verify the data set name before running the DSN1PRNT utility. The
fifth-level qualifier in the data set name alternates between I0001
and J0001 when using FASTSWITCH. Specify the correct fifth-level
qualifier in the data set name to successfully execute the
| DSN1PRNT utility. To determine the correct fifth-level qualifier,
| query the IPREFIX column of SYSIBM.SYSTABLEPART for each
| data partition or the IPREFIX column of SYSIBM.SYSINDEXPART
| for each index partition. If the object is not partitioned, use zero as
| the value for the PARTITION column in your query.
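For example, a query along the following lines (the database and table space names
are placeholders) returns the IPREFIX value for each partition of a table space:
SELECT PARTITION, IPREFIX
  FROM SYSIBM.SYSTABLEPART
  WHERE DBNAME = 'MMRDB'
    AND TSNAME = 'TS1';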
Recommendations
This section contains recommendations for running the DSN1PRNT utility.
SELECT I.CREATOR,
I.NAME,
S.PGSIZE,
CASE S.DSSIZE
WHEN 0 THEN CASE S.TYPE
WHEN ’ ’ THEN 2097152
WHEN ’I’ THEN 2097152
WHEN ’L’ THEN 4194304
WHEN ’K’ THEN 4194304
ELSE NULL
END
ELSE S.DSSIZE
END
FROM SYSIBM.SYSINDEXES I,
SYSIBM.SYSTABLES T,
SYSIBM.SYSTABLESPACE S
WHERE I.CREATOR=’DSN8610’ AND
I.NAME=’XEMP1’ AND
I.TBCREATOR=T.CREATOR AND
I.TBNAME=T.NAME AND
T.DBNAME=S.DBNAME AND
T.TSNAME=S.NAME;
Figure 140. Example SQL query that returns the page size and data set size for the page
set.
See “Data sets that REORG INDEX uses” on page 389 for information about
determining data set names.
The fifth-level qualifier in the data set name can be either I0001 or J0001. This
example uses I0001.
//PRINT2 EXEC PGM=DSN1PRNT,
// PARM=(PRINT(F0000,F000F),FORMAT,PIECESIZ(64M))
//SYSUDUMP DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DISP=OLD,DSN=DSNCAT.DSNDBD.MMRDB.NPI1.I0001.A061
Example 4: Printing a partitioned data set. The following example specifies that
DSN1PRNT is to print the data set that is identified by the SYSUT1 DD statement.
Because this data set is a table space that was defined with the LARGE option, the
DSSIZE(4G) option is specified in the parameter list for DSN1PRNT. You could
specify the LARGE option in this list instead, but specifying DSSIZE(4G) is
recommended. This input table space has 260 partitions, as indicated by the
NUMPARTS option.
//RUNPRNT1 EXEC PGM=DSN1PRNT,
// PARM=’DSSIZE(4G),PRINT,NUMPARTS(260),FORMAT’
//STEPLIB DD DSN=DB2A.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1 DD DSN=DSNCAT.DSNDBC.DBOM0301.TPOM0301.I0001.A259,DISP=SHR
/*
DSN1PRNT output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
For information about the format of trace records, see Appendix D (Volume 2) of
DB2 Administration Guide.
SELECT function,offset,data-specification
ACTION( action(abend-code) )          The default abend reason code is X'00E60100'.
ACTION( STTRACE ) second-trace-spec   (1)
Notes:
1 The options in the second-trace-spec do not have to be specified immediately following the
STTRACE option. However, they can be specified only if the STTRACE option is also specified.
second-trace-spec:
ACTION2( action(abend-code) )         The default abend reason code is X'00E60100'.
FILTER( ACE | EB )
COMMAND command
SELECT2 function,offset,data-specification
Option descriptions
START TRACE (trace-parameters)
Indicates the start of a DSN1SDMP job. START TRACE is a
required keyword and must be the first keyword that is specified in
the SDMPIN input stream. The trace parameters that you use are
those that are described in Chapter 2 of DB2 Command Reference,
except that you cannot use the subsystem recognition character.
If the START TRACE command in the SDMPIN input stream is not
valid, or if the user is not properly authorized, the IFI
(instrumentation facility interface) returns an error code and START
TRACE does not take effect. DSN1SDMP writes the error message
to the SDMPPRNT data set.
Trace Destination: If DB2 trace data is to be written to the SDMPTRAC data set,
the trace destination must be an IFI online performance (OP) buffer. OP buffer
destinations are specified in the DEST keyword of START TRACE. Eight OP buffer
destinations exist, OP1 to OP8. The OPX trace destination assigns the next
available OP buffer.
The DB2 output text from the START TRACE command is written to SDMPPRNT.
START TRACE and its associated keywords must be specified first. Specify the
remaining selective dump keywords in any order following the START TRACE
command.
SELECT function,offset,data-specification
Specifies selection criteria in addition to those that are specified on the START
TRACE command. SELECT expands the data that is available for selection in a
trace record and allows more specific selection of data in the trace record than
using START TRACE alone. You can specify a maximum of eight SELECT
criteria.
The selection criteria use the concept of the current-record pointer. DB2
initializes the current-record pointer to zero, that is, at the beginning of the trace
record. For this instance of the DSN1SDMP trace, the trace record begins with
the self-defining section. The current-record pointer can be modified by Px and
LN functions, which are described in the list of functions below.
For information about the fields in the DB2 trace records, see Appendix D
(Volume 2) of DB2 Administration Guide.
You can specify the selection criteria with the following parameters:
function Specifies the type of search that is to be performed on the trace
record. The specified value must be two characters. The
possible values are:
DR Specifies a direct comparison of data from the specified
offset. The offset is always calculated from the
current-record pointer.
GE Specifies a comparison of data that is greater than or
equal to the value of the specified offset. The offset is
always calculated from the current-record pointer. The
test succeeds if the data from the specified offset is
greater than or equal to data-specification, which you
can specify on the SELECT option.
LE Specifies a comparison of data that is less than or
equal to the value of the specified offset. The offset is
always calculated from the current-record pointer. The
test succeeds if the data from the specified offset is
less than or equal to data-specification, which you
specify on the SELECT option.
P1, P2, or P4
Selects the 1-, 2-, or 4-byte field that is located offset
bytes past the start of the record. The function then
moves the current-record pointer that number of bytes
into the record. P1, P2, and P4 always start from the
beginning of the record (plus the offset that you
specify).
This offset is saved as the current-record pointer that is
to be used on subsequent DR, LE, GE, and LN
requests.
For example, suppose that the user knows that the
offset to the standard header is 4 bytes long and is
located in the first 4 bytes of the record. P4,00 reads
that offset and moves the current-record pointer to the
start of the standard header.
LN Advances the current-record pointer by the number of
bytes that are indicated in the 2-byte field that is
located offset bytes from the previous current-record
pointer.
This offset is saved as the current-record pointer that is
to be used on subsequent DR, LE, GE, and LN
requests.
offset Specifies the number (in decimal) of bytes into the trace record
where the comparison with the data-specification field begins.
The offset starts from the beginning of the trace record after a
P1, P2, or P4, and from the current-record pointer after a GE,
LE, LN, or DR.
Figure 141. Format of the DB2 trace record at data specification comparison time
An abend reason code can also be specified on this parameter. The codes
must be in the range X'00E60100' to X'00E60199'. The default value is
X'00E60100'.
STTRACE
Specifies that a second trace is to be started when a trace record passes
the selection criteria.
If you do not specify action or STTRACE, the record is written and no action is
performed.
AFTER(integer)
Specifies that the ACTION is to be performed after the trace point is reached
integer times.
integer must be between 1 and 32767. The default is AFTER(1).
FOR(integer)
Specifies the number of times that the ACTION is to take place when the
specified trace point is reached. After integer times, the trace is stopped, and
DSN1SDMP terminates.
integer must be between 1 and 32767 and includes the first action. If no
SELECT criteria are specified, use an integer greater than 1; the START
TRACE command automatically causes the action to take place one time. The
default is FOR(1).
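For instance, under the semantics described above, the following SDMPIN fragment
(a sketch only) takes the abend action on the second qualifying trace record and
then stops the trace:
 AFTER(2)
 FOR(1)
 ACTION(ABENDRET)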
ACTION2
Specifies the action to perform when a trace record passes the selection criteria
of the START TRACE, SELECT, and SELECT2 keywords.
Attention: The ACTION2 keyword, like the ACTION keyword, should be used
with extreme caution, because you might damage existing data. Not all abends
are recoverable, even if the ABENDRET parameter is specified. Some abends
might force the DB2 subsystem to terminate, particularly those that occur during
end-of-task or end-of-memory processing due to the agent having experienced
a previous abend.
action(abend-code)
Specifies a particular action to perform. Possible values for action are:
ABENDRET ABEND and retry the agent.
ABENDTER ABEND and terminate the agent.
An abend reason code can also be specified on this parameter. The codes
must be in the range X'00E60100' to X'00E60199'. If no abend code is specified,
X'00E60100' is used.
If you do not specify action, the record is written and no action is performed.
FILTER
Specifies that DSN1SDMP is to filter the output of the second trace based on
either an ACE or an EB.
(ACE)
Specifies that DSN1SDMP is to include trace records only for the agent
control element (ACE) that is associated with the agent when the first action
is triggered and the second trace is started.
(EB)
Specifies that DSN1SDMP is to include trace records only for the execution
block (EB) that is associated with the agent when the first action is
triggered and the second trace is started.
COMMAND
Indicates that the specified command is to be issued when a trace record
passes the selection criteria for the first trace and a second trace is started. You
can start a second trace by specifying the STTRACE option.
command
Specifies a specific command to be issued. For a complete list of
commands, see DB2 Command Reference.
FOR2(integer)
Specifies the number of times that the ACTION2 is to take place when the
specified second trace point is reached. After integer times, the second trace is
stopped, and DSN1SDMP terminates.
integer must be between 1 and 32767 and includes the first action. If no
SELECT2 criteria are specified, use an integer greater than 1; the STTRACE
option automatically causes the action to take place one time. The default is
FOR2(1).
AFTER2(integer)
Specifies that the ACTION2 is to be performed after the second trace point is
reached integer times.
integer must be between 1 and 32767. The default is AFTER2(1).
SELECT2 function,offset,data-specification
Specifies selection criteria for the second trace. This option functions like the
SELECT option, except that it pertains to the second trace only. You can start a
second trace by specifying the STTRACE option.
Environment
Run DSN1SDMP as a z/OS job, and execute it with the DSN TSO command
processor. To execute DSN1SDMP, the DB2 subsystem must be running.
The z/OS job completes only under one of the following conditions:
v The TRACE and any additional selection criteria that are started by DSN1SDMP
meet the criteria specified in the FOR parameter.
v The TRACE that is started by DSN1SDMP is stopped by using the STOP TRACE
command.
v The job is canceled by the operator.
Authorization required
To execute this utility, the privilege set of the process must include one of the
following privileges or authorities:
v TRACE system privilege
v SYSOPR authority
v SYSADM authority
v MONITOR1 or MONITOR2 privileges (if you are using user-defined data sets)
The user who executes DSN1SDMP must have EXECUTE authority on the plan
that is specified in the trace-parameters of the START TRACE keyword.
Control statement
See “Syntax and options of the DSN1SDMP control statement” on page 757 for
DSN1SDMP syntax and option descriptions.
The DB2 subsystem name must be filled in by the user. The DSN
RUN command must specify a plan for which the user has EXECUTE
authority. DSN1SDMP does not execute the specified plan;
the plan is used only to connect to DB2.
Using DSN1SDMP
This section describes the following tasks that are associated with running the
DSN1SDMP utility:
“Assigning buffers”
“Generating a dump” on page 764
“Stopping or modifying DSN1SDMP traces” on page 764
Assigning buffers
The OPX trace destination assigns the next available OP buffer. Specify the OPX
destination, rather than a specific OPn buffer, for all traces that are to be recorded
in an OP buffer; doing so avoids the possibility of starting a trace to a buffer that
has already been assigned.
If a trace is started to an OPn buffer that has already been assigned, DSN1SDMP
waits indefinitely until the trace is manually stopped. The default for MONITOR-type
traces is the OPX destination (the next available OP buffer). Other trace types must
be explicitly directed to OP destinations via the DEST keyword of the START
TRACE command. DSN1SDMP interrogates the IFCAOPN field after the START
TRACE COMMAND call to determine if the trace was started to an OP buffer.
Trace records are written to the SDMPTRAC data set when the trace destination is
an OP buffer (see “Trace Destination” on page 758). The instrumentation facilities
component (IFC) writes trace records to the buffer and posts DSN1SDMP to read
the buffer when it fills to half of the buffer size.
You can specify the buffer size on the BUFSIZE keyword of the START TRACE
command. The default buffer size is 8 KB. All returned records are written to
SDMPTRAC.
If the number of generated trace records requires a larger buffer size than was
specified, you can lose some trace records. If this happens, error message
DSN2724I is issued.
Generating a dump
All of the following events must occur before DSN1SDMP generates a DB2 dump:
v DB2 produces a trace record that satisfies all of the selection criteria.
v An abend action (ABENDRET or ABENDTER) is specified.
v The AFTER and FOR conditions for the trace are satisfied.
If DSN1SDMP does not finish execution, you can stop the utility by issuing the
STOP TRACE command, as in the following example:
-STOP TRACE=P CLASS(32)
A STOP TRACE or MODIFY TRACE command that is entered from a console for
the trace that is started by DSN1SDMP causes immediate abnormal termination of
DSN1SDMP processing. The IFI READA function terminates with an appropriate IFI
termination message and reason code. Additional error messages and reason
codes that are associated with the DSN1SDMP STOP TRACE command vary
depending on the specific trace command that is entered by the console operator.
If the console operator terminates the original trace by using the STOP TRACE
command, the subsequent STOP TRACE command that is issued by DSN1SDMP
fails.
If the console operator enters a MODIFY TRACE command and processing of this
command completes before the STOP TRACE command is issued by DSN1SDMP,
the modified trace is also terminated.
/*
//**********************************************************************
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROG(DSN1SDMP) PLAN(DSNEDCL)
END
//*
Example 2: Abending and terminating agent on -904 SQL CODE. The example
in Figure 143 on page 766 specifies that DB2 is to start a performance trace (which
is indicated by the letter P) and activate only IFCID 58. To start only those IFCIDs
that are specified in the IFCID option, use trace classes 30-32. In this example,
trace class 32 is specified. The trace output is to be recorded in a generic
destination that uses the first free OPn slot, as indicated by the DEST option. These
START TRACE options are explained in greater detail in DB2 Command Reference.
The SELECT option indicates additional criteria for data in the trace record. In this
example, the SELECT option first specifies that the current-record pointer is to be
moved to the 4-byte field that is located 8 bytes past the start of the record. The
option then specifies that the utility is to directly compare the data that is 74 bytes
from the current-record pointer with the value X'FFFFFC78'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by the
ACTION keyword. In this example, the job is to abend and terminate with reason
code 00E60188. This action is to take place only once, as indicated by the FOR
option. FOR(1) is the default, and is therefore not required to be explicitly specified.
//SDMPIN DD *
* START ONLY IFCID 58, END SQL STATEMENT
START TRACE=P CLASS(32) IFCID(58) DEST(OPX)
FOR(1)
ACTION(ABENDTER(00E60188))
SELECT
* OFFSET TO FIRST DATA SECTION CONTAINING THE SQLCA.
P4,08
* SQLCODE -904, RESOURCE UNAVAILABLE
DR,74,X’FFFFFC78’
/*
Figure 143. Example job that abends and terminates agent on -904 SQL code
Example 3: Abending and retrying on RMID 20. The example in Figure 144
specifies that DB2 is to start a performance trace (which is indicated by the letter P)
and activate all IFCIDs in classes 3 and 8. The trace output is to be recorded in a
generic destination that uses the first free OPn slot, as indicated by the DEST
option. The TDATA (TRA) option specifies that a CPU header is to be placed into
the product section of each trace record. These START TRACE options are
explained in greater detail in DB2 Command Reference.
The SELECT option indicates additional criteria for data in the trace record. In this
example, the SELECT option first specifies that the current-record pointer is to be
placed at the 4-byte field that is located at the start of the record. The current
record pointer is then to be advanced the number of bytes that are indicated in the
2-byte field that is located at the current record pointer. The utility is then to directly
compare the data that is 4 bytes from the current-record pointer with the value
X'0025'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by the
ACTION keyword. In this example, the job is to abend and retry the agent.
Example 4: Generating a dump on SQL code -811. The example in Figure 145
specifies that DB2 is to start a performance trace (which is indicated by the letter P)
for trace class 3 and resource manager 22 (RMID(22)). The trace output is to be
recorded in the system management facility (SMF). The TDATA (COR,TRA) option
specifies that a trace header and a CPU header are to be placed into the product
section of each trace record. These START TRACE options are explained in greater
detail in DB2 Command Reference.
The SELECT option indicates additional criteria for data in the trace record. In this
example, the SELECT option first specifies that the current-record pointer is to be
placed at the 4-byte field that is located at the start of the record. The utility is then
to directly compare the data that is 2 bytes from the current-record pointer with the
value X'0116003A'. The current record pointer is then to be moved to the 4-byte
field that is located 8 bytes past the start of the current record. The utility is then to
directly compare the data that is 74 bytes from the current-record pointer with the
value X'FFFFFCD5'.
When a trace record passes the selection criteria of the START TRACE command
and SELECT keywords, DSN1SDMP is to perform the action that is specified by the
ACTION keyword. In this example, the job is to abend with reason code 00E60188
and retry the agent. This action is to take place only once, as indicated by the FOR
option. FOR(1) is the default, and is therefore not required to be explicitly specified.
AFTER(1) indicates that this action is to be performed the first time the trace point
is reached. AFTER(1) is also the default.
//SDMPIN DD *
START TRACE=P CLASS(3) RMID(22) DEST(SMF) TDATA(COR,TRA)
AFTER(1)
FOR(1)
SELECT
* POSITION TO HEADERS (QWHS IS ALWAYS FIRST)
P4,00
* CHECK QWHS 01, FOR RMID 16, IFCID 58
DR,02,X’0116003A’
* POSITION TO SECOND SECTION (1ST DATA SECTION)
P4,08
* COMPARE SQLCODE FOR 811
DR,74,X’FFFFFCD5’
ACTION(ABENDRET(00E60188))
/*
Figure 145. Example job that generates a dump on SQL code -811, RMID 16, IFCID 58
Example 5: Starting a second trace. The example job in Figure 146 on page 768
starts a trace on IFC 196 records. An IFC 196 record is written when a lock timeout
occurs. In this example, when a lock timeout occurs, DSN1SDMP is to start a
second trace, as indicated by the ACTION(STTRACE) option. This second trace is
to be an accounting trace, as indicated by the COMMAND START TRACE(ACCTG)
option. This trace is to include records only for the ACE that is associated with the
agent that timed out, as indicated by the FILTER(ACE) option. When the qualifying
accounting record is found, DSN1SDMP generates a dump.
//SDMPIN DD *
* START ONLY IFCID 196, TIMEOUT
START TRACE=P CLASS(32) IFCID(196) DEST(SMF)
AFTER(1)
* ACTION = START ACCOUNTING TRACE
ACTION(STTRACE)
* FILTER ON JUST 196 RECORDS...
SELECT
P4,00
DR,04,X’00C4’
* WHEN ACCOUNTING IS CUT, ABEND
ACTION2(ABENDRET(00E60188))
* START THE ACCOUNTING TRACE FILTER ON THE ACE OF THE AGENT
* THAT TIMED OUT
COMMAND
START TRACE(ACCTG) CLASS(32) IFCID(3) DEST(SMF)
* Filter can be for ACE or EB
FILTER(ACE)
/*
DSN1SDMP output
One intended use of this utility is to aid in determining and correcting system
problems. When diagnosing DB2, you might need to refer to licensed
documentation to interpret output from this utility. For more information about
diagnosing problems, see DB2 Diagnosis Guide and Reference.
Table 143 shows the minimum and maximum limits for numeric values.
Table 143. Numeric limits
Item Limit
Smallest SMALLINT value -32768
Largest SMALLINT value 32767
Smallest INTEGER value -2147483648
Largest INTEGER value 2147483647
Smallest REAL value About -7.2 × 10^75
Largest REAL value About 7.2 × 10^75
Smallest positive REAL value About 5.4 × 10^-79
Largest negative REAL value About -5.4 × 10^-79
Smallest FLOAT value About -7.2 × 10^75
Largest FLOAT value About 7.2 × 10^75
Table 145 shows the minimum and maximum limits for datetime values.
Table 145. Datetime limits
Item Limit
Smallest DATE value (shown in ISO format) 0001-01-01
Largest DATE value (shown in ISO format) 9999-12-31
Smallest TIME value (shown in ISO format) 00.00.00
Largest TIME value (shown in ISO format) 24.00.00
Smallest TIMESTAMP value 0001-01-01-00.00.00.000000
v Captures the utility output stream (SYSPRINT) into a created temporary table
(SYSIBM.SYSPRINT)
v Declares a cursor to select from SYSPRINT:
DECLARE SYSPRINT CURSOR WITH RETURN FOR
SELECT SEQNO, TEXT FROM SYSPRINT
ORDER BY SEQNO;
v Opens the SYSPRINT cursor and returns.
The calling program then fetches from the returned result set to obtain the captured
utility output.
Then, to execute the utility, you must use a privilege set that includes the
authorization to run the specified utility.
If the DSNUTILS stored procedure invokes a new utility, refer to Table 148 for
information about the default data dispositions that are specified for dynamically
allocated data sets. This table lists the DD name that is used to identify the data set
and the default dispositions for the data set by utility.
Table 148. Data dispositions for dynamically allocated data sets. For each DD name, an entry
other than "ignored" shows the data set status followed by the normal-termination and
abnormal-termination dispositions; for example, NEW CATLG CATLG corresponds to
DISP=(NEW,CATLG,CATLG).
CHECK
INDEX or REORG
CHECK CHECK COPY- MERGE- REBUILD REORG TABLE-
DD name DATA LOB COPY TOCOPY LOAD COPY INDEX INDEX SPACE UNLOAD
SYSREC ignored ignored ignored ignored OLD ignored ignored ignored NEW NEW
KEEP CATLG CATLG
KEEP CATLG CATLG
SYSDISC ignored ignored ignored ignored NEW ignored ignored ignored NEW ignored
CATLG CATLG
CATLG CATLG
SYSPUNCH ignored ignored ignored ignored ignored ignored ignored ignored NEW NEW
CATLG CATLG
CATLG CATLG
SYSCOPY ignored ignored NEW ignored NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG
SYSCOPY2 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY1 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY2 ignored ignored NEW NEW NEW NEW ignored ignored NEW ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSUT1 NEW NEW ignored ignored NEW ignored NEW NEW NEW ignored
DELETE DELETE DELETE DELETE CATLG DELETE
CATLG CATLG CATLG CATLG CATLG CATLG
SORTOUT NEW ignored ignored ignored NEW ignored ignored ignored NEW ignored
DELETE DELETE DELETE
CATLG CATLG CATLG
SYSMAP ignored ignored ignored ignored NEW ignored ignored ignored ignored ignored
CATLG
CATLG
SYSERR NEW ignored ignored ignored NEW ignored ignored ignored ignored ignored
CATLG CATLG
CATLG CATLG
FILTER ignored ignored NEW ignored ignored ignored ignored ignored ignored ignored
DELETE
CATLG
If the DSNUTILS stored procedure restarts a current utility, refer to Table 149 for
information about the default data dispositions that are specified for
dynamically-allocated data sets on RESTART. This table lists the DD name that is
used to identify the data set and the default dispositions for the data set by utility.
Table 149. Data dispositions for dynamically allocated data sets on RESTART. As in Table 148,
each entry other than "ignored" shows the data set status followed by the normal-termination
and abnormal-termination dispositions.
CHECK
INDEX or REORG
CHECK CHECK COPY- MERGE- REBUILD REORG TABLE-
DD name DATA LOB COPY TOCOPY LOAD COPY INDEX INDEX SPACE UNLOAD
SYSREC ignored ignored ignored ignored OLD ignored ignored ignored MOD MOD
KEEP CATLG CATLG
KEEP CATLG CATLG
SYSDISC ignored ignored ignored ignored MOD ignored ignored ignored MOD ignored
CATLG CATLG
CATLG CATLG
SYSPUNCH ignored ignored ignored ignored ignored ignored ignored ignored MOD MOD
CATLG CATLG
CATLG CATLG
SYSCOPY ignored ignored MOD ignored MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG
SYSCOPY2 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY1 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSRCPY2 ignored ignored MOD MOD MOD MOD ignored ignored MOD ignored
CATLG CATLG CATLG CATLG CATLG
CATLG CATLG CATLG CATLG CATLG
SYSUT1 MOD MOD ignored ignored MOD ignored MOD MOD MOD ignored
DELETE DELETE DELETE DELETE CATLG DELETE
CATLG CATLG CATLG CATLG CATLG CATLG
SORTOUT MOD ignored ignored ignored MOD ignored ignored ignored MOD ignored
DELETE DELETE DELETE
CATLG CATLG CATLG
SYSMAP ignored ignored ignored ignored MOD ignored ignored ignored ignored ignored
CATLG
CATLG
SYSERR MOD ignored ignored ignored MOD ignored ignored ignored ignored ignored
CATLG CATLG
CATLG CATLG
FILTER ignored ignored MOD ignored ignored ignored ignored ignored ignored ignored
DELETE
CATLG
CURRENT
Restarts the utility at the last commit point.
PHASE
Restarts the utility at the beginning of the currently stopped phase.
Use the DISPLAY UTILITY command to determine the currently
stopped phase.
PREVIEW
Executes in PREVIEW mode the utility control statements that
follow. While in PREVIEW mode, DB2 parses all utility control
statements for syntax errors, but normal utility execution does not
take place. If the syntax is valid, DB2 expands all LISTDEF lists
and TEMPLATE data set name expressions that appear in SYSIN
and prints the results to the SYSPRINT data set. DB2 evaluates
and expands all LISTDEF definitions into an actual list of table
spaces or index spaces. DB2 also evaluates TEMPLATE data set
name expressions into actual data set names through variable
substitution. DB2 also expands lists from the SYSLISTD DD and
TEMPLATE data set name expressions from the SYSTEMPL DD
that is referenced by a utility invocation.
Absence of the PREVIEW keyword turns off preview processing
with one exception. The absence of this keyword does not override
the PREVIEW JCL parameter which, if specified, remains in effect
for the entire job step.
This option is identical to the PREVIEW JCL parameter.
utstmt Specifies the utility control statements.
This is an input parameter of type VARCHAR(32704) in EBCDIC.
retcode
Specifies the utility highest return code.
This is an output parameter of type INTEGER.
utility-name
Specifies the utility that you want to invoke.
This is an input parameter of type VARCHAR(20) in EBCDIC.
Because DSNUTILS allows only a single utility here, dynamic support of
data set allocation is limited. Specify only a single utility that requires data
set allocation in the utstmt parameter. Select the utility name from the
following list:
ANY6
CHECK DATA
CHECK INDEX
CHECK LOB
COPY
COPYTOCOPY
DIAGNOSE
LOAD
MERGECOPY
MODIFY RECOVERY
MODIFY STATISTICS
6. Use ANY to indicate that TEMPLATE dynamic allocation is to be used. This value suppresses the dynamic allocation that is
normally performed by DSNUTILS.
QUIESCE
REBUILD INDEX
RECOVER
REORG INDEX
REORG LOB
REORG TABLESPACE
REPAIR
REPORT RECOVERY
REPORT TABLESPACESET
RUNSTATS INDEX
RUNSTATS TABLESPACE
STOSPACE
UNLOAD
discspace
Specifies the number of cylinders to use as the primary space allocation for
the discdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
pnchdsn
Specifies the cataloged data set name that REORG TABLESPACE
UNLOAD EXTERNAL or REORG TABLESPACE DISCARD uses to hold the
generated LOAD utility control statements. If you specify a value for
pnchdsn, the data set is allocated to the SYSPUNCH DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the PUNCHDDN parameter for REORG TABLESPACE, the
specified ddname value must be SYSPUNCH.
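For example, when pnchdsn is supplied, the utility statement that is passed in the
utstmt parameter might be written as in the following sketch; the table space name is
a placeholder, and PUNCHDDN must name SYSPUNCH as described above:
REORG TABLESPACE DSN8D81A.DSN8S81E
  UNLOAD EXTERNAL UNLDDN(SYSREC) PUNCHDDN(SYSPUNCH)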
pnchdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the pnchdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
pnchspace
Specifies the number of cylinders to use as the primary space allocation for
the pnchdsn data set. The secondary space allocation is 10% of the
primary space allocation.
This is an input parameter of type SMALLINT.
copydsn1
Specifies the name of the required target (output) data set, which is needed
when you specify the COPY, COPYTOCOPY, or MERGECOPY utilities. It is
optional for LOAD and REORG TABLESPACE. If you specify copydsn1, the
data set is allocated to the SYSCOPY DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the COPYDDN parameter for COPY, COPYTOCOPY,
MERGECOPY, LOAD, or REORG TABLESPACE, the specified ddname1
value must be SYSCOPY.
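For example, when copydsn1 is supplied for a COPY invocation, the corresponding
utstmt string might be written as in this sketch (the table space name is a
placeholder):
COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(SYSCOPY)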
copydevt1
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the copydsn1 data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
copyspace1
Specifies the number of cylinders to use as the primary space allocation for
the copydsn1 data set. The secondary space allocation is 10% of the
primary space allocation.
This is an input parameter of type SMALLINT.
copydsn2
Specifies the name of the cataloged data set that is used as a target
(output) data set for the backup copy. It is optional for COPY,
COPYTOCOPY, MERGECOPY, LOAD, and REORG TABLESPACE. If you
specify copydsn2, the data set is allocated to the SYSCOPY2 DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the MAPDDN parameter for LOAD, the specified ddname
value must be SYSMAP.
mapdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the mapdsn data set resides.
This is an input parameter of type CHAR(8).
mapspace
Specifies the number of cylinders to use as the primary space allocation for
the mapdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
errdsn Specifies the name of the cataloged data set that is required as a work data
set for error processing. It is required for CHECK DATA, and it is optional
for LOAD. If you specify errdsn, the data set is allocated to the SYSERR
DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the ERRDDN parameter for CHECK DATA or LOAD, the
specified ddname value must be SYSERR.
errdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the errdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
errspace
Specifies the number of cylinders to use as the primary space allocation for
the errdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
filtrdsn
Specifies the name of the cataloged data set that COPY CONCURRENT uses as
a work data set. This parameter is optional. If you specify filtrdsn, the
data set is allocated to the FILTER DD name.
This is an input parameter of type VARCHAR(54) in EBCDIC.
If you specify the FILTERDDN parameter for COPY, the specified ddname
value must be FILTER.
filtrdevt
Specifies a unit address, a generic device type, or a user-assigned group
name for a device on which the filtrdsn data set resides.
This is an input parameter of type CHAR(8) in EBCDIC.
filtrspace
Specifies the number of cylinders to use as the primary space allocation for
the filtrdsn data set. The secondary space allocation is 10% of the primary
space allocation.
This is an input parameter of type SMALLINT.
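As a hedged illustration of how the work data set triplets that are described
above might be filled in, the following values supply an error data set for
CHECK DATA or LOAD; the data set name, device type, and space value are
placeholders and are not taken from this book:
ERRDSN='DB2V810.SAMPLE.SYSERR';
ERRDEVT='SYSDA';
ERRSPACE=5;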
DSNUTILS output
DB2 creates the result set according to the DECLARE statement that is shown
under “Example of declaring a cursor to select from SYSPRINT” on page 778.
If DSNUTILB abends, the abend codes are returned as DSNUTILS return codes.
| The BIND PACKAGE statement for the DSNUTILU stored procedure determines the
| character set of the resulting utility SYSPRINT output that is placed in the
| SYSIBM.SYSPRINT table. If ENCODING(EBCDIC) is specified, the SYSPRINT
| contents are in EBCDIC. If ENCODING(UNICODE) is specified, the SYSPRINT
| contents are in Unicode. The default install job, DSNTIJSG, is shipped with
| ENCODING(EBCDIC).
| To execute the utility, you must use a privilege set that includes the
| authorization to run the specified utility.
Figure 147. Sample PROC for running the WLM-established stored procedures
| DSNUTILU output
| DB2 creates the result set according to the DECLARE statement that is shown
| under “Example of declaring a cursor to select from SYSPRINT” on page 778.
DSNACCQC is a sample stored procedure that gives you information about your
table spaces and indexes. You can use DSNACCQC to obtain the following types of
information:
v Table spaces and indexes on which RUNSTATS needs to be run
v Table spaces and indexes on which the STOSPACE utility has not been run
v Table spaces and indexes that exceed the primary space allocation
v Table spaces with more than a user-specified percentage of relocated rows
v Table spaces with more than a user-specified percentage of space that is
occupied by dropped tables
v Table spaces with table space locking
v Simple table spaces with more than one table
v Indexes with clustering problems
v Indexes with more than a user-specified number of index levels
v Indexes with more than a user-specified LEAFDIST value
v Type 1 indexes
v Indexes with long RID chains that are not unique
v Indexes that are not used in static SQL statements
The owner of the package or plan that contains the CALL statement must also have
SELECT authority on the following catalog tables:
v SYSIBM.SYSINDEXES
v SYSIBM.SYSINDEXPART
v SYSIBM.SYSPACKDEP
v SYSIBM.SYSPLANDEP
v SYSIBM.SYSTABLEPART
v SYSIBM.SYSTABLES
v SYSIBM.SYSTABLESPACE
4 Obtains information about simple table spaces with more than one
table.
5 Obtains information about table spaces on which the STOSPACE utility
has not been run.
6 Obtains information about table spaces that have exceeded their
allocated primary space quantity.
qualifier1
Narrows the search for objects that match query-type to a specified set of
database names. qualifier1 is an input parameter of type VARCHAR(255).
The format of this parameter is the same as the format of pattern-expression in
an SQL LIKE predicate. pattern-expression is described in Chapter 4 of DB2
SQL Reference.
For example, to obtain information about table spaces or indexes in all
databases with names that begin with ACCOUNT, specify this value for
qualifier1:
ACCOUNT%
qualifier2
Narrows the search for objects that match query-type to a specified set of
creator names. A creator name is the value of column CREATOR in
SYSIBM.SYSTABLESPACE for table space queries, or SYSIBM.SYSINDEXES
for index queries. qualifier2 is an input parameter of type VARCHAR(255).
The format of this parameter is the same as the format of pattern-expression in
an SQL LIKE predicate. pattern-expression is described in Chapter 4 of DB2
SQL Reference.
For example, to obtain information about table spaces or indexes with creators
that begin with DSN8, specify this value for qualifier2 (a catalog query that
previews patterns like these follows the option descriptions):
DSN8%
varparm1, varparm2, varparm3
The meanings of these parameters vary with object-type and query-type. See
Table 150 on page 794 for the meaning of each parameter for table space
queries. See Table 151 on page 794 for the meaning of each parameter for
index queries.
These are input parameters of type VARCHAR(255).
varparm4 through varparm10
These variables are reserved for future use. Specify an empty string ('') for each
parameter value.
These are input parameters of type VARCHAR(255).
return-code
Specifies the return code from the DSNACCQC call, which is one of the
following values:
0 DSNACCQC executed successfully.
12 An error occurred during DSNACCQC execution.
message-text
Contains messages from the DSNACCQC call. Each message line has a maximum
length of 121 bytes. The last byte of each line is a new-line character.
message-text is an output parameter of type VARCHAR(1331).
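Because qualifier1 and qualifier2 are LIKE patterns, an ordinary catalog query
can preview the objects that a pair of patterns selects. The following minimal
sketch reuses the ACCOUNT% and DSN8% values from the qualifier examples; the
pattern values are illustrations only, not recommendations:
SELECT DBNAME, NAME, CREATOR          -- table spaces whose database and creator match the patterns
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME LIKE 'ACCOUNT%'
    AND CREATOR LIKE 'DSN8%';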
Table 150. Variable input parameter values for DSNACCQC table space queries
query-type Parameter Value
0 varparm1 Specifies the table spaces for which to collect information. This
is a timestamp in character format (yyyy-mm-dd-
hh.mm.ss.nnnnnn). Information is collected for table spaces if
RUNSTATS was run on them before this time or was never run.
0 varparm2 Not used. Specify an empty string ('').
0 varparm3 Not used. Specify an empty string ('').
1 varparm1 Character representation of a number between 0 and 100, which
indicates the maximum acceptable percentage of relocated table
rows. DSNACCQC returns information about table spaces for
which (((FARINDREF+NEARINDREF)⁄CARD)*100)>varparm1.
1 varparm2 Not used. Specify an empty string ('').
1 varparm3 Not used. Specify an empty string ('').
2 varparm1 Character representation of a number between 0 and 100, which
indicates the maximum acceptable percentage space that is
occupied by rows of dropped tables. DSNACCQC returns
information about table spaces for which PERCDROP>varparm1.
2 varparm2 Not used. Specify an empty string ('').
2 varparm3 Not used. Specify an empty string ('').
3 varparm1 Not used. Specify an empty string ('').
3 varparm2 Not used. Specify an empty string ('').
3 varparm3 Not used. Specify an empty string ('').
4 varparm1 Not used. Specify an empty string ('').
4 varparm2 Not used. Specify an empty string ('').
4 varparm3 Not used. Specify an empty string ('').
5 varparm1 Not used. Specify an empty string ('').
5 varparm2 Not used. Specify an empty string ('').
5 varparm3 Not used. Specify an empty string ('').
6 varparm1 Not used. Specify an empty string ('').
6 varparm2 Not used. Specify an empty string ('').
6 varparm3 Not used. Specify an empty string ('').
Table 151. Variable input parameter values for DSNACCQC index queries
query-type Parameter Value
0 varparm1 Specifies the indexes for which to collect information. This is a
timestamp in character format (yyyy-mm-dd-hh.mm.ss.nnnnnn).
Information is collected for indexes if RUNSTATS was run on
them before this time or was never run.
0 varparm2 Not used. Specify an empty string ('').
0 varparm3 Not used. Specify an empty string ('').
1 varparm1 Character representation of a number between 0 and 100, which
indicates the maximum acceptable percentage of table rows that
are far from their optimal position. DSNACCQC returns
information about indexes for which
((FAROFFPOSF⁄CARDF)*100)>varparm1.
1 varparm2 Character representation of a number between 0 and 100, which
indicates the maximum acceptable percentage of table rows that
are near but not at their optimal position. DSNACCQC returns
information about indexes for which
((NEAROFFPOSF⁄CARDF)*100)>varparm2.
1 varparm3 Character representation of a number between 0 and 100, which
indicates the minimum acceptable percentage of table rows that
are in clustering order. DSNACCQC returns information about
indexes for which CLUSTERRATIO<varparm3.
2 varparm1 Character representation of a number that indicates the
maximum acceptable number of index levels. DSNACCQC
returns information about indexes for which
NLEVELS>varparm1.
2 varparm2 Not used. Specify an empty string ('').
2 varparm3 Not used. Specify an empty string ('').
3 varparm1 Character representation of a number that indicates the
maximum acceptable value for 100 times the average number of
leaf pages between successive active leaf pages of the index.
DSNACCQC returns information about indexes for which
LEAFDIST>varparm1.
3 varparm2 Not used. Specify an empty string ('').
3 varparm3 Not used. Specify an empty string ('').
4 varparm1 Not used. Specify an empty string ('').
4 varparm2 Not used. Specify an empty string ('').
4 varparm3 Not used. Specify an empty string ('').
5 varparm1 Character representation of a number that indicates the
maximum acceptable average length for RID chains.
DSNACCQC returns information about indexes for which
((CARDF*1.0)/FULLKEYCARDF)>varparm1.
5 varparm2 Not used. Specify an empty string ('').
5 varparm3 Not used. Specify an empty string ('').
6 varparm1 Not used. Specify an empty string ('').
6 varparm2 Not used. Specify an empty string ('').
6 varparm3 Not used. Specify an empty string ('').
7 varparm1 Not used. Specify an empty string ('').
7 varparm2 Not used. Specify an empty string ('').
7 varparm3 Not used. Specify an empty string ('').
8 varparm1 Not used. Specify an empty string ('').
8 varparm2 Not used. Specify an empty string ('').
8 varparm3 Not used. Specify an empty string ('').
OBJTYPE=0;
QUERY=0;
DBQUAL='DSNCC%';
CREATQUAL='%';
VARPARM1='0001-01-01-00.00.00.000000';
VARPARM2='';
VARPARM3='';
VARPARM4='';
VARPARM5='';
VARPARM6='';
VARPARM7='';
VARPARM8='';
VARPARM9='';
VARPARM10='';
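The values above correspond to a CALL statement such as the following sketch.
It assumes that the parameter order matches the option descriptions
(object-type, query-type, qualifier1, qualifier2, varparm1 through varparm10,
and then the output parameters) and omits the schema qualifier, which depends
on how the procedure was installed; verify both against the DSNACCQC syntax
diagram before you use it:
CALL DSNACCQC
  (0, 0, 'DSNCC%', '%',                        -- object-type, query-type, qualifier1, qualifier2
   '0001-01-01-00.00.00.000000',               -- varparm1: RUNSTATS cutoff timestamp
   '', '', '', '', '', '', '', '', '',         -- varparm2 through varparm10
   ?, ?);                                      -- return-code, message-text (output)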
DSNACCQC output
In addition to the output parameters described in “DSNACCQC option descriptions”
on page 792, DSNACCQC returns one result set. The format of the result set
varies, depending on whether you are retrieving index information (object-type=0) or
table space information (object-type=1).
Table 152 shows the columns of a result set row and the DB2 catalog table that is
the source of information for each column for table space queries.
Table 152. Result set columns for DSNACCQC table space queries
Column name Data type DB2 catalog table that is the data source
NAME CHAR(8) SYSTABLESPACE
| CREATOR VARCHAR(128) SYSTABLESPACE
BPOOL CHAR(8) SYSTABLESPACE
LOCKRULE CHAR(1) SYSTABLESPACE
LOCKMAX INTEGER SYSTABLESPACE
CLOSERULE CHAR(1) SYSTABLESPACE
ENCODING_SCHEME CHAR(1) SYSTABLESPACE
LOCKPART CHAR(1) SYSTABLESPACE
MAXROWS SMALLINT SYSTABLESPACE
PARTITIONS SMALLINT SYSTABLESPACE
TYPE CHAR(1) SYSTABLESPACE
SEGSIZE SMALLINT SYSTABLESPACE
SPACE INTEGER SYSTABLESPACE
NTABLES SMALLINT SYSTABLESPACE
STATUS CHAR(1) SYSTABLESPACE
STATSTIME TIMESTAMP SYSTABLESPACE
ERASERULE CHAR(1) SYSTABLESPACE
DBNAME CHAR(8) SYSTABLESPACE
DSETPASS CHAR(8) SYSTABLESPACE
LOG CHAR(1) SYSTABLESPACE
DSSIZE INTEGER SYSTABLESPACE
SBCS_CCSID INTEGER SYSTABLESPACE
Table 153 shows the columns of a result set row and the DB2 catalog table that is
the source of information for index queries.
Table 153. Result set columns for DSNACCQC index queries
Column name Data type DB2 catalog table that is the data source
| CREATOR VARCHAR(128) SYSINDEXES
| NAME VARCHAR(128) SYSINDEXES
| TBCREATOR VARCHAR(128) SYSINDEXES
| TBNAME VARCHAR(128) SYSINDEXES
UNIQUERULE CHAR(1) SYSINDEXES
INDEXTYPE CHAR(1) SYSINDEXES
INDEXSPACE CHAR(8) SYSINDEXES
CLUSTERING CHAR(1) SYSINDEXES
ERASERULE CHAR(1) SYSINDEXES
CLOSERULE CHAR(1) SYSINDEXES
COLCOUNT SMALLINT SYSINDEXES
DBID SMALLINT SYSINDEXES
DBNAME CHAR(8) SYSINDEXES
BPOOL CHAR(8) SYSINDEXES
PGSIZE SMALLINT SYSINDEXES
DSETPASS CHAR(8) SYSINDEXES
PIECESIZE INTEGER SYSINDEXES
COPY CHAR(1) SYSINDEXES
PARTITION_COUNT INTEGER SYSINDEXPART (see note 1)
Notes:
1. The value of PARTITION_COUNT is COUNT(DISTINCT PARTITION) for partitioning
indexes or 0 for nonpartitioning indexes. The PARTITION column is in
SYSIBM.SYSINDEXPART.
To obtain the information from the result set, you can write your client program to
retrieve information from one result set with known contents. However, for greater
flexibility, you might want to write your client program to retrieve data from an
unknown number of result sets with unknown contents. Both techniques are shown
in Part 6 of DB2 Application Programming and SQL Guide.
The owner of the package or plan that contains the CALL statement must also have
SELECT authority on the following catalog tables:
v SYSIBM.SYSCOPY
v SYSIBM.SYSINDEXES
v SYSIBM.SYSINDEXPART
v SYSIBM.SYSTABLEPART
v SYSIBM.SYSTABLES
v SYSIBM.SYSTABLESPACE
REORG INDEX
Obtains information about index partitions on which REORG needs to be
run.
EXTENTS TABLESPACE
Obtains information about table space partitions that have used more than
a user-specified number of extents.
EXTENTS INDEX
Obtains information about index partitions that have used more than a
user-specified number of extents.
search-condition
Narrows the search for objects that match query-type. search-condition is an
input parameter of type VARCHAR(4096).
The format of this parameter is the same as the format of search-condition in
an SQL where-clause. search-condition is described in Chapter 4 of DB2 SQL
Reference.
If the call is executed to obtain table space information, search-condition can
include any column in SYSIBM.SYSTABLESPACE. If the call is executed to
obtain index information, search-condition can include any column in
SYSIBM.SYSINDEXES. Each column name must be preceded by the string 'A.'.
For example, to obtain information about table spaces with creator ADMF001,
specify this value for search-condition:
A.CREATOR='ADMF001'
maximum-days
Specifies the maximum number of days that are to elapse between executions
of the REORG, RUNSTATS, or COPY utility. DSNACCAV uses this value as the
criterion for determining which table space or index partitions need to have the
utility that you specified in query-type run against them. This value can be
specified if query-type has one of the following values:
v COPY TABLESPACE
v COPY INDEX
v RUNSTATS TABLESPACE
v RUNSTATS INDEX
v REORG TABLESPACE
v REORG INDEX
maximum-days is an input parameter of type INTEGER.
image-copy-type
Specifies the types of image copies about which DSNACCAV is to give you
information. This value can be specified if query-type is COPY TABLESPACE or
COPY INDEX. image-copy-type is an input parameter of type CHAR(1). The
contents must be one of the following values:
B Specifies that you want information about partitions for which the most
recent image copy was either a full image copy or an incremental
image copy
F Specifies that you want information about partitions for which the most
recent image copy was a full image copy
I Specifies that you want information about partitions for which the most
recent image copy was an incremental image copy
maximum-extents
Specifies the maximum number of extents that a table space or index partition
is to use. This value can be specified if query-type is one of the following
values:
v REORG TABLESPACE
v REORG INDEX
v EXTENTS TABLESPACE
v EXTENTS INDEX
maximum-extents is an input parameter of type INTEGER.
return-code
Specifies the return code from the DSNACCAV call, which is one of the
following values:
0 DSNACCAV executed successfully.
12 An error occurred during DSNACCAV execution.
QUERY='RESTRICT TABLESPACE';
IND1=0;
Criteria='A.DBNAME LIKE ''DSNCC%''';
IND2=0;
numdays=0;
IND3=0;
optype='';
IND4=0;
extents=0;
IND5=0;
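Those values correspond to a CALL statement such as the following sketch. The
parameter order here is assumed to follow the option descriptions (query-type,
search-condition, maximum-days, image-copy-type, maximum-extents) followed by
the output parameters, and the schema qualifier is omitted; check the DSNACCAV
syntax diagram for the exact parameter list:
CALL DSNACCAV
  ('RESTRICT TABLESPACE',                      -- query-type
   'A.DBNAME LIKE ''DSNCC%''',                 -- search-condition
   0,                                          -- maximum-days
   '',                                         -- image-copy-type
   0,                                          -- maximum-extents
   ?, ?);                                      -- output parameters (return-code, message-text)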
DSNACCAV output
In addition to the output parameters that are described in “DSNACCAV option
descriptions” on page 799, DSNACCAV returns two result sets.
The first result set contains the text from commands that DB2 executes, formatted
into 80-byte records.
The second result set contains partition information. The format of the second result
set varies, depending on whether you request table space or index information.
Table 155 shows the columns of a result set row and the DB2 catalog table that is
the source of information for each column for table space queries.
Table 155. Result set row for DSNACCAV table space queries
Column name Data type DB2 catalog table that is the data source
NAME CHAR(8) SYSTABLESPACE
CREATOR CHAR(8) SYSTABLESPACE
BPOOL CHAR(8) SYSTABLESPACE
LOCKRULE CHAR(1) SYSTABLESPACE
LOCKMAX INTEGER SYSTABLESPACE
CLOSERULE CHAR(1) SYSTABLESPACE
ENCODING_SCHEME CHAR(1) SYSTABLESPACE
LOCKPART CHAR(1) SYSTABLESPACE
MAXROWS SMALLINT SYSTABLESPACE
PARTITIONS SMALLINT SYSTABLESPACE
TYPE CHAR(1) SYSTABLESPACE
SEGSIZE SMALLINT SYSTABLESPACE
SPACE INTEGER SYSTABLESPACE
NTABLES SMALLINT SYSTABLESPACE
STATUS CHAR(1) SYSTABLESPACE
STATSTIME TIMESTAMP SYSTABLESPACE
ERASERULE CHAR(1) SYSTABLESPACE
DBNAME CHAR(8) SYSTABLESPACE
DSETPASS CHAR(8) SYSTABLESPACE
LOG CHAR(1) SYSTABLESPACE
DSSIZE INTEGER SYSTABLESPACE
PARTITION SMALLINT SYSTABLEPART
OPERATIONTIME TIMESTAMP SYSCOPY or SYSTABLEPART (see note 1)
DAYS INTEGER SYSCOPY or SYSTABLEPART (see note 2)
PERCOFFPOS SMALLINT SYSINDEXPART (see note 3)
PERCINDREF SMALLINT SYSTABLEPART (see note 4)
PERCDROP SMALLINT SYSTABLEPART
EXTENTS INTEGER None (see note 5)
REASON CHAR(18) None (see note 6)
SBCS_CCSID INTEGER SYSTABLESPACE
Notes:
1. If query-type is COPY TABLESPACE or REORG TABLESPACE, the value of
OPERATIONTIME is the value of the TIMESTAMP column in SYSIBM.SYSCOPY.
If query-type is RUNSTATS TABLESPACE, the value of OPERATIONTIME is the value of
the TIMESTAMP column in SYSIBM.SYSTABLEPART.
2. DAYS is the number of days since the last invocation of the utility. This column is derived
from the OPERATIONTIME column.
3. PERCOFFPOS=(NEAROFFPOSF+FAROFFPOSF)*100⁄CARDF
4. PERCINDREF=(NEARINDREF+FARINDREF)*100⁄CARD
5. EXTENTS is the number of data set extents that the partition is using.
6. REASON is the reason that the row is in the result set. See Table 156 for values of
REASON for each value of query-type.
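Because the PERCOFFPOS and PERCINDREF values in notes 3 and 4 are derived from
catalog statistics, you can compute comparable figures with an ordinary catalog
query. The following minimal sketch computes the indirect-reference percentage
and PERCDROP from SYSIBM.SYSTABLEPART; it is an illustration only and is not
part of DSNACCAV:
SELECT DBNAME, TSNAME, PARTITION,
       (NEARINDREF + FARINDREF) * 100 / CARD AS PERCINDREF,   -- formula from note 4
       PERCDROP
  FROM SYSIBM.SYSTABLEPART
  WHERE CARD > 0;                      -- skip partitions with no statistics (CARD = -1) and avoid division by zero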
Table 156 shows the values of the REASON column for each query-type value for
table space queries.
Table 156. Values of the REASON result set column for table space queries
query-type           REASON value                  REASON meaning
COPY TABLESPACE      Status from DISPLAY           Table space is in restricted status COPY
                     DATABASE command output
COPY TABLESPACE      DAYS                          The number of days since the last COPY occurred
                                                   exceeds the maximum-days value
RESTRICT TABLESPACE  Status from DISPLAY           Table space is in restricted status
                     DATABASE command output
RUNSTATS TABLESPACE  LOAD                          LOAD was run after RUNSTATS
RUNSTATS TABLESPACE  REORG                         REORG was run after RUNSTATS
RUNSTATS TABLESPACE  RECOVER                       RECOVER was run after RUNSTATS
RUNSTATS TABLESPACE  DAYS                          The number of days since the last RUNSTATS
                                                   occurred exceeds the maximum-days value
REORG TABLESPACE     LIMIT                         One of the following reasons:
                                                   v A clustering index meets this condition:
                                                     ((NEAROFFPOSF+FAROFFPOSF)*100⁄CARDF)>10
                                                   v A partition meets either of these conditions:
                                                     ((NEARINDREF+FARINDREF)*100⁄CARD)>10
                                                     PERCDROP>10
REORG TABLESPACE     DAYS                          The number of days since the last REORG occurred
                                                   exceeds the maximum-days value
REORG TABLESPACE     EXTENTS                       Number of partition extents exceeds
                                                   maximum-extents value
REORG TABLESPACE     Status from DISPLAY           The table space is in restricted status REORP
                     DATABASE command output
EXTENTS TABLESPACE   EXTENTS                       Number of partition extents exceeds
                                                   maximum-extents value
Table 157 shows the columns of a result set row and the DB2 catalog table that is
the source of information for each column for index queries.
Table 157. Result set row for DSNACCAV index queries
Column name Data type DB2 catalog table that is the data source
CREATOR CHAR(8) SYSINDEXES
NAME VARCHAR(18) SYSINDEXES
TBCREATOR CHAR(8) SYSINDEXES
TBNAME VARCHAR(18) SYSINDEXES
UNIQUERULE CHAR(1) SYSINDEXES
INDEXTYPE CHAR(1) SYSINDEXES
INDEXSPACE CHAR(8) SYSINDEXES
CLUSTERING CHAR(1) SYSINDEXES
ERASERULE CHAR(1) SYSINDEXES
CLOSERULE CHAR(1) SYSINDEXES
COLCOUNT SMALLINT SYSINDEXES
DBID SMALLINT SYSINDEXES
DBNAME CHAR(8) SYSINDEXES
BPOOL CHAR(8) SYSINDEXES
PGSIZE SMALLINT SYSINDEXES
DSETPASS CHAR(8) SYSINDEXES
PIECESIZE INTEGER SYSINDEXES
COPY CHAR(1) SYSINDEXES
PARTITIONS SMALLINT SYSINDEXPART (see note 1)
PARTITION SMALLINT SYSINDEXPART
OPERATIONTIME TIMESTAMP SYSCOPY or SYSINDEXPART (see note 2)
DAYS INTEGER SYSCOPY or SYSTABLEPART (see note 3)
LEAFDIST INTEGER SYSINDEXPART
EXTENTS INTEGER None (see note 4)
REASON CHAR(18) None (see note 5)
Notes:
1. PARTITIONS is derived from the PARTITION column through this SELECT statement:
SELECT IXNAME,IXCREATOR,MAX(PARTITION) AS PARTITIONS
FROM SYSIBM.SYSINDEXPART
GROUP BY IXNAME,IXCREATOR;
2. If query-type is COPY INDEX or REORG INDEX, the value of OPERATIONTIME is the
value of the TIMESTAMP column in SYSIBM.SYSCOPY.
If query-type is RUNSTATS INDEX, the value of OPERATIONTIME is the value of the
TIMESTAMP column in SYSIBM.SYSINDEXPART.
3. DAYS is the number of days since the last invocation of the utility. This column is derived
from the OPERATIONTIME column.
4. EXTENTS is the number of data set extents that the partition is using.
5. REASON is the reason that the row is in the result set. See Table 158 for values of
REASON for each value of query-type.
Table 158 shows the values of the REASON column for each query-type value for
index queries.
Table 158. Values of the REASON result set column for index queries
query-type           REASON value                  REASON meaning
COPY INDEX           LIMIT                         Index is in restricted status ICOPY
COPY INDEX           DAYS                          The number of days since the last COPY occurred
                                                   exceeds the maximum-days value
RESTRICT INDEX       Status from DISPLAY           Index is in restricted status
                     DATABASE command output
RUNSTATS INDEX       TABLESPACE LOAD               LOAD was run on the associated table space after
                                                   RUNSTATS
RUNSTATS INDEX       TABLESPACE REORG              REORG was run on the associated table space after
                                                   RUNSTATS
RUNSTATS INDEX       TABLESPACE RECOVER            RECOVER was run on the associated table space
                                                   after RUNSTATS
RUNSTATS INDEX       REBUILT                       REBUILD was run after RUNSTATS
RUNSTATS INDEX       DAYS                          The number of days since the last RUNSTATS
                                                   occurred exceeds maximum-days value
REORG INDEX          LIMIT                         LEAFDIST exceeds the recommended limit of 200
REORG INDEX          DAYS                          The number of days since the last REORG occurred
                                                   exceeds maximum-days value
REORG INDEX          EXTENTS                       Number of partition extents exceeds
                                                   maximum-extents value
EXTENTS INDEX        EXTENTS                       Number of partition extents exceeds
                                                   maximum-extents value
The number of rows that are returned in the second result set varies with
query-type. Table 159 on page 807 shows the number and types of rows that are
returned for each query type.
Table 159. Rows of the second DSNACCAV result set for each query type (continued)
query-type           Rows returned
REORG INDEX          One row for:
                     v Each index partition for which LEAFDIST>200
                     v Each index partition for which the number of data set extents
                       exceeds the value that is specified by maximum-extents
                     v Each index partition on which REORG was not run within the
                       number of days specified by the maximum-days parameter
EXTENTS TABLESPACE   One row for each table space partition for which the number of
                     data set extents exceeds the value that is specified by
                     maximum-extents
EXTENTS INDEX        One row for each index partition for which the number of data
                     set extents exceeds the value that is specified by
                     maximum-extents
To obtain the information from the result sets, you can write your client program to
retrieve information from two result sets with known contents. However, for greater
flexibility, you might want to write your client program to retrieve data from an
unknown number of result sets with unknown contents. Both techniques are shown
in Part 6 of DB2 Application Programming and SQL Guide.
DSNACCOR uses the set of criteria that are shown in “DSNACCOR formulas for
recommending actions” on page 818 to evaluate table spaces and index spaces. By
default, DSNACCOR evaluates all table spaces and index spaces in the subsystem
that have entries in the real-time statistics tables. However, you can override this
default through input parameters.
the recommended action can be performed on that object. For example, before
you can perform an image copy on an index, the index must have the COPY
YES attribute.
DSNACCOR creates and uses declared temporary tables. Therefore, before you
can invoke DSNACCOR, you need to create a TEMP database and segmented
table spaces in the TEMP database. For information about creating TEMP
databases and table spaces, see CREATE DATABASE and CREATE TABLESPACE
in Chapter 5 of DB2 SQL Reference.
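As a minimal sketch of that setup, the following statements create a TEMP
database and one segmented table space in it. The names TEMPDB and TEMPTS and
the SEGSIZE and BUFFERPOOL values are placeholders; adjust them for your
installation:
CREATE DATABASE TEMPDB AS TEMP;                -- TEMP database for declared temporary tables
CREATE TABLESPACE TEMPTS IN TEMPDB
       SEGSIZE 32                              -- the table space must be segmented
       BUFFERPOOL BP0;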
You should bind the package for DSNACCOR with isolation UR to avoid lock
contention. You can find the installation steps for DSNACCOR in job DSNTIJSG.
The owner of the package or plan that contains the CALL statement must also
have:
v SELECT authority on the real-time statistics tables
v The DISPLAY system privilege
| CRChangesPct
| Specifies a criterion for recommending a full image copy on a table space or
| index space. If the following condition is true for a table space, DSNACCOR
| recommends an image copy:
| The total number of insert, update, and delete operations since the last
| image copy, divided by the total number of rows or LOBs in a table space or
| partition (expressed as a percentage) is greater than CRChangesPct.
| See item 3 in Figure 150 on page 818. If both of the following conditions are
| true for an index space, DSNACCOR recommends an image copy:
| v The total number of insert and delete operations since the last image copy,
| divided by the total number of entries in the index space or partition
| (expressed as a percentage) is greater than CRChangesPct.
| v The number of active pages in the index space or partition is greater than
| CRIndexSize.
| See items 2 and 4 in Figure 151 on page 819. CRChangesPct is an input
| parameter of type INTEGER. The default is 10.
| CRDaySncLastCopy
| Specifies a criterion for recommending a full image copy on a table space or
| index space. If the number of days since the last image copy is greater than
| this value, DSNACCOR recommends an image copy. (See item 1 in Figure 150
| on page 818 and item 1 in Figure 151 on page 819.) CRDaySncLastCopy is an
| input parameter of type INTEGER. The default is 7.
| ICRUpdatedPagesPct
| Specifies a criterion for recommending an incremental image copy on a table
| space. If the following condition is true, DSNACCOR recommends an
| incremental image copy:
| The number of distinct pages that were updated since the last image copy,
| divided by the total number of active pages in the table space or partition
| (expressed as a percentage) is greater than ICRUpdatedPagesPct.
| (See item 1 in Figure 152 on page 819.) ICRUpdatedPagesPct is an input
| parameter of type INTEGER. The default is 1.
| ICRChangesPct
| Specifies a criterion for recommending an incremental image copy on a table
| space. If the following condition is true, DSNACCOR recommends an
| incremental image copy:
| The ratio of the number of insert, update, or delete operations since the last
| image copy, to the total number of rows or LOBs in a table space or
| partition (expressed as a percentage) is greater than ICRChangesPct.
| (See item 2 in Figure 152 on page 819.) ICRChangesPct is an input parameter
| of type INTEGER. The default is 1.
| CRIndexSize
| Specifies, when combined with CRUpdatedPagesPct or CRChangesPct, a
| criterion for recommending a full image copy on an index space. (See items 2,
| 3, and 4 in Figure 151 on page 819.) CRIndexSize is an input parameter of type
| INTEGER. The default is 50.
| RRTInsDelUpdPct
| Specifies a criterion for recommending that the REORG utility is to be run on a
| table space. If the following condition is true, DSNACCOR recommends running
| REORG:
| The sum of insert, update, and delete operations since the last REORG,
| divided by the total number of rows or LOBs in the table space or partition
| (expressed as a percentage) is greater than RRTInsDelUpdPct
| (See item 1 in Figure 153 on page 819.) RRTInsDelUpdPct is an input
| parameter of type INTEGER. The default is 20.
| RRTUnclustInsPct
| Specifies a criterion for recommending that the REORG utility is to be run on a
| table space. If the following condition is true, DSNACCOR recommends running
| REORG:
| The number of unclustered insert operations, divided by the total number of
| rows or LOBs in the table space or partition (expressed as a percentage) is
| greater than RRTUnclustInsPct.
| (See item 2 in Figure 153 on page 819.) RRTUnclustInsPct is an input
| parameter of type INTEGER. The default is 10.
| RRTDisorgLOBPct
| Specifies a criterion for recommending that the REORG utility is to be run on a
| table space. If the following condition is true, DSNACCOR recommends running
| REORG:
| The number of imperfectly chunked LOBs, divided by the total number of
| rows or LOBs in the table space or partition (expressed as a percentage) is
| greater than RRTDisorgLOBPct.
| (See item 3 in Figure 153 on page 819.) RRTDisorgLOBPct is an input
| parameter of type INTEGER. The default is 10.
| RRTMassDelLimit
| Specifies a criterion for recommending that the REORG utility is to be run on a
| table space. If one of the following values is greater than RRTMassDelLimit,
| DSNACCOR recommends running REORG:
| v The number of mass deletes from a segmented or LOB table space since the
| last REORG or LOAD REPLACE
| v The number of dropped tables from a nonsegmented table space since the
| last REORG or LOAD REPLACE
| (See item 5 in Figure 153 on page 819.) RRTMassDelLimit is an input
| parameter of type INTEGER. The default is 0.
| RRTIndRefLimit
| Specifies a criterion for recommending that the REORG utility is to be run on a
| table space. If the following value is greater than RRTIndRefLimit, DSNACCOR
| recommends running REORG:
| The total number of overflow records that were created since the last
| REORG or LOAD REPLACE, divided by the total number of rows or LOBs
| in the table space or partition (expressed as a percentage)
| (See item 4 in Figure 153 on page 819.) RRTIndRefLimit is an input parameter
| of type INTEGER. The default is 10.
| RRIInsertDeletePct
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the following value is greater than RRIInsertDeletePct,
| DSNACCOR recommends running REORG:
| The sum of the number of index entries that were inserted and deleted since
| the last REORG, divided by the total number of index entries in the index
| space or partition (expressed as a percentage)
| (See item 1 in Figure 154 on page 820.) This is an input parameter of type
| INTEGER. The default is 20.
| RRIAppendInsertPct
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the following value is greater than RRIAppendInsertPct,
| DSNACCOR recommends running REORG:
| The number of index entries that were inserted since the last REORG,
| REBUILD INDEX, or LOAD REPLACE with a key value greater than the
| maximum key value in the index space or partition, divided by the number of
| index entries in the index space or partition (expressed as a percentage)
| (See item 2 in Figure 154 on page 820.) RRIAppendInsertPct is an input
| parameter of type INTEGER. The default is 10.
RRIPseudoDeletePct
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the following value is greater than RRIPseudoDeletePct,
| DSNACCOR recommends running REORG:
| The number of index entries that were pseudo-deleted since the last
| REORG, REBUILD INDEX, or LOAD REPLACE, divided by the number of
| index entries in the index space or partition (expressed as a percentage)
| (See item 3 in Figure 154 on page 820.) RRIPseudoDeletePct is an input
| parameter of type INTEGER. The default is 10.
| RRIMassDelLimit
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the number of mass deletes from an index space or partition
| since the last REORG, REBUILD, or LOAD REPLACE is greater than this
| value, DSNACCOR recommends running REORG.
| (See item 4 in Figure 154 on page 820.) RRIMassDelLimit is an input parameter
| of type INTEGER. The default is 0.
| RRILeafLimit
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the following value is greater than RRILeafLimit, DSNACCOR
| recommends running REORG:
| The number of index page splits that occurred since the last REORG,
| REBUILD INDEX, or LOAD REPLACE in which the higher part of the split
| page was far from the location of the original page, divided by the total
| number of active pages in the index space or partition (expressed as a
| percentage)
| (See item 5 in Figure 154 on page 820.) RRILeafLimit is an input parameter of
| type INTEGER. The default is 10.
| RRINumLevelsLimit
| Specifies a criterion for recommending that the REORG utility is to be run on an
| index space. If the following value is greater than RRINumLevelsLimit,
| DSNACCOR recommends running REORG:
| The number of levels in the index tree that were added or removed since
| the last REORG, REBUILD INDEX, or LOAD REPLACE
| (See item 6 in Figure 154 on page 820.) RRINumLevelsLimit is an input
| parameter of type INTEGER. The default is 0.
| SRTInsDelUpdPct
| Specifies, when combined with SRTInsDelUpdAbs, a criterion for
| v The number of inserted and deleted index entries since the last RUNSTATS
| on an index space or partition, divided by the total number of index entries in
| the index space or partition (expressed as a percentage) is greater than
| SRIInsDelUpdPct.
| v The sum of the number of inserted and deleted index entries since the last
| RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs.
| (See items 1 and 2 in Figure 156 on page 820.) SRIInsDelUpdAbs is an input
| parameter of type INTEGER. The default is 0.
SRIMassDelLimit
Specifies a criterion for recommending that the RUNSTATS utility is to be run
on an index space. If the number of mass deletes from an index space or
partition since the last REORG, REBUILD INDEX, or LOAD REPLACE is
greater than this value, DSNACCOR recommends running RUNSTATS.
(See item 3 in Figure 156 on page 820.) SRIMassDelLimit is an input parameter
of type INTEGER. The default is 0.
ExtentLimit
Specifies a criterion for recommending that the RUNSTATS or REORG utility is
to be run on a table space or index space. Also specifies that DSNACCOR is to
warn the user that the table space or index space has used too many extents.
DSNACCOR recommends running RUNSTATS or REORG, and altering data
set allocations if the following condition is true:
v The number of physical extents in the index space, table space, or partition
is greater than ExtentLimit.
(See Figure 157 on page 820.) ExtentLimit is an input parameter of type
INTEGER. The default is 50.
LastStatement
When DSNACCOR returns a severe error (return code 12), this field contains
the SQL statement that was executing when the error occurred. LastStatement
is an output parameter of type VARCHAR(8012).
ReturnCode
The return code from DSNACCOR execution. Possible values are:
0 DSNACCOR executed successfully. The ErrorMsg parameter contains
the approximate percentage of the total number of objects in the
subsystem that have information in the real-time statistics tables.
4 DSNACCOR completed, but one or more input parameters might be
incompatible. The ErrorMsg parameter contains the input parameters
that might be incompatible.
8 DSNACCOR terminated with errors. The ErrorMsg parameter contains
a message that describes the error.
12 DSNACCOR terminated with severe errors. The ErrorMsg parameter
contains a message that describes the error. The LastStatement
parameter contains the SQL statement that was executing when the
error occurred.
14 DSNACCOR terminated because it could not access one or more of the
real-time statistics tables. The ErrorMsg parameter contains the names
of the tables that DSNACCOR could not access.
15 DSNACCOR terminated because it encountered a problem with one of
the declared temporary tables that it defines and uses.
Figure 150 shows the formula that DSNACCOR uses to recommend a full image
copy on a table space.
Figure 150. DSNACCOR formula for recommending a full image copy on a table space
Figure 151 on page 819 shows the formula that DSNACCOR uses to recommend a
full image copy on an index space.
Figure 151. DSNACCOR formula for recommending a full image copy on an index space
Figure 152 shows the formula that DSNACCOR uses to recommend an incremental
image copy on a table space.
Figure 152. DSNACCOR formula for recommending an incremental image copy on a table
space
Figure 153 shows the formula that DSNACCOR uses to recommend a REORG on
a table space. If the table space is a LOB table space, and CHCKLVL=1, the
formula does not include EXTENTS>ExtentLimit.
Figure 154 on page 820 shows the formula that DSNACCOR uses to recommend a
REORG on an index space.
Figure 155 shows the formula that DSNACCOR uses to recommend RUNSTATS on
a table space.
Figure 156 shows the formula that DSNACCOR uses to recommend RUNSTATS on
an index space.
Figure 157 shows the formula that DSNACCOR uses to warn that too many index
space or table space extents have been used.
EXTENTS>ExtentLimit
Figure 157. DSNACCOR formula for warning that too many data set extents for a table space
or index space are used
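As a rough illustration of how criteria of this kind map onto the real-time
statistics tables, the following query sketches the spirit of the full image
copy conditions (days since the last copy and percentage of changed rows)
using their default thresholds. It is a sketch rather than the formula in
Figure 150, and the table name SYSIBM.TABLESPACESTATS and the column names
COPYLASTTIME, COPYCHANGES, and TOTALROWS are assumptions about the Version 8
real-time statistics tables; verify them for your installation:
SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.TABLESPACESTATS
  WHERE COPYLASTTIME IS NULL                               -- no full image copy recorded
     OR DAYS(CURRENT DATE) - DAYS(COPYLASTTIME) > 7        -- CRDaySncLastCopy default
     OR (TOTALROWS > 0
         AND COPYCHANGES * 100 / TOTALROWS > 10);          -- CRChangesPct default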
To create the exception table, execute a CREATE TABLE statement similar to the
following one. You can include other columns in the exception table, but you must
include at least the columns that are shown.
CREATE TABLE DSNACC.EXCEPT_TBL
(DBNAME CHAR(8) NOT NULL,
NAME CHAR(8) NOT NULL,
QUERYTYPE CHAR(40))
CCSID EBCDIC;
Recommendation: If you plan to put many rows in the exception table, create a
nonunique index on DBNAME, NAME, and QUERYTYPE.
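A minimal sketch of such an index follows; the index name DSNACC.EXCEPT_IX is
a placeholder:
CREATE INDEX DSNACC.EXCEPT_IX
  ON DSNACC.EXCEPT_TBL
  (DBNAME, NAME, QUERYTYPE);                   -- nonunique index on the lookup columns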
After you create the exception table, insert a row for each object for which you want
to include information in the INEXCEPTTABLE column.
Example: Suppose that you want the INEXCEPTTABLE column to contain the
string 'IRRELEVANT' for table space STAFF in database DSNDB04. You also want
the INEXCEPTTABLE column to contain 'CURRENT' for table space DSN8S81D in
database DSN8D81A. Execute these INSERT statements:
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSNDB04 ', 'STAFF ', 'IRRELEVANT');
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSN8D81A', 'DSN8S81D', 'CURRENT');
Example: Suppose that you want to include all rows for database DSNDB04 in the
recommendations result set, except for those rows that contain the string
'IRRELEVANT' in the INEXCEPTTABLE column. You might include the following
search condition in your Criteria input parameter:
DBNAME='DSNDB04' AND INEXCEPTTABLE<>'IRRELEVANT'
WORKING-STORAGE SECTION.
.
.
.
***********************
* DSNACCOR PARAMETERS *
***********************
01 QUERYTYPE.
49 QUERYTYPE-LN PICTURE S9(4) COMP VALUE 40.
49 QUERYTYPE-DTA PICTURE X(40) VALUE 'ALL'.
01 OBJECTTYPE.
49 OBJECTTYPE-LN PICTURE S9(4) COMP VALUE 3.
49 OBJECTTYPE-DTA PICTURE X(3) VALUE 'ALL'.
01 ICTYPE.
49 ICTYPE-LN PICTURE S9(4) COMP VALUE 1.
49 ICTYPE-DTA PICTURE X(1) VALUE 'B'.
01 STATSSCHEMA.
49 STATSSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 STATSSCHEMA-DTA PICTURE X(128) VALUE 'SYSIBM'.
01 CATLGSCHEMA.
49 CATLGSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 CATLGSCHEMA-DTA PICTURE X(128) VALUE 'SYSIBM'.
01 LOCALSCHEMA.
49 LOCALSCHEMA-LN PICTURE S9(4) COMP VALUE 128.
49 LOCALSCHEMA-DTA PICTURE X(128) VALUE 'DSNACC'.
01 CHKLVL PICTURE S9(9) COMP VALUE +3.
01 CRITERIA.
49 CRITERIA-LN PICTURE S9(4) COMP VALUE 4096.
49 CRITERIA-DTA PICTURE X(4096) VALUE SPACES.
01 RESTRICTED.
49 RESTRICTED-LN PICTURE S9(4) COMP VALUE 80.
49 RESTRICTED-DTA PICTURE X(80) VALUE SPACES.
01 CRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRCHANGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRDAYSNCLASTCOPY PICTURE S9(9) COMP VALUE +0.
01 ICRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0.
01 ICRCHANGESPCT PICTURE S9(9) COMP VALUE +0.
01 CRINDEXSIZE PICTURE S9(9) COMP VALUE +0.
01 RRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 RRTUNCLUSTINSPCT PICTURE S9(9) COMP VALUE +0.
01 RRTDISORGLOBPCT PICTURE S9(9) COMP VALUE +0.
01 RRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRTINDREFLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRIINSERTDELETEPCT PICTURE S9(9) COMP VALUE +0.
01 RRIAPPENDINSERTPCT PICTURE S9(9) COMP VALUE +0.
01 RRIPSEUDODELETEPCT PICTURE S9(9) COMP VALUE +0.
01 RRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRILEAFLIMIT PICTURE S9(9) COMP VALUE +0.
01 RRINUMLEVELSLIMIT PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 SRTINSDELUPDABS PICTURE S9(9) COMP VALUE +0.
01 SRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDPCT PICTURE S9(9) COMP VALUE +0.
01 SRIINSDELUPDABS PICTURE S9(9) COMP VALUE +0.
01 SRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0.
01 EXTENTLIMIT PICTURE S9(9) COMP VALUE +0.
01 LASTSTATEMENT.
49 LASTSTATEMENT-LN PICTURE S9(4) COMP VALUE 8012.
49 LASTSTATEMENT-DTA PICTURE X(8012) VALUE SPACES.
01 RETURNCODE PICTURE S9(9) COMP VALUE +0.
01 ERRORMSG.
49 ERRORMSG-LN PICTURE S9(4) COMP VALUE 1331.
49 ERRORMSG-DTA PICTURE X(1331) VALUE SPACES.
01 IFCARETCODE PICTURE S9(9) COMP VALUE +0.
01 IFCARESCODE PICTURE S9(9) COMP VALUE +0.
01 EXCESSBYTES PICTURE S9(9) COMP VALUE +0.
*****************************************
* INDICATOR VARIABLES. *
* INITIALIZE ALL NON-ESSENTIAL INPUT *
* VARIABLES TO -1, TO INDICATE THAT THE *
* INPUT VALUE IS NULL. *
*****************************************
01 QUERYTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 OBJECTTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 ICTYPE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 STATSSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CATLGSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 LOCALSCHEMA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CHKLVL-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRITERIA-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RESTRICTED-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRUPDATEDPAGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRCHANGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRDAYSNCLASTCOPY-IND PICTURE S9(4) COMP-4 VALUE -1.
01 ICRUPDATEDPAGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 ICRCHANGESPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 CRINDEXSIZE-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTUNCLUSTINSPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTDISORGLOBPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINDREFLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIINSERTDELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIAPPENDINSERTPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIPSEUDODELETEPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRIMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRILEAFLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 RRINUMLEVELSLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDABS-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRTMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDPCT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDABS-IND PICTURE S9(4) COMP-4 VALUE -1.
01 SRIMASSDELLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 EXTENTLIMIT-IND PICTURE S9(4) COMP-4 VALUE -1.
01 LASTSTATEMENT-IND PICTURE S9(4) COMP-4 VALUE +0.
01 RETURNCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 ERRORMSG-IND PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARETCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARESCODE-IND PICTURE S9(4) COMP-4 VALUE +0.
01 EXCESSBYTES-IND PICTURE S9(4) COMP-4 VALUE +0.
PROCEDURE DIVISION.
.
.
.
*********************************************************
* SET VALUES FOR DSNACCOR INPUT PARAMETERS: *
* - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK *
* FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT *
* TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE *
* RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19) *
* - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO *
* MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES *
* DSN8D81A AND DSN8D81L. *
*****************
* CALL DSNACCOR *
*****************
EXEC SQL
CALL SYSPROC.DSNACCOR
(:QUERYTYPE :QUERYTYPE-IND,
:OBJECTTYPE :OBJECTTYPE-IND,
:ICTYPE :ICTYPE-IND,
:STATSSCHEMA :STATSSCHEMA-IND,
:CATLGSCHEMA :CATLGSCHEMA-IND,
:LOCALSCHEMA :LOCALSCHEMA-IND,
:CHKLVL :CHKLVL-IND,
:CRITERIA :CRITERIA-IND,
:RESTRICTED :RESTRICTED-IND,
:CRUPDATEDPAGESPCT :CRUPDATEDPAGESPCT-IND,
:CRCHANGESPCT :CRCHANGESPCT-IND,
:CRDAYSNCLASTCOPY :CRDAYSNCLASTCOPY-IND,
:ICRUPDATEDPAGESPCT :ICRUPDATEDPAGESPCT-IND,
:ICRCHANGESPCT :ICRCHANGESPCT-IND,
:CRINDEXSIZE :CRINDEXSIZE-IND,
:RRTINSDELUPDPCT :RRTINSDELUPDPCT-IND,
:RRTUNCLUSTINSPCT :RRTUNCLUSTINSPCT-IND,
:RRTDISORGLOBPCT :RRTDISORGLOBPCT-IND,
:RRTMASSDELLIMIT :RRTMASSDELLIMIT-IND,
:RRTINDREFLIMIT :RRTINDREFLIMIT-IND,
:RRIINSERTDELETEPCT :RRIINSERTDELETEPCT-IND,
:RRIAPPENDINSERTPCT :RRIAPPENDINSERTPCT-IND,
:RRIPSEUDODELETEPCT :RRIPSEUDODELETEPCT-IND,
:RRIMASSDELLIMIT :RRIMASSDELLIMIT-IND,
:RRILEAFLIMIT :RRILEAFLIMIT-IND,
:RRINUMLEVELSLIMIT :RRINUMLEVELSLIMIT-IND,
:SRTINSDELUPDPCT :SRTINSDELUPDPCT-IND,
:SRTINSDELUPDABS :SRTINSDELUPDABS-IND,
:SRTMASSDELLIMIT :SRTMASSDELLIMIT-IND,
:SRIINSDELUPDPCT :SRIINSDELUPDPCT-IND,
:SRIINSDELUPDABS :SRIINSDELUPDABS-IND,
:SRIMASSDELLIMIT :SRIMASSDELLIMIT-IND,
:EXTENTLIMIT :EXTENTLIMIT-IND,
:LASTSTATEMENT :LASTSTATEMENT-IND,
:RETURNCODE :RETURNCODE-IND,
:ERRORMSG :ERRORMSG-IND,
:IFCARETCODE :IFCARETCODE-IND,
:IFCARESCODE :IFCARESCODE-IND,
:EXCESSBYTES :EXCESSBYTES-IND)
END-EXEC.
*************************************************************
* ASSUME THAT THE SQL CALL RETURNED +466, WHICH MEANS THAT *
* RESULT SETS WERE RETURNED. RETRIEVE RESULT SETS. *
*************************************************************
* LINK EACH RESULT SET TO A LOCATOR VARIABLE
EXEC SQL ASSOCIATE LOCATORS (:LOC1, :LOC2)
WITH PROCEDURE SYSPROC.DSNACCOR
END-EXEC.
* LINK A CURSOR TO EACH RESULT SET
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC1
END-EXEC.
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :LOC2
END-EXEC.
* PERFORM FETCHES USING C1 TO RETRIEVE ALL ROWS FROM FIRST RESULT SET
* PERFORM FETCHES USING C2 TO RETRIEVE ALL ROWS FROM SECOND RESULT SET
DSNACCOR output
If DSNACCOR executes successfully, in addition to the output parameters
described in “DSNACCOR option descriptions” on page 810, DSNACCOR returns
two result sets.
The first result set contains the results from IFI COMMAND calls that DSNACCOR
makes. Table 160 shows the format of the first result set.
Table 160. Result set row for first DSNACCOR result set
Column name Data type Contents
RS_SEQUENCE INTEGER Sequence number of the output line
RS_DATA CHAR(80) A line of command output
The second result set contains DSNACCOR's recommendations. This result set
contains one or more rows for a table space or index space. A nonpartitioned table
space or nonpartitioning index space can have at most one row in the result set. A
partitioned table space or partitioning index space can have at most one row for
each partition. A table space, index space, or partition has a row in the result set if
both of the following conditions are true:
v If the Criteria input parameter contains a search condition, the search condition is
true for the table space, index space, or partition.
v DSNACCOR recommends at least one action for the table space, index space,
or partition.
Table 161 shows the columns of a result set row.
Table 161. Result set row for second DSNACCOR result set
Column name Data type Description
DBNAME CHAR(8) Name of the database that contains the object.
NAME CHAR(8) Table space or index space name.
PARTITION INTEGER Data set number or partition number.
OBJECTTYPE CHAR(2) DB2 object type:
v TS for a table space
v IX for an index space
| OBJECTSTATUS CHAR(36) Status of the object:
| v ORPHANED, if the object is an index space with no
| corresponding table space, or if the object does not exist
| v If the object is in a restricted state, one of the following
| values:
| – TS=restricted-state, if OBJECTTYPE is TS
| – IX=restricted-state, if OBJECTTYPE is IX
| restricted-state is one of the status codes that appear in
| DISPLAY DATABASE output. See Chapter 2 of DB2
| Command Reference for details.
| v A, if the object is in an advisory state.
| v L, if the object is a logical partition, but not in an advisory
| state.
| v AL, if the object is a logical partition and in an advisory
| state.
IMAGECOPY CHAR(3) COPY recommendation:
v If OBJECTTYPE is TS: FUL (full image copy), INC
(incremental image copy), or NO
v If OBJECTTYPE is IX: YES or NO
RUNSTATS CHAR(3) RUNSTATS recommendation: YES or NO.
EXTENTS CHAR(3) Indicates whether the data sets for the object have exceeded
ExtentLimit: YES or NO.
REORG CHAR(3) REORG recommendation: YES or NO.
INEXCEPTTABLE CHAR(40) A string that contains one of the following values:
v Text that you specify in the QUERYTYPE column of the
exception table.
v YES, if you put a row in the exception table for the object
that this result set row represents, but you specify NULL in
the QUERYTYPE column.
v NO, if the exception table exists but does not have a row for
the object that this result set row represents.
v Null, if the exception table does not exist, or if the ChkLvl
input parameter does not include the value 4.
ASSOCIATEDTS CHAR(8) If OBJECTTYPE is IX and the ChkLvl input parameter includes
the value 2, this value is the name of the table space that is
associated with the index space. Otherwise null.
COPYLASTTIME TIMESTAMP Timestamp of the last full image copy on the object. Null if
COPY was never run, or if the last COPY execution was
terminated.
LOADRLASTTIME TIMESTAMP Timestamp of the last LOAD REPLACE on the object. Null if
LOAD REPLACE was never run, or if the last LOAD REPLACE
execution was terminated.
REBUILDLASTTIME TIMESTAMP Timestamp of the last REBUILD INDEX on the object. Null if
REBUILD INDEX was never run, or if the last REBUILD INDEX
execution was terminated.
CRUPDPGSPCT INTEGER If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the
ratio of distinct updated pages to preformatted pages,
expressed as a percentage. Otherwise null.
CRCPYCHGPCT INTEGER If OBJECTTYPE is TS and IMAGECOPY is YES, the ratio of
the total number of insert, update, and delete operations since the
last image copy to the total number of rows or LOBs in the
table space or partition, expressed as a percentage. If
OBJECTTYPE is IX and IMAGECOPY is YES, the ratio of the
total number of insert and delete operations since the last
image copy to the total number of entries in the index space or
partition, expressed as a percentage. Otherwise null.
CRDAYSCELSTCPY INTEGER If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the
number of days since the last image copy. Otherwise null.
CRINDEXSIZE INTEGER If OBJECTTYPE is IX and IMAGECOPY is YES, the number of
active pages in the index space or partition. Otherwise null.
REORGLASTTIME TIMESTAMP Timestamp of the last REORG on the object. Null if REORG
was never run, or if the last REORG execution was terminated.
RRTINSDELUPDPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the
sum of insert, update, and delete operations since the last
REORG to the total number of rows or LOBs in the table space
or partition, expressed as a percentage. Otherwise null.
RRTUNCINSPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the
number of unclustered insert operations to the total number of
rows or LOBs in the table space or partition, expressed as a
percentage. Otherwise null.
RRTDISORGLOBPCT INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the
number of imperfectly chunked LOBs to the total number of
rows or LOBs in the table space or partition, expressed as a
percentage. Otherwise null.
RRTMASSDELETE INTEGER If OBJECTTYPE is TS, REORG is YES, and the table space is
a segmented table space or LOB table space, the number of
mass deletes since the last REORG or LOAD REPLACE. If
OBJECTTYPE is TS, REORG is YES, and the table space is
nonsegmented, the number of dropped tables since the last
REORG or LOAD REPLACE. Otherwise null.
RRTINDREF INTEGER If OBJECTTYPE is TS and REORG is YES, the ratio of the total
number of overflow records that were created since the last
REORG or LOAD REPLACE to the total number of rows or
LOBs in the table space or partition, expressed as a
percentage. Otherwise null.
RRIINSDELPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the total
number of insert and delete operations since the last REORG
to the total number of index entries in the index space or
partition, expressed as a percentage. Otherwise null.
RRIAPPINSPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index entries that were inserted since the last
REORG, REBUILD INDEX, or LOAD REPLACE that had a key
value greater than the maximum key value in the index space
or partition, to the number of index entries in the index space
or partition, expressed as a percentage. Otherwise null.
RRIPSDDELPCT INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index entries that were pseudo-deleted (the RID
entry was marked as deleted) since the last REORG, REBUILD
INDEX, or LOAD REPLACE to the number of index entries in
the index space or partition, expressed as a percentage.
Otherwise null.
RRIMASSDELETE INTEGER If OBJECTTYPE is IX and REORG is YES, the number of
mass deletes from the index space or partition since the last
REORG, REBUILD, or LOAD REPLACE. Otherwise null.
RRILEAF INTEGER If OBJECTTYPE is IX and REORG is YES, the ratio of the
number of index page splits that occurred since the last
REORG, REBUILD INDEX, or LOAD REPLACE in which the
higher part of the split page was far from the location of the
original page, to the total number of active pages in the index
space or partition, expressed as a percentage. Otherwise null.
RRINUMLEVELS INTEGER If OBJECTTYPE is IX and REORG is YES, the number of
levels in the index tree that were added or removed since the
last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise
null.
STATSLASTTIME TIMESTAMP Timestamp of the last RUNSTATS on the object. Null if
RUNSTATS was never run, or if the last RUNSTATS execution
was terminated.
| SRTINSDELUPDPCT INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the ratio of the
total number of insert, update, and delete operations since the
last RUNSTATS on a table space or partition, to the total
number of rows or LOBs in the table space or partition,
expressed as a percentage. Otherwise null.
| SRTINSDELUPDABS INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the total
number of insert, update, and delete operations since the last
RUNSTATS on a table space or partition. Otherwise null.
SRTMASSDELETE INTEGER If OBJECTTYPE is TS and RUNSTATS is YES, the number of
mass deletes from the table space or partition since the last
REORG or LOAD REPLACE. Otherwise null.
SRIINSDELPCT INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the ratio of the
total number of insert and delete operations since the last
RUNSTATS on the index space or partition, to the total number
of index entries in the index space or partition, expressed as a
percentage. Otherwise null.
SRIINSDELABS INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the number of
insert and delete operations since the last RUNSTATS on the
index space or partition. Otherwise null.
SRIMASSDELETE INTEGER If OBJECTTYPE is IX and RUNSTATS is YES, the number of
mass deletes from the index space or partition since the last
REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise,
this value is null.
TOTALEXTENTS SMALLINT If EXTENTS is YES, the number of physical extents in the table
space, index space, or partition. Otherwise, this value is null.
Use the DISPLAY DATABASE command to display the current status for an object.
| In addition to these states, the output from the DISPLAY DATABASE command
| might also indicate that an object is in logical page list (LPL) status. This state
| means that the pages that are listed in the LPL PAGES column are logically in error
| and are unavailable for access. DB2 writes entries for these pages in an LPL. For
| more information about an LPL and on how to remove pages from the LPL, see
| Part 4 of DB2 Administration Guide.
Refer to Table 162 for information about resetting the auxiliary CHECK-pending
status. This table lists the status name, abbreviation, affected object, and any
corrective actions.
Table 162. Resetting auxiliary CHECK-pending status
Status: Auxiliary CHECK-pending
Abbreviation: ACHKP
Object affected: Base table space
Corrective action (see note 1):
1. Update or delete invalid LOBs using SQL.
2. Run the CHECK DATA utility with the appropriate SCOPE option to verify the
   validity of LOBs and reset ACHKP status.
Notes:
1. A base table space in the ACHKP status is unavailable for processing by SQL.
The RECOVER utility also sets AUXW status if it finds an invalid LOB column.
Invalid LOB columns might result from a situation in which all the following actions
occur:
1. LOB table space was defined with LOG NO.
2. LOB table space was recovered.
3. LOB was updated since the last image copy.
Refer to Table 163 for information about resetting the auxiliary warning status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 163. Resetting auxiliary warning status
Status: Auxiliary warning
Abbreviation: AUXW
Object affected: Base table space
Corrective action (see notes 1, 2, and 3):
1. Update or delete invalid LOBs using SQL.
2. Run the CHECK DATA utility to verify the validity of LOBs and reset AUXW status.

Status: Auxiliary warning
Abbreviation: AUXW
Object affected: LOB table space
Corrective action (see note 1):
1. Update or delete invalid LOBs using SQL.
2. Run the CHECK LOB utility to verify the validity of LOBs and reset AUXW status.
Notes:
1. A base table space or LOB table space in the AUXW status is available for processing by SQL, even though it
contains invalid LOBs. However, an attempt to retrieve an invalid LOB results in a -904 SQL return code.
2. DB2 can access all rows of a base table space that are in the AUXW status. SQL can update the invalid LOB
column and delete base table rows, but the value of the LOB column cannot be retrieved. If DB2 attempts to
access an invalid LOB column, a -904 SQL code is returned. The AUXW status remains on the base table space
even when SQL deletes or updates the last invalid LOB column.
3. If CHECK DATA AUXERROR REPORT encounters only invalid LOB columns and no other LOB column errors, the
base table space is set to the auxiliary warning status.
CHECK-pending status
The CHECK-pending (CHKP) restrictive status indicates that an object might be in
an inconsistent state and must be checked.
The following utilities set the CHECK-pending status on a table space if referential
integrity constraints are encountered:
v LOAD with ENFORCE NO
v RECOVER to a point in time
v CHECK LOB
The CHECK-pending status can also affect a base table space or a LOB table
space.
DB2 ignores informational referential integrity constraints and does not set
CHECK-pending status for them.
Refer to Table 164 for information about resetting the CHECK-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 164. Resetting CHECK-pending status
Status: CHECK-pending
Abbreviation: CHKP
Object affected: Table space, base table space
Corrective action: Check and correct referential integrity constraints using the
CHECK DATA utility.
Notes:
1. An index might be placed in the CHECK-pending status if you recovered an index to a specific RBA or LRSN from
a copy and applied the log records, but you did not recover the table space in the same list. The CHECK-pending
status can also be placed on an index if you specified the table space and the index in the same list, but the
RECOVER point in time was not a QUIESCE or COPY SHRLEVEL REFERENCE point.
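For example, a control statement such as the following one checks all rows and resets the
CHKP status when no violations remain. The object name shown is the DB2 sample table
space and is only an illustrative assumption:
  CHECK DATA TABLESPACE DSN8D81A.DSN8S81E
    SCOPE ALL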
COPY-pending status
The COPY-pending (COPY) restrictive status indicates that the affected object must
be copied.
Refer to Table 165 for information about resetting the COPY-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 165. Resetting COPY-pending status
Status: COPY-pending
Abbreviation: COPY
Object affected: Table space, table space partition
Corrective action: Take an image copy of the affected object.
Refer to Table 166 for information about resetting the group buffer pool
RECOVER-pending status. This table lists the status name, abbreviation, affected
objects, and any corrective actions.
Table 166. Resetting group buffer pool RECOVER-pending status
Status: Group buffer pool RECOVER-pending
Abbreviation: GRECP
Object affected: Object
Corrective action: Recover the object, or use START DATABASE to recover the object.
Refer to Table 167 for information about resetting the informational COPY-pending
status. This table lists the status name, abbreviation, affected objects, and any
corrective actions.
Table 167. Resetting informational COPY-pending status
Status: Informational COPY-pending
Abbreviation: ICOPY
Object affected: Partitioning index, nonpartitioning index, index on the auxiliary table
Corrective action: Copy the affected index.
REBUILD-pending status
A REBUILD-pending restrictive status indicates that the affected index or index
partition is broken and must be rebuilt from the data.
| If you alter the data type of a column to a numeric data type, RECOVER INDEX
| cannot complete. You must rebuild the index.
Refer to Table 168 for information about resetting a REBUILD-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 168. Resetting REBUILD-pending status
Status: REBUILD-pending
Abbreviation: RBDP
Object affected: Physical or logical index partition
Corrective action: Run the REBUILD utility on the affected index partition.

| Status: REBUILD-pending star
Abbreviation: RBDP*
Object affected: Logical partitions of nonpartitioned secondary indexes
Corrective action: Run REBUILD INDEX PART or the RECOVER utility on the affected
logical partitions.

| Status: Page set REBUILD-pending
Abbreviation: PSRBD
Object affected: Nonpartitioned secondary index, index on the auxiliary table
Corrective action: Run REBUILD INDEX ALL, the RECOVER utility, or run REBUILD INDEX
listing all indexes in the affected index space.

| Status: REBUILD-pending
Abbreviation: RBDP, RBDP*, or PSRBD
Object affected: all
Corrective action: The following actions also reset the REBUILD-pending status:
v Use LOAD REPLACE for the table space or partition.
v Use REPAIR SET INDEX with NORBDPEND on the index partition. Be aware that this does
  not correct the data inconsistency in the index partition. Use CHECK INDEX instead of
  REPAIR to verify referential integrity constraints.
v Start the database that contains the index space with ACCESS FORCE. Be aware that
  this does not correct the data inconsistency in the index partition.
v Run REORG INDEX SORTDATA on the affected index.
RECOVER-pending status
The RECOVER-pending (RECP) restrictive status indicates that a table space or
table space partition is broken and must be recovered.
Refer to Table 169 for information about resetting the RECOVER-pending status.
This table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 169. Resetting RECOVER-pending status
Status: RECOVER-pending
Abbreviation: RECP
Object affected: Table space
Corrective action: Run the RECOVER utility on the affected object.

Status: RECOVER-pending
Abbreviation: RECP
Object affected: Table space partition
Corrective action: Recover the partition.
REFRESH-pending status
Whenever DB2 marks an object in refresh-pending (REFP) status, it also puts the
object in RECOVER-pending (RECP) or REBUILD-pending (RBDP or PSRBD) status. If a
user-defined table space is in refresh-pending (REFP) status, you can replace the
data by using LOAD REPLACE. At the successful completion of the RECOVER or
LOAD REPLACE job, both statuses (REFP and RECP, or REFP and RBDP or PSRBD)
are reset.
REORG-pending status
The REORG-pending (REORP) restrictive status indicates that a table space
partition is broken and must be reorganized.
| REORP status is set on the last partition of a partitioned table space if you perform
| the following actions:
| v Create a partitioned table space.
| v Create a partitioning index.
| v Insert a row into a table.
| v Create a data-partitioned secondary index.
| In this situation, the data-partitioned secondary index is set to REBUILD-pending
| (RBDP) restrictive status.
| The REORG-pending (AREO*) advisory status indicates that a table space, index,
| or partition needs to be reorganized for optimal performance.
Refer to Table 170 for information about resetting the REORG-pending status. This
table lists the status name, abbreviation, affected objects, and any corrective
actions.
Table 170. Resetting REORG-pending status
Status: REORG-pending
Abbreviation: REORP
Object affected: Table space
Corrective action: Perform one of the following actions:
v Use LOAD REPLACE for the entire table space.
v Run the REORG TABLESPACE utility with SHRLEVEL NONE. If a table space is in both
  REORG-pending and CHECK-pending status (or auxiliary CHECK-pending status), run
  REORG first and then run CHECK DATA to clear the respective states.
v Run REORG PART m:n SHRLEVEL NONE.

Status: REORG-pending
Abbreviation: REORP
Object affected: Partitioned table space
Corrective action: For row lengths <= 32 KB:
1. Run REORG TABLESPACE SHRLEVEL NONE SORTDATA.
Notes:
| 1. You can reset AREO* for a specific partition without being restricted by another
| AREO* for an adjacent partition. When you run REPAIR VERSIONS, the utility
| resets the status and updates the version information in SYSTABLEPART for
| table spaces and SYSINDEXES for indexes.
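As an illustration only (MYDB.MYPARTTS is a hypothetical partitioned table space name,
not an object supplied with DB2), a REORG control statement that resets REORP on
partitions 3 through 5 might look like the following sketch:
  REORG TABLESPACE MYDB.MYPARTTS PART 3:5
    SHRLEVEL NONE SORTDATA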
Restart-pending status
The restart-pending (RESTP) status is set if an object has backout work pending
at the end of DB2 restart.
Refer to Table 171 on page 838 for information about resetting the restart-pending
status. This table lists the status name, abbreviation, affected objects, and any
corrective actions.
Notes:
1. Delay running REORG TABLESPACE SHRLEVEL CHANGE until all RESTP statuses are reset.
2. You cannot use LOAD REPLACE on an object that is in the RESTP status.
3. Utility activity against page sets or partitions with RESTP status is not allowed. Any attempt to access a page set
or partition with RESTP status terminates with return code 8.
Because these four programs also accept the static SQL statements CONNECT,
SET CONNECTION, and RELEASE, you can use the programs to access DB2
tables at remote locations.
DSNTIAUL and DSNTIAD are shipped only as source code, so you must
precompile, assemble, link, and bind them before you can use them. If you want to
| use the source code version of DSNTEP2 or DSNTEP4, you must precompile,
| compile, link, and bind it. You need to bind the object code version of DSNTEP2 or
| DSNTEP4 before you can use it. Usually a system administrator prepares the
programs as part of the installation process. Table 172 indicates which installation
job prepares each sample program. All installation jobs are in data set
DSN810.SDSNSAMP.
Table 172. Jobs that prepare DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name Program preparation job
DSNTIAUL DSNTEJ2A
DSNTIAD DSNTIJTM
DSNTEP2 (source) DSNTEJ1P
DSNTEP2 (object) DSNTEJ1L
DSNTEP4 (source) DSNTEJ1P
DSNTEP4 (object) DSNTEJ1L
To run the sample programs, use the DSN RUN command, which is described in
detail in Chapter 2 of DB2 Command Reference. Table 173 lists the load module
name and plan name that you must specify, and the parameters that you can
specify when you run each program. See the following sections for the meaning of
each parameter.
Table 173. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name   Load module   Plan        Parameters
DSNTIAUL       DSNTIAUL      DSNTIB81    SQL
                                         number of rows per fetch
DSNTIAD        DSNTIAD       DSNTIA81    RC0
                                         SQLTERM(termchar)
DSNTEP2        DSNTEP2       DSNTEP81    ALIGN(MID) or ALIGN(LHS)
                                         NOMIXED or MIXED
                                         SQLTERM(termchar)
DSNTEP4        DSNTEP4       DSNTEP481   ALIGN(MID) or ALIGN(LHS)
                                         NOMIXED or MIXED
                                         SQLTERM(termchar)
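For example, the following job step sketches how DSNTIAUL might be invoked under the DSN
command processor in TSO batch. The subsystem name (DSN), the step name, and the load
library name DSN810.RUNLIB.LOAD are installation-dependent assumptions; the plan name is
the one listed in Table 173:
//UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB81) -
     LIB('DSN810.RUNLIB.LOAD') PARMS('SQL')
 END
/*
The step also needs DD statements for the program's input data set and for its SYSPRINT,
SYSPUNCH, and SYSRECnn output data sets.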
The remainder of this chapter contains the following information about running each
program:
v Descriptions of the input parameters
v Data sets that you must allocate before you run the program
v Return codes from the program
v Examples of invocation
See the sample jobs that are listed in Table 172 on page 839 for a working example
of each program.
Running DSNTIAUL
This section contains information that you need when you run DSNTIAUL, including
parameters, data sets, return codes, and invocation examples.
DSNTIAUL parameters:
SQL
Specify SQL to indicate that your input data set contains one or more complete
SQL statements, each of which ends with a semicolon. You can include any
SQL statement that can be executed dynamically in your input data set. In
addition, you can include the static SQL statements CONNECT, SET
CONNECTION, or RELEASE. DSNTIAUL uses the SELECT statements to
determine which tables to unload and dynamically executes all other statements
except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL executes
CONNECT, SET CONNECTION, and RELEASE statically to connect to remote
locations.
If you do not specify the SQL parameter, your input data set must contain one or
more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause
SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a
SELECT statement for each input statement by appending your input line to
SELECT * FROM, then uses the result to determine which tables to unload. For this
input format, the text for each table specification can be a maximum of 72 bytes
and must not span multiple lines.
You can use the input statements to specify SELECT statements that join two or
more tables or select specific columns from a table. If you specify columns, you
need to modify the LOAD statement that DSNTIAUL generates.
Define all data sets as sequential data sets. You can specify the record length and
block size of the SYSPUNCH and SYSRECnn data sets. The maximum record
length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
Examples of DSNTIAUL invocation: Suppose that you want to unload the rows
for department D01 from the project table. Because you can fit the table
specification on one line, and you do not want to execute any non-SELECT
statements, you do not need the SQL parameter. Your invocation looks like the one
that is shown in Figure 159:
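As a sketch of the input that such an invocation reads (the sample project table name
DSN8810.PROJ and its DEPTNO column are assumptions based on the DB2 sample database),
the input data set could contain the single record:
  DSN8810.PROJ WHERE DEPTNO='D01'
DSNTIAUL appends this line to SELECT * FROM, unloads the qualifying rows to the SYSRECnn
data set, and writes a matching LOAD control statement to SYSPUNCH.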
If you want to obtain the LOAD utility control statements for loading rows into a
table, but you do not want to unload the rows, you can set the data set names for
the SYSRECnn data sets to DUMMY. For example, to obtain the utility control
statements for loading rows into the department table, you invoke DSNTIAUL as
shown in Figure 160 on page 843:
Now suppose that you also want to use DSNTIAUL to do these things:
v Unload all rows from the project table
v Unload only rows from the employee table for employees in departments with
department numbers that begin with D, and order the unloaded rows by
employee number
v Lock both tables in share mode before you unload them
| v Retrieve 250 rows per fetch
| For these activities, you must specify the SQL parameter and specify the number of
| rows per fetch when you run DSNTIAUL. Your DSNTIAUL invocation is shown in
Figure 161:
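A sketch of the SQL input for this scenario follows. The table and column names
(DSN8810.PROJ, DSN8810.EMP, WORKDEPT, and EMPNO) are taken from the DB2 sample database
and are illustrative assumptions; the 250 rows per fetch is passed as a program parameter
rather than in the SQL itself:
  LOCK TABLE DSN8810.PROJ IN SHARE MODE;
  LOCK TABLE DSN8810.EMP IN SHARE MODE;
  SELECT * FROM DSN8810.PROJ;
  SELECT * FROM DSN8810.EMP
    WHERE WORKDEPT LIKE 'D%'
    ORDER BY EMPNO;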
Running DSNTIAD
This section contains information that you need when you run DSNTIAD, including
parameters, data sets, return codes, and invocation examples.
DSNTIAD parameters:
RC0
If you specify this parameter, DSNTIAD ends with return code 0, even if the
program encounters SQL errors. If you do not specify RC0, DSNTIAD ends with
a return code that reflects the severity of the errors that occur. Without RC0,
DSNTIAD terminates if more than 10 SQL errors occur during a single
execution.
SQLTERM(termchar)
Specify this parameter to indicate the character that you use to end each SQL
statement. You can use any special character except one of those listed in
Table 175. SQLTERM(;) is the default.
Table 175. Invalid special characters for the SQL terminator
Name Character Hexadecimal representation
blank X'40'
comma , X'6B'
double quotation mark " X'7F'
left parenthesis ( X'4D'
right parenthesis ) X'5D'
single quotation mark ' X'7D'
underscore _ X'6D'
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that
the character # is the statement terminator. Then a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
in this data set. DSNTIAD sets the record length of this data set to
121 bytes and the block size to 1210 bytes.
NOMIXED
| Specifies that the DSNTEP2 or DSNTEP4 input contains no DBCS
characters. NOMIXED is the default.
MIXED
| Specifies that the DSNTEP2 or DSNTEP4 input contains some DBCS
characters.
SQLTERM(termchar)
Specifies the character that you use to end each SQL statement. You can use
any character except one of those that are listed in Table 175 on page 844.
SQLTERM(;) is the default.
Use a character other than a semicolon if you plan to execute a statement that
contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that
the character # is the statement terminator. Then a CREATE TRIGGER
statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
Be careful to choose a character for the statement terminator that is not used
within the statement.
If you want to change the SQL terminator within a series of SQL statements,
you can use the --#SET TERMINATOR control statement.
Example: Suppose that you have an existing set of SQL statements to which
you want to add a CREATE TRIGGER statement that has embedded
semicolons. You can use the default SQLTERM value, which is a semicolon, for
all of the existing SQL statements. Before you execute the CREATE TRIGGER
statement, include the --#SET TERMINATOR # control statement to change
the SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
AFTER INSERT ON EMP
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
END#
See the following discussion of the SYSIN data set for more information about
the --#SET control statement.
Figure 163. DSNTEP2 invocation with the ALIGN(LHS) and MIXED parameters
Figure 164. DSNTEP4 invocation with the ALIGN(MID) and MIXED parameters and using the
MULT_FETCH control option
DB2 collects statistics that you can use to determine when you need to perform
certain maintenance functions on your table spaces and index spaces.
DB2 collects the statistics in real time. You create tables into which DB2 periodically
writes the statistics. You can then write applications that query the statistics and
help you decide when to run REORG, RUNSTATS, or COPY, or to enlarge your
data sets. Figure 165 shows an overview of the process of collecting and using
real-time statistics.
Figure 165. Overview of collecting and using real-time statistics (the diagram shows DB2,
the DB2 catalog, the real-time statistics tables, and an application program)
The following sections provide detailed information about the real-time statistics
tables:
v “Setting up your system for real-time statistics”
v “Contents of the real-time statistics tables” on page 851
v “Operating with real-time statistics” on page 863
For information about a DB2-supplied stored procedure that queries the real-time
statistics tables, see “The DB2 real-time statistics stored procedure” on page 808.
Before you can alter an object in the real-time statistics database, you must stop
the database. Otherwise, you receive an SQL error. Table 178 shows the DB2
objects for storing real-time statistics.
Table 178. DB2 objects for storing real-time statistics
Object name Description
DSNRTSDB Database for real-time statistics objects
DSNRTSTS Table space for real-time statistics objects
SYSIBM.TABLESPACESTATS Table for statistics on table spaces and table space
partitions
SYSIBM.INDEXSPACESTATS Table for statistics on index spaces and index space
partitions
SYSIBM.TABLESPACESTATS_IX Unique index on SYSIBM.TABLESPACESTATS
(columns DBID, PSID, and PARTITION)
| SYSIBM.INDEXSPACESTATS_IX Unique index on SYSIBM.INDEXSPACESTATS
| (columns DBID, ISOBID, and PARTITION)
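For example, before altering one of these objects, you might stop the database and restart
it in read-write mode afterward. This is only a sketch; additional options (such as
SPACENAM) might be appropriate at your installation:
  -STOP DATABASE(DSNRTSDB)
  -START DATABASE(DSNRTSDB) ACCESS(RW)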
To create the real-time statistics objects, you need the authority to create tables and
indexes on behalf of the SYSIBM authorization ID.
DB2 inserts one row in the table for each partition or non-partitioned table space or
index space. You therefore need to calculate the amount of disk space that you
need for the real-time statistics tables based on the current number of table spaces
and indexes in your subsystem.
To determine the amount of storage that you need for the real-time statistics when
they are in memory, use the following formula:
Max_concurrent_objects_updated * 152 bytes = Storage_in_bytes
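For example, if you expect at most 100 000 objects to be updated concurrently (an assumed
figure used only for illustration), the in-memory statistics require roughly
100 000 * 152 = 15 200 000 bytes, or about 15 MB.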
Recommendation: Place the statistics indexes and tables in their own buffer pool.
When the statistics pages are in memory, the speed at which in-memory statistics
are written to the tables improves.
In a data sharing environment, each member has its own interval for writing
real-time statistics.
You must start the database in read-write mode so that DB2 can externalize
real-time statistics. See “When DB2 externalizes real-time statistics” on page 863
for information about the conditions for which DB2 externalizes the statistics.
Table 179 describes the columns of the TABLESPACESTATS table and explains
how you can use them in deciding when to run REORG, RUNSTATS, or COPY.
Table 179. Descriptions of columns in the TABLESPACESTATS table
Column name Data type Description
DBNAME CHAR(8) NOT NULL The name of the database. This column is used to map a database
to its statistics.
NAME CHAR(8) NOT NULL The name of the table space. This column is used to map a table
space to its statistics.
PARTITION SMALLINT NOT The data set number within the table space. This column is used to
NULL map a data set number in a table space to its statistics. For
partitioned table spaces, this value corresponds to the partition
number for a single partition. For nonpartitioned table spaces, this
value is 0.
DBID SMALLINT NOT The internal identifier of the database. This column is used to map
NULL a DBID to its statistics.
PSID SMALLINT NOT The internal identifier of the table space page set descriptor. This
NULL column is used to map a PSID to its statistics.
If the table space contains more than one table, this value is the
sum of all rows in all tables. A null value means that the number of
rows is unknown or that REORG or LOAD has never been run.
Use the TOTALROWS value with the value of any column that
contains some affected rows to determine the percentage of rows
that are affected by a particular action.
NACTIVE INTEGER The number of active pages in the table space or partition.
Use the NACTIVE value with the value of any column that contains
some affected pages to determine the percentage of pages that are
affected by a particular action.
For multi-piece linear page sets, this value is the amount of space
in all data sets. A null value means the amount of space is
unknown.
A null value means that LOAD REPLACE has never been run on
the table space or partition or that the timestamp of the last LOAD
REPLACE is unknown.
You can compare this timestamp to the timestamp of the last COPY
on the same object to determine when a COPY is needed. If the
date of the last LOAD REPLACE is more recent than the last
COPY, you might need to run COPY:
(JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REORGLASTTIME TIMESTAMP The timestamp of the last REORG on the table space or partition.
A null value means REORG has never been run on the table space
or partition or that the timestamp of the last REORG is unknown.
You can compare this timestamp to the timestamp of the last COPY
on the same object to determine when a COPY is needed. If the
date of the last REORG is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REORGINSERTS INTEGER The number of records or LOBs that have been inserted since the
last REORG or LOAD REPLACE on the table space or partition.
This value does not include LOB updates because LOB updates
are really deletions followed by insertions. A null value means that
the number of updated rows is unknown.
REORGDISORGLOB INTEGER The number of LOBs that were inserted since the last REORG or
LOAD REPLACE that are not perfectly chunked. A LOB is perfectly
chunked if the allocated pages are in the minimum number of
chunks. A null value means that the number of imperfectly chunked
LOBs is unknown.
Use this value to determine whether you need to run REORG. For
example, you might want to run REORG if the ratio of
REORGDISORGLOB to the total number of LOBs is greater than
10%:
((REORGDISORGLOB*100)/TOTALROWS)>10
REORGUNCLUSTINS INTEGER The number of records that were inserted since the last REORG or
LOAD REPLACE that are not well-clustered with respect to the
clustering index. A record is well-clustered if the record is inserted
into a page that is within 16 pages of the ideal candidate page. The
clustering index determines the ideal candidate page.
You can use this value to determine whether you need to run
REORG. For example, you might want to run REORG if the
following comparison is true:
((REORGUNCLUSTINS*100)/TOTALROWS)>10
REORGMASSDELETE INTEGER The number of mass deletes from a segmented or LOB table
space, or the number of dropped tables from a segmented table
space, since the last REORG or LOAD REPLACE.
A null value means that the number of overflow records near the
pointer record is unknown.
A null value means that the number of overflow records far from the
pointer record is unknown.
A null value means that RUNSTATS has never been run on the
table space or partition, or that the timestamp of the last
RUNSTATS is unknown.
This value does not include LOB updates because LOB updates
are really deletions followed by insertions. A null value means that
the number of updated rows is unknown.
A null value means that COPY has never been run on the table
space or partition, or that the timestamp of the last full image copy
is unknown.
You might want to take a full image copy when 20% of the pages
have changed:
((COPYUPDATEDPAGES*100)/NACTIVE)>20
COPYCHANGES INTEGER The number of insert, delete, and update operations since the last
COPY.
You might want to take a full image copy when DB2 processes
more than 10% of the rows from the logs:
((COPYCHANGES*100)/TOTALROWS)>10
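Putting the copy-related thresholds together, an application might query the statistics
table directly. The following query is only a sketch that combines the 20% changed-pages
rule and the 10% logged-changes rule; it uses multiplications so that zero or null NACTIVE
and TOTALROWS values simply disqualify a row rather than causing a division error:
  SELECT DBNAME, NAME, PARTITION
    FROM SYSIBM.TABLESPACESTATS
    WHERE (COPYUPDATEDPAGES * 100) > (20 * NACTIVE)
       OR (COPYCHANGES * 100) > (10 * TOTALROWS);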
Table 180 describes the columns of the INDEXSPACESTATS table and explains
how you can use them in deciding when to run REORG, RUNSTATS, or COPY.
Table 180. Descriptions of columns in the INDEXSPACESTATS table
Column name Data type Description
DBNAME CHAR(8) NOT NULL The name of the database. This column is used to map a
database to its statistics.
NAME CHAR(8) NOT NULL The name of the index space. This column is used to map an
index space to its statistics.
PARTITION SMALLINT NOT The data set number within the index space. This column is used to
NULL map a data set number in an index space to its statistics.
Use this value with the value of any column that contains a
number of affected index entries to determine the percentage of
index entries that are affected by a particular action.
NLEVELS SMALLINT The number of levels in the index tree.
Use this value with the value of any column that contains a
number of affected pages to determine the percentage of pages
that are affected by a particular action.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last LOAD REPLACE is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REBUILDLASTTIME TIMESTAMP The timestamp of the last REBUILD INDEX on the index space or
partition.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last REBUILD INDEX is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(REBUILDLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REORGLASTTIME TIMESTAMP The timestamp of the last REORG INDEX on the index space or
partition.
A null value means that the timestamp of the last REORG INDEX
is unknown.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last REORG INDEX is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))
A null value means that the number of split pages near their
original pages is unknown.
REORGLEAFFAR INTEGER The number of index page splits that occurred since the last
REORG, REBUILD INDEX, or LOAD REPLACE in which the
higher part of the split page was far from the location of the
original page. The higher part of a split page is far from the
original page if the two page numbers differ by more than 16.
A null value means that the number of split pages that are far
from their original pages is unknown.
If this value is less than zero, the index space contains empty
pages. Running REORG can save disk space and decrease
index sequential scan I/O time by eliminating those empty pages.
STATSLASTTIME TIMESTAMP The timestamp of the last RUNSTATS on the index space or
partition.
A null value means that RUNSTATS has never been run on the
index space or partition, or that the timestamp of the last
RUNSTATS is unknown.
A null value means that COPY has never been run on the index
space or partition, or that the timestamp of the last full image
copy is unknown.
For example, you might want to take a full image copy when 20%
of the pages have changed:
((COPYUPDATEDPAGES*100)/NACTIVE)>20
COPYCHANGES INTEGER The number of insert or delete operations since the last COPY.
For example, you might want to take a full image copy when DB2
processes more than 10% of the index entries from the logs:
((COPYCHANGES*100)/TOTALENTRIES)>10
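A similar sketch for index spaces flags REORG candidates whose far-off page splits exceed
10% of the active pages; the 10% threshold is only an assumed starting point:
  SELECT DBNAME, NAME, PARTITION
    FROM SYSIBM.INDEXSPACESTATS
    WHERE (REORGLEAFFAR * 100) > (10 * NACTIVE);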
Table 182 shows how running LOAD affects the INDEXSPACESTATS statistics for
an index space or physical index partition.
Table 182. Changed INDEXSPACESTATS values during LOAD
Column name            Settings for LOAD REPLACE after BUILD phase
TOTALENTRIES Number of index entries added1
NLEVELS Actual value
NACTIVE Actual value
SPACE Actual value
EXTENTS Actual value
LOADRLASTTIME Current timestamp
REORGINSERTS 0
REORGDELETES 0
REORGAPPENDINSERT 0
REORGPSEUDODELETES 0
REORGMASSDELETE 0
REORGLEAFNEAR 0
REORGLEAFFAR 0
REORGNUMLEVELS 0
STATSLASTTIME Current timestamp2
STATSINSERTS 02
STATSDELETES 02
STATSMASSDELETE 02
COPYLASTTIME Current timestamp3
COPYUPDATEDPAGES 03
COPYCHANGES 03
COPYUPDATELRSN Null3
Table 184 shows how running REORG affects the INDEXSPACESTATS statistics for
an index space or physical index partition.
Table 184. Changed INDEXSPACESTATS values during REORG
Column name     Settings for REORG SHRLEVEL NONE    Settings for REORG SHRLEVEL REFERENCE
                after RELOAD phase                  or CHANGE after SWITCH phase
TOTALENTRIES    Number of index entries added1      For SHRLEVEL REFERENCE: Number of
                                                    added index entries during BUILD phase
For a logical index partition, DB2 does not reset the nonpartitioned index when it
does a REORG on a partition. Therefore, DB2 does not reset the statistics for the
index. The REORG counters and REORGLASTTIME are relative to the last time
the entire nonpartitioned index is reorganized. In addition, the REORG counters
might be low because, due to the methodology, some index entries are changed
during REORG of a partition.
For a logical index partition, DB2 does not collect TOTALENTRIES statistics for the
entire nonpartitioned index when it runs REBUILD INDEX. Therefore, DB2 does not
reset the statistics for the index. The REORG counters from the last REORG are
still correct. DB2 updates REBUILDLASTTIME when the entire nonpartitioned index
is rebuilt.
Table 186 shows how running RUNSTATS UPDATE ALL on a table space or table
space partition affects the TABLESPACESTATS statistics.
Table 186. Changed TABLESPACESTATS values during RUNSTATS UPDATE ALL
Column name During UTILINIT phase After RUNSTATS phase
STATSLASTTIME Current timestamp1 Timestamp of the start of
RUNSTATS phase
STATSINSERTS Actual value1 Actual value2
STATSDELETES Actual value1 Actual value2
STATSUPDATES Actual value1 Actual value2
STATSMASSDELETE Actual value1 Actual value2
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
Table 187 shows how running RUNSTATS UPDATE ALL on an index affects the
INDEXSPACESTATS statistics.
Table 187. Changed INDEXSPACESTATS values during RUNSTATS UPDATE ALL
Column name During UTILINIT phase After RUNSTATS phase
STATSLASTTIME Current timestamp1 Timestamp of the start of
RUNSTATS phase
STATSINSERTS Actual value1 Actual value2
STATSDELETES Actual value1 Actual value2
STATSMASSDELETE Actual value1 Actual value2
Table 188 shows how running COPY on a table space or table space partition
affects the TABLESPACESTATS statistics.
Table 188. Changed TABLESPACESTATS values during COPY
Column name During UTILINIT phase After COPY phase
COPYLASTTIME Current timestamp1 Timestamp of the start of
COPY phase
COPYUPDATEDPAGES Actual value1 Actual value2
COPYCHANGES Actual value1 Actual value2
COPYUPDATELRSN Actual value1 Actual value3
COPYUPDATETIME Actual value1 Actual value3
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
Table 189 shows how running COPY on an index affects the INDEXSPACESTATS
statistics.
Table 189. Changed INDEXSPACESTATS values during COPY
Column name During UTILINIT phase After COPY phase
COPYLASTTIME Current timestamp1 Timestamp of the start of
COPY phase
COPYUPDATEDPAGES Actual value1 Actual value2
COPYCHANGES Actual value1 Actual value2
COPYUPDATELRSN Actual value1 Actual value3
COPYUPDATETIME Actual value1 Actual value3
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
If a row still exists in the real-time statistics tables for a dropped table space or
index, and if you create a new object with the same DBID and PSID as the dropped
object, DB2 reinitializes the row before it updates any values in that row.
INSERT: When you perform an INSERT, DB2 increments the insert counters. DB2
keeps separate counters for clustered and unclustered INSERTs.
DELETE: When you perform a DELETE, DB2 increments the delete counters.
| Notice that when INSERT and DELETE operations are rolled back, the counter for
| the inverse operation is incremented. For example, if two INSERT statements are
| rolled back, the delete counter is incremented by 2.
If an update to a partitioning key does not cause rows to move to a new partition,
the counts are accumulated as expected.
DB2 does locking based on the lock size of the DSNRTSDB.DSNRTSTS table
space. DB2 uses cursor stability isolation and CURRENTDATA(YES) when it reads
the statistics tables.
At the beginning of a RUNSTATS job, all data sharing members externalize their
statistics to the real-time statistics tables and reset their in-memory statistics. If all
members cannot externalize their statistics, DB2 sets STATSLASTTIME to null. An
error in gathering and externalizing statistics does not prevent RUNSTATS from
running.
At the beginning of a COPY job, all data sharing members externalize their
statistics to the real-time statistics tables and reset their in-memory statistics. If all
members cannot externalize their statistics, DB2 sets COPYLASTTIME to null. An
error in gathering and externalizing statistics does not prevent COPY from running.
Utilities that reset page sets to empty can invalidate the in-memory statistics of
other DB2 members. The member that resets a page set notifies the other DB2
members that a page set has been reset to empty, and the in-memory statistics are
invalidated. If the notify process fails, the utility that resets the page set does not
fail. DB2 sets the appropriate timestamp (REORGLASTTIME, STATSLASTTIME, or
COPYLASTTIME) to null in the row for the empty page set to indicate that the
statistics for that page set are unknown.
Statistics accuracy
In general, the real-time statistics are accurate values. However, several factors can
affect the accuracy of the statistics:
v Certain utility restart scenarios
v Certain utility operations that leave indexes in a database restrictive state, such
as RECOVER-pending (RECP)
Always consider the database restrictive state of objects before accepting a utility
recommendation that is based on real-time statistics.
v A DB2 subsystem failure
v A notify failure in a data sharing environment
| Figure 166 describes the format of delimited files that can be loaded into or
| unloaded from tables by using the LOAD and UNLOAD utilities.
|
| Delimited file ::= Row 1 data ||
| Row 2 data ||
| .
| .
| .
| Row n data
|
| Row i data ::= Cell value(i,1) || Column delimiter ||
| Cell value(i,2) || Column delimiter ||
| .
| .
| .
| Cell value(i,m)
|
| Column delimiter ::= Character specified by COLDEL option;
| the default value is a comma (,)
|
| Cell value(i,j) ::= Leading spaces ||
| External numeric values ||
| Delimited character string ||
| Non-delimited character string ||
| Trailing spaces
|
| Non-delimited character string ::= A set of any characters except
| a column delimiter
|
| Delimited character string ::= A character string delimiter ||
| A set of any characters except a
| character string delimiter unless
| the character string delimiter is
| part of two successive character
| string delimiters ||
| A character string delimiter ||
| Trailing garbage
|
| Character string delimiter ::= Character specified by CHARDEL option; the default
| value is a double quotation mark (")
|
| Trailing garbage ::= A set of any characters except a column delimiter
|
| Figure 166. Format of delimited files
|
| For more information about the COLDEL and CHARDEL options for the LOAD
| utility, see “Option descriptions” on page 188. For more information about the
| COLDEL and CHARDEL options for the UNLOAD utility, see “Option descriptions”
| on page 597.
| Notes:
| 1. Field specifications of INTEGER or SMALLINT are treated as INTEGER EXTERNAL.
| 2. Field specifications of DECIMAL, DECIMAL PACKED, or DECIMAL ZONED are treated
| as DECIMAL EXTERNAL.
| 3. Field specifications of FLOAT, REAL, or DOUBLE are treated as FLOAT EXTERNAL.
|
|
| Examples of delimited files
| Figure 167 shows an example of a delimited file with delimited character strings. In
| this example, the column delimiter is a comma (,). Because the character strings
| contain the column delimiter character, they must be delimited with character string
| delimiters. In this example, the character string delimiter is a double quotation mark
| (").
|
"Smith, Bob",4973,15.46
"Jones, Bill",12345,16.34
"Williams, Sam",452,193.78
The most rewarding task associated with a database management system is asking
questions of it and getting answers, the task called end use. Other tasks are also
necessary: defining the parameters of the system, putting the data in place, and so
on. The tasks that are associated with DB2 are grouped into the following major
categories. (Supplemental information that relates to all of the following tasks for
new releases of DB2 can be found in DB2 Release Planning Guide.)
Installation: If you are involved with DB2 only to install the system, DB2 Installation
Guide might be all you need.
If you will be using data sharing capabilities, you also need DB2 Data Sharing:
Planning and Administration, which describes installation considerations for data
sharing.
End use: End users issue SQL statements to retrieve data. They can also insert,
update, or delete data, with SQL statements. They might need an introduction to
SQL, detailed instructions for using SPUFI, and an alphabetized reference to the
types of SQL statements. This information is found in DB2 Application Programming
and SQL Guide, and DB2 SQL Reference.
End users can also issue SQL statements through the DB2 Query Management
Facility (QMF) or some other program, and the library for that licensed program
might provide all the instruction or reference material they need. For a list of the
titles in the DB2 QMF library, see the bibliography at the end of this book.
Application programming: Some users access DB2 without knowing it, using
programs that contain SQL statements. DB2 application programmers write those
programs. Because they write SQL statements, they need the same resources that
end users do.
The material needed for writing a host program containing SQL is in DB2
Application Programming and SQL Guide and in DB2 Application Programming
Guide and Reference for Java. The material needed for writing applications that use
If you will be working in a distributed environment, you will need DB2 Reference for
Remote DRDA Requesters and Servers.
If you will be using the RACF access control module for DB2 authorization
checking, you will need DB2 RACF Access Control Module Guide.
If you are involved with DB2 only to design the database, or plan operational
procedures, you need DB2 Administration Guide. If you also want to carry out your
own plans by creating DB2 objects, granting privileges, running utility jobs, and so
on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create,
alter, and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands
If you will be using data sharing, you need DB2 Data Sharing: Planning and
Administration, which describes how to plan for and implement data sharing.
Diagnosis: Diagnosticians detect and describe errors in the DB2 program. They
might also recommend or apply a remedy. The documentation for this task is in
DB2 Diagnosis Guide and Reference and DB2 Messages and Codes.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION ″AS IS″ WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply to
you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material available
for it are provided by IBM under terms of the IBM Customer Agreement, IBM
International Program License Agreement, or any equivalent agreement between
us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, other countries, or both:
BookManager IMS
CICS iSeries
CICS Connection Language Environment
CT MVS
DataJoiner MVS/ESA
DataPropagator OpenEdition
DataRefresher OS/390
DB2 Parallel Sysplex
DB2 Connect PR/SM
DB2 Universal Database QMF
DFSMSdfp RACF
DFSMSdss RAMAC
DFSMShsm Redbooks
DFSORT S/390
Distributed Relational Database Architecture SecureWay
DRDA SQL/DS
Enterprise Storage Server System/390
ES/3090 TotalStorage
eServer VTAM
FlashCopy WebSphere
IBM z/OS
IBM Registry
Java and all Java-based trademarks and logos are trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library.

after trigger. A trigger that is defined with the trigger activation time AFTER.

APPL. A VTAM® network definition statement that is used to define DB2 to VTAM as an application program that uses SNA LU 6.2 protocols.

application. A program or set of programs that performs a task; for example, a payroll application.

application-directed connection. A connection that an application manages using the SQL CONNECT statement.

application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution.

application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs.

application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.

application requester. The component on a remote system that generates DRDA® requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol.

application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications.

archive log. The portion of the DB2 log that contains log records that have been copied from the active log.

ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.

| ASID. Address space identifier.

attachment facility. An interface between DB2 and TSO, IMS, CICS, or batch address spaces. An attachment facility allows application programs to access DB2.

attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee’s attributes.

authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation.

authorized program analysis report (APAR). A report of a problem that is caused by a suspected defect in a current release of an IBM supplied program.

authorized program facility (APF). A facility that permits the identification of programs that are authorized to use restricted functions.

| automatic query rewrite. A process that examines an SQL statement that refers to one or more base tables, and, if appropriate, rewrites the query so that it performs better. This process can also determine whether to rewrite a query so that it refers to one or more materialized query tables that are derived from the source tables.

auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB.

auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.

B

backout. The process of undoing uncommitted changes that an application process made. This might be necessary in the event of a failure on the part of an application process, or as a result of a deadlock situation.

backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes.

base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table.

base table space. A table space that contains base tables.

basic predicate. A predicate that compares two values.

basic sequential access method (BSAM). An access method for storing or retrieving data blocks in a continuous sequence, using either a sequential-access or a direct-access device.

| batch message processing program. In IMS, an application program that can perform batch-type processing online and can access the IMS input and output message queues.

before trigger. A trigger that is defined with the trigger activation time BEFORE.

binary integer. A basic data type that can be further classified as small integer or large integer.

binary large object (BLOB). A sequence of bytes where the size of the value ranges from 0 bytes to 2 GB−1. Such a string does not have an associated CCSID.

binary string. A sequence of bytes that is not associated with a CCSID. For example, the BLOB data type is a binary string.

bind. The process by which the output from the SQL precompiler is converted to a usable control structure, often called an access plan, application plan, or package. During this process, access paths to the data are selected and some authorization checking is performed. The types of bind are:
   automatic bind. (More correctly, automatic rebind) A process by which SQL statements are bound automatically (without a user issuing a BIND command) when an application process begins execution and the bound application plan or package it requires is not valid.
   dynamic bind. A process by which SQL statements are bound as they are entered.
   incremental bind. A process by which SQL statements are bound during the execution of an application process.
   static bind. A process by which SQL statements are bound after they have been precompiled. All static SQL statements are prepared for execution at the same time.

bit data. Data that is character type CHAR or VARCHAR and is not associated with a coded character set.

BLOB. Binary large object.

block fetch. A capability in which DB2 can retrieve, or fetch, a large set of rows together. Using block fetch can significantly reduce the number of messages that are being sent across the network. Block fetch applies only to cursors that do not update data.

BMP. Batch Message Processing (IMS). See batch message processing program.

bootstrap data set (BSDS). A VSAM data set that contains name and status information for DB2, as well as RBA range specifications, for all active and archive log data sets. It also contains passwords for the DB2 directory and catalog, and lists of conditional restart and checkpoint records.

BSAM. Basic sequential access method.

BSDS. Bootstrap data set.

buffer pool. Main storage that is reserved to satisfy the buffering requirements for one or more table spaces or indexes.

built-in data type. A data type that IBM supplies. Among the built-in data types for DB2 UDB for z/OS are string, numeric, ROWID, and datetime. Contrast with distinct type.

built-in function. A function that DB2 supplies. Contrast with user-defined function.

business dimension. A category of data, such as products or time periods, that an organization might want to analyze.

C

cache structure. A coupling facility structure that stores data that can be available to all members of a Sysplex. A DB2 data sharing group uses cache structures as group buffer pools.

CAF. Call attachment facility.

call attachment facility (CAF). A DB2 attachment facility for application programs that run in TSO or z/OS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment.

call-level interface (CLI). A callable application programming interface (API) for database access, which is an alternative to using embedded SQL. In contrast to embedded SQL, DB2 ODBC (which is based on the CLI architecture) does not require the user to precompile or bind applications, but instead provides a standard set of functions to process SQL statements and related services at run time.

cascade delete. The way in which DB2 enforces referential constraints when it deletes all descendent rows of a deleted parent row.

CASE expression. An expression that is selected based on the evaluation of one or more conditions.

cast function. A function that is used to convert instances of a (source) data type into instances of a different (target) data type. In general, a cast function has the name of the target data type. It has one single argument whose type is the source data type; its return type is the target data type.

castout. The DB2 process of writing changed pages from a group buffer pool to disk.

castout owner. The DB2 member that is responsible for casting out a particular page set or partition.

catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes.
catalog table. Any table in the DB2 catalog.

CCSID. Coded character set identifier.

CDB. Communications database.

CDRA. Character Data Representation Architecture.

CEC. Central electronic complex. See central processor complex.

central electronic complex (CEC). See central processor complex.

central processor (CP). The part of the computer that contains the sequencing and processing facilities for instruction execution, initial program load, and other machine operations.

central processor complex (CPC). A physical collection of hardware (such as an ES/3090™) that consists of main storage, one or more central processors, timers, and channels.

| CFRM. Coupling facility resource management.

CFRM policy. A declaration by a z/OS administrator regarding the allocation rules for a coupling facility structure.

character conversion. The process of changing characters from one encoding scheme to another.

Character Data Representation Architecture (CDRA). An architecture that is used to achieve consistent representation, processing, and interchange of string data.

character large object (CLOB). A sequence of bytes representing single-byte characters or a mixture of single- and double-byte characters where the size of the value can be up to 2 GB−1. In general, character large object values are used whenever a character string might exceed the limits of the VARCHAR type.

character set. A defined set of characters.

character string. A sequence of bytes that represent bit data, single-byte characters, or a mixture of single-byte and multibyte characters.

check constraint. A user-defined constraint that specifies the values that specific columns of a base table can contain.

check integrity. The condition that exists when each row in a table conforms to the check constraints that are defined on that table. Maintaining check integrity requires DB2 to enforce check constraints on operations that add or change data.

| check pending. A state of a table space or partition that prevents its use by some utilities and by some SQL statements because of rows that violate referential constraints, check constraints, or both.

checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates.

| child lock. For explicit hierarchical locking, a lock that is held on either a table, page, row, or a large object (LOB). Each child lock has a parent lock. See also parent lock.

CI. Control interval.

| CICS. Represents (in this publication): CICS Transaction Server for z/OS: Customer Information Control System Transaction Server for z/OS.

CICS attachment facility. A DB2 subcomponent that uses the z/OS subsystem interface (SSI) and cross-storage linkage to process requests from CICS to DB2 and to coordinate resource commitment.

CIDF. Control interval definition field.

claim. A notification to DB2 that an object is being accessed. Claims prevent drains from occurring until the claim is released, which usually occurs at a commit point. Contrast with drain.

claim class. A specific type of object access that can be one of the following isolation levels:
   Cursor stability (CS)
   Repeatable read (RR)
   Write

claim count. A count of the number of agents that are accessing an object.

class of service. A VTAM term for a list of routes through a network, arranged in an order of preference for their use.

class word. A single word that indicates the nature of a data attribute. For example, the class word PROJ indicates that the attribute identifies a project.

clause. In SQL, a distinct part of a statement, such as a SELECT clause or a WHERE clause.

CLI. Call-level interface.

client. See requester.

CLIST. Command list. A language for performing TSO tasks.

CLOB. Character large object.

closed application. An application that requires exclusive use of certain statements on certain DB2 objects, so that the objects are managed solely through the application’s external interface.
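The check constraint and check integrity entries can be illustrated with a short sketch (the table, column, and constraint names are hypothetical):

   CREATE TABLE EMP
     (EMPNO  CHAR(6) NOT NULL,
      SALARY DECIMAL(9,2),
      CONSTRAINT SALARY_NOT_NEGATIVE CHECK (SALARY >= 0));

DB2 enforces the constraint on operations that add or change SALARY values.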
CLPA. Create link pack area.

| clustering index. An index that determines how rows are physically ordered (clustered) in a table space. If a clustering index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within each data partition instead of spanning partitions. Prior to Version 8 of DB2 UDB for z/OS, the partitioning index was required to be the clustering index.

coded character set. A set of unambiguous rules that establish a character set and the one-to-one relationships between the characters of the set and their coded representations.

coded character set identifier (CCSID). A 16-bit number that uniquely identifies a coded representation of graphic characters. It designates an encoding scheme identifier and one or more pairs consisting of a character set identifier and an associated code page identifier.

code page. A set of assignments of characters to code points. In EBCDIC, for example, the character 'A' is assigned code point X'C1', and character 'B' is assigned code point X'C2'. Within a code page, each code point has only one specific meaning.

code point. In CDRA, a unique bit pattern that represents a character in a code page.

coexistence. During migration, the period of time in which two releases exist in the same data sharing group.

cold start. A process by which DB2 restarts without processing any log records. Contrast with warm start.

collection. A group of packages that have the same qualifier.

column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer).

column function. An operation that derives its result by using values from one or more rows. Contrast with scalar function.

"come from" checking. An LU 6.2 security option that defines a list of authorization IDs that are allowed to connect to DB2 from a partner LU.

command recognition character (CRC). A character that permits a z/OS console operator or an IMS subsystem user to route DB2 commands to specific DB2 subsystems.

command scope. The scope of command operation in a data sharing group. If a command has member scope, the command displays information only from the one member or affects only non-shared resources that are owned locally by that member. If a command has group scope, the command displays information from all members, affects non-shared resources that are owned locally by all members, displays information on sharable resources, or affects sharable resources.

commit. The operation that ends a unit of work by releasing locks so that the database changes that are made by that unit of work can be perceived by other processes.

commit point. A point in time when data is considered consistent.

committed phase. The second phase of the multisite update process that requests all participants to commit the effects of the logical unit of work.

common service area (CSA). In z/OS, a part of the common area that contains data areas that are addressable by all address spaces.

communications database (CDB). A set of tables in the DB2 catalog that are used to establish conversations with remote database management systems.

comparison operator. A token (such as =, >, or <) that is used to specify a relationship between two values.

composite key. An ordered set of key columns of the same table.

compression dictionary. The dictionary that controls the process of compression and decompression. This dictionary is created from the data in the table space or table space partition.

concurrency. The shared use of resources by more than one application process at the same time.

conditional restart. A DB2 restart that is directed by a user-defined conditional restart control record (CRCR).
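To illustrate the column function entry above (the EMP table and its columns are hypothetical), SUM derives one result from many rows, whereas a scalar function such as SUBSTR operates on each row individually:

   SELECT WORKDEPT, SUM(SALARY)
   FROM EMP
   GROUP BY WORKDEPT;

   SELECT SUBSTR(LASTNAME, 1, 3)
   FROM EMP;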
connection declaration clause. In SQLJ, a statement that declares a connection to a data source.

connection handle. The data object containing information that is associated with a connection that DB2 ODBC manages. This includes general status information, transaction status, and diagnostic information.

connection ID. An identifier that is supplied by the attachment facility and that is associated with a specific address space connection.

consistency token. A timestamp that is used to generate the version identifier for an application. See also version.

constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable.

constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, check constraint, and unique constraint.

context. The application’s logical connection to the data source and associated internal DB2 ODBC connection information that allows the application to direct its operations to a data source. A DB2 ODBC context represents a DB2 thread.

contracting conversion. A process that occurs when the length of a converted string is smaller than that of the source string. For example, this process occurs when an EBCDIC mixed-data string that contains DBCS characters is converted to ASCII mixed data; the converted string is shorter because of the removal of the shift codes.

control interval (CI). A fixed-length area of disk in which VSAM stores records and creates distributed free space. Also, in a key-sequenced data set or file, the set of records that an entry in the sequence-set index record points to. The control interval is the unit of information that VSAM transmits to or from disk. A control interval always includes an integral number of physical records.

control interval definition field (CIDF). In VSAM, a field that is located in the 4 bytes at the end of each control interval; it describes the free space, if any, in the control interval.

conversation. Communication, which is based on LU 6.2 or Advanced Program-to-Program Communication (APPC), between an application and a remote transaction program over an SNA logical unit-to-logical unit (LU-LU) session that allows communication while processing a transaction.

coordinator. The system component that coordinates the commit or rollback of a unit of work that includes work that is done on one or more other systems.

| copy pool. A named set of SMS storage groups that contains data that is to be copied collectively. A copy pool is an SMS construct that lets you define which storage groups are to be copied by using FlashCopy® functions. HSM determines which volumes belong to a copy pool.

| copy target. A named set of SMS storage groups that are to be used as containers for copy pool volume copies. A copy target is an SMS construct that lets you define which storage groups are to be used as containers for volumes that are copied by using FlashCopy functions.

| copy version. A point-in-time FlashCopy copy that is managed by HSM. Each copy pool has a version parameter that specifies how many copy versions are maintained on disk.

correlated columns. A relationship between the value of one column and the value of another column.

correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement.

correlation ID. An identifier that is associated with a specific thread. In TSO, it is either an authorization ID or the job name.

correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement.

cost category. A category into which DB2 places cost estimates for SQL statements at the time the statement is bound. A cost estimate can be placed in either of the following cost categories:
v A: Indicates that DB2 had enough information to make a cost estimate without using default values.
v B: Indicates that some condition exists for which DB2 was forced to use default values for its estimate.
The cost category is externalized in the COST_CATEGORY column of the DSN_STATEMNT_TABLE when a statement is explained.

coupling facility. A special PR/SM™ LPAR logical partition that runs the coupling facility control program and provides high-speed caching, list processing, and locking functions in a Parallel Sysplex®.

| coupling facility resource management. A component of z/OS that provides the services to manage coupling facility resources in a Parallel Sysplex. This management includes the enforcement of CFRM policies to ensure that the coupling facility and structure requirements are satisfied.
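The correlated subquery and correlation name entries can be illustrated with the following sketch (the EMP table and its columns are hypothetical). E1 and E2 are correlation names, and the inner subquery is applied to each row of the outer subselect:

   SELECT E1.LASTNAME, E1.SALARY
   FROM EMP E1
   WHERE E1.SALARY > (SELECT AVG(E2.SALARY)
                      FROM EMP E2
                      WHERE E2.WORKDEPT = E1.WORKDEPT);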
CP. Central processor.

CPC. Central processor complex.

C++ member. A data object or function in a structure, union, or class.

C++ member function. An operator or function that is declared as a member of a class. A member function has access to the private and protected data members and to the member functions of objects in its class. Member functions are also called methods.

C++ object. (1) A region of storage. An object is created when a variable is defined or a new function is invoked. (2) An instance of a class.

CRC. Command recognition character.

CRCR. Conditional restart control record. See also conditional restart.

create link pack area (CLPA). An option that is used during IPL to initialize the link pack pageable area.

created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table.

cross-memory linkage. A method for invoking a program in a different address space. The invocation is synchronous with respect to the caller.

current status rebuild. The second phase of restart processing during which the status of the subsystem is reconstructed from information on the log.

cursor. A named control structure that an application program uses to point to a single row or multiple rows within some ordered set of rows of a result table. A cursor can be used to retrieve, update, or delete rows from a result table.

cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process.

cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors.

cursor table (CT). The copy of the skeleton cursor table that is used by an executing application process.

cycle. A set of tables that can be ordered so that each table is a descendent of the one before it, and the first table is a descendent of the last table. A self-referencing table is a cycle with a single member.
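As a minimal sketch of the cursor entry above (the table, column, and host-variable names are hypothetical), an application declares a cursor, opens it, fetches rows, and closes it:

   EXEC SQL DECLARE C1 CURSOR FOR
     SELECT EMPNO, LASTNAME FROM EMP;
   EXEC SQL OPEN C1;
   EXEC SQL FETCH C1 INTO :EMPNO, :LASTNAME;
   EXEC SQL CLOSE C1;

In a real program the FETCH is typically repeated in a loop until SQLCODE +100 (end of data) is returned.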
| …SQL statements are not modified and are sent unchanged to the database server.

| database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, relationships, check constraints, and triggers. A DBD also contains information about accessing tables in the database.

database exception status. An indication that something is wrong with a database. All members of a data sharing group must know and share the exception status of databases.

| database identifier (DBID). An internal identifier of the database.

database management system (DBMS). A software system that controls the creation, organization, and modification of a database and the access to the data that is stored within it.

database request module (DBRM). A data set member that is created by the DB2 precompiler and that contains information about SQL statements. DBRMs are used in the bind process.

database server. The target of a request from a local application or an intermediate database server. In the DB2 environment, the database server function is provided by the distributed data facility to access DB2 data from local applications, or from a remote database server that acts as an intermediate database server.

data currency. The state in which data that is retrieved into a host variable in your program is a copy of data in the base table.

data definition name (ddname). The name of a data definition (DD) statement that corresponds to a data control block containing the same name.

data dictionary. A repository of information about an organization’s application programs, databases, logical data models, users, and authorizations. A data dictionary can be manual or automated.

data-driven business rules. Constraints on particular data values that exist as a result of requirements of the business.

Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS.

data mart. A small data warehouse that applies to a single department or team. See also data warehouse.

data mining. The process of collecting critical business information from a data warehouse, correlating it, and uncovering associations, patterns, and trends.

data partition. A VSAM data set that is contained within a partitioned table space.

data-partitioned secondary index (DPSI). A secondary index that is partitioned. The index is partitioned according to the underlying data.

data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data.

data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity.

data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group.

data source. A local or remote relational or non-relational data manager that is capable of supporting data access via an ODBC driver that supports the ODBC APIs. In the case of DB2 UDB for z/OS, the data sources are always relational database managers.

| data space. In releases prior to DB2 UDB for z/OS, Version 8, a range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs.

data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions.

data warehouse. A system that provides critical business information to an organization. The data warehouse system cleanses the data for accuracy and currency, and then presents the data to decision makers so that they can interpret and use it effectively and efficiently.

date. A three-part value that designates a day, month, and year.

date duration. A decimal integer that represents a number of years, months, and days.

datetime value. A value of the data type DATE, TIME, or TIMESTAMP.

DBA. Database administrator.

DBCLOB. Double-byte character large object.

DBCS. Double-byte character set.
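The date duration entry can be illustrated with a worked example (illustrative only). Subtracting one date from another yields a date duration; the result of the query below is the decimal value 10205., which represents 1 year, 2 months, and 5 days:

   SELECT DATE('2004-03-15') - DATE('2003-01-10')
   FROM SYSIBM.SYSDUMMY1;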
DBRM. Database request module.

DB2 catalog. Tables that are maintained by DB2 and contain descriptions of DB2 objects, such as tables, views, and indexes.

DB2 command. An instruction to the DB2 subsystem that a user enters to start or stop DB2, to display information on current users, to start or stop databases, to display information on the status of databases, and so on.

DB2 for VSE & VM. The IBM DB2 relational database management system for the VSE and VM operating systems.

DB2I. DB2 Interactive.

DB2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation.

DB2I Kanji Feature. The tape that contains the panels and jobs that allow a site to display DB2I panels in Kanji.

DB2 PM. DB2 Performance Monitor.

DB2 thread. The DB2 structure that describes an application’s connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services.

DCLGEN. Declarations generator.

DDF. Distributed data facility.

ddname. Data definition name.

deadlock. Unresolvable contention for the use of a resource, such as a table or an index.

declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand.

declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and can be used only by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table.

deferred embedded SQL. SQL statements that are neither fully static nor fully dynamic. Like static statements, they are embedded within an application, but like dynamic statements, they are prepared during the execution of the application.

deferred write. The process of asynchronously writing changed data pages to disk.

degree of parallelism. The number of concurrently executed operations that are initiated to process a query.

delete-connected. A table that is a dependent of table P or a dependent of a table to which delete operations from table P cascade.

delete hole. The location on which a cursor is positioned when a row in a result table is refetched and the row no longer exists on the base table, because another cursor deleted the row between the time the cursor first included the row in the result table and the time the cursor tried to refetch it.

delete rule. The rule that tells DB2 what to do to a dependent row when a parent row is deleted. For each relationship, the rule might be CASCADE, RESTRICT, SET NULL, or NO ACTION.

delete trigger. A trigger that is defined with the triggering SQL operation DELETE.

delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_).

delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in DB2 syntax diagrams.

denormalization. A key step in the task of building a physical relational database design. Denormalization is the intentional duplication of columns in multiple tables, and the consequence is increased data redundancy. Denormalization is sometimes necessary to minimize performance problems. Contrast with normalization.

dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See also parent row, parent table, parent table space.

dependent row. A row that contains a foreign key that matches the value of a primary key in the parent row.

dependent table. A table that is a dependent in at least one referential constraint.
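The declared temporary table entry above, and the created temporary table entry earlier in this glossary, can be contrasted with a short sketch (the table and column names are hypothetical):

   CREATE GLOBAL TEMPORARY TABLE TEMPPROJ
     (PROJNO CHAR(6), BUDGET DECIMAL(9,2));

   DECLARE GLOBAL TEMPORARY TABLE TEMPPROJ
     (PROJNO CHAR(6), BUDGET DECIMAL(9,2));

The first definition is recorded in the DB2 catalog; the second exists only for the application process that issues it and is referenced with the SESSION qualifier.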
directory. The DB2 system database that contains internal objects such as database descriptors and skeleton cursor tables.

distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes.

distributed data. Data that resides on a DBMS other than the local system.

distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another relational database management system.

downstream. The set of nodes in the syncpoint tree that is connected to the local DBMS as a participant in the execution of a two-phase commit.

| DPSI. Data-partitioned secondary index.

drain. The act of acquiring a locked resource by quiescing access to that object.

drain lock. A lock on a claim class that prevents a claim from occurring.

DRDA. Distributed Relational Database Architecture.

DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access.

DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names.

duration. A number that represents an interval of time. See also date duration, labeled duration, and time duration.

| dynamic cursor. A named control structure that an application program uses to change the size of the result table and the order of its rows after the cursor is opened. Contrast with static cursor.

dynamic dump. A dump that is issued during the execution of a program, usually under the control of that program.

dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program’s execution.

| dynamic statement cache pool. A cache, located above the 2-GB storage line, that holds dynamic statements.

| EB. See exabyte.

EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the z/OS, VM, VSE, and iSeries™ environments. Contrast with ASCII and Unicode.

e-business. The transformation of key business processes through the use of Internet technologies.

| EDM pool. A pool of main storage that is used for database descriptors, application plans, authorization cache, and application packages.

EID. Event identifier.

embedded SQL. SQL statements that are coded within an application program. See static SQL.

enclave. In Language Environment®, an independent collection of routines, one of which is designated as the main routine. An enclave is similar to a program or run unit.

encoding scheme. A set of rules to represent character data (ASCII, EBCDIC, or Unicode).

entity. A significant object of interest to an organization.

enumerated list. A set of DB2 objects that are defined with a LISTDEF utility control statement in which pattern-matching characters (*, %, _ or ?) are not used.

environment. A collection of names of logical and physical resources that are used to support the performance of a function.

environment handle. In DB2 ODBC, the data object that contains global information regarding the state of the application. An environment handle must be allocated before a connection handle can be allocated. Only one environment handle can be allocated per application.

EOM. End of memory.

EOT. End of task.

equijoin. A join operation in which the join-condition has the form expression = expression.

error page range. A range of pages that are considered to be physically damaged. DB2 does not allow users to access any pages that fall within this range.

ESMT. External subsystem module table (in IMS).

EUR. IBM European Standards.

| exabyte. For processor, real and virtual storage capacities and channel volume: 1 152 921 504 606 846 976 bytes, or 2 to the 60th power bytes.

exception table. A table that holds rows that violate referential constraints or check constraints that the CHECK DATA utility finds.

exclusive lock. A lock that prevents concurrently executing application processes from reading or changing data. Contrast with share lock.

executable statement. An SQL statement that can be embedded in an application program, dynamically prepared and executed, or issued interactively.

execution context. In SQLJ, a Java object that can be used to control the execution of SQL statements.
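As a minimal sketch of the dynamic SQL entry above (the statement name and host variable are hypothetical), the SQL text is held in a host variable and prepared at run time:

   EXEC SQL PREPARE STMT1 FROM :STMTBUF;
   EXEC SQL EXECUTE STMT1;

Here :STMTBUF contains, for example, an UPDATE or DELETE statement that the program built while it was running.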
forget. In a two-phase commit operation, (1) the vote that is sent to the prepare phase when the participant has not modified any data. The forget vote allows a participant to release locks and forget about the logical unit of work. This is also referred to as the read-only vote. (2) The response to the committed request in the second phase of the operation.

forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records.

free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space.

full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join.

fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.

| fully escaped mapping. A mapping from an SQL identifier to an XML name when the SQL identifier is a column name.

function. A mapping, which is embodied as a program (the function body) that is invocable by means of zero or more input values (arguments) to a single value (the result). See also column function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See also built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)

function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement.

function implementer. The authorization ID of the owner of the function program and function package.

function package. A package that results from binding the DBRM for a function program.

function package owner. The authorization ID of the user who binds the function program’s DBRM into a function package.

function resolution. The process, internal to the DBMS, by which a function invocation is bound to a particular function instance. This process uses the function name, the data types of the arguments, and a list of the applicable schema names (called the SQL path) to make the selection. This process is sometimes called function selection.

function selection. See function resolution.

function signature. The logical concatenation of a fully qualified function name with the data types of all of its parameters.

G

GB. Gigabyte (1 073 741 824 bytes).

GBP. Group buffer pool.

GBP-dependent. The status of a page set or page set partition that is dependent on the group buffer pool. Either read/write interest is active among DB2 subsystems for this page set, or the page set has changed pages in the group buffer pool that have not yet been cast out to disk.

generalized trace facility (GTF). A z/OS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts.

generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment.

getpage. An operation in which DB2 accesses a data page.

global lock. A lock that provides concurrency control within and among DB2 subsystems. The scope of the lock is across all DB2 subsystems of a data sharing group.

global lock contention. Conflicts on locking requests between different DB2 members of a data sharing group when those members are trying to serialize shared resources.

governor. See resource limit facility.

graphic string. A sequence of DBCS characters.

gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space.

group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members.

group buffer pool duplexing. The ability to write data to two instances of a group buffer pool structure: a primary group buffer pool and a secondary group buffer pool. z/OS publications refer to these instances as the "old" (for primary) and "new" (for secondary) structures.

group level. The release level of a data sharing group, which is established when the first member migrates to a new release.
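The full outer join entry above can be illustrated with a sketch (EMP, DEPT, and their columns are hypothetical); matched rows from both tables are returned, and unmatched rows from either table are preserved with nulls:

   SELECT E.EMPNO, D.DEPTNO
   FROM EMP E FULL OUTER JOIN DEPT D
     ON E.WORKDEPT = D.DEPTNO;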
group name. The z/OS XCF identifier for a data sharing group.

group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area.

heuristic decision. A decision that forces indoubt resolution at a participant by means other than automatic resynchronization between coordinator and participant.

| hole. A row of the result table that cannot be accessed because of a delete or an update that has been performed on the row. See also delete hole and update hole.

home address space. The area of storage that z/OS currently recognizes as dispatched.

host. The set of programs and resources that are available on a given TCP/IP instance.

host expression. A Java variable or expression that is referenced by SQL clauses in an SQLJ application program.

host identifier. A name that is declared in the host program.

host language. A programming language in which you can embed SQL statements.

host program. An application program that is written in a host language and that contains embedded SQL statements.

host structure. In an application program, a structure that is referenced by embedded SQL statements.

host variable. In an application program, an application variable that is referenced by embedded SQL statements.

| host variable array. An array of elements, each of which corresponds to a value for a column. The dimension of the array determines the maximum number of rows for which the array can be used.

HSM. Hierarchical storage manager.

identify. A request that an attachment service program in an address space that is separate from DB2 issues through the z/OS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2.

identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a unique index that contains only the identity column. A table can have no more than one identity column.

IFCID. Instrumentation facility component identifier.

IFI. Instrumentation facility interface.

IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions.

IFP. IMS Fast Path.

image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy).

implied forget. In the presumed-abort protocol, an implied response of forget to the second-phase committed request from the coordinator. The response is implied when the participant responds to any subsequent request from the coordinator.

IMS. Information Management System.
IMS. Information Management System.
IMS attachment facility. A DB2 subcomponent that indoubt resolution. The process of resolving the
uses z/OS subsystem interface (SSI) protocols and status of an indoubt logical unit of work to either the
cross-memory linkage to process requests from IMS to committed or the rollback state.
DB2 and to coordinate resource commitment.
inflight. A status of a unit of recovery. If DB2 fails
IMS DB. Information Management System Database. before its unit of recovery completes phase 1 of the
commit process, it merely backs out the updates of its
IMS TM. Information Management System Transaction unit of recovery at restart. These units of recovery are
Manager. termed inflight.
in-abort. A status of a unit of recovery. If DB2 fails inheritance. The passing downstream of class
after a unit of recovery begins to be rolled back, but resources or attributes from a parent class in the class
before the process is completed, DB2 continues to back hierarchy to a child class.
out the changes during restart.
initialization file. For DB2 ODBC applications, a file
in-commit. A status of a unit of recovery. If DB2 fails containing values that can be set to adjust the
after beginning its phase 2 commit processing, it performance of the database manager.
"knows," when restarted, that changes made to data are
consistent. Such units of recovery are termed in-commit. inline copy. A copy that is produced by the LOAD or
REORG utility. The data set that the inline copy
independent. An object (row, table, or table space) produces is logically equivalent to a full image copy that
that is neither a parent nor a dependent of another is produced by running the COPY utility with read-only
object. access (SHRLEVEL REFERENCE).
index. A set of pointers that are logically ordered by inner join. The result of a join operation that includes
the values of a key. Indexes can provide faster access only the matched rows of both tables that are being
to data and can enforce uniqueness on the rows in a joined. See also join.
table.
inoperative package. A package that cannot be used
| index-controlled partitioning. A type of partitioning in because one or more user-defined functions or
| which partition boundaries for a partitioned table are procedures that the package depends on were dropped.
| controlled by values that are specified on the CREATE Such a package must be explicitly rebound. Contrast
| INDEX statement. Partition limits are saved in the with invalid package.
| LIMITKEY column of the SYSIBM.SYSINDEXPART
| catalog table. | insensitive cursor. A cursor that is not sensitive to
| inserts, updates, or deletes that are made to the
index key. The set of columns in a table that is used | underlying rows of a result table after the result table
to determine the order of index entries. | has been materialized.
index partition. A VSAM data set that is contained insert trigger. A trigger that is defined with the
within a partitioning index space. triggering SQL operation INSERT.
index space. A page set that is used to store the install. The process of preparing a DB2 subsystem to
entries of one index. operate as a z/OS subsystem.
indicator column. A 4-byte value that is stored in a installation verification scenario. A sequence of
base table in place of a LOB column. operations that exercises the main DB2 functions and
tests whether DB2 was correctly installed.
indicator variable. A variable that is used to represent
the null value in an application program. If the value for instrumentation facility component identifier
the selected column is null, a negative value is placed (IFCID). A value that names and identifies a trace
in the indicator variable. record of an event that can be traced. As a parameter
on the START TRACE and MODIFY TRACE
indoubt. A status of a unit of recovery. If DB2 fails commands, it specifies that the corresponding event is
after it has finished its phase 1 commit processing and to be traced.
before it has started phase 2, only the commit
coordinator knows if an individual unit of recovery is to instrumentation facility interface (IFI). A
be committed or rolled back. At emergency restart, if programming interface that enables programs to obtain
DB2 lacks the information it needs to make this online trace data about DB2, to submit DB2 commands,
decision, the status of the unit of recovery is indoubt and to pass data to DB2.
until DB2 obtains this information from the coordinator.
More than one unit of recovery can be indoubt at
restart.
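The indicator variable entry above can be illustrated with an embedded SQL sketch (the table, column, and host-variable names are hypothetical):

   EXEC SQL SELECT SALARY
     INTO :SAL :SALIND
     FROM EMP
     WHERE EMPNO = :ID;

If SALARY is null for the selected row, DB2 sets :SALIND to a negative value.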
Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services in a z/OS environment.

inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members.

intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location.

internationalization. The support for an encoding scheme that is able to represent the code points of characters from many different geographies and languages. To support all geographies, the Unicode standard requires more than 1 byte to represent a single character. See also Unicode.

internal resource lock manager (IRLM). A z/OS subsystem that DB2 uses to control communication and database locking.

| International Organization for Standardization. An international body charged with creating standards to facilitate the exchange of goods and services as well as cooperation in intellectual, scientific, technological, and economic activity.

invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package.

invariant character set. (1) A character set, such as the syntactic character set, whose code point assignments do not change from code page to code page. (2) A minimum set of characters that is available as part of all character sets.

ISO. International Organization for Standardization.

isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.

ISPF. Interactive System Productivity Facility.

ISPF/PDF. Interactive System Productivity Facility/Program Development Facility.

iterator. In SQLJ, an object that contains the result set of a query. An iterator is equivalent to a cursor in other host languages.

iterator declaration clause. In SQLJ, a statement that generates an iterator declaration class. An iterator is an object of an iterator declaration class.

J

| Japanese Industrial Standard. An encoding scheme that is used to process Japanese characters.

| JAR. Java Archive.

Java Archive (JAR). A file format that is used for aggregating many files into a single file.

JCL. Job control language.

JDBC. A Sun Microsystems database application programming interface (API) for Java that allows programs to access database management systems by using callable SQL. JDBC does not require the use of an SQL preprocessor. In addition, JDBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time.

JES. Job Entry Subsystem.

JIS. Japanese Industrial Standard.

job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job’s requirements.

Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs.

join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.

Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography.

Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal’s identity, a session key, a timestamp, and other information, which is sealed using the target’s secret key.
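As a sketch related to the isolation level entry above (the table and column names are hypothetical), the WITH clause of a SELECT statement can request a specific isolation level, such as repeatable read, for a single statement:

   SELECT LASTNAME
   FROM EMP
   WHERE WORKDEPT = 'D11'
   WITH RR;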
key. A column or an ordered collection of columns that is identified in the description of a table, index, or referential constraint. The same column can be part of more than one key.

key-sequenced data set (KSDS). A VSAM file or data set whose records are loaded in key sequence and controlled by an index.

keyword. In SQL, a name that identifies an option that is used in an SQL statement.

KSDS. Key-sequenced data set.

L

labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds.

latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources.

LCID. Log control interval definition.

LDS. Linear data set.

leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page.

left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join.

limit key. The highest value of the index key for a partition.

linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage.

linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses.

link-edit. The action of creating a loadable computer program using a linkage editor.

list. A type of object, which DB2 utilities can process, that identifies multiple table spaces, multiple index spaces, or both. A list is defined with the LISTDEF utility control statement.

list structure. A coupling facility structure that lets data be shared and manipulated as elements of a queue.

LLE. Load list element.

L-lock. Logical lock.

| load list element. A z/OS control block that controls the loading and deleting of a particular load module based on entry point names.

load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor.

LOB table space. A table space in an auxiliary table that contains all the data for a particular LOB column in the related base table.

local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote.

locale. The definition of a subset of a user’s environment that combines a CCSID and characters that are defined for a specific language and country.

local lock. A lock that provides intra-DB2 concurrency control, but not inter-DB2 concurrency control; that is, its scope is a single DB2.

local subsystem. The unique relational DBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).

| location. The unique name of a database server. An application uses the location name to access a DB2 database server. A database alias can be used to override the location name when accessing a remote server.

| location alias. Another name by which a database server identifies itself in the network. Applications can use this name to access a DB2 database server.
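The labeled duration entry above refers to expressions such as 10 YEARS or 30 DAYS in datetime arithmetic, as in this sketch (EMP and its columns are hypothetical):

   SELECT HIREDATE + 10 YEARS,
          CURRENT DATE + 30 DAYS
   FROM EMP;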
lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM.

lock duration. The interval over which a DB2 lock is held.

lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit.

locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.

lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding.

lock object. The resource that is controlled by a DB2 lock.

lock promotion. The process of changing the size or mode of a DB2 lock to a higher, more restrictive level.

lock size. The amount of data that is controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space.

lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources.

log. A collection of records that describe the events that occur during DB2 execution and that indicate their sequence. The information thus recorded is used for recovery in the event of a failure during DB2 execution.

| log control interval definition. A suffix of the physical log record that tells how record segments are placed in the physical control interval.

logical claim. A claim on a logical partition of a nonpartitioning index.

logical data modeling. The process of documenting the comprehensive business information requirements in an accurate and consistent format. Data modeling is the first task of designing a database.

logical drain. A drain on a logical partition of a nonpartitioning index.

logical index partition. The set of all keys that reference the same data partition.

logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock).

logically complete. A state in which the concurrent copy process is finished with the initialization of the target objects that are being copied. The target objects are available for update.

logical page list (LPL). A list of pages that are in error and that cannot be referenced by applications until the pages are recovered. The page is in logical error because the actual media (coupling facility or disk) might not contain any errors. Usually a connection to the media has been lost.

logical partition. A set of key or RID pairs in a nonpartitioning index that are associated with a particular partition.

logical recovery pending (LRECP). The state in which the data and the index keys that reference the data are inconsistent.

logical unit (LU). An access point through which an application program accesses the SNA network in order to communicate with another application program.

logical unit of work (LUW). The processing that a program performs between synchronization points.

logical unit of work identifier (LUWID). A name that uniquely identifies a thread within a network. This name consists of a fully-qualified LU network name, an LUW instance number, and an LUW sequence number.

log initialization. The first phase of restart processing during which DB2 attempts to locate the current end of the log.

log record header (LRH). A prefix, in every logical record, that contains control information.

log record sequence number (LRSN). A unique identifier for a log record that is associated with a data sharing member. DB2 uses the LRSN for recovery in the data sharing environment.

log truncation. A process by which an explicit starting RBA is established. This RBA is the point at which the next byte of log data is to be written.

LPL. Logical page list.

LRECP. Logical recovery pending.

LRH. Log record header.

LRSN. Log record sequence number.

LU. Logical unit.

LU name. Logical unit name, which is the name by which VTAM refers to a node in a network. Contrast with location name.
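As a sketch loosely related to the lock mode and lock size entries above (the table name is hypothetical), an application can explicitly request a share or exclusive mode lock with the LOCK TABLE statement; depending on the type of table space, DB2 acquires the lock on the table or on the entire table space:

   LOCK TABLE EMP IN EXCLUSIVE MODE;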
LUWID. Logical unit of work identifier.

M

mapping table. A table that the REORG utility uses to map the associations of the RIDs of data records in the original copy and in the shadow copy. This table is created by the user.

mass delete. The deletion of all rows of a table.

master terminal. The IMS logical terminal that has complete control of IMS resources during online operations.

master terminal operator (MTO). See master terminal.

materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary.

| materialized query table. A table that is used to contain information that is derived and can be summarized from one or more source tables.

MB. Megabyte (1 048 576 bytes).

MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2.

member name. The z/OS XCF identifier for a particular DB2 subsystem in a data sharing group.

menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.

| metalanguage. A language that is used to create other specialized languages.

migration. The process of converting a subsystem with a previous release of DB2 to an updated or current release. In this process, you can acquire the functions of the updated or current release without losing the data that you created on the previous release.

mixed data string. A character string that can contain both single-byte and double-byte characters.

MLPA. Modified link pack area.

MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table.

modeling database. A DB2 database that you create on your workstation that you use to model a DB2 UDB for z/OS subsystem, which can then be evaluated by the Index Advisor.

mode name. A VTAM name for the collection of physical and logical characteristics and attributes of a session.

modify locks. An L-lock or P-lock with a MODIFY attribute. A list of these active locks is kept at all times in the coupling facility lock structure. If the requesting DB2 subsystem fails, that DB2 subsystem’s modify locks are converted to retained locks.

MPP. Message processing program (in IMS).

MTO. Master terminal operator.

multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode.

multidimensional analysis. The process of assessing and evaluating an enterprise on more than one level.

Multiple Virtual Storage. An element of the z/OS operating system. This element is also called the Base Control Program (BCP).

multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work.

multithreading. Multiple TCBs that are executing one copy of DB2 ODBC code concurrently (sharing a processor) or in parallel (on separate central processors).

must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity.

mutex. Pthread mutual exclusion; a lock. A Pthread mutex variable is used as a locking mechanism to allow serialization of critical sections of code by temporarily blocking the execution of all but one thread.

| MVS. See Multiple Virtual Storage.

N

negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.

nested table expression. A fullselect in a FROM clause (surrounded by parentheses).
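The nested table expression entry above refers to a fullselect in a FROM clause, as in this sketch (EMP and its columns are hypothetical):

   SELECT T.WORKDEPT, T.AVGSAL
   FROM (SELECT WORKDEPT, AVG(SALARY) AS AVGSAL
         FROM EMP
         GROUP BY WORKDEPT) AS T
   WHERE T.AVGSAL > 50000;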
NRE. Network recovery element.

NUL. The null character (’\0’), which is represented by the value X'00'. In C, this character denotes the end of a string.

null. A special value that indicates the absence of information.

NULLIF. A scalar function that evaluates two passed expressions, returning either NULL if the arguments are equal or the value of the first argument if they are not.

null-terminated host variable. A varying-length host variable in which the end of the data is indicated by a null terminator.

null terminator. In C, the value that indicates the end of a string. For EBCDIC, ASCII, and Unicode UTF-8 strings, the null terminator is a single-byte value (X'00'). For Unicode UCS-2 (wide) strings, the null terminator is a double-byte value (X'0000').

originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel.

OS/390. Operating System/390®.

OS/390 OpenEdition® Distributed Computing Environment (OS/390 OE DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.

outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join.

overloaded function. A function name for which multiple function instances exist.
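The NULLIF entry above can be illustrated briefly (EMP and its COMM column are hypothetical); the function returns NULL when COMM equals zero and otherwise returns COMM:

   SELECT NULLIF(COMM, 0)
   FROM EMP;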
partitioning index. An index in which the leftmost columns are the partitioning columns of the table. The index can be partitioned or nonpartitioned.

partition pruning. The removal from consideration of inapplicable partitions through setting up predicates in a query on a partitioned table to access only certain partitions to satisfy the query.

partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation.

path. See SQL path.

PCT. Program control table (in CICS).

PDS. Partitioned data set.

piece. A data set of a nonpartitioned page set.

physical claim. A claim on an entire nonpartitioning index.

physical consistency. The state of a page that is not in a partially changed state.

physical drain. A drain on an entire nonpartitioning index.

physical lock (P-lock). A type of lock that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock).

physical lock contention. Conflicting states of the requesters for a physical lock. See also negotiable lock.

physically complete. The state in which the concurrent copy process is completed and the output data set has been created.

plan. See application plan.

plan allocation. The process of allocating DB2 resources to a plan in preparation for execution.

plan member. The bound copy of a DBRM that is identified in the member clause.

plan name. The name of an application plan.

plan segmentation. The dividing of each plan into sections. When a section is needed, it is independently brought into the EDM pool.

P-lock. Physical lock.

PLT. Program list table (in CICS).

point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.

policy. See CFRM policy.

Portable Operating System Interface (POSIX). The IEEE operating system interface standard, which defines the Pthread standard of threading. See also Pthread.

POSIX. Portable Operating System Interface.

postponed abort UR. A unit of recovery that was inflight or in-abort, was interrupted by system failure or cancellation, and did not complete backout during restart.

PPT. (1) Processing program table (in CICS). (2) Program properties table (in z/OS).

precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL terms.

precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process.

predicate. An element of a search condition that expresses or implies a comparison operation.

prefix. A code at the beginning of a message or record.

preformat. The process of preparing a VSAM ESDS for DB2 use, by writing specific data patterns.

prepare. The first phase of a two-phase commit process in which all participants are requested to prepare for commit.

prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement.

presumed-abort. An optimization of the presumed-nothing two-phase commit protocol that reduces the number of recovery log records, the duration of state maintenance, and the number of messages between coordinator and participant. The optimization also modifies the indoubt resolution responsibility.

presumed-nothing. The standard two-phase commit protocol that defines coordinator and participant responsibilities, relative to logical unit of work states, recovery logging, and indoubt resolution.

primary authorization ID. The authorization ID that is used to identify the application process to DB2.

primary group buffer pool. For a duplexed group buffer pool, the structure that is used to maintain the coherency of cached data. This structure is used for page registration and cross-invalidation. The z/OS equivalent is old structure. Compare with secondary group buffer pool.

primary index. An index that enforces the uniqueness of a primary key.

primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.

principal. An entity that can communicate securely with another entity. In Kerberos, principals are represented as entries in the Kerberos registry database and include users, servers, computers, and others.

principal name. The name by which a principal is known to the DCE security services.

private protocol connection. A DB2 private connection of the application process. See also private connection.

privilege. The capability of performing a specific function, sometimes on a specific object. The types of privileges are:
   explicit privileges, which have names and are held as the result of SQL GRANT and REVOKE statements. For example, the SELECT privilege.
   implicit privileges, which accompany the ownership of an object, such as the privilege to drop a synonym that one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job.

privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges that are recorded for that ID in the DB2 catalog.

process. In DB2, the unit to which DB2 allocates resources and locks. Sometimes called an application process, a process involves the execution of one or more programs. The execution of an SQL statement is always associated with some process. The means of initiating and terminating a process are dependent on the environment.

program. A single, compilable collection of executable statements in a programming language.

program temporary fix (PTF). A solution or bypass of a problem that is diagnosed as a result of a defect in a current unaltered release of a licensed program. An authorized program analysis report (APAR) fix is corrective service for an existing problem. A PTF is preventive service for problems that might be encountered by other users of the product. A PTF is temporary, because a permanent fix is usually not incorporated into the product until its next release.

protected conversation. A VTAM conversation that supports two-phase commit flows.

PSRCP. Page set recovery pending.

PTF. Program temporary fix.

Pthread. The POSIX threading standard model for splitting an application into subtasks. The Pthread standard includes functions for creating threads, terminating threads, synchronizing threads through locking, and other thread control facilities.

query. A component of certain SQL statements that specifies a result table.

query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query.

query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism.

query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query.

queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing, and output data blocks await transfer to auxiliary storage or to an output device.

quiesce point. A point at which data is consistent as a result of running the DB2 QUIESCE utility.

quiesced member state. A state of a member of a data sharing group. An active member becomes quiesced when a STOP DB2 command takes effect without a failure. If the member's task, address space, or z/OS system fails before the command takes effect, the member state is failed.
RCT. Resource control table (in CICS attachment facility).

RDB. Relational database.

RDBMS. Relational database management system.

RDBNAM. Relational database name.

RDF. Record definition field.

read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process.

rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index.

rebuild. The process of reallocating a coupling facility structure. For the shared communications area (SCA) and lock structure, the structure is repopulated; for the group buffer pool, changed pages are usually cast out to disk, and the new structure is populated only with changed pages that were not successfully cast out.

RECFM. Record format.

record. The storage representation of a row or other data.

record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table. Compare with row ID.

record identifier (RID) pool. An area of main storage that is used for sorting record identifiers during list-prefetch processing.

record length. The sum of the length of all the columns in a table, which is the length of the data as it is physically stored in the database. Records can be fixed length or varying length, depending on how the columns are defined. If all columns are fixed-length columns, the record is a fixed-length record. If one or more columns are varying-length columns, the record is a varying-length record.

recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution.

recovery manager. (1) A subcomponent that supplies coordination services that control the interaction of DB2 resource managers during commit, abort, checkpoint, and restart processes. The recovery manager also supports the recovery mechanisms of other subsystems (for example, IMS) by acting as a participant in the other subsystem's process for protecting data that has reached a point of consistency. (2) A coordinator or a participant (or both), in the execution of a two-phase commit, that can access a recovery log that maintains the state of the logical unit of work and names the immediate upstream coordinator and downstream participants.

recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered.

recovery token. An identifier for an element that is used in recovery (for example, NID or URID).

RECP. Recovery pending.

redo. A state of a unit of recovery that indicates that changes are to be reapplied to the disk media to ensure data integrity.

reentrant. Executable code that can reside in storage as one shared copy for all threads. Reentrant code is not self-modifying and provides separate storage areas for each thread. Reentrancy is a compiler and operating system concept, and reentrancy alone is not enough to guarantee logically consistent results when multithreading. See also threadsafe.

referential constraint. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the primary key of a designated table.

referential integrity. The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of referential constraints on all operations that change the data in a table on which the referential constraints are defined.

referential structure. A set of tables and relationships that includes at least one table and, for every table in the set, all the relationships in which that table participates and all the tables to which it is related.

refresh age. The time duration between the current time and the time during which a materialized query table was last refreshed.

registry. See registry database.

registry database. A database of security information about principals, groups, organizations, accounts, and security policies.

relational database (RDB). A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data.

relational database management system (RDBMS). A collection of hardware and software that organizes and provides access to a relational database.

relational database name (RDBNAM). A unique identifier for an RDBMS within a network. In DB2, this must be the value in the LOCATION column of table SYSIBM.LOCATIONS in the CDB. DB2 publications refer to the name of another RDBMS as a LOCATION value or a location name.

relationship. A defined connection between the rows of a table or the rows of two tables. A relationship is the internal representation of a referential constraint.

relative byte address (RBA). The offset of a data record or control interval from the beginning of the storage space that is allocated to the data set or file to which it belongs.

remigration. The process of returning to a current release of DB2 following a fallback to a previous release. This procedure constitutes another migration process.

remote. Any object that is maintained by a remote DB2 subsystem (that is, by a DB2 subsystem other than the local one). A remote view, for example, is a view that is maintained by a remote DB2 subsystem. Contrast with local.

remote attach request. A request by a remote location to attach to the local DB2 subsystem. Specifically, the request that is sent is an SNA Function Management Header 5.

remote subsystem. Any relational DBMS, except the local subsystem, with which the user or application can communicate. The subsystem need not be remote in any physical sense, and might even operate on the same processor under the same z/OS system.

reoptimization. The DB2 process of reconsidering the access path of an SQL statement at run time; during reoptimization, DB2 uses the values of host variables, parameter markers, or special registers.

REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized.

REORP. REORG pending.

repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point.

repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization.

replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal.

request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back.

requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility.

resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition.

resource allocation. The part of plan allocation that deals specifically with the database resources.

resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups.

resource definition online. A CICS feature that you use to define CICS resources online without assembling tables.

resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor.

resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
resource manager. (1) A function that is responsible for managing a particular resource and that guarantees the consistency of all updates made to recoverable resources within a logical unit of work. The resource that is being managed can be physical (for example, disk or main storage) or logical (for example, a particular type of system service). (2) A participant, in the execution of a two-phase commit, that has recoverable resources that could have been modified. The resource manager has access to a recovery log so that it can commit or roll back the effects of the logical unit of work to the recoverable resources.

restart pending (RESTP). A restrictive state of a page set or partition that indicates that restart (backout) work needs to be performed on the object. All access to the page set or partition is denied except for access by the:
v RECOVER POSTPONED command
v Automatic online backout (which DB2 invokes after restart if the system parameter LBACKOUT=AUTO)

RESTP. Restart pending.

result set. The set of rows that a stored procedure returns to a client application.

result set locator. A 4-byte value that DB2 uses to uniquely identify a query result set that a stored procedure returns.

result table. The set of rows that are specified by a SELECT statement.

retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure.

RID. Record identifier.

row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table.

ROWID. Row identifier.

row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes.

row lock. A lock on a single row of data.

rowset. A set of rows for which a cursor position is established.

rowset cursor. A cursor that is defined so that one or more rows can be returned as a rowset for a single FETCH statement, and the cursor is positioned on the set of rows that is fetched.

rowset-positioned access. The ability to retrieve multiple rows from a single FETCH statement.

row-positioned access. The ability to retrieve a single row from a single FETCH statement.

row trigger. A trigger that is defined with the trigger granularity FOR EACH ROW.

RRE. Residual recovery entry (in IMS).

RRSAF. Recoverable Resource Manager Services attachment facility.

RS. Read stability.

RTT. Resource translation table.

RURE. Restart URE.

schema. (1) The organization or structure of a database. (2) A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C:
   CREATE DISTINCT TYPE C.T ...

scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation.

scrollable cursor. A cursor that can be moved in both a forward and a backward direction.

SDWA. System diagnostic work area.

search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates.

secondary authorization ID. An authorization ID that has been associated with a primary authorization ID by an authorization exit routine.

secondary group buffer pool. For a duplexed group buffer pool, the structure that is used to back up changed pages that are written to the primary group buffer pool. No page registration or cross-invalidation occurs using the secondary group buffer pool. The z/OS equivalent is new structure.

secondary index. A nonpartitioning index on a partitioned table.

section. The segment of a plan or package that contains the executable structures for a single SQL statement. For most SQL statements, one section in the plan exists for each SQL statement in the source program. However, for cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE statements reference the same section because they each refer to the SELECT statement that is named in the DECLARE CURSOR statement. SQL statements such as COMMIT, ROLLBACK, and some SET statements do not use a section.

segment. A group of pages that holds rows of a single table. See also segmented table space.

segmented table space. A table space that is divided into equal-sized groups of pages called segments. Segments are assigned to tables so that rows of different tables are never stored in the same segment.

self-referencing constraint. A referential constraint that defines a relationship in which a table is a dependent of itself.

self-referencing table. A table with a self-referencing constraint.

sensitive cursor. A cursor that is sensitive to changes that are made to the database after the result table has been materialized.

sequence. A user-defined object that generates a sequence of numeric values according to user specifications.

sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets.

sequential prefetch. A mechanism that triggers consecutive asynchronous I/O operations. Pages are fetched before they are required, and several pages are read with a single I/O operation.

serial cursor. A cursor that can be moved only in a forward direction.

serialized profile. A Java object that contains SQL statements and descriptions of host variables. The SQLJ translator produces a serialized profile for each connection context.

server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications.

server-side programming. A method for adding DB2 data into dynamic Web pages.

service class. An eight-character identifier that is used by the z/OS Workload Manager to associate user performance goals with a particular DDF thread or stored procedure. A service class is also used to classify work on parallelism assistants.

service request block. A unit of work that is scheduled to execute in another address space.

session. A link between two nodes in a VTAM network.

session protocols. The available set of SNA communication requests and responses.

shared communications area (SCA). A coupling facility list structure that a DB2 data sharing group uses for inter-DB2 communication.

share lock. A lock that prevents concurrently executing application processes from changing data, but not from reading data. Contrast with exclusive lock.

shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character.
shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character.

sign-on. A request that is made on behalf of an individual CICS or IMS application process by an attachment facility to enable DB2 to verify that it is authorized to use DB2 resources.

simple page set. A nonpartitioned page set. A simple page set initially consists of a single data set (page set piece). If and when that data set is extended to 2 GB, another data set is created, and so on, up to a total of 32 data sets. DB2 considers the data sets to be a single contiguous linear address space containing a maximum of 64 GB. Data is stored in the next available location within this address space without regard to any partitioning scheme.

simple table space. A table space that is neither partitioned nor segmented.

single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set.

single-precision floating point number. A 32-bit approximate representation of a real number.

size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL term.

SMF. System Management Facilities.

SMP/E. System Modification Program/Extended.

SNA. Systems Network Architecture.

SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA).

socket. A callable TCP/IP programming interface that TCP/IP network applications use to communicate with remote TCP/IP partners.

sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.

source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.

source table. A table that can be a base table, a view, a table expression, or a user-defined table function.

source type. An existing type that DB2 uses to internally represent a distinct type.

space. A sequence of one or more blank characters.

special register. A storage area that DB2 defines for an application process to use for storing information that can be referenced in SQL statements. Examples of special registers are USER and CURRENT DATE.

specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used.

SPUFI. SQL Processor Using File Input.

SQL. Structured Query Language.

SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations.

SQLCA. SQL communication area.

SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements.

SQL connection. An association between an application process and a local or remote application server or database server.

SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table.

SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character.

SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter.

SQL ID. SQL authorization ID.

SQLJ. Structured Query Language (SQL) that is embedded in the Java programming language.

SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option.

SQL procedure. A user-written program that can be invoked with the SQL CALL statement. Contrast with external procedure.

SQL processing conversation. Any conversation that requires access of DB2 data, either through an application or by dynamic query requests.

SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program.

SQL return code. Either SQLCODE or SQLSTATE.

SQL routine. A user-defined function or stored procedure that is based on code that is written in SQL.

SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option.

SQL string delimiter. A symbol that is used to enclose an SQL string constant. The SQL string delimiter is the apostrophe ('), except in COBOL applications, where the user assigns the symbol, which is either an apostrophe or a double quotation mark (").

SRB. Service request block.

SSI. Subsystem interface (in z/OS).

SSM. Subsystem member (in IMS).

stand-alone. An attribute of a program that means that it is capable of executing separately from DB2, without using DB2 services.

star join. A method of joining a dimension column of a fact table to the key column of the corresponding dimension table. See also join, dimension, and star schema.

star schema. The combination of a fact table (which contains most of the data) and a number of dimension tables. See also star join, dimension, and dimension table.

statement handle. In DB2 ODBC, the data object that contains information about an SQL statement that is managed by DB2 ODBC. This includes information such as dynamic arguments, bindings for dynamic arguments and columns, cursor information, result values, and status information. Each statement handle is associated with the connection handle.

statement string. For a dynamic SQL statement, the character string form of the statement.

statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.

static cursor. A named control structure that does not change the size of the result table or the order of its rows after an application opens the cursor. Contrast with dynamic cursor.

static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change).

storage group. A named set of disks on which DB2 data can be stored.

stored procedure. A user-written application program that can be invoked through the use of the SQL CALL statement.

string. See character string or graphic string.

strong typing. A process that guarantees that only user-defined functions and operations that are defined on a distinct type can be applied to that type. For example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison.

structure. (1) A name that refers collectively to different types of DB2 objects, such as tables, databases, views, indexes, and table spaces. (2) A construct that uses z/OS to map and manage storage on a coupling facility. See also cache structure, list structure, or lock structure.

Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database.

structure owner. In relation to group buffer pools, the DB2 member that is responsible for the following activities:
v Coordinating rebuild, checkpoint, and damage assessment processing
v Monitoring the group buffer pool threshold and notifying castout owners when the threshold has been reached

subcomponent. A group of closely related DB2 modules that work together to provide a general function.

subject table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.

subpage. The unit into which a physical index page can be divided.
subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement.

subselect. That form of a query that does not include an ORDER BY clause, an UPDATE clause, or UNION operators.

substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation.

subsystem. A distinct instance of a relational database management system (RDBMS).

surrogate pair. A coded representation for a single character that consists of a sequence of two 16-bit code units, in which the first value of the pair is a high-surrogate code unit in the range U+D800 through U+DBFF, and the second value is a low-surrogate code unit in the range U+DC00 through U+DFFF. Surrogate pairs provide an extension mechanism for encoding 917 476 characters without requiring the use of 32-bit characters.

SVC dump. A dump that is issued when a z/OS or a DB2 functional recovery routine detects an error.

sync point. See commit point.

syncpoint tree. The tree of recovery managers and resource managers that are involved in a logical unit of work, starting with the recovery manager, that make the final commit decision.

synonym. In SQL, an alternative name for a table or view. Synonyms can be used to refer only to objects at the subsystem in which the synonym is defined.

syntactic character set. A set of 81 graphic characters that are registered in the IBM registry as character set 00640. This set was originally recommended to the programming language community to be used for syntactic purposes toward maximizing portability and interchangeability across systems and country boundaries. It is contained in most of the primary registered character sets, with a few exceptions. See also invariant character set.

Sysplex. See Parallel Sysplex.

Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism.

system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.

system agent. A work request that DB2 creates internally such as prefetch processing, deferred writes, and service tasks.

system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin.

system diagnostic work area (SDWA). The data that is recorded in a SYS1.LOGREC entry that describes a program or hardware error.

system-directed connection. A connection that a relational DBMS manages by processing SQL statements with three-part names.

System Modification Program/Extended (SMP/E). A z/OS tool for making software changes in programming systems (such as DB2) and for controlling those changes.

Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information through and controlling the configuration and operation of networks.

SYS1.DUMPxx data set. A data set that contains a system dump (in z/OS).

SYS1.LOGREC. A service aid that contains important information about program and hardware errors (in z/OS).

T

table. A named data object consisting of a specific number of columns and some number of unordered rows. See also base table or temporary table.

table-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are defined in the CREATE TABLE statement. Partition limits are saved in the LIMITKEY_INTERNAL column of the SYSIBM.SYSTABLEPART catalog table.

table function. A function that receives a set of arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect.

table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, in the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table.

table space. A page set that is used to store the records in one or more tables.

table space set. A set of table spaces and partitions that should be recovered together for one of these reasons:
v Each of them contains a table that is a parent or descendent of a table in one of the others.
v The set contains a base table and associated auxiliary tables.
A table space set can contain both types of relationships.

task control block (TCB). A z/OS control block that is used to communicate information about tasks within an address space that are connected to DB2. See also address space connection.

TB. Terabyte (1 099 511 627 776 bytes).

TCB. Task control block (in z/OS).

TCP/IP. A network communication protocol that computer systems use to exchange information across telecommunication links.

TCP/IP port. A 2-byte value that identifies an end user or a TCP/IP network application within a TCP/IP host.

template. A DB2 utilities output data set descriptor that is used for dynamic allocation. A template is defined by the TEMPLATE utility control statement.

temporary table. A table that holds temporary data. Temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two types of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table.

Terminal Monitor Program (TMP). A program that provides an interface between terminal users and command processors and has access to many system services (in z/OS).

thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread.

threadsafe. A characteristic of code that allows multithreading both by providing private storage areas for each thread, and by properly serializing shared (global) storage areas.

three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period.

time. A three-part value that designates a time of day in hours, minutes, and seconds.

time duration. A decimal integer that represents a number of hours, minutes, and seconds.

timeout. Abnormal termination of either the DB2 subsystem or of an application because of the unavailability of resources. Installation specifications are set to determine both the amount of time DB2 is to wait for IRLM services after starting, and the amount of time IRLM is to wait if a resource that an application requests is unavailable. If either of these time specifications is exceeded, a timeout is declared.

Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals.

timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds.

TMP. Terminal Monitor Program.

to-do. A state of a unit of recovery that indicates that the unit of recovery's changes to recoverable DB2 resources are indoubt and must either be applied to the disk media or backed out, as determined by the commit coordinator.

trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data.

transaction lock. A lock that is used to control concurrent execution of SQL statements.

transaction program name. In SNA LU 6.2 conversations, the name of the program at the remote logical unit that is to be the other half of the conversation.

transient XML data type. A data type for XML values that exists only during query processing.

transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state.

transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.

tree structure. A data structure that represents entities in nodes, with at most one parent node for each node, and with only one root node.

trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table.
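A trigger definition can be pictured with a short example. The following statement is for illustration only; the tables EMP and COMPANY_STATS and the column NBEMP are assumed here and are not objects that this book defines:

   CREATE TRIGGER NEWHIRE
     AFTER INSERT ON EMP
     FOR EACH ROW MODE DB2SQL         -- row trigger granularity
     UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;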
trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements.

trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event.

trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. A trigger body is also called triggered SQL statements.

trigger cascading. The process that occurs when the triggered action of a trigger causes the activation of another trigger.

triggered action. The SQL logic that is performed when a trigger is activated. The triggered action consists of an optional triggered action condition and a set of triggered SQL statements that are executed only if the condition evaluates to true.

triggered action condition. An optional part of the triggered action. This Boolean condition appears as a WHEN clause and specifies a condition that DB2 evaluates to determine if the triggered SQL statements should be executed.

triggered SQL statements. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. Triggered SQL statements are also called the trigger body.

trigger granularity. A characteristic of a trigger, which determines whether the trigger is activated:
v Only once for the triggering SQL statement
v Once for each row that the SQL statement modifies

triggering event. The specified operation in a trigger definition that causes the activation of that trigger. The triggering event is comprised of a triggering operation (INSERT, UPDATE, or DELETE) and a subject table on which the operation is performed.

triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table.

trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated.

TSO. Time-Sharing Option.

TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility.

typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form:
   CAST(? AS data-type)

type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 8, type 1 indexes are no longer supported.

type 2 indexes. Indexes that are created on a release of DB2 after Version 7 or that are specified as type 2 indexes in Version 4 or later.

U

UCS-2. Universal Character Set, coded in 2 octets, which means that characters are represented in 16-bits per character.

UDF. User-defined function.

UDT. User-defined data type. In DB2 UDB for z/OS, the term distinct type is used instead of user-defined data type. See distinct type.

uncommitted read (UR). The isolation level that allows an application to read uncommitted data.

underlying view. The view on which another view is directly or indirectly defined.

undo. A state of a unit of recovery that indicates that the changes that the unit of recovery made to recoverable DB2 resources must be backed out.

Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters that are contained in the many scripts that are used throughout the world.

uniform resource locator (URL). A Web address, which offers a way of naming and locating specific items on the Web.

union. An SQL operation that combines the results of two SELECT statements. Unions are often used to merge lists of values that are obtained from several tables.

unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same.

unique index. An index that ensures that no identical key values are stored in a column or a set of columns in a table.

unit of recovery. A recoverable sequence of operations within a single resource manager, such as an instance of DB2. Contrast with unit of work.

unit of recovery identifier (URID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery.

unit of work. A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multisite update operation, a single unit of work can include several units of recovery. Contrast with unit of recovery.

Universal Unique Identifier (UUID). An identifier that is immutable and unique across time and space (in z/OS).

unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2.

untyped parameter marker. A parameter marker that is specified without its target data type. It has the form of a single question mark (?).

updatability. The ability of a cursor to perform positioned updates and deletes. The updatability of a cursor can be influenced by the SELECT statement and the cursor sensitivity option that is specified on the DECLARE CURSOR statement.

update hole. The location on which a cursor is positioned when a row in a result table is fetched again and the new values no longer satisfy the search condition. DB2 marks a row in the result table as an update hole when an update to the corresponding row in the database causes that row to no longer qualify for the result table.

update trigger. A trigger that is defined with the triggering SQL operation UPDATE.

upstream. The node in the syncpoint tree that is responsible, in addition to other recovery or resource managers, for coordinating the execution of a two-phase commit.

UR. Uncommitted read.

URE. Unit of recovery element.

URID. Unit of recovery identifier.

URL. Uniform resource locator.

user-defined data type (UDT). See distinct type.

user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function.

user view. In logical data modeling, a model or representation of critical information that the business requires.

UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 UDB for z/OS supports UTF-8 in mixed data fields.

UTF-16. Unicode Transformation Format, 16-bit encoding form, which is designed to provide code values for over a million characters and a superset of UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 UDB for z/OS supports UTF-16 in graphic data fields.

UUID. Universal Unique Identifier.

V

value. The smallest unit of data that is manipulated in SQL.

variable. A data element that specifies a value that can be changed. A COBOL elementary data item is an example of a variable. Contrast with constant.

variant function. See nondeterministic function.

varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string.

version. A member of a set of similar programs, DBRMs, packages, or LOBs.
   A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token).
   A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version.
   A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM.
   A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.

view check option. An option that specifies whether every row that is inserted or updated through a view must conform to the definition of that view. A view check option can be specified with the WITH CASCADED
CHECK OPTION, WITH CHECK OPTION, or WITH LOCAL CHECK OPTION clauses of the CREATE VIEW statement.

Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on disk devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number (in z/OS).

Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network (in z/OS).

volatile table. A table for which SQL operations choose index access whenever possible.

VSAM. Virtual Storage Access Method.

VTAM. Virtual Telecommunication Access Method (in z/OS).

W

warm start. The normal DB2 restart process, which involves reading and processing log records so that data that is under the control of DB2 is consistent. Contrast with cold start.

X

XCF. See cross-system coupling facility.

XML node. The smallest unit of valid, complete structure in a document. For example, a node can represent an element, an attribute, or a text string.

XML publishing functions. Functions that return XML values from SQL values.

X/Open. An independent, worldwide open systems organization that is supported by most of the world's largest information systems suppliers, user organizations, and software companies. X/Open's goal is to increase the portability of applications by combining existing and emerging standards.

XRF. Extended recovery facility.

Z

z/OS. An operating system for the eServer™ product line that supports 64-bit real and virtual storage.

z/OS Distributed Computing Environment (z/OS DCE). A set of technologies that are provided by the Open Software Foundation to implement distributed computing.
Bibliography
v z/OS DFSMSdss Storage Administration Reference, SC35-0424
v z/OS DFSMShsm Managing Your Own Data, SC35-0420
v z/OS DFSMSdfp: Using DFSMSdfp in the z/OS Environment, SC26-7473
v z/OS DFSMSdfp Diagnosis Reference, GY27-7618
v z/OS DFSMS: Implementing System-Managed Storage, SC27-7407
v z/OS DFSMS: Macro Instructions for Data Sets, SC26-7408
v z/OS DFSMS: Managing Catalogs, SC26-7409
v z/OS DFSMS: Program Management, SA22-7643
v z/OS MVS Program Management: Advanced Facilities, SA22-7644
v z/OS DFSMSdfp Storage Administration Reference, SC26-7402
v z/OS DFSMS: Using Data Sets, SC26-7410
v DFSMS/MVS: Using Advanced Services, SC26-7400
v DFSMS/MVS: Utilities, SC26-7414

eServer zSeries®
v IBM eServer zSeries Processor Resource/System Manager Planning Guide, SB10-7033

Fortran: VS Fortran
v VS Fortran Version 2: Language and Library Reference, SC26-4221
v VS Fortran Version 2: Programming Guide for CMS and MVS, SC26-4222

High Level Assembler
v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940
v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941

ICSF
v z/OS ICSF Overview, SA22-7519
v Integrated Cryptographic Service Facility Administrator's Guide, SA22-7521

IMS Version 8
v IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189

TotalStorage® Enterprise Storage Server
v RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5680
v Enterprise Storage Server Introduction and Planning, GC26-7444
v IBM RAMAC Virtual Array, SG24-6424

Unicode
v z/OS Support for Unicode: Using Conversion Services, SA22-7649

Information about Unicode, the Unicode consortium, the Unicode standard, and standards conformance requirements is available at www.unicode.org

VTAM
v Planning for NetView, NCP, and VTAM, SC31-8063
v VTAM for MVS/ESA Diagnosis, LY43-0078
v VTAM for MVS/ESA Messages and Codes, GC31-8369
v VTAM for MVS/ESA Network Implementation Guide, SC31-8370
v VTAM for MVS/ESA Operation, SC31-8372
v VTAM for MVS/ESA Programming, SC31-8373
v VTAM for MVS/ESA Programming for LU 6.2, SC31-8374
v VTAM for MVS/ESA Resource Definition Reference, SC31-8377

WebSphere® family
v WebSphere MQ Integrator Broker: Administration Guide, SC34-6171
v WebSphere MQ Integrator Broker for z/OS: Customization and Administration Guide, SC34-6175
v WebSphere MQ Integrator Broker: Introduction and Planning, GC34-5599
v WebSphere MQ Integrator Broker: Using the Control Center, SC34-6168

z/Architecture™
v z/Architecture Principles of Operation, SA22-7832

z/OS
v z/OS C/C++ Programming Guide, SC09-4765
v z/OS C/C++ Run-Time Library Reference, SA22-7821
v z/OS C/C++ User's Guide, SC09-4767
v z/OS Communications Server: IP Configuration Guide, SC31-8875
v z/OS DCE Administration Guide, SC24-5904
v z/OS DCE Introduction, GC24-5911
v z/OS DCE Messages and Codes, SC24-5912
v z/OS Information Roadmap, SA22-7500
v z/OS Introduction and Release Guide, GA22-7502
v z/OS JES2 Initialization and Tuning Guide, SA22-7532
v z/OS JES3 Initialization and Tuning Guide, SA22-7549
v z/OS Language Environment Concepts Guide, SA22-7567
v z/OS Language Environment Customization, SA22-7564
v z/OS Language Environment Debugging Guide, GA22-7560
v z/OS Language Environment Programming Guide, SA22-7561
v z/OS Language Environment Programming Reference, SA22-7562
v z/OS Managed System Infrastructure for Setup User's Guide, SC33-7985
v z/OS MVS Diagnosis: Procedures, GA22-7587
v z/OS MVS Diagnosis: Reference, GA22-7588
v z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589
v z/OS MVS Initialization and Tuning Guide, SA22-7591
v z/OS MVS Initialization and Tuning Reference, SA22-7592
v z/OS MVS Installation Exits, SA22-7593
v z/OS MVS JCL Reference, SA22-7597
v z/OS MVS JCL User's Guide, SA22-7598
v z/OS MVS Planning: Global Resource Serialization, SA22-7600
v z/OS MVS Planning: Operations, SA22-7601
v z/OS MVS Planning: Workload Management, SA22-7602
v z/OS MVS Programming: Assembler Services Guide, SA22-7605
v z/OS MVS Programming: Assembler Services Reference, Volumes 1 and 2, SA22-7606 and SA22-7607
v z/OS MVS Programming: Authorized Assembler Services Guide, SA22-7608
v z/OS MVS Programming: Authorized Assembler Services Reference Volumes 1-4, SA22-7609, SA22-7610, SA22-7611, and SA22-7612
v z/OS MVS Programming: Callable Services for High-Level Languages, SA22-7613
v z/OS MVS Programming: Extended Addressability Guide, SA22-7614
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
Bibliography 925
926 Utility Guide and Reference
Index
Numerics AUXERROR REPORT, option of CHECK DATA
utility 68
32K
AUXERROR, option of CHECK DATA utility 58
option of DSN1COMP utility 699
auxiliary CHECK-pending (ACHKP) status
option of DSN1COPY utility 710
description 831
option of DSN1PRNT utility 748
resetting
for a LOB table space 831
for a table space 831
A set by CHECK DATA utility 58
abend, forcing 156 auxiliary CHECK-pending (ACHKP) status, CHECK
ABEND, option of DIAGNOSE utility 154 DATA utility 59, 69
abnormal-termination, option of TEMPLATE auxiliary index, reorganizing after loading data 258
statement 582 auxiliary warning (AUXW) status
Access Method Services, new active log definition 671 description 832
access path, catalog table columns used to select 556 resetting 832
ACCESSPATH set by CHECK DATA utility 58, 68
option of MODIFY STATISTICS utility 297 AUXW
option of REORG TABLESPACE utility 432 See auxiliary warning (AUXW) status
ACHKP AVGKEYLEN column
See auxiliary CHECK-pending (ACHKP) status SYSINDEXES catalog table 562
ACTION, option of DSN1SDMP utility 760 SYSINDEXES_HIST catalog table 562
ACTION2 SYSINDEXPART catalog table 562
option of DSN1SDMP utility 761 SYSINDEXPART_HIST catalog table 565
active log AVGROWLEN column
adding 671 SYSTABLEPART catalog table 559
data set with I/O error, deleting 673 SYSTABLEPART_HIST catalog table 562
defining in BSDS 671 SYSTABLES catalog table 559
deleting from BSDS 671 SYSTABLES_HIST catalog table 559
enlarging 672 SYSTABLESPACE catalog table 559
recording from BSDS 671
active status, of a utility 39
AFTER, option of DSN1SDMP utility 760
AFTER2, option of DSN1SDMP utility 762
B
BACKOUT, option of DSNJU003 utility 667
AGE option of MODIFY STATISTICS utility 297
BACKUP SYSTEM utility
ALIAS, option of DSNJU003 utility 667
authorization 47
ALL
compatibility 50
option of LISTDEF utility 170
data sets needed 49
option of REBUILD INDEX utility 323
description 47
option of RUNSTATS utility 545
examples
ALLDUMPS, option of DIAGNOSE utility 153
data-only backup 50
ANCHOR, option of DSN1CHKR utility 692
full backup 50
application package
execution phases 47
See package
history, printing 679
archive log
instructions 48
adding to BSDS 673
option descriptions 48
deleting from BSDS 673
output 47
ARCHLOG, option of REPORT utility 513
restarting 50
ASCII
syntax diagram 48
option of LOAD utility 198
terminating 50
option of UNLOAD utility 601
using the DISPLAY UTILITY command 49
authorization ID
BASE, option of LISTDEF utility 170
naming convention xii
basic predicate 424, 622
primary 3
BETWEEN predicate 425, 622
secondary 3
BIT
SQL 3
option of LOAD utility for CHAR 212, 213
AUXERROR INVALIDATE, option of CHECK DATA
BLOB
utility 68
option of LOAD utility 220
option of UNLOAD utility 620
CHECKPAGE, option of COPY utility 104 compatibility (continued)
checkpoint queue RUNSTATS utility 553
printing contents 679 STOSPACE utility 572
updating 669 TEMPLATE utility 589
CHECKPT, option of DSNJU003 utility 669 UNLOAD utility 640
CHKP utilities access description 40
See CHECK-pending (CHKP) status compression
CHKPTRBA, option of DSNJU003 utility 666 data, UNLOAD utility description 639
CLOB estimating disk savings 699
option of LOAD utility 220 compression dictionary, building 453
option of UNLOAD utility 620 concurrency
CLUSTERING column of SYSINDEXES catalog table, BACKUP SYSTEM utility 50
use by RUNSTATS 558 utilities access description 40
CLUSTERRATIOF column, SYSINDEXES catalog utility jobs 40
table 558 with real-time statistics 873
COLCARDF column concurrent copies
SYSCOLUMNS catalog table 557 COPYTOCOPY utility restriction 133
cold start invoking 104
example, creating a conditional restart control making 114
record 673 CONCURRENT, option of COPY utility 104, 114
specifying for conditional restart 663 conditional restart control record
COLDEL creating 665, 673
option of LOAD utility 197 reading 689
option of UNLOAD utility 603 sample 689
COLGROUP, option of RUNSTATS utility 540 status printed by print log map utility 679
COLGROUPCOLNO column, SYSCOLDIST catalog connection-name, naming convention xii
table 558 CONSTANT, option of UNLOAD utility 619
COLUMN constraint violations, checking 55
option of LOAD STATISTICS 192 CONTINUE, option of RECOVER utility 353
option of RUNSTATS utility 539 CONTINUEIF, option of LOAD utility 201
COLVALUE column, SYSCOLDIST catalog table 558 continuous operation, recovering an error range 354
COMMAND, option of DSN1SDMP utility 761 control interval
commit point LOAD REPLACE, effect of 228, 259
DSNU command 32 RECOVER utility, effect of 371
REPAIR utility LOCATE statement 489 REORG TABLESPACE, effect of 470
restarting after out-of-space condition 45 control statement
comparison operators 424 See utility control statement
compatibility CONTROL, option of DSNU CLIST command 30
CHECK DATA utility 70 conversion of data, LOAD utility 243
CHECK INDEX utility 85 copies
CHECK LOB utility 93 merging 275
COPY utility 121 online, merging 280
COPYTOCOPY utility 145 copy pool 47
declared temporary table 4 COPY statement, using more than one 114
DEFINE NO objects 4 COPY utility
DIAGNOSE utility 156 adding conditional code 116
EXEC SQL utility 161 allowing other programs to access data 105
LISTDEF utility 177 authorization 96
LOAD utility 253 block size, specifying 107
MERGECOPY utility 282 catalog table, copying 109
MODIFY RECOVERY utility 290 checking pages 104
MODIFY STATISTICS utility 299 claims and drains 120
OPTIONS utility 307 compatibility 120, 121
QUIESCE utility 316 consistency 112
REBUILD INDEX utility 334 COPY-pending status, resetting 95
RECOVER utility 370 copying a list of objects 112
REORG INDEX utility 398 copying segmented table spaces 114
REORG TABLESPACE utility 464 data sets needed 106
REPAIR utility 503 description 95
REPORT utility 517 directory, copying 109
RESTORE SYSTEM utility 532 effect on real-time statistics 870
COPYTOCOPY utility (continued) data sets (continued)
restrictions 133 CHECK INDEX utility 77
syntax diagram 134 CHECK LOB utility 91, 92
SYSIBM.SYSCOPY records, updating 142 concatenating 24
tape mounts, retaining 143 COPY utility 106
terminating 144 copying table space in separate jobs 112
using TEMPLATE 142 COPYTOCOPY utility 139
correlation ID, naming convention xii definitions, changing during REORG 452
COUNT DIAGNOSE utility 155
option of LOAD STATISTICS 193 disposition
option of REBUILD INDEX utility 325 defaults for dynamically allocated data sets 583
option of REORG INDEX utility 386 defaults for dynamically allocated data sets on
option of RUNSTATS utility 540 RESTART 583
COUNT option disposition, controlling 24
option of RUNSTATS utility 541, 546 DSNJCNVB utility 655
CREATE option of DSNJU003 utility 665 for copies, naming 111
CRESTART, option of DSNJU003 utility 665 input, using 23
cross loader function 159 LOAD utility 223
CSRONLY, option of DSNJU003 utility 667 MERGECOPY utility 278
CURRENT DATE, incrementing and decrementing MODIFY RECOVERY utility 288
value 626 MODIFY STATISTICS utility 298
CURRENT option of REPORT utility 513 naming convention xiii
current restart, description 43 output, using 24
CURRENTCOPYONLY option of RECOVER utility 346 QUIESCE utility 313
cursor REBUILD INDEX utility 327, 329
naming convention xiii RECOVER utility 350
CYL, option of TEMPLATE statement 584 recovering, partition 352
REORG INDEX utility 389
REORG TABLESPACE utility 438, 445
D REPAIR utility 498
data REPORT utility 514
adding 230 RESTORE SYSTEM utility 531
compressing 236 RUNSTATS utility 548
converting 237 security 25
converting with LOAD utility 243 space parameter, changing 394
deleting 230 space parameter, changing during REORG 452
DATA specifying 21
option of CHECK DATA utility 57 STOSPACE utility 570
option of LOAD utility 188 UNLOAD utility 627
option of REPAIR DUMP 496 use by utilities 21, 22, 23
option of REPAIR REPLACE 493 data sharing
option of REPAIR VERIFY 492 backing up group 47
option of UNLOAD utility 597 real-time statistics 872
data compression restoring data 531
dictionary running online utilities 41
building 236, 453 data type, specifying with LOAD utility 212
number of records needed 236 data-only backup
using again 236 example 50
LOAD utility explanation 48
description 236 database
KEEPDICTIONARY option 194, 236 limits 771
REORG TABLESPACE utility, KEEPDICTIONARY naming convention xiii
option 429 DATABASE
DATA ONLY, option of BACKUP SYSTEM utility 48 option of LISTDEF utility 168
data set option of REPAIR utility 497
name format in ICF catalog 103 DATACLAS, option of TEMPLATE statement 581
name limitations 587 DATAONLY, option of DSN1LOGP utility 729
data sets DataRefresher 237
BACKUP SYSTEM utility 49 DATAWKnn
change log inventory utility (DSNJU003) 670 data set of REORG utility 21
CHECK DATA utility 64 purpose 21
DIAGNOSE utility (continued) DSN1CHKR utility (continued)
examples (continued) DSN1COPY utility, running before 693
forcing an abend 157 dump format, printing 691
service level, finding 156 environment 693
suspending utility execution 158 examples
TYPE 157 table space 696
WAIT 158 temporary data set 694
forcing an abend 156 formatting table space pages on output 691
instructions 156 hash value, specifying for DBID 691
option descriptions 153 option descriptions 691
restarting 156 output 697
syntax diagram 151 pointers, following 692
terminating 156 restrictions 693
WAIT statement running 693
description 154 syntax diagram 691
syntax diagram 152 SYSPRINT DD name 693
DIAGNOSE, option of REPAIR utility 497 SYSUT1 DD name 693
directory valid table spaces 694
integrity, verifying 691 DSN1COMP utility
MERGECOPY utility, restrictions 281 authorization required 702
order of recovering objects 355 control statement 702
discard data set, specifying DD statement for LOAD data set size, specifying 700
utility 200 data sets required 702
DISCARD, option of REORG TABLESPACE utility 434 DD statements
DISCARDDN SYSPRINT 702
option of LOAD PART 207 SYSUT1 702
option of LOAD utility 200 description 699
option of REORG TABLESPACE utility 433 environment 702
DISCARDS, option of LOAD utility 200 examples
DISCDSN, option of DSNU CLIST command 31 free space 705
DISP, option of TEMPLATE statement 582 FREEPAGE 705
DISPLAY DATABASE command, displaying range of full image copy 704
pages in error 354 FULLCOPY 704
DISPLAY UTILITY command FULLCOPY 704
using with BACKUP SYSTEM for data sharing NUMPARTS 705
group 49 PCTFREE 705
DISPLAY UTILITY command REORG 705
description 39 ROWLIMIT 705
using with RESTORE SYSTEM utility on a data free pages, specifying 700
sharing group 531 free space
DISPLAY, option of DIAGNOSE utility 153 including in compression calculations 704
displaying status of DB2 utilities 39 specifying 700
disposition, data sets, controlling 24 FREEPAGE 704
DL/I, loading data 237 full image copy as input, specifying 701
DOUBLE, option of UNLOAD utility 618 identical data rows 704
DRAIN LARGE data sets, specifying 700
option of REORG INDEX utility 383 maximum number of rows to evaluate,
option of REORG TABLESPACE utility 419 specifying 701
DRAIN_WAIT message DSN1941 706
option of CHECK INDEX utility 76 option descriptions 699
option of REORG INDEX utility 382 output
option of REORG TABLESPACE utility 417 interpreting 706
DROP, option of REPAIR utility 496 sample 704, 706
DSN, option of TEMPLATE statement 578 page size of input data set, specifying 699
DSN1CHKR utility partitions, specifying number 700
anchor point, mapping 692 PCTFREE 704
authorization 693 prerequisite actions 701
concurrent copy, compatibility 694 recommendations 703
control statement 693 REORG 703
data sets needed 693 running 702, 703
description 691 savings comparable to REORG 701
DSN1LOGP utility (continued) DSN1SDMP utility (continued)
table and index identifiers, locating 737 option descriptions 758
type of log records, limiting report by 732 output 768
unit of recovery identifier, using to limit report 731 required data sets 762
value in log record, limiting report by 733 running 762
DSN1PRNT utility selection criteria, specifying 758
authorization required 754 syntax diagram 757
comparison with DSN1COPY utility 755 trace destination 758
control statement 754 traces
data set size, determining 755 modifying 764
data set size, specifying 750 stopping 764
data sets required 754 DSN8G810, updating space information 572
DD statements DSN8S81E table space, finding information about space
SYSPRINT 754 utilization 571
SYSUT1 754 DSNACCAV stored procedure
description 747 description 798
environment 754 option descriptions 799
examples output 802
printing a data set in hexadecimal format 755 sample JCL 801
printing a nonpartitioning index 756 syntax diagram 799
printing a partitioned data set 756 DSNACCOR stored procedure
printing a single page of an image copy 756 description 808
filtering pages by value 752 example call 821
formatting output 753 option descriptions 810
full image copy, specifying 749 output 826
incremental copy, specifying 749 syntax diagram 809
inline copy, specifying 749 DSNACCQC stored procedure
LARGE data set, specifying 749 description 790
LOB table space, specifying 749 option descriptions 792
number of partitions, specifying 751 output 796
option descriptions 748 sample JCL 796
output 756 syntax diagram 791
page size, determining 755 DSNAME, option of DSNJU003 utility 662
page size, specifying 749 DSNDB01.DBD01
piece size, specifying 750 copying restrictions 110
processing encrypted data 755 recovery information 516
recommendations 755 DSNDB01.SYSCOPYs
running 754 copying restrictions 110
syntax diagram 748 DSNDB01.SYSUTILX
SYSUT1 data set, printing on SYSPRINT data copying restrictions 110
set 751 recovery information 516
DSN1SDMP utility DSNDB06.SYSCOPY
action, specifying 760, 761 recovery information 516
authorization required 762 DSNJCNVB utility
buffers, assigning 763 authorization required 655
control statement 762 control statement 655
DD statements data sets used 655
SDMPIN 762 description 655
SDMPPRNT 762 dual BSDSs, converting 655
SDMPTRAC 763 environment 655
SYSABEND 763 example 656
SYSTSIN 763 JOBCAT DD name 655
description 757 output 656
dump, generating 764 prerequisite actions 655
environment 762 running 656
examples STEPCAT DD name 655
abend 765, 766 SYSPRINT DD name 655
dump 766 SYSUT1 DD name 655
second trace 767 SYSUT2 DD name 655
skeleton JCL 764 DSNJLOGF utility
instructions 763 control statement 657
DSNU CLIST command (continued) encryption
option descriptions 29 DSN1PRNT utility effect on 755
output 33 REORG TABLESPACE utility effect on 436
syntax 29 REPAIR utility effect on 498
DSNUM UNLOAD utility effect on 627
option of COPY utility 102 utilities effect on 5
option of COPYTOCOPY utility 136 END, option of DIAGNOSE utility 155
option of MERGECOPY utility 277 ENDLRSN, option of DSNJU003 utility 664
option of MODIFY RECOVERY utility 287, 289 ENDRBA, option of DSNJU003 utility 663
option of RECOVER utility 345 ENDTIME, option of DSNJU003 utility 665
option of REPORT utility 512 ENFORCE, option of LOAD utility 199, 235
DSNUM column ERRDDN
SYSINDEXPART catalog table, use by option of CHECK DATA utility 60
RUNSTATS 562 option of LOAD utility 199
SYSTABLEPART catalog table 560 error data set
DSNUPROC JCL procedure CHECK DATA utility 60, 65
description 35 error range recovery 354
option descriptions 35 ERROR RANGE, option of RECOVER utility 348
sample 36 error, calculating, LOAD utility 226
syntax 35 ESA data compression, estimating disk savings 699
DSNUTILS stored procedure ESCAPE clause 428, 623
authorization required 778, 788 EVENT, option of OPTIONS statement 305
data sets 778 exception table
description 777 columns 62
option descriptions 780, 789 creating 62
output 787 definition 65
sample JCL 787, 790 example 63
syntax diagram 780, 789 with LOB columns 63
DSNUTILU stored procedure EXCEPTIONS
data sets 788 option of CHECK DATA utility 60
description 787 option of CHECK LOB utility 90
output 790 exceptions, specifying the maximum number
DSSIZE CHECK DATA utility 60
option of DSN1COMP utility 700 CHECK LOB utility 90
option of DSN1COPY utility 711 EXCLUDE
option of DSN1PRNT utility 750 option of LISTDEF 171
DSSPRINT EXCLUDE, option of LISTDEF utility 165
data set of COPY utility 21 EXEC SQL utility
purpose 21 authorization 159
DUMP compatibility 161
option of DSN1CHKR utility 691 cursors 160
statement of REPAIR utility 494 declare cursor statement
used in LOCATE block 488 description 160
syntax diagram 160
description 159
E dynamic SQL statements 160
EBCDIC examples
option of LOAD utility 198 creating a table 161
option of UNLOAD utility 601 declaring a cursor 161
edit routine inserting rows into a table 161
LOAD utility 183 using a mapping table 479
REORG TABLESPACE utility 421 execution phase 159
EDIT, option of DSNU CLIST command 31 option descriptions 160
embedded semicolon output 159
embedded 844 restarting 160
encrypted data syntax diagram 159
running DSN1PRNT on 755 terminating 160
running REORG TABLESPACE on 436 EXEC statement
running REPAIR on 498 built by CLIST 34
running UNLOAD on 627 description 38
running utilities on 5
H INDDN
option of LOAD PART 207
HALT, option of OPTIONS statement 305
option of LOAD utility 188
HASH, option of DSN1CHKR utility 691, 692
INDREFLIMIT option of REORG TABLESPACE
HEADER, option of UNLOAD utility 608
utility 420
hexadecimal-constant, naming convention xiii
index
hexadecimal-string, naming convention xiii
building during LOAD 245
HIGH2KEY column, SYSCOLUMNS catalog table 557
checking 73, 258
HIGHRBA, option of DSNJU003 utility 669
determining when to reorganize 392
HISTORY
naming convention xiii
option of LOAD STATISTICS 193
organization 392
option of REBUILD INDEX utility 326
REBUILD INDEX utility 321
option of REORG INDEX utility 386
rebuilding in parallel 330
option of REORG TABLESPACE utility 432
rebuilt, recoverability 334
option of RUNSTATS utility 542, 547
version numbers, recycling 259
INDEX
option of CHECK INDEX utility 74
I option of COPY utility 99
ICBACKUP column in SYSIBM.SYSCOPY 110 option of COPYTOCOPY utility 136
ICOPY status option of LISTDEF utility 169
See informational COPY-pending status option of MODIFY STATISTICS utility 296
ICUNIT column option of REORG INDEX utility 379
SYSIBM.SYSCOPY 110 option of REPAIR utility
identity columns, loading 230 LEVELID statement 485
IGNOREFIELDS, option of LOAD utility 205 LOCATE statement 491
image copy SET statement 487
cataloging 108, 140 option of REPORT utility 512
conditional, specifying 115 option of RUNSTATS utility 540, 545
COPY utility 95 INDEX ALL, option of REPORT utility 512
copying 133 INDEX NONE, option of REPORT utility 512
COPYTOCOPY utility 133 index partitions, rebuilding 329
data set, finding size 107 index space
deleting all 289 recovering 321
full index space status, resetting 500
description 95 INDEX
making 101, 108 option of RECOVER utility 345
incremental option of REORG TABLESPACE utility 431
conditions 111 INDEXSPACE
copying 141 option of COPY utility 99
description 95 option of COPYTOCOPY utility 136
making 109 option of LISTDEF utility 168
merging 275 option of MODIFY STATISTICS utility 296
performance advantage 110 option of REBUILD INDEX utility 323
list of objects 112 option of RECOVER utility 344
making after loading a table 255 option of REORG INDEX utility 379
making in parallel 95 option of REPAIR utility
multiple, making 110 SET statement 487
obtaining information about 116 option of REPAIR utility for LEVELID statement 485
putting on tape 118, 143 option of REPORT utility 511
IMS DPROP 237 INDEXSPACES, option of LISTDEF utility 166
IN predicate 427, 623 INDEXSPACESTATS
in-abort state 667 contents 857
in-commit state 666 real-time statistics table 849
INCLUDE, option of LISTDEF utility 165, 171 indoubt state 666
inconsistent data indicator, resetting 493 INDSN, option of DSNU CLIST command 30
INCRCOPY inflight state 667
option of DSN1COPY utility 711 informational COPY-pending (ICOPY) status
option of DSN1PRNT utility 749 COPY utility 101
INCURSOR description 834
option of LOAD PART 207 resetting 95, 117, 834
option of LOAD utility 188 informational referential constraints, LOAD utility 183
LISTDEF utility (continued) LOAD utility (continued)
description 163 CHECK DATA (continued)
examples data sets used 257
all objects in a database 178 error data sets 257
COPY YES 179 exception tables 257
excluding objects 179 running 256
including all but one partition 179 sort data sets 257
including COPY YES indexes 179 compatibility 253
library data set 179 compressing data 236
lists that reference other lists 179 concatenating records 201
matching name patterns 178 concurrent access to data, setting 189
partition-level lists 178 cursors
pattern-matching characters 178 identifying 188, 207
related objects, including 182 preparing to use 222
RI option 182 data conversion 243
simple list 178 data sets needed 223
using LIST 176 data type compatibility 243
using with QUIESCE utility 179 data type, specifying 212
execution phases 163 data with referential constraints 234
indexes, specifying 167 default values, setting criteria for 220
instructions 171 defects, calculating number 226
LOB indicator keywords 170 DEFINE NO table space, consequences 232
LOB objects, including 170 deleting all data 230
option descriptions 165 delimited file format
OPTIONS, using 177 acceptable data types 233
output 163 restrictions 232
partitions, specifying 169 specifying 197
pattern-matching expressions delimited files 232
characters 173 delimiters 232
description 173 description 183
restriction 173 DFSORT data sets, device type 201
using 173 discard data set
previewing 175 declaring 200
restarting 177 maximum number of records 200
restrictions 163, 173 discarded rows, inline statistics 250
statement library 175 duplicate keys, effects 227
syntax diagram 164 dynamic SQL 238
TEMPLATE, using 177 effect on real-time statistics 864
terminating 177 ENFORCE NO
LISTDEFDD, option of OPTIONS statement 305 actions to take 257
lists consequences 235
objects enforcing constraints 199
excluding 171 error work data set, specifying 200
including 171 error, calculating 226
previewing 175 examples
processing order 176 CHECK DATA 257
using with other utilities 175 CHECK DATA after LOAD RESUME 258
LOAD INTO PART 231 concatenating records 263
LOAD REPLACE LOG YES 228 CONTINUEIF 263
LOAD utility COPYDDN 267
adding more data 230 CURSOR 272
after loading 255 data 261, 271
appending to data 189 declared cursors 272
authorization 183 default values, loading 264
auxiliary index, reorganizing after LOAD 258 DEFAULTIF 264
building indexes DELIMITED 262
in parallel 245 delimited files 262
sequentially 245 ENFORCE CONSTRAINTS 264
catalog tables, loading 222 ENFORCE NO 265
CHECK DATA field positions, specifying 259
after LOAD RESUME 257 inline copies, creating 267
LOB (large object) log (continued)
checking 58 backward recovery 667
invalid 68 command history, printing 679
missing 68 data set
option of DSN1PRNT utility 749 active, renaming 675
option of LISTDEF utility 170 archive, renaming 676
orphan 68 printing map 679
out-of-synch 68 printing names 679
recovering 359 forward recovery 666
LOB column record structure, types 732
checking data 61 truncation 676
definitions, completing 64 utilities
errors 68 DSNJU003 (change log inventory) 659
loading 249 DSNJU004 (print log map) 679
LOB table space LOG
copying 118, 143 option of LOAD utility 195
LOAD LOG 249, 250 option of REORG TABLESPACE utility 413
REORG LOG 249, 250 option of REPAIR utility 485
reorganizing 460 log data sets with errors, deleting 673
LOCALSITE log map utility
option of RECOVER utility 349 See print log map utility
option of REPORT utility 513 log RBAs, resetting 722
LOCATE INDEX statement of REPAIR utility 491 logical partition, checking 80
LOCATE INDEXSPACE statement of REPAIR logical unit name, naming convention xiii
utility 491 LOGONLY
LOCATE statement of REPAIR utility 488 option of RECOVER utility 347
LOCATE TABLESPACE statement of REPAIR option of RESTORE SYSTEM utility 530
utility 489 LOGRANGES, option of RECOVER utility 349
location name, naming convention xiii LONGLOG
LOCATION, option of DSNJU003 utility 667 option of REORG INDEX utility 383
locking option of REORG TABLESPACE utility 419
BACKUP SYSTEM utility 50 LOW2KEY column, SYSCOLUMNS catalog table 557
CHECK DATA utility 69 LPL status 831
CHECK INDEX utility 85 LRSNEND option of DSN1LOGP utility 729
CHECK LOB utility 93 LRSNSTART, option of DSN1LOGP utility 729
COPY utility 120 LUNAME, option of DSNJU003 utility 668
COPYTOCOPY utility 145 LUWID option of DSN1LOGP utility 732
DIAGNOSE utility 156
EXEC SQL utility 161
LISTDEF utility 177 M
LOAD utility 253 MAP
MERGECOPY utility 282 option of DSN1CHKR utility 692
MODIFY RECOVERY utility 290 option of REPAIR utility 495
MODIFY STATISTICS utility 299 map, calculating, LOAD utility 226
OPTIONS utility 307 MAPDDN, option of LOAD utility 200
QUIESCE utility 316 MAPPINGTABLE, option of REORG TABLESPACE
REBUILD INDEX utility 334 utility 418
RECOVER utility 369 MAXERR, option of UNLOAD utility 604
REORG INDEX utility 398 MAXPRIME, option of TEMPLATE statement 585
REORG TABLESPACE utility 464 MAXRO
REPAIR utility 503 option of REORG INDEX utility 382
REPORT utility 517 option of REORG TABLESPACE utility 418
RUNSTATS utility 553 MAXROWS, option of DSN1COMP utility 701
STOSPACE utility 572 MB, option of TEMPLATE statement 584
TEMPLATE utility 589 media failure, resolving 93
UNLOAD utility 640 member name, naming convention xiii
utilities access description 40 MEMBER option of DSNJU004 utility 679
log MERGECOPY utility
active DBD01 281
data set status 688 SYSCOPY 281
printing available data sets 679
multilevel security with row-level granularity (continued) NUMCOLS (continued)
authorization restrictions for stand-alone utilities 654 option of REBUILD INDEX utility 325
LOAD REPLACE authorization restrictions 183 option of REORG INDEX utility 385
REORG TABLESPACE authorization option of RUNSTATS utility 541, 545
restrictions 404 NUMCOLUMNS column, SYSCOLDIST catalog
UNLOAD authorization restrictions 595 table 558
NUMPARTS
option of DSN1COMP utility 700
N option of DSN1COPY utility 712
NACTIVE column, SYSTABLESPACE catalog option of DSN1PRNT utility 751
table 558
NACTIVEF column, SYSTABLESPACE catalog
table 558 O
naming convention, variables in command syntax xii OBID, option of DSN1LOGP utility 730
NBRSECND, option of TEMPLATE statement 585 OBIDXLAT, option of DSN1COPY utility 714
NEARINDREF column, SYSTABLEPART catalog object lists
table 560 adding related objects 172
NEAROFFPOSF column of SYSINDEXPART catalog creating 163
table object status
catalog query to retrieve value for 446 advisory, resetting 831
description 564 restrictive, resetting 831
NEWCAT, option of DSNJU003 utility 667 OBJECT, option of REPAIR utility 485
NEWCOPY, option of MERGECOPY utility 277 OFF, option of OPTIONS statement 306
NEWLOG OFFLRBA, option of DSNJU003 utility 669
option of DSNJU003 utility 661 OFFPOSLIMIT, option of REORG TABLESPACE
statement 671 utility 420
NGENERIC, option of DSNJU003 utility 668 OFFSET
NLEAF column, SYSINDEXES catalog table 558 option of DSN1LOGP utility 733
NLEVELS column, SYSINDEXES catalog table 558 option of REPAIR utility
NOALIAS, option of DSNJU003 utility 668 DUMP statement 495
NOAREORPENDSTAR, option of REPAIR utility 488 REPLACE statement 493
NOAUXCHKP, option of REPAIR utility 488 VERIFY statement 492
NOAUXWARN, option of REPAIR utility 488 OLDEST_VERSION column, updating 291
NOCHECKPEND, option of REPAIR utility 488 online copies, merging 280
NOCOPYPEND online utilities
option of LOAD utility 195 description 3
option of REPAIR utility 488 invoking 19
NODUMPS, option of DIAGNOSE utility 153 option description, example 21
non-DB2 utilities OPTIONS utility
effect on real-time statistics 871 altering return codes 306
NOPAD authorization 303
option of REORG TABLESPACE utility 422 compatibility 307
option of UNLOAD utility 602 concurrency 307
NOPASSWD, option of DSNJU003 utility 668 description 303
NORBDPEND, option of REPAIR utility 488 errors, handling 305
NORCVRPEND, option of REPAIR utility 488 examples
normal-termination, option of TEMPLATE utility 582 checking syntax 307
NOSUBS COPY 308
option of LOAD utility 199 EVENT 308
option of UNLOAD utility 602 forcing return code 0 308
NOSYSREC, option of REORG TABLESPACE ITEMERROR 308
utility 414 LISTDEF 307
not sign, problems with 425 LISTDEF definition libraries 308
notices, legal 881 LISTDEFDD 308
NPAGES column, SYSTABLES catalog table 557 MODIFY RECOVERY 308
NPAGES column, SYSTABSTATS catalog table 557 PREVIEW 307
NPAGESF column, SYSTABLES catalog table 557 SKIP 308
NULL predicate 429, 625 TEMPLATE 307
NULLIF, option of LOAD utility 221 TEMPLATE definition libraries 308
NUMCOLS TEMPLATEDD 308
option of LOAD STATISTICS 192 execution phases 303
phases of execution (continued) privilege
RECOVER utility 342 description 3
REORG INDEX utility 376 granting 4
REORG TABLESPACE utility 405 revoking 4
REPAIR utility 483 privilege set of a process 3
REPORT utility 509 process, privilege set of 3
RESTORE SYSTEM utility 529 PSEUDO_DEL_ENTRIES column of SYSINDEXPART
RUNSTATS utility 536 catalog table 564
STOSPACE utility 569 pseudo-deleted keys 564
TEMPLATE utility 575 PUNCHDDN
UNLOAD utility 595 option of REORG TABLESPACE utility 433
PIECESIZ option of UNLOAD utility 599
option of DSN1COPY utility 713 PUNCHDSN, option of DSNU CLIST command 31
option of DSN1PRNT utility 750
point-in-time recovery
options 344 Q
performing 358, 360 qualifier-name, naming convention xiv
planning for 358 quiesce point, establishing 311
PORT, option of DSNJU003 utility 668 QUIESCE utility
POSITION authorization 311
option of LOAD utility 211 catalog and directory objects 314
option of UNLOAD utility 610 compatibility 316
PQTY column data sets needed 313
SYSINDEXPART catalog table 564 description 311
SYSTABLEPART catalog table, use by examples
RUNSTATS 561 list 318
predicate quiesce point for three table spaces 317
basic 424 table space set 318
BETWEEN 425 WRITE NO 319
IN 427 history record, printing 679
LIKE 427 instructions 314
NULL 429 lists 315
overview 424 lists, using 312
predicates option descriptions 312
basic 622 output 311
BETWEEN 622 partitions 312
IN 623 phases of execution 311
LIKE 623 quiesce point, establishing 314
NULL 625 recovery preparations 314
PREFORMAT restarting 316
option of LOAD PART 206 restrictive states, compatibility 315
option of LOAD utility 189, 241 syntax diagram 312
option of REORG INDEX utility 387 table space set 313
option of REORG TABLESPACE utility 434 table space, specifying 312
option of REORG utility 241 terminating 316
preformatting active logs write to disk, failure 316
data sets required 657 writing changed pages to disk 313
description 657
example 657
output 657 R
PREVIEW RBA (relative byte address), range printed by print log
option of OPTIONS utility 304 map 681
using with LISTDEF utility 175 RBAEND, option of DSN1LOGP utility 729
preview mode 306 RBASTART, option of DSN1LOGP utility 729
PREVIEW mode, executing utilities in 588 RBDP
PRINT See REBUILD-pending (RBDP) status
option of DSN1COPY utility 713 RBDP (REBUILD-pending) status
option of DSN1PRNT utility 751 description 333, 367
print log map utility (DSNJU004) resetting 367
JCL requirements 680 RBDP* (REBUILD-pending star) status, resetting 334
SYSIN stream parsing 680 RC0, option of OPTIONS statement 306
RECOVER utility (continued) recovery (continued)
examples (continued) real-time statistics tables 873
TORBA 351 reporting information 514
fallback 367 table space
hierarchy of dependencies 357 description 351
incremental image copies 353 multiple spaces 351
input data sets 351 recovery information, reporting 511
instructions 351 recovery log
JES3 environment 367 backward 667
lists of objects 352 forward 666
lists, using 344 RECOVERY, option of REPORT utility 511
LOB data 359 RECOVERYDDN
LOGAPPLY phase, optimizing 365 option of COPY utility 100
mixed volume IDs 368 option of COPYTOCOPY utility 139
non-DB2 data sets 354 option of LOAD utility 191, 239
option descriptions 344 option of MERGECOPY utility 278
order of recovery 357 option of REORG TABLESPACE utility 415, 455
output 341 RECOVERYSITE
pages, recovering 345, 353 option of RECOVER utility 349
parallel recovery 346, 352 option of REPORT utility 513
partitions, recovering 345, 352 RECP
performance recommendations 365 See RECOVER-pending (RECP) status
phases of execution 342 referential constraint
point-in-time recovery loading data 234
performing 358 violations, correcting 236
planning for 358 REFP
point-in-time recovery See REFRESH-pending (REFP) status
planning for 362 REFRESH-pending (REFP) status
RBA, recovering to 346 description 836
rebalancing partitions with REORG 361 resetting 836
REBUILD-pending status 341 remote site recovery 110
recovery status 361 REORG INDEX utility
restarting 369 access, allowing 380
restriction 341 access, specifying 393
skipping SYSLGRNX during LOGAPPLY phase 349 authorization 375
specific data set, skipping 364 catalog updates 400
syntax diagram 343 CHECK-pending status, compatibility 389
table space sets 341 compatibility 398
TABLESPACE option 116 control statement, creating 392
tape mounts, retaining 368 data set
terminating 369 shadow, determining name 391
RECOVER-pending (RECP) status data sets
description 367, 835 definitions, changing 394
resetting 256, 367, 835 needed 389
recovery shadow 391
catalog objects 355 shadow, estimating size 391
compressed data 364 unload, specifying 387
consistency, ensuring 362 user-managed 390
data set, partition 352 data-sharing considerations 388
database description 375
LOB table space 117 drain behavior, specifying 383
REBUILD INDEX utility 321 DRAIN_WAIT, when to use 395
RECOVER utility 341 examples
REORG makes image copies invalid 109 HISTORY 401
directory objects 355 KEYCARD 401
error range 354 LIST 401
image copies 367 MAXRO 401
JES3 environment 367 OPTIONS 401
page 353 REPORT 401
partial 360 SHRLEVEL 401
preparing for with copies 116 single index 401
REORG TABLESPACE utility (continued) REORG TABLESPACE utility (continued)
examples (continued) segmented table spaces, reorganizing 460
table space, reorganizing 470 selection condition 424
unload data set, specifying 470 shadow data sets, defining 442
failed job, recovering 462 shadow objects 442
fallback recovery considerations 450 SHRLEVEL
indexes, building in parallel 457 specifying 447
inline copy 455 user-managed data sets with 442
instructions 445 SHRLEVEL CHANGE
interrupting temporarily 452 compatibility with restart-pending status 437
lists, using 412 log processing 448
LOB table space performance implications 456
reorganizing 460 when to use 456
restriction 405 slow processing, operator actions 448
log processing, specifying max time 418 sort device type, specifying 433
logging, specifying 413 sort subtasks
long logs, action taken 419 allocation 458
LONGLOG action, specifying interval 419 determining number 458
mapping table sort work file, estimating size 459
example 435 statistics, specifying 430
preventing concurrent use 436 switch methodology, specifying 420
specifying name 418 syntax diagram 407
using 435 temporary data sets, specifying number 434
multilevel security restrictions 404 time to wait for drain, specifying 417
option descriptions 412 time-out, specifying action 419
output 403, 468 timestamps
partitioned table spaces, reorganizing 460 decrementing 427
partitions, REORG-pending status incrementing 427
considerations 453 unload, specifying action 421
performance recommendations unloading data, methods of 459
after adding column 290 versions, effect on 469
general 455 REORG utility
phases of execution See also REORG INDEX utility
BUILD phase 405 See also REORG TABLESPACE utility
BUILD2 phase 405 compressing data 236
LOG phase 405 effect on real-time statistics 866
RELOAD phase, description 405 KEEPDICTIONARY option 236
RELOAD phase, error 460 REORG-pending (REORP) status
SORT phase 405 description 836
SORTBLD phase 405 resetting 836
SWITCH phase 405 REORG, option of DSN1COMP utility 701
UNLOAD phase 405 reorganizing
UTILINIT phase 405 indexes 392
UTILTERM phase 405 table spaces 392
preformatting pages 434 table spaces, determining when to reorganize 446
processing encrypted data 436 REORP
REBALANCE See REORG-pending (REORP) status
restrictions 436 REPAIR utility
rebalancing partitions 453 authorization 483
reclaiming space from dropped tables 450 before running
records, discarding 434 copying table space 498
recycling version numbers 469 restoring indexes 498
region size recommendation 435 catalog, repairing 501
RELOAD phase CHECK-pending status 506
counting records loaded 460 commit point for LOCATE statement 489
RELOAD phase, encountering an error in 460 compatibility 503
reload, skipping 449 control statement, creating 499
restarting 462 damaged page, repairing 500
restriction 403 data sets needed 498
sample generated LOAD statement 423 DBD statement
scope, specifying 413 declared temporary table 496
RESET restarting (continued)
option of DSN1COPY utility 715 utilities (continued)
option of REPAIR utility 493 OPTIONS 307
resetting out-of-space condition 45
pending status phase restart 42
advisory 831 QUIESCE 316
auxiliary CHECK-pending (ACHKP) 831 REBUILD INDEX 334
CHECK-pending (CHKP) 832 RECOVER 369
COPY-pending 833 REORG INDEX 396, 397
group buffer pool RECOVER-pending REORG TABLESPACE 461, 462
(GRECP) 834 RESTORE SYSTEM 532
informational COPY-pending (ICOPY) 117, 834 RUNSTATS 553
page set REBUILD-pending (PSRBD) 834 STATISTICS keyword 46
REBUILD-pending (RBDP) 334 STOSPACE 572
REBUILD-pending (RBDP), for the RECOVER TEMPLATE 589
utility 367 templates 45
REBUILD-pending (RBDP), summary 834 UNLOAD 640
RECOVER-pending (RECP), for the RECOVER using DB2I 44
utility 367 using the DSNU CLIST command 44
RECOVER-pending (RECP), summary 835 utility statements 45
REORG-pending (REORP) 836 UTPROC 36
restart-pending 837 volume serial 46
refresh status, REFRESH-pending (REFP) 836 RESTORE SYSTEM utility
warning status, auxiliary warning (AUXW) 832 actions after running 532
RESPORT, option of DSNJU003 utility 668 authorization 529
restart compatibility 532
conditional control record creating system point in time for 665
reading 689 data sets needed 531
sample 689 data sharing environment 531
restart-pending (RESTP) status description 529
description 837 DISPLAY UTILITY command 531
resetting 837 examples
RESTART, option of DSNU CLIST command 32 LOGONLY 533
restarting recovering a system 532
performing first two phases only 667 instructions 531
problems option descriptions 530
cannot restart REPAIR 502 output 529
cannot restart REPORT 516 phases of execution 529
utilities preparation 531
BACKUP SYSTEM 50 restarting 532
CHECK DATA 69 syntax diagram 530
CHECK INDEX 85 terminating 532
CHECK LOB 93 RESTP
COPY 119 See restart-pending (RESTP) status
COPYTOCOPY 145 restrictive status
COPYTOCOPY utility 145 resetting 499, 500, 831
creating your own JCL 44 RESUME, option of LOAD PART 206
current restart 43 RESUME, option of LOAD utility 189
data set name 46 RETPD, option of TEMPLATE statement 581
data sharing 41 RETRY
DIAGNOSE 156 option of CHECK INDEX utility 76
EXEC SQL 160 option of REORG INDEX utility 382
EXEC statement 38 option of REORG TABLESPACE utility 418
JCL, updating 44 RETRY_DELAY
LISTDEF 177 option of CHECK INDEX utility 76
LISTS 46 return code, altering 306
LOAD 251 return code, CHANGELIMIT 116
MERGECOPY 282 REUSE
methods of restart 44 option of LOAD PART 206
MODIFY RECOVERY utility 290 option of LOAD utility 194
MODIFY STATISTICS 299 option of REBUILD INDEX 324
security (continued) SORTNUM (continued)
multilevel with row-level granularity (continued) option of LOAD utility 201
authorization restrictions for stand-alone option of REBUILD INDEX 325
utilities 654 option of REORG INDEX 386
security, data sets 25 option of REORG TABLESPACE utility 434
SEGMENT, option of DSN1COPY utility 711 option of RUNSTATS utility 543, 547
segmented table spaces, reorganizing 460 SORTOUT
SELECT statement data set of CHECK DATA utility 22
list data set of LOAD utility, estimating size 224
maximum number of elements 773 purpose 22
SYSIBM.SYSTABLESPACE, example 571 SORTWKnn data set 22
select-statement, option of EXEC SQL utility 160 space
SELECT, option of DSN1SDMP utility 758 DBD, reclaiming 290
SELECT2, option of DSN1SDMP utility 762 unused, finding for nonsegmented table space 446
semicolon SPACE
embedded 844 option of MODIFY STATISTICS utility 297
SET INDEX statement of REPAIR utility 486 option of REORG TABLESPACE utility 432
SET INDEXSPACE statement of REPAIR utility 486 option of TEMPLATE utility 584
SET TABLESPACE statement of REPAIR utility 486 SPACE column
setting SQL terminator analyzing values 572
DSNTIAD 844 SYSTABLEPART catalog table, use by
shadow data sets RUNSTATS 561
CHECK INDEX utility 78 SPACE column of SYSINDEXPART catalog table 564
defining space statistics 559
REORG INDEX utility 391 SPACEF column
REORG TABLESPACE utility 444 SYSINDEXPART catalog table 564
estimating size, REORG INDEX utility 391 SYSTABLEPART catalog table, use by
shift-in character, LOAD utility 209 RUNSTATS 561
shift-out character, LOAD utility 209 SQL (Structured Query Language)
SHRLEVEL limits 771
option of CHECK INDEX utility 75 statement terminator 844
option of COPY utility SQL statement terminator
CHANGE 105, 113 modifying in DSNTEP2 and DSNTEP4 846
REFERENCE 105, 112 modifying in DSNTIAD 844
option of LOAD utility 189 SQL terminator, specifying in DSNTEP2 and
option of REORG INDEX utility 380 DSNTEP4 846
option of REORG TABLESPACE utility 415 SQL terminator, specifying in DSNTIAD 844
option of RUNSTATS utility 541, 546 SQTY column
option of UNLOAD utility 604 SYSINDEXPART catalog table 564
SIZE, option of DSNUPROC utility 36 SYSTABLEPART catalog table, use by
SKIP, option of OPTIONS statement 305 RUNSTATS 561
SMALLINT ST01WKnn data set 22
option of LOAD utility 217 STACK, option of TEMPLATE statement 585
option of UNLOAD utility 615 stand-alone utilities
SORTDATA, option of REORG TABLESPACE control statement, creating 653
utility 414 description 3
SORTDEVT invoking 653
option of CHECK DATA utility 60 JCL EXEC PARM, using to specify options 653
option of CHECK INDEX utility 76 multilevel security with row-level granularity,
option of CHECK LOB utility 90 effects 654
option of LOAD utility 201 specifying options 653
option of REBUILD INDEX 324 START TRACE command, option of DSN1SDMP
option of REORG INDEX 386 utility 758
option of REORG TABLESPACE utility 433 STARTIME, option of DSNJU003 utility 664
option of RUNSTATS utility 542, 547 STARTRBA, option of DSNJU003 utility 662
SORTKEYS state, utility execution 39
option of LOAD utility 196, 241 statistics
SORTNUM deciding when to gather 550
option of CHECK DATA utility 60 gathering 535
option of CHECK INDEX utility 77 STATISTICS
option of CHECK LOB utility 91 option of LOAD utility 191
syntax diagram (continued) TABLE (continued)
RUNSTATS TABLESPACE 537 option of LOAD STATISTICS 191
STOSPACE utility 569 option of REORG TABLESPACE utility 430
TEMPLATE statement 575 option of RUNSTATS utility 539
UNLOAD utility 596 table name, naming convention xiv
SYSCOPY table space
catalog table, information from REPORT utility 514 assessing status with RUNSTATS 550
data set 22 checking 55
directory table space, MERGECOPY checking multiple 66
restrictions 277, 281 determining when to reorganize 392, 446
option of DSN1LOGP utility 730 merging copies 275
SYSCOPY, deleting rows 289 mixed volume IDs, copying 118
SYSDISC data set naming convention xiv
description for LOAD and REORG 22 nonsegmented, finding unused space 446
LOAD utility, estimating size 224 partitioned, updating statistics 551
SYSERR data set reorganizing
description for CHECK DATA and LOAD 22 determining when to reorganize 446
LOAD utility, estimating size 224 using SORTDATA option of REORG utility 447
SYSIBM.SYSCOPY utilization 392
ICBACKUP column 110 segmented
ICUNIT column 110 copying 114
SYSIN data set 22 LOAD utility 228
SYSIN DD statement, built by CLIST 34 status, resetting 499
SYSLGRNX directory table, information from REPORT TABLESPACE
utility 514 option of CHECK DATA utility 58
SYSLGRNX, deleting rows 289 option of CHECK INDEX utility 75
SYSMAP data set option of CHECK LOB utility 90
description 23 option of COPY utility 99
estimating size 224 option of COPYTOCOPY utility 136
SYSOBDS entries, deleting 290 option of LISTDEF utility 168
SYSPITR, option of DSNJU003 utility 665 option of MERGECOPY utility 276
SYSPRINT data set 23 option of MODIFY RECOVERY utility 286
SYSPRINT DD statement, built by CLIST 34 option of MODIFY STATISTICS utility 296
SYSPUNCH data set 23 option of QUIESCE utility 312
SYSREC data set 23 option of REBUILD INDEX utility 323
SYSTEM option of RECOVER utility 344
option of DSNU CLIST command 32 option of REORG TABLESPACE utility 412
option of DSNUPROC utility 36 option of REPAIR utility
system data sets, renaming 675 general description 485
system monitoring on LOCATE TABLESPACE statement 489
index organization 392 on SET TABLESPACE and SET INDEX
table space organization 392, 446 statements 487
system point in time, creating 665 option of REPORT utility 511
system, limits 771 option of RUNSTATS utility 538, 545
SYSTEMPAGES, option of COPY utility 104 option of UNLOAD utility 597
SYSUT1 data set 23 TABLESPACES, option of LISTDEF utility 166
SYSUT1 data set for LOAD utility, estimating size 224 TABLESPACESET
SYSUTILX directory table space option of QUIESCE utility 313
MERGECOPY restrictions 277, 281 option of REPORT utility 513
order of recovering 355 TABLESPACESTATS
contents 851
real-time statistics table 849
T TAPEUNITS
table option of COPY utility 103
dropping, reclaiming space 450 option of RECOVER utility 347
exception, creating 62 TEMPLATE library 586
multiple, loading 202 TEMPLATE library, specifying 306
replacing 228 TEMPLATE utility
replacing data 228 authorization 575
TABLE BSAM buffers, specifying number 581
option of LISTDEF utility 169 compatibility 589
TOLASTFULLCOPY option of RECOVER utility 348 UNLOAD utility (continued)
TOLOGPOINT, option of RECOVER utility 346 character strings, truncating 612
TORBA option of RECOVER utility 346 CLOB data type, specifying 620
TOSEQNO, option of RECOVER utility 348 CLOB strings, truncating 620
TOVOLUME, option of RECOVER utility 348 compatibility 640
TRACEID, option of DIAGNOSE utility 155, 156 compressed data 639
TRK, option of TEMPLATE statement 584 constant field, specifying 619
TRTCH, option of TEMPLATE statement 586 converting data types 630
TRUNCATE copies, concatenating 629
option of LOAD utility data sets used 627
CHAR data type 213, 244 data type compatibility 631
GRAPHIC data type 215, 244 data, identifying 597
GRAPHIC EXTERNAL data type 216 DBCLOB format, specifying 621
VARCHAR data type 214, 244 DBCS string, truncating 621
VARGRAPHIC data type 217, 244 DD name of unload data set, specifying 600
TYPE DD statement for image copy, specifying 599
option of DIAGNOSE utility 153 decimal format, specifying 616
option of DSN1LOGP utility 732 decimal point character, specifying for delimited
formats 603
delimited files 636
U delimited format, specifying 602
UID delimiters
option of DSNU command 32 column 603
option of DSNUPROC utility 36 string 603
UNCNT, option of TEMPLATE statement 582 description 595
UNICODE EBCDIC format, specifying 601
option of LOAD utility 199 examples
option of UNLOAD utility 601 delimited file format 646
UNIT FROMCOPY option 643
option of DSNJU003 utility 663 HEADER option 643
option of DSNU CLIST command 33 partitioned table space 643
option of TEMPLATE statement 581 SAMPLE option 643
unit of recovery specifying a header 643
in-abort 667 unloading a sample of rows 643
inflight 667 unloading all columns 642
unit of work unloading data from an image copy 643
See also unit of recovery unloading data in parallel 643
in-commit 666 unloading from two tables 643
indoubt, conditional restart 666 unloading multiple table spaces 645
UNLDDN unloading specific columns 642
option of REORG TABLESPACE utility 433 unloading specified partitions 644, 646
option of UNLOAD utility 600 using a field specification list 642
UNLOAD using LISTDEF 644, 645, 646
option of REORG INDEX utility 384 using TEMPLATE 643
option of REORG TABLESPACE utility 421 field position, specifying 610
UNLOAD utility field specification errors, interpreting 639
64-bit floating point notation, specifying 618 field specifications 605
access, specifying 604 floating-point data, specifying format 603
ASCII format, specifying 601 FROM TABLE clause 628
authorization required 595 compatibility with LIST 605
binary floating-point number format, specifying 618 parentheses 605
blanks in VARCHAR fields, removing 612 FROM TABLE option descriptions 608
blanks in VARGRAPHIC fields, removing 614 FROM TABLE syntax diagram 605
BLOB data type, specifying 620 graphic type, specifying 613
BLOB strings, truncating 620 graphic type, truncating 614
CCSID format, specifying 601 header field, specifying 608
CHAR data type, specifying 611 image copies, unloading 629
character string representation of date, image copy, specifying 598
specifying 618 instructions 628
character string representation of time, integer format, specifying 615
specifying 619 labeled duration expression 625
utility control statements WHEN
creating 19 option of LOAD utility 208
parsing rules 19 option of REORG TABLESPACE utility 423
scanning rules 19 option of UNLOAD utility 621
utility-id naming convention xiv work data sets
UTILITY, option of DSNU CLIST command 29 CHECK DATA utility 60, 65
UTPRINmm data set 23 CHECK INDEX utility 78
UTPRINT data set 23 LOAD utility 224
UTPRINT DD statement, built by CLIST 34 WORKDDN
UTPROC, option of DSNUPROC utility 36 option of CHECK DATA utility 60
option of CHECK INDEX utility 76
option of CHECK LOB utility 92
V option of LOAD utility 195
validation routine option of MERGECOPY utility 277
LOAD utility 183 option of REBUILD INDEX utility 329
REORG TABLESPACE utility 421 option of REORG INDEX utility 387
VALUE option of REORG TABLESPACE utility 445
option of DSN1COPY utility 714 WRITE, option of QUIESCE utility 313
option of DSN1LOGP utility 733
option of DSN1PRNT utility 752
VARCHAR
data type, loading 227
option of LOAD utility 213
option of UNLOAD utility 612
VARGRAPHIC
data type, loading 227
option of LOAD utility 216
option of UNLOAD utility 614
varying-length rows, relocated to other pages, finding
number of 446
VERIFY, statement of REPAIR utility 488, 492
version information
updating when moving to another system 501
version number management 291
LOAD utility 259
REBUILD INDEX utility 336
REORG INDEX utility 400
REORG TABLESPACE utility 469
version numbers, recycling 291
VERSION, option of REPAIR utility on LOCATE
statement 491
VERSIONS, option of REPAIR utility 486
versions, REORG TABLESPACE effect on 469
violation messages 66
violations
correcting 67
finding 66
virtual storage access method (VSAM)
See VSAM (virtual storage access method)
VOLCNT, option of TEMPLATE statement 582
VOLUME, option of DSNU CLIST command 33
VOLUMES, option of TEMPLATE statement 582
VSAM (virtual storage access method)
used by REORG TABLESPACE 441
used by STOSPACE 571
VSAMCAT, option of DSNJU003 utility 667
W
WAIT, option of DIAGNOSE utility 154
WARNING, option of OPTIONS statement 306