DB2 for z/OS Application Programming Topics (SG24-6300)
ibm.com/redbooks
International Technical Support Organization
DB2 for z/OS Application Programming Topics
October 2001
SG24-6300-00
Take Note! Before using this information and the product it supports, be sure to read the general information in Special notices on page 257.
First Edition (October 2001)

This edition applies to Version 7 of IBM DATABASE 2 Universal Database Server for z/OS and OS/390 (DB2 for z/OS and OS/390 Version 7), Program Number 5675-DB2.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. QXXE, Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2001. All rights reserved.

Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Contents  iii
Figures  ix
Tables  xi
Examples  xiii
Preface  xvii
The team that wrote this redbook  xvii
Special notice  xviii
IBM trademarks  xix
Comments welcome  xix

Chapter 1. Introduction  1

Part 1. Object-oriented enhancements  5

Chapter 2. Schemas  7
  2.1 What is a schema?  8
  2.2 Schema characteristics  8
    2.2.1 Authorizations on schemas  8
    2.2.2 Schema path and special register  9
    2.2.3 How is a schema name determined?  10
  2.3 The schema processor  10

Chapter 3. Triggers  13
  3.1 Trigger definition  14
  3.2 Why use a trigger  14
  3.3 Trigger characteristics  16
    3.3.1 Trigger activation time  17
    3.3.2 How many times is a trigger activated?  18
    3.3.3 Trigger action condition  19
    3.3.4 Trigger action  19
    3.3.5 Transition variables  20
    3.3.6 Transition tables  21
  3.4 Allowable combinations  22
  3.5 Valid triggered SQL statements  22
  3.6 Invoking stored procedures and UDFs  23
  3.7 Setting error conditions  24
  3.8 Error handling  26
  3.9 Trigger cascading  28
  3.10 Global trigger ordering  29
  3.11 When external actions are backed out  30
  3.12 Passing transition tables to SPs and UDFs  30
  3.13 Trigger package  32
  3.14 Rebinding a trigger package  33
  3.15 Trigger package dependencies  33
  3.16 DROP, GRANT, and COMMENT ON statements  34
  3.17 Catalog changes  35
  3.18 Trigger and constraint execution model  35
  3.19 Design considerations  37
  3.20 Some alternatives to a trigger  38
  3.21 Useful queries  41
  3.22 Trigger restrictions  42

Chapter 4. User-defined distinct types (UDT)  43
  4.1 Introduction  44
  4.2 Creating distinct data types  44
  4.3 CAST functions  45
  4.4 Privileges required to work with UDTs  46
  4.5 Using CAST functions  48
  4.6 Operations allowed on distinct types  49
    4.6.1 Extending operations allowed in UDTs  49
  4.7 Usage considerations  53
    4.7.1 UDTs in host language programs  53
    4.7.2 Using the LIKE comparison with UDTs  54
    4.7.3 UDTs and utilities  54
    4.7.4 Implementing UDTs in an existing environment  55
    4.7.5 Miscellaneous considerations  56
  4.8 UDTs in the catalog  56

Chapter 5. User-defined functions (UDF)  57
  5.1 Terminology overview  58
  5.2 Definition of a UDF  59
  5.3 The need for user-defined functions  59
  5.4 Implementation and maintenance of UDFs  60
    5.4.1 Scalar functions  60
    5.4.2 Column functions  61
    5.4.3 Table functions  62
  5.5 UDF design considerations  64
    5.5.1 Maximizing UDF efficiency  64
    5.5.2 Consider sourced functions  65

Chapter 6. Built-in functions  71
  6.1 What is a built-in function?  72
  6.2 Why use a built-in function  72
  6.3 Built-in function characteristics  72
  6.4 List of built-in functions before Version 6  72
  6.5 New built-in functions in Version 6  73
  6.6 New functions in Version 7  75
  6.7 Built-in function restrictions  76
Part 2. Enhancements that allow a more flexible design  77

Chapter 7. Temporary tables  79
  7.1 Summary of differences between types of tables  80
  7.2 Created temporary tables  80
    7.2.1 What is a created temporary table?  81
    7.2.2 Why use created temporary tables  81
    7.2.3 Created temporary tables characteristics  82
    7.2.4 Created temporary tables pitfalls  86
    7.2.5 Created temporary tables restrictions  86
  7.3 Declared temporary tables  88
    7.3.1 What is a declared temporary table?  88
    7.3.2 Why use declared temporary tables  88
    7.3.3 Declared temporary tables characteristics  88
    7.3.4 Creating a temporary database and table space  89
    7.3.5 Creating a declared temporary table  90
    7.3.6 Using declared temporary tables in a program  92
    7.3.7 Creating declared temporary tables for scrollable cursors  93
    7.3.8 Remote declared temporary tables  93
    7.3.9 Creating indexes  94
    7.3.10 Usage considerations  94
    7.3.11 Converting from created temporary tables  95
    7.3.12 Authorization  95
    7.3.13 Declared temporary table restrictions  95
Chapter 8. Savepoints  97
  8.1 What is a savepoint?  98
  8.2 Why use savepoints  98
  8.3 Savepoint characteristics  99
  8.4 Remote connections  101
  8.5 Savepoint restrictions  102

Chapter 9. Unique column identification  103
  9.1 Identity columns  104
    9.1.1 What is an identity column?  104
    9.1.2 When to use identity columns  104
    9.1.3 Identity column characteristics  104
    9.1.4 Creating a table with an identity column  105
    9.1.5 How to populate an identity column  106
    9.1.6 How to retrieve an identity column value  109
    9.1.7 Identity columns in a data sharing environment  110
    9.1.8 Trying to overcome the identity column deficiencies  110
    9.1.9 Application design considerations  111
    9.1.10 Identity column restrictions  112
  9.2 ROWID and direct row access  112
    9.2.1 What is a ROWID?  113
    9.2.2 ROWID implementation and maintenance  113
    9.2.3 How ROWIDs are generated  115
    9.2.4 Casting to a ROWID data type  117
    9.2.5 ROWIDs and partitioning keys  118
    9.2.6 ROWID and direct row access  119
    9.2.7 ROWID and direct row access restrictions  121
  9.3 Identity column and ROWID usage and comparison  122
Part 3. More powerful SQL  123

Chapter 10. SQL CASE expressions  125
  10.1 What is an SQL CASE expression?  126
  10.2 Why use an SQL CASE expression  127
  10.3 Alternative solutions  131
  10.4 Other uses of CASE expressions  131
  10.5 SQL CASE expression restrictions  133
Chapter 11. Union everywhere  135
  11.1 What is a union everywhere?  136
  11.2 Why union everywhere  136
  11.3 Unions in nested table expressions  136
  11.4 Unions in subqueries  137
    11.4.1 Unions in basic predicates  137
    11.4.2 Unions in quantified predicates  137
    11.4.3 Unions in EXISTS predicates  138
    11.4.4 Unions in IN predicates  139
    11.4.5 Unions in selects of INSERT statements  139
    11.4.6 Unions in UPDATE  140
  11.5 Unions in views  140
  11.6 Explain and unions  142
  11.7 Technical design and new frontiers  143

Chapter 12. Scrollable cursors  149
  12.1 What is a scrollable cursor?  150
  12.2 Why use a scrollable cursor  150
  12.3 Scrollable cursors characteristics  151
    12.3.1 Types of cursors  151
    12.3.2 Scrollable cursors in depth  152
  12.4 How to choose the right type of cursor  153
  12.5 Using a scrollable cursor  154
    12.5.1 Declaring a scrollable cursor  155
    12.5.2 Opening a scrollable cursor  155
    12.5.3 Fetching rows  157
    12.5.4 Moving the cursor  164
    12.5.5 Using functions in a scrollable cursor  168
  12.6 Update and delete holes  170
    12.6.1 Delete hole  171
    12.6.2 Update hole  172
  12.7 Maintaining updates  174
  12.8 Locking and scrollable cursors  177
  12.9 Stored procedures and scrollable cursors  178
  12.10 Scrollable cursors recommendations  179

Chapter 13. More SQL enhancements  181
  13.1 The ON clause extensions  182
    13.1.1 Classifying predicates  182
    13.1.2 During join predicates  182
  13.2 Row expressions  185
    13.2.1 What is a row expression?  185
    13.2.2 Types of row expressions  185
    13.2.3 Row expression restrictions  188
  13.3 ORDER BY  188
    13.3.1 ORDER BY columns no longer have to be in select list (V5)  188
    13.3.2 ORDER BY expression in SELECT (V7)  189
    13.3.3 ORDER BY sort avoidance (V7)  190
  13.4 INSERT  191
    13.4.1 Using the DEFAULT keyword in VALUES clause of an INSERT  191
    13.4.2 Inserting using expressions  192
    13.4.3 Inserting with self-referencing SELECT  192
    13.4.4 Inserting with UNION or UNION ALL  193
  13.5 Subselect UPDATE/DELETE self-referencing  193
  13.6 Scalar subquery in the SET clause of an UPDATE  195
    13.6.1 Conditions for usage
    13.6.2 Self-referencing considerations
  13.7 FETCH FIRST n ROWS ONLY
  13.8 Limiting rows for SELECT INTO
  13.9 Host variables
    13.9.1 VALUES INTO statement
    13.9.2 Host variables must be preceded by a colon
  13.10 The IN predicate supports any expression
  13.11 Partitioning key update
Part 4. Utilities versus applications  203

Chapter 14. Utilities versus application programs  205
  14.1 Online LOAD RESUME  206
    14.1.1 What is online LOAD RESUME?  206
    14.1.2 Why use online LOAD RESUME  206
    14.1.3 Online LOAD RESUME versus classic LOAD  207
    14.1.4 Online LOAD RESUME versus INSERT programs  208
    14.1.5 Online LOAD RESUME pitfalls  209
    14.1.6 Online LOAD RESUME restrictions  209
  14.2 REORG DISCARD  210
    14.2.1 What is REORG DISCARD?  210
    14.2.2 When to use a REORG DISCARD  210
    14.2.3 Implementation and maintenance  210
    14.2.4 REORG DISCARD restrictions  211
  14.3 REORG UNLOAD EXTERNAL and UNLOAD  212
    14.3.1 What are REORG UNLOAD EXTERNAL and UNLOAD?  212
    14.3.2 REORG UNLOAD EXTERNAL  212
    14.3.3 UNLOAD  213
    14.3.4 UNLOAD implementation  213
    14.3.5 UNLOAD restrictions  214
    14.3.6 UNLOAD highlights  214
    14.3.7 UNLOAD pitfalls  215
    14.3.8 Comparing DSNTIAUL, REORG UNLOAD EXTERNAL and UNLOAD  215
  14.4 Using SQL statements in the utility input stream  217
    14.4.1 EXEC SQL utility control statement  217
    14.4.2 Possible usage of the EXEC SQL utility statement  218
Part 5. Appendixes  221

Appendix A. DDL of the DB2 objects used in the examples  223
  E/R-diagram of the tables used by the examples  224
  JCL for the SC246300 schema definition  224
  Creation of a database, table spaces, UDTs and UDFs  225
  Creation of tables used in the examples  228
  Creation of sample triggers  236
  Populated tables used in the examples  238
  DDL to clean up the environment  240
Appendix B. Sample programs  243
  Returning SQLSTATE from a stored procedure to a trigger  244
  Passing a transition table from a trigger to a SP  246

Appendix C. Additional material  251
  Locating the Web material  251
  Using the Web material  251
    How to use the Web material  251

Related publications  253
  IBM Redbooks  253
    Other resources  253
  Referenced Web sites  254
  How to get IBM Redbooks  254
    IBM Redbooks collections  255
Figures
3-1   Allowable trigger parameter combinations  22
3-2   Allowed SQL statement matrix  23
3-3   Trigger cascading  29
3-4   SQL processing order and triggers  36
4-1   Comparison operators allowed on UDTs created WITH COMPARISONS  49
8-1   Travel reservation savepoint sample itinerary  98
9-1   Identity column value assignment in a data sharing environment  110
12-1  Fetch syntax changes to support scrollable cursors  158
12-2  How to scroll within the result table  167
12-3  SQLCODEs and cursor position  168
12-4  How DB2 validates a positioned UPDATE  176
12-5  How DB2 validates a positioned DELETE  177
12-6  Stored procedures and scrollable cursors  179
13-1  Improved sort avoidance for ORDER BY clause  190
14-1  Online LOAD RESUME  208
14-2  Generated LOAD statements  211
14-3  Sample UNLOAD utility statement  213
A-1   Relations of tables used in the examples  224
ix
Tables
4-1  Catalog changes to support UDTs   56
5-1  Allowable combinations of function types   59
7-1  Distinctions between DB2 base tables and temporary tables   80
10-1 Functions equivalent to CASE expressions   131
11-1 PLAN_TABLE changes for UNION everywhere   142
12-1 Cursor type comparison   154
12-2 Sensitivity of FETCH to changes made to the base table   161
13-1 How the FETCH FIRST clause and the OPTIMIZE FOR clause interact   198
14-1 Comparing different means to unload data   216
Examples
2-1  Overriding the implicit search path   9
2-2  Schema authorization   10
3-1  Trigger to maintain summary data   14
3-2  Trigger to maintain summary data   16
3-3  Trigger to initiate an external action   16
3-4  BEFORE trigger   17
3-5  Multiple trigger actions   19
3-6  Transition variables   20
3-7  Transition table   21
3-8  Single trigger action   21
3-9  Invoking a UDF within a trigger   24
3-10 Raising error conditions   24
3-11 Signaling SQLSTATE   25
3-12 Information returned after a SIGNAL SQLSTATE   25
3-13 Sample information returned when trigger receives an SQLCODE   26
3-14 Passing SQLSTATE back to a trigger   27
3-15 Cascading error message   29
3-16 Using table locators   30
3-17 Sample COBOL program using a SP and table locator   31
3-18 Rebinding a trigger package   33
3-19 Information returned when trigger package is invalid   34
3-20 Comment on trigger   34
3-21 Check constraint is better than a trigger   39
3-22 Trigger is better than a check constraint   40
3-23 Alternative trigger   40
3-24 Identify all triggers for a table   41
3-25 Identify all triggers for a database   41
4-1  Sample DDL to create UDTs   44
4-2  Automatically generated CAST functions   45
4-3  Create table using several UDTs   46
4-4  GRANT USAGE/EXECUTE ON DISTINCT TYPE   47
4-5  DROP and COMMENT ON for UDTs   47
4-6  Strong typing and invalid comparisons   48
4-7  Two casting methods   48
4-8  Using sourced functions   50
4-9  Defining sourced column functions on UDTs   50
4-10 Strong typing and invalid comparisons   51
4-11 Comparing pesetas and euros   52
4-12 Another way to compare pesetas and euros   52
4-13 Automatic conversion of euros   53
4-14 Using LIKE on a UDT   54
4-15 Loading a table with a UDT   54
5-1  Example of an SQL scalar function   60
5-2  User-defined external scalar function   61
5-3  Sourced column function   61
5-4  User-defined table function   62
5-5  External UDF to convert from SMALLINT to VARCHAR   66
5-6  Creating a sourced UDF   66
5-7  Using CAST instead of a UDF   67
5-8  Built-in function instead of a UDF   67
5-9  CHARNSI source code   67
7-1  Created temporary table DDL   82
7-2  Created temporary table in SYSIBM.SYSTABLES   82
7-3  Use of LIKE clause with created temporary tables   83
7-4  View on a created temporary table   83
7-5  Dropping a created temporary table   84
7-6  Using a created temporary table in a program   84
7-7  Sample DDL for a declared temporary table   88
7-8  Create a database and table spaces for declared temporary tables   89
7-9  Explicitly specify columns of declared temporary table   91
7-10 Implicit define declared temporary table and identity column   91
7-11 Define declared temporary table from a view   91
7-12 Dropping a declared temporary table   91
7-13 Declared temporary tables in a program   92
7-14 Three-part name of declared temporary table   93
8-1  Setting up a savepoint   100
9-1  IDENTITY column for member number   105
9-2  Copying identity column attributes with the LIKE clause   106
9-3  Insert with select from another table   108
9-4  Retrieving an identity column value   109
9-5  ROWID column   113
9-6  SELECTing based on ROWIDs   114
9-7  Copying data to a table with GENERATED ALWAYS ROWID via subselect   115
9-8  Copying data to a table with GENERATED BY DEFAULT ROWID via subselect   116
9-9  DCLGEN output for a ROWID column   117
9-10 Coding a ROWID host variable in Cobol   117
9-11 Why not to use a ROWID column as a partitioning key   118
9-12 ROWID direct row access   119
9-13 Inappropriate coding for direct row access   121
10-1 SELECT with CASE expression and simple WHEN clause   126
10-2 Update with CASE expression and searched WHEN clause   126
10-3 Three updates vs. one update with a CASE expression   127
10-4 One update with the CASE expression and only one pass of the data   128
10-5 Three updates vs. one update with a CASE expression   128
10-6 Same update implemented with CASE expression and only one pass of the data   128
10-7 Same update with simplified logic   129
10-8 Avoiding division by zero   129
10-9 Avoid division by zero, second example   129
10-10 Replacing several UNION ALL clauses with one CASE expression   130
10-11 Raise an error in CASE statement   131
10-12 Pivoting tables   132
10-13 Use CASE expression for grouping   132
11-1 Unions in nested table expressions   136
11-2 Using UNION in basic predicates   137
11-3 Using UNION with quantified predicates   137
11-4 Using UNION in the EXISTS predicate   138
11-5 Using UNION in an IN predicate   139
11-6 Using UNION in an INSERT statement   139
11-7 Using UNION in an UPDATE statement   140
11-8 Create view with UNION ALL   141
11-9 Use view containing UNION ALL   142
11-10 PLAN_TABLE output   142
11-11 DDL to create split tables   144
11-12 DDL to create UNION in view   146
11-13 Sample SELECT from view to mask the underlying tables   147
11-14 Views to use for UPDATE and DELETE   147
11-15 WITH CHECK OPTION preventing INSERT   147
12-1 Sample DECLARE for scrollable cursors   155
12-2 Opening a scrollable cursor   156
12-3 Example of a FETCH SENSITIVE request which creates an update hole   163
12-4 Scrolling through the last five rows of a table   165
12-5 Several FETCH SENSITIVE statements   166
12-6 Using functions in a scrollable cursor   168
12-7 Using functions in an insensitive scrollable cursor   169
12-8 Aggregate function in a SENSITIVE cursor   169
12-9 Aggregate function in an INSENSITIVE cursor   170
12-10 Scalar functions in a cursor   170
12-11 Expression in a sensitive scrollable cursor   170
12-12 Delete holes   172
12-13 Update holes   173
13-1 Sample tables and rows   182
13-2 Inner join and ON clause with AND   183
13-3 LEFT OUTER JOIN with ANDed predicate on WORKDEPT field in the ON clause   183
13-4 LEFT OUTER JOIN with ON clause and WHERE clause   184
13-5 Inner join and ON clause with OR in the WORKDEPT column   184
13-6 Row expressions with equal operation   185
13-7 Row expressions with = ANY operation   186
13-8 Row expression with <> ALL operator   186
13-9 IN row expression   187
13-10 NOT IN row expression   187
13-11 Row expression restrictions   188
13-12 ORDER BY column not in the select list   188
13-13 ORDER BY expression in SELECT   189
13-14 Data showing improved sort avoidance for the ORDER BY clause   190
13-15 Inserting with the DEFAULT keyword   191
13-16 Inserting using an expression   192
13-17 Inserting with a self-referencing SELECT   192
13-18 UPDATE with a self referencing non-correlated subquery   193
13-19 UPDATE with a self referencing non-correlated subquery   193
13-20 DELETE with self referencing non-correlated subquery   194
13-21 DELETE with self referencing non-correlated subquery   194
13-22 Invalid positioned update   194
13-23 Non-correlated subquery in the SET clause of an UPDATE   195
13-24 Correlated subquery in the SET clause of an UPDATE   195
13-25 Correlated subquery in the SET clause of an UPDATE with a column function   195
13-26 Correlated subquery in the SET clause of an UPDATE using the same table   196
13-27 Row expression in the SET clause of an UPDATE   196
13-28 FETCH FIRST n ROWS ONLY   197
13-29 Limiting rows for SELECT INTO   199
13-30 Use of VALUES INTO   199
13-31 Some uses of the SET assignment statement   199
13-32 IN predicate supports any expression   201
14-1 REORG DISCARD utility statement   211
14-2 REORG UNLOAD EXTERNAL   212
14-3 List of dynamic SQL statements   217
14-4 Create a new table with the same layout as SYSIBM.SYSTABLES   217
A-1  Schema creation   224
A-2  DDL for the stogroup, database, table space creation   226
A-3  DDL for UDT and UDF creation   227
A-4  DDL for the table creation   228
A-5  DDL for triggers
A-6  Populated tables used in the examples
A-7  DDL to clean up the examples environment
A-8  Returning SQLSTATE to a trigger from a SP
A-9  Passing a transition table from a trigger to a SP
Preface
This IBM Redbook describes the major enhancements that affect application programming when accessing DB2 data on a S/390 or zSeries platform. It covers the object-oriented extensions, such as triggers, user-defined functions, and user-defined distinct types; the usage of temporary tables and savepoints; and the numerous extensions to the SQL language that help you build powerful, reliable, and scalable applications, whether in a traditional environment or on an e-business platform. IBM DATABASE 2 Universal Database Server for z/OS and OS/390 Version 7 (or just DB2 V7 throughout this book) is currently at its eleventh release. Over the last couple of versions, a large number of enhancements were added to the product. Many of these enhancements affect application programming and the way you access your DB2 data. This book will help you understand how these programming enhancements work and provides examples of how to use them. It also provides considerations and recommendations for implementing these enhancements and for evaluating their applicability in your DB2 environments.
Thanks to the following people for their contributions to this project:

Emma Jacobs, Yvonne Lyon, Gabrielle Velez
International Technical Support Organization, San Jose Center

Sherry Guo, William Kyu, Roger Miller, San Phoenix, Kalpana Shyam, Yunfei Xie, Koko Yamaguchi
IBM Silicon Valley Laboratory

Robert Begg
IBM Toronto, Canada

Michael Parbs, Paul Tainsh
IBM Australia

Rich Conway
IBM International Technical Support Organization, Poughkeepsie Center

Peter Backlund, Martin Hubel, Gabrielle Wiorkowski
IBM Database Gold Consultants
Special notice
This publication is intended to help managers and professionals understand and evaluate some of the application related enhancements that were introduced over the last couple of years into the IBM DATABASE 2 Universal Database Server for z/OS and OS/390 products. The information in this publication is not intended as the specification of any programming interfaces that are provided by the IBM DATABASE 2 Universal Database Server for z/OS and OS/390. See the PUBLICATIONS section of the IBM Programming Announcement for the IBM DATABASE 2 Universal Database Server for z/OS and OS/390 for more information about what publications are considered to be product documentation.
IBM trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
e (logo), IBM, AIX, CICS, DB2, DB2 Connect, DB2 Universal Database, DFS, DRDA, ECKD, Enterprise Storage Server, IMS, MORE, MVS, MVS/ESA, Redbooks, Redbooks Logo, Notes, OS/390, Parallel Sysplex, Perform, QMF, RACF, RETAIN, S/390, SP, System/390, TME, WebSphere, z/OS
Comments welcome
Your comments are important to us! We want our IBM Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online "Contact us" review redbook form found at:
ibm.com/redbooks
Chapter 1.
Introduction
New DB2 versions and releases have always been a balanced mixture of system related enhancements and improvements that benefit database administrators and the application programming community. When we first started to lay out the content of this book, we wanted to include all the application programming related enhancements since DB2 Version 4, where a lot of major programming related enhancements, like OUTER JOIN and nested table expressions to name only a few, became available. Even though these are very important enhancements and very relevant to application programming and design, we soon had to give up that idea and decided to concentrate mainly on the enhancements since DB2 Version 6, and even that turned out to be over ambitious. Therefore, if a certain topic is not included in this redbook, it is most likely because it has already been covered in one of the other available redbooks. Good references are: DB2 UDB Server for OS/390 and z/OS Version 7 Presentation Guide, SG24-6121; DB2 UDB Server for OS/390 Version 6 Technical Update, SG24-6108; DB2 Server for OS/390 Version 5 Recent Enhancements - Reference Guide, SG24-5421. There is also another category of enhancements that, although application related, did not make it into this redbook. Stored procedures and Java support are good examples. They have a big impact on the way you develop applications and are, or will be, treated in separate redbooks such as: DB2 for OS/390 and z/OS Powering the World's e-business Solutions, SG24-6257; DB2 Java Stored Procedures Learning by Example, SG24-5945; Cross-Platform DB2 Stored Procedures: Building and Debugging, SG24-5485. For a complete list of recent DB2 for OS/390 related redbooks, see Related publications on page 253, or visit the Redbooks Web site at: ibm.com/redbooks. This redbook is based on DB2 for z/OS Version 7 (PUT0106) and all the examples in this book are written using a Version 7 system.
However, since a large customer base is still running DB2 V6, we mention when a feature was introduced in V7. If the version is not specifically mentioned, the feature is part of Version 6. (Some features that were introduced in Version 5 are also included in the book.)
DB2 Version 6 was a very large release and had a vast number of enhancements that have an impact on application programming and application design. With V6, DB2 for z/OS stepped into the world of object-oriented databases, introducing features like triggers, user-defined distinct types and user-defined functions. With these features you can turn your database management system from a passive database into an active database. Triggers provide automatic execution of a set of SQL statements (verifying input, executing additional SQL statements, invoking external programs written in any popular language) whenever a specified event occurs. This opens the door for new possibilities in application and database design. User-defined distinct types can help you enforce strong typing through the database management system. The distinct type reflects the use of the data that is required and/or allowed. Strong typing is especially valuable for ad-hoc access to data where users don't always understand the full semantics of the data. The number of built-in functions increased considerably in the last versions of DB2. There are now over 90 different functions that perform a wide range of string, date, time, and timestamp manipulations, data type conversions, and arithmetic calculations. In some cases even this large number of built-in functions does not meet all needs. Therefore, DB2 allows you to write your own user-defined functions that can call an external program. This extends the functionality of SQL to whatever you can code in an application program; essentially, there are no limits. User-defined functions also act as the methods for user-defined data types by providing consistent behavior and encapsulating the types. Another set of enhancements is more geared toward giving you more flexibility when designing databases and applications.
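As a minimal sketch of the distinct-type and trigger features described above (the table, column, and object names here are hypothetical illustrations, not the book's sample database):

```sql
-- A UDT that enforces strong typing for monetary values; WITH COMPARISONS
-- tells DB2 to generate comparison operators for the new type.
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2) WITH COMPARISONS;

-- An AFTER trigger that maintains summary data automatically whenever a
-- row is inserted into a hypothetical ORDERS table.
CREATE TRIGGER NEWORDER
  AFTER INSERT ON ORDERS
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE DEPT_SUMMARY
       SET TOTAL_AMOUNT = TOTAL_AMOUNT + N.AMOUNT
     WHERE DEPTNO = N.DEPTNO;
  END
```

Once the trigger exists, any application (or ad-hoc user) that inserts into ORDERS keeps DEPT_SUMMARY consistent without any application code.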
When you need a table only for the life of an application process, you don't have to create a permanent table to store this information; you can use a temporary table instead. There are two kinds of temporary tables: created temporary tables, also known as global temporary tables, and declared temporary tables. Their implementation is different from normal tables. They have reduced or no logging and also virtually no locking. The latter is not required since the data is not shared between applications. The data that is stored in the temporary table is only visible to the application process that created it. Another major enhancement that can make you change the way you have been designing applications in the past is the introduction of savepoints. Savepoints enable you to code contingency or what-if logic and can be useful for programs with sophisticated error recovery, or to undo updates made by a stored procedure or subroutine when an error is detected, and to ensure that only the work done in a stored procedure or subroutine is rolled back. Another area which has always caused a lot of debate is how and when to assign a unique identification to a relational object. Often a natural key is available and should be used to identify the relation. However, this is not always the case, and this is where a lot of discussions start. Should we use an ever-ascending or descending key? Should it be random instead? Should we put that ever-increasing number in a table and if so, how do we access it and when do we update the number? These are all questions that keep DBAs from losing their jobs. Recently DB2 has introduced two new concepts that can guarantee unique column values without having to create an index. In addition, you can eliminate the application coding that was implemented to assign unique column values for those columns. The first concept is the introduction of identity columns.
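The two features described above can be sketched as follows; the table name, column names, and savepoint name are illustrative assumptions only:

```sql
-- A declared temporary table: it exists only for this application
-- process, is qualified by SESSION, and needs virtually no locking.
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_RESULTS
  (ACCTNO  INTEGER       NOT NULL,
   BALANCE DECIMAL(11,2))
  ON COMMIT PRESERVE ROWS;

-- A savepoint used for what-if logic: work done after the savepoint
-- can be undone without rolling back the whole unit of recovery.
SAVEPOINT BEFORE_LEG2 ON ROLLBACK RETAIN CURSORS;
-- ... SQL for the second leg of the itinerary ...
ROLLBACK TO SAVEPOINT BEFORE_LEG2;
```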
Identity columns offer a new way to guarantee the uniqueness of a column and enable the value to be generated automatically inside the database management system.
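A sketch of an identity column definition, loosely modeled on the book's member-number example (the MEMBER table and its columns are assumptions):

```sql
-- DB2 generates MEMBNO itself: GENERATED ALWAYS means applications
-- cannot supply a value, so uniqueness needs no application coding.
CREATE TABLE MEMBER
  (MEMBNO INTEGER GENERATED ALWAYS AS IDENTITY
            (START WITH 1, INCREMENT BY 1, CACHE 20),
   NAME   VARCHAR(40) NOT NULL);

-- The INSERT does not mention MEMBNO; DB2 assigns the next value.
INSERT INTO MEMBER (NAME) VALUES ('SMITH');
```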
The second concept is the usage of a new data type called ROWID. There are three aspects to ROWIDs. The first and primary function is to access the LOB data from LOB columns (like audio and video) stored in the auxiliary table. (This usage of ROWID is beyond the scope of this redbook.) The second is that a ROWID can be a unique random number generated by the DBMS, which looks like a very appealing option to solve many design issues. The third is that ROWID columns can be used for a special type of access path to the data, called direct row access. Another major enhancement is scrollable cursors. The ability to scroll backwards as well as forwards has been a requirement of many screen-based applications. DB2 V7 introduces facilities not only to scroll forwards and backwards, but also to jump around and directly retrieve a row that is located at any position within the cursor result table. DB2 can also, if desired, maintain the relationship between the rows in the result set and the data in the base table. That is, the scrollable cursor function allows changes made outside the opened cursor to be reflected in the result set returned by the cursor. DB2 utilities are normally not directly the terrain of the application programmer. However, new DB2 utilities have been introduced and existing utilities have been enhanced in such a way that they can take over some of the work that was traditionally done in application programs. We provide some ideas on where these enhancements can be used and how they compare to implementing the same processes using application code: using online LOAD RESUME versus coding your own program to add rows to a table; using REORG DISCARD versus deleting rows via a program followed by a REORG to reclaim unused space and restore the cluster ratio; and comparing REORG UNLOAD EXTERNAL and the new UNLOAD utility to a home-grown program or the DSNTIAUL sample program.
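The scrollable cursor facilities described above can be sketched like this (the ACCOUNT table and the host variables :ACCT and :BAL are assumptions for illustration):

```sql
-- STATIC SCROLL materializes the result table; SENSITIVE means that
-- later FETCHes can see committed changes made to the base table.
DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT ACCTNO, BALANCE
    FROM ACCOUNT;

OPEN C1;

-- Jump directly to the fifth row from the end of the result table...
FETCH SENSITIVE ABSOLUTE -5 FROM C1 INTO :ACCT, :BAL;

-- ...then scroll backwards one row at a time.
FETCH SENSITIVE PRIOR FROM C1 INTO :ACCT, :BAL;
```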
And last but not least, we discuss and provide examples for a long list of enhancements to the SQL language that boost programmer productivity, such as CASE expressions, which allow you to code IF-THEN logic inside an SQL statement, and UNION everywhere, which enables you to code a fullselect wherever you were allowed to code a subselect before. This also includes the long-awaited union-in-view feature, which finally allows you to code a UNION clause inside a CREATE VIEW statement. With this enhancement, DB2 is delivering one of the oldest outstanding requirements, which should make a lot of people want to migrate to Version 7 sooner rather than later. With such a vast range of programming capabilities, providing not only rich functionality but also good performance and great scalability, DB2 for z/OS Version 7 is certainly capable of competing with any other DBMS in the marketplace. We hope you enjoy reading this redbook as much as we enjoyed writing it.
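Two of these SQL enhancements in sketch form; the EMP table follows the familiar DB2 sample layout, and the yearly order tables are hypothetical:

```sql
-- CASE: IF-THEN logic inside a single SQL statement.
SELECT EMPNO,
       CASE WHEN SALARY < 20000 THEN 'LOW'
            WHEN SALARY < 50000 THEN 'MEDIUM'
            ELSE 'HIGH'
       END AS SALARY_BAND
  FROM EMP;

-- Union-in-view (V7): a UNION clause inside a CREATE VIEW statement,
-- masking two split tables behind one logical view.
CREATE VIEW ALL_ORDERS AS
  SELECT ORDERNO, AMOUNT FROM ORDERS_2000
  UNION ALL
  SELECT ORDERNO, AMOUNT FROM ORDERS_2001;
```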
Part 1
Object-oriented enhancements
In this part we describe and discuss various object-oriented enhancements that can transform DB2 from a passive database manager into an active one. They allow you to move application logic that may reside in various places, platforms, and environments into one place: the database itself. These are the enhancements we discuss:
Schemas
Triggers
User-defined distinct types
User-defined functions
Built-in functions
Chapter 2.
Schemas
The concept of schemas has been around for quite some time. In this section, we discuss the schema concept, since it is now starting to be widely used in a DB2 for z/OS environment and is required for some features introduced in DB2 V6, such as triggers, user-defined functions, user-defined distinct types, and stored procedures.
Note: An authorization ID 'JOHN' has the implicit CREATEIN, ALTERIN, and DROPIN privileges for the schema named 'JOHN'.
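Other authorization IDs can be given schema privileges explicitly with the GRANT statement. A minimal sketch (the grantee name MARY is illustrative):

```sql
-- Allow user MARY to create and drop objects in the schema JOHN
GRANT CREATEIN, DROPIN ON SCHEMA JOHN TO MARY;
```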
CREATE TABLESPACE ....
GRANT ....
CREATE TABLE ....
CREATE INDEX ....
CREATE DISTINCT TYPE ....
CREATE FUNCTION ....
CREATE UNIQUE INDEX ....
CREATE INDEX ....
CREATE TRIGGER ....
GRANT ....
Important: Databases and table spaces can and should be defined through the schema processor; however, they are not part of a schema (they are not schema objects), since database names must be unique within the DB2 system and table space names must be unique within a database. All statements passed to the schema processor are considered one unit of work. If one or more statements fail with a negative SQLCODE, all remaining statements are still processed; however, when the end of the input is reached, all work is rolled back. You can only process one schema per job step execution. An example of a schema processor job can be found in DDL of the DB2 objects used in the examples, Example A-1 on page 224. Note: The CREATE SCHEMA statement cannot be embedded in a host program or executed interactively. It can only be executed in a batch job using the schema processor program DSNHSP.
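A minimal sketch of schema processor (DSNHSP) input follows; the schema name reuses the book's SC246300 convention, while the table, column, and grantee are illustrative. The statements that belong to the schema follow the CREATE SCHEMA clause as one unit:

```sql
-- Sketch of DSNHSP input: one schema definition per job step
CREATE SCHEMA AUTHORIZATION SC246300

  CREATE TABLE TBDUMMY
    (COL1 CHAR(8) NOT NULL)

  GRANT SELECT ON TABLE TBDUMMY TO PUBLIC
```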
Chapter 3.
Triggers
Triggers provide automatic execution of a set of SQL statements whenever a specified event occurs. This opens the door for new possibilities in application design. In this section we discuss triggers, and how and when to use them.
Triggers provide several improvements to the development and execution of DB2 applications and can bring a lot of benefits to your organization:
Faster application development. Because triggers are stored in the database, the actions they perform do not have to be coded in each application.
Code reusability. A trigger can be defined once and then automatically used by every application program that changes data in the table on which the trigger is defined.
System-wide enforcement of data integrity rules. No matter which application performs inserts, updates, or deletes on a table, you can be certain that the associated business rules embedded in the trigger are carried out. This is especially important with highly distributed applications, ad-hoc queries, and dynamic SQL.
Easier maintenance. If a business policy changes, only the corresponding triggers need to change, instead of multiple application programs.
Easier migration. Trigger support in DB2 makes it easier for customers to migrate from other relational database management systems that also have triggers. Triggers can also be used during migration to make some updates to the old system transparent to the applications, when not all programs have yet been converted to the new system.
Set default values based on business logic
Cross-reference other tables (enhance RI)
Maintain summary data
Initiate external actions via user-defined functions (UDFs) and stored procedures (SPs) to:
Propagate changes to an external file
Send e-mail, fax, or pager notifications
Maintain an audit trail
Schedule a batch job
Enforcement of transitional business rules: Triggers can enforce data integrity rules with far more capability than is possible with declarative (static) constraints. Triggers are most useful for defining and enforcing transitional business rules, that is, rules that involve different states of the data. In Example 3-11 on page 25, we show a trigger constraint that prevents a salary column from being increased by more than twenty percent. To enforce this rule, the value of the salary before and after the increase must be compared, something that cannot be done with a check constraint. Generation and editing of column values: Triggers can automatically generate values for newly inserted rows; that is, you can implement user-defined default values, possibly based on other values in the row or values in other tables. Similarly, column values provided in an insert or update operation can be modified or corrected before the operation occurs. An example of this can be found in Example 4-13 on page 53. That trigger generates a value for the EUROFEE column based on a conversion formula that calculates an amount in euro from an amount in pesetas (PESETAFEE). Cross-referencing other tables: You can enhance the existing referential integrity rules that are supported by DB2. For example, RI allows a foreign key to reference a primary key in another table. With the new union everywhere feature, you can change your design to split a logical table into multiple physical tables (as an alternative to partitioning). It then becomes impossible to use DB2 RI to implement a foreign key relationship, because a foreign key cannot reference multiple tables to check whether a matching primary key value exists. Here triggers can come to the rescue: you can implement a before trigger that checks the different tables to make sure the value you are inserting has a parent row. Note: Make sure you have the PTF for APAR PQ53030 (still open at the time of writing) applied on your system when trying this.
You can also implement so-called negative RI this way. Instead of checking whether a row exists in a parent table (normal RI), you can use a trigger to make sure a row or column value does not exist in another table. For example, when you want to add a new customer to the customer table (TBCUSTOMER), you might want to check (using a trigger) that the customer does not already exist in the customer archive table (TBCUSTOMER_ARCH, which contains customers that have not placed any orders in the past year). If it does, you reject the insert of the new customer and make the application copy the existing customer information from the customer archive table.
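The negative RI check just described might be sketched as follows. The table names follow the narrative; the column CUSTKEY, the trigger name, and the SQLSTATE and message text are illustrative assumptions:

```sql
-- Sketch: reject an insert when the customer already exists in the archive
-- (CUSTKEY, the trigger name, SQLSTATE and text are assumptions)
CREATE TRIGGER SC246300.TGNEGRI
  NO CASCADE BEFORE INSERT ON SC246300.TBCUSTOMER
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (EXISTS (SELECT 1 FROM SC246300.TBCUSTOMER_ARCH
                 WHERE CUSTKEY = N.CUSTKEY))
  SIGNAL SQLSTATE '75002' ('CUSTOMER EXISTS IN ARCHIVE')
```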
Maintenance of summary data: Triggers can automatically update summary data in one or more tables when a change occurs to another table. For example, triggers can ensure that every time a new order is added, rows in the TBREGION and TBSTATE tables are updated to reflect the change in the number of orders per region and state. Example 3-2 shows part of the solution to implement this. You also need to define an UPDATE and a DELETE trigger to cover all cases if you want the number of outstanding orders to be correctly reflected at the region and state level.
Example 3-2 Trigger to maintain summary data
CREATE TRIGGER SC246300.TGSUMORD
  AFTER INSERT ON SC246300.TBORDER
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE SC246300.TBREGION SET NUM_ORDERS = NUM_ORDERS + 1;
    UPDATE SC246300.TBSTATE SET NUM_ORDERS = NUM_ORDERS + 1;
  END
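The companion DELETE trigger could be sketched along the same lines; the trigger name is illustrative, and like Example 3-2 this sketch omits the qualification of which region and state row to update, which a complete solution would need:

```sql
-- Sketch: reverse the order count when an order is deleted
-- (trigger name is an assumption; WHERE qualification omitted as in Example 3-2)
CREATE TRIGGER SC246300.TGSUMDEL
  AFTER DELETE ON SC246300.TBORDER
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE SC246300.TBREGION SET NUM_ORDERS = NUM_ORDERS - 1;
    UPDATE SC246300.TBSTATE SET NUM_ORDERS = NUM_ORDERS - 1;
  END
```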
Initiate external actions: In Example 3-3, we demonstrate how a user-defined function can be used within a trigger to initiate an external action. Since a user-defined function can be written in any of the popular programming languages, using a user-defined function in a trigger gives access to a vast number of possible actions you can code. A common use is to communicate with an e-mail package and send out an e-mail to an employee to let him know that a change was made to his payroll data.
Example 3-3 Trigger to initiate an external action
CREATE TRIGGER PAYROLL1
  AFTER UPDATE ON PAYROLL
  FOR EACH STATEMENT MODE DB2SQL
  VALUES (PAYROLL_LOG (USER, 'UPDATE', CURRENT TIME, CURRENT DATE))
UPDATE
DELETE
A delete operation can be caused by a DELETE statement or as a result of enforcing a referential constraint rule of ON DELETE CASCADE.
Triggers cannot be activated by SELECT statements. The table named as the triggering table cannot be a DB2 catalog table, view, alias, synonym, or a three-part table name. A triggering operation can be the result of changes that occur due to referential constraint enforcement. For example, given two tables TBDEPARTMENT and TBEMPLOYEE, if deleting from TBDEPARTMENT causes propagated deletes (ON DELETE CASCADE) or updates (ON DELETE SET NULL) to TBEMPLOYEE because of a referential constraint, then delete or update triggers defined on TBEMPLOYEE will be activated. The triggers defined on TBEMPLOYEE run either before or after the referential constraint operation depending on their defined activation time (whether they are a before or after trigger). When you define an update trigger you can specify that it should only be activated when certain columns of the triggering table are updated. Of course, if an SQL data change operation affects a table through a view, any triggers defined on the table for that operation are activated. A triggering event is the occurrence of the triggering operation on the triggering table which causes the trigger to be activated.
The trigger action is activated before the triggering operation is processed. The trigger action is activated for each row in the set of affected rows, as the rows are accessed, but before the triggering operation is performed on each row and before any table check or referential integrity constraints that the rows may be subject to are processed. The NO CASCADE keyword is required and serves to remind you that a before trigger cannot perform update operations and, therefore, cannot cause trigger cascading. Before triggers are generally used as an extension to the constraint system. In Example 3-4, an error is passed back to the application when an INSERT is performed containing an invalid CITY (the CITY being inserted is not in the TBCITIES table). Note that this could also have been implemented using a check constraint, RI, or application logic; each alternative has different performance characteristics. More information can be found in sections 3.19, Design considerations on page 37 and 3.20, Some alternatives to a trigger on page 38.
Example 3-4 BEFORE trigger
CREATE TRIGGER SC246300.TGBEFOR7
  NO CASCADE BEFORE INSERT ON SC246300.TBCUSTOMER
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (NOT EXISTS (SELECT CITYKEY FROM SC246300.TBCITIES
                     WHERE CITYKEY = N.CITYKEY))
  SIGNAL SQLSTATE 'ERR10' ('NOT A VALID CITY')
When inserted rows satisfy the WHEN condition, you receive:
DSNT408I SQLCODE = -438, ERROR: APPLICATION RAISED ERROR WITH DIAGNOSTIC TEXT: NOT A VALID CITY
DSNT418I SQLSTATE = ERR10 SQLSTATE RETURN CODE
AFTER
The trigger is activated after the triggering operation is performed. The trigger can be viewed as a segment of application logic to:
Perform follow-on update operations
Perform external actions
This type of trigger is often referred to as an after trigger. The trigger action is activated after the triggering operation has been processed and after all table check constraints and referential constraints that the triggering operation may be subject to have been processed. After triggers can be viewed as segments of application logic that run every time a specific event occurs. After triggers see the database in the state that an application would see it following the execution of the change operation. This means they can be used to perform actions that an application might otherwise have performed such as maintaining summary data or an audit log. Example 3-6 on page 20 shows an after trigger.
Note: For a cursor-controlled UPDATE or DELETE (with a WHERE CURRENT OF clause), a statement trigger is executed once per row because only one row is affected. For online LOAD RESUME, each row that is loaded is treated as a statement. When you have a statement trigger defined on that table, it will be invoked for every row that is added to the table.
Tip: SQL processor programs, such as SPUFI and DSNTEP2, might not correctly parse SQL statements in the trigger action that are ended with semicolons. These processor programs accept multiple SQL statements, each separated with a terminator character, as input. Processor programs that use a semicolon as the SQL statement terminator can truncate a CREATE TRIGGER statement with embedded semicolons and pass only a portion of it to DB2. Therefore, you might need to change the SQL terminator character for these processor programs. For information on changing the terminator character for SPUFI and DSNTEP2, see DB2 UDB for OS/390 and z/OS Version 7 Application Programming and SQL Guide, SC26-9933. The trigger activation time determines which statements are allowed in the trigger body. See Figure 3-1 on page 22 for a list of the allowable combinations.
The above example shows how transition variables can be used in a row trigger to maintain summary data in another table. Assume that the department table has a column that records the budget for each department. Updates to the SALARY of any employee (for example, from $50,000 to $60,000) in the TBEMPLOYEE table are automatically reflected in the budget of the updated employee's department. In this case, NEW_EMP.SALARY has a value of 60000 and OLD_EMP.SALARY a value of 50000, and BUDGET is increased by 10000.
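The logic described above can be sketched as follows. The budget column, the WORKDEPT/DEPTNO join, and the trigger name are assumptions based on the narrative, not the book's actual listing:

```sql
-- Sketch: reflect salary changes in the department budget
-- (BUDGET, WORKDEPT, DEPTNO and the trigger name are assumptions)
CREATE TRIGGER SC246300.TGBUDGET
  AFTER UPDATE OF SALARY ON SC246300.TBEMPLOYEE
  REFERENCING OLD AS OLD_EMP NEW AS NEW_EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE SC246300.TBDEPARTMENT
      SET BUDGET = BUDGET + (NEW_EMP.SALARY - OLD_EMP.SALARY)
      WHERE DEPTNO = NEW_EMP.WORKDEPT;
  END
```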
Important: Transition tables are populated by DB2 before any after-row or after-statement trigger is activated. Transition tables are read-only. In Example 3-8, a trigger is used to maintain the supply of parts in the PARTS table. The trigger action condition specifies that the set of triggered SQL statements should only be executed for rows in which the value of the ON_HAND column is less than ten percent of the value of the MAX_STOCKED column. When this condition is true, the trigger action is to reorder (MAX_STOCKED - ON_HAND) items of the affected part using a UDF called ISSUE_SHIP_REQUEST.
Example 3-8 Single trigger action
CREATE TRIGGER REORDER
  AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
  REFERENCING NEW AS N_ROW
  FOR EACH ROW MODE DB2SQL
  WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
  VALUES (ISSUE_SHIP_REQUEST(N_ROW.PARTNO, N_ROW.MAX_STOCKED - N_ROW.ON_HAND))
[Figure 3-1 Allowed combinations of trigger granularity, activation time, and transition variables/tables; the matrix is not recoverable from the source (entries included NONE, NEW_TABLE, and INVALID TRIGGER)]
Figure 3-2 Allowed SQL statement matrix
BEFORE trigger: SET transition-variable
AFTER trigger: INSERT, searched UPDATE (not a cursor UPDATE), searched DELETE (not a cursor DELETE)
Both: fullselect and the VALUES statement (can be used to invoke user-defined functions), CALL procedure-name, SIGNAL SQLSTATE
Attention: If the SQLCODE is ignored by the stored procedure or the UDF and control returns to the invoking trigger, the triggering action is NOT undone. More information on how to deal with error situations in triggers can be found in Section 3.8, Error handling on page 26. UDFs cannot be invoked in a stand-alone manner; that is, they must appear in an SQL statement. A convenient method for invoking a UDF is to use a VALUES statement or a fullselect as shown in Example 3-9.
Example 3-9 Invoking a UDF within a trigger
VALUES (UDF1(NEW.COL1), UDF2(NEW.COL2))

SELECT UDF1(COL1), UDF2(COL2)
  FROM NEW_TABLE
  WHERE COL1 > COL3
In Example 3-11, we show how a trigger can be used to ensure that salary increases do not exceed 20%.
Example 3-11 Signaling SQLSTATE
CREATE TRIGGER CHK_SAL
  NO CASCADE BEFORE UPDATE OF SALARY ON SC246300.TBEMPLOYEE
  REFERENCING OLD AS OLD_EMP NEW AS NEW_EMP
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_EMP.SALARY > OLD_EMP.SALARY * 1.20)
  SIGNAL SQLSTATE '75001' ('INVALID SALARY INCREASE - EXCEEDS 20%')
The SIGNAL SQLSTATE statement is only valid in a trigger body. However, the RAISE_ERROR built-in function can also be used in SQL statements outside a trigger. When SIGNAL SQLSTATE or RAISE_ERROR is issued, the execution of the trigger is terminated, and all database changes performed as part of the triggering operation and all trigger actions are backed out. The application receives SQLCODE -438 along with the SQLSTATE (in SQLCA field SQLSTATE) and text (in SQLCA field SQLERRMC) that have been set by the trigger. For instance, if you perform an illegal update against the CHK_SAL trigger of Example 3-11, you will receive the information in the SQLCA shown in Example 3-12.
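For instance, RAISE_ERROR can appear inside a CASE expression of an ordinary query; here is a minimal sketch, in which the rule being enforced, the SQLSTATE, and the message text are illustrative:

```sql
-- Sketch: fail the query when a row violates an application rule
-- (the rule, SQLSTATE '75099' and the text are assumptions)
SELECT EMPNO,
       CASE WHEN SALARY >= 0 THEN SALARY
            ELSE RAISE_ERROR('75099', 'NEGATIVE SALARY')
       END
  FROM SC246300.TBEMPLOYEE;
```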
Example 3-12 Information returned after a SIGNAL SQLSTATE
After calling DSNTIAR:
DSNT408I SQLCODE = -438, ERROR: APPLICATION RAISED ERROR WITH DIAGNOSTIC TEXT: INVALID SALARY INCREASE - EXCEEDS 20%
DSNT418I SQLSTATE = 75001 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXRTYP SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = 1 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'00000001' X'00000000' X'00000000' X'FFFFFFFF' X'00000000' X'00000000' SQL DIAGNOSTIC INFORMATION

Raw SQLCA format:
*** START OF UNFORMATTED SQLCA ***
SQLCAID  X(8)  SQLCA
SQLCABC  I     000000136
SQLCODE  I     -000000438
SQLERRML SI    +000000037
SQLERRMC X(70) INVALID SALARY INCREASE - EXCEEDS 20%
SQLERRP  X(8)  DSNXRTYP
SQLERRD1 I     +000000001
SQLERRD2 I     +000000000
SQLERRD3 I     +000000000
SQLERRD4 I     -000000001
SQLERRD5 I     +000000000
SQLERRD6 I     +000000000
SQLWARN0 X(1)
SQLWARN1 X(1)
SQLWARN2 X(1)
SQLWARN3 X(1)
SQLWARN4 X(1)
SQLWARN5 X(1)
SQLWARN6 X(1)
SQLWARN7 X(1)
SQLWARN8 X(1)
SQLWARN9 X(1)
SQLWARNA X(1)
SQLSTATE X(5)  75001
*** END OF UNFORMATTED SQLCA ***
SQLERRD5 I     -974970879
SQLERRD6 I     +012714050
SQLWARN0 X(1)
SQLWARN1 X(1)
SQLWARN2 X(1)
SQLWARN3 X(1)
SQLWARN4 X(1)
SQLWARN5 X(1)
SQLWARN6 X(1)
SQLWARN7 X(1)
SQLWARN8 X(1)
SQLWARN9 X(1)
SQLWARNA X(1)
SQLSTATE X(5)  09000
*** END OF UNFORMATTED SQLCA ***
If a stored procedure (SP) or UDF is invoked by a trigger and the SP/UDF encounters an error, it can choose to ignore the error and continue, or it can return an error to the trigger. Attention: If the SQLCODE is ignored by the SP or the UDF and control returns to the invoking trigger, the triggering action is NOT undone. To avoid data inconsistencies, it is best (easiest) for the SP/UDF to issue a ROLLBACK. This places the SP/UDF in a MUST_ROLLBACK state and causes the triggering action to be rolled back as well. Another way is to return an SQLSTATE to the trigger, which translates into an SQLCODE -723. External UDFs or stored procedures can be written to perform exception checking and to return an error if an exception is detected. When an SP/UDF is invoked from a trigger and returns an SQLSTATE, this SQLSTATE is translated into a negative SQLCODE by DB2. The trigger execution terminates, and all database changes performed as part of the triggering operation are backed out. Remember that a stored procedure cannot return output parameters (containing, for instance, the SQLCODE) when invoked from a trigger. To pass back an SQLSTATE to the invoking trigger, the PARAMETER STYLE DB2SQL option has to be used on the CREATE PROCEDURE or CREATE FUNCTION statement. This indicates to DB2 that additional information can be passed back and forth between the caller and the procedure or function. This information includes the SQLSTATE and a diagnostic string. For more details on how to use PARAMETER STYLE DB2SQL, please refer to the DB2 UDB for OS/390 and z/OS Version 7 Application Programming and SQL Guide, SC26-9933. Example 3-14 shows how the error is returned to the triggering statement. A full listing and additional information is available in the additional material that can be downloaded from the Internet (see Appendix C, Additional material on page 251), as well as in Returning SQLSTATE from a stored procedure to a trigger on page 244.
Both the SQLSTATE 38601 and diagnostic string SP HAD SQL ERROR are set by the stored procedure after it detects its initial error.
Example 3-14 Passing SQLSTATE back to a trigger Formatted by DSNTIAR DSNT408I SQLCODE = -723, ERROR: AN ERROR OCCURRED IN A TRIGGERED SQL STATEMENT IN TRIGGER SC246300.TSD0BMS3, SECTION NUMBER 2.
INFORMATION RETURNED: SQLCODE -443, SQLSTATE 38601, AND MESSAGE TOKENS SD0BMS3,SD0BMS3,SP HAD SQL ERROR,
DSNT418I SQLSTATE = 09000 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXRRTN SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = -891 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'FFFFFC85' X'00000000' X'00000000' X'FFFFFFFF' X'00000000' X'00000000' SQL DIAGNOSTIC INFORMATION

Or directly from the SQLCA:
*** START OF UNFORMATTED SQLCA ***
SQLCAID  X(8)  SQLCA
SQLCABC  I     000000136
SQLCODE  I     -000000723
SQLERRML SI    +000000064
SQLERRMC X(70) SC246300.TSD0BMS3 2 -443 38601 SD0BMS3,SD0BMS3,SP HAD SQL ERROR,
SQLERRP  X(8)  DSNXRRTN
SQLERRD1 I     -000000891
SQLERRD2 I     +000000000
SQLERRD3 I     +000000000
SQLERRD4 I     -000000001
SQLERRD5 I     +000000000
SQLERRD6 I     +000000000
SQLWARN0 X(1)
SQLWARN1 X(1)
SQLWARN2 X(1)
SQLWARN3 X(1)
SQLWARN4 X(1)
SQLWARN5 X(1)
SQLWARN6 X(1)
SQLWARN7 X(1)
SQLWARN8 X(1)
SQLWARN9 X(1)
SQLWARNA X(1)
SQLSTATE X(5)  09000
*** END OF UNFORMATTED SQLCA ***
If a trigger invokes a stored procedure or external UDF and that procedure or function does something that puts the thread into a must-rollback state, no further SQL is allowed. SQLCODE -751 is returned to the trigger which causes the trigger to terminate. All database changes performed as part of the triggering operation are backed out and control is returned to the application. All subsequent SQL statements receive an SQLCODE -919.
of a single delete, insert or update operation. In addition, triggers can invoke external UDFs and stored procedures, which in turn can activate other triggers, UDFs and stored procedures. The cascade path can therefore involve a combination of triggers, UDFs and stored procedures.
[Figure: two trigger cascading examples. In the first, an application updates TABLE1; TRIGGER1 inserts into TABLE2, which activates TRIGGER2, which updates TABLE3. In the second, an application calls SP1, which inserts into TABLE1; TRIGGER1 then invokes UDF1 through a VALUES statement, and UDF1 updates TABLE2.]
The allowed run-time depth level of a trigger, UDF or stored procedure is 16. If a trigger, UDF or stored procedure at nesting level 17 is activated, SQLCODE -724 is returned to the application. None of the database changes made as part of the triggering operation are applied to the database. This provides protection against endless loop situations that can be created with triggers. In Example 3-15, you can see the error message you receive in your application program if you attempt to go beyond the 16 nesting levels permitted.
Example 3-15 Cascading error message DSNT408I SQLCODE = -724, ERROR: THE ACTIVATION OF THE TRIGGER OBJECT objectname WOULD EXCEED THE MAXIMUM LEVEL OF INDIRECT SQL CASCADING
Tip: Remember before triggers cannot cause trigger cascading because INSERT, UPDATE, and DELETE statements are not allowed in a before trigger.
When triggers are defined using the CREATE TRIGGER statement, their creation time is registered by a timestamp in the DB2 catalog table SYSIBM.SYSTRIGGERS. The value of this timestamp is subsequently used to order the activation of triggers when there is more than one trigger that should be run at the same time. Existing triggers are activated before new triggers, so that new triggers can be used as incremental additions to the logic that affects the database. For example, if a triggered SQL statement of trigger T1 inserts a new row into a table T, a triggered SQL statement of trigger T2 that is run after T1 can be used to update the new row in table T with specific values. This ordering scheme means that if you drop the first trigger created and re-create it, it will become the last trigger to be activated by DB2. Tip: If you want a trigger to remain the first one activated, you must drop all the triggers defined on the table and recreate them in the order you want them executed.
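To check the order in which DB2 will activate the triggers on a table, you can list them by creation timestamp. A sketch, assuming the SYSIBM.SYSTRIGGERS columns SCHEMA, NAME, TBNAME, and CREATEDTS (the table name in the predicate is illustrative):

```sql
-- Sketch: triggers on TBORDER in activation order
-- (column names are assumptions; verify against your catalog level)
SELECT SCHEMA, NAME, CREATEDTS
  FROM SYSIBM.SYSTRIGGERS
  WHERE TBNAME = 'TBORDER'
  ORDER BY CREATEDTS;
```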
CREATE PROCEDURE SPTRTT
  (IN TABLE LIKE SC246300.TBEMPLOYEE AS LOCATOR)
  LANGUAGE COBOL
  EXTERNAL NAME SPTRTT
  COLLID BARTCOB
  PROGRAM TYPE MAIN
  NO WLM ENVIRONMENT
  PARAMETER STYLE DB2SQL #

CREATE TRIGGER SC246300.TRTTTR
  AFTER UPDATE OF SALARY ON SC246300.TBEMPLOYEE
  REFERENCING NEW_TABLE AS NTAB
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    CALL SPTRTT (TABLE NTAB) ;
  END #
The SP SPTRTT is defined with a single table locator argument. The keyword LIKE followed by SC246300.TBEMPLOYEE specifies that the table represented by the table locator has the same column names and data types as table SC246300.TBEMPLOYEE. To access a transition table in an external UDF or stored procedure, you need to:
1. Declare a parameter to receive the table locator
2. Declare a table locator host variable
3. Assign the parameter value to the table locator host variable
4. Use the table locator host variable to reference the transition table
In Example 3-17, the SQL syntax used to declare the table locator host variable TRIG-TBL-ID is transformed by the precompiler to a COBOL host language statement. The keyword LIKE followed by table-name specifies that the table represented by the table locator host variable TRIG-TBL-ID has the same column names and data types as table SC246300.TBEMPLOYEE. A full listing is available in Passing a transition table from a trigger to a SP on page 246 and in the additional material downloadable from the Internet. See Appendix C, Additional material on page 251 for instructions. Using a transition table is an interesting technique. This way you call the stored procedure only once using a statement trigger, instead of using a row trigger that would call the stored procedure for every row that is updated. Passing a transition table to a UDF or SP allows you to do things that you cannot do with row triggers like calculations based on the entire set of rows that were changed by the triggering action.
Example 3-17 Sample COBOL program using a SP and table locator
IDENTIFICATION DIVISION.
PROGRAM-ID. "SPTRTT".
DATA DIVISION.
WORKING-STORAGE SECTION.
.....
* **************************************************
* 2. DECLARE TABLE LOCATOR HOST VARIABLE TRIG-TBL-ID
* **************************************************
01 TRIG-TBL-ID SQL TYPE IS
   TABLE LIKE SC246300.TBEMPLOYEE AS LOCATOR.
.....
LINKAGE SECTION.
* ****************************************
* 1. DECLARE TABLOC AS LARGE INTEGER PARM
* ****************************************
01 TABLOC PIC S9(9) USAGE BINARY.
01 INDTABLOC PIC S9(4) COMP.
.....
PROCEDURE DIVISION USING TABLOC, INDTABLOC,
    P-SQLSTATE, P-PROC, P-SPEC, P-DIAG.
* *********************************************
* 4. DECLARE CURSOR USING THE TRANSITION TABLE
* *********************************************
    EXEC SQL
      DECLARE C1 CURSOR FOR
        SELECT EMPNO, FIRSTNME
        FROM TABLE ( :TRIG-TBL-ID LIKE SC246300.TBEMPLOYEE )
    END-EXEC.
* ***************************************************************
* 3. COPY TABLE LOCATOR INPUT PARM TO THE TABLE LOCATOR HOST VAR
* ***************************************************************
    MOVE TABLOC TO TRIG-TBL-ID.
.....
Start processing the transition table
Note: Only a package is created for the trigger and no associated plan is created. Trigger packages are implicitly loaded at execution time.
Tip: We recommend that you treat trigger packages in the same way as standard packages in that you REBIND them when you REBIND other types of package, for example, when there are significant changes to the statistics. This ensures that access paths are based on accurate information.
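A trigger package is rebound with the REBIND TRIGGER PACKAGE subcommand; the collection and trigger names below are illustrative:

```sql
-- DSN subcommand, not SQL; names are illustrative
REBIND TRIGGER PACKAGE(SC246300.TGSUMORD)
```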
Example 3-19 Information returned when trigger package is invalid
DSNT408I SQLCODE = -723, ERROR: AN ERROR OCCURRED IN A TRIGGERED SQL STATEMENT IN TRIGGER DB28710.TR5EMP, SECTION NUMBER 1. INFORMATION RETURNED: SQLCODE -904, SQLSTATE 57011, AND MESSAGE TOKENS 00E30305,00000801,DB28710.TR5EMP.16D1060E1
DSNT418I SQLSTATE = 09000 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXEAAL SQL PROCEDURE DETECTING ERROR
DSNT416I SQLERRD = -150 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION
DSNT416I SQLERRD = X'FFFFFF6A' X'00000000' X'00000000' X'FFFFFFFF' X'00000000' X'00000000' SQL DIAGNOSTIC INFORMATION
Note: A user-defined function cannot be dropped if it is referenced within a triggered SQL statement.
Tip: If a triggering table is dropped, its triggers are also dropped. If the table is recreated, and you want the triggers back, then they must be recreated.
Tip: We recommend that you use COMMENT ON TRIGGER to comment the function implemented by the trigger.
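For instance (the trigger name and comment text are illustrative):

```sql
-- Document what the trigger does; names and text are illustrative
COMMENT ON TRIGGER SC246300.TGSUMORD IS
  'MAINTAINS NUM_ORDERS IN TBREGION AND TBSTATE ON INSERT TO TBORDER';
```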
A violation of any constraint or WITH CHECK OPTION results in an error, and all changes made as a result of the original S1 (so far) are rolled back. If the SAR is empty, this step is skipped.
4. Apply the SAR to the target table. The actual DELETE, INSERT, or UPDATE is applied using the SAR to the target table. An error may occur when applying the SAR (such as attempting to insert a row with a duplicate key where a unique index exists), in which case all changes made as a result of the original SQL statement S1 (so far) are rolled back.
5. Process after triggers. All after triggers activated by S1 are processed in ascending order of creation. After-statement triggers are activated exactly once, even if the SAR is empty. After-row triggers are activated once for each row in the SAR. An error may occur during the processing of a trigger action, in which case all changes made as a result of the original S1 (so far) are rolled back.
The trigger action of an after trigger may include triggered SQL statements that are DELETE, INSERT, or UPDATE statements. Each such statement is considered a cascaded SQL statement because it starts a cascaded level of trigger processing. This can be thought of as assigning the triggered SQL statement as a new S1 and performing all of the steps described here recursively. Once all triggered SQL statements from all after triggers activated by each S1 have been processed to completion, the processing of the original S1 is complete.
[Figure: trigger execution flow; an error or constraint violation at any step results in a ROLLBACK of all changes made by the original statement]
Note: APAR PQ34506 provides an important performance improvement for triggers with a WHERE clause and a subselect. A WHERE clause in the subselect can now be evaluated as a stage 1 predicate. We recommend that you prototype your physical design first if you are considering using triggers on tables that are heavily updated, or triggers that fire SQL statements processing significant quantities of data. You can then evaluate their cost relative to the cost of equivalent functionality embedded in your applications. Be cautious when using triggers to create summary tables from tables that are heavily updated; you may end up creating a locking bottleneck on the summary table rows. When you begin physical design, you may find that you need several triggers defined on a single table. To avoid the overhead of multiple triggers, you can write a stored procedure to do all the triggered processing logic. The body of the trigger could then simply consist of a CALL stored-procedure-name. When porting applications from other RDBMS systems, don't forget that there may be syntax incompatibilities with these other platforms. Tip: Within DB2's resource limit facility, the execution of a trigger is counted as part of the triggering SQL statement. Triggers can be very helpful in a number of areas, but can also make the design more complex. You can no longer just rely on looking at the programs to find out what is happening to the data; you also have to look inside the DBMS. Triggers should be added to the data model with great care, especially when you get into cascading situations. People doing program design should be aware of existing triggers. Otherwise you might end up doing the same update twice, once in the application program and again in the trigger. Important: You should not use triggers just for the sake of using them.
First see whether what you are trying to implement can be done with a check constraint; if not, try referential integrity; only then turn to a trigger. Do not get trigger happy!
Referential integrity
Triggers are implemented through the Relational Data System (RDS) component of DB2 through the use of packages. Referential integrity is enforced by the Data Manager, which has a shorter path length and thus a performance advantage. Referential integrity is also enforced when it is created and can be checked through existing utilities; triggers cannot.

User-defined defaults
If the default value can change over time, a trigger may be a better way to implement it, since all that is required is dropping and re-creating the trigger with the new value. Making such a change to a column defined with a default would involve unloading the table, dropping and re-creating it with the new default value, and reloading it (this may change in a future version of DB2). As you can see, the trigger is much less disruptive, but it takes additional CPU resources. If the default value does not change, use user-defined defaults instead of triggers to achieve better performance.

Data replication and propagation
Although triggers can be used for simple propagation (for example, to create an audit trail), they are not intended to be used as an alternative to, or a replacement for, Data Propagator.

Table check constraints
Before triggers are generally used as an extension to the constraint system. There are trade-offs between when to use a trigger versus a table check constraint. A good rule of thumb is that if the values of the constraint are static (no new values added, or values not changed very often), it is better to enforce the constraint with a table check constraint. If the values of the constraint are dynamic (often changed, many new values added, and so on), a BEFORE trigger is the better choice to enforce the constraint. 
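The user-defined defaults trade-off described above can be sketched as follows (the table and column names here are hypothetical, not part of the book's sample schema):

```sql
-- Default supplied by a before trigger: to change the default
-- value, just drop and re-create the trigger with a new constant.
CREATE TRIGGER SET_FEE_DEFAULT
  NO CASCADE BEFORE INSERT ON T1
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.FEE IS NULL)
  SET N.FEE = 10.00;
```

The equivalent column definition, FEE DECIMAL(9,2) WITH DEFAULT 10.00, performs better but requires unloading, dropping, re-creating, and reloading the table whenever the default value has to change.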
For example (see Example 3-21), if you wanted to specify a constraint to validate a column such as sex, a check constraint would be the better choice, since there is a finite number of sex codes and they are not changed very often. However, if you wanted to specify a constraint to validate an item number or store number (see Example 3-22), then a BEFORE trigger could be a better choice to implement the constraint, since new items are constantly added and deleted, and stores may be opened or closed frequently, depending on the volatility or growth of the business. In this case RI could also be used and may be a better solution. Constraints are good for declarative comparisons; they are enforced when created and through existing utilities.

In Example 3-21, we demonstrate equivalent constraints; one is coded as a check constraint and the other as a trigger. Let's assume that the values of L_ITEM_NUMBER are constantly changing. That is, new items are often added and old items removed. In order to make these changes to the check constraint, you would have to drop the check constraint and alter the table to re-add it. This puts the table in check pending status (with CURRENT RULES = DB2) until the CHECK utility completes, and the table is unavailable to the application, causing an outage. If high online availability is important, and you can afford the extra cost of processing a trigger, then the trigger is the better choice.
Example 3-21 Check constraint is better than a trigger

CREATE TRIGGER SC246300.SEXCNST
  NO CASCADE BEFORE INSERT ON SC246300.TBEMPLOYEE
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN ( N.SEX NOT IN ('M','F'))
  SIGNAL SQLSTATE 'ERRSX' ('SEX MUST BE EITHER M OR F')

CREATE TRIGGER SC246300.SEXCNST
  NO CASCADE BEFORE UPDATE ON SC246300.TBEMPLOYEE
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN ( N.SEX NOT IN ('M','F'))
  SIGNAL SQLSTATE 'ERRSX' ('SEX MUST BE EITHER M OR F')

or

ALTER TABLE SC246300.TBEMPLOYEE
  ADD CHECK (SEX IN ('M','F'))
Example 3-22 Trigger is better than a check constraint

CREATE TRIGGER SC246300.ITEMNMBR
  NO CASCADE BEFORE INSERT ON SC246300.TBLINEITEM
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN ( N.L_ITEM_NUMBER NOT IN (1, 5, 6, ..... 9996,9998,10000) )
  SIGNAL SQLSTATE 'ERR30' ('ITEM NUMBER DOES NOT EXIST')

CREATE TRIGGER SC246300.ITEMNMBR
  NO CASCADE BEFORE UPDATE ON SC246300.TBLINEITEM
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN ( N.L_ITEM_NUMBER NOT IN (1, 5, 6, ..... 9996,9998,10000) )
  SIGNAL SQLSTATE 'ERR30' ('ITEM NUMBER DOES NOT EXIST')

or

ALTER TABLE SC246300.TBLINEITEM
  ADD CHECK (L_ITEM_NUMBER IN (1, 5, 6, ..... 9996,9998,10000))
In Example 3-23, we show another way that the trigger in Example 3-22 can be coded. This example is more flexible, since there is no need to update the trigger with new values; you merely insert the new values into a table called TBITEMS. This could also be accomplished more efficiently with RI, because a foreign key causes less overhead and guarantees consistency when using utilities (the trigger is only enforced by an online LOAD RESUME utility). However, a foreign key must reference a unique key, and a trigger does not have that requirement. The trigger can also return a customized error message.
Example 3-23 Alternative trigger

CREATE TRIGGER SC246300.ITEMNMB2
  NO CASCADE BEFORE INSERT ON SC246300.TBLINEITEM
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN ( N.L_ITEM_NUMBER NOT IN
         ( SELECT ITEM_NUMBER FROM SC246300.TBITEMS
            WHERE N.L_ITEM_NUMBER = ITEM_NUMBER ) )
  SIGNAL SQLSTATE 'ERR30' ('ITEM NUMBER DOES NOT EXIST')
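The RI alternative mentioned for Example 3-23 could be sketched as follows, assuming a unique key exists on ITEM_NUMBER in SC246300.TBITEMS (a foreign key must reference a unique key):

```sql
-- Cheaper to enforce than the ITEMNMB2 trigger and also checked
-- by the utilities, but without a customized error message.
ALTER TABLE SC246300.TBLINEITEM
  ADD FOREIGN KEY (L_ITEM_NUMBER)
      REFERENCES SC246300.TBITEMS (ITEM_NUMBER);
```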
In Example 3-25, we provide a query that can be used to identify all the triggers defined for all the tables in a particular database and the order in which the triggers are executed.
Example 3-25 Identify all triggers for a database

SELECT TBOWNER
     , TBNAME
     , SCHEMA
     , T.NAME
     , CASE WHEN TRIGTIME = 'B' THEN 'BEFORE'
            WHEN TRIGTIME = 'A' THEN 'AFTER'
            ELSE ' '
       END AS TIME
     , CASE WHEN TRIGEVENT = 'I' THEN 'INSERT OF'
            WHEN TRIGEVENT = 'U' THEN 'UPDATE OF'
            WHEN TRIGEVENT = 'D' THEN 'DELETE OF'
            ELSE ' '
       END AS EVENT
     , CASE WHEN GRANULARITY = 'R' THEN 'ROW'
            WHEN GRANULARITY = 'S' THEN 'STATEMENT'
            ELSE ' '
       END AS GR
     , REPLACE(TEXT,' ','') AS TRIGGER -- TEXT IS 3460 BYTES WIDE AND HAS A LOT OF BLANKS
  FROM SYSIBM.SYSTRIGGERS T, SYSIBM.SYSDATABASE D
 WHERE D.NAME = 'DB246300'
   AND T.DBID = D.DBID
 ORDER BY TBOWNER, TBNAME, TRIGTIME DESC, T.CREATEDTS, SEQNO;
Chapter 4. User-defined distinct types (UDT)

4.1 Introduction
A distinct type is based (sourced) on an existing built-in data type. Once a distinct type is defined, column definitions can reference that type when tables are created or altered. A UDT is a schema object. If a distinct type is referenced without a schema name, the distinct type is resolved by searching the schemas in the CURRENT PATH.

Each column in a table has a specific data type which determines the column's data representation and the operations that are allowed on that column. DB2 supports a number of built-in data types, for example, INTEGER and CHARACTER.

In building a database, you might decide to use one of the built-in data types in a specialized way; for example, you might use the INTEGER data type to represent ages, or the DECIMAL(8,2) data type to represent amounts of money. When you do this, you might have certain rules in mind about the kinds of operations that make sense on your data. For example, it makes sense to add or subtract two amounts of money, but it might not make sense to multiply two amounts of money, and it almost certainly makes no sense to add or compare an age to an amount of money.

UDTs provide a way for you to declare such specialized usages of data types and the rules that go with them. DB2 enforces the rules, by performing only the kinds of computations and comparisons that you have declared to be reasonable for your data. You have to define the operations that are allowed on the UDT. In other words, DB2 guarantees the type-safety of your queries.
CREATE DISTINCT TYPE SC246300.PESETA AS DECIMAL(18,0)
  WITH COMPARISONS ;
CREATE DISTINCT TYPE SC246300.EURO AS DECIMAL(17,2)
  WITH COMPARISONS ;
CREATE DISTINCT TYPE SC246300.CUSTOMER AS CHAR(11)
  WITH COMPARISONS ;
An instance of a distinct type is considered comparable only with another instance of the same distinct type. The WITH COMPARISONS clause serves as a reminder that instances of the new distinct type can be compared with each other, using any of the comparison operators. (For a list of comparison operators allowed on UDTs, see Figure 4-1 on page 49.) This clause is required if the source data type is not a large object data type. If the source data type is BLOB, CLOB, or DBCLOB, the phrase is tolerated with a warning message (SQLCODE +599, SQLSTATE 01596), even though comparisons are not supported for these source data types.

Note: Do not specify WITH COMPARISONS for source data types that are BLOBs, CLOBs, or DBCLOBs. The WITH COMPARISONS clause is required for all other source data types.
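For a large object source type, the clause is simply omitted; for instance (a hypothetical type, not part of the sample schema):

```sql
-- No WITH COMPARISONS: comparison operators are not supported
-- for distinct types sourced on BLOB, CLOB, or DBCLOB.
CREATE DISTINCT TYPE SC246300.PHOTO AS BLOB(1M);
```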
-- Function EURO(DECIMAL) returns a EURO type
CREATE FUNCTION SC246300.EURO(DECIMAL(17,2))
  RETURNS SC246300.EURO
  SOURCE SYSIBM.DECIMAL(DECIMAL(17,2)) ;

-- Function DECIMAL(EURO) returns a DECIMAL type
CREATE FUNCTION SC246300.DECIMAL(EURO)
  RETURNS DECIMAL(17,2)
  SOURCE SYSIBM.DECIMAL(DECIMAL(17,2)) ;
-- Function CUSTOMER(CHAR) returns a CUSTOMER type
CREATE FUNCTION SC246300.CUSTOMER(CHAR(11))
  RETURNS SC246300.CUSTOMER
  SOURCE SYSIBM.CHAR(CHAR(11)) ;

-- Function CHAR(CUSTOMER) returns a CHAR type
CREATE FUNCTION SC246300.CHAR(CUSTOMER)
  RETURNS CHAR(11)
  SOURCE SYSIBM.CHAR(CHAR(11)) ;
CAST functions are also used to convert a data type to use a different length, precision or scale. This is explained in more detail in Section 4.5, Using CAST functions on page 48.
CREATE TABLE SC246300.TBCONTRACT
  ( SELLER      CHAR(6)        NOT NULL
  , BUYER       CUSTOMER       NOT NULL
  , RECNO       CHAR(15)       NOT NULL
  , PESETAFEE   SC246300.PESETA
  , PESETACOMM  PESETA
  , EUROFEE     SC246300.EURO
  , EUROCOMM    EURO
  , CONTRDATE   DATE
  , CLAUSE      VARCHAR(500)   NOT NULL WITH DEFAULT
  , FOREIGN KEY (BUYER) REFERENCES SC246300.TBCUSTOMER
  , FOREIGN KEY (SELLER) REFERENCES SC246300.TBEMPLOYEE )
  IN DB246300.TS246308
  WITH RESTRICT ON DROP;
In order to be allowed to reference a distinct type in a DDL statement, you need to have the proper authorizations.
The GRANT USAGE ON DISTINCT TYPE statement is used to grant users the privilege to:
- Use a UDT as a column data type (that is, in a CREATE or ALTER TABLE statement)
- Use a UDT as a parameter of a stored procedure or user-defined function (that is, in a CREATE PROCEDURE or CREATE FUNCTION statement)

The GRANT EXECUTE ON statement allows users to use CAST functions on a UDT. Both the USAGE and EXECUTE privileges can be revoked. In Example 4-4, we show a few typical grants.
Example 4-4 GRANT USAGE / EXECUTE ON DISTINCT TYPE

GRANT USAGE ON DISTINCT TYPE SC246300.EURO TO PUBLIC#
GRANT EXECUTE ON FUNCTION SC246300.EURO(DECIMAL) TO PUBLIC#
GRANT EXECUTE ON FUNCTION SC246300.DECIMAL(EURO) TO PUBLIC#
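Both privileges can be taken away again with REVOKE; for example (the grantee name is hypothetical):

```sql
REVOKE USAGE ON DISTINCT TYPE SC246300.EURO FROM USER1;
REVOKE EXECUTE ON FUNCTION SC246300.EURO(DECIMAL) FROM USER1;
```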
In Example 4-5, we show the use of the DROP and COMMENT ON statements for distinct types. The DATA keyword can be used as a synonym for DISTINCT. The RESTRICT clause must be specified when dropping a distinct type. This clause restricts (prevents) dropping a distinct type if any of the following dependencies exist:
- A column of a table is defined as this distinct type
- A parameter or return value of a user-defined function or stored procedure is defined as this distinct type
- This distinct type's CAST functions are used in:
  - A view definition
  - A trigger definition
  - A check constraint on CREATE or ALTER TABLE
  - A default clause on CREATE or ALTER TABLE
If the distinct type can be dropped, DB2 also drops the CAST functions that were generated for the distinct type. However, if you have created additional functions to support the distinct type, you have to drop them first, before dropping the UDT. Use COMMENT ON to document what the distinct type is to be used for.
Example 4-5 DROP and COMMENT ON for UDTs

SET CURRENT PATH = SC246300;
COMMENT ON {DISTINCT/DATA} TYPE EURO IS string-constant;
DROP {DISTINCT/DATA} TYPE EURO RESTRICT;
Because you cannot compare data of type EURO (the EUROFEE column is defined as a EURO distinct type) with data of the source type of EURO (that is, DECIMAL) directly, you must use the CAST function to cast data from DECIMAL to EURO. You can also use the DECIMAL function to cast from EURO to DECIMAL. Either way you decide to cast, from or to the UDT, you can use:
- The function name notation, data-type(argument), or
- The cast notation, CAST(argument AS data-type)

An example of both is shown in Example 4-7. In fact, the EURO CAST function can be invoked on any data type that can be promoted to the DECIMAL data type by the rules of data type promotion. More details on data type promotion can be found in the DB2 UDB for OS/390 and z/OS Version 7 SQL Reference, SC26-9944.

Example 4-7 Two casting methods
SET CURRENT PATH = SC246300;

SELECT RECNO, BUYER, SELLER
  FROM SC246300.TBCONTRACT
 WHERE EUROFEE > EURO(100000.00) ;           -- Function name notation

SELECT RECNO, BUYER, SELLER
  FROM SC246300.TBCONTRACT
 WHERE EUROFEE > CAST(100000.00 AS EURO) ;   -- CAST notation
Note: The LIKE and NOT LIKE comparison operators are not supported for UDTs.
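Because LIKE is not supported on a UDT, a workaround is to cast the column back to its source type first, using the generated CAST function (shown here with the CUSTOMER type of the sample schema; the predicate value is just an illustration):

```sql
-- BUYER is a CUSTOMER UDT, so BUYER LIKE 'AN%' is not allowed;
-- casting BUYER back to its CHAR source type makes LIKE valid.
SELECT RECNO, BUYER, SELLER
  FROM SC246300.TBCONTRACT
 WHERE CHAR(BUYER) LIKE 'AN%';
```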
After creating a distinct type, you can specify that the distinct type inherits some or all of the functions that operate on its source type. This is done by creating new functions, called sourced functions, that operate on the distinct type and duplicate the semantics of built-in functions that operate on the source type. For example, you might specify that your distinct type WEIGHT inherits the arithmetic operators + and -, and the column functions SUM and AVG, from its source type FLOAT. By selectively inheriting the semantics of the source type, you can make sure that programs do not perform operations that make no sense, such as multiplying two weights, even though the underlying source type supports multiplication.

Table SC246300.TBCONTRACT has columns EUROFEE and EUROCOMM defined with distinct type EURO, and columns PESETAFEE and PESETACOMM defined with distinct type PESETA. The first query shown in Example 4-8 is invalid because there is no "+" function defined on distinct types EURO or PESETA. However, the "+" function can be defined for EURO and PESETA by sourcing it on the built-in "+" function for DECIMAL, as shown in the CREATE statement.
Example 4-8 Using sourced functions

SET CURRENT PATH = SC246300;

SELECT BUYER, SELLER, RECNO,
       EUROFEE + EUROCOMM, PESETAFEE + PESETACOMM
  FROM SC246300.TBCONTRACT
 WHERE SELLER LIKE 'A%'
-- This is an invalid query because the sourced
-- function + has not been defined on UDT EURO.

CREATE FUNCTION SC246300.+ (EURO,EURO) RETURNS EURO
  SOURCE SYSIBM.+ (DECIMAL(17,2),DECIMAL(17,2))

SELECT BUYER, SELLER, RECNO,
       EUROFEE + EUROCOMM, PESETAFEE + PESETACOMM
  FROM SC246300.TBCONTRACT
 WHERE SELLER LIKE 'A%'
-- This is still an invalid query because the sourced
-- function + has not been defined on UDT PESETA.

CREATE FUNCTION SC246300.+ (PESETA,PESETA) RETURNS PESETA
  SOURCE SYSIBM.+ (DECIMAL(18,0),DECIMAL(18,0));

SELECT BUYER, SELLER, RECNO,
       EUROFEE + EUROCOMM, PESETAFEE + PESETACOMM
  FROM SC246300.TBCONTRACT
 WHERE SELLER LIKE 'A%'
-- This is a valid query now that function + has been
-- defined on UDTs EURO and PESETA.
Example 4-9 uses the SUM and AVG sourced column functions.

Example 4-9 Defining sourced column functions on UDTs

SET CURRENT PATH = SC246300;
SELECT SELLER, SUM(PESETAFEE), AVG(PESETAFEE)
  FROM SC246300.TBCONTRACT
 GROUP BY SELLER ;
-- This is an invalid query because the sourced column
-- function SUM has not been defined on UDT PESETA.

CREATE FUNCTION SC246300.SUM(PESETA) RETURNS PESETA
  SOURCE SYSIBM.SUM(DECIMAL(18,0)) ;

SELECT SELLER, SUM(PESETAFEE), AVG(PESETAFEE)
  FROM SC246300.TBCONTRACT
 GROUP BY SELLER ;
-- This is still an invalid query because the sourced column
-- function AVG has not been defined on UDT PESETA.

CREATE FUNCTION SC246300.AVG(PESETA) RETURNS PESETA
  SOURCE SYSIBM.AVG(DECIMAL(18,0)) ;

SELECT SELLER, SUM(PESETAFEE), AVG(PESETAFEE)
  FROM SC246300.TBCONTRACT
 GROUP BY SELLER ;
-- This is now a valid query.
In order to directly compare or perform arithmetic operations with columns of different currency types, it is necessary to first cast the column(s) to a common data type. In Example 4-11, we show how the WHERE clause of the query in Example 4-10 can be re-coded in order to be able to compare pesetas to euros. The example converts EUROFEE to its source data type (DECIMAL) using the automatically generated CAST function DECIMAL, then multiplies it by 166 (the current monetary conversion factor from euros to pesetas); finally, this result is cast to pesetas with the PESETA function and compared with the PESETAFEE column, which is a PESETA distinct type column.
Example 4-11 Comparing pesetas and euros

SET CURRENT PATH = SC246300;
SELECT CUSTOMERNO, BUYER
  FROM SC246301.TBCONTRACT
 WHERE PESETAFEE = PESETA((DECIMAL(EUROFEE))*166)
Example 4-12 accomplishes the same thing as Example 4-11, but we have placed the conversion factor (multiplication by 166) in a user-defined function called EUR2PES. For more information on user-defined functions, see Chapter 5, User-defined functions (UDF) on page 57. In the additional material, you can find an external UDF (EUR22PES) using a Cobol program to implement the same functionality. See Appendix C, Additional material on page 251 for details.
Example 4-12 Another way to compare pesetas and euros

SET CURRENT PATH = SC246300;

CREATE FUNCTION SC246300.EUR2PES (X DECIMAL)
  RETURNS DECIMAL
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
  RETURN X*166 ;
-- DB2 V7 allows you to create functions with LANGUAGE SQL,
-- so it is not necessary to use external code.

SELECT CUSTOMERNO, BUYER
  FROM SC246301.TBCONTRACT
 WHERE PESETAFEE = PESETA(EUR2PES(DECIMAL(EUROFEE)))

You can also code the UDF the following way:

CREATE FUNCTION SC246300.EUR22PES (X EURO)
  RETURNS PESETA
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
  RETURN PESETA (DECIMAL(X) * 166) #

And refer to it as:

SELECT CUSTOMERNO, BUYER
  FROM SC246301.TBCONTRACT
 WHERE PESETAFEE = EUR22PES(EUROFEE)
Note: The EUR2PES function to convert euros to pesetas that is used here is just for illustration purposes. The actual conversion formula is somewhat more complicated and is probably best implemented through an external function using a regular programming language like C, Cobol, or PL/I.

Example 4-13 shows the use of the CAST functions EURO(DECIMAL) and DECIMAL(PESETA) in a trigger. The trigger automatically supplies a value for the EUROFEE column with the amount in euros when a contract is inserted into table SC246300.TBCONTRACT with the fee in pesetas (that is, the insert includes the PESETAFEE column).
Example 4-13 Automatic conversion to euros

SET CURRENT PATH = SC246300;
CREATE TRIGGER SC246300.TGEURMOD
  NO CASCADE BEFORE INSERT ON SC246300.TBCONTRACT
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.EUROFEE = EURO(DECIMAL(N.PESETAFEE)/166)
The use of this sort of trigger can be interesting during a conversion. Programs that have not been converted can continue to INSERT values in pesetas into the PESETAFEE column, whereas new programs can already use the new EUROFEE column. This way you don't have to change all the programs in one big operation, but can have a more gradual migration.

Note: You probably need to implement an UPDATE trigger with similar functionality when there is a process that can update the PESETAFEE column.
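A minimal sketch of such an UPDATE trigger, modeled on Example 4-13 (the trigger name here is hypothetical):

```sql
-- Re-derive EUROFEE whenever an existing row's PESETAFEE changes,
-- using the same conversion as the insert trigger TGEURMOD.
CREATE TRIGGER SC246300.TGEURMD2
  NO CASCADE BEFORE UPDATE OF PESETAFEE ON SC246300.TBCONTRACT
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.EUROFEE = EURO(DECIMAL(N.PESETAFEE)/166);
```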
Note: DCLGEN generates a DECLARE TABLE statement that refers to the source data type and not the distinct type.

A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type. You can assign a column value of a distinct type to a host variable if you can assign a column value of the distinct type's source type to that host variable. In other words, you should use the same definition for your host variables when referring to a UDT as you would use when referring to its source data type. If, for example, a Cobol program needs to reference a distinct type that is based on a CHAR(15) built-in data type, you should define the host variable as PIC X(15).
DISP(OLD,CATLG,CATLG)

LOAD DATA INDDN U6830982 LOG NO RESUME YES
  EBCDIC CCSID(00037,00000,00000)
  INTO TABLE "SC246300"."TBCONTRACT"
  WHEN(00001:00002 = X'0041')
  ( "SELLER"     POSITION(00003:00008) CHAR(006)
  , "BUYER"      POSITION(00009:00019) CHAR(011)
  , "RECNO"      POSITION(00020:00034) CHAR(015)
  , "PESETAFEE"  POSITION(00036:00045) DECIMAL       NULLIF(00035)=X'FF'
  , "PESETACOMM" POSITION(00047:00056) DECIMAL       NULLIF(00046)=X'FF'
  , "EUROFEE"    POSITION(00058:00066) DECIMAL       NULLIF(00057)=X'FF'
  , "EUROCOMM"   POSITION(00068:00076) DECIMAL       NULLIF(00067)=X'FF'
  , "CONTRDATE"  POSITION(00078:00087) DATE EXTERNAL NULLIF(00077)=X'FF'
  , "CLAUSE"     POSITION(00088)       VARCHAR
  )
-- To enable you to compare, we show the table definition of TBCONTRACT hereafter
CREATE TABLE SC246300.TBCONTRACT
  ( SELLER      CHAR(6)        NOT NULL
  , BUYER       CUSTOMER       NOT NULL
  , RECNO       CHAR(15)       NOT NULL
  , PESETAFEE   SC246300.PESETA
  , PESETACOMM  PESETA
  , EUROFEE     SC246300.EURO
  , EUROCOMM    EURO
  , CONTRDATE   DATE
  , CLAUSE      VARCHAR(500)   NOT NULL WITH DEFAULT
  , FOREIGN KEY (BUYER) REFERENCES SC246300.TBCUSTOMER
  , FOREIGN KEY (SELLER) REFERENCES SC246300.TBEMPLOYEE ON DELETE CASCADE )
  IN DB246300.TS246308
  WITH RESTRICT ON DROP;
You cannot create a declared temporary table that contains a user-defined distinct type. It is not supported, and you receive the following error:
DSNT408I SQLCODE = -607, ERROR: OPERATION OR OPTION USER DEFINED DATA TYPE IS NOT DEFINED FOR THIS OBJECT DSNT418I SQLSTATE = 42832 SQLSTATE RETURN CODE
A field procedure can be defined on a distinct type column. The source type of the distinct type must be a short string column that has a null default value. When the field procedure is invoked, the value of the column is cast to its source type and then passed to the field procedure. Also be aware that the following are indexable predicates:
WHERE BUYER = CAST ('ANNE' AS CUSTOMER) and WHERE CHAR(BUYER) = 'ANNE'#
The catalog support for distinct types includes:
- One row for each distinct type
- One row for each CAST function
- A record of the privileges held by users on CAST functions
- A new value added to OBTYPE for the distinct type USAGE privilege
Chapter 5. User-defined functions
Another way to categorize them is by the type of arguments they use as input and the type of result they return as output:

Scalar functions
A scalar function is an SQL operation that returns a single value from another value or set of values, and is expressed as a function name, followed by a list of arguments that are enclosed in parentheses. Each argument of a scalar function is a single value. Examples of scalar functions are CHAR, DATE, and SUBSTR.

Column functions
A column function is an SQL operation that produces a single value from the values of a single column (or a subset thereof). As with scalar functions, column functions also return a single value. However, the argument of a column function is a set of like values. Examples of column functions are: AVG, COUNT, MAX, MIN, and SUM.

Table functions
A table function is a function that returns a table to the SQL statement that references it. A table function can be referenced only in the FROM clause of a SELECT statement. In general, the returned table can be referenced in exactly the same way as any other table. Table functions are useful for performing SQL operations on non-DB2 data or moving non-DB2 data into a DB2 table.

Arithmetic and string operators
These are the traditional operators that are allowed on columns (depending on their data type). They can also be thought of as functions. Arithmetic and string operators are: +, -, *, /, CONCAT, and ||.

When discussing user-defined functions, you can classify them as:

External
External functions are based on programs written by you, and may be written in any of the programming languages supported by the target database management system. DB2 V7 even allows you to build external functions using SQL; these are SQL functions. You can only define scalar functions this way.

Internal/sourced
Sourced functions are based on existing built-in functions or existing user-defined functions that are already known to DB2. Their primary purpose is to extend existing functions (for example, the AVG function or the LENGTH function) for the source data type to a newly created user-defined distinct type.

Table 5-1 shows the different combinations of function types that are allowed:

Table 5-1 Allowable combinations of function types

                    Built-in function   External UDF   Sourced UDF
Scalar              OK                  OK             OK
Column              OK                  N/A            OK
Table               N/A                 OK             N/A
Arithmetic          OK                  N/A            OK
DB2 does not have, or that have a different name. You can make up for this by creating your own UDF to implement the function from the other DBMS in your DB2 system.

UDFs and UDTs can also be exploited by software developers who write specialized applications. The software developer can provide UDTs and UDFs as part of their software package. This approach is used in the family of DB2 Extender products.

Simplifying SQL syntax
With a UDF you can encapsulate the logic of a complex expression into a single function. Replacing a complex expression by a UDF improves the readability of the SQL statement. It can also avoid coding errors, as you can easily make a mistake when repeatedly coding the same complex expression.
-- Test
+0.2827433340000000E+02
Another example of an SQL scalar UDF can be found in Example 4-12 on page 52. Example 5-2 demonstrates the definition and use of a user-defined scalar function called CALC_BONUS. The CALC_BONUS function is defined to DB2 by using the CREATE FUNCTION statement which associates the function with a user-written C program called CBONUS. The function calculates the bonus for each employee based upon the employee's salary and commission. The function takes two input parameters (both DECIMAL(9,2)) and returns a result of DECIMAL(9,2). Note that a scalar function can only return one value. The external name 'CBONUS' identifies the name of the load module containing the code for the function. A user-defined scalar function can be referenced in the same context that built-in functions can be referenced; that is, wherever an expression can be used in a SELECT, INSERT, UPDATE or DELETE statement. User-defined scalar functions can also be referenced in CALL, VALUES and SET statements. The UPDATE statement in Example 5-2 shows how to use the CALC_BONUS function.
Example 5-2 User-defined external scalar function

CREATE FUNCTION CALC_BONUS (DECIMAL(9,2),DECIMAL(9,2))
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'CBONUS'
  LANGUAGE C

Function program pseudo code:
  cbonus (salary, comm, bonus)
    bonus = (salary + comm) * .10
    return

UPDATE SC246300.EMPLOYEE
   SET BONUS = CALC_BONUS (SALARY, COMM)
User-defined column functions can be referenced wherever a built-in column function can be used in a SELECT, INSERT, UPDATE or DELETE statement. The ALL or DISTINCT keywords can only be specified for a built-in or user-defined column function. Transition tables cannot be passed to user-defined column functions. User-defined column functions can only be created based on a DB2 built-in column function; you cannot create your own programs to do that.
STAY RESIDENT NO
PROGRAM TYPE SUB
WLM ENVIRONMENT V7PERF
SECURITY DB2
NO DBINFO ;
SELECT * FROM TABLE( SC246300.EXCH_RATES()) AS X
 WHERE CURRENCY_FROM = 'USD' ;

CURRENCY_FROM  CURRENCY_TO  EXCHANGE_RATE
-------------  -----------  -------------
USD            EURO                1.0970
USD            FF                  7.1955
USD            BEF                44.6730
When considering table UDFs, there are a few additional things that you have to know. Table UDFs do not use the same mechanism to pass information back and forth as triggers do when passing transition tables to, for instance, a stored procedure. There are no table locator variables used when dealing with table UDFs.

The program that is invoking the table UDF (SPUFI in the example above) does its normal SQL processing as with any other regular table. It will OPEN the cursor, FETCH rows from it, and CLOSE the cursor. (We are ignoring here the additional calls that can take place with dynamic SQL, like PREPARE and DESCRIBE.) When executing OPEN, FETCH, and CLOSE calls, a trip is made to the UDF program that executes in a WLM address space. On each trip to the UDF program, a CALLTYPE parameter is passed to the (Cobol) program that is invoked. The program uses this information to decide what part of the code to execute. If you are using the FINAL CALL keyword in the CREATE FUNCTION statement, a couple of extra calls (with a special CALLTYPE) are executed.

Following is a list of the possible CALLTYPEs that are used with table UDFs. For more information, see section "Passing parameter values to and from a user-defined function" in the DB2 UDB for OS/390 and z/OS Version 7 Application Programming and SQL Guide, SC26-9933.

-2   This is the first call to the user-defined function for the SQL statement. This type of call occurs only if the FINAL CALL keyword is specified in the user-defined function definition.
-1   This is the OPEN call to the user-defined function by an SQL statement.
0    This is a FETCH call to the user-defined function by an SQL statement. For a FETCH call, all input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. You will have multiple calls of this type, basically one for every row you want to pass back to the caller to end up in the result table.
1    This is a CLOSE call.
2    This is a final call. This type of final call occurs only if FINAL CALL is specified in the user-defined function definition.
255  This is another type of final call. This type of final call occurs when the invoking application executes a COMMIT or ROLLBACK statement, or when the invoking application abnormally terminates. When a value of 255 is passed to the user-defined function, the user-defined function cannot execute any SQL statements, except for CLOSE CURSOR. If the user-defined function executes any close cursor statements during this type of final call, the user-defined function should tolerate SQLCODE -501, because DB2 might have already closed cursors before the final call.

When you want to signal to the calling application (SPUFI in our example) that you have finished passing rows (reached the end of the sequential file in our case), you set the SQLSTATE variable to 02000 before returning to the invoker.

When you want to preserve information between subsequent invocations of the same table UDF (for instance, when processing/reading the sequential file - CALLTYPE=0), you can use a scratchpad to store that information. In our example, there is no real need to do so. The CREATE FUNCTION does specify the SCRATCHPAD keyword because, to debug the code, it was interesting to keep a counter to track the number of invocations of the table UDF used to build the result table.

The filtering from the WHERE clause is done by DB2, not by the table UDF's program. The WHERE clause information is not passed to the program. So, when the amount of information that you pass back to the invoker is large (a big sequential file) and almost all of the rows are filtered out by a WHERE clause, you can end up using more resources than you might expect based on the number of rows that actually show up in the result set.

You must code a user-defined table function that accesses external resources as a subprogram. Also ensure that the definer specifies the EXTERNAL ACTION parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program variables for a subprogram persist between invocations of the user-defined function, and use of the EXTERNAL ACTION parameter ensures that the user-defined function stays in the same address space from one invocation to another.
If you can, code your load module as re-entrant. This allows you to override the default of NO for the STAY RESIDENT option of the CREATE FUNCTION statement. If you specify YES, the load module remains in storage after it has been loaded, and this single copy of the module can then be shared across multiple invocations of the UDF. The impact of STAY RESIDENT YES is very important if multiple instances of a UDF are specified in the same SQL statement.

There is overhead processing for each input parameter, so keep the number of parameters to the minimum required.

Remember that, just as with built-in functions, or with any change to your application, the access path chosen by DB2 can be affected by an external UDF. A statement that is indexable without the function may become non-indexable by adding an improperly coded function. There are two obvious cases in which the statement can become non-indexable:
- The UDF returns a CHAR value with a length different from that of the value it is compared to.
- The UDF returns a nullable result and the compared value is not nullable.

We strongly recommend that you use EXPLAIN to determine whether the access path is what you expect, and whether it is as efficient as it can be. If you think the UDF is preventing DB2 from choosing an efficient access path, experiment by coding the statement with and without the UDF. This helps you understand the impact of the UDF on the access path.

UDFs have been fully integrated into the SQL language. This means that a UDF call can appear anywhere in a statement. Also, a single SQL statement can often be written in different ways and still achieve the same result. Use this to:
- Ensure that the access path is efficient.
- Code the SQL statement so that the UDF processes the fewest rows. This reduces the cost of the statement.
- Exploit the fact that the LE architecture makes processing subroutines more efficient than main programs, by defining the UDF program type as SUB.
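As a sketch of the EXPLAIN check recommended above (the QUERYNO value, table, column, and predicate are illustrative assumptions, and a standard PLAN_TABLE is assumed to exist):

```sql
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT *
  FROM SC246300.TBLINEITEM
  WHERE SC246300.CHAR_N_SI(QUANTITY) = '10';

-- Explain the statement again without the UDF under a different
-- QUERYNO, then compare the access paths
SELECT QUERYNO, ACCESSTYPE, MATCHCOLS, ACCESSNAME
FROM PLAN_TABLE
WHERE QUERYNO = 100;
```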
It is evident that you should make your UDF application code as efficient as possible. Two frequently overlooked opportunities to maximize efficiency are:
- Ensure that all variable types match. This avoids the additional overhead of unnecessary conversions within LE.
- In the C programming language, ensure that pragmas are coded correctly.

Since the cost of DB2 built-in functions is low, exploit them wherever possible.
This may appear to be a highly attractive option if, for example, you are converting from another database management system to DB2. The application might extensively use a function that has a different name in the other DBMS, or that behaves slightly differently from DB2's version of the same function. Suppose, for example, that the function used by the application to convert SMALLINT data to a string is called CHARNSI (see Example 5-9 on page 67 for the sample code). There is no function in DB2 with this name. To reduce the need to alter application code, you could code your own external UDF in a host language. The application can then run without any change and invoke your UDF. The CREATE FUNCTION SQL statement necessary to define it to DB2 can be found in Example 5-5.
Example 5-5 External UDF to convert from SMALLINT to VARCHAR

CREATE FUNCTION SC246300.CHAR_N_SI (SMALLINT)
  RETURNS VARCHAR(32)
  SPECIFIC CHAR_N_SI
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  EXTERNAL NAME CHARNSI
  PARAMETER STYLE DB2SQL
  NULL CALL
  NO EXTERNAL ACTION
  NO SCRATCHPAD
  NO FINAL CALL
  ALLOW PARALLEL
  NO COLLID
  ASUTIME LIMIT 5
  STAY RESIDENT YES
  PROGRAM TYPE SUB
  WLM ENVIRONMENT V7PERF
  SECURITY DB2
  NO DBINFO;

SET CURRENT PATH = 'SC246300';

SELECT CHAR_N_SI(SMALLINT(4899))
FROM SYSIBM.SYSDUMMY1;

4899 -- Result

-- To show it is a character string now, use a SUBSTR function on it
SELECT SUBSTR(CHAR_N_SI(SMALLINT(4899)),1,2)
FROM SYSIBM.SYSDUMMY1;

48 -- Result
Create a sourced UDF. Since a sourced UDF is based on an internal DB2 built-in function, you can expect comparable performance. There is no call to LE, and the UDF does not need to execute under a WLM environment. In Example 5-6, we show how you can code the sourced UDF. It can be called CHARNSI, which would satisfy your requirement that the application could be readily converted to DB2.
Example 5-6 Creating a sourced UDF

CREATE FUNCTION CHARNSI(DECIMAL(6,0))
  RETURNS VARCHAR(32)
  SOURCE SYSIBM.VARCHAR(DECIMAL(6,0));
SELECT CHARNSI(4899)
FROM SYSIBM.SYSDUMMY1;

4899 -- Result

SELECT SUBSTR(CHARNSI(4899),1,2)
FROM SYSIBM.SYSDUMMY1;

48 -- Result
Use the CAST function or use DB2 built-in functions. The CAST function is illustrated in Example 5-7; the use of a DB2 built-in function is shown in Example 5-8. You can expect good and comparable performance from both. The disadvantage, if you are converting from another database management system, is that application code needs to be changed. If you need to change application code anyway, or choose to do it for other reasons, then we recommend switching to DB2 built-in functions.
Example 5-7 Using CAST instead of a UDF SELECT SUBSTR(CAST(4899 AS CHAR(6)),1,2) FROM SYSIBM.SYSDUMMY1; 48 -- Result
Example 5-8 Built-in function instead of a UDF SELECT SUBSTR(CHAR(4899),1,2) FROM SYSIBM.SYSDUMMY1; 48 -- Result
Example 5-9 CHARNSI source code (C program listing)

/*********************************************************************
 * Module name = CHARNSI
 *
 * DESCRIPTIVE NAME = Convert small integer number to a string
 *
 * LICENSED MATERIALS - PROPERTY OF IBM
 * 5645-DB2
 * (C) COPYRIGHT 1999 IBM CORP. ALL RIGHTS RESERVED.
 *
 * STATUS = VERSION 6
 *
 * Example invocations:
 *   1) EXEC SQL SET :String = CHARNSI(number);
 *      ==> converts the small integer number to a string
 *
 * Notes:
 *   Dependencies: Requires IBM C/C++ for OS/390 V1R3 or higher
 *   Restrictions:
 *   Module type:  C program
 *   Processor:    IBM C/C++ for OS/390 V1R3 or higher
 *   Module size:  See linkedit output
 *********************************************************************/
Chapter 5. User-defined functions (UDF)
/*********************************************************************
 * Attributes: Re-entrant and re-usable
 *
 * Entry Point: CHARNSI
 *   Purpose: See Function
 *   Linkage: DB2SQL
 *   Invoked via SQL UDF call
 *
 * Input:  Parameters explicitly passed to this function:
 *   - *number        : a pointer to a small integer number to
 *                      convert to a string
 *
 * Output: Parameters explicitly passed by this function:
 *   - *numString     : pointer to a char[32], null-terminated
 *                      string to receive the reformatted number.
 *   - *nullNumString : pointer to a short integer to receive the
 *                      null indicator variable for *numString.
 *   - *sqlstate      : pointer to a char[06], null-terminated
 *                      string to receive the SQLSTATE.
 *   - *message       : pointer to a char[70], null-terminated
 *                      string to receive a diagnostic message if
 *                      one is generated by this function.
 *
 * Normal Exit: Return Code: SQLSTATE = 00000
 *   - Message: none
 *
 * Error Exit: None
 *
 * External References:
 *   - Routines/Services: None
 *   - Data areas       : None
 *   - Control blocks   : None
 *
 * Pseudocode:
 *   CHARNSI:
 *   1) If input number is NULL, then return NULL and exit
 *   2) Translate the small integer number to a string
 *   3) Return output string
 *********************************************************************/

/********************** C library definitions ***********************/
#pragma linkage(CHARNSI,fetchable)
#pragma runopts(POSIX(ON))
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <decimal.h>
#include <ctype.h>

/***************************** Equates ******************************/
#define NULLCHAR '\0'

/*********************** CHARNSI functions **************************/
void CHARNSI                     /* main routine                 */
     ( short int *p1In,          /* First parameter address      */
       char      *pOut,          /* Output address               */
       short int *null1In,       /* in: indic var for p1In       */
       short int *nullpOut,      /* out: indic var for pOut      */
       char      *sqlstate,      /* out: SQLSTATE                */
       char      *fnName,        /* in: family name of function  */
       char      *specificName,  /* in: specific name of func    */
       char      *message        /* out: diagnostic message      */
     );

/*******************************************************************/
/************************** main routine ***************************/
/*******************************************************************/
void CHARNSI                     /* main routine                 */
     ( short int *p1In,          /* in: small integer to convert */
       char      *pOut,          /* out: output string           */
       short int *null1In,       /* in: indic var for p1In       */
       short int *nullpOut,      /* out: indic var for pOut      */
       char      *sqlstate,      /* out: SQLSTATE                */
       char      *fnName,        /* in: family name of function  */
       char      *specificName,  /* in: specific name of func    */
       char      *message        /* out: diagnostic message      */
     )
{
#define DEF_OUTPUT_LENGTH 32

  /************************ Local variables *************************/
  char strret[100];              /* string receiver              */

  /*****************************************************************
   * Initialize SQLSTATE to 00000 and MESSAGE to ""                *
   *****************************************************************/
  message[0] = NULLCHAR;
  *nullpOut = 0;                 /* -1 if Null value returned    */
  memcpy( sqlstate,"00000",6 );
  memset(pOut, NULLCHAR, DEF_OUTPUT_LENGTH);

  /*****************************************************************
   * Return NULL if at least one input parameter is NULL           *
   *****************************************************************/
  if (*null1In != 0)
  {
    *nullpOut = -1;
    return;
  }

  /*****************************************************************
   * Convert an integer to a string                                *
   *****************************************************************/
  sprintf( pOut, "%-d", *p1In);
  return;
} /* end of CHARNSI */
Chapter 6. Built-in functions
In the last couple of versions, DB2 has expanded the number of built-in functions dramatically. In this chapter, we give an overview of the built-in functions that were added to DB2 in versions 6 and 7 and briefly describe their characteristics.
Column functions
AVG, COUNT, MAX, MIN and SUM.
Scalar functions
CHAR, COALESCE, DATE, DAY, DAYS, DECIMAL, DIGITS, FLOAT, HEX, HOUR, INTEGER, LENGTH, MICROSECOND, MINUTE, MONTH, NULLIF, SECOND, STRIP, SUBSTR, TIME, TIMESTAMP, VALUE, VARGRAPHIC and YEAR.
Column functions
COUNT_BIG  Returns the number of rows or values in a set of rows or values. It performs the same function as COUNT, except the result can be greater than the maximum value of an integer.
STDDEV  Returns the standard deviation of a set of numbers.
VARIANCE  Returns the variance of a set of numbers.
Scalar functions
ABS, ABSVAL  Returns the absolute value of the argument.
ACOS  Returns the arccosine of an argument as an angle, expressed in radians.
ASIN  Returns the arcsine of an argument as an angle, expressed in radians.
ATAN  Returns the arctangent of an argument as an angle, expressed in radians.
ATANH  Returns the hyperbolic arctangent of an argument as an angle, expressed in radians.
ATAN2  Returns the arctangent of x and y coordinates as an angle, expressed in radians.
BLOB  Returns a BLOB representation of a string of any type or a ROWID type.
CEIL, CEILING  Returns the smallest integer value that is greater than or equal to the argument.
CLOB  Returns a CLOB representation of a character string or ROWID type.
COS  Returns the cosine of an argument that is expressed as an angle in radians.
COSH  Returns the hyperbolic cosine of an argument that is expressed as an angle in radians.
DAYOFMONTH  Identical to the DAY function.
DAYOFWEEK  Returns an integer between 1 and 7 which represents the day of the week, where 1 is Sunday and 7 is Saturday.
DAYOFYEAR  Returns an integer between 1 and 366 which represents the day of the year, where 1 is January 1.
DBCLOB  Returns a DBCLOB representation of a graphic string type.
DOUBLE  Returns a double precision floating-point representation of a number or character string in the form of a numeric constant.
DOUBLE_PRECISION  See description for built-in function DOUBLE.
EXP  Returns the exponential function of an argument.
FLOOR  Returns the largest integer value that is less than or equal to the argument.
GRAPHIC  Returns a GRAPHIC representation of a character string.
IDENTITY_VAL_LOCAL  Returns the most recently assigned value for an identity column.
IFNULL  Identical to the COALESCE and VALUE functions with two arguments.
INSERT  Returns the modified contents of a string.
JULIAN_DAY  Returns an integer value representing a number of days from January 1, 4712 BC (the start of the Julian date calendar) to the date specified in the argument.
LCASE  Returns a string with the characters converted to lowercase.
LOWER  Identical to LCASE.
LEFT  Returns a string that consists of the specified number of leftmost bytes of a string.
LN  Returns the natural logarithm of an argument.
LOCATE  Returns the starting position of the first occurrence of one string within another string, based on a specified starting position.
LOG  Identical to LN.
LOG10  Returns the base 10 logarithm of an argument.
LTRIM  Removes blanks from the beginning of a string.
MIDNIGHT_SECONDS  Returns an integer value in the range 0 to 86400 representing the number of seconds between midnight and the time specified in the argument.
MOD  Divides the first argument by the second argument and returns the remainder.
POSSTR  Identical to the LOCATE function (except that POSSTR always starts at position 1).
POWER  Returns the value of one argument raised to the power of a second argument.
QUARTER  Returns an integer between 1 and 4 which represents the quarter of the year in which the date resides.
RADIANS  Returns the number of radians for an argument that is expressed in degrees.
RAISE_ERROR  Causes the statement that includes the function to return an error with the specified SQLSTATE and diagnostic-string.
RAND  Returns a double precision floating-point random number.
REAL  Returns a single precision floating-point representation of a number or character string in the form of a numeric constant.
REPEAT  Returns a string composed of an expression repeated a specified number of times.
REPLACE  Replaces all occurrences of a string within an input string with a new string.
RIGHT  Returns a string that consists of the specified number of rightmost bytes of a string.
ROUND  Rounds a number to a specified number of decimal points.
ROWID  Casts the input argument type to the ROWID type.
RTRIM  Removes blanks from the end of a string.
SIGN  Returns an indicator of the sign of the argument.
SIN  Returns the sine of an argument that is expressed as an angle in radians.
SINH  Returns the hyperbolic sine of an argument that is expressed as an angle in radians.
SMALLINT  Returns a small integer representation of a number or character string in the form of a numeric constant.
SPACE  Returns a character string consisting of the number of SBCS blanks specified by the argument.
SQRT  Returns the square root of the argument.
TAN  Returns the tangent of an argument that is expressed as an angle in radians.
TANH  Returns the hyperbolic tangent of an argument that is expressed as an angle in radians.
TIMESTAMP_FORMAT  Returns a timestamp for a character string, using a specified format to interpret the string.
TRANSLATE  Returns a string with one or more characters translated.
TRUNCATE  Truncates a number to a specified number of decimal points.
UCASE  Returns a string with the characters converted to uppercase.
UPPER  Identical to UCASE.
VARCHAR  Returns a varying-length character string representation of a character string, datetime value, integer number, decimal number, floating-point number, or ROWID value.
VARCHAR_FORMAT  Returns a varying-length character string representation of a timestamp, with the string in a specified format.
WEEK  Returns an integer between 1 and 54 which represents the week of the year. The week starts with Sunday.
Some of these functions provide different ways of obtaining the same result. For a detailed description of the syntax and how to use these functions, please refer to the DB2 UDB for OS/390 Version 6 SQL Reference, SC26-9014.
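To try a few of the functions listed above, a query such as the following can be run; the table is the sample table used elsewhere in this book, the column name is an assumption, and the results depend on your data:

```sql
SELECT COUNT_BIG(*),
       STDDEV(QUANTITY),
       VARIANCE(QUANTITY)
FROM SC246300.TBLINEITEM;
```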
Column functions
STDDEV_SAMP  Returns the sample standard deviation (division by n-1) of a set of numbers.
VARIANCE_SAMP  Returns the sample variance of a set of numbers.
Scalar functions
ADD_MONTHS Returns a date that represents the date argument plus the number of months argument.
CCSID_ENCODING  Returns the encoding scheme of a CCSID with a value of ASCII, EBCDIC, UNICODE, or UNKNOWN.
DAYOFWEEK_ISO  Returns an integer in the range of 1 to 7, where 1 represents Monday.
LAST_DAY  Returns a date that represents the last day of the month indicated by date-expression.
MAX (scalar)  Returns the maximum value in a set of values.
MIN (scalar)  Returns the minimum value in a set of values.
MULTIPLY_ALT  Returns the product of the two arguments as a decimal value; used when the sum of the argument precisions exceeds 31.
NEXT_DAY  Returns a timestamp that represents the first weekday, named by the second argument, after the date argument.
ROUND_TIMESTAMP  Returns a timestamp rounded to the unit specified by the timestamp format string.
TRUNC_TIMESTAMP  Returns a timestamp truncated to the unit specified by the timestamp format string.
WEEK_ISO  Returns an integer that represents the week of the year, with Monday as the first day of the week.
For a complete list of all the functions available and for a detailed description of the syntax and how to use these functions, please refer to DB2 UDB for OS/390 and z/OS Version 7 SQL Reference, SC26-9944.

Tip: When using the function WEEK, make sure that you understand that the weeks are based on a starting day of Sunday. If you want your week to start on a Monday, you should use WEEK_ISO instead.
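The difference can be checked directly; this sketch picks a date near a year boundary, where the Sunday-based WEEK numbering and the Monday-based WEEK_ISO numbering can disagree:

```sql
SELECT WEEK(DATE('2001-01-07')),
       WEEK_ISO(DATE('2001-01-07'))
FROM SYSIBM.SYSDUMMY1;
```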
Part 2
Chapter 7. Temporary tables
When you need a table only for the life of an application process, you can create a temporary table. There are two kinds of temporary tables:
- Created temporary tables, which you define using a CREATE GLOBAL TEMPORARY TABLE statement (introduced in DB2 V5 as global temporary tables).
- Declared temporary tables, which you define in a program using a DECLARE GLOBAL TEMPORARY TABLE statement.

SQL statements that use temporary tables can run faster, because:
- There is no logging for created temporary tables. Only UNDO records (required for rollback processing) are logged for declared temporary tables.
- There is no locking for created temporary tables, and only share-level locks on the table spaces used for declared temporary tables.
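The two definition styles look like this; the table names follow the samples used later in this chapter, and the column list in the DECLARE statement is illustrative:

```sql
-- Created temporary table: the definition is stored once in the
-- DB2 catalog; each application process gets its own empty instance.
CREATE GLOBAL TEMPORARY TABLE SC246300.GLOBALINEITEM
  LIKE SC246300.TBLINEITEM;

-- Declared temporary table: defined by the application process
-- itself and always qualified by SESSION.
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPITEM
  (ITEMNUMBER INTEGER, QUANTITY INTEGER)
  ON COMMIT DELETE ROWS;
```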
All references to the table from multiple applications are to a single persistent table.
- The table can be stored in a simple, segmented, or partitioned table space in a user-defined database or in the default database DSNDB04.
- Can have indexes.
- Can INSERT, DELETE, and UPDATE individual rows.
- WITH DEFAULT clause supported.
- UDTs supported.
- SAVEPOINTs supported.
- Threads can be reused.
Example 7-2 shows the information that is stored in the DB2 catalog for created temporary tables.
Example 7-2 Created temporary table in SYSIBM.SYSTABLES

SELECT NAME, CREATOR, TYPE, DBNAME, TSNAME
FROM SYSIBM.SYSTABLES
WHERE NAME = 'GLOBALINEITEM';

NAME           CREATOR   TYPE  DBNAME   TSNAME
---------+---------+---------+---------+---------
GLOBALINEITEM  SC246300  G     DSNDB06  SYSPKAGE
Note: All created temporary tables seem to reside in the catalog table space SYSPKAGE, but in reality, they are instantiated (materialized) in the DSNDB07 work files. The TYPE = G denotes a created temporary table. In Example 7-3, the LIKE table-name or view-name specifies that the columns of the created temporary table have exactly the same name and description as the columns from the identified table or view. That is, the columns of SC246300.GLOBALIKE_ITEM have the same name and description as those of SC246300.TBLINEITEM except for attributes not allowed for created temporary tables and no default values other than NULL. The name specified after the LIKE must identify a table, view, or created temporary table that exists at the current server, and the privilege set must implicitly or explicitly include the SELECT privilege on the identified table or view. A created temporary table GLOBALIKE_ITEM, similar to TBLINEITEM can be useful, for example, for a program that handles all the changes that a customer makes while shopping on the internet. All items from an order can be inserted in a created temporary table work file without locking and logging contention. When the customer confirms an order, the created temporary rows can be inserted in TBLINEITEM before the work file data is lost at COMMIT time. If the customer cancels the order, then entries in the GLOBALIKE_ITEM temporary table are removed by the rollback and the TBLINEITEM table is not involved in the process and contention is avoided. However, created temporary tables are not updatable. So if the customer would like to change the quantity of an item during the ordering process, he has to start the order process from the beginning.
Example 7-3 Use of LIKE clause with created temporary tables CREATE GLOBAL TEMPORARY TABLE SC246300.GLOBALIKE_ITEM LIKE SC246300.TBLINEITEM
This clause is similar to the LIKE clause on CREATE TABLE, with the following differences:
- If any column of the identified table or view has an attribute value that is not allowed for a column of a created temporary table (for example, UNIQUE), that attribute value is ignored and the corresponding column in the new created temporary table has the default value for that attribute, unless otherwise indicated.
- If a column of the identified table or view allows a default value other than the null value, that default value is ignored and the corresponding column in the new created temporary table has no default. A default value other than the null value is not allowed for any column of a created temporary table.

You can also create a view on a created temporary table. Example 7-4 shows a view on the created temporary table SC246300.GLOBALINEITEM. Every application sees different values returned from the view SC246300.GLOBALVIEW, depending on the content of its own instance of the created temporary table SC246300.GLOBALINEITEM. The view SC246300.GLOBALVIEW is defined in the catalog as a normal view on a base table.
Example 7-4 View on a created temporary table CREATE VIEW SC246300.GLOBALVIEW AS SELECT NORDERKEY ,ITEMNUMBER
FROM SC246300.GLOBALINEITEM ;
When you execute the INSERT statement, DB2 creates an instance of GLOBALINEITEM and populates that instance with rows from table TBLINEITEM. When the COMMIT statement is executed, DB2 deletes all rows from GLOBALINEITEM. However, if you change the declaration of cursor C1 to:
EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR SELECT * FROM SC246300.GLOBALINEITEM ;
The contents of GLOBALINEITEM are not deleted until the application ends, because C1, a cursor defined WITH HOLD, is open when the COMMIT statement is executed. In either case, DB2 drops the instance of GLOBALINEITEM when the application ends.
A stored procedure can return a result set to its caller through a temporary table:
1. Insert the data into a temporary table.
2. Open a cursor against the temporary table.
3. End the stored procedure.
The client can then fetch the rows from the cursor defined on the temporary table.
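Sketched in SQL, the body of such a stored procedure might contain the following; the names and columns are hypothetical, and the cursor must be declared WITH RETURN for the caller to receive the result set:

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.RESULTS
  (ITEMNUMBER INTEGER, QUANTITY INTEGER)
  ON COMMIT PRESERVE ROWS;

INSERT INTO SESSION.RESULTS
  SELECT ITEMNUMBER, QUANTITY
  FROM SC246300.TBLINEITEM;

-- The cursor is left open when the procedure ends;
-- the client then fetches from it.
DECLARE C1 CURSOR WITH RETURN FOR
  SELECT * FROM SESSION.RESULTS;
OPEN C1;
```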
Considerations
Those responsible for system planning should be aware that work file and buffer pool storage might need to be increased, depending on how large a created temporary table is and the amount of sorting activity required. Given that a logical work file is not used for any other purpose for as long as that instantiation of a created temporary table is active, there may be a need to increase the size or number of physical work file table spaces, especially when there is also a significant amount of other work (like sort activity) concurrently using the work file database. Those responsible for performance monitoring and tuning should be aware that created temporary tables are never accessed through an index; only table scans are used. In addition, a DELETE of all rows performed on a created temporary table does not immediately reclaim the space used for the table. Space reclamation does not occur until a COMMIT is done.
- They cannot be defined as parents in a referential constraint.
- The columns cannot have LOB or ROWID data types (or a distinct type based on one).
- They cannot have a validproc, editproc, fieldproc, or trigger.
- They cannot be referenced:
  - In any DB2 utility commands (message DSNU062I is issued for this error).
  - In a LOCK TABLE statement.
  - As the target of an UPDATE statement, where the target is the created temporary table or a view on the created temporary table. If you try to UPDATE a created temporary table, you receive the following message:
DSNT408I SQLCODE = -526, ERROR: THE REQUESTED OPERATION OR USAGE DOES NOT APPLY TO CREATED TEMPORARY TABLE tablename DSNT418I SQLSTATE = 42995 SQLSTATE RETURN CODE
However, the created temporary table can be referenced in the WHERE clause of an UPDATE statement.
- A created temporary table can be referenced in the FROM clause of any subselect. As with all tables stored in work files, query parallelism is not considered for any query referencing a created temporary table in the FROM clause.
- DELETE FROM specifying a created temporary table is valid when the statement does not include a WHERE or WHERE CURRENT OF clause. When a view is created on a created temporary table, the CREATE VIEW statement cannot contain a WHERE clause, because a DELETE FROM the view would fail with an SQLCODE -526. However, you can delete all rows (mass delete) from a created temporary table or from a view on a created temporary table.
- If a created temporary table is referenced in the subselect of a CREATE VIEW statement, the WITH CHECK OPTION must not be specified for the WHERE clause of the subselect of the CREATE VIEW statement.
- GRANT ALL PRIVILEGES ON a created temporary table is valid, but specific privileges cannot be granted on a created temporary table. Of the ALL privileges, only the ALTER, INSERT, DELETE, and SELECT privileges can actually be used on a created temporary table.
- REVOKE ALL PRIVILEGES ON a created temporary table is valid, but specific privileges cannot be revoked from a created temporary table.
- The DROP DATABASE statement cannot be used to implicitly drop a created temporary table. You must use the DROP TABLE statement to drop a created temporary table.
- ALTER TABLE on a created temporary table is valid only when used to add a column to it, and only if any column being added has a default value of NULL. When the ALTER is performed, any plan or package that references the table is marked as invalid (that is, the SYSPLAN or SYSPACKAGE column VALID is changed to N), and the next time the plan or package is run, DB2 performs an automatic rebind (autobind) of the plan or package. The added column is then available to the SQL statements in the plan or package. On a successful autobind, the VALID column is changed to Y.

Created temporary tables can be referenced in DROP TABLE, CREATE VIEW, COMMENT ON, INSERT, SELECT, LABEL ON, CREATE ALIAS, CREATE SYNONYM, CREATE TABLE LIKE, DESCRIBE TABLE, and DECLARE TABLE. There are no restrictions or additional rules other than the ones mentioned above.
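The distinction between a mass delete and an update looks like this in practice (the column name is a hypothetical illustration):

```sql
-- Valid: mass delete, no WHERE clause
DELETE FROM SC246300.GLOBALINEITEM;

-- Fails with SQLCODE -526: a created temporary table
-- cannot be the target of an UPDATE
UPDATE SC246300.GLOBALINEITEM SET QUANTITY = 0;
```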
ON COMMIT DELETE ROWS, which is the default, specifies that the rows of the table are to be deleted following a commit operation (if no WITH HOLD cursors are open on the table). To avoid mistakes, always define the table with ON COMMIT PRESERVE ROWS when you want to preserve the rows at COMMIT. This way there is no need to keep a cursor WITH HOLD open to preserve the rows in the DTT across COMMITs.

Important: Always explicitly drop the declared temporary table when it is no longer needed. If you use the ON COMMIT PRESERVE ROWS option, the thread cannot be inactivated or reused unless the program explicitly drops the table before the commit. If you do not explicitly drop the table, it is possible to run out of usable threads.
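A sketch of the recommended pattern (table and column names are illustrative):

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPWORK
  (C1 INTEGER)
  ON COMMIT PRESERVE ROWS;

INSERT INTO SESSION.TEMPWORK VALUES (1);

-- Drop before the COMMIT so the thread can be inactivated or reused
DROP TABLE SESSION.TEMPWORK;
COMMIT;
```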
(Listing of the table spaces defined in the TEMP database: seven entries, each showing SG246300 with SEGSIZE 32.)
Tip: All table spaces in a TEMP database should be created with the same segment size and with the same primary space allocation values. DB2 chooses the table space in which to place your declared temporary table. You should create table spaces in your TEMP database for all page sizes you use in base tables.

DB2 determines which table space in the TEMP database is used for a given declared temporary table. If a DECLARE GLOBAL TEMPORARY TABLE statement specifies a row size that is not supported by any of the table spaces defined in the TEMP database, the
statement fails. DB2 determines the buffer pool based on the page size that is required for the declared temporary table, and assigns the DTT to a table space in the TEMP database that can handle this page size. An INSERT statement fails if there is insufficient space in the table space used for the declared temporary table. Allocate enough space for all concurrently executing threads to create their declared temporary tables. You may want to have several smaller table spaces rather than a few large ones, to limit the space one declared temporary table can use, since a declared temporary table cannot span multiple physical TEMP table spaces.

Tip: You may want to isolate declared temporary tables to their own set of buffer pools.
An encoding scheme (CCSID ASCII, EBCDIC, or UNICODE) cannot be specified for a TEMP database or for a table space defined within a TEMP database. However, an encoding scheme can be specified for a declared temporary table with USING CCSID. This means that a table space defined within a TEMP database can hold temporary tables with different encoding schemes.

You should bear in mind that the TEMP database could become quite large if the usage of declared temporary tables is high. You may also need to increase the size of the EDM pool to account for this extra database.

START, STOP, and DISPLAY DB are the only supported commands against the TEMP database. The standard command syntax should be used, but please note the following:
- You cannot start a TEMP database as RO (read only).
- You cannot use the AT COMMIT option of the STOP DB command.
- You cannot stop and start any index spaces that the applications have created.

Tip: Do not specify the following clauses of the CREATE TABLESPACE statement when defining a table space in a TEMP database: CCSID, LARGE, MEMBER CLUSTER, COMPRESS, LOB, NUMPARTS, DSSIZE, LOCKSIZE, PCTFREE, FREEPAGE, LOCKPART, TRACKMOD, GBPCACHE, LOG.
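A minimal TEMP database setup might look like the following Version 7 style sketch; the database name, table space names, and buffer pools are assumptions:

```sql
CREATE DATABASE TEMPDB AS TEMP;

-- One table space per page size in use, all with the same SEGSIZE
CREATE TABLESPACE TEMP4K IN TEMPDB SEGSIZE 32 BUFFERPOOL BP0;
CREATE TABLESPACE TEMP8K IN TEMPDB SEGSIZE 32 BUFFERPOOL BP8K0;
```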
If the base table, created temporary table, or view from which you select columns has identity columns, you can specify that the corresponding columns in the declared temporary table are also identity columns. Do that by specifying the INCLUDING IDENTITY COLUMN ATTRIBUTES clause when you define the declared temporary table. In Example 7-9, the statement defines a declared temporary table called TEMPPROD by explicitly specifying the columns.
Example 7-9 Explicitly specify columns of declared temporary table DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPPROD (SERIAL CHAR(8) NOT NULL WITH DEFAULT '99999999' ,DESCRIPTION VARCHAR(60) NOT NULL ,PRODCOUNT INTEGER GENERATED ALWAYS AS IDENTITY ,MFGCOST DECIMAL(8,2) ,MFGDEPT CHAR(3) ,MARKUP SMALLINT ,SALESDEPT CHAR(3) ,CURDATE DATE NOT NULL);
In Example 7-10, the statement defines a declared temporary table called TEMPPROD by copying the definition of a base table. The base table has an identity column that the declared temporary table also uses as an identity column.
Example 7-10 Implicit define declared temporary table and identity column DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPPROD LIKE BASEPROD INCLUDING IDENTITY COLUMN ATTRIBUTES;
In Example 7-11, the statement defines a declared temporary table called TEMPPROD by selecting columns from a view. The view has an identity column that the declared temporary table also uses as an identity column. The declared temporary table inherits the default defined columns from the view definition. Notice also the DEFINITION ONLY clause. This is to make clear that the SELECT is not copying data from the original table but merely its definition.
Example 7-11 Define declared temporary table from a view
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPPROD
  AS (SELECT * FROM PRODVIEW)
  DEFINITION ONLY
  INCLUDING IDENTITY COLUMN ATTRIBUTES
  INCLUDING COLUMN DEFAULTS ;
Example 7-12 shows how to drop the definition of a declared temporary table.
       INCLUDING COLUMN DEFAULTS
       ON COMMIT PRESERVE ROWS
   END-EXEC
   ...
   EXEC SQL
     INSERT INTO SESSION.TEMPPROD
       SELECT * FROM BASEPROD
   END-EXEC
   ...
   EXEC SQL COMMIT END-EXEC
   ...
   EXEC SQL DROP TABLE SESSION.TEMPPROD END-EXEC
When the DECLARE GLOBAL TEMPORARY TABLE statement is executed, DB2 creates an empty instance of TEMPPROD. The INSERT statement populates that instance with rows from table BASEPROD. The qualifier, SESSION, must be specified in any statement that references TEMPPROD. When the application issues the COMMIT statement, DB2 keeps all rows in TEMPPROD because TEMPPROD is defined with ON COMMIT PRESERVE ROWS. In that case you need to drop the table before the program ends to avoid problems with thread reuse and inactive threads.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD      /* Define the temporary table */
  (CHARCOL CHAR(6) NOT NULL) ;               /* at the remote site         */
EXEC SQL CONNECT RESET ;                     /* Connect back to local site */
EXEC SQL INSERT INTO CHICAGO.SESSION.TEMPPROD (VALUES 'ABCDEF') ;
7.3.12 Authorization
No authority is required to declare a temporary table unless you use the LIKE clause; in that case, SELECT access is required on the base table or view specified. PUBLIC implicitly has authority to create tables in the TEMP database and USE authority on the table spaces in that database. PUBLIC also implicitly has all table privileges (declare, access, and drop) on declared temporary tables. The PUBLIC privileges are not recorded in the catalog, nor are they revocable. Despite the PUBLIC authority, there is no security exposure, as the table can only be referenced by the application process that declared it.
Currently, declared temporary tables cannot be used within the body of a trigger. However, a trigger can call a stored procedure or UDF that refers to a declared temporary table. The following statements are not allowed against a declared temporary table:
CREATE VIEW
ALTER TABLE
ALTER INDEX
RENAME TABLE
LOCK TABLE
GRANT/REVOKE table privileges
CREATE ALIAS
CREATE SYNONYM
CREATE TRIGGER
LABEL ON/COMMENT ON
Chapter 8. Savepoints
In this chapter, we discuss how savepoints can be used to create points of consistency within a logical unit of work.
[Figure: sample itinerary with stops in Amsterdam, Copenhagen, Singapore, Kuala Lumpur, Melbourne, and Alice Springs]
First we make the flight reservation to Melbourne and take a savepoint called FIRSTSTOP. Then we make the flight reservation to Singapore and take a savepoint called SECONDSTOP. Then we find out that there are no seats from Singapore to Copenhagen. We ROLLBACK TO SAVEPOINT FIRSTSTOP, since we do not want to lose the reservation to Melbourne. Then we make the reservation to Kuala Lumpur and take a savepoint called SECONDSTOP, to Amsterdam with savepoint THIRDSTOP, and to Copenhagen with savepoint DESTINATION. Now that we are at our destination and have not used more than 3 stops, we can release all savepoints except DESTINATION. Then we make the return reservation to Singapore with a savepoint called FIRSTSTOP. There are no seats from Singapore to Melbourne, so we need to ROLLBACK TO SAVEPOINT DESTINATION, that is, Copenhagen. Rolling back also releases the FIRSTSTOP savepoint. Then we make the reservation from Copenhagen to Amsterdam with a savepoint called FIRSTSTOP, to Kuala Lumpur with savepoint SECONDSTOP, and to Melbourne with savepoint THIRDSTOP. If the reservation from Melbourne to Alice Springs fails, we want to ROLLBACK the whole reservation, that is, to the beginning of the logical unit of work. If we can find a seat to Alice Springs and the number of stops is no more than 3 (as in our case), we can COMMIT the UOW. This example shows that there is sometimes a need for additional points in time to roll back to. They change neither the behavior of, nor the need for, COMMIT. Important: Savepoints are not a substitute for COMMITs.
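The outbound portion of this itinerary might be sketched in SQL as follows (the RESERVATIONS table and the leg codes are illustrative, not from the original example):

```sql
INSERT INTO RESERVATIONS (LEG) VALUES ('ASP-MEL');   -- reserve flight to Melbourne
SAVEPOINT FIRSTSTOP ON ROLLBACK RETAIN CURSORS;

INSERT INTO RESERVATIONS (LEG) VALUES ('MEL-SIN');   -- reserve flight to Singapore
SAVEPOINT SECONDSTOP ON ROLLBACK RETAIN CURSORS;

-- No seats Singapore-Copenhagen: keep Melbourne, undo everything after FIRSTSTOP
ROLLBACK TO SAVEPOINT FIRSTSTOP;

INSERT INTO RESERVATIONS (LEG) VALUES ('MEL-KUL');   -- reserve Kuala Lumpur instead
SAVEPOINT SECONDSTOP ON ROLLBACK RETAIN CURSORS;     -- name reused; allowed (not UNIQUE)
```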
The UNIQUE clause is optional and specifies that the application program cannot activate a savepoint with the same name as an already active savepoint within the unit of recovery. If you plan to use a UNIQUE savepoint in a loop, and you do not release or roll back that savepoint in the loop prior to its reuse, you get an error:
SQLCODE -881: A SAVEPOINT WITH NAME savepoint-name ALREADY EXISTS, BUT THIS SAVEPOINT NAME CANNOT BE REUSED SQLSTATE: 3B501
Omitting UNIQUE indicates that the application can reuse this savepoint name within the unit of recovery. If a savepoint with the same name already exists within the unit of recovery and the savepoint was not created with the UNIQUE option, the old (existing) savepoint is destroyed and a new savepoint is created. This is different from using the RELEASE SAVEPOINT statement, which releases the named savepoint and all subsequently established savepoints. Rollback to released savepoints is not possible. Application logic determines whether the savepoint name needs to be, or can be, reused as the application progresses, or whether the savepoint name needs to denote a unique milestone. Specify the optional UNIQUE clause on the SAVEPOINT statement when you do not intend to reuse the name without first releasing the savepoint. This prevents an invoked program from accidentally reusing the name. Tip: You can reuse a savepoint that has been specified as UNIQUE as long as the prior savepoint with the same name has been released (through the use of a ROLLBACK or a RELEASE SAVEPOINT) prior to attempting to reuse it.
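As a sketch of this rule (statement sequence only; the savepoint name S1 is illustrative):

```sql
SAVEPOINT S1 UNIQUE ON ROLLBACK RETAIN CURSORS;  -- first use: succeeds
-- Reissuing SAVEPOINT S1 UNIQUE at this point would fail with SQLCODE -881

RELEASE SAVEPOINT S1;                            -- S1 is released

SAVEPOINT S1 UNIQUE ON ROLLBACK RETAIN CURSORS;  -- succeeds again after the release
```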
The ON ROLLBACK RETAIN CURSORS clause is mandatory and specifies that any cursors that are opened after the savepoint is set are not tracked, and thus, are not closed upon rollback to the savepoint. Although these cursors remain open after rollback to the savepoint, they might not be usable. For example, if rolling back to the savepoint causes the insertion of a row upon which the cursor is positioned to be rolled back, using the cursor to update or delete the row results in an error:
SQLCODE -508: THE CURSOR IDENTIFIED IN THE UPDATE OR DELETE STATEMENT IS NOT POSITIONED ON A ROW SQLSTATE: 24504
With scrollable cursors (see Update and delete holes on page 170 for more details), you would get a different error:
SQLCODE -222: AN UPDATE OR DELETE WAS ATTEMPTED AGAINST A HOLE USING cursor-name SQLSTATE: 24510
The ON ROLLBACK RETAIN LOCKS clause specifies that any locks that are acquired after the savepoint is set are not tracked and are not released upon rollback to the savepoint. If you do not specify this clause, it is implied (this is the default and at the present time, there is no other option for locks). In Example 8-1, we show how to set a unique savepoint named START_OVER.
Example 8-1 Setting up a savepoint
EXEC SQL
  SAVEPOINT START_OVER UNIQUE
    ON ROLLBACK RETAIN CURSORS
    ON ROLLBACK RETAIN LOCKS ;
The ROLLBACK statement with the TO SAVEPOINT clause is used to restore to a savepoint, that is, undo data and schema changes (excluding changes to created temporary tables) made after the savepoint was set. Changes made to created temporary tables are not logged and are not backed out; a warning is issued instead. The same warning is also issued when a created temporary table is changed and there is an active savepoint. The warning issued is:
SQLCODE +883: ROLLBACK TO SAVEPOINT OCCURRED WHEN THERE WERE OPERATIONS THAT CANNOT BE UNDONE, OR AN OPERATION THAT CANNOT BE UNDONE OCCURRED WHEN THERE WAS A SAVEPOINT OUTSTANDING SQLSTATE: 01640
Any updates outside the local DBMS, such as remote DB2s, VSAM, CICS, and IMS, are not backed out upon rollback to the savepoint, not even when under the control of RRS. Any cursors that are opened after the savepoint is set are not closed when a rollback to the savepoint is issued. Changes in cursor positioning are not backed out upon rollback to the savepoint. Any locks that are acquired after the savepoint is set are not released upon rollback to the savepoint. Any savepoints that are set after the one to which you roll back are released. The savepoint to which the rollback is performed is not released. If a savepoint name is not specified, the rollback is to the last active savepoint. If no savepoint is active, an error occurs:
SQLCODE -880: SAVEPOINT savepoint-name DOES NOT EXIST OR IS INVALID IN THIS CONTEXT SQLSTATE: 3B001 or SQLCODE -882: SAVEPOINT DOES NOT EXIST SQLSTATE: 3B502
Rolling back to a savepoint has no effect on created temporary tables because there is no logging for CTTs. Changes to declared temporary tables, on the other hand, can be safeguarded or undone by rolling back to a savepoint. For more information on declared temporary tables, see 7.3, Declared temporary tables on page 88. The ROLLBACK statement without the TO SAVEPOINT clause (this is the normal SQL ROLLBACK statement) rolls back the entire unit of recovery. All savepoints set within the unit of recovery are released. The RELEASE SAVEPOINT statement is used to release a savepoint and any subsequently established savepoints. The syntax of the COMMIT statement is unchanged. COMMIT releases all savepoints that were set within the unit of recovery.
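A minimal sketch of the interplay between these statements (savepoint names A and B are illustrative):

```sql
SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
SAVEPOINT B ON ROLLBACK RETAIN CURSORS;

ROLLBACK TO SAVEPOINT A;   -- undoes changes made after A; releases B; A stays active
RELEASE SAVEPOINT A;       -- releases A; rolling back to A is no longer possible

ROLLBACK;                  -- normal ROLLBACK: backs out the whole unit of recovery
                           -- and releases all savepoints set within it
```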
For clarity, and to re-enable the use of three-part name remote connections, we recommend that you code the RELEASE SAVEPOINT svptname statement to release savepoints that are no longer required.
Chapter 9. Unique column identification
For better performance, DB2 can keep some preallocated numbers in memory. The default is to cache 20 numbers; a different number can be specified with the CACHE integer clause. The minimum value is 2. If you do not want or need caching, specify NO CACHE. If DB2 fails, you lose the numbers that are cached but not yet used, so the assigned values may have gaps for this reason. Other reasons to end up with gaps can be found in 9.1.9, Application design considerations on page 111.
GENERATED ALWAYS AS IDENTITY
  (START WITH 1000, INCREMENT BY +2,
   CACHE 40, CYCLE,
   MAXVALUE 9999, MINVALUE 1007)
NOT NULL WITH DEFAULT NOT NULL WITH DEFAULT NOT NULL WITH DEFAULT ;
This is the order in which member numbers get assigned:

  1000
  1002
  1004
  1006
  ...
  9994
  9996
  9998
  1007  <-- once we reach MAXVALUE 9999, we CYCLE back to MINVALUE
  1009      and begin assigning numbers from there
  1011
  1013
  ...
  9995
  9997
  9999
The fact that you can create a table with an IDENTITY column as specified in Example 9-1 does not mean that you should. More often than not, it is very important that you avoid duplicates, so you do not specify the CYCLE keyword. Also, you should take into account how large the number may get over a long period of time and provide for a much larger MAXVALUE. If you reach MAXVALUE and you don't have CYCLE specified, you are not able to insert additional rows; you have to drop the table, recreate it with a larger MAXVALUE, and reload the rows. The GENERATED ALWAYS attribute of the identity column definition is treated in more detail in section 9.1.5, How to populate an identity column. When a table is being created LIKE another table that contains an identity column, a new option on the LIKE clause, INCLUDING IDENTITY COLUMN ATTRIBUTES, can be used to specify that all the identity column attributes are to be inherited by the new table. If INCLUDING IDENTITY COLUMN ATTRIBUTES is omitted, the new table only inherits the data type of the identity column and none of the other column attributes. You cannot create a table LIKE a view and specify the INCLUDING IDENTITY COLUMN ATTRIBUTES keywords. In Example 9-2, we specify that T2 should inherit all of the identity column attributes from T1 by specifying the INCLUDING IDENTITY COLUMN ATTRIBUTES clause.
Example 9-2 Copying identity column attributes with the LIKE clause CREATE TABLE T2 LIKE T1 INCLUDING IDENTITY COLUMN ATTRIBUTES
Identity columns defined as GENERATED BY DEFAULT can be loaded like any other column. That is, you can load data into the identity column. To do this, you specify the column name of the identity column in your load control cards just like the rest of the columns. Even when the identity column is defined as GENERATED BY DEFAULT and the (un)load file contains identity column values, if you prefer to have DB2 generate (or re-generate) values for the column, then you can use the name DSN_IDENTITY for the identity column in the load control cards and specify the IGNOREFIELDS keyword. For more information on this, refer to DB2 UDB for OS/390 and z/OS Utility Guide and Reference, SC26-9945. Important: There is no means by which you can load existing values into an identity column that is GENERATED ALWAYS. If you are reloading data to a table with such a column, you must first drop and recreate the table and make the identity column GENERATED BY DEFAULT. There is no way to convert from GENERATED ALWAYS to GENERATED BY DEFAULT or vice versa.
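A sketch of such a LOAD control card is shown below. The field positions and data types are illustrative assumptions (TBMEMBER and its columns are taken from the examples in this chapter); with IGNOREFIELDS YES and the identity field named DSN_IDENTITY, DB2 ignores the unloaded identity values and generates new ones:

```sql
LOAD DATA
  INTO TABLE SC246300.TBMEMBER
  IGNOREFIELDS YES
  (DSN_IDENTITY POSITION(1:5)   INTEGER EXTERNAL
  ,NAME         POSITION(7:26)  CHAR(20)
  ,INCOME       POSITION(28:38) DECIMAL EXTERNAL
  ,DONATION     POSITION(40:50) DECIMAL EXTERNAL)
```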
In this case, C1 is the identity column and C2, C3, C4, C5, C6 are all the other columns of the table; note that C1 is not coded in either column list.
Note: the insert could also have been coded like this:
INSERT INTO TBMEMBER (MEMBNO, NAME, INCOME, DONATION)
  VALUES (DEFAULT, 'Kate', 120000.00, 500) ;
or like this:
INSERT INTO TBMEMBER
  VALUES (DEFAULT, 'Kate', 120000.00, 500) ;
[Figure: identity column values assigned by data sharing members DB2A and DB2B]
If you ever have to reset the value of the identity column, you just drop the table with the identity column and recreate it. You might have to do this after loading additional data in a partition. (LOAD PART x is not allowed on a table with an identity column). By cutting the link between the actual sequence number used in the data table and the table with the identity column that generates the sequence number, you are in control of the numbers that will be assigned (at least you will be able to (re)set starting values). When both the data and the identity column are in the same table, DB2 is in control of the numbers that are assigned, irrespective of the actual data in the table. Even if, for example, you delete all rows from a table with an identity column, DB2 will not reset the MAXASSIGNEDVAL (highest value used so far). In order to hide the complexity of using two tables instead of one, you can mask the process with a stored procedure or a UDF. The stored procedure or UDF can be implemented as follows: 1. Insert a row into the identity column table. 2. Retrieve its value back using the IDENTITY_VAL_LOCAL() function. 3. Delete that row from the identity column table since we no longer need it. 4. Insert the actual row using the retrieved value as a unique identifier. The major advantage of this approach over the old style highest key assigned table is that there is no locking problem. The row(s) in the identity column table are never updated. We only INSERT and DELETE rows. In case of a very high volume transaction workload, you might want to avoid deleting the row from the identity column table in the stored procedure or UDF because it is an extra SQL call. Instead you can have an asynchronous process running in the background that cleans up rows at regular intervals. This does not cause any locking contention either, since inserts take a new page in case the candidate page is locked.
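The four steps above might be sketched like this inside the stored procedure or UDF. All names here are hypothetical: SEQ_TB is the one-column identity table (identity column SEQNO, filler column DUMMY) and DATA_TB stands for the actual data table:

```sql
INSERT INTO SEQ_TB (DUMMY) VALUES ('X');      -- 1. generate a new number
SET :newkey = IDENTITY_VAL_LOCAL();           -- 2. retrieve the value just assigned
DELETE FROM SEQ_TB WHERE SEQNO = :newkey;     -- 3. row no longer needed (never updated,
                                              --    so no locking contention)
INSERT INTO DATA_TB (KEYCOL, PAYLOAD)         -- 4. insert the real row, using the
  VALUES (:newkey, :payload);                 --    retrieved value as unique identifier
```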
Be aware of running out of values. If you created an identity column as SMALLINT, the maximum value that can be inserted is 32767. When you reach this value, depending on the parameters you used to define your identity column, you may either start getting duplicate values assigned (if you specified CYCLE) or your inserts fail. If you unload, drop, recreate, and (re)load the table, you may end up having different identity column values than you had before. This is very dangerous, especially if you have RI (referential integrity) relationships or if you saved the identity column values in other tables. This is also very important when you have to change the table design, such as eliminating a column. Many of these design changes require the table to be dropped and recreated. You probably want the same identity column values to be used after the drop/create operation as before, for example, because of RI relationships with other tables. This forces you to define the identity column as GENERATED BY DEFAULT. In order to guarantee uniqueness of the values, which is probably why you wanted the identity column in the first place, a unique index is required. Be aware that when you specify the keyword CYCLE, once you reach MAXVALUE, the numbering starts from MINVALUE again. DB2 does not warn you when reaching MAXVALUE. When using CYCLE, to be sure that the values are unique, you should create a unique index on the identity column, even when the identity column is defined as GENERATED ALWAYS. Important: You cannot alter any of the definitions for an identity column. Be sure you consider every possible situation of how the data is or may be used before adding an identity column to a table. When the identity column is defined as GENERATED ALWAYS and is the primary key in an RI structure, loading data into the RI structure is extremely complicated.
Since the values for the identity column are not known until they are loaded (because of the GENERATED ALWAYS attribute) you have to generate the foreign keys in the dependent tables based on these values since they have to match the primary key. This can be another reason for not using GENERATED ALWAYS.
The second aspect of using ROWID columns is that they can be used for a special type of access path to the data called direct row access. There are some advantages to using ROWIDs, but there is also a maintenance cost associated with them (REORGs cause the external representation of the ROWIDs to change), and it is easy to use them the wrong way in application programs.
There is one ROWID column per base table, so all LOB columns in a row have the same ROWID. The ROWID column is a unique identifier for each row of the table and the basis of the key for indexing the auxiliary table(s). Example 9-5 shows a table that contains LOB columns and, therefore, requires a ROWID column. Column IDCOL is defined with the ROWID data type. Notice that in order to work with a table that contains a LOB column, you also need to create the LOB table space, the auxiliary table, and the auxiliary index.
Example 9-5 ROWID column
CREATE TABLESPACE TSLITERA IN DB246300 #
CREATE TABLE SC246300.LITERATURE
  (TITLE     CHAR(25)
  ,IDCOL     ROWID NOT NULL GENERATED ALWAYS
  ,MOVLENGTH INTEGER
  ,LOBMOVIE  BLOB(2K)
  ,LOBBOOK   CLOB(10K)
  ) IN DB246300.TSLITERA #
CREATE LOB TABLESPACE TSLOB1 IN DB246300 #
CREATE AUX TABLE SC246300.LOBMOVIE_ATAB
  IN DB246300.TSLOB1
  STORES SC246300.LITERATURE COLUMN LOBMOVIE #
CREATE INDEX DB246300.AXLOB1 ON SC246300.LOBMOVIE_ATAB #
CREATE LOB TABLESPACE TSLOB2 IN DB246300 #
CREATE AUX TABLE SC246300.LOBBOOK_ATAB
  IN DB246300.TSLOB2
  STORES SC246300.LITERATURE COLUMN LOBBOOK #
CREATE INDEX DB246300.AXLOB2 ON SC246300.LOBBOOK_ATAB #
INSERT INTO SC246300.LITERATURE (TITLE)
  VALUES ('DON QUIJOTE DE LA MANCHA') #
INSERT INTO SC246300.LITERATURE (TITLE)
  VALUES ('MACBETH') #
SELECT TITLE, IDCOL FROM SC246300.LITERATURE #

TITLE                     IDCOL
------------------------  --------------------------------------------
MACBETH                   CDC8A0A420E6D44F260401D370180100000000000203
DON QUIJOTE DE LA MANCHA  7B80A0A420E6D64F260401D370180100000000000204
As mentioned in the introduction of this section, the use of a ROWID column is not restricted to tables that have LOB columns defined. You can have a ROWID column in a table that does not have any LOB columns defined. Selecting a ROWID results in 22 logical bytes, but it is stored physically as 17 bytes. Example 9-6 shows how to code a ROWID function in an SQL statement.
Example 9-6 SELECTing based on ROWIDs
SELECT TITLE, IDCOL
  FROM SC246300.LITERATURE
  WHERE IDCOL = ROWID(X'7B80A0A420E6D64F260401D370180100000000000204')

TITLE                     IDCOL
DON QUIJOTE DE LA MANCHA  7B80A0A420E6D64F260401D370180100000000000204
The value generated by DB2 is essentially a random number. If data is being propagated or copied from one table to another, the ROWID value is allowed to appear in the VALUES clause of an INSERT statement. In this case, the ROWID column has to be defined as GENERATED BY DEFAULT and to ensure uniqueness, a unique index on the ROWID column must be created. Note: Applications are not permitted to UPDATE the ROWID column.
In Example 9-7, we show how you can copy data from one table to another. If you have defined the ROWID as GENERATED ALWAYS, then DB2 generates a new ROWID for each inserted row (the ROWID values cannot be copied or propagated). Note that the LITERATURE_GA table no longer contains the LOB columns; this shows that you can also create a table with a ROWID column without any LOB columns. Note also that in that case you no longer need the LOB table space, auxiliary table, and index (they are only needed when LOB columns are defined).
Example 9-7 Copying data to a table with GENERATED ALWAYS ROWID via subselect
CREATE TABLE SC246300.LITERATURE_GA
  (TITLE     CHAR(30)
  ,IDCOL     ROWID NOT NULL GENERATED ALWAYS
  ,MOVLENGTH INTEGER
  ) IN DB246300.TS246300 ;

INSERT INTO SC246300.LITERATURE_GA
  (TITLE
  ,MOVLENGTH)
  SELECT TITLE
        ,MOVLENGTH
    FROM SC246300.LITERATURE;
-- The ROWID column (IDCOL) is not specified because you can not insert
-- values in a GENERATED ALWAYS column.

SELECT TITLE, IDCOL FROM SC246300.LITERATURE

TITLE                     IDCOL
MACBETH                   CDC8A0A420E6D44F260401D370180100000000000203
DON QUIJOTE DE LA MANCHA  7B80A0A420E6D64F260401D370180100000000000204
SELECT TITLE, IDCOL FROM SC246300.LITERATURE_GA;

TITLE                     IDCOL
MACBETH                   88DDF53DA0E6D808260401D370100100000000000207
DON QUIJOTE DE LA MANCHA  D1DDF53DA0E6D818260401D370100100000000000208
Notice that when copying data to a table with a GENERATED ALWAYS ROWID column via a subselect, the ROWIDs are completely different. New values are generated when the rows are inserted into the LITERATURE_GA table. The ROWID column is not selected because you cannot insert values into a GENERATED ALWAYS column. If GENERATED BY DEFAULT had been specified, you could supply a ROWID value for the inserted rows.
Example 9-8 Copying data to a table with GENERATED BY DEFAULT ROWID via subselect
CREATE TABLE SC246300.LITERATURE_GDEF
  (TITLE     CHAR(30)
  ,IDCOL     ROWID NOT NULL GENERATED BY DEFAULT
  ,MOVLENGTH INTEGER
  ) IN DB246300.TS246300 ;

CREATE UNIQUE INDEX DD ON SC246300.LITERATURE_GDEF (IDCOL) ;
-- This index is required for tables with GENERATED BY DEFAULT ROWID
-- columns in order to guarantee uniqueness. You receive an
-- SQLCODE -540 if the index does not exist.

INSERT INTO SC246300.LITERATURE_GDEF
  (TITLE
  ,IDCOL)
  SELECT TITLE
        ,IDCOL
    FROM SC246300.LITERATURE;

SELECT TITLE, IDCOL FROM SC246300.LITERATURE

TITLE                     IDCOL
MACBETH                   CDC8A0A420E6D44F260401D370180100000000000203
DON QUIJOTE DE LA MANCHA  7B80A0A420E6D64F260401D370180100000000000204

SELECT TITLE, IDCOL FROM SC246300.LITERATURE_GDEF

TITLE                     IDCOL
MACBETH                   CDC8A0A420E6D44F260401D370180100000000000209
DON QUIJOTE DE LA MANCHA  7B80A0A420E6D64F260401D37018010000000000020A
Notice that the ROWID values in the output (external representation) are similar when copying data to a table with a GENERATED BY DEFAULT ROWID column via a subselect. The only difference is in the last bytes, which encode the row's physical location. The ROWID values that are stored physically on DASD (the internal representation, the highlighted part in the example) are actually the same in both tables.
The precompiler turns SQL TYPE IS ROWID into normal host language variable definitions, as shown in Example 9-10 for a COBOL program. The variable that contains the ROWID column is defined as a 40-byte character field plus a 2-byte length field. The length field contains the actual length of the ROWID value (normally 22 bytes when the ROWID column has not been added to the table after the table was created).
Example 9-10 Coding a ROWID host variable in Cobol
*01 IDCOL USAGE SQL TYPE IS ROWID.      <-- What you code in Cobol
                                            (* is the comment placed by precompiler)
 01 IDCOL.                              <-- Precompiler replacement
    49 IDCOL-LEN  PIC S9(4) USAGE COMP.
    49 IDCOL-TEXT PIC X(40).
Important: LOAD PART x is not allowed for a table whose partitioning key contains a ROWID. This is because ROWIDs are pseudo-random generated and may point to any partition of the table, whereas LOAD PART x can only insert rows into partition x.
Direct row access can be very fast, but should only be used if extremely high performance is needed. In order for an application to be able to benefit from direct row access the application logic must first retrieve the row with data and at a later stage update or delete it.
If the application can use a cursor that is defined as FOR UPDATE OF, it should use this type of cursor instead of direct row access. An UPDATE WHERE CURRENT OF cursor also updates the row directly, as it does not have to search for the row before updating it; the cursor is already positioned on the row you are about to update. A cursor defined with FOR UPDATE OF also does not have to consider a fallback access path, as is the case with direct row access.

Many applications cannot use a FOR UPDATE OF cursor. For example, an online IMS transaction cannot use this type of cursor, since selecting the data and displaying it on the screen, and updating the data, are separate executions of a transaction. The cursor is no longer there when the transaction returns to do the update. Another reason for not using a FOR UPDATE OF cursor is the type of locking involved. The UPDATE or DELETE operation takes place immediately after the row has been fetched because of the WHERE CURRENT OF clause (that is what makes it a positioned UPDATE or DELETE). Therefore, the X-lock on the page/row is usually held longer than with a non-cursor update, which you usually perform as close to the COMMIT as possible (to reduce the amount of time the X-lock is held). Using a positioned UPDATE or DELETE may therefore have an impact on concurrency.

So, in order to be considered a candidate for direct row access, your application:
Needs to read the data first and update/delete it later
Is unable to use a FOR UPDATE OF cursor
If the application does not have to read the row, you cannot use direct row access. You must retrieve the ROWID column first and use it later on to read/update/delete the same data again. A good use may be if, for some reason, you have to update the same row several times.
Even though direct row access can be a great access path (avoiding index access and scanning needs), storing ROWID values in a table is usually a bad idea, especially when storing a large number of rows for a longer period of time as is explained in the next section.
If you store ROWID values from another table (as VARCHARs, not as ROWIDs), you should somehow update those values after the table the ROWIDs came from is reorganized. Therefore, storing the ROWIDs of one table in another table is not recommended. If you do rely on direct row access: Supplement the ROWID column predicate with another boolean term predicate that enables DB2 to use an existing index on the table or a good alternate access path. Create an index on the ROWID column, so that DB2 can use the index if direct row access is disabled.
Example 9-13 Inappropriate coding for direct row access
--Do NOT attempt the following:
SELECT IDCOL INTO :bookid
  FROM SC246300.LITERATURE ;
COMMIT ;
DELETE FROM SC246300.LITERATURE
  WHERE IDCOL = :bookid  -- This results in a table space scan
                         -- if the ROWID is no longer valid and
                         -- no index exists on IDCOL
  WHERE ... ;
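By contrast, a pattern that can qualify for direct row access reads the row, then reuses the ROWID in the same unit of work without an intervening COMMIT. This sketch uses the LITERATURE table from the examples above; the TITLE predicate and the new MOVLENGTH value are illustrative:

```sql
SELECT IDCOL
  INTO :bookid
  FROM SC246300.LITERATURE
  WHERE TITLE = 'MACBETH' ;

-- No COMMIT here: within the same unit of work the ROWID is still valid,
-- so DB2 can go straight to the row without index access or a scan.
UPDATE SC246300.LITERATURE
  SET MOVLENGTH = 120
  WHERE IDCOL = :bookid ;
```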
One of the checks performed to see whether a ROWID value is valid is verifying the EPOCH number, which can be found in SYSIBM.SYSTABLEPART. When a table space is created, the initial value of EPOCH is zero; it is incremented whenever an operation resets the page set or partition (such as REORG or LOAD REPLACE). Important: If you use non-IBM utilities that reset the page set or partition and you use direct row access, make sure that these utilities update the EPOCH column in SYSTABLEPART when they do so.
The DB2 predictive governor estimates the cost of the query with the assumption that the direct access is successful. It does not report the estimated cost of the alternative access path.
Part 3
Chapter 10.
In Example 10-1, we select the employee number, last name, and division from the TBEMPLOYEE table. The first character of the work department number represents the division within the organization. By using a CASE expression with a simple-when-clause, it is possible to translate the codes and list the full name of the division to which each employee belongs.
Example 10-1 SELECT with CASE expression and simple WHEN clause
SELECT EMPNO
      ,LASTNAME
      ,CASE SUBSTR(WORKDEPT,1,1)
         WHEN 'A' THEN 'ADMINISTRATION'
         WHEN 'B' THEN 'HUMAN RESOURCES'
         WHEN 'C' THEN 'DESIGN'
         WHEN 'D' THEN 'OPERATIONS'
         ELSE 'UNKNOWN DEPARTMENT'
       END AS DIVISION
  FROM SC246300.TBEMPLOYEE ;
Searched-when-clause specifies a search condition to be applied to each row or group of table data presented for evaluation, and the result when that condition is true.
WHEN search-condition THEN result-expression / NULL

For example:

WHEN ORDERDATE > CURRENT DATE + 60 DAYS THEN '2'
Example 10-2 gives an example of using a CASE expression with a searched-when-clause to update the salary of our employees depending on their job. As shown in the example, CASE expressions are not limited to SELECT statements.
Example 10-2 Update with CASE expression and searched WHEN clause
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = CASE
    WHEN JOB IN ('MANAGER', 'SUPRVSR') THEN SALARY * 1.10
    WHEN JOB IN ('DBA', 'SYS PROG')    THEN SALARY * 1.08
    WHEN JOB = 'PRGRMR'                THEN SALARY * 1.05
    ELSE                                    SALARY * 1.035
  END ;
Result-expression specifies an expression that follows the THEN and ELSE keywords. It specifies the result of a searched-when-clause or a simple-when-clause that is true, or a result if no case is true.
WHEN 'D' THEN 'OPERATIONS' ELSE 'UNKNOWN DEPARTMENT'
All result-expressions must be compatible. There must be at least one result-expression in the CASE expression with a defined data type; NULL cannot be specified for every result-expression. RAISE_ERROR cannot be specified for every result-expression either. For an example of using CASE expressions in a trigger, see Example 3-10 on page 24. Search-condition specifies a condition that is true, false, or unknown about a row or group of table data. The ELSE clause is either followed by a result-expression or NULL. The keyword END ends a case-expression.
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = SALARY * 1.10
  WHERE JOB IN ('MANAGER', 'SUPRVSR') ;
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = SALARY * 1.08
  WHERE JOB IN ('DBA', 'SYS PROG') ;
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = SALARY * 1.05
  WHERE JOB = 'PRGRMR' ;
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = SALARY * 1.035
  WHERE JOB NOT IN ('MANAGER', 'SUPRVSR', 'DBA', 'SYS PROG', 'PRGRMR') ;
Example 10-4 One update with the CASE expression and only one pass of the data
UPDATE SC246300.TBEMPLOYEE
  SET SALARY = CASE
    WHEN JOB IN ('MANAGER', 'SUPRVSR') THEN SALARY * 1.10
    WHEN JOB IN ('DBA', 'SYS PROG')    THEN SALARY * 1.08
    WHEN JOB = 'PRGRMR'                THEN SALARY * 1.05
    ELSE                                    SALARY * 1.035
  END ;
In the following examples we update ORDERSTATUS on table TBORDER depending on the value of ORDERDATE. We can accomplish this with three SQL statements and three passes of the data (see Example 10-5), or with one SQL statement using a CASE expression and one pass of the data (see Example 10-6). Additionally, since the CASE expression stops evaluating once the first WHEN clause evaluates to true, we can also simplify the logic (see Example 10-7).
Example 10-5 Three updates vs. one update with a CASE expression
UPDATE SC246300.TBORDER
SET ORDERSTATUS = '0'
WHERE ORDERDATE <= CURRENT DATE + 30 DAYS ;

UPDATE SC246300.TBORDER
SET ORDERSTATUS = '1'
WHERE ORDERDATE > CURRENT DATE + 30 DAYS
  AND ORDERDATE <= CURRENT DATE + 60 DAYS ;

UPDATE SC246300.TBORDER
SET ORDERSTATUS = '2'
WHERE ORDERDATE > CURRENT DATE + 60 DAYS ;
Example 10-6 Same update implemented with CASE expression and only one pass of the data
UPDATE SC246300.TBORDER
SET ORDERSTATUS =
  CASE WHEN ORDERDATE <= CURRENT DATE + 30 DAYS THEN '0'
       WHEN (ORDERDATE > CURRENT DATE + 30 DAYS
             AND ORDERDATE <= CURRENT DATE + 60 DAYS) THEN '1'
       WHEN ORDERDATE > CURRENT DATE + 60 DAYS THEN '2'
  END ;
Example 10-7 Same update with simplified logic
UPDATE SC246300.TBORDER
SET ORDERSTATUS =
  CASE WHEN ORDERDATE <= CURRENT DATE + 30 DAYS THEN '0'
       WHEN ORDERDATE <= CURRENT DATE + 60 DAYS THEN '1'
       ELSE '2'
  END ;
Example 10-8 uses a CASE expression to avoid division by zero errors. The following queries show an accumulation or summing operation. In the first query, it is possible to get an error because PAYMT_PAST_DUE_CT can be zero, and division by zero is not allowed. Notice that in the second part of the example, the CASE expression first checks whether the value of PAYMT_PAST_DUE_CT is zero; if it is, we return a zero, otherwise we perform the division and return the result to the SUM operation.
Example 10-8 Avoiding division by zero
SELECT REF_ID
      ,PAYMT_PAST_DUE_CT
      ,SUM ( BAL_AMT / PAYMT_PAST_DUE_CT )
FROM PAY_TABLE
GROUP BY REF_ID
        ,PAYMT_PAST_DUE_CT ;
-- This statement can get a division by zero error

versus

SELECT REF_ID
      ,PAYMT_PAST_DUE_CT
      ,SUM (CASE WHEN PAYMT_PAST_DUE_CT = 0 THEN 0
                 WHEN PAYMT_PAST_DUE_CT > 0 THEN BAL_AMT / PAYMT_PAST_DUE_CT
            END)
FROM PAY_TABLE
GROUP BY REF_ID
        ,PAYMT_PAST_DUE_CT ;
-- This statement avoids division by zero errors
Following is another example (Example 10-9) that also shows how to use a CASE expression to avoid division by zero errors. From the TBEMPLOYEE table, find all employees who earn more than 25 percent of their income from commission, but who are not fully paid on commission.
Example 10-9 Avoid division by zero, second example
SELECT EMPNO
      ,WORKDEPT
      ,SALARY + COMM
FROM SC246300.TBEMPLOYEE
WHERE (CASE WHEN SALARY = 0 THEN 0
            ELSE COMM/(SALARY + COMM)
       END) > 0.25 ;

SALARY    COMM      SALARY+COMM   COMM/(SALARY+COMM)
 90000    10000          100000   0.10
100000        0          100000   0.00
     0   100000          100000   0    (the CASE expression returns 0 instead of dividing)
Example 10-10 shows how to replace many UNION ALL clauses with one CASE expression. In this example, if the table is not clustered by the column ORDERSTATUS, DB2 probably does not use an index to access the data and scans the entire table once for each SELECT. By using the CASE expression we scan the data only once. This can be a significant performance improvement, depending on the size of the table and the number of rows that are accessed by the query.
Example 10-10 Replacing several UNION ALL clauses with one CASE expression
SELECT CLERK
      ,CUSTKEY
      ,'CURRENT' AS STATUS
      ,ORDERPRIORITY
FROM SC246300.TBORDER
WHERE ORDERSTATUS = '0'
UNION ALL
SELECT CLERK
      ,CUSTKEY
      ,'OVER 30' AS STATUS
      ,ORDERPRIORITY
FROM SC246300.TBORDER
WHERE ORDERSTATUS = '1'
UNION ALL
SELECT CLERK
      ,CUSTKEY
      ,'OVER 60' AS STATUS
      ,ORDERPRIORITY
FROM SC246300.TBORDER
WHERE ORDERSTATUS = '2' ;

versus

SELECT CLERK
      ,CUSTKEY
      ,CASE WHEN ORDERSTATUS = '0' THEN 'CURRENT'
            WHEN ORDERSTATUS = '1' THEN 'OVER 30'
            WHEN ORDERSTATUS = '2' THEN 'OVER 60'
       END AS STATUS
      ,ORDERPRIORITY
FROM SC246300.TBORDER ;
-- this statement is equivalent to the above
-- UNION ALL but requires only one pass through
-- the data
CASE expressions can also be used in before triggers to edit and validate input data and raise an error if the input is not correct. This can simplify programming by taking application logic out of programs and placing it in the database management system, reducing the number of lines of code needed, since the code exists in only one place. See the trigger example in Example 3-10 on page 24.
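As a sketch of the idea (the trigger name, SQLSTATE, and message text are invented here; Example 3-10 shows the trigger actually used in this book), a before trigger can combine a CASE expression with the RAISE_ERROR function to reject bad input:

```sql
-- Hypothetical before trigger: reject an order with a negative price
CREATE TRIGGER SC246300.TRORDPRC
  NO CASCADE BEFORE INSERT ON SC246300.TBORDER
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.TOTALPRICE =
      CASE WHEN N.TOTALPRICE >= 0 THEN N.TOTALPRICE
           ELSE RAISE_ERROR('75001', 'TOTALPRICE MUST NOT BE NEGATIVE')
      END #
```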
CASE expressions also provide additional DB2 Family consistency and thereby enhance application portability in a client/server environment.
Example 10-12 shows how a CASE expression can be used to pivot a table. Sometimes it is convenient to pivot the results of a query in order to simplify program logic to display the data on a screen, or to simplify the generation of a report.
Example 10-12 Pivoting tables
SELECT CUSTKEY
      ,SUM(CASE WHEN TOTALPRICE BETWEEN 0 AND 99 THEN 1 ELSE 0 END) AS SMALL
      ,SUM(CASE WHEN TOTALPRICE BETWEEN 100 AND 250 THEN 1 ELSE 0 END) AS MEDIUM
      ,SUM(CASE WHEN TOTALPRICE > 250 THEN 1 ELSE 0 END) AS LARGE
FROM SC246300.TBORDER
GROUP BY CUSTKEY #

Assume that table TBORDER contains the following rows:

CUSTKEY  TOTALPRICE
03       224
05       30
03       560
05       150
01       50
03       550
01       40
04       40
02       438
03       75

The result of the above query is:

CUSTKEY  SMALL  MEDIUM  LARGE
01       2      0       0
02       0      0       1
03       1      1       2
04       1      0       0
05       1      1       0
Example 10-13 shows how a CASE expression can be used to group the results of a query without having to retype the expression. Using the employee table, find the maximum, minimum, and average salary. Instead of finding these values for each department, assume that we want to combine some departments into the same group. Combine departments A00 and E21, and combine departments D11 and E11.
Example 10-13 Use CASE expression for grouping
SELECT CASE_DEPT
      ,MAX(SALARY) AS MAX_SALARY
      ,MIN(SALARY) AS MIN_SALARY
      ,AVG(SALARY) AS AVG_SALARY
FROM (SELECT SALARY
            ,CASE WORKDEPT
               WHEN 'A00' THEN 'A00_E21'
               WHEN 'E21' THEN 'A00_E21'
               WHEN 'D11' THEN 'D11_E11'
               WHEN 'E11' THEN 'D11_E11'
               ELSE WORKDEPT
             END AS CASE_DEPT
      FROM SC246300.TBEMPLOYEE) AS X
GROUP BY CASE_DEPT ;
Assume that table TBEMPLOYEE contains the following rows:

SALARY  WORKDEPT
70000   A00
60000   A00
50000   A00
36000   B20
40000   B20
44000   B20
42000   D11
54000   D11
45000   E11
30000   E21
40000   F20
51200   F20
80100   F20
40250   J11
50800   J11
The result of the above query is:

CASE_DEPT  MAX_SALARY  MIN_SALARY  AVG_SALARY
A00_E21    70000       30000       52500
B20        44000       36000       40000
D11_E11    54000       42000       47000
F20        80100       40000       57100
J11        50800       40250       45525
CASE expressions in the WHERE clause are stage 2 predicates.

Tip: If the best possible access path is not a table space scan and you have CASE expressions in the WHERE clause, make sure that you also code other indexable and filtering predicates in the WHERE clause whenever possible.

The result-expressions of a CASE expression (the expressions following the THEN and ELSE keywords) cannot be coded so that all of them are NULL. If you attempt to code a CASE expression that always returns a NULL result, you receive an SQLCODE -580. The data types of all result-expressions must be compatible (either all character, graphic, numeric, date, time, or timestamp); if they are not, you receive an SQLCODE -581. Columns with data types VARCHAR (greater than 255 bytes), VARGRAPHIC (greater than 127 bytes), and LOBs (CLOB, DBCLOB, or BLOB) cannot be used anywhere within a CASE expression. In addition, the only type of user-defined function allowed in the expression prior to the first WHEN keyword is a deterministic UDF, and it cannot contain external actions.
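Because a CASE predicate is evaluated at stage 2, pairing it with an indexable predicate lets DB2 filter rows earlier. A sketch (the date-range predicate is invented for illustration):

```sql
SELECT ORDERKEY
      ,CLERK
FROM SC246300.TBORDER
WHERE ORDERDATE > CURRENT DATE - 90 DAYS   -- indexable, filters rows first
  AND CASE WHEN SHIPPRIORITY > 0
           THEN SHIPPRIORITY
           ELSE 99
      END < 5 ;                            -- stage 2 predicate
```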
11
Chapter 11.
Union everywhere
In this chapter we discuss UNIONs and the many new places where they can now be used. Starting with DB2 Version 7, UNIONs may now be used in:
- Views
- Table expressions
- Predicates (subqueries)
- Inserts
- Updates
Because a UNION can now be coded everywhere that you could previously code a subselect, this feature is also called union everywhere. We also discuss some ways union everywhere can be applied in the physical design of extremely large tables.
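For example, a UNION can now appear directly in the fullselect of an INSERT statement. A sketch (the target table here is hypothetical):

```sql
-- Merge employee and customer phone numbers into one table
INSERT INTO SC246300.TBCONTACTS (LASTNAME, PHONENO)
  SELECT LASTNAME, PHONENO FROM SC246300.TBEMPLOYEE
  UNION
  SELECT LASTNAME, PHONENO FROM SC246300.TBCUSTOMER ;
```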
,CITYKEY FROM SC246300.TBEMPLOYEE UNION ALL SELECT SEX ,YEAR(DATE(DAYS(CURRENT DATE)-DAYS(BIRTHDATE))) AS AGE ,CITYKEY FROM SC246300.TBCUSTOMER ) AS EMPCUST WHERE AGE > 21 GROUP BY SEX, CITYKEY
--Before DB2 V7
SELECT CITYKEY
      ,CITYNAME
FROM SC246300.TBCITIES A
WHERE CITYKEY = SOME (SELECT CITYKEY FROM SC246300.TBCUSTOMER)
   OR CITYKEY = SOME (SELECT CITYKEY FROM SC246300.TBEMPLOYEE) ;

--In DB2 V7
SELECT CITYKEY
      ,CITYNAME
FROM SC246300.TBCITIES A
WHERE CITYKEY = SOME (SELECT CITYKEY FROM SC246300.TBCUSTOMER
                      UNION
                      SELECT CITYKEY FROM SC246300.TBEMPLOYEE ) ;
Note: The V7 query is coded with a UNION and not a UNION ALL. In this case, because of the =SOME predicate, DB2 converts the UNION into a UNION ALL. This shows up in the explain output. There is no sorting of the result of the UNION operation.
--In DB2 V7
SELECT CITYKEY
      ,CITYNAME
FROM SC246300.TBCITIES A
WHERE EXISTS (SELECT DUMMY FROM SC246300.TBCUSTOMER
              WHERE CITYKEY = A.CITYKEY
              UNION ALL
              SELECT DUMMY FROM SC246300.TBEMPLOYEE
              WHERE CITYKEY = A.CITYKEY ) ;
--In DB2 V7
SELECT CITYKEY
      ,CITYNAME
FROM SC246300.TBCITIES A
WHERE CITYKEY IN (SELECT CITYKEY FROM SC246300.TBCUSTOMER
                  UNION ALL
                  SELECT CITYKEY FROM SC246300.TBEMPLOYEE ) ;
,ADDRESS FROM SC246300.TBEMPLOYEE WHERE YEAR(BIRTHDATE) > 1968 UNION SELECT PHONENO ,'U' ,SEX ,BIRTHDATE ,CITYKEY ,FIRSTNAME ,LASTNAME ,ADDRESS FROM SC246300.TBCUSTOMER WHERE YEAR(BIRTHDATE) > 1968 #
Important: No updates are allowed on views containing the UNION or UNION ALL keywords.

Before UNIONs were allowed in views, there were only two options to perform the equivalent function. The first option was to create a physical table containing the merged data from the multiple tables. This required some work: you needed to unload the multiple tables and load them into the single table periodically. This meant that there was the potential for the data to be inaccurate, since the actual tables were not directly used for the users' queries. The second option was for the user to code UNIONs without the use of a view. However, coding the union is a bit more complex, and it had to be coded again for every query wanting to work with the same data. Additionally, column functions (such as AVG, COUNT, and SUM) could not range across the union, further complicating the process. These functions could be accomplished through additional steps, for example, by creating a temporary table containing the output of the SELECT statement with the union clause, and then running another SELECT to perform the column function against the temporary table.

DB2 V7 provides a simple answer to this problem. Data from the base tables can be merged dynamically by creating a view using unions. Once the view is coded, the user only has to refer to it to have access to data across several tables, and can now easily use the full suite of functions, such as COUNT, AVG, and SUM, across all the data. To get a combined list of employees and customers, we can create the view described in Example 11-8.
Example 11-8 Create view with UNION ALL CREATE VIEW SC246300.CUSTOMRANDEMPLOYEE AS SELECT FIRSTNME AS FIRSTNAME ,LASTNAME ,PHONENO ,BIRTHDATE ,SEX ,YEAR(DATE(DAYS(CURRENT DATE)-DAYS(BIRTHDATE))) AS AGE ,ADDRESS ,CITYKEY FROM SC246300.TBEMPLOYEE UNION ALL SELECT FIRSTNAME ,LASTNAME ,PHONENO ,BIRTHDATE ,SEX ,YEAR(DATE(DAYS(CURRENT DATE)-DAYS(BIRTHDATE))) AS AGE ,ADDRESS ,CITYKEY FROM SC246300.TBCUSTOMER ;
Example 11-9 shows a query to get the average age of employees and customers under 35 years old and how many of them there are.
Example 11-9 Use view containing UNION ALL SELECT AVG(AGE),COUNT(*) FROM SC246300.CUSTOMRANDEMPLOYEE WHERE AGE < 35
As you can see, the addition of unions in the CREATE VIEW statement simplifies the merging of data from tables for end-user queries and allows the use of the full suite of DB2 functions against the data without the need for temporary tables or complex SQL.
In Example 11-10 we show partial output from the PLAN_TABLE for the EXPLAIN of the query in Example 11-6 on page 139. This EXPLAIN output shows that query blocks 3 and 4 (QBLOCKNO 3 and 4) have the parent query block 2 (PARENT_QBLOCKNO 2). If you view the query processing as a tree, the outer SELECT is the root (PARENT_QBLOCKNO 0). At the next level you find a union (PARENT_QBLOCKNO 1), which has two fullselects, each of them with PARENT_QBLOCKNO 2.

Note: Examining the QBLOCKNO and PARENT_QBLOCKNO sequence is the only means of figuring out whether query rewrite has taken place. DB2 does not provide specific information on the outcome of a query rewrite.
Example 11-10 PLAN_TABLE output
QBLOCKNO  TNAME             ACCESSTYPE  QBLOCK_TYPE  PARENT_QBLOCKNO  TABLE_TYPE
1         INVITATION_CARDS              INSERT       0                T
2                                                    1                ----------
3         TBEMPLOYEE        R                        2                T
4         TBCUSTOMER        R                        2                T
However, the number of inserting, deleting, and updating programs is usually limited. Writing an insert, a delete, and an update module (or a few) to take care of the manipulation of data in these tables is usually sufficient. To prevent inserting rows into the wrong table, you should either define check constraints on the tables reflecting the same criteria that were used to divide the tables, or perform these updates through views on each individual table using the WITH CHECK OPTION. (Views referencing a single table can be updatable.) This way you can be sure that, although DB2 cannot verify the data integrity by design, there is a way to guarantee that rows are only inserted or changed in the correct tables. Even so, programs still have to know in which table to insert, update, or delete.

Special processing is also required in case the value in the column that was used to assign rows to different tables is changed and the row now has to move to a different table. This type of manipulation requires a delete from the original table and an insert into the new table, operations similar to updating a column that is part of the partitioning key before the introduction of the partitioning key update feature. Because of the complexity of handling inserts, updates, and deletes correctly, this type of design is probably most beneficial in a (mostly) read-only environment, like a data warehouse. Loading the data is not a problem. DB2 can load many tables in one run; you only need to specify the exact rules for selecting the correct table to which the data belongs. (That rule is our splitting criterion.)

Splitting tables is not worthwhile just because it is possible. Partitioning a table must always be the first choice for large tables. Usually partitioning the data is sufficient to solve most availability and manageability issues. DB2 offers more possibilities (more partitions, more parallelism, more flexibility) for partitioned tables in each new release.
Dividing tables should be a last resort when nothing else is good enough. Note also that the divided tables can themselves be partitioned as well. Let's try to explain the concept using an example. Assume that our TBORDER table is getting too large and we decide to go for the UNION in view design. Suppose that the application logic uses some sort of randomizing formula to assign ORDERKEYs randomly, taking into account that ORDERKEY is an INTEGER column. We decide to split the TBORDER table into three separate tables, TBORDER_1, TBORDER_2, and TBORDER_3. Because of the randomizer, each table receives more or less the same number of rows. Sample DDL is shown in Example 11-11.
Example 11-11 DDL to create split tables
SET CURRENT PATH = 'SC246300' # -- to make sure we find the UDT

CREATE TABLESPACE TS246331 IN DB246300 #

CREATE TABLE SC246300.TBORDER_1
( ORDERKEY INTEGER NOT NULL ,
  CUSTKEY CUSTOMER NOT NULL ,
  ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT,
  TOTALPRICE FLOAT NOT NULL ,
  ORDERDATE DATE NOT NULL WITH DEFAULT,
  ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT,
  CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT,
  SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT,
  STATE CHAR ( 2 ) NOT NULL WITH DEFAULT,
  REGION_CODE INTEGER,
  INVOICE_DATE DATE NOT NULL WITH DEFAULT,
  COMMENT VARCHAR ( 79 ),
  PRIMARY KEY (ORDERKEY),
  FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER,
  FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION )
IN DB246300.TS246331
WITH RESTRICT ON DROP #

-- Create unique index on primary key
CREATE UNIQUE INDEX SC246300.X1TBORDER_1 ON SC246300.TBORDER_1(ORDERKEY ASC) #
-- Create indexes on foreign keys
CREATE INDEX SC246300.X2TBORDER_1 ON SC246300.TBORDER_1(CUSTKEY ASC) #
CREATE INDEX SC246300.X3TBORDER_1 ON SC246300.TBORDER_1(REGION_CODE ASC) #

CREATE TABLESPACE TS246332 IN DB246300 #

CREATE TABLE SC246300.TBORDER_2
( ORDERKEY INTEGER NOT NULL ,
  CUSTKEY CUSTOMER NOT NULL ,
  ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT,
  TOTALPRICE FLOAT NOT NULL ,
  ORDERDATE DATE NOT NULL WITH DEFAULT,
  ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT,
  CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT,
  SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT,
  STATE CHAR ( 2 ) NOT NULL WITH DEFAULT,
  REGION_CODE INTEGER,
  INVOICE_DATE DATE NOT NULL WITH DEFAULT,
  COMMENT VARCHAR ( 79 ),
  PRIMARY KEY (ORDERKEY),
  FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER,
  FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION )
IN DB246300.TS246332
WITH RESTRICT ON DROP #

CREATE UNIQUE INDEX SC246300.X1TBORDER_2 ON SC246300.TBORDER_2(ORDERKEY ASC) #
CREATE INDEX SC246300.X2TBORDER_2 ON SC246300.TBORDER_2(CUSTKEY ASC) #
CREATE INDEX SC246300.X3TBORDER_2 ON SC246300.TBORDER_2(REGION_CODE ASC) #
CREATE TABLESPACE TS246333 IN DB246300 #

CREATE TABLE SC246300.TBORDER_3
( ORDERKEY INTEGER NOT NULL ,
  CUSTKEY CUSTOMER NOT NULL ,
  ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT,
  TOTALPRICE FLOAT NOT NULL ,
  ORDERDATE DATE NOT NULL WITH DEFAULT,
  ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT,
  CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT,
  SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT,
  STATE CHAR ( 2 ) NOT NULL WITH DEFAULT,
  REGION_CODE INTEGER,
  INVOICE_DATE DATE NOT NULL WITH DEFAULT,
  COMMENT VARCHAR ( 79 ),
  PRIMARY KEY (ORDERKEY),
  FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER,
  FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION )
IN DB246300.TS246333
WITH RESTRICT ON DROP #

CREATE UNIQUE INDEX SC246300.X1TBORDER_3 ON SC246300.TBORDER_3(ORDERKEY ASC) #
CREATE INDEX SC246300.X2TBORDER_3 ON SC246300.TBORDER_3(CUSTKEY ASC) #
CREATE INDEX SC246300.X3TBORDER_3 ON SC246300.TBORDER_3(REGION_CODE ASC) #
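To keep rows out of the wrong table, check constraints mirroring the splitting criteria can be added to each table. A sketch (the constraint names are invented; the ranges are the same ones used in the VWORDER view in Example 11-12):

```sql
ALTER TABLE SC246300.TBORDER_1
  ADD CONSTRAINT CKORDKEY1 CHECK (ORDERKEY BETWEEN 1 AND 700000000) #
ALTER TABLE SC246300.TBORDER_2
  ADD CONSTRAINT CKORDKEY2 CHECK (ORDERKEY BETWEEN 700000001 AND 1400000000) #
ALTER TABLE SC246300.TBORDER_3
  ADD CONSTRAINT CKORDKEY3 CHECK (ORDERKEY BETWEEN 1400000001 AND 2147483647) #
```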
Create a view, as shown in Example 11-12, that is used to retrieve rows from any of the three tables. This way it is transparent to the user in which physical table the data is stored. The user only uses this view to retrieve the data. The WHERE clauses are extremely important and don't serve just as documentation to show the ORDERKEY range of each table. This information is used by the optimizer to avoid scanning tables that cannot contain qualifying rows.
Example 11-12 DDL to create UNION in view CREATE VIEW SC246300.VWORDER AS SELECT * FROM SC246300.TBORDER_1 WHERE ORDERKEY BETWEEN 1 AND 700000000 UNION ALL SELECT * FROM SC246300.TBORDER_2 WHERE ORDERKEY BETWEEN 700000001 AND 1400000000 UNION ALL SELECT * FROM SC246300.TBORDER_3 WHERE ORDERKEY BETWEEN 1400000001 AND 2147483647 #
When the user executes the query in Example 11-13, DB2 is smart enough, based on the information in your query and the information in the view definition, to determine that the data can only come from TBORDER_1, and that there is no need to look at any row in TBORDER_2 and TBORDER_3. However, keep in mind that, although each individual subquery can exploit parallelism when accessing its table, the subqueries themselves run one after the other: the first subquery has to complete before the next one starts. (DB2 has to finish selecting from TBORDER_1 before it starts working on TBORDER_2.)
Example 11-13 Sample SELECT from view to mask the underlying tables
SELECT *
FROM SC246300.VWORDER
WHERE ORDERKEY = 123456
  AND CUSTKEY = CUSTOMER('000006') # -- CUSTKEY is using a UDT
                                    -- called CUSTOMER. Therefore a
                                    -- cast function is required
When doing an insert, update, or delete against the set of tables, you should use one of the views shown in Example 11-14. The use of the WITH CHECK OPTION prevents you from inserting into the wrong table. An example of an insert using the wrong view is shown in Example 11-15. This is very important because the view that is used to retrieve rows restricts the data that can be retrieved from each table. If, by accident, a row with an ORDERKEY of 555 were to end up in table TBORDER_2, you would not be able to retrieve it using the VWORDER view. The WHERE clause associated with TBORDER_2 in the VWORDER view eliminates that ORDERKEY from the result set.
Example 11-14 Views to use for UPDATE and DELETE
CREATE VIEW SC246300.VWORDER_1UPD AS
  SELECT * FROM SC246300.TBORDER_1
  WHERE ORDERKEY BETWEEN 1 AND 700000000
  WITH CHECK OPTION #

CREATE VIEW SC246300.VWORDER_2UPD AS
  SELECT * FROM SC246300.TBORDER_2
  WHERE ORDERKEY BETWEEN 700000001 AND 1400000000
  WITH CHECK OPTION #

CREATE VIEW SC246300.VWORDER_3UPD AS
  SELECT * FROM SC246300.TBORDER_3
  WHERE ORDERKEY BETWEEN 1400000001 AND 2147483647
  WITH CHECK OPTION #
Example 11-15 WITH CHECK OPTION preventing INSERT
INSERT INTO SC246300.VWORDER_1UPD
VALUES ( 700000001    -- ORDERKEY INTEGER NOT NULL
       ,'01'          -- CUSTKEY CUSTOMER NOT NULL
       ,'N'           -- ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT
       , 123.45       -- TOTALPRICE FLOAT NOT NULL
       ,CURRENT DATE  -- ORDERDATE DATE NOT NULL WITH DEFAULT
       ,'LOW'         -- ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT
       ,'BART'        -- CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT
       ,5             -- SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT
       ,'NW'          -- STATE CHAR ( 2 ) NOT NULL WITH DEFAULT
       ,55            -- REGION_CODE INTEGER
       ,CURRENT DATE  -- INVOICE_DATE DATE NOT NULL WITH DEFAULT
       ,'MY ORDER'    -- COMMENT VARCHAR ( 79 )
       ) #
-------------------------------------------------------------------------------
DSNT408I SQLCODE = -161, ERROR: THE INSERT OR UPDATE IS NOT ALLOWED BECAUSE A
         RESULTING ROW DOES NOT SATISFY THE VIEW DEFINITION
DSNT418I SQLSTATE = 44000 SQLSTATE RETURN CODE
This type of view does not help as much when updating or deleting rows. However, you receive an SQLCODE +100 when you try to run the statement using the wrong view. This should be a signal that you might be using the wrong table, but it is of course no guarantee that this is actually the case; it is also possible that the row you are trying to update or delete does not exist in any of the tables. To close this loophole as well, at some extra cost, you could select the row first using the VWORDER view before trying to update or delete it using the correct VWORDER_xUPD view. Note also that when your TBORDER tables need to be the parent table in an RI relationship, this construct cannot be used; a foreign key cannot point to a set of tables (our TBORDER_x tables). In summary, although splitting very large tables into several smaller ones and using the union in view concept to make it transparent to the application has some attractive features, there are also several drawbacks that have to be carefully considered before implementing this design.
12
Chapter 12.
Scrollable cursors
The ability to scroll backwards as well as forwards is a requirement of many screen-based applications. DB2 V7 introduces facilities not only to scroll forwards and backwards, but also to jump around and directly retrieve a row that is located at any position within the cursor result table. DB2 can also, if desired, maintain the relationship between the rows in the result set and the data in the base table. That is, the scrollable cursor function allows changes made outside the opened cursor to be reflected. For example, if the currently fetched row has been updated while being processed by the user, and an update is attempted, a warning is returned by DB2 to reflect this. When another user has deleted the row currently fetched, DB2 returns an error SQLCODE if an attempt is made to update the deleted row.
Non-scrollable cursor
Used by an application program to retrieve a set of rows or retrieve a result set from a stored procedure.
- The rows must be processed one at a time.
- The rows are fetched sequentially.
- Result sets may be stored in a workfile.
Scrollable cursor
Used by an application program to retrieve a set of rows or retrieve a result set from a stored procedure.
- The rows can be fetched in random order.
- The rows can be fetched forward or backward.
- The rows can be fetched relative to the current position or from the top of the result table or result set.
- The result set is fixed at OPEN CURSOR time.
- Result sets are stored in declared temporary tables.
- Result sets go away at CLOSE CURSOR time.
Note: DB2 Version 7 supports a subset of the SQL99 standard for scrollable cursors. Sensitive DYNAMIC scrollable cursors are not supported by DB2 Version 7. To allow for possible support for sensitive dynamic cursors in a future release of DB2, the keyword STATIC must be explicitly specified for a sensitive scrollable cursor.
Insensitive
If an attempt is made to code the FOR UPDATE OF clause in a cursor defined as INSENSITIVE, then the bind returns an SQLCODE:
-228 FOR UPDATE CLAUSE SPECIFIED FOR READ-ONLY SCROLLABLE CURSOR USING cursor-name.
The characteristics of an insensitive scrollable cursor are:
- The cursor cannot be used to issue positioned updates and deletes.
- FETCH processing on the result table is insensitive to changes made to the base table after the result table is built (even if changes are made by the current agent outside the cursor).
- The number and content of the rows stored in the result table are fixed at OPEN CURSOR time and do not change.
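A minimal declaration of such a cursor might look like this (a sketch using this book's sample employee table):

```sql
DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
  SELECT EMPNO
        ,LASTNAME
  FROM SC246300.TBEMPLOYEE ;
```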
Sensitive
Fundamentally, the SENSITIVE STATIC cursor is updatable. As such, the FOR UPDATE OF clause can be coded for a SENSITIVE cursor. If the SELECT statement connected to a cursor declared as SENSITIVE STATIC uses any keywords that force the cursor to be read-only, the bind rejects the cursor declaration. In this case the bind returns an SQLCODE:
-243 SENSITIVE CURSOR cursor-name CANNOT BE DEFINED FOR THE SPECIFIED SELECT STATEMENT.
Important: Use of column functions, such as MAX and AVG, and table joins forces a scrollable cursor into implicit read-only mode; they are therefore not valid for a SENSITIVE cursor. In contrast with non-scrollable cursors, the use of the ORDER BY clause in a scrollable cursor does not make it read-only.

A SENSITIVE cursor can be made explicitly read-only by including FOR FETCH ONLY in the DECLARE CURSOR statement. Even if a SENSITIVE cursor is read-only, it is still aware of all changes made to the base table data through updates and deletes.

The characteristics of a sensitive scrollable cursor are:
- The cursor can be used to issue positioned updates and deletes.
- FETCH processing on the result table is sensitive (to varying degrees) to changes made to the base table after the result table has been built.
- The number of rows in the result table does not change, but the row content can change.
- STATIC cursors are insensitive to inserts.
Some applications may require that a result table remains constant (STATIC) as the application scrolls through it. For example, some accounting applications require data to be constant. On the other hand, other applications, like airline reservations, may need to see the latest flight availability no matter how much they scroll through the data (DYNAMIC). Furthermore, sensitivity can be limited to the visibility of changes made by the same cursor and process that is fetching rows, or the sensitivity can be extended to also see updates made outside the cursor and process by refreshing the row explicitly. Table 12-1 can be used to help you decide which type of cursor to use:
- If you just want to blast through your data, choose a forward-only cursor.
- If you want to scroll through a constant copy of your data, you may want to use an INSENSITIVE cursor instead of making a regular cursor materialize the result table.
- If you want to control the cursor's position and scroll back and forth, choose a scrollable cursor.
- If you don't care about the freshness of the data, choose an INSENSITIVE cursor.
- If you want fresh data some of the time, choose SENSITIVE on DECLARE CURSOR and SENSITIVE or INSENSITIVE on FETCH.
- If you want fresh data all the time, choose a SENSITIVE cursor and SENSITIVE on FETCH or a nonspecific FETCH.
Table 12-1 Cursor type comparison

Cursor type                        Result table                     Visibility of own        Visibility of other   Updatability
                                                                    cursor's changes         cursors' changes
Non-scrollable (materialized)      Fixed, workfile                  No                       No                    No
Non-scrollable (not materialized)  No workfile, base table access   Yes                      Yes                   Yes
INSENSITIVE SCROLL                 Fixed, declared temp table       No                       No                    No
SENSITIVE STATIC SCROLL            Fixed, declared temp table       Yes (inserts not         Yes (not to           Yes
                                                                    allowed)                 inserts)
SENSITIVE ** DYNAMIC SCROLL                                         Yes                      Yes                   Yes
**Note: Sensitive Dynamic scrollable cursors are not available as of DB2 V7 and are only shown here for comparison purposes.
In this section we discuss how to use a scrollable cursor. We discuss the following topics:
- Declaring a scrollable cursor
- Opening a scrollable cursor
- Fetching rows from a scrollable cursor
- Moving the cursor
- Using functions in a scrollable cursor
DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT ....
  FROM ....
  WHERE ....
  FOR UPDATE OF .... ;
The record identifier (RID) of the row is also retrieved and stored with the rows in the temporary table. If the cursor is declared as SENSITIVE STATIC, the RIDs are used to maintain changes between the result set row and the base table row. Note: DECLARE CURSOR statements that do not use the new keyword SCROLL do not create the temporary table and are only able to scroll in a forward direction.
It is important to note that, for a cursor declared as INSENSITIVE or SENSITIVE STATIC, the number of rows in the result table does not change once the rows are retrieved from the base table and stored. This means that all subsequent inserts made by other users or by the current process into the base table, even those that would fit the selection criteria of the cursor's SELECT statement, are not visible to the cursor. Only updates and deletes of data within the result set may be seen by the cursor. Once the result set has been retrieved, it is only visible to the current cursor process and remains available until a CLOSE CURSOR is executed or the process itself completes. For programs, the result set is dropped on exit of the current program; for stored procedures, the cursors defined are allocated from the calling program, and the result set is dropped when the calling program concludes.
Example 12-2 Opening a scrollable cursor
DECLARE CUR1 SENSITIVE STATIC SCROLL CURSOR WITH HOLD FOR
  SELECT ACCOUNT
        ,ACCOUNT_NAME
        ,CREDIT_LIMIT
        ,TYPE
  FROM ACCOUNT
  WHERE CREDIT_LIMIT > 20000 ;
...
OPEN CUR1
...
If a LOB column is selected in the DECLARE CURSOR statement, the LOB column is represented by a LOB descriptor column in the result table. The LOB descriptor column is 120 bytes and holds information which enables DB2 to quickly retrieve the associated LOB column value from the auxiliary table when the application fetches a row from the result table. DB2 has to retrieve the LOB column value (from the auxiliary table) when it processes either a sensitive or an insensitive FETCH request. Suppose an application issues a ROLLBACK TO SAVEPOINT S1, and savepoint S1 was set before scrollable cursor C1 was opened. Once the rollback has completed, the result table for C1 contains the same data as it did on completion of the open cursor statement for C1. In addition, the rollback does not change the position of cursor C1. The OPEN CURSOR and ALLOCATE CURSOR statement return the following information in the SQLCA for scrollable cursors regarding the sensitivity of the cursor even though the SQLCODE and SQLSTATE are zero:
Whether the cursor is scrollable or not: this information is provided in the SQLWARN1 field. It is set to:
  S = scrollable
  N = non-scrollable cursor
Effective sensitivity of the cursor (that is, insensitive versus sensitive): this information is returned in SQLWARN4. SQLWARN4 is not set for non-scrollable cursors.
  I = insensitive
  S = sensitive
Effective capability of the cursor (that is, whether the scrollable cursor is updatable, deletable, or read-only): this information is returned in SQLWARN5. SQLWARN5 is not set for non-scrollable cursors.
  1 = Read-only; the result table of the query is read-only, either because of the content of the SELECT statement (implicitly read-only) or because FOR READ/FETCH ONLY was explicitly specified.
  2 = Read and delete allowed; the result table of the query is deletable, but not updatable.
  4 = Read, delete, and update allowed; the result table of the query is deletable and updatable.
When SQLWARN1, SQLWARN4, and SQLWARN5 are set, SQLWARN0 (the summary flag) is NOT set for these cases.

Note: If the size of the result table exceeds the DB2 established limit for declared temporary tables, a resource unavailable message, DSNT501I, is generated with the appropriate resource reason code at OPEN CURSOR time.
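As a quick reference, the field values above can be folded into a small lookup routine. The sketch below is illustrative Python, not anything DB2 supplies; the function name is ours, and the inputs stand in for the SQLCA fields described above.

```python
def describe_scrollable_cursor(sqlwarn1, sqlwarn4=None, sqlwarn5=None):
    """Interpret the SQLCA warning fields returned by OPEN CURSOR /
    ALLOCATE CURSOR for a scrollable cursor (illustrative sketch)."""
    scroll = {"S": "scrollable", "N": "non-scrollable"}[sqlwarn1]
    if sqlwarn1 == "N":
        # SQLWARN4 and SQLWARN5 are not set for non-scrollable cursors
        return scroll
    sensitivity = {"I": "insensitive", "S": "sensitive"}[sqlwarn4]
    capability = {
        "1": "read-only",
        "2": "read and delete allowed",
        "4": "read, delete and update allowed",
    }[sqlwarn5]
    return f"{scroll}, {sensitivity}, {capability}"

print(describe_scrollable_cursor("S", "S", "4"))
# scrollable, sensitive, read, delete and update allowed
```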
The FETCH statement accepts a sensitivity option (INSENSITIVE or SENSITIVE) and a scroll orientation: NEXT, PRIOR, FIRST, LAST, CURRENT, BEFORE, AFTER, ABSOLUTE, or RELATIVE.
Below is a complete list of the new keywords which have been added to the FETCH statement syntax for moving the cursor.

NEXT
  Positions the cursor on the next row of the result table relative to the current cursor position and fetches the row. This is the default.
PRIOR
  Positions the cursor on the previous row of the result table relative to the current position and fetches the row.
FIRST
  Positions the cursor on the first row of the result table and fetches the row.
LAST
  Positions the cursor on the last row of the result table and fetches the row.
CURRENT
  Fetches the current row without changing position within the result table. If CURRENT is specified and the cursor is not positioned at a valid row (for example, BEFORE the beginning of the result table), a warning SQLCODE +231, SQLSTATE 02000 is returned.
BEFORE
  Positions the cursor before the first row of the result table. No output host variables can be coded with this keyword, as no data can be returned.
AFTER
  Positions the cursor after the last row of the result table. No output host variables can be coded with this keyword, as no data can be returned.
ABSOLUTE
  Used with either a host-variable or integer-constant. This keyword evaluates the host-variable or integer-constant and fetches the row at the row number specified. If the value of the host-variable or integer-constant is 0, the cursor is positioned before the first row and the warning SQLCODE +100, SQLSTATE 02000 is returned. If the value is greater than the number of rows in the result table, the cursor is positioned after the last row in the result table, and the warning SQLCODE +100, SQLSTATE 02000 is returned.
RELATIVE
  Used with either a host-variable or integer-constant. This keyword evaluates the host-variable or integer-constant and fetches the row that is that many rows away from the current cursor position. If the value is 0, the current cursor position is maintained and the row is fetched. If the value is less than 0, the cursor is moved that many rows toward the beginning of the result table and the row is fetched. If the value is greater than 0, the cursor is moved that many rows toward the end of the result table and the row is fetched. If the specified relative position lies before the first row or after the last row, a warning SQLCODE +100, SQLSTATE 02000 is returned, the cursor is positioned either before the first row or after the last row, and no data is returned.
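To make the positioning rules concrete, here is a small Python simulation of the scroll orientations. This is an illustrative model, not DB2 code: positions run from 1 to nrows, position 0 stands for "before the first row", and nrows + 1 for "after the last row".

```python
def fetch(orientation, position, nrows, k=0):
    """Illustrative model of the DB2 scroll orientations (not DB2 code).
    Returns (new_position, sqlcode, row_returned)."""
    if orientation == "BEFORE":
        return 0, 0, False               # positioning only, no data returned
    if orientation == "AFTER":
        return nrows + 1, 0, False       # positioning only, no data returned
    if orientation == "ABSOLUTE":
        if k == 0:
            return 0, 100, False         # ABSOLUTE 0: before first row, +100
        target = k if k > 0 else nrows + 1 + k   # ABSOLUTE -1 is the last row
    elif orientation == "RELATIVE":
        target = position + k
    elif orientation == "NEXT":
        target = position + 1
    elif orientation == "PRIOR":
        target = position - 1
    elif orientation == "FIRST":
        target = 1
    elif orientation == "LAST":
        target = nrows
    elif orientation == "CURRENT":
        if position < 1 or position > nrows:
            return position, 231, False  # +231: not positioned on a valid row
        target = position
    if target < 1:
        return 0, 100, False             # moved before the first row: +100
    if target > nrows:
        return nrows + 1, 100, False     # moved after the last row: +100
    return target, 0, True               # row fetched, SQLCODE 0

print(fetch("ABSOLUTE", 0, 20, -1))      # (20, 0, True): ABSOLUTE -1 = LAST
print(fetch("RELATIVE", 10, 20, -3))     # (7, 0, True)
```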
In Figure 12-2 on page 167, we show you the results of various fetches when the cursor is currently positioned on the 10th row. In Figure 12-3 on page 168, we show you a matrix of the possible SQLCODEs returned after a FETCH from a scrollable cursor.
A FETCH INSENSITIVE request retrieves the row data from the result table. A FETCH SENSITIVE request retrieves the row data from the base table.

Tip: Specify FETCH SENSITIVE when you want DB2 to check whether the underlying data has changed since you last retrieved it from the base table, for example, because you intend to update or delete the row, or simply because you need the latest data. See Maintaining updates on page 174 for more details.

With SENSITIVE STATIC scrollable cursors, the number of rows in the result table is fixed, but deletes and updates to the underlying table can create delete holes or update holes. In Update and delete holes on page 170 we discuss in detail how these holes are created.
Updates and deletes can be made through a SENSITIVE STATIC cursor using the WHERE CURRENT OF clause on UPDATE and DELETE statements. However, when the INSENSITIVE keyword is used on the FETCH statement, we are saying that we do not want the cursor to reflect updates or deletes of the cursor's rows made by statements outside the cursor. In this case, the FETCH statement shows holes made by updates and deletes from within the current cursor, but does not show any holes created by updates and deletes outside the cursor. If the application issues a FETCH INSENSITIVE request, it sees the positioned updates and deletes it has made via the current scrollable cursor. These changes are visible to the application because DB2 updates both the base table and the result table when a positioned update or delete is issued by the application owning the cursor. Changes made by agents outside the cursor are not visible to data returned by the FETCH INSENSITIVE.

CURSOR SENSITIVE STATIC and FETCH SENSITIVE
A temporary table is created at open cursor time to hold all rows returned by the SELECT statement. Updates and deletes can be made using the WHERE CURRENT OF CURSOR clause. If an update is to be made through the cursor, FOR UPDATE OF must be coded in the DECLARE CURSOR statement. The FETCH statement returns all updates or deletes made by the cursor, all updates and deletes made outside the cursor but within the current process, and all committed updates and deletes made to the rows within the cursor's result set by all other processes. As you would expect, the application sees the uncommitted changes, as well as the committed changes, it has made to the base table. However, it does not see uncommitted changes made by other agents unless isolation level UR is in effect for the scrollable cursor.
Table 12-2 Sensitivity of FETCH to changes made to the base table

DECLARE CURSOR  FETCH        Comment               Sensitivity to changes made to the base table
--------------  -----------  --------------------  ---------------------------------------------
INSENSITIVE     INSENSITIVE  Default for FETCH is  None
                             INSENSITIVE
INSENSITIVE     SENSITIVE    Invalid combination   Not applicable
SENSITIVE       INSENSITIVE  Valid combination     Application sees the changes it has made using
                                                   positioned UPDATEs and DELETEs
SENSITIVE       SENSITIVE    Valid combination     Application sees:
                                                   1. Changes it has made using positioned and
                                                      searched UPDATEs and DELETEs
                                                   2. Changes it has made outside the cursor
                                                      (using searched UPDATEs and DELETEs)
                                                   3. Committed UPDATEs and DELETEs made by
                                                      other applications
Tip: A FETCH request never sees new rows which have been INSERTed into the base table after the result table has been built.
When a FETCH SENSITIVE request is processed, DB2 attempts to retrieve the corresponding row from the base table:
  If the row is not found, DB2 marks the result table row as a delete hole and returns SQLCODE +222.
  If the row is found, DB2 checks whether the base table row still satisfies the search condition specified in the DECLARE CURSOR statement. If the row no longer satisfies the search condition, DB2 marks the result table row as an update hole and returns SQLCODE +222.
  If the row still satisfies the search condition, DB2 refreshes the result table row with the column values from the base table row and returns the result table row to the application.
A delete or update hole is counted as a row by DB2 when a FETCH request is processed, that is, holes are never skipped over. An application cannot distinguish between a delete hole and an update hole. Once a row in the result table is marked as a delete hole, then that row remains a delete hole. In Example 12-3, we show a FETCH SENSITIVE request which creates an update hole in the result table. The update hole is created because the base table row no longer satisfies the search condition WHERE TXNID = 'SMITH'.
Example 12-3 Example of a FETCH SENSITIVE request which creates an update hole

DECLARE CURSOR C1 SENSITIVE STATIC SCROLL FOR
  SELECT TXNDATE, AMT
  FROM TBTRAN
  WHERE TXNID = 'SMITH'
  ORDER BY TXNDATE

Sequence of events:
1. Cursor C1 is opened and the result table is built.
2. Another application updates the column value of TXNID of the row with RID A0D to SMYTHE.
3. A FETCH SENSITIVE is invoked that positions the cursor on the row with RID A0D.
4. DB2 checks the base table row (using its RID); since the row no longer satisfies the search condition WHERE TXNID = 'SMITH', DB2 changes the current row in the result table to an update hole and returns SQLCODE +222.

Before update hole is created:

BASE TABLE TBTRAN                      RESULT TABLE
RID  TXNID   TXNDATE  DESC  AMT        RID  TXNDATE  AMT
902  Brown   080200   CR    +500       903  071200   +500
903  Smith   071200   CR    +500       A0F  080100   +200
A04  Brown   080900   DB    -20        A0C  080200   +500
A05  Doe     081000   CR    +500       A07  080300   -40
A07  Smith   080300   DB    -40        A0D  080400   +100
A08  George  080800   DB    -100       A09  081600   +800
A09  Smith   081600   CR    +800
A0A  Black   081700   CR    +200
A0B  White   071100   DB    -50
A0C  Smith   080200   CR    +500
A0D  Smith   080400   CR    +100
A0E  Black   080500   DB    -500
A0F  Smith   080100   CR    +200

After update hole is created:

BASE TABLE TBTRAN                      RESULT TABLE
RID  TXNID   TXNDATE  DESC  AMT        RID  TXNDATE  AMT
902  Brown   080200   CR    +500       903  071200   +500
903  Smith   071200   CR    +500       A0F  080100   +200
A04  Brown   080900   DB    -20        A0C  080200   +500
A05  Doe     081000   CR    +500       A07  080300   -40
A07  Smith   080300   DB    -40        A0D  (update hole)
A08  George  080800   DB    -100       A09  081600   +800
A09  Smith   081600   CR    +800
A0A  Black   081700   CR    +200
A0B  White   071100   DB    -50
A0C  Smith   080200   CR    +500
A0D  Smythe  080400   CR    +100
A0E  Black   080500   DB    -500
A0F  Smith   080100   CR    +200
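The hole-creation sequence above can be sketched in a few lines of illustrative Python. This is a simulation of the behavior described, not DB2 code; the dictionary layout and the predicate argument are assumptions of the model.

```python
def fetch_sensitive(result_table, rid, base_table, predicate):
    """Sketch of DB2's FETCH SENSITIVE row check (illustrative, not DB2
    internals).  result_table maps RID -> saved column values, or None
    once the row has become a hole; base_table maps RID -> current row;
    predicate is the cursor's search condition.
    Returns (sqlcode, row)."""
    if result_table[rid] is None:
        return 222, None                 # positioned on an existing hole
    base_row = base_table.get(rid)
    if base_row is None:
        result_table[rid] = None         # row deleted: delete hole, +222
        return 222, None
    if not predicate(base_row):
        result_table[rid] = None         # no longer qualifies: update hole, +222
        return 222, None
    result_table[rid] = dict(base_row)   # refresh with current base values
    return 0, result_table[rid]

# Re-create the Example 12-3 scenario: row A0D was updated to SMYTHE,
# so it no longer satisfies WHERE TXNID = 'SMITH'.
base   = {"A0D": {"TXNID": "SMYTHE", "TXNDATE": "080400", "AMT": 100}}
result = {"A0D": {"TXNID": "SMITH",  "TXNDATE": "080400", "AMT": 100}}
code, row = fetch_sensitive(result, "A0D", base, lambda r: r["TXNID"] == "SMITH")
print(code, result["A0D"])               # 222 None: an update hole was created
```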
Cursor movement
The RELATIVE and ABSOLUTE keywords can be followed by either an integer constant or a host variable which contains the value to be used. The INTO clause specifies the host variable(s) into which the row data should be fetched. This clause must be specified for all FETCH requests except for a FETCH BEFORE and a FETCH AFTER request. An alternative to the INTO clause is the INTO DESCRIPTOR clause. If the RELATIVE or ABSOLUTE keyword is followed by the name of a host variable, then the named host variable must be declared with a data type of INTEGER or DECIMAL(n,0). Data type DECIMAL(n,0) only has to be used if a number specified is beyond the range of an integer (-2147483648 to +2147483647). If the number of the row to be fetched is specified via a constant, then the constant must be an integer. For example, 7 is valid, 7.0 is not valid.
Absolute moves
An absolute move is one where the cursor position is moved to an absolute position within the result table. For example, if a program wants to retrieve the fourth row of a table, the FETCH statement would be coded as:
EXEC SQL
  FETCH ... ABSOLUTE +4 FROM CUR1 INTO ...
END-EXEC.

or

01  CURSOR-POSITION PIC S9(9) USAGE BINARY.
...
    MOVE 4 TO CURSOR-POSITION.
    EXEC SQL
      FETCH ... ABSOLUTE :CURSOR-POSITION FROM CUR1 INTO ...
    END-EXEC.
Here, :CURSOR-POSITION is a host-variable of type INTEGER. Another form of the absolute move is through the use of keywords which represent fixed positions within the result set. For example, to move to the first row of a result set, the following FETCH statement can be coded:

FETCH ... FIRST FROM CUR1 INTO ...
There are also two special absolute keywords which allow for the cursor to be positioned outside the result set. The keyword BEFORE is used to move the cursor before the first row of the result set and AFTER is used to move to the position after the last row in the result set. Host variables cannot be coded with these keywords as they can never return values. A FETCH ABSOLUTE 0 request and a FETCH BEFORE request both position the cursor before the first row in the result table. DB2 returns SQLCODE +100 for FETCH ABSOLUTE 0 requests (that is, no data is returned) and SQLCODE 0 for FETCH BEFORE requests. A FETCH ABSOLUTE -1 request is equivalent to a FETCH LAST request, that is, both requests fetch the last row in the result table. DB2 returns SQLCODE 0 for both of these requests. Tip: A FETCH BEFORE and a FETCH AFTER request only position the cursor, DB2 does not return any row data. The SENSITIVE and INSENSITIVE keywords cannot be used if BEFORE or AFTER are specified on the FETCH statement.
Relative moves
A relative move is one made with reference to the current cursor position. To code a statement which moves three rows back from the current cursor, the statement would be:
FETCH ... RELATIVE -3 FROM CUR1 INTO ... ;

or

MOVE -3 TO CURSOR-MOVE.
FETCH ... RELATIVE :CURSOR-MOVE FROM CUR1 INTO ... ;
Here, CURSOR-MOVE is a host-variable of INTEGER type. If you attempt to make a relative jump which positions you either before the first row or after the last row of the result set, an SQLCODE of +100 is returned. In this case the cursor is positioned just before the first row, if the jump was backwards through the result set; or just after the last row, if the jump was forward within the result set. The keywords CURRENT, NEXT, and PRIOR make fixed moves relative to the current cursor position. For example, to move to the next row, the FETCH statement would be coded as:
FETCH ... NEXT FROM CUR1 ;

or

FETCH ... FROM CUR1 ;
Please refer to the DB2 UDB for OS/390 and z/OS Version 7 SQL Reference, SC26-9944, for a complete list of synonymous scroll specifications for ABSOLUTE and RELATIVE moves inside a scrollable cursor. In Example 12-4 we show you sample program logic to display the last five rows from a table.
Example 12-4 Scrolling through the last five rows of a table

DECLARE CUR1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT TXNID, TXSTATUS, TXNDATE, AMT
  FROM TBTRAN
  WHERE TXNID = 'SMITH'
  AND TXSTATUS = 'LATE'
  ORDER BY TXNDATE ;

OPEN CUR1 ;

FETCH INSENSITIVE ABSOLUTE -6 FROM CUR1 INTO :HV1, :HV2 ;
  -- Position us on the 6th row from the bottom of the result table

DO I = 1 TO 5
  FETCH NEXT FROM CUR1 INTO :HV1, :HV2 ;
  -- application logic to process the rows
END

CLOSE CUR1 ;
Some examples of the full syntax are shown in Example 12-5. In this example, we first fetch the 20th row of the result table. Then we position the cursor at the beginning of the result table (no data is returned) and scroll down to the NEXT row (in this case, the first row of the result table). We then scroll forward 10 rows, and after that, forward an additional :rownumhv rows. Next we position the cursor at the end of the result table (no data is returned) and fetch the prior row (the last row). Finally, we scroll backwards 4 rows, and then forward to the next row from there.
Example 12-5 Several FETCH statements against a scrollable cursor

FETCH ABSOLUTE 20 FROM C1 INTO :hv1, :hv2 ;
FETCH BEFORE FROM C1 ;
FETCH NEXT FROM C1 INTO :hv1, :hv2 ;
FETCH SENSITIVE RELATIVE 10 FROM C1 INTO :hv1, :hv2 ;
FETCH SENSITIVE RELATIVE :rownumhv FROM C1 INTO :hv1, :hv2 ;
FETCH AFTER FROM C1 ;
FETCH INSENSITIVE PRIOR FROM C1 INTO :hv1, :hv2 ;
FETCH SENSITIVE RELATIVE -4 FROM C1 USING DESCRIPTOR :sqldahv ;
FETCH FROM C1 INTO :hv1, :hv2 ;
In Figure 12-2 we show the effects of different FETCH requests when the cursor is currently positioned on row number 10. NEXT is the default for a FETCH request.
Figure 12-2 How to scroll within the result table (not reproduced: it depicts a 20-row result table with the cursor on row 10 and the rows reached by NEXT and by relative moves such as +3, -3, and -4)
Refer to Figure 12-3 for a list of cursor positioning values and the possible SQLCODEs that may be returned.
Figure 12-3 Matrix of possible SQLCODEs returned after a FETCH from a scrollable cursor (not reproduced: for each orientation NEXT, PRIOR, FIRST, LAST, CURRENT, BEFORE, AFTER, ABSOLUTE +/-n, and RELATIVE +/-n, it shows when the fetch returns OK, +100, +222, or +231, and whether the resulting position is before the first row or after the last row)
However, with a scalar function, DB2 can maintain the relationship between the temporary result set and the rows in the base table, and therefore allows these functions to be used in both INSENSITIVE and SENSITIVE cursors. If used in an INSENSITIVE cursor, the function is evaluated once, at OPEN CURSOR time. For SENSITIVE cursors where a FETCH SENSITIVE is used, the function is evaluated at FETCH time. For SENSITIVE cursors with an INSENSITIVE FETCH, the function is evaluated at FETCH time only against the result set for the cursor; the function is not evaluated against the base table. In Example 12-7, we can see an expression (COMM + BONUS) and a column function (AVG(SALARY)) being used in an insensitive scrollable cursor. Here, the column function and the expression are evaluated when the cursor is opened, and the results are saved by DB2.
Example 12-7 Using functions in an insensitive scrollable cursor

EXEC SQL
  DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
    SELECT EMPNO, FIRSTNME, SALARY, COMM, BONUS,
           COMM + BONUS AS ADDITIONAL_MONEY
    FROM SC246300.TBEMPLOYEE
    WHERE SALARY > (SELECT AVG(SALARY)
                    FROM SC246300.TBEMPLOYEE)
Scalar functions, UDFs, and expressions are re-evaluated using the base table row when a FETCH SENSITIVE request is processed. A positioned update or delete compares the column value in the result table with the re-evaluated value for the base table row. External UDFs are executed for each qualifying row when a scrollable cursor is opened. Therefore, if an external UDF sends an e-mail then an e-mail is sent for each qualifying row. UDFs are not re-executed when an insensitive fetch is issued.
In Example 12-9 we show how the same cursor defined in Example 12-8 is valid if it is INSENSITIVE. Since the AVG function is processed only at OPEN CURSOR time, the data does not change, and thus the value of the AVG does not change.
Example 12-9 Aggregate function in an INSENSITIVE cursor

DECLARE C1 INSENSITIVE SCROLL CURSOR WITH HOLD FOR
  SELECT NORDERKEY, AVG(TAX)
  FROM SC246300.TBLINEITEM
  GROUP BY NORDERKEY
In Example 12-10 we show the scalar function SUBSTR. If the cursor is SENSITIVE, the function is evaluated at FETCH time. If the cursor is INSENSITIVE, the function is evaluated at OPEN CURSOR time.
Example 12-10 Scalar functions in a cursor

DECLARE C1 SENSITIVE STATIC SCROLL CURSOR WITH HOLD FOR
  SELECT CUSTKEY, LASTNAME, SUBSTR(COMMENT,1,20)
  FROM SC246300.TBCUSTOMER
Example 12-11 shows that you can also use an expression in a scrollable cursor. In this example we use a sensitive scrollable cursor. Therefore, the expression is re-evaluated against the base table at each FETCH operation to make sure the row still qualifies.
Example 12-11 Expression in a sensitive scrollable cursor

DECLARE TELETEST SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT EMPNO, FIRSTNME, SALARY, COMM, BONUS,
         COMM + BONUS AS ADDITIONAL_MONEY
  FROM SC246300.TBEMPLOYEE
  WHERE COMM + BONUS > 200
  FOR UPDATE OF BONUS
Note: An application program is not able to distinguish between a delete hole and an update hole, only that there is a hole.
The OPEN CURSOR is executed and the DB2 temporary table is built with two rows. See Example 12-12 for the results of the OPEN CURSOR. Another user executes the statement:
DELETE FROM TBACCOUNT
WHERE TYPE = 'P'
AND ACCOUNT = 'MNP230' ;
COMMIT ;
The row is deleted from the base table. The process executes its first FETCH:
FETCH SENSITIVE FROM C1 INTO :hv_account, :hv_account_name ;
DB2 attempts to fetch the row from the base table, but the row is not found, so DB2 marks the row in the result table as a delete hole. DB2 returns SQLCODE +222 to highlight the fact that the current cursor position is over a hole.
+222: HOLE DETECTED USING cursor-name
At this stage, the host variables are empty; however, it is important for your application program to recognize the hole, as DB2 does not reset the host variables if a hole is encountered. If the FETCH is executed again, the cursor is positioned on the next row, which in the example is for account ULP231. The host variables now contain ULP231 and MS S FLYNN. It is important to note that if an INSENSITIVE fetch is used, then only update and delete holes created under the current open cursor are recognized. Updates and deletes made by other processes or outside the cursor are not recognized by the INSENSITIVE fetch.
If the above SENSITIVE fetch was replaced with an INSENSITIVE fetch, the fetch would return a zero SQLCODE, since the delete to the base row was made by another process. The column values would be set to those at the time of the OPEN CURSOR statement execution.
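Because DB2 leaves the host variables untouched when SQLCODE +222 is returned, a fetch loop must test the SQLCODE before using them. The sketch below is illustrative Python, not DB2 code; fetch_next() stands in for an EXEC SQL FETCH SENSITIVE NEXT, and the list argument models the result table.

```python
# Illustrative fetch loop that tolerates holes (not DB2 code).
def process_cursor(rows):
    """rows models the result table; None marks a delete or update hole."""
    it = iter(rows)

    def fetch_next():
        try:
            row = next(it)
        except StopIteration:
            return 100, None      # SQLCODE +100: end of the result table
        if row is None:
            return 222, None      # SQLCODE +222: cursor is on a hole
        return 0, row

    fetched = []
    while True:
        sqlcode, row = fetch_next()
        if sqlcode == 100:
            break                 # no more rows
        if sqlcode == 222:
            continue              # skip the hole; host variables are stale
        fetched.append(row)       # safe to use the fetched values
    return fetched

print(process_cursor([("MNP230",), None, ("ULP231",)]))
# [('MNP230',), ('ULP231',)]
```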
Example 12-12 Delete holes

Base table                               Base table after DELETE
ACCOUNT  ACCOUNT_NAME      TYPE          ACCOUNT  ACCOUNT_NAME      TYPE
ABC010   BIG PETROLEUM     C             ABC010   BIG PETROLEUM     C
BWH450   RUTH & DAUGHTERS  C             BWH450   RUTH & DAUGHTERS  C
ZXY930   MIGHTY DUCKS PLC  C             ZXY930   MIGHTY DUCKS PLC  C
MNP230   BASEL FERRARI     P             BMP291   MR R GARCIA       C
BMP291   MR R GARCIA       C             XPM673   SCREAM SAVER LTD  C
XPM673   SCREAM SAVER LTD  C             ULP231   MS S FLYNN        P
ULP231   MS S FLYNN        P             XPM961   MR CJ MUNSON      C
XPM961   MR CJ MUNSON      C

Result table                             Result table after FETCH
RID  ACCOUNT  ACCOUNT_NAME               RID  ACCOUNT  ACCOUNT_NAME
A04  MNP230   BASEL FERRARI              A04  (delete hole)
A07  ULP231   MS S FLYNN                 A07  ULP231   MS S FLYNN
The OPEN CURSOR is executed and the DB2 temporary table is built with two rows. See Example 12-13 for the results of the OPEN CURSOR. Another user executes the statement:
UPDATE TBACCOUNT
SET TYPE = 'C'
WHERE ACCOUNT = 'MNP230' ;
COMMIT ;
Here, it can be seen that the row for account MNP230 no longer satisfies the WHERE clause of the DECLARE CURSOR statement. The process executes its first FETCH:
FETCH SENSITIVE FROM C1 INTO :hv_account, :hv_account_name ;
DB2 verifies that the row is valid by executing a SELECT with the WHERE values used in the initial open against the base table. If the row now falls outside the SELECT, DB2 returns the SQLCODE +222 to highlight the fact that the current cursor position is over an update hole.
+222: HOLE DETECTED USING cursor-name
At this stage, the host variables are empty; however, it is important for your application program to recognize the hole, as DB2 does not reset the host variables if a hole is encountered. If the FETCH is executed again, the cursor is positioned on the next row, which in the example is for account ULP231. The host variables now contain ULP231 and MS S FLYNN. It is important to note that if an INSENSITIVE fetch is used, then only update and delete holes created under the current open cursor are recognized. Updates and deletes made by other processes are not recognized by the INSENSITIVE fetch. If the above SENSITIVE fetch was replaced with an INSENSITIVE fetch, the fetch would return a zero SQLCODE, as the update to the base row was made by another process. The column values would be set to those at the time of the OPEN CURSOR statement execution.
Example 12-13 Update holes

Base table                               Base table after UPDATE
ACCOUNT  ACCOUNT_NAME      TYPE          ACCOUNT  ACCOUNT_NAME      TYPE
ABC010   BIG PETROLEUM     C             ABC010   BIG PETROLEUM     C
BWH450   RUTH & DAUGHTERS  C             BWH450   RUTH & DAUGHTERS  C
ZXY930   MIGHTY DUCKS PLC  C             ZXY930   MIGHTY DUCKS PLC  C
MNP230   BASEL FERRARI     P             MNP230   BASEL FERRARI     C
BMP291   MR R GARCIA       C             BMP291   MR R GARCIA       C
XPM673   SCREAM SAVER LTD  C             XPM673   SCREAM SAVER LTD  C
ULP231   MS S FLYNN        P             ULP231   MS S FLYNN        P
XPM961   MR CJ MUNSON      C             XPM961   MR CJ MUNSON      C

Result table                             Result table after FETCH
RID  ACCOUNT  ACCOUNT_NAME               RID  ACCOUNT  ACCOUNT_NAME
A04  MNP230   BASEL FERRARI              A04  (update hole)
A07  ULP231   MS S FLYNN                 A07  ULP231   MS S FLYNN
When you receive this return code, you can choose to fetch the new data again by using the FETCH CURRENT to retrieve the new values. The program can then choose to reapply the changes or not. Important: DB2 only validates the columns listed in the select clause of the cursor against the base table. Other changed columns do not cause the SQLCODE -224 to be issued.
Note: A scrollable cursor never sees rows that have been inserted to the base table nor rows that are updated and now (after open cursor time) fit the selection criteria of the DECLARE CURSOR statement.
Now let's assume isolation level RR or RS is in effect for the sensitive scrollable cursor. The application issues a FETCH SENSITIVE request, which positions the cursor on a row. DB2 does not release the lock on the base table row. The application now decides to update the current row. DB2 must still validate the positioned update, because the same application might have updated or deleted the base table row (by issuing a searched update or delete) between the time it was fetched and the time the positioned update is requested.

Validation of a positioned UPDATE: The optimistic locking "concur by value" technique is used by DB2 to validate a positioned UPDATE. Let's take a look at how this process works. Suppose that we have executed the following statements:
DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT .... FROM BASE_TABLE WHERE ....
  FOR UPDATE OF ....
OPEN C1
FETCH SENSITIVE ...
UPDATE BASE_TABLE
  SET ... = ..., ... = ...
  WHERE CURRENT OF C1
The flow chart in Figure 12-4 shows the sequence of events that occurs.
In summary, the validation proceeds as follows:
1. DB2 attempts to lock and retrieve the corresponding row in the base table.
2. If the row is not found, the result table row is a delete hole, and the positioned UPDATE fails with SQLCODE -222.
3. If the row is found but no longer satisfies the cursor's search condition, the result table row becomes an update hole, and the positioned UPDATE fails with SQLCODE -222.
4. If the column values of the base table row and the result table row are not equal, the positioned UPDATE fails with SQLCODE -224.
5. Otherwise, DB2 updates the row, refreshes the result table with the new column values, and returns SQLCODE 0.
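The concur-by-value validation can be sketched in illustrative Python. This is a model of the behavior described, not DB2 internals; the dictionary layout, predicate argument, and SQLCODE return values are assumptions of the sketch.

```python
def positioned_update(result_row, base_table, rid, predicate, new_values):
    """Sketch of the optimistic 'concur by value' check performed for
    UPDATE ... WHERE CURRENT OF (illustrative model, not DB2 internals).
    result_row holds the column values saved in the result table."""
    base_row = base_table.get(rid)           # lock and retrieve the base row
    if base_row is None:
        return -222                          # row gone: update attempted on a hole
    if not predicate(base_row):
        return -222                          # row no longer qualifies: update hole
    # Only the columns listed in the cursor's SELECT clause are compared.
    if {k: base_row.get(k) for k in result_row} != result_row:
        return -224                          # base and result table values disagree
    base_row.update(new_values)              # apply the positioned update
    result_row.update(new_values)            # refresh the result table copy
    return 0                                 # SQLCODE 0: update succeeds

base  = {"A07": {"ACCOUNT": "ULP231", "TYPE": "P"}}
saved = {"ACCOUNT": "ULP231", "TYPE": "P"}
print(positioned_update(saved, base, "A07",
                        lambda r: r["TYPE"] == "P", {"TYPE": "P"}))   # 0
```

The positioned DELETE validation follows the same steps, ending by deleting the base row and marking the result table row as a delete hole.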
Validation of a positioned DELETE: The optimistic locking "concur by value" technique is also used by DB2 to validate a positioned DELETE. Let's take a look at how this process works. Suppose that we have executed the following statements:
DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT .... FROM BASE_TABLE WHERE ....
OPEN C1
FETCH SENSITIVE ...
DELETE FROM BASE_TABLE
  WHERE CURRENT OF C1
The flow chart in Figure 12-5 shows the sequence of events that occurs.
The validation follows the same steps as for a positioned UPDATE:
1. DB2 attempts to lock and retrieve the corresponding row in the base table.
2. If the row is not found, or no longer satisfies the cursor's search condition, the positioned DELETE fails with SQLCODE -222 (the result table row is a hole).
3. If the column values of the base table row and the result table row are not equal, the positioned DELETE fails with SQLCODE -224.
4. Otherwise, DB2 deletes the base row, marks the result table row as a delete hole, and returns SQLCODE 0.
taking the lock. When FOR UPDATE OF is specified, a lock is taken for the last page or row read; locks are kept on the last page or row read if the cursor was declared FOR UPDATE OF, and all locks are released on completion of the OPEN CURSOR. A cursor that has been bound with uncommitted read (UR) does not take any page or row locks, and does not check whether a row has been committed when selecting it for inclusion in the temporary result set.

Application programs can leverage SENSITIVE STATIC scrollable cursors, in combination with isolation level CS and the SENSITIVE option of FETCH, to minimize concurrency problems and assure currency of data when required. The STATIC cursor gives the application a constant result table to scroll on, perhaps eliminating the need for isolation levels RR and RS. The SENSITIVE option of the FETCH statement gives the application a means of re-fetching any previously selected row to obtain the most current data when desired, for example, when the application is ready to update a row.
Duration of locks
Locks acquired for positioned updates, positioned deletes, or to provide isolation level RR or isolation level RS are held until commit. If the cursor is defined WITH HOLD, then the locks are held until the first commit after the close of the cursor.
Note: If a stored procedure issues FETCH statements against a scrollable cursor, then, before returning to the calling program, it must issue a FETCH BEFORE statement to position the cursor before the first row of the result table.
Chapter 13.
Totally after join predicates
  These predicates are evaluated after all join processing is done.
For more information on predicate classification, see DB2 UDB for OS/390 Version 6 Performance Topics, SG24-5351.
------------+---------+-MIRIAM A01 JUKKA A02 GLADYS -ABI B01 TONI -EVA A01
Example 13-2 shows an inner join with a predicate E.WORKDEPT = 'A01' in the ON clause. Before this enhancement, such predicates could not be coded in the ON clause; only join predicates were allowed. In this particular case, because it is an INNER JOIN and the additional predicate in the ON clause is ANDed, the query behaves the same way as if the AND E.WORKDEPT = 'A01' predicate were coded in the WHERE clause. However, this is not always the case.
Example 13-2 Inner join and ON clause with AND

SELECT E.FIRSTNME, E.WORKDEPT, D.DEPTNO, D.DEPTNAME
FROM SC246300.TBEMPLOYEE E
INNER JOIN SC246300.TBDEPARTMENT D
  ON E.WORKDEPT = D.DEPTNO
  AND E.WORKDEPT = 'A01'

FIRSTNME  WORKDEPT  DEPTNO  DEPTNAME
MIRIAM    A01       A01     SALES
EVA       A01       A01     SALES
Example 13-3 shows a left outer join with an ANDed predicate on the WORKDEPT column in the ON clause. Here you can see the difference between ANDing the predicate E.WORKDEPT = 'A01' in the ON clause and coding the same predicate in the WHERE clause, as shown in Example 13-4.
Example 13-3 LEFT OUTER JOIN with ANDed predicate on WORKDEPT field in the ON clause

SELECT E.FIRSTNME, E.WORKDEPT, D.DEPTNO, D.DEPTNAME
FROM SC246300.TBEMPLOYEE E
LEFT OUTER JOIN SC246300.TBDEPARTMENT D
  ON E.WORKDEPT = D.DEPTNO
  AND E.WORKDEPT = 'A01'

FIRSTNME  WORKDEPT  DEPTNO  DEPTNAME
EVA       A01       A01     SALES
MIRIAM    A01       A01     SALES
JUKKA     A02       ------  ------------
ABI       B01       ------  ------------
GLADYS    --------  ------  ------------
TONI      --------  ------  ------------
In Example 13-3, in order for the join condition to be satisfied, both conditions have to be true. If this is not the case, the columns from the right hand table (TBDEPARTMENT) are set to null.
Example 13-4 LEFT OUTER JOIN with ON clause and WHERE clause

SELECT E.FIRSTNME, E.WORKDEPT, D.DEPTNO, D.DEPTNAME
FROM SC246300.TBEMPLOYEE E
LEFT OUTER JOIN SC246300.TBDEPARTMENT D
  ON E.WORKDEPT = D.DEPTNO
WHERE E.WORKDEPT = 'A01'

FIRSTNME  WORKDEPT  DEPTNO  DEPTNAME
EVA       A01       A01     SALES
MIRIAM    A01       A01     SALES
This is totally different from the case where you code the E.WORKDEPT = 'A01' condition in the WHERE clause, as in Example 13-4. The WHERE clause is not evaluated at the time the join is performed. According to the semantics of the SQL language, it must be evaluated after the result of the join is built (to eliminate all rows where E.WORKDEPT is not equal to 'A01'). Actually, this is not entirely true: Version 6 introduced many performance enhancements related to outer join, and in this case the predicate coded in the WHERE clause can be, and is, evaluated when the data is retrieved from the TBEMPLOYEE table. The system moves the evaluation of the predicate to an earlier stage of processing only if it is sure the result is not influenced by this transformation. In our example this is the case, because the WHERE predicate is not on the null-supplying table. For more information on the outer join performance enhancements, see DB2 UDB for OS/390 Version 6 Performance Topics, SG24-5351.

Example 13-5 shows another flavour of the ON clause extensions, using an OR condition in the ON clause. MIRIAM and EVA (the rows satisfying WORKDEPT = 'A01') are joined with every row of the TBDEPARTMENT table. The predicates in the ON clause determine whether we have a match for the join. So, in this example, if either E.WORKDEPT = D.DEPTNO or E.WORKDEPT = 'A01' is true, the rows of both tables are matched up.
Example 13-5 Inner join and ON clause with OR in the WORKDEPT column
SELECT E.FIRSTNME, E.WORKDEPT, D.DEPTNO, D.DEPTNAME
FROM SC246300.TBEMPLOYEE E
  INNER JOIN SC246300.TBDEPARTMENT D
  ON E.WORKDEPT = D.DEPTNO
  OR E.WORKDEPT = 'A01' #
---------+---------+---------+---------+
FIRSTNME  WORKDEPT  DEPTNO  DEPTNAME
---------+---------+---------+---------+
MIRIAM    A01       B01     DB2
MIRIAM    A01       A01     SALES
MIRIAM    A01       C01     MVS
MIRIAM    A01       A02     MARKETING
JUKKA     A02       A02     MARKETING
EVA       A01       B01     DB2
EVA       A01       A01     SALES
EVA       A01       C01     MVS
EVA       A01       A02     MARKETING
ABI       B01       B01     DB2
,JOB
,WORKDEPT
,SEX
,EDLEVEL
FROM SC246300.TBEMPLOYEE
WHERE (CITYKEY, NATIONKEY) = (1, :nationkey) ;
---------+---------+---------+---------+
FIRSTNME  JOB  WORKDEPT  SEX  EDLEVEL
---------+---------+---------+---------+
MIRIAM    DBA  A01       F    4
Note: Row expressions with equal and not equal operators are not allowed to be compared against a fullselect.
For the <> ALL comparison to evaluate to true, there must not be any row on the right-hand side of the comparison that matches the values specified on the left-hand side. Example 13-8 uses a row expression with a <> ALL operator against a subquery. This example lists the employees who do not live in the same city and country as any of our customers.
Example 13-8 Row expression with <> ALL operator
SELECT FIRSTNME
,JOB
,WORKDEPT
,SEX
,EDLEVEL
FROM SC246300.TBEMPLOYEE
WHERE (CITYKEY, NATIONKEY) <> ALL
      (SELECT CITYKEY, NATIONKEY
       FROM SC246300.TBCUSTOMER) ;
---------+---------+---------+---------+
FIRSTNME  JOB       WORKDEPT  SEX  EDLEVEL
---------+---------+---------+---------+
MIRIAM    DBA       A01       F    4
JUKKA     SALESMAN  A02       M    7
TONI      --------            M    0
GLADYS    --------            F    0
ABI       TEACHER   B01       F    9
Note: The use of row value expressions on the left-hand side of a predicate with = SOME or = ANY operators is the same as using the IN keyword. The <> ALL operator is the same as using the NOT IN keywords.
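As a sketch of the first equivalence (using the same sample tables as the examples above), the following two queries return the same result, namely the employees who live in the same city and country as at least one customer:

```sql
SELECT FIRSTNME
FROM SC246300.TBEMPLOYEE
WHERE (CITYKEY, NATIONKEY) = ANY
      (SELECT CITYKEY, NATIONKEY
       FROM SC246300.TBCUSTOMER) ;

-- Equivalent formulation using the IN keyword:
SELECT FIRSTNME
FROM SC246300.TBEMPLOYEE
WHERE (CITYKEY, NATIONKEY) IN
      (SELECT CITYKEY, NATIONKEY
       FROM SC246300.TBCUSTOMER) ;
```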
Example 13-10 is equivalent to Example 13-8 and evaluates in the same way.
Example 13-10 NOT IN row expression
SELECT FIRSTNME
,JOB
,WORKDEPT
,SEX
,EDLEVEL
FROM SC246300.TBEMPLOYEE
WHERE (CITYKEY, NATIONKEY) NOT IN
      (SELECT CITYKEY, NATIONKEY
       FROM SC246300.TBCUSTOMER) ;
---------+---------+---------+---------+
FIRSTNME  JOB       WORKDEPT  SEX  EDLEVEL
---------+---------+---------+---------+
MIRIAM    DBA       A01       F    4
JUKKA     SALESMAN  A02       M    7
TONI      --------            M    0
GLADYS    --------            F    0
ABI       TEACHER   B01       F    9
Chapter 13. More SQL enhancements
If the number of expressions and the number of columns returned do not match, then an SQL error is returned:
SQLCODE -216 THE NUMBER OF ELEMENTS ON EACH SIDE OF A PREDICATE OPERATOR DOES NOT MATCH. PREDICATE OPERATOR IS IN. SQLSTATE: 428C4
13.3 ORDER BY
The order of the selected rows depends on the sort keys that you identify in the ORDER BY clause. A sort key can be a column name, an integer that represents the number of a column in the result table, or an expression. DB2 sorts the rows by the first sort key, followed by the second sort key, and so on. You can list the rows in ascending or descending order. Null values appear last in an ascending sort and first in a descending sort. The ordering can be different for each column in the ORDER BY clause.
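As a sketch of the new ability to sort on an expression, using the sample TBEMPLOYEE table (the choice of sort keys here is illustrative):

```sql
SELECT EMPNO
      ,SALARY
      ,BONUS
FROM SC246300.TBEMPLOYEE
ORDER BY SALARY + BONUS DESC   -- expression as a sort key
        ,EMPNO ASC             -- tie-breaker on a plain column
```

The first sort key is an expression over two columns; the second is an ordinary column name, and each key has its own ordering direction.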
Prior to this enhancement, the query above would have returned an SQLCODE -208. The following restrictions apply:
- There is no UNION or UNION ALL (SQLCODE -208).
- There is no GROUP BY clause (SQLCODE -122).
- There is no DISTINCT clause in the select list (new SQLCODE -214).
Tip: When calculating the amount of sort space required for the query, all columns, including the ones being sorted on, should be included in the sort data length as well as in the sort key length.
Some vendor applications generate SQL statements which contain ORDER BY expressions. In such cases, it is often not feasible to re-write the generated SQL.
SELECT C1, C2, C3, C4
FROM T1
WHERE C2 = 5 AND C4 = 7 AND C5 = 2
ORDER BY C1, C2, C3, C4
Because C2, C4, and C5 are bound to constants by the WHERE clause, the ORDER BY above is logically equivalent to ORDER BY C1, C3, and logically equivalent to a key of (C1, C3) with respect to ordering. DB2 is now able to recognize that the index supports the ORDER BY clause.
In Example 13-14, we show how this works. The ORDER BY in Figure 13-1 is equivalent to an ORDER BY C1, C3. Note that an index on (C2, C1, C5, C4, C3) can be used; it is equivalent to an index on (C1, C3) with respect to ordering, thus avoiding a sort or the need for an additional index.
Example 13-14 Data showing improved sort avoidance for the ORDER BY clause
C2  C1  C5  C4  C3
--  --  --  --  --
 1   8   7   7   3
 1   9   5   4   8
 2   4   1   3   2
 2   4   2   8   5
 3   2   6   9   7
 3   4   3   1   3
 3   4   4   0   6
 4   1   2   3   9
 4   7   4   8   0
 5   2   2   7   8
 5   2   3   8   1
 5   3   2   7   5
 5   5   3   7   3
 5   6   2   5   8
 5   6   2   7   7
 5   8   2   7   4
 5   9   3   7   3
 5   9   4   7   1
Qualified rows:
C2  C1  C5  C4  C3
--  --  --  --  --
 5   2   2   7   8
 5   3   2   7   5
 5   6   2   7   7
 5   8   2   7   4
Note: Logically removing columns from an index key has no effect on the filtering capability of the index.
13.4 INSERT
In this section, we discuss enhancements made to the INSERT statement. The new enhancements are:
- Inserting with the DEFAULT keyword
- Inserting with any expression
- Inserting with a self-referencing select
- Inserting with UNION
In Example 13-15, we show how we can code an INSERT to take advantage of this keyword.
Example 13-15 Inserting with the DEFAULT keyword
INSERT INTO SC246300.TBITEMS
VALUES (440
,'HAMMER'
,50
,1.25
,DEFAULT) ;
Inserts the following row:
---------+---------+---------+---------+---------+
ITEM_NUMBER  PRODUCT_NAME  STOCK  PRICE  COMMENT
---------+---------+---------+---------+---------+
440          HAMMER        50     1.25   NONE
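Two of the other INSERT enhancements listed above can be sketched against the same TBITEMS table; the item numbers and values used here are illustrative assumptions:

```sql
-- Inserting with any expression in the VALUES list:
INSERT INTO SC246300.TBITEMS
VALUES (441
       ,'WRENCH'
       ,25 * 2           -- expression
       ,2.00 + 0.50      -- expression
       ,DEFAULT) ;

-- Inserting with a self-referencing select
-- (copies an existing item under a new item number):
INSERT INTO SC246300.TBITEMS
SELECT ITEM_NUMBER + 1000
      ,PRODUCT_NAME
      ,STOCK
      ,PRICE
      ,COMMENT
FROM SC246300.TBITEMS
WHERE ITEM_NUMBER = 440 ;
```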
A non-correlated subquery is executed only once, before the update or delete is performed. Example 13-19 shows how we can give a 10% salary increase to all employees whose salary is lower than the average salary of their department.
Example 13-19 UPDATE with a self-referencing correlated subquery
UPDATE SC246300.TBEMPLOYEE X
SET SALARY = SALARY * 1.10
WHERE SALARY < (SELECT AVG(SALARY)
                FROM SC246300.TBEMPLOYEE Y
                WHERE X.WORKDEPT = Y.WORKDEPT) ;
DB2 processes the query in Example 13-19 as follows:
1. The correlated subquery is executed for each row in the outer (TBEMPLOYEE X) table. DB2 creates a record in a work file for each row that satisfies the subquery. For the UPDATE statement, DB2 stores the RID of the row and the updated SALARY column value. For the DELETE statement, DB2 stores the RID of the row.
2. Once all the qualifying rows have been determined, DB2 reads the work file, and for each record, updates or deletes the corresponding row in the TBEMPLOYEE table.
This two-step processing is not shown in the EXPLAIN output. Two-step processing is used for an UPDATE whenever a column that is being updated is also referenced in the WHERE clause of the UPDATE or is used in the correlation predicate of the subquery, as shown in Example 13-19. For a DELETE statement, two-step processing is always used for a correlated subquery. Example 13-20 shows a new way to delete the department with the lowest budget in just one SQL statement. However, be careful when coding such a statement: even though only one value is returned by the self-referencing subselect, more than one row may have a budget equal to the minimum and thus would be deleted.
Example 13-20 DELETE with a self-referencing non-correlated subquery
DELETE FROM SC246300.TBDEPARTMENT
WHERE BUDGET = (SELECT MIN(BUDGET)
                FROM SC246300.TBDEPARTMENT)
Example 13-21 shows how we can delete the employees with the highest salary in each department. To perform this DELETE in our sample environment, we have to drop the self-referencing foreign key of the TBEMPLOYEE table, since it is defined with the delete rule ON DELETE NO ACTION.
Example 13-21 DELETE with a self-referencing correlated subquery
DELETE FROM SC246300.TBEMPLOYEE X
WHERE SALARY = (SELECT MAX(SALARY)
                FROM SC246300.TBEMPLOYEE Y
                WHERE X.WORKDEPT = Y.WORKDEPT)
The enhanced support for the UPDATE and DELETE statements also improves DB2 family compatibility.
Restrictions on usage
This enhancement does not extend to a positioned UPDATE or DELETE statement, that is, an UPDATE or DELETE statement which uses the WHERE CURRENT OF cursor-name clause. DB2 positioned updates and deletes continue to return SQLCODE -118 if a subquery in the WHERE clause references the table being updated or the table containing the rows to be deleted. For example, the positioned update in Example 13-22 is still invalid.
Example 13-22 Invalid positioned update
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT T1.ACCOUNT, T1.ACCOUNT_NAME, T1.CREDIT_LIMIT
  FROM ACCOUNT T1
  WHERE T1.CREDIT_LIMIT < (SELECT AVG(T2.CREDIT_LIMIT)
                           FROM ACCOUNT T2)
  FOR UPDATE OF T1.CREDIT_LIMIT;
...
EXEC SQL OPEN C1;
...
EXEC SQL FETCH C1 INTO :hv_account, :hv_acctname, :hv_crdlmt;
...
EXEC SQL UPDATE ACCOUNT SET CREDIT_LIMIT = CREDIT_LIMIT * 1.1 WHERE CURRENT OF C1;
Note: If the subselect returns no rows, the null value is assigned to the column to be updated; if the column cannot be null, an error occurs.
The columns of the target table or view of the UPDATE can be used in the search condition of the subselect. Using correlation names to refer to these columns is only allowed in a searched UPDATE. In Example 13-24 we update the manager of all employees with the manager assigned to their department. It is up to the user to make sure only a single row is returned by the subselect; otherwise you receive an SQLCODE -811.
Example 13-24 Correlated subquery in the SET clause of an UPDATE
UPDATE SC246300.TBEMPLOYEE X
SET MANAGER = (SELECT MGRNO
               FROM SC246300.TBDEPARTMENT Y
               WHERE X.WORKDEPT = Y.DEPTNO) #
In Example 13-25 we update the number of orders (NUMORDERS column) for all employees with a NATIONKEY of 34.
Example 13-25 Correlated subquery in the SET clause of an UPDATE with a column function
UPDATE SC246300.TBEMPLOYEE X
SET NUMORDERS = (SELECT COUNT(*)
                 FROM SC246300.TBORDER
                 WHERE CLERK = X.EMPNO)
WHERE NATIONKEY = 34
Example 13-26 shows how the update in Example 13-23 can be changed to provide DEPTSIZE for employees of all departments. This is done by using an UPDATE with a correlated subquery in the SET clause.
Example 13-26 Correlated subquery in the SET clause of an UPDATE using the same table
UPDATE SC246300.TBEMPLOYEE X
SET DEPTSIZE = (SELECT COUNT(*)
                FROM SC246300.TBEMPLOYEE Y
                WHERE X.WORKDEPT = Y.WORKDEPT) ;
Example 13-27 shows how to move employees Mark and Ivan to department number A01 using a row expression in the SET clause of an UPDATE.
Example 13-27 Row expression in the SET clause of an UPDATE
UPDATE SC246300.TBEMPLOYEE
SET (MANAGER, WORKDEPT) = (SELECT MGRNO, DEPTNO
                           FROM SC246300.TBDEPARTMENT
                           WHERE DEPTNO = 'A01')
WHERE FIRSTNME IN ('MARK','IVAN')
The SET assignment statement assigns the value of one or more expressions or a NULL value to one or more host-variables or transition-variables and replaces the SET host-variable statement documented in previous releases of DB2. The statement can be embedded in an application program or can be contained in the body of a trigger. If the statement is a triggered SQL statement, then it must be part of a trigger whose action is BEFORE and whose granularity is FOR EACH ROW. In this context, a host-variable cannot be specified. This enhancement also improves DB2 family compatibility.
The FETCH FIRST clause can also be used for scrollable cursors. The OPTIMIZE FOR clause is another clause that you can specify on the SELECT statement. Table 13-1 describes the effect of specifying the FETCH FIRST clause with and without the OPTIMIZE FOR clause.
Note: The behavior described in the following table only applies when the PTF for APAR PQ49458 (still open at the time of writing) is applied to your system.
Table 13-1 How the FETCH FIRST clause and the OPTIMIZE FOR clause interact

Clauses specified on the SELECT statement                 OPTIMIZE value used by DB2
FETCH FIRST n ROWS ONLY (no OPTIMIZE FOR clause)          DB2 optimizes for n rows
FETCH FIRST n ROWS ONLY OPTIMIZE FOR m ROWS (m > n)       DB2 optimizes for m rows
FETCH FIRST n ROWS ONLY OPTIMIZE FOR m ROWS (m < n)       DB2 optimizes for m rows
When both options are specified, only the OPTIMIZE FOR clause is used during access path selection and for determining how to do blocking in a DRDA environment. By using the new FETCH FIRST n ROWS ONLY clause, you can:
- Limit the number of rows in a result table
- Use a SELECT INTO statement to retrieve the first row of a result table
- Limit the number of rows returned by a DB2 for z/OS and OS/390 DRDA server
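The first of these uses can be sketched against the sample TBORDER table; limiting the result table to the ten most expensive orders could look like this:

```sql
SELECT ORDERKEY
      ,TOTALPRICE
FROM SC246300.TBORDER
ORDER BY TOTALPRICE DESC
FETCH FIRST 10 ROWS ONLY ;
```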
Example 13-29 shows how to use a SELECT INTO statement that could otherwise return several rows. By adding the FETCH FIRST 1 ROW ONLY clause, we can avoid having to open a cursor.
Example 13-29 Limiting rows for SELECT INTO
SELECT ACCOUNT
,ACCOUNT_NAME
,TYPE
,CREDIT_LIMIT
INTO :hv_account
,:hv_acctname
,:hv_type
,:hv_crdlmt
FROM ACCOUNT
WHERE ACCOUNT_NAME = :hv_in_acctname
FETCH FIRST 1 ROW ONLY ;
Since you can choose to pick up only one row, the SELECT INTO statement now also supports the GROUP BY and HAVING clauses.
Note: Be cautious when using this feature; make sure that logically (within your application) it is appropriate to ignore multiple rows.
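A sketch of a SELECT INTO with GROUP BY and HAVING, using the sample TBEMPLOYEE table (the host variable names are assumptions); the HAVING clause restricts the result to a single group, so exactly one row is returned:

```sql
SELECT WORKDEPT
      ,AVG(SALARY)
INTO :hv_dept
    ,:hv_avgsal
FROM SC246300.TBEMPLOYEE
GROUP BY WORKDEPT
HAVING WORKDEPT = 'A01' ;
```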
The SET assignment statement is an alternative way to assign the value of one or more expressions, or the null value, to one or more host variables or transition variables. In Example 13-31, we show several ways to use the SET assignment statement. The implementation of VALUES INTO is another enhancement that increases DB2 family compatibility.
Example 13-31 Some uses of the SET assignment statement
SET :hv = CUSTOMER(01)
(This is equivalent to VALUES(CUSTOMER(01)) INTO :hv; the SET assignment statement is recommended.)
SET :hv = NULL
SET :hv = udf1(:hv2,1,1+2)
SET :hv = udf2(:hv2,1)
SET :lastmon = MONTH(CURRENT DATE - 30 DAYS)
Background information
The colon was optional in DB2 V1 code and documentation. It was not optional for some other products, and the optional colon was not accepted into the SQL standard. With DB2 V2, the manuals were changed to indicate that the colon should always be specified, and a warning message, DSNH315I, was implemented. Most customers noticed the warnings and corrected their applications to comply with the standard. DB2 V3, V4, and V5 continued to allow the optional colons with the warning messages. With DB2 V6, the level of complexity of SQL was such that it was decided not to allow the colons to remain optional: when DB2 V6 was announced in 1998, this incompatible change was included in the announcement and in the documentation.
For cases where you have the source code, the best resolution is running the precompiler and then binding the new DBRM. The best practice is to set an application programming standard and to add the colon in all DB2 applications. If the return code from the precompiler is not zero, then the warning could cause problems in production. If you do not have the source code, the options are very limited. APAR II12100 may provide some help. APARs PQ26922 and PQ30390 may be applicable in some cases.
DB2 now also supports multiple expressions (row expressions) on the left-hand side of the IN and NOT IN predicates when a fullselect is specified on the right-hand side. Example 13-10 on page 187 shows a row expression and a fullselect in a NOT IN predicate.
Part 4
Chapter 14.
Online LOAD RESUME
SHRLEVEL is a new keyword for the LOAD utility in V7 and it specifies the extent to which applications can concurrently access the table space or partition(s) being loaded. SHRLEVEL NONE (default) specifies that applications have no concurrent access to the table space or partition(s) being loaded, that is, the LOAD utility operates the same as prior to version 7. This type of LOAD is also referred to as classic LOAD throughout this chapter. The term online refers to the mode of operation of the LOAD utility when SHRLEVEL CHANGE is specified. When using online LOAD RESUME, other applications can concurrently issue SQL SELECT, UPDATE, INSERT and DELETE statements against table(s) in the table space or partition(s) being loaded. In brief, by using online LOAD RESUME, you can load data with minimal impact on SQL applications and without writing and maintaining INSERT programs.
Compared to the classic LOAD, online LOAD RESUME runs longer. But on the other hand, all tables are available to other users all the time. Also, when you compare online LOAD RESUME to a program doing massive SQL INSERTs, online LOAD RESUME is faster and consumes less CPU. For more details, see DB2 for z/OS and OS/390 Version 7 Performance Topics, SG24-6129.
Logging
Only LOG YES is allowed. Therefore, no COPY is required afterwards. If you are thinking about converting from classic LOAD to online LOAD RESUME for large tables, you may want to review your DB2 logging environment: check the number of active logs and their placement on fast disk devices, and consider log striping and increasing the DB2 log output buffer size.
RI
Referential integrity is enforced when loading a table with online LOAD RESUME. When you use online LOAD RESUME on a self-referencing table, it forces you to sort the input in such a way that referential integrity rules are met, rather than sorting the input in clustering sequence, which you used to do for classic LOAD.
Duplicate keys
The handling of duplicate keys is somewhat different when using online LOAD RESUME compared to the classic LOAD utility. When using online LOAD RESUME and a unique index is defined, INSERTs (done by the online LOAD) are accepted as long as they provide different values for this column (or set of columns). This is different from the classic LOAD procedure, which discards all rows that you try to load having the same value for a unique key. You may have set up a procedure (manual or automated) to handle the rows classic LOAD discards. When you move over to online LOAD RESUME, you have to change the handling of the discarded rows accordingly.
Clustering
Whereas the classic LOAD RESUME stores the new records (in the sequence of the input) at the end of the already existing records, online LOAD RESUME tries to insert the records into the available free space respecting the clustering order as much as possible. When you have to LOAD (insert) a lot of rows, make sure there is enough free space available. Otherwise these rows are likely to be stored out of the clustering order and you might end up having to run a REORG to restore proper clustering (which can be run online as well).
So, a REORG may be needed after the classic LOAD, as the clustering may not be preserved, but also after an online LOAD RESUME if insufficient free space is available.
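A sketch of how free space could be adjusted before converting to online LOAD RESUME (the table space name is from the sample environment; the PCTFREE and FREEPAGE values are illustrative assumptions). The new values only take effect for pages that are subsequently formatted, so a REORG is needed to materialize them:

```sql
ALTER TABLESPACE DB246300.TS246304
  PCTFREE 10
  FREEPAGE 15 ;

-- Followed by, for instance:
-- REORG TABLESPACE DB246300.TS246304 SHRLEVEL CHANGE
```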
Free space
As mentioned before, the available free space, PCTFREE or FREEPAGE, is used by online LOAD RESUME, in contrast to the classic LOAD. As a consequence, a REORG may be needed after an online LOAD RESUME to ensure sufficient free space (PCTFREE and FREEPAGE) is available for subsequent inserts. Figure 14-1 shows an example of the online LOAD RESUME syntax and the inserts that it performs. Some differences with the classic LOAD are listed.
Online LOAD RESUME syntax:

LOAD DATA
RESUME YES SHRLEVEL CHANGE
INTO TABLE SC246300.TBEMPLOYEE
( EMPNO    POSITION ( 1: 6) CHAR
, PHONENO  POSITION ( 9:24) CHAR
, WORKDEPT POSITION (25:28) CHAR
...

Processing (performed as inserts):

INSERT INTO SC246300.TBEMPLOYEE VALUES ('000001' ,'4082651590' ,'A05' );
INSERT INTO SC246300.TBEMPLOYEE VALUES ('000002' ,'4082651596' ,'C02' );
INSERT INTO SC246300.TBEMPLOYEE VALUES ('000003' ,'4082651598' ,'B10' );
...

Differences with the classic LOAD:
- Claims (not drains)
- LOG YES only
- Fires triggers
- Data integrity enforced: RI, check constraints and unique keys
- Free space used (clustering may be preserved)

Figure 14-1 Online LOAD RESUME
Commit frequency
Online LOAD RESUME dynamically monitors the current locking situation for the table spaces or partitions being loaded. This enables online LOAD RESUME to choose the commit frequency and avoid lock contention with other SQL. This kind of commit frequency flexibility is not possible to code in batch insert programs, even though the commit frequency can be changed while the program is running.
Restart
During RELOAD, internal commit points are set; therefore, RESTART(CURRENT) is possible, as with the classic LOAD. When using an INSERT program, application repositioning techniques are needed to be able to restart, which is not all that easy when sequential files are involved, especially when writing them (for example, records that could not be inserted for some reason).
Phases
Some utility phases are obviously not included in online LOAD RESUME, as this kind of LOAD operates like SQL INSERTs. But the DISCARD and REPORT phases are still performed. Input records which fail the insert are written to the discard data set, and error information is stored in the error data set. Batch INSERT programs have to take care of records which were not inserted. Application programs allow more refined handling of errors than just finding the duplicates and data type violations that the LOAD utility does. On the other hand, if that type of checking is required, you can write a checking routine to validate input before you start loading the data into the tables.
- LOG NO. Online LOAD RESUME always writes to the DB2 log.
- ENFORCE NO. Inserts done by online LOAD RESUME always enforce RI and check constraints.
- STATISTICS. Online LOAD does not gather inline statistics.
- COPYDDN/RECOVERYDDN. Online LOAD does not create inline image copies.
- PART integer REPLACE.
- INCURSOR. You cannot use the cross-loader functionality with online LOAD.
In addition, online LOAD RESUME and REORG (including online REORG) cannot concurrently process the same target object. For a complete list of options and other utilities that are incompatible with LOAD SHRLEVEL CHANGE, see DB2 UDB for OS/390 and z/OS Version 7 Utility Guide and Reference, SC26-9945.
REORG DISCARD
The REORG DISCARD enhancement allows you to discard unwanted rows during a normal REORG. This capability is mutually exclusive with UNLOAD EXTERNAL, but shares much of the same implementation.
Example 14-1 REORG DISCARD utility statement
REORG TABLESPACE DB246300.TS246304
  DISCARD
  FROM TABLE SC246300.TBEMPLOYEE
  WHEN (EMPNO = '000001')
The WHEN conditions that you can specify are a simple subset of what can be coded on a WHERE clause. WHEN conditions allow AND-ing and OR-ing of selection conditions and predicates. Comparison operators are =, >, <, <>, <=, >=, IS (NOT) NULL, (NOT) LIKE, (NOT) BETWEEN and (NOT) IN. The only predicates allowed are comparisons between a column and a constant or a labeled-duration-expression (like CURRENT DATE + 30 DAYS). Discarded rows are written to a SYSDISC data set or a DD-name specified with the DISCARDDN keyword.
Important: If no discard data set is specified, the discarded records are lost.
Either UNLOAD EXTERNAL or REORG DISCARD can generate the same LOAD control statements, based on the data being processed. A sample LOAD statement generated by REORG DISCARD is shown in Figure 14-2.
LOAD DATA LOG NO INDDN SYSREC EBCDIC CCSID (500,0,0)
INTO TABLE "SC246300"."TBEMPLOYEE"       <-- this identifies the table
WHEN (00004:00005 = X'0012')
( "EMPNO   " POSITION(00007:00012) CHAR(006)
, "PHONENO " POSITION(00013:00027) CHAR(015)
, "WORKDEPT" POSITION(00028:00030) CHAR(003)
, ...
, "SALARY  " POSITION(00091:00095) DECIMAL NULLIF(00090)=X'FF'
, ...
)
Figure 14-2 Generated LOAD statements
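Building on Example 14-1, the following sketch shows a REORG DISCARD that uses a labeled-duration-expression in the WHEN condition and an explicit discard data set; the DD-name and the retention criterion are illustrative assumptions:

```sql
REORG TABLESPACE DB246300.TS246304
  DISCARDDN SYSDISC
  DISCARD
  FROM TABLE SC246300.TBEMPLOYEE
  WHEN (HIREDATE < CURRENT DATE - 365 DAYS)
```

Because DISCARDDN is specified, the discarded rows are preserved in the SYSDISC data set instead of being lost.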
Many installations want to be able to unload data into a user-friendly format, and to do it quickly (several times faster than DSNTIAUL, for example). REORG UNLOAD ONLY is fast, but places the data in an internal format that is distinctly not user-friendly. Its only use is as input for a LOAD FORMAT UNLOAD utility, and it must be loaded back into the same table it was unloaded from. A new option, REORG UNLOAD EXTERNAL, provides the required capability to unload data in an external format. Like DSNTIAUL, this function also generates standard LOAD utility statements as part of the process. The unloaded data can be loaded into another table. The UNLOAD utility is a new member of the DB2 Utilities Suite that was introduced in V7. The UNLOAD utility unloads data from one or more source objects to one or more sequential data sets in external format. The source objects can be DB2 table spaces or DB2 image copy data sets.
Important: If there are multiple tables in the table space, those not subject to the WHEN clause are unloaded in their entirety.
14.3.3 UNLOAD
The UNLOAD utility can unload data from one or more source objects to one or more BSAM sequential data sets in external format. The source objects can be DB2 table spaces or DB2 image copy data sets. The UNLOAD utility does not use indexes to access the source table(s); the utility scans the table space or partition(s) directly. In addition to the functions that are also supported by REORG UNLOAD EXTERNAL, the UNLOAD utility also supports the ability to:
- Unload data from image copy data set(s), including full, incremental, DSN1COPY and inline copies.
- Select columns (specifying an order of the fields in the output record).
- Sample and limit the number of rows unloaded (by table).
- Specify the start position, length and data type of output fields.
- Format output fields.
- Translate output character-type data to EBCDIC, ASCII or UNICODE.
- Specify SHRLEVEL and ISOLATION level.
- Unload table space partitions in parallel.
UNLOAD TABLESPACE DB246300.TS246304 ASCII NOPAD
  FROM TABLE SC246300.TBEMPLOYEE
  HEADER CONST 'EMP'
  SAMPLE 50
  LIMIT 4000
  (EMPNO, LASTNAME, SALARY DECIMAL EXTERNAL)
  WHEN (WORKDEPT = 'D11' AND SALARY > 25000)

The column order specifies the field order in the output records. LIMIT gives the maximum number of rows which will be unloaded from table EMP.
The figure shows some of the options that can be used by the UNLOAD utility. Here we use the HEADER keyword to give a more meaningful identifier to an output record (instead of the OBID). The data is unloaded in ASCII format, and variable-length data is not padded. We are not unloading the entire table: SAMPLE 50 takes a 50 percent sample of the rows that qualify the WHEN conditions, up to a maximum of 4000 rows. We only unload the columns specified, in the order specified, and the SALARY column is unloaded in readable format instead of normal packed decimal. For more detailed information on how to use the UNLOAD utility refer to:
- DB2 UDB for OS/390 and z/OS Version 7 Utility Guide and Reference, SC26-9945
- DB2 for z/OS and OS/390 Version 7: Using the Utilities Suite, SG24-6289
On top of that, it has a lot of additional capabilities that have been described earlier in this section, like field output selection, ordering of the columns in the output, position, data type, length and format specification, sampling and limiting the number of rows, unloading table space partitions in parallel, and so on. Another major advantage of UNLOAD is the possibility to unload data from an image copy data set, avoiding access to the base table and interference with other processes. Even though the UNLOAD utility can run with an isolation level of UR, it still accesses data through the buffer pool, which might cause response time degradation for other processes when pages get pushed out of the buffer pool by the full scan of the UNLOAD utility.
Note: It is possible to unload from an image copy that is no longer present in SYSIBM.SYSCOPY. However, the table space and table from which you want to unload still have to exist in the DB2 catalog.
If it is necessary to access the base table to unload the latest version of the table, you could use the SHRLEVEL and ISOLATION clauses as you do for other utilities. Otherwise, unload from an image copy. Because the DB2 UNLOAD utility uses the DB2 buffer pools to retrieve the data, no QUIESCE WRITE YES is required before starting the unload process.
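A sketch of unloading from an image copy data set with the FROMCOPY option (the image copy data set name and the DD-names are illustrative assumptions):

```sql
UNLOAD TABLESPACE DB246300.TS246304
  FROMCOPY DB246300.TS246304.FULLCOPY    -- hypothetical image copy data set name
  PUNCHDDN SYSPUNCH                      -- generated LOAD statements
  UNLDDN SYSREC                          -- unloaded records
  FROM TABLE SC246300.TBEMPLOYEE
```

Because the data is read from the image copy, the base table space is not touched at all during the unload.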
Table 14-1 Comparing different means to unload data

                                                  DSNTIAUL   REORG UNLOAD EXTERNAL   UNLOAD
Unloading rows through full SQL SELECTs
(including joins from multiple tables)                       No
Unloading rows in a specific order                           No
Performance                                                  Fast
Unloading from image copies                                  No
Unavailability of data while running (1)                     Complete
Formatting possibilities (2)                                 None
Parallelism (3)                                              No
Can connect to remote systems to unload data                 No
Support for TEMPLATE and LISTDEF                             Yes
Unload against the catalog (4)                               No
Restart capabilities                                         Yes
Unloading RI related data                                    No
Notes:
(1) You can change DSNTIAUL to run with ISOLATION level UR or add the WITH UR clause to your unload SQL statement. REORG UNLOAD EXTERNAL can only use SHRLEVEL NONE. UNLOAD can use ISOLATION CS or UR (SHRLEVEL REFERENCE) or SHRLEVEL CHANGE.
(2) Although DSNTIAUL has limited formatting capabilities (only what you can do in an SQL statement), if you write your own program you have of course full control. Even though the UNLOAD utility has a lot more formatting options than REORG UNLOAD EXTERNAL, there is some room for improvement, like standard support for common PC formats.
(3) When executing plain SQL statements, the optimizer decides whether or not to use parallelism for a certain query (if the plan is bound with DEGREE(ANY)).
(4) Although you can unload data from the catalog, some catalog tables that contain LOB columns might pose a problem, mostly because of the 32K record length restriction on output files.
If you are looking for a fast way to unload data, either directly from the table space or from an image copy, and your requirements for the format in which the data needs to be unloaded are not too stringent, the UNLOAD utility is an excellent choice. Once you have migrated to Version 7, there is probably no reason to continue using REORG UNLOAD EXTERNAL. The UNLOAD utility is a complete functional replacement (even adding a lot of extra capabilities) and its performance is slightly better. Home-grown unload applications can still be useful in some cases, for instance when you have very specific output format requirements or when you need to unload data that is somehow related (like a parent record with all its dependent rows) in a single unload operation.
Normal application programs that unload data can also perform quite well, depending on the circumstances. When unloading an entire table space, DB2 utilities easily outperform home-grown programs. However, when you are unloading a selective number of rows and a good index can be used to retrieve the data, normal SQL applications can outperform the UNLOAD utility, which always scans the entire table space or partition.
You can only put one SQL statement between the EXEC SQL and ENDEXEC keywords. The SQL statement can be any dynamic SQL statement that can be used as input for the EXECUTE IMMEDIATE statement, as listed in Example 14-3.
Example 14-3 List of dynamic SQL statements
- CREATE, ALTER, DROP a DB2 object
- RENAME a DB2 table
- COMMENT ON, LABEL ON a DB2 table, view, or column
- GRANT, REVOKE a DB2 authority
- DELETE, INSERT, UPDATE SQL operation
- LOCK TABLE operation
- EXPLAIN an SQL statement
- SET CURRENT register
- COMMIT, ROLLBACK operation
In Example 14-4 we create a new table in the default database DSNDB04 with the same layout as SYSIBM.SYSTABLES.
Example 14-4 Create a new table with the same layout as SYSIBM.SYSTABLES EXEC SQL CREATE TABLE PAOLOR3.SYSTABLES LIKE SYSIBM.SYSTABLES
ENDEXEC
In the same way, we are able to create indexes on this table, create views on it, and so on. All this is done in the utility input stream.
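Cursors declared this way can be used by the LOAD utility (the cross-loader function); a sketch, where the target table name PAOLOR3.TBEMPLOYEE is an illustrative assumption:

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
  SELECT EMPNO, PHONENO, WORKDEPT
  FROM SC246300.TBEMPLOYEE
ENDEXEC
LOAD DATA
  INCURSOR(C1)
  REPLACE
  INTO TABLE PAOLOR3.TBEMPLOYEE
```

Note that INCURSOR is not allowed with SHRLEVEL CHANGE, so this only applies to classic LOAD.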
Benefits of EXEC SQL:
- You can execute any non-select dynamically preparable SQL statement within the utility input stream.
- You can declare cursors for use with the LOAD utility, including joins, unions, conversions, aggregations, and remote DRDA access.
- Successfully executed SQL statements are skipped during restart of the utility.
- In many cases, the need for extra dynamic SQL programs in the utility job stream is eliminated.
- Considerable simplification of JCL is possible.
Restrictions of EXEC SQL:
- There are no select statements.
- There is no control after an error: the whole utility step stops after the first SQL error.
- There is no concept of a unit-of-work consisting of multiple SQL statements.
- There are no comments possible between SQL statements.
For a more detailed description, see DB2 for z/OS and OS/390 Version 7: Using the Utilities Suite, SG24-6289.
Part 5. Appendixes
Appendix A. DDL of the DB2 objects used in the examples
Data model of the tables used in the examples:

TBORDER
  ORDERKEY       INTEGER
  CUSTKEY        CUSTOMER
  ORDERSTATUS    CHAR(1)
  TOTALPRICE     FLOAT
  ORDERDATE      DATE
  ORDERPRIORITY  CHAR(15)
  CLERK          CHAR(6)
  SHIPPRIORITY   INTEGER
  STATE          CHAR(2)
  REGION_CODE    INTEGER
  INVOICE_DATE   DATE
  COMMENT        VARCHAR(79)

TBCUSTOMER
  CUSTKEY     CUSTOMER
  MKTSEGMENT  CHAR(10)
  ACCTBAL     FLOAT
  PHONENO     CHAR(15)
  NATIONKEY   INTEGER
  CITYKEY     INTEGER
  BIRTHDATE   DATE
  SEX         CHAR(1)
  STATE       CHAR(2)
  ZIPCODE     INTEGER
  FIRSTNAME   VARCHAR(12)
  LASTNAME    VARCHAR(15)
  ADDRESS     VARCHAR(40)
  COMMENT     VARCHAR(117)

TBCONTRACT
  BUYER       CUSTOMER
  SELLER      CHAR(6)
  RECNO       CHAR(15)
  PESETAFEE   PESETA
  PESETACOMM  PESETA
  EUROFEE     EURO
  EUROCOMM    EURO
  CONTRDATE   DATE
  CLAUSE      VARCHAR(500)

TBREGION
  REGION_CODE    INTEGER
  REGION_NAME    CHAR(25)
  NUM_ORDERS     INTEGER
  NUM_ITEMS      INTEGER
  DOLLAR_AMOUNT  DOLLAR

TBLINEITEM
  NORDERKEY      INTEGER
  L_ITEM_NUMBER  INTEGER
  DISCOUNT       FLOAT
  QUANTITY       INTEGER
  SUPPKEY        INTEGER
  LINENUMBER     INTEGER
  TAX            FLOAT
  RETURNFLAG     CHAR(1)
  LINESTATUS     CHAR(1)
  SHIPDATE       DATE
  COMMITDATE     DATE
  RECEIPTDATE    DATE
  SHIPINSTRUCT   CHAR(25)
  SHIPMODE       CHAR(10)
  COMMENT        VARCHAR(44)

TBEMPLOYEE
  EMPNO      CHAR(6)
  PHONENO    CHAR(15)
  WORKDEPT   CHAR(3)
  HIREDATE   DATE
  JOB        CHAR(8)
  EDLEVEL    INTEGER
  SEX        CHAR(1)
  BIRTHDATE  DATE
  SALARY     DECIMAL(9)
  BONUS      DECIMAL(9)
  COMM       DECIMAL(9,2)
  DEPTSIZE   INT
  MANAGER    CHAR(6)
  NUMORDERS  INTEGER
  CITYKEY    INTEGER
  NATIONKEY  INTEGER
  STATE      CHAR(2)
  ZIPCODE    INTEGER
  FIRSTNME   VARCHAR(12)
  LASTNAME   VARCHAR(15)
  ADDRESS    VARCHAR(40)

TBCITIES
  CITYKEY      INTEGER
  REGION_CODE  INTEGER
  STATE        CHAR(2)
  CITYNAME     VARCHAR(20)
  COUNTRY      VARCHAR(20)

TBDEPARTMENT
  DEPTNO    CHAR(3)
  MGRNO     CHAR(6)
  ADMRDEPT  CHAR(3)
  LOCATION  CHAR(16)
  BUDGET    INTEGER
  DEPTNAME  VARCHAR(29)

TBITEMS
  ITEM_NUMBER   INTEGER
  PRODUCT_NAME  CHAR(10)
  STOCK         INTEGER
  PRICE         DECIMAL(8,2)
  COMMENT       VARCHAR(100)
//*                                                                  *
//* STATUS = VERSION 7                                               *
//*                                                                  *
//* FUNCTION = THIS JCL BINDS AND RUNS THE SCHEMA PROCESSOR.         *
//*                                                                  *
//**********************************************************************
//JOBLIB   DD DSN=DB2G7.SDSNLOAD,DISP=SHR
//*
//* STEP 1 : BIND AND RUN PROGRAM DSNHSP
//PH01SS01 EXEC PGM=IKJEFT01,DYNAMNBR=20
//DBRMLIB  DD DSN=DB2G7.SDSNDBRM,
//            DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DB2G)
  BIND PLAN(DSNHSP71) MEMBER(DSNHSPMN) ACTION(REP) ISO(CS) ENCODING(EBCDIC)
  RUN PROGRAM(DSNHSP) PLAN(DSNHSP71) LIB('DB2G7.SDSNLOAD')
  END
//*
//SYSIN    DD *
 CREATE SCHEMA AUTHORIZATION SC246300
   CREATE INDEX ....
   CREATE DISTINCT TYPE ....
   CREATE TABLE ....
   CREATE TABLESPACE ....
   GRANT ....
   CREATE TABLE ....
   CREATE INDEX ....
   CREATE DISTINCT TYPE ....
   GRANT ....
   CREATE FUNCTION ....
   CREATE DATABASE ...
   CREATE UNIQUE INDEX ....
   CREATE INDEX ....
   CREATE TRIGGER ....
   GRANT ALTERIN ON SCHEMA SC246300 TO PUBLIC
   GRANT CREATEIN ON SCHEMA SC246300 TO PUBLIC
   GRANT DROPIN ON SCHEMA SC246300 TO PUBLIC
//*
Example: A-2 DDL for the stogroup, database, table space creation
---------------------------------------------------------------------
-- DDL TO CREATE THE DATABASE AND TS (OPTIONALLY STOGROUP)
-- OBJECTS ARE VERY SMALL (SO WE CAN AFFORD TO USE DEFAULT SPACE)
---------------------------------------------------------------------
CREATE STOGROUP SG246300 VOLUMES(SBOX24) VCAT DB2V710G #

CREATE DATABASE DB246300
  BUFFERPOOL BP1
  INDEXBP BP2
  STOGROUP SG246300
  CCSID EBCDIC #

CREATE TABLESPACE TS246300 IN DB246300 #
CREATE TABLESPACE TS246301 IN DB246300 #
CREATE TABLESPACE TS246302 IN DB246300 #
CREATE TABLESPACE TS246303 IN DB246300 #
CREATE TABLESPACE TS246304 IN DB246300 #
CREATE TABLESPACE TS246305 IN DB246300 #
CREATE TABLESPACE TS246306 IN DB246300 #
CREATE TABLESPACE TS246307 IN DB246300 #
CREATE TABLESPACE TS246308 IN DB246300 #
CREATE TABLESPACE TS246309 IN DB246300 #
CREATE TABLESPACE TS246310 IN DB246300 #
CREATE TABLESPACE TS246311 IN DB246300 #
CREATE TABLESPACE TS246331 IN DB246300 #
CREATE TABLESPACE TS246332 IN DB246300 #
CREATE TABLESPACE TS246333 IN DB246300 #
CREATE TABLESPACE TSLITERA IN DB246300 #
CREATE LOB TABLESPACE TSLOB1 IN DB246300 #
CREATE LOB TABLESPACE TSLOB2 IN DB246300 #
CREATE TABLESPACE TS246399 IN DB246300 NUMPARTS 4 #
Example: A-3 DDL for UDT and UDF creation
CREATE DISTINCT TYPE SC246300.DOLLAR AS DECIMAL(17,2) WITH COMPARISONS #
CREATE DISTINCT TYPE SC246300.PESETA AS DECIMAL(18,0) WITH COMPARISONS #
CREATE DISTINCT TYPE SC246300.EURO AS DECIMAL(17,2) WITH COMPARISONS #
CREATE DISTINCT TYPE SC246300.CUSTOMER AS CHAR(11) WITH COMPARISONS #

GRANT USAGE ON DISTINCT TYPE SC246300.EURO TO PUBLIC #
GRANT EXECUTE ON FUNCTION SC246300.EURO(DECIMAL) TO PUBLIC #
GRANT EXECUTE ON FUNCTION SC246300.DECIMAL(EURO) TO PUBLIC #

CREATE FUNCTION SC246300."+" (EURO,EURO) RETURNS EURO
  SOURCE SYSIBM."+" (DECIMAL(17,2),DECIMAL(17,2)) #
GRANT EXECUTE ON FUNCTION SC246300."+"(EURO,EURO) TO PUBLIC #

CREATE FUNCTION SC246300."+" (PESETA,PESETA) RETURNS PESETA
  SOURCE SYSIBM."+" (DECIMAL(18,0),DECIMAL(18,0)) #
GRANT EXECUTE ON FUNCTION SC246300."+"(PESETA,PESETA) TO PUBLIC #

SET CURRENT PATH = 'SC246300' #

CREATE FUNCTION SC246300.PES2EUR (X DECIMAL)
  RETURNS DECIMAL
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
  RETURN X/166 #

CREATE FUNCTION SC246300.EUR2PES (X DECIMAL)
  RETURNS DECIMAL
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
  RETURN X*166 #
CREATE FUNCTION SC246300.SUM(PESETA) RETURNS PESETA
  SOURCE SYSIBM.SUM(DECIMAL(18,0)) #
CREATE FUNCTION SC246300.SUM(EURO) RETURNS EURO
  SOURCE SYSIBM.SUM(DECIMAL(17,2)) #
CREATE FUNCTION SC246300.AVG(EURO) RETURNS SC246300.EURO
  SOURCE SYSIBM.AVG(DECIMAL(17,2)) #
CREATE FUNCTION SC246300.AVG(PESETA) RETURNS SC246300.PESETA
  SOURCE SYSIBM.AVG(DECIMAL(18,0)) #

-- JUST DUMMY DEFINITION TO MAKE SAMPLES WORK
CREATE FUNCTION SC246300.LARGE_ORDER_ALERT
  (CUSTKEY CUSTOMER, TOTALPRICE FLOAT, ORDERDATE DATE)
  RETURNS CHAR(2)
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  NOT DETERMINISTIC
  RETURN 'OK' #
SET CURRENT PATH = 'SC246300' #
-- CURRENT PATH IS NEEDED TO FIND THE USER DEFINED DATA TYPES

CREATE TABLE SC246300.TBITEMS
  ( ITEM_NUMBER  INTEGER NOT NULL,
    PRODUCT_NAME CHAR(10) NOT NULL WITH DEFAULT,
    STOCK        INTEGER NOT NULL WITH DEFAULT,
    PRICE        DECIMAL(8,2),
    COMMENT      VARCHAR(100) WITH DEFAULT 'NONE',
    PRIMARY KEY (ITEM_NUMBER) )
  IN DB246300.TS246300
WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.ITEMIX ON SC246300.TBITEMS (ITEM_NUMBER)# -CREATE TABLE SC246300.TBCITIES ( CITYKEY INTEGER NOT NULL , REGION_CODE INTEGER , STATE CHAR ( 2 ) NOT NULL WITH DEFAULT , CITYNAME VARCHAR( 20 ) NOT NULL WITH DEFAULT, COUNTRY VARCHAR( 20 ) NOT NULL WITH DEFAULT, PRIMARY KEY (CITYKEY) ) IN DB246300.TS246309 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.CITYIX ON SC246300.TBCITIES (CITYKEY)# -CREATE TABLE SC246300.TBCUSTOMER ( CUSTKEY CUSTOMER NOT NULL, MKTSEGMENT CHAR( 10 ) NOT NULL WITH DEFAULT, ACCTBAL FLOAT, PHONENO CHAR( 15 ) NOT NULL WITH DEFAULT, NATIONKEY INTEGER NOT NULL WITH DEFAULT, CITYKEY INTEGER , BIRTHDATE DATE, SEX CHAR( 1 ) NOT NULL, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT , ZIPCODE INTEGER NOT NULL WITH DEFAULT , FIRSTNAME VARCHAR( 12 ) NOT NULL WITH DEFAULT, LASTNAME VARCHAR( 15 ) NOT NULL WITH DEFAULT, ADDRESS VARCHAR ( 40 ) NOT NULL WITH DEFAULT, COMMENT VARCHAR( 117 ), PRIMARY KEY (CUSTKEY), CONSTRAINT SEXCUST CHECK (SEX IN ('M','F')), FOREIGN KEY (CITYKEY) REFERENCES SC246300.TBCITIES ) IN DB246300.TS246301 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.CUSTIX ON SC246300.TBCUSTOMER (CUSTKEY)# -CREATE TABLE SC246300.TBCUSTOMER_ARCH ( CUSTKEY CUSTOMER NOT NULL, MKTSEGMENT CHAR( 10 ) NOT NULL WITH DEFAULT, ACCTBAL FLOAT, PHONENO CHAR( 15 ) NOT NULL WITH DEFAULT, NATIONKEY INTEGER NOT NULL WITH DEFAULT, CITYKEY INTEGER , BIRTHDATE DATE, SEX CHAR( 1 ) NOT NULL, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT , ZIPCODE INTEGER NOT NULL WITH DEFAULT , FIRSTNAME VARCHAR( 12 ) NOT NULL WITH DEFAULT, LASTNAME VARCHAR( 15 ) NOT NULL WITH DEFAULT,
ADDRESS VARCHAR ( 40 ) NOT NULL WITH DEFAULT, COMMENT VARCHAR( 117 ), PRIMARY KEY (CUSTKEY), CONSTRAINT SEXCUST CHECK (SEX IN ('M','F')) ) IN DB246300.TS246311 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.CUST_ARCHIX ON SC246300.TBCUSTOMER_ARCH (CUSTKEY)# -CREATE TABLE SC246300.TBREGION ( REGION_CODE INTEGER NOT NULL, REGION_NAME CHAR ( 25 ) NOT NULL , NUM_ORDERS INTEGER , NUM_ITEMS INTEGER , DOLLAR_AMOUNT DOLLAR NOT NULL WITH DEFAULT, PRIMARY KEY (REGION_CODE) ) IN DB246300.TS246306 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.REGX ON SC246300.TBREGION (REGION_CODE)# -CREATE TABLE SC246300.TBORDER ( ORDERKEY INTEGER NOT NULL , CUSTKEY CUSTOMER NOT NULL , ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT, TOTALPRICE FLOAT NOT NULL , ORDERDATE DATE NOT NULL WITH DEFAULT, ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT, CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT, SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT, REGION_CODE INTEGER, INVOICE_DATE DATE NOT NULL WITH DEFAULT, COMMENT VARCHAR ( 79 ), PRIMARY KEY (ORDERKEY), FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER, FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION ) IN DB246300.TS246303 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.ORDERIX ON SC246300.TBORDER(ORDERKEY)# -CREATE TABLE SC246300.TBLINEITEM ( NORDERKEY INTEGER NOT NULL , L_ITEM_NUMBER INTEGER NOT NULL , DISCOUNT FLOAT NOT NULL WITH DEFAULT, QUANTITY INTEGER NOT NULL , SUPPKEY INTEGER NOT NULL WITH DEFAULT, LINENUMBER INTEGER NOT NULL , TAX FLOAT NOT NULL WITH DEFAULT, RETURNFLAG CHAR ( 1 ) NOT NULL WITH DEFAULT,
LINESTATUS CHAR( 1 ) NOT NULL WITH DEFAULT, SHIPDATE DATE NOT NULL WITH DEFAULT, COMMITDATE DATE NOT NULL WITH DEFAULT, RECEIPTDATE DATE NOT NULL WITH DEFAULT, SHIPINSTRUCT CHAR ( 25 ) NOT NULL WITH DEFAULT, SHIPMODE CHAR ( 10 ) NOT NULL WITH DEFAULT, COMMENT VARCHAR ( 44 ), PRIMARY KEY (NORDERKEY,LINENUMBER), FOREIGN KEY (NORDERKEY) REFERENCES SC246300.TBORDER ON DELETE CASCADE, FOREIGN KEY (L_ITEM_NUMBER) REFERENCES SC246300.TBITEMS ) IN DB246300.TS246302 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.LINEITEMIX ON SC246300.TBLINEITEM (NORDERKEY,LINENUMBER)# -CREATE TABLE SC246300.TBDEPARTMENT ( DEPTNO CHAR ( 3 ) NOT NULL , MGRNO CHAR ( 6 ) , ADMRDEPT CHAR ( 3 ) NOT NULL WITH DEFAULT, LOCATION CHAR ( 16 ) NOT NULL WITH DEFAULT, BUDGET INTEGER , DEPTNAME VARCHAR ( 29 ) NOT NULL WITH DEFAULT, PRIMARY KEY (DEPTNO) ) IN DB246300.TS246305 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.DEPTIX ON SC246300.TBDEPARTMENT (DEPTNO)# -CREATE TABLE SC246300.TBEMPLOYEE ( EMPNO CHAR ( 6 ) NOT NULL , PHONENO CHAR ( 15 ) NOT NULL WITH DEFAULT, WORKDEPT CHAR ( 3 ) , HIREDATE DATE NOT NULL WITH DEFAULT, JOB CHAR ( 8 ) NOT NULL WITH DEFAULT, EDLEVEL INTEGER NOT NULL WITH DEFAULT, SEX CHAR ( 1 ) NOT NULL , BIRTHDATE DATE , SALARY DECIMAL ( 9 ) , BONUS DECIMAL ( 9 ) , COMM DECIMAL ( 9, 2 ) , DEPTSIZE INT NOT NULL WITH DEFAULT, MANAGER CHAR(6), NUMORDERS INTEGER , CITYKEY INTEGER NOT NULL WITH DEFAULT, NATIONKEY INTEGER NOT NULL WITH DEFAULT, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT, ZIPCODE INTEGER NOT NULL WITH DEFAULT, FIRSTNME VARCHAR ( 12 ) NOT NULL WITH DEFAULT, LASTNAME VARCHAR ( 15 ) NOT NULL WITH DEFAULT, ADDRESS VARCHAR( 40 ), CONSTRAINT SEXEMPL CHECK (SEX IN ('M','F')), PRIMARY KEY (EMPNO), FOREIGN KEY (WORKDEPT) REFERENCES SC246300.TBDEPARTMENT
) IN DB246300.TS246304 WITH RESTRICT ON DROP# -CREATE UNIQUE INDEX SC246300.EMPLIX ON SC246300.TBEMPLOYEE (EMPNO)# -CREATE INDEX SC246300.EMPNMEIX ON SC246300.TBEMPLOYEE (LASTNAME,FIRSTNME)# -ALTER TABLE SC246300.TBEMPLOYEE FOREIGN KEY (MANAGER) REFERENCES SC246300.TBEMPLOYEE ON DELETE NO ACTION # -ALTER TABLE SC246300.TBDEPARTMENT FOREIGN KEY (MGRNO) REFERENCES SC246300.TBEMPLOYEE ON DELETE NO ACTION # -CREATE TABLE SC246300.TBCONTRACT ( SELLER CHAR ( 6 ) NOT NULL , BUYER CUSTOMER NOT NULL , RECNO CHAR(15) NOT NULL , PESETAFEE SC246300.PESETA , PESETACOMM PESETA , EUROFEE SC246300.EURO , EUROCOMM EURO , CONTRDATE DATE, CLAUSE VARCHAR(500) NOT NULL WITH DEFAULT, FOREIGN KEY (BUYER) REFERENCES SC246300.TBCUSTOMER, FOREIGN KEY (SELLER) REFERENCES SC246300.TBEMPLOYEE ON DELETE CASCADE ) IN DB246300.TS246308 WITH RESTRICT ON DROP# -CREATE TABLE SC246300.TBSTATE ( STATE CHAR ( 2 ) NOT NULL , NUM_ORDERS INTEGER , NUM_ITEMS INTEGER , DOLLAR_AMOUNT DOLLAR ) IN DB246300.TS246307 WITH RESTRICT ON DROP# -CREATE TABLE SC246300.INVITATION_CARDS ( PHONENO CHAR( 15 ) NOT NULL WITH DEFAULT, STATUS CHAR( 1 ) , SEX CHAR( 1 ) NOT NULL WITH DEFAULT, BIRTHDATE DATE, CITYKEY INTEGER NOT NULL WITH DEFAULT, FIRSTNAME VARCHAR( 12 ) NOT NULL WITH DEFAULT, LASTNAME VARCHAR( 15 ) NOT NULL WITH DEFAULT, ADDRESS VARCHAR ( 40 ) ) IN DB246300.TS246310 # --
CREATE TABLE SC246300.TBORDER_1 ( ORDERKEY INTEGER NOT NULL , CUSTKEY CUSTOMER NOT NULL , ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT, TOTALPRICE FLOAT NOT NULL , ORDERDATE DATE NOT NULL WITH DEFAULT, ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT, CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT, SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT, REGION_CODE INTEGER, INVOICE_DATE DATE NOT NULL WITH DEFAULT, COMMENT VARCHAR ( 79 ), PRIMARY KEY (ORDERKEY), FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER, FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION ) IN DB246300.TS246331 WITH RESTRICT ON DROP # -- Create unique index on primary key CREATE UNIQUE INDEX SC246300.X1TBORDER_1 ON SC246300.TBORDER_1(ORDERKEY ASC) # -- Create indexes on foreign keys CREATE INDEX SC246300.X2TBORDER_1 ON SC246300.TBORDER_1(CUSTKEY ASC) # CREATE INDEX SC246300.X3TBORDER_1 ON SC246300.TBORDER_1(REGION_CODE
ASC) #
CREATE TABLE SC246300.TBORDER_2 ( ORDERKEY INTEGER NOT NULL , CUSTKEY CUSTOMER NOT NULL , ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT, TOTALPRICE FLOAT NOT NULL , ORDERDATE DATE NOT NULL WITH DEFAULT, ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT, CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT, SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT, REGION_CODE INTEGER, INVOICE_DATE DATE NOT NULL WITH DEFAULT, COMMENT VARCHAR ( 79 ), PRIMARY KEY (ORDERKEY), FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER, FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION ) IN DB246300.TS246332 WITH RESTRICT ON DROP# CREATE UNIQUE INDEX SC246300.X1TBORDER_2 ON SC246300.TBORDER_2(ORDERKEY ASC) # CREATE INDEX SC246300.X2TBORDER_2 ON SC246300.TBORDER_2(CUSTKEY ASC) # CREATE INDEX SC246300.X3TBORDER_2
ON SC246300.TBORDER_2(REGION_CODE ASC) #
CREATE TABLE SC246300.TBORDER_3 ( ORDERKEY INTEGER NOT NULL , CUSTKEY CUSTOMER NOT NULL , ORDERSTATUS CHAR ( 1 ) NOT NULL WITH DEFAULT, TOTALPRICE FLOAT NOT NULL , ORDERDATE DATE NOT NULL WITH DEFAULT, ORDERPRIORITY CHAR (15 ) NOT NULL WITH DEFAULT, CLERK CHAR ( 6 ) NOT NULL WITH DEFAULT, SHIPPRIORITY INTEGER NOT NULL WITH DEFAULT, STATE CHAR ( 2 ) NOT NULL WITH DEFAULT, REGION_CODE INTEGER, INVOICE_DATE DATE NOT NULL WITH DEFAULT, COMMENT VARCHAR ( 79 ), PRIMARY KEY (ORDERKEY), FOREIGN KEY (CUSTKEY) REFERENCES SC246300.TBCUSTOMER, FOREIGN KEY (REGION_CODE) REFERENCES SC246300.TBREGION ) IN DB246300.TS246332 WITH RESTRICT ON DROP # CREATE UNIQUE INDEX SC246300.X1TBORDER_3 ON SC246300.TBORDER_3(ORDERKEY ASC) # CREATE INDEX SC246300.X2TBORDER_3 ON SC246300.TBORDER_3(CUSTKEY ASC) # CREATE INDEX SC246300.X3TBORDER_3 ON SC246300.TBORDER_3(REGION_CODE
ASC) #
CREATE VIEW SC246300.VWORDER AS SELECT * FROM SC246300.TBORDER_1 WHERE ORDERKEY BETWEEN 1 AND 700000000 UNION ALL SELECT * FROM SC246300.TBORDER_2 WHERE ORDERKEY BETWEEN 700000001 AND 1400000000 UNION ALL SELECT * FROM SC246300.TBORDER_3 WHERE ORDERKEY BETWEEN 1400000001 AND 2147483647 # CREATE VIEW SC246300.VWORDER_1UPD AS SELECT * FROM SC246300.TBORDER_1 WHERE ORDERKEY BETWEEN 1 AND 700000000 WITH CHECK OPTION # CREATE VIEW SC246300.VWORDER_2UPD AS SELECT * FROM SC246300.TBORDER_2 WHERE ORDERKEY BETWEEN 700000001 AND 1400000000 WITH CHECK OPTION #
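The WITH CHECK OPTION on the VWORDER_nUPD views is what keeps a row from being inserted through the wrong underlying table. A sketch with hypothetical values; the remaining NOT NULL columns and the cast to the CUSTOMER distinct type are elided for brevity:

```sql
-- Accepted: ORDERKEY lies inside the range of VWORDER_1UPD
INSERT INTO SC246300.VWORDER_1UPD (ORDERKEY, ...) VALUES (500, ...)

-- Rejected with SQLCODE -161: ORDERKEY violates the view's WHERE clause
INSERT INTO SC246300.VWORDER_1UPD (ORDERKEY, ...) VALUES (900000000, ...)
```

Together, the three updatable views and the VWORDER union-all view emulate a table partitioned by ORDERKEY range.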
CREATE VIEW SC246300.VWORDER_3UPD AS SELECT * FROM SC246300.TBORDER_3 WHERE ORDERKEY BETWEEN 1400000001 AND 2147483647 WITH CHECK OPTION # -- ROWID USAGE CREATE TABLE SC246300.LITERATURE ( TITLE CHAR(25) ,IDCOL ROWID NOT NULL GENERATED ALWAYS ,MOVLENGTH INTEGER ,LOBMOVIE BLOB(2K) ,LOBBOOK CLOB(10K) ) IN DB246300.TSLITERA # CREATE AUX TABLE SC246300.LOBMOVIE_ATAB IN DB246300.TSLOB1 STORES SC246300.LITERATURE COLUMN LOBMOVIE# CREATE INDEX DB246300.AXLOB1 ON SC246300.LOBMOVIE_ATAB # CREATE AUX TABLE SC246300.LOBBOOK_ATAB IN DB246300.TSLOB2 STORES SC246300.LITERATURE COLUMN LOBBOOK# CREATE INDEX DB246300.AXLOB2 ON SC246300.LOBBOOK_ATAB # CREATE TABLE SC246300.LITERATURE_GA (TITLE CHAR(30) ,IDCOL ROWID NOT NULL GENERATED ALWAYS ,MOVLENGTH INTEGER ) IN DB246300.TS246300 # CREATE TABLE SC246300.LITERATURE_GDEF (TITLE CHAR(30) ,IDCOL ROWID NOT NULL GENERATED BY DEFAULT ,MOVLENGTH INTEGER ) IN DB246300.TS246300 # CREATE UNIQUE INDEX DD ON SC246300.LITERATURE_GDEF (IDCOL) # -- This index is required for tables with GENERATED BY DEFAULT ROWID -- columns in order to guarantee uniqueness. You receive an -- SQLCODE -540 if the index does not exist. CREATE TABLE SC246300.LITERATURE_PART (TITLE CHAR(25) ,IDCOL ROWID NOT NULL GENERATED ALWAYS ,SEQNO INTEGER ,BOOKTEXT LONG VARCHAR ) IN DB246300.TS246399 # CREATE INDEX SC246300.IDCOLIX_PART
ON SC246300.LITERATURE_PART (IDCOL) CLUSTER (PART 1 VALUES(X'3F'), PART 2 VALUES(X'7F'), PART 3 VALUES(X'BF'), PART 4 VALUES(X'FF')) # CREATE UNIQUE INDEX SC246300.TITLEIX ON SC246300.LITERATURE_PART (TITLE,SEQNO) # CREATE VIEW SC246300.CUSTOMRANDEMPLOYEE AS SELECT FIRSTNME AS FIRSTNAME ,LASTNAME ,PHONENO ,BIRTHDATE ,SEX ,YEAR(DATE(DAYS(CURRENT DATE)-DAYS(BIRTHDATE))) AS AGE ,ADDRESS ,CITYKEY FROM SC246300.TBEMPLOYEE UNION ALL SELECT FIRSTNAME ,LASTNAME ,PHONENO ,BIRTHDATE ,SEX ,YEAR(DATE(DAYS(CURRENT DATE)-DAYS(BIRTHDATE))) AS AGE ,ADDRESS ,CITYKEY FROM SC246300.TBCUSTOMER #
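The difference between the two ROWID flavors defined above can be sketched as follows; the inserted values are hypothetical sample data:

```sql
-- GENERATED ALWAYS: DB2 must generate the ROWID, so IDCOL is simply omitted
INSERT INTO SC246300.LITERATURE_GA (TITLE, MOVLENGTH)
  VALUES ('MOBY DICK', 120)

-- GENERATED BY DEFAULT: an application may supply its own IDCOL value,
-- which is why the unique index on IDCOL is mandatory (SQLCODE -540 without it)
INSERT INTO SC246300.LITERATURE_GDEF (TITLE, MOVLENGTH)
  VALUES ('MOBY DICK', 120)
```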
NO CASCADE BEFORE INSERT ON SC246300.TBCUSTOMER REFERENCING NEW AS N FOR EACH ROW MODE DB2SQL WHEN (NOT EXISTS (SELECT CITYKEY FROM SC246300.TBCITIES WHERE CITYKEY = N.CITYKEY)) SIGNAL SQLSTATE 'ERR10' ('NOT A VALID CITY') # -CREATE TRIGGER SC246300.BUDG_ADJ AFTER UPDATE OF SALARY ON SC246300.TBEMPLOYEE REFERENCING OLD AS OLD_EMP NEW AS NEW_EMP FOR EACH ROW MODE DB2SQL UPDATE SC246300.TBDEPARTMENT SET BUDGET = BUDGET + (NEW_EMP.SALARY - OLD_EMP.SALARY) WHERE DEPTNO = NEW_EMP.WORKDEPT # -CREATE TRIGGER SC246300.CHK_SAL NO CASCADE BEFORE UPDATE OF SALARY ON SC246300.TBEMPLOYEE REFERENCING OLD AS OLD_EMP NEW AS NEW_EMP FOR EACH ROW MODE DB2SQL WHEN (NEW_EMP.SALARY > OLD_EMP.SALARY * 1.20) SIGNAL SQLSTATE '75001'('INVALID SALARY INCREASE - EXCEEDS 20%')# -CREATE TRIGGER SC246300.CHK_HDAT NO CASCADE BEFORE INSERT ON SC246300.TBEMPLOYEE REFERENCING NEW AS NEW_EMP FOR EACH ROW MODE DB2SQL VALUES ( CASE WHEN NEW_EMP.HIREDATE < CURRENT DATE THEN RAISE_ERROR('75001','HIREDATE HAS PASSED') WHEN NEW_EMP.HIREDATE - CURRENT DATE > 365 THEN RAISE_ERROR ('85002','HIREDATE TOO FAR IN FUTURE') ELSE 0 END) # --- SECURITY: TO PREVENT UPDATING SALARY ACCIDENTALLY -WILL PREVENT SAMPLE SQL TO WORK SO NOT IMPLEMENTED HERE --- CREATE TRIGGER SC246300.UPDSALAR -NO CASCADE BEFORE -UPDATE OF SALARY -ON SC246300.TBEMPLOYEE -FOR EACH STATEMENT MODE DB2SQL -VALUES ( -RAISE_ERROR('90001','YOU MUST DROP THE TRIGGER UPDSALAR'))# -CREATE TRIGGER SC246300.SEXCNST NO CASCADE BEFORE INSERT ON SC246300.TBEMPLOYEE REFERENCING NEW AS N FOR EACH ROW MODE DB2SQL WHEN ( N.SEX NOT IN('M','F')) SIGNAL SQLSTATE 'ERRSX' ('SEX MUST BE EITHER M OR F') #
-- CREATE TRIGGER SC246300.ITEMNMBR -NO CASCADE BEFORE -INSERT -ON SC246300.TBLINEITEM -REFERENCING NEW AS N -FOR EACH ROW -MODE DB2SQL -WHEN ( N.L_ITEM_NUMBER NOT IN -(1, 5, 6, -9996,9998,10000) ) -SIGNAL SQLSTATE 'ERR30' -('ITEM NUMBER DOES NOT EXIST') # --- MORE DYNAMIC VERSION OF PREVIOUS TRIGGER -CREATE TRIGGER SC246300.ITEMNMB2 NO CASCADE BEFORE INSERT ON SC246300.TBLINEITEM REFERENCING NEW AS N FOR EACH ROW MODE DB2SQL WHEN ( N.L_ITEM_NUMBER NOT IN ( SELECT ITEM_NUMBER FROM SC246300.TBITEMS WHERE N.L_ITEM_NUMBER = ITEM_NUMBER ) ) SIGNAL SQLSTATE 'ERR30' ('ITEM NUMBER DOES NOT EXIST') # -CREATE TRIGGER SC246300.TGEURMOD NO CASCADE BEFORE INSERT ON SC246300.TBCONTRACT REFERENCING NEW AS N FOR EACH ROW MODE DB2SQL SET N.EUROFEE = EURO(DECIMAL(N.PESETAFEE)/166) # -CREATE TRIGGER SC246300.LARG_ORD AFTER INSERT ON SC246300.TBORDER REFERENCING NEW_TABLE AS N_TABLE FOR EACH STATEMENT MODE DB2SQL SELECT LARGE_ORDER_ALERT(CUSTKEY, TOTALPRICE, ORDERDATE) FROM N_TABLE WHERE TOTALPRICE > 10000 #
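A statement such as the following (the EMPNO value is illustrative) would be stopped by the CHK_SAL before trigger, because it raises the salary by more than 20%:

```sql
UPDATE SC246300.TBEMPLOYEE
   SET SALARY = SALARY * 1.50
 WHERE EMPNO = '000001'
```

The update fails with SQLSTATE '75001' and the message text supplied in the trigger's SIGNAL statement.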
SC246300.TBDEPARTMENT
DEPTNO  MGRNO   ADMRDEPT  LOCATION       BUDGET  DEPTNAME
B01     ------  KHI       SAN JOSE       100000  DB2
A01     000001  EKV       MADRID          40000  SALES
C01     ------  DAH       FLORIDA         35000  MVS
A02     000001  ERG       SAN FRANCISCO   32000  MARKETING

SC246300.TBCUSTOMER
CUSTKEY  FIRSTNAME  LASTNAME  SEX  CITYKEY
01       ADELA      SALVADOR  F    1
02       MIRIAM     ANTOLIN   F    2
03       MARK       SMITH     M    3
04       SILVIA     YOUNG     F    3
05       IVAN       KENT      M    3

SC246300.TBCITIES
CITYKEY  REGION_CODE  STATE  CITYNAME       COUNTRY
1        28010               MADRID         SPAIN
2        15                  AMSTERDAM      HOLLAND
3        55                  SAN FRANCISCO  USA
4        42                  NOKIA          FINLAND
5        33076               CORAL SPRINGS  USA
6        97                  TOKIO          JAPAN

SC246300.TBCONTRACT
SELLER  BUYER  RECNO  PESETAFEE
000006  02     333    90000.
000001  03     222    10000.
000003  01     111    50000.

SC246300.TBREGION
REGION_CODE  REGION_NAME
28010        PROVINCE OF MADRID
55           SILICON VALLEY
15           THE NETHERLANDS

SC246300.TBORDER
ORDERKEY  CUSTKEY  ORDERSTATUS  ORDERPRIORITY  SHIPPRIORITY  REGION_CODE
10        03       O            URGENT         1             5
1         05       M            NORMAL         3             2801
2         03       M            VERY URGENT    0             1
3         05       O                           3             1
4         01       M                           3             5
5         03       O                           1             5
6         01       O                           2             2801
7         04       O                           2             2801
8         02       M                           2             5
9         03       O                           0             1

SC246300.TBLINEITEM
NORDERKEY  LINENUMBER  L_ITEM_NUMBER  QUANTITY  TAX
1          1           100            3         5
1          2           120            1         5
2          1           440            2         10
3          1           660            3         5
4          1           505            4         10
4          2           440            1         10
4          3           660            1         5
5          1           133            1         5
6          1           100            8         5
7          1           120            1         5
8          1           440            1         10
9          1           660            1         5
10         1           440            1         10
10         2           100            1         5

SC246300.TBITEMS
ITEM_NUMBER  PRODUCT_NAME  STOCK  PRICE  COMMENT
100          WIDGET        50     1.25   INCOMPATIBLE WITH HAMMER
120          NUT           50     1.25   FOR TYPE 20 NUT ONLY
133          WASHER        50     1.25   BRONZE
440          HAMMER        50     1.25   NONE
505          NAIL          50     1.25   GALVANIZED
660          SCREW         50     1.25   WOOD
ALTER TABLE SC246300.TBDEPARTMENT DROP RESTRICT ON DROP#
ALTER TABLE SC246300.TBSTATE DROP RESTRICT ON DROP#
ALTER TABLE SC246300.TBORDER_1 DROP RESTRICT ON DROP#
ALTER TABLE SC246300.TBORDER_2 DROP RESTRICT ON DROP#
ALTER TABLE SC246300.TBORDER_3 DROP RESTRICT ON DROP#
DROP DATABASE DB246300#

DROP FUNCTION SC246300.PES2EUR RESTRICT#
DROP FUNCTION SC246300.EUR2PES RESTRICT#
DROP FUNCTION SC246300.EUR22PES RESTRICT#
DROP FUNCTION SC246300.SUM(PESETA) RESTRICT#
DROP FUNCTION SC246300.SUM(EURO) RESTRICT#
DROP FUNCTION SC246300.LARGE_ORDER_ALERT RESTRICT#
DROP DISTINCT TYPE SC246300.CUSTOMER RESTRICT#
DROP DISTINCT TYPE SC246300.PESETA RESTRICT#
DROP DISTINCT TYPE SC246300.EURO RESTRICT#
DROP DISTINCT TYPE SC246300.DOLLAR RESTRICT#
Appendix B. Sample programs
A selected set of sample programs is shown in this appendix. The rest are available from the additional material on the Internet.
- Returning SQLSTATE from a stored procedure to a trigger
- Passing a transition table from a trigger to a stored procedure
 01 ERROR-MESSAGE.
    02 ERROR-LEN      PIC S9(4)  COMP VALUE +960.
    02 ERROR-TEXT     PIC X(120) OCCURS 8 TIMES
                                 INDEXED BY ERROR-INDEX.
 77 ERROR-TEXT-LEN    PIC S9(8)  COMP VALUE +120.
 77 ERR-CODE          PIC 9(8)   VALUE 0.
 77 ERR-MINUS         PIC X      VALUE SPACE.
 77 LINE-EXEC         PIC X(20)  VALUE SPACE.
* VARIABLE TO GET FIELDS IN SQLCA FORMATTED PROPERLY
 77 XSQLCABC          PIC 9(9).
 77 XSQLCODE          PIC S9(9) SIGN IS LEADING SEPARATE.
 77 XSQLERRML         PIC S9(9) SIGN IS LEADING SEPARATE.
 77 XSQLERRD          PIC S9(9) SIGN IS LEADING SEPARATE.
 LINKAGE SECTION.
* INPUT PARM PASSED BY STORED PROC
 01 PARM1             PIC X(20).
 01 INDPARM1          PIC S9(4) COMP.
* DECLARE THE SQLSTATE THAT CAN BE SET BY STORED PROC
 01 P-SQLSTATE        PIC X(5).
* DECLARE THE QUALIFIED PROCEDURE NAME
 01 P-PROC.
    49 P-PROC-LEN     PIC 9(4) USAGE BINARY.
    49 P-PROC-TEXT    PIC X(27).
* DECLARE THE SPECIFIC PROCEDURE NAME
 01 P-SPEC.
    49 P-SPEC-LEN     PIC 9(4) USAGE BINARY.
    49 P-SPEC-TEXT    PIC X(18).
* DECLARE SQL DIAGNOSTIC MESSAGE TOKEN
 01 P-DIAG.
    49 P-DIAG-LEN     PIC 9(4) USAGE BINARY.
    49 P-DIAG-TEXT    PIC X(70).
 PROCEDURE DIVISION USING PARM1, INDPARM1, P-SQLSTATE,
P-PROC, P-SPEC, P-DIAG. MOVE PARM1 TO VAR1. DISPLAY "VAR1 " VAR1. MOVE "UPD ONE" TO LINE-EXEC. * This operation will fail because the trigger that invokes * the SP is a before trigger. Therefore it CANNOT update. EXEC SQL UPDATE SC246300.TBEMPLOYEE SET PHONENO=:VAR1 WHERE EMPNO ='000006' END-EXEC. * DISPLAY "AFTER UPDATE ONE" SQLCODE . IF SQLCODE NOT EQUAL 0 THEN PERFORM DBERROR END-IF. * ERROR-EXIT. GOBACK. DBERROR. DISPLAY "*** SQLERR FROM SD0BMS3 ***". MOVE SQLCODE TO ERR-CODE . IF SQLCODE < 0 THEN MOVE '-' TO ERR-MINUS. DISPLAY "SQLCODE = " ERR-MINUS ERR-CODE "LINE " LINE-EXEC. CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN. IF RETURN-CODE = ZERO PERFORM ERROR-PRINT VARYING ERROR-INDEX FROM 1 BY 1 UNTIL ERROR-INDEX GREATER THAN 8 * TO SHOW WHERE EVERYTHING GOES IN SQLCA DISPLAY "*** START OF UNFORMATTED SQLCA ***" DISPLAY "SQLCAID X(8) " SQLCAID MOVE SQLCABC TO XSQLCABC DISPLAY "SQLCABC I " XSQLCABC MOVE SQLCODE TO XSQLCODE DISPLAY "SQLCODE I " XSQLCODE MOVE SQLERRML TO XSQLERRML DISPLAY "SQLERRML SI " XSQLERRML DISPLAY "SQLERRMC X(70) " SQLERRMC DISPLAY "SQLERRP X(8) " SQLERRP MOVE SQLERRD(1) TO XSQLERRD DISPLAY "SQLERRD1 I " XSQLERRD MOVE SQLERRD(2) TO XSQLERRD DISPLAY "SQLERRD2 I " XSQLERRD MOVE SQLERRD(3) TO XSQLERRD DISPLAY "SQLERRD3 I " XSQLERRD MOVE SQLERRD(4) TO XSQLERRD DISPLAY "SQLERRD4 I " XSQLERRD MOVE SQLERRD(5) TO XSQLERRD DISPLAY "SQLERRD5 I " XSQLERRD MOVE SQLERRD(6) TO XSQLERRD DISPLAY "SQLERRD6 I " XSQLERRD DISPLAY "SQLWARN0 X(1) " SQLWARN0 DISPLAY "SQLWARN1 X(1) " SQLWARN1 DISPLAY "SQLWARN2 X(1) " SQLWARN2 DISPLAY "SQLWARN3 X(1) " SQLWARN3 DISPLAY "SQLWARN4 X(1) " SQLWARN4 DISPLAY "SQLWARN5 X(1) " SQLWARN5 DISPLAY "SQLWARN6 X(1) " SQLWARN6 DISPLAY "SQLWARN7 X(1) " SQLWARN7 DISPLAY "SQLWARN8 X(1) " SQLWARN8
DISPLAY "SQLWARN9 X(1) " SQLWARN9 DISPLAY "SQLWARNA X(1) " SQLWARNA DISPLAY "SQLSTATE X(5) " SQLSTATE DISPLAY "*** END OF UNFORMATTED SQLCA ***" ELSE DISPLAY RETURN-CODE. MOVE '38601' TO P-SQLSTATE . MOVE 16 TO P-DIAG-LEN. MOVE 'SP HAD SQL ERROR' TO P-DIAG-TEXT. ERROR-PRINT. DISPLAY ERROR-TEXT (ERROR-INDEX).
 01 ERROR-MESSAGE.
    02 ERROR-LEN      PIC S9(4)  COMP VALUE +960.
    02 ERROR-TEXT     PIC X(120) OCCURS 8 TIMES
                                 INDEXED BY ERROR-INDEX.
 77 ERROR-TEXT-LEN    PIC S9(8)  COMP VALUE +120.
 77 ERR-CODE          PIC 9(8)   VALUE 0.
 77 ERR-MINUS         PIC X      VALUE SPACE.
 77 LINE-EXEC         PIC X(20)  VALUE SPACE.
 77 I                 PIC 9(5) VALUE 0.

 LINKAGE SECTION.
* ****************************************
* 1. DECLARE TABLOC AS LARGE INTEGER PARM
* ****************************************
 01 TABLOC            PIC S9(9) USAGE BINARY.
 01 INDTABLOC         PIC S9(4) COMP.
* DECLARE THE SQLSTATE THAT CAN BE SET BY STORED PROC
 01 P-SQLSTATE        PIC X(5).
* DECLARE THE QUALIFIED PROCEDURE NAME
 01 P-PROC.
    49 P-PROC-LEN     PIC 9(4) USAGE BINARY.
    49 P-PROC-TEXT    PIC X(27).
* DECLARE THE SPECIFIC PROCEDURE NAME
 01 P-SPEC.
    49 P-SPEC-LEN     PIC 9(4) USAGE BINARY.
    49 P-SPEC-TEXT    PIC X(18).
* DECLARE SQL DIAGNOSTIC MESSAGE TOKEN
 01 P-DIAG.
    49 P-DIAG-LEN     PIC 9(4) USAGE BINARY.
    49 P-DIAG-TEXT    PIC X(70).

 PROCEDURE DIVISION USING TABLOC, INDTABLOC, P-SQLSTATE,
                          P-PROC, P-SPEC, P-DIAG.
* THE INDTABLOC INDICATOR VARIABLE IS IMPORTANT
* OTHERWISE YOU WON'T BE ABLE TO PASS BACK INFO THROUGH
* P-SQLSTATE
* *********************************************
* 4. DECLARE CURSOR USING THE TRANSITION TABLE
* *********************************************
     EXEC SQL
       DECLARE C1 CURSOR FOR
       SELECT EMPNO, FIRSTNME
       FROM TABLE ( :TRIG-TBL-ID LIKE SC246300.TBEMPLOYEE )
     END-EXEC.
* ***************************************************************
* 3. COPY TABLE LOCATOR INPUT PARM TO THE TABLE LOCATOR HOST VAR
* ***************************************************************
     MOVE TABLOC TO TRIG-TBL-ID.

     MOVE "OPEN CUR" TO LINE-EXEC.
     EXEC SQL OPEN C1 END-EXEC.
*    MOVE SQLCODE TO XSQLCODE.
*    DISPLAY " AFTER OPEN SQLCODE " XSQLCODE.
     IF SQLCODE < 0 THEN
        PERFORM DBERROR
        PERFORM ERROR-EXIT.
* **********************************************************
* 5. PROCESS DATA FROM TRANSITION TABLE
*
* HERE WE ONLY DISPLAY THE INFO IN THE OUTPUT OF THE SP
 77 XSQLCABC          PIC 9(9).
 77 XSQLCODE          PIC S9(9) SIGN IS LEADING SEPARATE.
 77 XSQLERRML         PIC S9(9) SIGN IS LEADING SEPARATE.
 77 XSQLERRD          PIC S9(9) SIGN IS LEADING SEPARATE.
* ADDRESS SPACE, BUT HERE IS WHERE YOU WOULD DO THE
* REAL WORK ON THE TRANSITION TABLE
* **********************************************************
     DISPLAY " ".
     DISPLAY " DATA FROM TRANSITION TABLE ".
     DISPLAY " PEOPLE WITH SALARY INCREASE".
     DISPLAY " ".
     DISPLAY " ROW EMPNO FIRSTNAME".
     PERFORM GET-ROWS-E VARYING I FROM 1 BY 1
         UNTIL SQLCODE = 100.
     MOVE "CLOS CUR" TO LINE-EXEC.
     EXEC SQL CLOSE C1 END-EXEC.
 ERROR-EXIT.
     GOBACK.
 GET-ROWS-E.
* FETCH ROWS FROM THE TRANSITION TABLE INTO HOST VARIABLES.
     MOVE "FETCH E " TO LINE-EXEC.
     EXEC SQL
       FETCH C1 INTO :EMPNO, :FIRSTNME
     END-EXEC.
* DISPLAY INFO TO DEBUG
*    MOVE SQLCODE TO XSQLCODE.
*    DISPLAY " AFTER FETCH SQLCODE " XSQLCODE.
     IF SQLCODE = 0 THEN
        DISPLAY I ' ' EMPNO ' ' FIRSTNME-TEXT
     ELSE
        IF SQLCODE < 0 THEN
           PERFORM DBERROR
        END-IF.
     ADD 1 TO J.
DBERROR. DISPLAY "*** SQLERR FROM SPTRTT ***". MOVE SQLCODE TO ERR-CODE . IF SQLCODE < 0 THEN MOVE '-' TO ERR-MINUS. DISPLAY "SQLCODE = " ERR-MINUS ERR-CODE "LINE " LINE-EXEC. CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN. IF RETURN-CODE = ZERO PERFORM ERROR-PRINT VARYING ERROR-INDEX FROM 1 BY 1 UNTIL ERROR-INDEX GREATER THAN 8 * TO SHOW WHERE EVERYTHING GOES IN SQLCA DISPLAY "*** START OF UNFORMATTED SQLCA ***" DISPLAY "SQLCAID X(8) " SQLCAID MOVE SQLCABC TO XSQLCABC DISPLAY "SQLCABC I " XSQLCABC MOVE SQLCODE TO XSQLCODE DISPLAY "SQLCODE I " XSQLCODE MOVE SQLERRML TO XSQLERRML DISPLAY "SQLERRML SI " XSQLERRML DISPLAY "SQLERRMC X(70) " SQLERRMC DISPLAY "SQLERRP X(8) " SQLERRP MOVE SQLERRD(1) TO XSQLERRD
DISPLAY "SQLERRD1 I " XSQLERRD MOVE SQLERRD(2) TO XSQLERRD DISPLAY "SQLERRD2 I " XSQLERRD MOVE SQLERRD(3) TO XSQLERRD DISPLAY "SQLERRD3 I " XSQLERRD MOVE SQLERRD(4) TO XSQLERRD DISPLAY "SQLERRD4 I " XSQLERRD MOVE SQLERRD(5) TO XSQLERRD DISPLAY "SQLERRD5 I " XSQLERRD MOVE SQLERRD(6) TO XSQLERRD DISPLAY "SQLERRD6 I " XSQLERRD DISPLAY "SQLWARN0 X(1) " SQLWARN0 DISPLAY "SQLWARN1 X(1) " SQLWARN1 DISPLAY "SQLWARN2 X(1) " SQLWARN2 DISPLAY "SQLWARN3 X(1) " SQLWARN3 DISPLAY "SQLWARN4 X(1) " SQLWARN4 DISPLAY "SQLWARN5 X(1) " SQLWARN5 DISPLAY "SQLWARN6 X(1) " SQLWARN6 DISPLAY "SQLWARN7 X(1) " SQLWARN7 DISPLAY "SQLWARN8 X(1) " SQLWARN8 DISPLAY "SQLWARN9 X(1) " SQLWARN9 DISPLAY "SQLWARNA X(1) " SQLWARNA DISPLAY "SQLSTATE X(5) " SQLSTATE DISPLAY "*** END OF UNFORMATTED SQLCA ***" ELSE DISPLAY RETURN-CODE. MOVE '38601' TO P-SQLSTATE . MOVE 16 TO P-DIAG-LEN. MOVE 'SP HAD SQL ERROR' TO P-DIAG-TEXT. ERROR-PRINT. DISPLAY ERROR-TEXT (ERROR-INDEX).
Appendix C. Additional material
This redbook refers to additional material that can be downloaded from the Internet as described below.
Select Additional materials and open the directory that corresponds to the redbook form number, SG246300.
These two sequential files were created from partitioned data sets using the TSO TRANSMIT OUTDA() command. To recreate the partitioned data sets on OS/390 from the downloaded files, you need to:
1. Transfer the files from the PC to MVS as binary, with the following attributes for the output data set: DSORG=PS, RECFM=FB, LRECL=80, BLKSIZE=3200.
2. Use the TSO RECEIVE INDA() command to create the partitioned data sets (PDSs) from the sequential data sets you just transferred. You can use the TSO HELP RECEIVE command to find out about the optional parameters for the RECEIVE command.
Both PDS data sets have an $INDEX member that explains the content of the individual members and how to use them.
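As a sketch, the two steps above might look like this in a TSO session. The data set names (SG246300.XMIT and SG246300.PDS) are hypothetical placeholders, not names prescribed by the redbook; substitute your own:

    ALLOCATE DATASET(SG246300.XMIT) NEW SPACE(5,5) TRACKS +
             DSORG(PS) RECFM(F B) LRECL(80) BLKSIZE(3200)
    /* upload the downloaded file into SG246300.XMIT in binary */
    RECEIVE INDA(SG246300.XMIT)
    /* when RECEIVE prompts for restore parameters, reply: */
    DSNAME(SG246300.PDS)

RECEIVE reads the TRANSMIT-format sequential file and rebuilds the original PDS under the name you supply at the prompt.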
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 254.
DB2 for z/OS and OS/390 Version 7: Using the Utilities Suite, SG24-6289
DB2 for OS/390 and z/OS Powering the World's e-business Solutions, SG24-6257
DB2 for z/OS and OS/390 Version 7 Performance Topics, SG24-6129
DB2 UDB Server for OS/390 and z/OS Version 7 Presentation Guide, SG24-6121
DB2 UDB Server for OS/390 Version 6 Technical Update, SG24-6108
DB2 Java Stored Procedures Learning by Example, SG24-5945
DB2 UDB for OS/390 Version 6 Performance Topics, SG24-5351
DB2 for OS/390 Version 5 Performance Topics, SG24-2213
DB2 for MVS/ESA Version 4 Non-Data-Sharing Performance Topics, SG24-4562
DB2 UDB for OS/390 Version 6 Management Tools Package, SG24-5759
DB2 Server for OS/390 Version 5 Recent Enhancements - Reference Guide, SG24-5421
DB2 for OS/390 Capacity Planning, SG24-2244
Cross-Platform DB2 Stored Procedures: Building and Debugging, SG24-5485
DB2 for OS/390 and Continuous Availability, SG24-5486
Connecting WebSphere to DB2 UDB Server, SG24-6219
Parallel Sysplex Configuration: Cookbook, SG24-2076
DB2 for OS/390 Application Design Guidelines for High Performance, SG24-2233
Using RVA and SnapShot for BI with OS/390 and DB2, SG24-5333
IBM Enterprise Storage Server Performance Monitoring and Tuning Guide, SG24-5656
DFSMS Release 10 Technical Update, SG24-6120
Storage Management with DB2 for OS/390, SG24-5462
Implementing ESS Copy Services on S/390, SG24-5680
Other resources
These publications are also relevant as further information sources:
DB2 UDB for OS/390 and z/OS Version 7 What's New, GC26-9946
DB2 UDB for OS/390 and z/OS Version 7 Installation Guide, GC26-9936
DB2 UDB for OS/390 and z/OS Version 7 Command Reference, SC26-9934
DB2 UDB for OS/390 and z/OS Version 7 Messages and Codes, GC26-9940
DB2 UDB for OS/390 and z/OS Version 7 Utility Guide and Reference, SC26-9945
DB2 UDB for OS/390 and z/OS Version 7 Application Programming Guide and Reference for Java, SC26-9932
DB2 UDB for OS/390 and z/OS Version 7 Administration Guide, SC26-9931
DB2 UDB for OS/390 and z/OS Version 7 Application Programming and SQL Guide, SC26-9933
DB2 UDB for OS/390 and z/OS Version 7 Release Planning Guide, SC26-9943
DB2 UDB for OS/390 and z/OS Version 7 SQL Reference, SC26-9944
DB2 UDB for OS/390 and z/OS Version 7 Text Extender Administration and Programming, SC26-9948
DB2 UDB for OS/390 and z/OS Version 7 Data Sharing: Planning and Administration, SC26-9935
DB2 UDB for OS/390 and z/OS Version 7 Image, Audio, and Video Extenders Administration and Programming, SC26-9947
DB2 UDB for OS/390 and z/OS Version 7 ODBC Guide and Reference, SC26-9941
DB2 UDB for OS/390 and z/OS Version 7 XML Extender Administration and Programming, SC26-9949
OS/390 V2R10.0 DFSMS Using Data Sets, SC26-7339
DB2 UDB for OS/390 Version 6 SQL Reference, SC26-9014
You can also download additional materials (code samples or diskette/CD-ROM images) from this Redbooks site. Redpieces are Redbooks in progress; not all Redbooks become Redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much more quickly than the formal publishing process allows.
Special notices
References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service.

Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere.
Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

The following terms are trademarks of other companies:

Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything. Anywhere., TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer-Tivoli A/S.

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.
ITSO  International Technical Support Organization
LPL  logical page list
LPAR  logically partitioned mode
LRECL  logical record length
LRSN  log record sequence number
LUW  logical unit of work
LVM  logical volume manager
MB  megabyte (1,048,576 bytes)
NPI  non-partitioning index
ODB  object descriptor in DBD
ODBC  Open Data Base Connectivity
OS/390  Operating System/390
PAV  parallel access volume
PDS  partitioned data set
PIB  parallel index build
PSID  pageset identifier
PSP  preventive service planning
PTF  program temporary fix
PUNC  possibly uncommitted
QMF  Query Management Facility
QA  Quality Assurance
RACF  Resource Access Control Facility
RBA  relative byte address
RECFM  record format
RID  record identifier
RRS  resource recovery services
RRSAF  resource recovery services attach facility
RS  read stability
RR  repeatable read
SDK  software developers kit
SMIT  System Management Interface Tool
UOW  unit of work
Index
Symbols
=ANY 186 =SOME 186
A
ABSOLUTE 164 absolute moves 164 ACCESSTYPE 120 adding an identity column 107 after join step predicates 182 after trigger 18, 35, 36 alias 8, 42 ALL 186 arithmetic operator 49 AS TEMP 89 ASCII 90, 213 authorization 8 automatic rebind 33 auxiliary table 42
B
base table 89, 92 before trigger 35 BIND 200 bind 92 BLOB 45 boolean term 121 buffer pool 86, 90 built-in data type 43, 44, 45, 49 built-in function exploitation 72 built-in functions 25, 49, 57, 58, 59, 60, 64, 65, 72 arithmetic and string operators 72 before version 6 72 column functions 62, 72, 73, 75 in version 6 73 in version 7 75 restrictions 76 scalar functions 72, 73, 75 built-in operator 49
C
CALL 38 CALLTYPE 63 cartesian product 182 cascade path 29 cascaded level 36 CASE 125 CASE expression 125, 143 alternatives 131 characteristics 126 division by zero 129 ELSE 127 END 127 grouping 132 other uses 131 pivot 132 restrictions 133 trigger 130 UNION 130 WHEN 126 why 127 CAST function 44, 45, 47, 48, 53, 67 casting 49 catalog table 42 CCSID 90 check constraint 38, 39, 47, 121 check pending 38 CICS 95 CLOB 45 COALESCE 131 column function 58, 60, 72, 168 COMMIT 84, 85, 86 comparison operator 49 CONNECT 93 connection 81, 84, 85, 93 constraint 35 cost of trigger 37 COST_CATEGORY 37 CREATE DISTINCT TYPE 44 CREATE FUNCTION 63 CREATE GLOBAL TEMPORARY TABLE 82 CREATE SCHEMA 10, 11 CREATE TABLE 83 created temporary tables 79, 80, 81, 83, 84, 85, 86 characteristics 82 COMMIT 85 considerations 86 creating an instance 84 multiple instances 85 pitfalls 86 RELEASE(COMMIT) 85 RELEASE(DEALLOCATE) 85 restrictions 86 result sets 85 ROLLBACK 85 CREATETAB 82 CREATETMTAB 82 Cross Loader 217 CURRENT 165 CURRENT PATH 8, 9, 44 current server 83, 84 CURRENTDATA 177, 179 cursor movement 164 cursor stability 177 cursor types 151
D
Data Propagator 39 data sharing 89 data sharing group 89 data warehouse 81 DB2 Connect 154 DB2 Extender 60 DB2 family compatibility 194 DB2 Utilities Suite 212 DBADM 82 DBCLOB 45 DBCTRL 82 DBD 37 DBMAINT 82 DBRM 200, 201 DBRM Colon Finder 201 DDF 95 DECLARE CURSOR 150, 155, 156 DECLARE GLOBAL TEMPORARY TABLE 88 declared temporary tables 79, 85, 88, 89, 90, 92, 93 AS TEMP 89 characteristics 88 considerations 94 converting created temporary tables 95 CREATE LIKE 95 index 94 ON COMMIT DELETE ROWS 95 ON COMMIT PRESERVE ROWS 95 PUBLIC 95 remote 93 scrollable cursors 93 three-part name 93 USE 95 DEFAULT 191 default clause 47 default value 83 DELETE self-referencing 193 DELETE ALL 86 delete hole 162, 170 delete rule 35 delete trigger 22 differences between table types 80 direct row access 119 drain 207 DRDA 85, 93, 197 DROP TABLE 87 DROP TRIGGER 34 DSN1COPY 213 DSNDB07 37 DSNH315I 200 DSNHSP 11, 224 DSNTIAUL 205, 212 comparing 215 During join predicates 182
E
E/R-diagram 224 EBCDIC 90, 213 editproc 87, 121 EDM pool 90 encoding scheme 90 ENDEXEC 217 EXEC SQL 217, 219 EXECUTE IMMEDIATE 217 EXPLAIN 142 expression 192 external 58 action 30 function 51, 59 program 57 external resource 62 extract 88
F
falling back 120 fence 64 FETCH 154, 157 FETCH ABSOLUTE 0 165 FETCH AFTER 164 FETCH BEFORE 164, 165 FETCH FIRST n ROWS 197 FETCH FIRST n ROWS ONLY SELECT INTO 198 FETCH LAST 165 FETCH SENSITIVE 159, 162 fieldproc 87, 121 FINAL CALL 63 FOR FETCH ONLY 199 FOR UPDATE OF 161, 174 foreign key 40 fullselect 136
G
GENERATED ALWAYS 113, 122 GENERATED BY DEFAULT 113, 122 global temporary table 79, 80 GRANT 8, 87 GRANT USAGE 47 GROUP BY 196
H
HAVING 196 HEADER 214 hole 162, 170 host variables 53, 199 preceded by a colon 200, 201
I
identity column 104 add 107 CACHE 105, 110, 111 characteristics 104 comparison 122 creating 105 CYCLE 105
data sharing 110 deficiencies 110 design considerations 111 DSN_IDENTITY 107 GENERATED ALWAYS 104, 106, 107 GENERATED BY DEFAULT 107, 111 IDENTITY_VAL_LOCAL 109 IGNOREFIELDS 107 INCLUDING IDENTITY COLUMN ATTRIBUTES 106 LOAD 107 MAXVALUE 105 MINVALUE 105 OVERRIDING USER VALUE 108 populating 106 restrictions 112 START WITH 105 when 104 IDENTITY_VAL_LOCAL 109, 122 IMS 85, 88 IN 187 supports any expression 201 index 8, 86, 88 indexable 81 INNER JOIN 183 INSENSITIVE 152, 155, 156 INSENSITIVE FETCH 159 INSERT 191, 206 DEFAULT keyword 191 self-referencing SELECT 192 UNION 193 using expressions 192 INSERT program 208 insert trigger 22 instance workfile 81 internal 58 isolation level UR 215
J
Java 154
L
large object 45 LEFT OUTER JOIN 184 LIKE 83 LISTDEF 216 LOAD 16, 42 FORMAT UNLOAD 212 SHRLEVEL CHANGE 206 SHRLEVEL NONE 206 LOAD PART 119 LOAD RESUME 38, 42, 205 LOB 87, 113, 156 lock 81, 82, 83, 86 LOCK TABLE 87 locking 86, 207 log 81, 82, 83, 86 logical work file 85 LONG VARCHAR 44 LONG VARGRAPHIC 44
M
materialize 154 missing colons sample program 201
N
nested loop join 81 nested table expression 136 nesting level 29, 42 NEXT 165 NO CASCADE BEFORE 17 non-correlated subquery 193 non-DB2 resource 62 non-partitioning index 143 non-relational data 85 NOT IN 187 null value 83 NULLIF 131
O
OBD 37 object-oriented 43 object-relational 59 ODBC 88 ON clause extensions 182 ON COMMIT DELETE ROWS 89 ON COMMIT PRESERVE ROWS 89, 92 ON condition 182 ON DELETE CASCADE 17 ON DELETE SET NULL 16 online LOAD RESUME 16, 206 clustering 207 commit frequency 208 duplicate keys 207 free space 208 logging 207 pitfalls 209 restart 208 restrictions 209 RI 207 OPEN CURSOR 155 Optimistic Locking Concur By Value 174, 175 optimize 37 OPTIMIZE FOR 197 ORDER BY 188 expression 189 select list 188 sort avoidance 190 overload 60
P
package 32, 34, 39, 59, 92 page size 89, 90, 93 parallelism 216 PARAMETER STYLE DB2SQL 27 PARENT_QBLOCKNO 142
partitioning 144 partitioning key update 202 PARTKEYU 202 PATH 9 plan 92 PLAN_TABLE 142 populate 85, 88 positioned DELETE 176 positioned UPDATE 175 powerful SQL 123 PQ16946 202 PQ19897 210, 212 PQ23219 210, 212 PQ34506 38 PQ53030 15 PRIMARY_ACCESSTYPE 119 PRIOR 165 privilege 82, 83 propagation 39
Q
QBLOCK_TYPE 142 QMF 154 QUALIFIER 9, 10 qualifier 8 quantified predicates 185
R
RAISE_ERROR 25, 127 read stability 177 read-only cursor 153 REBIND 37, 200 rebind 33 REBIND TRIGGER PACKAGE 33 recovery 82 Redbooks Web site 254 Contact us xix referential constraint 35, 38, 87 referential integrity 28 RELATIVE 164 relative moves 165 RELEASE(DEALLOCATE) 37 remote server 84, 93 REORG DISCARD 205, 210 DISCARD restrictions 211 UNLOAD EXTERNAL 205, 211, 212, 213 comparing 215 UNLOAD ONLY 212 repeatable read 177 restart 216 RESTRICT 47 result set 85, 88 result table 154, 162, 166 REXX 154 REXX procedure 201 RI 39 RID 113 RID list processing 121 ROLLBACK 84, 85 ROLLBACK TO SAVEPOINT 156 row expressions 185 quantified predicates 186 restrictions 188 types 185 row size 89 row trigger 18 ROWID 87, 112, 113 casting 117 comparison 122 DCLGEN 117 direct row access 119 EPOCH 121 GENERATED BY DEFAULT 115 implementation 113 IMS 120 LOB 113 partitioning 118 restrictions 121 storing 120 UPDATE 114 USAGE SQL TYPE IS 117 row-value-expression 185
S
SAR 35, 36 savepoint 98 characteristics 99 CONNECT 101 ON ROLLBACK RETAIN CURSORS 100 ON ROLLBACK RETAIN LOCKS 100 RELEASE SAVEPOINT 99 remote connections 101 restrictions 102 ROLLBACK TO SAVEPOINT 99 UNIQUE 100 why 98 scalar function 58, 60, 61, 72, 168 scalar subquery 195 schema 7, 8, 10, 72 authorization 8 authorization ID 10 characteristics 8 name 16, 32, 44 object 14 processor 10 scratchpad 64 SCROLL 155, 156 scrollable cursor 93, 149 absolute moves 164 allowable combinations 160 characteristics 151 choose the right type 153 CLOSE CURSOR 156 cursor movement 164 declaring 155 delete hole 170 FETCH 154, 157 FETCH ABSOLUTE 159 FETCH AFTER 158 FETCH BEFORE 158 FETCH CURRENT 158 FETCH FIRST 158 FETCH LAST 158 FETCH NEXT 158 FETCH PRIOR 158 FETCH RELATIVE 159 fetching 157 in depth 152 INSENSITIVE 154 insensitive 151, 152 LOB 156 locking 177 OPEN CURSOR 155 opening 155 read-only 153 recommendations 179 relative moves 165 SENSITIVE 154 sensitive 152 sensitive dynamic 151 sensitive static 151 stored procedures 178 TEMP database 155 update hole 170 using 154 using functions 168 why 150 searched-when-clause 127 SELECT INTO 198 self-referencing DELETE 193 INSERT 192 restrictions 194 restrictions on usage 194 UPDATE 193 SENSITIVE STATIC 152, 155, 156, 168 SESSION 88, 92 SET 23, 196 SET CURRENT PATH 9 set of affected rows (SAR) 35 SIGNAL SQLSTATE 25 simple-when-clause 127 source data type 44 sourced 44 sourced function 50, 59 special register 8, 9, 10 splitting a table 143 SPUFI 154 SQL enhancements 181 SQL statement terminator 20 SQL_STATEMNT_TABLE 37 SQLSTATE 64 SQLWARN 157 SQLWARN0 157 SQLWARN1 157 SQLWARN4 157 statement trigger 18 STATIC 155
stored 28 stored procedure 7, 8, 9, 15, 28, 29, 33, 38, 42, 81, 85, 88 string operator 58 strong typing 45 subquery 182 subselect 136, 193 synonym 42 SYSADM 8, 82 SYSCTRL 82 SYSFUN 8, 9 SYSIBM 8, 9, 72 SYSOBJ 35 SYSPACKAGE 35 SYSPROC 8, 9 SYSTABAUTH 35 SYSTRIGGERS 30, 35
T
table 83 table access predicates 182 table function 58, 60 table space 89, 90, 93 TABLE_TYPE 142 TEMP database 90, 93, 155 TEMP table spaces 93 TEMPLATE 216 temporary database 88, 89 temporary table 42, 79, 92 temporary table space 88 thread 81, 85, 89, 90, 92 thread reuse 85 thread termination 85 three-part name 42, 85, 93 three-part table name 85 totally after join predicates 182 transition table 22, 37, 62 transition variable 22, 23, 37 transitional business rules 14, 15 trigger 7, 8, 11, 21, 22, 23, 25, 26, 28, 29, 30, 35, 37, 38, 39, 42, 47, 53, 87, 130 action 19, 21, 30, 33, 36, 37 activation time 17, 22 allowable combinations 22 ATOMIC 19 body 19, 23 cascading 28 definition 14 error handling 26 external actions backout 30 FOR EACH ROW 18 FOR EACH STATEMENT 18 granularity 18, 22 invoking SP and UDF 23 name 16 ordering 29 package 32, 33, 37 passing transition tables 30 processing 37 RAISE_ERROR 24 restrictions 42
SIGNAL SQLSTATE 24 table locator 31 transition tables 21 transition variables 20 useful queries 41 valid statements 22 VALUES 23 WHEN 19 trigger action condition 19 trigger characteristics 16 trigger happy 38 trigger package 33 dependencies 33 TRIGGERAUTH 35 triggered operation 19 triggering event 17 triggering operation 16, 19, 22, 29 triggering table 16 two-part name 8, 44
U
UDF 9, 15, 21, 23, 24, 27, 28, 29, 30, 42, 57, 58, 59, 64 column functions 61 definition 59 design considerations 64 efficiency 64 implementation and maintenance 60 scalar functions 60 sourced 66 sourced function 65 table functions 62 UDT 9, 43, 44, 45, 47, 48, 54 CAST 44 catalog tables 56 COMMENT ON 47 DROP 47 EXECUTE 44 GRANT EXECUTE ON 47 privileges 46 USAGE 44 UNDO record 79 UNICODE 90, 213 UNION 127, 136, 142 UPDATE statement 139 UNION ALL 136 union everywhere 136 basic predicates 137 EXISTS predicates 138 explain 142 IN predicates 139 INSERT statements 139 nested table expressions 136 quantified predicates 137 subqueries 137 UPDATE statements 140 views 140 UNIQUE 83 unit of work 11, 84 UNLOAD 205, 212, 213 comparing 215 LOB table spaces 214 pitfalls 215 restriction 214 unqualified table reference 92 UPDATE 195 scalar subquery 195 self-referencing 193 update hole 162, 170, 172 update trigger 22 update with subselect conditions 196 self referencing 197 user-defined 58 user-defined column function 62 user-defined distinct type 7, 9, 44, 72 user-defined function 7, 9, 15, 42, 58, 59, 60, 64, 72 user-defined scalar function 61 user-defined table function 62 USING CCSID 90
V
validproc 87 VALUES INTO 199 view 35, 42, 47, 83 views 8
W
WHEN 126 WHERE CURRENT OF 87, 161 WITH CHECK OPTIO 144 WITH CHECK OPTION 35, 36, 87 WITH COMPARISONS 45 WITH HOLD 84, 85, 89 WITH HOLD, 85 workfile 81, 82, 83, 86
Z
ZPARM 202
Back cover
SG24-6300-00
ISBN 073842353X