Database SQL Programming
7.3
Database
SQL programming
IBM
Note
Before using this information and the product it supports, read the information in “Notices” on page
399.
This edition applies to IBM i 7.3 (product number 5770-SS1) and to all subsequent releases and modifications until
otherwise indicated in new editions. This version does not run on all reduced instruction set computer (RISC) models nor
does it run on CISC models.
This document may contain references to Licensed Internal Code. Licensed Internal Code is Machine Code and is
licensed to you under the terms of the IBM License Agreement for Machine Code.
© Copyright International Business Machines Corporation 1998, 2015.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
SQL programming.................................................................................................. 1
What's new for IBM i 7.3..............................................................................................................................1
PDF file for SQL programming..................................................................................................................... 3
Introduction to Db2 for i Structured Query Language................................................................................ 3
SQL concepts.......................................................................................................................................... 3
SQL relational database and system terminology........................................................................... 5
SQL and system naming conventions...............................................................................................6
Types of SQL statements.................................................................................................................. 6
SQL communication area..................................................................................................................8
SQL diagnostics area.........................................................................................................................8
SQL objects............................................................................................................................................. 9
Schemas............................................................................................................................................ 9
Journals and journal receivers......................................................................................................... 9
Catalogs.............................................................................................................................................9
Tables, rows, and columns............................................................................................................... 9
Aliases............................................................................................................................................. 10
Views............................................................................................................................................... 10
Indexes............................................................................................................................................10
Constraints...................................................................................................................................... 10
Triggers............................................................................................................................................11
Stored procedures.......................................................................................................................... 11
Sequences.......................................................................................................................................11
Global variables.............................................................................................................................. 12
User-defined functions................................................................................................................... 12
User-defined types......................................................................................................................... 12
XSR objects..................................................................................................................................... 12
SQL packages..................................................................................................................................12
Application program objects................................................................................................................13
User source file............................................................................................................................... 14
Output source file member.............................................................................................................14
Program...........................................................................................................................................14
SQL package....................................................................................................................................15
Module.............................................................................................................................................15
Service program..............................................................................................................................15
Data definition language............................................................................................................................15
Creating a schema................................................................................................................................ 16
Creating a table.................................................................................................................................... 16
Adding and removing constraints...................................................................................................17
Referential integrity and tables...................................................................................................... 17
Adding and removing referential constraints........................................................................... 18
Example: Adding referential constraints.................................................................................. 18
Example: Removing constraints..................................................................................................... 19
Check pending................................................................................................................................ 19
Creating a table using LIKE.................................................................................................................. 20
Creating a table using AS..................................................................................................................... 20
Creating and altering a materialized query table................................................................................ 21
Creating a system-period temporal table............................................................................................22
Declaring a global temporary table......................................................................................................23
Creating a table with remote server data............................................................................................ 23
Creating a row change timestamp column.......................................................................................... 24
Creating auditing columns................................................................................................................... 24
Creating and altering an identity column.............................................................................................25
Using ROWID........................................................................................................................................ 26
Creating and using sequences............................................................................................................. 26
Comparison of identity columns and sequences........................................................................... 28
Defining field procedures..................................................................................................................... 29
Field definition for field procedures............................................................................................... 29
Specifying the field procedure........................................................................................................30
When field procedures are invoked................................................................................................30
Parameter list for execution of field procedures........................................................................... 31
The field procedure parameter value list (FPPVL)....................................................................33
Parameter value descriptors for field procedures....................................................................33
Field-definition (function code 8)............................................................................................. 34
Field-encoding (function code 0)..............................................................................................35
Field-decoding (function code 4)..............................................................................................36
Example field procedure program............................................................................................ 36
General guidelines for writing field procedures.............................................................................47
Index considerations.................................................................................................................48
Thread considerations...............................................................................................................49
Debug considerations............................................................................................................... 49
Guidelines for writing field procedures that mask data................................................................ 49
Example field procedure program that masks data................................................................. 51
Creating descriptive labels using the LABEL ON statement............................................................... 53
Describing an SQL object using COMMENT ON................................................................................... 54
Changing a table definition.................................................................................................................. 54
Adding a column............................................................................................................................. 54
Changing a column..........................................................................................................................55
Allowable conversions of data types..............................................................................................55
Deleting a column........................................................................................................................... 56
Order of operations for the ALTER TABLE statement.................................................................... 57
Using CREATE OR REPLACE TABLE................................................................................................ 57
Creating and using ALIAS names.........................................................................................................58
Creating and using views......................................................................................................................59
WITH CHECK OPTION on a view.................................................................................................... 61
WITH CASCADED CHECK OPTION............................................................................................61
WITH LOCAL CHECK OPTION................................................................................................... 62
Example: Cascaded check option............................................................................................. 62
Creating indexes................................................................................................................................... 63
Creating and using global variables..................................................................................................... 64
Replacing existing objects....................................................................................................................64
Catalogs in database design................................................................................................................ 65
Getting catalog information about a table..................................................................................... 65
Getting catalog information about a column................................................................................. 65
Dropping a database object................................................................................................................. 66
Data manipulation language......................................................................................................................66
Retrieving data using the SELECT statement...................................................................................... 66
Basic SELECT statement.................................................................................................................66
Specifying a search condition using the WHERE clause................................................................68
Expressions in the WHERE clause............................................................................................ 69
Comparison operators...............................................................................................................70
NOT keyword............................................................................................................................. 70
GROUP BY clause............................................................................................................................70
HAVING clause................................................................................................................................72
ORDER BY clause............................................................................................................................73
Static SELECT statements.............................................................................................................. 75
Handling null values........................................................................................................................76
Special registers in SQL statements...............................................................................................77
Casting data types.......................................................................................................................... 78
Date, time, and timestamp data types........................................................................................... 79
Specifying current date and time values.................................................................................. 79
Date/time arithmetic................................................................................................................. 80
Row change expressions................................................................................................................ 80
Handling duplicate rows................................................................................................................. 80
Defining complex search conditions.............................................................................................. 81
Special considerations for LIKE................................................................................................ 82
Multiple search conditions within a WHERE clause................................................................. 83
Using OLAP specifications.............................................................................................................. 84
Joining data from more than one table.......................................................................................... 90
Inner join................................................................................................................................... 90
Left outer join............................................................................................................................ 91
Right outer join.......................................................................................................................... 92
Exception join............................................................................................................................ 92
Cross join................................................................................................................................... 93
Full outer join.............................................................................................................................94
Multiple join types in one statement........................................................................................ 94
Using table expressions..................................................................................................................95
Using recursive queries.................................................................................................................. 97
Using the UNION keyword to combine subselects......................................................................109
Specifying the UNION ALL keyword....................................................................................... 112
Using the EXCEPT keyword.......................................................................................................... 113
Using the INTERSECT keyword.................................................................................................... 115
Data retrieval errors......................................................................................................................117
Inserting rows using the INSERT statement..................................................................................... 118
Inserting rows using the VALUES clause..................................................................................... 120
Inserting rows using a select-statement..................................................................................... 120
Inserting multiple rows using the blocked INSERT statement................................................... 121
Inserting data into tables with referential constraints................................................................121
Inserting values into an identity column......................................................................................122
Selecting inserted values............................................................................................................. 123
Inserting data from a remote database....................................................................................... 124
Changing data in a table using the UPDATE statement.................................................................... 124
Updating a table using a scalar-subselect...................................................................................125
Updating a table with rows from another table........................................................................... 126
Updating tables with referential constraints............................................................................... 126
Examples: UPDATE rules........................................................................................................ 127
Updating an identity column........................................................................................................ 127
Updating data as it is retrieved from a table................................................................................128
Removing rows from a table using the DELETE statement...............................................................129
Removing rows from tables with referential constraints............................................................ 130
Example: DELETE rules...........................................................................................................131
Removing rows from a table using the TRUNCATE statement......................................................... 132
Merging data ...................................................................................................................................... 133
Using subqueries................................................................................................................................134
Subqueries in SELECT statements............................................................................................... 134
Subqueries and search conditions......................................................................................... 135
Usage notes on subqueries.....................................................................................................136
Including subqueries in the WHERE or HAVING clause........................................................ 136
Correlated subqueries.................................................................................................................. 138
Correlated names and references.......................................................................................... 138
Example: Correlated subquery in a WHERE clause............................................................... 138
Example: Correlated subquery in a HAVING clause.............................................................. 139
Example: Correlated subquery in a select-list....................................................................... 140
Example: Correlated subquery in an UPDATE statement...................................................... 141
Example: Correlated subquery in a DELETE statement.........................................................141
Sort sequences and normalization in SQL.............................................................................................. 142
Sort sequence used with ORDER BY and row selection................................................................... 142
Sort sequence and ORDER BY......................................................................................................143
Sort sequence and row selection................................................................................................. 144
Sort sequence and views................................................................................................................... 145
Sort sequence and the CREATE INDEX statement............................................................................146
Sort sequence and constraints.......................................................................................................... 146
ICU sort sequence..............................................................................................................................146
Normalization..................................................................................................................................... 147
Data protection........................................................................................................................................ 148
Security for SQL objects.....................................................................................................................148
Authorization ID............................................................................................................................149
Views.............................................................................................................................................149
Column masks and row permissions........................................................................................... 149
Auditing.........................................................................................................................................149
Data integrity...................................................................................................................................... 150
Concurrency..................................................................................................................................150
Journaling..................................................................................................................................... 151
Commitment control.....................................................................................................................152
Savepoints.................................................................................................................................... 156
Atomic operations........................................................................................................................ 157
Constraints....................................................................................................................................159
Adding and using check constraints....................................................................................... 159
Save and restore functions...........................................................................................................160
Damage tolerance.........................................................................................................................161
Index recovery.............................................................................................................................. 161
Catalog integrity............................................................................................................................161
User auxiliary storage pool...........................................................................................................162
Independent auxiliary storage pool............................................................................................. 162
Routines................................................................................................................................................... 163
Stored procedures..............................................................................................................................163
Defining an external procedure.................................................................................................... 164
Defining an SQL procedure...........................................................................................................164
Defining a procedure with default parameters............................................................................ 169
Calling a stored procedure........................................................................................................... 170
Using the CALL statement where procedure definition exists...............................................171
Using the embedded CALL statement where no procedure definition exists....................... 172
Using the embedded CALL statement with an SQLDA...........................................................172
Using the dynamic CALL statement where no CREATE PROCEDURE exists......................... 173
Examples: CALL statements................................................................................................... 174
Returning result sets from stored procedures.............................................................................179
Example 1: Calling a stored procedure that returns a single result set................................ 180
Example 2: Calling a stored procedure that returns a result set from a nested procedure..181
Writing a program or SQL procedure to receive the result sets from a stored procedure..........186
Parameter passing conventions for stored procedures and user-defined functions................. 191
Indicator variables and stored procedures..................................................................................196
Returning a completion status to the calling program................................................................ 198
Passing parameters from Db2 to external procedures............................................................... 198
Parameter style SQL................................................................................................................198
Parameter style GENERAL...................................................................................................... 200
Parameter style GENERAL WITH NULLS................................................................................ 200
Parameter style DB2GENERAL............................................................................................... 200
Parameter style Java...............................................................................................................200
Dynamic compound statement..........................................................................................................201
Using user-defined functions.............................................................................................................201
UDF concepts................................................................................................................................202
Writing UDFs as SQL functions..................................................................................................... 204
Example: SQL scalar UDFs......................................................................................................204
Example: SQL table UDFs....................................................................................................... 205
Writing UDFs as external functions.............................................................................................. 205
Registering UDFs..................................................................................................................... 206
Passing arguments from Db2 to external functions...............................................................209
Table function considerations.................................................................................................214
Error processing for UDFs....................................................................................................... 214
Threads considerations...........................................................................................................215
Parallel processing.................................................................................................................. 215
Fenced or unfenced considerations....................................................................................... 215
Save and restore considerations............................................................................................ 216
Defining UDFs with default parameters....................................................................................... 216
Examples: UDF code.....................................................................................................................217
Example: Square of a number UDF.........................................................................................217
Example: Counter....................................................................................................................218
Example: Weather table function........................................................................................... 219
Using UDFs in SQL statements..................................................................................................... 224
Using parameter markers or the NULL values as function arguments.................................. 224
Using qualified function references........................................................................................225
Using unqualified function references....................................................................................225
Invoking UDFs with named arguments.................................................................................. 226
Summary of function references............................................................................................ 226
Triggers............................................................................................................................................... 228
SQL triggers...................................................................................................................................229
BEFORE SQL triggers...............................................................................................................229
AFTER SQL triggers................................................................................................................. 230
Multiple event SQL triggers.....................................................................................................231
INSTEAD OF SQL triggers....................................................................................................... 233
Handlers in SQL triggers......................................................................................................... 234
SQL trigger transition tables................................................................................................... 235
External triggers........................................................................................................................... 236
Varying length parameter lists for external procedures and functions ........................................... 236
Using the INCLUDE statement...........................................................................................................237
Array support in SQL procedures and functions............................................................................... 241
Debugging an SQL routine..................................................................................................................243
Obfuscating an SQL routine or SQL trigger........................................................................................244
Managing SQL and external routine objects......................................................................................245
Improving performance of procedures and functions...................................................................... 246
Improving implementation of procedures and functions............................................................247
Redesigning routines for performance........................................................................................ 249
Processing special data types................................................................................................................. 250
Large objects...................................................................................................................................... 250
Large object data types................................................................................................................ 250
Large object locators.................................................................................................................... 251
Example: Using a locator to work with a CLOB value.................................................................. 251
Example: LOBLOC in C............................................................................................................ 252
Example: LOBLOC in COBOL................................................................................................... 253
Indicator variables and LOB locators...........................................................................................254
LOB file reference variables......................................................................................................... 254
Example: Extracting CLOB data to a file.......................................................................................255
Example: LOBFILE in C............................................................................................................256
Example: LOBFILE in COBOL.................................................................................................. 256
Example: Inserting data into a CLOB column.............................................................................. 257
Displaying the layout of LOB columns..........................................................................................257
Journal entry layout of LOB columns........................................................................................... 258
User-defined distinct types................................................................................................................258
Defining a UDT.............................................................................................................................. 259
Example: Money......................................................................................................................259
Example: Resumé................................................................................................................... 260
Defining tables with UDTs.............................................................................................................260
Example: Sales........................................................................................................................ 260
Example: Application forms....................................................................................................260
Manipulating UDTs........................................................................................................................261
Examples: Using UDTs.................................................................................................................. 261
Example: Comparisons between UDTs and constants.......................................................... 261
Example: Casting between UDTs............................................................................................ 261
Example: Comparisons involving UDTs.................................................................................. 262
Example: Sourced UDFs involving UDTs.................................................................................263
Example: Assignments involving UDTs...................................................................................263
Example: Assignments in dynamic SQL................................................................................. 263
Example: Assignments involving different UDTs ................................................................... 264
Example: Using UDTs in UNION..............................................................................................264
Examples: Using UDTs, UDFs, and LOBs........................................................................................... 265
Example: Defining the UDT and UDFs ......................................................................................... 265
Example: Using the LOB function to populate the database.......................................................266
Example: Using UDFs to query instances of UDTs....................................................................... 266
Example: Using LOB locators to manipulate UDT instances....................................................... 267
Using DataLinks..................................................................................................................................267
Linking control levels in DataLinks............................................................................................... 268
NO LINK CONTROL..................................................................................................................268
FILE LINK CONTROL with FS permissions............................................................................. 268
FILE LINK CONTROL with DB permissions.............................................................................268
Working with DataLinks................................................................................................................ 269
HTTP functions overview.........................................................................................................................270
Working with JSON data..........................................................................................................................275
JSON concepts................................................................................................................................... 275
Using JSON_TABLE............................................................................................................................ 278
Generating JSON data........................................................................................................................282
Using SQL in different environments...................................................................................................... 288
Using a cursor.....................................................................................................................................288
Types of cursors............................................................................................................................288
Examples: Using a cursor............................................................................................................. 289
Step 1: Defining the cursor..................................................................................................... 291
Step 2: Opening the cursor..................................................................................................... 292
Step 3: Specifying what to do when the end of data is reached............................................292
Step 4: Retrieving a row using a cursor.................................................................................. 293
Step 5a: Updating the current row......................................................................................... 293
Step 5b: Deleting the current row.......................................................................................... 294
Step 6: Closing the cursor.......................................................................................................294
Example: Using the OFFSET clause with a cursor....................................................................... 295
Using the multiple-row FETCH statement................................................................................... 295
Multiple-row FETCH using a host structure array.................................................................. 296
Multiple-row FETCH using a row storage area....................................................................... 297
Unit of work and open cursors..................................................................................................... 299
Dynamic SQL applications................................................................................................................. 300
Running dynamic SQL statements............................................................................................... 300
CCSID of dynamic SQL statements.............................................................................................. 300
Processing non-SELECT statements............................................................................................ 301
Using the PREPARE and EXECUTE statements...................................................................... 301
Processing SELECT statements and using a descriptor.............................................................. 302
Fixed-list SELECT statements.................................................................................................302
Varying-list SELECT statements............................................................................................. 303
SQL descriptor areas...............................................................................................................303
SQLDA format..........................................................................................................................304
Example: A SELECT statement for allocating storage for SQLDA.......................................... 306
Example: A SELECT statement using an allocated SQL descriptor....................................... 310
Parameter markers................................................................................................................. 313
Using interactive SQL......................................................................................................................... 314
Starting interactive SQL................................................................................................................315
Using the statement entry function............................................................................................. 316
Prompting..................................................................................................................................... 316
Syntax checking...................................................................................................................... 318
Statement processing mode...................................................................................................318
Subqueries.............................................................................................................................. 318
CREATE TABLE prompting.......................................................................................................318
Entering DBCS data................................................................................................................. 318
Using the list selection function................................................................................................... 319
Example: Using the list selection function............................................................................. 319
Session services description........................................................................................................ 321
Exiting interactive SQL..................................................................................................................322
Using an existing SQL session...................................................................................................... 323
Recovering an SQL session...........................................................................................................323
Accessing remote databases with interactive SQL......................................................................323
Using the SQL statement processor.................................................................................................. 325
Execution of statements after errors occur................................................................................. 327
Commitment control in the SQL statement processor................................................................ 327
Source listing for the SQL statement processor.......................................................................... 327
Using the RUNSQL CL command....................................................................................................... 328
Distributed relational database function and SQL..................................................................................330
Db2 for i distributed relational database support.............................................................................331
Db2 for i distributed relational database example program.............................................................332
SQL package support......................................................................................................................... 333
Valid SQL statements in an SQL package.....................................................................................333
Considerations for creating an SQL package............................................................................... 334
CRTSQLPKG authorization...................................................................................................... 334
Creating a package on a database other than Db2 for i......................................................... 334
Target release (TGTRLS) parameter....................................................................................... 334
SQL statement size................................................................................................................. 335
Statements that do not require a package............................................................................. 335
Package object type................................................................................................................ 335
ILE programs and service programs.......................................................................................335
Package creation connection..................................................................................................335
Unit of work............................................................................................................................. 335
Creating packages locally....................................................................................................... 336
Labels...................................................................................................................................... 336
Consistency token................................................................................................................... 336
SQL and recursion................................................................................................................... 336
CCSID considerations for SQL........................................................................................................... 336
Connection management and activation groups.............................................................................. 337
Source code for PGM1.................................................................................................................. 337
Source code for PGM2.................................................................................................................. 338
Source code for PGM3.................................................................................................................. 338
Multiple connections to the same relational database............................................................... 340
Implicit connection management for the default activation group............................................ 341
Implicit connection management for nondefault activation groups...........................................341
Distributed support............................................................................................................................ 342
Determining the connection type.................................................................................................342
Connect and commitment control restrictions............................................................................344
Determining the connection status..............................................................................................345
Distributed unit of work connection considerations....................................................................346
Ending connections...................................................................................................................... 347
Distributed unit of work..................................................................................................................... 347
Managing distributed unit of work connections...........................................................................347
Checking the connection status................................................................................................... 349
Cursors and prepared statements............................................................................................... 349
DRDA stored procedure considerations............................................................................................ 350
WebSphere MQ with Db2........................................................................................................................ 350
WebSphere MQ messages................................................................................................................. 351
WebSphere MQ message handling.............................................................................................. 351
Db2 MQ services..................................................................................................................... 351
Db2 MQ policies...................................................................................................................... 352
Db2 MQ functions...............................................................................................................................352
Db2 MQ dependencies................................................................................................................. 354
Db2 MQ tables....................................................................................................................................354
Db2 MQ CCSID conversion.................................................................................................................362
WebSphere MQ transactions..............................................................................................................363
Basic messaging with WebSphere MQ.............................................................................................. 364
Sending messages with WebSphere MQ........................................................................................... 365
Retrieving messages with WebSphere MQ........................................................................................365
Application to application connectivity with WebSphere MQ...........................................................366
Db2 for i sample tables............................................................................................................................367
Sample tables.....................................................................................................................................367
Department table (DEPARTMENT)...............................................................................................367
DEPARTMENT..........................................................................................................................368
Employee table (EMPLOYEE)........................................................................................................369
EMPLOYEE............................................................................................................................... 371
Employee photo table (EMP_PHOTO).......................................................................................... 372
EMP_PHOTO............................................................................................................................ 372
Employee resumé table (EMP_RESUME)..................................................................................... 372
EMP_RESUME.......................................................................................................................... 373
Employee to project activity table (EMPPROJACT)..................................................................... 373
EMPPROJACT.......................................................................................................................... 374
Project table (PROJECT)...............................................................................................................376
PROJECT..................................................................................................................................377
Project activity table (PROJACT).................................................................................................. 378
PROJACT................................................................................................................................. 379
Activity table (ACT)....................................................................................................................... 381
ACT.......................................................................................................................................... 381
Class schedule table (CL_SCHED)................................................................................................382
CL_SCHED............................................................................................................................... 382
In-tray table (IN_TRAY)................................................................................................................382
IN_TRAY.................................................................................................................................. 383
Organization table (ORG)..............................................................................................................384
ORG..........................................................................................................................................384
Staff table (STAFF)........................................................................................................................384
STAFF.......................................................................................................................................385
Sales table (SALES)...................................................................................................................... 386
SALES...................................................................................................................................... 386
Sample XML tables.............................................................................................................................387
Product table (PRODUCT)............................................................................................................ 388
PRODUCT.................................................................................................................................388
Purchase order table (PURCHASEORDER).................................................................................. 389
PURCHASEORDER...................................................................................................................390
Customer table (CUSTOMER)....................................................................................................... 393
CUSTOMER.............................................................................................................................. 393
Catalog table (CATALOG).............................................................................................................. 395
CATALOG..................................................................................................................................395
Suppliers table (SUPPLIERS)....................................................................................................... 395
SUPPLIERS.............................................................................................................................. 395
Inventory table (INVENTORY)...................................................................................................... 395
INVENTORY............................................................................................................................. 396
Product Supplier table (PRODUCTSUPPLIER)............................................................................. 396
PRODUCTSUPPLIER................................................................................................................396
Db2 for i CL command descriptions...................................................................................................396
Notices..............................................................................................................399
Programming interface information........................................................................................................ 400
Trademarks.............................................................................................................................................. 400
Terms and conditions.............................................................................................................................. 401
SQL programming
The Db2® for IBM® i database provides a wide range of support for Structured Query Language (SQL).
The examples of SQL statements shown in this topic collection are based on the sample tables and
assume that the following statements are true:
• Each SQL example is shown on several lines, with each clause of the statement on a separate line.
• SQL keywords are highlighted.
• Table names provided in the sample tables use the schema CORPDATA. Table names that are not found
in the Sample Tables should use schemas you create.
• The SQL naming convention is used.
• The APOST and APOSTSQL precompiler options are assumed although they are not the default options
in COBOL. Character string literals within SQL and host language statements are delimited by single-
quotation marks (').
• A sort sequence of *HEX is used, unless otherwise noted.
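To illustrate these conventions, the following statement is written with one clause per line and qualifies the table name with the CORPDATA schema. It is a sketch only; the EMPNO, LASTNAME, WORKDEPT, and SALARY columns are taken from the EMPLOYEE sample table.

```sql
SELECT EMPNO, LASTNAME, SALARY
  FROM CORPDATA.EMPLOYEE
  WHERE WORKDEPT = 'D11'
  ORDER BY LASTNAME
```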
Whenever an example varies from these assumptions, the difference is stated.
Because this topic collection is for the application programmer, most of the examples are shown as if
they were written in an application program. However, many examples can be slightly changed and run
interactively by using interactive SQL. The syntax of an SQL statement, when using interactive SQL, differs
slightly from the format of the same statement when it is embedded in a program.
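As a sketch of that difference, a simple SELECT might appear as follows when embedded in a host program, where the colon-prefixed names are host variables declared elsewhere in the program:

```sql
-- Embedded form: the EXEC SQL prefix marks the statement for the
-- precompiler, and the INTO clause assigns the result to host variables.
EXEC SQL
  SELECT LASTNAME, SALARY
    INTO :lastname, :salary
    FROM CORPDATA.EMPLOYEE
    WHERE EMPNO = :empno;
```

Entered interactively, the same statement is written without the EXEC SQL prefix, the INTO clause, and the host variables.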
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
Related concepts
Embedded SQL programming
Related reference
Db2 for i sample tables
These sample tables are referred to and used in the SQL programming and the SQL reference topic
collections.
Db2 for i SQL reference
Auditing columns
You can define columns in a table that are maintained by the system to track information about changes to a row, such as the type of change and the user that made the change. For more information, see “Creating auditing columns” on page 24.
OLAP aggregates
Additional OLAP aggregate functions have been added. See “Using OLAP specifications” on page 84 for
some examples.
SQL concepts
Db2 for i SQL consists of several main parts, such as SQL runtime support, precompilers, and interactive
SQL.
• SQL runtime support
SQL runtime support parses and runs SQL statements. This support is part of the IBM i
licensed program, which allows applications that contain SQL statements to be run on systems where
the IBM DB2 Query Manager and SQL Development Kit for i licensed program is not installed.
• SQL precompilers
SQL precompilers support precompiling embedded SQL statements in host languages. The following
languages are supported:
– ILE C
– ILE C++
– ILE COBOL
– COBOL
– PL/I
– RPG III (part of RPG)
– ILE RPG
The SQL host language precompilers prepare an application program that contains SQL statements. The
host language compilers then compile the precompiled host source programs. For more information
about precompiling, see Preparing and running a program with SQL statements in the Embedded SQL
programming information. The precompiler support is part of the IBM DB2 Query Manager and SQL
Development Kit for i licensed program.
• SQL interactive interface
The SQL interactive interface allows you to create and run SQL statements. For more information about
interactive SQL, see “Using interactive SQL” on page 314. Interactive SQL is part of the IBM DB2 Query
Manager and SQL Development Kit for i licensed program.
• Run SQL Scripts
The Run SQL Scripts window in System i® Navigator allows you to create, edit, run, and troubleshoot
scripts of SQL statements.
• Run SQL Scripts in IBM i Access Client Solutions (ACS)
For information about ACS, see http://www-03.ibm.com/systems/power/software/i/access/solutions.html
• Run SQL Statements (RUNSQLSTM) CL command
The RUNSQLSTM command can be used to run a series of SQL statements that are stored in a source
file or a source stream file. For more information about the RUNSQLSTM command, see “Using the SQL
statement processor” on page 325.
• Run SQL (RUNSQL) CL command
The RUNSQL command can be used to run a single SQL statement. For more information about the
RUNSQL command, see “Using the RUNSQL CL command” on page 328.
• DB2 Query Manager
DB2 Query Manager provides a prompt-driven interactive interface that allows you to create data, add
data, maintain data, and run reports on the databases. Query Manager is part of the IBM DB2 Query
Manager and SQL Development Kit for i licensed program. For more information, see Query Manager Use.
Related concepts
Distributed database programming
With system naming, a qualified table name is written as schema/table or, alternatively, as
schema.table. With SQL naming, the qualified form is schema.table.
Related reference
Qualification of unqualified object names
DELETE        CONNECT
INSERT        DISCONNECT
MERGE         RELEASE
TRUNCATE      SET CONNECTION
UPDATE
Dynamic SQL statements        Embedded SQL host language statements
CALL
SQL statements can operate on objects that are created by SQL as well as externally described physical
files and single-format logical files. They do not refer to the interactive data definition utility (IDDU)
dictionary definition for program-described files. Program-described files appear as a table with only a
single column.
Related concepts
Data definition language
Data definition language (DDL) describes the portion of SQL that creates, alters, and deletes database
objects. These database objects include schemas, tables, views, sequences, catalogs, indexes, variables,
masks, permissions, and aliases.
Data manipulation language
Data manipulation language (DML) describes the portion of SQL that manipulates or controls data.
Related reference
Db2 for i SQL reference
Schemas
A schema provides a logical grouping of SQL objects. A schema consists of a library, a journal, a journal
receiver, a catalog, and, optionally, a data dictionary.
Tables, views, and system objects (such as programs) can be created, moved, or restored into any system
library. All system files can be created or moved into an SQL schema if the SQL schema does not contain a
data dictionary. If the SQL schema contains a data dictionary then:
• Source physical files or nonsource physical files with one member can be created, moved, or restored
into an SQL schema.
• Logical files cannot be placed in an SQL schema because they cannot be described in the data
dictionary.
You can create and own many schemas.
Catalogs
An SQL catalog is a collection of tables and views that describe tables, views, indexes, procedures,
functions, sequences, triggers, masks, permissions, variables, constraints, programs, packages, and XSR
objects.
This information is contained in a set of cross-reference tables in libraries QSYS and QSYS2. In each SQL
schema there is a set of views built over the catalog tables that contains information about the objects in
the schema.
A catalog is automatically created when you create a schema. You cannot drop or explicitly change the
catalog.
Related reference
Catalog
A partitioned table is a table whose data is contained in one or more local partitions (members).
Related concepts
Db2 Multisystem
Related reference
Data types
Creating and altering a materialized query table
A materialized query table is a table whose definition is based on the result of a query, and whose data is
in the form of precomputed results that are taken from the table or tables on which the materialized query
table definition is based.
Aliases
An alias is an alternate name for a table or view.
You can use an alias to refer to a table or view anywhere that the table or view itself can be referred
to. Additionally, aliases can refer to a specific member of a table. An alias can also be a three-part
name with an RDB name that refers to a remote system.
Related reference
Aliases
Views
A view appears like a table to an application program. However, a view contains no data and only logically
represents one or more tables over which it is created.
A view can contain all the columns and rows of the given tables or a subset of them. The columns can
be arranged differently in a view than they are in the tables from which they are taken. A view in SQL is a
special form of a nonkeyed logical file.
Related reference
Views
Indexes
An SQL index is a subset of the data in the columns of a table that are logically arranged in either
ascending or descending order.
Each index defines a set of columns or expressions as keys. These keys are used for ordering, grouping,
and joining. The index is used by the system for faster data retrieval.
Db2 for i supports two types of indexes: binary radix tree indexes and encoded vector indexes (EVIs).
Creating an index is optional. You can create any number of indexes. You can create or drop an index
at any time. The index is automatically maintained by the system. However, because the indexes are
maintained by the system, a large number of indexes can adversely affect the performance of the
applications that change the table.
Related concepts
Creating an index strategy
Related reference
CREATE INDEX
Constraints
A constraint is a rule enforced by the database manager to limit the values that can be inserted, deleted,
or updated in a table.
Db2 for i supports the following constraints:
• Unique constraints
Triggers
A trigger is a set of actions that runs automatically whenever a specified event occurs to a specified table
or view.
An event can be an insert, an update, a delete, or a read operation. A trigger can run either before or after
the event. Db2 for i supports SQL insert, update, and delete triggers and external triggers.
Related tasks
Triggering automatic events in your database
Stored procedures
A stored procedure is a program that can be called with the SQL CALL statement.
Db2 for i supports external procedures and SQL procedures. An external procedure can be any system
program, service program, or REXX procedure. It cannot be a System/36 program or procedure. An SQL
procedure is defined entirely in SQL and can contain SQL statements, including SQL control statements.
Related concepts
Stored procedures
A procedure (often called a stored procedure) is a program that can be called to perform operations. A
procedure can include both host language statements and SQL statements. Procedures in SQL provide the
same benefits as procedures in a host language.
Sequences
A sequence is a data area object that provides a quick and easy way of generating unique numbers.
You can use a sequence to replace an identity column or a user-generated numeric column. A sequence
has uses similar to these alternatives.
Related reference
Creating and using sequences
Sequences are similar to identity columns in that they both generate unique values. However, sequences
are objects that are independent of any tables. You can use sequences to generate values quickly and
easily.
Global variables
A global variable is a named variable that can be created, accessed, and modified using SQL.
A global variable can provide a unique value for a session. The variable can be used as part of any
expression in places such as a query, a create view, or an insert statement.
User-defined functions
A user-defined function is a program that can be called like any built-in function.
Db2 for i supports external functions, SQL functions, and sourced functions. An external function can be
any system ILE program or service program. An SQL function is defined entirely in SQL and can contain
SQL statements, including SQL control statements. A sourced function is built over any built-in or any
existing user-defined function. You can create a scalar function or a table function as either an SQL
function or an external function.
Related concepts
Using user-defined functions
In writing SQL applications, you can implement some actions or operations as a user-defined function
(UDF) or as a subroutine in your application. Although it might appear easier to implement new operations
as subroutines, you might want to consider the advantages of using a UDF instead.
User-defined types
A user-defined type is a data type that you can define independently of the data types that are provided by
the database management system.
Distinct data types map to built-in types. Array data types are defined using a built-in type as the element
type and a maximum cardinality value.
Related concepts
User-defined distinct types
A user-defined distinct type (UDT) is a mechanism to extend Db2 capabilities beyond the built-in data
types that are available.
XSR objects
An XSR object is one or more XML schema documents that have been registered in the XML schema
repository with the same name.
You can use an XSR object during validation of an XML document or during annotated XML schema
decomposition.
SQL packages
An SQL package is an object that contains the control structure produced when the SQL statements in an
application program are bound to a remote relational database management system (DBMS).
The DBMS uses the control structure to process SQL statements encountered while running the
application program.
SQL packages are created when a relational database name (RDB parameter) is specified on a Create SQL
(CRTSQLxxx) command and a program object is created. Packages can also be created with the Create
SQL Package (CRTSQLPKG) command.
Note: The xxx in this command refers to the host language indicators: CI for ILE C, CPPI for ILE C++, CBL
for COBOL, CBLI for ILE COBOL, PLI for PL/I, RPG for RPG/400®, and RPGI for ILE RPG.
With a nondistributed ILE Db2 for i program, you might need to manage the original source, the modules,
and the resulting program or service program. The following figure shows the objects involved and
the steps that happen during the precompile and compile processes for a nondistributed ILE Db2
for i program when OBJTYPE(*PGM) is specified on the precompile command. The precompiler writes the
precompiled source to a temporary source file member. This member is then compiled into a module that
is bound into a program.
With a distributed non-ILE Db2 for i program, you must manage the original source, the resulting program,
and the resulting package. The following figure shows the objects and the steps that occur during the
precompile and compile processes for a distributed non-ILE Db2 for i program. The precompiler writes
the precompiled source to a temporary source file member. This member is then compiled into a program.
After the program is created, an SQL package is created to hold the program.
With a distributed ILE Db2 for i program, you must manage the original source, module objects, the
resulting program or service program, and the resulting packages. An SQL package can be created for
each distributed module in a distributed ILE program or service program. The following figure shows
the objects and the steps that occur during the precompile and compile processes for a distributed ILE
Db2 for i program. The precompiler writes the precompiled source to a temporary source file member.
This member is then compiled into a module that is bound into a program. After the program is created, an SQL
package is created to hold the program.
Note: The access plans associated with the Db2 for i distributed program object are not created until the
program is run locally.
Related tasks
Preparing and running a program with SQL statements
Program
A program is an object that is created as a result of the compilation process for non-ILE compilations or as
a result of the bind process for ILE compilations.
An access plan is a set of internal structures and information that tells SQL how to run an embedded SQL
statement most effectively. It is created only when the program has been successfully created.
SQL package
An SQL package contains the access plans for a distributed SQL program.
An SQL package is an object that is created when:
• You successfully create a distributed SQL program by specifying the relational database (RDB)
parameter on the CREATE SQL (CRTSQLxxx) commands.
• You run the Create SQL Package (CRTSQLPKG) command.
When a distributed SQL program is created, the name of the SQL package and an internal consistency
token are saved in the program. They are used at run time to find the SQL package and to verify that
the SQL package is correct for this program. Because the name of the SQL package is critical for running
distributed SQL programs, an SQL package cannot be:
• Moved
• Renamed
• Duplicated
• Restored to a different library
Module
A module is an Integrated Language Environment® (ILE) object that you create by compiling source code
using the Create Module (CRTxxxMOD) command (or any of the Create Bound Program (CRTBNDxxx)
commands, where xxx is C, CBL, CPP, or RPG).
You can run a module only if you use the Create Program (CRTPGM) command to bind it into a program.
You typically bind several modules together, but you can bind a module by itself. Modules contain
information about the SQL statements; however, the SQL access plans are not created until the modules
are bound into either a program or service program.
Related reference
Create Program (CRTPGM) command
Service program
A service program is an Integrated Language Environment (ILE) object that provides a means of packaging
externally supported callable routines (functions or procedures) into a separate object.
Bound programs and other service programs can access these routines by resolving their imports to the
exports provided by a service program. The connections to these services are made when the calling
programs are created. This improves call performance to these routines without including the code in the
calling program.
There are several basic types of SQL statements. They are listed here according to their functions.
Related tasks
Getting started with SQL
Creating a schema
A schema provides a logical grouping of SQL objects. To create a schema, use the CREATE SCHEMA
statement.
A schema consists of a library, a journal, a journal receiver, a catalog, and optionally, a data dictionary.
Tables, views, and system objects (such as programs) can be created, moved, or restored into any system
libraries. All system files can be created or moved into an SQL schema if the SQL schema does not contain
a data dictionary. If the SQL schema contains a data dictionary then:
• Source physical files or nonsource physical files with one member can be created, moved, or restored
into an SQL schema.
• Logical files cannot be placed in an SQL schema because they cannot be described in the data
dictionary.
You can create and own many schemas.
You can create a schema using the CREATE SCHEMA statement. For example, create a schema called
DBTEMP:
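The statement would be simply:

   CREATE SCHEMA DBTEMP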
Related reference
CREATE SCHEMA
Creating a table
A table can be visualized as a two-dimensional arrangement of data that consists of rows and columns. To
create a table, use the CREATE TABLE statement.
The row is the horizontal part containing one or more columns. The column is the vertical part containing
one or more rows of data of one data type. All data for a column must be of the same type. A table in SQL
is a keyed or non-keyed physical file.
You can create a table using the CREATE TABLE statement. You provide a name for the table. If the table
name is not a valid system object name, you can use the optional FOR SYSTEM NAME clause to specify a
system name.
The definition includes the names and attributes of its columns. The definition can include other
attributes of the table, such as the primary key.
Example: Given that you have administrative authority, create a table named 'INVENTORY' with the
following columns:
• Part number: Integer between 1 and 9999, and must not be null
• Description: Character of length 0 to 24
• Quantity on hand: Integer between 0 and 100000
The primary key is PARTNO.
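A CREATE TABLE statement matching this description might look like the following sketch (the column
names are illustrative; SMALLINT covers the 1 to 9999 range):

   CREATE TABLE INVENTORY
         (PARTNO   SMALLINT     NOT NULL,
          DESCR    VARCHAR(24),
          QONHAND  INT,
          PRIMARY KEY(PARTNO))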
Related concepts
Data types
To make this key a unique key, replace the keyword PRIMARY with UNIQUE.
You can remove a constraint using the same ALTER TABLE statement:
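For example, using the sample EMPLOYEE table, a unique constraint could be added and later removed as
follows (the constraint name UNIQUE_ID is illustrative):

   ALTER TABLE CORPDATA.EMPLOYEE
         ADD CONSTRAINT UNIQUE_ID UNIQUE(EMPNO)

   ALTER TABLE CORPDATA.EMPLOYEE
         DROP CONSTRAINT UNIQUE_ID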
Adding and removing referential constraints
You can use the CREATE TABLE statement or the ALTER TABLE statement to add a referential constraint.
To remove a referential constraint, use the ALTER TABLE statement.
Constraints are rules that ensure that references from one table, a dependent table, to data in another
table, the parent table, are valid. You use referential constraints to ensure referential integrity.
With a referential constraint, non-null values of the foreign key are valid only if they also appear as values
of a parent key. When you define a referential constraint, you specify:
• A primary or unique key
• A foreign key
• Delete and update rules that specify the action taken with respect to dependent rows when the parent
row is deleted or updated.
Optionally, you can specify a name for the constraint. If a name is not specified, one is automatically
generated.
After a referential constraint is defined, the system enforces the constraint on every INSERT, DELETE,
and UPDATE operation performed through SQL or any other interface, including System i Navigator, CL
commands, utilities, or high-level language statements.
Related reference
CREATE TABLE
ALTER TABLE
In this case, the DEPARTMENT table has a column of unique department numbers (DEPTNO) which
functions as a primary key, and is a parent table in two constraint relationships:
REPORTS_TO_EXISTS
is a self-referencing constraint in which the DEPARTMENT table is both the parent and the dependent
in the same relationship. Every non-null value of ADMRDEPT must match a value of DEPTNO.
Check pending
Referential constraints and check constraints can be in a check pending state, where potential violations
of the constraints exist.
For referential constraints, a violation occurs when potential mismatches exist between parent and
foreign keys. For check constraints, a violation occurs when potential values exist in columns that are
limited by the check constraint. When the system determines that a constraint might have been violated
(such as after a restore operation), the constraint is marked as check pending. When this happens,
restrictions are placed on the use of tables involved in the constraint. For referential constraints, the
following restrictions apply:
• No input or output operations are allowed on the dependent file.
• Only read and insert operations are allowed on the parent file.
When a check constraint is in check pending, the following restrictions apply:
• Read operations are not allowed on the file.
• Insert and update operations are allowed and the constraint is enforced.
To get a constraint out of check pending, follow these steps:
1. Disable the relationship with the Change Physical File Constraint (CHGPFCST) CL command.
2. Correct the key (foreign, parent, or both) data for referential constraints or column data for check
constraints.
3. Enable the constraint again with the CHGPFCST CL command.
You can identify the rows that are in violation of the constraint with the Display Check Pending Constraint
(DSPCPCST) CL command or by looking in the Database Maintenance folder in IBM i Navigator.
Related concepts
Check pending status in referential constraints
Related tasks
Working with constraints that are in check pending status
Related reference
CREATE TABLE
If the specified table or view contains an identity column, you must specify the option INCLUDING
IDENTITY on the CREATE TABLE statement if you want the identity column to exist in the new table.
The default behavior for CREATE TABLE is EXCLUDING IDENTITY. There are similar options to include
the default value, the hidden attribute, and the row change timestamp attribute.
This materialized query table definition specifies, by using the DATA INITIALLY DEFERRED clause, that
the table is not populated at the time that it is created. REFRESH DEFERRED indicates that changes made to
TRANS are not reflected in STRANS. Additionally, this table is maintained by the user, enabling the user to
use ALTER, INSERT, DELETE, and UPDATE statements.
To populate the materialized query table or refresh the table after it has been populated, use the
REFRESH TABLE statement. This causes the query associated with the materialized query table to be
run and causes the table to be filled with the results of the query. To populate the STRANS table, run the
following statement:
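   REFRESH TABLE STRANS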
You can create a materialized query table from an existing base table as long as the result of the
select-statement provides a set of columns that match the columns in the existing table (same number of
columns and compatible column definitions). For example, create a table TRANSCOUNT. Then, change the
base table TRANSCOUNT into a materialized query table:
To create the table:
You can alter this table to be a materialized query table:
Finally, you can change a materialized query table back to a base table. For example:
In this example, the table TRANSCOUNT is not dropped, but it is no longer a materialized query table.
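The three statements described above might look like the following sketch (the grouping query over
TRANS is an assumption based on the surrounding description):

   CREATE TABLE TRANSCOUNT
         (ACCTID  INT,
          LOCID   INT,
          YEAR    DATE,
          CNT     INT)

   ALTER TABLE TRANSCOUNT
         ADD MATERIALIZED QUERY
            (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
               FROM TRANS
               GROUP BY ACCTID, LOCID, YEAR)
         DATA INITIALLY DEFERRED
         REFRESH DEFERRED
         MAINTAINED BY USER

   ALTER TABLE TRANSCOUNT
         DROP MATERIALIZED QUERY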
Related concepts
Tables, rows, and columns
A table is a two-dimensional arrangement of data that consists of rows and columns.
Then you would tie them together in a versioning relationship like this:
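Assuming a system-period temporal table DEPARTMENT and a history table named DEPARTMENT_HIST (the
history table name is illustrative), the statement might be:

   ALTER TABLE DEPARTMENT
         ADD VERSIONING USE HISTORY TABLE DEPARTMENT_HIST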
Once versioning is defined for the system-period temporal table, updates and deletes to it cause the
version of the row prior to the change to be inserted as a row in the history table. The special row begin
and row end timestamp columns are set by the system to indicate the time span when the data for the
historical row was the active data.
You can write a query that will automatically return data from both the system-period temporal table and
the history table.
For example, to see what the DEPARTMENT table looked like six months ago, issue the following query:
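A sketch of such a query, using the FOR SYSTEM_TIME period specification:

   SELECT *
     FROM DEPARTMENT
     FOR SYSTEM_TIME AS OF CURRENT TIMESTAMP - 6 MONTHS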
This table is created in QTEMP. To reference the table using a schema name, use either SESSION or
QTEMP. You can issue SELECT, INSERT, UPDATE, and DELETE statements against this table, the same as
any other table. You can drop this table by issuing the DROP TABLE statement:
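For a declared temporary table named ORDER_TEMP (an illustrative name), the statement would be:

   DROP TABLE SESSION.ORDER_TEMP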
Related reference
DECLARE GLOBAL TEMPORARY TABLE
You can also create this table as a global temporary table, which will create it in QTEMP. In this example,
different column names are provided for the new table. The table definition will pick up the default values
for its columns from the remote server.
The following restrictions apply to using a remote server as the source for the new table:
• The materialized query table clauses are not allowed.
• A column with a FIELDPROC cannot be listed in the select list.
• The copy options cannot be specified if the remote server is Db2 for LUW or Db2 for z/OS®.
Related reference
Creating a table using AS
You can create a table from the result of a SELECT statement. To create this type of table, use the CREATE
TABLE AS statement.
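The discussion that follows assumes an ORDERS table whose CHANGE_TS column was defined as a row
change timestamp, for example (the other columns are illustrative):

   CREATE TABLE ORDERS
         (ORDERNO   SMALLINT,
          CUSTNO    SMALLINT,
          CHANGE_TS TIMESTAMP NOT NULL
                    GENERATED ALWAYS FOR EACH ROW
                    ON UPDATE AS ROW CHANGE TIMESTAMP)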
When a row is inserted into the ORDERS table, the CHANGE_TS column for the row is set to the timestamp
of the insert operation. Any time a row in ORDERS is updated, the CHANGE_TS column for the row is
modified to reflect the timestamp of the update operation.
You can drop the row change timestamp attribute from a column:
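The alter might look like this:

   ALTER TABLE ORDERS
         ALTER COLUMN CHANGE_TS
         DROP ROW CHANGE TIMESTAMP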
The column CHANGE_TS remains as a TIMESTAMP column in the table, but the system no longer
automatically updates timestamp values for this column.
When you add a generated expression column to an existing table, defining the IMPLICITLY HIDDEN
attribute for the column as well can prevent existing applications that use SQL from requiring
modifications. Hidden columns are excluded when a SELECT *, an INSERT without a column list, or an
UPDATE using ROW determines its implicit list of columns. The only time a hidden column is included is
when it is explicitly mentioned by name.
These auditing columns can be especially useful when using a system-period temporal table. Since all
the historical rows are kept in the corresponding history table, an auditing column will complement the
history by recording information such as who was responsible for each change.
Related reference
CREATE TABLE
Using a system-period temporal table for tracking auditing information
This column is defined with a starting value of 500, incremented by 1 for every new row inserted, and will
recycle when the maximum value is reached. In this example, the maximum value for the identity column
is the maximum value for the data type. Because the data type is defined as SMALLINT, the range of
values that can be assigned to ORDERNO is from 500 to 32 767. When this column value reaches 32 767,
it will restart at 500 again. If 500 is still assigned to a column, and a unique key is specified on the identity
column, a duplicate key error is returned. The next insert operation will attempt to use 501. If you do not
have a unique key specified for the identity column, 500 is used again, regardless of how many times it
appears in the table.
For a larger range of values, specify the column to be data type INTEGER or even BIGINT. If you want the
value of the identity column to decrease, specify a negative value for the INCREMENT option. It is also
possible to specify the exact range of numbers by using MINVALUE and MAXVALUE.
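A definition matching this description might look like the following sketch (the CUSTNO column is
illustrative):

   CREATE TABLE ORDERS
         (ORDERNO SMALLINT NOT NULL
                  GENERATED ALWAYS AS IDENTITY
                  (START WITH 500
                   INCREMENT BY 1
                   CYCLE),
          CUSTNO  SMALLINT)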
You can modify the attributes of an existing identity column using the ALTER TABLE statement. For
example, you want to restart the identity column with a new value:
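For example, to restart the column at its original starting value (500 is assumed here):

   ALTER TABLE ORDERS
         ALTER COLUMN ORDERNO
         RESTART WITH 500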
The column ORDERNO remains as a SMALLINT column, but the identity attribute is dropped. The system
will no longer generate values for this column.
Related reference
Comparison of identity columns and sequences
While identity columns and sequences are similar in many ways, there are also differences.
Inserting values into an identity column
You can insert a value into an identity column or allow the system to insert a value for you.
Updating an identity column
You can update the value in an identity column to a specified value or have the system generate a new
value.
Using ROWID
Using ROWID is another way to have the system assign a unique value to a column. ROWID is similar to
identity columns. But rather than being an attribute of a numeric column, it is a separate data type.
To create a table similar to the identity column example:
This sequence is defined with a starting value of 500, incremented by 1 for every use, and recycles when
the maximum value is reached. In this example, the maximum value for the sequence is 1000. When this
value reaches 1000, it will restart at 500 again.
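A definition matching this description might be the following (the CACHE value of 24 is taken from
the discussion later in this section):

   CREATE SEQUENCE ORDER_SEQ
         START WITH 500
         INCREMENT BY 1
         MAXVALUE 1000
         CYCLE
         CACHE 24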
After this sequence is created, you can insert values into a column using the sequence. For example,
insert the next value of the sequence ORDER_SEQ into a table ORDERS with columns ORDERNO and
CUSTNO.
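The insert might be written as follows (the CUSTNO value 12 matches the results table shown later):

   INSERT INTO ORDERS (ORDERNO, CUSTNO)
         VALUES (NEXT VALUE FOR ORDER_SEQ, 12)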
SELECT *
FROM ORDERS
In this example, the next value for sequence ORDER_SEQ is inserted into the ORDERNO column. Issue the
INSERT statement again. Then run the SELECT statement.
You can also insert the previous value for the sequence ORDER by using the PREVIOUS VALUE expression.
You can use NEXT VALUE and PREVIOUS VALUE in the following expressions:
• Within the select-clause of a SELECT statement or SELECT INTO statement as long as the statement
does not contain a DISTINCT keyword, a GROUP BY clause, an ORDER BY clause, a UNION keyword, an
INTERSECT keyword, or an EXCEPT keyword
• Within a VALUES clause of an INSERT statement
• Within the select-clause of the fullselect of an INSERT statement
• Within the SET clause of a searched or positioned UPDATE statement, though NEXT VALUE cannot be
specified in the select-clause of the subselect of an expression in the SET clause
You can alter a sequence by issuing the ALTER SEQUENCE statement. Sequences can be altered in the
following ways:
• Restarting the sequence
• Changing the increment between future sequence values
• Setting or eliminating the minimum or maximum values
• Changing the number of cached sequence numbers
• Changing the attribute that determines whether the sequence can cycle or not
• Changing whether sequence numbers must be generated in order of request
For example, change the increment of values of sequence ORDER_SEQ from 1 to 5:
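The statement might be:

   ALTER SEQUENCE ORDER_SEQ
         INCREMENT BY 5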
After this change is complete, run the INSERT statement again and then the SELECT statement. Now the
table contains the following columns.
Table 4. Results for SELECT from table ORDERS
ORDERNO CUSTNO
500 12
501 12
528 12
Notice that the next value that the sequence uses is 528. At first glance, this number appears to be
incorrect. However, look at the events that led up to this assignment. First, when the sequence was
originally created, a cache value of 24 was assigned. The system assigns the first 24 values for this cache.
Next, the sequence was altered. When the ALTER SEQUENCE statement is issued, the system drops the
assigned values and starts up again with the next available value; in this case the original 24 that was
cached, plus the next increment, 5. If the original CREATE SEQUENCE statement did not have the CACHE
clause, the system automatically assigns a default cache value of 20. If that sequence was altered, then
the next available value is 524.
Related concepts
Sequences
A sequence is a data area object that provides a quick and easy way of generating unique numbers.
Specifying the field procedure
To name a field procedure for a column, use the FIELDPROC clause of the CREATE TABLE or ALTER TABLE
statement, followed by the name of the procedure and, optionally, a list of parameters.
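For example, a column could be defined with a field procedure as in the following sketch (the program
name MYLIB.FLDPROC1 and its literal list are illustrative):

   CREATE TABLE EMP
         (NAME   VARCHAR(30),
          SALARY DECIMAL(9,2)
                 FIELDPROC MYLIB.FLDPROC1 (1, 'A'))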
The optional parameter list that follows the procedure name is a list of constants, enclosed in
parentheses, called the literal list. The literal list is converted by Db2 into a data structure called the
field procedure parameter value list (FPPVL). The FPPVL is passed to the field procedure during the
field-definition operation. At that time, the procedure can modify it or return it unchanged. The output
form of the FPPVL is called the modified FPPVL. It is stored in the Db2 QSYS2.SYSFIELDS catalog table as
part of the column description. The modified FPPVL is passed again to the field procedure whenever that
procedure is invoked for field-encoding or field-decoding.
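As a sketch, a column definition that names a field procedure with a literal list might look like the following (the procedure name MYLIB.FP_ENCRYPT and the constant 'V1' are illustrative, not taken from the example programs later in this section):

```sql
CREATE TABLE CORPDATA.EMP_CONF (
  EMPNO CHAR(6) NOT NULL,
  SSN   CHAR(11) FIELDPROC MYLIB.FP_ENCRYPT ('V1')
)
```

Here 'V1' is the literal list that Db2 converts into the FPPVL and passes to the procedure at field-definition time.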
• If function code 0, then the location to place the encoded data. This parameter is output only.
• If function code 4, then the encoded form of the data. This parameter is input only.
Parameter 7
The SQLSTATE (character(5)). This parameter is input/output.
This parameter is set by Db2 to '00000' before calling the field procedure. It can be set by the field
procedure. While normally the SQLSTATE is not set by a field procedure, it can be used to signal an
error to the database as follows:
• If the field procedure detects an error, it should set the SQLSTATE to '38xxx', where xxx may be one
of several possible strings. For more information, see Db2 Messages and Codes.
Warnings are not supported for field procedures.
Parameter 8
The message text area (varchar(1000)). This parameter is input/output.
This argument is set by Db2 to the empty string before calling the field procedure. It is a
VARCHAR(1000) value that can be used by the field procedure to send message text back when
an SQLSTATE error is signaled by the field procedure. Message text is ignored by Db2 unless the
SQLSTATE parameter is set by the field procedure. The message text is assumed to be in the job
CCSID.
Parameter 9
A 128-byte structure containing additional information for the field procedure. This parameter is input
only.
This structure is set by Db2 before calling the field procedure.
• sqlfpNoMask - For field procedures that mask data, it indicates that the caller is a system function
that requires that the data be decoded without masking. For example, in some cases, RGZPFM and
ALTER TABLE may need to copy data. If the field procedure ignores this parameter and masks data
when these operations are performed, the column data will be lost. Hence, it is critical that a field
procedure that masks data properly handle this parameter.
Supported values are:
– '0' - Normal masking applied if needed.
– '1' - Do not mask, this decode operation is being performed on behalf of the database manager.
• sqlfpOperation - This parameter indicates whether the field procedure is being called for an
encode operation that is building a key value for a key positioning operation. Key positioning
operations such as RPG SETLL, RPG CHAIN, and COBOL START are common in applications that
use the non-SQL interface for data access. This parameter can be useful for field procedures that
mask data.
Supported values are:
– '0' - Not called for key positioning.
– '1' - Called for key positioning.
If a field procedure encounters a masked value on an encode request to build a key value, there are
two actions for the field procedure to take:
– Return SQLSTATE '09501' in parameter 7 which will cause the key positioning operation to fail
with a field procedure error.
– Return the masked value as the encoded value in parameter 6, so that the key positioning
operation can continue using the masked value. In this case, the key positioning operation may or
may not be successful. For example, an RPG SETLL request will likely be successful. However, an
RPG CHAIN operation will likely fail with a record not found error.
Include SQLFP in QSYSINC/H describes these parameters.
Table 6. sqlfpOptionalParameterValueDescriptor_T
Name                 Offset  Data Type                Description
Table 7. sqlfpParameterDescription_T
Name                 Offset  Data Type                Description
sqlfpAllocatedLength 16 unsigned 2-byte integer The allocated length specified for the
column on the CREATE TABLE or ALTER
TABLE statement.
Errors returned by a field procedure result in SQLCODE -681 (SQLSTATE '23507'), which is set in the SQL
communication area (SQLCA) and in the DB2_RETURNED_SQLCODE and RETURNED_SQLSTATE condition
area items of the SQL diagnostics area. The contents of parameters 7 and 8 are placed into the tokens,
in the SQLCA as field SQLERRMT and in the SQL diagnostics area condition area item MESSAGE_TEXT. If the
database manager is unable to invoke the field procedure, then SQLCODE -682 (SQLSTATE '57010') is
returned.
The following two field procedure programs demonstrate the same operations being done first in ILE RPG
and then in C. These programs illustrate how a field procedure program could use the IBM i Cryptographic
Services APIs to encrypt and decrypt fixed length character, variable length character, or character LOB
data.
ctl-opt main(SampleFieldProcProgram);
ctl-opt option(*srcstmt);
ctl-opt stgmdl(*inherit);
ctl-opt thread(*concurrent);
/if defined(*crtbndrpg)
ctl-opt actgrp(*caller);
/endif
/copy QSYSINC/QRPGLESRC,QC3CCI
/copy QSYSINC/QRPGLESRC,QUSEC
/copy QSYSINC/QRPGLESRC,SQL
/copy QSYSINC/QRPGLESRC,SQLFP
// QSYSINC/H QC3DTAEN
dcl-pr Qc3EncryptData extproc(*dclcase);
clearData pointer value;
clearDataLen int(10) const;
clearDataFormat char(8) const;
algorithmDesc likeds(QC3D0200); // Qc3_Format_ALGD0200
algorithmDescFormat char(8) const;
keyDesc likeds(T_key_descriptor0200) const;
keyDescFormat char(8) const;
cryptoServiceProvider char(1) const;
cryptoDeviceName char(10) const;
encryptedData pointer value;
lengthOfAreaForEncryptedData int(10) const;
lengthOfEncryptedDataReturned int(10);
errorCode likeds(QUSEC);
end-pr;
// QSYSINC/H QC3DTADE
dcl-pr Qc3DecryptData extproc(*dclcase);
encryptedData pointer value;
encryptedDataLen int(10) const;
algorithmDesc likeds(QC3D0200); // Qc3_Format_ALGD0200
algorithmDescFormat char(8) const;
keyDesc likeds(T_key_descriptor0200) const;
keyDescFormat char(8) const;
cryptoServiceProvider char(1) const;
cryptoDeviceName char(10) const;
clearData pointer value;
lengthOfAreaForClearData int(10) const;
lengthOfClearDataReturned int(10);
errorCode likeds(QUSEC);
end-pr;
dcl-c SQL_TYP_CLOB 408; // CLOB - varying length string
dcl-c SQL_TYP_NCLOB 409; // (SQL_TYP_CLOB + 1 for NULL)
// Other constants
dcl-c KEY_MGMT_SIZE 16;
dcl-c MAX_VARCHAR_SIZE 32767;
dcl-c MAX_CLOB_SIZE 100000;
// Main procedure
dcl-proc SampleFieldProcProgram;
dcl-pi *n EXTPGM('FP_EXV1RPG');
FuncCode uns(5) const;
OptionalParms likeds(T_optional);
DecodedDataType likeds(SQLFPD); // sqlfpParameterDescription_T
DecodedData likeds(T_DECODED_DATA);
EncodedDataType likeds(SQLFPD); // sqlfpParameterDescription_T
EncodedData likeds(T_ENCODED_DATA);
SqlState char(5);
Msgtext varchar(1000); // SQLFMT DS in QSYSINC/SQLFP is an RPG VARCHAR
end-pi;
ErrCode = *allx'00';
ErrCode.QUSBPRV = %size(QUSEC); // Bytes_provided
if FuncCode = 0; // encode
: Qc3_Any_CSP
: ' '
: Encrypted_Datap
: EncryptedDataLen
: RtnLen
: ErrCode);
RtnLen += KEY_MGMT_SIZE; // add in the Key Area size
else; // length is 0
RtnLen = 0;
endif;
// store the length (number of bytes that database needs to write)
// in either the 2 or 4 byte length field based on the encrypted
// data type
if OptionalParms.type_indicator = '0';
EncodedData.Varchar.len = RtnLen;
else;
EncodedData.Clob.len = RtnLen;
endif;
elseif FuncCode = 4; // decode
// Determine if the encoded data type is varchar or CLOB based on the
// optional parameter information that was saved at create time. Set
// pointers to the key management data, the user encrypted data, and
// the length of the data.
if OptionalParms.type_indicator = '0'; // varchar
KeyMgmtp = %addr(EncodedData.Varchar.keyManagementData);
Encrypted_Datap = %addr(EncodedData.Varchar.data);
EncryptedDataLen = EncodedData.Varchar.len;
else; // CLOB
KeyMgmtp = %addr(EncodedData.Clob.keyManagementData);
Encrypted_Datap = %addr(EncodedData.Clob.data);
EncryptedDataLen = EncodedData.Clob.len;
endif;
// Set the number of bytes to decrypt. Subtract
// off the bytes used for "key management".
EncryptedDataLen -= KEY_MGMT_SIZE;
if EncryptedDataLen > 0; // have data to decrypt
// Set the pointer to where the decrypted data should
// be placed.
select;
when DecodedDataType.SQLFST = SQL_TYP_VARCHAR
or DecodedDataType.SQLFST = SQL_TYP_NVARCHAR;
Decrypted_Datap = %addr(DecodedData.varchar.data);
when DecodedDataType.SQLFST = SQL_TYP_CLOB
or DecodedDataType.SQLFST = SQL_TYP_NCLOB;
decryptedDataLen = DecodedData.Clob.len;
decrypted_Datap = %addr(DecodedData.Clob.data);
other; // must be fixed Length
decrypted_Datap = %addr(DecodedData);
endsl;
end-proc SampleFieldProcProgram;
// procedure FieldCreatedOrAltered
dcl-proc FieldCreatedOrAltered;
dcl-pi *n extproc(*dclcase);
OptionalParms_p pointer value;
DecodedDataType likeds(SQLFPD); // sqlfpParameterDescription_T
EncodedDataType likeds(SQLFPD); // sqlfpParameterDescription_T
SqlState char(5);
Msgtext varchar(1000);
end-pi;
// Note that while optional parameters are not supported on input into
// this fieldproc, it will set information into the structure for
// usage by encode/decode operations. The length of this
// structure must be at least 8 bytes long, so the length is not
// reset.
select;
when DecodedDataType.SQLFST = SQL_TYP_CHAR // Fixed char
or DecodedDataType.SQLFST = SQL_TYP_NCHAR;
// set the encode data type to VarChar
EncodedDataType.SQLFST = SQL_TYP_VARCHAR;
// This example shows how the fieldproc pgm can modify the optional parm
// data area to store "constant" information to be used by the fieldproc
// on encode and decode operations.
// Indicate that the encode type is varchar.
outputOptionalParms.type_indicator = '0';
when DecodedDataType.SQLFST = SQL_TYP_VARCHAR // Varying char
or DecodedDataType.SQLFST = SQL_TYP_NVARCHAR;
// set the encode data type to VarChar
EncodedDataType.SQLFST = SQL_TYP_VARCHAR;
// This example shows how the fieldproc pgm can modify the optional parm
// data area to store "constant" information to be used by the fieldproc
// on encode and decode operations.
// Indicate that the encode type is varchar.
outputOptionalParms.type_indicator = '0';
when DecodedDataType.SQLFST = SQL_TYP_CLOB // CLOB
or DecodedDataType.SQLFST = SQL_TYP_NCLOB;
// set the encode data type to BLOB
EncodedDataType.SQLFST = SQL_TYP_BLOB;
// This example shows how the fieldproc pgm can modify the optional parm
// data area to store "constant" information to be used by the fieldproc
// on encode and decode operations.
// Indicate that the encode type is CLOB.
outputOptionalParms.type_indicator = '1';
other;
// this field proc does not handle any other data types
SqlState = '38002';
msgtext = errortext1;
return;
endsl;
// Determine the result length by adding one byte for the pad character counter and
// rounding the length up to a multiple of 16 -- the AES encryption algorithm
// will return the encrypted data in a multiple of 16.
// This example also shows how additional data can be stored by the fieldproc
// program in the encrypted data. An additional 16 bytes are added for use by
// the fieldproc program.
// Note that this fieldproc does not check for exceeding the maximum length for
// the data type. There may also be other conditions that are not handled by
// this sample fieldproc program.
EncodedDataType.SQLFL =
(%div(DecodedDataType.SQLFL + 16 : 16) * 16) + KEY_MGMT_SIZE; // characters
EncodedDataType.SQLFBL = EncodedDataType.SQLFL; // Bytes
// result is *HEX CCSID
EncodedDataType.SQLFC = 65535;
if DecodedDataType.SQLFST = SQL_TYP_CHAR
or DecodedDataType.SQLFST = SQL_TYP_NCHAR; // fixed length character
// need to set the allocated length for fixed length since the default value
// must fit in the allocated portion of varchar. Note that if the varchar or
// CLOB had a default value of something other than the empty string, the
// allocated length must be set appropriately but this fieldproc does not
// handle this situation.
EncodedDataType.SQLFAL = EncodedDataType.SQLFL;
endif;
end-proc FieldCreatedOrAltered;
// procedure getKeyMgmt
dcl-proc getKeyMgmt;
dcl-pi *n extproc(*dclcase);
type char(1) const;
keyMgmt char(KEY_MGMT_SIZE);
keyData char(KEY_MGMT_SIZE);
end-pi;
// This is a trivial key management idea and is used to demonstrate how additional
// information may be stored in the encoded data which is written to the table and
// be used to communicate between the encode and decode operations.
if type = 'E'; // encoding, set the current key
keyMgmt = 'KEYTYPE2';
keyData = '0123456789ABCDEG'; // end in G
elseif keyMgmt = 'KEYTYPE1'; // decoding, determine which key to use
keyData = '0123456789ABCDEF'; // end in F
elseif keyMgmt = 'KEYTYPE2';
keyData = '0123456789ABCDEG'; // end in G
endif;
end-proc getKeyMgmt;
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <QC3CCI.H>
#include <QUSEC.H>
#include <QC3DTAEN.H>
#include <QC3DTADE.H>
#include <SQL.h>
#include <SQLFP.h>
#define KEY_MGMT_SIZE 16
typedef _Packed struct
{
unsigned short int len;
KeyDesc0200->desc.Key_Type = Qc3_AES ;
KeyDesc0200->desc.Key_String_Len = 16;
KeyDesc0200->desc.Key_Format = Qc3_Bin_String;
if (*funccode == 0) // encode
{
// Address the data and get the actual length of the data.
switch(decodedDataType->sqlfpSqlType)
{
case SQL_TYP_VARCHAR:
case SQL_TYP_NVARCHAR:
{
// varchar data is passed with a 2 byte length followed
// by the data
VarCharStrp = argv[4];
DecryptedDataLen = VarCharStrp->len;
Decrypted_Datap = VarCharStrp->data;
break;
}
case SQL_TYP_CLOB:
case SQL_TYP_NCLOB:
{
// CLOB data is passed with a 4 byte length followed
// by the data
ClobStrp = argv[4];
DecryptedDataLen = ClobStrp->len;
Decrypted_Datap = ClobStrp->data;
break;
}
default:// must be fixed Length
{
// for fixed length, only the data is passed, get the
// length of the data from the data type parameter
DecryptedDataLen = decodedDataType->sqlfpByteLength;
Decrypted_Datap = argv[4];
break;
}
}
// Determine if the encoded data type is varchar or CLOB based on
// the optional parameter information that was saved at create time.
if (optionalParms->type_indicator == '0') // encoded data is varchar
{
VarCharEncodedStrp = argv[6];
Encrypted_Datap = VarCharEncodedStrp->data;
keyMgmtp = VarCharEncodedStrp->keyManagementData;
}
else // encoded data is CLOB
{
ClobEncodedStrp = argv[6];
Encrypted_Datap = ClobEncodedStrp->data;
keyMgmtp = ClobEncodedStrp->keyManagementData;
}
if (DecryptedDataLen >0) // have some data to encrypt.
{
// get the encrypt key
KeyMgmt('E', keyMgmtp, KeyDesc0200->key);
// Set the number of bytes available for encrypting. Subtracting
// off the bytes used for "key management".
EncryptedDataLen = encodedDataType->sqlfpByteLength - KEY_MGMT_SIZE;
// Encrypt the data
Qc3EncryptData(Decrypted_Datap,
&DecryptedDataLen,
Qc3_Data,
(char *)ALGD0200,
Qc3_Alg_Block_Cipher,
(char *)KeyDesc0200,
Qc3_Key_Parms,
&Qc3_Any_CSP_Flag,
" ",
Encrypted_Datap,
&EncryptedDataLen,
&RtnLen,
&ERRCODE);
RtnLen += KEY_MGMT_SIZE; // add in the Key Area size
}
else // length is 0
RtnLen = 0;
// store the length (number of bytes that database needs to write)
// in either the 2 or 4 byte length field based on the encrypted
// data type; tell the database manager how many characters of data are being returned
switch (decodedDataType->sqlfpSqlType)
{
case SQL_TYP_VARCHAR:
case SQL_TYP_NVARCHAR:
VarCharStrp->len = RtnLen;
break;
case SQL_TYP_CLOB:
case SQL_TYP_NCLOB:
ClobStrp->len = RtnLen;
break;
default:
// must be fixed Length and the full number of characters must be
// returned
break;
}
}
else // unsupported option -- error
memcpy(sqlstate, "38003",5);
// Note that while optional parameters are not supported on input into
// this fieldproc, it will set information into the structure for
// usage by encode/decode operations. The length of this
// structure must be at least 8 bytes long, so the length is not
// reset.
sqlfpFieldProcedureParameterList_T *inputOptionalParms = argv[2];
T_optional *outputOptionalParms = argv[2];
if (inputOptionalParms->sqlfpNumberOfOptionalParms != 0)
{
// this fieldproc does not handle input optional parameters
memcpy(sqlstate,"38001",5);
return;
}
switch(decodedDataType->sqlfpSqlType)
{
case SQL_TYP_CHAR: /* Fixed char */
case SQL_TYP_NCHAR:
// set the encode data type to VarChar
encodedDataType->sqlfpSqlType = SQL_TYP_VARCHAR;
// This example shows how the fieldproc pgm can modify the optional parm data area to
// store "constant" information to be used by the fieldproc on encode and decode
// operations.
// Indicate that the encode type is varchar.
outputOptionalParms->type_indicator = '0';
break;
case SQL_TYP_VARCHAR:
case SQL_TYP_NVARCHAR:
/* set the encode data type to VarChar */
encodedDataType->sqlfpSqlType = SQL_TYP_VARCHAR;
// This example shows how the fieldproc pgm can modify the optional parm data area to
// store "constant" information to be used by the fieldproc on encode and decode
// operations.
// Indicate that the encode type is varchar.
outputOptionalParms->type_indicator = '0';
break;
case SQL_TYP_CLOB:
case SQL_TYP_NCLOB:
/* set the data type to BLOB */
encodedDataType->sqlfpSqlType = SQL_TYP_BLOB;
// This example shows how the fieldproc pgm can modify the optional parm data area to
// store "constant" information to be used by the fieldproc on encode and decode
// operations.
// Determine the result length by adding one byte for the pad character counter and
// rounding the length up to a multiple of 16 -- the AES encryption algorithm
// will return the encrypted data in a multiple of 16.
// This example also shows how additional data can be stored by the fieldproc
// program in the encrypted data. An additional 16 bytes are added for use by
// the fieldproc program.
// Note that this fieldproc does not check for exceeding the maximum length for
// the data type. There may also be other conditions that are not handled by
// this sample fieldproc program.
encodedDataType->sqlfpLength =
(((decodedDataType->sqlfpLength + 16) /16) * 16) + KEY_MGMT_SIZE;
encodedDataType->sqlfpByteLength = encodedDataType->sqlfpLength;
// result is *HEX CCSID
encodedDataType->sqlfpCcsid = 65535;
if (decodedDataType->sqlfpSqlType == SQL_TYP_CHAR ||
decodedDataType->sqlfpSqlType == SQL_TYP_NCHAR) // fixed length character
{
// need to set the allocated length for fixed length since the default value
// must fit in the allocated portion of varchar. Note that if the varchar or
// CLOB had a default value of something other than the empty string, the
// allocated length must be set appropriately but this fieldproc does not
// handle this situation.
encodedDataType->sqlfpAllocatedLength = encodedDataType->sqlfpLength;
}
}
// This is a trivial key management idea and is used to demonstrate how additional
// information may be stored in the encoded data which is written to the table and
// be used to communicate between the encode and decode operations.
static void KeyMgmt(char type, char *keyMgmt, char *keyData)
{
if (type == 'E') // encoding, set the current key
{
memcpy((char *)keyMgmt, "KEYTYPE2 ", KEY_MGMT_SIZE);
memcpy(keyData, "0123456789ABCDEG", 16);
}
else // decoding, determine which key to use
if (memcmp(keyMgmt, "KEYTYPE1 ", KEY_MGMT_SIZE) == 0)
memcpy(keyData, "0123456789ABCDEF", 16);
else
if (memcmp(keyMgmt, "KEYTYPE2 ", KEY_MGMT_SIZE) == 0)
memcpy(keyData, "0123456789ABCDEG", 16);
}
• No SQL is allowed in a field procedure.
• The field procedure will not be called if the data to be encoded or decoded is the null value.
• On an encode operation, packed decimal and zoned decimal values will be converted to the preferred
sign prior to calling the user field procedure program.
• The field procedure must be deterministic. For SQE, caching of results will occur based on the QAQQINI
FIELDPROC_ENCODED_COMPARISON option.
• The field procedure must be parallel capable and capable of running in a multi-threaded environment.
For RPG, this means the THREAD control specification keyword must be specified. For COBOL, this
means the THREAD(SERIALIZE) process option must be specified.
• Must be capable of running in both a fenced and non-fenced environment.
• The program cannot be created with ACTGRP(*NEW). If the program is created with ACTGRP(*CALLER),
the program will run in the default activation group.
• Field procedure programs are expected to be short running. It is recommended that the field procedure
program avoid commitment control and native database operations.
• Create the program in the physical file's library.
• If an error occurs or is detected in the field procedure program, the field procedure program should set
the SQLSTATE and message text parameters. If the SQLSTATE parameter is not set to indicate an error,
the database assumes that the field procedure ran successfully. This might cause the user data to end up
in an inconsistent state.
Warning: Field procedures are a productive way both to provide application functions and to manage
information. However, field procedure programs could provide the ability for someone with devious
intentions to create a "Trojan horse"1 on your system. This is why it is important to restrict who has the
authority to alter a table. If you are managing object authority carefully, the typical user will not have
sufficient authority to add a field procedure program.
Index considerations
Indexes may be recovered at IPL time based on the RECOVER parameter of CRTPF, CRTLF, CHGPF,
or CHGLF commands. Indexes that are based on a column that has a field procedure have special
considerations.
Avoid using PASE (QSH) and JAVA within field procedures if the index keys are built over
expressions that contain columns with field procedures, or if the sparse index criteria reference a column
with an associated field procedure. If use of PASE or JAVA is required, consider changing the indexes to
RECOVER(*NO) so that they are not recovered during the IPL process but are instead recovered during an
open operation.
The following restrictions apply to keys for both SQL indexes and DDS keyed files.
• If the column has a field procedure, the index key must be a column. No expressions (derivations) are
allowed. This includes DDS keywords like SST and CONCAT.
• Sort sequence cannot be applied to the column.
• If the field procedure column is part of a foreign key, the corresponding parent key column must use the
same field procedure.
• The WHERE clause of the SQL Create Index or the Select/Omit criteria of a DDS logical file cannot
reference a column that has a field procedure.
See the SQL Reference for more details on indexes and field procedures.
1 In history, the Trojan horse was a large hollow wooden horse that was filled with Greek soldiers. After the
horse was introduced within the walls of Troy, the soldiers climbed out of the horse and fought the Trojans.
In the computer world, a program that hides destructive functions is often called a Trojan horse.
Debug considerations
There are some things to keep in mind when debugging field procedures.
Since field procedures can run in a secondary thread, it is recommended that debugging be done
using STRSRVJOB or the graphical debugger.
For natively run field procedures, the database manager uses the job default wait time. If the field
procedure does not return within that specified time, an error is returned. This default wait time may
need to be increased when debugging field procedures. For example, to change the default wait time to 5
minutes: CHGJOB DFTWAIT(300)
stored in the table and the original value in the row would be corrupted with encoded masked
data.
To prevent corruption, the field procedure must recognize on field-encoding that the data is masked.
Instead of encoding the data, the field procedure must return a warning SQLSTATE value of '09501' in
the seventh parameter.
- For an UPDATE operation, '09501' indicates to Db2 that the current value for the column should be
used.
- For an INSERT operation, '09501' indicates to Db2 that the default value should be used for the
associated column value.
Query Considerations: There are several considerations that apply to queries that reference a column of
a table that has a field procedure that masks data:
• Depending on how the optimizer implements a query, the same query may return different rows and
values for different users or environments. This will occur in cases where the optimizer must decode the
data in order to perform comparisons or evaluate expressions in a query. If masking is performed for
one user but not for another user, the result of the decode operation will be very different, so the
resulting rows and values can also be quite different for the two users.
For example, assume that a field procedure returns (decodes) data for user profile MAIN without
masking and returns (decodes) data for user profile QUSER with masking. An application contains the
following query:
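The query itself does not appear in this excerpt; based on the surrounding discussion (a CARDNUM column in the ORDERS table compared with the constant '112233'), it would resemble the following sketch:

```sql
SELECT *
  FROM ORDERS
  WHERE CARDNUM = '112233'
```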
By default, the optimizer will try to implement the search condition (logically) as follows:
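Logically, that encoded comparison can be pictured as follows, where FIELDPROC_ENCODE is an illustrative name standing in for the field-encoding call that Db2 performs internally:

```sql
SELECT *
  FROM ORDERS
  WHERE CARDNUM = FIELDPROC_ENCODE('112233')
```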
This is the best performing implementation since it allows Db2 to compare the encoded version of the
constant '112233' with the encoded version of the CARDNUM values that are stored in the ORDERS table.
Since the optimizer did not decode the data to perform the comparison, the query will return the same
rows for the MAIN and QUSER user profiles. The only difference will be that QUSER will see masked
values in the result rows for the CARDNUM column.
The implementation of queries that reference a field procedure column can be controlled by
the QAQQINI FIELDPROC_ENCODED_COMPARISON option. The default value for this option is
*ALLOW_EQUAL. This option enables the optimizer to implement the comparison using the encoded
values.
In the previous example, if the FIELDPROC_ENCODED_COMPARISON option was changed to *NONE,
the query would return different rows for the two users. When the value is *NONE, an equal comparison
will be implemented internally by Db2 as follows:
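Logically, that decoded comparison can be pictured as follows, where FIELDPROC_DECODE is an illustrative name standing in for the field-decoding call that Db2 performs internally for each row:

```sql
SELECT *
  FROM ORDERS
  WHERE FIELDPROC_DECODE(CARDNUM) = '112233'
```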
In this case, Db2 has to decode the CARDNUM values for every row in the table to compare against
the original constant '112233'. This means that the comparison for the MAIN user profile will compare
the decoded and unmasked card number values (112233, 332211, and so on) to '112233'. The MAIN user
profile will find the orders associated with the specified card number (112233). However, the query
will not return any rows for the QUSER user profile. That is because the comparison for QUSER will be
comparing the masked values of the card numbers (****33, ****11, and so on) with the constant '112233'.
For more information on how the QAQQINI FIELDPROC_ENCODED_COMPARISON option affects field
procedures see the Database Performance and Query Optimization topic in the Information Center.
• REFRESH of a materialized query table is affected by the QAQQINI
FIELDPROC_ENCODED_COMPARISON option. If the materialized query table references a column with
a field procedure that masks, it is imperative that the REFRESH of the MQT be issued by a user that is
allowed to see unmasked data. Otherwise, the results in the MQT will be incorrect for all users.
#include "string.h"
#include <QSYSINC/H/SQLFP>
void reverse(char *in, char *out, long length);
main(int argc, void *argv[])
{
short *funccode = argv[1];
sqlfpFieldProcedureParameterList_T *optionalParms = argv[2];
char *sqlstate = argv[7];
sqlfpMessageText_T *msgtext = argv[8];
int version;
sqlfpOptionalParameterValueDescriptor_T *optionalParmPtr;
sqlfpInformation_T *info = argv[9];
int masked;
if (optionalParms->sqlfpNumberOfOptionalParms != 1)
{
memcpy(sqlstate,"38001",5);
return;
}
optionalParmPtr = (void *)&(optionalParms->sqlfpParmList);
version = *((int *)&optionalParmPtr->sqlfpParmData);
/*******************************************************************/
/* CREATE CALL */
/*******************************************************************/
if (*funccode == 8) /* create time */
{
sqlfpParameterDescription_T *inDataType = argv[3];
sqlfpParameterDescription_T *outDataType = argv[5];
if (inDataType->sqlfpSqlType !=452 &&
inDataType->sqlfpSqlType !=453 ) /* only support fixed length char */
{
memcpy(sqlstate,"38002",5);
return;
}
/* an example of how an optional parm could be used */
/* In this case it is used to add version support if it is */
/* expected that the fieldproc program may support multiple */
/* versions */
if (version != 1) /* only support version 1 at this time */
{
memcpy(sqlstate,"38003",5);
return;
}
LABEL ON
TABLE CORPDATA.DEPARTMENT IS 'Department Structure Table'
LABEL ON
COLUMN CORPDATA.DEPARTMENT.ADMRDEPT IS 'Reports to Dept.'
After these statements are run, the table named DEPARTMENT displays the text description as
Department Structure Table and the column named ADMRDEPT displays the heading Reports to Dept.
The label for an object or a column cannot be more than 50 bytes and the label for a column heading
cannot be more than 60 bytes (blanks included). Here are some examples of LABEL ON statements for
column headings:
This LABEL ON statement provides column heading 1 and column heading 2:
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.EMPNO IS
'Employee Number'
This LABEL ON statement provides three levels of column headings for the SALARY column:
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.SALARY IS
'Yearly Salary (in dollars)'
This LABEL ON statement removes the column heading for the SALARY column:
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.SALARY IS ''
This LABEL ON statement provides a DBCS column heading with two levels specified:
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.SALARY IS
'<AABBCCDD> <EEFFGG>'
This LABEL ON statement provides the column text for the EDLEVEL column:
*...+....1....+....2....+....3....+....4....+....5....+....6..*
LABEL ON COLUMN CORPDATA.EMPLOYEE.EDLEVEL TEXT IS
'Number of years of formal education'
Related reference
LABEL
Describing an SQL object using COMMENT ON
After you create an SQL object, such as a table or view, you can provide object information for future
reference using the COMMENT ON statement.
The information can be the purpose of the object, who uses it, and anything unusual or special about it.
You can also include similar information about each column of a table or view. A comment is especially
useful if your names do not clearly indicate the contents of the columns or objects. In that case, use a
comment to describe the specific contents of the column or objects. Usually, your comment must not be
more than 2000 characters. If the object already contains a comment, the old comment is replaced by the
new one.
An example of using COMMENT ON follows:
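The COMMENT ON statement itself is missing from this excerpt; a representative statement (the comment text is illustrative) is:

```sql
COMMENT ON TABLE CORPDATA.EMPLOYEE IS
  'Each row in this table represents one employee of the company'
```

The SELECT statement that follows shows how to retrieve such a comment from the catalog.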
SELECT LONG_COMMENT
FROM CORPDATA.SYSTABLES
WHERE NAME = 'EMPLOYEE'
Related reference
COMMENT
Adding a column
When you add a new column to a table, the column is initialized with its default value for all existing rows.
If NOT NULL is specified, a default value must also be specified.
You can add a column to a table using the ADD COLUMN clause of the SQL ALTER TABLE statement.
The altered table may consist of up to 8000 columns. The sum of the byte counts of the columns must not
be greater than 32766 or, if a VARCHAR or VARGRAPHIC column is specified, 32740. If a LOB column is
specified, the sum of record data byte counts of the columns must not be greater than 15 728 640.
Changing a column
You can change a column definition in a table using the ALTER COLUMN clause of the ALTER TABLE
statement.
When you change the data type of an existing column, the old and new attributes must be compatible. You
can always change a character, graphic, or binary column from fixed length to varying length or LOB; or
from varying length or LOB to fixed length.
When you convert to a data type with a longer length, data is padded with the appropriate pad character.
When you convert to a data type with a shorter length, data might be lost because of truncation. An
inquiry message prompts you to confirm the request.
If you have a column that does not allow the null value and you want to change it to now allow the null
value, use the DROP NOT NULL clause. If you have a column that allows the null value and you want to
prevent the use of null values, use the SET NOT NULL clause. If any of the existing values in that column
are the null value, the ALTER TABLE will not be performed and an SQLCODE of -190 will result.
Related reference
Allowable conversions of data types
When you change the data type of an existing column, the old and new attributes must be compatible.
Related information
ALTER TABLE
SQL programming 55
Table 8. Allowable conversions (continued)
From data type To data type
Character DBCS-open
Character UCS-2 or UTF-16 graphic
DBCS-open Character
DBCS-open UCS-2 or UTF-16 graphic
DBCS-either Character
DBCS-either DBCS-open
DBCS-either UCS-2 or UTF-16 graphic
DBCS-only DBCS-open
DBCS-only DBCS graphic
DBCS-only UCS-2 or UTF-16 graphic
DBCS graphic UCS-2 or UTF-16 graphic
UCS-2 or UTF-16 graphic Character
UCS-2 or UTF-16 graphic DBCS-open
UCS-2 or UTF-16 graphic DBCS graphic
distinct type source type
source type distinct type
When you change an existing column, only the attributes that you specify are changed. All other attributes
remain unchanged. For example, you have a table with the following table definition:
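The original definition is not reproduced here; a definition consistent with the attributes described below might look like this (the column data types and lengths are assumptions):

```sql
CREATE TABLE EX1 (COL1 INT,
                  COL2 VARCHAR(20) ALLOCATE(10) CCSID 937,
                  COL3 VARCHAR(20) ALLOCATE(10) NOT NULL)
```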
After you run the following ALTER TABLE statement, COL2 still has an allocated length of 10 and CCSID
937, and COL3 still has an allocated length of 10.
ALTER TABLE EX1 ALTER COLUMN COL2 SET DATA TYPE VARCHAR(30)
ALTER COLUMN COL3 DROP NOT NULL
Related reference
Changing a column
You can change a column definition in a table using the ALTER COLUMN clause of the ALTER TABLE
statement.
Deleting a column
You can delete a column using the DROP COLUMN clause of the ALTER TABLE statement.
Dropping a column deletes that column from the table definition. If CASCADE is specified, any views,
indexes, and constraints dependent on that column will also be dropped. If RESTRICT is specified, and
any views, indexes, or constraints are dependent on the column, the column will not be dropped and an
SQLCODE of -196 will be issued.
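For example, using the EX1 table described earlier in this topic, the following statement drops COL3 and cascades the drop to any dependent views, indexes, and constraints:

```sql
ALTER TABLE EX1 DROP COLUMN COL3 CASCADE
```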
Perhaps over time, you have updated the column names to be more descriptive, changed the DESCR
column to be a longer Unicode column, and added a timestamp column for when the row was last
updated. The following statement reflects all of these changes and can be executed against any prior
version of the table, as long as the column names can be matched to the prior column names and the data
types are compatible.
NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP,
PRIMARY KEY(PARTNO))
Partitioned tables can be modified using CREATE OR REPLACE TABLE. The following example
demonstrates splitting a single partition into multiple partitions.
Suppose your original table was defined to have 3 partitions.
To break the second partition into 3 pieces, modify the original CREATE TABLE statement to redefine the
partitions.
Now the table will have 5 partitions with the data spread among them according to the new definition.
This example uses the default of ON REPLACE PRESERVE ALL ROWS. That means that all data for all rows
is guaranteed to be kept. If data from an existing partition doesn't fit in any new partition, the statement
fails. To remove a partition and the data from that partition, omit the partition definition from the CREATE
OR REPLACE TABLE statement and use ON REPLACE PRESERVE ROWS. This will preserve all the data that
can be assigned to the remaining partitions and discard any rows that no longer have a defined partition.
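As a sketch of the technique (the table name, partitioning column, and partition boundaries are illustrative; see the CREATE TABLE statement for the exact partitioning syntax), a three-partition table could be replaced with a five-partition version by rewriting the partition list:

```sql
CREATE OR REPLACE TABLE PARTS
  (PARTNO INT NOT NULL)
  PARTITION BY RANGE (PARTNO)
   (STARTING FROM (1)   ENDING AT (100),
    STARTING FROM (101) ENDING AT (140),
    STARTING FROM (141) ENDING AT (170),
    STARTING FROM (171) ENDING AT (200),
    STARTING FROM (201) ENDING AT (300))
```

The default ON REPLACE PRESERVE ALL ROWS applies, so existing rows are redistributed among the new partitions.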
When alias MYLIB.MYMBR2_ALIAS is specified on the following insert statement, the values are inserted
into member MBR2 in MYLIB.MYFILE:
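For example (the inserted values are illustrative):

```sql
INSERT INTO MYLIB.MYMBR2_ALIAS
   VALUES ('ABC', 6)
```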
Alias names can also be specified on DDL statements. Assume that MYLIB.MYALIAS is an alias for table
MYLIB.MYTABLE. The following DROP statement drops table MYLIB.MYTABLE:
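The statement referred to has this form:

```sql
DROP TABLE MYLIB.MYALIAS
```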
Related reference
CREATE ALIAS
Since the view name, EMP_MANAGERS, is too long for a system object name, the FOR SYSTEM NAME
clause can be used to provide the system name. Without adding this clause, a name like EMP_M00001
will be generated for the system object.
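The view under discussion can be sketched as follows; the manager-selection logic is an assumption based on the view's name and the result shown below:

```sql
CREATE VIEW CORPDATA.EMP_MANAGERS
   FOR SYSTEM NAME EMP_MGR AS
   SELECT LASTNAME, WORKDEPT FROM CORPDATA.EMPLOYEE
   WHERE EMPNO IN (SELECT MGRNO FROM CORPDATA.DEPARTMENT)
```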
After you create the view, you can use it in SQL statements just like a table. You can also change
the data in the base table through the view. The following SELECT statement displays the contents of
EMP_MANAGERS:
SELECT *
FROM CORPDATA.EMP_MANAGERS
LASTNAME WORKDEPT
THOMPSON B01
KWAN C01
GEYER E01
STERN D11
PULASKI D21
HENDERSON E11
SPENSER E21
If the select list contains elements other than columns such as expressions, functions, constants, or
special registers, and the AS clause was not used to name the columns, a column list must be specified
for the view. In the following example, the columns of the view are LASTNAME and YEARSOFSERVICE.
Because the results of querying this view change as the current year changes, they are not included here.
You can also define the previous view by using the AS clause in the select list to name the columns in the
view. For example:
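A sketch of such a view, assuming the years of service are derived from the HIREDATE column:

```sql
CREATE VIEW CORPDATA.EMP_YEARSOFSERVICE AS
   SELECT LASTNAME,
          YEAR(CURRENT DATE - HIREDATE) AS YEARSOFSERVICE
   FROM CORPDATA.EMPLOYEE
```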
Using the UNION keyword, you can combine two or more subselects to form a single view. For example:
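A sketch of such a view, assuming two compatible subselects (the tables and conditions are illustrative):

```sql
CREATE VIEW D11_EMPS_PROJECTS AS
  (SELECT EMPNO FROM CORPDATA.EMPLOYEE WHERE WORKDEPT = 'D11'
   UNION
   SELECT EMPNO FROM CORPDATA.EMPPROJACT WHERE PROJNO = 'MA2112')
```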
Views are created with the sort sequence in effect at the time the CREATE VIEW statement is run. The
sort sequence applies to all character, or UCS-2 or UTF-16 graphic comparisons in the CREATE VIEW
statement subselect.
You can also create views using the WITH CHECK OPTION clause to specify the level of checking when
data is inserted or updated through the view.
Related concepts
Retrieving data using the SELECT statement
The SELECT statement tailors your query to gather data. You can use the SELECT statement to retrieve a
specific row or retrieve data in a specific way.
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
Using the UNION keyword to combine subselects
Using the UNION keyword, you can combine two or more subselects to form a fullselect.
CREATE VIEW
Because no WITH CHECK OPTION is specified, the following INSERT statement is successful even though
the value being inserted does not meet the search condition of the view.
Create another view over V1, specifying the WITH CASCADED CHECK OPTION clause:
The following INSERT statement fails because it produces a row that does not conform to the definition of
V2:
The following INSERT statement fails only because V3 is dependent on V2, and V2 has a WITH
CASCADED CHECK OPTION.
However, the following INSERT statement is successful because it conforms to the definition of V2.
Because V3 does not have a WITH CASCADED CHECK OPTION, it does not matter that the statement
does not conform to the definition of V3.
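The statements in this discussion follow a pattern like the following sketch; the table, column, and search conditions are assumptions:

```sql
CREATE VIEW V1 AS SELECT COL1 FROM T1 WHERE COL1 > 10;
CREATE VIEW V2 AS SELECT COL1 FROM V1 WITH CASCADED CHECK OPTION;
CREATE VIEW V3 AS SELECT COL1 FROM V2 WHERE COL1 < 100;

-- Fails: 5 does not satisfy V1's search condition (COL1 > 10),
-- which the CASCADED CHECK OPTION on V2 enforces.
INSERT INTO V2 VALUES (5);

-- Fails for the same reason, because V3 is dependent on V2.
INSERT INTO V3 VALUES (5);

-- Succeeds: 200 satisfies V1's condition; V3's own condition
-- (COL1 < 100) is not checked because V3 has no check option.
INSERT INTO V3 VALUES (200);
```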
WITH LOCAL CHECK OPTION
The WITH LOCAL CHECK OPTION clause is identical to the WITH CASCADED CHECK OPTION clause
except that you can update a row so that it can no longer be retrieved through the view. This can happen
only when the view is directly or indirectly dependent on a view that was defined with no WITH CHECK
OPTION clause.
For example, consider the same updatable view used in the previous example:
Create a second view over V1, this time specifying WITH LOCAL CHECK OPTION:
The same INSERT statement that failed in the previous CASCADED CHECK OPTION example succeeds
now because V2 does not have any search conditions, and the search conditions of V1 do not need to be
checked since V1 does not specify a check option.
The following INSERT is successful again because the search condition on V1 is not checked due to
the WITH LOCAL CHECK OPTION on V2, versus the WITH CASCADED CHECK OPTION in the previous
example.
The difference between LOCAL and CASCADED CHECK OPTION lies in how many of the dependent views'
search conditions are checked when a row is inserted or updated.
• WITH LOCAL CHECK OPTION specifies that the search conditions of only those dependent views that
have the WITH LOCAL CHECK OPTION or WITH CASCADED CHECK OPTION are checked when a row is
inserted or updated.
• WITH CASCADED CHECK OPTION specifies that the search conditions of all dependent views are
checked when a row is inserted or updated.
Creating indexes
You can use indexes to sort and select data. In addition, indexes help the system retrieve data faster for
better query performance.
Use the CREATE INDEX statement to create indexes. The following example creates an index over the
column LASTNAME in the CORPDATA.EMPLOYEE table:
You can also create an index that does not exactly match the data for a column in a table. For example,
you can create an index that uses the uppercase version of an employee name:
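These two statements can be written as follows; the index names are illustrative:

```sql
CREATE INDEX CORPDATA.INX1 ON CORPDATA.EMPLOYEE (LASTNAME);

CREATE INDEX CORPDATA.INX2 ON CORPDATA.EMPLOYEE (UPPER(LASTNAME));
```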
Most expressions allowed by SQL can be used in the definition of the key columns.
You can create any number of indexes. However, because the indexes are maintained by the system, a
large number of indexes can adversely affect performance. One type of index, the encoded vector index
(EVI), allows for faster scans that can be more easily processed in parallel.
If an index is created that has exactly the same attributes as an existing index, the new index shares the
existing index's binary tree. Otherwise, another binary tree is created. If the attributes of the new index
are exactly the same as another index, except that the new index has fewer columns, another binary tree
is still created. It is still created because the extra columns prevent the index from being used by cursors
or UPDATE statements that update those extra columns.
Indexes are created with the sort sequence in effect at the time the CREATE INDEX statement is run. The
sort sequence applies to all SBCS character fields, or UCS-2 or UTF-16 graphic fields of the index.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Creating an index strategy
Related reference
CREATE INDEX
This variable will have its initial value set based on the result of invoking a function called CLASS_FUNC.
This function is assumed to assign a class value such as administrator or clerk based on the USER special
register value.
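Such a variable can be created like this; the schema name and the numeric type are assumptions:

```sql
CREATE VARIABLE MYSCHEMA.USER_CLASS INT
   DEFAULT (CLASS_FUNC(USER))
```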
A global variable is instantiated for a session the first time it is referenced. Once it is set, it will maintain
its value unless explicitly changed within the session.
A global variable can be used in a query to determine what results will be returned. In the following
example, a list of all employees from department A00 are listed. Only a session that has a global variable
with a USER_CLASS value of 1 will see the salaries for these employees.
SELECT EMPNO, LASTNAME, CASE WHEN USER_CLASS = 1 THEN SALARY ELSE NULL END
FROM EMPLOYEE
WHERE WORKDEPT = 'A00'
Global variables can be used in any context where an expression is allowed. Unlike a host variable, a
global variable can be used in a CREATE VIEW statement.
The sequence will be created if it does not already exist. If it does exist, the privileges from the existing
sequence will be transferred to the new sequence.
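The statement form under discussion is CREATE OR REPLACE SEQUENCE; for example (the sequence name and attributes are illustrative):

```sql
CREATE OR REPLACE SEQUENCE MYSCHEMA.ORDER_SEQ
   START WITH 1 INCREMENT BY 1
```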
SELECT *
FROM CORPDATA.SYSTABLES
WHERE TABLE_NAME = 'DEPARTMENT'
SELECT *
FROM CORPDATA.SYSCOLUMNS
WHERE TABLE_NAME = 'DEPARTMENT'
The result of the previous example statement is a row of information for each column in the table.
For specific information about each column, specify a select-statement like this:
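Based on the output columns shown below, the select-statement can be written as:

```sql
SELECT COLUMN_NAME, TABLE_NAME, DATA_TYPE, LENGTH, HAS_DEFAULT
   FROM CORPDATA.SYSCOLUMNS
   WHERE TABLE_NAME = 'DEPARTMENT'
```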
In addition to the column name for each column, the select-statement shows:
• The name of the table that contains the column
• The data type of the column
• The length attribute of the column
• If the column allows default values
The result looks like this.
COLUMN_NAME TABLE_NAME DATA_TYPE LENGTH HAS_DEFAULT
DEPTNO DEPARTMENT CHAR 3 N
DEPTNAME DEPARTMENT VARCHAR 29 N
MGRNO DEPARTMENT CHAR 6 Y
ADMRDEPT DEPARTMENT CHAR 3 N
Related reference
DROP
Comparisons might not be case sensitive if a shared-weight sort sequence is used where uppercase
and lowercase characters are treated as the same characters.
A SELECT statement can include the following:
1. The name of each column you want to include in the result.
2. The name of the table or view that contains the data.
3. A search condition to identify the rows that contain the information you want.
4. The name of each column used to group your data.
5. A search condition that uniquely identifies a group that contains the information you want.
6. The order of the results.
7. An offset into the result set to enable skipping a number of rows.
8. The number of rows to return.
A SELECT statement looks like this:
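In outline, the clauses appear in this order (this is a sketch, not the full syntax diagram; only SELECT and FROM are required):

```sql
SELECT column-names
   FROM table-or-view-name
   WHERE search-condition
   GROUP BY column-names
   HAVING search-condition
   ORDER BY column-names
   OFFSET number-of-rows
   FETCH FIRST n ROWS ONLY
```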
The SELECT and FROM clauses must be specified. The other clauses are optional.
With the SELECT clause, you specify the name of each column you want to retrieve. For example:
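A sketch, with an illustrative choice of columns:

```sql
SELECT EMPNO, LASTNAME, WORKDEPT
   FROM CORPDATA.EMPLOYEE
```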
You can specify that only one column be retrieved, or as many as 8000 columns. The value of each
column you name is retrieved in the order specified in the SELECT clause.
If you want to retrieve all columns (in the same order as they appear in the table's definition), use an
asterisk (*) instead of naming the columns:
SELECT *
The FROM clause specifies the table that you want to select data from. You can select columns from more
than one table. When issuing a SELECT, you must specify a FROM clause. Issue the following statement:
SELECT *
FROM EMPLOYEE
The result is all of the columns and rows from the table EMPLOYEE.
The SELECT list can also contain expressions, including constants, special registers, and scalar fullselects.
An AS clause can be used to give the resulting column a name. For example, issue the following
statement:
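A sketch of such a statement; the expression and the result column name are illustrative:

```sql
SELECT LASTNAME, SALARY, SALARY * 0.05 AS RAISE
   FROM CORPDATA.EMPLOYEE
```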
The result of this statement follows.
In this case, the search condition consists of one predicate: WORKDEPT = 'C01'.
To further illustrate WHERE, put it into a SELECT statement. Assume that each department listed in the
CORPDATA.DEPARTMENT table has a unique department number. You want to retrieve the department
name and manager number from the CORPDATA.DEPARTMENT table for department C01. Issue the
following statement:
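That statement can be written as:

```sql
SELECT DEPTNAME, MGRNO
   FROM CORPDATA.DEPARTMENT
   WHERE DEPTNO = 'C01'
```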
If the search condition contains character, or UCS-2 or UTF-16 graphic column predicates, the sort
sequence that is in effect when the query is run is applied to those predicates. If a sort sequence is not
being used, character constants must be specified in uppercase or lowercase to match the column or
expression they are being compared to.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
Defining complex search conditions
In addition to the basic comparison predicates, such as = and >, a search condition can contain any of
these predicates: BETWEEN, IN, EXISTS, IS NULL, and LIKE.
Multiple search conditions within a WHERE clause
When the order of evaluation is not specified by parentheses, the expression is evaluated in the
following order:
1. Prefix operators
2. Exponentiation
3. Multiplication, division, and concatenation
4. Addition and subtraction
Operators on the same precedence level are applied from left to right.
• A constant specifies a literal value for the expression. For example:
SALARY names a column that is defined as a 9-digit packed decimal value (DECIMAL(9,2)). It is
compared to the numeric constant 40000.
• A host variable identifies a variable in an application program. For example:
• A special register identifies a special value defined by the database manager. For example:
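Sketches of the three kinds of expressions above (the host-variable name and the direction of each comparison are illustrative):

```sql
WHERE SALARY < 40000              -- constant

WHERE SALARY < :SALARY_LIMIT      -- host variable

WHERE HIREDATE <= CURRENT DATE    -- special register
```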
A search condition can specify many predicates separated by AND and OR. No matter how complex
the search condition, it supplies a TRUE or FALSE value when evaluated against a row. There is also an
unknown truth value, which is effectively false. That is, if the value of a row is null, this null value is not
returned as a result of a search because it is not less than, equal to, or greater than the value specified in
the search condition.
To fully understand the WHERE clause, you need to know the order SQL evaluates search conditions and
predicates, and compares the values of expressions. This topic is discussed in the Db2 for i SQL reference
topic collection.
Related concepts
Using subqueries
You can use subqueries in a search condition as another way to select data. Subqueries can be used
anywhere an expression can be used.
Related reference
Defining complex search conditions
In addition to the basic comparison predicates, such as = and >, a search condition can contain any of
these predicates: BETWEEN, IN, EXISTS, IS NULL, and LIKE.
Expressions
Comparison operators
SQL supports several comparison operators.
NOT keyword
You can precede a predicate with the NOT keyword to specify that you want the opposite of the
predicate's value (that is, TRUE if the predicate is FALSE).
NOT applies only to the predicate it precedes, not to all predicates in the WHERE clause. For example, to
indicate that you are interested in all employees except those working in the department C01, you can
say:
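One way to write that query (the select list is illustrative):

```sql
SELECT EMPNO, LASTNAME
   FROM CORPDATA.EMPLOYEE
   WHERE NOT WORKDEPT = 'C01'
```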
GROUP BY clause
The GROUP BY clause allows you to find the characteristics of groups of rows rather than individual rows.
When you specify a GROUP BY clause, SQL divides the selected rows into groups such that the rows of
each group have matching values in one or more columns or expressions. Next, SQL processes each group
to produce a single-row result for the group. You can specify one or more columns or expressions in the
GROUP BY clause to group the rows. The items you specify in the SELECT statement are properties of
each group of rows, not properties of individual rows in a table or view.
Without a GROUP BY clause, the application of SQL aggregate functions returns one row. When GROUP BY
is used, the function is applied to each group, thereby returning as many rows as there are groups.
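For example, the result table below can be produced by a query like this (the exact formatting of the average is illustrative):

```sql
SELECT WORKDEPT, AVG(SALARY) AS "AVG-SALARY"
   FROM CORPDATA.EMPLOYEE
   GROUP BY WORKDEPT
```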
WORKDEPT AVG-SALARY
A00 40850
B01 41250
C01 29722
D11 25147
D21 25668
E01 40175
E11 21020
E21 24086
Notes:
1. Grouping the rows does not mean ordering them. Grouping puts each selected row in a group, which
SQL then processes to derive characteristics of the group. Ordering the rows puts all the rows in
the results table in ascending or descending collating sequence. Depending on the implementation
selected by the database manager, the resulting groups might appear to be ordered.
2. If there are null values in the column you specify in the GROUP BY clause, a single-row result is
produced for the data in the rows with null values.
3. If the grouping occurs over character, or UCS-2 or UTF-16 graphic columns, the sort sequence in effect
when the query is run is applied to the grouping.
When you use GROUP BY, you list the columns or expressions you want SQL to use to group the rows. For
example, suppose that you want a list of the number of people working on each major project described in
the CORPDATA.PROJECT table. You can issue:
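That query can be written as:

```sql
SELECT SUM(PRSTAFF), MAJPROJ
   FROM CORPDATA.PROJECT
   GROUP BY MAJPROJ
```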
The result is a list of the company's current major projects and the number of people working on each
project.
SUM(PRSTAFF) MAJPROJ
6 AD3100
5 AD3110
10 MA2100
8 MA2110
5 OP1000
4 OP2000
3 OP2010
32.5 ?
You can also specify that you want the rows grouped by more than one column or expression. For
example, you can issue a select statement to find the average salary for men and women in each
department, using the CORPDATA.EMPLOYEE table. To do this, you can issue:
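That query can be written as (the result column name is illustrative):

```sql
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
   FROM CORPDATA.EMPLOYEE
   GROUP BY WORKDEPT, SEX
```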
Because you did not include a WHERE clause in this example, SQL examines and processes all rows in
the CORPDATA.EMPLOYEE table. The rows are grouped first by department number and next (within each
department) by sex before SQL derives the average SALARY value for each group.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
ORDER BY clause
The ORDER BY clause specifies the particular order in which you want selected rows returned. The order
is sorted by ascending or descending collating sequence of a column's or an expression's value.
HAVING clause
The HAVING clause specifies a search condition for the groups selected by the GROUP BY clause.
The HAVING clause says that you want only those groups that satisfy the condition in that clause.
Therefore, the search condition you specify in the HAVING clause must test properties of each group
rather than properties of individual rows in the group.
The HAVING clause follows the GROUP BY clause and can contain the same kind of search condition as
you can specify in a WHERE clause. In addition, you can specify aggregate functions in a HAVING clause.
You can use multiple predicates in a HAVING clause by connecting them with AND and OR, and you can
use NOT for any predicate of a search condition.
Note: If you intend to update a column or delete a row, you cannot include a GROUP BY or HAVING clause
in the SELECT statement within a DECLARE CURSOR statement. These clauses make it a read-only cursor.
Predicates with arguments that are not aggregate functions can be coded in either WHERE or HAVING
clauses. It is typically more efficient to code the selection criteria in the WHERE clause because it is
handled earlier in the query processing. The HAVING selection is performed in post processing of the
result table.
If the search condition contains predicates involving character, or UCS-2 or UTF-16 graphic columns, the
sort sequence in effect when the query is run is applied to those predicates.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
Using a cursor
When SQL runs a SELECT statement, the resulting rows comprise the result table. A cursor provides a way
to access a result table.
ORDER BY clause
The ORDER BY clause specifies the particular order in which you want selected rows returned. The order
is sorted by ascending or descending collating sequence of a column's or an expression's value.
For example, to retrieve the names and department numbers of female employees listed in the
alphanumeric order of their department numbers, you can use this select-statement:
SELECT LASTNAME,WORKDEPT
FROM CORPDATA.EMPLOYEE
WHERE SEX='F'
ORDER BY WORKDEPT
LASTNAME WORKDEPT
HAAS A00
HEMMINGER A00
KWAN C01
QUINTANA C01
NICHOLLS C01
NATZ C01
PIANKA D11
SCOUTTEN D11
LUTZ D11
JOHN D11
PULASKI D21
JOHNSON D21
PEREZ D21
HENDERSON E11
SCHNEIDER E11
SETRIGHT E11
SCHWARTZ E11
SPRINGER E11
WONG E21
To retrieve the last and first names of female employees in order of decreasing salary, specify DESC:
SELECT LASTNAME,FIRSTNME
FROM CORPDATA.EMPLOYEE
WHERE SEX='F'
ORDER BY SALARY DESC
If an AS clause is specified to name a result column in the select-list, this name can be specified in the
ORDER BY clause. The name specified in the AS clause must be unique in the select-list. For example, to
retrieve the full names of employees listed in alphabetic order, you can use this select-statement:
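A sketch, assuming the full name is built by concatenating the first and last names:

```sql
SELECT FIRSTNME CONCAT ' ' CONCAT LASTNAME AS FULLNAME
   FROM CORPDATA.EMPLOYEE
   ORDER BY FULLNAME
```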
Instead of naming the columns to order the results, you can use a number. For example, ORDER BY 3
specifies that you want the results ordered by the third column of the results table, as specified by the
select-list. Use a number to order the rows of the results table when the sequencing value is not a named
column.
You can specify a secondary ordering sequence (or several levels of ordering sequences) as well as a
primary one. In the previous example, you might want the rows ordered first by department number, and
within each department, ordered by employee name. To do this, specify:
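Continuing the female-employees example, the statement is:

```sql
SELECT LASTNAME, WORKDEPT
   FROM CORPDATA.EMPLOYEE
   WHERE SEX = 'F'
   ORDER BY WORKDEPT, LASTNAME
```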
If character columns, or UCS-2 or UTF-16 graphic columns are used in the ORDER BY clause, ordering for
these columns is based on the sort sequence in effect when the query is run.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
GROUP BY clause
The GROUP BY clause allows you to find the characteristics of groups of rows rather than individual rows.
Using a cursor
When SQL runs a SELECT statement, the resulting rows comprise the result table. A cursor provides a way
to access a result table.
To get the rows that do not have a null value for the manager number, you can change the WHERE clause
like this:
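For example, assuming the manager number is the MGRNO column of the CORPDATA.DEPARTMENT table:

```sql
SELECT DEPTNO, DEPTNAME, MGRNO
   FROM CORPDATA.DEPARTMENT
   WHERE MGRNO IS NOT NULL
```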
Another predicate that is useful for comparing values that can contain the NULL value is the DISTINCT
predicate. Comparing two columns using a normal equal comparison (COL1 = COL2) will be true if both
columns contain an equal non-null value. If both columns are null, the result will be false because null
is never equal to any other value, not even another null value. Using the DISTINCT predicate, null values
are considered equal. So COL1 is NOT DISTINCT from COL2 will be true if both columns contain an equal
non-null value and also when both columns are the null value.
For example, suppose that you want to select information from two tables that contain null values. The
first table T1 has a column C1 with the following values.
C1
2
1
null
The second table T2 has a column C2 with the following values.
C2
2
null
To select the rows in which C1 is distinct from C2, run the following statement:
SELECT *
FROM T1, T2
WHERE C1 IS DISTINCT FROM C2
C1 C2
1 2
1 -
2 -
- 2
For more information about the use of null values, see the Db2 for i SQL reference topic collection.
Special registers          Contents
CURRENT PATH               The SQL path used to resolve unqualified data type
CURRENT_PATH               names, procedure names, and function names in
CURRENT FUNCTION PATH      dynamically prepared SQL statements.
If a single statement contains more than one reference to any of CURRENT DATE, CURRENT TIME, or
CURRENT TIMESTAMP special registers, or the CURDATE, CURTIME, or NOW scalar functions, all values
are based on a single clock reading.
For remotely run SQL statements, the values for special registers are determined at the remote system.
When a query over a distributed table references a special register, the contents of the special register on
the system that requests the query are used. For more information about distributed tables, see the DB2
Multisystem topic collection.
You can also use the CAST specification to cast data types directly:
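For example (the cast target type and result column name are illustrative):

```sql
SELECT EMPNO, CAST(SALARY AS INTEGER) AS SALARY_INT
   FROM CORPDATA.EMPLOYEE
```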
Related reference
Casting between data types
The CURRENT TIMEZONE special register allows a local time to be converted to Universal Time
Coordinated (UTC). For example, if you have a table named DATETIME that contains a time column type
with a name of STARTT, and you want to convert STARTT to UTC, you can use the following statement:
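The conversion subtracts the special register from the local time:

```sql
SELECT STARTT - CURRENT TIMEZONE
   FROM DATETIME
```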
Date/time arithmetic
Addition and subtraction are the only arithmetic operators applicable to date, time, and timestamp values.
You can increment and decrement a date, time, or timestamp by a duration; or subtract a date from a
date, a time from a time, or a timestamp from a timestamp.
Related reference
Datetime arithmetic in SQL
The ROW CHANGE TOKEN expression can be used for both tables that have a row change timestamp and
tables that do not. It represents a modification point for a row. If a table has a row change timestamp, it
is derived from the timestamp. If a table does not have a row change timestamp, it is based on an internal
modification time that is not row-based, so it is not as accurate as for a table that has a row change
timestamp.
DISTINCT means that you want to select only the unique rows. If a selected row duplicates another row in
the result table, the duplicate row is ignored (it is not put into the result table). For example, suppose you
want a list of employee job codes. You do not need to know which employee has what job code. Because
it is probable that several people in a department have the same job code, you can use DISTINCT to
ensure that the result table has only unique values.
The following example shows how to do this:
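For example (limiting the query to one department is illustrative):

```sql
SELECT DISTINCT JOB
   FROM CORPDATA.EMPLOYEE
   WHERE WORKDEPT = 'D11'
```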
If you do not include DISTINCT in a SELECT clause, you might find duplicate rows in your result, because
SQL returns the JOB column's value for each row that satisfies the search condition. Null values are
treated as duplicate rows for DISTINCT.
If you include DISTINCT in a SELECT clause and you also include a shared-weight sort sequence, fewer
values might be returned. The sort sequence causes values that contain the same characters to be
weighted the same. If 'MGR', 'Mgr', and 'mgr' are all in the same table, only one of these values is
returned.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
The BETWEEN keyword is inclusive. A more complex, but explicit, search condition that produces the
same result is:
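For example, assuming the original predicate was HIREDATE BETWEEN '1987-01-01' AND '1987-12-31', the equivalent explicit condition is:

```sql
WHERE HIREDATE >= '1987-01-01' AND HIREDATE <= '1987-12-31'
```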
• IN says you are interested in rows in which the value of the specified expression is among the values
you listed. For example, to find the names of all employees in departments A00, C01, and E21, you can
specify:
• EXISTS says you are interested in testing for the existence of certain rows. For example, to find out if
there are any employees that have a salary greater than 60000, you can specify:
• IS NULL says that you are interested in testing for null values. For example, to find out if there are any
employees without a phone listing, you can specify:
• LIKE says you are interested in rows in which an expression is similar to the value you supply. When you
use LIKE, SQL searches for a character string similar to the one you specify. The degree of similarity is
determined by two special characters used in the string that you include in the search condition:
_
An underline character stands for any single character.
%
A percent sign stands for an unknown string of 0 or more characters. If the percent sign starts the
search string, then SQL allows 0 or more character(s) to precede the matching value in the column.
Otherwise, the search string must begin in the first position of the column.
Note: If you are operating on MIXED data, the following distinction applies: an SBCS underline character
refers to one SBCS character. No such restriction applies to the percent sign; that is, a percent sign
refers to any number of SBCS or DBCS characters. See the Db2 for i SQL reference topic collection for
more information about the LIKE predicate and MIXED data.
Use the underline character or percent sign either when you do not know or do not care about all the
characters of the column's value. For example, to find out which employees live in Minneapolis, you can
specify:
SQL returns any row with the string MINNEAPOLIS in the ADDRESS column, no matter where the string
occurs.
In another example, to list the towns whose names begin with 'SAN', you can specify:
If you want to find any addresses where the street name isn't in your master street name list, you can
use an expression in the LIKE predicate. In this example, the STREET column in the table is assumed
to be upper case.
If you want to search for a character string that contains either the underscore or percent character,
use the ESCAPE clause to specify an escape character. For example, to see all businesses that have a
percent in their name, you can specify:
The first and last percent characters in the LIKE string are interpreted as the normal LIKE percent
characters. The combination '@%' is taken as the actual percent character.
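Sketches of the predicates described above; the column and table choices follow the surrounding text, and the business-name example is illustrative:

```sql
-- IN
WHERE WORKDEPT IN ('A00', 'C01', 'E21')

-- EXISTS
WHERE EXISTS (SELECT * FROM CORPDATA.EMPLOYEE WHERE SALARY > 60000)

-- IS NULL
WHERE PHONENO IS NULL

-- LIKE: any row with MINNEAPOLIS anywhere in ADDRESS
WHERE ADDRESS LIKE '%MINNEAPOLIS%'

-- LIKE: towns whose names begin with SAN
WHERE TOWN LIKE 'SAN%'

-- ESCAPE: match a literal percent sign in the name
WHERE NAME LIKE '%@%%' ESCAPE '@'
```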
Related concepts
Using subqueries
You can use subqueries in a search condition as another way to select data. Subqueries can be used
anywhere an expression can be used.
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
Specifying a search condition using the WHERE clause
The WHERE clause specifies a search condition that identifies the row or rows that you want to retrieve,
update, or delete.
Expressions in the WHERE clause
An expression in a WHERE clause names or specifies something that you want to compare to something
else.
Predicates
However, if you do a search using the search pattern 'ABC%' contained in a host variable with a fixed
length of 10, these values can be returned, assuming that the column has a length of 12:
Note: All returned values start with 'ABC' and end with at least 6 blanks. Blanks are used because the
last 6 characters in the host variable are not assigned a specific value.
If you want to do a search using a fixed-length host variable where the last 7 characters can be
anything, search for 'ABC%%%%%%%'. These are some of the values that can be returned:
...
WHERE WORKDEPT = 'D21' AND HIREDATE > '1987-12-31'
• OR says that, for a row to qualify, the row can satisfy the condition set by either or both predicates of
the search condition. For example, to find out which employees are in either department C01 or D11,
you can specify:
...
WHERE WORKDEPT = 'C01' OR WORKDEPT = 'D11'
Note: You can also use IN to specify this request: WHERE WORKDEPT IN ('C01', 'D11').
• NOT says that, to qualify, a row must not meet the criteria set by the search condition or predicate that
follows the NOT. For example, to find all employees in the department E11 except those with a job code
equal to analyst, you can specify:
...
WHERE WORKDEPT = 'E11' AND NOT JOB = 'ANALYST'
When SQL evaluates search conditions that contain these connectors, it does so in a specific order. SQL
first evaluates the NOT clauses, next evaluates the AND clauses, and then the OR clauses.
SQL programming 83
You can change the order of evaluation by using parentheses. The search conditions enclosed in
parentheses are evaluated first. For example, to select all employees in departments E11 and E21 who
have education levels greater than 12, you can specify:
...
WHERE EDLEVEL > 12 AND
(WORKDEPT = 'E11' OR WORKDEPT = 'E21')
The parentheses determine the meaning of the search condition. In this example, you want all rows that
have a:
• WORKDEPT value of E11 or E21, and
• EDLEVEL value greater than 12
If you did not use parentheses:
...
WHERE EDLEVEL > 12 AND WORKDEPT = 'E11'
OR WORKDEPT = 'E21'
Your result is different. The selected rows are rows that have:
• WORKDEPT = E11 and EDLEVEL > 12, or
• WORKDEPT = E21, regardless of the EDLEVEL value
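The difference the two bullet lists describe can be demonstrated with a small sqlite3 sketch (employee numbers and education levels are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, WORKDEPT TEXT, EDLEVEL INT)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?, ?)", [
    ("000010", "E11", 14),
    ("000020", "E11", 10),
    ("000030", "E21", 10),
])

# With parentheses: EDLEVEL must exceed 12 in either department.
with_parens = conn.execute(
    "SELECT EMPNO FROM EMPLOYEE "
    "WHERE EDLEVEL > 12 AND (WORKDEPT = 'E11' OR WORKDEPT = 'E21') "
    "ORDER BY EMPNO"
).fetchall()

# Without parentheses: AND binds tighter than OR, so every E21 row
# qualifies regardless of its EDLEVEL value.
without_parens = conn.execute(
    "SELECT EMPNO FROM EMPLOYEE "
    "WHERE EDLEVEL > 12 AND WORKDEPT = 'E11' OR WORKDEPT = 'E21' "
    "ORDER BY EMPNO"
).fetchall()

print(with_parens)     # [('000010',)]
print(without_parens)  # [('000010',), ('000030',)]
```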
If you are combining multiple equal comparisons, you can write the predicate with the ANDs as shown in
the following example:
...
WHERE WORKDEPT = 'E11' AND EDLEVEL = 12 AND JOB = 'CLERK'
...
WHERE (WORKDEPT, EDLEVEL, JOB) = ('E11', 12, 'CLERK')
When two lists are used, the first item in the first list is compared to the first item in the second list, and so
on through both lists. Thus, each list must contain the same number of entries. Using lists is identical to
writing the query with AND. Lists can only be used with the equal and not equal comparison operators.
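Both spellings can be compared side by side in sqlite3, which also supports row-value comparisons (SQLite 3.15 or newer); the sample rows are illustrative:

```python
import sqlite3  # row-value comparisons need SQLite 3.15 or newer

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE EMPLOYEE (EMPNO TEXT, WORKDEPT TEXT, EDLEVEL INT, JOB TEXT)"
)
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?, ?, ?)", [
    ("000010", "E11", 12, "CLERK"),
    ("000020", "E11", 14, "CLERK"),
])

anded = conn.execute(
    "SELECT EMPNO FROM EMPLOYEE "
    "WHERE WORKDEPT = 'E11' AND EDLEVEL = 12 AND JOB = 'CLERK'"
).fetchall()

# The list form compares item by item and returns the same rows
# as the ANDed form.
listed = conn.execute(
    "SELECT EMPNO FROM EMPLOYEE "
    "WHERE (WORKDEPT, EDLEVEL, JOB) = ('E11', 12, 'CLERK')"
).fetchall()

print(anded, listed)  # [('000010',)] [('000010',)]
```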
Related reference
Specifying a search condition using the WHERE clause
The WHERE clause specifies a search condition that identifies the row or rows that you want to retrieve,
update, or delete.
In this example, the rows are ordered by SALARY in descending order, with the top 10 returned. The RANK column shows the
relative ranking of each salary. Notice that there are two rows with the same salary at position 2. Each of
those rows is assigned the same rank value. The following row is assigned the value of 4. RANK returns a
value for a row that is one more than the total number of rows that precede that row. There are gaps in the
numbering sequence whenever there are duplicates.
In contrast, the DENSE_RANK column shows a value of 3 for the row directly after the duplicate rows.
DENSE_RANK returns a value for a row that is one more than the number of distinct row values that
precede it. There will never be gaps in the numbering sequence.
ROW_NUMBER returns a unique number for each row. For rows that contain duplicate values according to
the specified ordering, the assignment of a row number is arbitrary; the row numbers could be assigned in
a different order for the duplicate rows when the query is run another time.
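The three functions can be compared in one query with sqlite3 (window functions need SQLite 3.25 or newer); the names and salaries below are illustrative, not the sample data that produced the manual's result table:

```python
import sqlite3  # window functions need SQLite 3.25 or newer

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (LASTNAME TEXT, SALARY INT)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)", [
    ("HAAS", 52750), ("LUCCHESSI", 46500),
    ("THOMPSON", 46500), ("GEYER", 40175),
])

rows = conn.execute("""
    SELECT LASTNAME,
           RANK()       OVER (ORDER BY SALARY DESC) AS RNK,
           DENSE_RANK() OVER (ORDER BY SALARY DESC) AS DENSE_RNK,
           ROW_NUMBER() OVER (ORDER BY SALARY DESC) AS ROW_NUM
    FROM EMPLOYEE
    ORDER BY RNK, LASTNAME
""").fetchall()
for r in rows:
    print(r)
# The two tied salaries both get RANK 2 and the next RANK jumps to 4;
# DENSE_RANK continues with 3; ROW_NUMBER stays unique, but its
# assignment between the tied rows is arbitrary.
```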
Table 13. Results of previous query (continued)
WORKDEPT AVERAGE AVG_SALARY QUANTILE
D11 25,147 6 2
E21 24,086 7 3
E11 21,020 8 3
In this example, the NTILE function has an argument of 3, meaning that the results are to be grouped into
3 equal-sized sets. Since the result set is not evenly divisible by the number of quantiles, an additional
row is included in each of the two lowest-numbered quantiles.
LASTNAME WORKDEPT BONUS BONUS_RANK_IN_DEPT
GEYER E01 800.00 1
HENDERSON E11 600.00 1
SCHNEIDER E11 500.00 2
SCHWARTZ E11 500.00 2
SMITH E11 400.00 3
PARKER E11 300.00 4
SETRIGHT E11 300.00 4
SPRINGER E11 300.00 4
SPENSER E21 500.00 1
LEE E21 500.00 1
GOUNOT E21 500.00 1
WONG E21 500.00 1
ALONZO E21 500.00 1
MENTA E21 400.00 2
ROW LASTNAME SALARY ROLLING_TOTAL_RANGE ROLLING_TOTAL_ROWS DISTRIBUTION
1 JONES 18,270.00 18,270.00 18,270.00 .091
2 WALKER 20,450.00 38,720.00 38,720.00 .182
3 SCOUTTEN 21,340.00 60,060.00 60,060.00 .273
4 PIANKA 22,250.00 82,310.00 82,310.00 .364
5 YOSHIMURA 24,680.00 131,670.00 106,990.00 .545
6 YAMAMOTO 24,680.00 131,670.00 131,670.00 .545
Table 16. Results of the previous query (continued)
ROW LASTNAME SALARY ROLLING_TOTAL_RANGE ROLLING_TOTAL_ROWS DISTRIBUTION
7 ADAMSON 25,280.00 156,950.00 156,950.00 .636
8 BROWN 27,740.00 184,690.00 184,690.00 .727
9 LUTZ 29,840.00 244,370.00 214,530.00 .909
10 JOHN 29,840.00 244,370.00 244,370.00 .909
11 STERN 32,250.00 276,620.00 276,620.00 1.000
This example shows two ways of defining the window to be used for calculating the value of a group.
The first way to define the window is with RANGE, which defines a group for all rows that have the same
order by value. Row numbers 5 and 6 have the same salary value, so they are treated as a group. Their
salaries are summed together and added to the previous total to generate the same value for each of the
rows as seen in the ROLLING_TOTAL_RANGE column.
The second way to define the window is with ROWS, which treats each row as a group. In the
ROLLING_TOTAL_ROWS column each row shows the sum calculated up to and including the current
row. For rows that have the same salary value, such as rows 5 and 6, the order in which they are returned
is not defined.
RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW is the default aggregation group and
could be omitted from the ROLLING_TOTAL_RANGE specification.
An ORDER BY is specified for the entire query to guarantee the rows are returned ordered.
LASTNAME SALARY ROLLING_TOTAL WINDOWED_TOTAL FIRST_VALUE LAST_VALUE
JONES 18,270.00 18,270.00 18,270.00 18,270.00 18,270.00
WALKER 20,450.00 38,720.00 41,790.00 20,450.00 21,340.00
SCOUTTEN 21,340.00 60,060.00 64,040.00 20,450.00 22,250.00
PIANKA 22,250.00 82,310.00 43,590.00 21,340.00 25,280.00
YOSHIMURA 24,680.00 131,670.00 74,640.00 24,680.00 25,280.00
YAMAMOTO 24,680.00 131,670.00 74,640.00 24,680.00 25,280.00
ADAMSON 25,280.00 156,950.00 74,640.00 24,680.00 25,280.00
BROWN 27,740.00 184,690.00 27,740.00 27,740.00 27,740.00
LUTZ 29,840.00 244,370.00 59,680.00 29,840.00 29,840.00
JOHN 29,840.00 244,370.00 59,680.00 29,840.00 29,840.00
STERN 32,250.00 276,620.00 32,250.00 32,250.00 32,250.00
For each employee, a group is defined that contains all other employees in department D11 with salaries
that fall within a range of 1000 below (PRECEDING) or 1000 above (FOLLOWING) that employee's salary.
The WINDOWED_TOTAL column returns the sum of all the salaries in that group. The FIRST_VALUE
column returns the lowest salary value that is part of the group. The LAST_VALUE column returns the
highest salary value that is part of the group. Any employee whose salary is more than 1000 away from
every other salary forms a group by itself.
An ORDER BY is specified for the entire query to guarantee the rows are returned ordered.
Table 18. Results of the previous query (continued)
LASTNAME SALARY AVG_SALARY
STERN 32,250.00 31,045.00
Inner join
An inner join returns only the rows from each table that have matching values in the join columns. Any
rows that do not have a match between the tables do not appear in the result table.
With an inner join, column values from one row of a table are combined with column values from another
row of another (or the same) table to form a single row of data. SQL examines both tables specified for the
join to retrieve data from all the rows that meet the search condition for the join. There are two ways of
specifying an inner join: using the JOIN syntax, and using the WHERE clause.
Suppose you want to retrieve the employee numbers, names, and project numbers for all employees
that are responsible for a project. In other words, you want the EMPNO and LASTNAME columns from
the CORPDATA.EMPLOYEE table and the PROJNO column from the CORPDATA.PROJECT table. Only
employees with last names starting with 'S' or later should be considered. To find this information, you
need to join the two tables.
In this example, the join is done on the two tables using the EMPNO and RESPEMP columns from the
tables. Since only employees that have last names starting with at least 'S' are to be returned, this
additional condition is provided in the WHERE clause.
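A hedged sketch of the join just described, using sqlite3 with cut-down stand-ins for the two sample tables (only the columns the query needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, LASTNAME TEXT)")
conn.execute("CREATE TABLE PROJECT (PROJNO TEXT, RESPEMP TEXT)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)", [
    ("000100", "SPENSER"), ("000250", "SMITH"), ("000030", "KWAN"),
])
conn.executemany("INSERT INTO PROJECT VALUES (?, ?)", [
    ("OP2010", "000100"), ("AD3112", "000250"), ("IF1000", "000030"),
])

# JOIN syntax: the ON clause carries the join condition, the WHERE
# clause carries the additional LASTNAME restriction.
rows = conn.execute("""
    SELECT E.EMPNO, E.LASTNAME, P.PROJNO
    FROM EMPLOYEE E JOIN PROJECT P ON E.EMPNO = P.RESPEMP
    WHERE E.LASTNAME >= 'S'
    ORDER BY E.EMPNO
""").fetchall()
print(rows)  # [('000100', 'SPENSER', 'OP2010'), ('000250', 'SMITH', 'AD3112')]
```

The same inner join can also be written with the WHERE-clause form, listing both tables in the FROM clause and moving the join condition into the WHERE clause alongside the LASTNAME restriction.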
The syntax in this statement is valid and equivalent to the join condition in the following statement:
The result of this query contains some employees that do not have a project number. They are listed in the
query, but have the null value returned for their project number.
EMPNO LASTNAME PROJNO
000100 SPENSER OP2010
000170 YOSHIMURA -
000180 SCOUTTEN -
000190 WALKER -
000250 SMITH AD3112
000280 SCHNEIDER -
000300 SMITH -
000310 SETRIGHT -
200170 YAMAMOTO -
200280 SCHWARTZ -
200310 SPRINGER -
200330 WONG -
Note: Using the RRN scalar function to return the relative record number for a column in the table on the
right in a left outer join or exception join will return a value of 0 for the unmatched rows.
The results of this query are identical to the results from the left outer join query.
Exception join
A left exception join returns only the rows from the first table that do not have a match in the second
table.
Using the same tables as before, return those employees that are not responsible for any projects.
An exception join can also be written as a subquery using the NOT EXISTS predicate. The previous query
can be rewritten in the following way:
The only difference in this query is that it cannot return values from the PROJECT table.
There is a right exception join, too, that works just like a left exception join but with the tables reversed.
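The NOT EXISTS rewrite can be sketched with sqlite3 (sample rows invented for the illustration); as the text notes, this form cannot return columns from the PROJECT table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, LASTNAME TEXT)")
conn.execute("CREATE TABLE PROJECT (PROJNO TEXT, RESPEMP TEXT)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)", [
    ("000100", "SPENSER"), ("000170", "YOSHIMURA"),
])
conn.execute("INSERT INTO PROJECT VALUES ('OP2010', '000100')")

# Employees with no matching row in PROJECT: the NOT EXISTS form
# of a left exception join.
rows = conn.execute("""
    SELECT EMPNO, LASTNAME
    FROM EMPLOYEE E
    WHERE NOT EXISTS (SELECT 1 FROM PROJECT P WHERE P.RESPEMP = E.EMPNO)
""").fetchall()
print(rows)  # [('000170', 'YOSHIMURA')]
```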
Cross join
A cross join, also known as a Cartesian Product join, returns a result table where each row from the first
table is combined with each row from the second table.
The number of rows in the result table is the product of the number of rows in each table. If the tables
involved are large, this join can take a very long time.
A cross join can be specified in two ways: using the JOIN syntax or by listing the tables in the FROM
clause separated by commas without using a WHERE clause to supply join criteria.
Suppose that the following tables exist.
SELECT * FROM A, B
The result table for either of these SELECT statements looks like this.
ACOL1 ACOL2 BCOL1 BCOL2
A1 AA1 B1 BB1
A1 AA1 B2 BB2
A2 AA2 B1 BB1
A2 AA2 B2 BB2
A3 AA3 B1 BB1
A3 AA3 B2 BB2
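Both spellings of the cross join can be checked against the tables above with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (ACOL1 TEXT, ACOL2 TEXT)")
conn.execute("CREATE TABLE B (BCOL1 TEXT, BCOL2 TEXT)")
conn.executemany("INSERT INTO A VALUES (?, ?)",
                 [("A1", "AA1"), ("A2", "AA2"), ("A3", "AA3")])
conn.executemany("INSERT INTO B VALUES (?, ?)",
                 [("B1", "BB1"), ("B2", "BB2")])

# Both forms produce the same 3 x 2 = 6 rows.
join_syntax = conn.execute(
    "SELECT * FROM A CROSS JOIN B ORDER BY ACOL1, BCOL1"
).fetchall()
comma_syntax = conn.execute(
    "SELECT * FROM A, B ORDER BY ACOL1, BCOL1"
).fetchall()
print(len(join_syntax))  # 6
```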
Because there are no projects without an assigned employee, the query returns the same rows as a left
outer join. Here are the results.
MGRNO DEPTNO MAXSAL
000050 E01 40175.00
000090 E11 29750.00
000100 E21 26150.00
Common table expressions can be specified before the full-select in a SELECT statement, an INSERT
statement, or a CREATE VIEW statement. They can be used when the same result table needs to be
shared in a full-select. Common table expressions are preceded with the keyword WITH.
For example, suppose you want a table that shows the minimum and maximum of the average salary of
a certain set of departments. The first character of the department number has some meaning and you
want to get the minimum and maximum for those departments that start with the letter 'D' and those
that start with the letter 'E'. You can use a common table expression to select the average salary for each
department. Again, you must name the derived table; in this case, the name is DT. You can then specify a
SELECT statement using a WHERE clause to restrict the selection to only the departments that begin with
a certain letter. Specify the minimum and maximum of column AVGSAL from the derived table DT. Specify
a UNION to get the results for the letter 'E' and the results for the letter 'D'.
MAX(AVGSAL) MIN(AVGSAL)
E 40175.00 21020.00
D 25668.57 25147.27
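A sketch of the common table expression pattern just described, runnable with sqlite3. The salaries are invented, so the numbers differ from the manual's result table; the shape of the query is what matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (WORKDEPT TEXT, SALARY REAL)")
conn.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)", [
    ("D11", 25000), ("D11", 27000), ("D21", 24000),
    ("E11", 21000), ("E21", 24000),
])

# The derived table DT is named once after WITH and then shared by
# both halves of the UNION.
rows = conn.execute("""
    WITH DT AS (
        SELECT WORKDEPT, AVG(SALARY) AS AVGSAL
        FROM EMPLOYEE
        GROUP BY WORKDEPT
    )
    SELECT 'D' AS LETTER, MAX(AVGSAL), MIN(AVGSAL)
      FROM DT WHERE WORKDEPT LIKE 'D%'
    UNION
    SELECT 'E', MAX(AVGSAL), MIN(AVGSAL)
      FROM DT WHERE WORKDEPT LIKE 'E%'
    ORDER BY 1
""").fetchall()
print(rows)  # [('D', 26000.0, 24000.0), ('E', 24000.0, 21000.0)]
```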
Suppose that you want to write a query against your ordering database that will return the top 5 items (in
total quantity ordered) within the last 1000 orders from customers who also ordered item 'XXX'.
The first common table expression (X) returns the most recent 1000 order numbers. The result is ordered
by the date in descending order and then only the first 1000 of those ordered rows are returned as the
result table.
The second common table expression (Y) joins the most recent 1000 orders with the line item table and
returns (for each of the 1000 orders) the customer, line item, and quantity of the line item for that order.
PRICE INT);
There are several parts to this hierarchical query. There is an initial selection which defines the initial
seed for the recursion. In this case, it is the rows from the flights table that START WITH a departure
from 'Chicago'. The CONNECT BY clause is used to define how the rows that have already been generated
are to be 'connected' to generate more rows for subsequent iterations of the query. The PRIOR unary
operator tells Db2 how to select a new row based on the results of the previous row. The recursive join
column (typically one column but could have several) selected by the result of the START WITH clause
is referenced by the PRIOR keyword. This means that the previous row's ARRIVAL city becomes the new
row's PRIOR value for the DEPARTURE city. This is encapsulated in the clause CONNECT BY PRIOR arrival
= departure.
In this example, there are two data sources feeding the recursion, a list of flights and a list of trains. In the
final results, you see how many connections are needed to travel between the cities.
The result table shows all the destinations possible from the origin city of New York. All sibling
destinations (those destinations that originate from the same departure city) are output sorted by ticket
price. For example, the destinations from Paris are Rome, Madrid and Cairo; they are output ordered by
ascending ticket price. Note that the output shows New York to LA as the first destination directly from
New York because it has a less expensive ticket price (330) than the direct connections to London or
Paris, which are 350 and 400 respectively.
The following query illustrates the tolerance of the cyclic data by specifying NOCYCLE. In addition,
the CONNECT_BY_ISCYCLE pseudo column is used to identify cyclic rows and the function
SYS_CONNECT_BY_PATH is used to build an itinerary string of all the connection cities leading up to
the destination. SYS_CONNECT_BY_PATH is implemented as a CLOB data type so you have a large result
column to reflect deep recursions.
Note that the result set row that reflects cyclic data often depends on where you start in the cycle with
the START WITH clause.
This query can also be expressed without using the JOIN syntax. The query optimizer will pull out of the
WHERE clause those predicates that are join predicates to be processed first and leave any remaining
WHERE predicates to be evaluated after the recursion.
In this second example, if the WHERE predicates are more complex, you may need to aid the optimizer by
explicitly pulling out the JOIN predicates between the flights and flightstats tables and using both an ON
clause and a WHERE clause.
If you want additional search conditions to be applied as part of the recursion process, for example you
never want to take a flight with an on time percentage of less than 90%, you can also control the join
results by putting the join in a derived table with a join predicate and a WHERE clause.
Another option is to put the selection predicates in the START WITH and CONNECT BY clauses.
In this case, you would be out of luck as there are no direct flights out of New York with a greater than
90% on time statistic. Since there is nothing to seed the recursion, no rows are returned from the query.
This recursive query is written in two parts. The first part of the common table expression is called the
initialization fullselect. It selects the first rows for the result set of the common table expression. In this
example, it selects the two rows in the flights table that get you directly to another location from Chicago.
It also initializes the number of flight legs to one for each row it selects.
The second part of the recursive query joins the rows from the current result set of the common table
expression with other rows from the original table. It is called the iterative fullselect. This is where the
recursion is introduced. Notice that the rows that have already been selected for the result set are
referenced by using the name of the common table expression as the table name and the common table
expression result column names as the column names.
In this recursive part of the query, any rows from the original table that you can get to from each of the
previously selected arrival cities are selected. A previously selected row's arrival city becomes the new
departure city. Each row from this recursive select increments the flight count to the destination by one
more flight. As these new rows are added to the common table expression result set, they are also fed
into the iterative fullselect to generate more result set rows. In the data for the final result, you can see
that the total number of flights is actually the total number of recursive joins (plus 1) it took to get to that
arrival city.
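The two-part structure just described can be sketched with sqlite3, which supports the same recursive common table expression form (SQLite spells the keyword WITH RECURSIVE). The flight rows are a small invented subset, not the manual's full sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FLIGHTS (DEPARTURE TEXT, ARRIVAL TEXT)")
conn.executemany("INSERT INTO FLIGHTS VALUES (?, ?)", [
    ("Chicago", "Miami"), ("Chicago", "Frankfort"),
    ("Frankfort", "Moscow"), ("Miami", "Lima"),
])

rows = conn.execute("""
    WITH RECURSIVE DESTINATIONS (DEPARTURE, ARRIVAL, FLIGHT_COUNT) AS (
        -- initialization fullselect: direct flights out of Chicago,
        -- with the leg count initialized to one
        SELECT DEPARTURE, ARRIVAL, 1
        FROM FLIGHTS
        WHERE DEPARTURE = 'Chicago'
        UNION ALL
        -- iterative fullselect: a previously selected arrival city
        -- becomes the next departure city, adding one more leg
        SELECT D.DEPARTURE, F.ARRIVAL, D.FLIGHT_COUNT + 1
        FROM DESTINATIONS D
        JOIN FLIGHTS F ON D.ARRIVAL = F.DEPARTURE
    )
    SELECT ARRIVAL, FLIGHT_COUNT FROM DESTINATIONS
    ORDER BY FLIGHT_COUNT, ARRIVAL
""").fetchall()
print(rows)  # [('Frankfort', 1), ('Miami', 1), ('Lima', 2), ('Moscow', 2)]
```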
A recursive view looks very similar to a recursive common table expression. You can write the previous
recursive common table expression as a recursive view like this:
The iterative fullselect part of this view definition refers to the view itself. Selection from this view returns
the same rows as you get from the previous recursive common table expression. For comparison, note
that connect by recursion is allowed anywhere a SELECT is allowed, so it can easily be included in a view
definition.
For each returned row, the results show the starting departure city and the final destination city. It counts
the number of connections needed rather than the total number of flights, and adds up the total cost for all
the flights.
Example: Two tables used for recursion using recursive common table expressions
Now, suppose you start in Chicago but add in transportation by railway in addition to the airline flights,
and you want to know which cities you can go to.
The following query returns that information:
In this example, there are two parts of the common table expression that provide initialization values to
the query: one for flights and one for trains. For each of the result rows, there are two recursive references
to get from the previous arrival location to the next possible destination: one for continuing by air, the
other for continuing by train. In the final results, you would see how many connections are needed and
how many airline or train trips can be taken.
Example: DEPTH FIRST and BREADTH FIRST options for recursive common table
expressions
The two examples here show the difference in the result set row order based on whether the recursion is
processed depth first or breadth first.
Note: The search clause is not supported directly for recursive views. You can define a view that contains
a recursive common table expression to get this function.
The option to determine the result using breadth first or depth first is a recursive relationship sort
based on the recursive join column specified for the SEARCH BY clause. When the recursion is handled
breadth first, all children are processed first, then all grandchildren, then all great grandchildren. When
the recursion is handled depth first, the full recursive ancestry chain of one child is processed before
going to the next child.
In both of these cases, you specify an extra column name that is used by the recursive process to keep
track of the depth first or breadth first ordering. This column must be used in the ORDER BY clause of the
outer query to get the rows back in the specified order. If this column is not used in the ORDER BY, the
DEPTH FIRST or BREADTH FIRST processing option is ignored.
The selection of which column to use for the SEARCH BY column is important. To have any meaning in the
result, it must be the column that is used in the iterative fullselect to join from the initialization fullselect.
In this example, ARRIVAL is the column to use.
The following query returns that information:
In this result data, you can see that all destinations that are generated from the Chicago-to-Miami row are
listed before the destinations from the Chicago-to-Frankfort row.
Next, you can run the same query but request the result to be ordered breadth first.
In this example, the ARRIVAL column is defined in the CYCLE clause as the column to use for detecting
a cycle in the data. When a cycle is found, a special column, CYCLIC_DATA in this case, is set to the
character value of '1' for the cycling row in the result set. All other rows will contain the default value of
'0'. When a cycle on the ARRIVAL column is found, processing will not proceed any further in the data so
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
UNION
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO = 'MA2112' OR
PROJNO = 'MA2113' OR
PROJNO = 'AD3111'
ORDER BY EMPNO
To better understand the results from these SQL statements, imagine that SQL goes through the following
process:
Step 1. SQL processes the first SELECT statement:
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
Step 3. SQL combines the two interim result tables, removes duplicate rows, and orders the result:
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
UNION
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
ORDER BY EMPNO
The query returns a combined result table with values in ascending sequence.
EMPNO
000060
000150
000160
SELECT A + B AS X ...
UNION
SELECT X ... ORDER BY X
If the result columns are unnamed, use a positive integer to order the result. The number refers to the
position of the expression in the list of expressions you include in your subselects.
SELECT A + B ...
UNION
SELECT X ... ORDER BY 1
To identify which subselect each row is from, you can include a constant at the end of the select list of
each subselect in the union. When SQL returns your results, the last column contains the constant for the
subselect that is the source of that row. For example, you can specify:
When a row is returned, it includes a value (either A1 or B2) to indicate the table that is the source of the
row's values. If the column names in the union are different, SQL uses the set of column names specified
in the first subselect when interactive SQL displays or prints the results, or in the SQLDA resulting from
processing an SQL DESCRIBE statement.
Note: Sort sequence is applied after the fields across the UNION pieces are made compatible. The sort
sequence is used for the distinct processing that implicitly occurs during UNION processing.
Related concepts
Sort sequences and normalization in SQL
A sort sequence defines how characters in a character set relate to each other when they are compared or
ordered. Normalization allows you to compare strings that contain combining characters.
Related reference
Creating and using views
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
UNION ALL
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
ORDER BY EMPNO
EMPNO
000060
000150
000150
000150
000160
000160
000170
000170
000170
000170
000180
000180
000190
000190
000190
000200
000210
000210
000210
000220
000230
000230
When you include the UNION ALL in the same SQL statement as a UNION operator, however, the result of
the operation depends on the order of evaluation. Where there are no parentheses, evaluation is from left
to right. Where parentheses are included, the parenthesized subselect is evaluated first, followed, from
left to right, by the other parts of the statement.
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
EXCEPT
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO = 'MA2112' OR
PROJNO = 'MA2113' OR
PROJNO = 'AD3111'
ORDER BY EMPNO
To better understand the results from these SQL statements, imagine that SQL goes through the following
process:
Step 1. SQL processes the first SELECT statement:
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
EXCEPT
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
ORDER BY EMPNO
This query returns a combined result table with values in ascending sequence.
EMPNO
000060
000200
000220
200170
200220
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
INTERSECT
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO = 'MA2112' OR
PROJNO = 'MA2113' OR
PROJNO = 'AD3111'
ORDER BY EMPNO
To better understand the results from these SQL statements, imagine that SQL goes through the following
process:
Step 1. SQL processes the first SELECT statement:
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
SELECT EMPNO
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
INTERSECT
SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO='MA2112' OR
PROJNO= 'MA2113' OR
PROJNO= 'AD3111'
ORDER BY EMPNO
This query returns a combined result table with values in ascending sequence.
EMPNO
000150
000160
000170
000180
000190
000210
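Both set operators can be sketched together with sqlite3. The two tables stand in for the result tables of the department and project subselects above, and the employee numbers are a small invented subset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DEPT_EMPS (EMPNO TEXT)")  # stand-in for the D11 subselect
conn.execute("CREATE TABLE PROJ_EMPS (EMPNO TEXT)")  # stand-in for the project subselect
conn.executemany("INSERT INTO DEPT_EMPS VALUES (?)",
                 [("000060",), ("000150",), ("000200",)])
conn.executemany("INSERT INTO PROJ_EMPS VALUES (?)",
                 [("000150",), ("000150",), ("000210",)])

# EXCEPT: distinct rows in the first result table but not the second.
only_dept = conn.execute(
    "SELECT EMPNO FROM DEPT_EMPS EXCEPT SELECT EMPNO FROM PROJ_EMPS "
    "ORDER BY EMPNO"
).fetchall()

# INTERSECT: distinct rows that appear in both result tables.
in_both = conn.execute(
    "SELECT EMPNO FROM DEPT_EMPS INTERSECT SELECT EMPNO FROM PROJ_EMPS "
    "ORDER BY EMPNO"
).fetchall()

print(only_dept)  # [('000060',), ('000200',)]
print(in_both)    # [('000150',)]
```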
The INTO clause names the columns for which you specify values. The VALUES clause specifies a value
for each column named in the INTO clause. The value you specify can be:
• A constant. Inserts the value provided in the VALUES clause.
• A null value. Inserts the null value, using the keyword NULL. The column must be defined as capable of
containing a null value or an error occurs.
• A host variable. Inserts the contents of a host variable.
You can also insert multiple rows into a table using the VALUES clause. The following example inserts
two rows into the PROJECT table. Values for the Project number (PROJNO), Project name (PROJNAME),
Department number (DEPTNO), and Responsible employee (RESPEMP) are given in the values list. The
value for the Project start date (PRSTDATE) uses the current date. The rest of the columns in the table that
are not listed in the column list are assigned their default value.
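The multiple-row VALUES insert can be sketched with sqlite3. The column set is a simplified stand-in for the PROJECT sample table, and the project numbers are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PROJECT (
        PROJNO   TEXT,
        PROJNAME TEXT,
        DEPTNO   TEXT,
        RESPEMP  TEXT,
        PRSTDATE TEXT DEFAULT CURRENT_DATE,
        PRENDATE TEXT DEFAULT NULL
    )
""")

# One VALUES clause, two rows; PRSTDATE uses the current date, and the
# unlisted PRENDATE column falls back to its default.
conn.execute("""
    INSERT INTO PROJECT (PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTDATE)
    VALUES ('HG0023', 'NEW NETWORK', 'E11', '200280', CURRENT_DATE),
           ('HG0024', 'NETWORK PRINT', 'E11', '200310', CURRENT_DATE)
""")
rows = conn.execute(
    "SELECT PROJNO, PRENDATE FROM PROJECT ORDER BY PROJNO"
).fetchall()
print(rows)  # [('HG0023', None), ('HG0024', None)]
```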
The select-statement embedded in the INSERT statement is no different from the select-statement you
use to retrieve data. With the exception of FOR READ ONLY, FOR UPDATE, or the OPTIMIZE clause, you
can use all the keywords, functions, and techniques used to retrieve data. SQL inserts all the rows that
meet the search conditions into the table you specify. Inserting rows from one table into another table
does not affect any existing rows in either the source table or the target table.
You should consider the following when inserting multiple rows into a table:
Notes:
DSTRUCT is a host structure array with five elements that is declared in the program. The five elements
correspond to EMPNO, FIRSTNME, MIDINIT, LASTNAME, and WORKDEPT. DSTRUCT has a dimension of
at least ten to accommodate inserting ten rows. ISTRUCT is a host structure array that is declared in the
program. ISTRUCT has a dimension of at least ten small integer fields for the indicators.
Blocked INSERT statements are supported for non-distributed SQL applications and for distributed
applications where both the application server and the application requester are IBM i products.
Related concepts
Embedded SQL programming
Notice that the parent table columns are not specified in the REFERENCES clause. The columns are not
required to be specified as long as the referenced table has a primary key or eligible unique key which can
be used as the parent key.
Every row inserted into the PROJECT table must have a value of DEPTNO that is equal to some value of
DEPTNO in the department table. (The null value is not allowed because DEPTNO in the project table is
defined as NOT NULL.) The row must also have a value of RESPEMP that is either equal to some value of
EMPNO in the employee table or is null.
The following INSERT statement fails because there is no matching DEPTNO value ('A01') in the
DEPARTMENT table.
Likewise, the following INSERT statement is unsuccessful because there is no EMPNO value of '000011'
in the EMPLOYEE table.
The following INSERT statement completes successfully because there is a matching DEPTNO value of
'E01' in the DEPARTMENT table and a matching EMPNO value of '000010' in the EMPLOYEE table.
In this case, a value is generated by the system for the identity column automatically. You can also write
this statement using the DEFAULT keyword:
After the insert, you can use the IDENTITY_VAL_LOCAL function to determine the value that the system
assigned to the column.
Sometimes a value for an identity column is specified by the user, such as in this INSERT statement using
a SELECT:
This INSERT statement uses the value from SELECT; it does not generate a new value for the identity
column. You cannot provide a value for an identity column created using GENERATED ALWAYS without
using the OVERRIDING SYSTEM VALUE clause.
Related reference
Creating and altering an identity column
Every time a row is added to a table with an identity column, the identity column value for the new row is
generated by the system.
Scalar functions
To insert a row for a new employee and see the values that were used for EMPNO, HIRETYPE, and
HIREDATE, use the following statement:
The returned values are the generated value for EMPNO, 'New Employee' for HIRETYPE, and the current
date for HIREDATE.
Db2 for i will connect to REMOTESYS to run the SELECT, return the selected rows to the local system,
and insert them into the local SALES table. The value for CURRENT DATE will be the current date on
REMOTESYS.
Since a three-part object name or an alias that is defined to reference a three-part name of a table or
view creates an implicit connection to the application server, a server authentication entry must exist. Use
the Add Server Authentication Entry (ADDSVRAUTE) command on the application requester, specifying the
server name, user ID, and password. The server name and user ID must be entered in uppercase.
See Distributed Database Programming for additional details on server authentication usage for DRDA.
Related reference
Inserting rows using a select-statement
You can use a select-statement within an INSERT statement to insert zero, one, or more rows into a table
from the result table of the select-statement.
UPDATE table-name
SET column-1 = value-1,
column-2 = value-2, ...
WHERE search-condition ...
Suppose that an employee is relocated. To update the CORPDATA.EMPLOYEE table to reflect the move,
run the following statement:
UPDATE CORPDATA.EMPLOYEE
SET JOB = :PGM-CODE,
PHONENO = :PGM-PHONE
WHERE EMPNO = :PGM-SERIAL
Use the SET clause to specify a new value for each column that you want to update. The SET clause
names the columns that you want updated and provides the values that you want them changed to. You
can specify the following types of values:
• A column name. Replace the column's current value with the contents of another column in the same
row.
• A constant. Replace the column's current value with the value provided in the SET clause.
• A null value. Replace the column's current value with the null value, using the keyword NULL. The
column must be defined as capable of containing a null value when the table was created, or an error
occurs.
• A host variable. Replace the column's current value with the contents of a host variable.
• A global variable. Replace the column's current value with the contents of a global variable.
The following UPDATE statement uses several of these value types in a single SET clause:
UPDATE WORKTABLE
SET COL1 = 'ASC',
COL2 = NULL,
COL3 = :FIELD3,
COL4 = CURRENT TIME,
COL5 = AMT - 6.00,
COL6 = COL7
WHERE EMPNO = :PGM-SERIAL
For example, the following statement updates several columns in a single row:
UPDATE EMPLOYEE
SET WORKDEPT = 'D11',
PHONENO = '7213',
JOB = 'DESIGNER'
WHERE EMPNO = '000270'
You can also write this UPDATE statement by specifying all of the columns and then all of the values:
UPDATE EMPLOYEE
SET (WORKDEPT, PHONENO, JOB)
= ('D11', '7213', 'DESIGNER')
WHERE EMPNO = '000270'
Related reference
UPDATE
A subquery can supply the value for the SET clause. For example, the following statement sets a project's
department number to the department of the responsible employee:
UPDATE PROJECT
SET DEPTNO =
(SELECT WORKDEPT FROM EMPLOYEE
WHERE PROJECT.RESPEMP = EMPLOYEE.EMPNO)
WHERE RESPEMP='000030'
This same technique can be used to update a list of columns with multiple values returned from a single
select.
UPDATE CL_SCHED
SET ROW =
(SELECT * FROM MYCOPY
WHERE CL_SCHED.CLASS_CODE = MYCOPY.CLASS_CODE)
This update will update all of the rows in CL_SCHED with the values from MYCOPY.
Update rules
The action taken on dependent tables when an UPDATE is performed on a parent table depends on
the update rule specified for the referential constraint. If no update rule was defined for a referential
constraint, the UPDATE NO ACTION rule is used.
UPDATE NO ACTION
Specifies that the row in the parent table can be updated if no other row depends on it. If a dependent
row exists in the relationship, the UPDATE fails. The check for dependent rows is performed at the end
of the statement.
UPDATE RESTRICT
Specifies that the row in the parent table can be updated if no other row depends on it. If a dependent
row exists in the relationship, the UPDATE fails. The check for dependent rows is performed
immediately.
The subtle difference between the RESTRICT rule and the NO ACTION rule is most easily seen by looking
at the interaction of triggers and referential constraints. Triggers can be defined to fire either before
or after an operation (an UPDATE statement, in this case). A before trigger fires before the UPDATE is
performed and therefore before any checking of constraints. An after trigger is fired after the UPDATE
is performed, and after a constraint rule of RESTRICT (where checking is performed immediately), but
before a constraint rule of NO ACTION (where checking is performed at the end of the statement). The
triggers and rules occur in the following order:
1. A before trigger is fired before the UPDATE and before a constraint rule of RESTRICT or NO ACTION.
2. An after trigger is fired after a constraint rule of RESTRICT, but before a NO ACTION rule.
If you are updating a dependent table, any non-null foreign key values that you change must match the
primary key for each relationship in which the table is a dependent. For example, department numbers
in the EMPLOYEE and PROJECT tables depend on the department numbers in the DEPARTMENT table.
If you are updating a parent table, you cannot change a primary key for which dependent rows exist.
The following statement fails because dependent rows still reference the department being changed:
UPDATE CORPDATA.DEPARTMENT
SET DEPTNO = 'D99'
WHERE DEPTNAME = 'DEVELOPMENT CENTER'
The following statement fails because it violates the referential constraint that exists between the primary
key DEPTNO in DEPARTMENT and the foreign key DEPTNO in PROJECT:
UPDATE CORPDATA.PROJECT
SET DEPTNO = 'D00'
WHERE DEPTNO = 'D01';
The statement attempts to change all department numbers of D01 to department number D00. Because
D00 is not a value of the primary key DEPTNO in DEPARTMENT, the statement fails.
For example, the following statement updates a row of the ORDERS table, setting the identity column
ORDERNO to its DEFAULT value so that the system generates a new value:
UPDATE ORDERS
SET (ORDERNO, ORDER_DATE)=
(DEFAULT, '2002-02-05')
WHERE SHIPPED_TO = 'BME TOOL'
A value is generated by the system for the identity column automatically. You can override having the
system generate a value by using the OVERRIDING SYSTEM VALUE clause:
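A sketch of the override, assuming the same ORDERS table (the specific order number is illustrative):

```sql
UPDATE ORDERS OVERRIDING SYSTEM VALUE
  SET (ORDERNO, ORDER_DATE) =
      (553, '2002-02-05')
  WHERE SHIPPED_TO = 'BME TOOL'
```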
Related reference
Creating and altering an identity column
EXEC SQL
DECLARE THISEMP DYNAMIC SCROLL CURSOR FOR
SELECT EMPNO, WORKDEPT, BONUS
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
FOR UPDATE OF BONUS
END-EXEC.
EXEC SQL
OPEN THISEMP
END-EXEC.
EXEC SQL
WHENEVER NOT FOUND
GO TO CLOSE-THISEMP
END-EXEC.
... branch back to determine if any more employees in the block have a
bonus under $500.00.
... branch back to fetch and process the next block of rows.
CLOSE-THISEMP.
EXEC SQL
CLOSE THISEMP
END-EXEC.
Related reference
Using a cursor
When SQL runs a SELECT statement, the resulting rows comprise the result table. A cursor provides a way
to access a result table.
For example, suppose that department D11 is moved to another site. You delete each row in the
CORPDATA.EMPLOYEE table with a WORKDEPT value of D11 as follows:
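The statement described is:

```sql
DELETE FROM CORPDATA.EMPLOYEE
  WHERE WORKDEPT = 'D11'
```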
The WHERE clause tells SQL which rows you want to delete from the table. SQL deletes all the rows that
satisfy the search condition from the base table. Deleting rows from a view deletes the rows from the
base table. You can omit the WHERE clause, but it is best to include one, because a DELETE statement
without a WHERE clause deletes all the rows from the table or view. To delete a table definition as well as
the table contents, issue the DROP statement.
If SQL finds an error while running your DELETE statement, it stops deleting data and returns a negative
SQLCODE. If you specify COMMIT(*ALL), COMMIT(*CS), COMMIT(*CHG), or COMMIT(*RR), no rows in the
table are deleted (rows already deleted by this statement, if any, are restored to their previous values). If
COMMIT(*NONE) is specified, any rows already deleted are not restored to their previous values.
If SQL cannot find any rows that satisfy the search condition, an SQLCODE of +100 is returned.
Note: The DELETE statement may have deleted more than one row. The number of rows deleted is
reflected in SQLERRD(3) of the SQLCA. This value is also available from the ROW_COUNT diagnostics item
in the GET DIAGNOSTICS statement.
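The results below assume that a statement like the following was run, and that the referential constraint
between EMPLOYEE and DEPARTMENT uses the SET DEFAULT delete rule:

```sql
DELETE FROM CORPDATA.DEPARTMENT
  WHERE DEPTNO = 'E11'
```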
Given the tables and the data in the “Db2 for i sample tables” on page 367, one row is deleted from table
DEPARTMENT, and table EMPLOYEE is updated to set the value of WORKDEPT to its default wherever
the value was 'E11'. A question mark ('?') in the following sample data reflects the null value. The results
appear as follows:
Table 34. DEPARTMENT table. Contents of the table after the DELETE statement is complete.
DEPTNO DEPTNAME MGRNO ADMRDEPT
A00 SPIFFY COMPUTER SERVICE DIV. 000010 A00
B01 PLANNING 000020 A00
C01 INFORMATION CENTER 000030 A00
D01 DEVELOPMENT CENTER ? A00
D11 MANUFACTURING SYSTEMS 000060 D01
D21 ADMINISTRATION SYSTEMS 000070 D01
E01 SUPPORT SERVICES 000050 A00
E21 SOFTWARE SUPPORT 000100 E01
F22 BRANCH OFFICE F2 ? E01
G22 BRANCH OFFICE G2 ? E01
H22 BRANCH OFFICE H2 ? E01
I22 BRANCH OFFICE I2 ? E01
J22 BRANCH OFFICE J2 ? E01
Table 35. Partial EMPLOYEE table. Partial contents before the DELETE statement.
EMPNO FIRSTNME MI LASTNAME WORKDEPT PHONENO HIREDATE
000230 JAMES J JEFFERSON D21 2094 1966-11-21
000240 SALVATORE M MARINO D21 3780 1979-12-05
000250 DANIEL S SMITH D21 0961 1960-10-30
000260 SYBIL P JOHNSON D21 8953 1975-09-11
000270 MARIA L PEREZ D21 9001 1980-09-30
000280 ETHEL R SCHNEIDER E11 0997 1967-03-24
000290 JOHN R PARKER E11 4502 1980-05-30
000300 PHILIP X SMITH E11 2095 1972-06-19
000310 MAUDE F SETRIGHT E11 3332 1964-09-12
000320 RAMLAL V MEHTA E21 9990 1965-07-07
000330 WING LEE E21 2103 1976-02-23
000340 JASON R GOUNOT E21 5696 1947-05-05
Table 36. Partial EMPLOYEE table. Partial contents after the DELETE statement.
EMPNO FIRSTNME MI LASTNAME WORKDEPT PHONENO HIREDATE
000230 JAMES J JEFFERSON D21 2094 1966-11-21
000240 SALVATORE M MARINO D21 3780 1979-12-05
000250 DANIEL S SMITH D21 0961 1960-10-30
000260 SYBIL P JOHNSON D21 8953 1975-09-11
000270 MARIA L PEREZ D21 9001 1980-09-30
000280 ETHEL R SCHNEIDER ? 0997 1967-03-24
000290 JOHN R PARKER ? 4502 1980-05-30
000300 PHILIP X SMITH ? 2095 1972-06-19
000310 MAUDE F SETRIGHT ? 3332 1964-09-12
000320 RAMLAL V MEHTA E21 9990 1965-07-07
000330 WING LEE E21 2103 1976-02-23
000340 JASON R GOUNOT E21 5696 1947-05-05
The TRUNCATE statement has some additional options, not available on the DELETE statement, that
control how triggers are handled during the truncation operation and how the table's identity column
behaves after truncation is complete.
The default for TRUNCATE is to not activate any delete triggers during truncation. If you want delete
triggers to be activated, you must use the DELETE statement.
For an identity column, you can specify to either continue generating identity values in the same way as
if the truncate operation did not occur, or you can request to have the identity column start from its initial
value when it was first defined. The default is to continue generating values.
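A sketch of these options (the table name is illustrative; the option keywords are as documented for the
Db2 for i TRUNCATE statement):

```sql
TRUNCATE TABLE ORDERS
  IGNORE DELETE TRIGGERS
  RESTART IDENTITY
```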
Note: The TRUNCATE statement does not return the number of rows deleted in SQLERRD(3) of the SQLCA
or the ROW_COUNT diagnostics item in the GET DIAGNOSTICS statement.
Related concepts
Removing rows from a table using the DELETE statement
To remove rows from a table, use the DELETE statement.
Related reference
TRUNCATE
Merging data
Use the MERGE statement to conditionally insert, update, or delete rows in a table or view.
You can use the MERGE statement to update a target table from another table, a derived table, or any
other table-reference. This other table is called the source table. The simplest form of a source table is a
list of values.
Based on whether or not a row from the source table exists in the target table, the MERGE statement can
insert a new row, or update or delete the matching row.
The most basic form of a MERGE statement is one where a new row is inserted if it does not already
exist, or updated if it does exist. Rather than attempting the insert or update and then, based on the
SQLCODE or SQLSTATE, trying the other option, the MERGE statement performs the appropriate action
directly.
In this example, you want to add or replace a row for a department. If the department does not already
exist, a row will be inserted. If the department does exist, information for the department will be updated.
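A sketch of such a merge against the sample DEPARTMENT table (the department values are illustrative):

```sql
MERGE INTO DEPARTMENT USING
    (VALUES ('K22', 'BRANCH OFFICE K2', 'E01'))
      AS S (DEPTNO, DEPTNAME, ADMRDEPT)
  ON DEPARTMENT.DEPTNO = S.DEPTNO
  WHEN MATCHED THEN
    UPDATE SET DEPTNAME = S.DEPTNAME, ADMRDEPT = S.ADMRDEPT
  WHEN NOT MATCHED THEN
    INSERT (DEPTNO, DEPTNAME, ADMRDEPT)
    VALUES (S.DEPTNO, S.DEPTNAME, S.ADMRDEPT)
```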
Suppose you have a temporary table that is a copy of the EMP_PHOTO table. In this table, you have added
photo information for new employees and updated the photo for existing employees. The temporary table
contains only changes; there are no rows for unchanged photo information.
To merge these updates into the master EMP_PHOTO table, you can use the following MERGE statement.
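A sketch of the merge, assuming the temporary table is named TEMP_EMP_PHOTO and that both tables
carry the LAST_CHANGED timestamp column mentioned below:

```sql
MERGE INTO EMP_PHOTO AS T
  USING TEMP_EMP_PHOTO AS S
  ON T.EMPNO = S.EMPNO AND T.PHOTO_FORMAT = S.PHOTO_FORMAT
  WHEN MATCHED THEN
    UPDATE SET PICTURE = S.PICTURE, LAST_CHANGED = S.LAST_CHANGED
  WHEN NOT MATCHED THEN
    INSERT (EMPNO, PHOTO_FORMAT, PICTURE, LAST_CHANGED)
    VALUES (S.EMPNO, S.PHOTO_FORMAT, S.PICTURE, S.LAST_CHANGED)
```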
Now, let us assume that the person who maintains the TEMP_EMP_PHOTO table has some rows in the
temporary table that have already been merged into the master copy of the EMP_PHOTO table. When
doing the MERGE, you don't want to update the same rows again since the values have not changed. It is
also possible that someone else has updated the master EMP_PHOTO with a more recent picture.
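A sketch of the guarded merge; the extra comparison on LAST_CHANGED in the MATCHED clause skips
rows whose master copy is already as new as, or newer than, the temporary copy (table and column names
assumed as above):

```sql
MERGE INTO EMP_PHOTO AS T
  USING TEMP_EMP_PHOTO AS S
  ON T.EMPNO = S.EMPNO AND T.PHOTO_FORMAT = S.PHOTO_FORMAT
  WHEN MATCHED AND T.LAST_CHANGED < S.LAST_CHANGED THEN
    UPDATE SET PICTURE = S.PICTURE, LAST_CHANGED = S.LAST_CHANGED
  WHEN NOT MATCHED THEN
    INSERT (EMPNO, PHOTO_FORMAT, PICTURE, LAST_CHANGED)
    VALUES (S.EMPNO, S.PHOTO_FORMAT, S.PICTURE, S.LAST_CHANGED)
```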
This statement has an extra condition added to the MATCHED clause. Adding the comparison of the
LAST_CHANGED column will prevent the master EMP_PHOTO table from being updated with a photo that
has a timestamp older than the current master's timestamp.
Using subqueries
You can use subqueries in a search condition as another way to select data. Subqueries can be used
anywhere an expression can be used.
Conceptually, a subquery is evaluated whenever a new row or a group of rows must be processed. In fact,
if the subquery is the same for every row or group, it is evaluated only once. Subqueries like this are said
to be uncorrelated.
Some subqueries return different values from row to row or group to group. The mechanism that allows
this is called correlation, and the subqueries are said to be correlated.
Related reference
Expressions in the WHERE clause
An expression in a WHERE clause names or specifies something that you want to compare to something
else.
Defining complex search conditions
In addition to the basic comparison predicates, such as = and >, a search condition can contain any of
these predicates: BETWEEN, IN, EXISTS, IS NULL, and LIKE.
But you cannot go further because the CORPDATA.EMPLOYEE table does not include project number
data. You do not know which employees are working on project MA2100 without issuing another SELECT
statement against the CORPDATA.EMPPROJACT table.
With SQL, you can nest one SELECT statement within another to solve this problem. The inner SELECT
statement is called a subquery. The SELECT statement surrounding the subquery is called the outer-
level SELECT. Using a subquery, you can issue just one SQL statement to retrieve the employee numbers,
names, and job codes for employees who work on the project MA2100:
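A reconstruction of that statement, consistent with the two steps described next:

```sql
SELECT EMPNO, LASTNAME, JOB
  FROM CORPDATA.EMPLOYEE
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM CORPDATA.EMPPROJACT
       WHERE PROJNO = 'MA2100')
```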
To better understand what will result from this SQL statement, imagine that SQL goes through the
following process:
Step 1: SQL evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
FROM CORPDATA.EMPPROJACT
WHERE PROJNO= 'MA2100')
Step 2: The interim result table then serves as a list in the search condition of the outer-level SELECT
statement. Essentially, this is the statement that is run:
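In sketch form, with the interim list standing in for the subquery:

```sql
SELECT EMPNO, LASTNAME, JOB
  FROM CORPDATA.EMPLOYEE
  WHERE EMPNO IN (...)  -- the interim list of EMPNO values produced in step 1
```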
Subqueries can also appear in the search conditions of other subqueries. Such subqueries are said to be
nested at some level of nesting. For example, a subquery within a subquery within an outer-level SELECT
is nested at a nesting level of two. SQL allows nesting down to a nesting level of 32.
Basic comparisons
You can use a subquery before or after any of the comparison operators. The subquery can return at most
one row. The row can contain more than one value, but only if the equal or not-equal operators are used. SQL
compares each value from the subquery row with the corresponding value on the other side of the
comparison operator. For example, suppose that you want to find the employee numbers, names, and
salaries for employees whose education level is higher than the average education level throughout the
company.
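A sketch of that query against the sample EMPLOYEE table:

```sql
SELECT EMPNO, LASTNAME, SALARY
  FROM CORPDATA.EMPLOYEE
  WHERE EDLEVEL >
    (SELECT AVG(EDLEVEL)
       FROM CORPDATA.EMPLOYEE)
```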
SQL first evaluates the subquery and then substitutes the result in the WHERE clause of the SELECT
statement. In this example, the result is the company-wide average educational level. Besides returning a
single row, a subquery can return no rows. If it does, the result of the compare is unknown.
• Use ALL to indicate that the value you supplied must compare in the indicated way to all of the rows
that the subquery returns. To satisfy this WHERE clause, the value of the expression must be greater
than the result for each of the rows (that is, greater than the highest value) returned by the subquery.
If the subquery returns an empty set (that is, no rows were selected), the condition is satisfied.
• Use ANY or SOME to indicate that the value you supplied must compare in the indicated way to at
least one of the rows the subquery returns. For example, suppose you use the greater-than comparison
operator with ANY:
To satisfy this WHERE clause, the value in the expression must be greater than at least one of the rows
(that is, greater than the lowest value) returned by the subquery. If what the subquery returns is the
empty set, the condition is not satisfied.
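The ALL and ANY forms can be sketched like this (the column choice is illustrative):

```sql
-- Salary higher than every salary in department E21
SELECT EMPNO, LASTNAME
  FROM CORPDATA.EMPLOYEE
  WHERE SALARY > ALL
    (SELECT SALARY FROM CORPDATA.EMPLOYEE WHERE WORKDEPT = 'E21')

-- Salary higher than at least one salary in department E21
SELECT EMPNO, LASTNAME
  FROM CORPDATA.EMPLOYEE
  WHERE SALARY > ANY
    (SELECT SALARY FROM CORPDATA.EMPLOYEE WHERE WORKDEPT = 'E21')
```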
Note: The results when a subquery returns one or more null values may surprise you, unless you are
familiar with formal logic.
IN keyword
You can use IN to say that the value in the expression must be among the rows returned by the subquery.
Using IN is equivalent to using =ANY or =SOME. Using ANY and SOME were previously described. You can
also use the IN keyword with the NOT keyword in order to select rows when the value is not among the
rows returned by the subquery. For example, you can use:
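For example, a sketch that selects employees whose department is not administered by department 'E01'
(the predicate values are illustrative):

```sql
SELECT EMPNO, LASTNAME
  FROM CORPDATA.EMPLOYEE
  WHERE WORKDEPT NOT IN
    (SELECT DEPTNO
       FROM CORPDATA.DEPARTMENT
       WHERE ADMRDEPT = 'E01')
```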
EXISTS keyword
In the subqueries presented so far, SQL evaluates the subquery and uses the result as part of the WHERE
clause of the outer-level SELECT. In contrast, when you use the keyword EXISTS, SQL checks whether the
subquery returns one or more rows. If it does, the condition is satisfied. If it returns no rows, the condition
is not satisfied. For example:
SELECT EMPNO,LASTNAME
FROM CORPDATA.EMPLOYEE
WHERE EXISTS
(SELECT *
FROM CORPDATA.PROJECT
WHERE PRSTDATE > '1982-01-01');
In the example, the search condition is true if any project represented in the CORPDATA.PROJECT table
has an estimated start date that is later than January 1, 1982. This example does not show the full power
of EXISTS, because the result is always the same for every row examined for the outer-level SELECT. As
a consequence, either every row appears in the results, or none appear. In a more powerful example, the
subquery itself would be correlated, and change from row to row.
As shown in the example, you do not need to specify column names in the select-list of the subquery of an
EXISTS clause. Instead, you should code SELECT *.
Correlated subqueries
A correlated subquery is a subquery that SQL might need to re-evaluate when it examines each new row
(the WHERE clause) or each group of rows (the HAVING clause) in the outer-level SELECT statement.
Any number of correlated references can appear in a subquery. For example, one correlated name in a
search condition can be defined in the outer-level SELECT, while another can be defined in a containing
subquery.
Before the subquery is executed, a value from the referenced column is always substituted for the
correlated reference.
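The example discussed next can be reconstructed from the surrounding description: list employees whose
education level is above the average for their own department.

```sql
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
  FROM CORPDATA.EMPLOYEE X
  WHERE EDLEVEL >
    (SELECT AVG(EDLEVEL)
       FROM CORPDATA.EMPLOYEE
       WHERE WORKDEPT = X.WORKDEPT)
```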
A correlated subquery looks like an uncorrelated one, except for the presence of one or more correlated
references. In the example, the single correlated reference is the occurrence of X.WORKDEPT in the
subselect's WHERE clause. Here, the qualifier X is the correlation name defined in the FROM clause
of the outer SELECT statement. In that clause, X is introduced as the correlation name of the table
CORPDATA.EMPLOYEE.
Now, consider what happens when the subquery is executed for a given row of CORPDATA.EMPLOYEE.
Before it is executed, the occurrence of X.WORKDEPT is replaced with the value of the WORKDEPT
column for that row. Suppose, for example, that the row for CHRISTINE I HAAS is being processed.
Because her WORKDEPT value is 'A00', the subquery executed for that row is:
(SELECT AVG(EDLEVEL)
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'A00')
Thus, for the row considered, the subquery produces the average education level of Christine's
department. This is then compared in the outer statement to Christine's own education level. For some
other row for which WORKDEPT has a different value, that value appears in the subquery in place of A00.
For example, for the row for MICHAEL L THOMPSON, this value is B01, and the subquery for his row
delivers the average education level for department B01.
The result table produced by the query has the following values.
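The next example applies correlation to groups of rows. A reconstruction consistent with the discussion
below: list each department whose average salary exceeds the average salary of its area, where an area
is the set of departments sharing the first character of WORKDEPT.

```sql
SELECT WORKDEPT, DECIMAL(AVG(SALARY),8,2) AS AVG_DEPT_SALARY
  FROM CORPDATA.EMPLOYEE X
  GROUP BY WORKDEPT
  HAVING AVG(SALARY) >
    (SELECT AVG(SALARY)
       FROM CORPDATA.EMPLOYEE
       WHERE SUBSTR(X.WORKDEPT,1,1) = SUBSTR(WORKDEPT,1,1))
```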
Consider what happens when the subquery is executed for a given department of CORPDATA.EMPLOYEE.
Before it is executed, the occurrence of X.WORKDEPT is replaced with the value of the WORKDEPT
column for that group. Suppose, for example, that the first group selected has A00 for the value of
WORKDEPT. The subquery executed for this group is:
(SELECT AVG(SALARY)
FROM CORPDATA.EMPLOYEE
WHERE SUBSTR('A00',1,1) = SUBSTR(WORKDEPT,1,1))
Thus, for the group considered, the subquery produces the average salary for the area. This value is then
compared in the outer statement to the average salary for department 'A00'. For some other group for
which WORKDEPT is 'B01', the subquery results in the average salary for the area where department B01
belongs.
The result table produced by the query has the following values.
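A correlated subquery can also supply a value in the select list. A sketch consistent with the description
that follows, returning each department with its manager's name:

```sql
SELECT DEPTNO, DEPTNAME,
       (SELECT LASTNAME
          FROM CORPDATA.EMPLOYEE X
          WHERE X.EMPNO = Y.MGRNO) AS MANAGER_NAME
  FROM CORPDATA.DEPARTMENT Y
```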
For each row returned for DEPTNO and DEPTNAME, the system finds where EMPNO = MGRNO and returns
the manager's name. The result table produced by the query has the following values.
UPDATE CORPDATA.PROJECT X
SET PRIORITY = 1
WHERE '1983-09-01' >
(SELECT MAX(EMENDATE)
FROM CORPDATA.EMPPROJACT
WHERE PROJNO = X.PROJNO)
As SQL examines each row in the CORPDATA.PROJECT table, it determines the maximum activity
end date (EMENDATE) for all activities of the project (from the CORPDATA.EMPPROJACT table). If the end
date of each activity associated with the project is before September 1983, the current row in the
CORPDATA.PROJECT table qualifies and is updated.
Update the master order table with any changes to the quantity ordered. If the quantity in the orders table
is not set (the NULL value), keep the value that is in the master order table. These tables do not exist in
the CORPDATA sample database.
UPDATE MASTER_ORDERS X
SET QTY=(SELECT COALESCE (Y.QTY, X.QTY)
FROM ORDERS Y
WHERE X.ORDER_NUM = Y.ORDER_NUM)
WHERE X.ORDER_NUM IN (SELECT ORDER_NUM
FROM ORDERS)
In this example, each row of the MASTER_ORDERS table is checked to see if it has a corresponding row in
the ORDERS table. If it does have a matching row in the ORDERS table, the COALESCE function is used to
return a value for the QTY column. If QTY in the ORDERS table has a non-null value, that value is used to
update the QTY column in the MASTER_ORDERS table. If the QTY value in the ORDERS table is NULL, the
MASTER_ORDERS QTY column is updated with its own value.
In the following examples, the results are shown for each statement using:
• *HEX sort sequence
• Shared-weight sort sequence using the language identifier ENU
• Unique-weight sort sequence using the language identifier ENU
Note: ENU is chosen as a language identifier by specifying either SRTSEQ(*LANGIDUNQ), or
SRTSEQ(*LANGIDSHR) and LANGID(ENU), on the CRTSQLxxx, STRSQL, or RUNSQLSTM commands, or
by using the SET OPTION statement.
The following table shows the result using a *HEX sort sequence. The rows are sorted based on the
EBCDIC value in the JOB column. In this case, all lowercase letters sort before the uppercase letters.
The following table shows how sorting is done for a unique-weight sort sequence. After the sort sequence
is applied to the values in the JOB column, the rows are sorted. Notice that after the sort, a lowercase
letter sorts before the same letter in uppercase.
Table 41. Result of using the unique-weight sort sequence for the ENU language identifier
ID NAME DEPT JOB YEARS SALARY COMM
80 James 20 Clerk 0 13504.60 128.20
100 Plotz 42 mgr 6 18352.80 0
10 Sanders 20 Mgr 7 18357.50 0
50 Hanes 15 Mgr 10 20659.80 0
30 Merenghi 38 MGR 5 17506.75 0
90 Koonitz 42 sales 6 18001.75 1386.70
20 Pernal 20 Sales 8 18171.25 612.45
40 OBrien 38 Sales 6 18006.00 846.55
70 Rothman 15 Sales 7 16502.83 1152.00
60 Quigley 38 SALES 0 16808.30 650.25
The following table shows how sorting is done for a shared-weight sort sequence. After the sort sequence
is applied to the values in the JOB column, the rows are sorted. For the sort comparison, each lowercase
letter is treated the same as the corresponding uppercase letter. In this table, notice that all the values
'MGR', 'mgr' and 'Mgr' are mixed together.
Table 42. Result of using the shared-weight sort sequence for the ENU language identifier
ID NAME DEPT JOB YEARS SALARY COMM
80 James 20 Clerk 0 13504.60 128.20
10 Sanders 20 Mgr 7 18357.50 0
30 Merenghi 38 MGR 5 17506.75 0
50 Hanes 15 Mgr 10 20659.80 0
100 Plotz 42 mgr 6 18352.80 0
20 Pernal 20 Sales 8 18171.25 612.45
40 OBrien 38 Sales 6 18006.00 846.55
60 Quigley 38 SALES 0 16808.30 650.25
70 Rothman 15 Sales 7 16502.83 1152.00
90 Koonitz 42 sales 6 18001.75 1386.70
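The row-selection comparisons that follow assume a statement like this one (the table name STAFF is
illustrative; its data matches the sort tables above):

```sql
SELECT *
  FROM STAFF
  WHERE JOB = 'MGR'
```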
The first table shows how row selection is done with a *HEX sort sequence. The rows that match the row
selection criteria for the column JOB are selected exactly as specified in the select statement. Only the
uppercase 'MGR' is selected.
Table 43. Result of using the *HEX sort sequence
ID NAME DEPT JOB YEARS SALARY COMM
30 Merenghi 38 MGR 5 17506.75 0
The following table shows how row selection is done with a unique-weight sort sequence. The lowercase
and uppercase letters are treated as unique. The lowercase 'mgr' is not treated the same as the uppercase
'MGR'. Therefore, the lowercase 'mgr' is not selected.
Table 44. Result of using unique-weight sort sequence for the ENU language identifier
ID NAME DEPT JOB YEARS SALARY COMM
30 Merenghi 38 MGR 5 17506.75 0
The following table shows how row selection is done with a shared-weight sort sequence. The rows that
match the row selection criteria for the column 'JOB' are selected by treating uppercase letters the same
as lowercase letters. Notice that all the values 'mgr', 'Mgr' and 'MGR' are selected.
Table 45. Result of using the shared-weight sort sequence for the ENU language identifier
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 18357.50 0
30 Merenghi 38 MGR 5 17506.75 0
50 Hanes 15 Mgr 10 20659.80 0
100 Plotz 42 mgr 6 18352.80 0
Any queries run against view V1 are run against the result table shown above. The query shown below is
run with a sort sequence of SRTSEQ(*LANGIDUNQ) and LANGID(ENU).
The contents of the view, ordered by the EBCDIC (*HEX) values:
NAME
Gomer
Gumby
Gómez
The result of the query, ordered with the unique-weight ENU sort sequence:
NAME
Gomer
Gómez
Gumby
When an ICU sort sequence table is specified, the performance of SQL statements that use the table can
be much slower than the performance of SQL statements that use a non-ICU sort sequence table or use
a *HEX sort sequence. The slower performance results from calling the ICU support to get the weighted
value for each piece of data that needs to be sorted. An ICU sort sequence table can provide more sorting
function, but at the cost of slower-running SQL statements. However, indexes that are created over
columns with an ICU sort sequence table can help reduce the need to call the ICU support. In this
case, the index key already contains the ICU weighted value, so there is no need to call the ICU support.
Related concepts
International Components for Unicode
Normalization
Normalization allows you to compare strings that contain combining characters.
Data tagged with a UTF-8 or UTF-16 CCSID can contain combining characters. Combining characters
allow a resulting character to be composed of more than one character. After the first character of the
compound character, one of many different non-spacing characters such as umlauts and accents can
follow in the data string. If the resulting character is one that is already defined in the character set,
normalization of the string results in multiple combining characters being replaced by the value of the
defined character. For example, if your string contains the letter 'a' followed by a combining umlaut, the
string is normalized to contain the single character 'ä'.
Normalization makes it possible to compare strings accurately. If data is not normalized, two strings that
look identical on the display may not compare equal, because the stored representations can differ.
When UTF-8 and UTF-16 string data is not normalized, a column in a table can have one row with the
letter 'a' followed by the combining umlaut character and another row with the single 'ä' character.
These two values do not compare equal in a comparison predicate such as WHERE C1 = 'ä'. For this
reason, it is recommended that all string columns in a table be stored in normalized form.
You can normalize the data yourself before inserting or updating it, or you can define a column in a table
to be automatically normalized by the database. To have the database perform the normalization, specify
NORMALIZED as part of the column definition. This option is only allowed for columns that are tagged
with a CCSID of 1208 (UTF-8) or 1200 (UTF-16). The database assumes all columns in a table have been
normalized.
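A sketch of a column definition that requests automatic normalization (the table and column names are
illustrative):

```sql
CREATE TABLE PRODUCT_TEXT (
  PRODUCT_ID  INTEGER NOT NULL,
  DESCRIPTION VARCHAR(500) CCSID 1208 NORMALIZED
)
```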
The NORMALIZED clause can also be specified for function and procedure parameters. If it is specified
for an input parameter, the normalization will be done by the database for the parameter value before
invoking the function or procedure. If it is specified for an output parameter, the clause is not enforced; it
is assumed that the user's routine code will return a normalized value.
The NORMALIZE_DATA option in the QAQQINI file is used to indicate whether the system is to perform
normalization when working with UTF-8 and UTF-16 data. This option controls whether the system will
normalize literals, host variables, parameter markers, and expressions that combine strings before using
them in SQL. The option is initialized to not perform normalization. This is the correct value if the data
in your tables and the literal values in your applications are always normalized already through some
other mechanism, or never contain characters that need to be normalized. If this is the case, you will
want to avoid the overhead of system normalization in your queries. If your data is not already
normalized, switch the value of this option so that the system performs normalization for you.
Data protection
The Db2 for i database provides various methods for protecting SQL data from unauthorized users and for
ensuring data integrity.
Views
Views can prevent unauthorized users from having access to sensitive data.
The application program can access the data it needs in a table, without having access to sensitive or
restricted data in the table. A view can restrict access to particular columns by not specifying those
columns in the SELECT list (for example, employee salaries). A view can also restrict access to particular
rows in a table by specifying a WHERE clause (for example, allowing access only to the rows associated
with a particular department number).
Auditing
The Db2 for i database is designed to comply with the U.S. government C2 security level. A key feature of
the C2 level is the ability to perform auditing on the system.
Db2 for i uses the audit facilities that are managed by the system security function. Auditing can be
performed at an object level, a user level, or a system level. The system value QAUDCTL controls whether
auditing is performed at the object or user level. The Change User Audit (CHGUSRAUD) and Change
Object Audit (CHGOBJAUD) commands specify which users and objects are audited. The system value
QAUDLVL controls what types of actions are audited (for example, authorization failures; and create,
delete, grant, or revoke operations).
Db2 for i can also audit row changes through the Db2 for i journal support.
In some cases, entries in the auditing journal will not be in the same order as they occurred. For example,
a job that is running under commitment control deletes a table, creates a new table with the same name
as the one that was deleted, then does a commit. This will be recorded in the auditing journal as a create
followed by a delete. This is because objects that are created are journaled immediately. An object that
is deleted under commitment control is hidden and not actually deleted until a commit is done. Once the
commit is done, the action is journaled.
Related reference
Security reference
Concurrency
Concurrency is the ability for multiple users to access and change data in the same table or view at the
same time without risk of losing data integrity.
This ability is automatically supplied by the Db2 for i database manager. Locks are implicitly acquired on
tables and rows to protect concurrent users from changing the same data at precisely the same time.
Typically, Db2 for i acquires locks on rows to ensure integrity. However, some situations require Db2 for i
to acquire a more exclusive table-level lock instead of row locks.
For example, an update (exclusive) lock on a row currently held by one cursor can be acquired by another
cursor in the same program (or in a DELETE or UPDATE statement not associated with the cursor). This
will prevent a positioned UPDATE or positioned DELETE statement that references the first cursor until
another FETCH is performed. A read (shared no-update) lock on a row currently held by one cursor will
not prevent another cursor in the same program (or DELETE or UPDATE statement) from acquiring a lock
on the same row.
Default and user-specifiable lock-wait timeout values are supported. Db2 for i creates tables, views, and
indexes with the default record wait time (60 seconds) and the default file wait time (*IMMED). This lock
wait time is used for data manipulation language (DML) statements. You can change these values by using
the CL commands Change Physical File (CHGPF), Change Logical File (CHGLF), and Override Database File
(OVRDBF).
The lock wait time used for all data definition language (DDL) statements and the LOCK TABLE statement
is the job default wait time (DFTWAIT). You can change this value by using the CL command Change Job
(CHGJOB) or Change Class (CHGCLS).
If a large record wait time is specified, deadlock detection is provided. For example, assume that one job
has an exclusive lock on row 1 and another job has an exclusive lock on row 2. If the first job attempts to
lock row 2, it waits because the second job is holding the lock. If the second job then attempts to lock row
1, Db2 for i detects that the two jobs are in a deadlock and an error is returned to the second job.
You can explicitly prevent other users from using a table at the same time by using the SQL LOCK TABLE
statement. Using COMMIT(*RR) will also prevent other users from using a table during a unit of work.
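As a sketch (the table name is illustrative), an exclusive lock that blocks all access to the table by other jobs could be requested like this:

```sql
-- Illustrative table name. EXCLUSIVE MODE blocks all access by other jobs
-- until the lock is released (for example, at COMMIT or ROLLBACK).
LOCK TABLE CORPDATA.EMPLOYEE IN EXCLUSIVE MODE
```

SHARE or EXCLUSIVE ALLOW READ modes can be used instead when other jobs should still be allowed to read the table.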
To improve performance, Db2 for i frequently leaves the open data path (ODP) open. This performance
feature also leaves a lock on tables referenced by the ODP, but does not leave any locks on rows. A lock
left on a table might prevent another job from performing an operation on that table. In most cases,
however, Db2 for i can detect that other jobs are holding locks and events can be signaled to those jobs.
The event causes Db2 for i to close any ODPs (and release the table locks) that are associated with that
table and are currently only open for performance reasons.
Note: The lock wait timeout must be large enough for the events to be signaled and the other jobs to
close the ODPs, or an error is returned.
Unless the LOCK TABLE statement is used to acquire table locks, or either COMMIT(*ALL) or
COMMIT(*RR) is used, data which has been read by one job can be immediately changed by another
job. Typically, the data is read at the time the SQL statement is executed, so it is very current (for
example, during FETCH). In the following cases, however, data is read before the SQL statement is
executed, so the data might not be current (for example, during OPEN).
• ALWCPYDTA(*OPTIMIZE) was specified and the optimizer determined that making a copy of the data
performs better than not making a copy.
Journaling
The Db2 for i journal support provides an audit trail and forward and backward recovery.
Forward recovery can be used to take an older version of a table and apply the changes logged on the
journal to the table. Backward recovery can be used to remove changes logged on the journal from the
table.
When an SQL schema is created, a journal and journal receiver are created in the schema. When SQL
creates the journal and journal receiver, they are only created on a user auxiliary storage pool (ASP) if the
Commitment control
The Db2 for i commitment control support provides a means for processing a group of database changes,
such as update, insert, or delete operations or data definition language (DDL) operations, as a single unit
of work (also referred to as a transaction).
A commit operation guarantees that the group of operations is completed. A rollback operation
guarantees that the group of operations is backed out. A savepoint can be used to break a transaction
into smaller units that can be rolled back. A commit operation can be issued through several different
interfaces. For example,
• An SQL COMMIT statement
• A CL COMMIT command
• A language commit statement (such as an RPG COMMIT statement)
A rollback operation can be issued through several different interfaces. For example,
• An SQL ROLLBACK statement
• A CL ROLLBACK command
• A language rollback statement (such as an RPG ROLBK statement)
The only SQL statements that cannot be committed or rolled back are:
• DROP SCHEMA
• GRANT or REVOKE if an authority holder exists for the specified object
If commitment control was not already started when either an SQL statement is run with an isolation
level other than COMMIT(*NONE) or a RELEASE statement is run, then Db2 for i sets up the commitment
control environment by running the internal equivalent of the Start Commitment Control (STRCMTCTL)
command. Db2 for i specifies the NFYOBJ(*NONE) and CMTSCOPE(*ACTGRP) parameters, along with the
LCKLVL parameter. The LCKLVL parameter specified is the lock level on the COMMIT parameter of the
Create SQL (CRTSQLxxx), Start SQL Interactive Session (STRSQL), or Run SQL Statements (RUNSQLSTM)
command. In REXX, the LCKLVL parameter specified is the lock level on the SET OPTION statement.
SQL statement                COMMIT       Duration of row locks                     Lock type
                             parameter
                             (see note 5)
FETCH (update- or delete-    *NONE        Row not updated or deleted: from read     UPDATE
capable cursor; see note 1)               until next FETCH. Row updated: from
                                          read until next FETCH. Row deleted:
                                          from read until next DELETE.
                             *CHG         Row not updated or deleted: from read     UPDATE
                                          until next FETCH. Row updated or
                                          deleted: from read until COMMIT or
                                          ROLLBACK.
                             *CS          Row not updated or deleted: from read     UPDATE
                                          until next FETCH. Row updated or
                                          deleted: from read until COMMIT or
                                          ROLLBACK.
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
UPDATE (non-cursor)          *NONE        Each row locked while being updated.      UPDATE
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
DELETE (non-cursor)          *NONE        Each row locked while being deleted.      UPDATE
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
UPDATE (with cursor)         *NONE        From read until next FETCH.               UPDATE
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
MERGE, UPDATE                *NONE        From read until MERGE statement           UPDATE
sub-statement                             completion.
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
DELETE (with cursor)         *NONE        Lock released when row is deleted.        UPDATE
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
MERGE, DELETE                *NONE        From read until MERGE statement           UPDATE
sub-statement                             completion.
                             *CHG         From read until ROLLBACK or COMMIT.       UPDATE
                             *CS          From read until ROLLBACK or COMMIT.       UPDATE
                             *ALL         From read until ROLLBACK or COMMIT.       UPDATE
Notes:
1. A cursor is open with UPDATE or DELETE capabilities if the result table is not read-only and if one of the following is true:
• The cursor is defined with a FOR UPDATE clause.
• The cursor is defined without a FOR UPDATE, FOR READ ONLY, or ORDER BY clause and the program contains at least one of the
following:
– Cursor UPDATE referring to the same cursor-name
– Cursor DELETE referring to the same cursor-name
– An EXECUTE or EXECUTE IMMEDIATE statement and ALWBLK(*READ) or ALWBLK(*NONE) was specified on the CRTSQLxxx
command.
2. A table or view can be locked exclusively in order to satisfy COMMIT(*ALL). If a subselect is processed that includes a UNION, or if the
processing of the query requires the use of a temporary result, an exclusive lock is acquired to protect you from seeing uncommitted
changes.
3. An UPDATE lock is acquired on rows of the target table and a READ lock on rows of the subselect table.
4. A table or view can be locked exclusively in order to satisfy repeatable read. Row locking is still done under repeatable read. The locks
acquired and their duration are identical to *ALL.
5. Repeatable read (*RR) row locks will be the same as the locks indicated for *ALL.
6. If the KEEP LOCKS clause is specified with *CS, any read locks are held until the cursor is closed or until a COMMIT or ROLLBACK is
done. If no cursors are associated with the isolation clause, then locks are held until the completion of the SQL statement.
7. If the USE AND KEEP EXCLUSIVE LOCKS clause is specified with the *RS or *RR isolation level, an UPDATE lock on the row will be
obtained instead of a READ lock.
Related concepts
Commitment control
Related reference
DECLARE CURSOR
Isolation level
XA APIs
The following SQL statement sets a savepoint named STOP_HERE:
SAVEPOINT STOP_HERE
  ON ROLLBACK RETAIN CURSORS
Program logic in the application dictates whether the savepoint name is reused as the application
progresses, or if the savepoint name denotes a unique milestone in the application that should not be
reused.
If the savepoint represents a unique milestone that should not be moved with another SAVEPOINT
statement, specify the UNIQUE keyword. This prevents the accidental reuse of the name that can occur
by invoking a stored procedure that uses the identical savepoint name in a SAVEPOINT statement.
However, if the SAVEPOINT statement is used in a loop, then the UNIQUE keyword should not be used.
The following SQL statement sets a unique savepoint named START_OVER.
SAVEPOINT START_OVER UNIQUE ON ROLLBACK RETAIN CURSORS
To roll back to a savepoint, use the ROLLBACK statement with the TO SAVEPOINT clause. The following
example illustrates using the SAVEPOINT and ROLLBACK TO SAVEPOINT statements:
This application logic books airline reservations on a preferred date, then books hotel reservations. If the
hotel is unavailable, it rolls back the airline reservations and then repeats the process for another date. Up
to 3 dates are tried.
got_reservations = 0;
EXEC SQL SAVEPOINT START_OVER UNIQUE ON ROLLBACK RETAIN CURSORS;
if (SQLCODE != 0) return;
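When the hotel booking fails, the logic described above would return to the savepoint. A minimal sketch of the rollback statement, consistent with the example:

```sql
-- Undo everything done since the START_OVER savepoint was set,
-- leaving open cursors intact (RETAIN CURSORS was specified).
ROLLBACK TO SAVEPOINT START_OVER
```

After the rollback, the application can set the savepoint again and retry with the next date.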
Savepoints are released using the RELEASE SAVEPOINT statement. If a RELEASE SAVEPOINT statement
is not used to explicitly release a savepoint, it is released at the end of the current savepoint level or at the
end of the transaction. The following statement releases savepoint START_OVER.
RELEASE SAVEPOINT START_OVER
Savepoints are released when the transaction is committed or rolled back. Once the savepoint name is
released, a rollback to the savepoint name is no longer possible. The COMMIT or ROLLBACK statement
releases all savepoint names established within a transaction. Since all savepoint names are released
within the transaction, all savepoint names can be reused following a commit or rollback.
A new savepoint level is initiated when:          That savepoint level ends when:
A new unit of work is started                     COMMIT or ROLLBACK is issued
A trigger is invoked                              The trigger completes
A user-defined function is invoked                The user-defined function returns to the invoker
A stored procedure created with the NEW           The stored procedure returns to the caller
SAVEPOINT LEVEL clause is invoked
There is a BEGIN for an ATOMIC compound           There is an END for the ATOMIC compound
SQL statement                                     statement
A savepoint that is established in a savepoint level is implicitly released when that savepoint level is
terminated.
Atomic operations
When running under COMMIT(*CHG), COMMIT(*CS), or COMMIT(*ALL), all operations are guaranteed to
be atomic.
That is, they will complete or they will appear not to have started. This is true regardless of when or how
the function was ended or interrupted (such as power failure, abnormal job end, or job cancel).
If COMMIT(*NONE) is specified, however, some underlying database data definition functions are not
atomic. The following SQL data definition statements are guaranteed to be atomic:
• ALTER TABLE (See note 1)
• COMMENT (See note 2)
• LABEL (See note 2)
• GRANT (See note 3)
• REVOKE (See note 3)
• DROP TABLE (See note 4)
• DROP VIEW (See note 4)
• DROP INDEX
• DROP PACKAGE
• REFRESH TABLE
Notes:
1. If multiple alter table options are specified, the operations are processed one at a time so the entire
SQL statement is not atomic. The order of operations is:
• Remove constraints
Constraints
The Db2 for i database supports unique, referential, and check constraints.
A unique constraint is a rule that guarantees that the values of a key are unique. A referential constraint
is a rule that all non-null values of foreign keys in a dependent table have a corresponding parent key in a
parent table. A check constraint is a rule that limits the values allowed in a column or group of columns.
Db2 for i enforces the validity of the constraint during any data manipulation language (DML) statement.
Certain operations (such as restoring the dependent table), however, cause the validity of the constraint
to be unknown. In this case, DML statements might be prevented until Db2 for i has verified the validity of
the constraint.
• Unique constraints are implemented with indexes. If an index that implements a unique constraint is
not valid, the Edit Rebuild of Access Paths (EDTRBDAP) command can be used to display any indexes
that currently need to be rebuilt.
• If Db2 for i does not currently know whether a referential constraint or check constraint is valid, the
constraint is considered to be in a check pending state. The Edit Check Pending Constraints (EDTCPCST)
command can be used to display any constraints that are currently in check pending.
Related concepts
Constraints
A constraint is a rule enforced by the database manager to limit the values that can be inserted, deleted,
or updated in a table.
For example, consider a table created with a check constraint on COL2:
CREATE TABLE T1 (COL1 INT, COL2 INT CHECK (COL2>0), COL3 INT)
An attempt to insert the value -1 into COL2 fails because the value does not meet the check constraint;
that is, -1 is not greater than 0. An insert that places the value 1 in both COL1 and COL2 is successful.
A subsequent ALTER TABLE statement might attempt to add a second check constraint that limits the
value allowed in COL1 to 1 and also effectively requires that values in COL2 be greater than 1. This
constraint is not allowed because the second part of the constraint is not met by the existing data (the
value of '1' in COL2 is not greater than the value of '1' in COL1).
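Based on the description above, the rejected statement would take roughly this form (a reconstruction, not the original text):

```sql
-- Adds a second check constraint: COL1 must be 1 and must be less than COL2,
-- which effectively requires COL2 > 1. The statement is rejected because an
-- existing row with COL1 = 1 and COL2 = 1 violates the new constraint.
ALTER TABLE T1 ADD CHECK (COL1 = 1 AND COL1 < COL2)
```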
Related reference
ALTER TABLE
CREATE TABLE
Index recovery
The Db2 for i database provides several functions to deal with index recovery.
• System managed index protection
The Edit Recovery for Access Paths (EDTRCYAP) CL command allows you to instruct Db2 for i to
guarantee that in the event of a system or power failure, the amount of time required to recover
all indexes on the system is kept below a specified time. The system automatically journals enough
information in a system journal to limit the recovery time to the specified amount.
• Journaling of indexes
Db2 for i provides an index journaling function that makes it unnecessary to rebuild an entire index
in the event of a power or system failure. If the index is journaled, the system database support
automatically makes sure that the index is in synchronization with the data in the tables without having
to rebuild it from scratch. SQL indexes are not journaled automatically. You can, however, use the CL
command Start Journal Access Path (STRJRNAP) to journal any index created by Db2 for i.
• Index rebuild
All indexes on the system have a maintenance option that specifies when an index is maintained. SQL
indexes are created with an attribute of *IMMED maintenance.
In the event of a power failure or an abnormal system failure, if indexes are not protected by one of
the previously described techniques, those indexes in the process of change might need to be rebuilt
by the database manager to make sure that they agree with the actual data. All indexes on the system
have a recovery option that specifies when an index should be rebuilt if necessary. All SQL indexes with
an attribute of UNIQUE are created with a recovery attribute of *IPL (this means that these indexes are
rebuilt before the IBM i operating system is started). All other SQL indexes are created with the *AFTIPL
recovery option (this means that after the operating system is started, indexes are asynchronously
rebuilt). During an IPL, the operator can see a display showing the indexes that need to be rebuilt and
their recovery options. The operator can override the recovery options.
• Save and restore of indexes
The save/restore function allows you to save indexes when a table is saved by using ACCPTH(*YES) on
the Save Object (SAVOBJ) or Save Library (SAVLIB) CL commands. In the event of a restore when the
indexes have also been saved, there is no need to rebuild the indexes. Any indexes not previously saved
and restored are automatically and asynchronously rebuilt by the database manager.
Catalog integrity
To ensure that the information in the catalog is always accurate, the Db2 for i database prevents users
from explicitly changing the information in the catalog and implicitly maintains the information when an
SQL object described in the catalog is changed.
The integrity of the catalog is maintained whether objects in the schema are changed by SQL statements,
IBM i CL commands, System/38 Environment CL commands, System/36 Environment functions, or any
Stored procedures
A procedure (often called a stored procedure) is a program that can be called to perform operations. A
procedure can include both host language statements and SQL statements. Procedures in SQL provide the
same benefits as procedures in a host language.
Db2 stored procedure support provides a way for an SQL application to define and then call a procedure
through SQL statements. Stored procedures can be used in both distributed and nondistributed Db2
applications. One of the advantages of using stored procedures is that for distributed applications, the
processing of one CALL statement on the application requester, or client, can perform any amount of work
on the application server.
You can define a procedure as either an SQL procedure or an external procedure. An external procedure
can be any supported high-level language program (except System/36 programs and procedures) or
a REXX procedure. The procedure does not need to contain SQL statements, but it may. An SQL
procedure is defined entirely in SQL and can contain SQL statements, including SQL control statements.
Coding stored procedures requires that the user understand the following:
• Stored procedure definition through the CREATE PROCEDURE statement
• Stored procedure invocation through the CALL statement
• Parameter passing conventions
• Methods for returning a completion status to the program invoking the procedure.
You can define stored procedures by using the CREATE PROCEDURE statement. The CREATE
PROCEDURE statement adds procedure and parameter definitions to the catalog tables SYSROUTINES
and SYSPARMS. These definitions are then accessible by any SQL CALL statement on the system.
To create an external procedure or an SQL procedure, you can use the SQL CREATE PROCEDURE
statement.
The following sections describe the SQL statements used to define and call the stored procedure,
information about passing parameters to the stored procedure, and examples of stored procedure usage.
For more information about stored procedures, see Stored Procedures, Triggers, and User-Defined
CREATE PROCEDURE P1
(INOUT PARM1 CHAR(10))
EXTERNAL NAME MYLIB.PROC1
LANGUAGE C
GENERAL WITH NULLS
This procedure has been in use for a long time and is invoked from many places. Now, someone has
suggested that it would be useful to have this procedure update a few other columns in the same table,
JOB and EDLEVEL. Finding and changing all the calls to this procedure is a huge job, but if you add the
new parameters so they have default values it is very easy.
The parameter definitions in the following CREATE PROCEDURE statement will allow all of the columns
except the employee number to be passed in optionally.
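A sketch of such a definition follows. The procedure and parameter names are illustrative (only the parameter roles are taken from the text): the employee number is always required, while the remaining parameters default to NULL.

```sql
-- Sketch only: names and data types are assumptions.
CREATE OR REPLACE PROCEDURE UPDATE_EMPLOYEE_INFO
  (IN EMPNO        CHAR(6),                  -- always required
   IN EMP_DEPT     CHAR(3)  DEFAULT NULL,    -- existing parameter, now optional
   IN PHONE_NUMBER CHAR(4)  DEFAULT NULL,    -- existing parameter, now optional
   IN JOB          CHAR(8)  DEFAULT NULL,    -- new optional parameter
   IN EDLEVEL      SMALLINT DEFAULT NULL)    -- new optional parameter
  LANGUAGE SQL
  BEGIN
    -- Body omitted in this sketch; it must treat a NULL argument as
    -- "leave the corresponding column unchanged."
  END
```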
The code for this procedure, either an SQL routine or an external program, needs to be modified to handle
the new parameters and to correctly process the two existing parameters when a NULL value is passed.
Since default parameters are optional, any existing call to this procedure will not need to change; the
two new parameters will pass a value of NULL to the procedure code. Any caller who needs the new
parameters can include them on the SQL CALL statement.
Although this example uses NULL for all the default values, almost any expression can be used. It can be a
simple constant or a complex query. It cannot reference any of the other parameters.
There are several ways to have the defaults used for the CALL statement.
• Omit the parameters at the end that you do not need to use.
The defaults will be used for the JOB and EDLEVEL parameters.
• Use the DEFAULT keyword for any parameters that are omitted.
All the parameters are represented in this statement. The defaults will be used for the EMP_DEPT, JOB,
and EDLEVEL parameters.
• Use a named argument for each parameter you want to specify.
By using the parameter name, the other three parameters do not need to be represented in this CALL
statement. The defaults will be used for the EMP_DEPT, PHONE_NUMBER, and JOB parameters.
Named arguments can be in any order in the CALL statement. Unnamed arguments must match the order
of the parameter definitions for the procedure and must be specified ahead of any named arguments.
Once a named argument is used in the statement, all arguments that follow it must also be named. Any
parameters that do not have an argument in the CALL statement must have a default defined.
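Assuming a procedure with parameters EMPNO, EMP_DEPT, PHONE_NUMBER, JOB, and EDLEVEL (names and values illustrative) where all but EMPNO have defaults, the three styles above might look like:

```sql
-- Omit trailing parameters: defaults used for JOB and EDLEVEL.
CALL UPDATE_EMPLOYEE_INFO('123456', 'D11', '4510');

-- Use the DEFAULT keyword: defaults used for EMP_DEPT, JOB, and EDLEVEL.
CALL UPDATE_EMPLOYEE_INFO('123456', DEFAULT, '4510', DEFAULT, DEFAULT);

-- Use a named argument: defaults used for EMP_DEPT, PHONE_NUMBER, and JOB.
CALL UPDATE_EMPLOYEE_INFO('123456', EDLEVEL => 18);
```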
In the following example, the procedure creates a table containing all employees in a specified
department. The schema where it gets created has always been hard-coded for the environment where it
is used. For testing, however, it would be convenient to create the table in a test schema.
A second parameter is defined to pass a schema name. It has a default of ’CORPDATA’. This is the value
that has been used by the procedure in the past.
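A sketch of what such a procedure might look like. The body shown here is an assumption; only the defaulted SCHEMA_NAME parameter matters for this example.

```sql
-- Sketch only: the body is illustrative.
CREATE OR REPLACE PROCEDURE CREATE_DEPT_TABLE2
  (IN P_DEPT      CHAR(3),
   IN SCHEMA_NAME VARCHAR(128) DEFAULT 'CORPDATA')
BEGIN
  DECLARE STMT VARCHAR(1000);
  -- Build and run a CREATE TABLE for the requested department in the
  -- requested (or defaulted) schema.
  SET STMT = 'CREATE TABLE ' CONCAT SCHEMA_NAME CONCAT '.DEPT_' CONCAT P_DEPT
             CONCAT ' AS (SELECT * FROM CORPDATA.EMPLOYEE WHERE WORKDEPT = '''
             CONCAT P_DEPT CONCAT ''') WITH DATA';
  EXECUTE IMMEDIATE STMT;
END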
When run in the production environment, the CALL statement might be:
CALL CREATE_DEPT_TABLE2('D21')
Since the SCHEMA_NAME parameter is not specified, the default parameter value is used. The table
DEPT_D21 is created in CORPDATA.
When run in the test environment, the CALL statement might be:
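A sketch of such a call, using a hypothetical test schema name:

```sql
-- TESTSCHEMA is a hypothetical schema name; the second argument overrides
-- the SCHEMA_NAME default, so DEPT_D21 is created in the test schema.
CALL CREATE_DEPT_TABLE2('D21', 'TESTSCHEMA')
```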
When this CALL statement is issued, a call to program MYLIB/PROC1 is made and two arguments are
passed. Because the language of the program is ILE C, the first argument is a C NUL-terminated string, 11
characters long, which contains the contents of host variable HV1. On a call to an ILE C procedure, SQL
adds one character to the parameter declaration if the parameter is declared to be a character, graphic,
date, time, or timestamp variable. The second argument is the indicator array. In this case, it is one
short integer because there is only one parameter in the CREATE PROCEDURE statement. This argument
contains the contents of indicator variable IND1 on entry to the procedure.
Since the first parameter is declared as INOUT, SQL updates the host variable HV1 and the indicator
variable IND1 with the values returned from MYLIB.PROC1 before returning to the user program.
Notes:
1. The procedure names specified on the CREATE PROCEDURE and CALL statements must match
EXACTLY in order for the link between the two to be made during the SQL precompile of the program.
2. For an embedded CALL statement where both a CREATE PROCEDURE and a DECLARE PROCEDURE
statement exist, the DECLARE PROCEDURE statement will be used.
When the CALL statement is issued, SQL attempts to find the program based on standard SQL naming
conventions. For the preceding example, assume that the naming option of *SYS (system naming) is
used and that a DFTRDBCOL parameter is not specified on the Create SQL PL/I Program (CRTSQLPLI)
command. In this case, the library list is searched for a program named P2. Because the call type is
GENERAL, no additional argument is passed to the program for indicator variables.
Note: If an indicator variable is specified on the CALL statement and its value is less than zero when
the CALL statement is executed, an error results because there is no way to pass the indicator to the
procedure.
Assuming program P2 is found in the library list, the contents of host variable HV2 are passed in to the
program on the CALL and the argument returned from P2 is mapped back to the host variable after P2 has
completed execution.
For numeric constants passed on a CALL statement, the following rules apply:
• All integer constants are passed as fullword binary integers.
• All decimal constants are passed as packed decimal values. Precision and scale are determined based
on the constant value. For instance, a value of 123.45 is passed as a packed decimal(5,2). Likewise, a
value of 001.01 is also passed with a precision of 5 and a scale of 2.
• All floating point constants are passed as double-precision floating point.
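For example, in the following call (the procedure name is illustrative), the three constants would be passed as a fullword binary integer, a packed decimal(5,2), and a double-precision floating-point value, respectively:

```sql
CALL MYLIB.PROC_NUM(100,      -- fullword binary integer
                    123.45,   -- packed decimal(5,2)
                    1.5E0)    -- double-precision floating point
```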
Special registers specified on a dynamic CALL statement are passed as their defined data type and length
with the following exceptions:
CURRENT DATE
Passed as a 10-byte character string in ISO format.
CURRENT TIME
Passed as an 8-byte character string in ISO format.
CURRENT TIMESTAMP
Passed as a 26-byte character string in IBM SQL format.
#define SQLDA_HV_ENTRIES 2
#define SHORTINT 500
#define NUL_TERM_CHAR 460
The name of the called procedure may also be stored in a host variable and the host variable used in the
CALL statement, instead of the hard-coded procedure name. For example:
...
main()
{
char proc_name[15];
...
strcpy (proc_name, "MYLIB.P3");
...
EXEC SQL CALL :proc_name ...;
...
}
In the above example, if MYLIB.P3 is expecting parameters, either a parameter list or an SQLDA passed
with the USING DESCRIPTOR clause may be used, as shown in the previous example.
When a host variable containing the procedure name is used in the CALL statement and a CREATE
PROCEDURE catalog definition exists, it will be used. The procedure name cannot be specified as a
parameter marker.
char hv3[10],string[100];
:
strcpy(string,"CALL MYLIB.P3 ('P3 TEST')");
EXEC SQL EXECUTE IMMEDIATE :string;
:
This example shows a dynamic CALL statement executed through an EXECUTE IMMEDIATE statement.
The call is made to program MYLIB.P3 with one parameter passed as a character variable containing 'P3
TEST'.
When executing a CALL statement and passing a constant, as in the previous example, the length of the
expected argument in the program must be kept in mind. If program MYLIB.P3 expects an argument of
only 5 characters, the last 2 characters of the constant specified in the example are lost to the program.
Note: For this reason, it is always safer to use host variables on the CALL statement so that the attributes
of the procedure can be matched exactly and so that characters are not lost. For dynamic SQL, host
variables can be specified for CALL statement arguments if the PREPARE and EXECUTE statements are
used to process it.
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
/**************************************************************/
/*********** START OF SQL C Application ***********************/
/*******************************************************/
/* Initialize variables for the call to the procedures */
/*******************************************************/
strcpy(PARM1,"PARM1");
PARM2 = 7000;
PARM3 = -1;
PARM4 = 1.2;
PARM5 = 1.0;
PARM6 = 10.555;
PARM7.parm7l = 5;
strcpy(PARM7.parm7c,"PARM7");
strncpy(PARM8,"1994-12-31",10); /* FOR DATE */
strncpy(PARM9,"12.00.00",8); /* FOR TIME */
strncpy(PARM10,"1994-12-31-12.00.00.000000",26);
/* FOR TIMESTAMP */
/***********************************************/
/* Call the C procedure */
/* */
/* */
/***********************************************/
EXEC SQL CALL P1 (:PARM1, :PARM2, :PARM3,
:PARM4, :PARM5, :PARM6,
:PARM7, :PARM8, :PARM9,
:PARM10 );
if (strncmp(SQLSTATE,"00000",5))
{
/* Handle error or warning returned on CALL statement */
}
/***********************************************/
/* Call the PLI procedure */
/* */
/* */
/***********************************************/
/* Reset the host variables before making the CALL */
/* */
:
EXEC SQL CALL P2 (:PARM1, :PARM2, :PARM3,
:PARM4, :PARM5, :PARM6,
:PARM7, :PARM8, :PARM9,
:PARM10 );
if (strncmp(SQLSTATE,"00000",5))
{
/* Handle error or warning returned on CALL statement */
}
/* Process return values from the CALL. */
:
}
#include <stdio.h>
#include <string.h>
#include <decimal.h>
main(argc,argv)
int argc;
char *argv[];
{
char parm1[11];
long int parm2;
short int parm3,i,j,*ind,ind1,ind2,ind3,ind4,ind5,ind6,ind7,
ind8,ind9,ind10;
float parm4;
double parm5;
decimal(10,5) parm6;
char parm7[11];
char parm8[10];
char parm9[8];
char parm10[26];
/* *********************************************************/
/* Receive the parameters into the local variables - */
/* Character, date, time, and timestamp are passed as */
/* NUL terminated strings - cast the argument vector to */
/* the proper data type for each variable. Note that */
/* the argument vector can be used directly instead of */
/* copying the parameters into local variables - the copy */
/* is done here just to illustrate the method. */
/* *********************************************************/
/**********************************************************/
/* Copy NUL terminated string into local variable. */
/* Note that the parameter in the CREATE PROCEDURE was */
/* declared as varying length character. For C, varying */
/* length are passed as NUL terminated strings unless */
/* FOR BIT DATA is specified in the CREATE PROCEDURE */
/**********************************************************/
strcpy(parm7,argv[7]);
/**********************************************************/
/* Copy date into local variable. */
/* Note that date and time variables are always passed in */
/* ISO format so that the lengths of the strings are */
/* known. strcpy works here just as well. */
/**********************************************************/
strncpy(parm8,argv[8],10);
/**********************************************************/
/* Copy timestamp into local variable. */
/* IBM SQL timestamp format is always passed so the length*/
/* of the string is known. */
/**********************************************************/
strncpy(parm10,argv[10],26);
/**********************************************************/
Procedure P2
END CALLPROC;
/**************************************************************/
/*********** START OF SQL C Application ***********************/
#include <decimal.h>
#include <stdio.h>
#include <string.h>
#include <wcstr.h>
/*-----------------------------------------------------------*/
exec sql include sqlca;
exec sql include sqlda;
/* ***********************************************************/
/* Declare host variable for the CALL statement */
/* ***********************************************************/
char parm1[20];
signed long int parm2;
decimal(10,5) parm3;
double parm4;
struct { short dlen;
char dat[10];
} parm5;
wchar_t parm6[4] = { 0xC1C1, 0xC2C2, 0xC3C3, 0x0000 };
struct { short dlen;
wchar_t dat[10];
} parm7 = {0x0009, 0xE2E2,0xE3E3,0xE4E4, 0xE5E5, 0xE6E6,
0xE7E7, 0xE8E8, 0xE9E9, 0xC1C1, 0x0000 };
char parm8[10];
char parm9[8];
char parm10[26];
main()
{
/* *************************************************************/
/* Call the procedure - on return from the CALL statement the */
/* SQLCODE should be 0. If the SQLCODE is non-zero, */
/* the procedure detected an error. */
/* *************************************************************/
strcpy(parm1,"TestingREXX");
parm2 = 12345;
parm3 = 5.5;
parm4 = 3e3;
parm5.dlen = 5;
strcpy(parm5.dat,"parm6");
strcpy(parm8,"1994-01-01");
strcpy(parm9,"13.01.00");
strcpy(parm10,"1994-01-01-13.01.00.000000");
if (strncmp(SQLSTATE,"00000",5))
{
/* handle error or warning returned on CALL */
:
}
:
}
/**********************************************************************/
/****** START OF REXX MEMBER TEST/CALLSRC CALLREXX ********************/
/**********************************************************************/
/* REXX source member TEST/CALLSRC CALLREXX */
/* Note the extra parameter being passed for the indicator*/
/* array. */
/**********************************************************/
/* Parse the arguments into individual parameters */
/**********************************************************/
parse arg ar1 ar2 ar3 ar4 ar5 ar6 ar7 ar8 ar9 ar10 ar11
/**********************************************************/
/* Verify that the values are as expected */
/**********************************************************/
if ar1<>"'TestingREXX'" then signal ar1tag
if ar2<>12345 then signal ar2tag
if ar3<>5.5 then signal ar3tag
if ar4<>3e3 then signal ar4tag
if ar5<>"'parm6'" then signal ar5tag
if ar6 <>"G'AABBCC'" then signal ar6tag
if ar7 <>"G'SSTTUUVVWWXXYYZZAA'" then ,
signal ar7tag
if ar8 <> "'1994-01-01'" then signal ar8tag
if ar9 <> "'13.01.00'" then signal ar9tag
if ar10 <> "'1994-01-01-13.01.00.000000'" then signal ar10tag
if ar11 <> "+0+0+0+0+0+0+0+0+0+0" then signal ar11tag
/************************************************************/
/* Perform other processing as necessary .. */
/************************************************************/
:
/************************************************************/
/* Indicate the call was successful by exiting with a */
/* return code of 0 */
/************************************************************/
exit(0)
ar1tag:
say "ar1 did not match" ar1
exit(1)
ar2tag:
say "ar2 did not match" ar2
exit(1)
:
:
PROCEDURE prod.resset
ODBC application
Note: Some of the logic has been removed.
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
:
strcpy(stmt,"call prod.resset()");
rc = SQLExecDirect(hstmt,stmt,SQL_NTS);
if (rc == SQL_SUCCESS)
{
// CALL statement has executed successfully. Process the result set.
// Get number of result columns for the result set.
rc = SQLNumResultCols(hstmt, &wNum);
if (rc == SQL_SUCCESS)
// Get description of result columns in result set
{ rc = SQLDescribeCol(hstmt,...);
if (rc == SQL_SUCCESS)
:
{
// Bind result columns based on attributes returned
//
rc = SQLBindCol(hstmt,...);
:
// FETCH records until EOF is returned
rc = SQLFetch(hstmt);
Example 2: Calling a stored procedure that returns a result set from a nested
procedure
This example shows how a nested stored procedure can open and return a result set to the outermost
procedure.
To return a result set to the outermost procedure in an environment where there are nested stored
procedures, the RETURN TO CLIENT returnability attribute should be used on the DECLARE CURSOR
statement or on the SET RESULT SETS statement to indicate that the cursors are to be returned to the
application which called the outermost procedure. Note that this nested procedure returns two result sets
to the client; the first, an array result set, and the second a cursor result set. Both an ODBC and a JDBC
client application are shown below along with the stored procedures.
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
PGM
CALL PGM(PROD/RTNCLIENT)
DRESULT DS OCCURS(20)
D COL1 1 16A
C 1 DO 10 X 2 0
C X OCCUR RESULT
C EVAL COL1='array result set'
C ENDDO
C EVAL X=X-1
C/EXEC SQL DECLARE C2 CURSOR WITH RETURN TO CLIENT
C+ FOR SELECT LSTNAM FROM QIWS.QCUSTCDT FOR FETCH ONLY
C/END-EXEC
C/EXEC SQL
C+ OPEN C2
C/END-EXEC
C/EXEC SQL
C+ SET RESULT SETS FOR RETURN TO CLIENT ARRAY :RESULT FOR :X ROWS,
C+ CURSOR C2
C/END-EXEC
C SETON LR
C RETURN
//
// *******************************************************************
#include "common.h"
#include "stdio.h"
// *******************************************************************
//
// Local function prototypes.
//
// *******************************************************************
// *******************************************************************
//
// Constant strings definitions for SQL statements used in
// the auto test.
//
// *******************************************************************
//
// Declarations of variables global to the auto test.
//
// *******************************************************************
#define ARRAYCOL_LEN 16
#define LSTNAM_LEN 8
char stmt[2048];
char buf[2000];
UDWORD rowcnt;
char arraycol[ARRAYCOL_LEN+1];
char lstnam[LSTNAM_LEN+1];
SDWORD cbcol1,cbcol2;
// ********************************************************************
//
// Define the auto test name and the number of test cases
// for the current auto test. This information will
// be returned by AutoTestName().
//
// ********************************************************************
// *******************************************************************
//
// Define the structure for test case names, descriptions,
// and function names for the current auto test.
// Test case names and descriptions will be returned by
// AutoTestDesc(). Functions will be run by
// AutoTestFunc() if the bits for the corresponding test cases
// are set in the rglMask member of the SERVERINFO
// structure.
//
// *******************************************************************
struct TestCase TestCasesInfo[] =
{
"Return to Client",
"2 result sets ",
RetClient
};
// *******************************************************************
//
// Sample return to Client:
// Return to Client result sets. Call a CL program which in turn
// calls an RPG program which returns 2 result sets. The first
// result set is an array result set and the second is a cursor
// result set.
//
//
// *******************************************************************
SWORD FAR PASCAL RetClient(lpSERVERINFO lpSI)
{
SWORD sRC = SUCCESS;
RETCODE returncode;
HENV henv;
HDBC hdbc;
HSTMT hstmt;
if (Bind_Second_RS(hstmt) == FALSE)
{
myRETCHECK(lpSI, henv, hdbc, hstmt, SQL_SUCCESS,
returncode, "Bind_Second_RS");
sRC = FAIL;
goto ErrorRet;
}
else
{
vWrite(lpSI, "Bind_Second_RS Complete...", TRUE);
}
// **************************************************************
// Fetch the rows from the cursor result set. After the last row
// is read, a returncode of SQL_NO_DATA_FOUND will be returned to
// the application on the SQLFetch request.
// **************************************************************
returncode = SQLFetch(hstmt);
while(returncode == SQL_SUCCESS)
{
wsprintf(stmt,"lstnam = %s",lstnam);
vWrite(lpSI,stmt,TRUE);
returncode = SQLFetch(hstmt);
}
if (returncode != SQL_NO_DATA_FOUND)
{
myRETCHECK(lpSI, henv, hdbc, hstmt, SQL_SUCCESS_WITH_INFO,
returncode, "SQLFetch");
sRC = FAIL;
goto ErrorRet;
}
returncode = SQLFreeStmt(hstmt,SQL_CLOSE);
if (returncode != SQL_SUCCESS)
{
myRETCHECK(lpSI, henv, hdbc, hstmt, SQL_SUCCESS,
returncode, "Close statement");
sRC = FAIL;
goto ErrorRet;
}
else
{
vWrite(lpSI, "Close statement...", TRUE);
}
ErrorRet:
FullDisconnect(lpSI, henv, hdbc, hstmt);
if (sRC == FAIL)
{
// a failure in an ODBC function that prevents completion of the
// test - for example, connect to the server
vWrite(lpSI, "\t\t *** Unrecoverable RTNClient Test FAILURE ***", TRUE);
} /* endif */
ExitNoDisconnect:
return(sRC);
} // RetClient
JDBC application
//-----------------------------------------------------------
// Call Nested procedures which return result sets to the
// client, in this case a JDBC client.
//-----------------------------------------------------------
import java.sql.*;
public class callNested
{
public static void main (String argv[]) // Main entry point
{
try {
Class.forName("com.ibm.db2.jdbc.app.DB2Driver");
}
catch (ClassNotFoundException e) {
e.printStackTrace();
}
try {
Connection jdbcCon =
DriverManager.getConnection("jdbc:db2:lp066ab","Userid","xxxxxxx");
jdbcCon.setAutoCommit(false);
CallableStatement cs = jdbcCon.prepareCall("CALL PROD.RTNNESTED");
cs.execute();
ResultSet rs1 = cs.getResultSet();
int r = 0;
while (rs1.next())
{
r++;
String s1 = rs1.getString(1);
System.out.print("Result set 1 Row: " + r + ": ");
System.out.print(s1 + " " );
System.out.println();
}
cs.getMoreResults();
r = 0;
ResultSet rs2 = cs.getResultSet();
while (rs2.next())
{
r++;
String s2 = rs2.getString(1);
System.out.print("Result set 2 Row: " + r + ": ");
System.out.print(s2 + " ");
System.out.println();
}
}
catch ( SQLException e ) {
System.out.println( "SQLState: " + e.getSQLState() );
System.out.println( "Message : " + e.getMessage() );
e.printStackTrace();
}
} // main
}
/*************************************************************/
/* Declare result set locators. For this example, */
/* assume you know that two result sets will be returned. */
/* Also, assume that you know the format of each result set. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR loc1, loc2;
EXEC SQL END DECLARE SECTION;
.
.
.
/*************************************************************/
/* Call stored procedure P1. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P1(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Establish a link between each result set and its */
/* locator using the ASSOCIATE LOCATORS statement. */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2) WITH PROCEDURE P1;
.
.
.
The following example demonstrates how you receive result sets when you do not know how many result
sets are returned or what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example, */
/* assume that no more than three result sets will be */
/* returned, so declare three locators. Also, assume */
/* that you do not know the format of the result sets. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR loc1, loc2, loc3;
EXEC SQL END DECLARE SECTION;
.
.
.
/*************************************************************/
/* Call stored procedure P2. */
/* Check for SQLCODE +466, which indicates that result sets */
/* were returned. */
/*************************************************************/
EXEC SQL CALL P2(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
/*************************************************************/
/* Determine how many result sets P2 returned, using the */
/* statement DESCRIBE PROCEDURE. :proc_da is an SQLDA */
/* with enough storage to accommodate up to three SQLVAR */
/* entries. */
/*************************************************************/
EXEC SQL DESCRIBE PROCEDURE P2 INTO :proc_da;
.
.
.
/*************************************************************/
/* Now that you know how many result sets were returned, */
/* establish a link between each result set and its */
/* locator using the ASSOCIATE LOCATORS. For this example, */
/* we assume that three result sets are returned. */
/*************************************************************/
EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2, :loc3) WITH PROCEDURE P2;
.
.
.
/*************************************************************/
/* Associate a cursor with each result set. */
/*************************************************************/
EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
EXEC SQL ALLOCATE C3 CURSOR FOR RESULT SET :loc3;
/*************************************************************/
The following example demonstrates how you receive result sets using an SQL descriptor.
This is the SQL procedure that will be called:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
enddata:
printf("All rows fetched.\n");
return;
error:
printf("Unexpected error, SQLCODE = %d \n", SQLCODE);
return;
}
The following example demonstrates how you can use an SQL procedure to receive result sets. It is just a
fragment of a larger SQL procedure.
WHILE AT_END = 0 DO
FETCH RSCUR1 INTO VAR1;
SET TOTAL1 = TOTAL1 + VAR1;
END WHILE;
WHILE AT_END = 0 DO
FETCH RSCUR2 INTO VAR2;
SET TOTAL2 = TOTAL2 + VAR2;
END WHILE;
.
.
.
Related concepts
Embedded SQL programming
Java SQL routines
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Program CRPG
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D INOUT1 S 7P 2
D INOUT1IND S 4B 0
D INOUT2 S 7P 2
D INOUT2IND S 4B 0
C EVAL INOUT1 = 1
C EVAL INOUT1IND = 0
C EVAL INOUT2 = 1
C EVAL INOUT2IND = -2
C/EXEC SQL CALL PROC1 (:INOUT1 :INOUT1IND , :INOUT2
C+ :INOUT2IND)
C/END-EXEC
C EVAL INOUT1 = 1
C EVAL INOUT1IND = 0
C EVAL INOUT2 = 1
C EVAL INOUT2IND = -2
C/EXEC SQL CALL PROC1 (:INOUT1 :INOUT1IND , :INOUT2
C+ :INOUT2IND)
C/END-EXEC
C INOUT1IND IFLT 0
C* :
C* HANDLE NULL INDICATOR
C* :
C ELSE
C* :
C* INOUT1 CONTAINS VALID DATA
C* :
C ENDIF
C* :
C* HANDLE ALL OTHER PARAMETERS
C* IN A SIMILAR FASHION
C* :
C RETURN
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of PROGRAM CRPG
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Program PROC1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D INOUTP S 7P 2
D INOUTP2 S 7P 2
D NULLARRAY S 4B 0 DIM(2)
C *ENTRY PLIST
C PARM INOUTP
C PARM INOUTP2
C PARM NULLARRAY
C NULLARRAY(1) IFLT 0
C* :
C* CODE FOR INOUTP DOES NOT CONTAIN MEANINGFUL DATA
C* :
C ELSE
C* :
C* CODE FOR INOUTP CONTAINS MEANINGFUL DATA
C* :
C ENDIF
C* PROCESS ALL REMAINING VARIABLES
C*
C* BEFORE RETURNING, SET OUTPUT VALUE FOR FIRST
C* PARAMETER AND SET THE INDICATOR TO A NON-NEGATIVE
C* VALUE SO THAT THE DATA IS RETURNED TO THE CALLING
C* PROGRAM
C*
C EVAL INOUTP2 = 20.5
C EVAL NULLARRAY(2) = 0
C*
C* INDICATE THAT THE SECOND PARAMETER IS TO CONTAIN
C* THE NULL VALUE UPON RETURN. THERE IS NO POINT
C* IN SETTING THE VALUE IN INOUTP SINCE IT WON'T BE
C* PASSED BACK TO THE CALLER.
C EVAL NULLARRAY(1) = -1
C RETURN
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SQL-state
SQL-parameter
This argument is set by Db2 before calling the procedure. This value repeats n times, where n is the
number of parameters specified in the procedure definition. The value of each of these parameters
is taken from the expression specified in the CALL statement. It is expressed in the data type of the
defined parameter in the CREATE PROCEDURE statement. Note: Changes to any parameters that are
defined as INPUT will be ignored by Db2 upon return.
<schema-name>.<procedure-name>
This parameter is useful when the procedure code is being used by multiple procedure definitions so
that the code can distinguish which definition is being called. Note: This parameter is treated as input
only; any changes to the parameter value made by the procedure are ignored by Db2.
specific-name
This argument is set by Db2 before calling the procedure. It is a VARCHAR(128) value that contains
the specific name of the procedure on whose behalf the procedure code is being called.
Like procedure-name, this parameter is useful when the procedure code is being used by multiple
procedure definitions so that the code can distinguish which definition is being called. Note: This
parameter is treated as input only; any changes to the parameter value made by the procedure are
ignored by Db2.
diagnostic-message
This argument is set by Db2 before calling the procedure. It is a VARCHAR(70) value that can be used
by the procedure to send message text back when an SQLSTATE warning or error is signaled by the
procedure.
It is initialized by the database on input to the procedure and may be set by the procedure with
descriptive information. Message text is ignored by Db2 unless the SQL-state parameter is set by the
procedure.
dbinfo
This argument is set by Db2 before calling the procedure. It is only present if the CREATE PROCEDURE
statement for the procedure specifies the DBINFO keyword. The argument is a structure whose
definition is contained in the sqludf include.
SQL-parameter
This argument is set by Db2 before calling the procedure. This value repeats n times, where n is the
number of parameters specified in the procedure call. The value of each of these parameters is taken
from the expression specified in the CALL statement. It is expressed in the data type of the defined
parameter in the CREATE PROCEDURE statement. Note: Changes to any parameters that are defined
as INPUT will be ignored by Db2 upon return.
SQL-parameter
This argument is set by Db2 before calling the procedure. This value repeats n times, where n is the
number of parameters specified in the procedure call. The value of each of these parameters is taken
from the expression specified in the CALL statement. It is expressed in the data type of the defined
parameter in the CREATE PROCEDURE statement. Note: Changes to any parameters that are defined
as INPUT will be ignored by Db2 upon return.
SQL-parameter-ind-array
This argument is set by Db2 before calling the procedure. It can be used by the procedure to
determine if one or more SQL-parameters are null or not. It is an array of two-byte signed integers
(indicators). The nth array argument corresponds to the nth SQL-parameter. Each array entry is set to
one of the following values:
0
The parameter is present and not null.
-1
The parameter is null.
The procedure should check for null input. Note: Changes to any indicator array entries that
correspond to parameters that are defined as INPUT will be ignored by Db2 upon return.
If you do not know whether the table contains all the correct values, you need to delete all the rows and
insert them again. By introducing a compound (dynamic) statement in the script, the table does not need
to be repopulated when it is already built correctly.
BEGIN
DECLARE day_count INT;
DECLARE unique_day_count INT DEFAULT 0;
DECLARE insert_cnt INT;
END
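A fuller version of such a compound statement might look like the following sketch; the MYSCHEMA.DATE_DIM table, its DAY_DATE column, and the expected count of 366 rows (one per day of a leap year) are all hypothetical:

```sql
BEGIN
  DECLARE day_count INT;

  -- See how many rows the table already holds
  SELECT COUNT(*) INTO day_count FROM MYSCHEMA.DATE_DIM;

  -- Repopulate only when the table is not already built correctly
  IF day_count <> 366 THEN
    DELETE FROM MYSCHEMA.DATE_DIM;
    INSERT INTO MYSCHEMA.DATE_DIM (DAY_DATE)
      WITH DAYS (D) AS (
        SELECT DATE('2024-01-01') FROM SYSIBM.SYSDUMMY1
        UNION ALL
        SELECT D + 1 DAY FROM DAYS WHERE D < DATE('2024-12-31')
      )
      SELECT D FROM DAYS;
  END IF;
END
```

When the row count already matches, the statement performs no deletes or inserts, so rerunning the script is inexpensive.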
Related reference
compound (dynamic) statement
SELECT A, B, C FROM T
When it receives each row, it runs the program's SELECTION_CRITERIA function against the data to
decide if it is interested in processing the data further. Here, every row of table T must be passed back to
the application. But, if SELECTION_CRITERIA() is implemented as a UDF, your application can issue the
following statement:
SELECT C FROM T WHERE SELECTION_CRITERIA(A,B) = 1
In this case, only the rows and one column of interest are passed across the interface between the
application and the database.
Another case where a UDF can offer a performance benefit is when you deal with large objects (LOBs).
Suppose that you have a function that extracts some information from a value of a LOB. You can perform
this extraction right on the database server and pass only the extracted value back to the application.
This is more efficient than passing the entire LOB value back to the application and then performing the
extraction. The performance value of packaging this function as a UDF can be enormous, depending on
the particular situation.
Related concepts
User-defined functions
A user-defined function is a program that can be called like any built-in function.
UDF concepts
A user-defined function (UDF) is a function that is defined to the Db2 database system through the
CREATE FUNCTION statement and that can be referenced in SQL statements. A UDF can be an external
function or an SQL function.
Types of function
There are several types of functions:
• Built-in. These are functions provided by and shipped with the database. SUBSTR() is an example.
• System-generated. These are functions implicitly generated by the database engine when a DISTINCT
TYPE is created. These functions provide casting operations between the DISTINCT TYPE and its base
type.
• User-defined. These are functions created by users and registered to the database. Some system
provided services such as QSYS2.DISPLAY_JOURNAL are considered user-defined functions even
though they are defined and maintained by the system.
In addition, each function can be further classified as a scalar function, an aggregate function, or a table
function.
A scalar function returns a single value answer each time it is called. For example, the built-in function
SUBSTR() is a scalar function, as are many built-in functions. System-generated functions are always
scalar functions. Scalar UDFs can either be external (coded in a programming language such as C), written
in SQL, or sourced (using the implementation of an existing function).
An aggregate function receives a set of like values (a column of data) and returns a single value answer
from this set of values. Some built-in functions are aggregate functions. An example of an aggregate
function is the built-in function AVG(). An external UDF cannot be defined as an aggregate function.
However, a sourced UDF is defined to be an aggregate function if it is sourced on one of the built-in
aggregate functions. The latter is useful for distinct types. For example, if a distinct type SHOESIZE exists
that is defined with base type INTEGER, you can define a UDF, AVG(SHOESIZE), as an aggregate function
sourced on the existing built-in aggregate function, AVG(INTEGER).
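Such a sourced aggregate might be registered with a statement like the following sketch, which assumes the SHOESIZE distinct type already exists:

```sql
CREATE FUNCTION AVG (SHOESIZE)
  RETURNS SHOESIZE
  SOURCE "QSYS2".AVG(INTEGER);
```

Db2 then handles the casting between SHOESIZE and its INTEGER base type when the aggregate runs.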
However, you may also omit the <schema-name>., in which case, Db2 must determine the function to
which you are referring. For example:
Path
The concept of path is central to Db2's resolution of unqualified references that occur when schema-
name is not specified. The path is an ordered list of schema names that is used for resolving unqualified
references to UDFs and UDTs. In cases where a function reference matches a function in more than one
schema in the path, the order of the schemas in the path is used to resolve this match. The path is
established by means of the SQLPATH option on the precompile commands for static SQL. The path is
set by the SET PATH statement for dynamic SQL. When the first SQL statement that runs in an activation
group runs with SQL naming, the path has the following default value:
"QSYS","QSYS2","<ID>"
This applies to both static and dynamic SQL, where <ID> represents the current statement authorization
ID.
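For dynamic SQL, an application can change the path with the SET PATH statement; the schema name MYLIB in this sketch is hypothetical:

```sql
SET PATH "QSYS","QSYS2","MYLIB"
```

Unqualified UDF and UDT references in subsequent dynamic statements are then resolved by searching these schemas in order.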
When the first SQL statement in an activation group runs with system naming, the default path is *LIBL.
Function resolution
It is the function resolution algorithm that takes into account the facts of overloading and function path
to choose the best fit for every function reference, whether it is a qualified or an unqualified reference.
All functions, even built-in functions, are processed through the function selection algorithm. The function
resolution algorithm does not take into account the type of a function. So a table function may be resolved
as the best-fit function, even though the usage of the reference requires a scalar function, or vice versa.
Non-pipelined SQL table functions are required to have one and only one RETURN statement.
This pipelined table function returns data based on a date. In addition to the column values returned by
the query, it also returns an indication of whether the project has been carried over from a prior year.
Pipelined SQL table functions can contain any number of PIPE statements. Each PIPE statement must
return a value for every result column. A RETURN statement must be executed at the end of processing.
The body of a pipelined function is not limited to querying tables. It could call another program, get
information from an API, query data from some other system, and then combine the results and use one
or more PIPE statements to determine what row values to return as the result table.
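As an illustration only, a pipelined function matching the description above might look like the following sketch; it assumes the CORPDATA.PROJECT sample table with PROJNO, PRSTDATE, and PRENDATE columns, and flags a project as carried over when it started in a year before the requested date:

```sql
CREATE FUNCTION ACTIVE_PROJECTS (AS_OF DATE)
  RETURNS TABLE (PROJNO CHAR(6), PRSTDATE DATE, CARRYOVER CHAR(1))
  LANGUAGE SQL
  NOT DETERMINISTIC
BEGIN
  FOR V AS
    SELECT PROJNO, PRSTDATE
      FROM CORPDATA.PROJECT
     WHERE PRENDATE >= AS_OF
  DO
    IF YEAR(V.PRSTDATE) < YEAR(AS_OF) THEN
      -- Project started in a prior year: flag it as carried over
      PIPE (V.PROJNO, V.PRSTDATE, 'Y');
    ELSE
      PIPE (V.PROJNO, V.PRSTDATE, 'N');
    END IF;
  END FOR;
  -- A pipelined function still ends with a RETURN statement
  RETURN;
END
```

Each PIPE statement supplies a value for every result column, and the final RETURN ends processing, as required.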
Example: Exponentiation
Suppose that you write an external function to perform exponentiation of floating point values, and you
want to register it in the MATH schema.
In this example, the RETURNS NULL ON NULL INPUT is specified since you want the result to be NULL if
either argument is NULL. As there is no reason why EXPON cannot be parallel, the ALLOW PARALLEL value
is specified.
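The registration statement for EXPON might look like the following sketch; the external program name is hypothetical:

```sql
CREATE FUNCTION MATH.EXPON (DOUBLE, DOUBLE)
  RETURNS DOUBLE
  EXTERNAL NAME 'MYLIB/MATH(EXPON)'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  RETURNS NULL ON NULL INPUT
  ALLOW PARALLEL;
```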
Note that a CAST FROM clause is used to specify that the UDF program really returns a FLOAT value, but
you want to cast this to INTEGER before returning the value to the SQL statement which used the UDF.
Also, you want to provide your own specific name for the function. Because the UDF was not written to
handle NULL values, you use the RETURNS NULL ON NULL INPUT.
This example illustrates overloading of the UDF name and shows that multiple UDFs can share the same
program. Note that although a BLOB cannot be assigned to a CLOB, the same source code can be used.
There is no programming problem in the above example as the interface for BLOB and CLOB between Db2
and the UDF program is the same: length followed by data.
Note that this FINDSTRING function has a different signature from the FINDSTRING functions in
“Example: BLOB string search” on page 207, so there is no problem overloading the name. Because you
are using the SOURCE clause, you cannot use the EXTERNAL NAME clause or any of the related keywords
specifying function attributes. These attributes are taken from the source function. Finally, observe that in
identifying the source function you are using the specific function name explicitly provided in “Example:
BLOB string search” on page 207. Because this is an unqualified reference, the schema in which this
source function resides must be in the function path, or the reference will not be resolved.
Note that in the SOURCE clause you have qualified the function name, just in case there might be some
other AVG function lurking in your SQL path.
Note that no parameter definitions are provided, just empty parentheses. The above function specifies
SCRATCHPAD and uses the default specification of NO FINAL CALL. In this case, the size of the
scratchpad is set to only 4 bytes, which is sufficient for a counter. Since the COUNTER function requires
that a single scratchpad be used to operate properly, DISALLOW PARALLEL is added to prevent Db2 from
operating it in parallel.
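A registration for such a COUNTER function might resemble this sketch; the external name is hypothetical:

```sql
CREATE FUNCTION COUNTER ()
  RETURNS INT
  EXTERNAL NAME 'MYLIB/MATH(COUNTER)'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  NOT DETERMINISTIC
  SCRATCHPAD 4
  DISALLOW PARALLEL;
```

NOT DETERMINISTIC is also needed, since the function returns a different value on each call even though the arguments (none) are the same.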
Within the context of a single session it will always return the same table, and therefore it is defined as
DETERMINISTIC. The RETURNS clause defines the output from DOCMATCH, including the column name
DOC_ID. FINAL CALL does not need to be specified for this table function. The DISALLOW PARALLEL
keyword is required since table functions cannot operate in parallel. Although the size of the output from
DOCMATCH can be a large table, CARDINALITY 20 is a representative value, and is specified to help the
optimizer make good decisions.
Typically, this table function is used in a join with the table containing the document text, as follows:
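The join might look like this sketch; the DOCS table and its AUTHOR, DOCTEXT, and DOCID column names are assumptions:

```sql
SELECT T.AUTHOR, T.DOCTEXT
  FROM DOCS AS T,
       TABLE(DOCMATCH('MATHEMATICS', 'ZORN''S LEMMA')) AS F
 WHERE T.DOCID = F.DOC_ID;
```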
Note the special syntax (TABLE keyword) for specifying a table function in a FROM clause. In this
invocation, the DOCMATCH() table function returns a row containing the single column DOC_ID for each
MATHEMATICS document referencing ZORN'S LEMMA. These DOC_ID values are joined to the master
document table, retrieving the author's name and document text.
SQL-result SQL-result-ind
SQL-argument SQL-argument-ind
call-type dbinfo
SQL-argument
This argument is set by Db2 before calling the UDF. This value repeats n times, where n is the number
of arguments specified in the function reference. The value of each of these arguments is taken from
the expression specified in the function invocation. It is expressed in the data type of the defined
parameter in the create function statement. Note: These parameters are treated as input only; any
changes to the parameter values made by the UDF are ignored by Db2.
SQL-result
This argument is set by the UDF before returning to Db2. The database provides the storage for the
return value. Since the parameter is passed by address, the address is of the storage where the
return value should be placed. The database provides as much storage as needed for the return value
as defined on the CREATE FUNCTION statement. If the CAST FROM clause is used in the CREATE
FUNCTION statement, Db2 assumes the UDF returns the value as defined in the CAST FROM clause,
otherwise Db2 assumes the UDF returns the value as defined in the RETURNS clause.
SQL-argument-ind
This argument is set by Db2 before calling the UDF. It can be used by the UDF to determine if
the corresponding SQL-argument is null or not. The nth SQL-argument-ind corresponds to the nth
SQL-argument, described previously. Each indicator is defined as a two-byte signed integer. It is set to
one of the following values:
0
The argument is present and not null.
-1
The argument is null.
<schema-name>.<function-name>
This parameter is useful when the function code is being used by multiple UDF definitions so that the
code can distinguish which definition is being called. Note: This parameter is treated as input only; any
changes to the parameter value made by the UDF are ignored by Db2.
specific-name
This argument is set by Db2 before calling the UDF. It is a VARCHAR(128) value that contains the
specific name of the function on whose behalf the function code is being called.
Like function-name, this parameter is useful when the function code is being used by multiple UDF
definitions so that the code can distinguish which definition is being called. Note: This parameter is
treated as input only; any changes to the parameter value made by the UDF are ignored by Db2.
diagnostic-message
This argument is set by Db2 before calling the UDF. It is a VARCHAR(70) value that can be used by the
UDF to send message text back when an SQLSTATE warning or error is signaled by the UDF.
It is initialized by the database on input to the UDF and may be set by the UDF with descriptive
information. Message text is ignored by Db2 unless the SQL-state parameter is set by the UDF.
scratchpad
This argument is set by Db2 before calling the UDF. It is only present if the CREATE FUNCTION
statement for the UDF specified the SCRATCHPAD keyword. This argument is a structure with the
following elements:
• An INTEGER containing the length of the scratchpad.
• The actual scratchpad, initialized to all binary 0's by Db2 before the first call to the UDF.
The scratchpad can be used by the UDF either as working storage or as persistent storage, since it is
maintained across UDF invocations.
SQL-result = func ( SQL-argument, ... )
SQL-argument
This argument is set by Db2 before calling the UDF. This value repeats n times, where n is the number
of arguments specified in the function reference. The value of each of these arguments is taken from
the expression specified in the function invocation. It is expressed in the data type of the defined
parameter in the CREATE FUNCTION statement. Note: These parameters are treated as input only;
any changes to the parameter values made by the UDF are ignored by Db2.
SQL-result
This value is returned by the UDF. Db2 copies the value into database storage. In order to return the
value correctly, the function code must be a value-returning function. The database copies only as
much of the value as defined for the return value as specified on the CREATE FUNCTION statement.
If the CAST FROM clause is used in the CREATE FUNCTION statement, Db2 assumes the UDF returns
the value as defined in the CAST FROM clause, otherwise Db2 assumes the UDF returns the value as
defined in the RETURNS clause.
Because of the requirement that the function code be a value-returning function, any function code
used for parameter style GENERAL must be created into a service program.
SQL-result = funcname ( SQL-argument, ..., SQL-argument-ind-array, SQL-result-ind )
SQL-argument
This argument is set by Db2 before calling the UDF. This value repeats n times, where n is the number
of arguments specified in the function reference. The value of each of these arguments is taken from
the expression specified in the function invocation. It is expressed in the data type of the defined
parameter in the CREATE FUNCTION statement. Note: These parameters are treated as input only;
any changes to the parameter values made by the UDF are ignored by Db2.
SQL-argument-ind-array
This argument is set by Db2 before calling the UDF. It can be used by the UDF to determine if one
or more SQL-arguments are null or not. It is an array of two-byte signed integers (indicators). The nth
array argument corresponds to the nth SQL-argument. Each array entry is set to one of
the following values:
0
The argument is present and not null.
-1
The argument is null.
void myentry(
    int *in,
    int *out,
    ...
the database will not find the entry, because the entry point is in lowercase (myentry) and the database
was instructed to look for uppercase MYENTRY.
2. For service programs with C++ modules, make sure in the C++ source code to precede the program
function definition with extern "C". Otherwise, the C++ compiler will perform 'name mangling' of
the function's name and the database will not find it.
Threads considerations
A user-defined function (UDF) that is defined as FENCED runs in the same job as the SQL statement that
calls the function. However, the UDF runs in a system thread, separate from the thread that is running the
SQL statement.
Because the UDF runs in the same job as the SQL statement, it shares much of the same environment
as the SQL statement. However, because it runs under a separate thread, the following threads
considerations apply:
• The UDF will conflict with thread level resources held by the SQL statement's thread. Primarily, these
are the table resources discussed above.
• UDFs do not inherit any program adopted authority that may have been active at the time the SQL
statement was called. UDF authority comes from either the authority associated with the UDF program
itself or the authority of the user running the SQL statement.
• The UDF cannot perform any operation that is blocked from being run in a secondary thread.
Related reference
Multithreaded applications
Fenced or unfenced considerations
When you create a user-defined function (UDF), consider whether to make the UDF an unfenced UDF.
Parallel processing
A user-defined function (UDF) can be defined to allow parallel processing.
This means that the same UDF program can be running in multiple threads at the same time. Therefore, if
ALLOW PARALLEL is specified for the UDF, ensure that it is thread safe.
Related reference
Multithreaded applications
Related reference
Threads considerations
The function logic needs to be modified to accept two new parameters that might contain dates or might
be the NULL value. Any invocation of the function that does not have a reason to pass a date range does
not need to be changed. It would continue to look like this:
ActProj(:projnum)
Any application that needs the date range can pass one or both of the date parameters:
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
The following examples show how to define the UDF in several different ways.
The code:
The code:
Example: Counter
Suppose that you want to number the rows in a SELECT statement. So you write a user-defined function
(UDF) that increments and returns a counter.
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
This example uses an external function with Db2 SQL parameter style and a scratchpad.
/* structure scr defines the passed scratchpad for the function "ctr" */
struct scr {
long len;
long countr;
char not_used[92];
};
void ctr (
  long *out,                 /* output answer (counter) */
  short *outnull,            /* output NULL indicator */
  char *sqlstate,            /* SQL STATE */
  char *funcname,            /* function name */
  char *specname,            /* specific function name */
  char *mesgtext,            /* message text insert */
  struct scr *scratchptr) {  /* scratch pad */
  *out = ++scratchptr->countr;  /* increment counter kept in the scratchpad and return it */
  *outnull = 0;                 /* result is never NULL */
}
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sqludf.h> /* for use in compiling User Defined Function */
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Find a full city name using a short name */
int get_name( char * short_name, char * long_name ) {
int name_pos = 0 ;
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Clean all field data and field null indicator data */
int clean_fields( int field_pos ) {
fld_desc * field ;
char field_buf[31] ;
double * double_ptr ;
int * int_ptr, buf_pos ;
while ( fields[field_pos].fld_length != 0 ) {
field = &fields[field_pos] ;
memset( field_buf, '\0', 31 ) ;
memcpy( field_buf,
( value + field->fld_offset ),
field->fld_length ) ;
buf_pos = field->fld_length - 1 ;
while ( ( buf_pos > 0 ) &&
( field_buf[buf_pos] == ' ' ) )
field_buf[buf_pos--] = '\0' ;
buf_pos = 0 ;
while ( ( buf_pos < field->fld_length ) &&
( field_buf[buf_pos] == ' ' ) )
buf_pos++ ;
if ( strlen( ( char * ) ( field_buf + buf_pos ) ) > 0 &&
strcmp( ( char * ) ( field_buf + buf_pos ), "n/a") != 0 ) {
field->fld_ind = SQL_NOTNULL ;
}
field_pos++ ;
}
return( 0 ) ;
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN weather( /* Return row fields */
SQLUDF_VARCHAR * city,
SQLUDF_INTEGER * temp_in_f,
SQLUDF_INTEGER * humidity,
SQLUDF_VARCHAR * wind,
SQLUDF_INTEGER * wind_velocity,
SQLUDF_DOUBLE * barometer,
SQLUDF_VARCHAR * forecast,
/* You may want to add more fields here */
break ;
}
memset( line_buf, '\0', 81 ) ;
strcpy( line_buf, weather_data[save_area->file_pos] ) ;
line_buf[3] = '\0' ;
break ;
/* Special last call UDF for clean up (no real args!): Close table */ (See note 3)
case SQL_TF_CLOSE:
/* If you use a weather data text file */
/* fclose(save_area->file_ptr); */
/* save_area->file_ptr = NULL; */
save_area->file_pos = 0 ;
break ;
Referring to the embedded notes in this UDF code, note the following points:
1. The scratchpad is defined. The row variable is initialized on the OPEN call, and the iptr array and
nbr_rows variable are filled in by the mystery function at open time.
2. FETCH traverses the iptr array, using row as an index, and moves the values of interest from the
current element of iptr to the location pointed to by out_c1, out_c2, and out_c3 result value
pointers.
3. CLOSE frees the storage acquired by OPEN and anchored in the scratchpad.
After the UDF is registered with a CREATE FUNCTION statement, it can be referenced in the FROM
clause of a query:
SELECT *
FROM TABLE (tfweather_u()) x
BLOOP(?)
or
BLOOP(NULL)
You can use the CAST specification to provide a data type for the parameter marker or NULL value that
function resolution can use to find the correct function:
BLOOP(CAST(? AS INTEGER))
or
BLOOP(CAST(NULL AS INTEGER))
Related reference
Determining the best fit
Only the BLOOP functions in schema PABLO are considered. It does not matter that user SERGE has
defined a BLOOP function, or whether there is a built-in BLOOP function. Now suppose that user PABLO
has defined two BLOOP functions in his schema:
BLOOP is thus overloaded within the PABLO schema, and the function selection algorithm chooses
the best BLOOP, depending on the data type of the argument, COLUMN1. In this case, both of the
PABLO.BLOOPs take numeric arguments. If COLUMN1 is not castable to a numeric type, the statement
will fail. If COLUMN1 is a character or graphic type, the castable process for function resolution will
resolve to the second BLOOP function. If COLUMN1 is either SMALLINT or INTEGER, function selection
will resolve to the first BLOOP, while if COLUMN1 is DECIMAL or DOUBLE, the second BLOOP will be
chosen.
Several points about this example:
1. It illustrates argument promotion. The first BLOOP is defined with an INTEGER parameter, yet you
can pass it a SMALLINT argument. The function selection algorithm supports promotions among the
built-in data types and Db2 performs the appropriate data value conversions.
2. If for some reason you want to call the second BLOOP with a SMALLINT or INTEGER argument, you
need to take an explicit action in your statement as follows:
3. If you want to call the first BLOOP with a DECIMAL or DOUBLE argument, you have your choice of
explicit actions, depending on your intent:
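Assuming the first BLOOP takes an INTEGER parameter and the second takes DECIMAL, as discussed above, the explicit actions might look like this (the table name T is illustrative):

```sql
-- Point 2: force the second (DECIMAL) BLOOP for a SMALLINT or INTEGER column
SELECT PABLO.BLOOP(DECIMAL(COLUMN1,15,0)) FROM T;

-- Point 3: for a DECIMAL or DOUBLE column, truncate to call the first BLOOP...
SELECT PABLO.BLOOP(INTEGER(COLUMN1)) FROM T;
-- ...or leave the argument alone and let resolution pick the second BLOOP
SELECT PABLO.BLOOP(COLUMN1) FROM T;
```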
Related reference
Using unqualified function references
You can use an unqualified function reference instead of a qualified function reference. When searching
for a matching function, Db2 normally uses the function path to qualify the reference.
Defining a UDT
You define a user-defined type (UDT) using the CREATE DISTINCT TYPE statement.
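For example, a distinct type based on DECIMAL might be created like this (the type name PAY is illustrative):

```sql
CREATE DISTINCT TYPE PAY AS DECIMAL(9,2)
```

A column of type PAY is then strongly typed: it cannot be compared or combined with a plain DECIMAL value without an explicit cast between the distinct type and its source type.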
"QSYS","QSYS2","PABLO"
However, suppose you have forgotten that you are using a script for precompiling and binding which you
previously wrote for another purpose. In this script, you explicitly coded your SQLPATH parameter to
specify the following function path for another reason that does not apply to your current work:
"KATHY","QSYS","QSYS2","PABLO"
If there is a BLOOP function in schema KATHY, the function selection can very well resolve to that
function, and your statement executes without error. You are not notified because Db2 assumes that you
know what you are doing. It is your responsibility to identify the incorrect output from your statement and
make the required correction.
Related reference
Using qualified function references
If you use a qualified function reference, you restrict the search for a matching function to the specified
schema.
CREATE FUNCTION testfunc (parm1 INT, parm2 INT, parm3 INT DEFAULT 10, parm4 INT DEFAULT -1) ...
Since the first two parameters do not have defaults, they must be included somewhere in the function
invocation. The last two parameters can be omitted, since they have defaults. All of the following are valid
and pass identical values to the function:
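For instance, each of the following references supplies 1 for parm1 and 2 for parm2 and lets parm3 and parm4 take their defaults or explicit equivalents (a sketch based on the CREATE FUNCTION above; named arguments use the `name => value` form):

```sql
testfunc(1, 2)
testfunc(1, 2, 10)
testfunc(1, 2, 10, -1)
testfunc(1, 2, parm3 => 10)
testfunc(parm2 => 2, parm1 => 1)
```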
If COLUMN1 is a DECIMAL or DOUBLE column, the inner BLOOP reference resolves to the second BLOOP
defined above. Because this BLOOP returns an INTEGER, the outer BLOOP resolves to the first BLOOP.
Alternatively, if COLUMN1 is a SMALLINT or INTEGER column, the inner BLOOP reference resolves to the
first BLOOP defined above. Because this BLOOP returns an INTEGER, the outer BLOOP also resolves to
the first BLOOP. In this case, you are seeing nested references to the same function.
A few additional points important for function references are:
• You can define a function with the name of one of the SQL operators. For example, suppose you can
attach some meaning to the "+" operator for values which have distinct type BOAT. You can define the
following UDF:
You are not permitted to overload the built-in conditional operators such as >, =, LIKE, IN, and so on, in
this way.
• The function selection algorithm does not consider the context of the reference in resolving to a
particular function. Look at these BLOOP functions, modified a bit from before:
Because the best match, resolved using the SMALLINT argument, is the first BLOOP defined above,
the second operand of the CONCAT resolves to data type INTEGER. The statement might not return
the expected result since the returned integer will be cast as a VARCHAR before the CONCAT is
performed. If the first BLOOP was not present, the other BLOOP is chosen and the statement execution
is successful.
• UDFs can be defined with parameters or results having any of the LOB types: BLOB, CLOB, or DBCLOB.
The system will materialize the entire LOB value in storage before calling such a function, even if the
source of the value is a LOB locator host variable. For example, consider the following fragment of a C
language application:
Either host variable :clob150K or :clob_locator1 is valid as an argument for a function whose
corresponding parameter is defined as CLOB(500K). Referring to the FINDSTRING defined in
“Example: String search” on page 206 both of the following are valid in the program:
• External UDF parameters or results which have one of the LOB types can be created with the AS
LOCATOR modifier. In this case, the entire LOB value is not materialized before invocation. Instead, a
LOB LOCATOR is passed to the UDF.
Both of the following statements correctly resolve to the BOAT_COST function, because both cast
the :ship host variable to type BOAT:
If there are multiple BOAT distinct types in the database, or BOAT UDFs in other schema, you must be
careful with your function path. Otherwise your results may be unpredictable.
Related reference
Determining the best fit
Triggers
A trigger is a set of actions that runs automatically when a specified change operation is performed on a
specified table or view.
The change operation can be an SQL INSERT, UPDATE, or DELETE statement, or an insert, an update, or
a delete high-level language statement in an application program. Triggers are useful for tasks such as
enforcing business rules, validating input data, and keeping an audit trail.
Triggers can be defined as SQL or external.
For an external trigger, the ADDPFTRG CL command is used. The program containing the set of trigger
actions can be defined in any supported high level language. External triggers can be insert, update,
delete, or read triggers.
For an SQL trigger, the CREATE TRIGGER statement is used. The trigger program is defined entirely using
SQL. SQL triggers can be insert, update, or delete triggers. An SQL trigger can also be defined to have
more than one of these events within a single trigger program.
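For example, a minimal SQL trigger might look like this; the EMPLOYEE and COMPANY_STATS tables are assumed for illustration:

```sql
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMPLOYEE
  FOR EACH ROW
  UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1
```

Whenever a row is inserted into EMPLOYEE, the database management system runs the UPDATE automatically; the application performing the insert needs no knowledge of the trigger.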
Once a trigger is associated with a table or view, the trigger support calls the trigger program whenever
a change operation is initiated against the table or view, or any logical file or view created over the table
or view. SQL triggers and external triggers can be defined for the same table. Only SQL triggers can be
defined for a view. Up to 300 triggers can be defined for a single table or view.
Each change operation for a table can call a trigger before or after the change operation occurs.
Additionally, you can add a read trigger that is called every time the table is accessed. Thus, a table
can be associated with many types of triggers.
• Before delete trigger
• Before insert trigger
• Before update trigger
SQL triggers
The SQL CREATE TRIGGER statement provides a way for the database management system to actively
control, monitor, and manage a group of tables and views whenever an insert, an update, or a delete
operation is performed. The body of the trigger is written in the SQL procedural language, SQL PL.
The statements specified in the SQL trigger are executed each time an insert, update, or delete operation
is performed. An SQL trigger may call stored procedures or user-defined functions to perform additional
processing when the trigger is executed.
Unlike stored procedures, an SQL trigger cannot be directly called from an application. Instead, an SQL
trigger is invoked by the database management system on the execution of a triggering insert, update, or
delete operation. The definition of the SQL trigger is stored in the database management system and is
invoked by the database management system when the SQL table or view that the trigger is defined on, is
modified.
An SQL trigger can be created by specifying the CREATE TRIGGER SQL statement. All objects referred to
in the CREATE TRIGGER statement (such as tables and functions) must exist; otherwise, the trigger will
not be created. If an object does not exist when the trigger is being created, dynamic SQL can be used
to generate a statement that references the object. The statements in the routine-body of the SQL trigger
are transformed by SQL into a program (*PGM) object. The program is created in the schema specified
by the trigger name qualifier. The specified trigger is registered in the SYSTRIGGERS, SYSTRIGDEP,
SYSTRIGCOL, and SYSTRIGUPD SQL catalogs.
Related concepts
Debugging an SQL routine
By specifying SET OPTION DBGVIEW = *SOURCE in the CREATE PROCEDURE, CREATE FUNCTION, or
CREATE TRIGGER statement, you can debug the generated program or module at the SQL statement
level.
Related reference
SQL control statements
CREATE TRIGGER
For the SQL insert statement below, the "FiscalQuarter" column is set to 2 if the current date is
November 14, 2000.
SQL triggers have access to and can use user-defined types (UDTs) and stored procedures. In the
following example, the SQL trigger calls a stored procedure to execute some predefined business logic, in
this case, to set a column to a predefined value for the business.
For the SQL insert statement below, the "ClassRating" column is set to "Economy Class" if the
"VariousSizes" column has the value 3.0.
SQL requires all tables, user-defined functions, procedures and user-defined types to exist before
creating an SQL trigger. In the examples above, all of the tables, stored procedures, and user-defined
types are defined before the trigger is created.
For the SQL update statement below, the RecordMaxBarometricPressure in OurCitysRecords is updated
by the UpdateMaxPressureTrigger.
SQL allows the definition of multiple triggers for a single triggering action. In the previous example,
there are two AFTER UPDATE triggers: UpdateMaxPressureTrigger and UpdateMinPressureTrigger. These
triggers are activated only when specific columns of the table TodaysRecords are updated.
AFTER triggers may modify tables. In the example above, an UPDATE operation is applied to a second
table. Note that recursive insert and update operations should be avoided. The database management
system terminates the operation if the maximum trigger nesting level is reached. You can avoid recursion
by adding conditional logic so that the insert or update operation is exited before the maximum nesting
level is reached. The same situation needs to be avoided in a network of triggers that recursively cascade
through the network of triggers.
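One way to add the conditional logic described above is a WHEN clause that exits the cascade well before the nesting limit; the table, columns, and limit of 10 here are assumptions for illustration:

```sql
CREATE TRIGGER CASCADE_GUARD
  AFTER UPDATE OF AUDIT_COUNT ON ACCOUNTS
  REFERENCING NEW ROW AS N
  FOR EACH ROW
  WHEN (N.AUDIT_COUNT < 10)  -- stop the recursive update before the nesting limit
  UPDATE ACCOUNTS SET AUDIT_COUNT = AUDIT_COUNT + 1
    WHERE ACCT_ID = N.ACCT_ID
```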
CREATE TABLE PARTS (INV_NUM INT, PART_NAME CHAR(20), ON_HAND INT, MAX_INV INT,
PRIMARY KEY (INV_NUM))
In the trigger body the INSERTING, UPDATING, and DELETING predicates are used to determine which
event caused the trigger to activate. A distinct piece of code is executed for each of the defined events.
For each event, a warning condition is handled.
In the next example, the trigger event predicates are used in an INSERT statement to generate a row for a
transaction history table.
For this trigger, the same routine logic is used by all three trigger events. The trigger event predicates are
used to determine whether the inventory number needs to be read from the before or after row value and
a second time to set the type of transaction.
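A sketch of such a trigger, assuming a TRANSACTION_HISTORY table for the log, might be:

```sql
CREATE TRIGGER PARTS_HISTORY
  AFTER INSERT OR UPDATE OR DELETE ON PARTS
  REFERENCING NEW ROW AS N OLD ROW AS O
  FOR EACH ROW
  INSERT INTO TRANSACTION_HISTORY (INV_NUM, TX_TYPE, TX_TIME)
    VALUES (CASE WHEN DELETING THEN O.INV_NUM ELSE N.INV_NUM END,
            CASE WHEN INSERTING THEN 'I'
                 WHEN UPDATING  THEN 'U'
                 ELSE 'D' END,
            CURRENT TIMESTAMP)
```

The first CASE uses the event predicates to read the inventory number from the old row for a delete and from the new row otherwise; the second CASE tags the transaction type.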
For the following insert statement, C1 in table T1 will be assigned a value of 'A'. C2 will be assigned the
NULL value. The NULL value would cause the new row to not match the selection criteria C2 > 10 for the
view V1.
Adding the INSTEAD OF trigger IOT1 can provide a different value for the row that will be selected by the
view:
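A sketch of IOT1, substituting a value that satisfies the view's C2 > 10 predicate when C2 would otherwise be NULL (the substituted value 15 is an assumption):

```sql
CREATE TRIGGER IOT1
  INSTEAD OF INSERT ON V1
  REFERENCING NEW AS NEW_ROW
  FOR EACH ROW
  INSERT INTO T1 VALUES (NEW_ROW.C1, COALESCE(NEW_ROW.C2, 15))
```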
CREATE VIEW V3(Z1, Z2) AS SELECT V1.X1, V2.Y1 FROM V1, V2 WHERE V1.X1 = 'A' AND V2.Y1 > 'B'
With this trigger, the following DELETE statement is allowed. It deletes all rows from table A having an A1
value of 'A', and all rows from table B having a B1 value of 'X'.
View V2 remains not updatable because its original definition is not updatable.
Any delete operation on view V1 also activates the AFTER DELETE trigger AFTER1, because trigger
IOT1 performs a delete on table T1.
When the first SQL delete statement below is executed, the ItemWeight for the item "Desks" is added to
the column total for TotalWeight in the table YearToDateTotals. When the second SQL delete statement
is executed, an overflow occurs when the ItemWeight for the item "Chairs" is added to the column
total for TotalWeight, as the column only handles values up to 32767. When the overflow occurs, the
invalid_number exit handler is executed and a row is written to the FailureLog table. The sqlexception exit
handler runs, for example, if the YearToDateTotals table was deleted by accident. In this example, the
handlers are used to write a log so that the problem can be diagnosed at a later time.
In the preceding example, the trigger is processed a single time following the processing of a triggering
update statement because it is defined as a FOR EACH STATEMENT trigger. You will need to consider the
processing overhead required by the database management system for populating the transition tables
when you define a trigger that references transition tables.
In the RPG code, the number of parameters can be obtained by using the %PARMS built-in function.
For other ILE languages, this information is available by using the _NPMPARMLISTADDR built-in function.
This is the content of MYLIB/QCSRC(MYLPRINTF). Observe that this code is aware that the input
parameter is qualified with the procedure name. It also knows that the parameter is a varying length
character, so correctly references the length and the data parts of the generated declaration.
{
/* declare prototype for Qp0zLprintf */
extern int Qp0zLprintf (char *format, ...);
{
/* declare prototype for Qp0zLprintf */
extern int Qp0zLprintf (char *format, ...);
{
/* declare prototype for Qp0zLprintf */
extern int Qp0zLprintf (char *format, ...);
This procedure prints a CLOB variable to the job log. You must have QSYSINC in your library list to find the
C #includes.
{
  #include "qp0ztrc.h"  /* for Qp0zLprintf */
  #include "sqludf.h"   /* for sqludf_length/sqludf_substr */
  long lob_length;
  int rc;
  rc = sqludf_length(&LPRINTF.C_STRING, &lob_length);
  if ((rc == 0) && (lob_length > 0)) {
    unsigned char* lob = malloc(lob_length);
    rc = sqludf_substr(&LPRINTF.C_STRING, 1, lob_length,
                       lob, &lob_length);
    if (rc == 0) {
      /* print CLOB variable to job log */
      Qp0zLprintf("%.*s\n", lob_length, lob);
    }
    free(lob);
  }
}
This procedure prints a CLOB variable to the job log and handles a return code. You must have QSYSINC in
your library list to find the C #includes.
{
  #include "qp0ztrc.h"  /* for Qp0zLprintf */
  #include "sqludf.h"   /* for sqludf_length/sqludf_substr */
  long lob_length;
  int rc;
  rc = sqludf_length(&LPRINTF.C_STRING, &lob_length);
  if (rc == 0) {
    if (lob_length > 0) {
      unsigned char* lob = malloc(lob_length);
      rc = sqludf_substr(&LPRINTF.C_STRING, 1, lob_length,
                         lob, &lob_length);
      if (rc == 0) {
        /* print CLOB variable to job log */
        Qp0zLprintf("%.*s\n", lob_length, lob);
      }
      else {
        LPRINTF.ERROR_VAR = rc;  /* indicate error */
      }
      free(lob);
    }
  }
  else {
    LPRINTF.ERROR_VAR = rc;  /* indicate error */
  }
}
This is the content of QSQLSRC(SQL_INUSE). It uses two IBM i services to log information about the
lock failure and uses built-in global variables to indicate the failing routine. The logged information can
be queried after the failure to examine the lock information. The handler also uses the PRINT_VARCHAR
procedure to log a message to the job log.
--
-- Common handler for SQL0913, object in use.
--
-- This handler retrieves the message tokens for the error to determine
-- the table name encountering the error.
-- It gets the most recent 5 rows from the joblog and saves them in a table.
-- It uses the name of the table to find the lock status information
-- and saves the lock information in a different table.
-- In both tables, rows are tagged with the routine name that
-- received the error and a common timestamp.
-- Finally, it prints a notification to the joblog.
--
-- This handler assumes the following two tables have been created:
-- CREATE TABLE APPLIB.HARD_TO_DEBUG_PROBLEMS AS
-- (SELECT SYSIBM.ROUTINE_SCHEMA, SYSIBM.ROUTINE_SPECIFIC_NAME,
-- CURRENT TIMESTAMP AS LOCK_ERROR_TIMESTAMP,
-- X.* FROM TABLE(QSYS2.JOBLOG_INFO('*')) X) WITH NO DATA;
-- CREATE TABLE APPLIB.HARD_TO_DEBUG_LOCK_PROBLEMS AS
-- (SELECT SYSIBM.ROUTINE_SCHEMA, SYSIBM.ROUTINE_SPECIFIC_NAME,
-- CURRENT TIMESTAMP AS LOCK_ERROR_TIMESTAMP,
-- X.* FROM QSYS2.OBJECT_LOCK_INFO X) WITH NO DATA;
--
Example 1
This example shows two procedures, sum and main. Procedure main creates an array of 6 integers using
an array constructor. It then passes the array to procedure sum, which computes the sum of all the
elements in the input array and returns the result to main. Procedure sum illustrates the use of array
subindexing and of the CARDINALITY function, which returns the number of elements in an array.
SET n = CARDINALITY(inList);
SET i = 1;
SET total = 0;
WHILE (i <= n) DO
SET total = total + inList[i];
SET i = i + 1;
END WHILE;
END
END
Example 2
This example is similar to Example 1 but uses one procedure, main, to invoke a function named sum.
SET n = CARDINALITY(inList);
SET i = 1;
SET total = 0;
WHILE (i <= n) DO
SET total = total + inList[i];
SET i = i + 1;
END WHILE;
RETURN total;
END
END
Example 3
In this example, we use two array data types (intArray and stringArray), and a persons table with two
columns (id and name). Procedure processPersons adds three additional persons to the table, and
returns an array with the person names that contain the letter 'a', ordered by id. The ids and names of
the three persons to be added are represented as two arrays (ids and names). These arrays are used as
arguments to the UNNEST function, which turns the arrays into a two-column table, whose elements are
then inserted into the persons table. Finally, the last set statement in the procedure uses the ARRAY_AGG
aggregate function to compute the value of the output parameter.
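As a sketch of the two statements described (the array parameter names ids and names and the output parameter namesWithA are illustrative):

```sql
-- Turn the two arrays into a two-column table and insert its rows
INSERT INTO persons (id, name)
  SELECT T.I, T.N FROM UNNEST(ids, names) AS T(I, N);

-- Aggregate the qualifying names, ordered by id, into the output array
SET namesWithA =
  (SELECT ARRAY_AGG(name ORDER BY id)
     FROM persons
    WHERE name LIKE '%a%');
```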
After testing the procedure to make sure it works, you can generate an obfuscated form of the statement
by using the WRAP function.
VALUES(SYSIBMADM.WRAP(
'CREATE PROCEDURE UPDATE_STAFF (
IN P_EmpNo CHAR(6),
IN P_NewJob CHAR(5),
IN P_Dept INTEGER)
BEGIN
UPDATE STAFF
SET JOB = P_NewJob
WHERE DEPT = P_Dept and ID = P_EmpNo;
END'
));
The result from this statement is a CLOB value that contains a value that looks something like the
following statement. Since the timestamp is used during the obfuscation process, your result can be
different every time. The value is shown here on several lines for convenience. New line characters are not
allowed in the wrapped part of the statement text.
This is an executable SQL statement. It can be run just like the original SQL statement. Altering any of the
characters that follow the WRAPPED keyword will cause the statement to fail.
To deploy this statement, the obfuscated form can be embedded in a RUNSQLSTM source member or
source stream file. You need to be very careful to include exactly the characters in the obfuscated version.
A second way of obfuscating an SQL routine is to use the CREATE_WRAPPED SQL procedure:
CALL SYSIBMADM.CREATE_WRAPPED(
'CREATE PROCEDURE UPDATE_STAFF (
IN P_EmpNo CHAR(6),
IN P_NewJob CHAR(5),
IN P_Dept INTEGER)
BEGIN
UPDATE STAFF
SET JOB = P_NewJob
WHERE DEPT = P_Dept and ID = P_EmpNo;
END'
);
This will create the procedure and the entire SQL routine body will be obfuscated. Looking at the
ROUTINE_DEFINITION column in SYSROUTINES will show the obfuscated form of the routine body,
starting with the WRAPPED keyword. You must save the original routine source if you might need it for
future reference since there is no way to generate the original source from the obfuscated text.
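To inspect the obfuscated body, you might query the catalog like this (the schema and routine names are illustrative):

```sql
SELECT ROUTINE_DEFINITION
  FROM QSYS2.SYSROUTINES
  WHERE ROUTINE_SCHEMA = 'MYLIB'
    AND ROUTINE_NAME = 'UPDATE_STAFF'
```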
Related reference
WRAP scalar function
CREATE_WRAPPED procedure
IF M_days<=30 THEN
  SET I = M_days-7;
  SET J = 23;
  RETURN decimal(M_week_1 + ((M_month_1 - M_week_1)*I)/J,16,7);
END IF;

IF M_days<=30 THEN
  RETURN decimal(M_week_1 + ((M_month_1 - M_week_1)*(M_days-7))/23,16,7);
END IF;
• Combine sequences of complex SET statements into one statement. This applies to statements where C
code cannot be generated solely because of CCSIDs or data types.
• Simple array element assignments can be implemented in C code when only one assignment is
performed in a SET statement.
• Use IF () ELSE IF () ... ELSE ... constructs instead of IF (x AND y) to avoid unnecessary comparisons.
• Do as much in SELECT statements as possible:
• Avoid doing character or date comparisons inside of loops when not necessary. In some cases the loop
can be rewritten to move a comparison to precede the loop and have the comparison set an integer
variable that is used within the loop. This causes the complex expression to be evaluated only one time.
An integer comparison within the loop is more efficient since it can be done with generated C code.
• Avoid setting variables that might not be used. For example, if a variable is set outside of an IF
statement, be sure that the variable will actually be used in all paths of the IF statement. If not,
set the variable only in the portion of the IF statement where it is actually used.
• Replace sections of code with a single SELECT statement when possible. Look at the following code
snippet:
SET vnb_decimal = 4;
cdecimal:
FOR vdec AS cdec CURSOR FOR
  SELECT nb_decimal
    FROM K$FX_RULES
    WHERE first_currency=P1_cur1 AND second_currency=P1_cur2
DO
  SET vnb_decimal=SMALLINT(cdecimal.nb_decimal);
END FOR cdecimal;
This code snippet can be more efficient if rewritten in the following way:
RETURN( SELECT
  CASE
    WHEN MIN(nb_decimal) IS NULL THEN ROUND(Vrate1/Vrate2,4)
    ELSE ROUND(Vrate1/Vrate2,SMALLINT(MIN(nb_decimal)))
  END
  FROM K$FX_RULES
  WHERE first_currency=P1_cur1 AND second_currency=P1_cur2 );
• C code can only be used for assignments and comparisons of character data if the CCSIDs of both
operands are the same, if one of the CCSIDs is 65535, if the CCSID is not UTF-8, and if truncation of
character data is not possible. If the CCSID of the variable is not specified, the CCSID is not determined
until the procedure is called. In this case, code must be generated to determine and compare the CCSID
at runtime. If an alternate collating sequence is specified or if *JOBRUN is specified, C code cannot be
generated for character comparisons.
• C code can only be used for assignments and comparisons of graphic data if the CCSIDs of both
operands are the same, if the CCSID is not UTF-16, and if truncation of graphic data is not possible. If
an alternate collating sequence is specified or if *JOBRUN is specified, C code cannot be generated for
graphic comparisons.
• Use the same data type, length and scale for numeric variables that are used together in assignments. C
code can only be generated if truncation is not possible.
• Using identical attributes to set or retrieve array elements may result in the generation of C code
for integer, character, varchar, decimal, and numeric types. Character variables that require CCSID
processing can require an SQL SET statement to be generated.
• Comparisons using array elements are never generated in C. Some comparisons could result in better
performance if the element value is assigned to a local variable first that can be used in the comparison.
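As an example of the loop advice above, a character comparison can be evaluated once into an integer flag before the loop so that only a cheap integer compare remains inside it (all names here are illustrative):

```sql
-- Evaluate the character comparison a single time, before the loop
SET v_is_usd = CASE WHEN p_currency = 'USD' THEN 1 ELSE 0 END;
SET i = 1;
WHILE (i <= n) DO
  IF v_is_usd = 1 THEN            -- integer compare can run in generated C code
    SET total_usd = total_usd + amounts[i];
  END IF;
  SET i = i + 1;
END WHILE;
```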
Lab1: BEGIN
DECLARE var1 INT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
RETURN -3;
Because redesigning a whole routine takes a lot of effort, examine routines that are showing up as
key performance bottlenecks rather than looking at the application as a whole. More important than
redesigning existing performance bottlenecks is to spend time during the design of the application
thinking about the performance impacts of the design. Focusing on areas of the application that are
expected to be high use areas and making sure that they are designed with performance in mind saves
you from having to do a redesign of those areas later.
Large objects
A large object (LOB) is a string data type with a size ranging from 0 bytes to 2 GB (GB equals 1 073 741
824 bytes).
The VARCHAR, VARGRAPHIC, and VARBINARY data types have a limit of 32 KB (where KB equals 1024
bytes) of storage. While this might be sufficient for small to medium-sized text data, applications often
need to store large text documents. They might also need to store a wide variety of additional data types,
such as audio, video, drawings, mixed text and graphics, and images. Some data types can store these
data objects as strings of up to 2 GB.
These data types are binary large objects (BLOBs), single-byte character large objects (CLOBs), and
double-byte character large objects (DBCLOBs). Each table can have a large amount of associated LOB
data. Although a single row that contains one or more LOB values cannot exceed 3.5 GB, a table can
contain nearly 256 GB of LOB data.
You can refer to and manipulate LOBs using host variables as you do any other data type. However, host
variables use the program's storage that might not be large enough to hold LOB values, so you might
need to manipulate large values in other ways. Locators are useful for identifying and manipulating a large
object value at the database server and for extracting pieces of the LOB value. File reference variables are
useful for physically moving a large object value (or a large part of it) to and from the client.
Example: LOBLOC in C
This example program, written in C, uses a locator to retrieve a CLOB value.
Note: By using the code examples, you agree to the terms of the “Code license and disclaimer
information” on page 396.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

int main(int argc, char *argv[]) {
  EXEC SQL BEGIN DECLARE SECTION;
    char userid[9];
    char passwd[19];
    char number[7];
    SQL TYPE IS CLOB_LOCATOR resume;
    short lobind;
    long deptInfoBeginLoc;
  EXEC SQL END DECLARE SECTION;

  if (argc == 1) {
    EXEC SQL CONNECT TO sample;
  }
  else if (argc == 3) {
    strcpy (userid, argv[1]);
    strcpy (passwd, argv[2]);
    EXEC SQL CONNECT TO sample USER :userid USING :passwd;
  }
  else {
    printf ("\nUSAGE: lobloc [userid passwd]\n\n");
    return 1;
  } /* endif */

  EXEC SQL DECLARE c1 CURSOR FOR
    SELECT empno, resume FROM emp_resume WHERE resume_format = 'ascii';
  EXEC SQL OPEN c1;

  do {
    EXEC SQL FETCH c1 INTO :number, :resume :lobind; [2]
    if (SQLCODE != 0) break;
    if (lobind < 0) {
      printf ("NULL LOB indicated\n");
    } else {
      /* EVALUATE the LOB LOCATOR */
      /* Locate the beginning of "Department Information" section */
      EXEC SQL VALUES (POSSTR(:resume, 'Department Information'))
        INTO :deptInfoBeginLoc;
Example: LOBLOC in COBOL
This example program, written in COBOL, uses a locator to retrieve a CLOB value.
Identification Division.
Program-ID. "lobloc".
Data Division.
Working-Storage Section.
EXEC SQL INCLUDE SQLCA END-EXEC.
Procedure Division.
Main Section.
display "Sample COBOL program: LOBLOC".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Move 0 to buffer-length.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :empnum, :resume :lobind [2]
END-EXEC.
if SQLCODE not equal 0 go to End-Fetch-Loop.
NULL-lob-indicated.
display "NULL LOB indicated".
End-Fetch-Loop. exit.
End-Prog.
stop run.
Example: LOBFILE in C
This example program, written in C, uses a file reference variable to retrieve a CLOB value into a file.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sql.h>
if (argc == 1) {
EXEC SQL CONNECT TO sample;
}
else if (argc == 3) {
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
}
else {
printf ("\nUSAGE: lobfile [userid passwd]\n\n");
return 1;
} /* endif */
EXEC SQL SELECT resume INTO :resume :lobind FROM emp_resume [3]
WHERE resume_format='ascii' AND empno='000130';
if (lobind < 0) {
printf ("NULL LOB indicated \n");
} else {
printf ("Resume for EMPNO 000130 is in file : RESUME.TXT\n");
} /* endif */
Example: LOBFILE in COBOL
This example program, written in COBOL, uses a file reference variable to retrieve a CLOB value into a file.
Identification Division.
Program-ID. "lobfile".
Data Division.
Working-Storage Section.
EXEC SQL INCLUDE SQLCA END-EXEC.
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
NULL-LOB-indicated.
display "NULL LOB indicated".
End-Main.
EXEC SQL CONNECT RESET END-EXEC.
End-Prog.
stop run.
strcpy(hv_text_file.name, "/home/userid/dirname/filnam.1");
hv_text_file.name_length = strlen("/home/userid/dirname/filnam.1");
hv_text_file.file_options = SQL_FILE_READ; /* this is a 'regular' file */
Defining a UDT
You define a user-defined type (UDT) using the CREATE DISTINCT TYPE statement.
For the CREATE DISTINCT TYPE statement, note that:
1. The name of the new UDT can be a qualified or an unqualified name.
2. The source type of the UDT is the type used by the system to internally represent the UDT. For this
reason, it must be a built-in data type. Previously defined UDTs cannot be used as source types of
other UDTs.
As part of a UDT definition, the system always generates cast functions to:
• Cast from the UDT to the source type, using the standard name of the source type. For example, if you
create a distinct type based on FLOAT, the cast function called DOUBLE is created.
• Cast from the source type to the UDT.
These functions are important for the manipulation of UDTs in queries.
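Conceptually, a distinct type is a thin wrapper around its source type that blocks direct comparison until a cast is applied. The following rough Python analogy (not Db2 syntax; the class and method names are invented for illustration) mimics the pair of generated cast functions:

```python
from decimal import Decimal

# Hypothetical analogy of a distinct type based on DECIMAL: the wrapper
# blocks direct comparison with its source type, and the two "cast
# functions" convert between the UDT and the source type.
class UsDollar:
    def __init__(self, amount):
        # cast from the source type (Decimal) to the UDT
        self.amount = Decimal(amount)

    def to_decimal(self):
        # cast from the UDT back to the source type
        return self.amount

    def __gt__(self, other):
        if not isinstance(other, UsDollar):
            raise TypeError("US_DOLLAR compares only with US_DOLLAR")
        return self.amount > other.amount

total = UsDollar("150000.00")
print(total > UsDollar("100000.00"))        # True: both sides are the UDT
print(total.to_decimal() > Decimal("1"))    # True: compare as the source type
```

The analogy shows why the generated cast functions matter: without them, a UDT value cannot participate in comparisons or arithmetic with its source type.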
The function path is used to resolve any references to an unqualified type name or function, except if the
type name or function is the main object of a CREATE, DROP, or COMMENT ON statement.
Related reference
Using qualified function references
If you use a qualified function reference, you restrict the search for a matching function to the specified
schema.
CREATE TYPE
Example: Money
Suppose that you are writing applications that handle different currencies and want to ensure that Db2
does not allow these currencies to be compared or manipulated directly with one another in queries.
Remember that conversions are necessary whenever you want to compare values of different currencies.
So you define as many UDTs as you need; one for each currency that you may need to represent:
Example: Resumé
Suppose that you want to keep the application forms that are filled out by applicants to your company in a
table, and that you are going to use functions to extract information from these forms.
Because these functions cannot be applied to regular character strings (they cannot extract the required
information from arbitrary text), you define a UDT to represent the filled forms:
Example: Sales
Suppose that you want to define tables to keep your company's sales in different countries.
You create the tables as follows:
The UDTs in the preceding examples are created with the same CREATE DISTINCT TYPE statements as in
“Example: Money” on page 259. Note that the preceding examples use check constraints.
You have fully qualified the UDT name because its qualifier is not the same as your authorization ID and
you have not changed the default function path. Remember that whenever type and function names are
not fully qualified, Db2 searches through the schemas listed in the current function path and looks for a
type or function name matching the given unqualified name.
SELECT PRODUCT_ITEM
FROM US_SALES
WHERE TOTAL > US_DOLLAR (100000)
AND month = 7
AND year = 1998
Because you cannot compare U.S. dollars with instances of the source type of U.S. dollars (that is,
DECIMAL) directly, you have used the cast function provided by Db2 to cast from DECIMAL to U.S. dollars.
You can also use the other cast function provided by Db2 (that is, the one to cast from U.S. dollars to
DECIMAL) and cast the column total to DECIMAL. Either way you decide to cast, from or to the UDT, you
can use the cast specification notation to perform the casting, or the functional notation. You might have
written the above query as:
SELECT PRODUCT_ITEM
FROM US_SALES
WHERE TOTAL > CAST (100000 AS us_dollar)
AND MONTH = 7
AND YEAR = 1998
Note that an invocation of the US_DOLLAR function as in US_DOLLAR(C1), where C1 is a column whose
type is Canadian dollars, has the same effect as invoking:
That is, C1 (in Canadian dollars) is cast to decimal which in turn is cast to a double value that is passed
to the CDN_TO_US_DOUBLE function. This function accesses the exchange rate file and returns a double
value (representing the amount in U.S. dollars) that is cast to decimal, and then to U.S. dollars.
A function to convert Euros to U.S. dollars is similar to the example above:
Because you cannot directly compare U.S. dollars with Canadian dollars or Euros, you use the UDF to cast
the amount in Canadian dollars to U.S. dollars, and the UDF to cast the amount in Euros to U.S. dollars.
You cannot cast them all to DECIMAL and compare the converted DECIMAL values because the amounts
are not monetarily comparable as they are not in the same currency.
You want to know the total of sales in Germany for each product in the year of 2004. You want to obtain
the total sales in U.S. dollars:
You cannot write SUM (US_DOLLAR (TOTAL)) unless you have defined a SUM function on U.S. dollars in
a manner similar to the above.
Related reference
Example: Assignments involving different UDTs
Suppose that you have defined these sourced user-defined functions (UDFs) on the built-in SUM function
to support SUM on U.S. and Canadian dollars.
You do not explicitly invoke the cast function to convert the character string to the UDT
personal.application_form. This is because Db2 allows you to assign instances of the source type
of a UDT to targets having that UDT.
Related reference
Example: Assignments in dynamic SQL
If you want to store the application form using dynamic SQL, you can use parameter markers.
Now suppose your supervisor requests that you maintain the annual total sales in U.S. dollars of each
product and in each country, in separate tables:
You explicitly cast the amounts in Canadian dollars and Euros to U.S. dollars since different UDTs are not
directly assignable to each other. You cannot use the cast specification syntax because UDTs can only be
cast to their own source type.
Related reference
Example: Sourced UDFs involving UDTs
Suppose that you have defined a sourced user-defined function (UDF) on the built-in SUM function to
support SUM on Euros.
You cast Canadian dollars to U.S. dollars and Euros to U.S. dollars because UDTs are union compatible
only with the same UDT. You must use the functional notation to cast between UDTs since the cast
specification only allows you to cast between UDTs and their source types.
All the functions provided by Db2 LOB support are applicable to UDTs whose source type is a LOB.
Therefore, you have used LOB file reference variables to assign the contents of the file into the UDT
column. You have not used the cast function to convert values of BLOB type into your e-mail type. This
is because Db2 allows you to assign values of the source type of a distinct type to targets of the distinct
type.
You have used the UDFs defined on the UDT in this SQL query since they are the only means to manipulate
the UDT. In this sense, your UDT e-mail is completely encapsulated. Its internal representation and
structure are hidden and can only be manipulated by the defined UDFs. These UDFs know how to
interpret the data without the need to expose its representation.
Suppose you need to know the details of all the e-mail your company received in 1994 that had to do with
the performance of your products in the marketplace.
Because your host variable is of type BLOB locator (the source type of the UDT), you have explicitly
converted the BLOB locator to your UDT, whenever it was used as an argument of a UDF defined on the
UDT.
Using DataLinks
The DataLink data type is one of the basic building blocks for extending the types of data that can be
stored in database files. The idea of a DataLink is that the actual data stored in the column is only a
pointer to the object.
This object can be anything: an image file, a voice recording, a text file, and so on. The method used for
resolving to the object is to store a Uniform Resource Locator (URL). This means that a row in a table
can be used to contain information about the object in traditional data types, and the object itself can be
referenced using the DataLink data type. The user can use SQL scalar functions to get back the path to
the object and the server on which the object is stored (see Built-in functions in the SQL Reference). With
the DataLink data type, there is a fairly loose relationship between the row and the object. For instance,
deleting a row will sever the relationship to the object referenced by the DataLink, but the object itself
might not be deleted.
A table created with a DataLink column can be used to hold information about an object, without actually
containing the object itself. This concept gives the user much more flexibility in the types of data that can
be managed using a table. If, for instance, the user has thousands of video clips stored in the integrated
file system of their server, they may want to use an SQL table to contain information about these video
clips. But since the user already has the objects stored in a directory, they only want the SQL table to
contain references to the objects, not the actual bytes of storage. A good solution is to use DataLinks.
The SQL table uses traditional SQL data types to contain information about each clip, such as title, length,
date, and so on. But the clip itself is referenced using a DataLink column. Each row in the table stores
a URL for the object and an optional comment. Then an application that is working with the clips can
retrieve the URL using SQL interfaces, and then use a browser or other playback software to work with the
URL and display the video clip.
There are several advantages of using this technique:
• The integrated file system can store any type of stream file.
NO LINK CONTROL
If a DataLink column is created with NO LINK CONTROL, no linking takes place when rows are added to
the SQL table.
The URL is verified to be syntactically correct, but there is no check to make sure that the server is
accessible, or that the file even exists.
Adding a prefix
A prefix is a path or directory that will contain objects to be linked. When setting up the Data Links File
Manager (DLFM) on a system, the administrator must add any prefixes that will be used for DataLinks.
For example, a linked object under a registered prefix might be referenced by a URL such as:
http://TESTSYS1/mydir/datalinks/videos/file1.mpg
or
file://TESTSYS1/mydir/datalinks/text/story1.txt
It is also possible to remove a prefix using the script command dfmadmin -del_prefix. This is not a
commonly used function since it can only be run if there are no linked objects anywhere in the directory
structure contained within the prefix name.
Notes:
1. The following directories, or any of their subdirectories, should not be used as prefixes for DataLinks:
• /QIBM
• /QReclaim
• /QSR
• /QFPNWSSTG
2. Additionally, common base directories such as the following should not be used unless the prefix is a
subdirectory within one of the base directories:
• /home
• /dev
• /bin
• /etc
• /tmp
• /usr
• /lib
Once the DLFM has been started, and the prefixes and host database names have been registered, you
can begin linking objects in the file system.
These HTTP functions are passed parameters that indicate the HTTP server to access, the settings of
the HTTP headers, and any data to be sent to the server. The parameters are the same for each HTTP
function.
• The first parameter is the URL used to access the server.
• The second parameter is a string which indicates the options to be used on the request. These options
include the setting of the HTTP headers. The string is a JSON object with the following format:
{"option":"option-setting","option":"option-setting"}
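Because the options argument is an ordinary JSON object, it can be produced with any JSON serializer. A minimal sketch in Python (the option names shown are placeholders for illustration, not a list of options the HTTP functions actually support):

```python
import json

# Build the options string as a JSON object of option:setting pairs.
# "header" and "sslTolerate" here are illustrative placeholder names.
options = {"header": "Accept: application/json", "sslTolerate": "true"}
options_string = json.dumps(options, separators=(",", ":"))
print(options_string)   # {"header":"Accept: application/json","sslTolerate":"true"}
```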
SSL considerations
By default, the HTTP functions use the system default certificate store: /QIBM/USERDATA/ICSS/CERT/
SERVER/DEFAULT.KDB. By default, this certificate store does not exist. The Digital Certificate Manager
(DCM) can be used to create this certificate store and add certificates to this certificate store.
If your application chooses to trust the certificates used by the JVM, the following SQL statements
can be used to create a certificate store (/home/javaTrustStore/fromJava.KDB) that can
be used by the HTTP functions. This process uses the /QOpenSys/QIBM/ProdData/JavaVM/
jdk80/64bit/jre/lib/security/cacerts Java trust store included when the 64-bit Java 8 is installed.
-- The following SQL script generates a KDB trust store from a Java trust
-- store using an intermediate JKS trust store. The IFS directory and
-- name of the KDB trust store are set in the NEW_TRUST_DIRECTORY and
-- NEW_TRUST_STORE variables below.
--
-- Create an SQL schema to contain temporary variables.
-- The user must have permission to create a schema.
--
CREATE SCHEMA FROM_JAVA_TRUST_STORE;
SET SCHEMA FROM_JAVA_TRUST_STORE;
SET PATH CURRENT PATH, FROM_JAVA_TRUST_STORE;
--
-- Define global variables for parameters to the procedure for generating new trust store.
--
-- Specify the IFS directory to use for the new trust store.
-- In this example, we use /home/javaTrustStore
-- The user must have *W authority to /home in order to create this
-- directory.
CREATE OR REPLACE VARIABLE NEW_TRUST_DIRECTORY VARCHAR(80) CCSID 37;
SET NEW_TRUST_DIRECTORY='/home/javaTrustStore';
-- Specify the Java trust store password. The default password is changeit.
-- If the password has been changed on the system, the correct value will need to be used.
CREATE OR REPLACE VARIABLE JAVA_TRUST_STORE_PASSWORD VARCHAR(80) CCSID 37;
SET JAVA_TRUST_STORE_PASSWORD = 'changeit';
-- Step 1. Use QCMDEXC and QSH and mkdir to create a directory in which
-- to save the new store file
CALL QSYS2.QCMDEXC( 'QSH CMD(''mkdir ' CONCAT NEW_TRUST_DIRECTORY CONCAT ''')');
-- Step 2. Use QCMDEXC and QSH to run the keytool command to export
-- the default java certificate store in PKCS12 format
CALL QSYS2.QCMDEXC(
'QSH CMD(''keytool -importkeystore ' CONCAT
' -srcstorepass ' CONCAT JAVA_TRUST_STORE_PASSWORD CONCAT
' -srckeystore ' CONCAT JAVA_TRUST_STORE CONCAT
' -destkeystore ' CONCAT JKS_TRUST_STORE CONCAT
' -srcstoretype JKS -deststoretype PKCS12 ' CONCAT
' -deststorepass ' CONCAT JKS_TRUST_STORE_PASSWORD CONCAT ''')');
Common errors
When an error occurs, use the VERBOSE form of the function to get additional information about the
error. These functions will return error information in the RESPONSE_MESSAGE column. For example, the
following invocation will return information about a page not found error.
The following are some common errors that may be encountered when using the HTTP functions.
• If the current system does not have internet access, the following error may be encountered.
• If DNS is not configured on the system, the following error may be encountered.
• If https is being used, the connection uses SSL. In order to work properly, the server must
use a certificate signed by a certificate authority present in the certificate store specified by the
sslCertificateStoreFile option.
• HTTP authentication must be specified using the basicAuth option. An attempt to use basic
authentication on the URL (for example https://userid:[email protected]/login) results in the
following error.
Debugging considerations
You should use IBM i Access Client Solutions (ACS) Run SQL Scripts to execute the HTTP functions in
order to see error information. Using STRSQL to invoke HTTP functions will not provide useful problem
information.
Because the HTTP functions use the networking functions provided by AXISC and GSKit, any networking
problem may not be exposed to the SQL level with enough detail to diagnose the problem. If the HTTP
functions fail with an SQL4302 error and the information is not useful, an AXISC trace may provide more
information.
Information about enabling the AXISC trace can be found at Enabling a Client Trace for a Web Services
Client.
JSON concepts
JSON (JavaScript Object Notation) is a popular format for interchanging information. It has a simple
structure and is easily read by humans and machines. Due to its simplicity, it is used as an alternative to
XML and does not require predetermined schema designs. While initially created for use with JavaScript,
it is language independent and portable. Db2 for i conforms to the industry standard SQL support for
JSON.
BSON is a standardized binary representation format for serializing JSON text. It allows for fast traversal
of JSON. BSON does not provide any storage savings over JSON.
JSON objects, arrays, and scalar values
A JSON document consists of an object, which is a collection of key:value pairs. Each value may be a
string, a number, the null value, an array, or another object. The key:value pairs are separated by commas.
The key and value are separated by a colon. An example of a simple JSON object with two key:value pairs
is: {"name":"John","age":7}
In a key:value pair, the key is always a string. For Db2, the value may be one of the following types:
• Number – an integer, decimal, or floating point number. The decimal point is always represented by a
period.
Examples: 1, 3.14, 3E20
• String – a sequence of characters surrounded by quotes. Any special characters are escaped.
Examples: "John", "computer", "line 1\nline 2"
• Null – the absence of a value, indicated by the keyword null
• Array – square brackets surrounding an ordered list of zero or more values. The list of values may
contain different types. The values in the array are separated using commas.
Examples: [1,2,3,"yes"]
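These value types map directly onto most languages' JSON libraries. A quick illustration using Python's json module (the nickname key is added here only to show a null value):

```python
import json

# Parse an object holding each of the value types described above.
doc = json.loads('{"name":"John","age":7,"nickname":null,"scores":[1,2,3,"yes"]}')
print(type(doc["name"]).__name__)   # str    (JSON string)
print(type(doc["age"]).__name__)    # int    (JSON number)
print(doc["nickname"])              # None   (JSON null)
print(doc["scores"])                # [1, 2, 3, 'yes']  (array with mixed types)
```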
JSON path
In order to retrieve elements from a JSON object, a path expression is used. An SQL/JSON path
expression begins with a JSON path mode, which is either lax or strict. This mode will be discussed
below. The JSON path mode is followed by an sql-json-path-expression. This path expression begins with
the context identifier, $, which is followed by accessors to navigate to the next key or array element. A key
accessor begins with a period and is followed by a keyname. An array accessor begins with a left square
bracket, followed by a zero-based index value, and ends with a right square bracket. The following table
illustrates some examples of using the SQL/JSON path language to navigate to the various elements of a
JSON object. The initial JSON is:
{"isbn":"123-456-222","author": [{"name":"Jones"},{"name":"Smith"}]}
Path Value
$ {"isbn":"123-456-222","author": [{"name":"Jones"},{"name":"Smith"}]}
$.isbn "123-456-222"
$.author [{"name":"Jones"},{"name":"Smith"}]
$.author[0] {"name":"Jones"}
$.author[1] {"name":"Smith"}
$.author[0].name "Jones"
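The accessor behavior in the table can be sketched with a tiny evaluator (a simplified illustration in Python, not the SQL/JSON path language itself; it supports only key accessors and zero-based array accessors):

```python
import json
import re

def eval_path(doc, path):
    # "$" is the context item; ".key" and "[n]" accessors follow it.
    assert path.startswith("$")
    current = doc
    for key, idx in re.findall(r"\.(\w+)|\[(\d+)\]", path):
        current = current[key] if key else current[int(idx)]
    return current

book = json.loads('{"isbn":"123-456-222","author":[{"name":"Jones"},{"name":"Smith"}]}')
print(eval_path(book, "$.isbn"))             # 123-456-222
print(eval_path(book, "$.author[0]"))        # {'name': 'Jones'}
print(eval_path(book, "$.author[1].name"))   # Smith
```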
2 As part of an earlier JSON technology preview, JSON was generated in a non-standard BSON format that
was prepended with a single x'03' byte. With the addition of the standard JSON functionality, this format
will continue to be recognized but will never be generated. If you have an application that depends on this
format, there is an undocumented function, SYSTOOLS.JSON2BSON, that can be used to generate it.
JSON_TABLE is the recommended way to return multiple elements from a JSON document since the
JSON document only needs to be deconstructed one time.
Using JSON_TABLE
The JSON_TABLE table function converts a JSON document into a relational table.
We will work with the following table, EMP, which contains four rows with one JSON object per row. By
using the JSON_TABLE function we will extract the data so that it can be treated as relational data.
In this example, the first argument indicates the source of the JSON to process, in this case column
JSONDOC in table EMP. The second argument is the path to the starting point in the JSON document. $
indicates to start at the beginning. Next are the definitions for the result columns. Each has a name, a
data type, and the path to use to find the column data in the JSON object. For each of these columns, the
column path is specified to use the current context ($), followed by the key name for the values.
The result of this query is:
Notice that the structural error due to Sally Smith not having an office is returned as the null value.
There are two factors that affect this behavior. Since lax was used for the office column's path, the
structural error was ignored and null was returned. If strict had been used for the office path, the path
navigation would have returned an error. The default behavior for JSON_TABLE when an error is returned
for a column is to return the null value. You can override this for strict mode by adding the ERROR ON
ERROR clause to the column definition.
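The lax versus strict treatment of a missing key can be sketched as follows (an illustration in Python using the employees from this example; JSON_TABLE does the equivalent work inside the query):

```python
import json

def column_value(row, key, mode="lax"):
    # lax: a structural error (missing key) becomes null (None);
    # strict: the structural error is raised instead.
    if key in row:
        return row[key]
    if mode == "strict":
        raise KeyError("strict path error: no key '%s'" % key)
    return None

employees = [json.loads(s) for s in (
    '{"id":901,"name":"Doe","office":"E-334"}',
    '{"id":904,"name":"Smith"}')]          # Smith has no office key

rows = [(e["id"], e["name"], column_value(e, "office")) for e in employees]
print(rows)   # [(901, 'Doe', 'E-334'), (904, 'Smith', None)]
```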
Returning JSON Formatted Data
JSON_TABLE has the ability to return a column that contains data formatted as JSON. This is
accomplished by using the keywords FORMAT JSON in the column definition. The result must consist
of a single value: a JSON object, a JSON array, or a scalar value.
Here is an example of using JSON_TABLE to extract the employee name information as JSON data.
Note that the NAME column returns strings which represent JSON formatted objects.
When the path for a FORMAT JSON column results in a string value, the default behavior is to return the
quotes for string values. This is the result shown for the OFFICE column. The OMIT QUOTES ON SCALAR
STRING clause can be used to remove the quotes from scalar strings.
There is another option not demonstrated here that applies when returning FORMAT JSON data. If the
path locates a sequence of JSON objects, they must be wrapped in an array in order to be successfully
returned as a JSON value. This can be done using the WITH ARRAY WRAPPER clause. The default is to not
add a wrapper, which would result in an error.
Handling a JSON array
When returning information from a JSON array, each array element is returned as a separate row in the
result table. Now we are going to use JSON_TABLE to extract the telephone types and numbers which are
in a JSON array.
In this example, the path expression is $.phones[*] meaning all the elements of the phones array. The
column path expression used to find the column data in the JSON object is the context item, $, followed
by the key name for the value to be returned. In this case, the context item is the result of the parent path
expression, $.phones[*].
The result of this query is:
TYPE NUMBER
home 555-3762
work 555-7242
work 555-8925
work 555-4311
home 555-6312
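The expansion of $.phones[*] into one row per array element can be sketched like this (an illustration in Python using one of the documents above):

```python
import json

# Each element of the phones array becomes one row in the result.
doc = json.loads('{"id":901,"phones":[{"type":"home","number":"555-3762"},'
                 '{"type":"work","number":"555-7242"}]}')
rows = [(phone["type"], phone["number"]) for phone in doc.get("phones", [])]
print(rows)   # [('home', '555-3762'), ('work', '555-7242')]
```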
SELECT t.*
FROM emp,
The row path expression is $, meaning the top level of the JSON object. The first and second
columns return the first and last names. This is followed by a nested column definition. The path lax
$.phones[*] for the nested path means to process all the elements of the phones array. Within that array,
the type and number values of the array elements are returned as the final two columns in the table.
This query also demonstrates the concept of an ordinality column. This is a column that generates a
number, starting from 1, for each result row for each invocation of JSON_TABLE.
The result of this query is:
In this example, there is a parent/child relationship between the nested levels. A LEFT OUTER JOIN
is used to combine the information in the parent/child relationship. Since Sally Smith has no phone
information, the LEFT OUTER JOIN returns NULL values for the phone columns.
Now let's examine the two ordinality columns. At the parent level, every row has the same ordinality
value. While you might expect to see each row numbered sequentially, this query performs a separate
invocation of JSON_TABLE for each JSONDOC row. For each invocation, the numbering starts at 1, so
every row in the result ends up with 1 for OUTER_ORDINALITY. If the JSON used for this example had
been one object containing all four employees, OUTER_ORDINALITY would have incremented for each
employee object. For NESTED_ORDINALITY, the numbering restarts at 1 every time the parent changes.
Sibling Nesting
Nested columns can exist at the same level. The following example uses sibling nested columns to return
both phone and account information. The first nested column clause accesses the phone information. A
second nested column clause accesses the account information.
In this example, there is a sibling relationship between phone and account information. For sibling related
nesting, a UNION is used to combine the information. Since the phone and account information are at the
same level in the JSON document, there is no result row that contains both pieces of information.
insert into empdata values (901, 'Doe', 'John', 'E-334', '555-7242', '555-3762');
insert into emp_account values (901, '36232'), (901, '73263');
insert into empdata values (902, 'Pan', 'Peter', 'E-216', '555-8925', null);
insert into emp_account values (902, '76232'), (902, '72963');
insert into empdata values (903, 'Jones', 'Mary', 'E-739', '555-4311', '555-6312');
insert into empdata values (904, 'Smith', 'Sally', null, null, null);
Our goal is to generate four pieces of employee information in JSON that look like the following JSON
objects that were used as input for the JSON_TABLE examples.
This example uses the JSON_OBJECT scalar function to generate a JSON object. It defines three
key:value pairs using the ID, LAST_NAME, and OFFICE_NUMBER columns from the EMPDATA table for
the values. Each key name is specified as a character string exactly as it will appear in the output. The
result of this query is four rows, one for each row in the EMPDATA table.
{"id":901,"name":"Doe","office":"E-334"}
{"id":902,"name":"Pan","office":"E-216"}
{"id":903,"name":"Jones","office":"E-739"}
{"id":904,"name":"Smith","office":null}
That is a good start toward our final goal, but maybe we want to omit the office value when it is null
rather than including it. That is easy to do by using the ABSENT ON NULL clause. This indicates that when
any of the values for the JSON object are the null value, the null key:value pair should not be included in
the result.
{"id":901,"name":"Doe","office":"E-334"}
{"id":902,"name":"Pan","office":"E-216"}
{"id":903,"name":"Jones","office":"E-739"}
{"id":904,"name":"Smith"}
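The effect of ABSENT ON NULL is simply to drop null-valued pairs before the object is serialized, which can be sketched as follows (an illustration in Python, using the same four rows):

```python
import json

rows = [(901, "Doe", "E-334"), (902, "Pan", "E-216"),
        (903, "Jones", "E-739"), (904, "Smith", None)]

def to_json(emp_id, name, office):
    obj = {"id": emp_id, "name": name, "office": office}
    # ABSENT ON NULL: omit any key:value pair whose value is null
    obj = {k: v for k, v in obj.items() if v is not None}
    return json.dumps(obj, separators=(",", ":"))

for row in rows:
    print(to_json(*row))
```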
{"first":"John","last":"Doe"}
{"first":"Peter","last":"Pan"}
{"first":"Mary","last":"Jones"}
{"first":"Sally","last":"Smith"}
Now our result looks like this, with a name object nested within the outer JSON object:
{"id":901,"name":{"first":"John","last":"Doe"},"office":"E-334"}
{"id":902,"name":{"first":"Peter","last":"Pan"},"office":"E-216"}
{"id":903,"name":{"first":"Mary","last":"Jones"},"office":"E-739"}
{"id":904,"name":{"first":"Sally","last":"Smith"}}
select json_array
(case when home_phone is not null then
json_object('type' value 'home',
'number' value home_phone) end,
case when work_phone is not null then
json_object('type' value 'work',
'number' value work_phone) end
)
from empdata;
Here is the result. The output for each entry is broken across two lines to show the complete result.
["{\"type\":\"home\",\"number\":\"555-3762\"}",
"{\"type\":\"work\",\"number\":\"555-7242\"}"]
["{\"type\":\"work\",\"number\":\"555-8925\"}"]
["{\"type\":\"home\",\"number\":\"555-6312\"}",
"{\"type\":\"work\",\"number\":\"555-4311\"}"]
[]
This is a bit unexpected. Why are all those extra \ characters in the result? This demonstrates the
difference between processing normal character data and character data that has already been formatted
as JSON. When JSON_ARRAY (or any of the JSON publishing functions) is looking at its arguments,
it recognizes when the argument is the direct result of another JSON function. If it is, the string is
interpreted as already formatted JSON data. That means that the function will not escape any of the
special characters in the string. If the argument is not the direct result of a JSON function, it is interpreted
as non-JSON character data and escape processing is performed on the value. Since this example
embedded the JSON_OBJECT in a CASE expression, the fact that the string was already formatted JSON
is not known. To avoid these unwanted escape sequences, you need to explicitly tell JSON_ARRAY that
the argument is already JSON by using the FORMAT JSON clause.
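The same pitfall is easy to reproduce with any JSON library: serializing a string that already contains formatted JSON escapes its quotation marks, just as happens when FORMAT JSON is omitted. A quick illustration in Python:

```python
import json

phone = {"type": "home", "number": "555-3762"}
already_json = json.dumps(phone, separators=(",", ":"))

# Treated as plain character data: the embedded quotes get escaped.
print(json.dumps([already_json]))

# Treated as JSON (the FORMAT JSON equivalent): no escaping occurs.
print(json.dumps([phone], separators=(",", ":")))   # [{"type":"home","number":"555-3762"}]
```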
With that knowledge, let's try it again.
select json_array
(case when home_phone is not null then
json_object('type' value 'home',
'number' value home_phone) end
format json,
case when work_phone is not null then
json_object('type' value 'work',
'number' value work_phone) end
format json)
from empdata;
[{"type":"home","number":"555-3762"},{"type":"work","number":"555-7242"}]
[{"type":"work","number":"555-8925"}]
[{"type":"home","number":"555-6312"},{"type":"work","number":"555-4311"}]
[]
Now this information is ready to include as the next piece of our larger JSON object.
This returns:
{"id":901,"name":{"first":"John","last":"Doe"},"office":"E-334",
"phones":[{"type":"home","number":"555-3762"},{"type":"work","number":"555-7242"}]}
{"id":902,"name":{"first":"Peter","last":"Pan"},"office":"E-216",
"phones":[{"type":"work","number":"555-8925"}]}
{"id":903,"name":{"first":"Mary","last":"Jones"},"office":"E-739",
"phones":[{"type":"home","number":"555-6312"},{"type":"work","number":"555-4311"}]}
{"id":904,"name":{"first":"Sally","last":"Smith"},
"phones":[]}
[{"number":"36232"}]
[{"number":"73263"}]
[{"number":"76232"}]
[{"number":"72963"}]
That generated one array for each account number. What we need is one array for each ID value. That
requires aggregating all the number JSON objects for a single ID value into a single array.
JSON_ARRAYAGG is an aggregate function that works on groups of data. In this case, we are grouping
on the ID column. Each account for an ID generates an object, then all of those objects are aggregated
into a single array. This query returns only two rows, one for each ID value which is exactly what we were
looking for.
[{"number":"36232"},{"number":"73263"}]
[{"number":"76232"},{"number":"72963"}]
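The grouping step can be sketched as follows (an illustration in Python over the same EMP_ACCOUNT rows): all account objects sharing an ID are aggregated into one array, giving one result row per ID.

```python
import json
from itertools import groupby

accounts = [(901, "36232"), (901, "73263"), (902, "76232"), (902, "72963")]

# Group on ID (the input is ordered by ID), then aggregate each group's
# objects into a single JSON array.
arrays = {}
for emp_id, group in groupby(accounts, key=lambda row: row[0]):
    arrays[emp_id] = json.dumps([{"number": n} for _, n in group],
                                separators=(",", ":"))
    print(emp_id, arrays[emp_id])
```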
This piece can be added to what we have so far to complete the generated JSON object. To return only
the accounts from the EMP_ACCOUNT table for the current ID, a WHERE clause is needed.
{"id":901,"name":{"first":"John","last":"Doe"},"office":"E-334",
"phones":[{"type":"home","number":"555-3762"},
{"type":"work","number":"555-7242"}],
"accounts":[{"number":"36232"},{"number":"73263"}]}
{"id":902,"name":{"first":"Peter","last":"Pan"},"office":"E-216",
"phones":[{"type":"work","number":"555-8925"}],
"accounts":[{"number":"76232"},{"number":"72963"}]}
{"id":903,"name":{"first":"Mary","last":"Jones"},"office":"E-739",
"phones":[{"type":"home","number":"555-6312"},
{"type":"work","number":"555-4311"}]}
{"id":904,"name":{"first":"Sally","last":"Smith"},"phones":[]}
First we wrap the results with a JSON_ARRAYAGG to create an array of employee information. Then we
wrap that array with a JSON_OBJECT so the final result is a single JSON object.
{"employees":[
{"id":901,"name":{"first":"John","last":"Doe"},"office":"E-334",
"phones":[{"type":"home","number":"555-3762"},
{"type":"work","number":"555-7242"}],
with sales_tmp(sales_person) as (
select distinct(sales_person) from sales),
region_tmp(region,sales_person) as (
select distinct region, sales_person from sales)
From the sample table, the following three rows are returned:
{"GOUNOT":["Manitoba","Ontario-North","Ontario-South","Quebec"]}
{"LEE":["Manitoba","Ontario-North","Ontario-South","Quebec"]}
{"LUCCHESSI":["Manitoba","Ontario-South","Quebec"]}
If you want to return a single JSON object containing all the results, the following query using
JSON_OBJECTAGG will generate it for you.
with sales_tmp(sales_person) as (
select distinct(sales_person) from sales),
region_tmp(region,sales_person) as (
select distinct region, sales_person from sales)
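The final SELECT of this query is not shown in this excerpt. A sketch of how JSON_OBJECTAGG might be
applied over these common table expressions (the correlated subquery that builds each salesperson's
region array is an assumption) is:

with sales_tmp(sales_person) as (
  select distinct(sales_person) from sales),
region_tmp(region,sales_person) as (
  select distinct region, sales_person from sales)
select json_objectagg(
         key st.sales_person
         value (select json_arrayagg(rt.region order by rt.region)
                  from region_tmp rt
                 where rt.sales_person = st.sales_person)
         format json)
from sales_tmp st;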
This returns one row containing one JSON object. It is broken into several lines here to make it easy to
read.
{"LEE":["Manitoba","Ontario-North","Ontario-South","Quebec"],
"GOUNOT":["Manitoba","Ontario-North","Ontario-South","Quebec"],
"LUCCHESSI":["Manitoba","Ontario-South","Quebec"]}
Using a cursor
When SQL runs a SELECT statement, the resulting rows comprise the result table. A cursor provides a way
to access a result table.
It is used within an SQL program to maintain a position in the result table. SQL uses a cursor to work with
the rows in the result table and to make them available to your program. Your program can have several
cursors, although each must have a unique name.
Statements related to using a cursor include the following:
• A DECLARE CURSOR statement to define and name the cursor and specify the rows to be retrieved with
the embedded select statement.
• OPEN and CLOSE statements to open and close the cursor for use within the program. The cursor must
be opened before any rows can be retrieved.
• A FETCH statement to retrieve rows from the cursor's result table or to position the cursor on another
row.
• An UPDATE ... WHERE CURRENT OF statement to update the current row of a cursor.
• A DELETE ... WHERE CURRENT OF statement to delete the current row of a cursor.
Related reference
Updating data as it is retrieved from a table
You can update rows of data as you retrieve them by using a cursor.
CLOSE
DECLARE CURSOR
DELETE
FETCH
UPDATE
Types of cursors
SQL supports serial and scrollable cursors. The type of cursor determines the positioning methods that
can be used with the cursor.
Serial cursor
A serial cursor is one defined without the SCROLL keyword.
For a serial cursor, each row of the result table can be fetched only once per OPEN of the cursor. When
the cursor is opened, it is positioned before the first row in the result table. When a FETCH is issued, the
cursor is moved to the next row in the result table. That row is then the current row. If host variables are
specified (with the INTO clause on the FETCH statement), SQL moves the current row's contents into your
program's host variables.
This sequence is repeated each time a FETCH statement is issued until the end-of-data (SQLCODE = 100)
is reached. When you reach the end-of-data, close the cursor. You cannot access any rows in the result
table after you reach the end-of-data. To use a serial cursor again, you must first close the cursor and
then re-issue the OPEN statement. You can never back up using a serial cursor.
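A minimal serial-cursor sequence, sketched in the embedded SQL style used elsewhere in this topic
(the cursor name SER1 and the host variables are illustrative):

EXEC SQL
 DECLARE SER1 CURSOR FOR
 SELECT EMPNO, LASTNAME
 FROM CORPDATA.EMPLOYEE
END-EXEC.
EXEC SQL
 OPEN SER1
END-EXEC.
PERFORM UNTIL SQLCODE = 100
 EXEC SQL
 FETCH SER1 INTO :EMPNO, :EMP-LASTNAME
 END-EXEC
END-PERFORM.
EXEC SQL
 CLOSE SER1
END-EXEC.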
Scrollable cursor
For a scrollable cursor, the rows of the result table can be fetched many times. The cursor is moved
through the result table based on the position option specified on the FETCH statement. When the cursor
is opened, it is positioned before the first row in the result table. When a FETCH is issued, the cursor
is positioned according to one of the following options:
NEXT Positions the cursor on the next row. This is the default if no position is
specified.
PRIOR Positions the cursor on the previous row.
FIRST Positions the cursor on the first row.
LAST Positions the cursor on the last row.
BEFORE Positions the cursor before the first row.
AFTER Positions the cursor after the last row.
CURRENT Does not change the cursor position.
RELATIVE n Evaluates a host variable or integer n in relationship to the cursor's
current position. For example, if n is -1, the cursor is positioned on the
previous row of the result table. If n is +3, the cursor is positioned three
rows after the current row.
For a scrollable cursor, the end of the table can be determined by an end-of-data condition (SQLCODE = 100) on a FETCH.
Once the cursor is positioned at the end of the table, the program can use the PRIOR or RELATIVE scroll
options to position and fetch data starting from the end of the table.
EXEC SQL
UPDATE CORPDATA.EMPLOYEE
SET JOB = :NEW-CODE
WHERE CURRENT OF THISEMP
END-EXEC.
EXEC SQL
DELETE FROM CORPDATA.EMPLOYEE
WHERE CURRENT OF THISEMP
END-EXEC.
For the scrollable cursor example, the program uses the RELATIVE position option to obtain a
representative sample of salaries from department D11.
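The program itself is not reproduced in this excerpt. A sketch of the kind of FETCH it describes (the
cursor name and the sampling interval of five rows are illustrative):

EXEC SQL
 DECLARE SAMPLE SCROLL CURSOR FOR
 SELECT SALARY
 FROM CORPDATA.EMPLOYEE
 WHERE WORKDEPT = 'D11'
END-EXEC.
EXEC SQL
 OPEN SAMPLE
END-EXEC.
* Fetch every fifth row of the result table
EXEC SQL
 FETCH RELATIVE 5 FROM SAMPLE INTO :SALARY
END-EXEC.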
EXEC SQL
DECLARE cursor-name CURSOR FOR
SELECT column-1, column-2 ,...
FROM table-name , ...
FOR UPDATE OF column-2 ,...
END-EXEC.
EXEC SQL
DECLARE cursor-name SCROLL CURSOR FOR
SELECT column-1, column-2 ,...
FROM table-name ,...
WHERE column-1 = expression ...
END-EXEC.
The select-statements shown here are rather simple. However, you can code several other types of
clauses in a select-statement within a DECLARE CURSOR statement for a serial and a scrollable cursor.
If you intend to update any columns in any or all of the rows of the identified table (the table named in
the FROM clause), include the FOR UPDATE OF clause. It names each column you intend to update. If you
do not name the columns, and you specify either the ORDER BY clause or the FOR READ ONLY
clause, a negative SQLCODE is returned if an update is attempted. If you specify none of the FOR UPDATE
OF, FOR READ ONLY, and ORDER BY clauses, and the result table is not read-only and the
cursor is not scrollable, you can update any of the columns of the specified table.
You can update a column of the identified table even though it is not part of the result table. In this case,
you do not need to name the column in the SELECT statement. When the cursor retrieves a row (using
FETCH) that contains a column value you want to update, you can use UPDATE ... WHERE CURRENT OF to
update the row.
For example, assume that each row of the result table includes the EMPNO, LASTNAME, and WORKDEPT
columns from the CORPDATA.EMPLOYEE table. If you want to update the JOB column (one of the columns
in each row of the CORPDATA.EMPLOYEE table), the DECLARE CURSOR statement should include FOR
UPDATE OF JOB ... even though JOB is omitted from the SELECT statement.
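Sketched as a declaration, that description corresponds to something like the following (the selected
columns follow the example; the rest of the select-statement is illustrative):

EXEC SQL
 DECLARE THISEMP CURSOR FOR
 SELECT EMPNO, LASTNAME, WORKDEPT
 FROM CORPDATA.EMPLOYEE
 FOR UPDATE OF JOB
END-EXEC.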
For information about when the result table and cursor are read-only, see DECLARE CURSOR in the SQL
reference topic collection.
EXEC SQL
OPEN cursor-name
END-EXEC.
...
IF SQLCODE = 100 GO TO DATA-NOT-FOUND.
or
EXEC SQL
WHENEVER NOT FOUND GO TO symbolic-address
END-EXEC.
Your program should anticipate an end-of-data condition whenever a cursor is used to fetch a row, and
should be prepared to handle this situation when it occurs.
When you are using a serial cursor and the end of data is reached, every subsequent FETCH statement
returns the end-of-data condition. You cannot position the cursor on rows that are already processed. The
CLOSE statement is the only operation that can be performed on the cursor.
When you are using a scrollable cursor and the end of data is reached, you can still process more rows
of the result table. You can position the cursor anywhere in the result table using a combination of the
position options. You do not need to close the cursor when the end of data is reached.
EXEC SQL
FETCH cursor-name
INTO :host variable-1[, :host variable-2] ...
END-EXEC.
EXEC SQL
FETCH RELATIVE integer
FROM cursor-name
INTO :host variable-1[, :host variable-2] ...
END-EXEC.
EXEC SQL
UPDATE table-name
SET column-1 = value [, column-2 = value] ...
WHERE CURRENT OF cursor-name
END-EXEC.
EXEC SQL
DELETE FROM table-name
WHERE CURRENT OF cursor-name
END-EXEC.
EXEC SQL
CLOSE cursor-name
END-EXEC.
If you processed the rows of a result table and you do not want to use the cursor again, you can let the
system close the cursor. The system automatically closes the cursor when:
• A COMMIT without HOLD statement is issued and the cursor is not declared using the WITH HOLD
clause.
• A ROLLBACK without HOLD statement is issued.
• The job ends.
• The activation group ends and CLOSQLCSR(*ENDACTGRP) was specified on the precompile.
• The first SQL program in the call stack ends and neither CLOSQLCSR(*ENDJOB) nor
CLOSQLCSR(*ENDACTGRP) was specified when the program was precompiled.
• The connection to the application server is ended using the DISCONNECT statement.
• The connection to the application server was released and a successful COMMIT occurred.
• An *RUW CONNECT occurred.
Because an open cursor still holds locks on referred-to tables or views, you should explicitly close any
open cursors as soon as they are no longer needed.
OPEN catalog_page;
END
To call this procedure, you can pass values for the offset and fetch clauses. If you omit the values, the
defaults for the parameters are used and the first 1000 rows are returned. The following three calls to the
procedure each return a cursor opened over a result set of 100 rows, skipping the number of rows identified
by the first argument.
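Only the end of the procedure appears in this excerpt. A sketch of such a procedure (the cursor name and
the defaults come from the description; the CATALOG table, its columns, and the ordering by price are
assumptions):

CREATE PROCEDURE catalog_by_page
      (IN skip_rows INTEGER DEFAULT 0,
       IN page_rows INTEGER DEFAULT 1000)
   DYNAMIC RESULT SETS 1
BEGIN
   DECLARE catalog_page CURSOR WITH RETURN FOR
      SELECT item_name, price
        FROM catalog
        ORDER BY price
        OFFSET skip_rows ROWS
        FETCH NEXT page_rows ROWS ONLY;
   OPEN catalog_page;
END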
Note that the ordering for this query is not deterministic if any items in the catalog have the same price.
This means that when these items cross "page" boundaries, multiple calls to the procedure could return
the same item for more than one page or an item could never be returned. It is important to take this into
consideration when defining an ordering for your query.
When using a cursor to read through the resulting data, you cannot read any rows prior to the OFFSET
position or beyond where FETCH NEXT n ROWS ends. These cases are treated as if you reached the
beginning or end of the data. If the offset value is greater than the number of rows in the result query, no
rows are returned.
You can use this alternate supported syntax: LIMIT 50 OFFSET 700.
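That is, assuming a query ordered by price over a hypothetical CATALOG table, the following two forms
request the same rows:

SELECT * FROM catalog
  ORDER BY price
  OFFSET 700 ROWS FETCH NEXT 50 ROWS ONLY;

SELECT * FROM catalog
  ORDER BY price
  LIMIT 50 OFFSET 700;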
...
01 TABLE-1.
02 DEPT OCCURS 10 TIMES.
05 EMPNO PIC X(6).
05 LASTNAME.
49 LASTNAME-LEN PIC S9(4) BINARY.
49 LASTNAME-TEXT PIC X(15).
05 WORKDEPT PIC X(3).
05 JOB PIC X(8).
01 TABLE-2.
02 IND-ARRAY OCCURS 10 TIMES.
05 INDS PIC S9(4) BINARY OCCURS 4 TIMES.
...
EXEC SQL
DECLARE D11 CURSOR FOR
SELECT EMPNO, LASTNAME, WORKDEPT, JOB
FROM CORPDATA.EMPLOYEE
WHERE WORKDEPT = 'D11'
END-EXEC.
...
EXEC SQL
OPEN D11
END-EXEC.
PERFORM FETCH-PARA UNTIL SQLCODE NOT EQUAL TO ZERO.
...
FETCH-PARA.
EXEC SQL WHENEVER NOT FOUND GO TO ALL-DONE END-EXEC.
EXEC SQL FETCH D11 FOR 10 ROWS INTO :DEPT :IND-ARRAY
END-EXEC.
...
In this example, a cursor was defined for the CORPDATA.EMPLOYEE table to select all rows where the
WORKDEPT column equals 'D11'. The result table contains eight rows. The DECLARE CURSOR and OPEN
statements do not have any special syntax when they are used with a multiple-row FETCH statement.
Another FETCH statement that returns a single row against the same cursor can be coded elsewhere in
the program. The multiple-row FETCH statement is used to retrieve all of the rows in the result table.
Following the FETCH, the cursor position remains on the last row retrieved.
The host structure array DEPT and the associated indicator array IND-ARRAY are defined in the
application. Both arrays have a dimension of ten. The indicator array has an entry for each column in
the result table.
The attributes of type and length of the DEPT host structure array elementary items match the columns
that are being retrieved.
When the multiple-row FETCH statement has successfully completed, the host structure array contains
the data for all eight rows. The indicator array, IND-ARRAY, contains zeros for every column in every row
because no NULL values were returned.
The SQLCA that is returned to the application contains the following information:
• SQLCODE contains 0
• SQLSTATE contains '00000'
• SQLERRD3 contains 8, the number of rows fetched
• SQLERRD4 contains 34, the length of each row
• SQLERRD5 contains +100, indicating the last row in the result table is in the block
Related reference
SQLCA (SQL communication area)
*....+....1....+....2....+....3....+....4....+....5....+....6....+....7...*
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
...
...
...
EXEC SQL
OPEN D11;
/* SET UP THE DESCRIPTOR FOR THE MULTIPLE-ROW FETCH */
/* 4 COLUMNS ARE BEING FETCHED */
SQLD = 4;
SQLN = 4;
SQLDABC = 366;
SQLTYPE(1) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(1) = 6;
SQLTYPE(2) = 456; /* VARYING LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(2) = 15;
SQLTYPE(3) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(3) = 3;
SQLTYPE(4) = 452; /* FIXED LENGTH CHARACTER - */
/* NOT NULLABLE */
SQLLEN(4) = 8;
/*ISSUE THE MULTIPLE-ROW FETCH STATEMENT TO RETRIEVE*/
/*THE DATA INTO THE DEPT ROW STORAGE AREA */
/*USE A HOST VARIABLE TO CONTAIN THE COUNT OF */
/*ROWS TO BE RETURNED ON THE MULTIPLE-ROW FETCH */
In this example, a cursor has been defined for the CORPDATA.EMPLOYEE table to select all rows where
the WORKDEPT column equals 'D11'. The sample EMPLOYEE table in the Sample Tables shows that the result
table contains multiple rows. The DECLARE CURSOR and OPEN statements do not have special syntax
when they are used with a multiple-row FETCH statement. Another FETCH statement that returns a single
row against the same cursor can be coded elsewhere in the program.
EXEC SQL
EXECUTE IMMEDIATE :stmtstrg;
Related concepts
Using interactive SQL
Interactive SQL allows a programmer or a database administrator to quickly and easily define, update,
delete, or look at data for testing, problem analysis, and database maintenance.
END;
In routines similar to the example above, you must know the number of parameter markers and their data
types, because the host variables that provide the input data are declared when the program is being
written.
Note: All prepared statements that are associated with an application server are destroyed whenever the
connection to the application server ends. Connections are ended by a CONNECT (Type 1) statement, a
DISCONNECT statement, or a RELEASE followed by a successful COMMIT.
EXEC SQL
DECLARE C2 CURSOR FOR S2 END-EXEC.
EXEC SQL
OPEN C2 USING :EMP END-EXEC.
EXEC SQL
CLOSE C2 END-EXEC.
STOP-RUN.
FETCH-ROW.
EXEC SQL
FETCH C2 INTO :EMP, :EMPNAME END-EXEC.
Note: Remember that because the SELECT statement, in this case, always returns the same number
and type of data items as previously run fixed-list SELECT statements, you do not need to use an SQL
descriptor area.
SQLDA format
An SQL descriptor area (SQLDA) consists of four variables followed by an arbitrary number of occurrences
of a sequence of six variables collectively named SQLVAR.
Note: The SQLDA in REXX is different.
When an SQLDA is used in OPEN, FETCH, CALL, and EXECUTE, each occurrence of SQLVAR describes a
host variable.
The fields of the SQLDA are as follows:
SQLDAID
SQLDAID is used as an 'eye catcher' for storage dumps. It is a string of 8 characters that has the
value 'SQLDA' after the SQLDA is used in a PREPARE or DESCRIBE statement. This variable is not
used for FETCH, OPEN, CALL, or EXECUTE.
Byte 7 can be used to determine if more than one SQLVAR entry is needed for each column. Multiple
SQLVAR entries may be needed if there are any LOB or distinct type columns. This flag is set to a blank
if there are not any LOBs or distinct types.
SQLDAID is not applicable in REXX.
SQLDABC
SQLDABC indicates the length of the SQLDA. It is a 4-byte integer that has the value
SQLN*LENGTH(SQLVAR) + 16 after the SQLDA is used in a PREPARE or DESCRIBE statement.
SQLDABC must have a value equal to or greater than SQLN*LENGTH(SQLVAR) + 16 before it is used by
FETCH, OPEN, CALL, or EXECUTE.
SQLDABC is not applicable in REXX.
SQLN
SQLN is a 2-byte integer that specifies the total number of occurrences of SQLVAR. It must be set
before being used by any SQL statement to a value greater than or equal to 0.
SQLN is not applicable in REXX.
SQLD
SQLD is a 2-byte integer that specifies the number of occurrences of SQLVAR, in other words, the
number of host variables or columns described by the SQLDA. This field is set by SQL on a DESCRIBE
or PREPARE statement. In other statements, this field must be set before being used to a value
greater than or equal to 0 and less than or equal to SQLN.
Note: This SELECT statement has no INTO clause. Dynamic SELECT statements must not have an INTO
clause, even if they return only one row.
EXEC SQL
PREPARE S1 FROM :DSTRING;
Next, you need to determine the number of result columns and their data types. To do this, you need an
SQLDA.
The first step in defining an SQLDA is to allocate storage for it. (Allocating storage is not necessary in
REXX.) The techniques for acquiring storage are language-dependent. The SQLDA must be allocated on a
16-byte boundary. The SQLDA consists of a fixed-length header that is 16 bytes in length. The header is
followed by a varying-length array section (SQLVAR), each element of which is 80 bytes in length.
The amount of storage that you need to allocate depends on how many elements you want to have in the
SQLVAR array. Each column you select must have a corresponding SQLVAR array element. Therefore, the
number of columns listed in your SELECT statement determines how many SQLVAR array elements you
should allocate. Because this SELECT statement is specified at run time, it is impossible to know exactly
how many columns will be accessed. Consequently, you must estimate the number of columns. Suppose,
in this example, that no more than 20 columns are ever expected to be accessed by a single SELECT
statement. In this case, the SQLVAR array should have a dimension of 20, ensuring that each item in the
select-list has a corresponding entry in SQLVAR. This makes the total SQLDA size 20 x 80, or 1600, plus
16, for a total of 1616 bytes.
Having allocated what you estimated to be enough space for your SQLDA, you need to set the SQLN field
of the SQLDA equal to the number of SQLVAR array elements, in this case 20.
Having allocated storage and initialized the size, you can now issue a DESCRIBE statement.
EXEC SQL
DESCRIBE S1 INTO :SQLDA;
When the DESCRIBE statement is run, SQL places values in the SQLDA that provide information about the
select-list for your statement. The following tables show the contents of the SQLDA after the DESCRIBE is
run. Only the entries that are meaningful in this context are shown.
SQLDAID is an identifier field initialized by SQL when a DESCRIBE is run. SQLDABC is the byte count or
size of the SQLDA. The SQLDA header is followed by 2 occurrences of the SQLVAR structure, one for each
column in the result table of the SELECT statement being described:
Your program might need to alter the SQLN value if the SQLDA is not large enough to contain the
described SQLVAR elements. For example, suppose that instead of the estimated maximum of 20
columns, the SELECT statement actually returns 27. SQL cannot describe this select-list because the
SQLVAR needs more elements than the allocated space allows. Instead, SQL sets the SQLD to the actual
number of columns specified by the SELECT statement and the remainder of the structure is ignored.
Therefore, after a DESCRIBE, you should compare the SQLN value to the SQLD value. If the value of SQLD
is greater than the value of SQLN, allocate a larger SQLDA based on the value in SQLD, as follows, and
perform the DESCRIBE again:
EXEC SQL
DESCRIBE S1 INTO :SQLDA;
IF SQLD > SQLN THEN
DO;
/* Allocate a larger SQLDA using the value of SQLD */
/* and set SQLN to the new number of elements */
EXEC SQL
DESCRIBE S1 INTO :SQLDA;
END;
If you use DESCRIBE on a non-SELECT statement, SQL sets SQLD to 0. Therefore, if your program is
designed to process both SELECT and non-SELECT statements, you can describe each statement after
it is prepared to determine whether it is a SELECT statement. This example is designed to process only
SELECT statements; the SQLD value is not checked.
Your program must now analyze the elements of SQLVAR returned from the successful DESCRIBE. The
first item in the select-list is WORKDEPT. In the SQLTYPE field, the DESCRIBE returns a value for the data
type of the expression and whether nulls are applicable or not.
In this example, SQL sets SQLTYPE to 453 in SQLVAR element 1. This specifies that WORKDEPT is a
fixed-length character string result column and that nulls are permitted in the column.
SQL sets SQLLEN to the length of the column. Because the data type of WORKDEPT is CHAR, SQL sets
SQLLEN equal to the length of the character column. For WORKDEPT, that length is 3. Therefore, when the
SELECT statement is later run, a storage area large enough to hold a CHAR(3) string will be needed.
Because the data type of WORKDEPT is CHAR FOR SBCS DATA, the first 4 bytes of SQLDATA were set to
the CCSID of the character column.
The last field in an SQLVAR element is a varying-length character string called SQLNAME. The first 2 bytes
of SQLNAME contain the length of the character data. The character data itself is typically the name of
a column used in the SELECT statement, in this case WORKDEPT. The exceptions to this are select-list
items that are unnamed, such as functions (for example, SUM(SALARY)), expressions (for example, A+B-
C), and constants. In these cases, SQLNAME is an empty string. SQLNAME can also contain a label rather
than a name. One of the parameters associated with the PREPARE and DESCRIBE statements is the
USING clause. You can specify it this way:
EXEC SQL
DESCRIBE S1 INTO :SQLDA
USING LABELS;
If you specify:
You are now ready to retrieve the SELECT statement's results. Dynamically defined SELECT statements
must not have an INTO clause. Therefore, all dynamically defined SELECT statements must use
a cursor. Special forms of the DECLARE, OPEN, and FETCH statements are used for dynamically defined
SELECT statements.
The DECLARE statement for the example statement is:
EXEC SQL
DECLARE C1 CURSOR FOR S1;
As you can see, the only difference is that the name of the prepared SELECT statement (S1) is used
instead of the SELECT statement itself. The actual retrieval of result rows is made as follows:
EXEC SQL
OPEN C1;
EXEC SQL
FETCH C1 USING DESCRIPTOR :SQLDA;
DO WHILE (SQLCODE = 0);
/*Process the results pointed to by SQLDATA*/
EXEC SQL
FETCH C1 USING DESCRIPTOR :SQLDA;
END;
EXEC SQL
CLOSE C1;
The cursor is opened. The result rows from the SELECT are then returned one at a time using a FETCH
statement. On the FETCH statement, there is no list of output host variables. Instead, the FETCH
statement tells SQL to return results into areas described by your SQLDA. The results are returned
into the storage areas pointed to by the SQLDATA and SQLIND fields of the SQLVAR elements. After the
FETCH statement has been processed, the SQLDATA pointer for WORKDEPT has its referenced value set
to 'E11'. Its corresponding indicator value is 0 since a non-null value was returned. The SQLDATA pointer
for PHONENO has its referenced value set to '4502'. Its corresponding indicator value is also 0 since a
non-null value was returned.
Related reference
Varying-list SELECT statements
In dynamic SQL, a varying-list SELECT statement is used when the number and format of the result
columns are not predictable; that is, you do not know the data types or the number of variables that you
need.
SQLDA format
An SQL descriptor area (SQLDA) consists of four variables followed by an arbitrary number of occurrences
of a sequence of six variables collectively named SQLVAR.
Note: This SELECT statement has no INTO clause. Dynamic SELECT statements must not have an INTO
clause, even if they return only one row.
The statement is assigned to a host variable. The host variable, in this case named DSTRING, is then
processed by using the PREPARE statement as shown:
EXEC SQL
PREPARE S1 FROM :DSTRING;
Next, you need to determine the number of result columns and their data types. To do this, you need to
allocate the largest number of entries for an SQL descriptor that you think you will need. Assume that no
more than 20 columns are ever expected to be accessed by a single SELECT statement.
EXEC SQL
ALLOCATE DESCRIPTOR 'mydescr' WITH MAX 20;
Now that the descriptor is allocated, the DESCRIBE statement can be done to get the column information.
EXEC SQL
DESCRIBE S1 USING DESCRIPTOR 'mydescr';
When the DESCRIBE statement is run, SQL places values that provide information about the statement's
select-list into the SQL descriptor area defined by 'mydescr'.
If the DESCRIBE determines that not enough entries were allocated in the descriptor, SQLCODE +239
is issued. As part of this diagnostic, the second replacement text value indicates the number of entries
that are needed. The following code sample shows how this condition can be detected and shows the
descriptor allocated with the larger size.
EXEC SQL
GET DIAGNOSTICS CONDITION 1 :token = DB2_ORDINAL_TOKEN_2;
/* Move the token variable from a character host variable into an integer host variable */
EXEC SQL
SET :var1 = :token;
/* Deallocate the descriptor that is too small */
EXEC SQL
DEALLOCATE DESCRIPTOR 'mydescr';
/* Allocate the new descriptor to be the size indicated by the retrieved token */
EXEC SQL
ALLOCATE DESCRIPTOR 'mydescr' WITH MAX :var1;
/* Perform the describe with the larger descriptor */
EXEC SQL
DESCRIBE s1 USING DESCRIPTOR 'mydescr';
end;
At this point, the descriptor contains the information about the SELECT statement. Now you are ready to
retrieve the SELECT statement results. For dynamic SQL, the SELECT INTO statement is not allowed. You
must use a cursor.
EXEC SQL
DECLARE C1 CURSOR FOR S1;
EXEC SQL
OPEN C1;
EXEC SQL
FETCH C1 INTO SQL DESCRIPTOR 'mydescr';
do while not at end of data;
/* process current data returned (see below for discussion of doing this) */
EXEC SQL
FETCH C1 INTO SQL DESCRIPTOR 'mydescr';
end;
EXEC SQL
CLOSE C1;
The cursor is opened. The result rows from the SELECT statement are then returned one at a time using
a FETCH statement. On the FETCH statement, there is no list of output host variables. Instead, the FETCH
statement tells SQL to return results into the descriptor area.
After the FETCH has been processed, you can use the GET DESCRIPTOR statement to read the values.
First, you must read the header value that indicates how many descriptor entries were used.
EXEC SQL
GET DESCRIPTOR 'mydescr' :count = COUNT;
Next you can read information about each of the descriptor entries. After you determine the data type of
the result column, you can do another GET DESCRIPTOR to return the actual value. To get the value of the
indicator, specify the INDICATOR item. If the value of the INDICATOR item is negative, the value of the
DATA item is not defined. Until another FETCH is done, the descriptor items will maintain their values.
do i = 1 to count;
GET DESCRIPTOR 'mydescr' VALUE :i /* set entry number to get */
:type = TYPE, /* get the data type */
:length = LENGTH, /* length value */
:result_ind = INDICATOR;
if result_ind >= 0 then
if type = character
GET DESCRIPTOR 'mydescr' VALUE :i
:char_result = DATA; /* read data into character field */
else
if type = integer
GET DESCRIPTOR 'mydescr' VALUE :i
:int_result = DATA; /* read data into integer field */
else
/* continue checking and processing for all data types that might be returned */
end;
There are several other descriptor items that you might need to check to determine how to handle the
result data. PRECISION, SCALE, DB2_CCSID, and DATETIME_INTERVAL_CODE are among them. The host
variable that has the DATA value read into it must have the same data type and CCSID as the data being
read. If the data type is varying length, the host variable can be declared longer than the actual data. For
all other data types, the length must match exactly.
NAME, DB2_SYSTEM_COLUMN_NAME, and DB2_LABEL can be used to get name-related values for
the result column. See GET DESCRIPTOR for more information about the items returned for a GET
DESCRIPTOR statement and for the definition of the TYPE values.
If you want to run the same SELECT statement several times, using different values for LASTNAME, you
can use an SQL statement that looks like this:
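The statement itself is not shown in this excerpt. A sketch with a parameter marker in place of the
LASTNAME value (the selected columns are illustrative):

SELECT EMPNO, LASTNAME, WORKDEPT
  FROM CORPDATA.EMPLOYEE
  WHERE LASTNAME = ?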
When using parameter markers, your application does not need to set the data types and values for the
parameters until run time. By specifying a descriptor on the OPEN statement, you can substitute the
values for the parameter markers in the SELECT statement.
To code such a program, you need to use the OPEN statement with a descriptor clause. This SQL
statement is used to not only open a cursor, but to replace each parameter marker with the value of the
corresponding descriptor entry. The descriptor name that you specify with this statement must identify a
descriptor that contains a valid definition of the values. This descriptor is not used to return information
about data items that are part of a SELECT list. It provides information about values that are used to
replace parameter markers in the SELECT statement. It gets this information from the application, which
must be designed to place appropriate values into the fields of the descriptor. The descriptor is then ready
to be used by SQL for replacing parameter markers with the actual values.
When you use an SQLDA for input to the OPEN statement with the USING DESCRIPTOR clause, not all of
its fields need to be filled in. Specifically, SQLDAID, SQLRES, and SQLNAME can be left blank (SQLNAME
can be set if a specific CCSID is needed). Therefore, when you use this method for replacing parameter
markers with values, you need to determine:
• How many parameter markers there are
• The data types and attributes of these parameter markers (SQLTYPE, SQLLEN, and SQLNAME)
• Whether an indicator variable is needed
In addition, if the routine is to handle both SELECT and non-SELECT statements, you might want to
determine what category of statement it is.
If your application uses parameter markers, your program has to perform the following steps. This can be
done using either an SQLDA or an allocated descriptor.
1. Read a statement into the DSTRING varying-length character string host variable.
2. Determine the number of parameter markers.
3. Allocate an SQLDA of that size or use ALLOCATE DESCRIPTOR to allocate a descriptor with that
number of entries. This is not applicable in REXX.
4. For an SQLDA, set SQLN and SQLD to the number of parameter markers. SQLN is not applicable in
REXX. For an allocated descriptor, use SET DESCRIPTOR to set the COUNT entry to the number of
parameter markers.
5. For an SQLDA, set SQLDABC equal to SQLN*LENGTH(SQLVAR) + 16. This is not applicable in REXX.
6. For each parameter marker:
a. Determine the data types, lengths, and indicators.
b. For an SQLDA, set SQLTYPE and SQLLEN for each parameter marker. For an allocated descriptor,
use SET DESCRIPTOR to set the entries for TYPE, LENGTH, PRECISION, and SCALE for each
parameter marker.
c. For an SQLDA, allocate storage to hold the input values.
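With an allocated descriptor, steps 3 through 6 might be sketched as follows for a statement with a single
fixed-length character parameter marker (the descriptor name, the length of 15, the host variable, and the
cursor name are assumptions; TYPE 1 is the SQL descriptor code for fixed-length character):

EXEC SQL
 ALLOCATE DESCRIPTOR 'parmdesc' WITH MAX 1;
EXEC SQL
 SET DESCRIPTOR 'parmdesc' COUNT = 1;
EXEC SQL
 SET DESCRIPTOR 'parmdesc' VALUE 1
 TYPE = 1, LENGTH = 15;
EXEC SQL
 SET DESCRIPTOR 'parmdesc' VALUE 1 DATA = :lastname;
EXEC SQL
 OPEN C1 USING SQL DESCRIPTOR 'parmdesc';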
Note: If you are using the system naming convention, the names in parentheses appear instead of the
names shown above.
An interactive session consists of:
• Parameter values you specified for the STRSQL command.
• SQL statements you entered in the session, along with the corresponding messages that follow each SQL
statement.
• Values of any parameters you changed using the session services function.
• List selections you have made.
Interactive SQL supplies a unique session-ID consisting of your user ID and the current workstation ID.
This session-ID concept allows multiple users with the same user ID to use interactive SQL from more
than one workstation at the same time. Also, more than one interactive SQL session can be run from the
same workstation at the same time from the same user ID.
If an SQL session exists and is being re-entered, any parameters specified on the STRSQL command are
ignored. The parameters from the existing SQL session are used.
Related reference
Start SQL Interactive Session (STRSQL) command
Prompting
The prompt function helps you provide the necessary information for the syntax of the statement that you
want to use. The prompt function can be used in any of these statement processing modes: *RUN, *VLD,
and *SYN. Prompting is not available for all SQL statements and is not complete for many SQL statements.
When prompting a statement using a period (.) for qualifying names in *SYS naming mode, the period will
• Press F4=Prompt before typing anything on the Enter SQL Statements display. You are shown a list
of statements. The list of statements varies and depends on the current interactive SQL statement
processing mode. For syntax check mode with a language other than *NONE, the list includes all SQL
statements. For run and validate modes, only statements that can be run in interactive SQL are shown.
You can select the number of the statement you want to use. The system prompts you for the statement
you selected.
If you press F4=Prompt without typing anything, the following display appears:
1. ALTER TABLE
2. CALL
3. COMMENT ON
4. COMMIT
5. CONNECT
6. CREATE ALIAS
7. CREATE COLLECTION
8. CREATE INDEX
9. CREATE PROCEDURE
10. CREATE TABLE
11. CREATE VIEW
12. DELETE
13. DISCONNECT
14. DROP ALIAS
More...
Selection
__
F3=Exit F12=Cancel
If you press F21=Display Statement on a prompt display, the prompter displays the formatted SQL
statement as it was filled in to that point.
Syntax checking
The syntax of the SQL statement is checked when the statement enters the prompter.
The prompter does not accept a syntactically incorrect statement; you must correct the syntax or remove
the incorrect part of the statement before prompting can continue.
Subqueries
Subqueries can be selected on any display that has a WHERE or HAVING clause.
To see the subquery display, press F9=Specify subquery when the cursor is on a WHERE or HAVING
input line. A display appears that allows you to type in subselect information. If the cursor is within the
parentheses of the subquery when F9 is pressed, the subquery information is filled in on the next display.
If the cursor is outside the parentheses of the subquery, the next display is blank.
When Enter is pressed, the character string is put together, removing the extra shift characters. The
statement looks like this on the Enter SQL Statements display:
4. Press F17=Select tables to obtain a list of tables, because you want the table name to follow FROM.
Instead of a list of tables appearing as you expected, a list of collections appears (the Select and
Sequence collections display). You have just entered the SQL session and have not selected a
schema to work with.
5. Type a 1 in the Seq column next to YOURCOLL2 schema.
6. Press Enter.
The Select and Sequence Tables display appears, showing the tables existing in the YOURCOLL2
schema.
7. Type a 1 in the Seq column next to PEOPLE table.
8. Press Enter.
The Enter SQL Statements display appears again with the table name, YOURCOLL2.PEOPLE, inserted
after FROM. The table name is qualified by the schema name in the *SQL naming convention.
Note: When you are connected to a Db2 for Linux®, UNIX, and Windows or Db2 for z/OS application server,
the date and time formats specified must be the same.
After the connection is completed, a message is sent stating that the session attributes have been
changed. The changed session attributes can be displayed by using the session services display. While
interactive SQL is running, no other connection can be established for the default activation group.
When connected to a remote system with interactive SQL, a statement processing mode of syntax-only
checks the syntax of the statement against the syntax supported by the local system instead of the
remote system. Similarly, the SQL prompter and list support use the statement syntax and naming
conventions supported by the local system. The statement is run, however, on the remote system.
Because of differences in the level of SQL support between the two systems, syntax errors may be found
in the statement on the remote system at run time.
Lists of schemas and tables are available when you are connected to the local relational database. Lists
of columns are available only when you are connected to a relational database manager that supports the
DESCRIBE TABLE statement.
When you exit interactive SQL with connections that have pending changes or connections that use
protected conversations, the connections remain. If you do not perform additional work over the
connections, the connections are ended during the next COMMIT or ROLLBACK operation. You can also
end the connections by doing a RELEASE ALL and a COMMIT before exiting interactive SQL.
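The cleanup just described can be entered directly on the Enter SQL Statements display before exiting:

```sql
RELEASE ALL;
COMMIT;
```

RELEASE ALL marks every connection for release, and the COMMIT then ends them.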
Using interactive SQL for remote access to application servers other than Db2 for i might require some
setup.
The output listing containing the resulting messages for the SQL statements and CL commands is sent to
a print file. The default print file is QSYSPRT.
The OPTION parameter lets you choose to get an output listing or to have errors written to the joblog.
There is also an option to generate a listing only when errors are encountered during processing.
To perform syntax checking only on all statements in the source, specify the PROCESS(*SYN) parameter
on the RUNSQLSTM command. To see more details for error messages in the listing, specify the
SECLVLTXT(*YES) parameter.
Related reference
Run SQL Statement (RUNSQLSTM) command
xxxxSS1 VxRxMx yymmdd Run SQL Statements SCHEMA 02/15/08 15:35:18 Page 1
Source file...............CORPDATA/SRC
Member....................SCHEMA
Commit....................*NONE
Naming....................*SYS
Generation level..........10
Date format...............*JOB
Date separator............*JOB
Right margin..............80
Time format...............*HMS
Time separator ...........*JOB
Default collection........*NONE
IBM SQL flagging..........*NOFLAG
ANS flagging..............*NONE
Decimal point.............*JOB
Sort sequence.............*JOB
Language ID...............*JOB
Printer file..............*LIBL/QSYSPRT
Source file CCSID.........65535
Job CCSID.................37
Statement processing......*RUN
Allow copy of data........*OPTIMIZE
Allow blocking............*ALLREAD
SQL rules.................*DB2
Decimal result options:
Maximum precision.......31
Maximum scale...........31
Minimum divide scale....0
Source member changed on 11/01/07 11:54:10
MSG ID SEV RECORD TEXT
SQL7953 0 1 Position 1 Drop of DEPT in QSYS complete.
SQL7953 0 3 Position 3 Drop of MANAGER in QSYS complete.
SQL7952 0 5 Position 3 Schema DEPT created.
SQL7950 0 6 Position 8 Table EMP created in DEPT.
SQL7954 0 8 Position 8 Index EMPIND created in DEPT on table EMP in
DEPT.
SQL7966 0 10 Position 8 GRANT of authority to EMP in DEPT completed.
SQL7956 0 10 Position 40 1 rows inserted in EMP in DEPT.
SQL7952 0 13 Position 28 Schema MANAGER created.
SQL7950 0 19 Position 9 Table EMP_SALARY created in schema
MANAGER.
SQL7951 0 21 Position 9 View LEVEL created in MANAGER.
SQL7954 0 23 Position 9 Index SALARYIND created in MANAGER on table
EMP_SALARY in MANAGER.
SQL7966 0 25 Position 9 GRANT of authority to LEVEL in MANAGER
completed.
SQL7966 0 25 Position 37 GRANT of authority to EMP_SALARY in MANAGER
completed.
Message Summary
Total Info Warning Error Severe Terminal
13 13 0 0 0 0
00 level severity errors found in source
* * * * * E N D O F L I S T I N G * * * * *
In a CL program, you could use the Receive File (RCVF) command to read the results of the table
generated for this query:
Create a CL program that constructs and runs an SQL statement using an input parameter as part of the
statement:
To call the first display that allows you to customize the example program, specify the following command
on the command line.
The following display opens. From this display, you can customize your database example program.
CRTSQLPKG authorization
When you create an SQL package on the IBM i operating system, the authorization ID used must have
*USE authority to the Create SQL Package (CRTSQLPKG) command.
Unit of work
Because package creation implicitly performs a commit or rollback, the commit definition must be at a
unit of work boundary before the package creation is attempted.
The following conditions must all be true for a commit definition to be at a unit of work boundary:
• SQL is at a unit of work boundary.
• There are no local or DDM files open using commitment control and no closed local or DDM files with
pending changes.
• There are no API resources registered.
• There are no LU 6.2 resources registered that are not associated with DRDA or DDM.
Labels
You can use the LABEL ON statement to create a description for an SQL package.
Consistency token
The program and its associated SQL package contain a consistency token that is checked when a call is
made to the SQL package.
The consistency tokens must match; otherwise, the package cannot be used. It is possible for the
program and SQL package to appear to be uncoordinated. Assume that the program and the application
server are on two distinct IBM i operating systems. The program is running in session A and it is
re-created in session B (where the SQL package is also re-created). The next call to the program in
session A might cause a consistency token error. To avoid locating the SQL package on each call, SQL
maintains a list of addresses for SQL packages that are used by each session. When session B re-creates
the SQL package, the old SQL package is moved to the QRPLOBJ library. The address to the SQL package
in session A is still valid. You can avoid this situation by creating the program and SQL package from the
session that is running the program, or by submitting a remote command to delete the old SQL package
before creating the program.
To use the new SQL package, you should end the connection with the remote system. You can either sign
off the session and then sign on again, or you can use the interactive SQL (STRSQL) command to issue
a DISCONNECT for unprotected network connections or a RELEASE followed by a COMMIT for protected
connections. RCLDDMCNV should then be used to end the network connections. Call the program again.
...
EXEC SQL
CONNECT TO SYSB
END-EXEC.
EXEC SQL
SELECT ...
END-EXEC.
CALL PGM2.
...
...
EXEC SQL
CONNECT TO SYSC;
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT ...;
EXEC SQL
OPEN C1;
do {
EXEC SQL
FETCH C1 INTO :st1;
EXEC SQL
UPDATE ...
SET COL1 = COL1+10
WHERE CURRENT OF C1;
PGM3(st1);
} while (SQLCODE == 0);
EXEC SQL
CLOSE C1;
EXEC SQL COMMIT;
...
...
EXEC SQL
INSERT INTO TAB VALUES(:st1);
EXEC SQL COMMIT;
...
Distributed support
Db2 for i supports these levels of distributed relational database.
• Remote unit of work (RUW)
Remote unit of work is where the preparation and running of SQL statements occurs at only one
application server during a unit of work. An activation group with an application process at an
application requester can connect to an application server and, within one or more units of work, run
any number of static or dynamic SQL statements that refer to objects on the application server. Remote
unit of work is also referred to as DRDA level 1.
• Distributed unit of work (DUW)
Distributed unit of work is where the preparation and running of SQL statements can occur at multiple
application servers during a unit of work. However, a single SQL statement can only refer to objects
located at a single application server. Distributed unit of work is also referred to as DRDA level 2.
Distributed unit of work allows:
– Update access to multiple application servers in one logical unit of work
or
– Update access to a single application server with read access to multiple application servers, in one
logical unit of work.
Whether multiple application servers can be updated in a unit of work is dependent on the existence of
a sync point manager at the application requester, sync point managers at the application servers, and
two-phase commit protocol support between the application requester and the application servers.
The sync point manager is a system component that coordinates commit and rollback operations among
the participants in the two-phase commit protocol. When running distributed updates, the sync point
managers on the different systems cooperate to ensure that resources reach a consistent state. The
protocols and flows used by sync point managers are also referred to as two-phase commit protocols.
If two-phase commit protocols will be used, the connection is a protected resource; otherwise the
connection is an unprotected resource.
Related concepts
Commitment control
Related reference
Accessing remote databases with interactive SQL
In interactive SQL, you can communicate with a remote relational database by using the SQL CONNECT
statement. Interactive SQL uses the CONNECT (Type 2) semantics (distributed unit of work) for CONNECT
statements.
If an attempt is made to perform a committable update over a read-only connection, the unit of work will
be placed in a rollback-required state. If a unit of work is in a rollback-required state, the only statement
allowed is a ROLLBACK statement; all other statements will result in SQLCODE -918.
...
EXEC SQL WHENEVER SQLERROR GO TO done;
EXEC SQL WHENEVER NOT FOUND GO TO done;
...
EXEC SQL
DECLARE C1 CURSOR WITH HOLD FOR
SELECT PARTNO, PRICE
FROM PARTS
WHERE SITES_UPDATED = 'N'
FOR UPDATE OF SITES_UPDATED;
/* Connect to the systems */
EXEC SQL CONNECT TO LOCALSYS;
EXEC SQL CONNECT TO SYSB;
EXEC SQL CONNECT TO SYSC;
/* Make the local system the current connection */
EXEC SQL SET CONNECTION LOCALSYS;
/* Open the cursor */
EXEC SQL OPEN C1;
while (SQLCODE==0)
{
/* Fetch the first row */
EXEC SQL FETCH C1 INTO :partnumber,:price;
/* Update the row which indicates that the updates have been
propagated to the other sites */
EXEC SQL UPDATE PARTS SET SITES_UPDATED='Y'
WHERE CURRENT OF C1;
/* Check if the part data is on SYSB */
if ((partnumber > 10) && (partnumber < 100))
{
/* Make SYSB the current connection and update the price */
EXEC SQL SET CONNECTION SYSB;
EXEC SQL UPDATE PARTS
SET PRICE=:price
WHERE PARTNO=:partnumber;
}
...
EXEC SQL
SET CONNECTION SYS5
END-EXEC.
...
* Check if the connection is updatable.
EXEC SQL CONNECT END-EXEC.
* If connection is updatable, update sales information otherwise
* inform the user.
IF SQLERRD(3) = 1 THEN
EXEC SQL
INSERT INTO SALES_TABLE
VALUES(:SALES-DATA)
END-EXEC
ELSE
DISPLAY 'Unable to update sales information at this time'.
...
WebSphere MQ messages
WebSphere MQ uses messages to pass information between applications.
Messages consist of the following parts:
• The message attributes, which identify the message and its properties.
• The message data, which is the application data that is carried in the message.
Db2 MQ services
A service describes a destination to which an application sends messages or from which an application
receives messages. Db2 MQ services are defined in the Db2 table SYSIBM.MQSERVICE_TABLE.
When an application program invokes a Db2 MQ function, the program selects a service from
SYSIBM.MQSERVICE_TABLE by specifying it as a parameter.
The SYSIBM.MQSERVICE_TABLE is automatically created by Db2, but once created, it is user-managed
and typically maintained by a system administrator. Db2 initially provides a row for the default service.
The default service is DB2.DEFAULT.SERVICE.
A new service can be added simply by issuing an INSERT statement. For example, the following statement
adds a new service called MYSERVICE. The MQ default queue manager and the MYQUEUE input queue
will be used for this new service.
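The INSERT statement itself is not reproduced above. A plausible form, assuming the column names SERVICENAME, QUEUEMANAGER, and INPUTQUEUE, and relying on the rule (described later) that a blank queue manager name selects the default queue manager, is:

```sql
-- Hedged reconstruction; unlisted columns are assumed to take defaults.
INSERT INTO SYSIBM.MQSERVICE_TABLE
       (SERVICENAME, QUEUEMANAGER, INPUTQUEUE)
VALUES ('MYSERVICE', ' ', 'MYQUEUE')
```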
A service can be changed (including the default service) simply by issuing an UPDATE statement. For
performance reasons, Db2 caches the service information so any existing job that has already used an MQ
function will typically not detect a concurrent change to a service. The following statement updates the
service called MYSERVICE by changing the input queue to MYQUEUE2.
UPDATE SYSIBM.MQSERVICE_TABLE
SET INPUTQUEUE = 'MYQUEUE2'
WHERE SERVICENAME = 'MYSERVICE'
Note: The default input queue initially used by the default service DB2.DEFAULT.SERVICE is the MQ
model queue called SYSTEM.DEFAULT.LOCAL.QUEUE. The default input queue used by other Db2
products is DB2MQ_DEFAULT_Q. For compatibility with other Db2 products, you may wish to create a
new input queue called DB2MQ_DEFAULT_Q and update the default service. For example:
UPDATE SYSIBM.MQSERVICE_TABLE
SET INPUTQUEUE = 'DB2MQ_DEFAULT_Q'
WHERE SERVICENAME = 'DB2.DEFAULT.SERVICE'
Db2 MQ policies
A policy controls how the MQ messages are handled. Db2 MQ policies are defined in the Db2 table
SYSIBM.MQPOLICY_TABLE. When an application program invokes a Db2 MQ function, the program
selects a policy from SYSIBM.MQPOLICY_TABLE by specifying it as a parameter.
The SYSIBM.MQPOLICY_TABLE is automatically created by Db2, but once created, it is user-managed and
typically maintained by a system administrator. Db2 initially provides a row for the default policy. The
default policy is DB2.DEFAULT.POLICY .
A new policy can be added simply by issuing an INSERT statement. For example, the following statement
adds a new policy called MYPOLICY. Since the value of the SYNCPOINT column is 'N', any MQ functions
that use MYPOLICY will not run under a transaction.
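The INSERT statement itself is not reproduced above. A plausible form, assuming the column names POLICYNAME and SYNCPOINT and that the remaining policy columns take their default values, is:

```sql
-- Hedged reconstruction; SYNCPOINT = 'N' keeps MQ work outside a transaction.
INSERT INTO SYSIBM.MQPOLICY_TABLE (POLICYNAME, SYNCPOINT)
VALUES ('MYPOLICY', 'N')
```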
A policy can be changed (including the default policy) simply by issuing an UPDATE statement. For
performance reasons, Db2 caches the policy information so any existing job that has already used an MQ
function will typically not detect a concurrent change to a policy. The following statement updates the
policy called MYPOLICY by changing the retry interval to 5 seconds.
UPDATE SYSIBM.MQPOLICY_TABLE
SET SEND_RETRY_INTERVAL = 5000
WHERE POLICYNAME = 'MYPOLICY'
Db2 MQ functions
You can use the Db2 MQ functions to send messages to a message queue or to receive messages from the
message queue. You can send a request to a message queue and receive a response.
The Db2 MQ functions support the following types of operations:
• Send and forget, where no reply is needed.
• Read or receive, where one or all messages are either read without removing them from the queue, or
received and removed from the queue.
• Request and response, where a sending application needs a response to a request.
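As a rough sketch, the first two operation types map to the MQ scalar functions as follows (default service and policy assumed; the message text is illustrative):

```sql
VALUES MQSEND('status update');   -- send and forget
VALUES MQREAD();                  -- read without removing from the queue
VALUES MQRECEIVE();               -- receive and remove from the queue
```

Request and response combines a send with a later receive that is matched by correlation identifier.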
Notes:
1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a
message in a VARCHAR variable is 32000. The maximum length for a message in a CLOB variable is 2 MB.
2. The first column of the result table of a Db2 MQ table function contains the message.
Db2 MQ dependencies
In order to use the Db2 MQ functions, IBM MQSeries® for IBM i must be installed, configured, and
operational.
Detailed information on installation and configuration can be found in the WebSphere MQSeries
Information Center. At a minimum, the following steps are necessary once you have completed the
installation of IBM MQSeries for IBM i:
1. Start the MQSeries subsystem
2. Create the default MQ message queue manager. Before creating it, ensure that the job default CCSID is
set to the CCSID you intend to use with MQ, because the CCSID of the MQ message queue manager is
set to the job default CCSID when it is created.
3. Start the default MQ message queue manager.
Db2 MQ tables
The Db2 MQ tables contain service and policy definitions that are used by the Db2 MQ functions.
The Db2 MQ tables are SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE. These tables
are user-managed. The tables are initially created by Db2 and populated with one default service
(DB2.DEFAULT.SERVICE) and one default policy (DB2.DEFAULT.POLICY). You may modify the attributes
in these tables; the columns are described below.
QUEUEMANAGER This column contains the name of the queue manager where the MQ functions
are to establish a connection.
If the column consists of 48 blanks, the name of the default MQSeries queue
manager is used.
INPUTQUEUE This column contains the name of the queue from which the MQ functions are
to send and retrieve messages.
CODEDCHARSETID This column contains the character set identifier (CCSID) for data in the
messages that are sent and received by the MQ functions.
This column corresponds to the CodedCharSetId field (MDCSI) in the message
descriptor (MQMD). MQ functions use the value in this column to set the
CodedCharSetId field.
The default value for this column is -3, which causes Db2 to set the
CodedCharSetId field (MDCSI) to the default job CCSID.
ENCODING This column contains the encoding value for the numeric data in the messages
that are sent and received by the MQ functions.
This column corresponds to the Encoding field (MDENC) in the message
descriptor (MQMD). MQ functions use the value in this column to set the
Encoding field.
The default value for this column is 0, which sets the Encoding field (MDENC)
to the value MQENC_NATIVE.
SEND_EXPIRY This column contains the message expiration time, in tenths of a second.
This column corresponds to the Expiry field in the message descriptor
(MQMD). MQ functions use the value in this column to set the Expiry field.
The default value is -1, which sets the Expiry field to the value
MQEI_UNLIMITED.
SEND_RETRY_COUNT This column contains the number of times that the MQ function is to try to
send a message if the procedure fails.
The default value is 5.
SEND_RETRY_INTERVAL This column contains the interval, in milliseconds, between each attempt to
send a message.
The default value is 1000.
SEND_NEW_CORRELID This column specifies how the correlation identifier is to be set if a correlation
identifier is not passed as an input parameter in the MQ function. The
correlation identifier is set in the CorrelId field in the message descriptor
(MQMD).
This column can have one of the following values:
N
Sets the CorrelId field in the MQMD to binary zeros. This value is the
default.
Y
Specifies that the queue manager is to generate a new correlation
identifier and set the CorrelId field in the MQMD to that value. This 'Y'
value is equivalent to setting the MQPMO_NEW_CORREL_ID option in the
Options field in the put message options (MQPMO).
SEND_RESPONSE_CORRELID This column specifies how the CorrelID field in the message descriptor
(MQMD) is to be set for report and reply messages.
This column corresponds to the Report field in the MQMD. MQ functions use
the value in this column to set the Report field.
This column can have one of the following values:
C
Sets the MQRO_COPY_MSG_ID_TO_CORREL_ID option in the Report field
in the MQMD. This value is the default.
P
Sets the MQRO_PASS_CORREL_ID option in the Report field in the MQMD.
SEND_EXCEPTION_ACTION This column specifies what to do with the original message when it cannot be
delivered to the destination queue.
This column corresponds to the Report field in the message descriptor
(MQMD). MQ functions use the value in this column to set the Report field.
This column can have one of the following values:
Q
Sets the MQRO_DEAD_LETTER_Q option in the Report field in the MQMD.
This value is the default.
D
Sets the MQRO_DISCARD_MSG option in the Report field in the MQMD.
P
Sets the MQRO_PASS_DISCARD_AND_EXPIRY option in the Report field in
the MQMD.
SEND_REPORT_COA This column specifies whether the queue manager is to send a confirm-on-
arrival (COA) report message when the message is placed in the destination
queue, and if so, what that COA message is to contain.
This column corresponds to the Report field in the message descriptor
(MQMD). MQ functions use the value in this column to set the Report field.
This column can have one of the following values:
N
Specifies that a COA message is not to be sent. No options in the Report
field are set. This value is the default.
C
Sets the MQRO_COA option in the Report field in the MQMD.
D
Sets the MQRO_COA_WITH_DATA option in the Report field in the MQMD.
F
Sets the MQRO_COA_WITH_FULL_DATA option in the Report field in the
MQMD.
SEND_REPORT_EXPIRY This column specifies whether the queue manager is to send an expiration
report message if a message is discarded before it is delivered to an
application, and if so, what that message is to contain.
This column corresponds to the Report field in the message descriptor
(MQMD). MQ functions use the value in this column to set the Report field.
This column can have one of the following values:
N
Specifies that an expiration report message is not to be sent. No options in
the Report field are set. This value is the default.
C
Sets the MQRO_EXPIRATION option in the Report field in the MQMD.
D
Sets the MQRO_EXPIRATION_WITH_DATA option in the Report field in the
MQMD.
F
Sets the MQRO_EXPIRATION_WITH_FULL_DATA option in the Report field
in the MQMD.
REPLY_TO_Q This column contains the name of the message queue to which the application
that issued the MQGET call is to send reply and report messages.
This column corresponds to the ReplyToQ field in the message descriptor
(MQMD). MQ functions use the value in this column to set the ReplyToQ field.
The default value for this column is SAME AS INPUT_Q, which sets the name
to the queue name that is defined in the service that was used for sending the
message. If no service was specified, the name is set to the name of the queue
manager for the default service.
RCV_WAIT_INTERVAL This column contains the time, in milliseconds, that Db2 is to wait for
messages to arrive in the queue.
This column corresponds to the WaitInterval field in the get message options
(MQGMO). MQ functions use the value in this column to set the WaitInterval
field.
The default is 10.
RCV_CONVERT This column indicates whether to convert the application data in the message
to conform to the CodedCharSetId and Encoding values that are defined in the
service used for the function.
This column corresponds to the Options field in the get message options
(MQGMO). MQ functions use the value in this column to set the Options field.
This column can have one of the following values:
Y
Sets the MQGMO_CONVERT option in the Options field in the MQGMO. This
value is the default.
N
Specifies that no data is to be converted.
RCV_ACCEPT_TRUNC_MSG This column specifies the behavior of the MQ function when oversized
messages are retrieved.
This column corresponds to the Options field in the get message options
(MQGMO). MQ functions use the value in this column to set the Options field.
This column can have one of the following values:
Y
Sets the MQGMO_ACCEPT_TRUNCATED_MSG option in the Options field in
the MQGMO. This value is the default.
N
Specifies that no messages are to be truncated. If the message is too large
to fit in the buffer, the MQ function terminates with an error.
Recommendation: Set this column to Y. In this case, if the message buffer is
too small to hold the complete message, the MQ function can fill the buffer
with as much of the message as the buffer can hold.
SYNCPOINT This column indicates whether the MQ function is to operate within the
protocol for a normal unit of work.
This column can have one of the following values:
Y
Specifies that the MQ function is to operate within the protocol for a
normal unit of work. Use this value if you want the MQ function to run
under a transaction. This value is the default.
N
Specifies that the MQ function is to operate outside the protocol for a
normal unit of work. Use this value if you do not want the MQ function to
run under a transaction.
WebSphere MQ transactions
WebSphere MQ can send and receive messages as part of a transaction or outside a transaction. If a
message is sent or received as part of a transaction, the transaction can include other resources such as
Db2 operations.
WebSphere MQ can serve as the transaction manager itself or participate as a resource manager to other
transaction managers (such as CICS®, Encina, and Tuxedo). Detailed information on transactions and
supported external transaction managers can be found in the WebSphere MQSeries Information Center.
WebSphere MQ can also participate in transactions managed by IBM i commitment control. Commitment
control is automatically started by Db2 when an SQL statement is executed and it is determined that
commitment control is not yet started. By default, the commitment definition scope is *ACTGRP. In order
for MQ functions and Db2 operations that occur in the same job to participate in the same transaction
managed by IBM i commitment control, commitment control must be started with a scope of *JOB. For
example, the following two CL commands end the current commitment definition and start one with a
scope of *JOB:
ENDCMTCTL
STRCMTCTL LCKLVL(*CHG) CMTSCOPE(*JOB)
In order to start commitment control with a scope of *JOB in a server job, the end and start commitment
control CL commands must be performed together in a procedure. The following steps create
a sample procedure called START_JOB_LEVEL_COMMIT, which can then be called with an SQL CALL
statement:
1. Enter the following source in a source file member JOBSCOPE in MJASRC/CL:
PGM
ENDCMTCTL
STRCMTCTL LCKLVL(*CHG) CMTSCOPE(*JOB)
ENDPGM
2. Create the CL program from the source member.
The procedure can typically be called once in a job. Once the commitment definition with a *JOB scope
is started, MQ operations and Db2 operations can be performed as part of a transaction. For example,
the following INSERT statement and the MQSEND function will be performed under the same transaction.
If the transaction is committed, both operations are committed. If the transaction is rolled back, both
operations are rolled back.
CALL MJATST.START_JOB_LEVEL_COMMIT;
INSERT INTO MJATST.T1
VALUES(1);
VALUES MQSEND('A commit test message');
COMMIT;
WebSphere MQ can send and receive messages as part of a transaction or outside a transaction. In a
Db2 MQ function this is controlled by the specified policy. Each policy has a SYNCPOINT attribute. If the
SYNCPOINT column for a policy has a value of 'N', any Db2 MQ function that uses that policy will not
participate in a transaction. If the SYNCPOINT column for a policy has a value of 'Y', any Db2 MQ function
that uses that policy and changes the input queue will participate in a transaction.
The message indicates that the MQCONN API failed for reason code 2058. The WebSphere MQSeries
Information Center contains the detailed description of reason code 2058.
The following example uses the default service DB2.DEFAULT.SERVICE and the default policy
DB2.DEFAULT.POLICY which has a SYNCPOINT value of 'Y'. Because this MQSEND function runs under a
transaction, the COMMIT statement ensures that the message is added to the queue.
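The example itself is not reproduced above. A minimal form, with the message text assumed, is:

```sql
-- MQSEND with no service or policy argument uses the defaults;
-- SYNCPOINT 'Y' means the send is committed by the COMMIT.
VALUES MQSEND('simple message');
COMMIT;
```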
When you use a policy that contains a SYNCPOINT value of 'N', you do not need to use a COMMIT
statement. For example, assume that policy MYPOLICY2 has a SYNCPOINT value of 'N'. The following SQL
statement causes the message to be added to the queue without the need for a COMMIT statement.
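The statement itself is not reproduced above. A plausible form, assuming the three-argument MQSEND signature (service, policy, message text), is:

```sql
-- MYPOLICY2 has SYNCPOINT 'N', so no COMMIT is needed.
VALUES MQSEND('DB2.DEFAULT.SERVICE', 'MYPOLICY2', 'simple message')
```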
Message content can be any combination of SQL statements, expressions, functions, and user-specified
data. Assume that you have an EMPLOYEE table, with VARCHAR columns LASTNAME, FIRSTNAME, and
DEPARTMENT. To send a message that contains this information for each employee in DEPARTMENT
5LGA, issue the following SQL SELECT statement:
SELECT MQSEND(LASTNAME CONCAT ' ' CONCAT FIRSTNAME CONCAT ' ' CONCAT DEPARTMENT)
FROM EMPLOYEE
WHERE DEPARTMENT = '5LGA';
COMMIT;
The following SQL statement causes the contents of a queue to be returned as a result set. The result
table T of the table function consists of all the messages in the queue, which is defined by the default
service, and the metadata about those messages. The first column of the materialized result table is the
message itself, and the remaining columns contain the metadata. The SELECT statement returns both the
messages and the metadata.
SELECT T.*
FROM TABLE ( MQREADALL() ) AS T;
The following statement only returns the messages. The result table T of the table function consists of
all the messages in the queue, which is defined by the default service, and the metadata about those
messages.
SELECT T.MSG
FROM TABLE ( MQREADALL() ) AS T;
The following SQL statement receives (removes) the message at the head of the queue. The SELECT
statement returns a VARCHAR(32000) string. Because this MQRECEIVE function runs under a transaction,
the COMMIT statement ensures that the message is removed from the queue. If no messages are
available to be retrieved, a null value is returned, and the queue does not change.
VALUES MQRECEIVE();
COMMIT;
Assume that you have a MESSAGES table with a single VARCHAR(32000) column. The following SQL
INSERT statement inserts all of the messages from the default service queue into the MESSAGES table:
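The INSERT statement itself is not reproduced in this text. A sketch consistent with the description (a MESSAGES table with a single VARCHAR(32000) column, and the MSG column of the MQREADALL result table shown earlier) would be:

```sql
INSERT INTO MESSAGES
  SELECT T.MSG
  FROM TABLE ( MQREADALL() ) AS T;
```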
The following SQL statement receives the first message that matches the identifier CORRID1 from the
queue that is specified by the service MYSERVICE, using the policy MYPOLICY. The SQL statement returns
a VARCHAR(32000) string. If no messages are available with this correlation identifier, a null value is
returned, and the queue does not change.
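The statement described here is not shown in this text. Using the service, policy, and correlation identifier named above, it would presumably be:

```sql
VALUES MQRECEIVE('MYSERVICE', 'MYPOLICY', 'CORRID1');
```

Whether a COMMIT is needed afterward depends on the SYNCPOINT attribute of MYPOLICY, which is not specified here.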
Sample tables
This group of tables includes information that describes employees, departments, projects, and activities.
This information forms a sample application that demonstrates some of the features of SQL. All examples
assume that the tables are in a schema named CORPDATA (for corporate data).
A stored procedure is included as part of the system that contains the data definition language (DDL)
statements to create all of these tables and the INSERT statements to populate them. The procedure
creates the schema specified on the call to the procedure. Because this is an external stored procedure,
it can be called from any SQL interface, including interactive SQL and IBM i Access Client Solutions (ACS).
To call the procedure where SAMPLE is the schema that you want to create, issue the following statement:
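The statement itself is not reproduced in this text. On Db2 for i, the supplied external procedure for creating these sample tables is QSYS.CREATE_SQL_SAMPLE (assumed here to be the procedure this section describes), so the call would be:

```sql
CALL QSYS.CREATE_SQL_SAMPLE('SAMPLE');
```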
The schema name must be specified in uppercase. The schema must not already exist.
Note: In the sample table data descriptions, a question mark (?) indicates a null value.
DEPARTMENT
Here is a complete listing of the data in the DEPARTMENT table.
EMP_PHOTO
Here is a complete listing of the data in the EMP_PHOTO table.
EMP_RESUME
Here is a complete listing of the data in the EMP_RESUME table.
EMPPROJACT
Here is a complete listing of the data in the EMPPROJACT table.
PROJECT
Here is a complete listing of the data in the PROJECT table.
PROJACT
Here is a complete listing of the data in the PROJACT table.
PROJNO ACTNO ACSTAFF ACSTDATE ACENDATE
AD3100 10 ? 1982-01-01 ?
AD3110 10 ? 1982-01-01 ?
AD3111 60 ? 1982-01-01 ?
AD3111 60 ? 1982-03-15 ?
AD3111 70 ? 1982-03-15 ?
AD3111 80 ? 1982-04-15 ?
AD3111 180 ? 1982-10-15 ?
AD3111 70 ? 1982-02-15 ?
AD3111 80 ? 1982-09-15 ?
AD3112 60 ? 1982-01-01 ?
AD3112 60 ? 1982-02-01 ?
AD3112 60 ? 1983-01-01 ?
AD3112 70 ? 1982-02-01 ?
AD3112 70 ? 1982-03-15 ?
AD3112 70 ? 1982-08-15 ?
AD3112 80 ? 1982-08-15 ?
AD3112 80 ? 1982-10-15 ?
AD3112 180 ? 1982-08-15 ?
AD3113 70 ? 1982-06-15 ?
AD3113 70 ? 1982-07-01 ?
AD3113 80 ? 1982-01-01 ?
AD3113 80 ? 1982-03-01 ?
AD3113 180 ? 1982-03-01 ?
AD3113 180 ? 1982-04-15 ?
AD3113 180 ? 1982-06-01 ?
AD3113 60 ? 1982-03-01 ?
AD3113 60 ? 1982-04-01 ?
ACT
Here is a complete listing of the data in the ACT table.
CL_SCHED
Here is a complete listing of the data in the CL_SCHED table.
IN_TRAY
Here is a complete listing of the data in the IN_TRAY table.
Each row is shown as its RECEIVED, SOURCE, SUBJECT, and NOTE_TEXT values:

RECEIVED: 1988-12-25-17.12.30.000000
SOURCE: BADAMSON
SUBJECT: FWD: Fantastic year! 4th Quarter Bonus.
NOTE_TEXT: To: JWALKER Cc: QUINTANA, NICHOLLS Jim, Looks like our hard work has paid off. I have some good beer in the fridge if you want to come over to celebrate a bit. Delores and Heather, are you interested as well? Bruce <Forwarding from ISTERN> Subject: FWD: Fantastic year! 4th Quarter Bonus. To: Dept_D11 Congratulations on a job well done. Enjoy this year's bonus. Irv <Forwarding from CHAAS> Subject: Fantastic year! 4th Quarter Bonus. To: All_Managers Our 4th quarter results are in. We pulled together as a team and exceeded our plan! I am pleased to announce a bonus this year of 18%. Enjoy the holidays. Christine Haas

RECEIVED: 1988-12-23-08.53.58.000000
SOURCE: ISTERN
SUBJECT: FWD: Fantastic year! 4th Quarter Bonus.
NOTE_TEXT: To: Dept_D11 Congratulations on a job well done. Enjoy this year's bonus. Irv <Forwarding from CHAAS> Subject: Fantastic year! 4th Quarter Bonus. To: All_Managers Our 4th quarter results are in. We pulled together as a team and exceeded our plan! I am pleased to announce a bonus this year of 18%. Enjoy the holidays. Christine Haas

RECEIVED: 1988-12-22-14.07.21.136421
SOURCE: CHAAS
SUBJECT: Fantastic year! 4th Quarter Bonus.
NOTE_TEXT: To: All_Managers Our 4th quarter results are in. We pulled together as a team and exceeded our plan! I am pleased to announce a bonus this year of 18%. Enjoy the holidays. Christine Haas
ORG
Here is a complete listing of the data in the ORG table.
STAFF
Here is a complete listing of the data in the STAFF table.
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 18357.50 ?
20 Pernal 20 Sales 8 18171.25 612.45
30 Marenghi 38 Mgr 5 17506.75 ?
40 O'Brien 38 Sales 6 18006.00 846.55
50 Hanes 15 Mgr 10 20659.80 ?
60 Quigley 38 Sales 7 16508.30 650.25
70 Rothman 15 Sales 7 16502.83 1152.00
80 James 20 Clerk ? 13504.60 128.20
90 Koonitz 42 Sales 6 18001.75 1386.70
100 Plotz 42 Mgr 7 18352.80 ?
110 Ngan 15 Clerk 5 12508.20 206.60
120 Naughton 38 Clerk ? 12954.75 180.00
130 Yamaguchi 42 Clerk 6 10505.90 75.60
140 Fraye 51 Mgr 6 21150.00 ?
150 Williams 51 Sales 6 19456.50 637.65
160 Molinare 10 Mgr 7 22959.20 ?
170 Kermisch 15 Clerk 4 12258.50 110.10
180 Abrahams 38 Clerk 3 12009.75 236.50
190 Sneider 20 Clerk 8 14252.75 126.50
200 Scoutten 42 Clerk ? 11508.60 84.20
210 Lu 10 Mgr 10 20010.00 ?
220 Smith 51 Sales 7 17654.50 992.80
230 Lundquist 51 Clerk 3 13369.80 189.65
240 Daniels 10 Mgr 5 19260.25 ?
250 Wheeler 51 Clerk 6 14460.00 513.30
260 Jones 10 Mgr 12 21234.00 ?
SALES
Here is a complete listing of the data in the SALES table.
The schema name must be specified in uppercase. The schema will be created if it does not already exist.
Make sure that your job's CCSID is set to something other than 65535 before calling the procedure;
otherwise, the procedure will fail with errors.
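The call statement for this second set of sample tables is not shown in this text. On Db2 for i, the supplied procedure for the XML sample tables is QSYS.CREATE_XML_SAMPLE (an assumption based on the product's documented sample procedures), so where SAMPLE is the schema that you want to use, the call would presumably be:

```sql
CALL QSYS.CREATE_XML_SAMPLE('SAMPLE');
```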
Note: In the sample table data descriptions, a question mark (?) indicates a null value.
PRODUCT
Here is a complete listing of the data in the PRODUCT table.
CUSTOMER
Here is a complete listing of the data in the CUSTOMER table.
CID INFO (shown as it appears after using the XMLSERIALIZE scalar function to convert it to
serialized character data) HISTORY
Note: In each entry below, the ? that follows the CID value is the null HISTORY value.
1000 ?
<customerinfo Cid="1000">
<name>Kathy Smith</name>
<addr country="Canada">
<street>5 Rosewood</street>
<city>Toronto</city>
<prov-state>Ontario</prov-state>
<pcode-zip>M6W 1E6</pcode-zip>
</addr>
<phone type="work">416-555-1358</phone>
</customerinfo>
1001 ?
<customerinfo Cid="1001">
<name>Kathy Smith</name>
<addr country="Canada">
<street>25 EastCreek</street>
<city>Markham</city>
<prov-state>Ontario</prov-state>
<pcode-zip>N9C 3T6</pcode-zip>
</addr>
<phone type="work">905-555-7258</phone>
</customerinfo>
1002 ?
<customerinfo Cid="1002">
<name>Jim Noodle</name>
<addr country="Canada">
<street>25 EastCreek</street>
<city>Markham</city>
<prov-state>Ontario</prov-state>
<pcode-zip>N9C 3T6</pcode-zip>
</addr>
<phone type="work">905-555-7258</phone>
</customerinfo>
1004 ?
<customerinfo Cid="1004">
<name>Matt Foreman</name>
<addr country="Canada">
<street>1596 Baseline</street>
<city>Toronto</city>
<prov-state>Ontario</prov-state>
<pcode-zip>M3Z 5H9</pcode-zip>
</addr>
<phone type="work">905-555-4789</phone>
<phone type="home">416-555-3376</phone>
<assistant>
<name>Gopher Runner</name>
<phone type="home">416-555-3426</phone>
</assistant>
</customerinfo>
1005 ?
<customerinfo Cid="1005">
<name>Larry Menard</name>
<addr country="Canada">
<street>223 Nature Valley Road</street>
<city>Toronto</city>
<prov-state>Ontario</prov-state>
<pcode-zip>M4C 5K8</pcode-zip>
</addr>
<phone type="work">905-555-9146</phone>
<phone type="home">416-555-6121</phone>
<assistant>
<name>Goose Defender</name>
<phone type="home">416-555-1943</phone>
</assistant>
</customerinfo>
CATALOG
The CATALOG table contains no data.
SUPPLIERS
Here is a complete listing of the data in the SUPPLIERS table.
SID ADDR
100 <supplierinfo xmlns="http://posample.org" Sid="100">
<name>Harry Suppliers</name>
<addr country="Canada">
<street>50 EastCreek</street>
<city>Markham</city>
<prov-state>Ontario</prov-state>
<pcode-zip>N9C 3T6</pcode-zip>
</addr>
</supplierinfo>
PRODUCTSUPPLIER
Here is a complete listing of the data in the PRODUCTSUPPLIER table.
PID SID
100-101-01 100
100-201-01 101
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
Software Interoperability Coordinator, Department YBWA
3605 Highway 52 N
Rochester, MN 55901
U.S.A.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
"Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, and/or other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks and logos are trademarks of Oracle, Inc. in the United States, other
countries, or both.
Other product and service names might be trademarks of IBM or other companies.