SC27-2463-02
Note: Before using this information and the product it supports, read the general information under Appendix C, Notices, on page 279.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU (426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright IBM Corporation 2010.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About this book . . . ix
Who should use this book . . . ix
How this book is structured . . . ix
Highlighting conventions . . . xi
Chapter 5. Monitoring enhancements . . . 33
New relational monitoring interfaces are light weight and SQL accessible . . . 34
FP1: New relational monitoring interfaces for locking events . . . 35
New monitor elements and database configuration parameters provide more granular monitoring . . . 36
New relational interface to monitor dynamic and static SQL statements in package cache . . . 37
A new unit of work event monitor supports transaction monitoring . . . 38
Time-spent monitor elements are more comprehensive . . . 39
FP1: Last referenced date is available for tables, table partitions, indexes, and packages . . . 40
Lock event reporting has been enhanced . . . 41
The section for SQL statements captured by activity event monitor can now be collected . . . 41
FP1: Statements from a runtime section can be explained . . . 42
FP1: Explain enhanced with actual values for operator cardinality . . . 43
FP1: New event monitor for dynamic and static SQL statements in package cache . . . 43
Additional system monitoring information can be generated . . . 44
FP1: Text reports can be generated based on monitoring data . . . 45
FP1: Monitoring table functions information can be viewed using administrative views . . . 45
FP1: Table functions for row-based formatting of monitoring information are available . . . 46
FP2: Identify FCM issues more easily . . . 46
FP3: Simplify capture of detailed statement information using the new db2caem tool with db2support integration options . . . 47
FP1: Work action sets can be defined at workload level . . . 79
FP1: New time threshold limits unit of work duration . . . 80
FP1: Script facilitates migration from Query Patroller to workload manager . . . 80
Full-text searches support partitioned tables
Full-text searches support in partitioned database environments has been extended
Incremental update based on integrity processing is supported
FP3: DB2 Text Search and Net Search Extender index coexistence
System administrator (SYSADM) authority scope has changed . . . 190
Security administrator (SECADM) abilities have been extended . . . 192
Database administrator (DBADM) authority scope has changed . . . 193
SSLconfig.ini and SSLClientconfig.ini files replaced with new database manager configuration parameters . . . 194
Audit stored procedures and table functions now require only the EXECUTE privilege . . . 195
Net Search Extender command authorizations have changed . . . 196
DB2 Text Search command and stored procedure authorizations have changed . . . 197
Application development changes summary . . . 198
Cursor stability (CS) level behavior for newly created databases has been changed . . . 199
Maximum limit of log sequence numbers has increased . . . 199
Some system catalog views, system-defined administrative routines and views have been added and changed . . . 200
New SYSIBM functions override unqualified user-defined functions with the same name . . . 209
Untyped NULL keyword specifications no longer resolve to identifier names . . . 209
CHAR(decimal-expression) scalar function return behavior has been changed . . . 210
DOUBLE(string-expression) scalar function return behavior has been changed . . . 212
Result data type for unary minus and unary operators in untyped expressions has been changed . . . 212
DEFAULT keyword specification has been changed . . . 213
XML data is passed by reference in SQL stored procedures . . . 214
Type annotations for validated XML documents are unavailable . . . 216
Merge modules for ODBC, CLI, and .NET have been combined (Windows) . . . 217
Result data type for integer division in number_compat mode has been changed . . . 217
FP1: Some import and load file type modifiers can accept unspecified values in trailing fields . . . 218
Chapter 21. Version 9 deprecated and discontinued DB2 functionality summary . . . 245
Chapter 6, High availability, backup, logging, resiliency, and recovery enhancements, on page 49
This chapter describes the new features and enhancements that help ensure that your data remains available for your users.
Chapter 7, Performance enhancements, on page 55
This chapter describes the new features and enhancements that help ensure the highest performance when you access and update data.
Chapter 8, SQL compatibility enhancements, on page 67
This chapter describes the new features and enhancements that help you port existing database applications from other vendors to DB2 Version 9.7 environments.
Chapter 9, Workload management enhancements, on page 73
This chapter describes the new workload management features that extend the existing workload management capabilities provided in previous releases.
Chapter 10, Security enhancements, on page 83
This chapter describes the new features and enhancements that help you protect and manage your sensitive data.
Chapter 11, Application development enhancements, on page 93
This chapter describes the new features and enhancements that simplify application development, improve application portability, and ease application deployment.
Chapter 12, SQL Procedural Language (SQL PL) enhancements, on page 135
This chapter describes the new features and enhancements that help you work with the SQL Procedural Language (SQL PL).
Chapter 13, Net Search Extender enhancements, on page 145
This chapter describes the new features and enhancements for Net Search Extender.
Chapter 14, Installation, upgrade, and fix pack enhancements, on page 147
This chapter describes the new features and enhancements that make it faster to deploy DB2 products and easier to maintain them.
Part 2, DB2 Connect enhancements and changes summary, on page 163
This part describes the enhancements and changes in DB2 Version 9.7 that affect the functionality of DB2 Connect.
Chapter 21, Version 9 deprecated and discontinued DB2 functionality summary, on page 245
This chapter lists features and functionality that have been deprecated or discontinued in DB2 Version 9.1, Version 9.5, and Version 9.7.
Part 3: Appendixes
Appendix A, DB2 Version 9.7 for Linux, UNIX, and Windows fix pack summary, on page 259
This appendix contains the list of functionality added or changed during Version 9.7 fix packs.
Appendix B, Overview of the DB2 technical information, on page 267
This appendix contains information about accessing and using the latest documentation for your DB2 database systems.
Appendix C, Notices, on page 279
This appendix contains the legal requirements and limitations related to using the DB2 database product and its documentation.
Highlighting conventions
Topics that are associated with a specific fix pack include an "FPx" prefix at the beginning of the topic title, where x represents a fix pack level. The following highlighting conventions are used in this book.
Bold
Indicates commands, keywords, and other items whose names are predefined by the system. Commands written in uppercase are CLP commands, whereas commands written in lowercase are system commands.
Italics
Indicates one of the following:
v Names or values (variables) that must be supplied by the user
v General emphasis
v The introduction of a new term
v A reference to another source of information
Monospace
Indicates one of the following:
v Files and directories
v Information that you are instructed to type at a command prompt or in a window
v Examples of specific data values
v Examples of text similar to what might be displayed by the system
v Examples of system messages
v Samples of programming code
Related concepts "IBM data server clients and drivers overview" in Installing IBM Data Server Clients
XML data stored in the XML storage object of tables can be compressed
XML data in the XML storage object of a table is now eligible for data row compression. In previous releases, only the table row data in a table object was eligible for compression. Data row compression saves disk space. XML data in the XML storage object of a table is eligible for data row compression if you create the XML columns in the table in Version 9.7 and if you enable the table for data row compression. To enable a table for data row compression, use the COMPRESS YES option of the ALTER TABLE or CREATE TABLE statement. The LOAD, REORG, and REDISTRIBUTE DATABASE PARTITION GROUP commands and the INSERT statement support compression of data in the XML storage object of a table. When the data in the XML storage object is compressed, a compression dictionary is created for the data and is stored in the XML storage object. The following table shows the effect of each command and the INSERT statement on the compression dictionary.
Table 2. Effects of commands and INSERT statement on the XML storage object compression dictionary
LOAD command
v REPLACE and RESETDICTIONARY: Creates a new compression dictionary even if one exists
v REPLACE and KEEPDICTIONARY: Maintains the compression dictionary if one exists; otherwise, creates a new compression dictionary
v INSERT: Creates a compression dictionary (1)
REORG command
v RESETDICTIONARY and LONGLOBDATA: Creates a new compression dictionary even if one exists
v KEEPDICTIONARY and LONGLOBDATA: Maintains the compression dictionary if one exists; otherwise, creates a new compression dictionary (1)
INSERT statement
v Creates a compression dictionary (1)
REDISTRIBUTE DATABASE PARTITION GROUP command
v Creates a compression dictionary (1)
Note: (1) Creation of a compression dictionary occurs if there is sufficient XML data in the XML storage object of the table.
Compression of data in the XML storage object of a table is not supported if the table contains XML columns from DB2 Version 9.5 or earlier. For DB2 Version 9.5 or earlier, XML columns use the type-1 XML record format. If you enable such a table for data row compression, only the table row data in the table object is compressed. To make the data in the XML storage object of the table eligible for compression, use the ADMIN_MOVE_TABLE stored procedure to migrate the table and then enable data row compression.
Related concepts
"Compression dictionary creation" in Database Administration Concepts and Configuration Reference
"Row compression" in Database Administration Concepts and Configuration Reference
Related reference
"ALTER TABLE" in SQL Reference, Volume 2
"CREATE TABLE" in SQL Reference, Volume 2
"ADMINTABINFO administrative view and ADMIN_GET_TAB_INFO_V97 table function - retrieve table size and state information" in Administrative Routines and Views
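For illustration, a minimal sketch with hypothetical table and column names; the REORG options follow the table above:

-- Create a table with an XML column and enable data row compression
CREATE TABLE hr.resume (id INTEGER NOT NULL, resume XML) COMPRESS YES;

-- Or enable compression on an existing table whose XML columns were created in Version 9.7,
-- then rebuild it so that a compression dictionary for the XML storage object can be created
ALTER TABLE hr.resume COMPRESS YES;
REORG TABLE hr.resume LONGLOBDATA RESETDICTIONARY;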
You can use the explain facility or the db2pd tool to determine if the optimizer chose to use temporary table compression. Related concepts "Table compression" in Database Administration Concepts and Configuration Reference
Related concepts "Table compression" in Database Administration Concepts and Configuration Reference Related reference "db2ReadLog - Read log records" in Administrative API Reference "REORG INDEXES/TABLE command" in Command Reference "TRUNCATE " in SQL Reference, Volume 2
Automatic storage supports dropping storage paths and rebalancing table spaces
Starting in Version 9.7, you can begin using any storage you add to an automatic storage database immediately by rebalancing the automatic storage table spaces. You can also drop storage paths from a database managed by automatic storage. Follow these steps to drop a storage path:
1. Drop the storage path by using the ALTER DATABASE statement with the DROP STORAGE ON clause.
2. Rebalance permanent table spaces that use the storage path by using ALTER TABLESPACE with the REBALANCE clause, which moves data off the path to be dropped.
3. Drop then re-create any temporary table spaces that use the storage path that you dropped.
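A hedged sketch of this sequence, using hypothetical path and table space names:

-- 1. Drop the storage path from the automatic storage database
ALTER DATABASE DROP STORAGE ON '/db/path2';

-- 2. Move data off the dropped path for each permanent table space that uses it
ALTER TABLESPACE ts_data REBALANCE;

-- 3. Drop and re-create any temporary table space that uses the dropped path
DROP TABLESPACE tmp_ts;
CREATE TEMPORARY TABLESPACE tmp_ts;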
v The SNAPTBSP_PART administrative view, which you can use to obtain information about a table space on a specific database partition, has been updated to display information from the SNAP_GET_TBSP_PART_V97 table function instead of the SNAP_GET_TBSP_PART_V91 table function. Related concepts "Automatic storage" in Database Administration Concepts and Configuration Reference Related reference "SNAPSTORAGE_PATHS administrative view and SNAP_GET_STORAGE_PATHS table function - Retrieve automatic storage path information" in Administrative Routines and Views "SNAPTBSP_PART administrative view and SNAP_GET_TBSP_PART_V91 table function - Retrieve tablespace_nodeinfo logical data group snapshot information" in Administrative Routines and Views "Scenarios: Adding and removing storage with automatic storage table spaces" in Database Administration Concepts and Configuration Reference
[Figure: extents in a table space (Extent 0 through Extent n) holding Object 1, Object 2, and free space below the high water mark; after Object 1 is dropped, extents are moved so that the high water mark can be lowered and the storage reclaimed.]
To take advantage of reclaimable storage with a table space created with an earlier version of the DB2 product, replace the table space with a new one created in Version 9.7. To populate the new table space, you can use one of the following methods: v Unload and reload the data v Move the data with an online table move operation using the ADMIN_MOVE_TABLE procedure Table spaces for which reclaimable storage is enabled can coexist in the same database as table spaces without reclaimable storage. You cannot reclaim storage of temporary table spaces.
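As a hedged illustration (the table space name is hypothetical), once a table space created in Version 9.7 has reclaimable storage, unused extents below the high water mark can be consolidated and the freed storage returned with a single statement:

-- Move extents down and return unused storage to the storage paths
ALTER TABLESPACE ts_2009q4 REDUCE MAX;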
Related concepts "Reclaimable storage" in Database Administration Concepts and Configuration Reference Related tasks "Moving tables online by using the ADMIN_MOVE_TABLE procedure" in Data Movement Utilities Guide and Reference Related reference "ALTER TABLESPACE " in SQL Reference, Volume 2
Reclaiming space from multidimensional clustering (MDC) tables has been enhanced
MDC tables can be reorganized to reclaim extents that are not being used. Starting in Version 9.7, a complete offline table reorganization is no longer needed to reclaim the MDC extents. Both the REORG TABLE command and the db2Reorg API have a new reclaim extents option. As part of this new method to reorganize MDC tables, you can also control the access to the MDC table while the reclaim operation is taking place. Your choices include: no access, read access, and write access (which is the default). Reclaimed space from the MDC table can be used by other objects within the table space. In previous releases the free space could only be used by the MDC table. If you use an automated maintenance policy for a database, you can update your policy file to automate reclaiming space from MDC tables. To create or update a policy file, use the AUTOMAINT_SET_POLICY procedure. Related concepts "Multidimensional clustering extent management" in Partitioning and Clustering Guide Related reference "AUTOMAINT_SET_POLICY procedure - configure automatic maintenance policy" in Administrative Routines and Views
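For example, a sketch with a hypothetical table name; write access is the default while the extents are reclaimed:

REORG TABLE sales.cust_orders RECLAIM EXTENTS
-- add ALLOW READ ACCESS or ALLOW NO ACCESS to restrict concurrent access during the operation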
v New support for communicating through messages and alerts, and for working with files on the database server file system. v New support for administrative routines that are data server version independent. To provide expanded support for the existing administrative routines, some of the Version 9.5 routines have been replaced with new, more comprehensive routines or views in Version 9.7. For the list of the new and changed routines in Version 9.7, see Some system catalog views, system-defined administrative routines and views have been added and changed on page 200. For the list of all supported administrative SQL routines and views, see Supported administrative SQL routines and views in Administrative Routines and Views.
Related reference "ADMIN_MOVE_TABLE procedure - Move an online table" in Administrative Routines and Views "ADMIN_MOVE_TABLE_UTIL procedure - Modify the online move table procedure" in Administrative Routines and Views
Table space capacity for large and temporary table spaces has increased
In Version 9.7, the maximum size of large and temporary table spaces has been raised to 64 TB. Depending on the page size you choose, the new limits on table space sizes are:
Table 3. Changes to the upper limit of table spaces based on the page size
Page size | Version 9.5 table space limit | Version 9.7 table space limit
4 KB  | 2 TB  | 8 TB
8 KB  | 4 TB  | 16 TB
16 KB | 8 TB  | 32 TB
32 KB | 16 TB | 64 TB
Related reference "SQL and XML limits" in Database Administration Concepts and Configuration Reference
Related concepts "Distribution maps" in Partitioning and Clustering Guide sqlugtpi API has been deprecated on page 229 sqlugrpn API has been deprecated on page 228 Related reference "SYSCAT.PARTITIONMAPS catalog view" in SQL Reference, Volume 1 "db2GetDistMap - Get distribution map" in Administrative API Reference
Database partition servers are online immediately after being added to an instance
In Version 9.7, you can use the START DATABASE MANAGER command to add new database partition servers to a multi-partition database environment without having to stop and restart the instance. Thus, the database partition servers are immediately online. This enhancement reduces the cost of scaling the database as system downtime is eliminated.
As a data warehouse matures, you might have to add additional computational power to the environment to store data or support applications. As part of this process, you must add one or more new database partition servers to increase the size of the environment. Before Version 9.7, if you added a database partition server, it was not visible to the instance until after you stopped and restarted the instance. This requirement to stop and restart the instance affected system availability. Now, when you add a new database partition server, it is online after being added.
When you add the new database partition server online, the following process takes place:
v The node configuration file (db2nodes.cfg) is updated automatically by the START DATABASE MANAGER command, using the values that you specify. You do not have to modify this file manually.
v The new database partition server informs the rest of the database system of its addition to the environment. New applications are aware of the new database partition server as soon as you add it. Some existing database applications are aware of the new database partition server at their transaction boundaries, and other existing applications are aware of it at their next requests.
v A skeleton database partition is created on the new database partition server for each database. If the new database partition is added to a single-partition environment, the new database partition is configured using the database configuration values of the catalog partition. If the new database partition is added to a multi-partition environment, the new database partition is configured using database configuration values from a non-catalog database partition. If a problem occurs during the database partition configuration, the new database partition is configured using default database configuration parameter values.
You can monitor the progress of the add database partition server operation by using the -addnode parameter of the db2pd command.
If you want to use the START DATABASE MANAGER command to add a new database partition server to the environment but do not want it to be active after adding it, you can set the DB2_FORCE_OFFLINE_ADD_PARTITION registry variable to TRUE and restart the instance after processing of the addition is completed.
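A hedged sketch (the host name, port, and partition number are hypothetical) of adding a partition server online and monitoring the operation:

db2 "START DBM DBPARTITIONNUM 3 ADD DBPARTITIONNUM HOSTNAME dwhost03 PORT 0 WITHOUT TABLESPACES"
db2pd -addnode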
Related reference "START DATABASE MANAGER " in Command Reference "Partitioned database environment variables" in Partitioning and Clustering Guide "db2pd - Monitor and troubleshoot DB2 database " in Command Reference
v Each REORG command must specify a different partition with the ON DATA PARTITION clause. v Each REORG command must use the ALLOW NO ACCESS mode to restrict access to the data partitions. v The partitioned table must have only partitioned indexes if issuing REORG TABLE commands. No nonpartitioned indexes (except system-generated XML path indexes) can be defined on the table. The db2Reorg API also supports reorganization of a data partition or its partitioned indexes. Related concepts "Table reorganization" in Troubleshooting and Tuning Database Performance "Index reorganization" in Troubleshooting and Tuning Database Performance Related reference "REORG INDEXES/TABLE command" in Command Reference
FP1: Relocating databases using the db2relocatedb command has been improved
Starting in Fix Pack 1, you can specify additional keywords in the db2relocatedb command configuration file that make it easier to relocate a database when the paths used are different. The db2relocatedb configuration file can contain new values for the mirrorlogpath, failarchivepath, logarchmeth1, logarchmeth2, and overflowlogpath database configuration parameters. When you run the db2relocatedb command, the database configuration parameters of the relocated database are updated with the values specified in the configuration file. If you do not specify any of the new keywords, the relocated database maintains the original parameter values. Related reference "db2relocatedb - Relocate database " in Data Movement Utilities Guide and Reference
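A hedged sketch of a db2relocatedb configuration file; the database, instance, and path names are hypothetical, and the exact spellings of the new Fix Pack 1 keywords are assumptions to verify against the command documentation:

DB_NAME=SALES
DB_PATH=/db/old,/db/new
INSTANCE=db2inst1
MIRRORLOG_PATH=/db/new/mirrorlog
FAILARCHIVE_PATH=/db/new/failarchive
OVERFLOWLOG_PATH=/db/new/overflowlog
LOGARCHMETH1=DISK:/db/new/archlog

The file is then passed to the command, for example: db2relocatedb -f relocate.cfg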
FP2: New RESTRICTED ACCESS option restricts database connections within quiesced instance
Starting with DB2 Version 9.7 Fix Pack 2, you can specify the new RESTRICTED ACCESS option to prevent authorization checking for all connect attempts to the databases of a quiesced DB2 instance. The new option can also be used when you need exclusive connections to a database within the quiesced instance. When the RESTRICTED ACCESS option is specified by using the QUIESCE INSTANCE or START DATABASE MANAGER command, or the db2InstanceQuiesce or db2InstanceStart API, no authorization checking is performed to determine whether a user ID holds DBADM authority. Instance-level authorization checking can still occur; checking a user ID for SYSADM, SYSCTRL, or SYSMAINT authority does not require a database to be activated. With the RESTRICTED ACCESS option specified, user IDs that have DBADM authority or QUIESCE_CONNECT privilege on a database within the quiesced instance are not allowed to connect to that database.
Only user IDs which have SYSADM, SYSCTRL, or SYSMAINT authority and the user or group specified with the commands will be allowed to connect to the database. You can use the RESTRICTED ACCESS option when there is a need to have exclusive connections to a database within the quiesced instance. Such cases can include making an offline backup or performing other maintenance activities. Related reference "START DATABASE MANAGER " in Command Reference "QUIESCE command" in Command Reference "db2InstanceQuiesce - Quiesce instance" in Administrative API Reference "db2InstanceStart - Start instance" in Administrative API Reference
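For example, a hedged sketch (the instance and user names are hypothetical; verify the clause order against the QUIESCE command syntax) that quiesces an instance for maintenance and later releases it:

QUIESCE INSTANCE db2inst1 USER maintuser RESTRICTED ACCESS IMMEDIATE FORCE CONNECTIONS
UNQUIESCE INSTANCE db2inst1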
v Distribution statistics are collected for indexes over XML data. (see FP1: Distribution statistics collected for XML columns on page 31). You can use another enhancement to compress XML data. For more information, see XML data stored in the XML storage object of tables can be compressed on page 5. You can use new sample programs to learn about pureXML enhancements. Related concepts "pureXML overview -- DB2 as an XML database" in pureXML Guide "pureXML tutorial" in pureXML Guide "XML input and output overview" in pureXML Guide New DB2 sample programs have been added on page 111
Related concepts "Partitioned tables" in Partitioning and Clustering Guide "Logical and physical indexes over XML data" in pureXML Guide New DB2 sample programs have been added on page 111 Related tasks "Migrating existing tables and views to partitioned tables" in Partitioning and Clustering Guide Related reference "Restrictions on the pureXML feature" in V9.7 FP2 Transportable schemas
Example
In the following example, an MDC table called CUST_INFO is created. The MDC table contains a column of type XML and uses the columns REGION, AGE, and INCOME as dimensions:
CREATE TABLE cust_info(id BIGINT, age INT, region VARCHAR(10), income DECIMAL(10,2), history XML) ORGANIZE BY (region, age, income);
Related concepts "Multidimensional clustering tables" in Partitioning and Clustering Guide "Table and index management for MDC tables" in Partitioning and Clustering Guide "pureXML overview -- DB2 as an XML database" in pureXML Guide "Restrictions on indexes over XML data" in pureXML Guide New DB2 sample programs have been added on page 111 Related reference "Restrictions on the pureXML feature" in V9.7 FP2 Transportable schemas
Prior to DB2 Version 9.7, you could not store XML data in declared temporary tables, so applications either had to work around not having the XML data in a declared temporary table or had to use a regular table. You can use a new sample program to learn how to use this feature.
The following functionality is not supported for declared temporary tables, either with XML data or relational data:
v Data redistribution
v Table partitioning
v Multidimensional clustering
In addition, you cannot do the following with declared temporary tables, either with XML data or relational data:
v Specify declared temporary tables in an ALTER, COMMENT, GRANT, LOCK, RENAME or REVOKE statement.
v Reference declared temporary tables in a CREATE ALIAS, CREATE FUNCTION (SQL Scalar, Table, or Row), CREATE TRIGGER, or CREATE VIEW statement.
v Specify declared temporary tables in referential constraints.
v Use the LOAD or IMPORT command to add data to declared temporary tables.
v Use the REORG command to reorganize data or indexes of declared temporary tables.
Related concepts
New DB2 sample programs have been added on page 111
"Using declared temporary tables with XML data" in pureXML Guide
Related reference
"DECLARE GLOBAL TEMPORARY TABLE " in SQL Reference, Volume 2
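For example, a minimal sketch (the table and column names are hypothetical) of a declared temporary table with an XML column:

DECLARE GLOBAL TEMPORARY TABLE session.temp_orders (order_id BIGINT, order_doc XML) ON COMMIT PRESERVE ROWS NOT LOGGED;

INSERT INTO session.temp_orders VALUES (1, XMLPARSE(DOCUMENT '<order><item>widget</item></order>'));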
Parameters and variables of data type XML are not supported in compiled SQL functions. You can use new sample programs to learn how to use some of these features. Related concepts New DB2 sample programs have been added on page 111 "Inlined SQL functions and compiled SQL functions" in SQL Procedural Languages: Application Enablement and Support XML data is passed by reference in SQL stored procedures on page 214 Related reference "CREATE FUNCTION (SQL scalar, table, or row) " in SQL Reference, Volume 2 "CREATE FUNCTION (sourced or template) " in SQL Reference, Volume 2
v Use the LOAD command with the ANYORDER file type modifier when loading XML data into an XML column. ANYORDER is also supported in a single-partition database environment. v Issue the RUNSTATS command against a table with XML columns by connecting to any partition. v Use the XQuery transform functionality. v Store XML documents inline in the rows of the base table instead of storing them in the default XML storage object. v Use the Visual Explain tool to identify a new type of table queue operator (XTQ) that is displayed for access plans generated for XQuery in partitioned databases. You can use new sample programs to learn how to use some of these features. Related concepts "Partitioned database environments" in Partitioning and Clustering Guide "pureXML overview -- DB2 as an XML database" in pureXML Guide "XML schema repository" in pureXML Guide "SQL/XML publishing functions for constructing XML values" in pureXML Guide "Loading XML data" in pureXML Guide New DB2 sample programs have been added on page 111 Related reference "CREATE INDEX " in pureXML Guide "RUNSTATS " in Command Reference "Restrictions on the pureXML feature" in V9.7 FP2 Transportable schemas Related information "Introduction to XQuery" in XQuery Reference
The ALTER TABLE statement supports REORG-recommended operations containing XML data
Prior to DB2 Version 9.7, for any tables containing columns of data type XML, a REORG-recommended operation could not be specified in an ALTER TABLE statement. This restriction has been lifted. Using the ALTER TABLE statement, you can specify any REORG-recommended operation for any table containing columns of type XML. However, an operation that drops XML columns must drop all of the XML columns in the table in a single ALTER TABLE statement. Related reference "ALTER TABLE " in SQL Reference, Volume 2
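For example, a sketch with hypothetical names; both XML columns are dropped in a single statement, and the REORG-recommended operation is then completed:

ALTER TABLE app.documents DROP COLUMN draft_doc DROP COLUMN final_doc;
REORG TABLE app.documents;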
Related concepts "Compiler rewrite example: Predicate pushdown for combined SQL/XQuery statements" in Troubleshooting and Tuning Database Performance
The new CLP command and stored procedure are extensions of the DB2 annotated XML schema decomposition feature that decomposes a single XML document. Related concepts "Annotated XML schema decomposition" in pureXML Guide Related reference "XDB_DECOMP_XML_FROM_QUERY stored procedure for annotated schema decomposition" in pureXML Guide "DECOMPOSE XML DOCUMENTS " in Command Reference
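As a hedged sketch (the table, column, and schema names are hypothetical, and the clause order should be verified against the DECOMPOSE XML DOCUMENTS command syntax), decomposing all documents returned by a query might look like:

DECOMPOSE XML DOCUMENTS IN 'SELECT docid, xmldoc FROM admin.xmlstage' XMLSCHEMA admin.custschema ALLOW NO ACCESS COMMITCOUNT 100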
Related concepts "Index ANDing access requests" in Troubleshooting and Tuning Database Performance "DPFXMLMOVEMENT requests" in Troubleshooting and Tuning Database Performance "XML index scan access requests" in Troubleshooting and Tuning Database Performance "XML index ANDing and ORing access requests" in Troubleshooting and Tuning Database Performance "Using optimization guidelines with XML data and XQuery expressions" in pureXML Guide "Examples of optimization guidelines with XML data" in pureXML Guide
XML data stored in the XML storage object of tables can be compressed
XML data in the XML storage object of a table is now eligible for data row compression. In previous releases, only the table row data in a table object was eligible for compression. Data row compression saves disk space. XML data in the XML storage object of a table is eligible for data row compression if you create the XML columns in the table in Version 9.7 and if you enable the table for data row compression. To enable a table for data row compression, use the COMPRESS YES option of the ALTER TABLE or CREATE TABLE statement. The LOAD, REORG, and REDISTRIBUTE DATABASE PARTITION GROUP commands and the INSERT statement support compression of data in the XML storage object of a table. When the data in the XML storage object is compressed, a compression dictionary is created for the data and is stored in the XML storage object. The following table shows the effect of each command and the INSERT statement on the compression dictionary.
Table 4. Effects of commands and INSERT statement on the XML storage object compression dictionary
LOAD command
v REPLACE and RESETDICTIONARY: Creates a new compression dictionary even if one exists
v REPLACE and KEEPDICTIONARY: Maintains the compression dictionary if one exists; otherwise, creates a new compression dictionary
v INSERT: Creates a compression dictionary (1)
REORG command
v RESETDICTIONARY and LONGLOBDATA: Creates a new compression dictionary even if one exists
v KEEPDICTIONARY and LONGLOBDATA: Maintains the compression dictionary if one exists; otherwise, creates a new compression dictionary (1)
INSERT statement
v Creates a compression dictionary (1)
REDISTRIBUTE DATABASE PARTITION GROUP command
v Creates a compression dictionary (1)
Note: (1) Creation of a compression dictionary occurs if there is sufficient XML data in the XML storage object of the table.
Compression of data in the XML storage object of a table is not supported if the table contains XML columns from DB2 Version 9.5 or earlier. For DB2 Version 9.5 or earlier, XML columns use the type-1 XML record format. If you enable such a table for data row compression, only the table row data in the table object is compressed. To make the data in the XML storage object of the table eligible for compression, use the ADMIN_MOVE_TABLE stored procedure to migrate the table and then enable data row compression.
Related concepts
"Compression dictionary creation" in Database Administration Concepts and Configuration Reference
"Row compression" in Database Administration Concepts and Configuration Reference
Related reference
"ALTER TABLE" in SQL Reference, Volume 2
"CREATE TABLE" in SQL Reference, Volume 2
"ADMINTABINFO administrative view and ADMIN_GET_TAB_INFO_V97 table function - retrieve table size and state information" in Administrative Routines and Views
Write access is supported during creation or reorganization of indexes over XML data
Transactions can now insert, update, and delete data in a table during creation or reorganization of indexes over XML data on the table when the table is a nonpartitioned table or a partitioned table. Because transactions that insert, update, or delete data in a table no longer have to wait for the creation or reorganization of indexes on that table to be completed, throughput for concurrent transactions is higher, and response time for concurrent transactions is faster. In Version 9.7, the following indexing operations support concurrent read and write access to a table: v For a nonpartitioned table: Using the CREATE INDEX statement on an XML column Using the REORG INDEXES ALL FOR TABLE command with the ALLOW WRITE ACCESS option on a table with one or more XML columns v For a partitioned table: Using the CREATE INDEX statement to create a nonpartitioned index over XML data
Using the REORG INDEX command with the ALLOW WRITE ACCESS option on a nonpartitioned index over XML data You can use a new sample program to learn how to use this feature. Related concepts "Indexing XML data" in pureXML Guide New DB2 sample programs have been added on page 111 Related reference "CREATE INDEX " in pureXML Guide "REORG INDEXES/TABLE command" in Command Reference
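For example, a sketch with hypothetical table, column, and index names; both operations allow concurrent inserts, updates, and deletes against the table:

CREATE INDEX custdoc_name_idx ON store.customers (custdoc) GENERATE KEY USING XMLPATTERN '/customer/name' AS SQL VARCHAR(100);

REORG INDEXES ALL FOR TABLE store.customers ALLOW WRITE ACCESS;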
FP1: XQuery functions make it easier to retrieve date and time values for local time zones
Four new XQuery functions return current date and time values using the local time zone of the DB2 database system. The functions are db2-fn:current-local-time(), db2-fn:current-local-date(), db2-fn:current-local-dateTime(), and db2-fn:local-timezone(). They differ from fn:current-time(), fn:current-date(), and fn:current-dateTime(), which return the date and time values in the Coordinated Universal Time (UTC) implicit timezone and include a time zone component in the returned value. For example, if the fn:current-time() function was invoked on November 20, 2009 at 13:00 on a DB2 database system in Toronto (timezone -PT5H), the returned value might be 18:00:50.282691Z, whereas the db2-fn:current-local-time() function would return the value 13:00:50.282691. Related reference "current-local-time function" in XQuery Reference "current-local-date function" in XQuery Reference "current-local-dateTime function" in XQuery Reference "local-timezone function" in XQuery Reference
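For example, the new functions can be invoked directly as XQuery expressions:

XQUERY db2-fn:current-local-dateTime()
XQUERY db2-fn:local-timezone()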
must be gathered in order for distribution statistics to be collected because XML distribution statistics are stored with table statistics. As the default, the RUNSTATS utility collects a maximum of 250 quantiles for distribution statistics for each index over XML data. The maximum number of quantiles for a column can be specified when executing the RUNSTATS utility. The following list describes the situations in which XML distribution statistics are not created or collected: v XML distribution statistics are not created when loading data with the STATISTICS option. v XML distribution statistics are not collected for partitioned indexes over XML data defined on a data partitioned table. v XML distribution statistics are not collected when collecting only index statistics, or collecting index statistics during index creation. Related concepts "Catalog statistics" in Troubleshooting and Tuning Database Performance "Guidelines for collecting and updating statistics" in Troubleshooting and Tuning Database Performance Related tasks "Collecting distribution statistics for specific columns" in Troubleshooting and Tuning Database Performance Related reference "RUNSTATS " in Command Reference
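As a hedged sketch (the table and column names are hypothetical), collecting distribution statistics with a larger quantile limit for one XML column might look like:

RUNSTATS ON TABLE store.customers WITH DISTRIBUTION ON COLUMNS (custdoc NUM_QUANTILES 500) AND INDEXES ALL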
Chapter 5. Monitoring enhancements
v Capture information about cached statement entries after they have been flushed from the database package cache (see FP1: New event monitor for dynamic and static SQL statements in package cache on page 43)
v Collection of the section details for SQL activity events (see The section for SQL statements captured by activity event monitor can now be collected on page 41)
v The section explain functionality captures explain information about a statement using only the contents of the runtime section (see FP1: Statements from a runtime section can be explained on page 42)
v The explain facility is enhanced with actual values for operator cardinality (see FP1: Explain enhanced with actual values for operator cardinality on page 43)
v Progress information about the RUNSTATS command, table and index reorganization as well as additional system monitoring information can be generated (see Additional system monitoring information can be generated on page 44)
v Monitoring table functions information can be viewed using administrative views (see FP1: Monitoring table functions information can be viewed using administrative views on page 45)
v Table functions for row-based formatting of monitoring information (see FP1: Table functions for row-based formatting of monitoring information are available on page 46)
v Table functions for FCM monitoring (see FP2: Identify FCM issues more easily on page 46)
New relational monitoring interfaces are light weight and SQL accessible
DB2 Version 9.7 provides new relational monitoring interfaces that can be accessed directly by SQL, resulting in enhanced reporting and monitoring of the database system, data objects, and the package cache to help you quickly identify issues that might be causing problems. The new interfaces report monitoring elements that provide information about work done on the system, data objects such as tables, indexes, buffer pools, table spaces and containers, and SQL entries in the package cache. The new interfaces, just like the workload management (WLM) table functions created for DB2 Version 9.5, are more efficient and have a lower impact on the system than existing system monitor and snapshot interfaces. System, activity, and data object level monitoring information can be accessed directly through SQL using the following table functions:
System level
v MON_GET_CONNECTION
v MON_GET_CONNECTION_DETAILS
v MON_GET_SERVICE_SUBCLASS
v MON_GET_SERVICE_SUBCLASS_DETAILS
v MON_GET_UNIT_OF_WORK
v MON_GET_UNIT_OF_WORK_DETAILS
v MON_GET_WORKLOAD
v MON_GET_WORKLOAD_DETAILS
Activity level
v MON_GET_ACTIVITY_DETAILS
v MON_GET_PKG_CACHE_STMT
v MON_GET_PKG_CACHE_STMT_DETAILS (only available starting with DB2 Version 9.7 Fix Pack 1)
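For example, a query such as the following (the column choices are illustrative) summarizes work by workload across all members:

SELECT workload_name, total_rqst_time, total_wait_time, total_cpu_time
FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS t
ORDER BY total_cpu_time DESC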
Related reference "MON_LOCKWAITS administrative view - Retrieve metrics for applications that are waiting to obtain locks" in Administrative Routines and Views
New monitor elements and database configuration parameters provide more granular monitoring
DB2 Version 9.7 provides new monitor elements that enable you to perform more granular monitoring, without using the monitor switches or snapshot interfaces. Database-wide monitoring control is provided by new database configuration parameters.
With the new monitor elements and infrastructure, you can use SQL statements to efficiently collect monitor data to determine whether specific aspects of the system are working correctly and to help you diagnose performance problems, while incurring a reasonable performance overhead. With the new access methods, you can get all the data you need without using the snapshot interfaces. The increased monitoring granularity gives you more control over the data collection process; collect the data you want from the source you want.
Monitoring information is collected about the work performed by your applications and reported through table function interfaces at the following three levels:
System level
These monitoring elements provide details about all work being performed on the system. Monitor-element access points include service subclass, workload definition, unit of work, and connection.
Activity level
These monitor elements provide details about activities being performed on the system (a specific subset of the work being performed on the system). You can use these elements to understand the behavior and performance of activities. Monitor-element access points include individual activities, and entries in the database package cache.
Data object level
These monitoring elements provide details about the work being processed by the database system within specific database objects such as indexes, tables, buffer pools, table spaces, and containers, thereby enabling you to quickly identify issues with particular data objects that might be causing system problems. Monitor-element access points include buffer pool, container, index, table, and table space.
For the list of the table functions in each level, see New relational monitoring interfaces are light weight and SQL accessible on page 34.
For database-wide control over the collection of monitoring data at the system, activity and data object levels, and the generation of events in unit of work and locking event monitors, eight new configuration parameters have been added. The default settings are intended to provide the minimum level of collection and event generation that will be enabled for work running in all DB2 workloads and service classes. Monitoring control can be further customized by modifying DB2 workload and service class definitions. For example, you could disable collection of system level metrics for the entire database and enable collection for a particular service class if you are only interested in monitoring work performed in that service class.
Table 5. Monitor collection database configuration parameters
mon_act_metrics (Monitoring activity metrics)
Controls collection of activity level monitor elements on the entire database. This will affect all DB2 workload definitions.
mon_deadlock (Monitoring deadlock)
Controls the generation of deadlock events at the database level for the lock event monitor.
mon_locktimeout (Monitoring lock timeout)
Controls the generation of lock timeout events at the database level for the lock event monitor. This will affect all DB2 workload definitions.
mon_lockwait (Monitoring lock wait)
Controls the generation of lock wait events at the database level for the lock event monitor.
mon_lw_thresh (Monitoring lock wait threshold)
The amount of time spent in lock wait (specified in microseconds) before an event for mon_lockwait is generated.
mon_obj_metrics (Monitoring object metrics)
Controls collection of data object monitor elements on the entire database.
mon_req_metrics (Monitoring request metrics)
Controls collection of request monitor elements on the entire database. This will affect all DB2 service classes.
mon_uow_data (Monitoring unit of work events)
Controls the generation of unit of work events at the database level for the unit of work event monitor. This will affect all DB2 workload definitions.
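For example, a hedged sketch (the database and service class names are hypothetical) that turns off request metrics for the whole database but keeps them for one service subclass:

UPDATE DATABASE CONFIGURATION FOR sales USING MON_REQ_METRICS NONE;
ALTER SERVICE CLASS finance UNDER highlvl COLLECT REQUEST METRICS BASE;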
Related concepts "Monitor routines and views" in Administrative Routines and Views "Monitor elements reported in monitor table functions" in Database Monitoring Guide and Reference
New relational interface to monitor dynamic and static SQL statements in package cache
DB2 Version 9.7 provides a new relational interface, MON_GET_PKG_CACHE_STMT, to monitor dynamic and static SQL statements in the database package cache. This new relational interface reports information for both static and dynamic SQL statements, unlike the dynamic SQL snapshot which only reports information for dynamic statements. For each dynamic and static SQL statement, the new relational interface returns a rich set of metrics, aggregated across executions of the statement. The metrics can help you to quickly determine the reasons for poor performance of an SQL statement, to compare the behavior and performance of one SQL statement to another, and to easily identify the most expensive SQL statements along any number of dimensions (for example, the SQL statements consuming the most CPU resources, and statements with the longest lock wait times).
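For example, a query such as the following (the column choices are illustrative) lists the ten most CPU-intensive statements, dynamic or static, across all members:

SELECT section_type, num_executions, total_cpu_time, SUBSTR(stmt_text, 1, 60) AS stmt_text
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t
ORDER BY total_cpu_time DESC
FETCH FIRST 10 ROWS ONLY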
Related reference "MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache" in Administrative Routines and Views
Related concepts "db2evmonfmt tool for reading event monitor data" in Database Monitoring Guide and Reference Related tasks "Collecting unit of work event data and generating reports" in Database Monitoring Guide and Reference Related reference "CREATE EVENT MONITOR " in SQL Reference, Volume 2 "MON_GET_UNIT_OF_WORK table function - Get unit of work metrics" in Administrative Routines and Views "MON_GET_UNIT_OF_WORK_DETAILS table function - Get detailed unit of work metrics" in Administrative Routines and Views "CREATE EVENT MONITOR (unit of work) " in SQL Reference, Volume 2 "EVMON_FORMAT_UE_TO_XML table function - convert unformatted events to XML" in Administrative Routines and Views "EVMON_FORMAT_UE_TO_TABLES procedure - move an XML document to relational tables" in Administrative Routines and Views "mon_req_metrics - Monitoring request metrics configuration parameter" in Database Administration Concepts and Configuration Reference "mon_uow_data - Monitoring unit of work events configuration parameter" in Database Administration Concepts and Configuration Reference
Example
The following diagram shows one possible visualization of the total DB2 request time on a particular system:
[Figure: the total DB2 request time broken down into section processing time, other processing time, and commit/rollback processing time.]
In this example, the section processing time is responsible for a significant percentage of the total request time. This is generally desirable because section processing time represents the time that is spent doing core SQL processing rather than waiting on resources or driving transaction end processing. On the other hand, a large percentage of the overall request time is also spent in various waiting situations; lock wait time specifically. This percentage of lock wait time is undesirable and indicates the need to investigate the locking behavior in more detail. Note: A new Version 9.7 feature lets you collect additional information about lock events. For more information, see "Lock event reporting has been enhanced" What's New for DB2 Version 9.7. Related concepts "Time-spent monitor elements" in Database Monitoring Guide and Reference
FP1: Last referenced date is available for tables, table partitions, indexes, and packages
The last referenced date indicates the last date that an object was used and is available for tables, table data partitions, indexes, packages, and materialized query tables (MQTs). The last referenced date is accessed through the LASTUSED column of the corresponding catalog table for the object. The last referenced date is used to identify objects that have not been accessed for an extended period of time and might be considered candidates for removal. For example, removing indexes that are never used in queries saves both disk space and maintenance overhead (that is, overhead when insertions and updates are performed on the table on which the index was defined).
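For example, a query such as the following (the schema name is hypothetical) lists indexes by the date they were last used:

SELECT indschema, indname, lastused
FROM syscat.indexes
WHERE tabschema = 'SALES'
ORDER BY lastused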
Related concepts "Determining the date a database object was last used" in Database Monitoring Guide and Reference
The section for SQL statements captured by activity event monitor can now be collected
The section for an SQL statement can now be collected by an activity event monitor, along with the rest of the activity information, by specifying the COLLECT ACTIVITY DATA WITH DETAILS,SECTION clause on a service class, workload, work action set, or threshold. An explanation of the used access plan can be extracted from the section using the db2expln command.
The granular control in collecting a section for an SQL statement allows you to limit the amount of data collected, gathering only the data of interest for a particular problem determination task. For example, use a threshold to only collect a section for statements that run for a particularly long time or exceed a certain cost. Related concepts "Guidelines for capturing section explain information" in Troubleshooting and Tuning Database Performance Related tasks "Collecting data for individual activities" in Workload Manager Guide and Reference Related reference "db2expln - SQL and XQuery Explain " in Command Reference
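For example, a hedged sketch (the workload name is hypothetical, and the ON COORDINATOR clause is an assumption to verify against the ALTER WORKLOAD syntax) that enables section collection for activities in one workload:

ALTER WORKLOAD report_wl COLLECT ACTIVITY DATA ON COORDINATOR WITH DETAILS,SECTION;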
Related concepts "Guidelines for capturing section explain information" in Troubleshooting and Tuning Database Performance
FP1: New event monitor for dynamic and static SQL statements in package cache
Starting with Version 9.7 Fix Pack 1, the package cache event monitor (CREATE EVENT MONITOR FOR PACKAGE CACHE) records events from both dynamic and static SQL statements when they are flushed from the database package cache. The new package cache event monitor captures information about cached statement entries after they have been flushed from the database package cache. The event monitor captures an accurate history of the statements that were in the package cache, which can help you resolve SQL query performance and problem determination issues. The core data collected for a package cache event are the monitor elements reported through the MON_GET_PKG_CACHE_STMT table function. In addition,
the event monitor collects information about the executable section of the activity. The collected information is the same for both dynamic and static SQL statements. After the event monitor data has been captured, the following is a list of the methods you can use to access that data: v An XML document created by the new EVMON_FORMAT_UE_TO_XML table function v Relational tables populated by the new EVMON_FORMAT_UE_TO_TABLES procedure v An XML or text document using the Java-based db2evmonfmt tool Related concepts "db2evmonfmt tool for reading event monitor data" in Database Monitoring Guide and Reference Related tasks "Collecting package cache event data and generating reports" in Database Monitoring Guide and Reference Related reference "CREATE EVENT MONITOR (package cache) statement" in SQL Reference, Volume 2 "EVMON_FORMAT_UE_TO_XML table function - convert unformatted events to XML" in Administrative Routines and Views "EVMON_FORMAT_UE_TO_TABLES procedure - move an XML document to relational tables" in Administrative Routines and Views "MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache" in Administrative Routines and Views "MON_GET_PKG_CACHE_STMT_DETAILS - Get detailed metrics for package cache entries" in Administrative Routines and Views
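For example, a minimal sketch (the event monitor and target table names are hypothetical) that creates and activates a package cache event monitor writing to an unformatted event table:

CREATE EVENT MONITOR pkgcache_mon FOR PACKAGE CACHE WRITE TO UNFORMATTED EVENT TABLE (TABLE pkgcache_events);
SET EVENT MONITOR pkgcache_mon STATE 1;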
The result of the monitoring is reported in separate output for partitioned and non-partitioned indexes. Each partition will have its index reorganization reported in its own output. Related concepts "Monitoring and troubleshooting using db2pd command" in Troubleshooting and Tuning Database Performance Related reference "db2pd - Monitor and troubleshoot DB2 database " in Command Reference
FP1: Monitoring table functions information can be viewed using administrative views
New administrative views encapsulate key queries using the new monitoring table functions introduced in DB2 Version 9.7 and Version 9.7 Fix Pack 1. The new monitoring table functions introduced in DB2 Version 9.7 and Version 9.7 Fix Pack 1 provide many detailed metrics describing the database objects and environment. To see the most important metrics in an easily readable format, you can use the new monitoring administrative views. You can simply issue a SELECT * command to see the main metrics from each table function, as well as some common calculated values. The following administrative views are available: v MON_BP_UTILIZATION
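For example, assuming the views reside in the SYSIBMADM schema, one of the views can be queried directly:

SELECT * FROM SYSIBMADM.MON_BP_UTILIZATION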
FP1: Table functions for row-based formatting of monitoring information are available
In Version 9.7 Fix Pack 1 and later fix packs, monitor elements reported in XML documents can be displayed and analyzed in a generic fashion using new row-based formatting table functions.
Detailed monitoring table functions, such as MON_GET_WORKLOAD_DETAILS, return an XML document called DETAILS, containing a number of detailed monitor elements. In addition, the statistics event monitor returns a DETAILS XML document, the activity event monitor returns a DETAILS_XML XML document, the EVMON_FORMAT_UE_TO_XML table function returns an XMLREPORT XML document, and the EVMON_FORMAT_UE_TO_TABLES procedure returns a METRICS XML document. You can review and analyze the monitor elements returned in these XML documents by using the new row-based formatting table functions. Which monitor elements are returned depends on the table function or event monitor that produced the XML document.
MON_FORMAT_XML_WAIT_TIMES_BY_ROW
Returns a list of wait time monitor elements, such as total_wait_time and lock_wait_time, for each XML document.
MON_FORMAT_XML_COMPONENT_TIMES_BY_ROW
Returns a list of component time monitor elements, including processing time monitor elements, such as total_compile_time, total_compile_proc_time, and stmt_exec_time, for each XML document.
MON_FORMAT_XML_TIMES_BY_ROW
Returns formatted row-based output for the combined hierarchy of wait and processing times that are contained in an XML metrics document.
MON_FORMAT_XML_METRICS_BY_ROW
Returns all the metrics contained in the XML document.
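For example, a hedged sketch that expands the DETAILS document returned by MON_GET_WORKLOAD_DETAILS into rows of wait-time metrics; the output column names (metric_name, total_time_value) are assumptions to verify:

SELECT t.workload_name, r.metric_name, r.total_time_value
FROM TABLE(MON_GET_WORKLOAD_DETAILS(NULL, -2)) AS t,
     TABLE(MON_FORMAT_XML_WAIT_TIMES_BY_ROW(t.details)) AS r
ORDER BY r.total_time_value DESC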
FP2: Identify FCM issues more easily
New table functions report fast communication manager (FCM) metrics:
v MON_GET_FCM
Returns metrics for FCM.
v MON_GET_FCM_CONNECTION_LIST
Returns monitor metrics for all the FCM connections on the specified member.
In addition, FCM-related metrics have been added to the outputs of both the db2pd command and the GET SNAPSHOT command.
Related reference
"GET SNAPSHOT " in Command Reference
"db2pd - Monitor and troubleshoot DB2 database " in Command Reference
"MON_GET_FCM - Get FCM metrics" in Administrative Routines and Views
"MON_GET_FCM_CONNECTION_LIST - Get details for all FCM connections" in Administrative Routines and Views
FP3: Simplify capture of detailed statement information using the new db2caem tool with db2support integration options
Starting with V9.7 FP3, a new tool, db2caem (DB2 Capture Activity Event Monitor data tool), has been created to simplify the process of capturing detailed diagnostic and runtime information about one or more statements. New db2support options have been added for the optimizer mode to collect the data captured by db2caem. The db2caem tool cleans up any objects and tables that it creates, including the activity event monitor that it creates to perform the capture of statement information. If you have already captured information for a statement using an existing activity event monitor, you can bypass the capture step by providing as input an activity event monitor name along with an application ID, unit-of-work ID, and activity ID to identify the statement of interest, and use the db2caem tool only to export and format the information.
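For example, a minimal capture for a single statement might look like the following (the database name and statement are illustrative; see the db2caem command reference for the full option list):
db2caem -d sample -st "SELECT COUNT(*) FROM staff"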
db2support integration
New db2support options (-aem, -actevm, -appid, -uowid, -actid) have been added for the optimizer mode to collect the data captured by db2caem. The new collection in db2support is part of collection level 1 (-cl 1) when the new options are specified together with the -st, -sf, or -se option; the activity event monitor information is collected either for the SQL statement specified through -st, for the SQL statements inside the SQL file specified through -sf, or for the embedded SQL statements in the file specified through -se. Related reference "db2support - Problem analysis and environment collection tool " in Command Reference "db2caem - Capture activity event monitor data tool " in Command Reference
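As a sketch of the integration (output directory, database name, and statement are illustrative), the following invocation collects optimizer data plus the activity event monitor information for the statement specified with -st:
db2support . -d sample -st "SELECT COUNT(*) FROM staff" -aem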
Related tasks "Configuring a clustered environment using DB2 High Availability Instance Configuration Utility (db2haicu)" in Data Recovery and High Availability Guide and Reference
Related tasks "Recovering from sustained traps" in Troubleshooting and Tuning Database Performance Related reference "db2diag - db2diag logs analysis tool " in Command Reference
FP3: Data deduplication device support has been integrated into the backup utilities
In Version 9.7 Fix Pack 3 and later fix packs, you can use new backup parameters to specify that the target storage device supports data deduplication. These new parameters optimize the format of backup images for deduplication devices and make backup operations more efficient.
You can use the DEDUP_DEVICE parameter of the BACKUP DATABASE command or set the iOptions parameter of the DB2Backup API to DB2BACKUP_DEDUP_DEVICE. Related reference "BACKUP DATABASE " in Command Reference "db2Backup - Back up a database or table space" in Administrative API Reference "BACKUP DATABASE command using the ADMIN_CMD procedure" in Administrative Routines and Views
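For example, a backup to a Tivoli Storage Manager server that fronts a deduplication device might be taken as follows (the database name and session count are illustrative):
BACKUP DATABASE sample USE TSM OPEN 2 SESSIONS DEDUP_DEVICE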
FP2: Proxy node support for the db2adutl command has been added
Starting in Version 9.7 Fix Pack 2 and later fix packs, you can use the OPTIONS parameter of the db2adutl command to pass valid options to the Tivoli Storage Manager (TSM) server. As a result, you can use the db2adutl command to work in TSM environments that support client proxy nodes. Using proxy nodes simplifies some administration tasks because you can consolidate multiple physical machines under multiple users into a single virtual node name. Proxy node configurations make it easier to perform restore operations from one machine or user to another machine. During HADR scenarios, this setup also lets you access log files and restore data more easily. Related concepts "Recovering data using db2adutl" in Partitioning and Clustering Guide
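For example, assuming a proxy node configuration in which the target node is named clusternode (the exact option string depends on your TSM setup), a query might pass the TSM option as follows:
db2adutl query db sample options "-asnodename=clusternode"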
Administration notification and diagnostic logs occupy specified amount of disk space
In Version 9.7, you have the ability to configure how much combined disk space is occupied by both the administration notification and diagnostic log files by specifying the total size with the new diagsize database manager configuration parameter. With this improvement, these log files will only grow to the extent you specify without uncontrollable growth potentially consuming all the available free disk space. The value of the new diagsize database manager configuration parameter decides what form of log files will be adopted. If the value is 0 (default), a single administration notification log file (instance_name.nfy) and diagnostic log file (db2diag.log) will be adopted, with the size of each log file limited only by the available free disk space. This was the growth behavior of these log files in previous releases. However, if the value is not 0, a series of 10 rotating administration notification log files and 10 rotating diagnostic log files will be adopted. This nonzero value also specifies the total size of all rotating administration notification log files and all rotating diagnostic log files combined, thereby limiting their total size growth. Note: Starting with DB2 Version 9.7 Fix Pack 1, if the diagsize configuration parameter is set to a non-zero value and the diagpath configuration parameter is set to split the diagnostic data into separate directories, then the non-zero value of the diagsize configuration parameter specifies the total size of the combination of all rotating administration notification log files and all rotating diagnostic log files contained within a given split diagnostic data directory. For example, if a system with 4 database partitions has diagsize set to 1 GB and diagpath set to "$n" (split
diagnostic data per database partition), the maximum total size of the combined notification and diagnostic logs can reach 4 GB (4 x 1 GB). The amount of the total disk space allocated to the rotating log files, as a percentage of the value specified with the diagsize configuration parameter, differs according to platform in the following ways:
UNIX and Linux
v 90% to rotating diagnostic log files
v 10% to rotating administration notification log files
Windows
v 100% to rotating diagnostic log files, because administration notifications on the Windows platform are written through the Event Log service
The instance must be restarted for the new value of the diagsize configuration parameter to take effect. Related concepts "Administration notification log" in Data Recovery and High Availability Guide and Reference "DB2 diagnostic (db2diag) log files" in Troubleshooting and Tuning Database Performance FP1: Diagnostic data can be stored in separate directories on page 158 Related reference "diagsize - Rotating diagnostic and administration notification logs configuration parameter" in Database Administration Concepts and Configuration Reference
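For example, to cap the combined size of the rotating logs (the value shown assumes the parameter is specified in megabytes; check the parameter reference for the exact units), update the database manager configuration and then restart the instance:
UPDATE DBM CFG USING DIAGSIZE 512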
Related concepts "Database schema transporting" in V9.7 FP2 Transportable schemas Related reference "db2move - Database movement tool " in Data Movement Utilities Guide and Reference
Related concepts JDBC and SQLJ support has been enhanced on page 115 "Statement concentrator reduces compilation overhead" in Troubleshooting and Tuning Database Performance Related reference "Statement attributes (CLI) list" in Call Level Interface Guide and Reference, Volume 2 "Connection attributes (CLI) list" in Call Level Interface Guide and Reference, Volume 2 "DB2Connection interface" in Developing Java Applications "IBM Data Server Driver for JDBC and SQLJ properties for DB2 Database for Linux , UNIX , and Windows" in Developing Java Applications "stmt_conc - Statement concentrator configuration parameter" in Database Administration Concepts and Configuration Reference "StmtConcentrator CLI/ODBC configuration keyword" in Call Level Interface Guide and Reference, Volume 2
Statistics collection sampling performance for statistical views has been improved
Starting in Version 9.7, the performance of sampling RUNSTATS on statistical views has been enhanced. The SYSTEM sampling style is now supported and the overall sampling performance has been improved. In previous releases, you could only use the BERNOULLI sampling, which uses row-level sampling to collect view statistics. Now you can use system sampling, which uses page-level sampling to collect view statistics, where it is permitted. Page-level sampling provides excellent performance because only a sample of the pages is read. In addition, the sampling performance has been improved for statistical views whose definitions are a select over a single base table or referential integrity joins among foreign key tables and primary key tables. The performance benefit occurs even if the referential integrity constraint is informational. The referential integrity constraint allows the sampling specification to be applied directly to the foreign key tables rather than to the statistical view result. Related concepts "Data sampling in queries" in Troubleshooting and Tuning Database Performance Related reference "RUNSTATS " in Command Reference
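For example, assuming a statistical view named DB2USER.SALES_V (the name and sampling rate are illustrative), page-level sampling of roughly 10 percent of the pages might be requested as follows:
RUNSTATS ON TABLE db2user.sales_v WITH DISTRIBUTION TABLESAMPLE SYSTEM (10)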
longer need to find the BIND file or remember the BIND options. You can use the ALTER PACKAGE statement and then issue the REBIND command. Related reference "ALTER PACKAGE " in SQL Reference, Volume 2 "ALTER_ROUTINE_PACKAGE procedure" in Administrative Routines and Views
Cost model has been improved for queries in partitioned database environments
The optimizer's cost model has been enhanced to improve performance of some queries in partitioned database environments. The updates to the optimizer's cost model for processing in partitioned database environments now better reflect the CPU cost associated with transferring data between database partitions. This improvement will have a positive impact on the performance of data partitioned queries where the CPU costs of data transmission contribute significantly to query execution time.
semantics obey isolation level semantics, applications running under currently committed semantics continue to respect isolation levels. Currently committed semantics require more log space because additional space is required for logging the first update of a data row during a transaction. This log data is required for retrieving the currently committed image of the row. Depending on the workload, the additional log data can have an insignificant or measurable impact on the total log space used. The requirement for additional log space does not apply when cur_commit is disabled. You can use the AIRLINE.war sample program to learn how to use this feature. Related concepts "Isolation levels" in SQL Reference, Volume 1 New DB2 sample programs have been added on page 111 "Currently committed semantics improve concurrency" in SQL Procedural Languages: Application Enablement and Support Related reference "BIND " in Command Reference "PRECOMPILE " in Command Reference "Query compiler variables" in Database Administration Concepts and Configuration Reference "SET_ROUTINE_OPTS " in Administrative Routines and Views "cur_commit - Currently committed configuration parameter" in Database Administration Concepts and Configuration Reference
The data server measures the distance between two scans in the same share group based on the number of buffer pool pages that lies between them. If the distance between two scans in the same share group grows too large, they might not be able to share buffer pool pages. The data server also monitors the speed of the scans. For example, assume that one scan is faster than another. In this situation, buffer pool pages that are accessed by the first scan might be cleared from the buffer pool before another scan in the share group can access them. To avoid this situation, the data server might throttle the faster scan, which allows slower scans to access the data pages before they are cleared. A high-priority scan, however, is never throttled by a lower priority scan and might move to another share group instead. By either throttling the fast scan or by moving it to a faster share group, the data server adjusts the share groups to ensure that sharing remains optimized. The method by which buffer pool pages are returned to the pool of available pages also helps to optimize scan sharing. Pages released by the trailing scan of a group are returned to the pool before pages released by the leading scan of a group. Pages from the trailing scan can be released first because the share group has no scans after the trailing scan that will read the pages; however, other scans might have to access the pages used by leading scan, so these pages are released later. In some situations, a scan might access the buffer pages of another scan part way through the list of pages, then return to the top of the list after reaching the last page. This type of scan is known as a wrapping scan. You can use the db2pd command to view information about scan sharing. For example, for an individual shared scan, the db2pd output shows data such as the scan speed and the amount of time that the scan was throttled. For a sharing group, the command output shows the number of scans in the group and the number of pages shared by the group. The EXPLAIN_ARGUMENT table has new rows to contain scan-sharing information about table scans and index scans. You can use the db2exfmt command to format and view the contents of this table. You can use optimizer profiles to override decisions that the compiler makes about scan sharing. However, it is recommended that you not use these optimizer profiles unless recommended to do so by DB2 Service. Related concepts "Access types" in Troubleshooting and Tuning Database Performance "Scan sharing" in Troubleshooting and Tuning Database Performance Related reference "EXPLAIN_ARGUMENT table" in SQL Reference, Volume 1 "db2exfmt - Explain table format " in Command Reference "db2pd - Monitor and troubleshoot DB2 database " in Command Reference
An index on an individual data partition is an index partition; the set of index partitions that make up the entire index for the table is a partitioned index. Before Version 9.7, if you used an ALTER TABLE statement to attach a source table to a partitioned table as a new partition, the data in the new partition was not visible until after you issued a SET INTEGRITY statement to perform tasks such as updating indexes, enforcing constraints, and checking ranges. If the source table that you attached had a large amount of data, SET INTEGRITY processing might be slow and use a considerable amount of log space. Access to the data might be delayed. Starting in Version 9.7, you can use partitioned indexes to improve performance when you roll data into a table. Before you alter a partitioned table that uses partitioned indexes to attach a new partition or a new source table, you should create indexes on the table that you are attaching to match the partitioned indexes of the partitioned table. After attaching the source table, you still must issue a SET INTEGRITY statement to perform tasks such as range validation and constraint checking. However, if the source tables indexes match all of the partitioned indexes on the target table, SET INTEGRITY processing does not incur the performance and logging overhead associated with index maintenance. The newly rolled-in data is accessible quicker than it would otherwise be. Partitioned indexes can also improve performance when you roll data out of a table. When you alter the table to detach one of its data partitions, that data partition takes its partitioned indexes with it, becoming a stand-alone table with its own indexes. You do not have to recreate the indexes for the table after detaching the data partition. Unlike nonpartitioned indexes, when you detach a data partition from a table that uses partitioned indexes, the associated index partitions go with it. As a result, there is no need for asynchronous index cleanup (AIC). In addition, partition elimination for queries against a partitioned table that uses partitioned indexes can be more efficient. For nonpartitioned indexes, partition elimination can only eliminate data partitions. For partitioned indexes, partition elimination can eliminate both data and index partitions. This can result in having to scan fewer keys and index pages than a similar query over a nonpartitioned index. By default, when you create indexes on partitioned tables, they are partitioned indexes. You can also include the PARTITIONED keyword of the CREATE INDEX statement to have a partitioned index created. You must use the NOT PARTITIONED keywords if you want a nonpartitioned index. All partitioned indexes for a data partition are stored in the same index object, regardless of whether the index partitions are stored in the same table space used for the data partition or in a different table space. As in previous releases, you can use the ALTER TABLE statement with the ADD PARTITION clause to create a data partition for a partitioned table. To specify that partitioned indexes on the new data partition are to be stored in a different table space than the table space used for the data partition, use the INDEX IN option of the ADD PARTITION clause. If partitioned indexes exist on the partitioned table, the ADD PARTITION operation extends these indexes to the new partition, and the partitioned indexes are stored in the table space that you specify. 
If you do not use the INDEX IN option, the partitioned indexes are stored in the same table space in which the new data partition is stored.
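The following sketch (table, column, partition, and table space names are illustrative) creates a partitioned index and then adds a data partition whose index partitions are placed in a separate table space:
CREATE INDEX sales_date_idx ON sales (sale_date) PARTITIONED
ALTER TABLE sales ADD PARTITION p2011
   STARTING FROM ('2011-01-01') ENDING AT ('2011-12-31')
   IN tbsp_dat2011 INDEX IN tbsp_idx2011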
Starting with DB2 V9.7 Fix Pack 1, when creating a table that uses both multidimensional clustering (MDC) and data partitioning, the system-created MDC block indexes are created as partitioned indexes. Data partitioned MDC tables can take advantage of the features available with partitioned tables such as the rolling in and rolling out of table data. For MDC tables that use table partitioning created with DB2 V9.7 and earlier, the block indexes are nonpartitioned.
v Queries containing a SELECT DISTINCT or GROUP BY clause can be matched to MQTs whose definitions start with SELECT DISTINCT. v Queries containing an EXISTS predicate can also be matched to MQTs with an appropriate EXISTS predicate. v Some additional scenarios involving datetime functions are better matched. For example, queries containing a QUARTER function can be matched to an MQT returning MONTH. Another example is when a query contains DAYOFWEEK function and the MQT contains DAYOFWEEK_ISO (or the reverse scenario). v Other previously unmatched scenarios are now matched when referential integrity constraints (or informational referential integrity) are defined. Related concepts "Plan optimization guidelines" in Troubleshooting and Tuning Database Performance
Related concepts New DB2 sample programs have been added on page 111 "Inline LOBs improve performance" in Troubleshooting and Tuning Database Performance "Storing LOBs inline in table rows" in Database Administration Concepts and Configuration Reference Related reference "ALTER TABLE " in SQL Reference, Volume 2 "CREATE TABLE " in SQL Reference, Volume 2 "Large objects (LOBs)" in SQL Reference, Volume 1 "ADMIN_IS_INLINED function - Determine if data is inlined" in Administrative Routines and Views "ADMIN_EST_INLINE_LENGTH function - Estimate length required to inline data" in Administrative Routines and Views
I/O completion ports (IOCP) are used by default for asynchronous I/O (AIX)
When I/O completion ports are configured, they are used by default to process asynchronous I/O (AIO) requests on the AIX operating system and can improve overall system performance. AIX operating systems in symmetric multi-processor (SMP) environments use IOCP for their AIO processing. The AIO server processes from the AIX operating system manage the I/O requests by processing large numbers of requests in the most optimal way for the system. Your operating system might not have the IOCP module installed and configured. If the DB2 Version 9.7 minimum AIX operating system requirements were applied using an operating system upgrade rather than using a new operating system installation, you must install and configure the IOCP module separately. Related tasks "Configuring IOCP ( AIX)" in Troubleshooting and Tuning Database Performance
Isolation level locking intent can be specified in subselect and fullselect clauses
Starting in Version 9.7, you can now specify different isolation levels and locking intents for tables in different subselect clauses. In previous releases, you could only specify the isolation level and locking intent at the statement level, meaning that those settings would be applied to all the tables in the SQL statement. This change allows for improved concurrency because you can specifically control isolation levels instead of using a high isolation level for the entire query.
Example
In the following example of an inner join between two tables, the repeatable read (RR) isolation level specified for table MySchema.T1 will override the read stability (RS) isolation level specified for the session:
SET CURRENT ISOLATION LEVEL RS;
SELECT T1.c1, T1.c2, T2.c3
  FROM (SELECT c1, c2 FROM MySchema.T1 WITH RR) as T1, MySchema.T2 as T2
  WHERE T1.c1 = T2.c4;
MySchema.T1 will be accessed using repeatable read, and MySchema.T2 will be accessed using read stability.
To minimize the impact of these restrictions, issue a COMMIT immediately after a DETACH operation. Related concepts "Asynchronous partition detach for data partitioned tables" in Partitioning and Clustering Guide "Data partition detach phases" in Partitioning and Clustering Guide Related tasks "Detaching data partitions" in Partitioning and Clustering Guide Related reference "ALTER TABLE " in SQL Reference, Volume 2
FP2: Additional optimization of performance using IBM Optim Performance Manager Extended Edition
IBM Optim Performance Manager Extended Edition Version 4.1 is a follow-on to DB2 Performance Expert. Starting with Version 9.7 Fix Pack 2 and later fix packs, Optim Performance Manager Extended Edition helps optimize the performance and availability of mission-critical databases and applications. Optim Performance Manager Extended Edition delivers a proactive, comprehensive performance management approach that allows IT staff to identify, diagnose, solve, and prevent performance problems in DB2 products and in associated applications, including Java and DB2 Call Level Interface (CLI) applications.
Porting existing PL/SQL scripts, package definitions, or individual PL/SQL and SQL statements that work with other database management systems to DB2 Version 9.7 is done by executing them from a DB2 Command Window or from the DB2 Command Line Processor. With the DB2 environment initialized for PL/SQL, the DB2 compiler processes the PL/SQL DDL statements and creates the required database objects in the database. PL/SQL anonymous blocks, as well as references to commonly used package routines and views, can also be compiled and executed. In Version 9.7 Fix Pack 1 and later fix packs, FORALL syntax and BULK COLLECT INTO syntax are supported in the DB2 PL/SQL compiler: v The FORALL statement supports the INSERT, searched DELETE, searched UPDATE, and EXECUTE IMMEDIATE statements. Logically, a FORALL operation iterates over an array, processing the data change for each array element as a block operation. v The BULK COLLECT extension to the INTO clause (in the SELECT INTO, FETCH, and EXECUTE IMMEDIATE statements) uses arrays to store multiple rows of data (either scalar types or records). With this support, you can quickly enable PL/SQL solutions in DB2 environments. Related concepts "PL/SQL support" in SQL Procedural Languages: Application Enablement and Support Related reference "DB2_COMPATIBILITY_VECTOR registry variable" in SQL Procedural Languages: Application Enablement and Support
Related concepts New DB2 sample programs have been added on page 111 Related reference "DB2_COMPATIBILITY_VECTOR registry variable" in SQL Procedural Languages: Application Enablement and Support "NUMBER data type" in SQL Procedural Languages: Application Enablement and Support "VARCHAR2 and NVARCHAR2 data types" in SQL Procedural Languages: Application Enablement and Support "DATE data type based on TIMESTAMP(0)" in SQL Procedural Languages: Application Enablement and Support "National character strings" in SQL Reference, Volume 1
Related tasks "Setting up DB2 for Oracle application enablement" in SQL Procedural Languages: Application Enablement and Support Related reference "DB2_COMPATIBILITY_VECTOR registry variable" in SQL Procedural Languages: Application Enablement and Support
APPLNAME
    Application name
CURRENT CLIENT_ACCTNG
    Client accounting string
CURRENT CLIENT_APPLNAME
    Client application name
CURRENT CLIENT_USERID
    Client user ID
CURRENT CLIENT_WRKSTNNAME
    Client workstation name
Workload-level aggregate activity data collection and new high watermarks provide additional statistics
To support priority aging of in-progress activities and the definition of thresholds at the workload level, enhanced monitoring and statistics are available.
act_cpu_time_top
    The high watermark for processor time used by activities at all nesting levels in a service class, workload, or work class
act_rows_read_top
    The high watermark for the number of rows read by activities at all nesting levels in a service class, workload, or work class
You can also determine the highest lock wait time of any request on a partition for a workload during a time interval by using the following new high watermark:
lock_wait_time_top
    The high watermark for lock wait times for any request in a workload, in milliseconds
Related concepts "Statistics for DB2 workload manager objects" in Workload Manager Guide and Reference Workloads support activity-based threshold controls on page 74 Chapter 5, Monitoring enhancements, on page 33
DB2 workload manager supports integration with Linux workload management (WLM)
You can use the optional integration between DB2 service classes and Linux WLM classes (with Linux kernel version 2.6.26 or later) to control the amount of processor resource allocated to each service class. When you use this integration, you take advantage of the native control capabilities provided by Linux WLM. The additional functionality provided by integrating DB2 workload manager with Linux WLM is similar to what is provided when you integrate DB2 workload manager with AIX Workload Manager. If enabled, all threads running in a DB2 service class are mapped to a Linux WLM class, where they are subject to the processor resource controls that you define in Linux WLM.
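As a sketch, assuming a Linux WLM class named _highpri has already been defined in the operating system and that your WLM configuration matches on the correlator value, a DB2 service class can be tagged with an outbound correlator so that its agent threads are placed under that class:
CREATE SERVICE CLASS highpri OUTBOUND CORRELATOR '_highpri'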
Related concepts "Integration of Linux workload management with DB2 workload manager" in Workload Manager Guide and Reference "Integration with operating system workload managers" in Workload Manager Guide and Reference
Your data server changes the priority of in-progress activities by moving them (remapping them) between service subclasses in response to the amount of processor time used or the number of rows read. You can automatically decrease, or age, the priority of longer-running activities by having them remapped to a service class with fewer resources (that is, with more resource constraints). Because the data server performs the remapping automatically, you do not require detailed knowledge of the activities running on your data server at a particular time. The following illustration shows how activities are priority aged:
[Illustration labels: User database requests; Workload; Requests; Service superclass; Threshold]
As work enters the system, activities are automatically placed into the first service class and begin running using the high-priority settings of this service class. If you also define thresholds for each of the service classes that limit the time or resources used during execution, activities are dynamically reassigned to the next-lower service class if the threshold of the next-higher class is violated. This dynamic resource control is repeatedly applied until activities complete or are in the lowest-priority class, where they remain until they complete or you force them to stop running.
The following list contains the types of work actions that are available when a work action set is applied at the workload level: v COUNT ACTIVITY v PREVENT EXECUTION v COLLECT ACTIVITY DATA v COLLECT AGGREGATE ACTIVITY DATA v Thresholds that apply to each individual activity in the matching work class: ESTIMATEDSQLCOST SQLROWSRETURNED ACTIVITYTOTALTIME SQLTEMPSPACE SQLROWSREAD CPUTIME v CONCURRENTDBCOORDACTIVITIES threshold that applies to all activities as a group in the matching work class. This threshold controls the number of concurrent activities in the matching work class from all occurrences of the workload. Related concepts "Work actions and the work action set domain" in Workload Manager Guide and Reference "Concurrency control at the workload level using work action sets" in Workload Manager Guide and Reference
For the most part, you can continue to use the same system control approach that QP currently uses until you determine how best to use the WLM capabilities. Related concepts "Query Patroller migration tool" in Workload Manager Guide and Reference Related tasks "Migrating from Query Patroller to DB2 workload manager" in Upgrading to DB2 Version 9.7
You can control the types of operations that fenced routines can perform. For more information, see Fenced mode process (db2fmp) privileges can be customized (Windows) on page 90.
for these applications to perform database actions that are no longer within the scope of SYSADM, you must grant the LocalSystem account the required database privileges or authorities. For example, if an application requires database administrator capabilities, grant the LocalSystem account DBADM authority using the GRANT (Database Authorities) statement. Note that the authorization ID for the LocalSystem account is SYSTEM.
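For example, to give the LocalSystem account database administrator capabilities on the current database:
GRANT DBADM ON DATABASE TO USER SYSTEM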
By default, DATAACCESS and ACCESSCTRL authorities are included if they are not specified. Related concepts "Authorization, privileges, and object ownership" in SQL Reference, Volume 1 "Windows LocalSystem account support" in Database Security Guide "Authorities roadmap" in Database Security Guide System administrator (SYSADM) authority scope has changed on page 190 Database administrator (DBADM) authority scope has changed on page 193 Security administrator (SECADM) abilities have been extended on page 192 Related reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
Related concepts "Authentication methods for your server" in Database Security Guide Related reference "authentication - Authentication type configuration parameter" in Database Administration Concepts and Configuration Reference "alternate_auth_enc - Alternate encryption algorithm for incoming connections at server configuration parameter" in Database Administration Concepts and Configuration Reference
Configuration enhancements
You no longer need to use separate configuration files to set up SSL support. The parameters that you used to set in the SSLconfig.ini and SSLClientconfig.ini files are now replaced by database manager configuration parameters and connection string keywords.
v There are six new server-side configuration parameters:
  - ssl_svr_keydb specifies the fully qualified path of the key database file.
  - ssl_svr_stash specifies the fully qualified path of the stash file that holds the encrypted password to the key database.
  - ssl_svr_label specifies the label of the digital certificate of the server in the key database.
  - ssl_svcename specifies the port that the database server uses to await communications from remote clients using the SSL protocol.
  - ssl_cipherspecs (optional) specifies the cipher suites that the server supports.
  - ssl_versions (optional) specifies the SSL and TLS versions that the server supports.
v There are two new client-side database manager configuration parameters:
  - ssl_clnt_keydb specifies the fully qualified path of the key database file on the client.
  - ssl_clnt_stash specifies the fully qualified path of the stash file on the client.
v There are two new connection string keywords for CLI/ODBC applications:
  - ssl_client_keystoredb - Set ssl_client_keystoredb to the fully-qualified key database file name.
  - ssl_client_keystash - Set ssl_client_keystash to the fully-qualified stash file name.
v There are three new connection string keywords for DB2 .Net Data Provider applications:
  - SSLClientKeystoredb - Set SSLClientKeystoredb to the fully-qualified key database file name.
  - SSLClientKeystash - Set SSLClientKeystash to the fully-qualified stash file name.
  - security - Set security to SSL.
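For example (key database paths, certificate label, and port are illustrative), the server-side setup might update the new database manager configuration parameters and then enable the SSL communication protocol:
UPDATE DBM CFG USING SSL_SVR_KEYDB /home/db2inst1/keystore/server.kdb SSL_SVR_STASH /home/db2inst1/keystore/server.sth
UPDATE DBM CFG USING SSL_SVR_LABEL myselfsigned SSL_SVCENAME 50443
db2set DB2COMM=SSL,TCPIP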
Setting up SSL connections for CLP clients and embedded SQL clients
The SSL keyword has been added to the CATALOG TCPIP NODE command SECURITY parameter. CLP clients and embedded SQL clients can use this keyword and the client-side database manager configuration parameters ssl_clnt_keydb and ssl_clnt_stash to connect to a database using SSL.
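For example, a CLP client might catalog the server node over the SSL port (host name, port, and database name are illustrative) and then connect as usual:
CATALOG TCPIP NODE sslnode REMOTE myserver.example.com SERVER 50443 SECURITY SSL
CATALOG DATABASE sample AT NODE sslnode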
Related concepts "Supported cipher suites and SSL versions" in Database Security Guide SSLconfig.ini and SSLClientconfig.ini files replaced with new database manager configuration parameters on page 194 Related tasks "Configuring Secure Sockets Layer (SSL) support in a DB2 instance" in Database Security Guide "Configuring Secure Sockets Layer (SSL) support in non-Java DB2 clients" in Database Security Guide Related reference "CATALOG TCPIP/TCPIP4/TCPIP6 NODE " in Command Reference "security CLI/ODBC configuration keyword " in Call Level Interface Guide and Reference, Volume 2
FP1: Transparent LDAP authentication and group lookup is supported (Linux and UNIX)
DB2 Version 9.7 supports transparent LDAP on the AIX operating system, which is a new option for implementing LDAP-based authentication and group lookup. Starting with DB2 V9.7 Fix Pack 1, transparent LDAP support has been extended to the Linux, HP-UX, and Solaris operating systems. Transparent LDAP authentication and group lookup allows you to utilize LDAP-based authentication without having to first configure the DB2 environment using the DB2 LDAP security plug-ins. You can configure DB2 instances to authenticate users and acquire their groups through the operating system. The operating system will, in turn, perform the authentication and group acquisition through an LDAP server. To enable transparent LDAP authentication, set the DB2AUTH registry variable to OSAUTHDB. As of Fix Pack 2, transparent LDAP authentication and group lookup support on AIX is extended to certify Kerberos authentication support. Related concepts "LDAP-based authentication and group lookup support" in Database Security Guide
SYSMON authority has been extended to LIST commands and the db2mtrk command
To improve the database monitoring capability of a user holding system monitor (SYSMON) authority, SYSMON now includes the ability to run certain LIST commands. Also, SYSMON authority enables you to run the db2mtrk command to report memory pool allocation information. The affected LIST commands are as follows:
v LIST DATABASE PARTITION GROUPS
v LIST DRDA INDOUBT TRANSACTIONS
v LIST PACKAGES
v LIST TABLES
v LIST TABLESPACE CONTAINERS
v LIST TABLESPACES
v LIST UTILITIES
Related reference
"LIST PACKAGES/TABLES command" in Command Reference
"LIST TABLESPACE CONTAINERS command" in Command Reference
"LIST TABLESPACES command" in Data Movement Utilities Guide and Reference
"LIST DRDA INDOUBT TRANSACTIONS " in Command Reference
"LIST DATABASE PARTITION GROUPS command" in Command Reference
"db2mtrk - Memory tracker " in Command Reference "LIST UTILITIES " in Command Reference
Related reference "Miscellaneous variables" in Database Administration Concepts and Configuration Reference
FP1: 32-bit GSKit libraries are included in the 64-bit DB2 product installation
DB2 Version 9.7 Fix Pack 1 and later now installs the IBM Global Security Kit (GSKit) libraries with the 64-bit DB2 database server. The 32-bit version of the GSKit libraries is automatically included when you install the 64-bit version of the DB2 database server. The GSKit libraries are necessary to support SSL connections between DB2 clients and databases. Ensure that the path to the GSKit libraries appears in the PATH environment variable on Windows and in the LIBPATH, SHLIB_PATH, or LD_LIBRARY_PATH environment variables on Linux and UNIX operating systems. For example, on Windows, add the GSKit bin and lib directories to the PATH environment variable:
set PATH="C:\Program Files\ibm\gsk8\bin";%PATH%
set PATH="C:\Program Files\ibm\gsk8\lib";%PATH%
Related tasks "Configuring Secure Sockets Layer (SSL) support in a DB2 instance" in Database Security Guide "Configuring Secure Sockets Layer (SSL) support in non-Java DB2 clients" in Database Security Guide
v Default values for parameters (see Default values and named arguments promote flexibility for creating and calling procedures and functions on page 106) v Argument assignment by parameter name when you call procedures (see Default values and named arguments promote flexibility for creating and calling procedures and functions on page 106) v Autonomous transactions, which commit work even if the invoking transaction itself is rolled back (see Autonomous transactions are supported on page 107) General database application programming has been improved with the following additional enhancements: v Extensions from Python applications to access IBM data servers (see Python application development support has been added on page 107) v System-defined modules (see System-defined modules simplify SQL PL and application logic on page 108) v An enhanced version of IBM Database Add-Ins for Visual Studio (see IBM Database Add-Ins for Visual Studio have been enhanced on page 110) v Collection of common-signature and signature-stable stored procedures that are portable across IBM data servers (see Common SQL API supports the development of portable administrative applications on page 109) v New DB2 sample programs as templates to create your own application programs (see New DB2 sample programs have been added on page 111) v OUT and INOUT parameters support in user-defined functions (see FP1: User-defined functions support OUT and INOUT parameters on page 114) SQL Procedural Language (SQL PL) programming has also been improved, see Chapter 12, SQL Procedural Language (SQL PL) enhancements, on page 135. In addition, the IBM data server clients and drivers have been enhanced as follows: v IBM Data Server Driver for JDBC and SQLJ includes support for Version 9.7 enhancements and includes additional features (see JDBC and SQLJ support has been enhanced on page 115). v IBM Data Server Driver Package has been enhanced (see IBM Data Server Driver Package has been enhanced on page 124). v The IBM_DB Ruby driver, IBM PHP extensions, and IBM Data Server Provider for .NET support trusted contexts (see Trusted context support has been extended on page 125). v Sysplex functionality is supported on IBM data server clients and non-Java data server drivers with a DB2 Connect license (see Sysplex support is extended to IBM data server clients and non-Java data server drivers on page 125). v Call Level Interface (CLI) includes support for Version 9.7 enhancements and includes additional features (see Call Level Interface (CLI) functionality has been enhanced on page 126). v The packaging of the data server drivers has been simplified (see Component names have changed on page 3).
Example
ALTER TABLE ORG RENAME COLUMN DEPTNUMB TO DEPTNUM
Related tasks "Renaming tables and columns" in Database Administration Concepts and Configuration Reference Related reference "ALTER TABLE " in SQL Reference, Volume 2
Example
Replace V1, a view that has dependent objects.
CREATE TABLE T1 (C1 INT, C2 INT);
CREATE TABLE T2 (C1 INT, C2 INT);
CREATE VIEW V1 AS SELECT * FROM T1;
CREATE VIEW V2 AS SELECT * FROM V1;
CREATE FUNCTION foo1() LANGUAGE SQL RETURNS INT RETURN SELECT C1 FROM V2;
CREATE OR REPLACE VIEW V1 AS SELECT * FROM T2;
SELECT * FROM V2;
VALUES foo1();
The replaced version of V1 references T2 instead of T1. Both V2 and foo1 are invalidated by the CREATE OR REPLACE statement. Under revalidation deferred semantics, SELECT * FROM V2 successfully revalidates V2, but not foo1, which is revalidated by VALUES foo1(). Under revalidation immediate semantics, both V2 and foo1 are successfully revalidated by the CREATE OR REPLACE statement.
Related reference "SQL statements" in SQL Reference, Volume 2 "Auto_revalidation - Automatic revalidation and invalidation configuration parameter" in Database Administration Concepts and Configuration Reference "Creating and maintaining database objects" in Database Administration Concepts and Configuration Reference
CREATE with errors support has been added for certain database objects
Certain objects can now be created even if errors occur during the compilation of their bodies; for example, creating a view when the table to which it refers does not exist. Such objects remain invalid until they are accessed. CREATE with errors support currently extends to views and inline SQL functions (not compiled functions). This feature is enabled if the auto_reval database configuration parameter is set to DEFERRED_FORCE. The errors that are tolerated during object creation are limited to the following types: v Any name resolution error, such as: a referenced table does not exist (SQLSTATE 42704, SQL0204N), a referenced column does not exist (SQLSTATE 42703, SQL0206N), or a referenced function cannot be found (SQLSTATE 42884, SQL0440N) v Any nested revalidation failure. An object being created can reference invalid objects, and revalidation will be invoked for those invalid objects. If revalidation of any referenced invalid object fails, the CREATE statement succeeds, and the created object will remain invalid until it is next accessed. v Any authorization error (SQLSTATE 42501, SQL0551N) An object can be created successfully even if there are multiple errors in its body. The warning message that is returned contains the name of the first undefined, invalid, or unauthorized object that was encountered during compilation. The SYSCAT.INVALIDOBJECTS catalog view contains information on invalid objects.
Example
CREATE VIEW V2 AS SELECT * FROM V1
If V1 does not exist, the CREATE VIEW statement completes successfully, but V2 remains invalid. Related reference "Auto_revalidation - Automatic revalidation and invalidation configuration parameter" in Database Administration Concepts and Configuration Reference "Creating and maintaining database objects" in Database Administration Concepts and Configuration Reference
v ALTER MODULE DROP PROCEDURE
v ALTER MODULE DROP TYPE
v ALTER MODULE DROP VARIABLE
v ALTER NICKNAME (altering the local name or the local type)
v ALTER TABLE ALTER COLUMN
v ALTER TABLE DROP COLUMN
v ALTER TABLE RENAME COLUMN
v CREATE OR REPLACE ALIAS
v CREATE OR REPLACE FUNCTION
v CREATE OR REPLACE NICKNAME
v CREATE OR REPLACE PROCEDURE
v CREATE OR REPLACE SEQUENCE
v CREATE OR REPLACE TRIGGER
v CREATE OR REPLACE VARIABLE
v CREATE OR REPLACE VIEW
v DROP FUNCTION
v DROP NICKNAME
v DROP PROCEDURE
v DROP SEQUENCE
v DROP TABLE
v DROP TRIGGER
v DROP TYPE
v DROP VARIABLE
v DROP VIEW
v RENAME TABLE
Example
Disable soft invalidation support.
db2set DB2_DDL_SOFT_INVAL=off
Related concepts "Soft invalidation of database objects" in Database Administration Concepts and Configuration Reference Related reference "DROP " in SQL Reference, Volume 2 "SQL statements" in SQL Reference, Volume 2 "Auto_revalidation - Automatic revalidation and invalidation configuration parameter" in Database Administration Concepts and Configuration Reference
types (for example, INTEGER to SMALLINT), and the full set of type alterations that are consistent with the cast functions for data types was not supported. Altering the column data type can cause data loss. Some of this loss is consistent with casting rules; for example, blanks can be truncated from strings without returning an error, and converting a DECIMAL to an INTEGER results in truncation. To prevent unexpected errors, such as overflow errors, truncation errors, or any other kind of error returned by casting, existing column data is scanned, and messages about conflicting rows are written to the notification log. Column default values are also checked to ensure that they conform to the new data type. If a data scan does not report any errors, the column type is set to the new data type, and the existing column data is cast to the new data type. If an error is reported, the ALTER TABLE statement fails. Casting of VARCHAR, VARGRAPHIC, or LOB data types to types of lesser length is not supported.
Example
Change the data type of the SALES column in the SALES table from INTEGER to SMALLINT.
ALTER TABLE SALES ALTER COLUMN SALES SET DATA TYPE SMALLINT
DB20000I  The SQL command completed successfully.
Change the data type of the REGION column in the SALES table from VARCHAR(15) to VARCHAR(14).
ALTER TABLE SALES ALTER COLUMN REGION SET DATA TYPE VARCHAR(14)
...
SQL0190N  ALTER TABLE "ADMINISTRATOR.SALES" specified attributes for column "REGION"
that are not compatible with the existing column.  SQLSTATE=42837
Change a column type in a base table. There are views and functions that are directly or indirectly dependent on the base table.
CREATE TABLE T1 (C1 INT, C2 INT);
CREATE VIEW v1 AS SELECT C1, C2 FROM T1;
CREATE VIEW v2 AS SELECT C1, C2 FROM V1;
CREATE FUNCTION foo1 () LANGUAGE SQL RETURNS INT RETURN SELECT C2 FROM T1;
CREATE VIEW v3 AS SELECT C2 FROM V2 WHERE C2 = foo1();
CREATE FUNCTION foo2 () LANGUAGE SQL RETURNS INT RETURN SELECT C2 FROM V3;
ALTER TABLE T1 ALTER COLUMN C1 SET DATA TYPE SMALLINT;
SELECT * FROM V2;
The ALTER TABLE statement, which down casts the column type from INTEGER to SMALLINT, invalidates V1, V2, V3, and foo2. Under revalidation deferred semantics, SELECT * FROM V2 successfully revalidates V1 and V2, and the C1 columns in both V1 and V2 are changed to SMALLINT. But V3 and foo2 are not revalidated, because they are not referenced after being invalidated, and they are above V2 in the dependency hierarchy chain. Under revalidation immediate semantics, the ALTER TABLE statement revalidates all the dependent objects successfully. Related reference "ALTER TABLE " in SQL Reference, Volume 2 "Casting between data types" in pureXML Guide "Altering tables" in Database Administration Concepts and Configuration Reference
Also, any connection can refer to a created temporary table at any time without the need for a setup script to initialize the created temporary table. A connection can access only the rows that it inserts. Other DB2 family products, such as DB2 for z/OS, and the SQL Standard support created temporary tables. You can use the cgtt.db2 and the Cgtt.java sample programs to learn how to use this feature. Related concepts New DB2 sample programs have been added on page 111 Related reference "CREATE GLOBAL TEMPORARY TABLE " in SQL Reference, Volume 2
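For example, assuming a user temporary table space already exists in the database, a created temporary table can be defined once and then used by any connection (column names are illustrative):
CREATE GLOBAL TEMPORARY TABLE session_totals
   (order_id INTEGER, amount DECIMAL(10, 2))
   ON COMMIT PRESERVE ROWS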
EXTRACT
    Returns a portion of a date or timestamp.
INSTR (which is a synonym for the LOCATE_IN_STRING scalar function)
    Returns the starting position of a string within another string.
INITCAP
    Takes a string expression and returns a string expression with the first character of each word in uppercase, and all other letters in lowercase.
LAST_DAY
    Returns a datetime value that represents the last day of the month.
LOCATE_IN_STRING (INSTR is a synonym of this scalar function)
    Returns the starting position of a string within another string.
LPAD
    Adds characters, symbols, or spaces to the left side of a string.
MONTHNAME
    Returns a character string containing the name of the month (for example, January).
MONTHS_BETWEEN
    Returns an estimate of the number of months between two expressions.
NCHAR
    Returns a fixed-length national character string representation of a value. This scalar function is available in Version 9.7 Fix Pack 2 and later fix packs.
NCLOB
    Returns an NCLOB representation of a national character string. This scalar function is available in Version 9.7 Fix Pack 2 and later fix packs.
NEXT_DAY
    Returns a datetime value that represents the first weekday later than the date in a specified expression.
NVARCHAR
    Returns a varying-length national character string representation of a value. This scalar function is available in Version 9.7 Fix Pack 2 and later fix packs.
ROUND_TIMESTAMP
    Returns a timestamp value from an expression rounded to a specified unit.
RPAD
    Adds characters, symbols, or spaces to the right side of a string.
SUBSTRB
    Returns a substring of a string. This scalar function is available in Version 9.7 Fix Pack 1 and later fix packs.
TO_CLOB (which is a synonym for the CLOB scalar function)
    Converts character data to the CLOB data type.
TO_NCHAR
    Returns a national character representation of an input expression that has been formatted using a character template. This scalar function is available in Version 9.7 Fix Pack 2 and later fix packs.
TO_NCLOB
    Returns an NCLOB representation of a character string. This scalar function is available in Version 9.7 Fix Pack 2 and later fix packs.
TO_NUMBER (which is a synonym for the DECFLOAT_FORMAT scalar function)
    Returns a DECFLOAT(34) value that is based on the interpretation of a string using the specified format.
TO_TIMESTAMP (which is a synonym for the TIMESTAMP_FORMAT scalar function)
    Returns a timestamp that is based on the interpretation of a string using the specified format.
TRIM_ARRAY
    Deletes elements from the end of an array.
TRUNC_TIMESTAMP
    Returns a timestamp value from an expression truncated to a specified unit.
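As a small illustration of a few of the new functions (the results shown in the comments are what these inputs would produce):
VALUES INITCAP('new scalar functions')   -- 'New Scalar Functions'
VALUES LPAD('42', 5, '0')                -- '00042'
VALUES LAST_DAY(DATE('2010-02-10'))      -- '2010-02-28'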
Prior to Version 9.7, strong typing was used during comparisons and assignments. Strong typing requires matching data types, which means that you must explicitly convert one or both data types to a common data type before performing comparisons or assignments. In Version 9.7, the rules used during comparisons and assignments have been relaxed. If two objects have mismatched types, implicit casting is used to perform comparisons or assignments if a reasonable interpretation of the data types can be made. Implicit casting is also supported during function resolution. When the data types of the arguments of a function being invoked cannot be promoted to the data types of the parameters of the selected function, the data types of the arguments are implicitly cast to the data types of the parameters. For more information, see Functions. Implicit casting reduces the number of SQL statements that you must modify when enabling applications that run on data servers other than DB2 data servers to run on DB2 Version 9.7. In many cases, you no longer have to explicitly cast data types when comparing or assigning values with mismatched data types. You can use the implicitcasting.db2 and the ImplicitCasting.java sample programs to learn how to use this feature. Version 9.7 includes another enhancement that enables you to use untyped parameter markers and untyped NULL keywords almost anywhere in an SQL statement where you can use an expression. For more information, see Determining data types of untyped expressions. Related concepts New DB2 sample programs have been added on page 111 Related reference "Functions" in SQL Reference, Volume 1 "Casting between data types" in pureXML Guide "Assignments and comparisons" in SQL Reference, Volume 1 "Rules for result data types" in SQL Reference, Volume 1 "Determining data types of untyped expressions" in SQL Reference, Volume 1
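For example, assuming a table in which the EMPNO column is defined as a character type, the following comparison no longer requires an explicit cast; the character value and the numeric literal are compared after an implicit conversion:
SELECT empno, lastname FROM employee WHERE empno = 300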
Related reference "Constants" in SQL Reference, Volume 1 "Datetime values" in SQL Reference, Volume 1 "Assignments and comparisons" in SQL Reference, Volume 1 "Rules for result data types" in SQL Reference, Volume 1 "Datetime operations and durations" in SQL Reference, Volume 1
Related concepts New DB2 sample programs have been added on page 111 Related reference "CREATE ALIAS " in SQL Reference, Volume 2
Default values and named arguments promote flexibility for creating and calling procedures and functions
Starting in Version 9.7, you can create procedures and specify default values for parameters. When calling procedures, arguments can be assigned to parameters by name, allowing you to pass named arguments in any order. Starting with Version 9.7 Fix Pack 2, the features of default values and named arguments have been extended to user-defined functions (UDFs). With default parameter values defined in the procedure or UDF, and the ability to invoke a procedure or UDF with named arguments, you now have the following options when calling the procedure or UDF: v Explicitly use the name of the parameter to name the argument v Omit argument values such that the default value will be assigned v Specify named arguments in any order These enhancements allow for replacing an existing procedure or UDF with added parameters using defaults such that existing invocations of the procedure or UDF do not need to be changed. You can use the defaultparam.db2 sample program to learn how to use this feature.
Examples
v Example 1: Create a procedure FOO with 3 parameters (parm1, parm2 and parm3) and assign default values for each parameter.
CREATE PROCEDURE FOO(parm1 INT DEFAULT -1, parm2 INT DEFAULT -2, parm3 INT DEFAULT -3)
The invocation of FOO is identical to FOO (10, 20, 30). v Example 3: Invoke the FOO procedure by omitting argument values.
CALL FOO (40)
FOO is invoked using a value of 40 for parm1 and uses the default values of -2 for parm2 and of -3 for parm3. v Example 4: Invoke the FOO procedure by passing arguments to the named parameters in any order.
CALL FOO (parm3=>30, parm2=>20)
FOO is invoked using the default value of -1 for parm1, the value 20 for parm2 and the value 30 for parm3. v Example 5: Invoke the FOO procedure by passing a mixture of named and unnamed arguments. Values that are not passed in by name are mapped according to their position in the invocation.
CALL FOO (40, parm3=>10)
FOO is invoked using a value of 40 for parm1, the default value of -2 for parm2, and a value of 10 for parm3. Related concepts New DB2 sample programs have been added on page 111 Related reference "CALL " in SQL Reference, Volume 2 "CREATE FUNCTION (OLE DB external table) " in SQL Reference, Volume 2 "CREATE FUNCTION (SQL scalar, table, or row) " in SQL Reference, Volume 2 "CREATE FUNCTION (external scalar) " in SQL Reference, Volume 2 "CREATE FUNCTION (external table) " in SQL Reference, Volume 2 "CREATE FUNCTION (sourced or template) " in SQL Reference, Volume 2 "CREATE PROCEDURE (external) " in SQL Reference, Volume 2 "CREATE PROCEDURE (SQL) " in SQL Reference, Volume 2 "CREATE PROCEDURE statement (PL/SQL)" in SQL Procedural Languages: Application Enablement and Support
ibm_db_dbi API Implements Python Database API Specification v2.0, which provides basic functions for interacting with databases but does not offer the advanced features provided by ibm_db ibm_db_sa adaptor Provides support for using SQLAlchemy to access IBM data servers As of Fix Pack 1, the following extension is available: ibm_db_django Provides support for the Django Framework These extensions allow Python applications to access the following IBM data servers: v DB2 Database for Linux, UNIX, and Windows, Version 9.1 Fix Pack 2 (and later) v DB2 UDB for Linux, UNIX, and Windows, Version 8 Fixpak 15 (and later) v Remote connections to IBM DB2 Universal Database on i5/OS V5R3, with PTF SI27358 (includes SI27250) v Remote connections to DB2 for IBM i 5.4 (and later) with PTF SI27256 v Remote connections to DB2 for z/OS, Version 8 and Version 9 v IBM Informix Dynamic Server, Version 11.10 (and later) Related concepts "Python, SQLAlchemy and Django Framework application development for IBM data servers" in Getting Started with Database Application Development Related reference "Python downloads and related resources" in Developing Perl, PHP, Python, and Ruby on Rails Applications
DBMS_JOB module
DBMS_LOB module
Table 6. Supported modules (continued)
DBMS_OUTPUT module
    Provides a set of procedures for putting messages (lines of text) in a message buffer and getting messages from the message buffer within a single session. These procedures are useful during application debugging when you need to write messages to standard output.
DBMS_PIPE module
    Provides a set of routines for sending messages through a pipe within or between sessions that are connected to the same database.
DBMS_SQL module
    Provides a set of procedures for executing dynamic SQL.
DBMS_UTILITY module
    Provides a set of utility routines.
UTL_DIR module
    Provides a set of routines for maintaining directory aliases that are used with the UTL_FILE module.
UTL_FILE module
    Provides a set of routines for reading from and writing to files on the database server file system.
UTL_MAIL module
    Provides a set of procedures for sending email.
UTL_SMTP module
    Provides a set of routines for sending e-mail using the Simple Mail Transfer Protocol (SMTP).
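For example, from the command line processor, the DBMS_OUTPUT message buffer can be displayed by enabling server output and calling the module routine (the message text is illustrative):
SET SERVEROUTPUT ON
CALL DBMS_OUTPUT.PUT_LINE('Nightly maintenance started')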
Related concepts "System-defined modules" in SQL Procedural Languages: Application Enablement and Support
A simple security model The stored procedures require only EXECUTE privilege, with no additional dependencies. The ability to add additional stored procedures in fix packs The set of stored procedures can be expanded in future fix packs to provide support for performing additional administrative functions. Data server version independence The stored procedures provide syntactically identical XML parameters and error handling across all data servers to ensure data server version independence. Signature stability and commonality are achieved by using simple XML documents (with a common DTD) as parameters. Version, platform, and technology differences are expressed through different key value pairs in hierarchical property lists. The ability for clients to determine supported features Clients can call the stored procedures to determine the highest supported versions. Support for automation You can use the stored procedures in automated scripts. The common SQL API currently provides the following stored procedures.
Table 7. Common SQL API stored procedures
CANCEL_WORK
Cancels either a specific activity (for example, an SQL statement) or all activities for a connected application
GET_CONFIG
Retrieves data server configuration data, including nodes.cfg file data, database manager configuration data, database configuration data, and DB2 registry settings from all database partitions
GET_MESSAGE
Retrieves the short message text, long message text, and SQLSTATE for an SQLCODE
GET_SYSTEM_INFO
Retrieves information about the data server, including information about the system, the current instance, installed DB2 products, environment variables, and available processors
SET_CONFIG
Updates the configuration parameters retrieved by the GET_CONFIG procedure
Related concepts "Common SQL API procedures" in Administrative Routines and Views
v You can install IBM Database Add-Ins for Visual Studio in Visual Studio 2008 in addition to the existing Visual Studio 2005 support. v You can install IBM Database Add-Ins for Visual Studio with a non-administrator account with elevated privileges. v You can install the 32-bit version of the IBM Database Add-Ins for Visual Studio packaged with the 64-bit DB2 server and clients. v The add-ins now support IBM DB2 Version 9.7 for Linux, UNIX, and Windows and UniVerse 10.3. Availability of some features and data type compatibility depends on the data server that you use. v You can use a feature in the add-ins with IBM Optim pureQuery Runtime to capture SQL statements in your .NET applications. You can then run the SQL statements statically. Using static SQL avoids the need to prepare certain statements at run time and can improve security and performance of your applications. v You can define compound statements in your DB2 SQL procedures. v You can use trusted contexts during data connections. v You can choose to use Internet Protocol version 6 (IPv6) format for server addresses in data connections. v You can use the following Microsoft Visual Studio 2008 features: Windows Presentation Foundation (WPF). You can generate XAML-type applications from database tables, views, and procedures. Windows Workflow Foundation (WF) integration with Windows Communication foundation (WCF). You can drag an IBM DB2 custom activity from the WF project toolbox to the Activity Designer. ASP.NET AJAX. You can call ASP.NET Web services from the browser by using client script. v You can use localhost data connections without specifying user IDs and passwords. v You can group IBM database objects by schemas in the Server Explorer. In addition, the add-ins contain additional V9.7 features that enhance working with other IBM data servers. Related concepts "DB2 integration in Visual Studio" in Developing ADO.NET and OLE DB Applications
Implicit casting
Autonomous transactions
Default parameters
Public aliases
Modules
Partitioned environments
Index reorganization
Monitoring
Dropping of DB2 WLM service classes, thresholds, workloads, work classes, and work action sets
Generates a script that will help migrate a Query Patroller environment to a WLM environment
running applications, developing applications, and performing administrative tasks on DB2 Version 9.5 and Version 9.1 servers. However, the functionality available can differ based on the combination of the version levels used on the server and client (or driver).
you can use the statementConcentrator Connection or DataSource property to specify whether the IBM Data Server Driver for JDBC and SQLJ uses the statement concentrator support. In addition, the IBM Data Server Driver for JDBC and SQLJ adds the DB2Connection.setDBStatementConcentrator method, which lets you override the statementConcentrator setting for new statements that are created on an existing Connection. The driver also provides the DB2Connection.getDBStatementConcentrator method, which lets you check the statement concentrator setting.
Savepoints support
The IBM Data Server Driver for JDBC and SQLJ supports setting of savepoints for connections to IBM Informix Dynamic Server data servers.
v IDS ISAM error reporting is enabled. For connections to IDS V11.10 and later, ISAM errors are reported as SQLException objects, so SQLException methods can be used to obtain the error code and the message description. In addition, SQLException.printStackTrace calls display information about the cause of the ISAM errors. v More functions are supported for connections to IDS. For connections to IDS 11.50 and later, the following functions are supported: Progressive streaming Multi-row insert operations SSL support Setting and Retrieving of client info properties v Client reroute support for connections to IDS is added. This support requires the existence of one or more Connection Managers, a primary server, and one or more alternate servers at IDS 11.50 or later. v Workload balancing for connections to IDS is added. For workload balancing to IDS, JDBC and SQLJ applications connect to a Connection Manager. They set the enableSysplexWLB property to indicate that IDS workload balancing is used. This support requires IDS 11.50 or later. v Support for new IDS data types is added. As of IDS 11.50, IDS supports the BIGINT and BIGSERIAL data types. The IBM Data Server Driver for JDBC and SQLJ lets you access columns with those data types. For retrieving automatically generated keys from a BIGSERIAL column, the IBM Data Server Driver for JDBC and SQLJ adds the DB2Statement.getIDSBigSerial method. v Support for savepoints is added. The IBM Data Server Driver for JDBC and SQLJ supports setting of savepoints for connections to IBM Informix Dynamic Server data servers.
For connections to DB2 UDB for iSeries V5R3 and later servers, the following enhancements are added: v Support for the BINARY data type v Support for the DECIMAL data type with 63 digits of precision
getEstimateCost Returns the estimated cost of an SQL statement after the statement is dynamically prepared. getEstimateRowCount Returns the estimated number of rows that are returned by an SQL statement after the statement is dynamically prepared.
IBM Data Server Driver Package is a lightweight deployment solution that provides runtime support for applications using ODBC, CLI, .NET, OLE DB, PHP, Ruby, JDBC, or SQLJ without the need of installing Data Server Runtime Client or Data Server Client. This driver has a small footprint and is designed to be redistributed by independent software vendors (ISVs), and to be used for application distribution in mass deployment scenarios typical of large enterprises. Related concepts "IBM OLE DB Provider for DB2" in Getting Started with Database Application Development "PHP application development for IBM data servers" in pureXML Guide "IBM data server client types" in Installing IBM Data Server Clients "The IBM_DB Ruby driver and Rails adapter" in Getting Started with Database Application Development "Command line processor plus (CLPPlus)" in Command Reference "Network installation of IBM Data Server Driver Package ( Windows)" in Installing IBM Data Server Clients Related reference "db2drdat - DRDA trace " in Command Reference "db2cli - DB2 interactive CLI " in Command Reference
Sysplex support is extended to IBM data server clients and non-Java data server drivers
IBM data server clients and non-Java data server drivers that have a DB2 Connect license can now access a DB2 for z/OS Sysplex directly. Licensed clients are no longer required to go through a middle-tier IBM DB2 Connect 9.7 server to use Sysplex capabilities. The following Sysplex capabilities are now available in IBM data server clients and the non-Java data server drivers (IBM Data Server Driver Package and IBM Data Server Driver for ODBC and CLI):
Chapter 11. Application development enhancements
Transaction-level load balancing
Prior to the introduction of this feature, client applications that required transaction-level workload balancing had to go through an IBM DB2 Connect 9.7 server. Now, support for distributing transactions among members within a DB2 data-sharing group is available in the client, so applications accessing a DB2 for z/OS Sysplex no longer have to go through an IBM DB2 Connect 9.7 server.
Automatic client reroute with seamless failover for CLI and .NET applications
When connectivity to a member within a Sysplex is lost, the automatic client reroute feature allows the client to recover from the failure by attempting to reconnect to the database through any member of the Sysplex. Prior to the introduction of this feature, when a CLI or .NET application reestablished a database connection, an error (typically, SQL30081N) was always returned to the application to indicate that the failed transaction had been rolled back. Now, CLI or .NET applications that encounter a connectivity failure on the first SQL operation in a transaction are allowed to replay the failed SQL operation as part of automatic client reroute processing. If the connection is successful, no error is reported to the application, and the transaction is not rolled back. The connectivity failure and subsequent recovery are hidden from the application. Clients can perform failover based on a client-specified server list (known as the alternate server list) or the server list that is returned by the database server during the last connect. Some restrictions apply to seamless failover support.
Client-side XA support available for some transaction managers
Prior to the introduction of this feature, client-side XA support for DB2 for z/OS was not available, so non-Java client applications had to go through an IBM DB2 Connect 9.7 server to obtain XA support for DB2 for z/OS. Now, XA support for DB2 for z/OS is available in IBM data server clients and non-Java data server drivers.
Sysplex workload balancing is also supported by the IBM Data Server Driver for JDBC and SQLJ.
CLI applications can improve performance when using LOAD API by specifying ANYORDER file type modifier
You can improve CLI LOAD performance by using the new statement attribute SQL_ATTR_LOAD_MODIFIED_BY to specify the ANYORDER file type modifier when using the LOAD API. The statement attribute allows you to specify multiple file type modifiers separated by spaces. For example, the following call specifies the ANYORDER file type modifier for the CLI LOAD:
char *filemod="anyorder";
SQLSetStmtAttr (hstmt, SQL_ATTR_LOAD_MODIFIED_BY, (SQLPOINTER) filemod, SQL_NTS);
you specify 1 for this attribute, no header information is written to the CLI trace log file. The default value for this attribute is 0. If you accept the default or specify 0, information that is similar to the following appears in the CLI trace log file for every thread that is started:
[ Process: 1856, Thread: -1229691200 ]
[ Date & Time: 07/03/2008 14:43:53.074965 ]
[ Product: QDB2/LINUX DB2 v9.1.0.4 ]
[ Level Identifier: 01050107 ]
[ CLI Driver Version: 09.01.0000 ]
[ Informational Tokens: "DB2 v9.1.0.4","s080122","MI00228","Fixpack4" ]
[ Install Path: /opt/IBM/db2/V9.1.0.4 ]
[ db2cli.ini Location: /xxx/ramdisk2/db/cli/db2cli.ini ]
CLI applications accessing DB2 for z/OS can roll back a transaction during streaming
CLI applications accessing DB2 for z/OS can now roll back a transaction even while in SQL_NEED_DATA state by setting the new statement attribute SQL_ATTR_FORCE_ROLLBACK. This behavior is supported when the StreamPutData CLI/ODBC configuration keyword is set to 1. Prior to this enhancement, CLI applications running on DB2 for z/OS had to drop and reestablish the database connection to come out of SQL_NEED_DATA state.
CLI applications can retrieve data in an interleaved fashion for LOB objects in the same row
When querying data servers that support Dynamic Data Format, CLI applications can now call SQLGetData() for previously accessed LOB columns and maintain the data offset position from the previous call to SQLGetData(). You control this behavior by specifying the new AllowInterleavedGetData CLI/ODBC configuration keyword or the new statement attribute SQL_ATTR_ALLOW_INTERLEAVED_GETDATA. Prior to this enhancement, CLI applications could call SQLGetData() for previously accessed LOB columns by specifying the AllowGetDataLOBReaccess CLI/ODBC configuration keyword. However, there was no way to maintain the data position and offset information.
Prior to this enhancement, users were unable to pass procedure arguments in an order that differed from the order in which the parameters were defined when the procedure was created. DB2 CLI provides no support for the ability to bind by name. DB2 CLI processes anything that matches a valid parameter marker, and treats it as if it is a normal parameter marker represented by a question mark (?). To enable CLI support for named parameter processing, set the new EnableNamedParameterSupport CLI/ODBC configuration keyword to TRUE. By default, named parameter processing is disabled in the driver for all servers.
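For illustration only, a minimal db2cli.ini sketch that enables the keyword for one data source; the [SAMPLE] data source name is a placeholder:
[SAMPLE]
EnableNamedParameterSupport=TRUE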
When you call this procedure in a CLI application, you can omit specifying a value for any parameter that has a default value. The value for the missing parameter is supplied by the server. Therefore, the following example no longer results in an error:
CALL addEmp (empName => 'John', empDeptNo => 1, empAddr => 'Bangalore')
Compound SQL statements are not supported if CLI array input chaining is used.
the default behavior for currently committed semantics that is defined by the cur_commit configuration parameter.
CLI applications support additional data type conversions and variable-length timestamp
CLI applications now support conversions between the following data types:
Table 9. Support for additional data type conversions in DB2 CLI
SQL data types: SQL_BIGINT, SQL_DECIMAL, SQL_DECFLOAT, SQL_DOUBLE, SQL_FLOAT, SQL_INTEGER, SQL_NUMERIC, SQL_REAL, SQL_SMALLINT, SQL_TYPE_DATE, SQL_TYPE_TIME, SQL_TYPE_TIMESTAMP
C data type: SQL_C_DBCHAR
In addition, CLI performs the conversions required to support a variable length timestamp of the form TIMESTAMP(p), where the precision of the timestamp value, p, is between 0 and 12. CLI generates truncation warnings and errors as necessary during the conversion. The new CLI statement attribute SQL_ATTR_REPORT_TIMESTAMP_TRUNC_AS_WARN is also available to control whether a datetime overflow results in an error (SQLSTATE 22008) or warning (SQLSTATE 01S07).
conversion during bind-in or bind-out of character data. CLI will fetch/insert the raw data as-is from/to the server.
Connection String Property support The IBM Data Server Provider for .NET supports a set of connection string properties that provides client information. IBM Informix Dynamic Server enhancements specific to: ReturnValue parameter support for stored procedures Informix data server stored procedures can return single or multiple result sets. Previously, IBM Data Server Provider for .NET did not support multiple values from user defined routines (UDRs). The added support for ReturnValue parameters means that IBM Data Server Provider for .NET can retrieve the result set as a single return value. BIGINT and BIGSERIAL data types support IBM Data Server Provider for .NET previously supported only INT8 and SERIAL8 data types for 64-bit integers. Support has been added for BIGINT and BIGSERIAL which have better performance than the other two data types. High availability disaster recovery (HADR) support HADR protects against data loss by replicating data to a secondary database. IBM Data Server Provider for .NET works with this feature if you set it up on your data servers. Workload Manager (WLM) support You can use the WLM feature to maximize the utilization of your resources. IBM Data Server Provider for .NET works with this feature if you set it on your data servers. FP1: IfxType.Money Starting in Version 9.7 Fix Pack 1, support for the Informix MONEY data type is provided as an IfxType enumeration. The MONEY data type is treated as a DECIMAL data type with 2 digits of precision. FP1: Informix optimizer directives The IBM Data Server Provider for .NET does not process Informix optimizer directives. Starting in Version 9.7 Fix Pack 1, IBM Data Server Provider for .NET passes the directives through the client-side parsing to the data server where all directive-driven optimizations occur. Related concepts "IBM Data Server Provider for .NET" in Getting Started with Database Application Development Related tasks "Deploying .NET applications ( Windows)" in Developing ADO.NET and OLE DB Applications Related reference "SQL data type representation in ADO.NET database applications" in Developing ADO.NET and OLE DB Applications
Related database object definitions can be encapsulated in new module database object
New module database objects simplify database design and application development by allowing you to group together, in a named set within a schema, a collection of related data type definitions, database object definitions, routine prototypes, routines and other logic elements. This simple encapsulation of related database elements also facilitates easy deployment of the definitions to other schemas or databases. You can perform the following actions using modules: v Define within a single object definition the related definitions for any of: SQL procedures SQL functions External procedures External functions Global conditions A module initialization procedure for implicit execution upon module initialization User-defined data type definitions including: distinct type, array type, associative array type, row type, and cursor type Global variables
v Define a namespace such that objects defined within the module can refer to other objects defined in the module without providing an explicit qualifier. v Add object definitions that are private to the module. These objects can only be referenced by other objects within the module. v Add object definitions that are published. Published objects can be referenced from within the module or from outside of the module. v Define published prototypes of routines without routine bodies in modules and add the routines with routine bodies later, using the same signature as the routine prototype. v Define a module initialization procedure that is automatically executed when the first reference is made to a module routine or module global variable. This procedure can include SQL statements, SQL PL statements, and can be used to set default values for global variables or to open cursors. v Reference objects defined in the module from within the module and from outside of the module by using the module name as a qualifier (2-part name support) or a combination of the module name and schema name as qualifiers (3-part name support). v Drop objects defined within the module. v Drop the module. v Manage who can reference objects in a module by allowing you to grant and revoke the EXECUTE privilege for the module. v Port similar database objects written in other procedural languages to DB2 SQL PL. Modules can be created using the CREATE MODULE statement. You can use the modules.db2 sample program to learn how to use this feature. Related concepts New DB2 sample programs have been added on page 111 Related reference "CREATE MODULE " in SQL Reference, Volume 2
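As an illustrative sketch only (the module, variable, and procedure names here are hypothetical), a module can be created empty and then populated, using @ as the statement terminator:
-- Create an empty module in the current schema
CREATE MODULE inventory_mgmt@

-- Publish a global variable that other objects in the module can reference without a qualifier
ALTER MODULE inventory_mgmt PUBLISH VARIABLE reorder_level INTEGER DEFAULT 10@

-- Publish a procedure; callers outside the module invoke it as inventory_mgmt.set_reorder_level
ALTER MODULE inventory_mgmt PUBLISH PROCEDURE set_reorder_level (IN new_level INTEGER)
BEGIN
  SET reorder_level = new_level;
END@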
In releases prior to Version 9.7, these features are either not available or are only available for use within SQL procedures.
Chapter 12. SQL Procedural Language (SQL PL) enhancements
In Version 9.7 Fix Pack 1 and later fix packs, basic support for compiled user defined functions has been added for database partitioning environments.
Examples
The following example demonstrates the difference required in a CREATE FUNCTION statement to create a compiled SQL function instead of an inline SQL function.
Table 10. Comparison of SQL syntax required for inline SQL functions and compiled SQL functions
Inline SQL function definition:
CREATE FUNCTION TAN (X DOUBLE)
RETURNS DOUBLE
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC
BEGIN ATOMIC
  RETURN SIN(X)/COS(X);
END
Compiled SQL function definition:
CREATE FUNCTION TAN (X DOUBLE)
RETURNS DOUBLE
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC
BEGIN
  RETURN SIN(X)/COS(X);
END
The following example demonstrates a compiled SQL function definition containing a cursor, condition handler statement, and a REPEAT statement:
CREATE FUNCTION exit_func( a INTEGER)
SPECIFIC udfPSM320
LANGUAGE SQL
RETURNS INTEGER
BEGIN
  DECLARE val INTEGER DEFAULT 0;
  DECLARE myint INTEGER DEFAULT 0;
  DECLARE cur2 CURSOR FOR SELECT c2 FROM udfd1 WHERE c1 <= a ORDER BY c1;
  DECLARE EXIT HANDLER FOR NOT FOUND
  BEGIN
    SIGNAL SQLSTATE '70001' SET MESSAGE_TEXT = 'Exit handler for not found fired';
  END;
  OPEN cur2;
  REPEAT
    FETCH cur2 INTO val;
    SET myint = myint + val;
  UNTIL (myint >= a) END REPEAT;
  CLOSE cur2;
  RETURN myint;
END@
DB20000I The SQL command completed successfully.
The compiled SQL function can be invoked by executing the following SQL statement:
VALUES(exit_func(-1));
The output of this invocation, which demonstrates the successful firing of the exit handler, is as follows:
1
----------
SQL0438N Application raised error or warning with diagnostic text: "Exit handler for not found fired". SQLSTATE=70001
Related reference "CREATE FUNCTION (external scalar) " in SQL Reference, Volume 2
might be required to a parameter or variable in a PL/SQL block. Instead of coding the specific data type into the variable declaration, an anchor data type declaration can be used instead. Related concepts "Anchored data type" in SQL Procedural Languages: Application Enablement and Support "Anchored data type variables" in SQL Procedural Languages: Application Enablement and Support "Restrictions on the anchored data type" in SQL Procedural Languages: Application Enablement and Support "Features of the anchored data type" in SQL Procedural Languages: Application Enablement and Support "Examples: Anchored data type use" in SQL Procedural Languages: Application Enablement and Support Related tasks "Declaring local variables of the anchored data type" in SQL Procedural Languages: Application Enablement and Support Related reference "Anchored types" in SQL Reference, Volume 1
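The following sketch is illustrative only and assumes the sample EMPLOYEE table; the parameter and local variable are anchored to columns, so their data types automatically track the column definitions (@ is used as the statement terminator):
CREATE PROCEDURE capture_salary (IN p_empno ANCHOR DATA TYPE TO employee.empno)
BEGIN
  -- Local variable whose type always matches EMPLOYEE.SALARY
  DECLARE v_salary ANCHOR DATA TYPE TO employee.salary;
  SELECT salary INTO v_salary FROM employee WHERE empno = p_empno;
END@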
Example
The following is an example of the creation of a Boolean variable and the setting of it to the value TRUE:
CREATE VARIABLE gb BOOLEAN; SET gb = TRUE;
The following is an example of a simple SQL function that accepts a Boolean parameter value and also returns a Boolean value:
CREATE FUNCTION fb1(p1 BOOLEAN, p2 INT)
RETURNS BOOLEAN
BEGIN
  IF p1 = TRUE AND p2=1 THEN
    RETURN p1;
  ELSE
    RETURN FALSE;
  END IF;
END
The following is an example of how to set the variable to the output of the function fb1:
SET gb = fb1(TRUE,1);
Related concepts "Boolean data type" in SQL Procedural Languages: Application Enablement and Support Related reference "Boolean values" in SQL Reference, Volume 1
v Uninitialized at creation time
v Assigned a result set definition based on an SQL statement
v Set to another result set definition
v Used as a procedure output parameter
v Specified as parameters to SQL procedures or SQL functions
v Specified as the return value from SQL functions A cursor value can include the specification of parameters used in the associated query. This is known as a parameterized cursor. When a parameterized cursor is opened, argument values are provided for the defined parameters that are used in the query. This allows an OPEN using a cursor variable to provide input values similar to the use of parameter markers in dynamic cursors, or the use of host variables in statically declared cursors. Related concepts "Cursor types" in SQL Procedural Languages: Application Enablement and Support "Overview of cursor data types" in SQL Procedural Languages: Application Enablement and Support "Example: Cursor variable use" in SQL Procedural Languages: Application Enablement and Support Related tasks "Creating cursor data types using the CREATE TYPE statement" in SQL Procedural Languages: Application Enablement and Support Related reference "Cursor values" in SQL Reference, Volume 1 "CREATE TYPE (cursor) " in SQL Reference, Volume 2
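As a rough sketch only (the cursor type, procedure, and table names are assumed, and the exact constructor syntax is documented in the CREATE TYPE (cursor) and cursor values topics), a parameterized cursor might be used as follows, with @ as the statement terminator:
CREATE TYPE emp_cur_type AS CURSOR@

CREATE PROCEDURE first_in_dept (IN p_dept CHAR(3), OUT p_name VARCHAR(15))
BEGIN
  DECLARE c1 emp_cur_type;
  -- The cursor value declares a parameter that is supplied when the cursor is opened
  SET c1 = CURSOR (dept CHAR(3)) FOR SELECT lastname FROM employee
           WHERE workdept = dept ORDER BY lastname;
  OPEN c1(p_dept);
  FETCH c1 INTO p_name;
  CLOSE c1;
END@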
Related concepts "Row types" in SQL Procedural Languages: Application Enablement and Support "Creating row variables" in SQL Procedural Languages: Application Enablement and Support "Referencing row values" in SQL Procedural Languages: Application Enablement and Support "Comparing row variables and row field values" in SQL Procedural Languages: Application Enablement and Support "Passing rowsas routine parameters" in SQL Procedural Languages: Application Enablement and Support "Examples: Row data type use" in SQL Procedural Languages: Application Enablement and Support "Assigning values to row variables" in SQL Procedural Languages: Application Enablement and Support Related reference "CREATE TYPE (row) " in SQL Reference, Volume 2
Data synchronization in Net Search Extender is based on triggers that update a log table whenever the triggers capture information about new, changed, and deleted documents. There is one log table for each text index. Applying the information in the log table to the corresponding text index is referred to as performing an incremental update. If you specify the AUXLOG option, information about new and deleted documents is captured through integrity processing in an auxiliary staging table maintained by Net Search Extender, and information about changed documents is captured through triggers and stored in the base log table. This option is enabled by default for partitioned tables and is disabled for nonpartitioned tables. Related concepts "Extended text-maintained staging infrastructure for incremental update" in Net Search Extender Administration and User's Guide "Incremental update based on integrity processing" in Net Search Extender Administration and User's Guide
FP3: DB2 Text Search and Net Search Extender index coexistence
Starting with DB2 Version 9.7 Fix Pack 3 and later fix packs, DB2 Text Search and Net Search Extender text indexes can coexist on the same table column. A DB2 Text Search text index can now be created on the same column for which a Net Search Extender index already exists. If a DB2 Text Search index is created on a column which already has an active Net Search Extender index, the new Text Search index will be marked as inactive by default; otherwise the status will be active. Administrative commands can be executed on both types of text indexes regardless of whether they are active or inactive. Some administrative operations like DROP INDEX are allowed on invalid indexes whereas others are blocked. Only active indexes are used for text search. You can use the SYSPROC.SYSTS_ALTER stored procedure or the db2ts ALTER command to switch the text index status from ACTIVE to INACTIVE and vice versa. The active DB2 Text Search index will be used if both Text Search and Net Search Extender index are active on the same column. If all indexes are inactive, an error is returned indicating no index was found. By specifying the UNILATERAL option for the activation, a DB2 Text Search index can be set to inactive independent of the status of a Net Search Extender index. The text index type is determined on query level instead of on the predicate level. All active indexes on different columns must be of the same index type in a query, that is, either of type Text Search or Net Search Extender.
If you have a Version 8 or Version 9 copy installed and you want to use Version 9.7 instead, you need to upgrade to Version 9.7. DB2 Version 9.7 is a new release. You cannot apply a fix pack to upgrade from a Version 9 copy to Version 9.7. To learn about upgrade limitations, possible issues, and other details, see Upgrade essentials for DB2 servers in Upgrading to DB2 Version 9.7 and Upgrade essentials for clients in Upgrading to DB2 Version 9.7. Upgrading your DB2 servers and DB2 clients to Version 9.7 might require that you also upgrade your database applications and routines. To help determine whether you must upgrade, see the Upgrade essentials for database applications in Upgrading to DB2 Version 9.7 and Upgrade essentials for routines in Upgrading to DB2 Version 9.7 topics.
Instances and DB2 Administration Server can be created in a shared DB2 copy (Linux and UNIX)
Starting in Version 9.7, you can create instances and a DB2 Administration Server (DAS) in a shared DB2 copy on AIX system workload partitions, on a Network File System shared server, or on Solaris Zones. The following shared systems are supported:
AIX system workload partitions (WPARs)
A DB2 copy is installed in the global environment and shared on system WPARs with read-only permission. For AIX WPARs, fix pack updates are also supported.
Network File System (NFS) shared server
A DB2 copy is installed on an NFS server and shared (normally with read-only permission) on NFS clients.
Solaris Zones
A DB2 copy is installed in the global zone on Solaris and shared on other zones with read-only permission.
Related concepts "DB2 database products in a workload partition ( AIX)" in Installing DB2 Servers Related reference "db2icrt - Create instance command" in Command Reference
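For example (the installation path, instance name, and fenced user shown here are placeholders), an instance can be created from the shared copy with the db2icrt command:
/opt/ibm/db2/V9.7/instance/db2icrt -u db2fenc1 db2inst1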
Related concepts "The response file generator" in Installing DB2 Servers Related reference "db2rspgn - Response file generator " in Command Reference
and UNIX operating systems, this keyword is valid only for non-root upgrades. This keyword replaces the MIGRATE_PRIOR_VERSIONS keyword, which has been deprecated. In Version 9.7 Fix Pack 2 and later fix packs, you can use the ACS response file keyword file to install or uninstall the DB2 Advanced Copy Services (ACS) component in response file installations of type custom. A response file is an ASCII text file that contains setup and configuration information. Unlike using the DB2 Setup wizard to install, deploy, or uninstall products, features, or languages, using a response file lets you perform these tasks without interaction. Ready-to-use sample response files with default entries are included on the DB2 DVD. The sample response files are located in db2/platform/samples, where platform refers to the hardware platform. Related concepts Some response file keywords have been deprecated on page 233 INTERACTIVE response file keyword has been changed on page 188 Related reference "Response file keywords" in Installing DB2 Servers
To use update service, ensure the update service component is installed. This can be done by choosing a typical install (if using a response file installation, set INSTALL_TYPE = TYPICAL) or a custom install with the DB2 Update Service component selected (if using a response file installation, set INSTALL_TYPE = CUSTOM and COMP = DB2_UPDATE_SERVICE). Related concepts "First Steps interface" in Installing DB2 Servers Related tasks "Checking for DB2 updates" in Installing DB2 Servers Related reference "db2updserv - Show product updates" in Command Reference
Related concepts "Main menu entries for DB2 tools ( Linux )" in Getting Started with DB2 Installation and Administration on Linux and Windows Related tasks "Listing DB2 products installed on your system (Linux and UNIX)" in Installing DB2 Servers
FP3: 32-bit IBM Database Add-ins for Visual Studio is now available with the 64-bit DB2 installation images
32-bit IBM Database Add-ins for Visual Studio is now available with the 64-bit DB2 installation images. In DB2 Version 9.7 Fix Pack 3 and later fix packs, you can start the 32-bit IBM Database Add-ins for Visual Studio installation from the following parts of the DB2 installation wizard:
v The Install a Product section of the DB2 Setup Launchpad.
v The Install Additional Products section of the DB2 Setup Launchpad, when the DB2 installation has completed.
IBM Database Add-Ins for Visual Studio provides tools for rapid application development, database schema development, and debugging. Related concepts "DB2 integration in Visual Studio" in Developing ADO.NET and OLE DB Applications
IBM Tivoli System Automation for Multiplatforms (SA MP) support has been improved
Version 9.7 includes a new version of IBM Tivoli System Automation for Multiplatforms (SA MP), and SA MP support has been extended to include Solaris SPARC. Version 9.7 Fix Pack 2 and later fix packs include a later version of SA MP that you can use in environments with SUSE Linux Enterprise Server (SLES) 11 or POWER7 systems. SA MP 3.1 is installed automatically on Solaris SPARC, Linux, and AIX operating systems. On Windows operating systems, SA MP 3.1 is bundled with the install media but it is not integrated with the DB2 installer. Related tasks "Installing and upgrading the SA MP Base Component with the DB2 installer" in Installing DB2 Servers Related reference "Supported software and hardware for IBM Tivoli System Automation for Multiplatforms (SA MP) Base Component" in Installing DB2 Servers
This tool is useful for large-scale DB2 deployments and for embedding DB2 products within an application. The db2iprune command removes files associated with unwanted products, features, and languages based on an input file. The result is a smaller DB2 installation image that can be installed using the regular DB2 installation methods. Related tasks "Reducing the size of your DB2 fix pack installation image" in Installing DB2 Servers Related reference "db2iprune - Reduce installation image size " in Command Reference
Fix pack installations can require less space (Linux and UNIX)
On Linux and UNIX operating systems, you can reduce the amount of space required for a fix pack installation by using the installFixPack command with the new -f nobackup parameter. If you specify the -f nobackup parameter, the installation files are not backed up when the components are updated, which saves space. Related reference "installFixPack - Update installed DB2 database products " in Command Reference
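For example, assuming the DB2 copy being updated is installed in /opt/ibm/db2/V9.7 (a placeholder path), the fix pack can be applied without backing up the replaced files:
./installFixPack -b /opt/ibm/db2/V9.7 -f nobackup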
FP3: License activation process for DB2 Connect Unlimited Edition has been changed (System z)
Starting in Version 9.7 Fix Pack 3, you must activate the license key for DB2 Connect Unlimited Edition for System z on the host server if you use DB2 Connect to connect directly to DB2 on System z.
Details
In previous releases, the license key for DB2 Connect Unlimited Edition for System z was activated at each client computer that required access to a z/OS subsystem. In Version 9.7 Fix Pack 3 and later fix packs, the license key is activated only on
the z/OS subsystem that you want to access. You can activate the license only on a subsystem that is a member of the data sharing group for which the product was purchased. If you want to access System z servers through a DB2 Connect gateway server, the license activation process is the same as in previous releases. For more information, see maintaining license keys.
You can connect to databases with GB18030 data from clients that either use code page 1392 or Unicode code page 1208 as the application code page. Windows operating systems do not have a locale setting that reports GB18030 as the code set. To ensure that a DB2 client treats a Windows workstation as using the GB18030 code set (code page 1392), complete the following tasks: v Install the GB18030 Support Package, which is available from Microsoft. v In the Regional and Language Options, set the Language for non-Unicode programs setting to Chinese PRC. v Set the DB2CODEPAGE registry variable to 1392. File names with characters in the GB18030 code set but not in the GBK code set are not supported in the deprecated Control Center. To open or save these files, use CLP commands or CLI. Related concepts "Derivation of code page values" in Globalization Guide
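For example, the registry variable from the last step can be set with the db2set command:
db2set DB2CODEPAGE=1392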
The data will be collected for all databases that are activated on a test system. The priority of the run could be set to the lowest setting to minimize the performance impact of the data collector on the system, although in most cases that impact is negligible. The resulting compressed file, db2has_hostname_timestamp.zip, is placed into the
default working directory, ~/sqllib/db2hasdir, and is sent, by way of the Enhanced Customer Data Repository (ECuRep), to the DB2 Health Advisor Service. In this example, a report with findings and recommendations will be sent to DBA John Smith at the provided e-mail address. Related tasks "How to use the DB2 Health Advisor Service" in Troubleshooting and Tuning Database Performance Related reference "db2has - DB2 Health Advisor Service data collector " in Command Reference
v Split your own specified diagnostic data directory path according to physical host:
db2 update dbm cfg using diagpath '"pathname $h"'
v Split your own specified diagnostic data directory path according to database partition:
db2 update dbm cfg using diagpath '"pathname $n"'
v Split default diagnostic data directory path according to physical host and database partition:
db2 update dbm cfg using diagpath '"$h$n"'
v Split your own specified diagnostic data directory path according to physical host and database partition:
db2 update dbm cfg using diagpath '"pathname $h$n"'
Merging separate db2diag log files can, at times, make analysis and troubleshooting easier. In that case, you can use the db2diag -merge command.
Related concepts "Diagnostic data directory path" in Troubleshooting and Tuning Database Performance Related tasks "Splitting a diagnostic data directory path by database partition server, database partition, or both" in Troubleshooting and Tuning Database Performance Related reference "diagpath - Diagnostic data directory path " in Database Administration Concepts and Configuration Reference "db2diag - db2diag logs analysis tool " in Command Reference
manager is stopped and started, and as long as no routine is dropped, the query result will reflect the routine execution history collected at the point when the db2pd command was run. Related reference "db2pd - Monitor and troubleshoot DB2 database " in Command Reference
Administration notification and diagnostic logs occupy specified amount of disk space
In Version 9.7, you have the ability to configure how much combined disk space is occupied by both the administration notification and diagnostic log files by specifying the total size with the new diagsize database manager configuration parameter. With this improvement, these log files will only grow to the extent you specify without uncontrollable growth potentially consuming all the available free disk space. The value of the new diagsize database manager configuration parameter decides what form of log files will be adopted. If the value is 0 (default), a single administration notification log file (instance_name.nfy) and diagnostic log file (db2diag.log) will be adopted, with the size of each log file limited only by the available free disk space. This was the growth behavior of these log files in previous releases. However, if the value is not 0, a series of 10 rotating administration notification log files and 10 rotating diagnostic log files will be adopted. This nonzero value also specifies the total size of all rotating administration notification log files and all rotating diagnostic log files combined, thereby limiting their total size growth. Note: Starting with DB2 Version 9.7 Fix Pack 1, if the diagsize configuration parameter is set to a non-zero value and the diagpath configuration parameter is set to split the diagnostic data into separate directories, then the non-zero value of the diagsize configuration parameter specifies the total size of the combination of all rotating administration notification log files and all rotating diagnostic log files contained within a given split diagnostic data directory. For example, if a system with 4 database partitions has diagsize set to 1 GB and diagpath set to "$n" (split diagnostic data per database partition), the maximum total size of the combined notification and diagnostic logs can reach 4 GB (4 x 1 GB). The amount of the total disk space allocated to the rotating log files, as a percentage of the value specified with the diagsize configuration parameter, differs according to platform in the following ways: UNIX and Linux v 90% to rotating diagnostic log files v 10% to rotating administration notification log files Windows v 100% to rotating diagnostic log files, due to administration notification on the Windows platform using the Event Log service The instance must be restarted for the new value of the diagsize configuration parameter to take effect.
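As an illustrative sketch (the value shown is arbitrary; diagsize is expressed in megabytes), the parameter is set with the UPDATE DBM CFG command and takes effect after the instance is restarted:
db2 update dbm cfg using diagsize 512
db2stop
db2start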
Related concepts "Administration notification log" in Data Recovery and High Availability Guide and Reference "DB2 diagnostic (db2diag) log files" in Troubleshooting and Tuning Database Performance FP1: Diagnostic data can be stored in separate directories on page 158 Related reference "diagsize - Rotating diagnostic and administration notification logs configuration parameter" in Database Administration Concepts and Configuration Reference
Security enhancements
v SSL client support expanded and configuration simplified on page 87 v AES encryption of user ID and password enhances security on page 86 v FP1: Transparent LDAP authentication and group lookup is supported (Linux and UNIX) on page 89 v FP1: 32-bit GSKit libraries are included in the 64-bit DB2 product installation on page 91
JDBC and SQLJ support has been enhanced on page 115 IBM Data Server Driver Package has been enhanced on page 124 IBM Data Server Provider for .NET has been enhanced on page 131 Trusted context support has been extended on page 125
v Sysplex support is extended to IBM data server clients and non-Java data server drivers on page 125 v Call Level Interface (CLI) functionality has been enhanced on page 126
Licensing changes
v FP3: License activation process for DB2 Connect Unlimited Edition has been changed (System z) on page 153
Administration changes
v Some database manager configuration parameters have been changed on page 172 v Some registry and environment variables have changed on page 175 v DESCRIBE command lists information about additional index types on page 182 v Registry files have been removed from the DB2 installation path on page 188
Security changes
v System administrator (SYSADM) authority scope has changed on page 190 v Security administrator (SECADM) abilities have been extended on page 192 v Database administrator (DBADM) authority scope has changed on page 193
v SSLconfig.ini and SSLClientconfig.ini files replaced with new database manager configuration parameters on page 194
Deprecated functionality
v db2iupdt command -s option has been deprecated (Linux and UNIX) on page 231 v Control Center tools and DB2 administration server (DAS) have been deprecated on page 222 v Health monitor has been deprecated on page 225 v Some response file keywords have been deprecated on page 233 v Instance and database migration commands and APIs have been deprecated on page 232 v Worksheet Format (WSF) for Import and Export utilities has been deprecated on page 227
Discontinued functionality
v db2secv82 command has been discontinued on page 241 v db2ilist command options -a and -p have been discontinued on page 243 v Netscape browser support has been discontinued on page 239 v Some operating systems are no longer supported on page 237
Fix Pack 2
Fix Pack 2 contains the functionality of Fix Pack 1 and includes the following enhancements: v Support for IBM Rational Developer for zSeries v7. For more information see Support for database application development in COBOL in Database Administration Concepts and Configuration Reference. v The db2pd command has additional functionality to help control which engine dispatchable units (EDUs) are included in the output, the ability to define a time interval in some cases, and improved stack output on Linux operating systems. For more information, see db2pd - Monitor and troubleshoot DB2 database in Command Reference. v The IBM Data Server Provider for .NET has been enhanced in several ways that could help with application performance, data server compatibility, and simplifying application development. For more information, see IBM Data Server Provider for .NET has been enhanced on page 131.
Fix Pack 1
Fix Pack 1 includes the following enhancements: v The IBM Data Server Provider for .NET includes multiple enhancements. For more information, see IBM Data Server Provider for .NET has been enhanced on page 131. v The diagpath database manager configuration parameter has new values, which allow you to store DB2 diagnostic data in separate directories named according to the physical host, database partition, or both. The db2diag command also has a new -merge parameter to merge multiple db2diag log files. For more information, see FP1: Diagnostic data can be stored in separate directories on page 158. v Transparent LDAP is supported on the Linux, HP-UX, and Solaris operating systems. For more information, see FP1: Transparent LDAP authentication and group lookup is supported (Linux and UNIX) on page 89. v 32-bit GSKit libraries are now installed automatically. For more information, see FP1: 32-bit GSKit libraries are included in the 64-bit DB2 product installation on page 91. v Additional support is provided for the GB18030 code set. For more information, see GB18030 code set support has been extended on page 155.
v DB2 database products installed on HP-UX operating systems now support long host names. For more information, see Installation requirements for DB2 Connect server products (HP-UX) in SC27-2433-02.
Details
When you create an index for a data partitioned table, by default, the index is created as a partitioned index, unless the following situations apply: v You specify UNIQUE on the CREATE INDEX statement and the index key does not include all the table-partitioning key columns. v You create an index over spatial data. In the previous situations, a nonpartitioned index is created by default. Starting in DB2 Version 9.7 Fix Pack 1, you can create indexes over XML data on a partitioned table as either partitioned or nonpartitioned. The default is a partitioned index. With DB2 V9.7 and earlier, for tables created that use multidimensional clustering (MDC) and data partitioning, the system-created MDC block indexes are always created as nonpartitioned indexes. Starting in DB2 V9.7 Fix Pack 1, for tables created that use both MDC and data partitioning, the system-created MDC block indexes are always created as partitioned indexes. DB2 V9.7 Fix Pack 1 and later releases support partitioned MDC tables with nonpartitioned block indexes and partitioned MDC tables with partitioned block indexes.
User response
If you do not want to create partitioned indexes on partitioned tables, specify the NOT PARTITIONED clause on the CREATE INDEX statement. If you have a data-partitioned MDC table that uses nonpartitioned block indexes, and you want to use partitioned block indexes, you must create a new data-partitioned MDC table that uses partitioned block indexes, and move the data in the existing table to the new table. Data can be moved using either an online method or an offline method. See the related links for information about converting existing indexes and about block indexes.
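For illustration, assuming a range-partitioned table named SALES (a hypothetical table), the two kinds of index are requested as follows:
-- Created as a partitioned index by default
CREATE INDEX sales_region_ix ON sales (region);

-- Explicitly request a nonpartitioned index
CREATE INDEX sales_order_ix ON sales (order_id) NOT PARTITIONED;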
Related concepts "Table partitioning and multidimensional clustering tables" in Partitioning and Clustering Guide "Block indexes for MDC tables" in Partitioning and Clustering Guide "Indexes on partitioned tables" in Database Administration Concepts and Configuration Reference Related tasks "Converting existing indexes to partitioned indexes" in Partitioning and Clustering Guide Related reference "CREATE INDEX " in pureXML Guide
Table 11. Summary of new Version 9.7 database manager configuration parameters (continued)
diagsize
Controls the maximum size of diagnostic logs and administration notify logs.
ssl_cipherspecs
Supported cipher specifications at the server. Specifies the cipher suites that the server allows for incoming connection requests when using SSL protocol.
ssl_clnt_keydb
SSL key file path for outbound SSL connections at the client. Specifies the fully qualified file path of the key file to be used for SSL connection at the client-side.
ssl_clnt_stash
SSL stash file path for outbound SSL connections at the client. Specifies the fully qualified file path of the stash file to be used for SSL connections at the client-side.
ssl_svr_keydb
SSL key file path for incoming SSL connections at the server. Specifies a fully qualified file path of the key file to be used for SSL setup at server-side.
ssl_svr_label
Label in the key file for incoming SSL connections at the server. Specifies a label of the personal certificate of the server in the key database.
ssl_svr_stash
SSL stash file path for incoming SSL connections at the server. Specifies a fully qualified file path of the stash file to be used for SSL setup at server-side.
ssl_svcename
SSL service name. Specifies the name of the port that a database server uses to await communications from remote client nodes using SSL protocol.
ssl_versions
Supported SSL versions at the server. Specifies SSL and TLS versions that the server supports for incoming connection requests.
diagpath
The values '"$h"', '"pathname $h"', '"$n"', '"pathname $n"', '"$h$n"', and '"pathname $h$n"' are available in DB2 Version 9.7 Fix Pack 1 and later fix packs. You can specify to store DB2 diagnostic data in separate directories named according to the physical host, database partition, or both by setting the enhanced diagpath database manager configuration parameter.
Related concepts Some database configuration parameters have been changed on page 185 Related reference "RESET DATABASE CONFIGURATION " in Command Reference "Configuration parameters summary" in Database Administration Concepts and Configuration Reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
FP1: Collection interval time for workload management statistics has changed
Starting with Version 9.7 Fix Pack 1, the workload management statistics collection interval is synchronized relative to a fixed start time (a day of the week and an hour of the day), rather than relative to when the DB2 database was activated.
Details
The fixed start time is Sunday at 00:00:00 (hh:mm:ss). If, for example, the catalog partition activation time is 22:30:00 on a Saturday night and the collection interval is set to 60 minutes, the first collection occurs 30 minutes later (conforming to the top-of-the-hour start time for the hour collection interval). However, if the catalog partition activation time is 22:30:00 on a Saturday night and the collection interval is set to 1 week (10,080 minutes), the first collection occurs 1 hour 30 minutes later (conforming to the day-of-the-week start time for the week collection interval). Note: The collection interval is not compensated due to changes to the fixed start time that result from daylight savings time transitions. The interval, during which a transition from standard time to daylight savings time or vice versa occurs, could be shortened or lengthened compared to the setting of the wlm_collect_int database configuration parameter.
Resolution
No user response is required. Related reference "wlm_collect_int - Workload management collection interval configuration parameter" in Workload Manager Guide and Reference
NO FILE SYSTEM CACHING for table space containers is the default for General Parallel File System (GPFS)
Starting in Version 9.7, when the underlying file system is GPFS, NO FILE SYSTEM CACHING is the default behavior for table space definition on a subset of platforms if you do not specify the FILE SYSTEM CACHING option on the CREATE TABLESPACE statement and on some of the table space definition parameters of the CREATE DATABASE command.
Details
For the CREATE DATABASE command, this behavior applies to the CATALOG, USER, non-SMS TEMPORARY table space definition parameters. In previous releases, the default behavior was FILE SYSTEM CACHING on GPFS for all supported platforms. In Version 9.7, the default has been changed to NO FILE SYSTEM CACHING on a subset of AIX and Linux platforms. The new
default specifies that I/O operations bypass the file system caching automatically.
User response
For details on the platforms that support NO FILE SYSTEM CACHING, see the File system caching configurations topic. To revert to buffered I/O behavior, specify the FILE SYSTEM CACHING attribute in the CREATE TABLESPACE statement, the ALTER TABLESPACE statement, or the CREATE DATABASE command. Related concepts "File system caching configurations" in Database Administration Concepts and Configuration Reference Related reference "ALTER TABLESPACE " in SQL Reference, Volume 2 "CREATE TABLESPACE " in SQL Reference, Volume 2
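For example (the table space name is a placeholder), buffered I/O can be requested explicitly when a table space is created, or restored on an existing table space:
CREATE TABLESPACE app_data MANAGED BY AUTOMATIC STORAGE FILE SYSTEM CACHING;

ALTER TABLESPACE app_data FILE SYSTEM CACHING;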
New defaults
Table 13. Registry variables with new default values
DB2_LOGGER_NON_BUFFERED_IO
Version 9.5 default setting: OFF. Version 9.7 default setting: Starting with Version 9.7, the default value for this variable is AUTOMATIC, which means that log files in the active log path might be accessed using non-buffered I/O. The database manager determines which log files benefit from using non-buffered I/O. In Version 9.5 Fix Pack 1 or later, the default was OFF and log files were accessed using only buffered I/O.
DB2_SMS_TRUNC_TMPTABLE_THRESH
Version 9.5 default setting: 0. Version 9.7 default setting: Starting in Version 9.7 Fix Pack 2, the default for this variable is -2, which means that there will not be any unnecessary file system access for any spilled SMS temporary objects whose size is less than or equal to 1 extent * number of containers. Temporary objects that are larger than this are truncated to 0 extents.
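If the earlier behavior is preferred, the variable can be set explicitly, for example:
db2set DB2_LOGGER_NON_BUFFERED_IO=OFF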
Table 14. Registry variables with new values
DB2_EVMON_STMT_FILTER
This variable has new options that allow users to determine which rules to apply to which event monitors. Each option represents an integer value mapping to a specific SQL operation. These new options are also available starting in DB2 Version 9.5 Fix Pack 1.
DB2_FCM_SETTINGS
Starting in DB2 Version 9.7 Fix Pack 3, this variable has a new setting, FCM_CFG_BASE_AS_FLOOR, which allows users to set a floor value for the fcm_num_buffers and fcm_num_channels database manager configuration parameters so that auto-configuration will not tune below their set value.
DB2_SQLROUTINE_PREPOPTS
This variable has two new options:
v APREUSE, which indicates whether the query compiler will attempt to reuse access plans.
v CONCURRENTACCESSRESOLUTION, which specifies the concurrent access resolution to use for statements in the package.
DB2_WORKLOAD
This variable has new values: CM, COGNOS_CS, FILENET_CM, MAXIMO, MDM, WAS, WC and WP. These settings allow you to configure a set of registry variables in your database for applications provided by IBM Content Manager, Cognos Content Server, Filenet Content Manager, Maximo, Master Data Management, WebSphere Application Server, IBM WebSphere Commerce and WebSphere Portal. The CM and WC values are also available starting in DB2 Version 9.5 Fix Pack 3 and Fix Pack 4, respectively. The COGNOS_CS, FILENET_CM, MAXIMO, MDM, WAS, and WP values are also available starting in DB2 Version 9.5 Fix Pack 5.
Changed behaviors
Table 15. Registry variables with changed behaviors
DB2_EVALUNCOMMITTED and DB2_SKIPDELETED
For statements operating under cursor stability isolation level with currently committed behavior enabled using the cur_commit database configuration parameter, these registry variables are in effect only when currently committed cannot be applied to a scan. Otherwise, the evaluation of predicates is performed on data retrieved by currently committed scans. If currently committed behavior was enabled using the BIND command or the PREPARE statement, these registry variables have no effect. For more information, see the cur_commit configuration parameter.
DB2_SERVER_ENCALG
The DB2_SERVER_ENCALG registry variable is deprecated. If the alternate_auth_enc database manager configuration parameter is set, its value has precedence over the DB2_SERVER_ENCALG value.
DB2_SKIPINSERTED
For statements operating under cursor stability isolation level with currently committed behavior enabled, this registry variable has no effect. For more information, see the cur_commit configuration parameter.
DB2_WORKLOAD
Starting in Version 9.7 Fix Pack 2, when this aggregate registry variable is set to SAP, the DB2_EXTENDED_OPTIMIZATION registry variable will be set to IXOR to improve performance for queries generated by SAP applications.
New variables
These environment and registry variables are new in Version 9.7:
Table 16. Environment and Registry variables added for Version 9.7

DB2_ATS_ENABLE
   This registry variable enables or disables the administrative task scheduler.

DB2_DDL_SOFT_INVAL
   This registry variable enables soft invalidation of applicable database objects when they are dropped or altered, meaning that active access to an object that is being invalidated can continue.
Table 16. Environment and Registry variables added for Version 9.7 (continued)

DB2_DEFERRED_PREPARE_SEMANTICS
   This registry variable allows dynamic statements containing untyped parameter markers to use deferred prepare semantics. By default, this variable is set to YES so that any untyped parameter markers derive their data types and length attributes based on the input descriptor from subsequent OPEN or EXECUTE statements. In previous releases, compilation of such dynamic statements would have failed.

DB2_FCM_SETTINGS
   On Linux operating systems, you can set this registry variable with the FCM_MAXIMIZE_SET_SIZE token to preallocate a default 2 GB of space for the fast communication manager (FCM) buffer. The token must have a value of either YES or TRUE to enable this feature.

DB2_FORCE_OFFLINE_ADD_PARTITION
   This environment variable allows you to specify whether add partition operations are to be performed offline or online. The default setting of FALSE indicates that DB2 partitions can be added without taking the database offline.

DB2_HADR_ROS
   This variable enables the HADR reads on standby feature. When DB2_HADR_ROS is enabled on the HADR standby database, the standby accepts client connections and allows read-only queries to run on it. This registry variable is available in Version 9.7 Fix Pack 1 and later fix packs.

DB2_ITP_LEVEL
   This variable enables the parallel reading of data in a database-managed space (DMS) or automatic storage table spaces during certain types of backups. Intra-tablespace parallelism can reduce the time needed for online and offline backup operations. This variable is available in Version 9.7 Fix Pack 2 and later fix packs.

DB2_LIMIT_FENCED_GROUP
   On Windows operating systems, if you enable extended security, this registry variable can limit the operating system permissions of the fenced routines that run under the fenced mode process (db2fmp).
Table 16. Environment and Registry variables added for Version 9.7 (continued)

DB2_NCHAR_SUPPORT
   This variable allows the use of NCHAR, NVARCHAR, and NCLOB data types in Unicode databases. When it is set to ON, various national character related functions such as NCHAR() and TO_NCHAR() are also supported.

DB2_PMAP_COMPATIBILITY
   This variable allows users to continue using the sqlugtpi and sqlugrpn APIs to return, respectively, the distribution information for a table, and the distribution map offset and database partition for a row. When this variable is set to OFF, the distribution map size for new or upgraded databases is increased to 32 768 entries and you have to use the new db2GetDistMap and db2GetRowPartNum APIs.

DB2RESILIENCE
   This environment variable controls whether DB2 data page read errors are tolerated, and activates extended trap recovery. It is set to ON by default. To revert to the behavior of previous releases and force the database manager to shut down the instance, set the registry variable to OFF.

DB2_STANDBY_ISO
   This variable coerces the isolation level requested by applications and statements running on an active HADR standby database to Uncommitted Read (UR). When DB2_STANDBY_ISO is set to UR, isolation levels higher than UR are coerced to UR with no warning returned. This registry variable is available in Version 9.7 Fix Pack 1 and later fix packs.

DB2_USE_FAST_PREALLOCATION
   This variable allows the Veritas fast allocation file system feature or AIX JFS2 to reserve table space, and speed up the process of creating or altering large table spaces and database restore operations. This variable is available with DB2 Version 9.7 Fix Pack 1.
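For illustration only (the variable values shown are placeholders, not recommendations): new registry variables such as these are typically set with the db2set command, and many of them take effect only after the instance is restarted:

   db2set DB2_ATS_ENABLE=YES
   db2set DB2_DDL_SOFT_INVAL=ON
   db2stop
   db2start
   db2set -all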
Related concepts "HADR reads on standby feature" in Data Recovery and High Availability Guide and Reference Some registry and environment variables have been deprecated on page 231 Some registry and environment variables have been discontinued on page 243
Details
With the new behavior, the file system where the primary and secondary recovery logs reside no longer needs to be mounted with options that disable file system cache buffering. Instead, the database manager can open individual log files with options to disable the file system cache buffering. In previous releases, the default behavior for these log files was to use buffered I/O. You can revert to the behavior of the previous releases by setting the DB2_LOGGER_NON_BUFFERED_IO registry variable to OFF. In certain situations, the new behavior might degrade log disk I/O response times, resulting in longer commit times. There might also be a performance impact for long rollback operations.
User response
You can address performance degradation for longer commit times by ensuring that the number of physical disk spindles for the logger file system matches the desired performance levels. In addition, you can improve performance by enabling the storage controller write caching mechanisms, provided that these mechanisms meet system durability requirements so that your system can recover committed transaction updates if either the system or the storage media fails. You can address rollback performance issues by tuning the logbufsz database configuration parameter to ensure that the log data required for rollback operations is found in the log buffer instead of having to be read from disk.
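As a hedged sketch (the database name and buffer size are placeholders), reverting to the previous buffered-I/O behavior and enlarging the log buffer might look like this:

   db2set DB2_LOGGER_NON_BUFFERED_IO=OFF
   db2 UPDATE DB CFG FOR MYDB USING LOGBUFSZ 2048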
Details
Starting in Version 9.7, the mem_percent parameter of the AUTOCONFIGURE command indicates the percentage of the instance_memory database manager configuration parameter instead of the total physical memory on the computer.
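A minimal sketch (the database name and percentage are placeholders): to size the configuration against 60% of the instance memory rather than 60% of the physical memory on the computer, you might run:

   db2 CONNECT TO MYDB
   db2 AUTOCONFIGURE USING MEM_PERCENT 60 APPLY DB AND DBM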
User response
If you do not specify a value for mem_percent, the percentage is calculated based on memory usage in the instance and the system up to a maximum of 25% of the instance memory. Related reference "AUTOCONFIGURE " in Command Reference "AUTOCONFIGURE command using the ADMIN_CMD procedure" in Administrative Routines and Views
Details
When you create a CONCURRENTDBCOORDACTIVITIES threshold, irreconcilable queue-based contention scenarios can occur. Irreconcilable queue-based contention scenarios occur when a concurrency limit has been reached and all applications that have issued the activities that are holding the tickets then try to issue one or more additional activities. Those additional activities get queued because there are no more tickets available, thereby stopping the applications from proceeding. For example, suppose that the concurrency threshold permits only one activity to be processed at a time and a single application opens one cursor and then tries to issue another activity of any type. The cursor opened by the application acquires the single ticket. The second activity gets queued because there are no more available tickets, thus putting the application into a deadlock. The chance of creating irreconcilable queue-based contention scenarios has been reduced by changes to the behavior of the CONCURRENTDBCOORDACTIVITIES threshold, which now controls fewer types of activities than before (see the sketch after this list):
v CALL statements are no longer controlled by the threshold, but all nested child activities started within the called routine continue to be under threshold control. Note that both anonymous blocks and autonomous routines are classified as CALL statements.
v User-defined functions (UDFs) continue to be under threshold control, but child activities nested within UDFs are no longer controlled. If an autonomous routine is called from within a user-defined function, neither the autonomous routine nor any child activities of the autonomous routine are under threshold control.
v Trigger actions that invoke CALL statements and the child activities of these CALL statements are no longer under threshold control. Note that the INSERT, UPDATE, or DELETE statements that can cause a trigger to activate continue to be under threshold control.
The behavior of the CONCURRENTDBCOORDACTIVITIES threshold for all other types of activities remains unchanged.
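As an illustrative sketch only (the threshold name and limits are placeholders, and the statement is not taken from this manual), a database-level concurrency threshold of this kind might be defined as follows:

   CREATE THRESHOLD LIMIT_CONCURRENT_WORK
     FOR DATABASE ACTIVITIES
     ENFORCEMENT DATABASE
     WHEN CONCURRENTDBCOORDACTIVITIES > 10
     AND QUEUEDACTIVITIES > 50
     STOP EXECUTION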
User response
Before using CONCURRENTDBCOORDACTIVITIES thresholds, be sure to become familiar with the effects that they can have on the database system. For more
information, see the "CONCURRENTDBCOORDACTIVITIES threshold" topic. Related reference "CONCURRENTDBCOORDACTIVITIES threshold" in Workload Manager Guide and Reference
Details
If you specify the INDEXES FOR TABLE parameter with the SHOW DETAIL clause, more information is listed for all the types of indexes.
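For illustration (the schema and table names are placeholders), the additional detail appears when you run a command such as:

   db2 DESCRIBE INDEXES FOR TABLE MYSCHEMA.SALES SHOW DETAIL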
User response
Because the index information displayed by the DESCRIBE command with the INDEXES FOR TABLE parameter contains new columns, you must change tools that depend on the output to parse the new text. Related reference "DESCRIBE " in Command Reference
Details
When you issue the ALTER TABLE statement with the DETACH PARTITION clause, the data partition that you are detaching is converted into a stand-alone table in the following two-phase process:
1. The ALTER TABLE operation logically detaches the data partition from the partitioned table. The data partition name is changed to a system-generated name of the form SQLyymmddhhmmssxxx so that a subsequent attach can reuse the detached partition name immediately. In SYSCAT.DATAPARTITIONS, the status of the partition is set to L (logically detached) if there are no detached dependent tables or D if there are detached dependent tables.
2. An asynchronous partition detach task converts the logically detached partition into a stand-alone table.
The target table is unavailable until the asynchronous partition detach task completes the detach. For example, a DROP statement that drops the target table after a detach must wait until the asynchronous partition detach task completes the detach. In Version 9.7 and earlier releases, the target table of an ALTER TABLE statement with the DETACH PARTITION clause became available immediately after the transaction issuing the ALTER TABLE statement committed if there were no detached dependent tables that needed to be incrementally maintained with respect to the detached data partition. If there were detached dependent tables, the target table became available after the SET INTEGRITY statement was run on all detached dependent tables.
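A minimal sketch, with placeholder table and partition names, of detaching a partition and then checking its status in the catalog:

   ALTER TABLE SALES DETACH PARTITION PART2009 INTO TABLE SALES_2009
   SELECT DATAPARTITIONNAME, STATUS
     FROM SYSCAT.DATAPARTITIONS
     WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'SALES'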
User response
Because the data partition name is changed to a system-generated name during the first phase of the detach process, you might need to modify applications that query the catalog views for detached data partitions and use the data partition names. Related concepts "Asynchronous partition detach for data partitioned tables" in Partitioning and Clustering Guide "Data partition detach phases" in Partitioning and Clustering Guide Related reference "DROP " in SQL Reference, Volume 2
FP1: XML schema maxOccurs attribute values greater than 5000 are parsed differently
Starting in Version 9.7 Fix Pack 1, if you specify a value greater than 5000 for the maxOccurs attribute for an element in an XML schema definition, the XML parser treats the value as if you specified "unbounded".
Details
A maxOccurs attribute value of unbounded indicates that the element can appear an unlimited number of times. In that case, starting in Fix Pack 1, an XML document might pass validation when you use the XMLVALIDATE function even if the number of occurrences of an element exceeds the maximum according to the XML schema that you used to validate the document.
User Response
If you use an XML schema that defines an element with a maxOccurs attribute value greater than 5000, and you want to reject XML documents in which the number of occurrences of that element exceeds the maxOccurs attribute value, you can define a trigger or procedure to check for that condition. In the trigger or procedure, use an XPath expression to count the number of occurrences of the element and return an error if the number of elements exceeds the maxOccurs attribute value. For example, the following trigger ensures that a document never has more than 6500 phone elements:
CREATE TRIGGER CUST_INSERT
  AFTER INSERT ON CUSTOMER
  REFERENCING NEW AS NEWROW
  FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  SELECT CASE WHEN X <= 6500 THEN 'OK - Do Nothing'
              ELSE RAISE_ERROR('75000', 'TooManyPhones') END
  FROM (SELECT XMLCAST(XMLQUERY('$INFO/customerinfo/count(phone)') AS INTEGER) AS X
        FROM CUSTOMER
        WHERE CUSTOMER.CID = NEWROW.CID);
END
Related reference "XMLVALIDATE " in SQL Reference, Volume 1 "Restrictions on the pureXML feature" in V9.7 FP2 Transportable schemas
Licensing control for DB2 Express, DB2 Workgroup Edition, and workload management has been changed
In Version 9.7, DB2 Express and DB2 Workgroup Server Edition use a hard-stop license enforcement policy with regard to CPU and memory usage. In addition, workload management (WLM) functionality provided through the DB2 Performance Optimization Feature is available only if the DB2 Performance Optimization Feature license key has been registered.
Details
The DB2 database manager checks for additional licensing compliance in the following scenarios:
v If an attempt is made to use the DB2 WLM functionality provided in the DB2 Performance Optimization Feature without the DB2 Performance Optimization Feature license key being registered, the SQL8029N message is returned.
v The CPU and memory resources available to the DB2 Express and Workgroup Server Edition products are limited to the capacity specified by the license. You can use DB2 Express and DB2 Workgroup Server on a system that has more capacity, but will only be able to exploit the capacity specified by the license.
User response
v To use WLM, purchase the license key for the DB2 Performance Optimization Feature from your IBM representative or authorized dealer. You will then need to update your license using the License Center or the db2licm command line utility.
v To take full advantage of your server's memory and CPU capacity, contact your IBM representative or authorized dealer to obtain a DB2 product with a higher licensed limit.
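As a hedged example (the license file name is a placeholder), registering a license certificate with db2licm and then verifying what is installed might look like this:

   db2licm -a db2performance_optimization.lic
   db2licm -l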
Details
License enforcement policies are configured for your DB2 database products using the db2licm command with the -e option. If you choose to use a hard-stop license enforcement policy for your DB2 database product, the database manager will check for licensing compliance when users attempt to use row level compression and index compression. If the appropriate licenses have not been applied, a SQL8029N message will be returned and the attempted action will not be allowed.
User response
Apply the appropriate licenses for row level compression and index compression.
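For illustration only (the product identifier shown is an assumption; use the identifier reported by db2licm -l for your installation), a hard-stop enforcement policy is configured with the -e option:

   db2licm -e db2ese HARD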
Table 17. New Version 9.7 database configuration parameters

blocknonlogged (Block non-logged activity)
   This configuration parameter prevents the creation of tables that allow non-logged activity.
Table 17. New Version 9.7 database configuration parameters (continued)

cur_commit (Currently committed)
   This configuration parameter controls the behavior of cursor stability (CS) scans.

date_compat (Date compatibility)
   This parameter indicates whether the DATE compatibility semantics associated with the TIMESTAMP(0) data type are applied to the connected database.

dec_to_char_fmt (Decimal to character function configuration parameter)
   This configuration parameter controls the result of the CHAR scalar function and the CAST specification for converting decimal to character values.

mon_act_metrics (Monitoring activity metrics), mon_deadlock (Monitoring deadlock), mon_locktimeout (Monitoring lock timeout), mon_lockwait (Monitoring lock wait), mon_lw_thresh (Monitoring lock wait threshold), mon_lck_msg_lvl (Monitoring lock event notification messages), mon_obj_metrics (Monitoring object metrics), mon_pkglist_sz (Monitoring package list size), mon_req_metrics (Monitoring request metrics), mon_uow_data (Monitoring unit of work events)
   These parameters control the collection of metrics and event monitor data at the database level, including the new lock event monitor and the lock-related message notification level. During database upgrade, these parameters are set to NONE, except for mon_deadlock, which is set to WITHOUT_HIST; mon_lw_thresh, which is set to 5 000 000; mon_lck_msg_lvl, which is set to 1; and mon_pkglist_sz, which is set to 32, so that there is no change in behavior from previous releases.

stmt_conc (Statement concentrator)
   This configuration parameter enables statement concentration for dynamic statements. The setting in the database configuration is used only when the client does not explicitly enable or disable the statement concentrator.
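For illustration (the database name and values are placeholders), these new parameters are updated like any other database configuration parameter:

   db2 UPDATE DB CFG FOR MYDB USING CUR_COMMIT ON
   db2 UPDATE DB CFG FOR MYDB USING MON_REQ_METRICS BASE MON_DEADLOCK WITHOUT_HIST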
The following database configuration parameters have changed behaviors or have new ranges in Version 9.7.
Table 19. Database configuration parameters with changed behaviors or new ranges

applheapsz (Application heap size)
   Due to optimization enhancements to match MQTs, the requirement for application heap has increased. If this parameter is set to AUTOMATIC, this setting accounts for the new requirements. If you cannot set this parameter to AUTOMATIC or increase its value, reduce the number of MQTs considered for a given query by using optimization profiles. For more information, see Anatomy of an optimization profile in Troubleshooting and Tuning Database Performance.

database_memory (Database shared memory size)
   The self tuning memory manager (STMM) has an improved ability to adjust the database shared memory usage on the Solaris Operating Environment. If database_memory is set to AUTOMATIC on a Solaris operating system, the database manager uses pageable memory for the database shared memory. As a result, the DB2 database system uses smaller memory pages by default and you might notice some performance degradation.

dbheap (Database heap)
   The database manager can now determine when to apply row compression to temporary tables that meet certain criteria to improve query performance. Memory allocated for the database heap is used to create the compression dictionary and is released once the dictionary is created. If you are using row compression and temporary tables eligible for compression, ensure that you have enough space to create the dictionary by setting the dbheap parameter to AUTOMATIC. For details about temporary table compression, see Table compression in Database Administration Concepts and Configuration Reference.

locklist (Maximum storage for lock list)
   The limit for this parameter is now 134,217,728 pages (4 KB).

logbufsz (Log buffer size), logfilsiz (Size of log files), logprimary (Number of primary log files)
   A log sequence number (LSN) now uses 8 bytes. In previous releases, the LSN was 6 bytes in length. You might need to increase the values of these parameters according to your database logging activity. For more information, see Maximum limit of log sequence numbers has increased on page 199. The maximum limit for logbufsz has been changed to 131 070. The maximum limit for logfilsiz has been changed to 1 048 572.

pckcachesz (Package cache size)
   To support XML Explain, package cache memory requirements have increased from 10 to 25 percent. The impact from the database upgrade should be minimal because of the small size of this cache. By setting this parameter to AUTOMATIC, the new requirements are taken into account. For upgraded databases, the INLINE LENGTH default value is the maximum size of the LOB descriptor. LOB data is inlined when the length of the LOB data plus the overhead does not exceed the INLINE LENGTH value. Therefore, if the LOB data length plus the overhead is less than the LOB descriptor size for the LOB column, the LOB data is implicitly inlined in a table row after the database upgrade. Storing LOB data as inlined might require that you increase the pckcachesz database configuration parameter. By setting this parameter to AUTOMATIC, the new requirements are taken into account. The maximum limit for pckcachesz on 64-bit operating systems has been changed to 2 147 483 646.
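A hedged sketch (the database name is a placeholder) of letting the database manager size these areas automatically, as the table suggests:

   db2 UPDATE DB CFG FOR MYDB USING APPLHEAPSZ AUTOMATIC DBHEAP AUTOMATIC PCKCACHESZ AUTOMATIC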
Related concepts Some database manager configuration parameters have been changed on page 172 Related reference "RESET DATABASE CONFIGURATION " in Command Reference "Configuration parameters summary" in Database Administration Concepts and Configuration Reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
Details
The INTERACTIVE keyword applies to Linux and UNIX operating systems only. In previous releases, if the INTERACTIVE keyword was set to YES, the user was prompted for either the location of the installation package or the location of the national language package. In Version 9.7, the INTERACTIVE keyword prompts for the location of the national language package only. The installation images are now available on one DVD; therefore, this keyword no longer requires a prompt for the location of the installation package. Prompting occurs if the INTERACTIVE keyword is set to YES and a national language DVD is required.
User response
You do not have to change your applications or scripts. Related concepts New response file keywords have been added on page 149 Some response file keywords have been deprecated on page 233 Related reference "Response file keywords" in Installing DB2 Servers
Registry files have been removed from the DB2 installation path
The location of instance information and global registry information has been changed. Starting with DB2 Version 9.7, the profiles.reg and default.env files are removed from the DB2 installation path.
Details
In DB2 Version 9.5, the DB2 Instance Profile Registry was located in the profiles.reg file, and the DB2 Global-Level Profile Registry was located in the default.env file.
User response
DB2 instance information and global registry information is stored in the global registry (global.reg). Related tasks "Setting environment variables on Linux and UNIX operating systems" in Database Administration Concepts and Configuration Reference
Details
DB2 Text Search is no longer installed automatically if you select Typical when installing the DB2 product. On Windows operating systems, the syntax for the Text Search instance services port number has been simplified for the following commands:
v db2icrt (Create instance)
v db2imigr (Migrate instance)
v db2iupdt (Update instances)
In addition, the new Version 9.7 db2iupgrade command uses the simplified syntax. The simplified syntax is as follows:
/j "TEXT_SEARCH,portnumber"
User response
To install DB2 Text Search in Version 9.7, select Custom when installing the DB2 product. Also, add the COMP=TEXT_SEARCH and the CONFIGURE_TEXT_SEARCH=YES keywords to existing scripts and response files. If you have existing scripts on Windows operating systems that use the db2icrt, the db2imigr, or the db2iupdt command, remove the extra comma at the beginning of the Text Search instance services port number.
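A minimal sketch of the response file keywords mentioned above (all other required keywords are omitted, and the comment line is illustrative only):

   * Install and configure DB2 Text Search during a response file installation
   COMP                  = TEXT_SEARCH
   CONFIGURE_TEXT_SEARCH = YES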
FP2: DB2 Advanced Copy Services (ACS) is not automatically included in a compact installation
In Version 9.7 Fix Pack 2 and later fix packs, DB2 ACS is no longer installed automatically during DB2 installations of type compact.
Details
Before Version 9.7 Fix Pack 2, DB2 ACS was always installed during all DB2 product installations. Starting in Version 9.7 Fix Pack 2, DB2 ACS is an optional component during some types of DB2 product installations, including unattended installations. DB2 ACS is installed by default during DB2 installations of type custom and typical, and when you use the db2_install command.
During unattended installations, you can use the ACS response file keyword to install or uninstall DB2 ACS.
User response
If you have already completed a compact installation and must now install DB2 ACS, use a custom response file installation, or click Work with Existing from the DB2 Setup wizard to install only DB2 ACS into the DB2 copy that you already installed. Related tasks "Installing DB2 Advanced Copy Services (ACS)" in Data Recovery and High Availability Guide and Reference
Details
The changes for the SYSADM authority are as follows:
v A user who holds SYSADM authority no longer has implicit DBADM authority and therefore has limited capabilities compared to those available in Version 9.5. However, the UPGRADE DATABASE command and the RESTORE DATABASE command (for a downlevel database) grant DBADM authority to the SYSADM group. Privileges associated with groups are not considered for authorization when a user creates views, triggers, materialized query tables (MQTs), packages, and SQL routines. Given these restrictions associated with groups, even though the upgrade process grants DBADM authority to the SYSADM group, the upgrade process alone does not ensure that every user with SYSADM authority in Version 9.5 will have exactly the same capabilities in Version 9.7. For a member of the SYSADM group to be certain of retaining the same privileges as in Version 9.5, that member must be directly granted DBADM authority with the DATAACCESS and ACCESSCTRL authorities, or must possess these authorities through membership in a role.
v If a user holding SYSADM authority creates a database, the user is automatically granted DATAACCESS, ACCESSCTRL, SECADM, and DBADM authority for that database, which gives the user the same abilities as in Version 9.5.
v A user who holds SYSADM authority is no longer able to grant any authorities or privileges, except table space privileges.
User response
For a user holding SYSADM authority to obtain the same capabilities as in Version 9.5 (other than the ability to grant SECADM authority), the security administrator must explicitly grant the user DBADM authority and grant the user the new DATAACCESS and ACCESSCTRL authorities. The new authorities can be granted
by using the GRANT DBADM ON DATABASE statement with the WITH DATAACCESS and WITH ACCESSCTRL options of that statement, which are default options. The DATAACCESS authority is the authority that allows access to data within a specific database, and the ACCESSCTRL authority is the authority that allows a user to grant and revoke privileges within a specific database. For the user holding SYSADM authority to also be able to grant SECADM authority, the security administrator must grant the user SECADM authority as well. However, holding SECADM authority allows the user to perform more actions than the user could as a Version 9.5 system administrator. For example, the user can create objects such as roles, trusted contexts, and audit policies. Tip: In addition to considering how these SYSADM authority changes impact your security implementation, you should also review the new capabilities of the database administrator (who holds DBADM authority) and the security administrator (who holds SECADM authority), and the new authorities introduced in DB2 Version 9.7, so that you can decide how to organize responsibilities within your system. DB2 Version 9.7 introduces the following new authorities in addition to DATAACCESS and ACCESSCTRL:
v WLMADM, for managing workloads
v SQLADM, for tuning SQL statements
v EXPLAIN, for using the explain facility with SQL statements
These new authorities allow you to grant users responsibilities without granting them DBADM authority or privileges on base tables, which would give those users more privileges than they need to do their work.
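As a hedged sketch (the user name is a placeholder), a security administrator might restore Version 9.5-like capabilities to a former system administrator as follows:

   GRANT DBADM WITH DATAACCESS WITH ACCESSCTRL ON DATABASE TO USER former_sysadm
   GRANT SECADM ON DATABASE TO USER former_sysadm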
Related concepts "Authorization, privileges, and object ownership" in SQL Reference, Volume 1 "Windows LocalSystem account support" in Database Security Guide DB2 authorization model has been enhanced to allow separation of duties on page 84 "Authorities roadmap" in Database Security Guide Related reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
Details
The changes for the SECADM authority are as follows:
v A user who holds SECADM authority can now grant and revoke all authorities and privileges, including DBADM and SECADM.
v The security administrator can now grant SECADM authority to roles and groups. In Version 9.5, SECADM could be granted only to a user.
v The security administrator can delegate responsibility to run the audit stored procedures and table functions (AUDIT_ARCHIVE, AUDIT_LIST_LOGS, and AUDIT_DELIM_EXTRACT) by granting another user EXECUTE privilege on them.
User response
The security administrator can allow another user to grant and revoke authorities and privileges by granting that other user the new ACCESSCTRL authority. However, only the security administrator can grant SECADM, DBADM, and ACCESSCTRL authority. Also, only the security administrator can grant the new DATAACCESS authority, which enables a user to access data within a specific database. In addition to considering how these SECADM authority changes impact your security implementation, you should also review the new capabilities of the system administrator (who holds SYSADM authority) and the database administrator (who holds DBADM authority), and the new authorities introduced in DB2 Version 9.7, so that you can decide how to organize responsibilities within your system. DB2 Version 9.7 introduces the following new authorities in addition to DATAACCESS and ACCESSCTRL:
v WLMADM, for managing workloads
v SQLADM, for tuning SQL statements
v EXPLAIN, for using the explain facility with SQL statements
These new authorities allow you to grant users responsibilities without granting them DBADM authority or privileges on base tables, which would give those users more privileges than they need to do their work.
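For illustration only (the user and role names are placeholders), such delegation could look like this:

   GRANT ACCESSCTRL ON DATABASE TO USER grant_admin
   GRANT SECADM ON DATABASE TO ROLE security_team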
Related concepts "Authorization, privileges, and object ownership" in SQL Reference, Volume 1 DB2 authorization model has been enhanced to allow separation of duties on page 84 "Authorities roadmap" in Database Security Guide Related reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
Details
The changes for the DBADM authority are as follows:
v DBADM authority no longer necessarily includes the ability to access data and to grant and revoke privileges for a database.
v Granting DBADM authority no longer additionally grants the following separate database authorities, because they are already implicitly vested in the DBADM authority level:
   BINDADD
   CONNECT
   CREATETAB
   CREATE_EXTERNAL_ROUTINE
   CREATE_NOT_FENCED_ROUTINE
   IMPLICIT_SCHEMA
   QUIESCE_CONNECT
   LOAD
User response
The new DATAACCESS authority provides the ability to access data in a database, and the new ACCESSCTRL authority provides the ability to grant and revoke privileges and authorities. These authorities are granted by default when a security administrator grants DBADM authority. Also, the security administrator can use the following options of the GRANT DBADM ON DATABASE statement to provide or not provide the ACCESSCTRL and DATAACCESS authorities:
v WITH ACCESSCTRL
v WITHOUT ACCESSCTRL
v WITH DATAACCESS
v WITHOUT DATAACCESS
Tip: In addition to considering how these DBADM authority changes impact your security implementation, you should also review the new capabilities of the system administrator (who holds SYSADM authority) and security administrator (who holds SECADM authority), and the new authorities introduced in DB2 Version 9.7,
so that you can decide how to organize responsibilities within your system. DB2 Version 9.7 introduces the following new authorities in addition to DATAACCESS and ACCESSCTRL:
v WLMADM, for managing workloads
v SQLADM, for tuning SQL statements
v EXPLAIN, for using the explain facility with SQL statements
These new authorities allow you to grant users responsibilities without granting them DBADM authority or privileges on base tables, which would give those users more privileges than they need to do their work. Related concepts "Authorization, privileges, and object ownership" in SQL Reference, Volume 1 DB2 authorization model has been enhanced to allow separation of duties on page 84 "Authorities roadmap" in Database Security Guide Related reference "DB2 server behavior changes" in Upgrading to DB2 Version 9.7
SSLconfig.ini and SSLClientconfig.ini files replaced with new database manager configuration parameters
You no longer need to use the SSLconfig.ini and SSLClientconfig.ini configuration files to set up SSL support. The parameters that you used to set in these files have been replaced with database manager configuration parameters.
Details
The new database manager configuration parameters for server-side SSL support are as follows:
v ssl_svr_keydb specifies the fully qualified path of the key database file.
v ssl_svr_stash specifies the fully qualified path of the stash file that holds the encrypted password to the key database.
v ssl_svr_label specifies the label of the digital certificate of the server in the key database.
v ssl_svcename specifies the port that the database server uses to await communications from remote clients using the SSL protocol.
v ssl_cipherspecs (optional) specifies the cipher suites that the server supports.
v ssl_versions (optional) specifies the SSL and TLS versions that the server supports.
The new database manager configuration parameters for client-side SSL support are as follows:
v ssl_clnt_keydb specifies the fully qualified path of the key database file on the client.
v ssl_clnt_stash specifies the fully qualified path of the stash file on the client.
User response
To set up SSL support, set values for the new database manager configuration parameters.
The following tables show how the parameters in the SSLconfig.ini and SSLClientconfig.ini files map to these new database manager configuration parameters. The ssl_cipherspecs and ssl_versions parameters do not have equivalent parameters in these files; they provide new configuration options.
Table 21. Mapping of server-side SSL support parameters to new database manager configuration parameters

Version 9.5 SSLconfig.ini parameter      Version 9.7 database manager configuration parameter
DB2_SSL_KEYSTORE_FILE                    ssl_svr_keydb
DB2_SSL_KEYSTORE_PW                      ssl_svr_stash
DB2_SSL_KEYSTORE_LABEL                   ssl_svr_label
DB2_SSL_LISTENER                         ssl_svcename
The ssl_svr_stash database manager configuration parameter is not exactly equivalent to the DB2_SSL_KEYSTORE_PW parameter. The ssl_svr_stash configuration parameter points to a stash file that holds the encrypted password to a key database, whereas the DB2_SSL_KEYSTORE_PW parameter specifies the password itself.
Table 22. Mapping of client-side SSL support parameters to new database manager configuration parameters

Version 9.5 SSLClientconfig.ini parameter    Version 9.7 database manager configuration parameter
DB2_SSL_KEYSTORE_FILE                        ssl_clnt_keydb
DB2_SSL_KEYRING_STASH_FILE                   ssl_clnt_stash
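A hedged sketch (the paths, certificate label, and port are placeholders) of configuring server-side SSL with the new parameters:

   db2 UPDATE DBM CFG USING SSL_SVR_KEYDB /home/db2inst1/keystore/server.kdb SSL_SVR_STASH /home/db2inst1/keystore/server.sth SSL_SVR_LABEL mycertlabel SSL_SVCENAME 50443
   db2stop
   db2start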
Related concepts SSL client support expanded and configuration simplified on page 87
Audit stored procedures and table functions now require only the EXECUTE privilege
In Version 9.7, the security administrator (who holds SECADM authority) can grant the EXECUTE privilege on the audit stored procedures and table functions. Only the security administrator has the ability to grant EXECUTE on these routines.
Details
Prior to Version 9.7, only the security administrator could run the following stored procedures and table functions:
v The AUDIT_ARCHIVE stored procedure and table function
v The AUDIT_LIST_LOGS table function
v The AUDIT_DELIM_EXTRACT stored procedure
Resolution
In Version 9.7, you can run the audit stored procedures and table functions if you have been granted the EXECUTE privilege on them.
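For illustration (the grantee name is a placeholder; these routines reside in the SYSPROC schema), the security administrator might delegate the audit tasks as follows:

   GRANT EXECUTE ON PROCEDURE SYSPROC.AUDIT_ARCHIVE TO USER audit_operator
   GRANT EXECUTE ON FUNCTION SYSPROC.AUDIT_LIST_LOGS TO USER audit_operator
   GRANT EXECUTE ON PROCEDURE SYSPROC.AUDIT_DELIM_EXTRACT TO USER audit_operator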
Related reference "AUDIT_ARCHIVE procedure and table function - Archive audit log file" in Administrative Routines and Views "AUDIT_DELIM_EXTRACT - performs extract to delimited file" in Administrative Routines and Views "AUDIT_LIST_LOGS - lists archived audit log files" in Administrative Routines and Views
Details
Starting in Version 9.7, the instance owner must hold both the DBADM and DATAACCESS authorities; otherwise, the Net Search Extender commands will fail even if the user has the correct authorities and privileges. In addition, the authorities and privileges required for running the following Net Search Extender commands have changed as follows:
Table 23. Authorization changes for Net Search Extender commands Command ACTIVATE CACHE ALTER CLEAR EVENTS CONTROL CREATE INDEX Version 9.5 authorization CONTROL privilege on the index table CONTROL privilege on the index table CONTROL privilege on the index table Instance owner must hold SYSADM authority CONTROL privilege on the index table Version 9.7 authorization CONTROL privilege on the index table or DBADM authority CONTROL privilege on the index table or DBADM authority CONTROL privilege on the index table or DBADM authority Instance owner must hold DBADM with DATAACCESS authority One of the following authority levels is required: v CONTROL privilege on the index table v INDEX privilege on the index table and either IMPLICIT_SCHEMA authority on the database or CREATEIN privilege on the index table schema v DBADM authority DB2EXTHL DEACTIVATE CACHE DISABLE DROP INDEX ENABLE UPDATE CONNECT to DB privilege CONTROL privilege on the index table DBADM authority CONTROL privilege on the index table or DBADM authority DBADM with SYSADM authority CONTROL privilege on the index table Instance owner must hold DBADM with DATAACCESS authority CONTROL privilege on the index table or DBADM authority DBADM authority CONTROL privilege on the index table or DBADM authority DBADM authority CONTROL privilege on the index table or DATAACCESS authority
User response
Ensure that the instance owner has both DBADM and DATAACCESS authorities and ensure that you have the appropriate authority levels and privileges before running the Net Search Extender commands.
DB2 Text Search command and stored procedure authorizations have changed
Version 9.7 includes authorization changes that affect the scope and abilities of SYSADM, SECADM and DBADM authority levels and therefore impact Text Search commands and stored procedure execution.
Details
Starting in Version 9.7, the instance owner must hold both the DBADM and DATAACCESS authorities; otherwise, the DB2 Text Search commands and stored procedures will fail even if the user has the correct authorities and privileges. In the Windows environment, if the DB2 Text Search service is run under the LocalSystem account, the LocalSystem account also needs to hold DBADM with DATAACCESS authority for the DB2 Text Search commands to run. In addition, the authorities and privileges required for running the DB2 Text Search commands and stored procedures have changed as follows:
Table 24. Authorization changes for the db2ts command db2ts command ALTER CLEANUP CLEAR COMMAND LOCKS Version 9.5 authorization Version 9.7 authorization
CONTROL privilege on the index table CONTROL privilege on the index table or DBADM authority Instance owner CONTROL privilege on the index table, or DBADM or SYSADM if no index is specified. Instance owner must hold DBADM with DATAACCESS authority CONTROL privilege on the index table, or DBADM authority if no index is specified
CONTROL privilege on the index table CONTROL privilege on the index table or DBADM authority CONTROL privilege on the index table One of the following authority levels is required: v CONTROL privilege on the index table v INDEX privilege on the index table and either IMPLICIT_SCHEMA authority on the database or CREATEIN privilege on the index table schema v DBADM authority
DBADM authority
CONTROL privilege on the index table CONTROL privilege on the index table or DBADM authority SYSADM authority DBADM authority
CONTROL privilege on the index table CONTROL privilege on the index table or DATAACCESS authority
Important: You must grant the EXECUTE privilege to PUBLIC for all the DB2 Text Search stored procedures.
Table 25. Authorization changes for the DB2 Text Search stored procedures Stored procedure SYSTS_ALTER SYSTS_ADMIN_CMD Version 9.5 authorization CONTROL privilege on the index table Version 9.7 authorization CONTROL privilege on the index table or DBADM authority
authorization requirements are authorization requirements are the same as the same as those listed for the those listed for the invoked command invoked command If an index is specified, If an index is specified, CONTROL CONTROL privilege on the privilege on the index table else if no index, else if no index is index is specified, DBADM with authority specified, DBADM or SYSADM authority CONTROL privilege on the index table One of the following authority levels is required: v CONTROL privilege on the index table v INDEX privilege on the index table with either the IMPLICIT_SCHEMA authority on the database or the CREATEIN privilege on the index table schema v DBADM authority
SYSTS_CLEAR_COMMAND_LOCKS
SYSTS_CREATE
DBADM or SYSADM authority DBADM authority CONTROL privilege on the index table SYSADM authority CONTROL privilege on the index table CONTROL privilege on the index table or DBADM authority DBADM authority CONTROL privilege on the index table or DATAACCESS authority
User response
Ensure that the instance owner has both DBADM and DATAACCESS authorities. Starting in Version 9.7 Fix Pack 1 and later fix packs, users with SECADM authority can grant the necessary DBADM with DATAACCESS privileges to the instance owner by specifying the AUTOGRANT option when running the db2ts ENABLE DATABASE FOR TEXT command. Ensure that you have the appropriate authority levels and privileges before running the DB2 Text Search commands or stored procedures, and that you have granted the EXECUTE privilege to PUBLIC for all the DB2 Text Search stored procedures.
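As a hedged sketch (only two of the stored procedures named above are shown; these procedures reside in the SYSPROC schema), granting the required EXECUTE privilege to PUBLIC might look like this:

   GRANT EXECUTE ON PROCEDURE SYSPROC.SYSTS_CREATE TO PUBLIC
   GRANT EXECUTE ON PROCEDURE SYSPROC.SYSTS_ALTER TO PUBLIC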
Cursor stability (CS) level behavior for newly created databases has been changed
To reduce lock wait and deadlock scenarios when using the cursor stability (CS) isolation level, currently committed semantics have been introduced and are enabled by default when new databases are created. Where possible, a read operation returns the currently committed result, ignoring what might happen to an uncommitted operation.
Details
In previous versions, CS prevented an application from reading any row that was changed by other applications until the change was committed. In Version 9.7, under CS when currently committed semantics is enabled, a read operation does not necessarily wait for a change to a row to be committed before returning a value. The new CS behavior is beneficial in high-throughput transaction-processing database environments. In such environments, waiting on locks is undesirable. This new behavior is also particularly beneficial if your applications run against databases from multiple vendors. You can use CS instead of writing and maintaining code pertaining to locking semantics specifically for DB2 databases. If you upgrade a database from a previous release, the new CS behavior is not automatically enabled. To use this feature on an upgraded database, you must enable it manually.
User response
You can disable currently committed semantics for new databases created under CS, or enable currently committed semantics for upgraded databases, using the following methods:
v At the database level, using the new cur_commit database configuration parameter
v At the application level (overriding the database setting), using the CONCURRENTACCESSRESOLUTION option of the BIND and PRECOMPILE commands
v At the stored procedure level (overriding the database setting), using the DB2_SQLROUTINE_PREPOPTS registry variable and the SET_ROUTINE_OPTS procedure
Related concepts Cursor stability (CS) isolation level enhancements provide more concurrency on page 58 "Currently committed semantics improve concurrency" in SQL Procedural Languages: Application Enablement and Support
Details
A new API data type, db2LSN, has been defined in db2ApiDf.h in support of the new LSN size. For information on what happens between new and old client and server combinations, see Log sequence number changes affecting API and application behavior.
User response
There is no down level API support for the log read APIs affected by this change. You need to update existing applications using log read APIs (db2ReadLog and db2ReadLogNoConn APIs) to use the new release libraries after upgrading the database server. Clients must also be upgraded to the new release to use the new libraries. You must also change applications to use the new LSN data structure differences present in the log flow that is returned in the log buffer during a log read API operation. The error message, SQL2032N, is returned to indicate an unsupported down level API call. Related concepts Data replication source tables can be compressed on page 7 "Log sequence number changes affecting API and application behavior" in Administrative API Reference
Some system catalog views, system-defined administrative routines and views have been added and changed
To support new features in Version 9.7, system catalog views, system built-in routines, administrative routines and views have been added and modified.
The following system catalog views have been changed in Version 9.7:
v SYSCAT.PACKAGEDEP
v SYSCAT.PACKAGES
v SYSCAT.ROUTINEDEP
v SYSCAT.ROUTINEPARMS
v SYSCAT.ROUTINES
v SYSCAT.SECURITYPOLICIES
v SYSCAT.SEQUENCES
v SYSCAT.SERVICECLASSES
v SYSCAT.TABDEP
v SYSCAT.TABDETACHEDDEP
v SYSCAT.TABLES
v SYSCAT.TABLESPACES
v SYSCAT.THRESHOLDS
v SYSCAT.TRIGDEP
v SYSCAT.VARIABLEDEP
v SYSCAT.VARIABLES
v SYSCAT.WORKCLASSES
v SYSCAT.WORKLOADS
v SYSCAT.XSROBJECTDEP
v SYSSTAT.COLGROUPS
v SYSSTAT.COLUMNS
v SYSSTAT.INDEXES
The following system catalog views have been added in Version 9.7:
v SYSCAT.CONDITIONS
v SYSCAT.DATATYPEDEP
v SYSCAT.INDEXPARTITIONS
v SYSCAT.INVALIDOBJECTS
v SYSCAT.MODULEAUTH
v REORGCHK_IX_STATS
v SNAPAPPL_INFO administrative view and SNAP_GET_APPL_INFO_V95 table function
v SNAPSTORAGE_PATHS administrative view and SNAP_GET_STORAGE_PATHS_V97 table function
v SNAPTBSP_PART administrative view and SNAP_GET_TBSP_PART_V97 table function
v SYSINSTALLOBJECTS (starting in Version 9.7 Fix Pack 1)
v WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function
The following ADMIN_CMD stored procedure and associated administrative SQL routines have been added:
v ADMIN_EST_INLINE_LENGTH
v ADMIN_GET_INDEX_COMPRESS_INFO
v ADMIN_GET_INDEX_INFO
v ADMIN_GET_TAB_COMPRESS_INFO_V97
v ADMIN_GET_TEMP_COLUMNS
v ADMIN_GET_TEMP_TABLES
v ADMIN_IS_INLINED
v ADMIN_REVALIDATE_DB_OBJECTS
The following miscellaneous routines and views have been added:
v ADMIN_MOVE_TABLE
v ADMIN_MOVE_TABLE_UTIL
The following security scalar function has been added:
v AUTH_GET_INSTANCE_AUTHID
The following SQL procedures routine has been added:
v ALTER_ROUTINE_PACKAGE
The following common SQL API procedures have been added:
v CANCEL_WORK
v GET_CONFIG
v GET_MESSAGE
v GET_SYSTEM_INFO
v SET_CONFIG
The following system-defined modules routines have been added:
v DBMS_ALERT.REGISTER
v DBMS_ALERT.REMOVE
v DBMS_ALERT.REMOVEALL
v DBMS_ALERT.SET_DEFAULTS
v DBMS_ALERT.SIGNAL
v DBMS_ALERT.WAITANY
v DBMS_ALERT.WAITONE
v DBMS_JOB.BROKEN
v DBMS_JOB.CHANGE
v DBMS_JOB.INTERVAL
v DBMS_JOB.NEXT_DATE
v DBMS_JOB.REMOVE
v DBMS_JOB.RUN
v DBMS_JOB.SUBMIT
v DBMS_JOB.WHAT
v DBMS_LOB.APPEND
v DBMS_LOB.CLOSE
v DBMS_LOB.COMPARE
v DBMS_LOB.CONVERTTOBLOB
v DBMS_LOB.CONVERTTOCLOB
v DBMS_LOB.COPY
v DBMS_LOB.ERASE
v DBMS_LOB.GET_STORAGE_LIMIT
v DBMS_LOB.GETLENGTH
v DBMS_LOB.INSTR
v DBMS_LOB.ISOPEN
v DBMS_LOB.OPEN
v DBMS_LOB.READ
v DBMS_LOB.SUBSTR
v DBMS_LOB.TRIM
v DBMS_LOB.WRITE
v DBMS_LOB.WRITEAPPEND
v DBMS_OUTPUT.DISABLE
v DBMS_OUTPUT.ENABLE
v DBMS_OUTPUT.GET_LINE
v DBMS_OUTPUT.GET_LINES
v DBMS_OUTPUT.NEW_LINE
v DBMS_OUTPUT.PUT
v DBMS_OUTPUT.PUT_LINE
v DBMS_PIPE.CREATE_PIPE
v DBMS_PIPE.NEXT_ITEM_TYPE
v DBMS_PIPE.PACK_MESSAGE
v DBMS_SQL.BIND_VARIABLE_DOUBLE
v DBMS_SQL.BIND_VARIABLE_INT
v DBMS_SQL.BIND_VARIABLE_NUMBER
v DBMS_SQL.BIND_VARIABLE_RAW
v DBMS_SQL.BIND_VARIABLE_TIMESTAMP
v DBMS_SQL.BIND_VARIABLE_VARCHAR
v DBMS_SQL.CLOSE_CURSOR
v DBMS_SQL.COLUMN_VALUE_BLOB
v DBMS_SQL.COLUMN_VALUE_CHAR
v DBMS_SQL.COLUMN_VALUE_CLOB
v DBMS_SQL.COLUMN_VALUE_DATE
v DBMS_SQL.COLUMN_VALUE_DOUBLE
v DBMS_SQL.COLUMN_VALUE_INT
v DBMS_SQL.COLUMN_VALUE_LONG
v DBMS_SQL.COLUMN_VALUE_NUMBER
v DBMS_SQL.COLUMN_VALUE_RAW
v DBMS_SQL.COLUMN_VALUE_TIMESTAMP
v DBMS_SQL.COLUMN_VALUE_VARCHAR
v DBMS_SQL.DEFINE_COLUMN_BLOB
v DBMS_SQL.DEFINE_COLUMN_CHAR
v DBMS_SQL.DEFINE_COLUMN_CLOB
v DBMS_SQL.DEFINE_COLUMN_DATE
v DBMS_SQL.DEFINE_COLUMN_DOUBLE
v DBMS_SQL.DEFINE_COLUMN_INT
v DBMS_SQL.DEFINE_COLUMN_LONG
v DBMS_SQL.DEFINE_COLUMN_NUMBER
v DBMS_SQL.DEFINE_COLUMN_RAW
v DBMS_SQL.DEFINE_COLUMN_TIMESTAMP
v DBMS_SQL.DEFINE_COLUMN_VARCHAR
v DBMS_SQL.DESCRIBE_COLUMNS
v DBMS_SQL.DESCRIBE_COLUMNS2
v DBMS_SQL.EXECUTE
v DBMS_SQL.EXECUTE_AND_FETCH
v DBMS_SQL.VARIABLE_VALUE
v DBMS_SQL.VARIABLE_VALUE
v DBMS_SQL.VARIABLE_VALUE
v DBMS_UTIL.ANALYZE_DATABASE
v DBMS_UTIL.ANALYZE_PART_OBJECT
v DBMS_UTIL.ANALYZE_SCHEMA
v DBMS_UTIL.CANONICALIZE
v DBMS_UTIL.COMMA_TO_TABLE
v DBMS_UTIL.COMPILE_SCHEMA
v DBMS_UTIL.DB_VERSION
v DBMS_UTIL.EXEC_DDL_STATEMENT
v DBMS_UTIL.GET_CPU_TIME
v DBMS_UTIL.GET_DEPENDENCY
v DBMS_UTIL.GET_HASH_VALUE
v DBMS_UTIL.GET_TIME
v DBMS_UTIL.NAME_RESOLVE
v DBMS_UTIL.NAME_TOKENIZE
v DBMS_UTIL.TABLE_TO_COMMA
v DBMS_UTIL.VALIDATE
v MONREPORT.CONNECTION (starting in Version 9.7 Fix Pack 1)
v MONREPORT.CURRENTAPPS (starting in Version 9.7 Fix Pack 1)
v MONREPORT.CURRENTSQL (starting in Version 9.7 Fix Pack 1)
v MONREPORT.DBSUMMARY (starting in Version 9.7 Fix Pack 1)
v MONREPORT.LOCKWAIT (starting in Version 9.7 Fix Pack 1)
v MONREPORT.PKGCACHE (starting in Version 9.7 Fix Pack 1)
v UTL_DIR.CREATE_DIRECTORY
v UTL_DIR.CREATE_OR_REPLACE_DIRECTORY
v UTL_DIR.DROP_DIRECTORY
v UTL_DIR.GET_DIRECTORY_PATH
v UTL_FILE.FCLOSE
v UTL_FILE.FCLOSE_ALL
v UTL_FILE.FCOPY
v UTL_FILE.FFLUSH
v UTL_MAIL.SEND_ATTACH_VARCHAR2
v UTL_SMTP.CLOSE_DATA
v UTL_SMTP.COMMAND
v UTL_SMTP.COMMAND_REPLIES
v UTL_SMTP.DATA
v UTL_SMTP.EHLO
v UTL_SMTP.HELO
v UTL_SMTP.HELP
v UTL_SMTP.MAIL
v UTL_SMTP.NOOP
v UTL_SMTP.OPEN_CONNECTION (function)
v UTL_SMTP.OPEN_CONNECTION (procedure)
v UTL_SMTP.OPEN_DATA
v UTL_SMTP.QUIT
v UTL_SMTP.RCPT
v UTL_SMTP.RSET
v UTL_SMTP.VRFY
v UTL_SMTP.WRITE_DATA
v UTL_SMTP.WRITE_RAW_DATA
The following monitor routines have been added:
v EVMON_FORMAT_UE_TO_TABLES
v EVMON_FORMAT_UE_TO_XML
v MON_GET_ACTIVITY_DETAILS (starting in Version 9.7 Fix Pack 1)
v MON_GET_APPL_LOCKWAITS (starting in Version 9.7 Fix Pack 1)
v MON_GET_BUFFERPOOL
v MON_GET_CONNECTION (starting in Version 9.7 Fix Pack 1)
v MON_GET_CONNECTION_DETAILS (starting in Version 9.7 Fix Pack 1)
v MON_GET_CONTAINER
v MON_GET_EXTENT_MOVEMENT_STATUS
v MON_GET_FCM (starting in Version 9.7 Fix Pack 2)
v MON_GET_FCM_CONNECTION_LIST (starting in Version 9.7 Fix Pack 2)
v MON_GET_INDEX
v MON_GET_LOCKS (starting in Version 9.7 Fix Pack 1)
v MON_GET_PKG_CACHE_STMT (starting in Version 9.7 Fix Pack 1)
v MON_GET_PKG_CACHE_STMT_DETAILS (starting in Version 9.7 Fix Pack 1)
v MON_GET_SERVICE_SUBCLASS (starting in Version 9.7 Fix Pack 1)
v MON_GET_SERVICE_SUBCLASS_DETAILS (starting in Version 9.7 Fix Pack 1)
v MON_GET_TABLE (starting in Version 9.7 Fix Pack 1)
v MON_GET_TABLESPACE
v MON_GET_UNIT_OF_WORK (starting in Version 9.7 Fix Pack 1)
v MON_GET_UNIT_OF_WORK_DETAILS (starting in Version 9.7 Fix Pack 1)
v MON_GET_WORKLOAD (starting in Version 9.7 Fix Pack 1)
v MON_GET_WORKLOAD_DETAILS (starting in Version 9.7 Fix Pack 1)
The following monitor routines have been added. They produce row-based, easy-to-read output from the XML documents returned by other routines or about the lock name:
v MON_FORMAT_LOCK_NAME (starting in Version 9.7 Fix Pack 1)
v MON_FORMAT_XML_COMPONENT_TIMES_BY_ROW (starting in Version 9.7 Fix Pack 1)
v MON_FORMAT_XML_METRICS_BY_ROW (starting in Version 9.7 Fix Pack 1)
v MON_FORMAT_XML_TIMES_BY_ROW (starting in Version 9.7 Fix Pack 1)
v MON_FORMAT_XML_WAIT_TIMES_BY_ROW (starting in Version 9.7 Fix Pack 1)
The following monitor views have been added:
v MON_BP_UTILIZATION (starting in Version 9.7 Fix Pack 1)
v MON_CONNECTION_SUMMARY (starting in Version 9.7 Fix Pack 1)
v MON_CURRENT_SQL (starting in Version 9.7 Fix Pack 1)
v MON_CURRENT_UOW (starting in Version 9.7 Fix Pack 1)
v MON_DB_SUMMARY (starting in Version 9.7 Fix Pack 1)
v MON_LOCKWAITS (starting in Version 9.7 Fix Pack 1)
v MON_PKG_CACHE_SUMMARY (starting in Version 9.7 Fix Pack 1)
v MON_SERVICE_SUBCLASS_SUMMARY (starting in Version 9.7 Fix Pack 1)
v MON_TBSP_UTILIZATION (starting in Version 9.7 Fix Pack 1)
v MON_WORKLOAD_SUMMARY (starting in Version 9.7 Fix Pack 1)
The following explain routines have been added:
v EXPLAIN_FROM_ACTIVITY (starting in Version 9.7 Fix Pack 1)
v EXPLAIN_FROM_CATALOG (starting in Version 9.7 Fix Pack 1)
v EXPLAIN_FROM_DATA (starting in Version 9.7 Fix Pack 1)
v EXPLAIN_FROM_SECTION (starting in Version 9.7 Fix Pack 1)
The following snapshot routines and views have been added:
v SNAP_GET_TBSP_PART_V97
v SNAP_GET_STORAGE_PATHS_V97
The following workload management routines have been updated:
v WLM_GET_CONN_ENV (starting in Version 9.7 Fix Pack 2)
v WLM_GET_SERVICE_CLASS_AGENTS_V97 (starting in Version 9.7 Fix Pack 1)
v WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97
v WLM_GET_SERVICE_SUBCLASS_STATS_V97 (starting in Version 9.7 Fix Pack 1)
v WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 (starting in Version 9.7 Fix Pack 1)
v WLM_GET_WORKLOAD_STATS_V97 (starting in Version 9.7 Fix Pack 1)
v WLM_SET_CONN_ENV (starting in Version 9.7 Fix Pack 2)
The following table functions have been deprecated in Version 9.7:
v HEALTH_CONT_HI
v HEALTH_CONT_HI_HIS
v HEALTH_CONT_INFO
v HEALTH_DB_HI
v HEALTH_DB_HI_HIS
v HEALTH_DB_HIC
v HEALTH_DB_HIC_HIS
v HEALTH_DB_INFO
v HEALTH_DBM_HI
v HEALTH_DBM_HI_HIS
v HEALTH_DBM_INFO
v HEALTH_GET_ALERT_ACTION_CFG
v HEALTH_GET_ALERT_CFG
v HEALTH_GET_IND_DEFINITION
v HEALTH_HI_REC
v HEALTH_TBS_HI
v HEALTH_TBS_HI_HIS
v HEALTH_TBS_INFO
v SNAP_GET_LOCK (deprecated starting in Version 9.7 Fix Pack 1)
v SNAP_GET_LOCKWAIT (deprecated starting in Version 9.7 Fix Pack 1)
v SNAP_GET_STORAGE_PATHS
v SNAP_GET_TBSP_PART_V91
v WLM_GET_ACTIVITY_DETAILS
v WLM_GET_SERVICE_CLASS_AGENTS
v WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES
v WLM_GET_SERVICE_SUBCLASS_STATS
v WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES
v WLM_GET_WORKLOAD_STATS
The following administrative views have been deprecated in Version 9.7 Fix Pack 1:
v SNAPLOCK
v SNAPLOCKWAIT
v LOCKS_HELD
v LOCKWAITS
User response
To have access to new administrative routines in Version 9.7 Fix Pack 1 in databases created in Version 9.7 prior to Fix Pack 1, you must have already run the db2updv97 command. If your database was created before Version 9.7, it is not necessary to run the db2updv97 command (because the system catalog is automatically updated by the database upgrade). Review the list of the Deprecated SQL administrative routines and their replacement routines or views in Administrative Routines and Views to determine additional changes that might impact your applications and scripts. For the list of the data dictionary-compatible views, see the Data dictionary-compatible views topic. To minimize the impact of changes to system-defined routines and views, review .
Related reference "Upgrade impact from system catalog changes" in Upgrading to DB2 Version 9.7
New SYSIBM functions override unqualified user-defined functions with the same name
If you use the default SQL path (or an SQL path that has SYSIBM before user schemas), and the schema has existing functions with the same names as new SYSIBM functions, the SYSIBM functions are used instead. This situation usually improves performance, but might cause unexpected behaviors.
Details
If a user-defined function or a user-defined procedure has the same name and signature as a new built-in function or SQL administrative routine, an unqualified reference to those functions or routines in a dynamic SQL statement executes the built-in function or SQL administrative routine instead of the user-defined one. The default SQL path contains the schemas SYSIBM, SYSFUN, SYSPROC, and SYSIBMADM before the schema name that is the value of the USER special register. These system schemas are also usually included in the SQL path when it is explicitly set using the SET PATH statement or the FUNCPATH bind option. When function resolution and procedure resolution are performed, the built-in functions and SQL administrative routines in the SYSIBM, SYSFUN, SYSPROC, and SYSIBMADM schemas are encountered before user-defined functions and user-defined procedures. This change does not affect static SQL in packages or SQL objects such as views, triggers, or SQL functions, which continue to execute the user-defined function or procedure until an explicit bind of the package, or a drop and create of the SQL object. For a complete list of scalar functions added in this release, refer to Scalar function support has been extended on page 101.
User response
Rename the user-defined routine or fully qualify the name before you invoke it. Alternatively, place the schema in which the user-defined routine exists before the schema in which the built-in functions and SQL administrative routines exist in the SQL path. However, promoting the user schema in the SQL path increases the resolution time for all built-in functions and SQL administrative routines, because the system schemas are then searched after the user schema.
Related reference
"SET PATH " in SQL Reference, Volume 2
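For example, a minimal sketch of both workarounds (MYSCHEMA and MYFUNC are hypothetical names):

SET PATH = MYSCHEMA, CURRENT PATH
-- or qualify the invocation explicitly:
VALUES MYSCHEMA.MYFUNC(1)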
Details
To provide additional flexibility when creating expressions, you can now specify untyped NULL keyword specifications anywhere an expression is allowed. As a result, references to the NULL keyword that are not qualified and not delimited resolve to the null value when SQL statements are compiled, rather than resolving to an identifier name as was the case in earlier releases. For example, if a database identifier is named NULL and is used in an SQL statement without being fully qualified or delimited, the identifier specification might resolve to the keyword NULL instead of the identifier reference. Assuming you have the following table and data:
CREATE TABLE MY_TAB (NULL INT)
INSERT INTO MY_TAB VALUES (1)

In previous releases, a NULL specified in the select list of a query against this table (for example, SELECT NULL FROM MY_TAB) resolves to the column named NULL. Starting in Version 9.7, it resolves to the null value. Also, when you issue the following statement:
SELECT NULL FROM TABLE(VALUES(1)) AS X(NULL)
In previous releases, this statement returns a value of 1. Starting in Version 9.7, this statement returns a null value.
User response
To avoid conflicts with the keyword NULL, columns named NULL should be fully qualified or delimited when used in SQL statements. Review existing expressions that use untyped NULL keyword specifications, and update them if needed. You can use the db2ckupgrade command to check for identifiers named NULL. For example, the following expressions can be used to produce the behaviors from previous releases:
SELECT MY_TAB.NULL FROM MY_TAB
SELECT "NULL" FROM MY_TAB
Related reference "db2ckupgrade - Check database for upgrade " in Command Reference
Details
In previous releases, the CHAR scalar function (decimal to character) as well as the CAST specification from decimal to character returns leading zeros and a trailing decimal character in the result. The behavior is inconsistent with the VARCHAR
scalar function as well as the SQL standard casting rules. The following examples illustrate the changed behavior:
Example 1
Assuming that you create the following table and data:
CREATE TABLE MY_TAB (C1 DEC(31,2))
INSERT INTO MY_TAB VALUES 0.20, 0.02, 1.20, 333.44
Example 2
Assuming that you create the following table and data:
CREATE TABLE MY_TAB (C1 DEC(5,0))
INSERT INTO MY_TAB VALUES 1, 4.0
SELECT CHAR(C1) FROM MY_TAB
User response
If previous release semantics are required, you can use either of the following methods:
v Update the specific SQL statements to use the CHAR_OLD(<decimal>) scalar function rather than the CHAR(<decimal>) scalar function.
v Update the dec_to_char_fmt database configuration parameter to 'V95'. After setting the configuration parameter, SQL statements that use the CHAR scalar function or the CAST specification from decimal to character must be recompiled. For static SQL, you must rebind the package. For dynamic SQL, the statement simply needs to be invoked again.
If you want migrated databases to use the new format, set dec_to_char_fmt to 'NEW'.
Related reference
"CHAR " in SQL Reference, Volume 1
"dec_to_char_fmt - Decimal to character function configuration parameter" in Database Administration Concepts and Configuration Reference
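For example, a minimal sketch from the command line processor (the database alias and package name are hypothetical):

UPDATE DB CFG FOR sample USING dec_to_char_fmt V95
REBIND PACKAGE myschema.mypkg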
Details
In previous releases, the DOUBLE scalar function (character string to double) removes leading and trailing whitespace (blanks, tabs, carriage-return, newline, vertical tab, and form-feed) from the string-expression before converting the argument to a floating-point number. This behavior is inconsistent with the documentation for the scalar function, other numeric scalar functions, and other database products in the DB2 family. In Version 9.7, the support for the DOUBLE scalar function has been extended to the SYSIBM schema, making it a built-in function as well, and the handling of leading and trailing whitespace has been changed. As a result, an error (SQLSTATE 22018) is returned in the following situations:
v string-expression contains whitespace characters other than a blank
v string-expression contains only blanks
v string-expression is an empty string
User response
If previous release semantics are required, you can use the SYSFUN version of the DOUBLE scalar function by using any of the following methods:
v Fully qualify the reference to the scalar function with SYSFUN. For example, SYSFUN.DOUBLE(string-expression).
v Create a sourced function on SYSFUN.DOUBLE and include the schema of the sourced function before SYSIBM in the SQL path.
v Place the SYSFUN schema in the SQL path before the SYSIBM schema. However, this is not recommended because it affects many other functions as well.
Related reference
"DOUBLE_PRECISION or DOUBLE" in SQL Reference, Volume 1
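For example, a minimal sketch of the sourced-function approach (MYSCHEMA is a hypothetical schema, and the VARCHAR(4000) parameter length is an assumption about the SYSFUN.DOUBLE signature):

CREATE FUNCTION MYSCHEMA.DOUBLE (VARCHAR(4000))
  RETURNS DOUBLE
  SOURCE SYSFUN.DOUBLE (VARCHAR(4000))
SET PATH = MYSCHEMA, CURRENT PATH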
Result data type for unary minus and unary operators in untyped expressions has been changed
Starting with Version 9.7, unary minus or unary plus operators in untyped expressions return DECFLOAT(34).
Details
In previous releases, the result data type of a unary minus or unary plus operator with an untyped expression as its argument is DOUBLE.
User response
If previous release semantics are required, you can cast the untyped expression explicitly to DOUBLE. For example:
(CAST (? AS DOUBLE))
Details
Using non-delimited references to DEFAULT on the right side of an SQL PL assignment statement no longer resolves to a variable or parameter named DEFAULT. Instead, it resolves to the DEFAULT keyword. If the usage of the DEFAULT keyword is invalid, an error is returned (SQLSTATE 42608). In addition, the invocation of a procedure that specifies DEFAULT as a parameter always resolves to the DEFAULT keyword, rather than resolving to a variable or parameter named DEFAULT if the variable or parameter exists. This change enables you to specify DEFAULT as a parameter value for procedure invocation.
In previous releases, SQL PL assignment statements in the form "SET V = DEFAULT", where V is a local variable, produce one of the following results:
v DEFAULT resolves to a variable or parameter, if one is defined
v An error (SQLSTATE 42608) is returned, if a variable or parameter with the name DEFAULT is not defined
This behavior is inconsistent with the assignment to global variables as well as the VALUES statement, where specifying DEFAULT always resolves to the DEFAULT keyword.
Also in previous releases, the invocation of a procedure specifying DEFAULT as a parameter produces one of the following results:
v The reference resolves to a variable or parameter named DEFAULT, if one is defined
v An error (SQLSTATE 42608) is returned, if a variable or parameter named DEFAULT is not defined
User response
To avoid conflicts with the keyword DEFAULT, you should delimit variables named DEFAULT using double quotes and use uppercase letters when using these variables in SQL and SQL PL assignment statements, as well as in procedure invocations.
Example
When you create and call the following procedure:
CREATE PROCEDURE foo(IN DEFAULT INTEGER)
BEGIN
  DECLARE V0 INTEGER DEFAULT 1;
  SET V0 = "DEFAULT";
  RETURN V0;
END%

CALL foo(10)%
Details
When you assign XML data to a parameter or local variable and the values are passed by reference, the node identities and the parent property are preserved. Therefore, results of the following types of operations might change:
v Operations that use the node identities of the XML value
v Operations that use the parent axis in an XPath expression
The following types of expressions use node identity:
v Node comparisons. The IS operator uses node identity to determine if two nodes have the same identity. The >> operator and << operator use node identity to compare the document order of nodes.
v Path expressions. Path expressions use node identity to eliminate duplicate nodes.
v Sequence expressions. The UNION, INTERSECT, and EXCEPT operators use node identity to eliminate duplicate nodes.
In DB2 Version 9.5 and earlier, when you assign XML data to a parameter or local variable, the XML data is passed by value. Therefore, node identities and the parent property are not preserved.
User response
Check the stored procedure to ensure that it returns the correct results when performing operations comparing node identities and operations that use the parent axis in path expressions.
Example
The stored procedure in the example shows that different results are returned when passing XML data by reference and by value.
The stored procedure uses data from a table that contains an XML column and returns results into a second table. The following statements create the tables and insert data into the first table:
CREATE TABLE t1 (c1 INT, c2 XML) ~
INSERT INTO t1 VALUES (1, '<a><b><d>1</d></b><c>2</c></a>') ~
CREATE TABLE t2 (c1 INT, c2 VARCHAR(1000)) ~
The stored procedure assigns the XML data from the XML column to two XML variables and performs operations that return different results depending on the version of the DB2 database server being used:
CREATE PROCEDURE MYTESTPROC ( )
BEGIN
  DECLARE v1, v2, v3 XML;
  -- Assign XML value to v1 and v2
  SELECT XMLQUERY('$c/a/b' passing by ref c2 as "c") INTO v1 FROM t1 WHERE c1 = 1;
  SELECT XMLQUERY('$c/a/b' passing by ref c2 as "c") INTO v2 FROM t1 WHERE c1 = 1;
  -- insert XML value into t2
  INSERT INTO t2 VALUES (1, xmlserialize(v1 as VARCHAR(1000)));
  -- OR operator combining sequences of nodes
  -- If node identities are identical, sequence expression will drop duplicate nodes
  SET v3 = xmlquery ('$x | $y' passing v1 as "x", v2 as "y");
  INSERT INTO t2 VALUES (2, xmlserialize(v3 as VARCHAR(1000)));
  -- Creating a sequence of nodes
  SET v3 = xmlquery ('($x,$y)' passing v1 as "x", v2 as "y");
  -- If node identities are identical, path expression will drop duplicate nodes
  SET v3 = xmlquery ('$x/d' passing v3 as "x");
  INSERT INTO t2 VALUES (3, xmlserialize(v3 as VARCHAR(1000)));
  -- Test of parent axis property
  SET v3 = xmlquery('$x/..' passing v1 as "x");
  INSERT INTO t2 VALUES (4, xmlserialize(v3 as VARCHAR(1000)));
  -- NODE ID comparison
  if (xmlcast(xmlquery('$X is $Y' passing by ref v1 as X, v2 as Y) as VARCHAR(5)) = 'true') then
    INSERT INTO t2 VALUES (5, 'NODE ID preserved');
  else
    INSERT INTO t2 VALUES (5, 'NODE ID NOT preserved');
  end if;
END
The stored procedure returns the following values for the different versions of the DB2 database server.
Table 26. Stored procedure values inserted to table t2

Column c1   DB2 Version 9.7 (passing by reference)   DB2 Version 9.5 (passing by value)
1           <b><d>1</d></b>                          <b><d>1</d></b>
2           <b><d>1</d></b>                          <b><d>1</d></b><b><d>1</d></b>
3           <d>1</d>                                 <d>1</d><d>1</d>
4           <a><b><d>1</d></b><c>2</c></a>           NULL
5           NODE ID preserved                        NODE ID NOT preserved
Related information "Axes" in XQuery Reference "Node comparisons" in XQuery Reference "Node identity" in XQuery Reference "Expressions for combining sequences of nodes" in XQuery Reference
Details
The XMLVALIDATE function now annotates a successfully validated XML document only with information about the schema used to validate the document. The function does not augment the element and attribute nodes with type information. Element node values or attribute values from validated documents returned in XQuery expressions are represented using a string data type. If the data is defined in the schema as xs:list, it is represented as xdt:untypedAtomic. The output of the VALIDATED predicate and the XMLXSROBJECTID function remain the same. The VALIDATED predicate tests whether or not an XML document has been validated using the XMLVALIDATE function, and the XMLXSROBJECTID function returns the XSR object identifier of the XML schema used to validate an XML document. In the DB2 XQuery prolog, the default XML construction declaration value has been changed from preserve to strip. The construction declaration value sets the construction mode for the query. When the construction mode is strip, type information is stripped from element and attribute nodes that are copied to form the content of a newly constructed node.
User response
For existing applications that use XQuery and validated XML documents, modify the XQuery expressions to cast data to the appropriate type to ensure correct results. When creating new applications using DB2 pureXML, be aware that all XQuery comparisons are string comparisons unless you cast the data to a different data type. For example, without casting, XQuery operators such as the greater than (>) and less than (<) operators compare node and attribute values as strings, and the XQuery ORDER BY clause sorts data as strings. To process data that is defined in the XML schema as xs:list as a list, use the fn:tokenize function to convert it to a sequence.
Creating indexes over XML data
The successful creation of an index over XML data depends on the compatibility of the XML values, as type xdt:untypedAtomic, with the SQL type specified for the index. If an XML value is not compatible with the SQL type during index creation, the error message SQL20306N is returned with error code 4. In DB2 Version 9.5 or earlier, either error code 2 or 3 was returned. If an XML value is not compatible with the SQL type specified for an index over XML data when you insert or update XML documents, the error message SQL20305N is returned with error code 4. In DB2 Version 9.5 and earlier, either error code 2 or 3 was returned.
Matching indexes over XML data
Type casting is required to match indexes over XML data that specify only DOUBLE and DATETIME data types. Indexes over XML data that specify the VARCHAR data type are implied to match a query against the XML data if there is no type casting. You do not need to use the functions fn:string or xs:string to convert data from validated XML documents for matching indexes over XML data.
Related concepts
"XML validation" in pureXML Guide
Related reference
"XMLVALIDATE " in SQL Reference, Volume 1
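For example, a minimal sketch of casting so that an XQuery ORDER BY clause sorts numerically rather than as strings (MY_ORDERS.C2 is a hypothetical table and XML column):

XQUERY for $o in db2-fn:xmlcolumn('MY_ORDERS.C2')/order
  order by xs:decimal($o/amount)
  return $o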
Merge modules for ODBC, CLI, and .NET have been combined (Windows)
IBM Data Server Driver for ODBC, CLI, and .NET has been renamed to IBM Data Server Driver Package, which continues to provide an MSI-based Windows installation that uses merge modules. However, the packaging strategy has been simplified in Version 9.7 to provide a single merge module for ODBC, CLI, and .NET rather than multiple merge modules.
Details
The contents of the old IBM Data Server Driver for ODBC and CLI Merge Module.msm and IBM Data Server Provider for .NET Merge Module.msm merge modules are now available in a single merge module, named the IBM Data Server Driver Package.msm merge module. The old merge modules for ODBC, CLI, and .NET are no longer available. This change does not affect the language-specific merge modules, which continue to be available separately.
User response
Update references to the ODBC and CLI merge module and references to the .NET merge module to use the new merge module name. Related concepts Component names have changed on page 3 Related reference "Non-DB2 instance merge modules ( Windows)" in Installing IBM Data Server Clients
Result data type for integer division in number_compat mode has been changed
Starting with Version 9.7, when a database has been created in number_compat mode, the result data type of division operations involving only integer
expressions returns DECFLOAT(34) and the operation is performed using decimal floating point arithmetic. This result for integer division is consistent with results in compatible databases that support the NUMBER data type.
Details
In the previous release, when you created a database with the DB2_COMPATIBILITY_VECTOR registry variable set to enable the NUMBER data type, the result data type of an integer division was an integer data type and the operation was performed using binary integer arithmetic. An upgraded database might include SQL objects with expressions that are impacted by this change. The result type for view columns involving integer division might change. If an expression involving integer division is used as the argument of a function, the result of function resolution might be different.
User response
In most cases, the implicit casting support included in Version 9.7 implicitly handles the change in the data type of the expression. If the usage of an SQL object fails because of the change in data type, extract the object definition statement from the catalog or by using db2look, change the CREATE option to the CREATE OR REPLACE option in the statement, and run the statement again. This replaces the object in the upgraded database using the new result data type for division operations involving integer expressions.
Related reference
"DB2_COMPATIBILITY_VECTOR registry variable" in SQL Procedural Languages: Application Enablement and Support
"NUMBER data type" in SQL Procedural Languages: Application Enablement and Support
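For example, a minimal sketch of replacing a view definition (the view, table, and column names are hypothetical):

CREATE OR REPLACE VIEW v_ratio AS
  SELECT c1 / c2 AS ratio FROM t1

Running the definition again with CREATE OR REPLACE re-creates the view so that the integer division in its select list picks up the DECFLOAT(34) result type.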
FP1: Some import and load file type modifiers can accept unspecified values in trailing fields
Starting in DB2 Version 9.7 Fix Pack 1, the file type modifiers DATEFORMAT, TIMEFORMAT, and TIMESTAMPFORMAT for the import and load utilities can accept input that contains unspecified values in trailing fields. A default value will be used for all trailing fields with unspecified input values.
Details
For applications that use the IMPORT or LOAD commands, if you use the DATEFORMAT, TIMEFORMAT, or TIMESTAMPFORMAT modifiers to input data into DATE, TIME, or TIMESTAMP columns, you can input values with unspecified values for trailing fields. In this case, a default value will be used. The default is 1 for year, month, or day fields, and 0 for all other fields.
User response
Applications that load and import data can now accept input data that conforms to the new input specifications.
Example
Example 1: timestampformat = "YYYY-MM-DD HH:MM:SS"
v The input 2007-11-23 07:29: is accepted, and has the effective value of 2007-11-23 07:29:00.
v The input 2007-11-23 is accepted, and has the effective value of 2007-11-23 00:00:00.
v The input 2007-11-23 :29:00 is not accepted. Only trailing input can be unspecified.
Example 2: dateformat = "YYYYMMDD"
v The input 1999 is accepted, and has the effective value of 19990101.
Related reference
"IMPORT Command" in Data Movement Utilities Guide and Reference
"LOAD command" in Data Movement Utilities Guide and Reference
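For example, a minimal sketch of an IMPORT command that uses the modifier (the file and table names are hypothetical):

IMPORT FROM mydata.del OF DEL
  MODIFIED BY timestampformat="YYYY-MM-DD HH:MM:SS"
  INSERT INTO myschema.mytable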
v CREATE EVENT MONITOR FOR DEADLOCKS statement as well as the DB2DETAILDEADLOCK event monitor (see CREATE EVENT MONITOR FOR DEADLOCKS statement and DB2DETAILDEADLOCK event monitor have been deprecated on page 230)
v CREATE EVENT MONITOR FOR TRANSACTIONS statement (see CREATE EVENT MONITOR FOR TRANSACTIONS statement has been deprecated on page 230)
The following product installation and instance management functionality has been deprecated:
v -s option of the db2iupdt command on Linux and UNIX operating systems (see db2iupdt command -s option has been deprecated (Linux and UNIX) on page 231)
v Instance and database migration commands and APIs (see Instance and database migration commands and APIs have been deprecated on page 232)
v MIGRATE_PRIOR_VERSIONS and CONFIG_ONLY response file keywords (see Some response file keywords have been deprecated on page 233)
Review each topic to find out more details and to plan for future changes. Other deprecated functionality might be described in Chapter 20, Discontinued functionality, on page 237.
Control Center tools and DB2 administration server (DAS) have been deprecated
Starting in Version 9.7, the Control Center tools and the DAS have been deprecated and might be removed in a future release. A new suite of GUI tools for managing DB2 for Linux, UNIX, and Windows data and data-centric applications is now available and can be used instead.
Details
The following Control Center tools and related features have been deprecated:
v Activity Monitor
v Command Editor
v Configuration Assistant
v Control Center and associated wizards and advisors
v Control Center plug-in extensions
v DB2 administration server (DAS)
v Event Analyzer
v Health Center
v Indoubt Transaction Monitor
v Journal
v License Center
v Memory Visualizer
v Query Patroller Center
v Replication Center
v Satellite Administration Center
v Task Center
As a result, the following associated DB2 commands are also deprecated:
v dasauto (Autostart DB2 administration server command)
v dascrt (Create a DB2 administration server command)
v dasdrop (Remove a DB2 administration server command)
v dasmigr (Migrate the DB2 administration server command)
v dasupdt (Update DAS command)
v daslist (Display DAS name command)
v db2admin (DB2 administration server command)
v db2am (Start Activity monitor center command)
v db2ca (Start the Configuration Assistant command)
v db2cc (Start control center command)
v db2ce (Start Command Editor command)
v db2daslevel (Show DAS level command)
v db2eva (Event analyzer command)
v db2hc (Start health center command)
v db2indbt (Start Indoubt Transaction Monitor Center command)
v db2journal (Start Journal command)
v db2lc (Start License Center command)
v db2memvis (Start Memory Visualizer center command)
v db2rc (Start Replication Center command)
v db2tc (Start Task Center command)
User response
The deprecated Control Center tools and related features are still supported in Version 9.7. However, you should consider using the new suite of GUI tools in place of the Control Center tools. For more information, see Database management and application development tools.
Details
Together, Query Patroller and DB2 Governor provide workload management controls to successfully run complex workloads on your DB2 data server. However, DB2 workload manager provides a greatly enhanced set of workload management features that replaces both Query Patroller and DB2 Governor. All Query Patroller components have been deprecated, including:
v The Query Patroller server (including Query Patroller stored procedures, control tables, and log files)
v Query Patroller Center
v Query Patroller thresholds
v Query Patroller historical analysis functionality
v Query Patroller registry variables: DB2_QP_BYPASS_APPLICATIONS, DB2_QP_BYPASS_USERS, DB2_QP_BYPASS_COST
v The qp_query_id monitor element
v The dyn_query_mgmt database configuration parameter
v Query Patroller command-line support, including the following commands:
   ADD OPERATOR_PROFILE
   ADD QUERY_CLASS
   ADD SUBMISSION_PREFERENCES
   ADD SUBMITTER_PROFILE
   CANCEL QUERY
   FILE RESULT
   GENERATE HISTORICAL_DATA
   GET OPERATOR_PROFILE
   GET QP_SYSTEM
   GET QUERY
   GET QUERY_CLASS
   GET SUBMISSION_PREFERENCES
   GET SUBMITTER_PROFILE
   LIST OPERATOR_PROFILES
   LIST QUERIES
   LIST QUERY_CLASSES
   LIST SUBMISSION_PREFERENCES
   LIST SUBMITTER_PROFILES
   qpcenter
   qpsetup
   qpstart
   qpstop
   REMOVE OPERATOR_PROFILE
   REMOVE QUERY_CLASS
   REMOVE QUERY_INFO
   REMOVE QUERY_INFO_HISTORY
   REMOVE RESULT
   RUN IN BACKGROUND QUERY
   SHOW RESULT
   UPDATE OPERATOR_PROFILE
   UPDATE QUERY_CLASS
   UPDATE SUBMISSION_PREFERENCES
   UPDATE SUBMITTER_PROFILE
   UPDATE QP_SYSTEM
In addition, all DB2 Governor commands have been deprecated, including:
v db2gov
v db2govlg
User response
You can manage workloads more effectively by using DB2 workload manager, which provides many more features. Starting in Version 9.7 Fix Pack 1 and later fix packs, you can use a sample program (qpwlmmig.pl) that generates a script that will help migrate a Query Patroller environment to a WLM environment. Related concepts "Workload management roadmap" in Workload Manager Guide and Reference "Frequently asked questions about DB2 workload manager" in Workload Manager Guide and Reference New thresholds provide additional activity control on page 77 Related tasks "Migrating from DB2 Governor to DB2 workload manager" in Upgrading to DB2 Version 9.7 "Migrating from Query Patroller to DB2 workload manager" in Upgrading to DB2 Version 9.7 "Migrating from DB2 Query Patroller to DB2 workload manager using the sample script" in Workload Manager Guide and Reference
Details
The following APIs, API options, and value have been deprecated in Version 9.7:
v db2GetAlertCfg
v db2GetAlertCfgFree
v db2GetRecommendations
v db2GetRecommendationsFree
v db2ResetAlertCfg
v db2UpdateAlertCfg
v The SQLM_CLASS_HEALTH and SQLM_CLASS_HEALTH_WITH_DETAIL snapshot class options of the db2GetSnapshot API
v The SQLM_HMON_OPT_COLL_FULL value for the AGENT_ID option in the sqlma data structure passed to the db2GetSnapshot API
The following CLP commands have been deprecated:
v GET ALERT CONFIGURATION
v GET HEALTH SNAPSHOT
v GET RECOMMENDATIONS FOR HEALTH INDICATOR
v RESET ALERT CONFIGURATION
v UPDATE ALERT CONFIGURATION
The following table functions have been deprecated:
v HEALTH_CONT_HI
v HEALTH_CONT_HI_HIS
v HEALTH_CONT_INFO
v HEALTH_DB_HI
v HEALTH_DB_HI_HIS
v HEALTH_DB_HIC
v HEALTH_DB_HIC_HIS
v HEALTH_DB_INFO
v HEALTH_DBM_HI
v HEALTH_DBM_HI_HIS
v HEALTH_DBM_INFO
v HEALTH_GET_ALERT_ACTION_CFG
v HEALTH_GET_ALERT_CFG
v HEALTH_GET_IND_DEFINITION
v HEALTH_HI_REC
v HEALTH_TBS_HI
v HEALTH_TBS_HI_HIS
v HEALTH_TBS_INFO
User response
The deprecated health monitor interfaces are still supported in Version 9.7. However, you should consider using the new suite of GUI tools in place of the Control Center tools. For more information, see Database management and application development tools.
LONG VARCHAR and LONG VARGRAPHIC data types have been deprecated
The LONG VARCHAR and LONG VARGRAPHIC data types have been deprecated. As a result, the LONG_VARGRAPHIC and LONG_VARCHAR scalar functions have also been deprecated.
Details
When choosing a data type for a column, use data types such as VARCHAR, VARGRAPHIC, CLOB, or DBCLOB since these will continue to be supported in future releases and are recommended for portable applications.
User response
The usage of LONG VARCHAR and LONG VARGRAPHIC does not affect existing tables because deprecated functionality continues to be supported in the current release. Consider migrating to other data types to ensure that you can take advantage of future enhancements to the product. The support for the LONG VARCHAR and LONG VARGRAPHIC data types and the associated scalar functions might be removed in a future release. Also, in embedded SQL applications, avoid using host variables that generate the deprecated data types.
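For example, a minimal sketch of choosing CLOB for a new table instead of LONG VARCHAR (the table and column names are hypothetical):

-- instead of: CREATE TABLE notes (id INT, note_text LONG VARCHAR)
CREATE TABLE notes (id INT, note_text CLOB(1M))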
Worksheet Format (WSF) for Import and Export utilities has been deprecated
The Worksheet Format (WSF) was used for data exchange with products such as Lotus 1-2-3 and Symphony. Support for the file format is deprecated and might be removed in a future release.
Details
WSF files have limitations relative to other supported file formats. This format is not recommended for DB2 utilities.
User response
Start using a supported file format instead of WSF files before support is removed. For existing WSF files, convert to another format by loading the data back into DB2 tables and exporting the data to a supported format such as ASC, DEL, or PC/IXF.
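For example, a minimal sketch of exporting existing data to delimited ASCII instead of WSF (the file and table names are hypothetical):

EXPORT TO mydata.del OF DEL SELECT * FROM myschema.mytable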
LIST TABLESPACES and LIST TABLESPACE CONTAINERS commands have been deprecated
The commands and APIs that show information about table spaces and table space containers have been deprecated and might be removed in a future release.
Details
The LIST TABLESPACES [SHOW DETAIL] and LIST TABLESPACE CONTAINERS commands are no longer being updated with new features. As a result, the following data structure and APIs are also deprecated:
v SQLB_TBSPQRY_DATA data structure
v sqlbctsq API
v sqlbftsq API
v sqlbftpq API
v sqlbgtss API
v sqlbmtsq API
v sqlbotsq API
v sqlbstpq API
v sqlbstsq API
v sqlbtcq API
User response
Modify any of your existing scripts using the deprecated commands or APIs to call the MON_GET_TABLESPACE or the MON_GET_CONTAINER table functions instead. These table functions return more information than was provided by the deprecated commands and APIs.
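For example, a minimal sketch of retrieving table space metrics (the -2 argument requests data from all database members; the columns shown are a subset):

SELECT TBSP_NAME, TBSP_TYPE, TBSP_USED_PAGES
  FROM TABLE (MON_GET_TABLESPACE(NULL, -2)) AS T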
Related reference "MON_GET_TABLESPACE table function - Get table space metrics" in Administrative Routines and Views "MON_GET_CONTAINER table function - Get table space container metrics" in Administrative Routines and Views
IBM Software Developer's Kit (SDK) 1.4.2 support for Java routines has been deprecated
IBM SDK Version 1.4.2 support for Java routines has been deprecated. As a result, Java stored procedures and routines built in DB2 Version 8.2 (and earlier) are also deprecated because they were created using the SDK 1.4.2 (and earlier) level.
Details
Due to the introduction of newer versions of the IBM SDK, support for SDK Version 1.4.2 is deprecated and will be removed from service. DB2 Version 9.7 installs IBM SDK for Java 6 by default on all platforms. This version of Java will be used to compile new Java stored procedures and user-defined functions created in Version 9.7.
User response
Before IBM SDK 1.4.2 support is discontinued, recreate any deprecated Java routines with the SDK installed in your DB2 Version 9.7 copy. If you must use an SDK for Java other than the one installed in your DB2 Version 9.7 copy, refer to the Upgrading Java routines topic. For the list of supported Java development software, see Java software support for DB2 products.
Related tasks
"Upgrading Java routines" in Upgrading to DB2 Version 9.7
Related reference
"Java software support for DB2 database products" in Getting Started with Database Application Development
Details
The sqlugrpn API is designed to work with distribution maps containing up to 4096 (4 KB) entries. In Version 9.7, the distribution map size has been expanded to 32 768 (32 KB) entries. The sqlugrpn API cannot be used in conjunction with these larger distribution maps. For information about enabling larger maps, see the Distribution maps topic.
User response
Use the new db2GetRowPartNum API, which supports all distribution map sizes.
Related concepts
"Distribution maps" in Partitioning and Clustering Guide

sqlugtpi API has been deprecated
Details
The sqlugtpi API is designed to work with distribution maps containing up to 4096 (4 KB) entries. In Version 9.7, the distribution map size has been expanded to 32 768 (32 KB) entries. The sqlugtpi API cannot be used in conjunction with larger distribution maps that make use of more than 4096 entries. For information about enabling larger maps, see the Distribution maps topic.
User response
If the sqlugtpi API encounters a distribution map that it cannot process due to the larger size, it returns SQL2768N. Use the db2GetDistMap API, which supports all distribution map sizes.
Related concepts
"Distribution maps" in Partitioning and Clustering Guide
sqlugrpn API has been deprecated on page 228
Related reference
"db2GetDistMap - Get distribution map" in Administrative API Reference
Subset of Net Search Extender features and commands have been deprecated
Certain Net Search Extender (NSE) features and commands have been deprecated and might be removed in a future release.
Details
The following NSE features have been deprecated:
v Index updates using replication
v Indexes on nicknames (federated databases)
v The NUMBEROFMATCHES scalar function
v Highlighting in search operations
v User-defined relationships in a thesaurus
v Documents in General Purpose Format (GPP)
v Caching for search operations using the stored procedures
v Presorted indexes used in search operations using the stored procedures
The following commands have been deprecated because the related features have also been deprecated:
v ACTIVATE CACHE
v DEACTIVATE CACHE
v DB2EXTHL
User response
You can start using other supported features or commands before these deprecated features and commands become discontinued. Avoid using deprecated features and commands when developing new applications. Related concepts "Net Search Extender key concepts" in Net Search Extender Administration and User's Guide
CREATE EVENT MONITOR FOR DEADLOCKS statement and DB2DETAILDEADLOCK event monitor have been deprecated
The CREATE EVENT MONITOR FOR DEADLOCKS statement and the automatically started DB2DETAILDEADLOCK event monitor, which are used to monitor deadlock events, have been deprecated. Their use is no longer recommended, and they might be removed in a future release.
Details
In previous releases, if you wanted to monitor deadlock events, you had to issue the CREATE EVENT MONITOR FOR DEADLOCKS statement or check the output files for deadlock-related entries written by the automatically started DB2DETAILDEADLOCK event monitor. Version 9.7 includes a new event monitor infrastructure that provides an entirely new set of monitor elements and methods to monitor DB2 events. As a result, if you want to monitor deadlock events in DB2 Version 9.7, using the CREATE EVENT MONITOR FOR LOCKING statement is the suggested method.
User response
Use the CREATE EVENT MONITOR FOR LOCKING statement to monitor lock-related events, such as lock timeouts, lock waits, and deadlocks. Related concepts New relational monitoring interfaces are light weight and SQL accessible on page 34 Related reference "CREATE EVENT MONITOR (locking) " in SQL Reference, Volume 2
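For example, a minimal sketch of creating and activating a locking event monitor (LOCKEVMON is a hypothetical name):

CREATE EVENT MONITOR LOCKEVMON FOR LOCKING WRITE TO UNFORMATTED EVENT TABLE
SET EVENT MONITOR LOCKEVMON STATE 1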
Details
In previous releases, if you wanted to monitor transaction events, you had to issue the CREATE EVENT MONITOR FOR TRANSACTIONS statement to create a transaction event monitor. Version 9.7 includes a new event monitor infrastructure that provides an entirely new set of monitor elements and methods to monitor DB2 events. As a result, if you want to monitor transaction events in DB2 Version 9.7, using the CREATE EVENT MONITOR FOR UNIT OF WORK statement is the suggested method.
User response
Use the CREATE EVENT MONITOR FOR UNIT OF WORK statement to create a transaction event monitor. Related reference "CREATE EVENT MONITOR (unit of work) " in SQL Reference, Volume 2
DB2_SERVER_ENCALG
Related concepts Some registry and environment variables have changed on page 175
Details
The db2iupdt command updates an instance to run on a DB2 copy that has a new DB2 database product or feature installed, to run on a DB2 copy of the same version as the DB2 copy associated with the instance, or to update the instance type to a higher level instance type. On UNIX and Linux operating systems, the -s parameter ignores the existing sync point manager (SPM) log directory.
User response
On UNIX and Linux operating systems, do not use this option of the db2iupdt command.
Instance and database migration commands and APIs have been deprecated
The db2imigr, db2ckmig, and MIGRATE DATABASE commands as well as the sqlemgdb and sqlgmgdb APIs have been deprecated in DB2 Version 9.7 and might be removed in a future release.
Details
To be consistent with the use of the term upgrade for DB2 products, the term upgrade is now used to describe the process of enabling pre-Version 9.7 DB2 servers, clients, database applications, and routines to run in a Version 9.7 environment. Pre-Version 9.7 refers only to DB2 Universal Database Version 8, DB2 Version 9.1, and DB2 Version 9.5. The term upgrade is also used to describe the process of enabling pre-Version 9.7 instances and databases to run in a DB2 Version 9.7 copy. Prior to Version 9.7, the term migration was used to describe the process of enabling DB2 servers, clients, database applications, routines, instances, and databases from one release to run in a later release. Because of this change in terminology, the DB2 commands and APIs to migrate instances and databases have been deprecated, and new commands and APIs to upgrade instances and databases are available. The following table shows new Version 9.7 commands and APIs that you should use.
Table 28. Equivalent pre-Version 9.7 and Version 9.7 commands

Pre-Version 9.7 command or API   Version 9.7 command or API   Version 9.7 command or API description
db2imigr                         db2iupgrade                  Upgrades an instance to Version 9.7 from Version 8, Version 9.1, or Version 9.5.
db2ckmig                         db2ckupgrade                 Verifies that the local pre-Version 9.7 databases are ready to be upgraded to Version 9.7.
MIGRATE DATABASE                 UPGRADE DATABASE             Upgrades a database to Version 9.7 if the instance where the database was running was upgraded to Version 9.7 by using the db2iupgrade command.
sqlemgdb and sqlgmgdb            db2DatabaseUpgrade           Converts a DB2 Version 9.5, Version 9.1, or Version 8 database to the current release.
User response
Use the new DB2 commands and APIs provided in Version 9.7 when upgrading instances and databases to Version 9.7. Refer to "Upgrade to DB2 Version 9.7" for details about the complete upgrade process for DB2 servers, clients, database applications, and routines. Related concepts "Upgrade to DB2 Version 9.7" in Upgrading to DB2 Version 9.7 Related reference "MIGRATE DATABASE command" in Command Reference "db2ckupgrade - Check database for upgrade " in Command Reference "db2iupgrade - Upgrade instance " in Command Reference
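For example, a minimal sketch of the upgrade sequence (the database and instance names are hypothetical; db2ckupgrade and db2iupgrade are run from the Version 9.7 copy):

db2ckupgrade sample -l db2ckupgrade.log
db2iupgrade db2inst1
db2 UPGRADE DATABASE sample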
Details
When you use the -file option, only the specified log control file (SQLOGCTL.LFH.1 or SQLOGCTL.LFH.2) is updated. This causes the files to no longer be synchronized. As a result, if the database is placed into rollforward pending state using the primary log control file (SQLOGCTL.LFH.1) and this file later becomes unavailable, the database will no longer be in rollforward pending state. Similarly, if the database is placed into rollforward pending state using the secondary log control file (SQLOGCTL.LFH.2) and the primary log control file remains available, the database will not be in rollforward pending state.
User response
Use the database_alias parameter or the -path option instead. Related reference "db2rfpen - Reset rollforward pending state " in Command Reference
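For example, a minimal sketch that places a database in rollforward pending state by using the database alias, so that both log control files stay synchronized (SAMPLE is a hypothetical alias):

db2rfpen on sample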
Details
The following response file keywords are no longer recommended:
v MIGRATE_PRIOR_VERSIONS
v CONFIG_ONLY
User response
MIGRATE_PRIOR_VERSIONS
   Starting with DB2 V9.7, on Windows operating systems and for non-root upgrades on Linux and UNIX operating systems, use the UPGRADE_PRIOR_VERSIONS response file keyword to specify an existing version of the DB2 product to be upgraded. Change any existing response files to replace the deprecated keyword with the new keyword.
CONFIG_ONLY
   Previously, on Linux and UNIX operating systems only, this keyword specified that the response file was for performing configuration tasks only (for example, creating a new instance by using the db2isetup command). The keyword had two options: YES or NO. Starting with DB2 V9.7, this option can no longer be set to NO. Whether or not the keyword is specified in the silent mode of db2isetup, the DB2 code assumes the value to be YES, indicating that the response file performs configuration tasks only. Existing scripts and response files do not need to be modified; even if this keyword exists in a response file for db2isetup, the value is assumed to be YES, regardless of the value specified.
Related concepts New response file keywords have been added on page 149 INTERACTIVE response file keyword has been changed on page 188 Related reference "Response file keywords" in Installing DB2 Servers
FP2: COBOL and FORTRAN language support has been deprecated for db2History APIs
Starting with DB2 Version 9.7 Fix Pack 2, COBOL and FORTRAN language support for the db2History APIs has been deprecated and might be discontinued in a future release.
Details
The following APIs are affected by the deprecated support of the COBOL and FORTRAN application development languages:
v db2HistoryCloseScan
v db2HistoryGetEntry
v db2HistoryOpenScan
v db2HistoryUpdate
User response
Use one of the following supported alternatives:
v Access history information through the DB_HISTORY administrative view.
v Use a different programming language to develop your administrative application.
Related reference
"db2HistoryUpdate - Update a database history records entry" in Administrative API Reference
"db2HistoryOpenScan - Start a database history records scan" in Administrative API Reference
"db2HistoryGetEntry - Get the next entry in the database history records" in Administrative API Reference
"db2HistoryCloseScan - End the database history records scan" in Administrative API Reference
"db2HistoryData " in Administrative API Reference
"DB_HISTORY administrative view - Retrieve history file information" in Administrative Routines and Views
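For example, a minimal sketch of querying the administrative view (the columns shown are a subset):

SELECT OPERATION, START_TIME, SQLCODE FROM SYSIBMADM.DB_HISTORY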
Details
The following Linux distributions are no longer supported:
v Red Hat Enterprise Linux (RHEL) 4
v SUSE Linux Enterprise Server (SLES) 9
v Ubuntu 7.x
In addition, DRDA connectivity to the following DB2 for z/OS and DB2 for IBM i data servers is no longer supported:
v DB2 for z/OS, Version 7.1
v DB2 for i, V5R1
v DB2 for i, V5R2
User response
Review the list of supported Linux distributions and the list of supported servers through DRDA connectivity. Related reference "IBM i and mainframe support for DB2 Connect" in Installing and Configuring DB2 Connect Servers "Installation requirements for DB2 database products" in Installing DB2 Servers
Details
All indexes that you created by using DB2 releases earlier than Version 8 are type-1 indexes, unless you converted them to type-2 indexes in Version 8 or later via the REORG INDEXES command with the CONVERT option. All indexes that you created by using Version 8.2, Version 9.1, or Version 9.5 are type-2 indexes, unless they were created in an instance with the DB2_INDEX_TYPE2 registry variable set to NO, or unless you created an index on a table that already had a type-1 index. In Version 9.7, all indexes that you create are type-2 indexes.
If you do not convert your type-1 indexes before upgrading a database, these indexes are marked as invalid during the upgrade. If you set the indexrec configuration parameter to RESTART, the indexes are rebuilt as type-2 indexes when you restart the database. Otherwise, the rebuild occurs when you first access a table, and you might experience an unexpected degradation in response time. The table is inaccessible until the index rebuild is completed.
Also, the following related functionality is deprecated and might be removed in a future release:
v The CONVERT option of the REORG INDEXES command
v The DB2LOADQUERY_TYPE1_INDEXES parameter of the db2LoadQueryOutputStruct data structure and of the db2LoadQueryOutputStruct64 data structure of the db2LoadQuery API
v The DB2REORG_CONVERT parameter of the db2ReorgStruct data structure of the db2Reorg API
User response
Before upgrading to DB2 Version 9.7, convert type-1 indexes to type-2 indexes. Ensure that you allocate enough time to convert all the indexes prior to upgrading. You can convert type-1 indexes to type-2 indexes by using the CONVERT option of the REORG INDEXES command or by using the output of the db2IdentifyType1 command. The db2IdentifyType1 command identifies and generates the
appropriate statements that you can use later to convert any type-1 indexes found in tables or schemas for a specified database. For more information, see the Converting type-1 indexes to type-2 indexes topic. Related tasks "Converting type-1 indexes to type-2 indexes" in Upgrading to DB2 Version 9.7 Related reference "db2IdentifyType1 - Type-1 index identification tool " in Command Reference
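For example, a minimal sketch of identifying and converting type-1 indexes before the upgrade (the database name and output file name are hypothetical):

db2IdentifyType1 -d sample -o convert_type1.db2
db2 -tvf convert_type1.db2

The generated file contains the statements that convert any type-1 indexes found in the database.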
Details
Following the pervasive adoption and usage of 64-bit processors, the need for partitioned databases to run on 32-bit operating systems has been reduced.
User response
If you are using database partitioning functionality in Windows 32-bit environments, you will no longer be able to use these environments in Version 9.7. You can use database partitioning functionality in Windows 64-bit environments in Version 9.7. Related concepts "Deprecated or discontinued functionality that affects DB2 server upgrades" in Upgrading to DB2 Version 9.7
Details
In the past, you could use the Netscape browser to access DB2 Launchpad, First Steps, and the DB2 Information Center.
User response
Use one of the following supported browsers:
v Internet Explorer 6.0 and later
v Mozilla 1.7 and later
v Firefox 2.0 and later
The DB2 Launchpad also supports the SeaMonkey 1.1.4 (or later) browser. The DB2 Information Center supports the same browsers, plus any other browsers that support JavaScript.
Related concepts "DB2 servers and IBM data server clients" in Installing DB2 Servers
Details
Because the database provides a comprehensive set of XML tools, XML Extender functions are no longer required. You can use the pureXML feature to store well-formed XML documents in database table columns that have the XML data type. By storing XML data in XML columns, the data is kept in its native hierarchical form, rather than stored as text or mapped to a different data model. You can apply database functions such as XMLQUERY and XSLTRANSFORM directly to database tables having the XML data type.
User response
Refer to the Migrating from XML Extender to pureXML topic for details on how to upgrade your existing database applications to Version 9.7 using the pureXML functionality. Related tasks "Migrating from XML Extender to pureXML" in Upgrading to DB2 Version 9.7
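For example, a minimal sketch of storing and querying XML natively with pureXML (the table and element names are hypothetical):

CREATE TABLE orders (id INT, doc XML)
INSERT INTO orders VALUES (1, XMLPARSE(DOCUMENT '<order><amount>250</amount></order>'))
SELECT XMLQUERY('$d/order/amount' PASSING doc AS "d") FROM orders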
Details
WORF has been replaced with a new feature within IBM Data Studio that you can use to create Web services without writing document access definition extension (DADX) files. Moreover, you can use the IBM Data Studio feature to create the SQL statements and stored procedures on which to base the operations of your Web services. Also, in many scenarios, deploying a Web service requires only a single mouse click.
User response
Migrate your WORF Web services to IBM Data Studio Web services. Instructions for migrating are in the Migrating Web applications that were developed for the Web Object Runtime Framework (WORF) topic in the Integrated Data Management Information Center located at http://publib.boulder.ibm.com/infocenter/idm/v2r2/index.jsp.
Details
DB2 EAS is no longer included with Version 9.7 products.
Resolution
Use IBM Data Studio or IBM Optim Development Studio to re-create your Web services and redeploy your Web services after upgrading to DB2 Version 9.7. If you have Web applications that were developed for WORF, you should migrate these Web applications. For more information, see Web Object Runtime Framework (WORF) support has been discontinued on page 240. If you use DB2 samples that require an application server, you can use the WebSphere Application Server Community Edition (CE) application server.
Details
The db2uiddl command generated scripts with CREATE UNIQUE INDEX statements to convert unique indexes created on databases earlier than DB2 UDB Version 5. This command is unnecessary because when you convert type-1 indexes to type-2 indexes, you also automatically convert the unique indexes created on databases earlier than DB2 UDB Version 5.
User response
Use the db2IdentifyType1 command to handle unique index conversion. For more information, see the Converting type-1 indexes to type-2 indexes topic. Related tasks "Converting type-1 indexes to type-2 indexes" in Upgrading to DB2 Version 9.7 Related reference "db2IdentifyType1 - Type-1 index identification tool " in Command Reference
Details
Use the db2extsec command instead for setting the permissions for DB2 objects such as files, directories, network shares, registry keys, and services.
User response
Change references to the db2secv82 command in applications and scripts to the db2extsec command. Related reference "db2extsec - Set permissions for DB2 objects command" in Command Reference
Details
In previous releases, the GET AUTHORIZATIONS command reported the authorities of the current user from values found in the database configuration file and the authorization system catalog view (SYSCAT.DBAUTH). The command was deprecated in Version 9.5 because of internal changes in the DB2 authorization model and is discontinued in Version 9.7.
User response
Remove references to the GET AUTHORIZATIONS command, and use the AUTH_LIST_AUTHORITIES_FOR_AUTHID table function to return the authorities for a particular user. Related reference "AUTH_LIST_AUTHORITIES_FOR_AUTHID " in Administrative Routines and Views
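For example, a minimal sketch (MYUSER is a hypothetical authorization ID, and the columns shown are a subset):

SELECT AUTHORITY, D_USER, D_GROUP, D_PUBLIC
  FROM TABLE (SYSPROC.AUTH_LIST_AUTHORITIES_FOR_AUTHID ('MYUSER', 'U')) AS T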
Details
In previous releases, the sqluadau API reported the instance level and database level authorities of the current user from values found in the database manager configuration file and the authorization system catalog view (SYSCAT.DBAUTH) respectively. This API has been deprecated since Version 9.5 due to internal changes in the DB2 authorization model and is discontinued in Version 9.7. The sql_authorization data structure has been discontinued because it was used only to return information after a call to the sqluadau API.
User response
Remove references to the sqluadau API and the sql_authorization data structure. Use the AUTH_LIST_AUTHORITIES_FOR_AUTHID table function to return the authorities for a particular user. You can review the complete list of changed APIs in the Changed APIs and data structures topic.
Related reference "Changed APIs and data structures" in Administrative API Reference "AUTH_LIST_AUTHORITIES_FOR_AUTHID " in Administrative Routines and Views
Details
In DB2 Version 8, you could use the db2ilist command to list all of the DB2 instances that were available on a system. The -a and -p options were deprecated in DB2 Version 9.1 because the scope of the db2ilist command changed and listed only the instances related to the current installation path.
User response
For DB2 Version 9.1 and higher releases, you can list DB2 installation path information by using the db2ls command on the server and then the db2ilist command in each of the installation directories shown in the db2ls output. For DB2 Version 8 releases, you can list instance information by running the db2ilist command in the installation directories where the DB2 Version 8 products are installed. Related reference "db2ls - List installed DB2 products and features " in Command Reference
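For example, a minimal sketch on a Linux or UNIX server (the installation path shown is only an example):

/usr/local/bin/db2ls                        # list the installed DB2 copies and their installation paths
/opt/ibm/db2/V9.7/instance/db2ilist         # list the instances that belong to that copy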
Related concepts Some registry and environment variables have been deprecated on page 231 Some registry and environment variables have changed on page 175
Version 9.1
v The Data Warehouse Center and the Information Catalog Center are no longer included v DB2 Administration Tools are no longer supported on some platforms v db2sqljprint - DB2 SQLJ Profile Printer Command v db2profc and db2profp utilities are discontinued
db2reg2large utility for converting DMS table space size Desktop icon and folder-making utilities (Linux operating systems) Extended Storage option for buffer pools indexsort database configuration parameter JDBC type 2 driver JDBC type 3 driver and related registry variable
Version 9.1 Version 9.1 Version 9.1 To be determined To be determined Version 9.1
v db2reg2large utility for converting DMS table space size is discontinued v Desktop icon and folder making utilities are no longer supported (Linux) v Extended Storage option for buffer pools is discontinued v Version 8 incompatibilities with previous releases v Introduction to Java application support v Java Database Connectivity (JDBC) v Type 3 JDBC support is discontinued
Table 31. Functionality deprecated in Version 8 and possibly discontinued in a later release (continued) Functionality NetBIOS and SNA communication protocols and related registry variables and configuration parameters Network monitor elements: v max_network_time_2_ms v max_network_time_8_ms v max_network_time_32_ms v max_network_time_gt32_ms PK_COLNAMES and FK_COLNAMES in SYSCAT.REFERENCES Subset of ODBC 3.0 functions that affect CLI support Text Extender Uncataloged stored procedures Variable argument list stored procedures Vendor escape clauses in CLI statements Vendor load API (sqluvtld) Virtual interface (VI) architecture usage in fast communications manager (FCM) and related registry variables VSE and VM objects in the DB2 Control Center To be determined v DB2 Universal Database planned incompatibilities v For the list of deprecated features and additional information, see CLI and ODBC function summary v Text Extender is no longer supported v Version 8 incompatibilities with previous releases v Version 8 incompatibilities with previous releases v Vendor escape clauses in CLI applications v Vendor load API (sqluvtld) is discontinued v FCM no longer uses VI architecture Discontinued in release Version 9.1 Links to additional information v NetBIOS and SNA communication protocols are no longer supported
To be determined
To be determined
Version 9.1
v VSE and VM objects are no longer supported in the DB2 Control Center
Table 32. Registry variables deprecated in Version 8 and possibly discontinued in a later release

Registry variable          Discontinued in release   Links to additional information
DB2_CLIENT_ENCALG          Version 9.1               v Deprecated and discontinued features
DB2JVIEW                   Version 9.1               v Deprecated and discontinued features
DB2_LGPAGE_BP              Version 9.5               v List of Version 8 known limitations, problems, and workarounds
                                                     v Some registry and environment variables are discontinued
DB2NOLIOAIO                Version 9.5               v DB2NOLIOAIO registry variable replaced by DB2LINUXAIO (Linux)
                                                     v Some registry and environment variables are discontinued
DB2NTNOCACHE               To be determined          v Some registry and environment variables are discontinued
DB2_SCATTERED_IO (Linux)   Version 9.1               v DB2_SCATTERED_IO registry variable is discontinued (Linux)
ADD PARTITIONING KEY and DROP To be determined PARTITIONING KEY clauses of the ALTER TABLE statement
Address Windowing Extensions (AWE) Version 9.5 support and related registry variable (Windows operating systems)
bitwidth support for the db2icrt, db2iupdt, and db2ilist commands CLISchema CLI keyword COLNAMES column in SYSCAT.INDEXES Database logging using raw devices db2ilist command options -a and -p (Linux and UNIX operating systems)
v -w option for db2icrt, db2ilist, and db2iupdt is discontinued (Linux and UNIX) v CLISchema CLI keyword is discontinued v COLNAMES column in SYSCAT.INDEXES is deprecated v Database logging using raw devices is deprecated v db2ilist command options are deprecated (Linux and UNIX) v db2ilist command options -a and -p have been discontinued
v -n option of the db2licm command is discontinued v Deprecated and discontinued features v db2undgp command is discontinued
v The -schema option of the db2sampl command is discontinued v db2secv82 command is deprecated v db2secv82 command has been discontinued
Default function entry points support in external routine libraries DB2 Web Tools
v External routines now require an explicit entry point specification v DB2 Web Tools support is deprecated v DB2 Web Tools support is discontinued
Extended storage (ESTORE) feature support, including related configuration parameters, monitor elements, and table functions
Version 9.5
Table 33. Functionality deprecated in Version 9.1 and possibly discontinued in a later release (continued) Functionality iCheckPending parameter Discontinued in release To be determined Links to additional information v Check pending table state is replaced and iCheckPending parameter is deprecated v Network Information Services (NIS and NIS+) support is deprecated (Linux and UNIX) v Deprecated and discontinued features v Some database manager configuration parameters have changed To be determined v Deprecated Version 9.1 SQL administrative routines and their replacement routines or views v Deprecated Version 9.5 SQL administrative routines and their replacement routines or views v Deprecated Version 9.7 SQL administrative routines and their replacement routines or views Type-1 indexes and related functionality Version 9.7 v Type-1 indexes are deprecated v Type-1 indexes have been discontinued v db2uiddl command has been discontinued Table 34. Registry variables deprecated in Version 9.1 and possibly discontinued in a later release Registry variable DB2_ASYNC_APPLY Discontinued in release Version 9.5 Links to additional information v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2CCMSRV Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_COMMIT_ON_EXIT DB2_CORRELATED_PREDICATES DB2_ENABLE_BUFPD DB2LINUXAIO To be determined To be determined To be determined Version 9.5 v Deprecated and discontinued features v Deprecated and discontinued features v Deprecated and discontinued features v DB2LINUXAIO registry variable is deprecated (Linux v Some registry and environment variables are discontinued DB2_MAPPED_BASE DB2_NO_MPFA_FOR_NEW_DB DB2_PRED_FACTORIZE OS/2 related registry variables: v DB2UPMPR v DB2UPMSINGLE To be determined To be determined To be determined Version 9.5 v Deprecated and discontinued features v Deprecated and discontinued features v Deprecated and discontinued features v Deprecated and discontinued features v Some registry and environment variables are discontinued
Network Information Services (NIS and NIS+) support and related registry variable (Linux and UNIX operating systems) To be determined priv_mem_thresh configuration parameter Version 9.5
Table 34. Registry variables deprecated in Version 9.1 and possibly discontinued in a later release (continued) Registry variable Query Patroller registry variables: v DQP_ABORTRESULT v DQP_CHILD_WAIT v DQP_DISKMON v DQP_EXIT_AN v DQP_INTERVAL v DQP_LAST_RESULT_DEST v DQP_LOCAL_SERVANTS v DQP_LOG v DQP_LOGMON v DQP_MAIL v DQP_MAIL_ACCOUNT v DQP_MAPI_PASSWORD v DQP_MAPI_PROFILE v DQP_NET v DQP_NOCPU v DQP_NOEXPLAIN v DQP_NTIER v DQP_PURGEHOURS v DQP_RECOVERY_INTERVAL v DQP_RES_TBLSPC v DQP_RUNTIME v DQP_SERVER v DQP_SHARE v DQP_SIBLING_WAIT v DQP_STARTUP v DQP_TRACEFILE DB2_BLOCK_ON_LOG_DISK_FULL Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_FORCE_FCM_BP Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_LGPAGE_BP Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_MEMALLOCATE_HIGH Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_MIGRATE_TS_INFO Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued Discontinued in release To be determined Links to additional information v Deprecated and discontinued features
Table 34. Registry variables deprecated in Version 9.1 and possibly discontinued in a later release (continued) Registry variable DB2_NEWLOGPATH2 Discontinued in release Version 9.5 Links to additional information v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_NR_CONFIG Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued DB2_OLAP_BUFFER_SIZE Version 9.5 v Deprecated and discontinued features v Some registry and environment variables are discontinued
logretain and userexit configuration parameter Version 9.5 Log control file SQLOGCTL.LFH and the -file option of the db2flsn command SQLOGCTL.LFH: Version 9.5 The -file option: To be determined LONG VARCHAR and LONG VARGRAPHIC data types maxagents and maxcagents configuration parameters Netscape browser support numsegs configuration parameter To be determined
v LONG VARCHAR and LONG VARGRAPHIC data types have been deprecated v Some database manager configuration parameters have changed v Netscape browser support has been discontinued v Some database configuration parameters have changed
Table 35. Functionality deprecated in Version 9.5 and possibly discontinued in a later release (continued) Functionality query_heap_sz configuration parameter sqluadau API Discontinued in release Version 9.5 Version 9.7 Links to additional information v Some database manager configuration parameters have changed v sqluadau API is deprecated v sqluadau API and sql_authorization data structure have been discontinued Static data stream snapshot output Subset of SQL administrative routines To be determined To be determined v Static data stream snapshot output is deprecated v Deprecated Version 9.5 SQL administrative routines and their replacement routines or views v Deprecated Version 9.7 SQL administrative routines and their replacement routines or views Web Object Runtime Framework (WORF) support Version 9.7 v Web Object Runtime Framework (WORF) is deprecated v Web Object Runtime Framework (WORF) support has been discontinued XML Extender Version 9.7 v XML Extender is deprecated v XML Extender has been discontinued Table 36. Registry variables deprecated in Version 9.5 and possibly discontinued in a later release Registry variable DB2_ALLOCATION_SIZE DB2ATLD_PORTS DB2_ASYNC_IO_MAXFILOP DB2_BAR_AUTONOMIC_DISABLE DB2BPVARS DB2COUNTRY DB2DEFPREP DB2DMNBCKCTLR DB2FFDC DB2_HASH_JOIN Discontinued in release To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined Links to additional information v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued
Table 36. Registry variables deprecated in Version 9.5 and possibly discontinued in a later release (continued) Registry variable DB2_INDEX_FREE DB2_MAP_XML_AS_CLOB_FOR_DLC DB2MEMMAXFREE DB2_NO_FORK_CHECK DB2_PARTITIONEDLOAD_DEFAULT DB2PRIORITIES and DB2NTPRICLASS DB2ROUTINE_DEBUG DB2_RR_TO_RS DB2_SNAPSHOT_NOAUTH DB2_TRUSTED_BINDIN DB2_UPDATE_PART_KEY DB2_VENDOR_INI DB2YIELD Discontinued in release To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined To be determined Links to additional information v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued v Some registry and environment variables are discontinued
Table 37. Functionality deprecated in Version 9.7 (continued) Functionality db2iupdt command option -s -file option of the db2rfpen command Health monitor LIST TABLESPACES and LIST TABLESPACE CONTAINERS commands and related APIs: v sqlbctsq v sqlbftsq v sqlbftpq v sqlbgtss v sqlbmtsq v sqlbotsq v sqlbstpq v sqlbstsq v sqlbtcq MIGRATE_PRIOR_VERSIONS and the CONFIG_ONLY response file keywords sqlugtpi API sqlugrpn API Subset of Net Search Extender features and commands Subset of SQL administrative routines The following Control Center tools: v Activity Monitor v Command Editor v Configuration Assistant v Control Center and associated wizards and advisors v Control Center plug-in extensions v DB2 administration server (DAS) v Event Analyzer v Health Center v Indoubt Transaction Monitor v Journal v License Center v Memory Visualizer v Query Patroller Center v Replication Center v Satellite Administration Center v Task Center Worksheet Format (WSF) for Export and Load utilities v Worksheet Format (WSF) for Export and Load utilities has been deprecated v Some response file keywords have been deprecated v sqlugtpi API has been deprecated v sqlugrpn API has been deprecated v Subset of Net Search Extender features and commands have been deprecated v Deprecated Version 9.7 SQL administrative routines and their replacement routines or views v Control Center tools and DB2 administration server (DAS) have been deprecated Links to additional information v db2iupdt command -s option has been deprecated (Linux and UNIX) v -file option of db2rfpen command has been deprecated v Health monitor has been deprecated v LIST TABLESPACES and LIST TABLESPACE CONTAINERS commands have been deprecated
Table 38. Registry variables deprecated in Version 9.7 Registry variables DB2_CAPTURE_LOCKTIMEOUT DB2_SERVER_ENCALG Links to additional information v Some registry and environment variables have been deprecated v Some registry and environment variables have been deprecated
Part 4. Appendixes
Appendix A. DB2 Version 9.7 for Linux, UNIX, and Windows fix pack summary
Version 9.7 fix packs contain important technical changes and new functionality that might affect your product usage. Review the technical changes and new functionality included in the following Version 9.7 fix packs. Each fix pack for a release is cumulative; it contains all of the technical changes and new functionality that were shipped in previous fix packs for that release.
v Fix Pack 3
v Fix Pack 2 on page 260
v Fix Pack 1 on page 262
Fix Pack 3
Fix Pack 3 contains the functionality of the previous fix pack and includes the following changes:
v LOB strings of any length continue to be supported in comparisons using the LIKE predicate, NULL predicate, and the POSSTR function. LOB strings that have an actual length less than 32672 bytes are supported as operands in other predicates and the simple CASE expression.
Fix Pack 3 also contains the following enhancements:
v You can specify that the system controller thread does not adjust resources below specific values by using the new FCM_CFG_BASE_AS_FLOOR option of the DB2_FCM_SETTINGS registry variable (a sample db2set invocation follows this list). For more information, see the DB2_FCM_SETTINGS entry in Partitioned database environment variables in the Database Administration Concepts and Configuration Reference.
v Improved support for target storage devices that support data deduplication. For more information, see FP3: Data deduplication device support has been integrated into the backup utilities on page 51.
v DB2 Text Search and Net Search Extender text indexes can now coexist on the same table column. For more information, see FP3: DB2 Text Search and Net Search Extender index coexistence on page 146.
v Range-clustered tables (RCT) are supported in a partitioned database environment. The distribution key must be a single-column subset of the range-clustered table key.
v A new tool, db2caem (DB2 Capture Activity Event Monitor data tool), has been created to simplify the process of capturing detailed diagnostic and runtime information about one or more statements. New db2support options have been added for the optimizer mode to collect the data captured by db2caem. For more information, see FP3: Simplify capture of detailed statement information using the new db2caem tool with db2support integration options on page 47.
v Two new features improve the granularity of the db2trc tool: the ability to trace only the members (or partitions) that you specify, and the ability to trace based on a specific application ID (or application handle). For more information, see FP3: Improvements to the granularity of the db2trc tool on page 157.
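As a rough sketch of how the DB2_FCM_SETTINGS enhancement above is applied, registry variables are set with the db2set command and generally take effect only after the instance is restarted. The FCM_CFG_BASE_AS_FLOOR:YES value shown here is an assumed spelling of the option; confirm the exact syntax against the Partitioned database environment variables topic before using it.

db2set DB2_FCM_SETTINGS=FCM_CFG_BASE_AS_FLOOR:YES
db2stop
db2start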
Fix Pack 2
Fix Pack 2 contains the functionality of the previous fix pack and includes the following changes:
v If DB2_RESTORE_GRANT_ADMIN_AUTHORITIES is set to ON, and you are restoring to a new database, then SECADM, DBADM, DATAACCESS, and ACCESSCTRL authorities are granted to the user that issues the restore operation. For more information, see the "DB2_RESTORE_GRANT_ADMIN_AUTHORITIES" entry in System environment variables in Database Administration Concepts and Configuration Reference.
v COBOL and FORTRAN language support for the db2History APIs has been deprecated and might be discontinued in a future release. For more information, see FP2: COBOL and FORTRAN language support has been deprecated for db2History APIs on page 234.
v The database manager uses a new formula to automatically adjust kernel parameter settings so that manual adjustments are not required to update Linux kernel parameters related to interprocess communication (IPC). For Version 9.7 Fix Pack 1 or earlier, you might still need to adjust your Linux kernel parameter settings. For more information, see Kernel parameter requirements (Linux) in Installing DB2 Servers.
v The DB2 Advanced Copy Services (ACS) component is no longer automatically installed during a compact installation. For more information, see FP2: DB2 Advanced Copy Services (ACS) is not automatically included in a compact installation on page 189.
Fix Pack 2 also contains the following enhancements:
v National character support has been extended by means of NCHAR, NCLOB, and NVARCHAR. For more information, see National character strings in SQL Reference, Volume 1.
v New scalar functions have been added to assist with national character support: NCHAR, NCLOB, NVARCHAR, TO_NCLOB, and TO_NCHAR. For more information, see Supported functions and administrative SQL routines and views in SQL Reference, Volume 1.
v Procedure enhancements introduced in Version 9.7 have been extended to user-defined functions (UDFs):
– The ability to create UDFs and specify default values for parameters
– The ability to invoke a UDF with named arguments
A sample function definition and invocation follows this list. For more information, see Default values and named arguments promote flexibility for creating and calling procedures and functions on page 106.
v You can use the new WAIT FOR OUTCOME keyword in a SELECT statement to indicate the concurrent access resolution; a sample statement follows this list. WAIT FOR OUTCOME specifies that the statement waits for the commit or rollback when it encounters data that is in the process of being updated, deleted, or inserted. For more information, see select-statement in Command Reference.
v Support for applications on Solaris UltraSPARC and x64. For more information, see Support for database application development in C in Getting Started with Database Application Development.
v Backups can take less time in some cases. For more information, see FP2: Intra-tablespace parallelism can reduce elapsed time for backups on page 51.
v Tivoli Storage Manager (TSM) proxy node support. For more information, see FP2: Proxy node support for the db2adutl command has been added on page 52.
v Integrated support for POWER7 systems and the SUSE Linux Enterprise Server (SLES) 11 distribution. For more information, see IBM Tivoli System Automation for Multiplatforms (SA MP) support has been improved on page 152.
v The new db2has data collector command makes it easier to collect DB2 health information to be sent to the DB2 Health Advisor Service at IBM for analysis and assessment. For more information, see FP2: DB2 health information is easier to collect and assess on page 157.
v The new RESTRICTED ACCESS option can be specified to prevent authorization checking for all connect attempts to the databases of a quiesced DB2 instance. The new option can also be used when there is a need to have exclusive connections to a database within the quiesced instance. For more information, see FP2: New RESTRICTED ACCESS option restricts database connections within quiesced instance on page 18.
v You can use the RESTORE command with the TRANSPORT option to copy table spaces and SQL schemas as a set from a database backup image to another active database. For more information, see FP2: Databases can be restored using transportable sets on page 53.
v You can obfuscate or encode the body of a routine, trigger, view, or PL/SQL package, so that the proprietary portion cannot be read by a user, but it is still understood by DB2 for Linux, UNIX, and Windows. For more information, see Obfuscation in SQL Procedural Languages: Application Enablement and Support.
v The db2pd -reorgs index command supports index reorg progress reporting for partitioned indexes. For more information, see Additional system monitoring information can be generated on page 44.
v The db2pd command has additional functionality to help control which engine dispatchable units (EDUs) are included in the output, the ability to define a time interval in some cases, and improved stack output on Linux operating systems. For more information, see db2pd - Monitor and troubleshoot DB2 database in Command Reference.
v Restrictions on the maximum size of core files (CORELIMIT) have been removed on AIX. For more information, see db2pdcfg - Configure DB2 database for problem determination behavior in Command Reference.
v A new registry variable setting has been added to the DB2_WORKLOAD aggregate registry variable when it is set to SAP. For more information, see Some registry and environment variables have changed on page 175.
v The DB2_SMS_TRUNC_TMPTABLE_THRESH registry variable has a new default setting which prevents unnecessary file system access for small temporary objects while still truncating large temporary objects to 0 extents. For more information, see Some registry and environment variables have changed on page 175.
v The WLM_SET_CONN_ENV procedure enables, for a particular connection, the collection of activity data and the measurement of section actuals (runtime statistics measured during section execution). For more information, see WLM_SET_CONN_ENV in Administrative Routines and Views.
v The WLM_GET_CONN_ENV table function returns, for a particular connection, the values of settings that control the collection of activity data and section actuals. You can use this table function to check the current values of the settings
applied by the WLM_SET_CONN_ENV stored procedure. For more information, see WLM_GET_CONN_ENV in Administrative Routines and Views.
v NULL indicator support has been extended to enable applications to use all-columns INSERT, UPDATE, and MERGE statements without having to specify the current value of columns whose values are not to be changed or inserted. For more information, see "References to host variables" in Identifiers in SQL Reference, Volume 1.
v Support for IBM Rational Developer for zSeries v7. For more information, see Support for database application development in COBOL in Getting Started with Database Application Development.
v Transparent LDAP authentication and group lookup support on AIX is extended to include certified Kerberos authentication support. For more information, see FP1: Transparent LDAP authentication and group lookup is supported (Linux and UNIX) on page 89.
v Auditing improvements have been added to allow for the replay of past database activities. For more information, see FP2: Audit improvements allow replay of past database activities on page 91.
v Two new table functions, MON_GET_FCM and MON_GET_FCM_CONNECTION_LIST, improve the monitoring of fast communications manager (FCM). For more information, see FP2: Identify FCM issues more easily on page 46.
v The IBM Data Server Provider for .NET has been enhanced in several ways that might help with application performance, data server compatibility, and simplifying application development. For more information, see IBM Data Server Provider for .NET has been enhanced on page 131.
v Some queries using spatial data run faster in partitioned database environments. For more information, see FP2: Materialized query tables with spatial columns can be replicated on page 66.
v When using the ADMIN_MOVE_TABLE procedure, you can use the new LOAD_MSGPATH option to define the load message file path. The FORCE option no longer needs to be specified with the COPY_USE_LOAD option. For more information, see ADMIN_MOVE_TABLE procedure - Move an online table in Administrative Routines and Views.
v Restrictions on the DB2_SKIPDELETED registry variable during an online ADMIN_MOVE_TABLE procedure have been removed. For more information, see ADMIN_MOVE_TABLE procedure - Move an online table in Administrative Routines and Views.
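The statements below sketch two of the Fix Pack 2 enhancements described above: the WAIT FOR OUTCOME concurrent access resolution clause, and a user-defined function with a parameter default that is invoked with a named argument. The table ORDERS and the function TAXED_PRICE are hypothetical names used only for illustration, and the exact clause placement, default-parameter syntax, and => named-argument notation should be confirmed against the select-statement and CREATE FUNCTION topics cited above.

-- Wait for an in-flight update, delete, or insert to commit or roll back
-- before returning the row:
SELECT ORDER_ID, STATUS
FROM ORDERS
WHERE CUSTOMER_ID = 1001
WAIT FOR OUTCOME

-- Create a UDF with a default value for the RATE parameter,
-- then invoke it by passing RATE as a named argument:
CREATE OR REPLACE FUNCTION TAXED_PRICE (PRICE DECIMAL(9,2), RATE DECIMAL(5,3) DEFAULT 0.070)
RETURNS DECIMAL(9,2)
RETURN PRICE * (1 + RATE)

VALUES TAXED_PRICE(100.00, RATE => 0.095)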
Fix Pack 1
Fix Pack 1 includes the following changes:
v The -file option of the db2rfpen command has been deprecated. For more information, see FP1: -file option of db2rfpen command has been deprecated on page 233.
v The process to detach a data partition from a data partitioned table has been changed. For more information, see FP1: Detach operation for data partitions has been changed on page 182.
v If an XML schema that is registered in the DB2 XSR uses the maxOccurs attribute where the value is greater than 5000, the maxOccurs attribute value is treated as if you specified "unbounded". For more information, see FP1: XML schema maxOccurs attribute values greater than 5000 are parsed differently on page 183.
v The workload management statistics collection interval is synchronized relative to a day of the week and an hour of the day, rather than relative to when the DB2 instance was started. For more information, see FP1: Collection interval time for workload management statistics has changed on page 174.
Fix Pack 1 also contains the following enhancements:
v Read operations support on High Availability and Disaster Recovery (HADR) standby databases. For more information, see FP1: Read operations on HADR standby databases are supported on page 51.
v DB2 Advanced Copy Services (ACS) support for the AIX 6.1 operating system. For more information, see FP1: DB2 Advanced Copy Services (ACS) is supported on AIX 6.1 on page 50.
v Last referenced date support for some objects, which helps you understand when these were last used. For more information, see FP1: Last referenced date is available for tables, table partitions, indexes, and packages on page 40.
v The SUBSTRB scalar function, which returns a substring of a string (an example follows this list). For more information, see SUBSTRB scalar function in SQL Reference, Volume 1.
v Compiled user-defined functions with OUT and INOUT parameters are supported in SQL PL. For more information, see SQL PL functionality has been extended for user-defined functions on page 137.
v Global variable assignments in nested contexts are supported. For more information, see FP1: Global variable assignments in nested contexts are supported on page 139.
v OUT and INOUT parameters are supported in user-defined functions. For more information, see FP1: User-defined functions support OUT and INOUT parameters on page 114.
v Support for PL/SQL functions that modify the database. For more information, see CREATE FUNCTION statement (PL/SQL) in SQL Procedural Languages: Application Enablement and Support.
v The IBM Data Server Provider for .NET includes multiple enhancements. For more information, see IBM Data Server Provider for .NET has been enhanced on page 131.
v The db2pd command has a new parameter that makes it easier to collect the fenced routines history information. For more information, see FP1: Fenced routines history information is easier to collect on page 159.
v The DB2 PL/SQL compiler supports FORALL and BULK COLLECT INTO syntax. For more information, see FORALL statement (PL/SQL) and BULK COLLECT INTO clause (PL/SQL) in SQL Procedural Languages: Application Enablement and Support.
v You can use new XQuery functions to retrieve current date and time values using the local time zone of the DB2 database system. For more information, see FP1: XQuery functions make it easier to retrieve date and time values for local time zones on page 31.
v The diagpath database manager configuration parameter has new values, which allow you to store DB2 diagnostic data in separate directories named according to the physical host, database partition, or both. The db2diag command also has a new -merge parameter to merge multiple db2diag log files (an example follows this list). For more information, see FP1: Diagnostic data can be stored in separate directories on page 158.
v The new package cache event monitor captures information about cached statement entries after they have been flushed from the database package cache, which can help to resolve SQL query performance and problem determination
issues. For more information, see FP1: New event monitor for dynamic and static SQL statements in package cache on page 43.
v New lock-related relational monitoring interfaces replace deprecated snapshot interfaces. For more information, see FP1: New relational monitoring interfaces for locking events on page 35.
v Runtime statistics are available for access plan operators. For more information, see FP1: Explain enhanced with actual values for operator cardinality on page 43.
v Section explain functionality captures explain information about a statement using only the contents of the runtime section. For more information, see FP1: Statements from a runtime section can be explained on page 42.
v New component time monitor elements can be combined with existing wait time monitor elements, provided in DB2 Version 9.7, to provide a comprehensive breakdown of time spent within the DB2 database manager. For more information, see Time-spent monitor elements are more comprehensive on page 39.
v Time monitor elements, reported in XML documents, can be displayed and analyzed in a generic fashion using new row-based formatting functions. For more information, see FP1: Table functions for row-based formatting of monitoring information are available on page 46.
v Package cache information can be retrieved in XML form using a new package cache details table function. For more information, see MON_GET_PKG_CACHE_STMT_DETAILS in Administrative Routines and Views.
v New administrative views encapsulate key queries using the new monitoring table functions introduced in DB2 Version 9.7 and V9.7 Fix Pack 1. For more information, see FP1: Monitoring table functions information can be viewed using administrative views on page 45.
v A list of packages used within each unit of work can be obtained through the unit of work event monitor. For more information, see A new unit of work event monitor supports transaction monitoring on page 38.
v Reorganization of data or indexes for a specific data partition of a data partitioned table. For more information, see FP1: Data partitions and partitioned indexes can be reorganized on page 17.
v A partitioned table remains available during roll-out operations. For a partitioned table, a roll-out operation no longer takes the table offline. For more information, see FP1: Partitioned table data remains available during roll-out operations on page 65.
v Multidimensional clustering (MDC) block indexes are partitioned when creating a table that uses both MDC and table partitioning. For more information, see Partitioned indexes on partitioned tables improve performance on page 60.
v Distribution statistics are collected for indexes over XML data. For more information, see FP1: Distribution statistics collected for XML columns on page 31.
v The ADMIN_MOVE_TABLE procedure has new options which prevent locking overhead on the target table in the copy and swap phases and improve the data movement speed. For more information, see Table data can be moved online using a new stored procedure on page 14.
v Additional keywords can be added to the db2relocatedb command configuration file, which makes it easier to relocate a database when the paths used are different. For more information, see FP1: Relocating databases using the db2relocatedb command has been improved on page 18.
v New routines, views, and modules for monitoring, workload management, and explaining statements have been added and some routines have been changed. For more information, see Some system catalog views, system-defined administrative routines and views have been added and changed on page 200.
v You can monitor the progress of the RUNSTATS command as well as table and index reorganizations. For more information, see Additional system monitoring information can be generated on page 44.
v Transparent LDAP is supported on the Linux, HP-UX, and Solaris operating systems. For more information, see FP1: Transparent LDAP authentication and group lookup is supported (Linux and UNIX) on page 89.
v 32-bit GSKit libraries are now installed automatically. For more information, see FP1: 32-bit GSKit libraries are included in the 64-bit DB2 product installation on page 91.
v Additional support is provided for the GB18030 code set. For more information, see GB18030 code set support has been extended on page 155.
v DB2 database products installed on HP-UX operating systems now support long host names. For more information, see Installation requirements for DB2 servers and IBM data server clients (HP-UX) in Installing DB2 Servers.
v Multiple result sets can now be returned from an SQL procedure by enabling multiple instances of the same cursor. For more information, see Returning result sets from SQL procedures in SQL Procedural Languages: Application Enablement and Support.
v The db2support tool includes new filtering options that you can use to gather specific diagnostic data more easily and an archiving option for storing diagnostic files in a different location. For more information, see FP1: db2support tool has been enhanced on page 159.
v Work action sets can be defined at the workload level to control workloads, based on the type and size of work, before they enter the system. For more information, see FP1: Work action sets can be defined at workload level on page 79.
v The UOWTOTALTIME threshold specifies the maximum amount of time that a unit of work may spend in the DB2 engine. For more information, see FP1: New time threshold limits unit of work duration on page 80.
v A sample script (qpwlmmig.pl) has been provided to facilitate the migration from the deprecated DB2 Query Patroller environment to the DB2 workload manager environment. For more information, see FP1: Script facilitates migration from Query Patroller to workload manager on page 80.
v A new optional AUTOGRANT parameter has been added to the DB2 Text Search ENABLE DATABASE FOR TEXT command; it attempts to grant the necessary DBADM with DATAACCESS privileges to the instance owner when the ENABLE command is run, in case the instance owner lacks these privileges for this database. For more information, see db2ts ENABLE DATABASE FOR TEXT command in the Command Reference.
v For Linux operating systems, users can specify that activated databases are to use minimal processing resources when the database manager is idle, by using the new DB2_MIN_IDLE_RESOURCES registry variable. For more information, see the "DB2_MIN_IDLE_RESOURCES" entry in Miscellaneous variables in the Database Administration Concepts and Configuration Reference.
v With the new DB2_USE_FAST_PREALLOCATION registry variable, the Veritas fast allocation file system feature can be used to reserve table space, and speed up the process of creating or altering large table spaces and database restore
operations. For more information, see the "DB2_USE_FAST_PREALLOCATION" entry in Miscellaneous variables in the Database Administration Concepts and Configuration Reference.
v With the new DB2TCP_CLIENT_KEEPALIVE_TIMEOUT registry variable, users can specify a keep-alive setting that is lower than the system default, allowing the database manager to detect connection failures sooner. For more information, see the "DB2TCP_CLIENT_KEEPALIVE_TIMEOUT" entry in Communications variables in the Database Administration Concepts and Configuration Reference.
v The DB2_WORKLOAD aggregate registry variable now has a new value, INFOR_ERP_LN, which configures a set of registry variables for Infor ERP Baan (a sample db2set invocation is included in the examples that follow this list). For more information, see the "DB2_WORKLOAD" entry in System environment variables in the Database Administration Concepts and Configuration Reference.
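The following snippets sketch three of the Fix Pack 1 items noted in the list above. They are illustrative only: the input string and registry value are example choices rather than recommendations, and the exact behavior of each command should be confirmed in the topics cited above.

To return the first 3 bytes of a string with the new SUBSTRB scalar function:
db2 "VALUES SUBSTRB('Database', 1, 3)"

To merge db2diag log files from split diagnostic data directories into a single output stream:
db2diag -merge

To switch the DB2_WORKLOAD aggregate registry variable to the new Infor ERP Baan setting (restart the instance afterward for the change to take effect):
db2set DB2_WORKLOAD=INFOR_ERP_LN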
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for how to improve the DB2 documentation, send an e-mail to [email protected]. The DB2 documentation team reads all of your feedback, but cannot respond to you directly. Provide specific examples wherever possible so that we can better understand your concerns. If you are providing feedback on a specific topic or help file, include the topic title and URL. Do not use this e-mail address to contact DB2 Customer Support. If you have a DB2 technical issue that the documentation does not resolve, contact your local IBM service center for assistance.
The form number increases each time a manual is updated. Ensure that you are reading the most recent version of the manuals, as listed below. Note: The DB2 Information Center is updated more frequently than either the PDF or the hard-copy books.
Table 39. DB2 technical information Name Administrative API Reference Administrative Routines and Views Call Level Interface Guide and Reference, Volume 1 Call Level Interface Guide and Reference, Volume 2 Command Reference Form Number SC27-2435-02 SC27-2436-02 SC27-2437-02 Available in print Yes No Yes Last updated September, 2010 September, 2010 September, 2010
SC27-2438-02
Yes
September, 2010
SC27-2439-02
Data Movement Utilities Guide and Reference SC27-2440-00 Data Recovery and High Availability Guide and Reference SC27-2441-02
Database Administration Concepts and Configuration Reference SC27-2442-02 Database Monitoring Guide and Reference Database Security Guide DB2 Text Search Guide Developing ADO.NET and OLE DB Applications Developing Embedded SQL Applications Developing Java Applications Developing Perl, PHP, Python, and Ruby on Rails Applications Developing User-defined Routines (SQL and External) Getting Started with Database Application Development SC27-2458-02 SC27-2443-01 SC27-2459-02 SC27-2444-01
Yes
September, 2010
Yes Yes No
SC27-2448-01
Yes
November, 2009
GI11-9410-01
Yes
November, 2009
Getting Started with DB2 Installation and Administration on Linux and Windows GI11-9411-00
Yes
August, 2009
Table 39. DB2 technical information (continued) Name Globalization Guide Installing DB2 Servers Installing IBM Data Server Clients Message Reference Volume 1 Message Reference Volume 2 Net Search Extender Administration and User's Guide Partitioning and Clustering Guide pureXML Guide Query Patroller Administration and User's Guide Spatial Extender and Geodetic Data Management Feature User's Guide and Reference SQL Procedural Languages: Application Enablement and Support Form Number SC27-2449-00 GC27-2455-02 GC27-2454-01 SC27-2450-00 SC27-2451-00 SC27-2469-02 Available in print Yes Yes No No No No Last updated August, 2009 September, 2010 September, 2010 August, 2009 August, 2009 September, 2010
Yes Yes No
SC27-2468-01
No
September, 2010
SC27-2470-02
Yes
September, 2010
SQL Reference, Volume 1 SC27-2456-02 SQL Reference, Volume 2 SC27-2457-02 Troubleshooting and Tuning Database Performance Upgrading to DB2 Version 9.7 Visual Explain Tutorial What's New for DB2 Version 9.7 Workload Manager Guide and Reference XQuery Reference SC27-2461-02
September, 2010 August, 2009 September, 2010 September, 2010 November, 2009
Table 40. DB2 Connect-specific technical information Name Form Number Available in print Yes Last updated September, 2010
Installing and Configuring DB2 Connect Personal Edition SC27-2432-02 Installing and Configuring DB2 Connect Servers SC27-2433-02
Yes
September, 2010
Table 40. DB2 Connect-specific technical information (continued) Name DB2 Connect User's Guide Form Number SC27-2434-02 Available in print Yes Last updated September, 2010
Table 41. Information Integration technical information Name Form Number Available in print Yes Last updated August, 2009
Information Integration: Administration Guide for Federated Systems SC19-1020-02 Information Integration: ASNCLP Program Reference for Replication and Event Publishing Information Integration: Configuration Guide for Federated Data Sources Information Integration: SQL Replication Guide and Reference Information Integration: Introduction to Replication and Event Publishing SC19-1018-04
Yes
August, 2009
SC19-1034-02
No
August, 2009
SC19-1030-02
Yes
August, 2009
GC19-1028-02
Yes
August, 2009
Procedure
v To find out whether you can order printed DB2 books online in your country or region, check the IBM Publications Center at http://www.ibm.com/shop/
publications/order. You must select a country, region, or language to access publication ordering information and then follow the ordering instructions for your location.
v To order printed DB2 books from your local IBM representative:
1. Locate the contact information for your local representative from one of the following Web sites:
– The IBM directory of worldwide contacts at www.ibm.com/planetwide
– The IBM Publications Web site at http://www.ibm.com/shop/publications/order. You will need to select your country, region, or language to access the appropriate publications home page for your location. From this page, follow the "About this site" link.
2. When you call, specify that you want to order a DB2 publication.
3. Provide your representative with the titles and form numbers of the books that you want to order. For titles and form numbers, see DB2 technical library in hardcopy or PDF format on page 267.
Procedure
To start SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the first two digits of the SQL state. For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help for the 08 class code.
Procedure
v To display topics in your preferred language in the Internet Explorer browser:
1. In Internet Explorer, click the Tools > Internet Options > Languages... button. The Language Preferences window opens.
2. Ensure your preferred language is specified as the first entry in the list of languages.
– To add a new language to the list, click the Add... button.
Note: Adding a language does not guarantee that the computer has the fonts required to display the topics in the preferred language.
– To move a language to the top of the list, select the language and click the Move Up button until the language is first in the list of languages.
3. Refresh the page to display the DB2 Information Center in your preferred language.
v To display topics in your preferred language in a Firefox or Mozilla browser:
1. Select the button in the Languages section of the Tools > Options > Advanced dialog. The Languages panel is displayed in the Preferences window.
2. Ensure your preferred language is specified as the first entry in the list of languages.
– To add a new language to the list, click the Add... button to select a language from the Add Languages window.
– To move a language to the top of the list, select the language and click the Move Up button until the language is first in the list of languages.
3. Refresh the page to display the DB2 Information Center in your preferred language.
Results
On some browser and operating system combinations, you must also change the regional settings of your operating system to the locale and language of your choice.
Updating the DB2 Information Center installed on your computer or intranet server
A locally installed DB2 Information Center must be updated periodically.
Installing DB2 Servers. All prerequisites and restrictions that applied to installing the Information Center also apply to updating the Information Center.
Procedure
To automatically update the DB2 Information Center installed on your computer or intranet server:
1. On Linux operating systems:
a. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the /opt/ibm/db2ic/V9.7 directory.
b. Navigate from the installation directory to the doc/bin directory.
c. Run the ic-update script:
ic-update
2. On Windows operating systems:
a. Open a command window.
b. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the <Program Files>\IBM\DB2 Information Center\Version 9.7 directory, where <Program Files> represents the location of the Program Files directory.
c. Navigate from the installation directory to the doc\bin directory.
d. Run the ic-update.bat file:
ic-update.bat
Results
The DB2 Information Center restarts automatically. If updates were available, the Information Center displays the new and updated topics. If Information Center updates were not available, a message is added to the log. The log file is located in the doc\eclipse\configuration directory. The log file name is a randomly generated number. For example, 1239053440785.log.
Manually updating the DB2 Information Center installed on your computer or intranet server
If you have installed the DB2 Information Center locally, you can obtain and install documentation updates from IBM.
Procedure
To update the DB2 Information Center installed on your computer or intranet server:
1. Stop the DB2 Information Center.
v On Windows, click Start > Control Panel > Administrative Tools > Services. Then right-click the DB2 Information Center service and select Stop.
v On Linux, enter the following command:
/etc/init.d/db2icdv97 stop
2. Start the Information Center in stand-alone mode.
v On Windows:
a. Open a command window.
b. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the Program_Files\IBM\DB2 Information Center\Version 9.7 directory, where Program_Files represents the location of the Program Files directory.
c. Navigate from the installation directory to the doc\bin directory.
d. Run the help_start.bat file:
help_start.bat
v On Linux:
a. Navigate to the path where the Information Center is installed. By default, the DB2 Information Center is installed in the /opt/ibm/db2ic/V9.7 directory.
b. Navigate from the installation directory to the doc/bin directory.
c. Run the help_start script:
help_start
The system's default Web browser opens to display the stand-alone Information Center.
3. Click the Update button. (JavaScript must be enabled in your browser.) On the right panel of the Information Center, click Find Updates. A list of updates for existing documentation displays.
4. To initiate the installation process, check the selections you want to install, then click Install Updates.
5. After the installation process has completed, click Finish.
6. Stop the stand-alone Information Center:
v On Windows, navigate to the installation directory's doc\bin directory, and run the help_end.bat file:
help_end.bat
Note: The help_end batch file contains the commands required to safely stop the processes that were started with the help_start batch file. Do not use Ctrl-C or any other method to stop help_start.bat.
v On Linux, navigate to the installation directory's doc/bin directory, and run the help_end script:
help_end
Note: The help_end script contains the commands required to safely stop the processes that were started with the help_start script. Do not use any other method to stop the help_start script.
7. Restart the DB2 Information Center.
v On Windows, click Start > Control Panel > Administrative Tools > Services. Then right-click the DB2 Information Center service and select Start.
v On Linux, enter the following command:
/etc/init.d/db2icdv97 start
Results
The updated DB2 Information Center displays the new and updated topics.
DB2 tutorials
The DB2 tutorials help you learn about various aspects of DB2 products. Lessons provide step-by-step instructions.
Some lessons use sample data or code. See the tutorial for a description of any prerequisites for its specific tasks.
DB2 tutorials
To view the tutorial, click the title.
pureXML in pureXML Guide
Set up a DB2 database to store XML data and to perform basic operations with the native XML data store.
Visual Explain in Visual Explain Tutorial
Analyze, optimize, and tune SQL statements for better performance using Visual Explain.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of the Publications is detrimental to its interest or, as determined by IBM, the above instructions are not being properly followed. You may not download, export or re-export this information except in full compliance with all applicable laws and regulations, including all United States export laws and regulations. IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
Appendix C. Notices
This information was developed for products and services offered in the U.S.A. Information about non-IBM products is based on information available at the time of first publication of this document and is subject to change. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan, Ltd. 1623-14, Shimotsuruma, Yamato-shi Kanagawa 242-8502 Japan The following paragraph does not apply to the United Kingdom or any other country/region where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information that has been exchanged, should contact: IBM Canada Limited U59/3600 3600 Steeles Avenue East Markham, Ontario L3R 9Z7 CANADA Such information may be available, subject to appropriate terms and conditions, including, in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information may contain examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs. Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. Copyright IBM Corp. _enter the year or years_. All rights reserved.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at Copyright and trademark information at www.ibm.com/legal/copytrade.shtml. The following terms are trademarks or registered trademarks of other companies v Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. v Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. v UNIX is a registered trademark of The Open Group in the United States and other countries. v Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. v Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Numerics
64-bit server 91
A
access plans reusing overview 56 ACCESSCTRL (access control) authority overview 84 act_remapped_in monitor element 78 act_remapped_out monitor element 78 Activity Monitor deprecated 222 ADMIN_EST_INLINE_LENGTH function overview 63 ADMIN_IS_INLINED function overview 63 ADMIN_MOVE_TABLE procedure overview 14 administration modifications summary 171 administration notification log size control 52, 160 administrative routines additions 13, 200 changes 200 administrative views additions 13, 200 changes 200 monitoring information 45 AES algorithm alternate_auth_enc configuration parameter overview 86 AGGSQLTEMPSPACE threshold overview 77 aliases enhancements 105 public 105 ALTER TABLE statement ALTER COLUMN SET DATA TYPE enhancement 98 RENAME COLUMN clause 95 alternate_auth_enc configuration parameter overview 86, 172 anchored data types overview 139 annotated XML schema decomposition enhancements 27 APIs deprecated 227 applheapsz configuration parameter change 185
B
backups enhancements summary 49 blocknonlogged database configuration parameter overview 185 books ordering 270 Boolean data type overview 140 built-in functions additions 200 changes 200
C
call level interface (CLI) applications enhancements 126 dynamic packages binding enhancement 126 enhancements 126 merge modules packaging simplified 217 casting implicit 104 catalog statistics distribution statistics on XML columns 31 CATALOG TCPIP MODE command enhancement 87 catalog views additions 200 changes 200 changed functionality summary 169, 171 CHAR scalar function return behavior altered 210 clusters managing Solaris SPARC support 49 columns renaming 95 Command Editor deprecated 222 Command Line Processor Plus (CLPPlus) overview 67 commands db2ckupgrade overview 232 db2has overview 157 db2iupgrade overview 232 deprecated Control Center 222 DB2 administration server (DAS) 222 DB2 Governor 223 health monitor 225 LIST TABLESPACE CONTAINERS 227 LIST TABLESPACES 227 Query Patroller 223 DESCRIBE enhancement 17 output changes 182 UPGRADE DATABASE overview 232 common SQL API stored procedures overview 109 compiled compound statements overview 136 components name changes 3 compression enhancements summary 5 index overview 7 XML documents overview 5, 29
concurrency scan sharing overview 59 CONCURRENTDBCOORDACTIVITIES aggregate threshold change 181 CONFIG_ONLY response file keyword 234 Configuration Assistant (CA) deprecated 222 configuration parameters enhancements 87 connection string parameters SSL support 87 Control Center deprecated tools 222 extensions deprecated 222 CPUTIME activity threshold overview 77 CREATE EVENT MONITOR FOR DEADLOCKS statement deprecated 230 CREATE EVENT MONITOR FOR TRANSACTIONS statement deprecated 231 CREATE INDEX statement new default 171 CREATE statement OR REPLACE clause addition 95 CREATE with errors overview 96 created temporary tables overview 100 cur_commit database configuration parameter overview 185 currently committed semantics enhancement 58 cursor data types overview 141 cursor stability (CS) default behavior changed 199 enhancement 58 cursor variables overview 141 cursors parameter support added 141
D
data distribution map size increased 15 storage enhancements summary data deduplication devices backup utilities 52 data dictionary Oracle compatible views 68 data types anchored overview 139 associative array overview 141 Boolean 140 CLI enhancements 126 cursor overview 141 DATE 69 LONG VARCHAR deprecated 226 5
data types (continued) LONG VARGRAPHIC deprecated 226 NUMBER 69 row 142 setting ALTER TABLE statement 98 SQL PL 139 VARCHAR2 69 data warehouse applications scalability improvement 15 DATAACCESS (data access) authority overview 84 database manager configuration parameters changed 172 new 172 database objects modules 135 database partition servers enhancement 16 database-managed space (DMS) reclaimable storage overview 11 databases outage resiliency enhanced 50 setup changes summary 184 transporting schemas overview 53 DATE data type data type arithmetic support 69 date_compat database configuration parameter overview 185 DB2 administration server (DAS) deprecated 222 DB2 Advanced Copy Services (ACS) installing compact type 189 response file 189 supported operating systems 50 DB2 Connect changes summary 163 database manager configuration parameters changes 172 enhancements summary 163 DB2 Connect Unlimited Edition (System z) 153 DB2 embedded application server (EAS) discontinued 241 DB2 Express Edition licensing changes 184 DB2 Governor deprecated 223 DB2 Health Advisor Service data collector command DB2 Information Center languages 272 updating 272, 274 versions 271 DB2 products availability 3 packaging 3 DB2 Text Search 146 authorizations changes 197 commands authorization changes 197 installing changes 189 procedure authorization changes 197
157
DB2 Text Search (continued) stored procedure authorization changes 197 DB2 Workgroup Edition licensing changes 184 DB2 workload manager changes statistics collection interval 174 CONCURRENTDBCOORDACTIVITIES threshold change 181 CONCURRENTDBCOORDACTIVITIES threshold enhancement 181 enhancements application-specific threshold controls 74 buffer pool I/O priority 76 high watermarks 75 integration with Linux WLM 76 IP address support 74 Linux WLM support 76 migration script 81 monitoring 75 priority aging 78 resource controls 76 service class tiering 78 statistics collection 75 summary 73 threshold actions 78 thresholds 74, 77, 80 time-based thresholds granularity 77 unit of work time threshold control 80 wild card support 74 work action sets 79 workloads 74 licensing changes 184 migration script Query Patroller to workload manager 81 statistics collection interval synchronization 174 thresholds AGGSQLTEMPSPACE 77 CPUTIME 77 SQLROWSREAD 77 work action sets workload level 79 DB2 XQuery functions current-local-date overview 31 current-local-dateTime overview 31 current-local-time overview 31 DB2 XQuery functions local-timezone 31 overview 31 DB2_ATS_ENABLE registry variable overview 175 DB2_CAPTURE_LOCKTIMEOUT registry variable deprecated functionality 231 DB2_COMPATIBILITY_VECTOR registry variable enhancement 70 DB2_DDL_SOFT_INVAL registry variable overview 175 DB2_DEFERRED_PREPARE_SEMANTICS registry variable overview 175 DB2_EVALUNCOMMITTED registry variable changes 175 DB2_EVMON_STMT_FILTER registry variable new values 175 Index
DB2_FCM_SETTINGS registry variable 175 new values 175 DB2_FORCE_OFFLINE_ADD_PARTITION environment variable overview 175 DB2_HADR_ROS registry variable overview 175 DB2_ITP_LEVEL registry variable overview 175 DB2_LIMIT_FENCED_GROUP registry variable overview 175 DB2_LOGGER_NON_ BUFFERED_IO registry variable changed default values 175 DB2_NCHAR_SUPPORT registry variable overview 175 DB2_PMAP_COMPATIBILITY registry variable overview 175 DB2_SERVER_ENCALG registry variable changes 175 deprecated functionality 231 DB2_SKIPDELETED registry variable changes 175 DB2_SKIPINSERTED registry variable changes 175 DB2_SQLROUTINE_PREPOPTS registry variable new values 175 DB2_STANDBY_ISO registry variable overview 175 DB2_THREAD_SUSPENSION variable discontinued 243 DB2_USE_FAST_PREALLOCATION registry variable overview 175 DB2_WORKLOAD aggregate registry variable new values 175 db2adutl command enhancement 52 db2caem event monitoring 47 db2ckmig command deprecated 232 DB2DETAILDEADLOCK event monitor deprecated 230, 231 db2diag command parameter additions 158 db2fmp process privileges customization enhancement 90 db2haicu utility (DB2 High Availability Instance Configuration Utility) Solaris SPARC support 49 db2has command 157 db2History APIs COBOL and FORTRAN languages deprecated support 234 db2ilist command options deprecated 243 db2imigr command deprecated 232 db2iprune command enhancement 152 db2iupdt command option deprecated 232 db2mtrk command SYSMON authority added 90 db2pd history fenced routine history 159
db2pd command keywords added 44 db2relocatedb command enhancements 18 DB2RESILIENCE environment variable overview 175 db2rfpen command option deprecated 233 db2rspgn command Linux support added 148 UNIX support added 148 db2secv82 command discontinued 241 db2support command new options 159 db2trc command application handle 157 application id 157 db2uiddl command discontinued 241 db2val command overview 150 DB2WebServices discontinued 241 DBADM (database administration) authority changes 84, 193 dbheap database configuration parameter change 185 dec_to_char_fmt database configuration parameter overview 185 declared temporary tables XML data overview 24 DECOMPOSE XML DOCUMENTS command overview 27 deprecated functionality APIs summary 227 commands db2ckmig 232 db2imigr 232 LIST TABLESPACE CONTAINERS 227 LIST TABLESPACES 227 MIGRATE DATABASE 232 summary 169, 221, 245 DESCRIBE command enhancement 17 output changes 182 diagnostic logs size control 52, 160 diagpath configuration parameter enhancements 158 diagsize database manager configuration parameter overview 172 discontinued functionality commands db2secv82 241 db2uiddl 241 GET AUTHORIZATIONS 242 sqluadau API 242 summary 169, 237, 245 distribution maps size increased 15 documentation overview 267 PDF files 267 printed 267
documentation (continued) terms and conditions of use 276 DOUBLE scalar function return behavior changed 212 dyn_query_mgmt configuration parameter deprecated 185
H
Health Center deprecated 222 health monitor deprecated 225 help configuring language 272 SQL statements 271 high availability enhancements summary 49 High Availability Disaster Recovery (HADR) standby database read operations overview 51 high water marks free space reclamation overview 11 high watermarks workload management additions 75 highlighting conventions xi
E
encryption enhancement 86 environment variables changes 175 Event Analyzer deprecated 222 event monitoring capture activity event monitoring 47 db2caem 47 db2support 47 dynamic SQL statements in package cache 43 static SQL statements in package cache 43 EXPLAIN authority overview 84 external scalar functions OUT and INOUT parameters supported 114
I
I/O completion ports (IOCPs) AIO support 64 new default 64 IBM data server clients enhancements 114 Sysplex support added 125 IBM Data Server Driver for ODBC and CLI merge module changes 217 IBM Data Server Driver Package enhancements 124 Sysplex support added 125 IBM data server drivers enhancements 114 name changes 3 Sysplex support added 125 IBM Data Server Provider for .NET enhancements 131 IBM Database Add-Ins for Visual Studio enhancements 110 install 152 IBM Global Security Kit 91 IBM Tivoli System Automation for Multiplatforms (SA MP) support enhanced 152 ibm_db API overview 107 IBM_DB Ruby driver and Rails adapter trusted contexts 125 ibm_db_dbi API overview 107 ibm_db_sa adaptor overview 107 Index Coexistence 146 index compression overview 7 index over XML data enhancement 30 index reorganization progress information 44 indexes data partitions 61 partitioned overview 61 Indoubt Transaction Monitor deprecated 222
F
Fix pack summary DB2 Connect 167 fix packs enhancements summary 147 space requirements reduced 153 summary 259 FOR UPDATE clause overview 100 functions additions 200 changes 200 deprecated list 200 LONG_VARCHAR 226 LONG_VARGRAPHIC 226 enhancement 137 table ADMIN_EST_INLINE_LENGTH 63 ADMIN_IS_INLINED 63
G
GB 18030 code set DB2CODEPAGE 155 GB18030 Windows client 155 General Parallel File System (GPFS) NO FILE SYSTEM CACHING new default 174 GET AUTHORIZATIONS command discontinued 242 global registry changed 188 GSKit 91
inline storage LOBs enhancements 63 installation changes summary 184 enhancements Linux and UNIX 151 UNIX 151 enhancements summary 147 images enhancement 152 installFixPack command enhancement 153 instances creating shared system support 148 integer division changes 218 INTERACTIVE response file keyword 188 intra-tablespace parallelism overview 51 invalidation soft 97 isolation levels fullselect clauses support 64 subselect clauses support 64
J
JDBC enhancements summary 115 Journal deprecated 222
LIST TABLESPACES command deprecated 227 SYSMON authority added 90 LIST UTILITIES command SYSMON authority added 90 locking intent subselect clauses and fullselect clauses support 64 locklist configuration parameter new range 185 locks event reporting enhancements 41 log sequence numbers (LSNs) limit increase 200 logbufsz database configuration parameter changes 185 logfilsiz database configuration parameter changed functionality 185 logprimary database configuration parameter changes 185 logs enhancements summary 49 non-buffered I/O default change primary 180 secondary 180 LONG VARCHAR data type deprecated 226 LONG VARGRAPHIC data type deprecated 226 LONG_VARCHAR function deprecated 226 LONG_VARGRAPHIC function deprecated 226
L
large objects (LOBs) CLI retrieval enhancement 126 enhancements 63 inline 63 storage enhancement 63 LD_LIBRARY_PATH 91 LIBPATH 91 License activation process 153 License Center deprecated 222 license policies enforcement changes 185 setting changes 185 licenses changes 184 Lightweight Directory Access Protocol (LDAP) transparent LDAP overview 89 LIST DATABASE PARTITION GROUPS command SYSMON authority added 90 LIST DRDA INDOUBT TRANSACTIONS command SYSMON authority added 90 LIST PACKAGES command SYSMON authority added 90 LIST TABLES command SYSMON authority added 90 LIST TABLESPACE CONTAINERS command deprecated 227 SYSMON authority added 90
M
manageability enhancements summary 9 materialized query tables (MQTs) matching enhancements 62 maxOccurs attribute parsing changes 183 Memory Visualizer deprecated 222 merge modules .NET, ODBC, and CLI combined 217 MIGRATE DATABASE command deprecated 232 MIGRATE_PRIOR_VERSIONS response file keyword 234 migration deprecated commands 232 modules overview 135 mon_act_metrics configuration parameter overview 185 mon_deadlock configuration parameter overview 185 mon_locktimeout configuration parameter overview 185 mon_lockwait configuration parameter overview 185 mon_lw_thresh configuration parameter overview 185 mon_obj_metrics configuration parameter overview 185 mon_req_metrics configuration parameter overview 185
mon_uow_data configuration parameter overview 185 monitor elements act_remapped_in overview 78 act_remapped_out overview 78 num_remaps 78 row-based formatting support added 46 time-spent 39 monitoring database configuration parameters added 36 dynamic SQL statements in package cache 37 enhancements 33, 34 last referenced date 40 monitor elements enhancements 36 section explain 42 SQL statement access plan support section actuals 43 SQL statement access plans 42 static SQL statements in package cache 37 summary 33 workload management enhancements 75, 78 monitoring interfaces FCM support added 46 locking support added 35 monitoring reports generation 45 moving data Worksheet Format (WSF) deprecated 227 Multicultural support enhancements summary 155 multidimensional clustering (MDC) tables reclaiming extents 13 XML support 23
O
object management last referenced date 40 ODBC merge modules packaging simplified 217 online table moves ADMIN_MOVE_TABLE procedure overview 14 optimization guidelines XML data and XQuery 28 optimization profiles enhancement 57 Oracle data dictionary-compatible views 68 environment setup 70 ordering DB2 books 270
P
package cache event monitor supported 43 packages dynamic packages binding enhancement 126 parameter markers CLI enhancement 126 parsing implicit maxOccurs attribute 183 partitioned database environments cost model improved 58 Windows 32-bit support removal 239 XML data 25 partitioned indexes overview 61 partitioned tables detaching data partitions 182 detaching partitions 65 partitioned indexes new default 171 reorganization 17 XML data overview 22 passwords maximum length enhancement 89 PATH 91 pckcachesz database configuration parameter new range 185 performance enhancements summary 55 scan sharing 59 PHP trusted contexts overview 125 pinging CLI application enhancements 126 PL/SQL compilation support 68 predicate pushdown query optimization overview 27 priority aging overview 78 problem determination enhancements summary 157 information available 276 tutorials 276
N
named arguments procedures 106 Net Search Extender (NSE) authorization changes 196 commands authorization changes 196 deprecated 229 deprecated features 229 enhancements summary 145 full-text searches partitioned database support extended 145 partitioned table support added 145 incremental updates 146 Netscape browser support discontinued 239 new features summary 1 non-buffered I/O log file changes 180 notices 279 num_remaps monitor element 78 NUMBER data type overview 69 number_compat mode 218
procedures ADMIN_MOVE_TABLE 14 common SQL API overview 109 DEFAULT keyword change 213 named arguments 106 parameters default 106 enhancement 106 proxy nodes Tivoli Storage Manager (TSM) overview 52 public aliases overview 105 public synonyms overview 105 Python extensions added 107
response files (continued) MIGRATE_PRIOR_VERSIONS keyword deprecated 234 uninstallation enhancements 149 restoring transporting database schemas overview 53 revalidation automatic 97 soft 97 routines additions 200 changes 200 row data types overview 142 rows count retrieval enhancement 126
Q
queries access plan reuse overview 56 Query Patroller deprecated 223 Query Patroller Center deprecated 223 quiesced instance RESTRICTED ACCESS option enhancement 18
S
samples additions 111 Satellite Administration Center deprecated 222 scalability database partition server enhancement 16 scalar functions CHAR return behavior change 210 DOUBLE return behavior change 212 enhancements 101 scan sharing overview 59 SDKs version 1.4.2 deprecation 228 SECADM (security administrator) authority changes 84, 192 security changes summary 190 enhancements summary 83 plug-ins LDAP (Lightweight Directory Access Protocol) 89 security connection parameter 87 SELECT INTO statement FOR UPDATE clause 100 SERVER_ENCRYPT authentication type enhancement 86 service classes tiering 78 SHLIB_PATH 91 shredding XML documents enhancement 27 soft invalidation overview 97 spatial data partitioned database environments 66 SQL administrative routines additions 13 administrative views additions 13 alternative syntax support 70 compatibility enhancements 67 SQL Procedural Language (SQL PL) data types anchored 139 associative array 141
R
reclaimable storage automatic storage table spaces 11 DMS table spaces 11 recovery enhancements summary 49 registry files removed 188 registry variables additions 175 changes 175 deprecated functionality 231 discontinued support 243 relocate database command enhancements 18 REORG INDEXES command CONVERT option deprecated 238 REORG-recommended operations XML data support added 26 replication source tables compression overview 7 Replication Center deprecated 222 resiliency enhancements summary 49 errors and traps detection improvements 50 resources Optim Performance Manager 3, 66 response files CONFIG_ONLY keyword deprecated 234 INTERACTIVE keyword changes 188 keywords addition 149
SQL Procedural Language (SQL PL) (continued) data types (continued) Boolean 140 cursor 141 new 139 row 142 DEFAULT keyword specification 213 enhancements summary 135 statements enhancement 137 SQL statements help displaying 271 OR REPLACE clause 95 SQLADM (SQL administration) authority overview 84 SQLAlchemy adapter added 107 SQLCreatePkg API 126 sqlemgdb API deprecated 232 sqlgmgdb API deprecated 232 SQLROWSREAD activity threshold overview 77 sqluadau API discontinued 242 sqlugrpn API deprecated 228 sqlugtpi API deprecated 229 srvcon_auth configuration parameter changes 172 SSL enhancements 87 setup enhancement 194 ssl_cipherspecs configuration parameter overview 87, 172 ssl_client_keystash connection parameter overview 87 ssl_client_keystoredb connection parameter overview 87 ssl_clnt_keydb configuration parameter overview 172 ssl_clnt_stash configuration parameter overview 172 ssl_svcename configuration parameter overview 87, 172 ssl_svr_keydb configuration parameter overview 87, 172 ssl_svr_label configuration parameter overview 87, 172 ssl_svr_stash configuration parameter overview 87, 172 ssl_versions configuration parameter overview 87, 172 SSLClientKeystash connection parameter overview 87 SSLClientKeystoredb connection parameter overview 87 statement concentrator CLI enhancement 126 overview 56 statistical views RUNSTATS command 57 statistics collection workload management enhancements 75, 78
stmt_conc database configuration parameter overview 185 storage paths automatic dropping 10 stored procedures result changes 214 strong encryption enhancement 86 synonyms public 105 SYSADM (system administration) authority changes 84, 190 SYSCAT views additions 200 changes 200 SYSMON (system monitor) authority db2mtrk command added 90 LIST commands added 90 Sysplex IBM data server client support added 125 system catalogs views additions 200 changes 200 system-defined modules overview 108
T
table functions deprecated functionality summary 200 table reorganization progress information 44 table spaces increased capacity limit 15 rebalancing overview 10 tables moving online overview 14 partitioned partitioned index overview 61 Task Center deprecated 222 temporary tables compression 6 LOB data support added 105 terms and conditions publications 276 thresholds AGGSQLTEMPSPACE overview 77 CPUTIME overview 77 SQLROWSREAD overview 77 TIMESTAMP data type enhancements 104 trace command application handle 157 application id 157 transaction control CLI rollback enhancement 126 transaction monitoring 38
transactions ALTER TABLE operations increased limit 96 autonomous 107 Transport Layer Security (TLS) enhancements 87 transports database schemas overview 53 triggers SQL PL enhancements 137 SQL PL statements 137 troubleshooting enhancements summary 157 online information 276 tutorials 276 TRUNCATE statement overview 100 trusted contexts .NET support added 125 IBM_DB Ruby driver support overview 125 PHP extensions support added 125 tutorials list 275 problem determination 276 troubleshooting 276 Visual Explain 275 type-1 indexes discontinued details 238
VARCHAR2 data type overview 69 views additions 200 changes 200 compatible with Oracle data dictionary 68
W
Web Object Runtime Framework (WORF) support discontinued 240 WLMADM (workload administration) authority overview 84 Worksheet Format (WSF) deprecated 227
X
XML compressing documents 5, 29 enhancements compressing documents 5, 29 decomposition 27 indexing 30 summary 21 multidimensional clustering (MDC) tables 23 partitioned database environments 25 processing improvement 27 stored procedure result changes 214 XML data compressing 5, 29 distribution statistics 31 querying XML data 31 XML data type user-defined functions (UDFs) 24 XML decomposition enhancements 27 XML documents compressing 5, 29 storage enhancement 63 type annotation changes 216 XML Extender discontinued 240 XML indexes concurrency enhancement 30 XQuery optimization guidelines 28
U
unary operators changed return data types 213 uninstalling response file support enhancements 149 unit of work event monitor overview 38 universal fix packs Windows support added 153 untyped NULL keyword specification changes 210 update service enabled by default 150 updates DB2 Information Center 272, 274 upgrades applications deprecated commands 232 clients deprecated commands 232 DB2 servers deprecated commands 232 enhancements summary 147 response file keyword additions 149 routines deprecated commands 232 user-defined functions (UDFs) overridden by SYSIBM functions 209 XML data type support added 24
V
validation DB2 installation 150