MySQL to DB2
Conversion Guide
Guides you through a MySQL database
and application conversion to DB2
Converts an application
with detailed examples
Whei-Jen Chen
Angela Carlson
ibm.com/redbooks
International Technical Support Organization
December 2009
SG24-7093-01
Note: Before using this information and the product it supports, read the information in
Notices on page ix.
This edition applies to DB2 9.7 for Linux, UNIX, and Windows and MySQL 5.1.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Chapter 5. Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.1 DB2 Express-C 9.7 on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.1.1 System requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.1.2 Installation procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.1.3 Instance creation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.1.4 Client setup on Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.2 Other software products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.2.1 Apache2 installation with DB2 support . . . . . . . . . . . . . . . . . . . . . . 105
5.2.2 PHP installation with DB2 support . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.3 IBM Data Movement Tool installation and usage . . . . . . . . . . . . . . . . . . 112
5.3.1 IBM Data Movement Tool prerequisites . . . . . . . . . . . . . . . . . . . . . 112
5.3.2 IBM Data Movement Tool installation . . . . . . . . . . . . . . . . . . . . . . . 113
8.2.2 Converting MySQL PHP applications to DB2 . . . . . . . . . . . . . . . . . 225
8.2.3 Converting MySQL Ruby on Rails applications to DB2 . . . . . . . . . 237
8.2.4 Converting MySQL Java applications to DB2 . . . . . . . . . . . . . . . . . 240
8.2.5 Converting MySQL C/C++ applications to DB2 . . . . . . . . . . . . . . . 247
8.2.6 Converting Connector/ODBC applications to DB2 . . . . . . . . . . . . . 256
8.2.7 Condition handling in DB2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
8.2.8 Special conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
8.3 Additional application considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8.3.1 The purpose of locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8.3.2 Concurrency control and transaction isolation . . . . . . . . . . . . . . . . 272
8.3.3 DB2 isolation levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
8.3.4 Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.3.5 Specifying the isolation level in DB2 . . . . . . . . . . . . . . . . . . . . . . . . 276
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and
other countries.
SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and
other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation
and/or its affiliates.
Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and
other countries.
VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in
the United States and/or other jurisdictions.
EJB, Enterprise JavaBeans, J2EE, Java, Java runtime environment, JavaBeans, JavaServer, JDBC, JSP,
MySQL, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Access, ActiveX, Expression, Microsoft, MS, SQL Server, Visual Basic, Visual Studio, Windows Mobile,
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
If you picked up this book, you are most likely considering converting to DB2
and are probably aware of several of the advantages of the DB2 data server. In
this IBM Redbooks publication, we discuss in detail how you can take advantage
of this industry-leading database server.
This book is an informative guide that describes how to convert the database
system from MySQL 5.1 to DB2 9.7 on Linux, and the steps involved in
enabling the applications to use DB2 instead of MySQL.
This MySQL to DB2 migration guide also presents the best practices in
conversion strategy and planning, conversion tools, porting steps, and practical
conversion examples. It is intended for technical staff involved in a MySQL to
DB2 conversion project.
Boris Bialek
Program Director, Information Management Partner Technologies, IBM Canada
Irina Delidjakova
Information Management Emerging Partnerships and Technologies, IBM
Canada
Vlad Barshai
Information Management Emerging Partnerships and Technologies, IBM
Canada
Martin Schlegel
Information Management Partner Technologies, IBM Canada
Daniel Krook
Cloud Engineering and Experience, IBM U.S.
Emma Jacob
International Technical Support Organization, San Jose Center
Authors of the first edition, MySQL to DB2 UDB Conversion Guide, SG24-7093,
published in May 2004, were Whei-Jen Chen, Andreas Blank, Michael Hoeller,
Rakesh Midha, and Klaus Subtil
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
Summary of changes
This section describes the technical changes made in this edition of the book and
in previous editions. This edition might also include minor corrections and
editorial changes that are not identified.
Summary of Changes
for SG24-7093-01
for MySQL to DB2 Conversion Guide
as created or updated on December 1, 2009.
New information
DB2 features and functions of DB2 for Linux, UNIX, and Windows Version
9, 9.5, and 9.7
IBM Data Movement Tool
MySQL 5.1 features
Changed information
DB2 and MySQL features and functions
Conversion scenarios and examples
This book describes how to migrate from MySQL 5.1 to DB2 Version 9.7 on Linux
and enable your applications on DB2. To further ease your migration, this
guide covers best practices in migration strategy and planning, as well as
step-by-step directions, tools, and practical conversion examples. After
completing this book, the technical reader will see that a MySQL to DB2
migration is straightforward.
Potential IBM clients seek migration information because DB2 offers performance
and functional capabilities that the competition cannot match. DB2 Express-C,
our lightweight community edition, is free to develop, deploy, and redistribute,
and is designed to give the IT community a powerful alternative to the open
source or free databases currently available.
DB2 Express-C offers the same high-quality, reliable, scalable features that
you would expect from an IBM enterprise database, at no charge. Fixed Term
License support is available as well, at a lower price than the competition.
The decision to migrate becomes simple when you consider that DB2 can be
easily deployed in the development stack while offering many additional
features and ease of use.
Enterprise class features aimed to lower the total cost of ownership can be found in
every edition of DB2. DB2 has powerful autonomics which make installation,
configuration, maintenance and administration virtually hands free. DB2 9.7's
compression features help companies manage rising energy costs and reduce
datacenter sprawl by reducing storage requirements and improving I/O efficiency.
IBM is committed to providing products to our clients that are powerful and
affordable. DB2 provides industry leading features, such as pureXML, Workload
Management, and Granular Security. Using DB2 pureXML makes XML data
processing even faster, more flexible, and more reliable. Manage workloads with new
threshold, priority and OS integration features in DB2 9.7. Keep data secure from
internal and external threats using the unparalleled security control in DB2 9.7.
Start taking advantage of these exciting new features and help your business
manage costs and simplify application development. Migrate your database systems
and applications today and discover why DB2 9.7 is a smarter product for a smarter
planet.
Arvind Krishna
General Manager
IBM Information Management
The release of DB2 for Linux, UNIX, and Windows Version 9 (DB2 9) data server
brought the most exciting and innovative database features to the market; these
features were further enhanced with the release of DB2 9.5 and 9.7. DB2 9
introduced many important features for both database administrators and
application developers. These features included pureXML, autonomics, table
partitioning, data compression, and label-based access control. DB2 9.5
enhanced the manageability of the DB2 data server by introducing the threaded
engine, easier integration with high availability disaster recovery (HADR),
workload management, enhancements to autonomics, and more. The focus of
the DB2 9.7 release is to provide unparalleled reliability and scalability for the
changing needs of your business. Therefore, DB2 9.7 introduces enhancements
to Version 9 and Version 9.5 features, such as enhancements to data
compression, performance, workload management, security, and application
development.
When this book was written, DB2 9.7 had just been released, in June 2009. DB2
9.7 is the database version that we use throughout the book. DB2 9.7 is a highly
scalable and easy to install and manage hybrid data server. DB2 was developed
to meet the demands of even the most critical database applications. This is
managed through various autonomics capabilities, such as self-tuning memory
management and automatic storage. DB2 provides a highly adaptable database
environment while optimizing data storage through backups and deep data row
compression. DB2 deep embedded capabilities allow for ubiquitous deployment
in user directories and administrative installations for any size server. In a single
database, DB2 provides native storage and processing of both transactional
XML data in a pre-parsed tree format and relational data using pureXML
technology.
Information about additional DB2 features and products can be found at this Web
site:
http://www.ibm.com/software/data/db2/9/features.html?S_CMP=wspace
Because DB2 is running with a threaded architecture, all threads within the
engine process share the same address space, meaning all threads can
immediately see new memory allocations. This design creates a simplified
memory model by allowing memory growth and shrinkage through control of a
single memory parameter for an entire instance. This control is automatically
performed by the Self-Tuning Memory Manager, which can also tune other
memory parameters for best performance without DBA intervention. Its adaptive
algorithm is able to react to unforeseen memory requirements in DB2 caused by
workloads running against it.
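As a minimal illustration, self-tuning memory can be switched on from the DB2
command line processor; the database name SAMPLE in this sketch is only a
placeholder:

db2 UPDATE DB CFG FOR sample USING SELF_TUNING_MEM ON
db2 UPDATE DBM CFG USING INSTANCE_MEMORY AUTOMATIC

With these settings, the Self-Tuning Memory Manager distributes memory among
consumers, such as buffer pools and sort heaps, without DBA intervention.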
When using SMS or DMS in combination with container files, you can choose
how DB2 handles these files. For example, you can enable various optimization
features if they are supported by the operating system: Direct I/O
(to bypass file system caching; always enabled with raw and block devices),
Vector I/O (reading contiguous data pages from disk into contiguous portions
of memory), and Async I/O (non-sequential processing of read and write
requests across multiple disks to avoid delays from synchronous events).
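For example, Direct I/O can be requested for a DMS table space at creation
time; the table space name, container path, and size below are placeholders:

db2 "CREATE TABLESPACE data_ts MANAGED BY DATABASE USING (FILE '/db2/cont0' 10000) NO FILE SYSTEM CACHING"

The NO FILE SYSTEM CACHING clause bypasses file system caching for the
container file where the operating system supports it.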
When using the Automatic Storage feature in DB2, you can simply specify
folders where the database can automatically create and manage DMS table
spaces. When more space is required, the database manager automatically
allocates more space. Table spaces are resized automatically using this
feature, which provides a convenient and worry-free operation scenario; you
can still perform manual operations, without having to specify container
files.
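For example, a database that uses automatic storage can be created by simply
naming its storage paths; the database name and paths are placeholders:

db2 CREATE DATABASE mydb AUTOMATIC STORAGE YES ON /data1, /data2

Table spaces defined as MANAGED BY AUTOMATIC STORAGE in this database then
grow across the listed paths without explicit container definitions.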
Containers
A container is a physical storage device. It can be identified by a directory
name, a device name, or a file name. A container is assigned to a table
space. A single table space can span many containers, but each container
can belong to only one table space.
When using the Deep Data Row Compression feature, DB2 is able to
transparently compress and decompress table rows (for each table with
compression turned on). This feature can effectively save 45-80% of the
space on disk. Rows in a compressed table remain compressed when prefetched
into buffer pool memory and are left in a compressed state until they are
actually used. Although decompressing the data when it is fetched adds a
slight overhead, I/O-bound workloads gain performance because less data must
be read from and written to disk, and memory is saved as well.
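For example, compression can be enabled on an existing table and applied to
its current rows with a classic reorganization; the table name is a
placeholder:

db2 ALTER TABLE sales COMPRESS YES
db2 REORG TABLE sales RESETDICTIONARY

The RESETDICTIONARY option builds a new compression dictionary and compresses
the existing rows during the reorganization.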
Views
A view provides another way of looking at data from one or more tables; it is a
named specification of a result table. The specification is a SELECT
statement that runs whenever the view is referenced in an SQL statement. A
view has columns and rows just like a base table. All views can be used just
like base tables for data retrieval. Figure 1-8 on page 17 shows the
relationship between tables and views.
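For example, a view that exposes a subset of a base table's columns and rows
can be defined as follows; the table, column, and view names are placeholders:

db2 "CREATE VIEW v_active_emp AS SELECT empno, lastname FROM employee WHERE status = 'A'"

Applications can then select from V_ACTIVE_EMP exactly as they would from a
base table.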
Indexes
An index is a set of keys, each pointing to rows in a table. For example, table
A has an index based on the first column in the table (Figure 1-9 on page 18).
This key value provides a pointer to the rows in the table: value 19 points to
record KMP. If searching for this particular record, a full table scan can be
avoided, because we have an index defined. Except for changes in
performance, users of this table are unaware that an index is being used.
DB2 decides whether to use the index or not. DB2 also provides tools, such
as the Design Advisor, that can help decide what indexes will be beneficial.
An index allows efficient access when selecting a subset of rows in a table by
creating a direct path to the data through pointers. The DB2 SQL Optimizer
chooses the most efficient way to access data in tables. The optimizer takes
indexes into consideration when determining the fastest access path.
Indexes have both benefits and disadvantages. Be careful when defining
indexes and take into consideration costs associated with update, delete, and
insert operations and maintenance, such as reorganization and recovery.
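For example, an index on a single column can be created, and the optimizer
statistics refreshed, with the following commands; the table, column, and
schema names are placeholders:

db2 CREATE INDEX ix_empno ON employee (empno)
db2 RUNSTATS ON TABLE db2inst1.employee AND INDEXES ALL

Running RUNSTATS after creating an index gives the optimizer current
statistics with which to judge whether the index is beneficial.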
The catalog is automatically created with the database. The base tables are
owned by the SYSIBM schema and stored in the SYSCATSPACE table space.
On top of the base tables, the SYSCAT and SYSSTAT views are created. SYSCAT
views are read-only views that contain the object information and are found in
the SYSCAT schema. SYSSTAT views are updatable views that contain statistical
information and are found in the SYSSTAT schema. The complete set of DB2
catalog views is documented in DB2 SQL Reference, Volume 1 and 2, available
for download at the following link:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
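Because the catalog is exposed through views, it can be queried with ordinary
SQL. For example, this statement lists the tables in one schema together with
their estimated row counts; the schema name is a placeholder:

db2 "SELECT tabname, card FROM syscat.tables WHERE tabschema = 'DB2INST1'"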
db2 get dbm cfg      Views database manager settings                  db2 get dbm cfg show detail
db2 update dbm cfg   Changes a database manager setting               db2 update dbm cfg using health_mon off
db2admin             Starts and stops the DB2 Administration Server   db2admin start
Note: The Control Center and its associated components have been
deprecated in Version 9.7 and might be removed in a future release. We
recommend that you use the new suite of GUI tools for managing DB2 data
and data-centric applications. These new tools include the IBM Data Studio,
the Optim Development Studio, and the Optim Database Administrator.
and
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0804zikopoulos
Additionally, the IBM Data Server Client provides vast amounts of sample code
in various languages, header files for application development and graphical
administration and development tools, such as the DB2 Control Center, the IBM
Data Studio, the MS Visual Studio Tools, and more.
Figure 1-10 illustrates how to connect to a DB2 data server using the IBM data
server clients.
If running mixed versions of DB2 servers and clients, it is good to know that DB2
Clients from DB2 UDB Version 8 and DB2 9.1 or 9.5 for Linux, UNIX, and
Windows are still supported and able to connect to a DB2 9.7 data server. In the
reverse direction, the newer IBM data server clients from Version 9.7 can also
connect to the earlier DB2 9.1 and DB2 UDB Version 8 servers using the IBM
Data Server Driver for ODBC, CLI, and .Net. In this case, however, new DB2
Version 9.7 functionality is not available.
As of DB2 Version 9.5, both clients and drivers are decoupled from the server
release schedule and can be downloaded separately. The IBM Data Server
Driver for JDBC and SQLJ is already included in the IBM Data Server Runtime
Client. It provides support for JDBC 3 and 4 compliant applications, as well as for
Java applications using static SQL (SQLJ). Support is also provided for
pureXML, SQL/XML, and XQuery. All of this support and other features, such as
connection concentration, automatic client reroute, and more, are provided
in a single package called db2jcc4.jar. The IBM Data Server Driver for ODBC,
CLI and .Net is a lightweight deployment solution for Windows applications to
provide runtime support for applications without needing to install the Data
Server client or the Data Server Runtime Client. On Windows, the driver comes
as an installable image including merge modules to easily embed it in a Windows
installer-based installation. On Linux and UNIX, there is another easy
deployment solution called the IBM Data Server Driver for ODBC and CLI, which
is available in tar format.
Communication protocols
DB2 primarily uses these protocols to communicate:
TCP/IP (IPv4 and IPv6) and Named Pipes (Windows only) for remote connections
Interprocess Communication (IPC) for local connections within a DB2
instance
From the command line, this information can be then updated in the database
manager with the following DB2 command:
db2 UPDATE DBM CFG USING SVCENAME db2icdb2
These tasks can also be performed using the DB2 Configuration Assistant utility.
At the client side, the database information is configured using either the
CATALOG command or using the Configuration Assistant. The databases are
The service name registered in the server or the port number can be specified in
the SERVER option. To catalog a database under this node, the command used
is:
db2 CATALOG DATABASE database-name AS alias-name AT NODE node-name
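Put together, a remote database setup from the client command line might look
like the following; the host name, port, and the database, alias, and node
names are placeholders:

db2 CATALOG TCPIP NODE mynode REMOTE dbserver.example.com SERVER 50000
db2 CATALOG DATABASE sample AS mysample AT NODE mynode
db2 CONNECT TO mysample USER db2inst1

After the CONNECT succeeds, the client can issue SQL against the remote
database exactly as against a local one.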
When using the Configuration Assistant GUI tool to add a database connection,
a database discovery can be started to find the desired database.
Note: DB2 Discovery method is enabled at the instance level using the
DISCOVER_INST parameter, and at the database level using the
DISCOVER_DB parameter.
IBM recognizes that in many cases there might be a need for accessing data
from a variety of distributed data sources rather than one centralized
database. The data sources can be from IBM, such as DB2 or Informix, from
non-IBM databases, such as Oracle, or even from non-relational data, such
as files or spreadsheets. As illustrated in the last scenario in Table 1-6, IBM
offers the most comprehensive business integration solution by allowing
federated access to a variety of distributed data sources.
DB2 Data Server 9 offers multiple ways of creating Java applications, using
either a type 2 or a type 4 JDBC driver:
Type 2 driver
With a type 2 driver, calls to the JDBC application driver are translated to Java
native methods. The Java applications that use this driver must run on a DB2
client, through which JDBC requests flow to the DB2 server.
Type 4 driver
The JDBC type 4 driver can be used to create both Java applications and
applets. To run an applet that is based on the type 4 driver, a Java-enabled
browser is required, which downloads the applet and the JDBC driver
(db2jcc4.jar). To run a DB2 application with a type 4 driver, an entry for the
JDBC driver in the class path is required and no DB2 client is required.
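As a minimal sketch, a type 4 connection needs only the JDBC URL and
db2jcc4.jar on the class path; the host, port, database name, credentials, and
table name here are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Type4Demo {
    public static void main(String[] args) throws Exception {
        // Under JDBC 4, the driver registers itself; no Class.forName is needed
        Connection con = DriverManager.getConnection(
            "jdbc:db2://dbserver.example.com:50000/sample",
            "db2inst1", "password");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT empno FROM employee");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}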
Remote Data Objects (RDO) provide an information model for accessing remote
data sources through ODBC. RDO offers a set of objects that makes it easy to
connect to a database, execute queries and stored procedures, manipulate
results, and commit changes to the server. Because RDO implements a thin
code layer over the ODBC API, it requires an ODBC data source to be created
for the DB2 database to which you connect.
ADO.NET
DB2 supports the Microsoft ADO.NET programming interface through a native
managed provider. These applications can use the DB2 .Net, the OLE DB .Net,
or the ODBC .NET data provider. High performing Windows Forms, Web Forms,
and Web Services can be developed using the ADO.NET API. DB2 supports a
collection of features that integrate seamlessly into Visual Studio 2003, 2005,
and 2008 to make it easier to work with DB2 servers and to develop DB2
procedures, functions, and objects.
The IBM Data Server Provider for .Net extends data server support for the
ADO.NET interface and delivers high performing, secure access to IBM data
servers:
DB2 Version 9 (or later) for Linux, UNIX, and Windows
DB2 Universal Database Version 8 for Windows, UNIX, and Linux
DB2 for z/OS and OS/390 Version 6 (or later), through DB2 Connect
DB2 for i5/OS Version 5 (or later), through DB2 Connect
DB2 Universal Database Version 7.3 (or later) for VSE and VM, through DB2
Connect
IBM Informix Dynamic Server, Version 11.10 or later
IBM UniData, Version 7.1.11 or later
IBM UniVerse, Version 10.2 or later
When used in conjunction with stored procedures and the federated database
capabilities of DB2 data servers and DB2 Connect servers, this data access can
be extended to include a wide variety of other data sources, including non-DB2
mainframe data and Informix Dynamic Server (IDS), Microsoft SQL Server,
For more information about developing ADO.NET and OLE DB, refer to DB2 for
Linux, UNIX, and Windows Developing ADO.NET and OLE DB Applications,
SC23-5851-01, which is available at this Web site:
http://www.ibm.com/support/docview.wss?uid=pub1sc23585101
Perl DBI
DB2 supports the Perl Database Interface (DBI) specification for data access
through the DBD::DB2 driver. The Perl DBI module uses an interface that is
similar to the CLI and JDBC interfaces, which makes it easy to port Perl
prototypes to CLI and JDBC. As of DB2 9.5, Perl DBI comes with support for
DB2 pureXML technology, which allows you to insert XML documents without
the need to parse or validate XML. The Perl driver also supports multi-byte
character sets, which means your application does not have to deal with the
conversion itself when interacting with the database.
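A minimal DBD::DB2 sketch follows; the database alias, credentials, and table
name are placeholders:

use DBI;

my $dbh = DBI->connect("dbi:DB2:sample", "db2inst1", "password")
    or die $DBI::errstr;
my $sth = $dbh->prepare("SELECT empno, lastname FROM employee");
$sth->execute();
while (my @row = $sth->fetchrow_array()) {
    print "@row\n";
}
$dbh->disconnect();

Because the DBI interface is the same regardless of driver, converting a Perl
script from DBD::mysql often amounts to little more than changing the data
source string.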
You can obtain more information about Perl DBI at this Web site:
http://www.ibm.com/software/data/db2/perl/
PHP
The PHP Hypertext Preprocessor is a modular and interpreted programming
language intended for the development of Web applications. Its functionality can
be customized through the use of extensions. DB2 supports PHP through an
extension called pdo_ibm, which allows DB2 access through the standard PHP
Data Objects (PDO) interface. In addition, the ibm_db2 extension offers a
procedural API that, beyond the normal create, read, update, and delete
database operations, also offers extensive access to the database metadata.
The most up-to-date versions of ibm_db2 and pdo_ibm are available from the PHP
Extension Community Library (PECL):
http://pecl.php.net/
For more information about the PHP application development support that DB2
data server for Linux, UNIX, and Windows offers, refer to this Web site:
http://www.ibm.com/software/data/db2/ad/php.html
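A minimal ibm_db2 sketch follows; the database name, credentials, and table
name are placeholders:

<?php
$conn = db2_connect('SAMPLE', 'db2inst1', 'password');
$stmt = db2_exec($conn, 'SELECT empno, lastname FROM employee');
while ($row = db2_fetch_assoc($stmt)) {
    echo $row['EMPNO'] . ' ' . $row['LASTNAME'] . "\n";
}
db2_close($conn);
?>

Note that DB2 folds unquoted identifiers to uppercase, so the keys returned by
db2_fetch_assoc are uppercase column names.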
Ruby on Rails
DB2 9.5 has drivers for Ruby, which is an object-oriented programming
language. Combined with the open source Ruby framework called Rails, the
development of Web-based and database-driven applications can be extremely
quick. Included in DB2 is the IBM_DB Ruby adapter, which allows any
database-backed Ruby application to interface with IBM data servers.
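A Rails database.yml entry for the IBM_DB adapter might look like the
following; the database name and credentials are placeholders:

development:
  adapter: ibm_db
  database: sample
  username: db2inst1
  password: password

With this entry in place, ActiveRecord models work against DB2 without further
changes in most cases.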
DB2 offers a no-charge community edition (DB2 Express-C) of the DB2 data
server. This edition of DB2 is completely free to develop, to deploy, and to
distribute. DB2 Express-C is a full-function relational and XML data server and
has the same reliability, flexibility, and power of the higher editions of DB2. DB2
also offers the DB2 Express + Fixed Term License option, which is priced
comparably with the MySQL Enterprise Gold pricing. For more details about
each of the available DB2 editions, refer to 1.2.1, DB2 Data Server Editions for
the production environment on page 3.
MySQL was initially developed for UNIX and Linux applications. It became
popular when Internet Service Providers (ISPs) discovered that MySQL can be
offered at no charge to their Internet customers, providing all of the storage and
retrieval functionality that a dynamic Web application needs. It was also
advantageous, because ISPs primarily use Linux or UNIX as their base
operating system, in combination with Apache as their favorite Web server
environment. Today, MySQL is also used as an integrated or embedded
database for various applications running on almost all platform
architectures.
Figure 2-1 illustrates the conceptual architecture of the MySQL database. The
next several sections cover the functionality of the integrated components in
more detail.
The client layer is the front-end component with which users interact. Three
types of users interact with the server through this component: query users,
administrators, and applications.
In DB2, you can use the command line processor (CLP) for the same
functionality. Use the db2 or db2cmd command to start the command line
processor.
Administrators
Administrators use the administrative interface and utilities, such as mysqladmin
and MySQL Administrator. These tools can be used for creating or dropping
databases and users, as well as managing the MySQL server. These tools
connect to the database server using the native C client library. There are also
utilities that can be used for administrative purposes, but they do not connect to
the MySQL server. Instead, they work directly with the database files. These
tools are myisamchk for table analysis, optimization, and crash recovery and
myisampack for creating read-only compressed versions of MyISAM tables.
DB2 offers a rich set of database management GUI tools, such as the DB2
Control Center, the Optim Database Administrator, and IBM Data Studio.
These tools simplify database administration by providing a single tool to
manage your entire database environment. Also, you can use
these tools to query the database. The GUI tools are discussed in detail in 9.8,
Database management tools on page 312.
Applications
Applications communicate with the database server through MySQL APIs that
are available for various programming languages, such as C++, PHP, Java, Perl,
.NET, and so on. We discuss these APIs in more detail later in the chapter.
Connection pool
The connection pool assigns user connections to the database and
corresponding memory caches. The utilities and programs that are included with
MySQL connect using the Native C API. Other applications can connect using
the standard drivers that MySQL offers, such as C++, PHP, Java, Perl, .NET,
and so on. MySQL supports TCP/IP, UNIX socket file, named pipe, and shared
memory networking protocols, depending on the type of operating system that is
used. For more details about application programming interfaces, see 2.5,
MySQL application programming interfaces on page 51.
SQL interface
The SQL interface accepts and conveys the SQL statements from the
connecting user or MySQL application. This layer is independent of the storage
engine layer; therefore, SQL statement support does not depend on the type of
storage engine being used. The SQL statement is then passed to the
SQL parser for further processing.
SQL parser
The parser analyzes the SQL statements and validates the SQL query syntax.
The parser breaks up the statement and creates a parse tree structure to
validate the SQL query syntax and prepare the statement for the optimizer.
SQL optimizer
The SQL optimizer verifies that the tables exist and that the records can be
accessed by the requesting user. After security checking, the query is analyzed
and optimized to improve the performance of the query process.
In the next section, we describe how the database objects and data are
physically stored on the server.
In the example in Figure 2-2 on page 43, there are two databases on this MySQL
server. The first database is the mysql database, which, by default, holds the
security information. The second database is the sample database inventory,
which is discussed in more detail in Chapter 4, Conversion scenario on
page 75.
The following files are created for each database directory in the MySQL home
directory:
Files with the frm extension contain the structural definition of tables and
views, which is known as the schema.
Files with the MYD extension contain the table data.
Files with the MYI extension contain the table indexes.
If there are triggers, there are also files with the TRN and TRG extensions.
Example 2-1 shows the files that are created for each table in our sample
database.
Log files, by default, are created in the mysql home directory. The security data
tables in the mysql database are in the /&lt;mysql home directory&gt;/mysql
directory. Table 2-2 lists the files of the security data tables.
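These per-table files can be inspected from a shell prompt. The following sketch simulates the layout in a scratch directory rather than touching a live server; the inventory database name comes from the sample database, while the table name item is made up for illustration:

```shell
# Simulate the per-table files that MySQL's MyISAM engine keeps in its
# datadir (real files live under, for example, /var/lib/mysql/<database>/).
datadir=$(mktemp -d)
mkdir -p "$datadir/inventory"

# A hypothetical MyISAM table named "item" consists of three files:
touch "$datadir/inventory/item.frm"   # structural definition (schema)
touch "$datadir/inventory/item.MYD"   # table data
touch "$datadir/inventory/item.MYI"   # table indexes

# Inspect the table's files the way you would on a real server
files=$(ls "$datadir/inventory" | sort)
echo "$files"
rm -r "$datadir"
```

On a real server, listing a database subdirectory of the MySQL home directory shows the same three files per MyISAM table, plus TRN and TRG files when triggers exist.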
By default, DB2 uses a better approach for the logical and physical distribution of
database objects. DB2 differs from MySQL in that DB2 stores all database
objects in table spaces. The benefits of table spaces are increased performance
and simplified management. To take advantage of this advanced database
distribution, refer to 6.2.1, Database manipulation on page 123, where we
discuss in detail the conversion of a MySQL database structure to DB2.
Transaction-safe storage engines have commit and rollback capabilities and can
be recovered from a crash. Therefore, these storage engines guarantee the
Atomicity, Consistency, Isolation, and Durability (ACID properties) of a database.
Non-transaction-safe storage engines are faster and require less memory and
disk space. However, non-transaction-safe storage engines do not guarantee the
database is left in a consistent state, because they do not support ACID
properties.
In DB2, all tables support transactions. Therefore, tables that are managed by
MySQL InnoDB, MyISAM, ARCHIVE, and CSV storage engines can all be
converted to regular DB2 tables. We discuss the details of converting MySQL
tables to DB2 tables in 6.2.2, Table manipulation on page 128.
MySQL supports transactions with the InnoDB and NDB transactional storage
engines. Both engines provide full ACID compliance. In contrast, all tables in
DB2 support transactions and provide full ACID compliance.
Referential integrity
Referential integrity is the state in which all values of all foreign keys are
valid. The relationship between certain rows of the DEPT and EMP tables, as
shown in Figure 2-3 on page 49, illustrates referential integrity concepts.
Setup programs
The rest of the programs are used for setup operations during the installation
or upgrade of the MySQL server:
mysql_install_db
mysql_fix_privilege_tables
make_binary_distribution
mysqlbug
comp_err
make_win_bin_dist
mysql_secure_installation
mysql_tzinfo_to_sql
mysql_upgrade
The first approach is to connect the Java application using Java Database
Connectivity (JDBC) and the Connector/J, which is officially supported by
MySQL. This connector is written in Java and does not use the C client library to
implement the client/server communication protocol.
The third approach is to connect using Connector/ODBC for applications that use
ODBC standards. This connector uses the embedded C client libraries to
implement the client/server communication protocol. This approach is officially
supported by MySQL.
The fourth approach is to use the third-party APIs that are provided by
programming languages, such as PHP, Perl, or Python. These APIs will use the
embedded C client libraries to implement the client/server communication
protocol. The third-party APIs are not officially supported by MySQL. The
following list shows several of the APIs that are available for MySQL:
C API
The API to connect from C programs to a MySQL database. You can obtain
more details at this Web site:
http://dev.mysql.com/doc/refman/5.1/en/c.html
DB2 supports the most frequently used MySQL programming languages. These
languages include PHP, Java, Perl, Python, Ruby, C#, C/C++, and Visual
Basic. With proper planning and knowledge, you can convert these applications
to DB2 with minimal effort. In Chapter 8, Application conversion on page 205,
we discuss and provide examples for converting applications from MySQL to
DB2.
Based on the application profile, you can plan the software and hardware needs
for the target system. The planning stage is also a good time to consider many of
the rich functions and features of the DB2 product family, which can increase
productivity and reduce maintenance costs. While the most common reason for
a one-to-one conversion from MySQL to DB2 is to use the advanced features of
DB2 that MySQL does not provide, dramatically reduced transaction run times
are another benefit. In various projects, the advanced DB2 optimizer showed
improvements of approximately 20x under real production loads.
You must know which skills are required and which resources are available for
the conversion project. IBM provides a variety of DB2 courses to help
IBM clients learn DB2 quickly. For more information, go to these Web sites:
http://www.ibm.com/developerworks/
http://www.ibm.com/industries/education/index.jsp
http://www.ibm.com/developerworks/data/bootcamps
A conversion assessment provides you with the overall picture of the conversion
tasks and efforts needed. From a conversion assessment, a conversion project
plan can be created to manage each conversion step.
To execute the typical conversion steps, various tools are available to help you
save time in your conversion project. IBM offers a no-charge conversion tool,
the IBM Data Movement Tool, for converting from various relational database
systems to DB2.
Experienced IBM specialists can support you during any phase of the conversion
project with special conversion offerings that are provided worldwide by IBM.
There is no cost to join PartnerWorld; you can find more information and register
by visiting the following link:
http://www.ibm.com/partnerworld
You can obtain more information about the DB2 conversion team at the Software
Migration Project Office (SMPO) Web site:
http://www.ibm.com/software/solutions/softwaremigration/dbmigteam.html
You can obtain the most up-to-date details about current offerings, success
stories, literature, and other information about DB2 Migrate Now! at this Web
site:
http://www.ibm.com/software/data/db2/migration/
3.1.3 Education
DB2 provides an easy-to-use, feature-rich environment. Therefore, it is important
that those individuals involved in the conversion process are appropriately
trained to take full advantage of its offerings.
For further information regarding DB2 training, visit the DB2 Web site:
http://www.ibm.com/software/data/education/
This DB2 for Linux, UNIX, and Windows conversion Web site can help you find
the information that you need to port an application and its data from other
database management systems to DB2. The porting and conversion steps,
which are described in this chapter, appear in the order that they are commonly
performed. In addition to the technical information that is available at this site,
IBM clients and IBM Business Partners need to check out the Information for IBM
clients and Information for IBM partners links:
http://www.ibm.com/developerworks/db2/zones/porting/partners.html
http://www.ibm.com/developerworks/db2/zones/porting/customers.html
Here, you will find additional links and information regarding assistance or
available resources for your porting project:
http://www.ibm.com/developerworks/db2/zones/porting/index.html
You need to understand how your application works and what resources are
needed. There are probably many characteristics within your application that will
influence system planning and the scope of the conversion effort.
With DB2 support for various platforms and operating systems, such as Linux,
Windows, and AIX, platform limitations should not be an issue. You can select
the system on which the converted application will run based on the nature of
the application and future enhancement requirements.
As shown in Figure 3-1, the target system can be the same system as the source
system, or another system with a separate operating system and hardware. You
might even want to make your database server a separate machine from the
machine on which your application runs (creating a two-tier architecture).
In either case, check if the hardware of your target system meets the minimum
requirements, paying particular attention to the following areas:
Operating system
DB2
Application
Data
Conversion tools (if used)
3.3.1 Software
You must determine which software must be installed on your target system,
including:
Operating system (Linux, AIX, UNIX, Windows, or others)
DB2 version
Application to be converted
Conversion tools (if used and installed on the target system)
Any software on your source system that is required by your application to run
properly, including but not limited to:
HTTP server
Web application server
Development environment
Additional software (such as Lightweight Directory Access Protocol
(LDAP) or others)
Be sure to have the latest versions and fix packs of the planned products. Ensure
that the chosen operating system supports the chosen software.
3.3.2 Hardware
When starting the conversion process, it is important to have a target platform
that meets the minimum requirements of all the software that will be installed on
it. Check the supported hardware platforms, depending on the chosen software.
Virtual images are a great option for enablement environments. Virtual
images reduce the hardware impact and allow simple test environments for the
enablement effort without incurring additional costs. Keep in mind that the disk
space is still needed.
When deciding to use a tool, be sure that it fulfills the requirements that are
appropriate for your platform.
The tool can be used to extract the data from the source database into flat files,
to generate scripts to create the database objects, and to import the data using
the DB2 LOAD utility. At the time that this book was written, the Data Movement
Tool supported the following database objects for a MySQL to DB2 conversion:
tables, constraints, indexes, primary keys, and foreign keys (with InnoDB).
At the time that this book was written, the Data Movement Tool did not support
the following database objects for a MySQL database conversion: views,
procedures, functions, triggers, and packages.
The IBM Data Movement Tool is available with both a graphical user interface
(GUI) and a command-line interface:
Graphical user interface
The GUI offers the IBM Data Movement Tool conversion functionality by
using a Java interface. It provides an easy-to-use interface for beginners.
For our conversion scenario, we use the new IBM Data Movement Tool to
convert database objects and data from MySQL to DB2. If you want to download
the IBM Data Movement Tool or receive more information about it, refer to:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
index.html
http://www.ispirer.com/
For more information about the installation of DB2 and the IBM Data Movement
Tool, refer to Chapter 5, Installation on page 87.
Tools, such as IBM Data Movement Tool, can perform the majority of this task
automatically. Regardless of the tool used, verify the output results. This step
can also be performed manually; however, you need to make sure to cover all of
the database objects.
If you plan to change the logical model of your database structure to enhance
your application and take advantage of DB2 functions and features, the DDL
needs to be modified in this step.
These actions are supported by tools, such as the IBM Data Movement Tool.
Make sure to check that all database objects, such as tables, keys, indexes, and
functions, are created successfully.
In this step, you create scripts to create the required users and grant them
access privileges to the DB2 database and objects based on the MySQL user
data.
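As a sketch of this step, the following shell fragment generates a DB2 GRANT script from a simple user list. The user names, privilege levels, and the inventory.item table are hypothetical, introduced only for illustration; the GRANT statements use standard DB2 database and table authorities (CONNECT, SELECT, DBADM):

```shell
# Hypothetical user list derived from the MySQL user data:
# one "user privilege-level" pair per line.
cat > users.txt <<'EOF'
appuser select
reporter select
admin dbadm
EOF

# Generate DB2 GRANT statements; inventory.item is an example table.
: > grants.sql
while read -r user level; do
  case "$level" in
    dbadm)
      echo "GRANT DBADM ON DATABASE TO USER $user;" >> grants.sql ;;
    select)
      echo "GRANT CONNECT ON DATABASE TO USER $user;" >> grants.sql
      echo "GRANT SELECT ON TABLE inventory.item TO USER $user;" >> grants.sql ;;
  esac
done < users.txt

cat grants.sql
```

The generated grants.sql file could then be run with the DB2 command line processor, for example with db2 -tvf grants.sql, after connecting to the database as a user with sufficient authority.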
All of the steps that are described in this section are performed on the MySQL
sample database and are explained in great detail in Chapter 7, Data
conversion on page 167.
The extent to which you have to change the application code depends on the
database interface that is used in the source application.
You might find MySQL behaviors that are not natively supported in DB2, so a
concept must be established to allow the application to behave in the same way
as it did prior to conversion.
Database interface
Regardless of the interface that is used between the application and the
database, the database access code must be changed, because the database
server has been modified.
If standardized interfaces, such as ODBC or JDBC are used, the changes will be
less significant than if the application uses the native API of the database
product.
Handling conditions
Depending on the implementation of your application, there might be changes in
the condition handling part of the application.
Additional considerations
DB2 offers rich, robust functions that you can take advantage of in your
applications. The following list shows several of these features, which you might
want to consider using in your application and which differ from MySQL:
Concurrency
Locking
Transaction isolation levels
Logging
National language support
XML support
The steps listed in this section are performed with various sample application
code and explained in great detail in Chapter 8, Application conversion on
page 205.
Every database has its own method for backup and recovery, because these
tasks are common, vital tasks in database administration. The database must be
backed up regularly, and the data retention period must be defined based on the
business requirements.
If you have backup and recovery tasks defined on the source system, you
probably want to convert these tasks, as well. Be sure to port any existing scripts
for backup tasks to support DB2.
Both the database backup and recovery functions must be tested to ensure a
safe environment for your application.
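A minimal sketch of such a ported backup script follows. The database name INVENTDB and the backup path /db2backup are examples, not values from the scenario, and the script defaults to a dry-run mode that records the DB2 CLP commands it would issue, so the plan can be reviewed (and the sketch executed) without a running instance:

```shell
# Nightly DB2 backup wrapper sketch. With DRY_RUN=1 (the default here),
# the CLP commands are recorded in backup_plan.txt instead of invoking db2.
DB=INVENTDB            # example database name
BACKUP_DIR=/db2backup  # example backup target
DRY_RUN=${DRY_RUN:-1}
: > backup_plan.txt

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "db2 $*" >> backup_plan.txt
  else
    db2 "$@"
  fi
}

run "CONNECT TO $DB"
run "BACKUP DATABASE $DB TO $BACKUP_DIR COMPRESS"
run "CONNECT RESET"
cat backup_plan.txt
```

Running the script with DRY_RUN=0 on a configured instance would execute the same commands through the db2 CLP; scheduling it with cron gives a simple regular backup task.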
Log files
DB2 logs differently than MySQL, so database administrators must be aware of
the logging level that can be set, where log information is stored, and how to read
these logs.
If you succeed with testing, you can then proceed to tuning your database and
application in order to speed up your application.
Checking data
Aside from the basic data checks that must be performed when exporting and
importing data, checking that your application handles your data correctly and
manipulates the expected fields on inserts, updates, or deletes is vital.
Data checking can be done manually, or a script can be used to check the
data.
Troubleshooting
Whenever the conversion leads to a problem, such as incorrect data or
unexpected application behavior, you have to determine the problem in order to
fix it.
You must understand error messages from the application, as well as DB2 error
messages. The troubleshooting process includes studying the DB2 log files.
See the DB2 technical support Web site for help with specific problems:
http://www.ibm.com/software/data/support/
Basic tuning
When your new system is working perfectly, you might want to tune it for even
better performance. With the correct database configurations, and hints from
DB2 tuning tools, you can speed up your queries quite easily.
We discuss testing and initial tuning of a DB2 database in detail in Chapter 10,
Testing and tuning on page 321.
With User Login, a registered user uses their user ID and password to log in. The
application verifies the user name, password, and user permissions against the
registered users in the database. The lowest level of permissions allows the user
to view, edit, and create inventory and service requests. If the user has
permissions to view, create, and edit groups and users, the user is given extra
functionality to do so.
From New Users, a new user can create an account. Completing the registration
form (Figure 4-3 on page 78) creates a new user account in the application. The
application verifies that the user name provided by the user is unique. By default,
new users have the lowest level of permissions. A user who is allowed to edit
groups can add a user to a group with more than default-level permissions.
Using Forget Password, a user can reset the user's account password by
entering the user's first name, last name, and user name, as shown in Figure 4-4
on page 79. The new password will be displayed to the user.
When successfully logged in, the management options are presented to the
user. A set of management options is displayed and can vary, depending on the
type of permissions held by the logged-in user. Figure 4-5 shows the welcome
page options for a user with the highest level of permissions.
Figure 4-5 Welcome menu for the administration login of the sample application
Using the View/Edit Account Info option, users can view account details, as
shown in Figure 4-6 on page 80. Users can update their details by editing the
fields and submitting the form.
With Add Inventory, the user can associate new inventory with their user
account. Figure 4-7 shows a typical completed Add New Inventory form.
Using View/Edit Inventory List, users have the ability to view their assigned
inventory and other users' inventory by owner, location, inventory type, or
service created against the inventory (Figure 4-8 on page 81).
Using Create Service Ticket (Figure 4-9), users can open service tickets against
their assigned inventory.
With the View Group Users option, users with administration permissions can
view all users within a specific group. From this page, the user can update any
given user account by selecting Edit. Figure 4-11 on page 83 shows a user
viewing the manager group user details.
Using the Create/Edit Group option (Figure 4-12), a user with administration
permissions can create and edit groups.
For detailed table information, see Figure 4-13 on page 85. We discuss the data
type conversion between MySQL and DB2 in detail in Chapter 6, Database
conversion on page 115.
The second VMware image, the destination server, has the following software
installed:
SUSE Linux 10 SP2
DB2 9.7 Express-C
Apache 2.0
PHP 5.3.0
IBM Data Movement Tool
For more information about the VMware workstation and working with VMware
images, go to this Web site:
http://vmware.com/
DB2 9.7 Express-C for Linux, UNIX, and Windows can be downloaded from this
Web site:
http://www.ibm.com/software/data/db2/express/
The IBM Data Movement Tool is used to simplify and greatly decrease the time
that it takes to convert from MySQL to DB2. This tool is available at no charge
from IBM at the following URL:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
index.html
With the IBM Data Movement Tool, database objects, such as tables and data
types, as well as the data itself, can be converted automatically into equivalent
DB2 database objects.
We discuss the installation and configuration of DB2 and the IBM Data
Movement Tool in the next chapter.
Chapter 5. Installation
In this chapter, we discuss the target system environment setup. For the
database server, we guide you through the installation process of DB2 9.7 for
Linux, including the hardware and software prerequisites. The application server
has to be examined to ensure that the existing software has the proper DB2
support. If this is a completely new system setup, make sure that all the required
software is included in the installation list. Furthermore, we describe the
download and steps required to set up the IBM Data Movement Tool.
Hardware requirements
DB2 products are supported on the following hardware:
HP-UX
Itanium-based HP Integrity Series Systems
Linux
x86 (Intel Pentium, Intel Xeon, and AMD) 32-bit Intel and AMD
processors
x64 (64-bit AMD64 and Intel EM64T processors)
POWER (IBM eServer OpenPower, iSeries, pSeries, System i,
System p, and POWER Systems that support Linux)
System z or System z9
Solaris
UltraSPARC or SPARC64 processors
Solaris x64 (Intel 64 or AMD64)
Windows
All Intel and AMD processors capable of running the supported Windows
operating systems (32-bit and 64-bit base systems)
For more information about DB2 9.7 system requirements and the system
requirements of other DB2 releases, check this Web site:
http://www.ibm.com/software/data/db2/9/sysreqs.html
When you install DB2 Enterprise Server Edition or Workgroup Server Edition
using the DB2 setup wizard, size estimates are dynamically provided by the
installation program based on the installation type and component selection.
If the space required for the installation type and components exceeds the space
found in the path specified, the setup program issues a warning about insufficient
space. The installation is allowed to continue.
On the Linux and UNIX operating systems, 2 GB of free space in the /tmp
directory is recommended.
Communication requirements
When using TCP/IP as the communication protocol, no additional software is
needed for connectivity. For more supported communication protocols, refer to
the DB2 manual Quick Beginnings for DB2 Servers 9.5, GC10-4246, at this Web
site:
http://publibfp.boulder.ibm.com/epubs/pdf/c2358642.pdf
Or, you can visit the IBM DB2 Database for Linux, UNIX, and Windows
Information Center at this Web site:
https://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
For this particular project or any other conversion project in general, the DB2
Data Server Client is required. It provides libraries for application development. If
the application server and the database server are to be placed on the same
system, you can install both the DB2 server and Data Server Client in one step
by selecting the Custom installation type.
For this project, we perform the following steps to install DB2 9.7 on Linux:
1. Log on to Linux as a root user.
2. Download DB2 9.7 Express-C from this Web site:
http://www.ibm.com/software/data/db2/express/download.html
3. Save the tar file to the /usr/local/src directory.
Note: Starting in DB2 9.5, you can also perform a non-root installation of DB2.
Figure 5-2 DB2 setup launchpad
9. Select Install New under the option to install the server to launch the DB2
setup wizard.
10. Go to the Software License Agreement panel, and read the Software License
Agreement, as shown in Figure 5-3 on page 95. If you agree with the
agreement, select Accept, and click Next.
11. In the Installation Type panel, click Custom, as shown in Figure 5-4 on
page 96, and click Next.
Figure 5-4 DB2 installation type panel
12. Click Next again to get to the Features panel. Select the Application
Development tools option, as shown in Figure 5-5 on page 97, and click
Next.
13. In the Languages panel, choose the type of languages to install, and click
Next.
14. In the Documentation panel, choose where to access the DB2 Information
Center. You can choose to install it as part of this process, or you can access
the online DB2 Information Center at any time. Click Next.
15. In the Database Administration Server (DAS) panel, enter the DAS user
information. Linux group and user accounts do not have to be created prior to
this step; DB2 will create the required Linux system group and user
automatically. For the example installation, we use the default name dasusr1
and choose a password for this user, as shown in Figure 5-6 on page 98, and
click Next.
Figure 5-6 DB2 setup administration server panel
16. In the Instance setup panel, you can choose whether you want to set up an
instance during the DB2 installation. By selecting Create a DB2 instance and
clicking Next, we let DB2 create the instance for us.
17. Enter the instance owner information in the Instance Owner panel. Linux
group and user accounts do not have to be created prior to this step; DB2 will
create the Linux group and user. For the example installation, we use the
default db2inst1 settings and create a password for this user, as shown in
Figure 5-7 on page 99, and click Next.
18. In the Fenced user panel, we let DB2 create the ID for us, as shown in
Figure 5-8 on page 100, and click Next.
Figure 5-8 DB2 set up DB2 fenced user
19. In the Instance Communication panel, configure the DB2 instance TCP/IP
communication. For the example installation, we use the default settings and
click Next.
20. In the DB2 Tools Catalog Configuration panel, we create a local tools catalog
by selecting the db2inst1 instance. We also use the default local database
TOOLSDB and the default schema SYSTOOLS, and then click Next. Figure 5-9 on
page 101 shows the DB2 Tools Catalog Configuration panel.
21. At the end, the setup wizard provides a summary of the installation options
selected. Review it and click Finish to start the installation.
If the log file option is not specified, the db2setup.log file and db2setup.err file
are stored in the /tmp directory on a Linux operating system. Example 5-1 shows
an example of the db2setup.log file.
DB2 Setup log file started at: Fri Jul 24 15:20:15 2009 EDT
============================================================
If you have chosen not to create instances during the DB2 installation or need to
add another instance after installation, there are two options to create instances
manually:
The db2isetup command starts a graphical tool for creating and configuring
instances, as shown in Figure 5-11 on page 103. It allows you to specify all
the required configuration parameters, such as the instance owner and
communication protocol, in an easy, guided fashion. The command can be
found in the /opt/ibm/db2/V9.7/instance directory on a Linux operating
system.
The command to create the DAS user is the dascrt command. Use it in the
following way:
dascrt -u dasadm1
As part of the GUI instance creation, the installer suggests three users identified
as db2inst1, db2fenc1, and dasadm1. These are default names for the instance
users. If you do not want to use the default names, you can choose your own
names by creating the system user IDs and groups ahead of time and inputting
these parameters in the wizard when prompted. The installer will also add the
following entry to the /etc/services file in order to allow communication from
DB2 clients:
db2c_db2inst1 50000/tcp
In this entry, db2c_db2inst1 indicates the service name, and 50000 indicates the
port number. DB2 supports multiple instances on one server installation to allow
for various environments, that is, test, production, development, and so on.
Subsequent instances can be created on the same server simply by using one of
the methods introduced here.
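You can verify that the listener entry was added with standard shell tools. The following sketch works on a temporary copy of the services file, so it is safe to run anywhere; the service name and port follow the example entry for the db2inst1 instance:

```shell
# Verify the DB2 connection service entry on a temporary copy of
# /etc/services (on a real system, grep /etc/services directly).
services=$(mktemp)
printf 'db2c_db2inst1\t50000/tcp\t# DB2 connection service\n' >> "$services"

# Look up the service entry for the instance
entry=$(grep '^db2c_db2inst1' "$services")
echo "$entry"
rm -f "$services"
```

If the entry is missing, remote clients cannot connect over TCP/IP until the service name and port are registered and the instance is restarted.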
To access a remote DB2 database, you can either run the easy-to-use graphical
Configuration Assistant tool or use the catalog commands to provide entries for
the following three directories:
NODE directory: A list of remote DB2 instances
ADMIN NODE directory: A list of remote DB2 Administration servers
DATABASE directory: A list of databases
To use the command-line tools, first catalog the DB2 node. The DB2 node is the
server where the database resides. Then, catalog the database. See
Example 5-2.
--
-- catalog the remote DB2 node where the database resides
--
CATALOG TCPIP NODE db2node REMOTE SERVER1 SERVER 50000
--
-- catalog the DAS on the remote node
--
CATALOG ADMIN TCPIP NODE db2das REMOTE SERVER1
--
-- catalog database
--
CATALOG DATABASE invent AS inventdb AT NODE db2node
After installing your DB2 Client, configure it to access a remote DB2 server using
the Configuration Assistant. The graphical interface can be launched through the
DB2 Control Center or run on its own by using the command db2ca. For more
details, refer to the IBM DB2 manual Quick Beginnings for DB2 Clients,
GC10-4242.
The sample application in this book is written in PHP with Apache2. Therefore,
the next two sections discuss how to prepare the target system for Apache2 and
PHP.
Installation steps
The following steps explain how we install Apache2 on SUSE 10 SP2:
1. Download the Apache package.
The source code for the Apache package is available at this Web site:
http://httpd.apache.org/download.cgi
In our conversion scenario, we use Version 2.2.11 of Apache, and the
package that we download is httpd-2.2.11.tar.gz.
2. Change the working directory.
Use the cd command to make your working directory the directory to which
you download the tar file:
db2server: # cd /usr/local/src/
3. Uncompress the source package.
The following command decompresses the contents of the source package
into a directory called httpd-2.2.11:
db2server:/usr/local/src # tar -xzf httpd-2.2.11.tar.gz
4. Change the working directory.
Use the cd command to make the newly created directory your working
directory:
db2server:/usr/local/src # cd httpd-2.2.11/
Note: All commands and procedure descriptions that are provided in this
section refer to SUSE Linux Enterprise Server 10 SP2. The commands and
procedures can vary for other versions or Linux distributions.
Installation steps
In order to use the IBM DB2 libraries, you must recompile PHP. These
installation steps explain how to update an existing PHP installation and how to
install PHP from scratch:
1. Back up the httpd.conf and php.ini files.
To ensure that the configuration files for Apache and PHP are not lost when
installing the new PHP version, we recommend backing up
/etc/httpd/httpd.conf and, if a previous version of PHP is installed,
/etc/php.ini.
2. Download the PHP package.
The source code for PHP is available at this Web site:
http://www.php.net/downloads.php
For the IBM Data Movement user guide and a no charge download of the IBM
Data Movement Tool, visit this Web site:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
Software requirements
We describe the software that is required to use the IBM Data Movement Tool in
this topic.
General requirements
In general, you need the following software:
Latest version of the IBM Data Movement Tool.
MySQL: Ensure that MySQL is running (usually the daemon can be started
with the command mysqld_safe & from an account with root permissions).
DB2 V9.7 installed on the target server. Use the command
db2start to ensure that the DB2 server is up and running.
Java Version 1.5 or higher must be installed on your target server. To verify
your current Java version, run the java -version command. By default, Java
is installed as part of DB2 for Linux, UNIX, and Windows in
<install_dir>\SQLLIB\java\jdk (Windows) or /opt/ibm/db2/V9.7/java/jdk
(Linux).
You must have the Java Database Connectivity (JDBC) drivers for the
MySQL source database (mysql-connector-java-5.1.8-bin.jar or the latest
driver) and the DB2 target database (db2jcc.jar, db2jcc_license_cu.jar, or
db2jcc4.jar, db2jcc4_license_cu.jar) installed on the server with the IBM Data
Movement Tool.
For the purpose of this document, we have used the IBM Data Movement Tool
with DB2 Version 9.7 and MySQL Version 5.1.36. We recommend that you install
the IBM Data Movement Tool on the DB2 server side to achieve the best data
movement performance.
In this chapter, we discuss the process of converting the database structure from
the MySQL 5.1 server to the DB2 9.7 server. Before this discussion, we must
evaluate the differences between the MySQL database structure and the DB2
database structure.
In the first section, we discuss data type mapping, taking a closer look at MySQL
and DB2 data types and the differences between them. Following this section,
we provide Data Definition Language (DDL) differences, providing a syntax
comparison between MySQL and DB2.
Every column in the database table has an associated data type, which
determines the values that this column can contain. DB2 supports both built-in
data types and user-defined data types (UDT) whereas MySQL only supports
built-in data types. Figure 6-1 shows the built-in data types of MySQL.
Figure 6-2 on page 117 shows the built-in data types that are supported by DB2.
MySQL data types are grouped into three categories and can be converted to
DB2 data types following these suggested rules:
Numeric type:
TINYINT
A tiny integer is a single-byte integer in MySQL that can be mapped to a
DB2 SMALLINT for similar functionality.
SMALLINT
A small integer is a two-byte integer with a precision of five digits. With
MySQL, the range of signed small integers is -32768 to 32767, making it
replaceable by DB2 SMALLINT. For unsigned MySQL small integers, the
range is 0 to 65535, making it replaceable by DB2 INTEGER.
BIT, BOOL, and BOOLEAN
These types are synonyms for TINYINT(1). Instead of BIT, BOOL, and
BOOLEAN, DB2 uses SMALLINT with check constraint.
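For example, a MySQL BOOLEAN column can be declared in DB2 as a SMALLINT
restricted to 0 and 1 by a check constraint. The following is a minimal sketch
with illustrative table and column names:

```sql
-- DB2: emulate a MySQL BOOLEAN column (names are illustrative)
CREATE TABLE part_flags (
  part_id   INTEGER  NOT NULL,
  in_stock  SMALLINT NOT NULL WITH DEFAULT 0
            CHECK (in_stock IN (0, 1))
);
```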
Table 6-1 shows the default data type mappings between the two databases that
are used by the IBM Data Movement Tool. We use this mapping for our sample
conversion.
Table 6-1   Default data type mapping between MySQL and DB2

MySQL data type            DB2 data type
TINYINT                    SMALLINT
TINYINT UNSIGNED           SMALLINT
SMALLINT                   SMALLINT
SMALLINT UNSIGNED          INTEGER (Optional: SMALLINT)
BIT                        SMALLINT
BOOLEAN                    SMALLINT
MEDIUMINT                  INTEGER
MEDIUMINT UNSIGNED         INTEGER
INTEGER/INT                INTEGER
INTEGER/INT UNSIGNED       BIGINT (Optional: INTEGER)
BIGINT                     BIGINT
BIGINT UNSIGNED            DECIMAL (Optional: BIGINT)
FLOAT                      DOUBLE
FLOAT UNSIGNED             DOUBLE
DOUBLE                     DOUBLE
DOUBLE UNSIGNED            DECIMAL (Optional: DOUBLE)
REAL                       DOUBLE
REAL UNSIGNED              DOUBLE
NUMERIC, DECIMAL, DEC      DECIMAL(31,0)
NUMERIC(P), NUMERIC(P,0),
DECIMAL(P), DECIMAL(P,0),
DEC(P), DEC(P,0)           DECIMAL(min(P,31),0)
DECIMAL UNSIGNED           DECIMAL
DATE                       DATE
DATETIME                   TIMESTAMP (Optional: TIME)
TIMESTAMP                  TIMESTAMP
TIME                       TIME
YEAR                       CHAR(4) (Optional: SMALLINT)
CHAR                       CHAR
VARCHAR                    VARCHAR
BINARY                     CHAR(n) FOR BIT DATA
TINYBLOB                   BLOB(255) (Optional: VARCHAR(255))
TINYTEXT                   CLOB(255)
BLOB                       BLOB(65535)
TEXT                       CLOB(65535)
MEDIUMBLOB                 BLOB(16777215)
MEDIUMTEXT                 CLOB(16777215)
LONGBLOB                   BLOB(2GB)
LONGTEXT                   CLOB(2GB)
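To illustrate how these mappings are applied, the following sketch (with
hypothetical table and column names) shows a MySQL table definition next to
the DB2 definition derived from Table 6-1:

```sql
-- MySQL source table (hypothetical)
CREATE TABLE item (
  id     INT UNSIGNED,
  qty    SMALLINT UNSIGNED,
  price  DOUBLE,
  added  DATETIME,
  notes  TEXT
);

-- DB2 target table, following the default mappings in Table 6-1
CREATE TABLE item (
  id     BIGINT,        -- INT UNSIGNED      -> BIGINT
  qty    INTEGER,       -- SMALLINT UNSIGNED -> INTEGER
  price  DOUBLE,        -- DOUBLE            -> DOUBLE
  added  TIMESTAMP,     -- DATETIME          -> TIMESTAMP
  notes  CLOB(65535)    -- TEXT              -> CLOB(65535)
);
```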
The data definition language is a set of SQL statements. You can use these
statements for a variety of tasks, including the creation or deletion of databases
and database objects (tables, views, and indexes), definitions of column types,
and definitions of referential integrity rules.
On Linux machines, MySQL can use the file system mounting options. In most
cases, however, MySQL uses symbolic links. Start by creating a directory on a
file system where you have extra space:
bash> cd <file system with space>
bash> mkdir mysqldata
Then, create a symbolic link to the newly created directory using these
statements:
bash> cd /var/lib/mysql
bash> ln -sf <file system with space>/mysqldata data
Then, create a database from the mysql prompt using this statement:
mysql>CREATE DATABASE inventory
MySQL users can distribute tables using symbolic linking or the data and index
directory options of the CREATE TABLE statement.
MySQL stores data in single files, multiple files, or table spaces, depending on
the table type being used. Figure 6-3 shows an example of storage engines that
fall under one of these three types.
The tables on the left side of the diagram are managed by the MyISAM storage
engine. For MyISAM tables, MySQL creates a .MYD file for data and a .MYI file to
store indexes, so each table keeps all of its data in a single data file. The
tables in the middle of the diagram are managed by the Merge storage engine.
With Merge tables, the .MRG file contains the names of the underlying tables to
be used, and a .FRM file holds the table definition. A Merge table combines
several tables, each of these tables having its own data file; as a whole, a
Merge table therefore uses multiple data files.
The tables on the right side of the diagram are managed by the InnoDB storage
engine. For InnoDB tables, MySQL stores data in a table space identified by the
path parameter, innodb_data_file_path. Multiple data files can be used for
InnoDB.
MySQL also has a feature called user-defined partitioning, which allows for table
data to be horizontally split across file systems depending on a specific set of
data defined by the user. For each partition that is created, there is a
corresponding .MYD file for data and .MYI file for the index.
In contrast to MySQL, DB2 stores everything in table spaces. Table spaces are
logical representations of physical containers on the file system. DB2 uses a
better approach for the logical and physical distribution of the database and the
database elements in different sectors, as shown in Figure 6-4 on page 125.
After completing the conversion of your database from MySQL to DB2, you can
use these features to enhance the performance of your application.
Instance
A DB2 server can have more than one instance. One instance can have multiple
databases. One instance per application database has the advantage that the
application support and the database support do not have to coordinate with one
another to take the database or the instance offline. For conversion purposes, a
single instance can be created for your database application environment using
the db2icrt command:
bash> db2icrt -u db2fenc1 db2inst1
Database
A database represents your data as a collection of data in a structured fashion. It
includes a set of system catalog tables that describes the logical and physical
structure of the data, a configuration file containing the environment parameter
values that are used by the database, and a recovery log with ongoing
transactions and transactions that can be archived.
Database partition
A database partition is part of a database, containing its own data, indexes,
configuration files, and transaction logs. A database partition is sometimes called
a node or a database node. A partitioned database environment is a database
installation that supports the distribution of data across database partitions.
Creating a database
The database in DB2 can be created simply by issuing the following command:
db2>CREATE DATABASE invent
This command generates a new database in the default path. It creates the three
initial table spaces and the system catalog tables, and it creates the recovery
log.
You can use the CREATE DATABASE statement with options to personalize the
database and take advantage of DB2 advanced features, such as automatic
storage, which simplifies the storage management for table spaces, as shown in
Example 6-1. When using automatic storage, it is possible to specify a group of
storage devices for DB2 to use for your database. This specification allows DB2
to allocate and grow this specified space as table spaces are created and
populated. Automatic storage is turned on by default when creating a database.
Dropping a database
In MySQL, you can drop the database using this statement:
mysql> DROP DATABASE [if exists] inventory
This statement removes all of the database files (.BAK, .DAT, .HSH, .ISD, .ISM,
.MRG, .MYD, .MYI, .db, and .frm) from your file system.
In DB2, the equivalent is the DROP DATABASE command:
db2> DROP DATABASE invent
This command deletes the database contents and all log files for the database,
uncatalogs the database, and deletes the database subdirectory.
Alter database
The MySQL ALTER DATABASE statement allows you to change the overall
characteristics of a database. For example, the CHARACTER SET clause changes
the default character set of the database.
In DB2, you can use the UPDATE DATABASE CONFIGURATION and UPDATE
DATABASE MANAGER CONFIGURATION commands to set the database and
database manager configuration parameters. These commands allow
modification of various configuration parameters, such as log file size, log file
path, heap size, cache size, and many others. You can take advantage of the
DB2 autonomic features by enabling the automatic maintenance and Self-Tuning
Memory Manager features. Automatic maintenance allows for scheduling
database backups, keeping statistics current, and reorganizing tables and
indexes. The Self-Tuning Memory Manager provides constant tuning of your
database without the need for DBA intervention. The following examples show
how these commands can be set in DB2:
db2> UPDATE DATABASE MANAGER CONFIGURATION using diaglevel 3
db2> UPDATE DATABASE CONFIGURATION for invent
using auto_maint on
auto_tbl_maint on
auto_runstats on
auto_reorg on
self_tuning_mem on
This example does not show all of the parameters that are available in DB2. For
more information about how to set up automatic maintenance and Self-Tuning
Memory Manager, visit this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
In addition, you can use these commands to change the physical and logical
partitioning of a database and to allocate table spaces and paging
configurations.
Table space
A table space is a storage structure containing tables, indexes, large objects, and
long data. Table spaces reside in database partition groups and allow the
assignment of database location and table data directly onto containers. DB2
allows for two types of table spaces: System Managed Space (SMS), where the
operating system allocates and manages the space where the tables are stored,
and Data Managed Space (DMS), where the database administrator has the
ability to decide which devices or files to use and allows DB2 to manage this
space. Another option is to enable automatic storage for the table spaces. No
container definitions are needed in the latter case, because the DB2 database
manager assigns and manages the container automatically.
You can create more table spaces by using the following commands:
db2> CREATE REGULAR TABLESPACE tblsp1 PAGESIZE 8192 MANAGED BY SYSTEM using
('/home/db2inst1/database/user8K') extentsize 8 prefetchsize 8 bufferpool
bp8k
db2> CREATE TABLESPACE tblsp2 MANAGED BY DATABASE using (device
'/dev/rhdisk6' 10000, device '/dev/rhdisk7' 10000, device '/dev/rhdisk8'
10000) overhead 12.67 transferrate 0.18
db2> CREATE TABLESPACE tblsp3 MANAGED BY AUTOMATIC STORAGE
Schema
A schema is an identifier, such as a user ID, that helps group tables and other
database objects. A schema can be owned by an individual, and the owner can
control access to the data and the objects within it. A schema is also an object in
the database. It can be created automatically when the first object in a schema is
created. A schema can also be created explicitly:
db2>CREATE SCHEMA inventschema AUTHORIZATION inventUser
MySQL tables
As shown in Figure 6-5 on page 129, MySQL supports two types of tables:
transaction-safe tables and non-transaction-safe tables. Transaction-safe tables
(managed by InnoDB or NDB storage engines) are crash safe and can take part
in transactions providing concurrency features that allow commit and rollback.
Alternatively, non-transaction-safe tables (managed by MyISAM, MEMORY,
MERGE, ARCHIVE, or CSV storage engines) are less safe but are much faster
and consume less space and memory.
Example 6-4 shows how to create a table using the MyISAM storage engine.
Example 6-5 is the DB2 conversion. Note the changes:
Changes in the data type according to data type mapping
Instead of AUTO_INCREMENT, GENERATED BY DEFAULT AS IDENTITY is used
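The identity conversion can be sketched as follows, with illustrative table and
column names:

```sql
-- MySQL: auto-incrementing key
CREATE TABLE parts (
  part_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);

-- DB2: identity column equivalent
CREATE TABLE parts (
  part_id INTEGER NOT NULL
          GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1)
          PRIMARY KEY
);
```

GENERATED BY DEFAULT allows explicit values to be supplied during data
movement; GENERATED ALWAYS can be used instead when DB2 should own the value
entirely.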
Example 6-6 is a MySQL table creation example using the ARCHIVE engine.
Example 6-7 is the DB2 conversion using row compression.
Example 6-8 on page 132 shows how to create a MySQL table with partitioning.
Example 6-9 on page 132 is the DB2 conversion; again, no major change is
required.
Alter table
ALTER TABLE is a statement that is used to modify one or more properties of a
table. The syntax of the ALTER TABLE statement for MySQL and DB2 is quite
similar and is shown in Example 6-10.
db2>ALTER TABLE partsales alter column status set data type varchar(20)
ALTER TABLE in DB2 now supports dropping columns. DB2 drops columns
using a temporary table. The syntax of ALTER TABLE for MySQL and DB2 is
similar, as shown in Example 6-11.
Drop table
Tables can easily be deleted from the database by issuing the DROP TABLE
statement as shown.
For MySQL:
DROP [TEMPORARY] TABLE [IF EXISTS] tbl_name [, tbl_name,...] [RESTRICT
|CASCADE]
DB2 uses the update-able UNION ALL view to achieve the merge table
feature. UNION ALL views are commonly used for logically combining
different but semantically related tables. The UNION ALL view is also used for
unification of like tables for better performance, manageability, and
integrating federated data sources.
The following example shows using the UNION ALL command for views:
db2>CREATE VIEW UNIONVIEW as SELECT * FROM table1 UNION ALL SELECT * FROM
table2
MEMORY table
A MEMORY table uses hash indexes and is always stored in memory. MEMORY
tables are fast but are not crash safe. When MySQL crashes or has a
scheduled reboot, the MEMORY table definition still exists after the reboot,
but the data is lost.
A MySQL Memory table can be created using the following command:
mysql> CREATE TABLE memtable ENGINE=MEMORY SELECT * FROM table1;
MySQL MEMORY tables can be converted to DB2 as temporary tables,
materialized query tables, or indexes depending upon your requirements.
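For example, a MEMORY table converted to a DB2 declared global temporary table
might look like the following sketch (a user temporary table space must exist;
the names are illustrative):

```sql
-- Prerequisite (once per database), for example:
-- CREATE USER TEMPORARY TABLESPACE usertemp MANAGED BY AUTOMATIC STORAGE

-- Session-scoped table with the same shape as table1, kept across commits
DECLARE GLOBAL TEMPORARY TABLE memtable
  AS (SELECT * FROM table1) DEFINITION ONLY
  ON COMMIT PRESERVE ROWS
  NOT LOGGED;
```

Like a MEMORY table, the declared temporary table's contents are private to the
session and are discarded when the session ends.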
MySQL supports both single column indexes and multi-column indexes. MySQL
has five types of indexes:
Primary key
Unique
Non-unique
Fulltext
Spatial
DB2 supports all of the index types that are supported by MySQL with the same
terminology, allowing them to map directly during conversion.
Create index
The following example shows the MySQL CREATE INDEX syntax:
CREATE [ONLINE|OFFLINE] [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name
[index_type]
ON tbl_name (index_col_name,...)
index_col_name:
col_name [(length)] [ASC | DESC]
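The DB2 counterpart is very similar; the following is a sketch with illustrative
index, table, and column names:

```sql
-- DB2 CREATE INDEX (illustrative names)
db2> CREATE UNIQUE INDEX idx_partnum ON parts (part_number ASC)
```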
Drop index
This example shows the DROP INDEX statement for MySQL and DB2:
mysql> DROP INDEX index_name ON tbl_name;
db2> DROP INDEX index_name
There are two types of triggers: statement triggers and row triggers. Statement
triggers are executed in response to a single INSERT, UPDATE, or DELETE
statement. Row triggers are executed for each row that is affected by the
INSERT, UPDATE, or DELETE statement. There are many benefits to having
triggers. They allow you to log information about changes to a table, can be used
to validate an insert, can restrict access to specific data, or can make data
modifications and comparisons as a change occurs.
The following example shows the CREATE TRIGGER syntax for MySQL:
CREATE [DEFINER = { user | CURRENT_USER}]
TRIGGER trigger_name trigger_time trigger_event
ON tbl_name FOR EACH ROW trigger_stmt
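The DB2 shape is similar, with a REFERENCING clause naming the old and new row
values. The following row trigger is a sketch with illustrative table and column
names:

```sql
-- DB2 row trigger (illustrative): log quantity changes on parts
CREATE TRIGGER log_part_upd
  AFTER UPDATE ON parts
  REFERENCING OLD AS o NEW AS n
  FOR EACH ROW
  INSERT INTO parts_log (part_id, old_qty, new_qty)
  VALUES (n.part_id, o.qty, n.qty)
```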
Multiple servers
In certain cases, multiple MySQL servers are placed on the same machine.
Possible reasons include user management, testing, or potentially differentiating
applications. MySQL provides the option to run multiple servers on the same
machine by using several operating parameters.
There are several ways to configure a new server; we used this method:
bash> /path/to/mysqld --socket=file_name --port=port_number
DB2 also supports the creation of separate instances on the same machine with
a different DB2 version.
MySQL supports the access of tables in multiple databases using the same
connection. So, an application connected to MySQL can use two tables in
separate databases in a single statement. The queries in Example 6-15 show
how MySQL can use two tables in separate databases in a single statement.
+------+-------+------+------+
| col1 | col2 | col1 | col2 |
+------+-------+------+------+
| 1 | val 1 | 1 | new |
| 1 | val 1 | 2 | new |
+------+-------+------+------+
Figure 6-7 MySQL application using multiple DBs instead of multiple schema
The tables created in a particular schema can be accessed by using the fully
qualified table name, schema.table_name.
DB2 supports table spaces to establish the relationship between the physical
storage devices that are used by your database system and the logical
containers that are used to store data. Table spaces reside in database partition
groups. They allow you to assign the location of table data directly onto
containers.
Prior to the conversion of your database structure, you must create proper table
spaces of various sizes in DB2. Individual or multiple tables can then be
assigned to the table spaces. If the table space design is done effectively, it can
greatly increase performance. We discuss this topic in more detail in Chapter 10,
Testing and tuning on page 321.
List information
MySQL provides a show command to list the information about databases,
tables, columns, or status information about the server.
DB2 provides commands for getting information about instances, databases, table
spaces, and other objects. The DB2 system catalogs contain all of the necessary
information about tables, columns, indexes, and other objects. You can use the
describe and list commands to display database and table structure or use the
select statement to get the details of the table definition.
Primary keys and unique keys are treated similarly in MySQL and DB2.
However, MySQL currently only parses foreign key syntax in the CREATE
TABLE statements. MySQL does not use or store the information about foreign
keys, except in InnoDB tables, which support checking foreign key constraints,
including CASCADE, ON DELETE, and ON UPDATE.
DB2 provides full support for foreign keys. With the full referential integrity
functionality from DB2, your application can be released from the job of taking
care of the data integrity issues. Example 6-16 shows the creation and usage of
foreign key constraints in DB2.
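A foreign key of this kind can also be added after table creation with an ALTER
TABLE statement; the following is a minimal sketch with illustrative table and
column names:

```sql
-- Illustrative: enforce that every sale references an existing part
ALTER TABLE partsales
  ADD CONSTRAINT fk_partsales_part
  FOREIGN KEY (part_id)
  REFERENCES parts (part_id)
  ON DELETE CASCADE;
```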
We discuss foreign key creation in more detail in 6.5.2, Manual database object
conversion and enhancements on page 158.
You can convert a database schema in various ways; these are the three most
common approaches:
Automatic conversion using porting tools
Manual porting
Metadata transport
In general, all of these approaches use existing MySQL databases as input and
pass them through the following functional engine:
Capture database schema information from MySQL
Modify schema information for DB2
Create the database in DB2 with structure
You can reduce the downtime, eliminate human error, and cut back on person
hours and other resources associated with the database conversion by using the
following features found in the IBM Data Movement Tool:
Extract DDL statements from the MySQL source database.
Extract data from the MySQL source database.
Generate and run DB2 DDL conversion scripts.
Automate the conversion of database object definitions.
View and refine conversion scripts.
Efficiently implement converted objects using the deployment options
(interactive deployment or automated deployment).
Generate and run data movement scripts.
Track the status of object conversions and data movement, including error
messages, error location, and DDL change reports, using the detailed
conversion log file and report.
We describe the instructions to download, install, and use the IBM Data
Movement Tool in 5.3, IBM Data Movement Tool installation and usage on
page 112.
For the rest of this chapter, we discuss how to convert the database structure to
DB2. The next chapter discusses how to convert the data to DB2.
In 6.1, Data type mapping on page 116 and 6.2, Data definition language
differences on page 122, we demonstrated syntax and semantic differences
between MySQL and DB2 that might require manual conversion. We have also
discussed the creation, deletion, and alteration of various database objects, such
as the database, tables, index, and views, and how these objects are related
when converting from MySQL and DB2.
connect to invent;
disconnect invent;
To create the database, invoke this SQL script from the DB2 command-line
window or bash shell by using this command:
db2>@create-database.sql
bash>db2 -f create-database.sql
We then use the database object creation statements from the output of
mysqldump and the mysqlobjects.ddl file to create the db2objects.ddl script.
Change the DDL statements and data types based on the discussion in 6.1,
Data type mapping on page 116 and 6.2, Data definition language differences
on page 122. You can create any additional statements that are required.
Now, we execute the newly created DDL scripts from the bash shell, as shown in
Example 6-18.
The IBM Rational Rose Data Modeler Edition is a data design tool, which
integrates application design with database design and maps the data model
with the object model. It allows database designers, business analysts, and
developers to work together through a common language. Use this tool for
conversions that require your MySQL model file to be available for a conversion
to a DB2 physical model.
6.5.1 Converting database objects with the IBM Data Movement Tool
The following steps describe the process of converting the MySQL sample
Inventory database to DB2 using the IBM Data Movement Tool.
After the data extraction is complete, read through the result output files for the
status of the data movement, warnings, errors, and other potential issues. You
can click View Script/Output from the Extract/Deploy window to check the
generated scripts, DDL, data, or output log file.
Table 6-3 on page 153 shows the command scripts that are regenerated each
time that you run the tool in GUI mode. These scripts can also be issued in
console mode without the GUI, which is helpful when you want to embed this tool
as part of a batch process to accomplish an automated data movement.
IBMExtract.properties: This file contains all input parameters that you
specified through the GUI or as command-line input values. You can edit
this file manually to modify or correct parameters. Note that this
file will be overwritten each time that you run the GUI.
geninput: This script is the first data movement step, in which you
create an input file that contains the names of the tables to
move. You can edit this file manually to exclude tables that
you do not want to move.
unload: This script is the last step of data movement. This script
unloads data from the source database server to flat files.
DB2 LOAD scripts will be generated after running this script.
Note that if you did not choose to separate DDL from DATA,
the genddl content is included in the unload script.
rowcount: This file will be used after you have moved the data to perform
a sanity check of the row counts for tables in the source and
target database servers.
Choose an option that works best for your conversion project. The interactive
deployment mode is best for deploying database objects that contain triggers,
functions, or procedures. In most MySQL conversions, the first two options will
suffice, because the conversion of the database objects will be performed
outside of the IBM Data Movement tool. For our example, we select the
Interactive Deploy mode to better explain the conversion process by separating
the database object and the data deployment. Figure 6-11 on page 154 shows
the Interactive Deploy tab window.
From the Interactive Deploy window, you can perform a number of tasks:
Refresh the database object list.
Select the refresh button (circled in Figure 6-12 on page 155) to refresh the
list of database objects in the DB2 Objects view on the left side of the window.
Edit the object definition.
You can select the database object that you want to modify and edit in the
right panel, as shown in Figure 6-12 on page 155. To save and deploy
changes, deploy the object before selecting a new object. After deployment,
you can return to refine any objects that failed to deploy.
You can also edit the scripts that were extracted into the conversion directory.
To change the table definition, edit the db2tables.sql file:
db2inst1@db2server:/opt/ibm/IBMDataMovementTool/migr> vi db2tables.sql
Example 6-19 shows the converted DB2 table creation file.
Important: You must not reduce the size of any field, because it can cause
an error while converting the data.
Views
You must manually port views from MySQL to DB2. You can extract the MySQL
view definition from the MySQL database using the mysqldump utility or
selecting from the INFORMATION_SCHEMA.VIEWS table.
The syntax for a view in MySQL and DB2 is extremely similar, which makes it
simple to convert the DDL for this database object. Example 6-20 on page 159
shows the CREATE VIEW syntax for the MySQL views.
For our example, we alter the CREATE VIEW commands to match the DB2
syntax, as shown in Example 6-21 on page 159.
Trigger conversion
Triggers also require manual conversion to port them from MySQL to DB2. You
can extract the MySQL trigger definition from the MySQL database using the
mysqldump utility or selecting from the INFORMATION_SCHEMA.TRIGGERS
table.
Example 6-22 on page 161 shows the MySQL CREATE TRIGGER statement
and Example 6-24 on page 161 shows the DB2 CREATE TRIGGER statement.
In these examples, note the difference between new and old data values that are
declared and referenced. Refer to 6.2.4, Trigger manipulation on page 137 for
more details about syntax differences between MySQL and DB2.
Also, notice the change in the date function. You must find the equivalent
functions in DB2 when converting DDL and DML statements that contain built-in
functions. We discuss and compare MySQL and DB2 built-in functions and
operators in more detail in 8.1.10, Built-in functions and operators on page 221.
The syntax for a procedure in MySQL and DB2 is similar. Example 6-25 on
page 162 shows the CREATE PROCEDURE syntax for the MySQL procedure
and Example 6-26 on page 162 shows the CREATE PROCEDURE syntax for the
DB2 procedure. You might notice that the only difference between the statements
is the date function to determine the number of days between the open date and
the close date. For a description of MySQL built-in functions and DB2 equivalent
Foreign keys
Now, you can add any additional enhancement to your database using the DB2
features that might have not been supported in your existing MySQL storage
engine. One example is referential integrity, which is essential to the database by
ensuring consistency of data values between related columns in separate tables.
Referential integrity is usually maintained by using the primary key, unique key,
and foreign keys. MySQL only supports foreign keys in the InnoDB engine.
Primary and unique keys are successfully converted using the IBM Data
Movement Tool, but at the time of writing this book, foreign keys are not
supported for a MySQL conversion project. If you want to create foreign keys in
your database or to convert your foreign keys from an Innodb database, you
need to perform this task manually.
Now, we have completed the DDL modification. We execute the changed scripts
in Example 6-27 on page 162 to create the DB2 database and the objects, as
shown in Example 6-28.
In this chapter, we also discuss the differences in specific data formats and data
types and ways in which they can be converted from MySQL to DB2.
We also describe how user account management (user data, access rights, and
privileges) is implemented in MySQL and how this information can be ported to
implement secure database access within DB2.
Finally, we discuss in detail the steps that we performed to convert the data in
our sample project.
Database systems provide commands and tools for unloading and loading data.
In MySQL, the mysqldump tool is used to unload a database. DB2 provides the
LOAD and IMPORT commands for loading data from files into the database.
You have to be aware of the differences in how specific data types are
represented by various database systems. For example, the representation of
date and time values can differ in separate databases, and this representation
often depends on the local settings of the system.
If the source and the target databases use different formats, you must convert
the data either automatically using tools or manually. Otherwise, the loading tool
is not able to understand the data that it needs to load due to improper
formatting.
You must convert the binary data stored in binary large objects (BLOBs)
manually, because binary data cannot be exported to files in a text format.
The IBM Data Movement Tool automates the use of the MySQL SELECT
statements and the DB2 LOAD commands.
For a complete description of this tool, run mysqldump --help. Consider these
important command-line options:
--no-data
This option ensures that no data is extracted from the database, just the SQL
statements for creating the tables and indexes. Therefore, this option is used
for extracting DDL statements only.
--no-create-info
This option ensures that no SQL statements for creating the exported table
are extracted from the database. Therefore, it is used for exporting data only.
The output file containing the data can be loaded into a DB2 table at a later
time.
--tab=<outFilePath>
This option creates a text file with the DDL (<tablename>.sql) and a tab
separated text file with the data (<tablename>.txt) in the given path for each
specified table. This option works only when the utility is run on the same
machine as the MySQL daemon. If this option is not specified, INSERT
statements for each row are created.
Example 7-1 shows the usage and output of the mysqldump command using only
the --user and --password options. The output includes DDL statements for table
creation and INSERT statements to insert data into the table.
Example 7-1 Usage of mysqldump with only the --user and --password options
mysqlServer:~ # mysqldump --user root --password inventory severity
--
-- Table structure for table `severity`
--
--
-- Dumping data for table `severity`
--
UNLOCK TABLES;
Example 7-2 shows the usage and output of the mysqldump command with the
--no-create-info option, but without the --tab option. The output has only INSERT
statements for each row to insert data into the table.
Example 7-2 Usage of mysqldump with the --no-create-info but without the --tab option
mysqlServer:~# mysqldump --no-create-info --user root --password inventory severity
--
-- Dumping data for table `severity`
--
UNLOCK TABLES;
Example 7-3 shows the usage and output of the mysqldump command with the
--no-create-info and the --tab options. This command outputs a file in the current
directory named <tableName>.txt that contains only the exported MySQL data.
This file can be read by the DB2 LOAD command.
Example 7-3 Usage of mysqldump with the --no-create-info and the --tab option
mysqlServer:~ # mysqldump --no-create-info --tab=. --user root --password
inventory severity
2 high-med \N 4 4
4 low-med \N 10 10
3 medium \N 7 7
5 low \N 14 12
1 high \N 1 2
Example 7-4 shows the usage and output of mysqldump without the
--no-create-info option but with the --tab option. This command outputs two files:
one file contains the DDL statements for table creation (<tableName>.sql) and
the other file contains the exported MySQL data (<tableName>.txt) in the current
directory. The second file can be read by the DB2 LOAD command.
Example 7-4 Usage of mysqldump without the --no-create-info but with the --tab option
mysqlServer:~ # mysqldump --tab=. --user root --password inventory severity
--
-- Table structure for table `severity`
--
Example 7-5 illustrates the generated inventory.tables file for our sample
conversion.
In general, the LOAD utility is faster than the IMPORT utility, because it writes
formatted pages directly into the database, whereas the IMPORT utility executes
SQL INSERT statements. The LOAD utility validates the uniqueness of indexes,
but it does not fire triggers, and it does not perform referential constraint or table
constraint checking.
See Example 7-6 for a simplified syntax diagram for the LOAD command. For a
complete syntax description, visit the Information Center at this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
The IBM Data Movement Tool uses the LOAD utility for loading the application
data into DB2. Example 7-7 shows an example of the LOAD command that is
used to load the data into the severity table.
See Example 7-8 for a simplified syntax diagram for the IMPORT command. For
a complete syntax description, visit the Information Center at this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
Binary large objects (the BLOB data type) usually contain binary data. Exporting
binary data into text files is not possible. So, if your BLOBs contain binary data,
you must convert them in a way other than exporting and loading. The IBM
Data Movement Tool handles the conversion of BLOB data for you.
If you use the IBM Data Movement Tool, you do not have to worry about this
format, because the tool takes care of this formatting for you.
Example 7-9 Edits made to IBM Data Movement Tool extraction script
"ADMIN"."groups":SELECT * FROM groups
"ADMIN"."inventory":SELECT * FROM inventory
"ADMIN"."locations":SELECT * FROM locations
"ADMIN"."owners":SELECT id, firstName, lastName, email, locID, cubeNum,
phoneNum, loginName, AES_DECRYPT(password,'password') as password, faxNum,
groups FROM owners
"ADMIN"."services":SELECT * FROM services
"ADMIN"."severity":SELECT * FROM severity
"ADMIN"."status":SELECT * FROM status
When extracting the data from the GUI tool, make sure to select No when
requested to recreate the conversion directory and the <database>.tables file.
Otherwise, your changes will be overwritten.
User accounts
When assessing your application, be sure to distinguish between the following
user types:
Application user accounts
These users log on to the application, but they do not exist at the database
level. Database access occurs through the application, using the application's
database user ID. Because the information about application users is usually
stored in custom application tables, the porting of application user account
data is done when porting the MySQL application data. In the sample
inventory database, this custom application table is our owners table.
Database user accounts
Database users connect directly to the MySQL database to retrieve and
manipulate data. At least one database user must exist for applications to
connect to the database. Database user accounts are created with the
MySQL server and allow you to grant and restrict access to portions of the
MySQL server. A database user account is associated with a host name and a
user name. The user account information is stored in the mysql.user table
and must be ported in the data conversion step. Access rights and privileges for
these users are stored in the mysql.db, mysql.host, mysql.tables_priv,
mysql.columns_priv, and mysql.procs_priv tables.
Passwords
Database users have associated passwords, which are stored encrypted in the
mysql.user table. Encrypted passwords cannot be ported and must be reset on
the new system. The password of the database user, which is used by an
application to access the database, is typically stored in a profile with restricted
rights.
Access rights
When accessing a MySQL database, there are two levels of access control.
When you first connect to a MySQL server, you provide a user name and the
associated password. Furthermore, the machine that you are connecting from
must be associated with this user to allow the connection. This requirement is
based on the assumption that a user with a specific user name from one host is
different from a user with the same user name from a separate host.
This access information is stored in the mysql.user table in the fields: user,
password, and host. The MySQL wildcard percent character (%) is often
used in the host field to specify that this user can connect from any host. The
wildcard underscore (_) is also sometimes used for single characters.
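The effect of these two wildcards can be illustrated with a small shell sketch. The host_matches helper is purely hypothetical, not part of MySQL or DB2; it translates the MySQL % and _ wildcards into shell glob patterns to show the matching semantics:

```shell
#!/bin/sh
# Hypothetical helper: translate a MySQL host pattern (% and _)
# into a shell glob (* and ?) and test a host name against it.
host_matches() {
  pattern=$1; host=$2
  # % matches any sequence of characters, _ matches a single character
  glob=$(printf '%s' "$pattern" | sed -e 's/%/*/g' -e 's/_/?/g')
  case $host in
    $glob) return 0 ;;
    *)     return 1 ;;
  esac
}

# '%' in the host field means the user can connect from any host
host_matches '%' 'remoteHost.ibm.com' && echo "user1: allowed from any host"
# an exact host entry matches only that host
host_matches 'remoteHost.ibm.com' 'localhost' || echo "user2: localhost rejected"
```

This is only an illustration of the pattern semantics; the actual matching is done by the MySQL server against the mysql.user table.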
In Example 7-10, the user1 user can connect from any host, the user2 user can
connect from only the remoteHost.ibm.com host, and db2inst1 can connect from
localhost, myServer, and 127.0.0.1.
Refer to the MySQL Reference Manual at this Web site for a complete
description of the MySQL connection verification:
http://dev.mysql.com/doc/refman/5.1/en/index.html
You can also find information about how the entries in the mysql.user table are
ordered when the provided connection data matches more than one entry.
Privileges exist for selecting, inserting, updating, and deleting data; for creating,
altering, and dropping database objects; and for other operations performed at
the database level.
Privileges can be granted to users with the MySQL GRANT command; they can
be revoked with the REVOKE command.
For more information about MySQL privileges, see the MySQL Reference
Manual at this Web site:
http://dev.mysql.com/doc/refman/5.1/en/
User accounts
Creating a user for DB2 implies creating a user in the server's operating
system, assigning the user to a group, and granting specific database privileges
to the user or group.
On Linux systems, you must have root access to the system to create groups
and users. Group information is stored in the /etc/group file and user
information is stored in the /etc/passwd file.
For example, if you want to create a new db2app1 group with one user, db2usr1,
to access a specific DB2 table, perform the following steps:
1. Log on to the Linux system with root privileges.
2. Create the group. Make sure that the provided group name does not already
exist and ensure that it is not longer than eight characters:
groupadd [-g 995] db2app1
3. Create the user and assign it to the previously created group. Make sure that
the ID for the user does not already exist and that it is not longer than eight
characters:
useradd [-u 1001] -g db2app1 -m -d /home/db2usr1 db2usr1 [-p passwd1]
If the user will access the DB2 database locally, continue with the next two
steps:
a. Edit the profile of the created user:
vi /home/db2usr1/.profile
b. Add the following line to the profile. Be sure to specify the path of your
DB2 instance owner's home directory and to leave a blank between the
dot and the command:
. /home/db2inst1/sqllib/db2profile
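These two steps can also be done non-interactively. The sketch below uses a temporary file in place of /home/db2usr1/.profile so that it can run anywhere; the instance path /home/db2inst1/sqllib/db2profile is the one from our scenario and must be adapted to your installation:

```shell
#!/bin/sh
# Append the db2profile sourcing line to a user profile non-interactively.
# PROFILE is a stand-in for /home/db2usr1/.profile in this sketch.
PROFILE=$(mktemp)

# The leading ". " (dot, blank) makes the shell source the script in the
# current environment instead of running it in a subshell.
echo '. /home/db2inst1/sqllib/db2profile' >> "$PROFILE"

# Verify the line was added in the expected form
grep -c '^\. /home/db2inst1/sqllib/db2profile$' "$PROFILE"
rm -f "$PROFILE"
```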
DB2 9.5 introduced roles to simplify the management of authorization. Roles are
equivalent to in-database groups and allow DBAs to group together one or more
privileges, authorities, or security labels. Roles can be assigned to users, groups,
PUBLIC, or other roles by using the GRANT statement.
Access rights
The first component in the DB2 security model is authentication. Access to DB2
databases is restricted to users that exist on the DB2 system. When connecting
to a DB2 database, you have to provide a user name and password combination
that is valid on the server's system. Authentication can occur at the DB2
server or the DB2 client using operating system authentication, Kerberos, or an
external security manager.
Administrative authority
A user or group can have one or more of the following administrative authorities.
System-level authorization
The system-level authorities provide varying degrees of control over
instance-level functions:
SYSADM authority
The SYSADM (system administrator) authority provides control over all the
resources created and maintained by the database manager. The system
administrator possesses all of the SYSCTRL, SYSMAINT, and SYSMON
authorities. The user who has SYSADM authority is responsible for
both controlling the database manager and ensuring the safety and integrity
of the data.
SYSCTRL authority
The SYSCTRL authority provides control over operations that affect system
resources. For example, a user with SYSCTRL authority can create, update,
start, stop, or drop a database. This user can also start or stop an instance,
but cannot access table data. Users with SYSCTRL authority also have
SYSMON authority.
Figure 7-1 illustrates the instance-level authorities that can be granted to a user
or role.
Figure 7-2 on page 185 illustrates the database-level authorities that can be
granted to a user or role.
Note: Care must be taken when granting authorities and privileges to user
names that do not yet exist in the system. If such a user name is created at a
later time, it automatically receives all of the authorities and privileges
previously granted.
LBAC controls access to table objects by attaching security labels to rows and
columns. Users attempting to access an object must have been granted the
matching security label. Only users who hold the proper labels when accessing
the row are allowed to retrieve the data; other users receive no indication that
rows were withheld. This form of security differs from normal SELECT
privileges, where users who attempt to access a table that they are not
allowed to access receive an SQL error message.
All privileges and labels can be granted to users or groups with the GRANT
command, and they can be revoked using the REVOKE command.
Table 7-1 shows the mapping of MySQL privileges to DB2 privileges, assuming
that different MySQL databases are mapped to different DB2 schemas. For
example, an INSERT privilege granted in MySQL on the global level means that
you have to grant the INSERT privilege on all existing tables in the DB2 database
to the specified user. If you create a new table in DB2, you have to grant the
INSERT privilege on this table to the user.
Example 7-11 shows retrieving users with access to the sample project
database.
Example 7-11 Retrieve users with access to the sample project database
>mysqlaccess % % inventory -b -U root -P
mysqlaccess Version 2.06, 20 Dec 2000
The user accounts user1, user2, and ANY_NEW_USER (which is similar to the
logical DB2 group PUBLIC) do not have any privileges on our sample database,
so we do not need to port these users either.
The only remaining user is the user inventAppUser, which is the account that our
application uses to connect to the MySQL database and manipulate data.
The user's privileges are set for SELECT, INSERT, UPDATE, DELETE,
CREATE, and DROP on the database level for inventory, so we map these
privileges to SELECT, INSERT, UPDATE, and DELETE for all tables in the DB2
schema inventory and to CREATEIN and DROPIN for the schema.
groupadd $2
useradd -g $2 -m -d $HOMEDIR/$1 $1
passwd $1
echo '. '${DB2DIR}'/sqllib/db2profile' >> $HOMEDIR/$1/.profile
The root user creates our user and group, as shown in Example 7-13.
The instance owner, db2inst1, grants the privileges with the DB2 command that
is shown in Example 7-14.
The next step is to extract the MySQL application data using the IBM Data
Movement Tool. In 7.1.1, "Data porting commands and tools" on page 168, we
discuss how to create data extraction and transfer files using the IBM Data
Movement Tool. In 7.1.2, "Differences in data formats" on page 175, we
discuss how to modify the extraction scripts to extract special data. Now, we must
execute these scripts and extract the data from the MySQL database. Open the
Extract/Deploy tab window, clear the DDL check box, select the Data Movement
check box, and click Extract DDL/data, as shown in Figure 7-3 on page 196. If
you have made changes to the extraction script, be sure to select No when
requested to recreate the output directory or <tableName>.tables file.
Now, you can go into your project directory to check the extracted data files. The
data output files extracted from the MySQL database are located under the
<migration output directory>/data directory. Example 7-15 shows the created
scripts for our inventory scenario.
Each of the files is tab-delimited, containing the data from the corresponding
MySQL table. This format can be read by the DB2 LOAD command.
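As a quick sanity check before loading, you can verify that every row of such a file carries the expected number of tab-separated columns. The sketch below recreates a fragment of the severity dump from Example 7-3 (the \N token is how mysqldump represents NULL); the file name and column count are taken from our sample table:

```shell
#!/bin/sh
# Recreate a fragment of the tab-delimited severity dump and verify
# that every row has the same number of columns (5 in this table).
printf '2\thigh-med\t\\N\t4\t4\n5\tlow\t\\N\t14\t12\n' > severity.txt

# awk counts any line whose field count differs from 5
awk -F'\t' 'NF != 5 { bad++ } END { print (bad ? bad : 0) " malformed row(s)" }' severity.txt
rm -f severity.txt
```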
Example 7-16 shows the DB2 LOAD commands that are generated by the IBM
Data Movement Tool for our sample project.
Example 7-16 DB2 LOAD commands for loading the data into the DB2 database
CONNECT TO INVENT;
--#SET :LOAD:ADMIN:GROUPS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_groups.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_groups.txt"
METHOD P (1,2,3,4,5) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_groups.txt"
REPLACE INTO "ADMIN"."GROUPS"("GROUPNAME", "EDITUSER", "EDITGRANTUSERPERM", "EDITINVT",
"EDITSERVICE") --STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL
NONRECOVERABLE INDEXING MODE AUTOSELECT;
--#SET :LOAD:ADMIN:LOCATIONS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_locations.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_locations.txt"
METHOD P (1,2,3,4) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_locations.txt"
REPLACE INTO "ADMIN"."LOCATIONS" ("ID", "ROOMNAME", "FLOORNUM", "PASSCODE")
--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE
INDEXING MODE AUTOSELECT ;
--#SET :LOAD:ADMIN:SEVERITY
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_severity.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_severity.txt"
METHOD P (1,2,3,4,5) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_severity.txt"
REPLACE INTO "ADMIN"."SEVERITY" ("ID", "TITLE", "NOTES", "ESTDAYS", "AVGDAYS"
)--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING
MODE AUTOSELECT;
--#SET :LOAD:ADMIN:INVENTORY
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_inventory.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_inventory.txt"
METHOD P (1,2,3,4,5,6,7,8) MESSAGES
"/opt/ibm/IBMDataMovementTool/migr/msg/admin_inventory.txt"
REPLACE INTO "ADMIN"."INVENTORY"("ID", "ITEMNAME", "MANUFACTURER", "MODEL", "YEAR",
"SERIAL", "LOCID", "OWNERID")--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL
NONRECOVERABLE INDEXING MODE AUTOSELECT ;
--#SET :LOAD:ADMIN:SERVICES
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_services.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_services.txt"
METHOD P (1,2,3,4,5,6,7,8,9) MESSAGES
"/opt/ibm/IBMDataMovementTool/migr/msg/admin_services.txt" REPLACE INTO
"ADMIN"."SERVICES" ("ID", "INVENTID", "DESCRIPTION", "SEVERITY", "SERVICEOWNER",
"OPENDATE", "CLOSEDATE", "TARGETCLOSEDATE", "STATUS" )--STATISTICS YES WITH DISTRIBUTION
AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING MODE AUTOSELECT;
--#SET :LOAD:ADMIN:STATUS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_status.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_status.txt"
METHOD P (1,2,3) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_status.txt"
REPLACE INTO "ADMIN"."STATUS" ("ID", "TITLE", "NOTES" )--STATISTICS YES WITH
DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING MODE AUTOSELECT;
TERMINATE;
After importing the data into the DB2 tables, execute the RUNSTATS command
to collect statistics about the tables and indexes. The query optimizer uses
these statistics. The IBM Data Movement Tool generates a custom
RUNSTATS script for the new database, called db2runstats.sql.
Example 7-17 shows the db2runstats.sql script for our sample conversion. You
can run this script in the IBM Data Movement Tool GUI or command line.
Example 7-17 DB2 RUNSTATS commands for recreating the statistics information
CONNECT TO INVENT;
RUNSTATS ON TABLE "ADMIN"."GROUPS" ON ALL COLUMNS WITH DISTRIBUTION
ON ALL COLUMNS AND DETAILED INDEXES ALL ALLOW WRITE ACCESS ;
Example 7-18 Log file information about the DB2 LOAD command
Number of rows read = 140
Number of rows skipped = 0
Number of rows loaded = 140
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 140
TERMINATE
DB20000I The TERMINATE command completed successfully.
CONNECT TO INVENT
ADMIN.GROUPS
---------------------------------
6.
1 record(s) selected.
ADMIN.LOCATIONS
---------------------------------
140.
1 record(s) selected.
ADMIN.SEVERITY
---------------------------------
5.
1 record(s) selected.
ADMIN.OWNERS
---------------------------------
502.
1 record(s) selected.
ADMIN.INVENTORY
---------------------------------
703.
1 record(s) selected.
ADMIN.SERVICES
---------------------------------
808.
1 record(s) selected.
ADMIN.STATUS
---------------------------------
7.
1 record(s) selected.
TERMINATE
DB20000I The TERMINATE command completed successfully.
Make sure that the number of rows read equals the number of rows committed,
which also needs to equal the number of records in the MySQL source table.
Example 7-19 shows the MySQL command for the record count.
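This check can also be scripted. The sketch below parses the counters in the format shown in Example 7-18; the log file name and the sample counter values are assumptions for illustration, standing in for your real LOAD message output:

```shell
#!/bin/sh
# Compare the "rows read" and "rows committed" counters in a DB2 LOAD log.
# load.log stands in here for your real LOAD message output file.
cat > load.log <<'EOF'
Number of rows read         = 140
Number of rows skipped      = 0
Number of rows loaded       = 140
Number of rows rejected     = 0
Number of rows committed    = 140
EOF

read_cnt=$(awk -F'=' '/rows read/ { gsub(/ /,"",$2); print $2 }' load.log)
commit_cnt=$(awk -F'=' '/rows committed/ { gsub(/ /,"",$2); print $2 }' load.log)

if [ "$read_cnt" = "$commit_cnt" ]; then
  echo "OK: $read_cnt rows read and committed"
else
  echo "MISMATCH: read=$read_cnt committed=$commit_cnt"
fi
rm -f load.log
```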
After you have checked that all the records were loaded into DB2, check the
sample data in each ported table. Pay attention to ensure that the values are
correct, especially if you have any time values or decimal values. Example 7-20
shows the table content checking.
The STRAIGHT_JOIN keyword forces the MySQL optimizer to join tables in the
order that is specified. In DB2, the join order is always determined by the
optimizer. The optimizer choices can be limited by changing the default query
optimization class to a lower level using SET CURRENT QUERY
OPTIMIZATION. However, changing this class does not guarantee that the
optimizer will evaluate the join in the order stated within the SQL statement,
because the DB2 cost-based optimizer usually chooses the best access path for
a given query. For additional information, see 10.5.5, "SQL execution plan" on
page 366.
DB2 has a similar operator to guide SQL optimizer decisions with a different
syntax, as shown in Example 8-3.
A NATURAL join, as its name implies, can be invoked when two or more tables
share exactly the same columns that are needed for a successful equijoin. It is
semantically equivalent to a DB2 INNER JOIN or LEFT OUTER JOIN with the
respective join criteria specified in the ON clause.
According to the ANSI SQL standard, when you must join tables that naturally
share more than one column, you use the JOIN ... USING syntax. You can
compose an equivalent join using the DB2-supported join syntax with the
criteria in the ON clause.
AVG([DISTINCT] expression)
MySQL: mysql> SELECT a, AVG(b) FROM t1 GROUP BY a
DB2:   AVG([DISTINCT | ALL] expression)
       db2 "SELECT a, AVG(b) FROM t1 GROUP BY a"
Description: Returns the average of a set of numbers.

MAX([DISTINCT] expression)
MySQL: mysql> SELECT a, MAX(b) FROM t1 GROUP BY a
DB2:   MAX([DISTINCT | ALL] expression)
       db2 "SELECT a, MAX(b) FROM t1 GROUP BY a"
Description: Returns the maximum value in a set of values.

SUM([DISTINCT] expression)
MySQL: mysql> SELECT a, SUM(b) FROM t1 GROUP BY a
DB2:   SUM([DISTINCT | ALL] expression)
       db2 "SELECT a, SUM(b) FROM t1 GROUP BY a"
Description: Returns the sum of a set of numbers.

GROUP BY on alias
MySQL: mysql> SELECT a AS x FROM t1 GROUP BY x
DB2:   Use the column name for grouping:
       db2 "SELECT a FROM t1 GROUP BY a"
Description: Groups data by the alias name provided.

GROUP BY on position
MySQL: mysql> SELECT a FROM t1 GROUP BY 1
DB2:   Use the column name for grouping:
       db2 "SELECT a FROM t1 GROUP BY a"
Description: Groups data by the position provided.

HAVING on alias
MySQL: mysql> SELECT a AS x FROM t1 GROUP BY a HAVING x > 0
DB2:   Use the column name in the HAVING clause:
       db2 "SELECT a FROM t1 GROUP BY a HAVING a > 0"
Description: Groups data meeting the HAVING expression.
8.1.6 Strings
Unless you start MySQL in ANSI mode (using mysqld --ansi), MySQL behaves
differently from DB2. As Example 8-7 on page 211 illustrates, when started in
default mode, MySQL accepts single as well as double quotation marks as
string delimiters.
DB2 is designed and implemented according to the ANSI standard and therefore
accepts single quotation marks as a string delimiter. Double quotation marks are
used in DB2 for delimiting SQL identifiers. Example 8-8 shows how DB2 handles
strings. You achieve similar results when MySQL runs in ANSI mode.
Table 8-2 provides an overview of a few of the MySQL string related functions,
and how these can be converted to DB2. For a full list of MySQL string functions
and the DB2 equivalent, refer to A.2, String functions on page 402.
ASCII(string)
MySQL: mysql> SELECT ascii('a');
       +------------+
       | ascii('a') |
       +------------+
       |         97 |
       +------------+
       1 row in set (0.00 sec)
DB2:   ASCII(string)
       db2 "VALUES ascii('a')"
       1
       -----------
       97
       1 record(s) selected.
Description: Returns the ASCII code value.

TRIM([BOTH | LEADING | TRAILING] [substring FROM] string)
MySQL: mysql> select trim(trailing from trim(LEADING FROM ' abc ')) as OUTPUT;
       +--------+
       | OUTPUT |
       +--------+
       | abc    |
       +--------+
       1 row in set (0.00 sec)
DB2:   db2 "VALUES trim(trailing from trim(LEADING FROM ' abc '))"
       OUTPUT
       ---------
       abc
       1 record(s) selected.
Description: Removes blanks or occurrences of another specified character from
the end or the beginning of a string expression.
Prior to Version 9.7, strong typing was used during comparisons and
assignments. Strong typing requires matching data types, which means that you
must explicitly convert one or both data types to a common data type prior to
performing comparisons or assignments.
In Version 9.7, the rules that are used during comparisons and assignments
have been relaxed. If reasonable interpretation can be made between two
mismatched data types, implicit casting is used to perform comparisons or
assignments. Implicit casting is also supported during function resolution. When
the data types of the arguments of a function being invoked cannot be promoted
to the data types of the parameters of the selected function, the data types of the
arguments are implicitly cast to those of the parameters.
Example 8-9 shows how MySQL implicitly casts the character value 5 to an
integer value to resolve the query.
As of DB2 9.7, DB2 now supports an implicit casting of incompatible data types.
Example 8-10 shows implicit casting of a character value in DB2 9.7.
C1
-----------
5
1 record(s) selected.
DB2 versions prior to 9.7 require explicit casting of the character value to an
integer value, as illustrated in Example 8-11 on page 215.
C1
--------
5
1 record(s) selected.
Example 8-12 illustrates how MySQL implicitly casts numeric values and DATE,
TIME, or TIMESTAMP values to strings when concatenated.
Example 8-12 MySQL implicit casting using concatenation for strings and DATE
mysql> select concat('ITSOSJ',1234) as stringcol from t1;
+-----------+
| stringcol |
+-----------+
| ITSOSJ1234|
+-----------+
1 row in set (0.02 sec)
Example 8-13 illustrates how DB2 9.7 implicitly casts numeric values, as well as
DATE, TIME, or TIMESTAMP values, to strings when concatenated.
Example 8-13 DB2 9.7 casting character strings and DATE implicitly
db2 => select concat('ITSOSJ',1234) from t1
1
----------
ITSOSJ1234
1 record(s) selected.
DB2 9.5 and prior versions require compatible arguments for the concatenation
built-in functions, as shown in Example 8-14. If the arguments are incompatible,
for example, if a numeric value is passed where a character data type is
expected, the concatenation fails with the error SQL0440N No authorized
routine named "CONCAT" of type "FUNCTION" having compatible arguments
was found.
Example 8-14 DB2 9.5 and prior versions casting character strings and DATE explicitly
db2 => select concat('ITSOSJ',1234) from t1
SQL0440N No authorized routine named "CONCAT" of type "FUNCTION" having
compatible arguments was found. SQLSTATE=42884
db2 => select concat('ITSOSJ','1234') as stringcol from t1
STRINGCOL
-----------
ITSOSJ1234
1 record(s) selected.
STRINGDATE
------------------------
ITSOSJ01/23/2004
1 record(s) selected.
DB2 follows the ANSI-92 standard for concatenation of multiple strings. DB2 also
has a CONCAT(string1, string2) function, which can be used to concatenate two
strings. Example 8-16 shows how DB2 handles concatenating strings.
1
-------------------
This is an example.
1 record(s) selected.
db2 => VALUES ('This ' || 'is ' || 'an ' || 'example.')
1
-------------------
This is an example.
1 record(s) selected.
The ANSI-92 standard states that if you concatenate a NULL value to an
existing string, the result is NULL. Example 8-17 shows you the behavior of
MySQL.
As shown, MySQL is ANSI-92 compliant in this respect and therefore gives you
the same result sets as DB2 in Example 8-18.
NULLSTRING
-----
-
1 record(s) selected.
NULLSTRING
-----
abc
1 record(s) selected.
mysql> rollback ;
ERROR 1196: Warning: Some non-transactional changed tables couldn't be rolled
back
mysql> select * from t1;
Empty set (0.00 sec)
The TRUNCATE option is primarily used to delete all records quickly from a table
when no recovery of the deleted rows is required. As of DB2 9.5, you can enable
the support of the TRUNCATE statement using the
DB2_COMPATIBILITY_VECTOR registry variable.
These features ease the task of converting applications written for other
relational database vendors to DB2 Version 9.5 or later. This DB2 registry
variable is represented as a hexadecimal value, and each bit in the variable
enables one of the DB2 compatibility features. To enable the TRUNCATE
statement, set the DB2_COMPATIBILITY_VECTOR registry variable to 8.
Example 8-20 on page 220 shows the syntax to set the
DB2_COMPATIBILITY_VECTOR registry variable and execute the TRUNCATE
command. You can obtain more information about the
DB2_COMPATIBILITY_VECTOR registry variable at the IBM Information Center
at this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
db2inst1@db2rules:~> db2set
DB2_COMPATIBILITY_VECTOR=8
DB2RSHCMD=/usr/bin/ssh
DB2COMM=tcpip
db2inst1@db2rules:~> db2start
08/31/2009 16:13:19 0 0 SQL1063N DB2START processing was successful.
C1
-----------
5
10
15
20
4 record(s) selected.
C1
-----------
0 record(s) selected.
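Because each compatibility feature corresponds to one bit of the hexadecimal value, enabling several features means OR-ing their bit values together. The sketch below shows the arithmetic only; 0x08 for TRUNCATE is the value from the text, while the second bit value is a made-up placeholder, not a documented feature bit (check the Information Center for the real values):

```shell
#!/bin/sh
# Combine compatibility bits by OR-ing them. 0x08 enables TRUNCATE
# (from the text); 0x100 is a hypothetical placeholder for another bit.
TRUNCATE_BIT=0x08
OTHER_BIT=0x100    # hypothetical -- not a documented feature bit

vector=$(printf '%X' $(( TRUNCATE_BIT | OTHER_BIT )))
echo "DB2_COMPATIBILITY_VECTOR=$vector"
# The resulting hexadecimal value would then be set with:
#   db2set DB2_COMPATIBILITY_VECTOR=$vector
```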
You can also turn off logging with the following ALTER TABLE statement to
achieve a similar result.
ALTER TABLE <tablename> ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE
You can also use the DBI interface to access DB2 using the DBD::DB2 driver.
You can obtain information about the DBI interface and the DBD::DB2 driver, as
well as installation instructions, at the following Web sites:
http://www.ibm.com/software/data/db2/perl/
http://search.cpan.org/~ibmtordb2/DBD-DB2-1.74/DB2.pod
http://www.perl.com/CPAN/modules/by-module/DBD/
http://aspn.activestate.com/ASPN/Modules/
Figure 8-1 illustrates how the Perl interfaces and pluggable drivers connect to
MySQL and DB2 databases.
Connecting to a database
Use the following code within your Perl application to connect to a MySQL
database:
$dsn= "dbi:mysql:$database:$host:$port";
$dbh = DBI->connect($dsn, $user, $password);
The connection statement consists of the data source name, user ID, and
password. The data source name consists of the vendor-specific database
driver, the database name, the host name, and the port. Optionally, you can omit
the host name and port from the data source name. Example 8-21 shows the
connect syntax for a MySQL database. For simplicity, we do not include error
handling in the following examples.
my $dsn = "DBI:mysql:database=$database;host=$host;port=$port";
$dbh = DBI->connect($dsn, $user, $password);
Because DB2 offers more connection options than MySQL, the DB2 connect
statement can take a fourth argument, \%attr, which contains the connection
attributes, as shown in Example 8-23.
$dbh=DBI->connect($data_source, $user,$password, \%attr);
Example 8-24 SELECT statement that is used with the DBI interface
$sqlStatement = "SELECT * FROM owners";
$sth = $dbh->prepare($sqlStatement);
$sth->execute();
@arr = $sth->fetchrow;
$dbh->disconnect;
The following sections describe porting PHP applications from MySQL to DB2.
For more information about developing PHP applications with DB2, refer to
Developing PHP Applications for IBM Data Servers, SG24-7218.
If you use either the mysql or mysqli functions to access your MySQL database
from PHP, it is relatively straightforward to convert your application to use the
ibm_db2 interface. You might also decide to perform a more complex code
rewrite and use PDO instead.
The following declaration shows the function that is used to connect to the
MySQL server:
resource mysql_connect ( [string server [, string username [, string
password [, bool new_link [, int client_flags]]]]])
The server variable in the mysql_connect() function contains the host name or
the IP address of the server that hosts the MySQL database.
Example 8-26 shows the connection part of our sample application using the
MySQL database.
Example 8-28 shows the converted connection part of the sample application. In
our connection script, the command after the connection statement sets the
current schema, which is used when querying the database. We use this
approach because, unlike DB2, MySQL does not have schemas or instances.
Example 8-32, Example 8-33, and Example 8-34 on page 231 show the
differences between the MySQL and ibm_db2 functions for the INSERT
statement.
mysql_close($conn);
mysqli_close($conn);
db2_close($conn);
db2_pclose($conn);
mysqli_stmt_prepare() maps to db2_prepare()
mysqli_stmt_execute() maps to db2_execute()
mysql_field_name() maps to db2_field_name()
Connecting to a database
Use this syntax for connecting to a database using PDO:
$conn = new PDO( string $dsn [, string $username [, string $password [,
array $driver_options ]]] )
Example 8-38 and Example 8-39 show the difference between the syntax to
connect to a MySQL and a DB2 database using PDO.
$conn = null;
Using the Unified ODBC support in PHP applications does not require a special
load of library files, because support has been integrated during the compilation
process of PHP. You can obtain a complete overview of the MySQL and the
Unified ODBC functions in the PHP manual, which is available at this Web site:
http://www.php.net/docs.php
When we discuss ODBC in this section, we always mean Unified ODBC, and we
refer to the native DB2 driver. We base the application conversion on Unified
ODBC support because its syntax is broadly similar to that of other ODBC
implementations and because it offers performance advantages.
Connecting to a database
When connecting to a DB2 database with ODBC, you connect by using a single
ODBC command (odbc_connect()):
resource odbc_connect ( string dsn, string user, string password [, int
cursor_type])
Example 8-42 on page 234 shows how to connect to our sample application
database using the ODBC command.
$conn = odbc_connect($database,$user,$password)
or die("Could not connect ". odbc_errormsg());
To fetch each row into an array, equivalent to the mysql_fetch_row() function,
you can use the odbc_fetch_into() function without extensive modifications. In
our case, the two statements differ only in syntax.
Example 8-44 on page 236 illustrates the ODBC functions for the UPDATE
statement. Refer to Example 8-32 on page 230 and Example 8-33 on page 230
for the MySQL UPDATE statements.
if($updateOutput){
$textOutput = "Service Ticket updated. \n";
}else{
$textOutput = "Service Ticket update failed. \n";
}
echo $textOutput . "\n";
Example 8-45 shows the disconnect function using the ODBC library. Refer to
Example 8-35 on page 231 for the MySQL disconnect function.
odbc_close($conn);
The two APIs provide the same functions as the MySQL C API.
Collectively known as the IBM_DB gem, the IBM_DB Ruby driver and Rails
adapter allow Ruby applications to access the following database management
systems:
DB2 Version 9 for Linux, UNIX, and Windows
DB2 Universal Database (DB2 UDB) Version 8 for Linux, UNIX, and Windows
DB2 UDB Version 5, Release 1 (and later) for AS/400 and iSeries, through
DB2 Connect
DB2 for z/OS, Version 8 and Version 9, through DB2 Connect
Informix Dynamic Server, Version 11.10 and later
Example 8-46 on page 238 illustrates the functions that are used to connect to a
MySQL database from Ruby.
conn = Mysql.init()
conn.connect('localhost', 'user', 'password')
conn.select_db('test')
You can create a connection to a DB2 database by using a single function, the
connect() function. Use this syntax for the connect function:
resource IBM_DB::connect ( string database, string username, string
password [, array options] )
You can map the MySQL query() function directly to the DB2 exec() function.
Both functions have similar functionality. The only difference between the two
functions is that exec() requires two parameters: one parameter specifies the
connection ID that is returned by the connect() statement, and the other
parameter is the SQL statement:
resource IBM_DB::exec ( resource connection, string statement [, array
options] )
To fetch each row into an array, equivalent to the MySQL/Ruby fetch_row()
function, you can use the IBM_DB fetch_array() function without extensive
modifications.
Example 8-50 and Example 8-51 show the disconnect functions using the
MySQL/Ruby API and the IBM_DB API functions.
conn.close()
IBM_DB.close(conn)
For more information about developing DB2 applications with Ruby on Rails,
review the Developing Perl, PHP, Python, and Ruby on Rails Applications
Manual, which is available at this Web site:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
DB2 provides the IBM Data Server Driver for JDBC and SQLJ. This driver is a
single application driver to support the most demanding Java applications. You
can use this agile driver in either type 4 or type 2 mode. This section provides an
overview of JDBC, SQLJ, and the conversion of existing MySQL Java
applications to DB2.
A JDBC application can establish a connection to a data source using the JDBC
DriverManager interface. In the following sections, we discuss the changes that
are required within the code of a Java application when converting from MySQL
to DB2.
In order to use the DB2 JDBC type 2 driver, you need the following properties:
drivername="COM.ibm.db2.jdbc.app.DB2Driver"
URL="jdbc:db2:dbname"
The user ID and password are implicitly selected according to the DB2 client
setup.
Note: The DB2 JDBC Type 2 Driver for Linux, UNIX, and Windows will not
be supported in future releases. Consider switching to the IBM Data Server
Driver for JDBC and SQLJ, which we describe next.
Connecting to a database
In this part, the Java program tries to establish a connection to the given
database by calling the function DriverManager.getConnection with the proper
URL values as discussed within the driver description in IBM JDBC driver for
DB2 on page 241. After this call, DriverManager selects the appropriate driver
from a set of registered drivers to connect to the database. Example 8-52 and
Example 8-53 show these steps for MySQL and then DB2.
The JDBC API does not place any restrictions on the kind of SQL statements that
can execute. Therefore, it becomes the responsibility of the application to pass
SQL statements that are compatible with the database. The connection obtained
in Example 8-52 on page 243 and Example 8-53 on page 243 can be used for
one of the following three types of statements, depending upon the requirements:
Statement: Simple single SQL statement
You can create the statement by using the createStatement method of the
Connection. Example 8-54 shows the usage of executeQuery with a change
for MySQL and DB2. It is evident that only changes to the SQL statement are
required.
conn.close();
Because MySQL does not enforce strict type conversions, the Java programmer
has to guard against data loss caused by round-off, overflow, or loss of
precision. For
more details about how MySQL is mapped to Java data types, refer to the
information at this Web site:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-type-conversions.html
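The kind of data loss involved is easy to demonstrate. The following sketch (in Python for brevity and purely illustrative; the same effects apply when MySQL values are read into Java types) contrasts exact decimal arithmetic with binary floating point:

```python
from decimal import Decimal

# Exact decimal arithmetic, as when a MySQL DECIMAL is kept in a Java BigDecimal:
exact = Decimal("0.1") + Decimal("0.2")
assert exact == Decimal("0.3")

# The same values in binary floating point (as in a Java float or double):
binary = 0.1 + 0.2
assert binary != 0.3  # round-off: binary is 0.30000000000000004

# Overflow: the largest MySQL BIGINT UNSIGNED value (2**64 - 1) exceeds
# the largest signed 64-bit value a Java long can hold (2**63 - 1).
assert 2**64 - 1 > 2**63 - 1
```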
Apart from this, DB2 uses C/C++ for server-side programming for creating:
Stored procedures on DB2 server
User-defined functions (UDF) on DB2 server.
DB2 provides precompilers for C, C++, COBOL, Fortran, and REXX to support
embedded SQL applications. Embedded SQL applications support both static
and dynamic SQL statements. Static SQL statements require information about
all SQL statements, tables, and data types used at compile time. The application
needs to be precompiled, bound, and compiled prior to execution. In contrast,
dynamic SQL statements can be built and executed at runtime. For further
details on embedded SQL, refer to Getting Started with Database Application
Development, GI11-9410-00, available at:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
Converting applications
MySQL C API and DB2 CLI are similar in functionality and mechanisms to
access databases. Both use the function call to pass dynamic SQL statements
and do not need to be precompiled. We recommend converting MySQL C
applications to DB2 CLI. This section describes conversion changes for various
levels of the application:
int main(){
    MYSQL *conn = mysql_init(NULL); /* allocate and initialize handler */
    if(!mysql_real_connect (
        conn, /* pointer to connection handler */
        NULL, /* host to connect, default localhost*/
        "mysql", /* user name, default local user*/
        "password", /* password, default none*/
        "inventory", /* database name*/
        0, /* port */
        NULL, /* socket */
        0 /* flags*/
    )){
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }
    /* ... work with the connection ... */
    mysql_close(conn);
    return 0;
}
Figure 8-4 shows a similar task using DB2 CLI. It shows the initialization tasks,
which consist of allocating and initializing the environment and connection
handles, creating the connection, processing transactions, and terminating the
connection and freeing the handles.
Example 8-57 shows the implementation of the task defined by the figure above.
Processing a query
A typical MySQL C API program involves three steps in query processing:
Query construction
Example 8-58 shows examples of MySQL queries, some that return a result set
and others that do not.
mysql_free_result(result);
}
}
On the other hand, DB2 CLI provides a more comprehensive set of APIs for
similar tasks. One of the essential parts of DB2 CLI is transaction processing.
Example 8-59 DB2 CLI prepared statement with column binding, auto commit on
SQLHANDLE hstmt; /* statement handle */
SQLCHAR firstName [TEXT_SIZE];
SQLCHAR lastName [TEXT_SIZE];
SQLCHAR email [TEXT_SIZE];
SQLINTEGER id = 501;
/* set AUTOCOMMIT on */
if (ret != SQL_SUCCESS) {
/* handle error */
}
if(ret == SQL_NO_DATA_FOUND){
printf("No data found");
}
while(ret != SQL_NO_DATA_FOUND){
printf("First name: %s \n",firstName);
printf("Last name: %s \n",lastName);
printf("email: %s \n", email);
ret=SQLFetch(hstmt);
}
ret = SQLFreeStmt(hstmt, SQL_UNBIND);
if (ret != SQL_SUCCESS) {
/* handle error */
}
Example 8-60 DB2 CLI prepare/execute in one step with SQLGetData and manual commit
/* set AUTOCOMMIT off */
ret = SQLSetConnectAttr(hdbc,
SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_NTS);
if (ret != SQL_SUCCESS) {
/* handle error */
}
if(ret == SQL_NO_DATA_FOUND) {
printf("No data found");
}
int count = 0;
while(ret != SQL_NO_DATA_FOUND) {
ret=SQLFetch(hstmt);
}
ret = SQLEndTran( SQL_HANDLE_DBC, hdbc, SQL_ROLLBACK );
if (ret != SQL_SUCCESS) {
/* handle error */
return 1;
}
To use the DB2 ODBC driver, you only have to link your application with libdb2.
The DB2 CLI driver also acts as an ODBC driver when loaded by an ODBC
driver manager. DB2 CLI conforms to ODBC 3.51.
Figure 8-6 shows the MySQL driver and the DB2 ODBC driver in the ODBC
scenario and illustrates the simplicity of converting an application from one
ODBC driver to another. It also shows the various components that are involved
in an ODBC application and how they map from MySQL Connector/ODBC to
DB2 ODBC.
Figure 8-6 ODBC application conversion from MyODBC to DB2 ODBC driver
You can optionally modify the behavior of the DB2 ODBC driver by changing the
db2cli.ini file.
Within your application program, you can retrieve these values to determine the
state of the previously executed SQL statement. These identifiers provide more
detailed information about the condition of the statement.
However, IBM defines the value for SQLCODE. To achieve the highest
portability of applications, only build dependencies on a subset of DB2
SQLSTATEs that are defined by ODBC Version 3 and ISO SQL/CLI
specifications. Whenever you build your exception handling on IBM-supplied
SQLSTATEs or SQLCODEs, carefully and thoroughly document the
dependencies. You can access the specifications by using the search words
ISO/IEC and standards 9075-1, 9075-2, and 9075-3 for SQL Foundation.
For example, if your application signals SQLSTATE 23000, the DB2 description
reports an integrity constraint violation, which is similar to MySQL's rudimentary
descriptions ER_NON_UNIQ_ERROR or ER_DUP_KEY. Hence, condition handling for both
database management systems can execute almost the same code.
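Because the two-character class code of an SQLSTATE is the portable part, exception handling can key on the class instead of a vendor-specific error name or code. A minimal sketch, in Python for illustration (the SQLSTATE values are standard class 23 examples, as discussed above):

```python
def sqlstate_class(sqlstate):
    """Return the portable class code (first two characters) of an SQLSTATE."""
    return sqlstate[:2]

def is_constraint_violation(sqlstate):
    """Class 23 is 'integrity constraint violation' in both DBMSs."""
    return sqlstate_class(sqlstate) == "23"

assert sqlstate_class("23000") == "23"       # generic constraint violation
assert is_constraint_violation("23505")      # DB2 duplicate-key SQLSTATE
assert not is_constraint_violation("42704")  # undefined name: another class
```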
The code segment from utilcli.c in Example 8-61 shows how to retrieve and
display diagnostic information, such as SQLSTATEs.
i = 1;
printf("-------------------------\n");
}
Note: We provide the code snippets in this chapter for illustrative purposes
only. The utilcli.c code is sample code that is included with DB2, which you
can find in the SQLLIB/samples directory.
You can use the getErrorCode method to retrieve SQL error codes and the
getSQLState method to retrieve SQLSTATEs.
For exception handling in Java, it is important to know that DB2 provides several
types of JDBC drivers with slightly different characteristics. With the DB2
Universal JDBC Driver, you can retrieve the SQLCA. For the DB2 JDBC type 2
driver for Linux, UNIX, and Windows (DB2 JDBC type 2 driver), use the standard
SQLException to retrieve SQL error information.
SQLException under the IBM Data Server Driver for JDBC and
SQLJ
As in all Java programs, error handling is done using try/catch blocks. Methods
throw exceptions when an error occurs, and the code in the catch block handles
those exceptions.
The IBM DB2 Driver for JDBC and SQLJ provides an extension to the SQLException
class, which gives you more information about errors that occur when DB2 is
accessed. If the JDBC driver detects an error, the SQLException class provides
you with the same information as the standard SQLException class. However, if
DB2 detects the error, the SQLException class provides you the standard
information, along with the contents of the SQLCA that DB2 returns. If you plan
to run your JDBC applications only on a system that uses the IBM DB2 Driver for
JDBC and SQLJ, you can use this extended SQLException class.
Under the IBM DB2 Driver for JDBC and SQLJ, SQLExceptions from DB2
implement the com.ibm.db2.jcc.DB2Diagnosable interface. An SQLException
from DB2 contains the following information:
A java.lang.Throwable object that caused the SQLException or null if no such
object exists. The java.lang.Throwable class is the superclass of all errors
and exceptions in the Java language.
The information that is provided by a standard SQLException
An object of DB2-defined type DB2Sqlca that contains the SQLCA. This
object contains the following objects:
An INT value that contains an SQL error code
A String object that contains the SQLERRMC values
A String object that contains the SQLERRP value
An array of INT values that contains the SQLERRD values
An array of CHAR values that contains the SQLWARN values
A String object that contains the SQLSTATE
try {
// Code that could throw SQLExceptions
}
catch(SQLException sqle) {
while(sqle != null) {
if (sqle instanceof DB2Diagnosable) {
DB2Sqlca sqlca = ((DB2Diagnosable)sqle).getSqlca();
if (sqlca != null) {
System.err.println ("SqlCode: " + sqlca.getSqlCode());
System.err.println ("SQLERRMC: " + sqlca.getSqlErrmc());
System.err.println ("SQLERRP: " + sqlca.getSqlErrp() );
String[] sqlErrmcTokens = sqlca.getSqlErrmcTokens();
for (int i=0; i< sqlErrmcTokens.length; i++) {
System.err.println (" token " + i + ": " + sqlErrmcTokens[i]);
}
int[] sqlErrd = sqlca.getSqlErrd();
char[] sqlWarn = sqlca.getSqlWarn();
System.err.println ("SQLSTATE: " + sqlca.getSqlState());
System.err.println ("message: " + sqlca.getMessage());
}
}
sqle=sqle.getNextException();
}
}
You must use the WHENEVER statement prior to the SQL statements that will
be affected. Otherwise, the precompiler does not know that additional
error-handling code must be generated for executable SQL statements. You can
have any combination of the three forms active at any time. The order in which
you declare the three forms is insignificant.
To avoid an infinite looping situation, ensure that you undo the WHENEVER
handling, prior to executing any SQL statements within the handler, by using the
WHENEVER SQLERROR CONTINUE statement.
After executing each SQL statement, the system issues a return code in both
SQLCODE and SQLSTATE. SQLCODE is an integer value that summarizes the
execution of the statement, and SQLSTATE is a character field that provides
error codes that are common across IBM relational database products.
SQLSTATE also conforms to the ISO/ANSI SQL92 and Federal Information
Processing Standard (FIPS) 127-2 standard.
Note that if SQLCODE is less than 0, it means that an error has occurred and the
statement has not been processed. If the SQLCODE is greater than 0, it means
that a warning has been issued, but the statement is still processed.
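This rule can be captured in a few lines. The following sketch (Python, illustrative only) classifies a statement outcome from its SQLCODE:

```python
def sqlcode_outcome(sqlcode):
    """Classify statement execution from its SQLCODE, per the rules above."""
    if sqlcode < 0:
        return "error"      # the statement was not processed
    if sqlcode > 0:
        return "warning"    # the statement was processed, with a warning
    return "success"

assert sqlcode_outcome(-803) == "error"   # for example, a duplicate-key error
assert sqlcode_outcome(100) == "warning"  # for example, no row found
assert sqlcode_outcome(0) == "success"
```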
When DB2 raises a condition that matches a declared condition, DB2 passes control to
the condition handler. The condition handler performs the action that is indicated
by the handler-type and then executes the SQL-procedure-statement.
You can also use the DECLARE statement to define your own condition for a
specific SQLSTATE.
Example 8-64 on page 266 shows the general flow of the condition handler in a
stored procedure.
statement1;
statement2;
End
Example 8-65 shows a CONTINUE handler for delete and update operations on
a table named EMP. Again, note that this code is solely intended for illustrative
purposes.
This information has to be ported into a DB2 table. When a user attempts to
access the data in the DB2 database, the application will verify each user's
database access rights, along with the host system information for the host from
which that user connects.
We need two tables for our DB2 conversion: one table to store user privilege
information ported from MySQL and one working table. The table definitions and
sample values are shown in Example 8-67.
-- table ACCESSLIST
-- it stores access rights for specific users connecting from specific hosts
-- remark: there should be different access-flags for different functions
-- fields:
-- username, whom access to the function should be granted
-- hostname or ip-address, from which the user must connect
-- select access flag (Y/N), if SELECT is granted
-- insert access flag (Y/N), if INSERT is granted
-- table APPLACCESS
-- it stores the info about users and their host asking for access
-- this table is filled automatically by the sample application
-- fields:
-- username, who asks for access to the function
-- hostname, from which the user connects
-- ip-address, from which the user connects
-- timestamp, when the user asks for access
Example 8-68 on page 269 lists the application code; remember that the code is
just for demonstration purposes.
db2Conn=DriverManager.getConnection("jdbc:db2:"+DB2DB,DB2USR,DB2PWD);
Both methods have their advantages and disadvantages, but by far, the most
popular method is the latter approach. Both MySQL and DB2 follow this
approach to various degrees of sophistication and with implementation
differences.
Because the isolation level determines how data is isolated from other processes
while the data is being accessed, select an isolation level that balances the
requirements of concurrency and data integrity. Table 8-5 gives you an overview
of the DB2 isolation levels.
Note: Only committed data is returned for the cursor stability isolation level,
which now uses the currently committed semantics introduced in Version 9.7.
Committed data was returned in previous releases as well, but now readers do not
wait for updaters to release the row locks. Instead, readers return data that is
based on the currently committed version, that is, data prior to the start of the
write operation.
By default, DB2 operates with the cursor stability isolation level. You can specify
transaction isolation at many levels, as we discussed in 8.3.5, Specifying the
isolation level in DB2 on page 276. For good performance, use the lowest
isolation level that your converted application requires.
8.3.4 Locking
Certain MySQL applications, when ported to DB2, appear to behave identically,
and you can ignore the topic of concurrency. However, if your applications
involve frequent access to the same tables, the applications might behave
differently. By default, MySQL runs in a mode that is called autocommit, which
means that MySQL considers each and every SQL statement as an atomic unit
of work or transaction.
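The practical consequence of autocommit can be shown with a toy model (Python, purely illustrative): under autocommit, a statement is committed as soon as it executes, so a later rollback has nothing to undo.

```python
class ToyConnection:
    """Toy model of autocommit versus explicit transactions (illustrative)."""
    def __init__(self, autocommit=True):
        self.autocommit = autocommit
        self.committed = []   # statements made permanent
        self.pending = []     # statements in the open unit of work

    def execute(self, stmt):
        if self.autocommit:
            self.committed.append(stmt)  # each statement is its own transaction
        else:
            self.pending.append(stmt)

    def commit(self):
        self.committed += self.pending
        self.pending = []

    def rollback(self):
        self.pending = []  # discards only uncommitted work

# Under autocommit (the MySQL default), rollback has nothing to undo:
auto = ToyConnection(autocommit=True)
auto.execute("INSERT ...")
auto.rollback()
assert auto.committed == ["INSERT ..."]

# With autocommit off, the same statement can still be undone:
manual = ToyConnection(autocommit=False)
manual.execute("INSERT ...")
manual.rollback()
assert manual.committed == []
```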
Another matter causing controversy among experts is the level of locking that is
required for implementation on the database level. Do you implement the locking
approach with the lowest level of overhead and, therefore, maintain locks on a
table level? Or, is it better to lock on a lower level, for example, on the page
level? Or, do you want even finer granularity with locking occurring on the row
level? As usual, the correct answer to these questions is, "It depends."
Lock level: DB2 locks at the row level and the table level; MyISAM offers no
locking, or table-level locking on explicit request only; InnoDB locks at the
row level and the table level.
The isolation level that you specify is in effect for the duration of the unit of work.
You can specify the isolation level in several ways. Use the following heuristics to
determine which isolation level will be used when compiling an SQL or XQuery
statement:
Static SQL:
If an isolation clause is specified in the statement, the value of that clause
is used.
If no isolation clause is specified in the statement, the isolation level used
is the isolation level that is specified for the package at the time when the
package was bound to the database.
Dynamic SQL:
If an isolation clause is specified in the statement, the value of that clause
is used.
If no isolation clause is specified in the statement, and a SET CURRENT
ISOLATION statement has been issued within the current session, the
value of the CURRENT ISOLATION special register is used.
If no isolation clause is specified in the statement, and no SET CURRENT
ISOLATION statement has been issued within the current session, the
isolation level used is the one specified for the package at the time that the
package was bound to the database.
For static or dynamic XQuery statements, the isolation level of the
environment determines the isolation level that is used when the XQuery
expression is evaluated.
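The heuristics above can be sketched as a small lookup function (Python, illustrative only; "CS" stands for cursor stability, the default):

```python
def effective_isolation(statement_clause=None, is_static=False,
                        session_register=None, package_isolation="CS"):
    """Apply the isolation-level lookup order described above."""
    if statement_clause is not None:
        return statement_clause       # an explicit isolation clause always wins
    if not is_static and session_register is not None:
        return session_register       # dynamic SQL: CURRENT ISOLATION register
    return package_isolation          # otherwise: the bind-time package value

# Static SQL ignores the CURRENT ISOLATION register:
assert effective_isolation(is_static=True, session_register="UR") == "CS"
# Dynamic SQL picks the register up:
assert effective_isolation(is_static=False, session_register="UR") == "UR"
# An explicit isolation clause overrides everything:
assert effective_isolation(statement_clause="RR", session_register="UR") == "RR"
```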
Tip: To determine the isolation level of a package, execute the following query
where XXXXXXXX is the name of the package and YYYYYYYY is the
schema name of the package. Both of these names must be in all capital
letters:
SELECT ISOLATION FROM SYSCAT.PACKAGES
WHERE PKGNAME = 'XXXXXXXX'
AND PKGSCHEMA = 'YYYYYYYY'
REXX and the command line processor connect to a database using a default
isolation level of cursor stability. Changing to another isolation level does not
change the connection state.
To determine the isolation level that is used by a REXX application, check the
value of the SQLISL predefined REXX variable. The value is updated each time
that the CHANGE ISOLATION LEVEL command executes.
Note: JDBC and SQLJ are implemented with CLI on DB2, which means that
the db2cli.ini settings might affect what is written and run using JDBC and
SQLJ.
Using the DB2 profile registry allows for centralized control of the environment
variables. Through use of various profiles, multiple levels of support are
provided. Remote administration of the environment variables is also available
when using the DB2 Administration Server.
You can set the variables by using the db2set command. The command
immediately stores the updated variables in the profile registry. Example 9-1
shows the various modes in which the db2set command can be used.
Example 9-1 Changing registry and environment variables using the db2set command
//this shows the current registry variables//
db2inst1@db2server:~> db2set -all
[i] DB2_COMPATIBILITY_VECTOR=8
[i] DB2PROCESSORS=0,1
[i] DB2COMM=tcpip
[g] DB2SYSTEM=db2server
[g] DB2INSTDEF=db2inst1
[g] DB2ADMINSERVER=dasusr1
DB2 configures the operating environment by checking for registry values and
environment variables, and DB2 resolves them in the following order:
1. Environment variables are set using the set command (or the export
command on UNIX platforms).
2. Registry values are set with the instance node-level profile (using the db2set
-i <instance name> <nodenum> command).
3. Registry values are set with the instance-level profile (using the db2set -i
command).
4. Registry values are set with the global-level profile (using the db2set -g
command).
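The resolution order can be sketched as follows (Python, illustrative only; the scope dictionaries stand in for the real environment and profile registry):

```python
def resolve(name, env=None, node=None, instance=None, global_=None):
    """Resolve a DB2 variable through the four levels, in the order above."""
    for scope in (env, node, instance, global_):
        if scope and name in scope:
            return scope[name]
    return None

# 1. An environment variable set in the shell overrides every registry level:
assert resolve("DB2COMM", env={"DB2COMM": "tcpip"},
               global_={"DB2COMM": "npipe"}) == "tcpip"

# 2.-4. Otherwise, the most specific registry profile wins:
assert resolve("DB2COMM", instance={"DB2COMM": "tcpip"},
               global_={"DB2COMM": "npipe"}) == "tcpip"
assert resolve("DB2COMM", global_={"DB2COMM": "npipe"}) == "npipe"
```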
Figure 9-1 illustrates the two DB2 configuration files and additional operating
system configurations.
Configuration tools
IBM has tools to assist you with configuring your database server. Two of these
tools are the Configuration Assistant and the IBM Data Studio.
You can use the DB2 Configuration Assistant to configure and maintain the
database objects that you or your application will use. The Configuration
Assistant is a graphical tool that is tightly integrated with the DB2 Control Center.
It allows you to update both the DB2 Profile Registry and the DB2 database
manager configuration parameters on the local machine, as well as remotely. It
can be launched from the DB2 Control Center or by calling the db2ca utility. The
Configuration Assistant also has an advanced view, which uses a notebook to
organize connection information by object: systems, instance nodes, databases,
database connection services (DCS), and data sources. Figure 9-2 on page 284
shows how to change the database manager configuration using the
Configuration Assistant.
Note: The Configuration Assistant has been deprecated in Version 9.7 and
might be removed in a future release. We recommend that you use the IBM
Integration Management solutions for managing DB2 for Linux, UNIX, and
Windows data and data-centric applications.
IBM Data Studio is part of the IBM Integration Management solutions for
managing your DB2 database. Data Studio simplifies the process of managing
your database objects by supporting instance and database management and by
providing the ability to run database commands and utilities. It provides a simple
user interface to invoke the database administration commands that you use to
maintain and manage your database environment. Figure 9-3 on page 285
shows how to change the database manager and database configuration using
Data Studio.
IBM offers a number of automatic tools and DB2 features to make database
administration effortless. You can use the Configuration Advisor to assist with
parameter configuration and to configure your database for optimal performance.
The Configuration Advisor looks at your current database, asks for user input on
the database workload, and suggests the best configuration parameters for
buffer pool size, database configuration, and database manager configuration.
Figure 9-4 on page 286 shows the suggested output for our sample inventory
database.
Database backup
To back up a DB2 database, database partition, or selected table space, you can
use the DB2 backup command. Use this command to create a backup to disk,
tape, or named pipes in UNIX. DB2 supports both offline and online backup:
db2 backup database invent to /home/db2inst1/backup
You can back up an entire database, database partition, or only selected table
spaces.
In addition to backing up the entire database every time, DB2 also supports
incremental backups where you can back up large databases on a regular basis
incrementally. Incremental backups require that the trackmod database
configuration parameter is set to yes. An incremental backup can be a cumulative
backup, which stores all data changes since the last successful full backup, or a
delta backup, which stores the changes since the last successful backup of any
type, whether that backup was full, delta, or cumulative. Figure 9-6 on page 289
and Example 9-2 on page 289 show the cumulative and delta backup techniques.
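The difference between cumulative and delta backups shows up in the chain of images needed at restore time. The following sketch (Python, illustrative only; the image names are invented) computes that chain from a backup history:

```python
def restore_chain(history):
    """Return the backup images needed to restore to the latest backup.

    history is a list of (kind, name) tuples, oldest first, where kind is
    "full", "cumulative", or "delta". A cumulative image holds all changes
    since the last full backup; a delta image holds the changes since the
    last backup of any kind.
    """
    chain = []
    skipping = False
    for kind, name in reversed(history):
        if skipping and kind != "full":
            continue  # superseded by a cumulative image already in the chain
        chain.append(name)
        if kind == "full":
            break  # the full image is the base of every chain
        if kind == "cumulative":
            skipping = True  # only the base full image is still required
    return list(reversed(chain))

history = [("full", "sun"), ("delta", "mon"), ("delta", "tue"),
           ("cumulative", "wed"), ("delta", "thu")]
# Restoring to Thursday needs the full base, the cumulative image
# (which supersedes Monday's and Tuesday's deltas), and Thursday's delta:
assert restore_chain(history) == ["sun", "wed", "thu"]
```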
IBM provides tools to assist you with configuring maintenance activities,
because it can be time-consuming to determine whether and when maintenance
activities, such as backup operations, need to run. You can use the Configure
Automatic Maintenance wizard within Data
Studio, as shown in Figure 9-7 on page 290, or the DB2 Control Center to
configure the database maintenance activities. With automatic maintenance, you
specify your maintenance objectives, including when automatic maintenance can
run. DB2 then uses these objectives to determine if the maintenance activities
need to be done and then runs only the required maintenance activities during
the next available maintenance window (a user-defined time period for running
automatic maintenance activities).
Database recovery
The recover utility performs the necessary restore and roll-forward operations to
recover a database to a specified time, based on information found in the
recovery history file. When you use this utility, you specify that the database is
recovered to a certain point in time or to the end of the log files. The utility then
selects the most suitable backup image and performs the recovery operations.
Example 9-3 shows how to use the RECOVER DATABASE command.
Rollforward Status
Node number = 0
Rollforward status = not pending
Next log file to be read =
Log files processed = S0000000.LOG - S0000001.LOG
Last committed transaction = 2009-09-11-19.25.01.000000 Local
Roll-forward recovery
Use roll-forward recovery to reapply changes that were made by
transactions that were committed after a backup was taken.
There are two types of roll-forward recovery:
Database roll-forward recovery
With this type of roll-forward recovery, you can tell DB2 to roll forward
a database to the state immediately before the failure or you can
specify the local time to which you want to roll forward your database.
Figure 9-9 on page 293 shows the roll-forward recovery technique of a
DB2 database.
Figure 9-11 on page 294 shows how you can use Data Studio to recover a DB2
database.
Database restore
DB2 database restore is as easy as backing up the database. Use the
RESTORE utility to perform DB2 database restore. The restore database
command rebuilds the database data or table space to the state that it was in
when the backup copy was made. This utility can overwrite a database with a
separate image or restore the backup copy to a new database. You can also use
the restore utility to restore backup images in DB2 Version 9.7 that were backed
up on DB2 Universal Database Version 8, DB2 Version 9.1, or DB2 Version 9.5.
If at the time of the backup operation, the database was enabled for roll-forward
recovery, you can take the database to its previous state by invoking the
following ROLLFORWARD command after a successful completion of a restore
operation:
db2inst1@db2server:~ > db2 ROLLFORWARD DATABASE invent COMPLETE
Tip: You can also perform DB2 backup and restore using the Data Studio.
You can also execute the RESTORE and ROLLFORWARD utilities from the Data
Studio. Figure 9-12 on page 296 shows the restore database window.
IBM provides two solutions that you can use to replicate data from and to
relational databases: SQL replication and Q replication. IBM also provides a
solution called event publishing for converting committed source changes into
messages in an XML or delimited format and for publishing those messages
across WebSphere MQ queues to applications.
Replication is supported not only between two DB2 systems running on separate platforms, but also with the following non-DB2 databases: SQL replication supports replication between DB2 on Linux, UNIX, Windows, z/OS, and iSeries; Informix; Microsoft SQL Server; Oracle; Sybase; and Teradata (target only). Q replication supports DB2 for Linux, UNIX, and Windows, DB2 for z/OS, Informix (target only), Microsoft SQL Server (target only), Oracle (target only), and Sybase (target only).
EXPORT utility
DB2 EXPORT is a powerful tool for quickly exporting DB2 data to the external file system. DB2 EXPORT uses an SQL SELECT or XQuery statement to export tables, views, large objects, or typed tables to one of three external file formats:
.DEL: Delimited ASCII format file
.WSF: Worksheet format, such as Lotus 1-2-3
.IXF: Integrated exchange format
IMPORT utility
You can use the files created by the EXPORT utility to populate a new DB2 database on the same system. Or, you can transfer these files to another platform and import or load them into the DB2 database that resides on that platform. The IMPORT utility supports the following
file formats:
ASC: Non-delimited ASCII format file
.DEL: Delimited ASCII format file
.WSF: Worksheet format, such as Lotus 1-2-3
.IXF: Integrated exchange format
Similar to EXPORT, you can use the IMPORT utility with the following methods:
The command line processor (CLP)
Use the IMPORT utility from the CLP by supplying the INSERT, INSERT_UPDATE, or REPLACE option. The following example shows a simple IMPORT command:
db2inst1@db2server:~ > db2 IMPORT FROM invent.ixf OF ixf MESSAGES
msg.txt INSERT INTO admin.owners
Data Studio
Use Data Studio to import data graphically. Figure 9-14 on page 301 shows
the Import wizard.
For the automatic failover, you use the DB2-integrated component called Tivoli
System Automation for Multiplatforms (often referred to as cluster manager).
This component monitors all resources involved in both systems, that is, DB2
instances, databases, network interfaces, and other components, and initiates
the database takeover. At the same time, DB2 is aware of Tivoli System
Automation for Multiplatforms and notifies it when a planned outage is
necessary, for example, when the database is manually stopped by an
administrator. This way, a database administrator can, for the most part, use the
usual DB2 commands without having to involve the cluster manager. Every DB2
edition supporting HADR includes all of the necessary software packages, as
well as licenses for Tivoli System Automation for Multiplatforms, and offers to
install these products during the DB2 setup. DB2 also performs the configuration
of Tivoli System Automation for Multiplatforms.
Figure 9-16 on page 306 illustrates an example high availability disaster recovery scenario.
For more information and implementation steps, refer to High Availability and
Disaster Recovery Options for DB2 on Linux, UNIX, and Windows, SG24-7363.
9.6 Autonomics
Automated task management allows the automation of database management jobs by scheduling activities according to specific requirements. It is particularly useful for performing regular maintenance tasks, such as backup, space monitoring, error checking, maintenance, and so forth.
IBM strives for autonomic computing and the development of highly intelligent
database systems. The DB2 autonomic computing environment is
self-configuring, self-healing, self-optimizing, and self-protecting. By sensing and
responding to situations that occur, autonomic computing shifts the burden of
managing a computing environment from the database administrator to the
technology. Autonomic computing provides users with improved resiliency,
higher return on investment (ROI), and lower total cost of ownership (TCO) by
accelerating the implementation of new capabilities to gain the highest value
possible.
Example 9-6 on page 308 shows how to enable Self-Tuning Memory Manager
and how to configure each parameter to be managed by Self-Tuning Memory
Manager.
Automatic Storage
The automatic storage feature simplifies storage management for table spaces.
When you create an automatic storage database, you specify the storage paths
where the database manager will place your table space data. Then, the
database manager manages the container and space allocation for the table
spaces as you create and populate them. By default, automatic storage is turned
on:
Automatic storage databases
Automatic storage is intended to make storage management easier. Rather
than managing storage at the table space level using explicit container
definitions, storage is managed at the database level and the responsibility of
creating, extending, and adding containers is taken over by the database
manager.
Automatic maintenance
The database manager provides automatic maintenance capabilities for
performing database backups, keeping statistics current, and reorganizing tables
and indexes as necessary. Performing maintenance activities on your databases
is essential in ensuring that they are optimized for performance and
recoverability.
Maintenance of your database can include some or all of the following activities:
Backups
When backing up a database, the database manager takes a copy of the data
in the database and stores it on another medium in case of failure or damage
to the original. Automatic database backups help to ensure that your
database is backed up properly and regularly so that you do not have to worry
about when to back up or know the syntax of the BACKUP command.
Data defragmentation (table or index reorganization)
This maintenance activity can increase the efficiency with which the database
manager accesses your tables. Automatic reorganization manages an offline
table and index reorganization so that you do not need to worry about when
and how to reorganize your data.
Data access optimization (statistics collection)
The database manager updates the system catalog statistics on the data in a
table, the data in indexes, or the data in both a table and its indexes. The
optimizer uses these statistics to determine which path is necessary to
access the data. Automatic statistics collection attempts to improve the
performance of the database by maintaining up-to-date table statistics. The
goal is to allow the optimizer to choose an access plan based on accurate
statistics.
Configuration Advisor
You can use the Configuration Advisor to obtain recommendations for values of
the buffer pool size, database configuration parameters, and database manager
configuration parameters.
You can display the recommended values or apply them by using the APPLY
option of the CREATE DATABASE command. The recommendations are based
on input that you provide and system information that the Configuration Advisor
gathers.
When you create a database, this tool is automatically run to determine and set
the database configuration parameters and the size of the default buffer pool
(IBMDEFAULTBP). The values are selected based on system resources and the
intended use of the system. This initial automatic tuning means that your
database performs better than an equivalent database that you might create with
the default values. It also means that you will spend less time tuning your system
after creating the database.
You can run the Configuration Advisor at any time (even after your databases
are populated) to have the tool recommend and optionally apply a set of
configuration parameters to optimize performance based on the current system
characteristics. You can use the graphical database administration tools to run the Configuration Advisor.
The values that are suggested by the Configuration Advisor are relevant for only
one database per instance. If you want to use the Configuration Advisor on more
than one database, each database must belong to a separate instance.
Data compression
You can compress both tables and indexes to save storage. Compression is fully
automatic. After you specify that a table or index must be compressed using the
COMPRESS YES clause of the CREATE TABLE, ALTER TABLE, CREATE
INDEX, or ALTER INDEX statements, there is nothing more you must do to
manage compression. Temporary tables are compressed automatically; indexes
for compressed tables are also compressed automatically, by default. We
discuss data compression further in 11.2, Data compression on page 386.
Utility throttling
This feature regulates the performance impact of maintenance utilities so that
they can run concurrently during production periods. Although the impact policy
for throttled utilities is defined by default, you must set the impact priority if you
want to run a throttled utility. The throttling system ensures that the throttled
utilities run as frequently as possible without violating the impact policy.
Currently, you can throttle statistics collection, backup operations, rebalancing
operations, and asynchronous index cleanup.
As of DB2 Data Server 9.5, Workload Management is built right into the DB2
engine, allowing administrators to monitor and control database activities, such
as DDL and DML statements, over their full life cycle. Through definable rules,
you program the engine to automatically filter certain workloads and apply
execution priorities to control concurrency or activity through the setup of
thresholds. Workload Management can explicitly control CPU usage among executing work and can detect and prevent so-called runaway queries, which, for example, exceed the predicted or configured number of rows returned, the execution time, or the estimated execution costs. On the AIX platform specifically, DB2 workload management does not stop at the database level and can be tightly integrated with the operating system's workload management.
For more details and setup information, refer to the IBM DB2 9.7 for Linux, UNIX,
Windows Information Center at this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
DB2 Control Center is tightly coupled with other DB2 tools; Figure 9-17 shows a
hierarchy of database objects on the leftmost panel and details on the rightmost
panel. You start the Control Center by entering the db2cc command.
You can start the following tools from the Control Center Tools menu:
Replication Center
Satellite Administration Center
Command Center
Command Editor
Task Center
Health Center
Journal
License Center
Configuration Assistant
A benefit of these new tools is that you have a single environment in which to develop and manage your database, which required separate tools in the past. This tool also supports multiple database servers, such as DB2 for Linux, UNIX, Windows, i5, and z/OS, Apache Derby, Informix Dynamic Server, and other database servers. This flexibility makes managing multiple databases much easier.
The IBM Optim and Data Studio tool suite is built on the Eclipse platform. The Eclipse platform is a framework for creating integrated development environments (IDEs); plug-ins exist to allow development in Java, C/C++, PHP, COBOL, Ruby, and more.
Two versions, Optim Database Administrator and Optim Development Studio, are available for purchase. IBM Data Studio is available at no charge.
Optim Database Administrator 2.2 is based on Eclipse 3.4. It can share a shell with the
following products:
Optim Development Studio 2.2
Optim Query Tuner 2.2
InfoSphere Data Architect 7.5.x.x
Optim Development Studio provides the following key features for database
object management. Typically, you perform these tasks on test databases that
you use to test your applications.
If you work on a large team, you can use the following features to enable team
members to share resources:
You can share data development projects using supported source code
control systems.
You can share database connection information by importing and exporting
this information to XML files.
You can customize the user interface to enable and disable visible controls
and defaults.
Optim Development Studio 2.2 is based on Eclipse Version 3.4 and can share a shell with the following products:
Rational Application Developer for WebSphere Software 7.5.x.x
Optim Database Administrator 2.2
InfoSphere Data Architect 7.5.x.x
With the Data Studio Administration Console, you can perform these tasks:
View system health at a glance:
Connect to and monitor multiple databases across various platforms from
a single console.
Configure thresholds for warnings and alerts for key performance
indicators.
Drill down into alerts and warnings for a database:
Examine the details when problems occur with key performance
indicators.
Filter the display of statistics by time period, database, or application.
Browse alert history:
Collect and retain alert history for 72 hours.
Filter the display of statistics by time period, database, or alert type.
Use expert recommendations for problem solving:
Review expert recommendations to help solve the problems that are
causing a specific alert.
View the system and database parameters and statistics associated with a
warning or alert.
You can also use Data Studio Administration Console to monitor Q replication
and event publishing, generate replication health reports, and perform basic
replication operations.
You can launch Data Studio Administration Console from the IBM Data Studio
Developer user interface so that you can monitor IBM data servers for status,
including database availability, dashboards, and alerts. Figure 9-19 on page 319
shows the Data Studio Administration Console dashboard.
We also provide information about how you can check that system behavior has
not changed in an undesired way.
Furthermore, we discuss the methods and tools available for DB2 to tune the
database in order to achieve optimal performance.
It is always best to tie all test dates directly to their related conversion activity
dates, which prevents the test team from being perceived as the cause of a
delay. For example, if system testing is to begin after the delivery of the final
build, system testing begins the day after delivery. If the delivery is late, system
testing starts from the day of delivery, not on a specific date. This approach is
called dependent or relative dating.
Figure 10-1 shows the test phases during a typical conversion project. Test plans are defined at a very early stage. The test cases, and all subsequent tasks, must be prepared for all test phases.
Prepare Infrastructure
The time required for testing depends on the availability of an existing test plan and already prepared test items. The effort also depends on the degree of change introduced during the application and database conversion.
The testing process must detect if all rows were imported into the target
database, ensure that all data type conversions were successful, and check
random data byte-by-byte. The data checking process must be automated by
appropriate scripts. When testing data conversion results, you must perform
these steps:
Check IMPORT/LOAD messages for errors and warnings.
Count the number of rows in the source and target databases and compare
them.
Prepare scripts that perform data checks.
Involve data administration staff familiar with the application and its data to
perform random checks.
SQL3148W A row from the input file was not inserted into the table. SQLCODE "-545" was
returned.
SQL0545N The requested operation is not allowed because a row does not
satisfy the check constraint "DB2INST1.TABLE01.SQL090915100543100".
SQLSTATE=23513
SQL3117W The field value in row "3" and column "1" cannot be converted to a SMALLINT
value. A null was loaded.
SQL3125W The character data in row "4" and column "2" was truncated because the data is
longer than the target database column.
SQL3110N The utility has completed processing. "4" rows were read from the input file.
SQL3149N "4" rows were processed from the input file. "3" rows were
successfully inserted into the table. "1" rows were rejected.
As shown in the summary, during the import process one record from the input
file was rejected, and three records were inserted into the database. To
understand the nature of the warnings, you must look into the data source file
and the table definition (use the db2look command). The table definition for
Example 10-1 on page 324 is shown in Example 10-2, and the data file for
Example 10-1 on page 324 is shown in Example 10-3.
SQL3117W The field value in row "F0-3" and column "1" cannot be converted to a SMALLINT
value. A null was loaded.
SQL3125W The character data in row "F0-4" and column "2" was truncated
because the data is longer than the target database column.
SQL3110N The utility has completed processing. "4" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "09/15/2009
10:15:23.836019".
The LOAD command leaves a table that was created with constraints in check pending state. Accessing the table with SQL queries generates a warning:
SQL0668N Operation not allowed for reason code "1" on table <TABLE_NAME>.
SQLSTATE=57016.
You need to use the SET INTEGRITY SQL statement to move loaded tables into
a usable state. Example 10-5 shows a way to validate constraints. All rows that
violated constraints will be moved to exception table table01_e.
The SET INTEGRITY statement has many options, such as turning integrity on
only for new data, turning integrity off, or specifying exception tables with
additional diagnostic information. To read more about the SET INTEGRITY
command, refer to this Web site:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
For each table, you must count the number of rows and store the information in
the CK_ROW_COUNT table. You can use the following INSERT statement for
that purpose:
INSERT INTO ck_row_count SELECT 'tab_name', COUNT(*), 'MYS', sysdate() FROM
tab_name
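The same statement must be issued once per table. As an illustrative sketch (not part of the original guide), the per-table statements can be generated by a small script; the function name and sample table names here are hypothetical:

```python
# Sketch: build the row-count INSERT statement for each table.
# "MYS" rows are counted on the MySQL source, "DB2" rows on the target.
def row_count_insert(table, source_tag):
    # MySQL uses sysdate(); DB2 uses the CURRENT TIMESTAMP special register.
    ts = "sysdate()" if source_tag == "MYS" else "CURRENT TIMESTAMP"
    return (f"INSERT INTO ck_row_count "
            f"SELECT '{table}', COUNT(*), '{source_tag}', {ts} FROM {table}")

# Hypothetical table names for illustration only.
statements = [row_count_insert(t, "MYS") for t in ("customers", "orders")]
```

Each generated string can then be executed against the corresponding database with the usual client.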
You can manually convert the table ck_row_count and its data to the target DB2 database. Example 10-7 presents the DB2 version of the table.
On the DB2 system, repeat the counting process with the equivalent INSERT
statement:
INSERT INTO ck_row_count SELECT 'tab_name', COUNT(*), 'DB2', CURRENT
TIMESTAMP FROM tab_name
Storing the row counts in an SQL table is convenient, because with a single query, you can get the names of the tables that contain a different number of rows in the source and target databases:
SELECT tab_name FROM (SELECT DISTINCT tab_name, row_count FROM
ck_row_count) AS t_temp GROUP BY t_temp.tab_name HAVING(COUNT(*) > 1)
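The logic behind this query can be sketched outside SQL as well. The following Python sketch mirrors the DISTINCT/GROUP BY/HAVING steps using in-memory tuples instead of the ck_row_count table; the sample data is hypothetical:

```python
# Sketch: a table name whose (tab_name, row_count) pair is not identical
# on both systems survives the DISTINCT step twice, so HAVING COUNT(*) > 1
# flags it as a mismatch.
def mismatched_tables(rows):
    # rows: (tab_name, row_count, source_tag) tuples from both systems.
    distinct = {(name, count) for name, count, _src in rows}
    seen = {}
    for name, _count in distinct:
        seen[name] = seen.get(name, 0) + 1
    return sorted(name for name, hits in seen.items() if hits > 1)

# Hypothetical counts: "customers" matches, "orders" does not.
rows = [("customers", 1200, "MYS"), ("customers", 1200, "DB2"),
        ("orders", 530, "MYS"), ("orders", 529, "DB2")]
```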
You can extend this approach for comparing the number of rows for additional
checking, such as comparing the sum of numeric columns. Here are the steps
that summarize the technique:
1. Define check sum tables on the source database and characterize the scope
of the computation.
2. Perform the computation and store the results in the appropriate check sum
tables.
3. Convert the check sum tables just as you convert the other user tables.
4. Perform equivalent computations on the target system, and store the
information in the converted check sum tables.
5. Compare the computed values.
Table 10-1 on page 330 provides computations for selected database types. The
argument for the DB2 SUM() function is converted to DECIMAL type, because, in
most cases, the SUM() function returns the same data type as its argument,
which can cause arithmetic overflow. For example, when calculating the sum on
an INTEGER column, if the result exceeds the INTEGER data type range, error
SQL0802N is generated: Arithmetic overflow or other arithmetic exception
occurred. Converting the argument to DECIMAL eliminates the error.
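The overflow risk can be illustrated with a small sketch (hypothetical values, not DB2 code): summing within a fixed 32-bit range fails exactly where SQL0802N would be raised, while widening the type first succeeds.

```python
# Sketch: why the SUM() argument is cast to DECIMAL.
INT_MAX = 2**31 - 1  # upper bound of the DB2 INTEGER range

def sum_as_integer(values):
    # Mimics SUM() over an INTEGER column: the running total must stay
    # within the 32-bit range, otherwise SQL0802N (overflow) is raised.
    total = 0
    for v in values:
        total += v
        if not -2**31 <= total <= INT_MAX:
            raise OverflowError("arithmetic overflow (SQL0802N equivalent)")
    return total

def sum_as_decimal(values):
    # Widening the argument (DECIMAL in DB2, arbitrary precision here)
    # eliminates the overflow.
    return sum(values)
```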
Along with the functional testing, you must also check the application against
performance requirements. Because there are many architectural differences
between MySQL and DB2, certain SQL operations might require further
optimization. Observing the performance differences in the early testing stages increases the chance of preparing better-optimized code for the new environment.
Before going into production, you must verify the converted database under high
volumes and loads. These tests must emulate the production environment, and
these tests can determine if further application or database tuning is necessary.
The stress load can also reveal other hidden problems, such as locking issues, which otherwise might be observed only in a production environment.
MySQL users and privileges are resolved in DB2 with operating system users and groups. You must compare the list of MySQL users to the equivalent DB2 operating system users. You must verify all of DB2's authorities to allow the correct individuals to connect to the database, and you must verify all of the privileges for all database objects.
Here is a list of DB2 error message prefixes for your reference:
ASN: Replication messages
CCA: Client Configuration Assistant messages
CLI: Call level interface messages
DB2: Command line processor messages
DBA: Control Center and Database Administration Utility messages
DBI: Installation or configuration messages
EXP: Explain utility messages
FLG: Information Catalog Manager messages
DB2 also provides detailed information for each message. The full error
message describes the nature of the problem in detail along with potential user
responses. To display the full message for the DB2 return code, you can use the
DB2 command db2 ? error-code in Linux or AIX. Because the question mark
(?) is a special character, you must separate the DB2 command and the error
code with a double quotation mark ("). See Example 10-10.
Explanation:
2 All the containers assigned to this DMS table space are full.
This is the likely cause of the error.
[...]
You can find the complete information about the DB2 message format and a
listing of all the messages in the Messages Reference, Volume 1,
SC27-2450-00, and Messages Reference, Volume 2, SC27-2451-00, which are
available online at this Web site:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
db2diag.log
The db2diag.log file is the most frequently used file for DB2 problem investigation.
You can find this file in the DB2 diagnostic directory, defined by the DIAGPATH
variable in the database manager configuration. If the DIAGPATH parameter is
not set, by default, the directory is located at this path for Linux and UNIX:
$HOME/sqllib/db2dump
<INSTALL PATH> is the directory where DB2 is installed, and <DB2INSTANCE> is the
name of the DB2 instance.
Most of the time, the default DIAGLEVEL value is sufficient for problem determination. In certain cases, especially on development or test systems, you can set the parameter to 4 to collect all informational messages. However, keep an eye on the database activity level and the space available on the file system, because the large amount of data recorded in the file can cause performance issues. Setting DIAGLEVEL to 4 can also make the file extremely large and harder to read.
We next explain the db2diag.log file entries. The numbers bolded in the
example correspond to the following numbers:
1. A time stamp and time zone for the message.
2. The record ID field. The recordID of the db2diag log file specifies the file
offset at which the current message is being logged (for example, 27204)
and the message length (for example, 655) for the platform where the DB2
diagnostic log was created.
3. The diagnostic level associated with an error message, for example, Info,
Warning, Error, Severe, or Event.
4. The process ID.
5. The thread ID.
6. The process name.
7. The name of the instance generating the message.
8. For multi-partition systems, the database partition generating the message.
(In a non-partitioned database, the value is 000.)
9. The database name.
10. The application handle. This value aligns with that used in db2pd output and
lock dump files. It consists of the coordinator partition number followed by the
coordinator index number, separated by a dash.
11. Identification of the application for which the process is working. In this
example, the process generating the message is working on behalf of an
application with the ID 9.26.54.62.45837.070518182042.
The TCP/IP-generated application ID is composed of three sections:
i. IP address: It is represented as a 32-bit number displayed as a
maximum of 8 hexadecimal characters.
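As an illustrative sketch, decoding such a hexadecimal IP section back to dotted-decimal form is a matter of splitting it into four octets. The hex value 091A363E used here is an assumed encoding of the 9.26.54.62 address from the sample application ID:

```python
# Sketch: decode the 32-bit IP section of a TCP/IP-generated
# application ID from its hexadecimal representation.
def decode_ip(hex_ip):
    h = hex_ip.zfill(8)  # pad to a full 8 hex characters
    # Each pair of hex characters is one octet of the dotted address.
    return ".".join(str(int(h[i:i + 2], 16)) for i in range(0, 8, 2))
```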
Trap files
The database manager generates a trap file if it cannot continue processing due
to a trap, segmentation violation, or exception.
All signals or exceptions received by DB2 are recorded in the trap file. The trap
file also contains the function sequence that was running when the error
occurred. This sequence is sometimes referred to as the function call stack or
stack trace. The trap file also contains additional information about the state of
the process when the signal or exception was caught.
A trap file is also generated when an application is forced off the system while running a fenced thread-safe routine. The trap occurs as the process is shutting down. This trap does not indicate a fatal error and is nothing to be concerned about.
The files are located in the directory specified by the DIAGPATH database
manager configuration parameter.
On all platforms, the trap file name begins with a process identifier (PID),
followed by a thread identifier (TID), followed by the partition number (000 on
single partition databases), and concludes with .trap.txt.
There are also diagnostic traps, which are generated by the code when certain
conditions occur that do not warrant crashing the instance, but where it might be
useful to verify values within the stack. These traps are named with the PID in
decimal format, followed by the partition number (0 in a single partition
database).
For example, a trap file with a process identifier (PID) of 6881492 and a thread identifier (TID) of 2 is named:
6881492.2.000.trap.txt
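The naming convention described above can be sketched as a small helper (illustrative only; the zero-padded three-digit partition number matches the single-partition example):

```python
# Sketch: build a trap file name as <pid>.<tid>.<partition>.trap.txt,
# with the partition number padded to three digits (000 on a
# single-partition database).
def trap_file_name(pid, tid, partition=0):
    return f"{pid}.{tid}.{partition:03d}.trap.txt"
```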
You can generate trap files on demand using the db2pd command with the -stack
all or -dump option. In general, though, only run this command as requested by
IBM Software Support.
Dump files
When DB2 determines that extra information is required for collection due to an
error, it often creates binary dump files in the diagnostic path. The binary dump
file is named with the process or thread ID that failed, the node where the
problem occurred, and ends with the .dump.bin extension, as shown in this
example:
6881492.2.010.dump.bin
These files are located in the directory that is specified by the DIAGPATH
database manager configuration parameter.
Maintenance version
You can use the db2level utility to check the current version of DB2. As shown in
Figure 10-2 on page 340, the utility returns information about the installed
maintenance updates (fix packs), the word length used by the instance (32-bit or
64-bit), the build date, and other code identifiers. We recommend that you
periodically check to determine if the newest available fix packs are installed.
In one simple step, the tool can gather database manager snapshots,
configuration files, and operating system parameters, which can help you
determine the problem more quickly. This example is a sample call of the utility:
db2support . -d invent -c
The dot represents the current directory where the output file is stored. The rest
of the command options are not required and can be omitted. The -d and -c
clauses instruct the utility to connect to the invent database and to gather
information about database objects, such as table spaces, tables, or packages.
The Web site has the most recent copies of documentation, a knowledge base to
search for technical recommendations or DB2 defects, links for product updates,
the latest support news, and other useful DB2-related links.
Authorized Program Analysis Reports (APARs) are defects in the DB2 code that
have been discovered by clients and that require a fix. APARs have unique
identifiers and are always specific to a particular version, but they can affect
multiple products in the DB2 family that run on multiple platforms. Fixes for
APARs are provided through the DB2 fix packs.
On the DB2 support site, you can search for closed, open, and HIPER APARs. A
closed status for an APAR indicates that a resolution for a problem has been
created and included in a specific fix pack. Open APARs represent DB2 defects
that are currently being addressed or are waiting to be included in the next
available fix pack. High-Impact or PERvasive (HIPER) APARs are critical problems that you must review to assess the potential effect of staying at a particular fix pack level.
The DB2 Technical Support site offers e-mail notification of critical or pervasive
DB2 client support issues, including HIPER APARs and fix pack alerts. To
subscribe to it, follow the DB2 Alert link on the Technical Support main page.
You can also send DB2 for Linux, UNIX, and Windows questions to:
[email protected].
Guidelines and reference materials (which you might need when calling IBM
support), as well as the telephone numbers, are available in the IBM Software
Support Guide at this Web site:
http://techsupport.services.ibm.com/guides/handbook.html
When using these table functions in a database partitioned environment, you can
choose to receive data for a single partition or for all partitions. If you choose to
receive data for all partitions, the table functions return one row for each partition.
Using SQL, you can sum the values across partitions to obtain the value of a
monitor element across partitions.
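The per-partition aggregation can be sketched as follows (illustrative only; partition values and the element name are hypothetical stand-ins for a monitor table function's result rows):

```python
# Sketch: when a monitor table function returns one row per partition,
# the database-wide value of a counter is the sum over those rows.
def total_across_partitions(rows, element):
    return sum(row[element] for row in rows)

# Hypothetical result rows, one per partition.
partition_rows = [{"partition": 0, "rows_read": 1500},
                  {"partition": 1, "rows_read": 2300}]
```

In SQL, the same aggregation is a SUM over the table function's output grouped across the partition column.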
Monitor table functions can be divided into three categories, depending on the
information that they monitor:
System: The system monitoring perspective includes the complete volume of
work and effort expended by the data server to process application requests.
This perspective helps you determine all of the data server activities, as well
as the activities for a particular subset of application requests. The following
list shows the table functions for retrieving system information:
MON_GET_SERVICE_SUBCLASS
MON_GET_SERVICE_SUBCLASS_DETAILS
MON_GET_WORKLOAD
MON_GET_WORKLOAD_DETAILS
MON_GET_CONNECTION
MON_GET_CONNECTION_DETAILS
MON_GET_UNIT_OF_WORK
MON_GET_UNIT_OF_WORK_DETAILS
Example 10-12 shows an example of how you can use the MON_GET_TABLE
function to retrieve the rows read, inserted, updated, and deleted from all tables
in the ADMIN schema.
Snapshot monitoring
Snapshot monitoring describes the state of database activity at a particular point
in time when a snapshot is taken. Snapshot monitoring is useful in determining
the current state of the database and its applications. Taken at regular intervals,
the snapshots are useful for observing trends and foreseeing potential problems.
You can take snapshots from the command line, by using custom APIs, or
through SQL by using table functions. Example 10-13 on page 344 shows an
extract from a sample snapshot invoked from the command line.
Database Snapshot
[...]
[...]
[...]
In Example 10-13, the snapshot has collected database-level information for the
INVENT database. Several of the returned parameters display point-in-time
values, such as the number of currently connected applications:
Applications connected currently = 1
Other parameters can contain historical values, such as the maximum number of
concurrent connections that have been observed on the database:
High water mark for connections = 9
Cumulative or historical values relate to the point in time when the counters were last initialized. The counters can be reset to zero by the RESET MONITOR command or by the appropriate DB2 event. In Example 10-13,
The monitor switches can be turned on at the instance level or the application
level. To switch the monitors at the instance level, modify the appropriate
database manager parameter. After modifying the DFT_MON_BUFPOOL
parameter, as shown in Example 10-16, all users with SYSMAINT, SYSCTRL, or
SYSADM authorities are able to collect buffer pool statistics on any database in
the instance.
Table 10-2 shows the complete list of monitor switches and related database
manager (DBM) parameters.
Sample snapshots
The database manager snapshot (Example 10-18) captures information specific
to the instance level. The information centers around the total amount of memory
that is allocated to the instance and the number of agents that are currently
active on the system.
The lock snapshot (Example 10-19 on page 347) is useful in determining what
locks an application currently holds and the locks that other applications are
waiting on. The snapshot lists all applications on the system and the locks that
each of these applications holds. Each lock is given a unique identifier number,
and each application is given a unique identifier number.
The table snapshot (Example 10-20) contains information about the usage and
creation of all tables. This information is useful in determining how much work is
being run against a table and how much the table data changes. You can use
this information to decide how to lay out your data physically.
The table space and buffer pool snapshots (Example 10-21) contain similar
information. The table space snapshot returns information regarding the layout of
the table space and the amount of space that is used. The buffer pool snapshot
contains information about the amount of space currently allocated for buffer
pools and the amount of space that is required when the database is next reset.
Both snapshots contain a summary of the way in which data is accessed from
the database. This access can be done from a buffer pool, directly from tables on
disk, or through a direct read or write for LOBs or LONG objects.
Example 10-23 and Example 10-24 show how to obtain, by using the table
functions and views, monitoring information similar to what we obtained in
Example 10-17 with the GET SNAPSHOT command. Example 10-23 demonstrates a query
that captures the snapshot of lock information for the currently connected
database. Example 10-24 is a query that captures a snapshot of lock information
about the SAMPLE database for the currently connected database partition.
Table 10-3 lists the snapshot table functions, administrative views, and return
information that can be used to monitor your database system. All administrative
views belong to the SYSIBMADM schema.
For the following list of snapshot table functions, if you enter NULL for the
currently connected database, you get snapshot information for all databases in
the instance:
SNAP_GET_DB_V95
SNAP_GET_DB_MEMORY_POOL
SNAP_GET_DETAILLOG_V91
SNAP_GET_HADR
SNAP_GET_STORAGE_PATHS
SNAP_GET_APPL_V95
SNAP_GET_APPL_INFO_V95
SNAP_GET_AGENT
SNAP_GET_AGENT_MEMORY_POOL
SNAP_GET_STMT
SNAP_GET_SUBSECTION
SNAP_GET_BP_V95
SNAP_GET_BP_PART
The database name parameter does not apply to the database manager-level
snapshot table functions; they have an optional parameter for database partition
number.
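As an illustration of the table-function interface, a database-level query along these lines returns one row per database (a sketch for DB2 9.7; NULL requests all databases in the instance, and -2 requests all database partitions):

```sql
SELECT DB_NAME, APPLS_CUR_CONS, CONNECTIONS_TOP, TOTAL_CONS
  FROM TABLE(SNAP_GET_DB_V95(CAST(NULL AS VARCHAR(128)), -2)) AS T
```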
Event monitoring
Event monitors are used to monitor the performance of DB2 over a fixed period
of time. The information that can be captured by an event monitor is similar to the
snapshots, but in addition to snapshot-level information, event monitors also
examine transition events in the database, and they consider each event as an
object. Event monitors can capture information about DB2 events in the following
areas:
Statements: A statement event is recorded when an SQL statement ends. The
monitor records the statement's start and stop time, the CPU used, the text of
dynamic SQL, the return code of the SQL statement, and other metrics,
such as the fetch count.
Event monitors are created with the CREATE EVENT MONITOR SQL
statement. Information about an event monitor is stored in the system catalog
tables, so its definition can be reused later.
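A minimal sketch of such a definition (the monitor name and file path are illustrative, and the target directory must already exist):

```sql
CREATE EVENT MONITOR stmtmon FOR STATEMENTS WRITE TO FILE '/tmp/stmtmon';
SET EVENT MONITOR stmtmon STATE 1;
```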
[...]
The access plan that is acquired from Visual Explain helps you to understand
how individual SQL or XQuery statements are executed. You can use the
information displayed in the Visual Explain graph to tune SQL and XQuery
queries to optimize performance.
You can start Visual Explain from the Control Center or from the Optim Data
Studio toolset. From Data Studio, create or open an SQL or XQuery statement. In
the Main panel view, right-click and select Open Visual Explain, as shown in
Figure 10-3.
A configuration window appears where you can specify the general settings and
the values for Visual Explain to use for special registers when fetching explain
data.
For simple databases, the default configurations might be sufficient for your
needs. However, in most cases, you might want to add additional table spaces.
The benefit of multiple table spaces is that you can assign separate database
objects to separate table spaces and assign the table spaces to dedicated
physical devices. Using this method allows each table object to utilize the
hardware allocated to the table space to which it belongs. This approach
essentially allows for table-level backup.
You can create an SMS table space using the MANAGED BY SYSTEM clause in
the create table space definition. The benefit of using an SMS table space is that
it does not require initial storage; space is not allocated by the system until it is
required. Creating a table space with SMS requires less initial work, because you
do not have to predefine the containers.
However, with SMS table spaces, the file system of the operating system
decides where each logical file page is physically stored. Pages might not be
stored contiguously on disk, because the storage placement depends on the file
system.
With DMS, the database manager can ensure that pages are physically
contiguous, because it bypasses operating system I/O and interfaces with the
disk directly. This approach can improve performance significantly. You can
create a DMS table space by using the MANAGED BY DATABASE clause in the
create table space definition.
The disadvantage is that a DMS table space requires more tuning and
administrative effort, because you must add more storage containers as the table
space fills with data. However, you can easily add new containers, drop, or
modify the size of existing containers. The database manager then automatically
rebalances existing data into all the containers belonging to the table space.
Therefore, DMS table spaces are ideal for performance-sensitive applications,
particularly applications that involve a large number of INSERT operations.
For optimal performance, place large-volume data and indexes in DMS table
spaces and, if possible, split them across separate raw devices. Initially,
system temporary table spaces need to be of the SMS type. In an online
transaction processing (OLTP) environment, there is no need to create large
temporary objects to process SQL queries, so the SMS system temporary table
space is a good starting point. The easiest way to optimize your table spaces is
to use table spaces that are managed by automatic storage.
As shown in Figure 10-5, all data modifications are not only written to table space
containers, but they are also logged to ensure recoverability. Because every
INSERT, UPDATE, or DELETE statement is replicated in the transactional log,
the flushing speed of the logical log buffer can be crucial for the entire database
performance. To understand the importance of logical log placement, remember
that the time necessary to write data to disk depends on the physical data
distribution on disk. The more random reads or writes that are performed, the
more disk head movements are required, and therefore, the slower the writing
speed. Flushing the logical log buffer to disk is by its nature sequential, and other
operations must not interfere with it. Locating logical log files on separate devices
isolates them from other processes and ensures uninterrupted sequential writes.
To change logical log files to a new location, you must modify the
NEWLOGPATH database parameter, as shown in Example 10-27. The logs are
relocated to the new path on the next database activation (it takes time to create
the files).
However, the most effective way to configure these parameters is to set them to
automatic and let DB2 manage them, as shown in Example 10-28.
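The relocation described above can be sketched from the command line (the database name and path are illustrative; the logs move at the next activation):

```
db2 update db cfg for invent using NEWLOGPATH /db2logs
db2 deactivate db invent
db2 activate db invent
```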
If there are a relatively small number of disks available, it can be difficult to keep
logical logs, data, indexes, system temporary table spaces (more important for
processing large queries in a data warehousing environment), backup files, or
the operating system paging file on separate physical devices. A compromise
solution is to have one large file system striped by a disk array (RAID device) and
create table spaces with only one container. The load balancing is shifted to the
hardware, and you do not have to worry about space utilization. If you want
parallel I/O operations on a single container, you must set the
DB2_PARALLEL_IO registry variable before starting the DB2 engine.
If this registry variable is set, and the prefetch size of the table is not
AUTOMATIC, the degree of parallelism of the table space is the prefetch size
divided by the extent size. If this registry variable is set, and the prefetch size of
the table space is AUTOMATIC, DB2 automatically calculates the prefetch size
of a table space. Table 10-4 on page 360 summarizes the available options and
how parallelism is calculated for each situation.
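The rule for the non-AUTOMATIC case reduces to a simple division; a minimal sketch (the numbers are illustrative page counts):

```python
# Degree of I/O parallelism when DB2_PARALLEL_IO is set and the table
# space's prefetch size is not AUTOMATIC: prefetch size / extent size.
def io_parallelism(prefetch_size_pages: int, extent_size_pages: int) -> int:
    return prefetch_size_pages // extent_size_pages

# PREFETCHSIZE 128 pages with EXTENTSIZE 32 pages -> 4 parallel requests
print(io_parallelism(128, 32))  # -> 4
```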
Initially, set the total size of buffer pools to 10% to 20% of available memory. You
can monitor the system later and correct it. DB2 allows changing buffer pool
sizes without shutting down the database. The ALTER BUFFERPOOL statement
with the IMMEDIATE option takes effect right away, except when there is not
enough reserved space in the database-shared memory to allocate new space.
This feature can be used to tune database performance according to periodical
changes in use, for example, switching from daytime interactive use to nighttime
batch work.
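An online resize of the kind described above can be sketched as follows (the buffer pool name and page counts are illustrative):

```sql
ALTER BUFFERPOOL IBMDEFAULTBP IMMEDIATE SIZE 60000;
-- after the batch window, shrink it back
ALTER BUFFERPOOL IBMDEFAULTBP IMMEDIATE SIZE 20000;
```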
When the total available size is determined, this area can be divided into
separate buffer pools to improve utilization. Having more than one buffer pool
can preserve data in the buffers. For example, let us suppose that a database
has many frequently used smaller tables, which normally reside in the buffer in
their entirety, and thus, are accessible quickly. Now, let us suppose that there is
a query running against a very large table using the same buffer pool and
involving reads of more pages than the total buffer size. When this query runs,
the pages from the small, frequently used tables will be lost, making it necessary
to reread them when they are needed again.
Creating an extra buffer pool for system temporary data can also be valuable for
the system performance, especially in an OLTP environment where the
temporary objects are relatively small. Isolated temporary buffer pools are not
influenced by the current workload, so it takes less time to find free pages for
temporary structures, and it is likely that the modified pages will not be swapped
out to disk.
Example 10-29 shows how to create buffer pools, assuming that an additional
table space named DATASPACE for storing data and indexes was already
created and that there is enough memory in the system. Use this example as a
starting buffer pool configuration for a 2 GB RAM system.
The results:
BPNAME NPAGES PAGESIZE TBSPACE
-------------------- ----------- ----------- --------------------
IBMDEFAULTBP 16384 4096 SYSCATSPACE
IBMDEFAULTBP 16384 4096 SYSTOOLSPACE
Although you can tune your buffer pools manually, using the Self-Tuning Memory
Manager is an easier and more effective way of tuning the buffer pools for
optimal performance. As we discussed in 9.6, Autonomics on page 306, the
Self-Tuning Memory Manager can tune database memory parameters and buffer
pools without any DBA intervention. The Self-Tuning Memory Manager works
with buffer pools of multiple page sizes and can easily trade memory between
the buffer pools as needed. You can turn on the Self-Tuning Memory Manager
for a specific buffer pool by issuing commands in Example 10-30.
The first command in Example 10-30 turns the Self-Tuning Memory Manager on,
which is the default. The second command tells DB2 to automatically tune the
buffer pool BP32. You can tune individual buffer pools or all of the buffer pools
with the Self-Tuning Memory Manager.
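In the spirit of Example 10-30, the two commands can be sketched as follows (the database and buffer pool names are illustrative):

```
db2 update db cfg for invent using SELF_TUNING_MEM ON
db2 "ALTER BUFFERPOOL BP32 SIZE AUTOMATIC"
```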
For databases with a heavy update transaction workload, you can generally
ensure that there are enough clean pages in the buffer pool by setting the
parameter value to be equal to or less than the default value. A percentage
larger than the default can help the performance of your database if there are a
small number of extremely large tables. To change the default parameter, you
can use the following command:
db2 update db cfg for sample using CHNGPGS_THRESH 40
A single transaction must fit into the available log space to be completed; if it
does not fit, the transaction is rolled back by the system (SQL0964C The
transaction log for the database is full). To process transactions that
modify large numbers of rows, adequate log space is needed.
Currently, you can calculate the total log space that is available for transactions
by multiplying the size of one log file (database parameter LOGFILSIZ) and the
number of logs (database parameter LOGPRIMARY).
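That calculation can be sketched as follows, assuming illustrative parameter values (LOGFILSIZ is expressed in 4 KB pages):

```python
# Total log space available to transactions:
# LOGFILSIZ (in 4 KB pages) * 4096 bytes * LOGPRIMARY log files.
def total_log_bytes(logfilsiz_pages: int, logprimary: int) -> int:
    return logfilsiz_pages * 4096 * logprimary

# e.g. LOGFILSIZ 1000 and LOGPRIMARY 3 -> 12,288,000 bytes (about 11.7 MB)
print(total_log_bytes(1000, 3))  # -> 12288000
```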
Locking is the mechanism that the database manager uses to control concurrent
access to data in the database by multiple applications. Each database has its
own list of locks (a structure stored in memory that contains the locks held by all
applications concurrently connected to the database). The size of the lock list is
controlled by the LOCKLIST database parameter.
The default storage for LOCKLIST on Windows and UNIX is set to AUTOMATIC.
On 32-bit platforms, each lock requires 48 or 96 bytes of the lock list, depending
on whether other locks are held on the object. On 64-bit platforms, each lock
requires 64 or 128 bytes of the lock list, depending on whether other locks are
held on the object.
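A rough 64-bit sizing sketch based on the byte counts above (the workload numbers are illustrative):

```python
# On 64-bit platforms: 128 bytes for the first lock on an object,
# 64 bytes for each additional lock held on the same object.
def lock_list_bytes(objects_locked: int, total_locks: int) -> int:
    extra_locks = total_locks - objects_locked
    return objects_locked * 128 + extra_locks * 64

# 1,000 objects, 5,000 locks in total -> 384,000 bytes of lock list
print(lock_list_bytes(1000, 5000))  # -> 384000
```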
When the maximum number of lock requests has been reached, the database
manager replaces existing row-level locks with table locks (lock escalation). This
operation reduces the requirements for lock space, because transactions will
To check the current usage of locks, use snapshots, as shown in Example 10-32.
The snapshot collects the requested information at the time that the command
was issued. Issuing the get snapshot command later can produce other results,
because, in the mean time, the applications might commit the transaction and
release the locks. To check lock escalation occurrences, look at the db2diag.log
file.
Log buffer
Log records are written to disk when one of the following situations occurs:
A transaction commits or a group of transactions commit, as defined by the
mincommit configuration parameter.
The log buffer is full.
Another internal database manager event occurs, which results in log records
being written to disk.
This log buffer size must also be less than or equal to the dbheap parameter.
Buffering the log records results in more efficient logging file I/O, because the log
records are written to disk less frequently and a greater quantity of log records
are written out at each time.
The default size for the log buffer is 256 4 KB pages. In most cases, the log
records are written to disk when one of the transactions issues a COMMIT or
when the log buffer is full. We recommend that you increase the size of this
buffer area if there is considerable read activity on a dedicated log disk or if there
is high disk utilization. Increasing the size of the log buffer can result in more
efficient I/O operations, especially when the buffer is flushed to disk. The log
records are written to disk less frequently, and more log records are written each
time.
When increasing the value of this parameter, also consider increasing the
DBHEAP parameter, because the log buffer area uses space that is controlled by
the DBHEAP parameter.
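A sketch of raising both values together from the command line (the database name and values are illustrative; LOGBUFSZ is counted in 4 KB pages and draws its space from DBHEAP):

```
db2 update db cfg for invent using LOGBUFSZ 512
db2 update db cfg for invent using DBHEAP 2400
```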
Before running the application snapshot, switch on the Unit Of Work monitor. In
Example 10-33, at the time that the snapshot was issued, you can see that there
are only three applications running on the system. The first transaction uses 478
bytes of log space, the second transaction uses 21,324 bytes of log space, and
the last transaction uses 110,865 bytes of log space, which is roughly 28 pages
more than the default log buffer size. The snapshot gives only the current values
from the moment that the command was issued. To get more valuable
information about the usage of log space by transactions, run the snapshot
multiple times.
You can use an access plan to view statistics for selected tables, indexes, or
columns; properties for operators; global information, such as table space and
function statistics; and configuration parameters that are relevant to optimization.
With Visual Explain, you can view the access plan for an SQL or XQuery
statement in graphical form.
Additional read operations are also required if many rows have been deleted. In
this case, consider reorganizing the table to match the index and to reclaim
space. You can also reorganize the system catalog tables.
Because reorganizing a table usually takes more time than updating statistics,
you can execute the RUNSTATS command to refresh the current statistics for
your data and then rebind your applications. If refreshed statistics do not improve
performance, reorganization might help.
You can execute the RUNSTATS command against a table from the command
line. Example 10-34 shows how to execute the RUNSTATS command against our
sample inventory table.
It is also possible to update statistics using the Data Studio tool. Within the
Database Explorer View, connect to your database and drill down the database
object folders until you find the table for which you want to update the statistics.
In our example, we connect to the invent database, and then we drill down
through the invent database folder, the schema folder, the ADMIN schema folder,
and the Tables folder to the list of tables in the invent database in the ADMIN
schema. To
pull up the table options, we right-click the INVENTORY table icon, as shown in
Figure 10-7 on page 368.
You can access the table reorganization option from the Data Studio by
right-clicking the table, as shown in Figure 10-7 on page 368. After selecting the
REORG Table option in the drop-down menu, the reorganization table wizard
opens in the main view. Figure 10-9 on page 370 shows the reorganization table
wizard. You can select the parameters for the REORG command, and select Run
to execute the command.
By default in DB2 9, any new databases run the Configuration Advisor in the
background and have the configuration recommendations automatically applied.
To disable this feature, or to explicitly enable it, you must use the db2set
command, as shown in Example 10-35 on page 372.
In any case, you can run the Configuration Advisor manually at any time against
a database to update the current configuration, regardless of the
DB2_ENABLE_AUTOCONFIG_DEFAULT setting. All recommendations are
based on the input that you provide and the system information that the
Configuration Advisor gathers. The generated recommendations can be applied
or simply displayed.
It is important to point out that the values that are suggested by the Configuration
Advisor are relevant for only one database per instance. If you want to use this
advisor on more than one database, each database must belong to a separate
instance.
To invoke this wizard from the DB2 Control Center, expand the object tree until
you find the database that you want to tune. Select the icon for the database,
right-click, and select Configuration Advisor. Through several dialog windows,
the wizard collects the following information:
Percentage of memory that is dedicated to DB2
Type of workload
Number of statements per transaction
Transaction throughput
Trade-off between recovery and database performance
Number of applications
Isolation level of applications that are connected to the database
Based on the supplied answers, the wizard proposes configuration changes and
gives you the option to apply the recommendations or to save them as a task for
the Task Center for later execution, as shown in Figure 10-11 on page 373.
Figure 10-12 on page 374 shows the resulting recommendations.
You can also acquire the initial configuration recommendations through the
text-based AUTOCONFIGURE command (Example 10-36).
[...]
[...]
We can then run the db2advis command and specify the db2advis.in file as
the workload input script. Example 10-38 shows the syntax and output to execute
the index advisor. For more options, run db2advis -h from the command line.
Figure 11-1 on page 383 illustrates a simple example of how relational and XML
data are integrated within a single table. As shown in Figure 11-1 on page 383,
when creating a table within a DB2 database, you can specify both relational and
XML data types in separate column definitions. This specification creates a table,
where for every row of relational data, there will be an XML document associated
with that row.
Because XML documents can be large, they are physically stored in separate
objects by default. These objects are called the
XML Data Area (XDA) objects. For most application scenarios, XDA objects
provide excellent performance. However, you have the option of storing the XML
documents in the same physical space as the relational data by using the SET
INLINE LENGTH clause in the CREATE or ALTER statement. For more
information, refer to the IBM DB2 Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
In order to query and update SQL and XML data, you can use SQL and XQuery
statements (SQL/XML: International Organization for Standardization (ISO)
standard ISO/International Electrotechnical Commission (IEC) 9075-14:2003).
Several operations are available to directly modify not only full documents, but
also parts or subtrees of XML documents without having to read, modify, and
reinsert them. Using the XQuery language, you can directly modify single values
and nodes within the XML document. XQuery is a fairly new, standardized query
language supporting path-based expressions. You can obtain more information
about XQuery at this Web site:
http://www.w3.org/TR/2007/REC-xquery-20070123/
With pureXML, applications are not only able to combine statements from both
languages to query SQL and XML data; you can express many queries in plain
XQuery, in SQL/XML, or XQuery with embedded SQL. In certain cases, one of
the options to express your query logic might be more intuitive than another
option. In general, you must choose the correct approach for querying XML data
on a case-by-case basis, taking the application's requirements and
characteristics into account. Example 11-1 on page 384 shows a simple XQuery
command.
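A query of the kind shown in Example 11-1 can be sketched as follows, assuming the SAMPLE database and its CUSTOMER table with the XML column INFO:

```
db2 "XQUERY db2-fn:xmlcolumn('CUSTOMER.INFO')/customerinfo/name"
```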
1
--------------------------------------------
<name>Kathy Smith</name>
<name>Kathy Smith</name>
<name>Jim Noodle</name>
<name>Robert Shoemaker</name>
<name>Matt Foreman</name>
<name>Larry Menard</name>
6 record(s) selected.
1
<name>Kathy Smith</name>
1 record(s) selected.
Before you consider turning on row compression, you can inspect your tables to
see what potential savings to expect. Compressing and decompressing data is
transparent to the application.
To enable row compression, you can use the COMPRESS YES keywords in
either the CREATE or ALTER TABLE statement, as shown in Example 11-4.
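A sketch of both steps, using the ADMIN_GET_TAB_COMPRESS_INFO administrative function to estimate savings first (the schema and table names are illustrative; a classic REORG builds the compression dictionary for existing rows):

```sql
SELECT TABNAME, PAGES_SAVED_PERCENT
  FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('ADMIN', 'INVENTORY', 'ESTIMATE')) AS T;
ALTER TABLE admin.inventory COMPRESS YES;
```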
Figure 11-2 illustrates the mapping of repeating patterns in two table rows to
dictionary symbols representing those patterns. The end result is a compressed
data record that is shorter in length than the original uncompressed record,
which is depicted by the yellow rectangles representing the rows beneath the
table.
As of DB2 9.7, data compression has been extended to include all temporary
tables. Data compression for temporary tables reduces the amount of temporary
disk space that is required for large and complex queries, increasing query
performance.
If compression is enabled on a table with an XML column, the XML data that is
stored in the XDA object is also compressed. A separate compression dictionary
for the XML data is stored in the XDA object. XDA compression is not supported
for tables whose XML columns were created prior to this version; for such tables,
only the data object is compressed.
These systems can also be made highly available when using shared storage. In
this case, two or more nodes can share the file systems holding the table spaces.
If an outage occurs, the surviving node can immediately access the failed node's
table spaces and continue processing.
Remember you can use database partitioning efficiently in combination with table
partitioning and multidimensional clustering.
To partition a particular table, specify the PARTITION BY RANGE clause and the
partitioning columns. You can specify multiple columns and generated columns.
The columns must be base types; large objects (LOBs), LONG VARCHAR columns,
and so forth are not allowed. Figure 11-4 on page 390 describes the various syntaxes for creating
the same table partitions. The first CREATE table statement creates a table with
three partitions on the c1 column. This statement creates a partition to hold data
for each of the following ranges: 1 - 33, 34 - 66 and 67 - 99. We refer to this
CREATE table statement as short form, because it allows DB2 to create, name,
and distribute the partitions over three table spaces. In the second CREATE
TABLE statement, the user specifies the partition names by using the
PARTITION or PART key word. In this example, the user also specifies the table
spaces, in which each partition must be stored.
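The two forms described above can be sketched as follows (table, partition, and table space names are illustrative):

```sql
-- Short form: DB2 creates and names three ranges on c1
CREATE TABLE t1 (c1 INT)
  PARTITION BY RANGE (c1)
  (STARTING FROM (1) ENDING AT (99) EVERY (33));

-- Long form: named partitions, each placed in a specific table space
CREATE TABLE t2 (c1 INT)
  PARTITION BY RANGE (c1)
  (PARTITION p1 STARTING (1)  ENDING (33) IN tbsp1,
   PARTITION p2 STARTING (34) ENDING (66) IN tbsp2,
   PARTITION p3 STARTING (67) ENDING (99) IN tbsp3);
```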
After creating a partitioned table, you can INSERT, UPDATE, or LOAD data into
the table, and DB2 automatically places each row into the appropriate table
partition according to the specified ranges. If the inserted data does not fit
within the range of any partition, DB2 produces an error.
Traditionally, in order to archive older data, you moved data to the archived
locations, and you issued delete statements to remove the data from the current
table. This effort results in a full table scan to find all rows belonging to the
requested range. By using table partitioning, each table partition can be quickly
separated from the table using the DETACH PARTITION key words in the
ALTER TABLE statement. Example 11-5 describes the syntax for dropping a
particular table partition.
db2> COMMIT
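The roll-out described above can be sketched as follows (table, partition, and target names are illustrative):

```sql
ALTER TABLE admin.sales DETACH PARTITION p1 INTO admin.sales_archive;
COMMIT;
```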
Physically, there is no impact to the system when using the ALTER TABLE
DETACH PARTITION statement. The operation is extremely fast, because no
data movement is required.
The ATTACH command is similar to DETACH. For more details, visit the IBM
DB2 Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
Remember that you can use table partitioning efficiently in combination with
database partitioning and multidimensional clustering.
The indexes for each dimension are block-based, not record-based, which
dramatically reduces their size and the effort that is needed to log and
maintain them. Reorganizing the table to re-cluster it is unnecessary.
Example 11-6 shows the CREATE TABLE statement of an MDC table clustered
in three columns: itemId, nation, and orderDate. The block indexes for each
dimension are created automatically.
Figure 11-7 on page 393 shows the data clustering according to the three
dimensions as defined in Example 11-6.
When organizing by dimensions, you can specify one or more table columns.
DB2 places all inserted rows with the same values for specific columns into a
physical location close to one another. This special physical location is called a
block. A block (extent) is a set of contiguous pages on disk, so access to these
records is sequential and accessed with minimal I/O operations. If an existing
block is filled, a new block is allocated. All blocks with the same combination of
dimension values are grouped into cells. With this internal organization, DB2 can
quickly find data along dimensions or find all rows for a specific combination of
dimension values.
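A sketch of an MDC definition of the kind in Example 11-6 (the column types are illustrative):

```sql
CREATE TABLE sales (
  itemId    INTEGER,
  nation    VARCHAR(30),
  orderDate DATE,
  amount    DECIMAL(10,2)
) ORGANIZE BY DIMENSIONS (itemId, nation, orderDate);
```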
MQTs are a powerful way to improve response time for complex queries,
especially queries that require one or more of the following operations:
Aggregating data over one or more dimensions
Joining and aggregating data over a group of tables
Including data from a commonly accessed subset of data, that is, from a hot
horizontal or vertical database partition
Repartitioning data from a table, or part of a table, in a partitioned database
environment
Knowledge of MQTs is integrated into the SQL and XQuery compiler. During the
query rewrite phase of compilation, the optimizer matches queries with
MQTs to determine whether to substitute an MQT for a query that accesses the
base tables. If an MQT is used, the EXPLAIN facility can provide
information about which MQT was selected.
Because MQTs behave like regular tables in many ways, the same guidelines
apply to MQTs for optimizing data access: use appropriate table space
definitions, create indexes, and issue RUNSTATS.
An MQT is created with the sum and count of sales for each level of the following
hierarchies:
Product
Location
Time, composed of year, month, day
Many queries can be satisfied from this stored aggregate data. The following
example shows how to create an MQT that computes the sum and the count of
sales along the product group and line dimensions; along the city, state, and
country dimension; and along the time dimension. It also includes several other
columns in its GROUP BY clause. Example 11-7 is an example of the CREATE
TABLE statement that will create an MQT table.
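In the spirit of Example 11-7, a deferred-refresh MQT over a sales table can be sketched as follows (all names are illustrative):

```sql
CREATE TABLE sales_summary AS (
  SELECT product_group, city, YEAR(sale_date) AS yr, MONTH(sale_date) AS mo,
         SUM(amount) AS total_amount, COUNT(*) AS sales_count
  FROM sales
  GROUP BY product_group, city, YEAR(sale_date), MONTH(sale_date)
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

REFRESH TABLE sales_summary;
```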
The larger the base tables become, the greater the improvements in response
time can be, because the MQT grows more slowly than the base table. MQTs
can effectively eliminate overlapping work among queries by performing the
computation after the MQTs are built and refreshed, as well as reusing their
content for many queries.
Distinct type
We can also use this type as a type for a column, as shown in the last
statement of Example 11-8.
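A sketch of a distinct type and its use as a column type, in the spirit of Example 11-8 (the table and column names are illustrative):

```sql
CREATE DISTINCT TYPE SKU_type AS INTEGER WITH COMPARISONS;
CREATE TABLE product (ID INTEGER, SKU SKU_type);
```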
AVG([DISTINCT] expression)
  MySQL: mysql> SELECT a, AVG(b) FROM t1 GROUP BY a;
  DB2: AVG([DISTINCT | ALL] expression)
       db2 "SELECT a, AVG(b) FROM t1 GROUP BY a"
  Returns the average of a set of numbers.
MAX([DISTINCT] expression)
  MySQL: mysql> SELECT a, MAX(b) FROM t1 GROUP BY a;
  DB2: MAX([DISTINCT | ALL] expression)
       db2 "SELECT a, MAX(b) FROM t1 GROUP BY a"
  Returns the maximum value in a set of values.
MIN([DISTINCT] expression)
  MySQL: mysql> SELECT a, MIN(b) FROM t1 GROUP BY a;
  DB2: MIN([DISTINCT | ALL] expression)
       db2 "SELECT a, MIN(b) FROM t1 GROUP BY a"
  Returns the minimum value in a set of values.
STDDEV(expression) / STDDEV_POP(expression)
  MySQL: mysql> SELECT STDDEV(a), a FROM t1 GROUP BY a;
  DB2: STDDEV([DISTINCT | ALL] expression)
       db2 "SELECT stddev(a), a FROM t1 GROUP BY a"
  Returns the standard deviation (/n) of a set of numbers.
SUM([DISTINCT] expression)
  MySQL: mysql> SELECT a, SUM(b) FROM t1 GROUP BY a;
  DB2: SUM([DISTINCT | ALL] expression)
       db2 "SELECT a, sum(b) FROM t1 GROUP BY a"
  Returns the sum of a set of numbers.
VAR_POP(expression) / VARIANCE(expression)
  MySQL: mysql> SELECT VAR_POP(a) FROM t1 GROUP BY a;
  DB2: VARIANCE([DISTINCT | ALL] expression)
       db2 "SELECT VARIANCE(a) FROM t1 GROUP BY a"
  Returns the variance of a set of numbers.
BIT_AND(expression) (an extension to the SQL standard)
  MySQL: mysql> SELECT BIT_AND(a), a FROM t1 GROUP BY a;
  DB2: No equivalent function; implement it with a user-defined function (UDF).
       Refer to B.1, "Sample code for BIT_AND" on page 414.
  Returns the value of the bitwise logical AND operation.
BIT_OR(expression) (an extension to the SQL standard)
  MySQL: mysql> SELECT BIT_OR(a), a FROM t1 GROUP BY a;
  DB2: No equivalent function; implement it with a UDF.
       Refer to B.1, "Sample code for BIT_AND" on page 414.
  Returns the value of the bitwise logical OR operation.
BIT_XOR(expression) (an extension to the SQL standard)
  MySQL: mysql> SELECT BIT_XOR(a), a FROM t1 GROUP BY a;
  DB2: No equivalent function; implement it with a UDF.
       Refer to B.1, "Sample code for BIT_AND" on page 414.
  Returns the value of the bitwise logical XOR operation.
GROUP BY on an alias
  MySQL: mysql> SELECT a AS x FROM t1 GROUP BY x;
  DB2: Use the column name for grouping.
       db2 "SELECT a FROM t1 GROUP BY a"
GROUP BY on a position
  MySQL: mysql> SELECT a FROM t1 GROUP BY 1;
  DB2: Use the column name for grouping.
       db2 "SELECT a FROM t1 GROUP BY a"
HAVING on an alias
  MySQL: mysql> SELECT a AS x FROM t1 GROUP BY a HAVING x > 0;
  DB2: Use the column name in the HAVING clause.
       db2 "SELECT a FROM t1 GROUP BY a HAVING a > 0"
MySQL: ASCII(string)
  mysql> SELECT ascii('a');   Result: 97
DB2: ASCII(string)
  db2 "VALUES ascii('a')"   Result: 97
Returns the ASCII code value.

MySQL: CHAR(int [USING character set])
  mysql> SELECT char(97);   Result: a
DB2: CHR(string)
  db2 "VALUES chr('97')"   Result: a
Returns the character that has the ASCII code value specified by the argument.

MySQL: CONCAT(string, string, ...)
  mysql> SELECT CONCAT(firstname, ' ', lastname) as FULLNAME from owners where id = 501;   Result: Angela Carlson
DB2: Use CONCAT(string, string) or || to implement CONCAT(list).
  db2 "SELECT (firstName || ' ' || lastName) as fullName from admin.owners where id = 501"   Result: Angela Carlson
Returns the concatenation of the string arguments.

MySQL: INSTR(substring, string) / LOCATE(substring, string, [position]) / POSITION(substring, string)
  mysql> SELECT LOCATE('N', 'DINING');   Result: 3
DB2: LOCATE(substring, string, [start], [CODEUNITS16 | CODEUNITS32 | OCTETS])
  db2 "SELECT LOCATE('N', 'DINING') FROM SYSIBM.SYSDUMMY1"   Result: 3
Returns the starting position of the first occurrence of one string within another string.

MySQL: LCASE(string) / LOWER(string)
  mysql> SELECT LCASE('JOB');   Result: job
DB2: LCASE(string)
  db2 "SELECT LCASE('JOB') FROM SYSIBM.SYSDUMMY1"   Result: job
Returns a string in which all characters have been converted to lowercase characters.

MySQL: LOAD_FILE(dirString)
  mysql> update blobTBL SET data = LOAD_FILE('/tmp/AquaBlue.jpg') WHERE id = 6;
DB2: Use the LOAD command with LOBS FROM <lob_directory>.
Inserts the file into the database.

MySQL: LTRIM(string)
  mysql> SELECT LTRIM(' Apple');   Result: Apple
DB2: LTRIM(string)
  db2 "SELECT LTRIM(' Apple') FROM SYSIBM.SYSDUMMY1"   Result: Apple
Removes blanks from the beginning of the string expression.

MySQL: QUOTE(string)
  mysql> SELECT quote(firstname) from owners where id = 501;   Result: 'Angela'
DB2: Use SELECT with ||.
  db2 "select ('''' || firstname || '''') from admin.owners where id = 501"   Result: 'Angela'
Returns the string enclosed in single quotes.

MySQL: RTRIM(string)
  mysql> SELECT RTRIM('PEAR ');   Result: PEAR
DB2: RTRIM(string)
  db2 "VALUES RTRIM('PEAR ')"   Result: PEAR
Removes blanks from the end of the string.

MySQL: SPACE(expression)
  mysql> SELECT space(30);
DB2: SPACE(expression)
  db2 "VALUES space(3)"
Returns a character string consisting of blanks with the length specified by the argument.

MySQL: TRIM([BOTH | LEADING | TRAILING [substring] FROM] string)
  mysql> select trim(trailing from trim(LEADING FROM ' abc ')) as OUTPUT;   Result: abc
DB2: TRIM([BOTH | LEADING | TRAILING [substring] FROM] string)
  db2 "VALUES trim(trailing from trim(LEADING FROM ' abc '))"   Result: abc
Removes blanks or occurrences of another specified character from the end or the beginning of a string expression.

MySQL: UCASE(string) / UPPER(string)
  mysql> SELECT UPPER('jobs');   Result: JOBS
DB2: UCASE(string) / UPPER(string)
  db2 "VALUES UPPER('jobs')"   Result: JOBS
Returns a string in which all characters have been converted to uppercase characters.

MySQL: EXTRACT(unit FROM expression)
  mysql> SELECT EXTRACT(YEAR_MONTH from '2009-08-31 05:06:00');
DB2: Concatenate the different date-stripping functions (DAY, YEAR, MONTH, DAYNAME, DAYOFWEEK, and so on).
  db2 "VALUES (YEAR('2009-08-31 05:06:00') || MONTH('2009-08-31 05:06:00'))"
MySQL: logical NOT as '!' in the SELECT list
DB2: Implement using a CASE expression and a VALUES statement, for example
  VALUES CASE WHEN 1!=1 THEN 0 ELSE 1 END

MySQL: & (bitwise AND)
DB2: Not available; implement using a UDF.
  Refer to UDF B.1, Sample code for BIT_AND on page 414.

MySQL: logical AND as '&&' in the SELECT list
DB2: Implement using a CASE expression and a VALUES statement.

MySQL: the = function in the SELECT list, for example select (1=1)
DB2: Implement using a CASE expression and a VALUES statement.

MySQL: << and >> (bitwise shifts)
DB2: No equivalent; implement using the power function:
  MySQL: SELECT (x>>y)  and  SELECT (x<<y)
  DB2:   SELECT (x/power(2,y))  and  SELECT (x*power(2,y))

MySQL: BIT_COUNT
DB2: No equivalent; implement using a UDF.
  Refer to UDF B.6, Sample code for BIT_COUNT on page 431.

MySQL: LEAST
DB2: Implement using the FnLeastN UDF; see the example in B.5, Sample code for LEAST on page 427.

MySQL: LIKE in the SELECT list
DB2: Implement using a CASE expression with LIKE and a VALUES statement.

MySQL: LIKE ... ESCAPE in the SELECT list
DB2: Implement using a CASE expression with LIKE and ESCAPE and a VALUES statement.

MySQL: NOT BETWEEN in the SELECT list
DB2: Implement using a CASE expression and a VALUES statement.

MySQL: NOT LIKE in the SELECT list
DB2: Implement using a CASE expression and a VALUES statement.
--
-- DB2 UDF(User-Defined Function) Samples for conversion
--
-- 2001/08/29
--
-- Name of UDF: BIT_AND (N1 Integer, N2 Integer)
--
-- Used UDF: None
--
-- Description: Returns the bitwise AND of both parameters.
--
-- Author: TOKUNAGA, Takashi
--
--------------------------------------------------------------------------
CREATE FUNCTION BITAND (N1 Integer, N2 Integer)
RETURNS Integer
SPECIFIC BITANDMySQL
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC
RETURN
WITH
Repeat (S, M1, M2, Ans) AS
(Values (0, N1, N2, 0)
Union All
Select S+1, M1/2, M2/2, Ans+MOD(M1,2)*MOD(M2,2)*power(2,S)
From Repeat
Where M1 > 0
AND M2 > 0
AND S < 32
)
SELECT ANS
FROM Repeat
WHERE S = (SELECT MAX(S)
FROM Repeat)
;
--------------------------------------------------------------------------
values bitand(10,8);
---------------------------------------------------
1
-----------
8
--------------------------------------------------------------------------
values bitand(14,3);
---------------------------------------------------
1
-----------
2
--------------------------------------------------------------------------
values bitand(1038,78);
---------------------------------------------------
1
-----------
14
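For BIT_OR and BIT_XOR, which the conversion tables also flag as having no DB2 equivalent, the same recursive technique can be adapted. The BITOR sketch below is an assumption modeled on the BITAND sample above, not code from the book; for XOR, the bit term would become MOD(M1+M2,2) instead:

```sql
-- Hypothetical BITOR UDF, modeled on the BITAND sample above.
-- An OR bit is 1 when either input bit is 1:
--   MOD(M1,2) + MOD(M2,2) - MOD(M1,2)*MOD(M2,2)
-- The recursion must continue while EITHER operand has bits left.
CREATE FUNCTION BITOR (N1 Integer, N2 Integer)
 RETURNS Integer
 SPECIFIC BITORMySQL
 LANGUAGE SQL
 CONTAINS SQL
 NO EXTERNAL ACTION
 DETERMINISTIC
RETURN
 WITH
 Repeat (S, M1, M2, Ans) AS
 (Values (0, N1, N2, 0)
  Union All
  Select S+1, M1/2, M2/2,
         Ans + (MOD(M1,2) + MOD(M2,2) - MOD(M1,2)*MOD(M2,2)) * power(2,S)
  From Repeat
  Where (M1 > 0 OR M2 > 0)
  AND S < 32
 )
 SELECT ANS
 FROM Repeat
 WHERE S = (SELECT MAX(S)
            FROM Repeat)
;
```

As with the BITAND sample, this handles nonnegative 32-bit inputs only; values bitor(10,8) should return 10.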
Main_Loop:
WHILE XN > 0 DO
  SET RetVal = SUBSTR(CHAR(MOD(XN,1000)),19,3) || RetVal;
  SET XN = XN/1000;
  IF XN > 0 THEN
    SET RetVal = ',' || RetVal;
  ELSE
    LEAVE Main_Loop;
  END IF;
END WHILE;
N 2 3
-------------------------- ----------------------- -----------------------
12.34567 12.34 12.
-12.34567 -12.34 -12.
120034.56700 120,034.56 120,034.
23400123456789.00000 123,400,123,456,789.00 123,400,123,456,789.
4 record(s) selected.
--------------------------------------------------------------------------
CREATE FUNCTION RPAD (C1 VarChar(4000), N integer, C2 VarChar(4000))
RETURNS VARCHAR(4000)
LANGUAGE SQL
SPECIFIC RPADBase
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',3,'*.'),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
ABC
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',20,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCDE X
1 record(s) selected.
UDF RPAD with the third parameter omitted is shown in Example B-7.
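A two-parameter RPAD can be implemented as a thin wrapper that calls the three-parameter version with a single blank as the pad string, in the same way the LPAD wrappers later in this appendix delegate to their base function. This is a sketch of that idea, not Example B-7 verbatim; the SPECIFIC name is illustrative:

```sql
-- Sketch of a two-parameter RPAD: pads with blanks by default.
-- Assumes the three-parameter RPAD base function already exists.
CREATE FUNCTION RPAD (C1 VarChar(4000), N integer)
 RETURNS VARCHAR(4000)
 LANGUAGE SQL
 SPECIFIC RPADParm2
 DETERMINISTIC
 CONTAINS SQL
 NO EXTERNAL ACTION
RETURN
 RPAD(C1, N, ' ')
;
```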
Running the RPAD function gives you the results that are shown in Example B-8.
1
--------------------------------------------------
ABCDE X
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',3) || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCX
1 record(s) selected.
The RPAD function accepts different numbers and data types of input arguments. Example B-9 shows two
more RPAD UDFs.
Example: B-10 Results of RPAD with an integer first parameter, using two and three parameters
1
--------------------------------------------------
927*.*.*.*.*
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad(927,12,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad(9021,3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
902
1 record(s) selected.
The counterparts to RPAD are the LPAD functions, which are shown in
Example B-11.
1
--------------------------------------------------
*.*.*.*.*.ABCDE
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',3,'*.'),50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABC
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',15,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCDEX
1 record(s) selected.
Like RPAD, LPAD allows a different number and data type of input
arguments. Example B-13 shows LPAD without the third parameter.
The results of Example B-13 look like those shown in Example B-14.
1
--------------------
ABCDE
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
ABC
1 record(s) selected.
Two more LPAD UDFs with different characteristics are shown in Example B-15.
--------------------------------------------------------------------------
CREATE FUNCTION LPAD (I1 Integer, N integer)
RETURNS VARCHAR(4000)
LANGUAGE SQL
SPECIFIC LPADIntParm2
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LPAD(rtrim(char(I1)),N,' ')
;
Example: B-16 Results of LPAD with an integer first parameter, using two and three parameters
1
--------------------------------------------------
*.*.*.*.*.*9021
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad(9021,15,''),50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
9021
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad(9021,3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
902
1 record(s) selected.
--
-- DB2 UDF(User-Defined Function) Samples for conversion
--
-- 2001/08/28, 08/29
--
-- Name of UDF: GREATEST (P1 VarChar(254), P2 VarChar(254), ...)
--
--
-- Used UDF: None
--
-- Description: Returns the greatest value in a list of values.
--
-- Author: TOKUNAGA, Takashi
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle2
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2 THEN P1
ELSE P2
END
;
---------------------------------------------------
--
-- GREATEST function with three parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle3
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2
THEN CASE
WHEN P1 >= P3 THEN P1
ELSE P3
END
ELSE CASE
WHEN P2 >= P3 THEN P2
ELSE P3
END
END
;
---------------------------------------------------
--
-- GREATEST function with four parameters
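The body of the four-parameter version is not shown above. One way to write it, reusing the two- and three-parameter overloads already defined rather than nesting CASE expressions by hand, is sketched below; this is an assumption, and the SPECIFIC name is illustrative:

```sql
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254),
                          P3 VarChar(254), P4 VarChar(254))
 RETURNS VarChar(254)
 LANGUAGE SQL
 SPECIFIC GREATESTOracle4
 DETERMINISTIC
 CONTAINS SQL
 NO EXTERNAL ACTION
RETURN
 -- Delegates to the three-parameter and two-parameter GREATEST
 -- overloads defined earlier in this appendix.
 GREATEST(GREATEST(P1, P2, P3), P4)
;
```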
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
defgh
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
endof...
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
endof...
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra'),20) FROM
sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
extra
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of'),20) FROM
sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
extra
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of','more'),20)
FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
more
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of','more','more
and ',' something'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
more and
1 record(s) selected.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics that are covered in this book.
Other publications
These publications are also relevant as further information sources:
IEEE Standard for Software Test Documentation (829-1998),
ISBN 0-7381-1444-8
Understanding DB2, Learning Visually with Examples, Second Edition,
ISBN-13:978-0-13-158018-3
Installing IBM Data Server Clients, GC27-2454-00
Installing DB2 Servers, GC27-2455-00
Getting Started with DB2 Installation and Administration on Linux and
Windows, GI11-9411-00
Database Administration Concepts and Configuration Reference,
SC27-2442-00
Database Monitoring Guide and Reference, SC27-2458-00
Database Security Guide, SC27-2443-00
Partitioning and Clustering Guide, SC27-2453-00
Troubleshooting and Tuning Database Performance, SC27-2461-00
Online resources
These Web sites are also relevant as further information sources:
DB2
Database Management
http://www.ibm.com/software/data/management/
DB2
http://www.ibm.com/software/data/db2/
MySQL
MySQL home page
http://www.mysql.com/
MySQL 5.1 Reference Manual
http://dev.mysql.com/doc/refman/5.1/en/index.html
phpMyAdmin
http://www.phpmyadmin.net/home_page/index.php
Others
VMware
http://www.vmware.com/
SUSE Linux Enterprise
http://www.novell.com/linux/
PHP
http://www.php.net/
PHP PECL extension
http://pecl.php.net/
PHP Manual - Database extensions
http://ca2.php.net/manual/en/refs.database.php
Index 443
desktop system 8
destination server 86
device name 14
diagnostic level 282
direct I/O 14
directory name 14
discover_db parameter 25
discover_inst parameter 25
disk space 90
distinctrow keyword 207
distributed platform 2
DML 71
double precision 118
driver code 3
dropin privilege 188
dynamic query 187
dynamic SQL 187
dynamic sql statement 28
dynamic table 46
dynamic warehouse 7
E
e-business 57
EDU 10
education opportunity 60
EJB 240
embedded analytical feature 7
embedded SQL statement 27
embedded system 3
employee inventory 76
encrypted password 168
engine dispatchable unit 10
engine infrastructure 10
Enterprise JavaBean 240
execute privilege 184
explain authority 183, 187
export mode 302
F
failover 57
federated storage engine 47, 130
file name 14
file system 90, 123
fixed term license 5
floating-point number 118
foreign key 48, 142, 162
foreign key constraint 47
formatted page 173
frm extension 43
full table scan 17
G
global level profile registry 280
global levels 280
global variable Privilege 184
go to label 264
grant command 179
grant table buffer 39
granting privilege 70
graphical interface 38
graphical tool 90
H
hashed index 133
health and fault monitor 6
heap size 127
heap storage engine 130
heap table 46
High availability 279
high availability 4
host information 25
host language variable 27
host name 39
I
I/O bound workload 16
ibm_db2 226
implicit casting 213
implicit privilege 186
implicit_schema privilege 187
import command 168
import mode 302
IMS 2
incremental backup 288
independent software vendor 5
index 136
index blocks 39
Index privilege 184
index privilege 188
indexes 13
InnoDB engine 162
InnoDB table 124
insert privilege 188
installation method 92
MyISAM table 124
myisamchk 51
myisampack 51
mysql 50
MySQL Administrator 51
MySQL index
  fulltext 136
  non-unique 136
  primary key 136
  spatial 136
  unique 136
MySQL Query Browser 51
MySQL server program
  comp_err 50
  innochecksum 51
  make_binary_distribution 50
  make_win_bin_dist 50
  msql2mysql 51
  my_print_defaults 51
  myisamlog 51
  mysql.server 50
  mysql_config 51
  mysql_fix_privilege_tables 50
  mysql_install_db 50
  mysql_secure_installation 50
  mysql_tzinfo_to_sql 50
  mysql_upgrade 50
  mysqlbug 50
  mysqld 50
  mysqld_multi 50
  mysqld_safe 50
  mysqld-debug 50
  mysqld-max 50
  mysqld-nt 50
  mysqlmanager 50
  perror 51
  replace 51
mysql.columns_priv table 179
mysql.procs_priv table 179
mysql.tables_priv table 179
mysql.user table 177, 179
mysql_convert_table_format 51
mysql_fix_extensions 51
mysql_setpermissions 51
mysql_tableinfo 51
mysql_waitpid 51
mysql_zap 51
mysqlaccess 51
mysqladmin 50
mysqlbinlog 51
mysqlcheck 50
mysqldump 40, 51, 145, 168
mysqldump options
  --help 169
  --no-create-info 169, 171
  --no-data 169
  --password 169
  --tab 170
  --tab= 169
  --user 169
mysqlhotcopy 40, 51, 172
mysqlhotcopy script 168
mysqlimport 51
mysqlshow 51
N
named pipe 39
national language support 71
natural join 208
nickname privilege 184
node 25, 125
non-recoverable database 287
non-transaction-safe storage engine 45
null value 217
numeric data type 216
  bigint 118
  bit 117
  bool 117
  boolean 117
  decimal 118
  double 118
  fixed 118
  float 118
  int 118
  integer 118
  numeric 118
  real 118
  smallint 117
  tinyint 117
numeric values 215
O
object privileges 18
object-oriented extension 107
ODBC driver manager 29
OLAP 7, 57
OLE DB provider 32
response file 101
response file installation 92
restore utility 294
result output file 144
revoke command 179
roll forward recovery 292
rollback 129
rollforward utility 295
root privilege 180
root table 135
root type 398
routine privilege 184
routines 184
row compression 131, 386
row trigger 137
runstats command 199
runtime support 23
S
schema 43, 128
Schema level 187
Schema privilege 184
secadm authority 183, 187
security 11, 13
security label privilege 188
security model 181
security system 177
select privilege 188
self tuning memory manager 6
self-tuning memory manager 59, 164-165
sequence privilege 184, 188
server code 10
server privilege 184
service request 77
service ticket 76, 81
service type 82
servlets 240
setup wizard 90
severity level 84
shared memory networking protocol 39
shared-nothing architecture 48
single byte integer 117
single precision 118
single-tier 25
slave database 40
slave system 40
small integer 117
sort heap 164
sort heap size 307
sort heap threshold 307
source code language 62
source distribution 41
source table 203
sources package 109
special group 186
SQL 2
SQL interface 38
SQL replication 5
SQL statements 183
sqladm authority 183, 187
SQLCODE 259
SQLj 240
SQLj applet 31
SQLj translator 31
SQLSTATE 259
standard interface 25
statement trigger 137
static sql statement 27
static tables 46
statistical data 41
status 144
STMM 6
storage device 126
storage engine 39-40, 129
storage engines 45
storage optimization 4
stored procedure 22, 84
straight_join keyword 206, 208
string and character data type
  binary 119
  blob 120
  char 119
  enum 120
  longblob 120
  longtext 120
  mediumblob 120
  mediumtext 120
  set 120
  tinyblob 120
  tinytext 120
  varbinary 119
  varchar 119
string function 211
strong typing 213
structured query language 123
structured type 398
subquery 209
workload privilege 184
X
XML support 71
XSR object privilege 184