
Using Table Functions in Db2 LUW©

A Monitoring Approach

This document can be found on the web, www.ibm.com/support/techdocs


Search for document number WP102778 under the category of “White Papers”.

Version Date: 25 November 2019

This article provides a monitoring approach for IBM Db2 databases via Db2 table functions. It
describes how relevant data can be collected and displayed for further tracking.

Malte Schünemann - Db2 Development, IBM Germany R&D, [email protected]


Trademarks

The following terms are registered trademarks of International Business Machines Corporation in the
United States and/or other countries: AIX, AS/400, DB2, IBM, Micro Channel, MQSeries, Netfinity,
NUMA-Q, OS/390, OS/400, Parallel Sysplex, PartnerLink, POWERparallel, RS/6000, S/390, Scalable
POWERparallel Systems, Sequent, SP2, System/390, ThinkPad, WebSphere.

The following terms are trademarks of International Business Machines Corporation in the United States
and/or other countries: DB2 Universal Database, DEEP BLUE, e-business (logo), ~, GigaProcessor,
HACMP/6000, Intelligent Miner, iSeries, Network Station, NUMACenter, POWER2 Architecture,
PowerPC 604, pSeries, Sequent (logo), SmoothStart, SP, xSeries, zSeries. A full list of U.S. trademarks
owned by IBM may be found at http://www.ibm.com/legal/copytrade.shtml . NetView, Tivoli and TME are
registered trademarks and TME Enterprise is a trademark of Tivoli Systems, Inc. in the United States and/or
other countries.
Oracle, MetaLink are registered trademarks of Oracle Corporation in the USA and/or other countries.
Microsoft, Windows, Windows NT and the Windows logo are registered trademarks of Microsoft
Corporation in the United States and/or other countries.
UNIX is a registered trademark in the United States and other countries licensed exclusively through The
Open Group.
LINUX is a registered trademark of Linus Torvalds.
Intel and Pentium are registered trademarks and MMX, Pentium II Xeon and Pentium III Xeon are
trademarks of Intel Corporation in the United States and/or other countries.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United
States and/or other countries.
Other company, product and service names may be trademarks or service marks of others.

Abstract
This document describes how to collect monitoring data for a database on Db2 LUW© in a
certain time frame.

The following key items are main characteristics of the approach described.
• The information is collected in iterations. This allows you to monitor the database over a
certain time period.
• Metrics represented by counter values are displayed as differences, rather than absolute
values. This eases the identification of possible pain points within the monitored time
frame.
• The procedure described runs the data evaluation independently from the data collection.
This allows you to perform the investigation in a separate, less critical environment
rather than on a sensitive production database.
• The data can be viewed in a spreadsheet. Key metrics can be graphically visualized.

The monitoring method allows you to keep track of changes of Db2 metrics with high granularity
across an extended period of time.

As an example, see the following chart that shows the total request time vs extended latch waits
and extended latch wait time for a single application handle.
Publications available so far describe two consecutive data collections and the differences
between the considered metrics. This requires detailed knowledge of the problem, and the right
point in time to be used for data collections. Examples are the db2mon script
(https://ibm.ent.box.com/s/iz3ytk28d8wsfg03s1lxwjsl7mifvfmu/file/160446905792) and the
article “Tuning and Monitoring Database System Performance”
(https://www.ibm.com/developerworks/community/wikis/home?
lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/Tuning%20and%20Monitoring
%20Database%20System%20Performance).

This document provides one of countless ways to monitor a Db2 database. We'd like to encourage
you to modify and even improve the described approach to fit it into your own monitoring
scenarios.
Table of Contents
Motivation................................................................................................................................5
Prerequisites and conventions...................................................................................................5
Terminology..............................................................................................................................6
1 Collect data..........................................................................................................................6
1.1 How to query table functions.......................................................................................6
1.2 Choose a data format for the collection........................................................................7
1.3 How to set up the collection process............................................................................8
1.4 Which data sets should be collected?.........................................................12
2 Prepare for investigation....................................................................................................13
2.1 Import the data...........................................................................................................13
2.2 The first query............................................................................................................14
2.3 Shortcomings.............................................................................................................17
3 A fully generic approach....................................................................................................17
3.1 Developing the parts of the generator query..............................................................18
3.2 Putting it all together..................................................................................................25
3.3 Using the generated SQL query.................................................................................30
3.4 Data representation....................................................................................................31
Appendix.................................................................................................................................35
A) Some SQL concepts......................................................................................................35
B) A monitoring scenario...................................................................................................37
C) Extended list of fields to be shown with absolute values..............................................40
References..............................................................................................................................42

© IBM Copyright, 2018 Version 1.0.2, 2019-11-25


Web location of document (www.ibm.com/support/techdocs) 4
Using Table Functions in Db2 LUW© - A Monitoring Approach
Motivation
You can monitor Db2 LUW© databases to a deep level using monitoring table functions as listed
e.g. in
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/do
c/c0053963.html .

However, many table functions return a large number of fields, which can make tracking
potential problems a tedious task. In addition, counter values accumulate from the
start of an entity (database start or connect time of an application process), or from the creation of
an object (an SQL statement in the SQL cache). Problem tracking often requires that you monitor
changes over a defined period of time. Consequently, the task requires that you check the change
of a value over that period rather than investigate absolute values.

This document provides an approach to

 collecting Db2 monitoring data in iterations over an extended, but defined period of time
 preparing the data for efficient tracking and investigation

Note that in order to follow the steps in this document, you need some scripting
skills (shell, awk) and advanced SQL knowledge. We recommend having a test system
available to reproduce the SQL statements and scripts that are discussed throughout this document.

Prerequisites and conventions


This document

• has been written for Db2 LUW© version 9.7, or higher, and

• uses the Korn shell as it is available in Linux and UNIX environments. In addition, we
also provide bash examples.

The SQL commands presented in this document can be run against a database via the Db2
CLP. Unless stated otherwise, copy and paste the SQL text into a file, replace the parts in blue
with the proper values, terminate the statement with a semicolon, and run it using the following OS
command:

db2 -tvf FILENAME

Terminology
Throughout this document the following terms are used:

Term Definition
diff value Difference between two values
Table function A Db2 LUW© monitoring table function as shipped with the product.
This document refers to mon_get_...() table functions.

1 Collect data
1.1 How to query table functions
The number of table functions that ship with Db2 LUW© increases with higher release levels.
In addition, their content changes, as many table functions gain new columns over
time. To ensure that you don’t miss important information that is available, but still have a
procedure that works across different release levels of Db2 LUW©, you need a generic SQL
structure that allows easy access to the complete data set you intend to collect.

Furthermore, when you query the result set of a monitoring table function like
mon_get_database(), you need to take extra care to save the time of the data collection.
Therefore, the data collection procedure requires the following.

 Flexibility across Db2 LUW© levels

 Queries on one table function can be modified easily to query another table function.

 Additional information for easier problem analysis can easily be amended, like the time
of the data collection

The following is an SQL query for the table function mon_get_database().

select
current timestamp as collection_timestamp,
t.*
from table(mon_get_database(-2) ) as t

The time of the data collection is retrieved from the Db2 special register current timestamp.
You can also add other information that you might find useful.

This SQL query is generic with regard to the number of columns returned by the table function.
We can expect this query to run on every level of Db2 LUW© provided the table function exists

at this level. Assuming you need to query another table function, e.g. mon_get_connection(),
you need to identify the query syntax of this table function and can then modify the SQL to get
the following.

select
current timestamp as collection_timestamp,
t.*
from table( mon_get_connection(NULL,-2) ) as t

So you just need to replace one table function with another and make sure to use the proper
parameter list for the table function.

The generalized SQL query structure then is the one shown below. The part to be adjusted is
shown in blue.

select
current timestamp as collection_timestamp,
t.*
from table( TABLE_FUNCTION ( [parameter list] )) as t

1.2 Choose a data format for the collection


Once the queries are defined, you need to decide where and how to store the data you retrieve.
Possible options are:

 Save the text output of SQL queries.


 Export data in an external data format.
 Store data directly in a table within the database.

Below, these options are discussed in more detail. However, this document focuses on the export
of data.

Save text output


Saving the output of the SQL query in a text file seems like the easiest way. However, this
makes the evaluation part very hard, and options to process the data further are very limited,
or require considerable effort. Therefore, this option is not useful.

Export the data

If the data cannot be investigated within the environment in which it is collected, data export is the best
option. The recommended data format is IXF. You also need to take care of LOB data, because many
table functions contain fields with LOB data types. Use the following command:

export to FILENAME.ixf of ixf
lobs to ./ lobfile FILENAME modified by lobsinfile
select
current timestamp as collection_timestamp,
t.*
from table( TABLE_FUNCTION ( [parameter list] )) as t

With the above command, the data part of the table function is stored in FILENAME.ixf,
while the LOB data goes to a file named FILENAME.001.lob. Of course, to benefit from this
method, you have to import the data into a target environment. This is described later in this
document.

The EXPORT command of Db2 LUW© issues a warning SQL27984W indicating it does
not have any table structure information that can be stored in the output file.
This warning is returned due to the nature of the SELECT statement being used for the
EXPORT, and is not a reason for concern.

Store data in a local table


The most direct approach for data collection is to store the data directly in the database where it is
collected. This means that at the time of data collection you need to have a table to store the data.

Once the table exists, you can collect data using the following command:

insert into TABLENAME (
select
current timestamp as collection_timestamp,
t.*
from table( TABLE_FUNCTION ( [parameter list] )) as t
)

1.3 How to set up the collection process


To complete the data collection discussion, you need to identify a good method to collect the data.
The following conditions should be met.

 Data is collected in several iterations.

 The frequency of iterations is roughly constant.

 The data collection occurs frequently enough.

 The impact on the monitored environment is minimized.

For most table functions, querying with a delay of at least 30 seconds does not cause too
much impact. For other table functions, it is advisable to be a bit more careful. The most
prominent example is mon_get_pkg_cache_stmt(), which returns the content of the
SQL cache. Apart from the impact on the database being monitored, the amount of data
returned is considerable, and storing the data in a table is time-consuming as
well. Therefore, to query the total SQL cache, use iterations with a delay of several
minutes. A good starting point is 10 or 15 minutes.

To run in iterations, the usual procedure is to use a shell script. We use the Korn shell here, as it is
installed in every Linux and UNIX environment running Db2 LUW©. In addition, we provide
bash versions, because this shell is very convenient and the preferred shell for many administrators.

Using the Korn shell


You can start with the following draft. The part in blue has to be replaced by the code that
performs the data collection.

#!/bin/ksh

_num_iterations=10
_delay=60

(( i = 0 ))
while [[ i -lt ${_num_iterations} ]]; do
< some activity >
(( i = i + 1 ))
sleep "${_delay}"
done

The intention is to run 10 iterations with a 1-minute delay. The problem with this traditional
approach is that the activities (marked blue) take time and therefore cause the iterations to take
longer than the amount of time indicated by parameter _delay.

Better results can be achieved, for example, with a co-process that runs in the background and signals
when the next iteration can start. To do so, the co-process writes a line once per interval. The main
process reads from the co-process and starts the next iteration once the line has been read.

#!/bin/ksh

_num_iterations=10
_delay=60

# co-process started
{
while true; do
sleep ${_delay}
print "next"
done
} |&

# run the loop collecting data


(( i = 0 ))
while [[ i -lt ${_num_iterations} ]]; do
< some activity >
# increment iterator
(( i = i + 1 ))
# wait for the next line to be read from co-process
read -p
done

A simple script for collecting data for table function mon_get_database() then looks as shown
below.

#!/bin/ksh

_num_iterations=10
_delay=60
_database="DB0"
_logfile="script.log"

touch "${_logfile}"

# co-process started
{
while true; do
sleep "${_delay}"
print "next"
done
} |&

# connect to the database to be monitored


db2 connect to "${_database}" >> "${_logfile}"

# run the loop collecting data


(( i = 0 ))
while [[ i -lt ${_num_iterations} ]]; do
# write the current timestamp
date
# run the command to collect / export the data
db2 "export to mon_get_db${i}.ixf of ixf
select current timestamp as collection_timestamp,t.*
from table( mon_get_database(-2) ) as t"
# increment iterator
(( i = i + 1 ))
# wait for the next line to be read from co-process
read -p
done >> "${_logfile}"

# cleanup: end the Db2 CLP processing


db2 terminate >> "${_logfile}"

This processing runs with a roughly constant iteration frequency. Of course, the example
given is a very simple realization of the principles sketched above; however, it works. A
more sophisticated solution based on this script is certainly possible.
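One possible refinement, sketched below for illustration (it is not part of the scripts in this document), avoids the co-process entirely by subtracting the time the activity took from the configured delay each iteration; all variable names are made up for this sketch.

```shell
#!/bin/bash
# Sketch (illustrative, not from this document): keep the iteration grid
# roughly constant by subtracting the time spent in the activity from
# the configured delay, using the shell's built-in SECONDS counter.
_num_iterations=3
_delay=1

(( i = 0 ))
while [[ i -lt ${_num_iterations} ]]; do
    _start=${SECONDS}
    echo "iteration ${i}"        # placeholder for the data collection
    (( i = i + 1 ))
    (( _elapsed = SECONDS - _start ))
    (( _wait = _delay - _elapsed ))
    if [[ ${_wait} -gt 0 ]]; then
        sleep "${_wait}"
    fi
done
```

The SECONDS variable is available in both ksh and bash, so the same idea carries over to the Korn shell scripts shown above.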

Using bash
You can derive a basic bash script in the same way as above. A working example is the
following script.

#!/bin/bash

_num_iterations=10
_delay=60
_database="DB0"
_logfile="script.log"

touch "${_logfile}"

# co-process started
coproc drummer {
while true; do
sleep "${_delay}"
echo "next"
done
}

# connect to the database to be monitored


db2 connect to "${_database}" >> "${_logfile}"

# run the loop collecting data


(( i = 0 ))
while [[ i -lt ${_num_iterations} ]]; do
# write the current timestamp
date
# run the command to collect / export the data
db2 "export to mon_get_db${i}.ixf of ixf
select current timestamp as collection_timestamp, t.*
from table( mon_get_database(-2) ) as t"
# increment iterator
(( i = i + 1 ))
# wait for the next line to be read from co-process
read -u "${drummer[0]}"
done >> "${_logfile}"

# cleanup: end the Db2 CLP processing


db2 terminate >> "${_logfile}"

Most parts of the script remain unchanged. However, the co-process syntax is different:
the Korn shell’s print command must be replaced (e.g. by echo), and the read command
takes different arguments.
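The bash co-process mechanics can also be tried in isolation; the following minimal sketch uses the same pattern as the script above, with an arbitrary co-process name (ticker) and a short interval chosen just for demonstration.

```shell
#!/bin/bash
# Minimal sketch of the bash co-process pattern: the co-process emits
# one line per interval; the main loop blocks on read -u until it arrives.
coproc ticker {
    while true; do
        sleep 0.1
        echo "next"
    done
}

_count=0
while [[ _count -lt 3 ]]; do
    # blocks until the co-process writes the next line
    read -u "${ticker[0]}" _line
    (( _count = _count + 1 ))
done

# terminate the co-process when done
kill "${ticker_PID}"
```

Note that bash exposes the co-process file descriptors in the array ${ticker[0]} / ${ticker[1]} and its PID in ${ticker_PID}, whereas ksh uses the implicit |& co-process with read -p.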

1.4 Which data sets should be collected?


Which data is required to track performance problems depends on various factors.

• In case of a single performance problem scenario, the tracking methods heavily depend
on the nature of this scenario. In many cases, a general monitoring approach as discussed
in this document is not required.

• In case of a general performance issue, or if the conditions are unknown, the approach is
to work from a global scope down to more specific areas of investigation.

For a general monitoring approach, a good starting point is the database level, i.e. the table
function mon_get_database(). Depending on the results and to investigate at a more granular
level, you might choose e.g. mon_get_connection() to identify a problem that occurs in
specific connections, or mon_get_transaction_log() if there are problems in the logging
area.

It is advisable to collect all data at the same time from all monitoring table functions being
considered.

2 Prepare for investigation


2.1 Import the data
Once the data is collected, the next step is to bring it into a form suitable for investigation. To
ease data processing and handling, it helps to have some rules for the target environment
where the data is to be imported.

 The Db2 LUW© level must be compatible with that of the environment where the data
was collected. This means the target environment should have the same Db2 LUW©
version. For levels as of Db2 LUW© 11.1, it must also have the same Modification Pack
(Mod Pack) level.

 Tables created to import and investigate data should reside in a separate tablespace so that
there is no interference with other data in the database.

To create the table, use the following SQL statement structure:

create table TABLENAME as (
select
current timestamp as collection_timestamp,
t.*
from table( TABLE_FUNCTION ( [parameter list] )) as t
) with no data
in TABLESPACE

Then, import the data with the following command:

import from FILENAME.ixf of ixf
lobs from ./ commitcount 1000
insert into TABLENAME

This is the general form required for tables with LOB data and for export files that consume a large
amount of space, e.g. when dealing with table function mon_get_pkg_cache_stmt().

Example
If you take the example of table function mon_get_database(), you will have the export files
mon_get_db0.ixf, mon_get_db1.ixf, and so on from the script developed in section 1.3.
On the target side, you need to create a tablespace and a table to import the data into.

The amount of data is one row per database member in each data collection. In addition, the
table function returns no LOB data. Thus, the import command can be used in its simplest form.

A Db2 CLP command file as shown below can do the job.

create tablespace mon_get_tf;


create table tf_database as (
select
current timestamp as collection_timestamp,
t.*
from table(mon_get_database(-2 )) as t
) with no data
in mon_get_tf ;
import from mon_get_db0.ixf of ixf insert into tf_database ;
import from mon_get_db1.ixf of ixf insert into tf_database ;
...

Let’s assume the command file has the name tf_database_import.clp. Then you can run
this command file as follows.

db2 -tvf tf_database_import.clp -z tf_database_import.log

Screen output will be dumped to the logfile specified with option -z.

2.2 The first query


All the above effort is done to finally have data that can be investigated. As mentioned before,
simply querying absolute values would provide little benefit as the large numbers would conceal
the real pain points that we hope to find.

The SQL pattern


The task is to compare data of consecutive data collections. This means we need to find a method
to combine each data row with its successor or predecessor. We can do this by using our previous
example of output from mon_get_database() as follows.

select
<field list>
from
tf_database a
inner join
tf_database b
on
a.member = b.member and
b.collection_timestamp in (
select max(x.collection_timestamp) from tf_database x
where a.collection_timestamp > x.collection_timestamp
)
order by a.member,a.collection_timestamp

While this is a pattern that is specific to our example, we need a more general SQL pattern that
applies to other table functions as well. Let’s assume we have placed all data from iterations on
some table function in the table called data_table. Then, our SQL pattern will look as shown
below.

select
<field list>
from
data_table a
inner join
data_table b
on
a.join_field1 = b.join_field2 and
...
b.collection_timestamp in (
select max(x.collection_timestamp) from data_table x
where a.collection_timestamp > x.collection_timestamp
)
order by a.member, a.join_field2, … , a.collection_timestamp

The names and numbers of join fields depend on the table function that was queried. The
following rules apply:

• For the join, use either all arguments or a subset of the argument list of the table function.

• All the join fields from the table function, plus the field collection_timestamp,
uniquely identify a row in the table of collected data ( data_table in our example)

Additional information:
• The ORDER BY clause in the SQL pattern must group together the data rows
that are to be compared. The field COLLECTION_TIMESTAMP is the last ORDER
BY field, because rows with identical values in the join fields of the table
function need to be compared with each other.

Specifying the field list
The item still open in our query is the field list that defines the actual data to be shown. Basically,
we can identify the following different data classes with regard to data collection:

• Identifiers and status information


This kind of data identifies an object. Typical examples are members, application handles,
or all kinds of IDs. This data needs to be displayed “as is”, i.e. it makes no sense to
display diff values for it.
The same is true for fields that describe a status, like the current lock list usage or the
number of locks currently held. This data is handled the same way as identifiers.

• Counters
Here we have metric data that is counted from the creation of an object or from the start
of the database. Counters typically have integer data types. To properly interpret this data,
we need to look at the diff values of the data in consecutive iterations.

• Non-integer data
Finally, there are non-numerical data types and time stamps. This data is also displayed
“as is”.

The table functions return almost all numeric data as integer data types. Thus, there is no
fractional numeric data to take care of.

So in the query pattern above, there is no generalized form of the field list. In the following
example, we use an SQL query that lists the fields explicitly.

select
a.collection_timestamp,
a.member,
a.num_locks_held,
a.total_cpu_time-b.total_cpu_time as "*TOTAL_CPU_TIME" ,
a.total_wait_time-b.total_wait_time as "*TOTAL_WAIT_TIME" ,
a.total_rqst_time-b.total_rqst_time as "*TOTAL_RQST_TIME"
from
tf_database a
inner join
tf_database b
on
a.member = b.member and
b.collection_timestamp in (
select max(x.collection_timestamp) from tf_database x
where a.collection_timestamp > x.collection_timestamp
)
order by a.member, a.collection_timestamp

You might notice correlation names with a leading asterisk. This is a convenient way to
distinguish fields showing absolute values, like identifiers, from fields showing diff values.

2.3 Shortcomings
What is missing in the approach developed so far? Basically, at least two items:

• Compiling and writing down the selection field list in the SQL can quickly become a
tedious task.

• If there are several potential areas to look into, the fields have to be identified and
classified first (e.g. whether diff values or absolute values are to be shown).

These facts immediately show that the current approach cannot be the end of the story. To have a
useful procedure, we need to continue.

3 A fully generic approach


The SQL pattern developed above has a clear structure that can be separated into the
following parts:

• Name of the table holding the data

• Fields used in the selection list

• Fields denoting identifiers and status according to the classification above

• Fields used in the join conditions and the ORDER BY clause

• A fixed part

So let’s assume the data has been collected and imported into a target environment. Since
manual creation of queries is not a handy procedure, the most natural alternative
is to generate the query from the information that is available. The nature of the desired query
suggests generating it via SQL.

This means we need an SQL statement that works as a generator of an SQL query.

3.1 Developing the parts of the generator query


Let’s put the information into SQL form now.

Name of the table holding the data


This is the first part of the list above. The information can simply be written down as follows.
Replace the part in blue by the actual table name of the data table.

select * from table (
(
values ( cast('TABLENAME' as varchar(60)) )
)
)

If you run this SQL query, the name of the data table is returned. The disadvantage of having the
table name in the generator statement is that each time a new data table is used, you need to
modify the generator query. To avoid this, use an SQL variable.

You can create the variable outside the script. As value, assign the proper table name. From OS
level, run the following commands:

db2 "create or replace variable mondb2_tabname varchar(60)"


db2 "set mondb2_tabname = 'TABLENAME'"

For SQL commands to handle global SQL variables, refer to appendix A.

With these modifications, the SQL statement can be rewritten as follows:

select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)

To make use of this construct, you can put the result of the SQL statement into a table within a
compound SQL statement.

with
data_table(tabname) as (
select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)
)

select tabname from data_table

The result of this SQL statement is the same as above. Compound statements will turn out to
be an essential part of the solution.

This SQL statement will run fine if you have created the SQL variable
mondb2_tabname as described above. Create an SQL command file as described in
section Prerequisites and conventions at the beginning of this document, and run it
against a Db2 database to retrieve the current content of the SQL variable.

List of identifiers and status information fields


For the selection field list, you have to distinguish between fields that are to be displayed
with diff values and those that are to be displayed as absolute values. In the end, you need to go
over the table functions being queried and identify the columns that should be displayed as
absolute values.

Field (or column) names have the same meaning across all table functions, although the
scope may differ. This means that the field NUM_LOCKS_HELD is always the number of
locks being held currently. However, in table function mon_get_database(), this
number is evaluated for the whole database, while in table function
mon_get_unit_of_work() it refers to the number of locks being held by the UOW
(database transaction) that the collected row refers to.
This means that the list of fields with absolute values can be used for all table functions.

For our query, the column names again go into a table, just as before.

with
abs_cols(field) as (
select * from table (
(
-- partition + member information
values ( cast('MEMBER' as varchar(60)) ),
'DBPARTITIONNUM', 'COORD_MEMBER',
'COORD_PARTITION_NUM', 'CATALOG_PARTITION',
-- handles + IDs
'APPLICATION_HANDLE', 'CLIENT_PID',
'WORKLOAD_OCCURRENCE_ID', 'UOW_ID',
'ACTIVITY_ID', 'PARENT_UOW_ID',
'PARENT_ACTIVITY_ID', 'AGENT_TID',
-- miscellaneous
'IS_SYSTEM_APPL', 'LOCK_LIST_IN_USE',
'NUM_LOCKS_HELD', 'NUM_LOCKS_WAITING',
'ACTIVE_SORTS', 'NUM_AGENTS'
)
)
)

select field from abs_cols

The list above is incomplete in the sense that across the existing monitoring table functions there
are quite a lot of fields that should be displayed as absolute values. However, the list can be easily
extended to include additional fields. For a more comprehensive list, refer to appendix C.

The SQL statement just returns the field names put into table ABS_COLS above.

Selection field list


The selection field list is a bit more complex. It needs to reflect the difference between diff value
fields and absolute value fields. The distinction between the two requires information from the
parts defined previously, as well as information from the database catalog. To help sorting the
selection fields, the table also gets the column number from the database catalog.

The SQL statements in this document assume that you always work in a single database
schema. In particular, it is required that

• the data table exists, and is created in the current schema of your database
connection.

• the SQL variable mondb2_tabname exists, and is created in the current schema
of your database connection.

If you reproduce the SQL statements in a local environment, then your database
connection, the data table, and the SQL variable must have the same schema value. For
information on how to adjust schema settings, see Appendix A.

with
data_table(tabname) as (
select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)
) ,
abs_cols(field) as (
select * from table (
(
-- partition + member information
values ( cast('MEMBER' as varchar(60)) ),
'DBPARTITIONNUM', 'COORD_MEMBER',
'COORD_PARTITION_NUM', 'CATALOG_PARTITION',
-- handles + IDs
'APPLICATION_HANDLE', 'CLIENT_PID',
'WORKLOAD_OCCURRENCE_ID', 'UOW_ID',
'ACTIVITY_ID', 'PARENT_UOW_ID',
'PARENT_ACTIVITY_ID', 'AGENT_TID',
-- miscellaneous
'IS_SYSTEM_APPL', 'LOCK_LIST_IN_USE',
'NUM_LOCKS_HELD', 'NUM_LOCKS_WAITING',
'ACTIVE_SORTS', 'NUM_AGENTS'
)
)
) ,
selection_fields(fnum,field) as (
select
colno as fnum,
case
-- no diff values for column names found in ABS_COLS
when colname in ( select abs.field from abs_cols abs )
then substr((' ,a.' || colname),1,128) ||
' as "' || colname || '" '
-- other integer type fields shown as diff
when typename like '%INT%'
then substr((' ,a.' || colname || '-b.' ||
colname),1,128) || ' as "*' || colname || '" '
-- remove line feeds from CLOB fields
when typename = 'CLOB'
then substr((' ,replace(a.' || colname ||
',x''0A'','''')'),1,128) || ' as "' || colname || '" '
-- other SQL types (DATE, VARCHAR) are displayed as found
else substr((' , a.' || colname),1,128) ||
' as "' || colname || '" '
end as field
from syscat.columns
where tabname in ( select upper(d.tabname) from data_table d )
and tabschema = current schema
order by colno
)

select fnum,field from selection_fields order by fnum asc

With this part, you have the first item of the SQL query to be generated. Via the global SQL
variable mondb2_tabname, you maintain the table name of the data table holding the collected
monitoring data, and you get the selection field list, showing each field either as an absolute
value or as a diff value. In the latter case, the generated field name carries a leading asterisk to
indicate that diff values are being displayed.

Note that the generated field list starts with a comma. The selection fields are separated by
commas, so when the final SQL command is put together, a first selection field still needs to be
specified.

Join conditions and ORDER BY clause


When creating the first simple query above, it was already stated that the join columns are the
same as the columns in the ORDER BY clause. The current goal is to find all the join columns. To
do so, find all potential join columns and put them into a table similar to the list of fields to be
displayed as absolute values.

Note that potential join conditions are those fields that together identify a single row within the
collected data. Therefore, apart from the field COLLECTION_TIMESTAMP, this list will consist of
the parameter list (or a subset of it) of the table function used in the data collection. Conversely,
if an identifier field like APPLICATION_HANDLE exists in the data table, it needs to be used as
a join condition.

The entries are enumerated according to the database catalog. Thus, the sorting of the data table
can be adjusted to follow the order of occurrence of fields.

Again, the assumption is that the data table has the same table schema as the schema of the
current connection.

with
data_table(tabname) as (
select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)
) ,
join_candidates(field) as (
select * from table (
(
values ( cast('MEMBER' as varchar(60)) ),
'DBPARTITIONNUM',
'APPLICATION_HANDLE',
'UOW_ID',
'ACTIVITY_ID',
'AGENT_TID',
'EXECUTABLE_ID',
'LATCH_NAME'
)
)
),
join_cols(fnum,field) as (
select
c.colno as fnum ,
c.colname as field
from syscat.columns c
where c.tabname in ( select upper(tabname) from data_table )
and c.tabschema = current schema
and c.colname in ( select j.field from join_candidates j )
)

select fnum,field from join_cols order by fnum asc

From the list returned, the join conditions and the ORDER BY list can be built.

The fixed part


Up until here, the document has been dealing with parts of the SQL generator that perform
isolated tasks of the final SQL command. As a final part, we need a frame to put these parts in.

As mentioned before, the field list of the SQL statement requires a first field. This is added as the
time difference of the data rows being compared, measured in number of seconds between
iterations. Apart from its use for a correct SQL syntax, the field just added is useful for verifying
the length of each iteration.

Similarly, the first entry in the join conditions has to be treated differently than the rest.

A look at the general form of the SQL statement to be generated helps to get a picture of what to
do.

select
timestampdiff(2,char(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp)))
as time_interval
<field list>
from
TABLENAME a
inner join
TABLENAME b
on
< join conditions >
b.collection_timestamp in (
select max(x.collection_timestamp) from TABLENAME x
where a.collection_timestamp > x.collection_timestamp
)
order by <ORDER BY list>
a.collection_timestamp

As a special requirement, it turns out that the static part consists of several pieces spread across
the SQL text. To realise the correct order in the final SQL statement, we use an additional
numbering scheme.
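
The effect of the numbering scheme can be tried out on plain text. In the following sketch, the (partno, fnum) pairs and text fragments are made up to mirror the static part described above; a two-level numeric sort reassembles the pieces in correct SQL order:

```shell
# Each generated line carries (partno, fnum); sorting on both keys
# interleaves the static text (partno 0, 20, 80, 100) with the generated
# field list (10), join conditions (30), and ORDER BY columns (90).
printf '%s\n' \
  '20 0 from' \
  '0 0 select' \
  '30 0 a.MEMBER = b.MEMBER and' \
  '10 0 ,a.MEMBER as "MEMBER"' \
  '80 5 order by' \
  '100 0 a.collection_timestamp' |
sort -n -k1,1 -k2,2 | cut -d' ' -f3-
```

The output starts with "select", followed by the field list, "from", the join condition, and the ORDER BY clause — the same order the generator produces.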

If you want to display the time difference between data rows in a different format, for
example as <seconds>.<microseconds>, replace the first field in the SQL
statement above with the following expression:
timestampdiff(2,char(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp))) +
( (microsecond(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp)) + 1000000)
% 1000000) * 0.000001
as time_interval
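
The modulo arithmetic in this expression can be checked in isolation. The following small awk sketch (the microsecond values are made up) shows how adding 1000000 before taking the modulo maps a possibly negative microsecond part of a timestamp duration into the range [0, 999999]:

```shell
# MICROSECOND() of a timestamp duration can be negative when the earlier
# timestamp has the larger fractional part; the expression normalizes it.
awk 'BEGIN {
  split("-250000 0 250000", us, " ")
  for (i = 1; i <= 3; i++)
    printf "%d -> %.6f\n", us[i], ((us[i] + 1000000) % 1000000) * 0.000001
}'
```

For -250000 microseconds the result is 0.750000, i.e. the fractional part is folded back into a positive value before being added to the whole seconds.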

If you maintain the table name in the SQL command for the static part, the result resembles the
first query from section 2.2.

with
data_table(tabname) as (
select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)
) ,
static_part(partno,fnum ,line) as (
select * from table (
(
values
(
(cast(0 as integer)),
(cast(0 as smallint)),
(cast('select' as varchar(128)))
),
(0 ,1,
' timestampdiff(2,char(timestamp(a.collection_timestamp)-'),
(0 ,2,
' timestamp(b.collection_timestamp))) as time_interval'),
(20,0,'from'),
(20,1,' ' || (select tabname from data_table) || ' a'),
(20,2,'inner join'),
(20,3 ,' ' || (select tabname from data_table) || ' b'),
(20,4,'on '),
(80,0,' b.collection_timestamp in ('),
(80,1,' select max(x.collection_timestamp) from'),
(80,2,' ' || (select tabname from data_table) || ' x'),
(80,3 ,
' where a.collection_timestamp > x.collection_timestamp'),
(80,4 ,' )'),
(80,5 ,'order by'),
(100,0,' a.collection_timestamp'),
(100,5,';')
)
)
)

select line from static_part order by partno asc,fnum asc

3.2 Putting it all together


Now that all parts required for the SQL query generator exist, you can put them together and
you’re done. Concatenate all temporary tables developed in section 3.1. You can see below the
final statement, separated into the different pieces.

Basic information tables


First, define the tables used as basis information for the generated parts.

with
data_table(tabname) as (
select * from table (
(
values ( cast(mondb2_tabname as varchar(60)) )
)
)
),
abs_cols ( field ) as (
select * from table (
(
-- partition + member information
values ( cast('MEMBER' as varchar(60)) ),
'DBPARTITIONNUM', 'COORD_MEMBER',
'COORD_PARTITION_NUM', 'CATALOG_PARTITION',
-- handles + IDs
'APPLICATION_HANDLE', 'CLIENT_PID',
'WORKLOAD_OCCURRENCE_ID', 'UOW_ID',
'ACTIVITY_ID', 'PARENT_UOW_ID',
'PARENT_ACTIVITY_ID', 'AGENT_TID',
-- miscellaneous
'IS_SYSTEM_APPL', 'LOCK_LIST_IN_USE',
'NUM_LOCKS_HELD', 'NUM_LOCKS_WAITING',
'ACTIVE_SORTS', 'NUM_AGENTS'
)
)
) ,
join_candidates ( field ) as (
select * from table (
(
values ( cast('MEMBER' as varchar(60)) ),
'DBPARTITIONNUM',
'APPLICATION_HANDLE',
'UOW_ID',
'ACTIVITY_ID',
'AGENT_TID',
'EXECUTABLE_ID',
'LATCH_NAME'
)
)
) ,

Text generating parts


Next, define the parts providing the query text. First, we have the selection fields, and the
columns we need for the join conditions.

selection_fields(fnum,field) as (
select
colno as fnum ,
case
-- no diff values for column names found in ABS_COLS
when colname in (
select abs . field from abs_cols abs
)
then
substr((' ,a.' || colname),1,128) ||
' as "' || colname || '" '
-- other integer type fields shown as diff
when typename like '%INT%'
then
substr((' ,a.' || colname || '-b.' || colname),1,128)
|| ' as "*' || colname || '" '
-- remove line feeds from CLOB fields
when typename = 'CLOB'
then
substr((' ,replace(a.' || colname || ',x''0A'','''')'),1,128)
|| ' as "' || colname || '" '
-- other SQL types (DATE, VARCHAR) are displayed as found
else
substr((' ,a.' || colname),1,128) ||
' as "' || colname || '" '
end as field
from syscat.columns
where tabname in ( select upper(d.tabname) from data_table d )
and tabschema = current schema
order by colno
) ,
join_cols(fnum,field) as (
select
c.colno as fnum ,
c.colname as field
from syscat.columns c
where c.tabname in ( select upper(tabname) from data_table )
and c.tabschema = current schema
and c.colname in ( select j.field from join_candidates j )
) ,

We then need the static part that takes care of fixed language elements in the SQL query to be
generated.

static_part(partno,fnum,line) as (
select * from table (
(
values
(
(cast(0 as integer)),
(cast(0 as smallint)),
(cast('select' as varchar(128)))
),
(0,1,
' timestampdiff(2,char(timestamp(a.collection_timestamp)-'),
(0,2,
' timestamp(b.collection_timestamp))) as time_interval'),
(20,0,'from'),
(20,1,' ' || (select tabname from data_table) || ' a'),
(20,2,'inner join'),
(20,3,' ' || (select tabname from data_table) || ' b'),
(20,4,'on '),
(80,0,' b.collection_timestamp in ('),
(80,1,' select max(x.collection_timestamp) from'),
(80,2,' ' || (select tabname from data_table) || ' x'),
(80,3,
' where a.collection_timestamp > x.collection_timestamp'),
(80,4,' )'),
(80,5,'order by'),
(100,0,' a.collection_timestamp'),
(100,5,';')
)
)
)

Generating the final query


What is left now is to properly select the various temporary tables that have just been put
together.

Note that the sub-selects for the selection fields, join conditions, and the ORDER BY clause
contain a literal number as first field to properly fit into the static part that was carefully designed
for this task.

select line from (
select * from (
select 10,fnum ,field from selection_fields
union all
-- join conditions for the main SQL
select 30,fnum ,' a.' || field || ' = b.' || field || ' and'
from join_cols
union all
-- list of columns in the ORDER BY clause
select 90,fnum ,' a.' || field || ',' from join_cols
union all
-- put in the static parts
select partno,fnum,line from static_part
) as t(n1,n2,line)
) order by n1 asc,n2 asc

After you have put together all the pieces, end the SQL statement with a semicolon and save the
result in a file. Let’s say the file name is SQLMonGenerator.sql. Then you can run the SQL
query via Db2 CLP.

Running the final query


To run the generated SQL query, use the following commands from OS level:

db2 "create variable mondb2_tabname varchar(60)"
db2 "set mondb2_tabname = 'TABLENAME'"
db2 -txf SQLMonGenerator.sql

The first command is only required if the SQL variable mondb2_tabname does not yet exist. For
more information, refer to appendix A. To work with the output of the generated SQL query, save
the retrieved information in a file. Let’s use the file name SQL_Output.

db2 -txf SQLMonGenerator.sql > SQL_Output

The command line option -x instructs the Db2 CLP command to suppress header and trailing
information. In addition, the SQL query being generated is terminated by a semicolon, so the
output can be run immediately. This looks similar to what was used in section 2.2. However,
instead of just a few fields, the query is using all fields available in the table.

select
timestampdiff(2,char(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp)))
as time_interval
<field list>
from
TABLENAME a
inner join
TABLENAME b
on
a.join_field1 = b.join_field1 and
...
b.collection_timestamp in (
select max(x.collection_timestamp) from TABLENAME x
where a.collection_timestamp > x.collection_timestamp
)
order by a.join_field1, … , a.collection_timestamp
;

3.3 Using the generated SQL query


When using the generated query, the following considerations may be helpful.

Performance
For a small number of rows in the data table, the performance of the generated SQL query will be
fine. However, depending on the table function being used and the number of iterations run to
generate the data, the join condition on field COLLECTION_TIMESTAMP potentially causes a
performance issue. Therefore, from a performance perspective, it is helpful to create an index on
this field.

create index TABLENAME_0
on TABLENAME (collection_timestamp asc)

Displaying the data


With the amount of data collected it quickly becomes difficult to work with the information
without losing the context. It is best to use a spreadsheet calculator to display the result set. Most
office products provide suitable functionality to import the data generated by the SQL query.

Manual modifications
You might want to add further information to the data displayed, e.g. the ratio between certain
fields. The following is an example of the ratio of rows returned compared to rows read that you
may want to calculate.

case
when (a.rows_read-b.rows_read) <> 0
then (a.rows_returned-b.rows_returned) * 100 /
(a.rows_read-b.rows_read)
else -1
end as "*RATIO_ROWS_READ"

Note that this will make the data generation more complex because you need to take extra care,
for example, to avoid division by zero.

It is easier to use the data as generated by the SQL query, and to add derived information later,
during data representation, e.g. by using a pivot table in the spreadsheet.
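
If you prefer to add such a derived column during post-processing instead of in the SQL, the same zero guard is still needed there. The following sketch (the @-separated input rows and the column positions are made-up assumptions) applies it in awk:

```shell
# Made-up diff rows in the form *ROWS_RETURNED@*ROWS_READ@
printf '%s\n' '50@1000@' '0@0@' '300@300@' > diffs.txt

# Percentage ratio per row, guarding against a zero divisor the same
# way the CASE expression does (-1 marks an undefined ratio).
awk -F@ '{ print ($2 != 0 ? $1 * 100 / $2 : -1) }' diffs.txt
```

The second row has zero rows read, so the guard returns -1 instead of failing with a division by zero.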

There are still situations where manual modifications of the generated query can be helpful. To do
so, see the example discussed below where the amount of data returned is limited.

3.4 Data representation


To easily identify pain points and prepare the data to visualize potential problem areas, use a
spreadsheet calculator.

Import the data into a spreadsheet


Note that the SQL output is available as fields separated by spaces. Unfortunately, there are fields
that potentially contain spaces. The most predominant example is the SQL statement text as
contained in field stmt_text of table function mon_get_pkg_cache_stmt(). To get the SQL
output into a form that can be properly consumed by the spreadsheet calculator, you may use an
awk script. To understand this method, note that the SQL output is returned in the following way.

FIELD1 FIELD2 FIELD3 FIELD4 ...
--------- ----------------- ------- --------- ...
VALUE11 VALUE12 VALUE13 VALUE14
VALUE21 VALUE22 ...
...

The dashed line can be used to identify the length and offset of the field content. A simple form of
the script is shown here. The field separator character of the output is @ by default.

BEGIN{
# default output separator character
sep="@"
# internal variables
nCols=0;nRows=0
linetype="none"
}
{
# header line (dashes indicate field lengths)
if(index($0,"----") == 1){
if(linetype == "none"){
l = 0;
# get the field offsets + lengths
for(i=1;i<=NF;i++){
s[i] = l+1 # offset of the i-th field
e[i] = length($i) # length of the i-th field
l = l+1+length($i) # increment offset
}
nCols = NF; # the number of columns
# now, we can get the column names from the saved line
for(i=1;i<=NF;i++){
header=substr(hline,s[i],e[i]) # get i-th column name
sub(/^ */,"",header) # truncate leading blanks
sub(/ *$/,"",header) # truncate trailing blanks
printf "%s%s",header,sep
}
printf "\n"
linetype = "data"
next
}
}

# if header is not yet defined, save the current line
if( linetype == "none" ){
hline = $0
}

# process the data
if( linetype == "data" ){
if( NF > 0 ){
j++;
nRows++;
for(i=1;i<=nCols;i++){
field=substr($0,s[i],e[i]) # get i-th field value
sub(/^ */,"",field) # truncate leading blanks
sub(/ *$/,"",field) # truncate trailing blanks
printf "%s%s",field,sep
}
} else {
linetype = "finished"
}
printf "\n"
}
}

To run the script, use the following command line:

awk -f script.awk SQL_Output > SQL_Output.txt

If you need to use a different field separator like the colon, simply run the script with the
modified command line shown below.

awk -f script.awk sep=":" SQL_Output > SQL_Output.txt

You can then read the output file using a spreadsheet calculator. When importing, specify the
proper field separator character and no text delimiter.
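
You can try the conversion idea on a small made-up sample without a database. The condensed awk program below applies the same fixed-width slicing as the full script above; it is a sketch, not a replacement for that script:

```shell
# Made-up CLP-style output: header line, dashed line, one data row.
cat > sample.txt <<'EOF'
APPL_ID   STMT_TEXT
--------- -----------------
42        select * from t1
EOF

# Use the dashed line to find field offsets/lengths, then slice, trim,
# and join every other line with '@'.
awk '
  NR == 1 { hline = $0; next }          # save the header line
  NR == 2 {                             # dashed line: compute offsets
    for (i = 1; i <= NF; i++) { s[i] = off + 1; e[i] = length($i); off += length($i) + 1 }
    ncols = NF
    $0 = hline                          # fall through to slice the header
  }
  {
    row = ""
    for (i = 1; i <= ncols; i++) {
      f = substr($0, s[i], e[i])
      gsub(/^ +| +$/, "", f)            # trim surrounding blanks
      row = row f "@"
    }
    print row
  }' sample.txt
```

Note that the embedded spaces in the statement text survive the conversion because the slicing works on character offsets, not on whitespace-separated fields.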

Limiting the amount of data


When collecting data over a certain time period, let’s say 2 hours with a time interval of 1 minute
between iterations, the amount of data from most table functions is limited and can be handled
easily. As an example, table function mon_get_database() collects one row per iteration and
database member.

There are other table functions that potentially provide much more data. For example, take
mon_get_connection() which returns one row per connection and database member. If you
monitor a database with one member and e.g. 2000 connections, the data collection outlined
above will give you 120 data sets of 2000 entries each, providing a total of 240,000 entries.
That’s why working with output from this table function should start with information that lets
you limit the investigation of this data to a subset of database connections and focus on a certain
area.

Let’s assume you have collected data for mon_get_database() and mon_get_connection()
in parallel. You see from mon_get_database() that there was a particularly high amount of
rows read during the monitoring time period, indicated by a.rows_read-b.rows_read (field
"*ROWS_READ" in the SQL output). You then want to find the application handles that were
reading the highest amount of data. To do so, you take output from table function
mon_get_connection() that was collected at the same time as mon_get_database() to
import the data collection into a table, e.g. tf_connection. You generate the SQL for that data
collection following the procedure as described. Now you modify the query to put it into another
select * with an ORDER BY clause on field "*ROWS_READ". It may be useful to limit the
amount of data returned using a FETCH FIRST clause. In the example used here, the changes
consist of the added outer SELECT, the final ORDER BY clause, and the FETCH FIRST clause.

select * from (
select
timestampdiff(2,char(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp)))
as time_interval
<field list>
from
tf_connection a
inner join
tf_connection b
on
a.APPLICATION_HANDLE = b.APPLICATION_HANDLE and
a.MEMBER = b.MEMBER and
b.collection_timestamp in (
select max(x.collection_timestamp) from tf_connection x
where a.collection_timestamp > x.collection_timestamp
)
order by a.APPLICATION_HANDLE,
a.MEMBER,
a.collection_timestamp
)
order by "*ROWS_READ" desc
fetch first 100 rows only
;

This query returns the data collections and application handles with the highest amount of rows
read. You can then take the original query and restrict it to just the most significant application
handles that give you an amount of data you can handle more easily.
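
A similar top-N selection can also be done on the converted @-separated file with standard shell tools. The following sketch uses made-up input rows; the column position of "*ROWS_READ" in your own output must be checked against the header line:

```shell
# Made-up converted rows: APPLICATION_HANDLE@*ROWS_READ@
printf '%s\n' '101@500@' '102@90000@' '103@1200@' > conv.txt

# Numeric sort on the *ROWS_READ column, highest first, keep the top 2.
sort -t@ -k2,2nr conv.txt | head -n 2
```

This returns the rows for application handles 102 and 103, the two busiest readers in the sample.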

select * from (
select
timestampdiff(2,char(timestamp(a.collection_timestamp)-
timestamp(b.collection_timestamp)))
as time_interval
<field list>
from
tf_connection a
inner join
tf_connection b
on
a.APPLICATION_HANDLE = b.APPLICATION_HANDLE and
a.MEMBER = b.MEMBER and
b.collection_timestamp in (
select max(x.collection_timestamp) from tf_connection x
where a.collection_timestamp > x.collection_timestamp
)
order by a.APPLICATION_HANDLE,
a.MEMBER,
a.collection_timestamp
)
where APPLICATION_HANDLE in (<list of application handles>)
;

With the approach outlined, you can always put the generated query into a select statement, so
modifications consist of adding lines to the beginning and end of the generated query.

Appendix
A) Some SQL concepts
In this section, you can find help on basic SQL concepts used in this document.

Handling SQL variables


To properly work with SQL variables in the context of topics discussed here, the following list of
commands is sufficient.

• Create an SQL variable

db2 "create variable var_name varchar(60)"

• Assign a value to an SQL variable

db2 "set var_name = 'VALUE'"

• Show or verify the value of an SQL variable

db2 "values(var_name)"

• Drop an SQL variable

db2 "drop variable var_name"

Query schema information in a Db2 LUW© database


When you connect to a Db2 LUW© database, one of the connection properties is the database
schema. By default, this is the same as the database user that you used for the connection. If you
create a table without specifying a table schema, this table will be created in the current schema
of your database connection.

To query the current schema of your database connection, run the following command:

db2 "values(current schema)"

You can modify the current schema of your database connection to a new value, e.g.
new_schema, using the following command:

db2 "set current schema new_schema"

The schema of a table, e.g. tf_database, is queried from the database catalog. Use the
following SQL statement:

select tabschema from syscat.tables
where tabname = 'TF_DATABASE'

The schema of an existing table cannot be modified.

You can query the schema of an SQL variable in a similar way. Let’s use the SQL variable
mondb2_tabname. Use the following SQL statement:

select varschema from syscat.variables
where varname = 'MONDB2_TABNAME'

The schema of an existing SQL variable cannot be modified. However, with the above
information, it is easy to first set the proper database connection schema, create the SQL variable,
and assign the desired value to the (re-)created SQL variable.

B) A monitoring scenario
Let’s assume we have a Db2 LUW© environment with a performance problem that needs to be
investigated. The Db2 version is 10.5. The course of action is as listed below.

Generate the data


1. Parametrise the script in section 1.3. In this example, assume

_num_iterations=61
_delay=120

and the use of table function mon_get_database().

2. Run the script, which will generate 61 files tf_mon_get_db0.ixf …
tf_mon_get_db60.ixf.

3. Transfer the 61 files to the target environment for investigation, create the data table, and
import the data as shown in section 2.1.

4. Generate the SQL generator query script as shown in section 3.2.

5. Set SQL variable mondb2_tabname to the proper table name and run the generator SQL
query script.

6. Now, the query to be used for investigation is available. Run that query and save the SQL
output in a file.

7. Convert the SQL output file using awk as explained in section 3.4 and redirect the output
to a text file.

8. Open the generated text file using the spreadsheet calculator of your office product.

Import the data into a spreadsheet


In this example, the data show high CPU times, that is, the investigation has revealed a CPU
bottleneck.

Illustration 1: Diff values for TOTAL_CPU_TIME

Note the time interval, which in this example is always roughly 120 seconds. The column
showing the CPU time indicates by the leading asterisk in the header that the displayed numbers
are per iteration (i.e. per ~120 seconds).
From this data, you can easily generate a chart that clearly shows the peak in the CPU time
consumption. To further investigate, you can then narrow down the tracking to the time of high
CPU utilization and e.g. check for applications that could have been responsible for this increase.

Illustration 2: Chart showing diff values for TOTAL_CPU_TIME

Diff values vs. absolute values


If you compared absolute values, you would easily notice how difficult it is to track and
investigate in that way.

Illustration 3: Absolute values for TOTAL_CPU_TIME

The resulting chart from absolute values lacks the clarity of the previous one using diff values.

Illustration 4: Chart for absolute values of TOTAL_CPU_TIME

C) Extended list of fields to be shown with absolute values


As discussed in section 3, there are fields that need to be displayed as absolute values. A
comprehensive, though not exhaustive, list is shown below.

values ( cast('MEMBER' as varchar(60)) ),
'ACTIVE_HASH_JOINS', 'ACTIVE_OLAP_FUNCS',
'ACTIVE_SORTS', 'ACTIVITY_ID',
'AGENTS_TOP', 'AGENT_TID',
'APPLICATION_HANDLE', 'APPLID_HOLDING_OLDEST_XACT',
'APPLS_CUR_CONS', 'APPLS_IN_DB2',
'ASSOCIATED_AGENTS_TOP', 'CATALOG_PARTITION',
'CLIENT_PID', 'COORD_AGENTS_TOP',
'COORD_MEMBER', 'COORD_PARTITION_NUM',
'CURRENT_ACTIVE_LOG', 'CURRENT_ARCHIVE_LOG',
'CURRENT_LSN', 'CURRENT_LSO',
'DATABASE_WORK_ACTION_SET_ID','DATABASE_WORK_CLASS_ID',
'DBPARTITIONNUM', 'FIRST_ACTIVE_LOG',
'HADR_LOG_GAP', 'HADR_TIMEOUT',
'HEARTBEAT_INTERVAL', 'INDEX_TBSP_ID',
'INVOCATION_ID', 'IS_SYSTEM_APPL',
'LAST_ACTIVE_LOG', 'LOCK_LIST_IN_USE',
'LOG_CHAIN_ID', 'LOG_HADR_WAIT_CUR',
'LOG_HELD_BY_DIRTY_PAGES', 'LOG_TO_REDO_FOR_RECOVERY',
'LONG_TBSP_ID', 'METHOD1_FIRST_FAILURE',
'METHOD1_NEXT_LOG_TO_ARCHIVE','METHOD2_FIRST_FAILURE',
'METHOD2_NEXT_LOG_TO_ARCHIVE','NESTING_LEVEL',
'NUM_AGENTS', 'NUM_ASSOC_AGENTS',
'NUM_COORD_AGENTS', 'NUM_INDOUBT_TRANS',
'NUM_LOCKS_HELD', 'NUM_LOCKS_WAITING',
'NUM_LOGS_AVAIL_FOR_RENAME', 'NUM_ROUTINES',
'OLDEST_TX_LSN', 'PARENT_ACTIVITY_ID',
'PARENT_UOW_ID', 'PEER_WAIT_LIMIT',
'PEER_WINDOW', 'PRIMARY_LOG_PAGE',
'QUERY_ACTUAL_DEGREE', 'ROUTINE_ID',
'SEC_LOGS_ALLOCATED', 'SEC_LOG_USED_TOP',
'SERVICE_CLASS_ID', 'SERVICE_CLASS_WORK_ACTION_SET_ID',
'SERVICE_CLASS_WORK_CLASS_ID','SMP_COORDINATOR',
'SQL_REQS_SINCE_COMMIT', 'STANDBY_ID',
'STANDBY_LOG_PAGE', 'STANDBY_RECV_BUF_SIZE',
'STANDBY_RECV_REPLAY_GAP', 'STANDBY_REPLAY_LOG_PAGE',
'STANDBY_SPOOL_LIMIT', 'STMT_PKG_CACHE_ID',
'TAB_FILE_ID', 'TBSP_AUTO_RESIZE_ENABLED',
'TBSP_CUR_POOL_ID', 'TBSP_EXTENT_SIZE',
'TBSP_ID', 'TBSP_NEXT_POOL_ID',
'TBSP_PAGE_SIZE', 'TBSP_PREFETCH_SIZE',
'TBSP_USING_AUTO_STORAGE', 'TIME_SINCE_LAST_RECV',
'TOTAL_LOG_AVAILABLE', 'TOTAL_LOG_USED',
'TOT_LOG_USED_TOP', 'UOW_ID',
'WL_WORK_ACTION_SET_ID', 'WL_WORK_CLASS_ID',
'WORKLOAD_OCCURRENCE_ID'

References
• IBM Knowledge Center for Db2 LUW

◦ Monitoring Table Functions
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/c0053963.html

◦ Monitoring Elements
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.mon.doc/doc/c0059125.html
