IBM DB2 9.7
for Linux, UNIX, and Windows
Version 9 Release 7
Workload Manager Guide and Reference
SC27-2464-02
Note
Before using this information and the product it supports, read the general information under Appendix E, “Notices,” on
page 431.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright IBM Corporation 2007, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About this book   ix

Chapter 1. Introduction to DB2 workload manager concepts   1
  Stages of workload management   1
  Workload manager administrator authority (WLMADM)   3
  Frequently asked questions about DB2 workload manager   4

Chapter 2. Work identification   15
  Activities   15
  DDL statements for DB2 workload manager   18
  Work identification by origin with workloads   19
  Workload assignment   23
  Default workloads   25
  Creating a workload   30
  Altering a workload   31
  Permitting occurrences of a workload to access the database   33
  Preventing occurrences of a workload from accessing the database   33
  Enabling a workload   34
  Disabling a workload   34
  Granting the USAGE privilege on a workload   35
  Revoking the USAGE privilege on a workload   36
  Dropping a workload   36
  Example: Workload assignment   37
  Example: Workload assignment when workload attributes have single values   41
  Example: Workload assignment for a unit of work when multiple workloads exist   43
  Example: Workload assignment when workload attributes have multiple values   46
  Work identification by type of work with work classes   47
  Work classes and work class sets   50
  Evaluation order of work classes in a work class set   52
  Assignment of activities to work classes   53
  Work classifications supported by thresholds   53
  Creating a work class   54
  Altering a work class   57
  Dropping a work class   57
  Creating a work class set   58
  Altering a work class set   58
  Dropping a work class set   58
  Example: Analyzing workloads by activity type   59
  Example: Using a work class set to manage specific types of activities   60
  Example: Working with a work class defined with the ALL keyword   61

Chapter 3. Activities management   65
  Resource assignment with service classes   65
  Default service superclasses and subclasses   68
  Activity-to-service class mapping   70
  Agent priority of service classes   74
  Prefetch priority of service classes   75
  Buffer pool priority of service classes   75
  States of connections and activities in a service class   76
  System-level entities not tracked by service classes   78
  Creating a service class   78
  Altering a service class   80
  Dropping a service class   83
  Example: Using service classes   84
  Example: Analyzing a service class-related system slowdown   88
  Example: Investigating agent usage by service class   90
  Control of work with thresholds   91
  Threshold domain and enforcement scope   94
  Threshold evaluation order   95
  Connection thresholds   97
  Activity thresholds   98
  Aggregate thresholds   105
  Unit of work thresholds   112
  Creating a threshold   113
  Altering a threshold   114
  Dropping a threshold   114
  Example: Using thresholds   115
  Priority aging of ongoing work   117
  Sample priority aging scripts   121
  Remapping activities between service subclasses   127
  Apply controls to types of activities with work action sets   129
  How work classes, work class sets, work actions, and work action sets work together and are associated with other DB2 objects   130
  Work actions and work action sets   132
  Work actions and the work action set domain   134
  Thresholds that can be used in work actions   138
  Application of work actions to database activities   138
  Concurrency control at the workload level using work action sets   140
  Workload and work action set comparison   142
  Creating a work action set   144
  Altering a work action set   145
  Disabling a work action set   146
  Dropping a work action set   147
  Creating a work action   147
  Altering a work action   150
  Disabling a work action   152
  Dropping a work action   152
  Example: Using a work action set and database threshold   152
  Example: Using work action sets to determine the types of work being run   154
  coord_act_rejected_total - Coordinator activities rejected total monitor element   362
  coord_partition_num - Coordinator partition number monitor element   362
  cost_estimate_top - Cost estimate top monitor element   363
  db_work_action_set_id - Database work action set ID monitor element   363
  db_work_class_id - Database work class ID monitor element   364
  destination_service_class_id - Destination service class ID monitor element   364
  histogram_type - Histogram type monitor element   364
  last_wlm_reset - Time of last reset monitor element   365
  num_remaps - Number of remaps monitor element   366
  num_threshold_violations - Number of threshold violations monitor element   366
  number_in_bin - Number in bin monitor element   366
  parent_activity_id - Parent activity ID monitor element   367
  parent_uow_id - Parent unit of work ID monitor element   367
  prep_time - Preparation time monitor element   368
  queue_assignments_total - Queue assignments total monitor element   368
  queue_size_top - Queue size top monitor element   368
  queue_time_total - Queue time total monitor element   369
  request_exec_time_avg - Request execution time average monitor element   369
  routine_id - Routine ID monitor element   370
  rows_fetched - Rows fetched monitor element   370
  rows_modified - Rows modified monitor element   370
  rows_returned - Rows returned monitor element   372
  rows_returned_top - Actual rows returned top monitor element   373
  sc_work_action_set_id - Service class work action set ID monitor element   373
  sc_work_class_id - Service class work class ID monitor element   374
  section_env - Section environment monitor element   374
  service_class_id - Service class ID monitor element   375
  service_subclass_name - Service subclass name monitor element   376
  service_superclass_name - Service superclass name monitor element   376
  source_service_class_id - Source service class ID monitor element   377
  statistics_timestamp - Statistics timestamp monitor element   377
  stmt_invocation_id - Statement invocation identifier monitor element   378
  temp_tablespace_top - Temporary table space top monitor element   379
  thresh_violations - Number of threshold violations monitor element   379
  threshold_action - Threshold action monitor element   380
  threshold_domain - Threshold domain monitor element   381
  threshold_maxvalue - Threshold maximum value monitor element   381
  threshold_name - Threshold name monitor element   382
  threshold_predicate - Threshold predicate monitor element   382
  threshold_queuesize - Threshold queue size monitor element   382
  thresholdid - Threshold ID monitor element   382
  time_completed - Time completed monitor element   383
  time_created - Time created monitor element   383
  time_of_violation - Time of violation monitor element   383
  time_started - Time started monitor element   384
  top - Histogram bin top monitor element   384
  uow_comp_status - Unit of Work Completion Status   384
  uow_elapsed_time - Most Recent Unit of Work Elapsed Time   385
  uow_id - Unit of work ID monitor element   385
  uow_lock_wait_time - Total time unit of work waited on locks monitor element   386
  uow_log_space_used - Unit of work log space used monitor element   386
  uow_start_time - Unit of work start timestamp monitor element   387
  uow_status - Unit of Work Status   388
  uow_stop_time - Unit of work stop timestamp monitor element   388
  uow_total_time_top - UOW total time top monitor element   389
  wl_work_action_set_id - Workload work action set identifier monitor element   389
  wl_work_class_id - Workload work class identifier monitor element   390
  wlm_queue_assignments_total - Workload manager total queue assignments monitor element   390
  wlm_queue_time_total - Workload manager total queue time monitor element   391
  wlo_completed_total - Workload occurrences completed total monitor element   393
  work_action_set_id - Work action set ID monitor element   393
  work_action_set_name - Work action set name monitor element   393
  work_class_id - Work class ID monitor element   394
  work_class_name - Work class name monitor element   394
  workload_id - Workload ID monitor element   394
  workload_name - Workload name monitor element   395
About this book
This book provides information on the DB2® workload manager features and
functionality that can help you obtain a stable, predictable execution environment
that meets your business objectives. With DB2 workload manager, you manage
both requests and resources. This book also provides information about
monitoring and troubleshooting the workload on your data server.
In a data server environment, the need for effective management of work is
greater than ever, because data servers are being stressed like never before.
Cash registers generate thousands of data inserts, reports are
constantly being generated to determine whether sales targets are being met, batch
applications run to load collected data, and administration tasks such as backups
and reorganizations run to protect the data and make the server run optimally. All
these operations are using the same database system and competing for the same
resources.
To ensure the best chance of meeting goals for running a data server, an efficient
workload management system is critical.
In a data server environment, you must also define goals. Sometimes the goals are
clear, especially when they originate from service level agreement (SLA) objectives.
For example, queries from a particular application can consume no more than 10%
of the total processor resource. Goals can also be tied to a particular time of day.
For example, an overnight batch utility might have to complete loading data by 8 AM.
[Figure: the stages of workload management are identification of activities, management of the work, and monitoring.]
Most database objects have owners, and these owners have the authority to alter
the objects that they own. Unlike most objects, DB2 workload manager objects do
not have owners; this avoids the unpredictable effects on your execution
environment that changes made by individual owners could cause.
The following example illustrates the problem that would be caused by having
owners for DB2 workload manager objects. Assume that a service superclass has
two user-defined service subclasses, A and B, and that each service subclass has a
different owner. Initially, the prefetch priority setting is medium for the default
service subclass and for the two service subclasses. If the owner of service subclass
A changes its prefetch priority to high and many prefetch requests come from this
service subclass, connections to service subclass B and the default subclass have
less access to prefetcher services, and the performance of activities running in these
service subclasses might suffer.
DB2 workload manager is available on all platforms supported by DB2 9.5 for
Linux®, UNIX®, and Windows® or later. The optional tight integration offered
between DB2 service classes and operating system service classes is available with
AIX and Linux WLM.
No. Although Query Patroller and DB2 workload manager are both part of the
Performance Optimization feature, they are independent of each other. In other
words, you do not need to install Query Patroller to use DB2 workload manager,
or vice versa.
Why are the new WLM capabilities not integrated into Query
Patroller?
How does this new functionality affect Query Patroller and DB2
Governor?
To ensure an easy transition, the DB2 data server enables Query Patroller and DB2
Governor to coexist with the facilities provided by DB2 workload manager while
still providing separate scopes of control. If Query Patroller is present, any work
submitted for execution in the default user service class is intercepted and sent to
Query Patroller. Work submitted for execution in other service classes defined by
the database administrator is not presented to Query Patroller.
When DB2 9.5 or later is first installed, the default user service class is
automatically defined and all incoming work is sent to it for execution.
The story is essentially the same for DB2 Governor: although it can watch agents
in any service class, it is permitted to adjust agent priority only for agents in the
default user service class.
Note that DB2 workload manager is aware of, and can control, all work within
DB2, including the work within the default user service class. When Query
Patroller is used, it is recommended that you limit the use of DB2 workload
manager controls over work in the default user service class, to avoid potential
conflicts between Query Patroller and DB2 workload manager. It is always safe to
use the monitoring features of DB2 workload manager.
While you can emulate the approach taken by Query Patroller by categorizing
work by its estimated cost, mapping it to different service subclasses, and
applying different concurrency thresholds to each service subclass, this is neither
the recommended approach nor the best starting point. This approach deals only
with DML SQL statements, not with all the different types of work that execute
within the DB2 data server. Achieving a stable execution environment requires
that all work executing within the DB2 data server is controlled to one degree or
another. If you cannot separate work by its source (via a DB2 workload), you can
map all incoming work to a common service superclass and use a DB2 work
action set to separate work by its characteristics and assign it to different service
subclasses. At this point, you can manipulate the resources available to each
service class to achieve your objectives. Note that not all types of activities can be
recognized within a work action set; any unrecognized activities are not mapped
to a different service class and remain in the one originally assigned to them.
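The following sketch illustrates that approach. The object names are hypothetical
and the statements are only an outline; check the CREATE WORK CLASS SET and
CREATE WORK ACTION SET statement descriptions for the complete syntax
before using them.
-- Create a common superclass and two subclasses to receive the work
CREATE SERVICE CLASS MAIN_SC
COMMIT
CREATE SERVICE CLASS SHORT_WORK UNDER MAIN_SC
COMMIT
CREATE SERVICE CLASS LONG_WORK UNDER MAIN_SC
COMMIT
-- Send all default user work to the common superclass
ALTER WORKLOAD SYSDEFAULTUSERWORKLOAD SERVICE CLASS MAIN_SC
COMMIT
-- Classify DML activities by estimated cost (in timerons)
CREATE WORK CLASS SET COST_CLASSES
  (WORK CLASS SMALL_DML WORK TYPE DML FOR TIMERONCOST FROM 0 TO 99999,
   WORK CLASS LARGE_DML WORK TYPE DML FOR TIMERONCOST FROM 100000 TO UNBOUNDED)
COMMIT
-- Map each work class to a different subclass of the superclass
CREATE WORK ACTION SET COST_ACTIONS FOR SERVICE CLASS MAIN_SC
  USING WORK CLASS SET COST_CLASSES
  (WORK ACTION MAP_SMALL ON WORK CLASS SMALL_DML MAP ACTIVITY TO SHORT_WORK,
   WORK ACTION MAP_LARGE ON WORK CLASS LARGE_DML MAP ACTIVITY TO LONG_WORK)
COMMIT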
If resource manipulation does not achieve the desired results, you can selectively
apply other features of DB2 workload manager as needed until you achieve your
objectives. This includes the application of DB2 thresholds, including concurrency
thresholds. As most concurrency thresholds (such as the
CONCURRENTDBCOORDACTIVITIES threshold, for example) coordinate
activities across all database partitions, they impose a higher overhead on the
activities that they manage. Introducing a concurrency threshold adds complexity
to the execution environment; if care is not taken in the definition, unexpected or
unintended results may be the consequence.
Users on all platforms can control processor resources and prefetcher I/O activity
between service classes by using SQL (for example, the CREATE SERVICE CLASS
and ALTER SERVICE CLASS statements). To control CPU usage, users can use the
agent priority attribute of the DB2 service class to set a relative processor priority
for all threads that run in that service class. On AIX and Linux platforms, users
can also use this approach, or they can choose to take advantage of AIX WLM or
Linux WLM for more advanced processor usage management. For prefetcher I/O
activity, users on all platforms can set the prefetch priority attribute of a DB2
service class to a value of high, medium, or low. All service classes run with a
medium prefetch priority by default.
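As an illustration only, a service class for low-priority work might be defined as
follows; the service class name is hypothetical, and the valid AGENT PRIORITY
range differs by platform, so verify the values against the CREATE SERVICE
CLASS statement description.
-- On UNIX and Linux, a positive agent priority value lowers the relative
-- priority of agents in the service class; Windows uses a different range
CREATE SERVICE CLASS BATCH_SC AGENT PRIORITY 20 PREFETCH PRIORITY LOW
COMMIT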
Currently, neither AIX nor Linux WLM supports I/O activity controls at the thread
level. Because DB2 Version 9.5 and later use a threaded model, it is not possible to
use either AIX or Linux WLM to control disk I/O activity. You can control DB2
prefetcher I/O activity by using the PREFETCH PRIORITY attribute of any DB2
service class.
DB2 data server uses primarily shared memory which is accessed by more than
one agent from different service classes. For this reason, it is not possible to divide
memory allocation between different service classes using either AIX or Linux
WLM.
WebSphere Application Server Version 6.0 and Version 6.1 can set or pass in the
CLIENT INFO fields to DB2 data server, either explicitly by your applications (see:
Passing client information to a database) or implicitly by having WebSphere
Application Server do it for you (see: Implicitly set client information).
The service class agent priority setting does not take effect until an agent begins
work on activities in that service class. An idle agent keeps the priority of the
service class it last worked for until it joins a different service class. Another reason
may be that the AGENTPRI dbm config parameter is set. Even though this
parameter is deprecated as of Version 9.5, it does take precedence over the WLM
service class setting. To use the WLM setting, reset the AGENTPRI config
parameter to its default value, which is -1. On AIX, the instance owner must have
CAP_NUMA_ATTACH and CAP_PROPAGATE capabilities to set a higher relative
priority for agents in a service class.
On Solaris 10, the instance owner must have the proc_priocntl privilege to set a
higher relative priority for agents in a service class. If DB2 is run within a
non-global zone of Solaris, the zone must have the proc_priocntl privilege in the
limit privilege set. On Solaris 9, there is no facility for DB2 to set a higher relative
priority for agents.
The simple answer to this question is yes. You can create one or more
CONCURRENTDBCOORDACTIVITIES concurrency thresholds that apply to the
same set of activities by defining them at the level of the database, the service
class in which the work executes, or within a work action set applied at the
database or workload level. Be aware that each new concurrency threshold that
applies to an activity implies additional overhead to enforce that concurrency
threshold.
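For illustration, a single queuing concurrency threshold on a hypothetical service
subclass might look like the following sketch; the names and limits are examples
only.
-- Allow 5 concurrent coordinator activities in subclass BATCH_SC under
-- superclass MAIN_SC, and queue up to 20 more before rejecting work
CREATE THRESHOLD LIMIT_BATCH_CONCURRENCY
  FOR SERVICE CLASS BATCH_SC UNDER MAIN_SC ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN CONCURRENTDBCOORDACTIVITIES > 5 AND QUEUEDACTIVITIES > 20
  STOP EXECUTION
COMMIT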
The more complex answer includes the following caution: verify that you actually
need to use concurrency thresholds at all, let alone multiple ones. There may be
simpler ways to address the scenario you are facing by using one or more of the
other mechanisms and controls provided by DB2 workload manager. If you find
yourself introducing one or more concurrency thresholds, you may have bypassed
a simpler approach to address the problem. In general, concurrency thresholds for
activities should be used at the database level, via a work action set, for disruptive
activities that affect the entire system or go across service class boundaries, while
concurrency thresholds at the service class level can be used to ensure proper
sharing of resources between one service class and another (although a more
effective technique may be to use the CONCURRENTWORKLOADACTIVITIES
threshold on the workloads that contribute to the service class). There should
rarely, if ever, be a case where you need to define a concurrency threshold for
CONCURRENTDBCOORDACTIVITIES at the database level by itself.
There are a number of reasons why a connection might not be mapped to the
desired workload. The most common ones are the failure to grant the USAGE
privilege on the workload, incorrect spelling of the case-sensitive connection
attributes, or the existence of a matching workload definition that is positioned
earlier in the evaluation order.
Connection attributes for workloads are case sensitive. For example, if the system
user ID is uppercase, then the SYSTEM_USER connection attribute that you
specify must also be uppercase.
To establish why a connection is not being mapped to the expected workload, you
should gather some information. Which workload is the work being mapped to? Is
that workload before or after the one that you thought would be used when you
look at the workload definitions in the order of evaluation? (Hint: try selecting the
workload definitions ordered in ascending order by the value of the
EVALUATIONORDER column in SYSCAT.WORKLOADS).
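For example, the following query lists the workload definitions in evaluation
order; it assumes that the WORKLOADNAME and ENABLED columns are
available in the SYSCAT.WORKLOADS view in addition to EVALUATIONORDER.
-- List workloads in the order in which the data server evaluates them
SELECT EVALUATIONORDER, WORKLOADNAME, ENABLED
  FROM SYSCAT.WORKLOADS
  ORDER BY EVALUATIONORDER ASC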
If you do not know what the connection attributes are for the target connection,
you can find out the values for the connection in a number of different ways:
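One lightweight option, for example, is to query the relevant special registers
from the connection in question, as in the following sketch:
-- Run this from the connection whose attributes you want to verify
SELECT SYSTEM_USER, SESSION_USER,
       CURRENT CLIENT_USERID, CURRENT CLIENT_APPLNAME,
       CURRENT CLIENT_WRKSTNNAME, CURRENT CLIENT_ACCTNG
  FROM SYSIBM.SYSDUMMY1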
Why does the DB2 data server not automatically create AIX and
Linux service classes?
While having the DB2 data server automatically create corresponding AIX or Linux
WLM service classes when DB2 service classes are created might reduce
administrative overhead for system administrators, this is not available for a
number of reasons:
v AIX and Linux WLM provide a wide variety of configuration options from
which you can craft an environment suited for your unique needs. If DB2 data
server were to automatically create AIX or Linux WLM service classes, then
either the full variety of options provided by AIX and Linux WLM service
classes would have to be surfaced within the DB2 service class DDL statements,
greatly increasing DDL complexity, or the AIX and Linux WLM service classes
would have to be standardized using a limited set of features which would not
permit full utilization of AIX and Linux WLM features.
v DB2 service classes are created once in a database by SQL statements and are
then automatically available on all database partitions when they are started.
AIX or Linux WLM service classes, by contrast, must be defined separately on
each AIX or Linux machine that participates in the database, including any
machines that are added later.
v The use of tight integration with AIX and Linux WLM is an optional feature of
DB2 workload manager which can be enabled or disabled at any time.
In the end, we decided that it is better for DB2 data servers not to create AIX or
Linux WLM service classes when a DB2 service class is created. We believe that
this gives the DB2 data server and our customers maximum flexibility.
These CLP commands are affected by DB2 workload manager thresholds, because
the database engine cannot distinguish system requests originating with these
utilities from other requests directly initiated by users within the CLP interactive
front-end.
Yes. You can change the service subclass in which an activity is executing to
another service subclass within the same parent service superclass by defining a
CPUTIMEINSC or SQLROWSINSC threshold with the REMAP ACTIVITY action
on the original service subclass. Initially, DB2 workload manager maps an activity
to a service class based on the workload definition that applies to the connection.
An enhancement has been added to CLP so that the client application name is
automatically set to the CLP script filename, with a CLP prefix preceding it (the
value of this field at the server can be seen in the CURRENT
CLIENT_APPLNAME special register). For example, if the CLP script filename is
batch.db2, the CURRENT CLIENT_APPLNAME special register value is set to
CLP batch.db2 by CLP when that script is run. With this feature, it is possible
for different CLP scripts to be associated with different workloads based on the
client application name.
For example, to create a workload for CLP file batch1.db2, you can issue the
following DDL statement:
CREATE WORKLOAD batch1 CURRENT CLIENT_APPLNAME ('CLP batch1.db2')
SERVICE CLASS class1
To create a workload for CLP file batch2.db2, you can issue the following DDL
statement:
CREATE WORKLOAD batch2 CURRENT CLIENT_APPLNAME ('CLP batch2.db2')
SERVICE CLASS class2
Since these two batch files are associated with different workloads, they can be
assigned to different service classes and managed differently.
The answer depends on why the monitoring is desired and what is to be done
with the information.
Aggregate activity information is about the entire set of work that has executed
within a service class and captures the characteristics of this set; it does not
capture details about individual activities. For normal operational monitoring,
using the COLLECT AGGREGATE ACTIVITY DATA clause is preferred because it
is very lightweight, can be gathered automatically by an event monitor for a
historical record, and provides important information on overall response time
patterns. If further insight into the type of work within a service class is required,
more detailed activity information can be collected as well.
How does DB2 WLM work with the new AIX WPAR feature?
All aspects of DB2 workload manager work within an AIX WPAR. However,
because AIX WPARs do not support the use of AIX WLM features, the option to
tightly integrate DB2 service classes with AIX WLM service classes is of no benefit
in this environment.
Query Patroller cannot schedule, hold, or queue SQL statements issued from
within a routine. Since the QP query class relies on queuing to effect its control,
this is a significant limitation.
DB2 workload manager enables control of all queries regardless of origin. While it
does offer queuing as a secondary control mechanism, the primary control
mechanism is resource control and prioritization via a DB2 service class. This
means that you can allocate processor resources between DB2 service classes and
possibly avoid using a queue at all.
Now that Query Patroller and DB2 Governor are deprecated, how
do I migrate to DB2 workload manager?
The threshold violations, statistics, and activities WLM event monitors capture
information about threshold violations, operational statistics, and both individual
and aggregate activity data (see: Historical monitoring with WLM event monitors).
Each event monitor collects one or more logical data groups (see: Event type
mappings to logical data groups) and there are one or more monitoring elements
in each logical data group (see: Event monitor logical data groups and monitor
elements).
Use workloads to identify work by where it originates or who submits it. For
example, you can identify work by the application name or the system
authorization ID that submitted it.
The following figure shows a number of different sources of work, coming from
different users, groups, and applications.
Activities
One way that you can monitor and control workloads is on the basis of individual
activities. Each time your DB2 data server executes the access plan for an SQL or
XQuery statement, or runs the load utility, a corresponding activity is created.
Most workload controls and thresholds apply to individual activities. For
example, the ACTIVITYTOTALTIME threshold controls the maximum time that
your data server can spend processing an activity.
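For illustration, a database-wide ACTIVITYTOTALTIME threshold might be
defined as in the following sketch; the threshold name and the four-hour limit are
hypothetical.
-- Stop any activity that spends more than 4 hours in the system
CREATE THRESHOLD MAX_ACTIVITY_TIME
  FOR DATABASE ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN ACTIVITYTOTALTIME > 4 HOURS
  STOP EXECUTION
COMMIT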
The life cycle of an activity for a DML statement does not include processing that
occurs before or outside of access plan execution. This implies that activity-based
monitoring does not cover operations such as connecting to the database or
compiling SQL into an access plan.
During its life cycle, an activity can spend time in various states, which are
reported by the activity_state event monitor element. Some of the states an activity
can be in are:
v EXECUTING - This state indicates that the coordinator agent is working on the
activity. An activity that encounters a lock wait situation is reported as
executing.
v IDLE - This state indicates that the coordinator agent is waiting for the next
request from a client.
v QUEUED - Some thresholds include a built-in queue. This state indicates that
the activity is waiting in the queue for its turn to begin executing.
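To see the states of the activities for a particular application, you might run a
query such as the following sketch. It assumes that the
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 table function is
available at your fix pack level and that it accepts an application handle and a
database partition number; check the table function reference for the exact name,
arguments, and result columns.
-- 1234 is a hypothetical application handle; -2 requests data from all
-- database partitions
SELECT ACTIVITY_ID, UOW_ID, ACTIVITY_STATE, ACTIVITY_TYPE
  FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(1234, -2)) AS T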
Monitoring data for the activity is aggregated at the end of the lifetime of an
activity.
The following figure shows how the lifetime of a long-running query breaks
down into queue time and execution time:
[Figure: the lifetime of the query consists of queue time followed by execution time, which spans the open, fetch, and close operations.]
This section describes what activities are created for various SQL statements and
identifies the start and end points in the lifetime of these activities. You can use
this information to understand how SQL statements are monitored and controlled
through activities.
SELECT statements using WITH HOLD cursors: When a WITH HOLD cursor is
used, an application can open a cursor within one unit of work and close the
cursor in a subsequent unit of work. The cursor remains open for multiple units of
work. The corresponding activity exists for as long as the cursor is open, because
the life cycle of the activity ends only after the cursor is closed.
CALL statement: The activity associated with the CALL statement starts when
your DB2 data server starts processing the statement or request and ends after the
stored procedure processing is complete.
Triggers and UDFs: When an SQL statement invokes a trigger or UDF, no
additional activity is created. The work done by that trigger or UDF is accrued to
the activity for the SQL statement that invoked it. If the trigger or UDF executes
additional SQL statements, those statements are handled like any other statement
execution; that is, an activity is created for each statement.
PREPARE statement: No activity is created, because activities are not created until
an access plan is executed.
Nested activities
Running the load utility generates several activities: one load activity and several
others of type READ, WRITE, or OTHER. In the case of a load from cursor, an
additional activity is created for the cursor that the load activity reads from. This
cursor activity is a nested activity of the load activity.
Workload management DDL statements differ from other DB2 DDL statements:
v Only one uncommitted DB2 workload manager DDL statement is permitted at a
time across all database partitions. If an uncommitted DB2 workload manager
DDL statement exists, subsequent DB2 workload manager DDL statements wait
until the uncommitted DB2 workload manager DDL statement is either
committed or rolled back. DB2 workload manager DDL statements are processed
in the order in which they are issued.
v Every DB2 workload manager DDL statement must be followed by a COMMIT
or ROLLBACK statement, as shown in the sketch after this list.
v A DB2 workload manager DDL statement cannot be issued in an XA transaction.
After a connection issues a DB2 workload manager DDL statement, the same
connection must issue a COMMIT or ROLLBACK statement immediately after
the DB2 workload manager DDL statement. With XA transactions, it is possible
for multiple connections to join a transaction, and any of the connections can
commit or roll back the transaction. In this situation, it is impossible to ensure
that the workload management environment would be correctly implemented.
v DB2 for z/OS® does not recognize DB2 Database for Linux, UNIX, and Windows
DB2 workload manager DDL statements.
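For example, a typical sequence commits each workload management DDL
statement before the next one is issued; the object names here are hypothetical.
-- Each workload management DDL statement is committed immediately
CREATE SERVICE CLASS REPORTING_SC
COMMIT
CREATE WORKLOAD REPORTING_WL APPLNAME ('ReportApp') SERVICE CLASS REPORTING_SC
COMMIT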
The connection attributes are evaluated when the connection is established, and
the connection is assigned to a workload, creating a new occurrence of that
workload. If any of the connection attributes change during the life of that
connection, the workload assignment is reevaluated at the start of the next unit of
work after the change. If a different workload definition is then assigned, the
occurrence of the previously assigned workload is ended and a new occurrence is
started for the newly assigned workload definition. Although each connection is
assigned to one and only one workload at any one time, multiple connections can
be assigned to the same workload at the same time, resulting in multiple
concurrent workload occurrences for that definition.
Workload reevaluation occurs at the beginning of each unit of work if the value of
a connection attribute or the workload definition itself changes during the unit of
work. This reevaluation might result in the connection being associated with a
new workload, creating a different workload occurrence. For more information,
see “Workload assignment” on page 23.
To assign all activities created by the application Accounts under the connections
that belong to the session user group Deptmgr to the SUMMARY workload, which
maps the activities to the HumanResources service class, issue a statement such as
the following:
CREATE WORKLOAD SUMMARY SESSION_USER_GROUP('Deptmgr') APPLNAME('Accounts')
SERVICE CLASS HumanResources
The wildcard asterisk (*) matches zero or more characters. If you need to match a
literal asterisk, use a double asterisk (**) to specify the asterisk as a literal
character.
For example, if you have several accounts receivable applications (accrec01,
accrec02 ... accrec15) that you want all to belong to the same workload so that
DB2 workload manager treats them equally, specify the CURRENT
CLIENT_APPLNAME('accrec*') connection attribute to match all of these
applications when you create or alter your workload. Similarly, an acc*rec
accounts receivable application (a name that includes an asterisk character) is
matched by the CURRENT CLIENT_APPLNAME('acc**rec') connection attribute.
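For instance, a single workload definition (with a hypothetical workload and
service class name) could cover all of the accounts receivable applications
described above:
-- 'accrec*' matches accrec01 through accrec15 and any other application
-- name that begins with accrec
CREATE WORKLOAD ACCOUNTS_RECEIVABLE
  CURRENT CLIENT_APPLNAME ('accrec*')
  SERVICE CLASS ACCOUNTING_SC
COMMIT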
The following workload connection attributes support the use of wildcards:
As you analyze the usage characteristics of your environment, you can use the
CREATE WORKLOAD statement to create your own workloads and map them to
specific service classes. When you create the workload, you define both the values
that are used to evaluate the connection attributes during workload assignment
and the order in which the workload is evaluated relative to other workloads.
Because more than one workload can match incoming connection attributes, being
able to change the evaluation order enables you to determine which matching
workload is chosen. Whether or not the session user has the USAGE privilege on
the workload also determines which matching workload is chosen. For more
information, see “Workload assignment” on page 23.
The following figure shows multiple requests being evaluated against workloads in
the order A, B, C, and D, then assigned to specific workloads and executed in the
applicable service class. Requests that cannot be matched to an existing workload
are matched to the SYSDEFAULTUSERWORKLOAD workload and executed by
default in the SYSDEFAULTUSERCLASS service superclass. For information about
the types of activities that run in the default maintenance class and default system
class, see “Default service superclasses and subclasses” on page 68.
[Figure: incoming application requests are evaluated against Workloads A, B, C, and D and run in the default service subclass or in service subclasses 1.1, 1.2, and 1.3; system maintenance requests and system database requests run in their own service classes.]
Workload assignment
At the beginning of the first unit of work after a database connection is
established, the data server assigns the connection to a workload by evaluating the
connection attributes of each workload that is enabled.
You can set the evaluation order by using the POSITION keyword of the CREATE
WORKLOAD or ALTER WORKLOAD statement, as follows:
v By specifying the absolute position of the workload in the evaluation order, as
shown in the following example:
CREATE WORKLOAD...POSITION AT 2
POSITION AT 2 means that the workload is to be positioned second in the
evaluation order. A matching workload that is positioned higher in the
evaluation order is evaluated first. That is, if the workloads at both position 2
and position 3 match, the workload at position 2 is evaluated before the
workload at position 3.
If the position that you specify on the CREATE WORKLOAD or ALTER
WORKLOAD statement is greater than the total number of existing workloads,
the workload is positioned next to last in the evaluation order, before the
SYSDEFAULTUSERWORKLOAD workload. The effect is the same as specifying
POSITION LAST on the CREATE WORKLOAD or ALTER WORKLOAD
statement.
v By using the POSITION BEFORE workload-name or POSITION AFTER
workload-name keyword, where workload-name is an existing workload. This
keyword specifies the position of a new or altered workload relative to another
workload in the evaluation order, as shown in the following example:
ALTER WORKLOAD...POSITION BEFORE workload2
If you do not specify the POSITION keyword, by default, the new workload is
positioned after the other defined workloads in the evaluation order but before the
SYSDEFAULTUSERWORKLOAD workload, which is always considered last.
Workload reassignment
Default workloads
The default user workload SYSDEFAULTUSERWORKLOAD provides a workload
for your data server to which all connections are assigned initially. The default
administration workload SYSDEFAULTADMWORKLOAD permits you to take
corrective administrative action that cannot otherwise be performed. Both
workloads are created at database creation time and you cannot drop them.
Connections that are assigned to the default user workload are mapped to the
default user service superclass SYSDEFAULTUSERCLASS, which provides the
default execution environment. You can map connections to user-defined service
classes by creating user defined workloads. In addition, you can alter
SYSDEFAULTUSERWORKLOAD so that it maps connections to a different service
class than SYSDEFAULTUSERCLASS.
The following table shows the columns returned for the
SYSDEFAULTUSERWORKLOAD workload in the SYSCAT.WORKLOADS view,
along with their values and whether you can modify these values.
Although you require no special authority to use the SET WORKLOAD command,
you require ACCESSCTRL, DATAACCESS, DBADM, SECADM, or WLMADM
authority to assign a connection to the default administration workload. Otherwise,
SQL0552N is returned during workload assignment.
When the command takes effect depends on when you issue it:
v If you issue the SET WORKLOAD TO SYSDEFAULTADMWORKLOAD
command before connecting to the database, the connection is assigned to
SYSDEFAULTADMWORKLOAD at the beginning of the first unit of work after
the connection is established.
v If you issue the SET WORKLOAD TO SYSDEFAULTADMWORKLOAD
command at the beginning of a unit of work, after a connection to the database
is established, the connection is assigned to SYSDEFAULTADMWORKLOAD
when the first request that is not an sqleseti (Set Client Information) request is
submitted.
v If you issue the SET WORKLOAD TO SYSDEFAULTADMWORKLOAD
command in the middle of a unit of work, after a connection is established, the
connection is assigned to SYSDEFAULTADMWORKLOAD at the beginning of
the next unit of work.
When a connection is assigned to SYSDEFAULTADMWORKLOAD, workload
reassignment is performed at the beginning of the next unit of work if either of the
following situations occurs:
v You revoke SYSADM or DBADM authority from the session user. In this
situation, SQL0552N is returned.
v You issue a SET WORKLOAD TO AUTOMATIC command. This command
indicates that the next unit of work should not be assigned to the
SYSDEFAULTADMWORKLOAD workload and that a normal workload
evaluation is to be performed at the beginning of the next unit of work. For
more information, see “Workload assignment” on page 23.
The threshold that is the cause of the problem is created accidentally with the
following statement. Concurrency should have been set to 100 but was set to 0.
This threshold effectively prevents any activity from executing:
CREATE THRESHOLD PROHIBITIVE FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE WHEN CONCURRENTDBCOORDACTIVITIES > 0
STOP EXECUTION
Note: This statement is intended only to show you how a severely prohibitive
threshold might be created. You should not issue this statement.
SQL4712N The threshold "PROHIBITIVE" has been exceeded. Reason code = "6".
SQLSTATE=5U026
Before you can take corrective action, you must set the workload to the default
administration workload:
SET WORKLOAD TO SYSDEFAULTADMWORKLOAD
The problem can now be corrected by altering the threshold so that activities can
run:
ALTER THRESHOLD PROHIBITIVE WHEN CONCURRENTDBCOORDACTIVITIES > 100 STOP EXECUTION
Once corrected, change the workload back so that the connection will no longer be
assigned to SYSDEFAULTADMWORKLOAD but to whatever workload it was
assigned to before:
SET WORKLOAD TO AUTOMATIC
The same SELECT statement used before should now complete successfully:
SELECT * FROM SYSCAT.TABLES
...
To create a workload:
1. Specify one or more of the following properties for the workload using the
CREATE WORKLOAD statement:
v The name of the workload.
v The connection attributes. For a match to occur, the incoming connection must
supply connection attributes that match those that you specified for the
workload. For more information, see “Work identification by origin with
workloads” on page 19. When specifying connection attributes, note that
values are ORed and attributes are ANDed: for example, UserID (bob OR sue
OR frank) AND Application (SAS).
v A value that indicates whether occurrences of this workload are permitted to
access the database. By default, occurrences of this workload are permitted to
access the database.
v A value that indicates whether the workload is enabled or disabled. By
default, the workload is enabled.
v The service class under which occurrences of this workload are to be
executed. The SYSDEFAULTUSERCLASS service superclass is the default.
If you specify a user-defined service superclass and do not map the
workload to run in a user-defined service subclass under the service
superclass, the workload occurrences will run in the
SYSDEFAULTSUBCLASS service subclass of the service superclass.
After you create a workload, you might need to grant the USAGE privilege on it to
one or more session users. (Session users with WLMADM or DBADM authority
have an implicit privilege to use any workload.) Even if a connection provides an
exact match to the connection attributes of the workload, if the session user does
not have the USAGE privilege on the workload, the data server does not consider
the workload when performing workload evaluation. For more information, see
“Granting the USAGE privilege on a workload” on page 35.
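Putting these steps together, a sketch with hypothetical names might look like the
following; it creates a workload, grants the USAGE privilege to the session user,
and commits each statement.
-- Map the PayrollApp application run by session user BOB to a
-- user-defined service class
CREATE WORKLOAD PAYROLL_WL
  APPLNAME ('PayrollApp') SESSION_USER ('BOB')
  SERVICE CLASS PAYROLL_SC
COMMIT
-- Without the USAGE privilege, the workload is ignored during workload
-- evaluation for BOB's connections
GRANT USAGE ON WORKLOAD PAYROLL_WL TO USER BOB
COMMIT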
Altering a workload
An ALTER WORKLOAD statement changes a workload in the catalogs.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
To alter a workload:
1. Specify one or more of the following properties for the workload using the
ALTER WORKLOAD statement:
v The connection attributes. You can add connection attributes to and drop
connection attributes from the workload definition unless it is the
SYSDEFAULTUSERWORKLOAD or SYSDEFAULTADMWORKLOAD
workload. The incoming connection must supply matching connection
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
When you prevent a workload from accessing the database, the data server still
examines that workload when performing workload assignment. However, all
occurrences of that workload are rejected with an error. To permit a workload to
access the database:
1. Use the ALLOW DB ACCESS option of the ALTER WORKLOAD statement to
permit the workload to access the database. For example, to permit a workload
called WL1 to access the database, specify the following statement:
ALTER WORKLOAD WL1 ALLOW DB ACCESS
2. Commit your changes. When you commit your changes, the workload is
updated in the SYSCAT.WORKLOADS view.
Altering a workload to permit its occurrences to access the database takes effect
when the data server analyzes the next unit of work for that workload. For
example, if you specified DISALLOW DB ACCESS for workload A and alter the
workload by specifying ALLOW DB ACCESS, new occurrences of workload A are
permitted to execute. Previously, any occurrence of workload A would have been
rejected with an error.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
Enabling a workload
The DB2 data server checks the connection attributes specified for a workload
against the connection attributes of the current session. The data server does not
consider a disabled workload when it looks for a matching workload.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
By default, a workload is enabled when you create it. If you create a workload as
disabled, you must enable it for the data server to consider the workload when it
performs workload evaluation.
To enable a workload:
1. Identify the workload that you want to enable. You can display the set of
disabled workloads by querying the SYSCAT.WORKLOADS view, as shown in
the following example:
SELECT * FROM SYSCAT.WORKLOADS WHERE ENABLED='N'
2. Use the ALTER WORKLOAD statement to enable the disabled workload:
ALTER WORKLOAD...ENABLE
Enabling a workload takes effect at the beginning of the next unit of work. At that
point, a workload reevaluation occurs, and the data server considers the newly
enabled workload when it performs the evaluation.
Disabling a workload
Use this task to prevent specific workloads from being considered during
workload assignment. If you disable a workload, the data server does not consider
it when it looks for a matching workload. Instead, the data server assigns the unit
of work to the next enabled workload that matches the connection attributes.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
To disable a workload:
1. Use the DISABLE option of the ALTER WORKLOAD statement to disable the
workload:
ALTER WORKLOAD...DISABLE
2. Commit your changes. When you commit your changes, the workload is
updated in the SYSCAT.WORKLOADS view.
Disabling a workload takes effect at the beginning of the next unit of work. At that
point, a workload reevaluation occurs, and the connection is assigned to the next
enabled workload that matches the connection attributes and for which there is
authorization.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
When the data server finds a workload that matches the attributes of an incoming
connection, the data server checks whether the session user has the USAGE
privilege on that workload. If the session user does not have the USAGE privilege
on that workload, the data server looks for the next matching workload. (In other
words, the workloads for which the session user does not have the USAGE
privilege are treated as if they do not exist.) Therefore, the workload USAGE
privilege gives you the ability to further control which workload among the
matching workloads a user, group, or role should be assigned to. For example, you
can define more than one workload with the same connection attributes and grant
the USAGE privilege on each of these workloads to only certain users, groups, or
roles. For more information, see “Workload assignment” on page 23.
The client can set the client user ID, client application name, client workstation
name, and client accounting string (which are some of the connection attributes
that are used to assign a connection to a workload) without authorization.
Therefore, the workload USAGE privilege also permits you to control which
session user has the authority to use a workload.
If you create a database without the RESTRICT option, the USAGE privilege on
the SYSDEFAULTUSERWORKLOAD workload is granted to PUBLIC at database
creation time.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
Dropping a workload
Dropping a workload removes it from the database catalog.
To drop a workload:
1. Disable the workload by specifying the ALTER WORKLOAD statement. See
“Disabling a workload” on page 34 for more information. Disabling the
workload prevents new occurrences of the workload from being able to run
against the database.
2. Ensure that no occurrences of this workload are running by using the
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table
function. For more information, see
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table
function.
The WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table
function returns the application handles corresponding to the active workload
occurrences. You can use the FORCE APPLICATION command to terminate the
applications using the application handles.
3. Drop the workload by specifying the DROP WORKLOAD statement. For
example, to drop the ACCTNG workload, specify the following statement:
DROP WORKLOAD ACCTNG
4. Commit your changes. When you commit your changes, the workload is
removed from the SYSCAT.WORKLOADS view. In addition, authorization
information for the workload is removed from the SYSCAT.WORKLOADAUTH
view.
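The following sketch shows this sequence for a hypothetical ACCTNG workload.
It assumes that the WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97
table function takes a service superclass name, a service subclass name, and a
database partition number (with empty strings and -2 meaning all service classes
and all partitions) and that it returns APPLICATION_HANDLE and
WORKLOAD_NAME columns; check the table function reference for the exact
signature before using it.
-- 1. Disable the workload so that no new occurrences can start
ALTER WORKLOAD ACCTNG DISABLE
COMMIT
-- 2. Look for active occurrences of the workload
SELECT APPLICATION_HANDLE, WORKLOAD_NAME
  FROM TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97('', '', -2)) AS T
  WHERE WORKLOAD_NAME = 'ACCTNG'
-- 3. Force any remaining applications if necessary, then drop the
--    workload and commit
DROP WORKLOAD ACCTNG
COMMIT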
The following figure shows a workload assignment. Users in the Marketing group
who submit queries through AppA are assigned to the APPAQUERIES workload.
They are not assigned to the PAYROLL workload, even though PAYROLL is
positioned before APPAQUERIES, because the definition of workload PAYROLL
specifies the SESSION_USER GROUP keyword as Finance. Users in the Finance
group who submit queries through AppA are assigned to the FINANCE workload.
They are not assigned to the PAYROLL workload, even though it is more specific
and specifies both AppA and Finance in its definition, because the FINANCE
workload is positioned before the PAYROLL workload. Users in the Marketing
group who submit queries through AppB are assigned to the
SYSDEFAULTUSERWORKLOAD workload, because none of the connection
attributes specified in the FINANCE, PAYROLL, or APPAQUERIES workload
definitions match the AppB application or Marketing group.
[Figure: the SYSCAT.WORKLOADS catalog lists the FINANCE, PAYROLL, and APPAQUERIES workloads (defined by the statements below), followed by SYSDEFAULTUSERWORKLOAD and SYSDEFAULTADMWORKLOAD; requests from the Finance and Marketing groups are evaluated against these workloads, and a workload occurrence of APPAQUERIES is created for Marketing users who submit queries through AppA.]
In the preceding figure, the CREATE WORKLOAD statements are as follows:
CREATE WORKLOAD PAYROLL APPLNAME ('AppA') SESSION_USER GROUP ('FINANCE')
SERVICE CLASS SC1
[Figure: the SYSCAT.WORKLOADS catalog lists the MARKETING, REPORTING, and APPSERVER workloads (defined by the statements below), followed by SYSDEFAULTUSERWORKLOAD and SYSDEFAULTADMWORKLOAD. An application server connects with CONNECT TO SAMPLE USER APPUSER USING ..., sets the client application name to marketing.exe, and later changes it to reporting.exe and sets the client user ID to Lidia, running queries 1 through 9 across several units of work; the connection first runs under an occurrence of the MARKETING workload and, after reassignment, under an occurrence of the REPORTING workload.]
The following statements are used to define the workloads specified in box 1 in
the previous figure:
CREATE WORKLOAD MARKETING SESSION_USER ('APPUSER')
CURRENT CLIENT_APPLNAME ('marketing.exe') SERVICE CLASS SC2
POSITION AT 1
When the first unit of work is submitted, the data server checks each workload in
the catalog, starting with the first workload in the list, and processes the
workloads in ascending order until it finds a workload with matching attributes.
When a matching workload is found, the unit of work runs under an occurrence of
that workload. When determining which workload to assign the connection to, the
data server compares the connection attributes in deterministic order.
The data server first checks the REPORTS workload for a match. The REPORTS
workload is first in the list.
Table 7. The REPORTS workload in the catalog: evaluation order 1; APPLNAME = AppA; the other connection attributes (ADDRESS, SYSTEM_USER, SESSION_USER, SESSION_USER GROUP, SESSION_USER ROLE, CURRENT CLIENT_USERID, CURRENT CLIENT_APPLNAME, CURRENT CLIENT_WRKSTNNAME, and CURRENT CLIENT_ACCTNG) are not set.
The data server checks the connection attributes in the following deterministic
order:
1. APPLNAME. The value of APPLNAME, AppA, for the database connection
matches the value of APPLNAME for the REPORTS workload.
2. SYSTEM_USER, which is not set in the workload definition. Any value
(including a null value) is considered a match.
3. SESSION_USER, which is not set in the workload definition. Any value is
considered a match.
In this situation, because of the explicit and implicit matches between the
connection attributes of the REPORTS workload and the information passed on the
connection, the data server selects the REPORTS workload as a potential match.
After selecting a workload, the data server then checks whether the session user
has the USAGE privilege on the workload. Assuming that the session user TIM has
the USAGE privilege on the REPORTS workload, that workload is used for the
connection. If, however, TIM does not possess the USAGE privilege on the
REPORTS workload, the data server continues by checking the
INVENTORYREPORT workload for a match.
You could also use the following SQL statement to achieve the same result:
ALTER WORKLOAD EXPENSEREPORT POSITION BEFORE REPORTS
To ensure that the ALTER WORKLOAD statement takes effect, you must
immediately issue a COMMIT statement after the ALTER WORKLOAD statement.
The effect of the ALTER WORKLOAD statement on the catalog is as follows:
Table 8. Workloads in the catalog after repositioning the EXPENSEREPORT workload. Each entry lists the evaluation order, the workload name, and the connection attribute values that are set (drawn from the APPLNAME, SYSTEM_USER, SESSION_USER, SESSION_USER GROUP, SESSION_USER ROLE, CURRENT CLIENT_USERID, CURRENT CLIENT_APPLNAME, CURRENT CLIENT_WRKSTNNAME, and CURRENT CLIENT_ACCTNG columns):
1. EXPENSEREPORT: AppA, TIM, EXPENSEAPPROVER
2. REPORTS: AppA
3. INVENTORYREPORT: AppB, LYNN, ACCOUNTING, TELEMKTR
4. SALESREPORT: AppC, KATE, KATE, SALESREP
5. AUDITREPORT: AppB, ACCOUNTING, FINANALYST
6. AUDITRESULT: LYNN, LYNN, Audit Group
If TIM does not already have the USAGE privilege on the EXPENSEREPORT
workload, you must issue the following statements (the COMMIT statement
ensures that the GRANT statement takes effect):
GRANT USAGE ON WORKLOAD EXPENSEREPORT TO USER TIM
COMMIT
At the beginning of the next unit of work, workload reassignment occurs, and the
data server assigns the connection from TIM to the EXPENSEREPORT workload.
When the first unit of work is submitted, the data server checks each workload in
the catalog in ascending evaluation order and stops when it finds a workload
whose connection attributes match those supplied by the connection. When it
checks the workloads, the data server compares the connection attributes in
deterministic order.
Because the APPLNAME attribute in the workload definition is AppB but the
APPLNAME attribute passed by the connection is AppA, no match is possible. The
data server proceeds to the REPORTS workload, which is second in the list:
Table 12. REPORTS workload in the catalog
Evaluation order: 2
Workload name: REPORTS
APPLNAME: AppB
All other connection attributes are not set.
Again, the APPLNAME attribute in the workload definition is AppB, which does
not match AppA. The data server proceeds to the third workload in the list,
INVENTORYREPORT:
The data server checks for a match between the submitted connection attributes
and the INVENTORYREPORT workload. The attributes are checked in the
following order:
1. APPLNAME. Both the workload definition and the connection have a value of
AppA, so a match occurs.
2. SYSTEM_USER. Both the workload definition and the connection have a value
of LYNN, so a match occurs.
3. SESSION_USER. The connection passed a value of LYNN. Because the
SESSION_USER attribute is not set for the workload, any value, including a
null value, that is passed by the connection matches.
4. SESSION_USER GROUP. Both the workload definition and the connection have
a value of ACCOUNTING, so a match occurs.
5. SESSION_USER ROLE. The workload definition specifies the value TELEMKTR,
but the connection supplied the values of FINANALYST and SALESREP. No match
occurs for this attribute.
The data server stops trying to match the INVENTORYREPORT workload and the
connection attributes and proceeds to the fourth workload in the list,
SALESREPORT:
Table 14. SALESREPORT workload in the catalog
Evaluation order: 4
Workload name: SALESREPORT
APPLNAME: AppC
SYSTEM_USER: KATE
SESSION_USER: KATE
SESSION_USER ROLE: SALESREP
All other connection attributes are not set.
The data server compares the attributes of the AUDITREPORT workload and the
connection in the deterministic order:
1. APPLNAME. Both the workload definition and the connection have a value of
AppA, so a match occurs.
2. SYSTEM_USER. The connection passed a value of LYNN. Because the
SYSTEM_USER attribute is not set for the workload, any value passed by the
connection matches.
3. SESSION_USER. The connection passed a value of LYNN. Because the
SESSION_USER attribute is not set for the workload, any value passed by the
connection matches.
4. SESSION_USER GROUP. Both the workload and the connection have a value of
ACCOUNTING for this attribute, so a match occurs.
5. SESSION_USER ROLE. Both the workload and the connection have a value of
FINANALYST for this attribute, so a match occurs.
After processing all the connection attributes and finding a matching workload, the
data server checks whether the session user has the USAGE privilege on the
workload. Assume that LYNN does not have the USAGE privilege on the
AUDITREPORT workload. In this situation, although all of the connection
attributes match, this workload is not associated with the connection. The data
server proceeds to the sixth workload in the evaluation list, AUDITRESULT:
Table 16. AUDITRESULT workload in the catalog
Evaluation order: 6
Workload name: AUDITRESULT
SESSION_USER: LYNN
CURRENT CLIENT_USERID: LYNN
CURRENT CLIENT_ACCTNG: Audit Group
All other connection attributes are not set.
The data server compares the attributes of the AUDITRESULT workload and the
connection in the deterministic order:
1. APPLNAME. Because the APPLNAME attribute is not set for the workload,
any value passed by the connection matches.
2. SYSTEM_USER. Because the SYSTEM_USER attribute is not set for the
workload, any value passed by the connection matches.
3. SESSION_USER. Both the workload and the connection have a value of LYNN
for this attribute, so a match occurs.
4. SESSION_USER GROUP. Because the SESSION_USER GROUP attribute is not
set for the workload, any value passed by the connection matches.
5. SESSION_USER ROLE. Because the SESSION_USER ROLE attribute is not set
for the workload, any value passed by the connection matches.
6. CURRENT CLIENT_USERID. Both the workload and the connection have a
value of LYNN for this attribute, so a match occurs.
7. CURRENT CLIENT_APPLNAME. Because the CURRENT
CLIENT_APPLNAME attribute is not set for the workload, any value passed
by the connection matches.
8. CURRENT CLIENT_WRKSTNNAME. Because the CURRENT
CLIENT_WRKSTNNAME attribute is not set for the workload, any value
passed by the connection matches.
9. CURRENT CLIENT_ACCTNG. Both the workload and the connection have a
value of Audit Group for this attribute, so a match occurs.
After processing all of the connection attributes and finding a matching workload,
the data server checks whether the session user has the USAGE privilege on the
workload. In this situation, assume that the session user LYNN has the USAGE
privilege on the AUDITRESULT workload, so the connection is assigned to that workload.
When the first unit of work is submitted, the data server checks each workload in
the catalog in ascending evaluation order and stops when it finds a workload
whose connection attributes match those supplied by the connection. When it
checks the workloads, the data server compares the connection attributes in
deterministic order.
The data server checks for a match between the submitted connection attributes
and the ITEMINQ workload. The attributes are checked in the following order:
1. APPLNAME. Because the APPLNAME attribute is not set for the workload,
any value, including a null value, that is passed by the connection matches.
2. SYSTEM_USER. The connection passed a value of LINDA. However, the
ITEMINQ workload values are KYLE and GEORGE. No match occurs for this
attribute.
The data server stops trying to match the ITEMINQ workload and the connection
and proceeds to the second workload in the list, DAILYTRANSREPORT:
Table 20. DAILYTRANSREPORT workload in the catalog
Evaluation order: 2
Workload name: DAILYTRANSREPORT
APPLNAME: AppC
SESSION_USER: KYLE, CAROL
SESSION_USER GROUP: SALES, ACCOUNTING
All other connection attributes are not set.
After processing all of the connection attributes and finding a matching workload
for the connection, the data server checks whether the session user has the USAGE
privilege on the workload. In this situation, assume that the session user KYLE has
the USAGE privilege on the DAILYTRANSREPORT workload. Because all
connection attributes match and the session user has the USAGE privilege, the
connection is assigned to the DAILYTRANSREPORT workload.
The following table shows the type keywords available for work classes and the
SQL statements that correspond to the different keywords. Except for the load
utility, all the statements in the table below are intercepted immediately before
execution.
The following figure shows a hierarchical view of the work type keywords, with ALL
at the top of the hierarchy and the more specific keywords, such as READ and
WRITE, beneath it.
SQL statements that do not fall under any of the available keywords are not
classified and behave as though no work class or work class set exists. For
example, if the statement is SET SCHEMA and the only work class in the work
class set has a work type of DML, that statement is not classified and no work
action can be applied to it. So, if the action is MAP, the SET SCHEMA activity runs
in the default service subclass (SYSDEFAULTSUBCLASS). If the action is a
threshold, no threshold is applied to the activity.
Additional identification
Work classes also permit you to use predictive elements in the identification for
DML work (or READ and WRITE statements). Predictive elements are useful
because they provide information about database activities that can be used to take
action before these activities start consuming resources on the data server. The
following table provides information about predictive elements supported by work
classes:
You can also identify activities by using the schema name of the procedure that a
CALL statement calls.
Based on workload attributes and work class types, you can identify work and
prepare it for the next stage, the management of the work.
For more information on working with work classes and work class sets, refer to the
following topics:
Examples of database activity attributes that determine which work class an
activity is associated with include the activity type (DDL, DML, or LOAD), the
estimated cost (where available), the estimated cardinality (where available), and
the schema (where available).
Work classes
You can alter work classes by using the ALTER WORK CLASS keyword of the
ALTER WORK CLASS SET statement.
You can drop work classes from a work class set using the DROP WORK CLASS
keyword of the ALTER WORK CLASS SET statement, or by using the DROP
WORK CLASS SET statement to drop the work class set.
You can view your work classes by querying the SYSCAT.WORKCLASSES view.
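For instance, a minimal sketch of these operations, using hypothetical names (a work class set WCS that contains work classes C1 and C2), might look like the following statements:
-- Hypothetical names; reposition one work class, drop another, and commit the change
ALTER WORK CLASS SET WCS
   ALTER WORK CLASS C1 POSITION AT 1
   DROP WORK CLASS C2
COMMIT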
You use work class sets to group one or more work classes. A work class set
consists of the following attributes:
v A unique descriptive name for the work class set
v Any comments that you want to supply for the work class set
v Zero or more work classes (although a work class can only exist in a work class
set, a work class set does not have to contain any work classes)
v An automatically generated ID that uniquely identifies the work class set
You create a new work class set using the CREATE WORK CLASS SET statement.
You can create an empty work class set and add work classes later, or you can
create a work class set that contains one or more work classes.
You change an existing work class set in the following ways using the ALTER
WORK CLASS SET statement:
v Add work classes to the work class set.
v Change work class attributes for work classes in the work class set.
v Drop work classes from the work class set.
You cannot change any work class set attributes.
Drop a work class set using the DROP WORK CLASS SET statement.
You can view your work class sets by querying the SYSCAT.WORKCLASSSETS
catalog view.
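As an illustration, the following sketch (all names are hypothetical) creates a work class set with two work classes and then adds a third work class to it:
-- Hypothetical example: a work class set for report queries
CREATE WORK CLASS SET REPORTQUERIES
   (WORK CLASS SMALLREAD WORK TYPE READ FOR TIMERONCOST FROM 0 TO 1000,
    WORK CLASS ALLOTHER WORK TYPE ALL)

-- Add a third work class and place it second in the evaluation order
ALTER WORK CLASS SET REPORTQUERIES
   ADD WORK CLASS MEDIUMREAD WORK TYPE READ FOR TIMERONCOST FROM 1000 TO 20000 POSITION AT 2
COMMIT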
For a work class set to be effective on the system, you must define a work action
set and associate it with the work class set. By using a work action set, you can
associate a work class set with a service superclass, a workload, or a database to
indicate what action should be applied to the database activities that fall within the
classification. If you do not create a work action set for the work class set, the data
server ignores the work class set.
If no matching work class exists, the database activity does not belong to any work
class, and no work action is applied to that activity.
You can affect the evaluation order of work classes in a work class set when you
create or alter a work class set. When you create or alter a work class set, you
determine the position at which a work class is placed in the work class set using
one of the following three methods:
v Specify the absolute position of the work class in the list.
For example, POSITION AT 2. In this situation, the work class is placed in the
second position in the work class set, and the work class that was at the second
position is now the third, the third work class is now the fourth, and so on. If
the position specified for the work class by the CREATE WORK CLASS SET or
ALTER WORK CLASS SET statement is greater than the total number of work
classes in the work class set, the work class is positioned last in the list.
v Use the POSITION BEFORE or POSITION AFTER keyword to specify the
position of the work class relative to work classes already in the work class set.
v Omit the position when creating a work class.
In this situation, the new work class is positioned at the end of the list. The
position you specify for the work class in the work class set list is not
necessarily the same as its final evaluation order.
Work classes are processed in the order they are received, which can affect the
evaluation order. For example, assume that you issue the following statement:
ALTER WORK CLASS SET WCS ALTER WORK CLASS C1 POSITION AT 1
ALTER WORK CLASS C2 POSITION AT 1
As a result, the C1 work class has an evaluation order of 2 and the C2 work class
has an evaluation order of 1 because C2 was the last work class processed.
The work classes are sorted within the work class set, by their evaluation order.
Based on this evaluation order, the database activity is checked against each work
class based on the attributes of the database activity (such as the activity type and
cardinality) until there is a match or the list of work classes in the work class set
has been exhausted.
Assume that the following work classes are in a work class set:
v Evaluation order: 1; work class name: MyLoad; work class type: LOAD
v Evaluation order: 2; work class name: SmallRead; work class type: READ; other
attributes: estimated cost < 300 timerons
v Evaluation order: 3; work class name: AllDML; work class type: DML
v Evaluation order: 4; work class name: LargeRead; work class type: READ; other
attributes: estimated cost > 301 timerons
v Evaluation order: 5; work class name: MyDDL; work class type: DDL
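A work class set along these lines could be defined with a single CREATE WORK CLASS SET statement. The following is only a sketch; the set name MYCLASSES is hypothetical and the cost boundaries approximate the ranges listed above:
-- Hypothetical sketch of the work class set described above
CREATE WORK CLASS SET MYCLASSES
   (WORK CLASS MYLOAD WORK TYPE LOAD,
    WORK CLASS SMALLREAD WORK TYPE READ FOR TIMERONCOST FROM 0 TO 300,
    WORK CLASS ALLDML WORK TYPE DML,
    WORK CLASS LARGEREAD WORK TYPE READ FOR TIMERONCOST FROM 301 TO UNBOUNDED,
    WORK CLASS MYDDL WORK TYPE DDL)
Because no POSITION clauses are specified, the work classes are evaluated in the order in which they are listed, which matches the evaluation order shown above.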
For example, if you create a work class for DDL, then associate that work class
with an ESTIMATEDSQLCOST threshold work action, that threshold will not
apply to any of the requests that are classified under DDL because DDL statements
do not have an estimated cost. If you create a work class for ALL, then associate
that work class with an ESTIMATEDSQLCOST threshold work action, although all
database activities belong to the ALL work class, the threshold will only apply to
the database activities that have an estimated cost.
Note:
1. Activities that run within user-defined functions (UDFs) and that contain these
work classifications are not affected by the
CONCURRENTDBCOORDACTIVITIES threshold.
Table 24. Work classification supported by thresholds (continued)
The following list shows whether the “SQLROWSREAD threshold” on page 102, the “SQLROWSRETURNED threshold” on page 104, and the “SQLTEMPSPACE threshold” on page 104 apply to each work classification:
v READ, including SET statements with embedded READ SQL: Yes; Yes; Yes
v WRITE, including SET statements with embedded WRITE SQL: Yes; Yes; Yes
v CALL: No; No (see note); No
v DML, including SET statements with embedded READ or WRITE SQL: Yes; Yes; Yes
v DDL: No; No; No
v LOAD: No; No; No
v ALL: Some; Some; Some
Note:
v Although the statements in the procedure called may return rows, because the
rows are not returned as a result of the CALL statement they are not controlled
by the SQLROWSRETURNED threshold.
See “DDL statements for DB2 workload manager” on page 18 for additional
prerequisites.
You can only drop a work class set if no work action sets are associated with it. If
you want to drop the work class set, you must first drop its dependent work
action sets.
You can obtain a count of how many activities of a specific type have been
submitted since the last reset of the DB2 workload manager statistics by using the
WLM_GET_WORK_ACTION_SET_STATS table function, as shown in the following
example. Assume that the READCLASS and LOADCLASS work classes exist for
activities of type READ and activities of type LOAD. The * represents all activities
that do not fall into the READCLASS or LOADCLASS work class.
SELECT SUBSTR(WORK_ACTION_SET_NAME,1,18) AS WORK_ACTION_SET_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
SUBSTR(WORK_CLASS_NAME,1,15) AS WORK_CLASS_NAME,
LAST_RESET,
SUBSTR(CHAR(ACT_TOTAL),1,14) AS TOTAL_ACTS
FROM TABLE(WLM_GET_WORK_ACTION_SET_STATS('', -2)) AS WASSTATS
ORDER BY WORK_ACTION_SET_NAME, WORK_CLASS_NAME, PART
You can view the average lifetime of LOAD activities by creating a work action set
to map LOAD activities to a specific service subclass. For example, suppose you
map LOAD activities to the service subclass LOADSERVICECLASS under the
service superclass MYSUPERCLASS. Then, you can query the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function:
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
CAST(COORD_ACT_LIFETIME_AVG / 1000 AS DECIMAL(9,3)) AS AVGLIFETIME
FROM TABLE
(WLM_GET_SERVICE_SUBCLASS_STATS_V97('MYSUPERCLASS', 'LOADSERVICECLASS', -2))
AS SCSTATS
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME, PART
Assume that you have a large number of applications running on your NONAME
database each day and lately a few performance issues have been occurring. To
deal with some of these issues, you decide that you need to be able to control the
number of large queries (that is, any query that has an estimated cost of greater
than 9999 timerons or an estimated cardinality of greater than 9999 rows) that can
run simultaneously on the database.
To control the number of large queries that can run on the database, you would do
the following:
1. Create a MYWORKCLASSSET work class set that contains two work classes:
one for queries with a large estimated cost and one for queries with a large
estimated cardinality. For example:
CREATE WORK CLASS SET MYWORKCLASSSET
(WORK CLASS LARGEESTIMATEDCOST WORK TYPE DML
FOR TIMERONCOST FROM 10000 TO UNBOUNDED,
WORK CLASS LARGECARDINALITY WORK TYPE DML
FOR CARDINALITY FROM 10000 TO UNBOUNDED)
2. Create a DATABASEACTIONS work action set that contains two work actions
that are to be applied to the work classes in the MYWORKCLASSSET work
class set at the database level:
CREATE WORK ACTION SET DATABASEACTIONS FOR DATABASE
USING WORK CLASS SET MYWORKCLASSSET
(WORK ACTION ONECONCURRENTQUERY ON WORK CLASS LARGEESTIMATEDCOST
WHEN CONCURRENTDBCOORDACTIVITIES > 1 AND QUEUEDACTIVITIES > 1 STOP EXECUTION,
WORK ACTION TWOCONCURRENTQUERIES ON WORK CLASS LARGECARDINALITY
WHEN CONCURRENTDBCOORDACTIVITIES > 2 AND QUEUEDACTIVITIES > 3 STOP EXECUTION)
Because it is important that the queries (SELECT statements) run quickly, you
decide to create a service subclass called SELECTS in the ADMINAPPS service
superclass for these queries.
When a work class with the type of ALL is used with a mapping work action, all
recognized database activity is mapped to the service subclass specified in the
work action. If a work class with the work type of ALL is used with a threshold
work action, the threshold type determines which database activities the threshold
applies to. Consider the following example.
Assume that you create a work class set called Example with the following work
classes. The evaluation order of the work class is as follows:
1. SMALLDML, which is for all DML-type SQL that has an estimated cost of less
than 1000 timerons.
2. LOADUTIL, which is for the load utility.
3. ALLACTIVITY, which is for all database activity.
ALLACTIVITY is the last work class evaluated, and it covers database activities that
do not correspond to the first two work classes.
Using this configuration, all small DML runs under the SMALLACTIVITY service
subclass. The COUNTLOAD work action is applied to the LOADUTIL work class,
which runs under the default service subclass. All other recognized database
activities run under the OTHERACTIVITY service subclass.
Note: If the ALLACTIVITY work class were at the top of the evaluation order, all
recognized activities would be mapped to the OTHERACTIVITY service subclass.
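The work action set that produces this mapping behavior is not included in this excerpt. A minimal sketch of what it might look like follows; the service superclass name MYSUPER and the work action set and work action names are hypothetical, and the SMALLACTIVITY and OTHERACTIVITY service subclasses are assumed to already exist under MYSUPER:
-- Hypothetical sketch: map activities in the EXAMPLE work class set to subclasses of MYSUPER
CREATE WORK ACTION SET MAPPINGACTIONS FOR SERVICE CLASS MYSUPER
   USING WORK CLASS SET EXAMPLE
   (WORK ACTION MAPSMALLDML ON WORK CLASS SMALLDML MAP ACTIVITY TO SMALLACTIVITY,
    WORK ACTION COUNTLOAD ON WORK CLASS LOADUTIL COUNT ACTIVITY,
    WORK ACTION MAPOTHER ON WORK CLASS ALLACTIVITY MAP ACTIVITY TO OTHERACTIVITY)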
Now assume that you want to define a work action set for the database and apply
thresholds that control what is permitted to run concurrently on the system. You
could create a work action set called DATABASEACTIONS that contains the
following work actions. The DDL for creating this work action set is:
CREATE WORK ACTION SET DATABASEACTIONS FOR DATABASE USING WORK CLASS SET EXAMPLE
(WORK ACTION CONCURRENTSMALLDML ON WORK CLASS SMALLDML
WHEN CONCURRENTDBCOORDACTIVITIES > 1000 AND QUEUEDACTIVITIES > 10000
COLLECT ACTIVITY DATA STOP EXECUTION,
WORK ACTION CONCURRENTLOAD ON WORK CLASS LOADUTIL
WHEN CONCURRENTDBCOORDACTIVITIES > 2 AND QUEUEDACTIVITIES > 10
COLLECT ACTIVITY DATA STOP EXECUTION,
WORK ACTION CONCURRENTOTHER ON WORK CLASS ALLACTIVITY
WHEN CONCURRENTDBCOORDACTIVITIES > 100 AND QUEUEDACTIVITIES > 100
COLLECT ACTIVITY DATA STOP EXECUTION,
WORK ACTION MAXCOSTALLOWED ON WORK CLASS ALLACTIVITY
WHEN ESTIMATEDSQLCOST > 1000000 COLLECT ACTIVITY DATA STOP EXECUTION)
When these work actions are applied, up to 1000 small DML-type SQL statements
(because of the SMALLDML work class) can run at a time, and up to 10 000 of
these statements can be queued. Only two occurrences of the load utility can run at
a time, and up to 10 occurrences can be queued. Only 100 activities that are not
LOAD and are not small DML are permitted to run at a time, and only 100 of
these activities can be queued at a time. In all situations, if the queued threshold is
violated, the database activity is not permitted to run and an error message is
returned.
All work runs in a service class. You use workloads to assign work to a service
superclass, and you assign work to a service subclass within a service superclass by
using workloads, the REMAP ACTIVITY threshold action, or the MAP ACTIVITY
work action. When you define a workload, you indicate the service class where
work associated with that workload runs. A default user workload
(SYSDEFAULTUSERWORKLOAD) also exists and maps work to the default user
service class (SYSDEFAULTUSERCLASS), so that any work not explicitly mapped
to a user-defined service class by a user-defined workload runs in the
default user service class.
You can create different service superclasses to provide the execution environment
for different types of work, then assign the applicable requests to the service
superclasses. Assume that you have applications from two separate lines of
business, finance and inventory. Each line of business would have its own
applications to fulfill its responsibilities to the organization. You can organize the
requests into categories that make sense for your workload management objectives.
In the following figure, different service superclasses are assigned to different lines
of business.
(Figure: the Finance and Inventory service superclasses, each further divided into
service subclasses Finance 1, Finance 2, and Finance 3, and Inventory 1, Inventory 2,
and Inventory 3.)
In the previous figure, the activities in both service superclasses are further
subdivided. The service class provides a two-tier hierarchy: a service superclass
and service subclasses underneath. This hierarchy permits a more complex
division of the execution environment and better emulates a real-world model. Unless
specified otherwise, service subclasses inherit characteristics from the service
superclass. Use the service subclasses to further subdivide work in the service
superclass.
When you create or alter a service class object, you can define a number of
resource controls:
Table 27. Resource control afforded by service classes
Control Description
Agent priority This control sets a processor priority level for the agent threads
running in a service class. This priority flows through to the
operating system as a relative (delta) priority to other threads and
processes running in the data server.
Note: This control cannot be set when outbound correlator is in use.
Prefetch priority This control assigns a priority to the prefetch requests, which affects
the order in which they are addressed by the data server.
Buffer pool priority This control assigns a buffer pool priority to service classes, which
affects how likely pages fetched by activities in a service class are to
be swapped out.
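For illustration, the following is a sketch of a CREATE SERVICE CLASS statement that sets all three controls; the service class name BATCHWORK and the specific values are hypothetical:
-- Hypothetical example: a service superclass with lowered resource priorities
CREATE SERVICE CLASS BATCHWORK
   AGENT PRIORITY 10
   PREFETCH PRIORITY LOW
   BUFFERPOOL PRIORITY LOW
On UNIX and Linux, a positive AGENT PRIORITY value such as 10 gives the agents in this service class a lower relative priority than other DB2 agents.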
Service subclasses
Although the service superclass is the highest tier for work, activities run only in
service subclasses. Each service superclass has a default service subclass defined to
run activities that you do not assign to an explicitly defined subclass. This default
subclass is created when the service superclass is created. You can create additional
subclasses in a service class as you require them to further isolate work. Except for
histograms and the COLLECT ACTIVITY DATA, COLLECT AGGREGATE
ACTIVITY DATA and COLLECT AGGREGATE REQUEST DATA options, a service
subclass inherits the attributes of its service superclass, unless otherwise specified.
The resources of the superclass are shared by all subclasses in it.
You can define only a single level of subclasses (that is, you cannot define a
subclass under another subclass, only under a service superclass).
Service superclass 1
User
Workload A
requests
Default service
subclass
User
Workload B
requests
Service
subclass A
User
Workload C
requests
Service
subclass B
User
Workload D
requests
Maintenance Default
requests maintenance
class
Figure 12. A custom DB2 workload manager configuration using workloads and service
classes
As user requests enter the data server, they are identified as belonging to a given
workload and assigned to a service superclass or subclass. There are also system
requests (for example, prefetches) that run under a special default system service
class (SYSDEFAULTSYSTEMCLASS) and DB2-driven maintenance requests (such
as an automatic RUNSTATS from the health monitor) that run under a default
maintenance service class (SYSDEFAULTMAINTENANCECLASS).
Each of the default service superclasses has its own default service subclass (SYSDEFAULTSUBCLASS):
v SYSDEFAULTUSERCLASS
v SYSDEFAULTSYSTEMCLASS
v SYSDEFAULTMAINTENANCECLASS
All work issued by connections to a default service superclass is processed in the
default service subclass of that service superclass.
Default service superclasses and their default service subclasses are dropped only
when the database is dropped. They cannot be dropped using the DROP SERVICE
CLASS statement.
Default user service superclass (SYSDEFAULTUSERCLASS)
By default, all user activities run in the SYSDEFAULTUSERCLASS.
Default maintenance service superclass (SYSDEFAULTMAINTENANCECLASS)
The default maintenance service superclass tracks the internal DB2
connections that perform database maintenance and administration tasks.
Connections from the DB2 asynchronous background processing (ABP)
agents are mapped to this service superclass. ABP agents are internal
agents that perform database maintenance tasks. Asynchronous index
cleanup (AIC) is an example of an ABP-driven task. ABP agents
automatically reduce their resource consumption and number of subagents
when the number of user connections increases on the data server. Utilities
that are issued by user connections are mapped using regular service
classes. You cannot implement service class thresholds on
SYSDEFAULTMAINTENANCECLASS.
The internal connections tracked by the default maintenance service
superclass include:
v ABP connections (including AIC)
v Health monitor initiated backup
v Health monitor initiated RUNSTATS
v Health monitor initiated REORG
Default system service superclass (SYSDEFAULTSYSTEMCLASS)
The default system service superclass tracks internal DB2 connections and
threads that perform system-level tasks. You cannot define service
subclasses for this service superclass, nor can you associate any workloads
or work actions with it. In addition, you cannot implement service class
thresholds on this service superclass.
You can use the workload to map activities from a connection to a service
superclass by specifying the SERVICE CLASS keyword of the CREATE
WORKLOAD statement. Assuming that no work class or work action applies to
the activity, the activity is run in the default service subclass of the service
superclass. You can also use a workload to map activities from a connection to a
service subclass in the service superclass by specifying the UNDER keyword for
the SERVICE CLASS keyword of the CREATE WORKLOAD statement. In this
situation, the connection still belongs to the service superclass, but all activities
issued from that connection are automatically mapped to the service subclass
specified in the workload definition.
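For example, the following sketch (the workload, application, and service class names are hypothetical, and the MARKETING superclass and its REPORTSUB subclass must already exist) maps one application's connections to a service superclass and another application's connections directly to a subclass of that superclass:
-- Hypothetical names throughout
CREATE WORKLOAD CAMPAIGNS APPLNAME ('MKTAPP') SERVICE CLASS MARKETING
CREATE WORKLOAD CAMPAIGNRPTS APPLNAME ('MKTRPT') SERVICE CLASS REPORTSUB UNDER MARKETING
COMMIT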
Only the coordinator agent does service superclass mapping for the connection. If
the coordinator agent spawns subagents, the subagents inherit the superclass
mapping of the coordinator agent.
The following figure shows the relationship between connections, workloads, and
service superclasses. Connections that meet the definition of workload A are
mapped to service superclass 1; connections that meet the definition of workloads
B or C are mapped to service superclass 2; connections that meet the definition of
workload D are mapped to the SYSDEFAULTUSERCLASS service superclass.
(Figure: user connections are assigned through workloads A, B, C, and D to service
superclasses; DB2 internal maintenance connections are mapped to
SYSDEFAULTMAINTENANCECLASS, and internal entities and connections that
perform system-level tasks are mapped to SYSDEFAULTSYSTEMCLASS.)
If you have a more complex DB2 workload manager configuration, you might
want to handle activities differently based on either the activity type or some other
activity attribute. For example, you might want to do one of the following actions:
v Put DML in a different service subclass than DDL.
v Put all read-type queries with an estimated cost of less than 100 timerons in a
different service subclass than all the other read-type queries.
In a more complex configuration you can set up the workload to map activities
from the connection to the service superclass. Then, using work actions (contained
in a work action set that is applied to the service superclass), you can remap
activities, based on their type or attribute, to specific service subclasses in a service
superclass.
Specifically, you could apply a work action set that contains a MAP ACTIVITY
work action to the service superclass. All activities that are both mapped to the
service superclass and match a work class to which a MAP ACTIVITY work action
is associated are mapped to the service subclass specified by the work action.
When database activities have been mapped to their respective service superclasses
and service subclasses, you can implement controls on all the activities in a
particular service class. Statistics are available at the service-class level that you can
use to monitor database activities in that service class.
The following figure shows requests to the database being mapped to a service
superclass or service subclass through workloads. For information on how work
actions are used to map activities to a service subclass, see “Work actions and
work action sets” on page 132.
(Figure: requests are assigned through workload A to the default service subclass
and through workloads B, C, and D to service subclasses 1.1, 1.2, and 1.3;
maintenance database requests are mapped to SYSDEFAULTMAINTENANCECLASS
and system database requests are mapped to SYSDEFAULTSYSTEMCLASS.)
If you do not specify the agent priority value for a service class, all agents in that
service class have the same priority as all other DB2 agents.
Setting the agent priority for a DB2 service class adjusts the priority of agents only
for new work that enters the service class. Non-agent threads running in the
service class do not use the agent priority value that you specify. If you are
integrating DB2 service classes with an operating system workload manager such
as AIX Workload Manager or Linux workload management, you can use the
operating system workload manager to specify the processor priority to be used
for the operating system class (as processor shares), then have the DB2 service
class inherit this value through the OUTBOUND CORRELATOR value of the DB2
service class. The processor priority that you specify using the operating system
workload manager controls the priority for agents that run in the DB2 service
class, and any service class agent priority setting is ignored.
On UNIX operating systems and Linux, valid values are DEFAULT and -20 to 20
(SQLSTATE 42615). Negative values denote a higher relative priority. Positive
values denote a lower relative priority.
On Solaris 10 or higher, the instance owner must have the proc_priocntl privilege
to set a higher relative priority for agents in a service class using AGENT
PRIORITY. To grant this privilege, logon as root and run the following command:
usermod -K defaultpri=basic,proc_priocntl db2user
In this example, proc_priocntl is added to the default privilege set of user db2user.
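If the DB2 instance runs in a non-global Solaris zone, the proc_priocntl privilege must also be in the zone's limit privilege set. The command for that case is not included in this excerpt; it would typically be run from the global zone with the zonecfg utility (the zone name db2zone is taken from the sentence that follows):
zonecfg -z db2zone
zonecfg:db2zone> set limitpriv="default,proc_priocntl"
zonecfg:db2zone> commit
zonecfg:db2zone> exit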
In this example, proc_priocntl is added to the limit privilege set of zone db2zone.
On Solaris 9, there is no facility for DB2 to raise the relative priority of agents.
Upgrade to Solaris 10 or higher to use the service class agent priority.
Agents send read-ahead requests to the database prefetch queue. The prefetchers
take these read-ahead requests from the queue, then retrieve the data into the
buffer pools. When an agent requires specific data, it first checks the buffer pools
to see if the data is available. If not, the agent retrieves the data from disk.
Prefetchers perform expensive disk I/O operations, which frees agents to perform
computational work in parallel.
Any connection routed to a service class has its prefetch requests processed
according to the prefetch priority assigned for the service class. Each service class
can be associated with one of the three prefetch priorities: high, medium, or low.
You specify the prefetch priority of a service class with the PREFETCH PRIORITY
keyword on either the CREATE or ALTER SERVICE CLASS statement.
Specifying DEFAULT for a service superclass sets a medium prefetch priority for
the service superclass. You can specify a different prefetch priority for any service
subclass in the service superclass, but if you use the default prefetch priority for
the service subclass, the service subclass inherits its prefetch priority setting from
its service superclass.
You can associate each DB2 service class with a relative buffer pool priority, which
controls how likely pages fetched into the buffer pool by activities in the service
class are to be swapped out. Increasing the buffer pool priority potentially
increases the proportion of pages in use by agents of a particular service class.
You are more likely to realize a performance advantage from setting the buffer pool
priority for a service class if there is a reasonable amount of contention on the
buffer pool. A buffer pool with an overall hit ratio of 85% or less is likely to see the
most benefit. If the overall hit ratio exceeds 90%, there is likely not substantial
buffer pool contention to begin with, and setting buffer pool priority will yield little
or no benefit in most cases. Any benefits that you realize depend on the type of
workload that your data server runs.
For some workloads, setting buffer pool priority is more effective if you also turn
on proactive page cleaning. This is because buffer pool priority settings are
effective only for non-dirty pages and proactive page cleaning is more aggressive
about writing out dirty pages to disk. Note that you should turn on proactive page
cleaning only if it yields a performance benefit.
If you use asynchronous page cleaning (also known as classic page cleaning),
setting the chngpgs_thresh database configuration parameter to a lower value will
likely yield the same effect of making your buffer pool priority settings more
effective, because a low value for this parameter also ensures that there are enough
clean pages in the buffer pool.
It is possible that the positive effects of setting buffer pool priority can be
surpassed by the effects of prefetching, with or without setting prefetch priority, if
there is a reasonable amount of prefetching taking place. For example, if you
define a service class with high buffer pool priority where there is only little
prefetching, the effective advantage of this buffer pool priority setting might be
small when compared to a service class with low buffer pool priority but where
activities perform a significant amount of prefetching. Due to the benefits of
prefetching, the activities in the service class with low buffer pool priority might
even outperform the activities in the high buffer pool priority service class.
However, setting buffer pool priority can still supplement your workload
management strategy under these circumstances, and you should use it.
States of a connection
States of an activity
Because service classes work within a database and are stored in the catalog tables of
the database, entities that do not work within a database cannot be tracked by service
classes. Instance-level entities, such as the system controller and the health monitor
daemons, work at the instance level and are not directly associated with any
database. Agents that perform instance attachments and gateway connections also do
not work within a database and therefore are not tracked by service classes.
The following is a partial list of entities that do not work within a database and
are not tracked by service classes:
v DB2 system controllers (db2sysc)
v IPC listeners (db2ipccm)
v TCP listeners (db2tcpcm)
v FCM daemons (db2fcms, db2fcmr)
v DB2 resynchronization agents (db2resync)
v Idle agents (agents with no database association)
v Instance attachment agents
v Gateway agents
v All other instance-level EDUs
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
Activities that have already acquired resources and are running are not affected by
the ALTER statement. These activities will hold their resources and run until
completion. However, if a subagent request is sent to a remote database partition
during the ALTER SERVICE CLASS operation, the service class definition seen by
the coordinator agent and the subagent can differ. Consider the following example
in which the prefetch priority for the service class is initially set to MEDIUM:
Table 28. Differences between the views of a coordinator agent and subagent of an altered
service class
Event 1 (Connection 1): The coordinator agent sends a request to a remote database partition. The prefetch priority of the service class was previously set to MEDIUM.
Event 2 (Connection 2): ALTER SERVICE CLASS is issued to set the prefetch priority of the service class to HIGH.
Event 3 (Connection 2): COMMIT is issued. The altered service class property is committed at the catalog partition and loaded into memory at all database partitions.
The situation described in the previous table is temporary, and only affects
connections that issue subagent requests during the ALTER SERVICE CLASS
operation. All new connections will see the updated service class definition with
the prefetch priority of HIGH.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
A service class that you defined cannot be dropped if any of the following conditions
apply:
v It is enabled
v It contains user-defined service subclasses
v It is referenced by any workload, work action, or threshold
v It is still referenced by a workload occurrence
v Any connection or activity is currently mapped to the service class
v It is set as the target of a REMAP ACTIVITY threshold action
Note: You cannot manually drop the default service subclass for the service
superclass. The default service subclass for a service superclass is dropped
when the service superclass is dropped.
6. Commit your changes. When you commit your changes the service class is
removed from the SYSCAT.SERVICECLASSES view.
The product catalog database runs well most of the time. However, sometimes
users complain that their applications cannot connect to the database because the
maximum number of connections has been exceeded. After upgrading to DB2
Version 9.7, Bob, the database administrator, decides to try service classes. Bob
wants to know the usage patterns of the product catalog database by each of the
five departments and figure out why his database runs out of connections
occasionally. Following are the steps Bob follows to set up the service classes:
1. First, Bob creates service superclasses for each of the departments (the default
service subclass is also automatically created for each service superclass):
v SALES is created for the Sales department:
CREATE SERVICE CLASS SALES
v ACCOUNTING is created for the Accounting department:
CREATE SERVICE CLASS ACCOUNTING
v ENGINEERING is created for the Engineering department:
CREATE SERVICE CLASS ENGINEERING
v TESTING is created for the Testing department:
CREATE SERVICE CLASS TESTING
v PRODUCTION is created for the Production department:
CREATE SERVICE CLASS PRODUCTION
2. Bob creates session user groups with appropriate authorization IDs for each of
the departments:
v A session user group is created with the authorization ID SALESGRP. This
group includes the authorization IDs of all users in the Sales department.
v A session user group is created with the authorization ID ACCTNGRP. This
group includes the authorization IDs of all users in the Accounting
department.
Bob uses the default service class and workload settings. He wants to observe the
database usage patterns before placing any controls on the service classes. The
resulting service superclass definitions are as follows:
Table 29. Service class definitions. All of the following service classes are defined with
default settings: SALES, ACCOUNTING, ENGINEERING, TESTING, PRODUCTION,
SYSDEFAULTUSERCLASS, SYSDEFAULTMAINTENANCECLASS, and
SYSDEFAULTSYSTEMCLASS.
Following the most recent connection spike, Bob queries service superclass
statistics using the WLM_GET_SERVICE_SUPERCLASS_STATS table function and
examines the connection high-water mark value for each service superclass. Bob
discovers that the connection high-water mark for all departments except Testing is
close to 100. However, the statistic for the Testing department shows that at one
time, the test team established over 800 connections.
Once a month, the Testing department performs its monthly intensive product
testing. At this time, the department establishes up to 1000 concurrent connections.
Because the database manager configuration parameter max_connections is set to
1000, the Testing department uses most of the available connections to the
database. When the system has 1000 connections, all subsequent connections are
rejected.
To prevent the Testing department from using all the connections, Bob decides to
limit the number of connections from the Testing department and ensure that each
of the other four departments can obtain sufficient connections to the database to
meet their business objectives.
The other four departments ordinarily do not require more than 150 concurrent
connections each. In addition, Bob also notices that the default user, default
maintenance, and default system service superclasses rarely contain any
connections, so he decides that 100 connections should be sufficient for these
default service superclasses. After 700 connections (600 for the four departments
and 100 for the default classes) are allocated from the max_connections pool of
1 000 available connections, 300 connections are available for the Testing
department. By limiting the Testing department to a maximum of 300 connections,
users from other departments should not have their connection requests rejected.
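The threshold definition itself is not shown in this excerpt; one way to express this limit, using the TOTALSCPARTITIONCONNECTIONS threshold described later in this chapter (and assuming a single-partition database, so that the per-partition limit is also the database-wide limit), is the following sketch:
CREATE THRESHOLD MAXSERVICECLASSCONNECTIONS FOR SERVICE CLASS TESTING ACTIVITIES
   ENFORCEMENT DATABASE PARTITION
   WHEN TOTALSCPARTITIONCONNECTIONS > 300 STOP EXECUTION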
Because the TESTING service class can contain a maximum of only 300 concurrent
connections, all connection requests above this threshold are rejected. A
MAXSERVICECLASSCONNECTIONS threshold is not applied on the other service
classes, so these service classes share the remaining 700 available connections to
the data server. Because there is no contention for connections among these service
classes, Bob does not place connection thresholds on them.
Setting the prefetch priority of the TESTING service class to LOW causes prefetch
requests from connections issued from the Testing department to be serviced only
after all prefetch requests from the other departments are processed. This change
increases the query throughput of the other departments and decreases the
throughput of the Testing department during its product testing phase.
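The statement that makes this change is not included in this excerpt; it would be along the following lines:
ALTER SERVICE CLASS TESTING PREFETCH PRIORITY LOW
COMMIT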
Third refinement of the DB2 workload manager implementation
After the prefetch problem is resolved, the Engineering department tells Bob that it
needs a few connections for an experimental application called Brewmeister.
Because the application is experimental, Bob wants to ensure that it does not
consume too many database connections and that queries from the application will
not compete for prefetchers when the system is busy. To accomplish these
objectives, he creates a new service subclass under the ENGINEERING service
superclass for the experimental application and a workload to map connections
from the application to the new service subclass. Bob updates the service class and
workloads as follows:
v Service subclass EXPERIMENT is created under the service superclass
ENGINEERING:
CREATE SERVICE CLASS EXPERIMENT UNDER ENGINEERING
v Threshold MAXSERVICECLASSCONNECTIONS of 50 is created for the service
subclass EXPERIMENT:
CREATE THRESHOLD MAXSERVICECLASSCONNECTIONS FOR SERVICE CLASS EXPERIMENT
UNDER ENGINEERING ACTIVITIES
ENFORCEMENT DATABASE WHEN TOTALDBPARTITIONCONNECTIONS > 50 STOP EXECUTION
v Workload WL_EXPERIMENT is created to map connections from the application
BREWMEISTER to the service subclass EXPERIMENT:
CREATE WORKLOAD WL_EXPERIMENT APPLNAME ('BREWMEISTER') SERVICE CLASS EXPERIMENT
UNDER ENGINEERING
v The prefetch priority for the EXPERIMENT service subclass is set to LOW:
ALTER SERVICE CLASS EXPERIMENT UNDER ENGINEERING PREFETCH PRIORITY LOW
Assume that on previous occasions, the query reported the following results:
SUPERCLASS_NAME SUBCLASS_NAME ACTSCOMPLETED ACTSABORTED ACTSHW ACTAVGLIFETIME
------------------- ------------------ ------------- ----------- ------ --------------
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 8 0 1 3.750
BI_APPS SYSDEFAULTSUBCLASS 4 0 1 5.230
BATCH SYSDEFAULTSUBCLASS 1 0 1 25.600
The data returned by this query might be sufficient to show that the slowdown is
occurring in the BI_APPS service class because its average activity lifetime is
significantly higher than usual. This situation could indicate that the available
resources for that particular service class are becoming exhausted.
If the averages for the service classes for all database partitions do not isolate the
problem, consider analyzing average values for each database partition.
Aggregating the average for each database partition into a global average can hide
large discrepancies between database partitions. In this situation, the assumption is
that every database partition is being used as a coordinator partition. If this
assumption is incorrect, the average lifetime computed at non-coordinator
partitions is zero.
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
CAST(COORD_ACT_LIFETIME_AVG / 1000 AS DECIMAL(9,3)) AS AVGLIFETIME
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97('', '', -2)) AS SCSTATS
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME
In this example, database partition 2 might be receiving more work than usual
because its average activity lifetimes are much higher than those of the other
database partitions.
The first step to take is to determine how many agents are working for each
service class. You might use a query such as the following one:
SELECT SUBSTR(AGENTS.SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
SUBSTR(AGENTS.SERVICE_SUBCLASS_NAME,1,19) AS SUBCLASS_NAME,
COUNT(*) AS AGENT_COUNT
FROM TABLE(WLM_GET_SERVICE_CLASS_AGENTS_V97('', '', CAST(NULL AS BIGINT), -2))
AS AGENTS
WHERE AGENT_STATE = 'ACTIVE'
GROUP BY SERVICE_SUPERCLASS_NAME, SERVICE_SUBCLASS_NAME
ORDER BY SERVICE_SUPERCLASS_NAME, SERVICE_SUBCLASS_NAME
If you conclude that a particular service class is using more than its fair share of
agents, you can take actions to restrict the number of activities permitted for a
workload or a service class. Alternatively, you can restrict the number of
connections for a service class.
Types of thresholds
Connection thresholds
If you want to limit how many database connections can be open at any
one time, or how long they can sit idle, use a connection threshold. These
thresholds limit the total number of concurrent connections to your
database, and they can be used to detect connections that sit idle for too
long.
Table 33. Connection thresholds
Threshold Description
CONNECTIONIDLETIME Controls the amount of time that a connection sits idle and is not working on behalf
of user requests. Use this threshold to detect inefficient use of data server resources
and application wait conditions.
Activity thresholds
If you want to limit the impact that specific activities can have on how the
data server is running, activity thresholds provide you with one of the
means you can use. Excess execution time, abnormally high volumes of
data returned, or abnormally high amounts of resources consumed are all
examples of warning flags that potentially troublesome activities could be
consuming excessive resources, which you can control with activity
thresholds.
Table 35. Activity thresholds
Threshold Description
ACTIVITYTOTALTIME Controls the amount of time that any given activity can spend from submission to
completion, for both execution and queue time. Use this threshold to detect jobs
that are taking an abnormally long time to complete.
The action that is taken dynamically when a threshold is violated depends on how
you define the threshold.
Stop execution (STOP EXECUTION)
A common action when a threshold is violated is to stop the activity from
executing. In this case, an error code is returned to the submitting
application indicating that the threshold was violated. Note that for
TOTALDBPARTITIONCONNECTIONS and
TOTALSCPARTITIONCONNECTIONS thresholds, a STOP EXECUTION
action prevents a connection from being established. For
CONNECTIONIDLETIME thresholds, the connection is closed. For
CONCURRENTWORKLOADOCCURRENCES, a new workload occurrence
is prevented from being created. For all activity-related thresholds, the
activity is stopped from continuing to execute. If a
THRESHOLDVIOLATIONS event monitor is active, a record is written to
the event monitor indicating that the threshold was violated.
Continue execution (CONTINUE)
In some situations, stopping the execution of an activity is too harsh a
response. A preferable response is to permit the activity to continue to run
and to collect the relevant data for an administrator to perform future
Within each of these threshold domains, a threshold has a scope over which it is
enforceable, such as a single workload occurrence, a database partition, or all the
partitions of a database. This is the enforcement scope of the threshold. For
example: Service class aggregate thresholds can have one of two enforcement
scopes, database and database partition; an example of an aggregate threshold that
applies only at the database partition level is the maximum number of concurrent
connections for a service superclass on a partition
(TOTALSCPARTITIONCONNECTIONS). Similarly, the following table shows that
you can specify the processor time threshold (CPUTIME) at the database,
superclass, subclass, work action or workload domain and that it is enforced per
partition. That is, the upper boundary specifies the maximum amount of user and
system processor time per partition that an activity may use.
All other thresholds are based on recognized activities resulting from an SQL
statement or the execution of a utility such as the load utility, and they are evaluated
in the following order:
1. “Predictive thresholds”
2. “Reactive thresholds” on page 97
Predictive thresholds
Predictive thresholds are checked before reactive thresholds, because they affect
whether a database activity can start to run.
For concurrency thresholds, thresholds for workload-level work action sets are
checked first and database-level work action sets are checked second. Thresholds
for work action sets are checked first, in order to avoid work action set thresholds
on particular types of work blocking work of other types, which would affect
concurrency. For example, by checking database-level work action set concurrency
thresholds first, the following situation is avoided.
Also, assume that one LOAD activity is already running in the database (under
any service superclass) and nine activities are already running in service superclass
S1. A second new LOAD activity enters as the 10th activity. If the activity threshold
scope resolution hierarchy were used during threshold evaluation, the incoming
LOAD activity would not violate the service class threshold, increasing the
concurrency to 10. The LOAD activity is then evaluated against the database-level
work action threshold concurrency limit, which is violated because a LOAD
activity is already running in the database and the work action threshold
concurrency value is only 1. The second LOAD activity is then queued.
Reactive thresholds
Connection thresholds
A connection threshold applies controls to individual database connections. You
can use connection thresholds to limit the total number of concurrent connections
to the database and how long a connection can sit idle.
CONNECTIONIDLETIME threshold
The CONNECTIONIDLETIME threshold specifies a maximum amount of time that
a connection can be idle (that is, not working on a user request).
Type Connection
Definition domain
Database or service superclass
Enforcement scope
Database
Tracked work
User connections
Queuing
No
Unit Time duration expressed in minutes, hours, or days
Predictive or reactive
Reactive
If a connection remains idle for longer than the duration specified by the threshold
and the threshold action is STOP EXECUTION, the connection is closed.
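As an illustration, the following is a sketch of a database-level threshold (the threshold name is hypothetical) that closes connections that remain idle for more than 30 minutes:
-- Hypothetical threshold name
CREATE THRESHOLD STOPIDLECONNECTIONS FOR DATABASE ACTIVITIES
   ENFORCEMENT DATABASE
   WHEN CONNECTIONIDLETIME > 30 MINUTES STOP EXECUTION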
Activity thresholds
An activity threshold applies to an individual activity. When the resource usage of
an individual activity violates the upper bound of the threshold that is tracking it,
the corresponding action is triggered and applied once to the activity.
After being applied once, the threshold is deactivated for the activity and not
applied again.
For example: Assume that you defined a time based threshold that triggers a
CONTINUE action after an elapsed time of 5 minutes. If an activity violates this
threshold, the action is applied once but not reapplied every 5 minutes.
Aggregate thresholds are not affected, because the same activity is permitted to
contribute to multiple activity aggregates simultaneously, as occurs with
concurrency thresholds, for example.
For example: A threshold that defines a maximum execution time of 1 hour for all
database queries defined in the database domain is overridden by a threshold that
defines a maximum execution time of 5 hours for a service superclass set up to
handle large queries, which is overridden by a maximum execution time of 10
hours for a service subclass for very large queries. Similarly, the maximum
execution time of 1 hour defined in the database domain can be overridden by a
value of 10 minutes in a second service superclass geared towards ensuring that
shorter, important queries can complete quickly.
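A sketch of how the first three of these thresholds might be defined follows; the threshold and service class names are hypothetical:
-- Hypothetical names: LARGEQUERIES is a superclass, VERYLARGEQUERIES a subclass under it
CREATE THRESHOLD MAXDBQUERYTIME FOR DATABASE ACTIVITIES
   ENFORCEMENT DATABASE
   WHEN ACTIVITYTOTALTIME > 1 HOUR STOP EXECUTION

CREATE THRESHOLD MAXLARGEQUERYTIME FOR SERVICE CLASS LARGEQUERIES ACTIVITIES
   ENFORCEMENT DATABASE
   WHEN ACTIVITYTOTALTIME > 5 HOURS STOP EXECUTION

CREATE THRESHOLD MAXVERYLARGEQUERYTIME FOR SERVICE CLASS VERYLARGEQUERIES UNDER LARGEQUERIES ACTIVITIES
   ENFORCEMENT DATABASE
   WHEN ACTIVITYTOTALTIME > 10 HOURS STOP EXECUTION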
ACTIVITYTOTALTIME threshold
The ACTIVITYTOTALTIME threshold specifies the maximum amount of time that
the data server should spend processing an activity.
Type Activity
Definition domain
Database, service superclass, service subclass, work action, workload
Enforcement scope
Database
In situations where the activity is queued by a queuing threshold, the total activity
time includes the time spent in the queue awaiting execution. When a cursor is
opened, the activity associated with the cursor lasts until the cursor is closed.
The data server considers IMPORT, EXPORT, and other CLP commands to be user
logic. Activities that are invoked from within IMPORT, EXPORT, and other CLP
commands are subject to thresholds.
CPUTIME threshold
The CPUTIME threshold specifies the maximum amount of combined user and
system processor time that an activity can use on a particular database partition
while the activity is running. Use this threshold to detect and control activities that
are using excessive processor resources.
Type Activity
Definition domain
Database, work action, service superclass, service subclass, and workload
Enforcement scope
Database partition
Tracked work
See the information later in this topic
Queuing
No
Unit Time
Predictive or reactive
Reactive
The amount of processor time that an activity spends running is measured from
the time that the activity begins running at the partition, after any queuing by
thresholds, until the time that the activity finishes running.
Example
The following example creates a CPUTIME threshold TH1 for the database domain
with a database partition enforcement scope. This threshold stops any activity that
uses more than 30 seconds of processor time, which it checks for at 5-second intervals. You
can use this threshold to ensure that no queries on the system use an unreasonable
amount of processor time, which can negatively impact other work running on the
system.
CREATE THRESHOLD TH1 FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN CPUTIME > 30 SECONDS CHECKING EVERY 5 SECONDS
STOP EXECUTION;
CPUTIMEINSC threshold
The in-service-class CPUTIMEINSC threshold specifies the maximum amount of
combined user and system processor time that an activity may use on a particular
database partition while running in a specific service subclass. Use this threshold
to detect and control activities that are using excessive processor resources.
Type Activity
Definition domain
Service subclass
Enforcement scope
Database partition
Tracked work
See the information later in this topic
Queuing
No
Unit Time
Predictive or reactive
Reactive
The processor time that an activity spends running is measured from the time that
the activity enters the current service subclass until the time that the activity leaves
the service subclass or finishes running.
This threshold differs from the CPUTIME threshold in that it controls only the
amount of processor time that may be used in a specific service subclass, not the
total amount of processor time used during the lifetime of the activity.
You can use the REMAP ACTIVITY action to control activities by remapping them
to a service subclass with different resource assignments.
Example
The following example creates two service subclasses, A1 and A2, under a
superclass A, with a single in-service-class CPUTIMEINSC threshold that remaps
activities between subclasses after 1 minute of processor time has been used during
query evaluation in service subclass A1. An event monitor record is logged.
CREATE SERVICE CLASS A;
CREATE SERVICE CLASS A1 UNDER A;
CREATE SERVICE CLASS A2 UNDER A;
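A remapping threshold of this kind can be defined along the following lines; the
threshold name and the 30-second checking interval are illustrative:
CREATE THRESHOLD REMAPCPUA1TOA2 FOR SERVICE CLASS A1 UNDER A ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN CPUTIMEINSC > 1 MINUTE CHECKING EVERY 30 SECONDS
REMAP ACTIVITY TO A2 LOG EVENT MONITOR RECORD;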
ESTIMATEDSQLCOST threshold
The ESTIMATEDSQLCOST threshold specifies the maximum estimated cost that is
permitted for DML activities.
Type Activity
Definition domain
Database, service superclass, service subclass, work action, and workload
Enforcement scope
Database
Tracked work
See the information later in this topic
Queuing
No
Unit Estimated SQL cost expressed in timerons
Predictive or reactive
Predictive
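For example, a threshold similar to the following one (the name and the cost limit
are illustrative) stops any DML activity with an estimated cost of more than
1 000 000 timerons:
CREATE THRESHOLD MAXQUERYCOST FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE
WHEN ESTIMATEDSQLCOST > 1000000
STOP EXECUTION;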
SQLROWSREAD threshold
The SQLROWSREAD threshold specifies the maximum number of rows that a
DML activity may read on any database partition. Use this threshold to detect and
control activities that are reading an excessive number of rows.
Type Activity
Definition domain
Database, work action, service superclass, service subclass, and workload
Enforcement scope
Database partition
Tracked work
See the information later in this topic
Queuing
No
Unit Number of rows
Predictive or reactive
Reactive
Index accesses are not counted toward the total number of rows read. If an access
plan uses only indexes during query evaluation, the SQLROWSREAD threshold
will not be violated.
Example
The following example creates an SQLROWSREAD threshold TH1 for the database
domain with a database partition enforcement scope. This threshold stops the
execution of any activity that reads more than 5 000 000 rows during query
evaluation, which the threshold checks for at 10-second intervals. You can use this
threshold to ensure that no queries on the system read an unreasonable number of
rows, which can negatively impact other work running on the system.
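A statement along the following lines, following the pattern of the CPUTIME
example earlier in this chapter, implements the threshold described above:
CREATE THRESHOLD TH1 FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN SQLROWSREAD > 5000000 CHECKING EVERY 10 SECONDS
STOP EXECUTION;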
SQLROWSREADINSC threshold
The in-service-class SQLROWSREADINSC threshold specifies the maximum
number of rows that a DML activity can read on a particular database partition
while running in a specific service subclass. Use this threshold to detect and
control activities that are reading an excessive number of rows.
Type Activity
Definition domain
Service subclass
Enforcement scope
Database partition
Tracked work
See the information later in this topic
Queuing
No
Unit Number of rows
Predictive or reactive
Reactive
This threshold differs from the SQLROWSREAD threshold in that it controls the
number of rows read only from the time that an activity enters a specific service
subclass, not the total number of rows read during the lifetime of the activity. This
threshold also differs from the SQLROWSRETURNED threshold in that it controls
the maximum number of rows read during query evaluation in the current service
subclass, not the number of rows returned to a client application from the data
server.
Index accesses are not counted toward the total number of rows read. If an access
plan uses only indexes during query evaluation, the SQLROWSREADINSC
threshold will not be violated.
You can use the REMAP ACTIVITY action to control activities by remapping them
to a service subclass with different resource assignments.
The following example creates two service subclasses, A1 and A2, under a
superclass A, with a single in-service-class SQLROWSREADINSC threshold that
remaps activities between subclasses after 10 000 rows have been read in service
subclass A1 during query evaluation. An event monitor record is logged.
CREATE SERVICE CLASS A;
CREATE SERVICE CLASS A1 UNDER A;
CREATE SERVICE CLASS A2 UNDER A;
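A remapping threshold of this kind can be defined along the following lines; the
threshold name and the 10-second checking interval are illustrative:
CREATE THRESHOLD REMAPROWSA1TOA2 FOR SERVICE CLASS A1 UNDER A ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN SQLROWSREADINSC > 10000 CHECKING EVERY 10 SECONDS
REMAP ACTIVITY TO A2 LOG EVENT MONITOR RECORD;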
SQLROWSRETURNED threshold
The SQLROWSRETURNED threshold specifies the maximum number of rows that
can be returned by the data server to the client.
Type Activity
Definition domain
Database, service superclass, service subclass, work action, and workload
Enforcement scope
Database
Tracked work
See the information later in this topic
Queuing
No
Unit Number of rows
Predictive or reactive
Reactive
When multiple result sets are returned by a CALL statement, the threshold applies
to each result set separately and not as an aggregate to the total number of rows
returned across all result sets. For example, if you define the threshold for 20 rows
and the CALL statement returns two result sets returning 15 rows and 19 rows
respectively, the threshold is not triggered.
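For example, a threshold similar to the following one (the name and the row limit
are illustrative) stops any activity that attempts to return more than 100 000 rows
to the client:
CREATE THRESHOLD MAXROWSRETURNED FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE
WHEN SQLROWSRETURNED > 100000
STOP EXECUTION;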
SQLTEMPSPACE threshold
The SQLTEMPSPACE threshold specifies the maximum amount of system
temporary table space that can be consumed by a DML activity at any database
partition. DML activities often use temporary table space for operations such as
sorting and the manipulation of intermediate result sets.
Type Activity
Definition domain
Database, service superclass, service subclass, work action, workload
The data server considers IMPORT, EXPORT, and other CLP commands to be user
logic. Activities that are invoked from within IMPORT, EXPORT, and other CLP
commands are subject to thresholds.
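For example, a threshold similar to the following one (the name and the limit of
100 megabytes are illustrative, and a database partition enforcement scope is
assumed) stops any DML activity that consumes more than 100 megabytes of
system temporary table space on a database partition:
CREATE THRESHOLD MAXTEMPSPACE FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN SQLTEMPSPACE > 100 M
STOP EXECUTION;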
Aggregate thresholds
An aggregate threshold places collective control over elements of work in a
database. The boundary that you define using an aggregate threshold operates as a
running total, to which any work tracked by the threshold contributes.
When newly instantiated work causes the upper boundary to be violated, the
corresponding action is triggered. The work that caused the upper boundary to be
violated is the only one affected by the triggered action.
Activity queuing
Some thresholds have a built-in queue and permit you to control how many
activities can execute concurrently: after the concurrency limit is reached, additional
activities are queued until the size limit that you set for the queue is exceeded.
You can also define the upper queuing boundary as being unbounded, in which
case there is no upper limit to the size of the queue. In this situation, newly
arriving work is added to the queue. If you define a hard limit for the upper
boundary and define an action of CONTINUE as the threshold action, all newly
arriving work that violates the threshold boundary is added to the queue, and the
threshold behaves as if its queuing boundary were unbounded.
AGGSQLTEMPSPACE threshold
The AGGSQLTEMPSPACE threshold specifies the maximum amount of system
temporary table space that can be used in total across all concurrently running
DML activities in a service subclass on a database partition.
CONCURRENTDBCOORDACTIVITIES threshold
The CONCURRENTDBCOORDACTIVITIES threshold specifies the maximum
number of recognized coordinator activities that can run concurrently across all
database partitions in the specified definition domain.
The use of this type of threshold is best suited for applications that do not execute
more than one activity at a time. If an application starts more than one activity
concurrently, such as issuing an UPDATE SQL statement while a cursor is open,
then certain queue contention scenarios can sometimes occur depending on the
concurrency level allowed by the threshold and the behaviors of the other
applications involved. If this threshold exists in scenarios where applications can
execute more than one activity concurrently or the application behavior is
unknown, then it is recommended to have an ACTIVITYTOTALTIME threshold
defined for those activities to help automatically resolve any potential queue
contention scenarios.
Type Aggregate
Definition domain
Database, work action, service superclass, service subclass
Enforcement scope
Database
Tracked work
Recognized coordinator and nested activities (see further below and “Work
identification by type of work with work classes” on page 47)
Queuing
Yes
Unit Number of concurrent database activities
Predictive or reactive
Predictive
This example can be generalized to multiple applications and queues. You can
resolve this situation by increasing the concurrency values, or cancelling certain
activities if the concurrency values are correctly set.
CONCURRENTWORKLOADACTIVITIES threshold
The CONCURRENTWORKLOADACTIVITIES threshold specifies the maximum
number of coordinator and nested activities that can concurrently run in a
workload occurrence.
Type Aggregate
Definition domain
Workload
Enforcement scope
Workload occurrence
Tracked work
Recognized coordinator and nested activities (see “Activities” on page 15)
Queuing
No
Unit Number of concurrent workload activities
Predictive or reactive
Predictive
The nested activities that are tracked by this threshold must satisfy the following
criteria:
v They must be a recognized coordinator activity. Nested coordinator activities
that are not recognized types as described in “Work identification by type of
work with work classes” on page 47 are not counted.
v They must be directly invoked from user logic, such as a user-written stored
procedure issuing SQL or from the SYSPROC.ADMIN_CMD stored procedure.
Nested coordinator activities that are started by the invocation of a DB2 utility
or any other code in the SYSIBM, SYSFUN, or SYSPROC schemas are not
counted towards the upper boundary specified by this threshold.
Example
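A threshold similar to the following one (the workload name REPORTS and the
limit of 10 are illustrative) stops any additional activity once 10 activities are
already running in an occurrence of the workload:
CREATE THRESHOLD MAXWLACTIVITIES FOR WORKLOAD REPORTS ACTIVITIES
ENFORCEMENT WORKLOAD OCCURRENCE
WHEN CONCURRENTWORKLOADACTIVITIES > 10
STOP EXECUTION;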
CONCURRENTWORKLOADOCCURRENCES threshold
The CONCURRENTWORKLOADOCCURRENCES threshold is an aggregate
threshold that specifies the maximum number of workload occurrences that can
run concurrently on the coordinator partition.
Type Aggregate
Definition domain
Workload
Enforcement scope
Database partition
TOTALDBPARTITIONCONNECTIONS threshold
The TOTALDBPARTITIONCONNECTIONS threshold specifies the maximum
number of concurrent database connections on a coordinator partition for a
database; that is, this threshold controls the maximum number of clients that can
connect to the database on each of its database partitions.
TOTALSCPARTITIONCONNECTIONS threshold
The TOTALSCPARTITIONCONNECTIONS threshold specifies the maximum
number of concurrent database connections on a coordinator partition for a service
superclass.
Type Aggregate
Definition domain
Service superclass
Enforcement scope
Database partition
Tracked work
Connections
Queuing
Yes
Unit Number of concurrent connections in service class
Predictive or reactive
Predictive
Tracked connections include both new client connections and existing client
connections that switch to the service class from another service class. Connections
switch service classes by associating with a different workload definition that is
mapped to a different service class. Workload reevaluation occurs only at
transaction boundaries, so connections can switch service classes only at
transaction boundaries; however, because resources that are associated with WITH
HOLD cursors are maintained across transaction boundaries, connections with
open WITH HOLD cursors cannot switch service superclasses. When the
connection concentrator is on, any application that is switched leaves the service
class. When the application is switched in at the subsequent statement, it must
rejoin the service class and consequently pass the threshold.
When the queue size threshold is reached, the threshold action is triggered. The
TOTALSCPARTITIONCONNECTIONS threshold controls only coordinator
connections. Connections made by subagents are not counted towards the
threshold.
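A sketch of such a threshold, assuming a service superclass named SALES, a limit
of 100 coordinator connections per partition, and a queue of up to 20 additional
connections (all of these values and names are illustrative):
CREATE THRESHOLD MAXSALESCONNECTIONS FOR SERVICE CLASS SALES CONNECTIONS
ENFORCEMENT DATABASE PARTITION
WHEN TOTALSCPARTITIONCONNECTIONS > 100 AND QUEUEDCONNECTIONS > 20
STOP EXECUTION;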
UOWTOTALTIME threshold
The UOWTOTALTIME threshold specifies the maximum amount of time that a
unit of work may spend in the DB2 engine.
Type Unit of work
Definition domain
Database, workload, service superclass
Enforcement scope
Database
Tracked work
See the information later in this topic.
Queuing
No
Unit Labeled duration
Predictive or reactive
Reactive
The STOP EXECUTION action for a UOWTOTALTIME threshold rolls back the
unit of work. The FORCE APPLICATION action forces the application to which the
unit of work belongs. The COLLECT ACTIVITY DATA option can be specified for
this threshold, but it is ignored.
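For example, a threshold similar to the following one (the name and the 1-hour
limit are illustrative) rolls back any unit of work that remains in the DB2 engine for
more than 1 hour:
CREATE THRESHOLD MAXUOWTIME FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE
WHEN UOWTOTALTIME > 1 HOUR
STOP EXECUTION;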
Creating a threshold
Create thresholds using the DDL statement CREATE THRESHOLD (or the
CREATE WORK ACTION SET statement). You create a threshold to impose a limit
on resource consumption.
To create a threshold for a work action set, use the CREATE WORK ACTION SET
statement or the ALTER WORK ACTION SET statement with the ADD WORK
ACTION keywords. For more information, see CREATE WORK ACTION SET
statement or ALTER WORK ACTION SET statement.
To create a threshold:
1. Issue the CREATE THRESHOLD statement, specifying one or more of the
following properties for the threshold:
v The name of the threshold.
v The threshold domain. The threshold domain is the database object that the
threshold is both attached to and operates on. The domain that applies
depends on the type of threshold; see “Threshold domain and enforcement
scope” on page 94 for more information.
v The enforcement scope for the threshold. The threshold scope is the
enforcement range of the threshold in its domain. The enforcement scope
that applies depends on the type of threshold; see “Threshold domain and
enforcement scope” on page 94 for more information.
v Optional: Disable the threshold when it is created. By default a threshold is
created as enabled. If you create the threshold as disabled and want to
enable it later, use the ALTER THRESHOLD statement.
v The threshold predicate to specify the type of threshold and the maximum
value permitted. When the maximum value is violated, the action specified
for the threshold is enforced. For more information on which thresholds are
available to you, see “Connection thresholds” on page 97, “Activity
thresholds” on page 98, “Aggregate thresholds” on page 105, and “Unit of
work thresholds” on page 112.
v The actions to be taken if the maximum value for the threshold is exceeded.
The actions consist of a mandatory action that affects the execution of the
activity (STOP EXECUTION, CONTINUE, FORCE APPLICATION, or
REMAP ACTIVITY TO) and an optional collect activity action (COLLECT
ACTIVITY DATA). The options you specify for the collect activity action
determine what information is collected for the activity that caused the
threshold boundary to be violated.
2. Commit your changes. When you commit your changes, the threshold is added
to the SYSCAT.THRESHOLDS view.
Altering a threshold
Alter thresholds using the ALTER THRESHOLD statement. You might alter a
threshold to modify the limit imposed on a specific resource.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
To alter a threshold for a work action set, use the ALTER WORK ACTION SET
statement with the ADD WORK ACTION keywords. For more information, see
ALTER WORK ACTION SET statement.
Restrictions
You cannot alter the threshold type with the ALTER THRESHOLD statement. For
example, you cannot change a TOTALDBPARTITIONCONNECTIONS threshold
into a TOTALSCPARTITIONCONNECTIONS threshold. If you require a different
threshold type, drop the existing threshold and then create a new threshold.
To alter a threshold:
1. Specify one or more of the following properties for the threshold on the ALTER
THRESHOLD statement. You can change the following properties:
v The boundary for the threshold predicate.
v The actions to be taken, if the threshold boundary is violated.
v Whether the threshold is enabled or disabled.
2. Commit your changes. When you commit your changes, the threshold is
updated in the SYSCAT.THRESHOLDS view.
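For example, assuming that a database-wide ACTIVITYTOTALTIME threshold
named MAXDBACTIVITYTIME exists, statements along the following lines raise its
boundary to 2 hours and then disable it (the name and values are illustrative):
ALTER THRESHOLD MAXDBACTIVITYTIME
WHEN ACTIVITYTOTALTIME > 2 HOURS
STOP EXECUTION;

ALTER THRESHOLD MAXDBACTIVITYTIME DISABLE;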
Dropping a threshold
Drop a threshold that you no longer require using the DDL statement DROP
THRESHOLD.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
If you want to drop a threshold in a work action set, use the ALTER WORK
ACTION SET statement. You can also drop a threshold by dropping the entire
work action set with the DROP WORK ACTION SET statement.
To drop a threshold:
1. Do one of the following steps:
v If the threshold is a queuing threshold, use the ALTER THRESHOLD
statement to disable it.
One way of setting up a DB2 workload manager solution is to divide and manage
the database resources across the various departments in a company. For example,
assume that the sales department runs two main reports, which consist of the
monthly and yearly sales. Assume also that the human resources department runs
a payroll application every other week and that the development team is working
on a new type of report at the request of the management team. To define WLM
execution environments for these departments, create service classes:
CREATE SERVICE CLASS SALES
CREATE SERVICE CLASS HUMANRESOURCES
CREATE SERVICE CLASS DEVELOPMENT
In this situation, you create a workload definition for each one of these
applications to map the application to its applicable service superclass:
CREATE WORKLOAD MONTHLYSALES APPLNAME('monthlyrpt.exe') SERVICE CLASS SALES
CREATE WORKLOAD YEARLYSALES APPLNAME('yearlyrpt.exe') SERVICE CLASS SALES
CREATE WORKLOAD PAYROLL APPLNAME('payroll.exe') SERVICE CLASS HUMANRESOURCES
CREATE WORKLOAD NEWREPORT APPLNAME('dev.exe') SERVICE CLASS DEVELOPMENT
Because the YearlySales report is very large, you do not want to have more than
one occurrence of this application running in the database at any time. You
therefore create a threshold to set the maximum number of concurrent occurrences
of this workload to 1:
CREATE THRESHOLD SINGLEYEARLYSALESRPT FOR WORKLOAD YEARLYSALES ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN CONCURRENTWORKLOADOCCURRENCES > 1
STOP EXECUTION
You can achieve a similar solution by associating the YearlySales application with a
service subclass YearlySalesReports (under the Sales service superclass) and setting
the maximum concurrency threshold to a value of 1 for the service subclass:
CREATE SERVICE CLASS YEARLYSALESREPORTS UNDER SALES
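The concurrency limit on the subclass and the remapping of the YearlySales
workload to it might look like the following; the threshold name is illustrative, and
the optional queuing clause is omitted so that the defaults apply:
CREATE THRESHOLD SINGLEYEARLYSALESSC FOR SERVICE CLASS YEARLYSALESREPORTS UNDER SALES ACTIVITIES
ENFORCEMENT DATABASE
WHEN CONCURRENTDBCOORDACTIVITIES > 1
STOP EXECUTION

ALTER WORKLOAD YEARLYSALES SERVICE CLASS YEARLYSALESREPORTS UNDER SALES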
In either situation, you can set the threshold action to STOP EXECUTION to
prevent more than one occurrence of the workload from executing. You can also
collect activity information if you want additional information about the conditions
when the threshold is violated.
Because all applications are expected to complete in an hour or less, you create a
threshold with a database domain, preventing any activity from running longer
than 1 hour. The only exception to this rule is the yearly report, which can take up
to 5 hours to complete. Therefore, you can associate an activity total time threshold
of 5 hours with the YearlySales workload. This threshold overrides the database-wide
activity total time threshold for the yearly sales report, relaxing its time constraint.
The value of 5 hours now applies to the YearlySales workload, while the global
value of 1 hour applies elsewhere in the database:
CREATE THRESHOLD MAXDBACTIVITYTIME FOR DATABASE ACTIVITIES
ENFORCEMENT DATABASE
WHEN ACTIVITYTOTALTIME > 1 HOUR
STOP EXECUTION
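The overriding 5-hour threshold on the YearlySales workload can then be created
as follows; the threshold name is illustrative:
CREATE THRESHOLD MAXYEARLYSALESTIME FOR WORKLOAD YEARLYSALES ACTIVITIES
ENFORCEMENT DATABASE
WHEN ACTIVITYTOTALTIME > 5 HOURS
STOP EXECUTION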
When the application becomes more stable, it enters its optimization phase. During
this phase, the developer tries to reduce the number of activities generated by the
application from between 15 and 20 to 15. At this time, you alter the threshold by
changing its upper boundary value to 15 and the threshold action to CONTINUE.
This threshold definition helps identify and address situations in which the
number of generated activities exceeds 15 but the increased stability of the
application does not require that its execution be stopped.
ALTER THRESHOLD MAXDEVACTIVITIES
WHEN CONCURRENTDBCOORDACTIVITIES > 15
COLLECT ACTIVITY DATA ON COORDINATOR WITH DETAILS AND VALUES
CONTINUE
The application LongUOW issues transactions that can occasionally run longer
than the desired ten minutes. This results in locks being held for too long and
prevents more important applications from proceeding. In this case, you want to
force the application, rather than let it hold up other work. You can restrict the
runtime for this application's transactions to an administrator-defined period of
time using the UOWTOTALTIME threshold.
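The LONG_UOW workload that is referenced in the next statement must already
exist; a minimal sketch, assuming the application executable is named longuow.exe:
CREATE WORKLOAD LONG_UOW APPLNAME('longuow.exe')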
Then, create a threshold for this workload that forces the LongUOW application
when any of the application's transactions take more than 10 minutes to finish:
CREATE THRESHOLD FORCELONGUOW FOR WORKLOAD LONG_UOW ACTIVITIES ENFORCEMENT DATABASE
WHEN UOWTOTALTIME > 10 MINUTES FORCE APPLICATION
You can also apply this threshold at the service subclass level or database level.
System resources are allocated and controlled by using service classes. With
priority aging, the priority of an activity can be changed by moving the activity
from one service class to another service class. The priority increases if the new
service class has more resources, and the priority decreases if the new service class
has fewer resources. Activities are moved when a threshold with a REMAP
ACTIVITY action is violated, based upon predetermined maximum usage of a
specific resource such as processor time or rows read. After an activity is mapped
to a new service class, it continues to run with the new resource constraints
applied.
A simple approach that you can use to help short queries to run faster is to define
a series of service classes with successively lower levels of resource priority and
threshold actions that move activities between the service subclasses. Using this
setup, you can decrease, or age, the priority of longer-running work over time and
perhaps improve response times for shorter-running work without having detailed
knowledge of the activities running on your data server.
Figure 16. A simple tiered setup that shows three service classes with successively lower priority
You can create this setup by assigning a high priority for all applicable resources to
one service class, medium priority to a second service class, and low priority to a
third service class. As work enters the system, it is automatically placed into the
first service class and begins running using the high-priority settings of this service
class. If you also define thresholds for each of the service classes that limit the time
or resources used during execution, work is dynamically reassigned to the
next-lower service class if the threshold of the next-higher class is violated. This
dynamic resource control is repeatedly applied until the work is completed or is in
the lowest-priority class, where it remains until it is completed or you force it to
stop running.
In-service-class thresholds
The in-service-class thresholds are evaluated separately for an activity on each
partition, without coordination. Because there is no coordination between
partitions, when an activity is remapped on one partition, it is possible for the
same activity to be in different service subclasses on different partitions
simultaneously.
When subagent work for an activity is completed on a remote partition and further
work for the same activity is sent to the same partition later, the activity restarts in
the same service subclass as the agent that sent the request to the partition. If you
defined an in-service-class threshold for this service subclass, the timer or counter
for the activity on the remote partition restarts at zero.
Where activities are nested, parent and child activities are tracked separately.
Therefore, if a child activity is using an excessive amount of resources, only this
activity, not its parent or sibling activities, violates a threshold.
On data servers where processor time is the primary resource that activities
compete for, use the CPUTIMEINSC threshold as your first measure of control.
On data servers where queries that read many table rows result primarily in I/O
contention, use the SQLROWSREADINSC threshold. On systems that see a
combination of heavy processor and I/O activity, use a combination of the
CPUTIMEINSC and SQLROWSREADINSC thresholds.
You should set the agent priority of the service subclasses relative to each other, so
that your data server can treat activities of different business priority differently.
Note that the agent priority of the default system class should always be higher
than that of any user-defined service classes you create, to avoid a negative impact
on performance. The agent priority of the default maintenance class can be set
lower than that of your user-defined service classes.
For additional information on how to use the thresholds, see the sample tiering
scripts and priority aging scenarios.
When you remap an activity to a new service subclass, only the in-service-class
thresholds, such as CPUTIMEINSC and SQLROWSREADINSC, change. These
in-service-class thresholds no longer affect an activity after it leaves the source
service subclass, and they are replaced with the corresponding thresholds for the
target subclass, if you defined those thresholds. All other activity thresholds from
the service subclass to which the activity was originally mapped remain in effect.
For example, assume that two service subclasses with thresholds are defined as
follows:
v Service subclass A with the following thresholds:
– An ACTIVITYTOTALTIME lifetime threshold TH1 with a STOP EXECUTION
action after 30 minutes have elapsed
– An SQLROWSREADINSC in-service-class threshold TH2 with a REMAP
ACTIVITY action to service subclass B after more than 2000 rows have been
read
v Service subclass B with the following thresholds:
– An ACTIVITYTOTALTIME lifetime threshold TH3 with a STOP EXECUTION
action after 5 minutes have elapsed
– An SQLROWSREADINSC threshold TH4 with a STOP EXECUTION action
after more than 1000 rows have been read
When an activity enters the system in service subclass A, both thresholds TH1 and
TH2 apply to the activity. If the activity reads more than 2000 rows during query
evaluation, it is dynamically remapped to service subclass B. Because of the
remapping of the activity to subclass B, the applicable in-service-class thresholds
change, and TH4 rather than TH2 now applies to the activity. Counters for both
thresholds are reset to zero, and even though the activity has read more than 2000
rows in the original service subclass, the counter for TH4 is restarted at zero; the
activity must read more than 1000 rows while running in service subclass B before
threshold TH4 is violated. Threshold TH1, which applies throughout the lifetime of
the activity, continues to apply, even though the activity is now running in a
different subclass. Threshold TH3 does not exercise any control over the remapped
activity at all, because it did not apply to the first service subclass that the activity
entered when it began running.
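A sketch of the DDL for this setup, assuming that the two subclasses are created
under a superclass named SUPER1 and assuming 10-second checking intervals
(both of these details are illustrative):
CREATE SERVICE CLASS SUPER1;
CREATE SERVICE CLASS A UNDER SUPER1;
CREATE SERVICE CLASS B UNDER SUPER1;

CREATE THRESHOLD TH1 FOR SERVICE CLASS A UNDER SUPER1 ACTIVITIES
ENFORCEMENT DATABASE
WHEN ACTIVITYTOTALTIME > 30 MINUTES STOP EXECUTION;

CREATE THRESHOLD TH2 FOR SERVICE CLASS A UNDER SUPER1 ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN SQLROWSREADINSC > 2000 CHECKING EVERY 10 SECONDS
REMAP ACTIVITY TO B;

CREATE THRESHOLD TH3 FOR SERVICE CLASS B UNDER SUPER1 ACTIVITIES
ENFORCEMENT DATABASE
WHEN ACTIVITYTOTALTIME > 5 MINUTES STOP EXECUTION;

CREATE THRESHOLD TH4 FOR SERVICE CLASS B UNDER SUPER1 ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN SQLROWSREADINSC > 1000 CHECKING EVERY 10 SECONDS
STOP EXECUTION;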
The wlmtiersdefault.db2 sample script creates the following work action set and
work class set, which is used to map activities that cannot be remapped by the
CPUTIMEINSC threshold directly to the WLM_MEDIUM service subclass. These
activities will remain in the WLM_MEDIUM service subclass for the duration of
their execution.
Table 40. Work class set created by the wlmtiersdefault.db2 sample script

Work class     Work action
WLM_DML_WC     For DML activities, mapped to service class WLM_SHORT initially.
               These activities can be remapped by a CPUTIMEINSC threshold.
WLM_CALL_WC    For CALL activities, mapped to service class WLM_SHORT initially.
               These activities can be remapped by a CPUTIMEINSC threshold.
WLM_OTHER_WC   For activities that cannot be remapped by a CPUTIMEINSC
               threshold, mapped to service class WLM_MEDIUM. These activities
               will remain in the WLM_MEDIUM service subclass.
The wlmtierstimerons.db2 sample script also creates the following work action set
and work class set, which is used to map activities according to their estimated
cost:
Table 41. Work class set created by the wlmtierstimerons.db2 sample script

Work class           Estimated cost range in timerons and work action
WLM_SHORT_DML_WC     For DML activities with an estimated cost of 0 to 999 timerons,
                     mapped to service class WLM_SHORT initially. These activities
                     may be remapped by a CPUTIMEINSC threshold.
WLM_MEDIUM_DML_WC    For DML activities with an estimated cost of 1000 to 99 999
                     timerons, mapped to service class WLM_MEDIUM initially. These
                     activities may be remapped by a CPUTIMEINSC threshold.
WLM_LONG_DML_WC      For DML activities with an estimated cost of 100 000 timerons or
                     more, mapped to service class WLM_LONG.
WLM_CALL_WC          For CALL activities, mapped to service class WLM_SHORT
                     initially. These activities can be remapped by a CPUTIMEINSC
                     threshold.
WLM_OTHER_WC         For activities that cannot be remapped, mapped to service class
                     WLM_MEDIUM.
When you modify the sample scripts to adapt them to your environment, the most
important setting to consider is the maximum amount of processor time that can
be used in each service class. How much processor time you permit activities to
consume in each service subclass depends largely on your particular environment.
To find the best values, you need to monitor how activities are being processed on
your data server. By default, both the wlmtiersdefault.db2 and
wlmtierstimerons.db2 scripts will log event monitor records to the threshold
violations event monitor, if one is active, with the option to turn on and enable the
activity event monitor and to collect activity data (at the cost of incurring
additional overhead). For wlmtiersdefault.db2, if the maximum amount of
processor time that can be used in each service class is set too high, most activities
will always start and finish in the high priority class regardless of how much
actual processor time each requires. If the maximum amount of processor time is
set too low, no activity will finish in the high priority service class and every
activity will end up being remapped to the medium or low priority service class
regardless of business priority. In either case, the script does not benefit overall
throughput on your data server, and activities are not treated effectively according
to their business priority. The same issue applies, to a lesser extent, to
wlmtierstimerons.db2, where activities are differentiated initially by being mapped
to service subclasses according to estimated cost. If the maximum amount of
processor time that can be used in each service class is set incorrectly, activities
either fail to be remapped to a more appropriate service subclass when they
consume too much processor time, or are remapped too quickly despite having
higher business priority.
Note that the wlmtiersdefault.db2 and wlmtierstimerons.db2 scripts set the agent
priority of the default system class to be higher, and the priority of the default
maintenance class to be lower, than the three user-defined service classes. If you
modify the agent priority of the user-defined service classes, you should always set
the priority of the default system class to be as high as or higher than the highest
priority service subclass you create, to avoid a negative impact on performance.
For more information about the specific DB2 workload manager objects created by
the scripts and about how to run them, refer to the scripts.
Sample scenarios
Two examples have been included in the documentation that show you how you
can adapt the sample tiering scripts on your data server to make use of priority
aging.
The problem: There is a business intelligence report that any end user can run
and that is very expensive. Whenever the report runs, it compromises the
performance of the system. The front-end tool that is used to generate the report
does not provide a way to restrict who can run the report or when it can be run.
The solution: You can use the wlmtiersdefault.db2 sample tiering scripts to
configure your data server with a tiered configuration that dynamically lowers, or
ages, the priority of processor intensive activities during their lifetime in order to
prevent compromising data server performance for all other users. After a
workload initially maps all work to a high priority service subclass, the expensive
reports are detected by the CPUTIMEINSC in-service-class threshold based on the
amount of processor time consumed. If an activity violates the CPUTIMEINSC
threshold by using the maximum amount of allowed processor time, a REMAP
ACTIVITY action moves the activity to a lower priority service subclass. The activity can
be remapped in response to processor time consumption again until it executes in
the lowest priority service subclass where it will continue until it completes or you
intervene manually. Other activities which do not exceed the thresholds continue to
run in the high priority service subclass, where they receive higher agent priority.
After running the workload for a period of time, you can use the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function to see how many
activities were remapped between the service subclasses:
SELECT substr(service_superclass_name,1,21) AS superclass,
substr(service_subclass_name,1,21) AS subclass,
substr(char(coord_act_completed_total),1,10) AS completed,
substr(char(act_remapped_in),1,10) AS remapped_in,
substr(char(act_remapped_out),1,10) AS remapped_out,
substr(char(last_reset),1,19) AS last_reset
FROM table( WLM_GET_SERVICE_SUBCLASS_STATS_V97(
CAST(NULL AS VARCHAR(128)),
CAST(NULL AS VARCHAR(128)),
-2 )
) AS TF_subcls_stats@
7 record(s) selected.
If you notice that no or only very few activities are being remapped to the lower
priority service subclasses, decrease the CPUTIMEINSC threshold value and the
check interval used by the ALTER THRESHOLD statements in the script to
improve the mapping of activities across service class tiers according to business
priority. If most or almost all activities are being remapped to the lower priority
service subclasses, increase the CPUTIMEINSC threshold value and the check
interval for the ALTER THRESHOLD statements to permit more activities to
complete with higher priority. After your changes are complete, rerun the
wlmtiersdefault.db2 script to make them effective.
Scenario: Remapping incorrectly mapped queries through
priority aging
The following scenario shows how you can configure your data server to
dynamically remap, or age the priority of, activities that are consuming more
processor time than originally estimated in order to maintain system performance
for other queries.
The problem: You may have mapped expensive activities based on estimated SQL
cost to a lower priority service subclass so that these activities do not impact the
performance of less expensive, shorter activities. Such a mapping can be
accomplished by defining a work action set at the service superclass level.
However, if the estimated SQL cost is incorrect because of statistics that are out of
date, for example, an expensive activity might be mapped incorrectly to a high
priority service subclass where it begins to consume an excessive amount of
resources, at the cost of all other high priority activities.
The solution: You can use the wlmtierstimerons.db2 sample tiering script to
configure your data server with a tiered configuration that evaluates incoming
activities according to their estimated cost and maps them to one of three service
subclasses, each with different agent priorities. If an activity consumes too much
processor time, your data server dynamically lowers the priority of the activity
during its lifetime by remapping it between performance tiers. This dynamic
process of remapping activities to lower their priority is also referred to as priority
aging.
After an activity has been mapped to its initial service class and begins executing,
the CPUTIMEINSC in-service-class threshold is used by the script to control the
amount of processor time an activity can consume. If the activity violates the
threshold by using the maximum amount of allowed processor time, a REMAP
ACTIVITY action is triggered which moves the activity to a service subclass with
lower agent priority. The activity can be remapped in response to processor time
consumption until it executes in the lowest priority service subclass, where it will
continue until it completes or you intervene manually.
An event monitor record is logged every time an activity is remapped. If you want
to collect additional information about remapped activities to investigate further,
you can add the COLLECT ACTIVITY DATA clause to the ALTER THRESHOLD
statement in the wlmtierstimerons.db2 script. Simply rerun the script for the change
to take effect.
After running the workload for a period of time, you can use the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function to see how many
activities were remapped between the service subclasses:
SELECT substr(service_superclass_name,1,21) AS superclass,
substr(service_subclass_name,1,21) AS subclass,
substr(char(coord_act_completed_total),1,10) AS completed,
substr(char(act_remapped_in),1,10) AS remapped_in,
substr(char(act_remapped_out),1,10) AS remapped_out,
substr(char(last_reset),1,19) AS last_reset
FROM table( WLM_GET_SERVICE_SUBCLASS_STATS_V97(
CAST(NULL AS VARCHAR(128)),
CAST(NULL AS VARCHAR(128)),
-2 )
) AS TF_subcls_stats@
7 record(s) selected.
For this scenario, you should see relatively few activities being remapped between
service subclasses, because activities should almost always be mapped to the
appropriate service subclass initially, based on estimated cost. If you notice that
activities typically are being completed only in the WLM_SHORT or the
WLM_LONG service class, you can adjust the estimated cost values used by the
ALTER WORK CLASS SET statement in the script to improve the mapping of
activities across service class tiers, so that shorter activities are mapped to the
WLM_SHORT_DML_WC work class and longer activities are mapped to the
WLM_MEDIUM_DML_WC or the WLM_LONG_DML_WC work class. If you
notice that most of the activities are being remapped, you can increase the
threshold values used in the ALTER THRESHOLD statements to improve the
initial mapping of activities to service subclasses. After your changes are complete,
rerun the wlmtierstimerons.db2 script to make them effective.
In order to be able to remap to another service subclass, the target service subclass
must exist under the same service superclass as the original service subclass of the
activity. Either the target or original service subclass can be the default subclass of
the superclass. The REMAP ACTIVITY action cannot be applied to service
subclasses under the default system class, default maintenance class or default user
class.
The REMAP ACTIVITY action will move an activity to a different service subclass
within the same service superclass. Remapping is available with any of the
in-service-class thresholds such as CPUTIMEINSC and SQLROWSREADINSC. You
use this dynamic process of remapping activities to lower their priority over time,
which is also known as priority aging. Lowering the priority of some activities
over time can free up system resources, which can then be applied to other
activities of higher business importance.
Agents working for the activity will periodically check if a threshold has been
violated on each partition, without coordination between partitions. When any one
agent detects an in-service-class threshold violation on a partition, this agent
triggers the REMAP ACTIVITY action for the activity on the partition and then
remaps itself to the target service subclass, after which the activity is considered
remapped. All other agents working for the activity on the same partition will
remap to the target service subclass when they detect that the activity has been
remapped.
The target service subclass cannot be the same as the original service subclass; you
must remap to a different service subclass first before remapping to the original
one.
The following example creates a simple three-tiered setup that lowers, or ages, the
priority of ongoing activity over time. Three service subclasses under a single
superclass A provide the execution environment in which all queries must run.
Assume that the default user workload maps incoming queries to service subclass
A1, which is a high-priority subclass intended to permit shorter running queries to
execute quickly. A medium-priority service subclass A2 is intended to permit
longer running queries to execute, although with more stringent resource controls.
Service subclass A3 provides containment for any very large queries that take an
excessive amount of processor time to complete.
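A sketch of the service classes and remapping thresholds for this setup follows; the
threshold names, the processor-time limits, and the checking intervals are
illustrative, and the agent-priority settings and the workload that maps incoming
queries to A1 are omitted:
CREATE SERVICE CLASS A;
CREATE SERVICE CLASS A1 UNDER A;
CREATE SERVICE CLASS A2 UNDER A;
CREATE SERVICE CLASS A3 UNDER A;

CREATE THRESHOLD REMAPA1TOA2 FOR SERVICE CLASS A1 UNDER A ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN CPUTIMEINSC > 30 SECONDS CHECKING EVERY 5 SECONDS
REMAP ACTIVITY TO A2;

CREATE THRESHOLD REMAPA2TOA3 FOR SERVICE CLASS A2 UNDER A ACTIVITIES
ENFORCEMENT DATABASE PARTITION
WHEN CPUTIMEINSC > 5 MINUTES CHECKING EVERY 30 SECONDS
REMAP ACTIVITY TO A3;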
A work action provides an action that can be applied to a work class, which
represents activities of a certain type, such as LOAD or READ activities.
If you apply a work action set to a database, there are several types of actions that
you can apply to activities that fall within a work class, such as threshold
definitions, prevent execution, collect activity data, and count activity. Defining a
threshold for a work action is the most powerful database work action.
If you apply a work action set to a workload, the different types of actions that
you can apply to activities include defining thresholds, preventing execution,
collecting activity data and aggregate activity data, and counting the activities.
If you define the work action set for a service superclass, the different types of
actions that you can apply to activities include mapping activities to a service
subclass, preventing execution, collecting activity or aggregate activity data, and
counting the activities. Typically, the work action maps an activity to a service
subclass and has thresholds defined on the subclass to help manage the activity.
How work classes, work class sets, work actions, and work
action sets work together and are associated with other DB2
objects
Work classes and work actions work together to apply specific actions to specific
activity types. The best way to describe how this works is through an example.
The following diagram shows a high-level view of how work classes, work class
sets, work actions, and work action sets work together and are associated with
other DB2 objects.
Figure 18. Overview of work action sets and work class sets
In the diagram, some database activities are mapped, through workload WL1,
workload WL3, and the default user workload, SYSDEFAULTUSERWORKLOAD,
to the service superclass SS1. Because work action set WASDB is applied to the
database, any activities that are assigned to the default user workload, the WL1
workload, or the WL3 workload and fall under the WC_DML or WC_LOAD work
classes will have the work actions in the WASDB work action set applied to them.
That is, activities with the DML work type are counted, and activities with the
LOAD work type have activity data collected for them and written to an active
event monitor (if one is available).
The work action set WASSSC1 is applied to the service superclass SS1. Any
activities that are assigned to the default user workload, the WL1 workload, or the
WL3 workload and fall under the WC_DML work class and the WC_LOAD work
class will also have the WA_MAP_DML and WA_MAP_LOAD work actions
applied to them. That is, activities with a work type of LOAD will be mapped to
Activities that are assigned to the WL2 workload are mapped directly to a service
subclass (SSC3). When a workload maps activities directly to a service subclass, no
work actions from the work action set WASSSC1 are applied to those activities.
However, because WASWL2 is applied to WL2, any activities that are assigned to
WL2 and fall under the WC_LOAD work class will have the work actions in the
WASWL2 work action set applied to them. That is, LOAD activities are not allowed
to run, because of the PREVENT EXECUTION work action.
Work actions
You can create a work action by using either the WORK ACTION keyword in the
CREATE WORK ACTION SET statement or the ADD keyword in the ALTER
WORK ACTION SET statement. You can alter a work action by using the ALTER
keyword in the ALTER WORK ACTION SET statement. You can remove a work
action from a work action set by using the DROP keyword in the ALTER WORK
ACTION SET statement, or by dropping the entire work action set.
You can view your work actions by querying the SYSCAT.WORKACTIONS view.
You can create a work action set using the CREATE WORK ACTION SET
statement, alter a work action set using the ALTER WORK ACTION SET
statement, and drop a work action set using the DROP WORK ACTION SET
statement.
You can view your work action sets by querying the SYSCAT.WORKACTIONSETS
view.
When you create a work action set, you must specify the object that the work
action set is to be applied to. The valid object types are the database, a workload,
or a service superclass. You must also specify which work class set the work action
set is to work with. This permits you to use the work classes in the work class set
to identify the types of activities that you want to apply the work actions to.
If the work action set is defined for a database, the work actions in the work action
set must be any of the following actions:
v A threshold
The following thresholds apply to each individual activity in the matching work
class:
– ACTIVITYTOTALTIME
– CPUTIME
– ESTIMATEDSQLCOST
– SQLROWSREAD
– SQLROWSRETURNED
– SQLTEMPSPACE
The following threshold applies to all activities in the matching work class as a
group:
– CONCURRENTDBCOORDACTIVITIES
The actual threshold is specified by the WHEN threshold-type clause. Multiple
threshold work actions can be applied to a single work class if all the thresholds
are of different types. If this action is specified, the threshold is applied to all
database activities associated with the work class.
v PREVENT EXECUTION
If this action is specified, all database activities that match the associated work
class are not permitted to run.
v COLLECT ACTIVITY DATA
If this action is specified, information about the database activities corresponding
to the work class for which this work action is defined are written to the active
ACTIVITIES event monitor when the activities complete execution. For more
information, see “Collecting data for individual activities”.
v COUNT ACTIVITY
If this action is specified, all database activity that maps to the associated work
class causes the turnstile counter for that work class type to be incremented.
(The turnstile counter for the work class is incremented by 1 each time an
activity is associated with that work class). The COUNT ACTIVITY work action
provides an efficient way to ensure this counter is updated. If no work action is
applied to an activity corresponding to a work class, the work class activity
counter is not incremented. Sometimes the only action you care about is
obtaining a count of activities of a given type. For more information, see
“Collecting data for individual activities”.
If the work actions in the work action set defined for a database are not any of
these actions, SQL4720N is returned.
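For illustration, a database work action set similar to the following one combines
several of these actions; the work action set, work class set, and work class names
are illustrative and are assumed to be defined elsewhere:
CREATE WORK ACTION SET WAS_DB FOR DATABASE USING WORK CLASS SET WCS_ALL
(WORK ACTION WA_COSTLIMIT ON WORK CLASS WC_DML
WHEN ESTIMATEDSQLCOST > 1000000 STOP EXECUTION,
WORK ACTION WA_NOLOAD ON WORK CLASS WC_LOAD PREVENT EXECUTION,
WORK ACTION WA_COUNTREADS ON WORK CLASS WC_READ COUNT ACTIVITY);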
If the work action set is defined for a workload, the work actions in the work
action set must be any of the following actions:
v A threshold
The following thresholds apply to each individual activity in the matching work
class:
– ACTIVITYTOTALTIME
– CPUTIME
– ESTIMATEDSQLCOST
– SQLROWSREAD
– SQLROWSRETURNED
– SQLTEMPSPACE
The following threshold applies to all activities in the matching work class as a
group:
– CONCURRENTDBCOORDACTIVITIES
The actual threshold is specified by the WHEN threshold-type clause. Multiple
threshold work actions can be applied to a single work class if all the thresholds
are of different types. If this action is specified, the threshold is applied to all
database activities associated with the work class.
v PREVENT EXECUTION
Behavior is the same as for the database work action.
v COLLECT ACTIVITY DATA
Behavior is the same as for the database work action.
v COLLECT AGGREGATE ACTIVITY DATA
Behavior is the same as for the service superclass work action.
v COUNT ACTIVITY
Behavior is the same as for the database work action.
If the work actions in the work action set defined for a workload are not any of
these actions, SQL4720N is returned.
The following figure shows an example of how the work classes in a work class set
called LARGE ACTIVITIES are to be applied to both the database and a service
superclass. To meet this objective, two work action sets, "Database large activities"
and "Service class large activities" are created.
Although this example does not show it, you can also apply the classes in the
LARGE ACTIVITIES work class set to a workload, by creating a work action set
associated with the workload and then associating the work action set with the
LARGE ACTIVITIES work class set.
Figure 19. Example of work actions, work action sets, work classes, and a work class set
A work action set does not have to contain an action for every work class in the
work class set to which the work action set is applied. In addition, a work class
can have more than one work action applied to it as long as the action types are
different. A work class can have more than one threshold work action applied to it
as long as the threshold types are different.
When work is submitted to the data server, it is associated with a workload, either
a user-defined workload or the default workload, and is then mapped to a service class.
The following figure shows the process of how a work action is applied to an
activity.
Figure 21 illustrates an example scenario using work action sets to control the
concurrency of incoming work based on the source of the connection while all
work in the database is controlled using priority aging.
Figure 21. Concurrency control at the workload level using work action sets
In the example scenario, two workloads are created to identify and differentiate the
work coming from different sources. Connections to the database from the sales,
accounting, and IT departments are mapped to the Regular workload. Connections
to the database from management and critical applications are mapped to the
Important workload. Work from the Important workload has higher priority and
needs to be able to complete within the shortest amount of time. To ensure the
database has sufficient capacity for work in the Important workload, concurrency
thresholds are placed on the work in the Regular workload. A workload level work
action set, called Regular workload level work action set, is created on the Regular
workload and is applied to a work class set that has two work classes. Load
activities are mapped to one work class, while all other activities are mapped to
the other work class. A CONCURRENTDBCOORDACTIVITIES threshold is created
as a work action in the Regular workload level work action set to allow only one
load activity in the system at a time while queueing the other load activities. In
addition, another CONCURRENTDBCOORDACTIVITIES threshold is created as a
work action in the Regular workload level work action set to allow a maximum of
500 concurrent activities, while activities exceeding the maximum are queued.
Connections to the database from both the Regular and Important workloads are
mapped to the Priority Aging service superclass. This service superclass is created
to implement priority aging that favors short activities. The Priority Aging service
class work action set is created for the Priority Aging service superclass to separate
the long-running load activities from all the short-running activities. All activities,
other than load, are mapped to the Short service subclass. The Short service
subclass is configured to have the highest agent, prefetch, and buffer pool
priorities. A CPUTIMEINSC threshold is created on the Short service subclass to
remap an activity to the Medium service subclass after it consumes more than 1
second of processor time in the Short service subclass. The Medium service
subclass has medium agent, prefetch, and buffer pool priorities. A CPUTIMEINSC
threshold is created on the Medium service subclass to remap an activity to the
Long service subclass after it consumes more than 5 seconds of processor time in
the Medium service subclass. The Long service subclass has the lowest agent,
prefetch, and buffer pool priorities. Load activities are mapped directly to the Long
service subclass by the Priority Aging service class work action set because load
activities can be long running, resource intensive, and less time critical for
completion.
With workloads, requests are identified and assigned to a service class based on
connection attributes. Workloads are the primary method for routing work to a
specific DB2 service class for execution. If you want to further refine how requests
are identified, you can use work classes to classify the activities based on their type
and other activity attributes. For example, you can classify READ activities, WRITE
activities, and LOAD activities into different work classes and have each activity
type treated differently.
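For illustration, a work class set along the following lines distinguishes these
activity types; all of the names are illustrative:
CREATE WORK CLASS SET ACTIVITY_TYPES
(WORK CLASS READ_WC WORK TYPE READ,
WORK CLASS WRITE_WC WORK TYPE WRITE,
WORK CLASS LOAD_WC WORK TYPE LOAD);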
If you use work classes (which are grouped into work class sets), you can use
work actions to exercise control over the different types of activities. For example,
you can use one work action to map a specific type of activity to a service subclass
and use a different work action to apply a control known as a threshold to ensure
that the same type of activity does not exceed certain limits.
Work actions are grouped into work action sets. A single work action set can apply
to activities in the database, to activities in a service superclass, or to activities in a
workload. However, the same work action set cannot apply to more than one
object. Work class sets and work action sets work together. That is, a work class
must exist for categorizing an activity as a specific type of work before a work
action can be applied to it. A work class set can be associated with more than one
work action set, but a work action set can be associated with only one work class
set.
Now assume that an activity that does not update the catalogs (a READ activity)
enters the system. The database-level work action set WAS_1 (that is associated
with work class set WCS_1) contains a work action that is applied to the READ
work class. The request is then mapped to service superclass SC_A (by workload
WL_A). Here, the request encounters the service superclass-level work action set
WAS_2, which is also associated with work class set WCS_1, and applies to
activities in service superclass SC_A. This work action set contains a mapping
work action, which is also applied to the READ work class so that all READ
activities will be mapped to service subclass SSC_1a in service superclass SC_A.
A somewhat similar situation occurs with the request that is associated (again,
based on its connection attributes) with workload WL_B. Workload WL_B maps
activities to service superclass SC_B. Assume that the request is for a LOAD
activity and that work class set WCS_2 contains a work class that applies to LOAD
activities. Work class set WCS_2 is associated with the service superclass-level
work action set WAS_3, which applies to activities in service superclass SC_B.
Assume that work action set WAS_3 contains a mapping work action that is
applied to the LOAD work class, so that when the LOAD activity is mapped to
service superclass SC_B by workload WL_B, it will then be mapped by the work
action to service subclass SSC_1b for execution.
When you create a work action set to work with a specific work class set, you
cannot change it to work with a different work class set because the work actions
in the work action set have a dependency on the work classes in the work class
set. If you want to change the work class set this work action set is to be applied
to, you must drop and recreate the work action set.
You cannot change which object the work action set applies to because the type of
work actions in the work action set depends on which object (database, workload,
or service superclass) the work action set is defined for. If you want to change
which object the work action set is associated with, you must drop and recreate the
work action set.
Note: Disabling a work action set does not disable the work actions within the
work action set, but the work action set will no longer affect any work. If you
want to drop a work action set that contains a concurrency work action
threshold, you must first disable the concurrency work action before the work
action set can be dropped, because concurrency thresholds must be disabled
before they can be dropped.
5. Commit your changes. When you commit your changes, the work action set is
updated in the SYSCAT.WORKACTIONSETS view. The
SYSCAT.WORKACTIONS view is updated for any added, altered, or dropped
work actions.
For example, assume that you have a work action set called READACTIVITIES
that is associated with a work class set called READCLASSES, and that work
action set is defined for a service superclass called READSERVICECLASS. The
READACTIVITIES work action set has a work action in it that maps all SELECT
statements to the service subclass SMALLREADSERVICECLASS. If the
READACTIVITIES work action set is disabled, all SELECT statements are treated
as though the READACTIVITIES work action set does not exist, and are mapped
to the default service subclass.
Dropping a work action set drops the work action set and all work actions in it.
If you define the work action as WITHOUT NESTED, nested activities are
handled according to their activity type instead of automatically being mapped
to the same service subclass as the activity under which they are nested.
See “DDL statements for DB2 workload manager” on page 18 for additional
prerequisites.
Note: If the action is a threshold, you cannot alter the type of threshold to a
different threshold. So, for example, if the work action was an
SQLROWSRETURNED threshold, you cannot change it to a
SQLTEMPSPACE threshold. In addition, you cannot change the work action
type of an enabled CONCURRENTDBCOORDACTIVITIES work action
threshold.
v You can alter the histogram templates used by a COLLECT AGGREGATE
ACTIVITY DATA work action to describe the histograms created for the
corresponding work class. Updating the histogram templates used by a work
action updates the corresponding rows in the
SYSCAT.HISTOGRAMTEMPLATEUSE view, which displays the histogram
templates referenced by the service class or work action. For more
information on histograms and histogram templates, see “Histograms in
workload management” on page 189.
v You can alter whether you want to enable or disable the work action. By
default, work actions are enabled. When enabled, the data server considers
the work action for application against the activity that falls under the work
class for the work action. If the work action is disabled, the data server
ignores it.
Assume that you have a work class set called ALLSQL, and it contains the
following work classes in this order:
1. SMALLDML, which is for all DML-type SQL statements that have an estimated
cost of less than 1 000 timerons
2. MEDDML, which is for all DML-type SQL statements that have an estimated
cost of between 1 000 and 20 000 timerons
3. LARGEDML, which is for all DML-type SQL statements that have an estimated
cost of greater than 20 000 timerons
4. ALLDDL, which is for all DDL-type SQL statements
5. ALLACTIVITY, which is for all activities
The following SQL statements create the work class set and the work classes:
CREATE WORK CLASS SET ALLSQL
(WORK CLASS SMALLDML WORK TYPE DML FOR TIMERONCOST FROM 0 TO 1000,
WORK CLASS MEDDML WORK TYPE DML FOR TIMERONCOST FROM 1001 TO 20000,
WORK CLASS LARGEDML WORK TYPE DML FOR TIMERONCOST FROM 20001 TO UNBOUNDED,
WORK CLASS ALLDDL WORK TYPE DDL,
WORK CLASS ALLACTIVITY WORK TYPE ALL)
These work classes already have work actions, such as COUNT ACTIVITY,
COLLECT, and thresholds (that are not ACTIVITYTOTALTIME thresholds) applied
to them.
Assume that you want to permit large DML activities to run for no longer than 5
hours, and that all other SQL can take no longer than 30 minutes to run. The
following example shows one possible method for accomplishing this objective:
use only one work class, LARGEDML, and then create a work action set for the
database that has one work action, LARGEDMLTIMEALLOWED, applied to the
work class.
Table 43. LARGEDMLTIMEALLOWED work action applied to the LARGEDML work class

Work action: LARGEDMLTIMEALLOWED
Work class applied to: LARGEDML
Threshold type and value: ACTIVITYTOTALTIME < 5 HOURS
Action: Stop execution; collect activity data
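Expressed as DDL, this work action set might look like the following sketch. The work action set name DBTIMELIMITS is illustrative; the threshold work action triggers when an activity in the LARGEDML work class exceeds 5 hours of total time.
CREATE WORK ACTION SET DBTIMELIMITS FOR DATABASE
   USING WORK CLASS SET ALLSQL
   (WORK ACTION LARGEDMLTIMEALLOWED ON WORK CLASS LARGEDML
      WHEN ACTIVITYTOTALTIME > 5 HOURS
      COLLECT ACTIVITY DATA
      STOP EXECUTION)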
To accomplish this task, first create a work class set that contains work classes for
the different types of work you are interested in. For example, if you want to know
how many READ activities, WRITE activities, DDL activities, and LOAD activities
are running on your system, you would create a work class set, ACTIVITYTYPES,
as in the following example:
CREATE WORK CLASS SET ACTIVITYTYPES
(WORK CLASS READWC WORK TYPE READ,
WORK CLASS WRITEWC WORK TYPE WRITE,
WORK CLASS DDLWC WORK TYPE DDL,
WORK CLASS LOADWC WORK TYPE LOAD)
After a sufficient amount of time has passed, you can determine the number of
each type of activity that has run by using the
WLM_GET_WORK_ACTION_SET_STATS table function:
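For example, assuming that a work action set containing COUNT ACTIVITY work actions has been applied to the ACTIVITYTYPES work class set, a query like the following sketch returns the number of activities that have run under each work class (the column list reflects the statistics described later in this chapter):
SELECT SUBSTR(WORK_ACTION_SET_NAME,1,18) AS WORK_ACTION_SET_NAME,
       SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
       SUBSTR(WORK_CLASS_NAME,1,15) AS WORK_CLASS_NAME,
       LAST_RESET,
       ACT_TOTAL
FROM TABLE(WLM_GET_WORK_ACTION_SET_STATS('', -2)) AS WASSTATS
ORDER BY WORK_ACTION_SET_NAME, WORK_CLASS_NAME, PART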
The primary purpose of monitoring is to validate the health and efficiency of your
system and the individual workloads running on it. Using table functions, you can
access real-time operational data such as a list of running workload occurrences
and the activities running in a service class or average response times. Using event
monitors you can capture detailed activity information and aggregate activity
statistics for historical analysis.
Looking at aggregate information should usually be the first step when you build
a monitoring strategy. Aggregates give a good picture of overall data server
activity and are also cheaper because you do not have to collect information on
every activity in which you might be interested. You can collect more detailed
information as you understand the scope of your monitoring needs.
Table functions with names that begin with WLM_ are DB2 workload manager
table functions. These table functions provide access to a set of data relevant to
managing your workload, such as workload management statistics, as a virtual
DB2 table against which you can issue a SELECT statement. This enables you to
write applications to query data and analyze it as if it were in a physical table on
the data server. The DB2 workload manager table functions are qualified with the
SYSPROC schema name.
Table functions with names that begin with MON_ are monitoring metrics
functions. Monitoring metrics provide monitoring data about the health of and
query performance on your DB2 data server, which you can then use as input to
further monitoring and analysis.
Some table functions return sets of information about the work that is currently
running on a system:
Table 44. Table functions that show you the work currently running on the system

Workload occurrences
The WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table function
returns a list of workload occurrences, across database partitions, that are assigned
to a service class. For each occurrence, there is information about the current state
and the connection attributes used to assign the workload to the service class and
activity statistics indicating activity volume and success rates. For an example of
how to use this table function, see “Example: Investigating agent usage by service
class” on page 90. The deprecated
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES table function is also
available.

Workload occurrence activities
The WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 table function
returns a list of current activities associated with a workload occurrence. For each
activity, information is available about the current state of the activity (for
example, executing or queued), the type of activity (for example, LOAD, READ, or
DDL), and the time at which the activity started. For examples of how to use this
table function, see “Example: Aggregating data using DB2 workload manager table
functions” on page 167 and “Scenario: Identifying activities that are taking too
long to complete” on page 281. The deprecated
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function is also
available.
Some table functions return monitoring data for all requests executed on the
system aggregated by service subclass and workload objects:
Table 45. Table functions that show you monitoring data aggregated by DB2 workload
manager objects

Workloads
Both the MON_GET_WORKLOAD table function and the
MON_GET_WORKLOAD_DETAILS table function return metrics for one or more
workloads. The metrics returned by these functions represent the accumulation of
all metrics of all workload occurrences that use the same workload definition.
Statistical information
The following table lists the statistics that you can obtain by using table functions.
All statistics table functions return the statistics that accumulated since the last
time that you reset the statistics.
Table 46. Table functions that show you statistical information

Service superclasses
The WLM_GET_SERVICE_SUPERCLASS_STATS table function shows summary
statistics across database partitions at the service superclass level: namely,
high-water marks for concurrent connections, which are useful when determining
peak workload activity.
Statistics are useful only if the time period during which they are collected is
meaningful. Collecting statistics over a very long time might be less useful if it
becomes difficult to identify changes in trends or problem areas because there is
too much old data. For this reason, you can reset the statistics at any time, for
example by using the WLM_COLLECT_STATS stored procedure.
Because of the default workload and default user service classes, monitoring
capabilities exist from the moment that you install the DB2 data server. You can
use these capabilities to observe the work running on the data server before you
define any DB2 workload manager objects of your own.
In this situation, only the default workload and service class are in place. Use this
example to understand how you can use the table functions to determine what,
exactly, is running on the data server. Follow these steps:
1. Use the Service Superclass Statistics table function to show all of the service
superclasses. After you install or upgrade to DB2 9.5 or later, three default
superclasses are defined: one for maintenance activities, one for system
activities, and one for user activities. SYSDEFAULTUSERCLASS is the service
class of interest.
SELECT VARCHAR(SERVICE_SUPERCLASS_NAME,30) AS SUPERCLASS
FROM TABLE(WLM_GET_SERVICE_SUPERCLASS_STATS('',-1)) AS T
SUPERCLASS
------------------------------
SYSDEFAULTSYSTEMCLASS
SYSDEFAULTMAINTENANCECLASS
SYSDEFAULTUSERCLASS
3 record(s) selected.
2. Use the Service Subclass Statistics table function to show statistics for all the
service subclasses of the SYSDEFAULTUSERCLASS superclass. For each service
subclass you can see the current volume of requests that are being processed,
the number of activities that have completed execution, and the overall
distribution of activities across database partitions (possibly indicating a
problem if the distribution is uneven). You can optionally obtain additional
statistics including the average lifetime for activities, the average amount of
time activities spend queued, and so on. You can obtain optional statistics for a
service subclass by specifying the COLLECT AGGREGATE ACTIVITY DATA
keyword on the ALTER SERVICE CLASS statement to enable aggregate activity
statistics collection.
SELECT VARCHAR(SERVICE_SUPERCLASS_NAME, 20) AS SUPERCLASS,
VARCHAR(SERVICE_SUBCLASS_NAME, 20) AS SUBCLASS,
COORD_ACT_COMPLETED_TOTAL,
COORD_ACT_ABORTED_TOTAL,
COORD_ACT_REJECTED_TOTAL,
CONCURRENT_ACT_TOP
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97(
'SYSDEFAULTUSERCLASS', 'SYSDEFAULTSUBCLASS', -1))
AS T
SUPERCLASS SUBCLASS COORD_ACT_COMPLETED_TOTAL COORD_ACT_ABORTED_TOTAL COORD_ACT_REJECTED_TOTAL CONCURRENT_ACT_TOP
-------------------- -------------------- ------------------------- ----------------------- ------------------------ ------------------
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 2 0 0 1
1 record(s) selected.
3. For a given service subclass, use the Workload Occurrence Information table
function to list the occurrences of a workload that are mapped to the service
subclass. The table function displays all of the connection attributes, which you
can use to identify the source of the activities. This information can be quite
useful in determining custom workload definitions in the future. For example,
perhaps a specific workload occurrence listed here has a large volume of work
from an application as shown by the activities completed counter.
a. For that application, use the Workload Occurrence Activities Information
table function to show the current activities across database partitions that
were created from the application's connection. You can use this information
for a number of purposes, including identifying activities that might be
causing problems on the data server.
SELECT APPLICATION_HANDLE,
LOCAL_START_TIME,
UOW_ID,
ACTIVITY_ID,
ACTIVITY_TYPE
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(431,-1)) AS T
APPLICATION_HANDLE LOCAL_START_TIME UOW_ID ACTIVITY_ID ACTIVITY_TYPE
-------------------- -------------------------- ----------- ----------- --------------------------------
431 2008-06-17-12.49.46.854259 11 1 READ_DML
1 record(s) selected
b. For each activity, retrieve more detailed information by using the Activity
Details table function. The data might show that some SQL statements are
returning huge numbers of rows, that some activities have been idle for a
long time, or that some queries are running that have an extremely large
estimated cost. In situations such as these, it might make sense to define
some thresholds to identify and prevent potentially damaging behavior in
the future.
SELECT VARCHAR(NAME, 20) AS NAME,
VARCHAR(VALUE, 40) AS VALUE
FROM TABLE(WLM_GET_ACTIVITY_DETAILS(431,11,1,-1))
AS T WHERE NAME IN ('UOW_ID', 'ACTIVITY_ID', 'STMT_TEXT')
NAME VALUE
-------------------- ----------------------------------------
UOW_ID 1
ACTIVITY_ID 1
STMT_TEXT select * from syscat.tables
3 record(s) selected.
Installing DB2 Version 9.5 or later creates a set of default workloads and service
classes. Before deciding how to implement your own DB2 workload manager
solution, you can use the table functions to observe work being performed in the
system in terms of the default workload occurrences, service classes, and activities.
You can start by obtaining the list of workload occurrences in a service class. To do
this, use the WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97
table function.
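For example, a query along the following lines (a sketch; the SUBSTR lengths are only for formatting, and the column names reflect the output headings shown below) returns one row per workload occurrence per database partition:
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
       SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
       SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
       SUBSTR(CHAR(COORD_PARTITION_NUM),1,9) AS COORDPART,
       SUBSTR(CHAR(APPLICATION_HANDLE),1,7) AS APPHNDL,
       SUBSTR(WORKLOAD_NAME,1,22) AS WORKLOAD_NAME,
       SUBSTR(CHAR(WORKLOAD_OCCURRENCE_ID),1,6) AS WLO_ID
FROM TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97('', '', -2)) AS SCINFO
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME, PART, APPHNDL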
Assume that the system has four database partitions and that there are two
applications performing activities on the database when you issue the query. The
results would resemble the following ones:
SUPERCLASS_NAME SUBCLASS_NAME PART COORDPART APPHNDL WORKLOAD_NAME WLO_ID
------------------- ------------------ ---- --------- ------- -----------------------------
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 0 0 1 SYSDEFAULTUSERWORKLOAD 1
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 0 0 2 SYSDEFAULTUSERWORKLOAD 2
The results indicate that both workload occurrences were assigned to the
SYSDEFAULTUSERWORKLOAD workload. The results also show that both
workload occurrences were assigned to the SYSDEFAULTSUBCLASS service
subclass in the SYSDEFAULTUSERCLASS service superclass and that both
workload occurrences are from the same coordinator partition (partition 0).
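To see the activities running in one of these workload occurrences, you can use the WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 table function. A sketch of such a query follows, using application handle 1 from the output above; the exact columns selected are illustrative:
SELECT SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
       UOW_ID,
       ACTIVITY_ID,
       PARENT_UOW_ID,
       PARENT_ACTIVITY_ID,
       SUBSTR(ACTIVITY_TYPE,1,9) AS ACTIVITY_TYPE
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(1, -2)) AS WLOACTS
ORDER BY UOW_ID, ACTIVITY_ID, PART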
The query results show that workload occurrence 1 is running two activities. One
activity is a stored procedure (indicated by the activity type of CALL), and the
other activity is a DML activity that performs a read (for example, a SELECT
statement). The DML activity is nested in the stored procedure call. You can tell
that the DML activity is nested because the parent unit of work identifier and
parent activity identifier of the DML activity match the unit of work identifier and
the activity identifier of the CALL activity. You can also tell that the DML activity
is executing on database partitions 0, 1, 2, and 3. The parent identifier information
is available only on the coordinator partition.
You can obtain more information about an individual activity that is currently
running by using the MON_GET_ACTIVITY_DETAILS table function. This table
function returns an XML document where the elements in the document describe
the activity. In this example, the XMLTABLE function is used to return a result
table from the XML output.
SELECT D.APP_HANDLE,
D.MEMBER,
D.COORD_MEMBER,
D.LOCAL_START_TIME,
D.UOW_ID,
D.ACTIVITY_ID,
D.PARENT_UOW_ID,
D.PARENT_ACTIVITY_ID,
D.ACTIVITY_TYPE,
D.NESTING_LEVEL,
D.INVOCATION_ID,
D.ROUTINE_ID
FROM TABLE(MON_GET_ACTIVITY_DETAILS(65592, 1, 1, -2)) AS ACTDETAILS,
XMLTABLE (XMLNAMESPACES( DEFAULT 'http://www.ibm.com/xmlns/prod/db2/mon'),
'$details/db2_activity_details' PASSING XMLPARSE(DOCUMENT
ACTDETAILS.DETAILS) as "details"
COLUMNS "APP_HANDLE" BIGINT PATH 'application_handle',
"MEMBER" BIGINT PATH 'member',
"COORD_MEMBER" BIGINT PATH 'coord_member',
"LOCAL_START_TIME" VARCHAR(26) PATH 'local_start_time',
"UOW_ID" BIGINT PATH 'uow_id',
"ACTIVITY_ID" BIGINT PATH 'activity_id',
"PARENT_UOW_ID" BIGINT PATH 'parent_uow_id',
"PARENT_ACTIVITY_ID" BIGINT PATH 'parent_activity_id',
"ACTIVITY_TYPE" VARCHAR(10) PATH 'activity_type',
"NESTING_LEVEL" BIGINT PATH 'nesting_level',
"INVOCATION_ID" BIGINT PATH 'invocation_id',
"ROUTINE_ID" BIGINT PATH 'routine_id'
) AS D;
APP_HANDLE MEMBER COORD_MEMBER LOCAL_START_TIME           UOW_ID ACTIVITY_ID PARENT_UOW_ID PARENT_ACTIVITY_ID ACTIVITY_TYPE NESTING_LEVEL INVOCATION_ID ROUTINE_ID
---------- ------ ------------ -------------------------- ------ ----------- ------------- ------------------ ------------- ------------- ------------- ----------
     65592      1            1 2009-04-07-18.39.42.549197      1           1             -                  - READ_DML                  0             0          0
     65592      0            1 2009-04-07-18.39.42.552763      1           1             -                  - READ_DML                  0             0          0

2 record(s) selected.
You can also examine the agents that are working in a service class on behalf of an
activity, for example by using the WLM_GET_SERVICE_CLASS_AGENTS_V97
table function. The results of such a query might show a coordinator agent and a
subagent on database partition 0 and a subagent on database partition 1 operating
on behalf of an activity with a unit of work identifier of 1 and an activity identifier
of 5, with the coordinator agent information indicating that the request is a fetch
request.
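A sketch of such a query follows; the application handle 65592 is taken from the previous example, and the columns selected are illustrative of the agent information that the function returns:
SELECT APPLICATION_HANDLE,
       DBPARTITIONNUM,
       SUBSTR(AGENT_TYPE,1,11) AS AGENT_TYPE,
       SUBSTR(REQUEST_TYPE,1,12) AS REQUEST_TYPE,
       UOW_ID,
       ACTIVITY_ID
FROM TABLE(WLM_GET_SERVICE_CLASS_AGENTS_V97('', '', 65592, -2)) AS AGENTS
ORDER BY DBPARTITIONNUM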
You can use the following statement to obtain service class statistics, such as the
average activity lifetime. Passing an empty string for an argument for the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function means that the result
is not to be restricted by that argument. The value of the last argument,
dbpartitionnum, is -2 (a wildcard character), which means that data from all
database partitions is to be returned.
Note: Lifetime information is only returned for those service classes that are
defined with COLLECT AGGREGATE ACTIVITY DATA.
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
CAST(COORD_ACT_LIFETIME_AVG / 1000 AS DECIMAL(9,3)) AS AVGLIFETIME,
CAST(COORD_ACT_LIFETIME_STDDEV / 1000 AS DECIMAL(9,3)) AS STDDEVLIFETIME,
SUBSTR(CAST(LAST_RESET AS VARCHAR(30)),1,16) AS LAST_RESET
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97('', '', -2)) AS SCSTATS
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME, PART
SUPERCLASS_NAME SUBCLASS_NAME PART AVGLIFETIME STDDEVLIFETIME LAST_RESET
------------------- ------------------ ---- ----------- -------------- ----------------
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 0 691.242 34.322 2006-07-24-11.44
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 1 644.740 22.124 2006-07-24-11.44
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 2 612.431 43.347 2006-07-24-11.44
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 3 593.451 28.329 2006-07-24-11.44
By reviewing the average lifetime and number of completed activities, you can use
the output of the WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function to
obtain a rolled-up view of the workload on each database partition in the database.
Significant variations in the high watermarks and averages returned by a table
function might indicate a change in the workload on the system.
The following is an example of data aggregation that you can perform to identify
problems.
Assume that you have a workload called WL1. You can identify a situation in
which a large number of queries are running in the workload by showing the total
number of executing non-nested coordinator activities for the workload across the
whole system:
SELECT SUBSTR(WORKLOAD_NAME,1,22) AS WLNAME,
COUNT(*) AS TOTAL_EXE_ACT
FROM TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97('', '', -2)) AS APPS,
TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(APPS.APPLICATION_HANDLE, -2)) AS APPACTS
WHERE WORKLOAD_NAME = 'WL1' AND
APPS.DBPARTITIONNUM = APPS.COORD_PARTITION_NUM AND
ACTIVITY_STATE = 'EXECUTING' AND
NESTING_LEVEL = 0
GROUP BY WORKLOAD_NAME
WLNAME TOTAL_EXE_ACT
-------------------- -------------
WL1 5
A view that exposes the applications and activities queued by WLM thresholds
(called WLM_QUEUE_INFO in the following examples) can be used to easily
answer questions such as the following:
v How many applications or activities are currently queued by a WLM threshold?
v What is the order of the applications or activities in the WLM threshold queue?
Example 1
To count the number of applications queued by each queuing threshold, run the
following statement:
SELECT VARCHAR(THRESHOLD_NAME, 30) AS THRESHOLD, COUNT(*)
AS QUEUED_ENTRIES FROM WLM_QUEUE_INFO GROUP BY THRESHOLD_NAME
The following is a sample of the output obtained after running the preceding
statement:
THRESHOLD QUEUED_ENTRIES
------------------------------ --------------
TH1 3
1 record(s) selected.
Example 2
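For example, to list the queued entries in the order in which they entered the queue, you might run a statement like the following sketch against the same view:
SELECT QUEUE_ENTRY_TIME,
       APPLICATION_HANDLE,
       UOW_ID,
       ACTIVITY_ID
FROM WLM_QUEUE_INFO
ORDER BY QUEUE_ENTRY_TIME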
The following is a sample of the output obtained after running the preceding
statement:
QUEUE_ENTRY_TIME APPLICATION_HANDLE UOW_ID ACTIVITY_ID
-------------------------- -------------------- ----------- -----------
2009-11-09-18.08.32.583286 145 1 2
3 record(s) selected.
Three event monitors are available for you to use. Each event monitor serves a
different purpose:
Activity event monitor
This monitor captures information about individual activities in a service
class, workload, or work class or activities that violated a threshold. The
amount of data that is captured for each activity is configurable and
should be considered when you determine the amount of disk space and
the length of time required to keep the monitor data. A common use for
activity data is to use it as input to tools such as db2advis or to use access
plans (from the explain utility) to help determine table, column, and index
usage for a set of queries.
You can collect information about an activity by specifying COLLECT
ACTIVITY DATA for the service class, workload, or work action to which
such an activity belongs or a threshold that might be violated by such an
activity. The information is collected when the activity completes,
regardless of whether the activity completes successfully.
Note that if an activities event monitor is active when the database
deactivates, any backlogged activity records in the queue are discarded. To
ensure that you obtain all activities event monitor records and that none
are discarded, explicitly deactivate the activities event monitor first before
deactivating the database. When an activities event monitor is explicitly
deactivated, all backlogged activity records in the queue are processed
before the event monitor deactivates.
Threshold violations event monitor
This monitor captures information when a threshold is violated. It
indicates what threshold was violated, the activity that caused the
violation, and what action was taken when it occurred.
If you specify COLLECT ACTIVITY DATA for the threshold and an
activities event monitor is created and active, information is also collected
about activities that violate the threshold, but this information is collected
when the activity ends (either successfully or unsuccessfully).
You can obtain details about a threshold by querying the
SYSCAT.THRESHOLDS view.
Statistics event monitor
This monitor serves as a low-overhead alternative to capturing detailed
activity information by collecting aggregate data (for example, the number
of activities completed and average execution time). Aggregate data
includes histograms for a number of activity measurements including
lifetime, queue time, execution time and estimated cost. You can use
histograms to understand the distribution of values, identify outliers, and
compute additional statistics such as averages and standard deviations. For
example, histograms can help you understand the variation in lifetime that
users experience. The average lifetime alone does not reflect what a user actually
experiences.
Several monitoring options are available to access workload information: you can
use table functions to access real-time statistics and activity details, and you can
use event monitors to capture historical information, either as efficient aggregates
or as details about individual activities.
Unlike statement, connection, and transaction event monitors, the activity, statistics,
and threshold violations event monitors do not have event conditions (that is,
conditions specified on the WHERE keyword of the CREATE EVENT MONITOR
statement). Instead, these event monitors rely on the attributes of service classes,
workloads, work classes, and thresholds to determine whether these objects send
their activity information or aggregate information to these monitors.
You can use the wlmevmon.ddl script in the sqllib/misc directory to create and
enable three event monitors called DB2ACTIVITIES, DB2STATISTICS, and
DB2THRESHOLDVIOLATIONS. If necessary, modify the script to change the table space
or other parameters.
Example
Example: Identify queries with a large estimated cost using the statistics event
monitor: You suspect that your database workload occasionally includes large,
expensive queries, possibly due to the poor optimization of the queries themselves.
You want to identify these queries so that you can prevent them from consuming
excessive resources on your system, with a long-term goal of perhaps rewriting
some of the queries to improve performance. The statistics event monitor provides
you with a low-overhead way to measure the estimated cost of your queries which
you can then use to determine what the maximum acceptable estimated cost for a
query on your data server should be. A query that is poorly optimized is typically
distinguished by a large estimated cost that is many times larger than the
estimated cost of most other queries.
To get started, you need to create and activate a statistics event monitor and to
start collecting extended aggregate activity data for the service class where the
queries run:
CREATE EVENT MONITOR DB2STATISTICS
FOR STATISTICS WRITE TO TABLE
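The setup also requires activating the event monitor and enabling extended aggregate activity data collection on the service class where the queries run. Assuming that the queries run in the default user service class (as in the query that follows), the remaining statements might resemble this sketch:
SET EVENT MONITOR DB2STATISTICS STATE 1

ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
  COLLECT AGGREGATE ACTIVITY DATA EXTENDED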
Included with the different statistics written to the event monitor tables are the
estimated cost statistics of queries. To see them, you can query the service class
statistics table SCSTATS_DB2STATISTICS:
SELECT STATISTICS_TIMESTAMP,
COORD_ACT_EST_COST_AVG,
COST_ESTIMATE_TOP
FROM SCSTATS_DB2STATISTICS
WHERE SERVICE_SUPERCLASS_NAME = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS_NAME = 'SYSDEFAULTSUBCLASS'
STATISTICS_TIMESTAMP COORD_ACT_EST_COST_AVG COST_ESTIMATE_TOP
-------------------------- ---------------------- --------------------
2008-09-03-09.49.04.455979 169440 13246445
1 record(s) selected.
In the histogram, the value in the number_in_bin column for queries whose top is
greater than 2616055 is zero until top reaches 14160950, where the number_in_bin
becomes 3. These three queries are outliers and can be controlled with an
ESTIMATEDSQLCOST threshold that is set to trigger if the estimated cost of a
query exceeds a value below these outliers (for example, 10 000 000 timerons).
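Such a threshold could be defined along the following lines (a sketch; the threshold name STOPLARGEQUERIES is illustrative, and the boundary of 10000000 matches the maximum value shown in the threshold violation output later in this example):
CREATE THRESHOLD STOPLARGEQUERIES
  FOR DATABASE ACTIVITIES ENFORCEMENT DATABASE
  WHEN ESTIMATEDSQLCOST > 10000000
  COLLECT ACTIVITY DATA
  STOP EXECUTION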
After the end of the day, you can see what threshold violations occurred by
querying the threshold violations table:
SELECT THRESHOLDID,
SUBSTR(THRESHOLD_PREDICATE, 1, 20) PREDICATE,
TIME_OF_VIOLATION,
THRESHOLD_MAXVALUE,
THRESHOLD_ACTION
FROM THRESHOLDVIOLATIONS_DB2THRESHOLDVIOLATIONS
ORDER BY TIME_OF_VIOLATION, THRESHOLDID
THRESHOLDID PREDICATE TIME_OF_VIOLATION THRESHOLD_MAXVALUE THRESHOLD_ACTION
----------- -------------------- -------------------------- -------------------- ----------------
1 EstimatedSQLCost 2008-09-02-22.39.10.000000 10000000 Stop
1 record(s) selected.
The previous example showed how you can collect threshold information in an
event monitor table to confirm that activities with a large estimated cost are being
prevented from executing by a threshold. After seeing these threshold violations,
you want to determine what the SQL statement texts producing these large queries
are, so that you can use the explain facility to determine if an index is needed on
the tables being queried.
When you query the threshold violations table again after another business day
has passed, you can perform a join with the ACTIVITYSTMT_DB2ACTIVITIES
table to see the SQL statement text of any activity that violated the threshold:
SELECT THRESHOLDID,
SUBSTR(THRESHOLD_PREDICATE, 1, 20) PREDICATE,
TIME_OF_VIOLATION,
SUBSTR(STMT_TEXT,1,70) STMT_TEXT
FROM THRESHOLDVIOLATIONS_DB2THRESHOLDVIOLATIONS TV,
ACTIVITYSTMT_DB2ACTIVITIES A
WHERE TV.APPL_ID = A.APPL_ID
AND TV.UOW_ID = A.UOW_ID
AND TV.ACTIVITY_ID = A.ACTIVITY_ID
THRESHOLDID PREDICATE TIME_OF_VIOLATION STMT_TEXT
----------- -------------------- -------------------------- ----------------------------------------------------------------------
1 EstimatedSQLCost 2008-09-02-23.04.49.000000 select count(*) from syscat.tables,syscat.tables,syscat.tables
1 record(s) selected.
The following monitoring information is available for workloads. You can collect
workload statistics and information about activities that run in the workloads
using event monitors. For workloads, you can also obtain aggregate activity
statistics. You can access workload statistics and information about workload
occurrences in real time using table functions.
The following monitoring information is available for service classes. You can
collect statistics for service subclasses and service superclasses. For service
subclasses, you can also obtain aggregate activity and aggregate request statistics.
You can access service class statistics in real time using table functions.
The following monitoring information is available for work classes. You can collect
work class statistics and information about activities that are associated with a
particular work class. You can access work class statistics in real time using table
functions.
The following monitoring information is available for thresholds. You can obtain
information about threshold violations, the activities that caused the threshold
violations, and queuing statistics (for queuing thresholds). You can access queuing
threshold statistics in real time using table functions.
The following stored procedures are available for use with DB2 workload manager:
WLM_CANCEL_ACTIVITY(application_handle, uow_id, activity_id)
Use this stored procedure to cancel a running or queued activity. You
identify the activity by its application handle, unit of work identifier, and
activity identifier. You can cancel any type of activity. The application with
the cancelled activity receives the error SQL4725N.
WLM_CAPTURE_ACTIVITY_IN_PROGRESS(application_handle, uow_id,
activity_id)
Use this stored procedure to send information about an individual activity
that is currently executing to the activities event monitor. This stored
procedure sends the information immediately, rather than waiting until the
activity completes.
WLM_COLLECT_STATS()
Use this stored procedure to collect and reset statistics for DB2 workload
manager objects. All statistics tracked for service classes, workloads,
threshold queues, and work action sets are sent to the active statistics
event monitor (if one exists) and reset. If there is no active statistics event
monitor, the statistics are only reset, but not collected.
WLM_SET_CLIENT_INFO(client_userid, client_wrkstnname, client_applname,
client_acctstr, client_workload)
Use this procedure to set the client information attributes used at the data
server to record the identity of the application or end user currently using
the connection. In cases where middleware exists between applications and
the data server, you can use this procedure to explicitly identify the end
user on whose behalf the connection is currently working.
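For example, using the application handle, unit of work identifier, and activity identifier from the earlier example output (431, 11, and 1; illustrative values only), you could cancel that activity or send its information to an active activities event monitor as follows:
CALL WLM_CANCEL_ACTIVITY(431, 11, 1)

CALL WLM_CAPTURE_ACTIVITY_IN_PROGRESS(431, 11, 1)
Note that WLM_CAPTURE_ACTIVITY_IN_PROGRESS requires that an activities event monitor is created and active.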
Note that you can also obtain monitoring metrics through the statistics event
monitor. These are not discussed in this topic, which covers only those statistics
that are specific to DB2 workload manager.
When statistics are sent to the event monitor, the values in memory are reset to
prevent duplicate data from being collected on subsequent collection intervals.
Because the DB2 workload manager statistics table functions report the current
in-memory values, following a collection they report the reset values. The DB2
workload manager table functions report only a subset of the statistics. To view the
full set of statistics, you must collect the statistics and send them to a statistics
event monitor.
The following statistics are maintained on the given objects on each database
partition, regardless of the value of the COLLECT AGGREGATE ACTIVITY DATA
option specified for those objects when they are created or altered.
When you set the value of the COLLECT AGGREGATE ACTIVITY DATA option to
BASE for a service subclass, workload, or a work class (through a work action),
some of the following statistics are also collected, or the corresponding histograms
are generated for each database partition. Use the averages to quickly understand
where activities are spending most of their time (for example, queued or executing)
and the response time (lifetime). You can also use the averages to tune the
histogram templates. That is, you can compare a true average with the average
computed from a histogram, and if the average from the histogram deviates from
the true average, consider altering the histogram template for the corresponding
histogram, using a set of bin values that are more appropriate for your data.
Table 48. Statistics or histograms collected when COLLECT AGGREGATE ACTIVITY DATA
is set to BASE

Average request execution time (request_exec_time_avg)
Use this statistic to determine the arithmetic mean of the execution times for
requests associated with a service class.

Average coordinator activity lifetime (coord_act_lifetime_avg)
Use this statistic to determine the arithmetic mean of the lifetime for non-nested
coordinator activities associated with a service class, workload or a work class.

Average coordinator activity execution time (coord_act_exec_time_avg)
Use this statistic to determine the arithmetic mean of execution time for
non-nested coordinator activities associated with a service class, workload or a
work class.

Average coordinator activity queue time (coord_act_queue_time_avg)
Use this statistic to determine the arithmetic mean of the queue time for
non-nested coordinator activities associated with a service class, workload or a
work class.

Cost estimate top (cost_estimate_top)
Use this statistic to tune estimated cost thresholds.

Actual rows returned top (rows_returned_top)
Use the information to tune the actual rows returned thresholds.
When you set the value of the COLLECT AGGREGATE ACTIVITY DATA option to
EXTENDED for a service subclass, workload or a work class, the following system
statistics are collected or histograms are generated for each database partition for
the corresponding service class or work class (through a work action). Use the
averages to quickly understand the average rate of arrival of activities (arrival rate
is the inverse of inter-arrival time) and the expense of activities (estimated cost).
You can also use the averages to tune the histogram templates. That is, you can
compare a true average with the average computed from a histogram, and if the
average from the histogram deviates from the true average, consider altering the
histogram template for the corresponding histogram, using a set of bin values that
are more appropriate for your data. EXTENDED statistics are useful for more
detailed performance modelling. Also see “Workload management performance
modelling” on page 213.
Table 49. Statistics or histograms collected when COLLECT AGGREGATE ACTIVITY DATA
is set to EXTENDED

Coordinator activity estimated cost average (coord_act_est_cost_avg)
Use this statistic to determine the arithmetic mean of the estimated costs of
coordinator DML activities at nesting level 0 that are associated with this service
subclass, workload or work class since the last statistics reset.
The following table provides a reference for which activity statistics are collected
for each DB2 workload manager object and includes all aggregate statistics
available to you from both table functions and event monitors. Some statistics are
always collected for some objects. Other statistics are only collected when a
particular COLLECT AGGREGATE option is specified. For aggregate activity
statistics, if COLLECT AGGREGATE ACTIVITY DATA EXTENDED is specified, all
the BASE aggregate activity statistics are also collected.
Table 50. Aggregate activity statistics collection for DB2 workload manager objects

Service subclass
Always collected by default: act_remapped_in, act_remapped_out,
concurrent_act_top, coord_act_completed_total, coord_act_rejected_total,
coord_act_aborted_total
With COLLECT AGGREGATE ACTIVITY DATA BASE: agg_temp_tablespace_top,
coord_act_exec_time_avg, coord_act_lifetime_avg, coord_act_lifetime_top,
coord_act_queue_time_avg, coord_act_lifetime_stddev, coord_act_exec_time_stddev,
coord_act_queue_time_stddev, CoordActLifetime histogram, CoordActExecTime
histogram, CoordActQueueTime histogram, cost_estimate_top, rows_returned_top,
temp_tablespace_top
With COLLECT AGGREGATE ACTIVITY DATA EXTENDED: coord_act_est_cost_avg,
coord_act_interarrival_time_avg, CoordActEstCost histogram,
CoordActInterArrivalTime histogram

Workload
Always collected by default: wlo_completed_total
With COLLECT AGGREGATE ACTIVITY DATA BASE: coord_act_exec_time_stddev,
coord_act_queue_time_stddev, CoordActLifetime histogram, CoordActExecTime
histogram, CoordActQueueTime histogram, cost_estimate_top, rows_returned_top,
temp_tablespace_top

Work class (through a work action)
Always collected by default: act_total
With COLLECT AGGREGATE ACTIVITY DATA BASE: agg_temp_tablespace_top,
coord_act_lifetime_top, coord_act_lifetime_avg, coord_act_exec_time_avg,
coord_act_queue_time_avg, CoordActLifetime histogram, CoordActExecTime
histogram, CoordActQueueTime histogram, cost_estimate_top, rows_returned_top,
temp_tablespace_top
With COLLECT AGGREGATE ACTIVITY DATA EXTENDED: coord_act_est_cost_avg,
coord_act_interarrival_time_avg, CoordActEstCost histogram,
CoordActInterArrivalTime histogram

Threshold
N/A

Threshold queue
Always collected by default: queue_assignments_total, queue_size_top,
queue_time_total
When you set the value of the COLLECT AGGREGATE REQUEST DATA option
for a service subclass to BASE, the following statistics are maintained for the
service subclass.
Table 51. Statistics or histograms collected when COLLECT AGGREGATE REQUEST DATA
is set to BASE

Request execution time average (request_exec_time_avg)
Use this statistic to quickly understand the average amount of time that is spent
processing each request on a database partition and to help tune the histogram
template for the corresponding request execution time histogram.
The following table provides a reference for which request statistics are collected
for each DB2 workload manager object and includes all aggregate statistics
available to you from both table functions and event monitors. Some statistics are
always collected for some objects. Other statistics are only collected when the
COLLECT AGGREGATE REQUEST DATA option is specified.
As one exception to the rule, the activity interarrival time, estimated cost, and
queue time are all associated with the subclass in which an activity starts running,
rather than with the subclass in which the activity finishes running. Because a
remapped activity affects the statistics collection of both subclasses, a different
number of activities can be counted in an interarrival time, an estimated cost, or a
queue-time histogram than in a lifetime or execution-time histogram.
For example, consider an activity that starts running in service subclass A and later
is remapped to service subclass B, in which it finishes running. The estimated cost
of this activity is associated with service subclass A, but its lifetime is associated
with service subclass B. As a result, for subclass A, the estimated cost histogram
has one more element counted in it than the lifetime histogram has counted in it,
and for service subclass B, the lifetime histogram has one more element counted in
it than the estimated cost histogram has counted in it.
You can use two monitor elements to count the number of activities entering or
leaving a service subclass because of a remapping action: act_remapped_in and
act_remapped_out. The act_remapped_in and act_remapped_out monitor elements
count the number of activities for any given subclass at any partition that were
remapped into or out of that subclass, respectively.
Table 54. Effect of the COLLECT AGGREGATE DATA EXTENDED option on aggregate
statistics collection for subclasses involved in remapping

                     Starting subclass setting and ending subclass setting
Statistics           NONE and NONE   EXTENDED and NONE   NONE and EXTENDED   EXTENDED and EXTENDED
Lifetime             Not collected   Not collected       Collected           Collected
Queue time           Not collected   Collected           Not collected       Collected
Execution time       Not collected   Not collected       Collected           Collected
Inter-arrival time   Not collected   Collected           Not collected       Collected
Estimated cost       Not collected   Collected           Not collected       Collected
DB2 workload manager histograms have a fixed number of 41 bins. The 40th bin
contains the highest defined value for the histogram, and the 41st bin is for values
that are beyond the highest defined value. The following figure shows a histogram
of activity lifetimes that are plotted using a bar chart.
Figure 28. Histogram of activity lifetimes that are plotted using a bar chart
The activity lifetime histogram corresponds to the following data. Each count
represents the number of activities whose lifetimes (in milliseconds) are within the
range of the low bin value to the high bin value. For example, 156 activities had a
lifetime in the range of 68 milliseconds to 103 milliseconds.
Low Bin High Bin Count
0 1 0
1 2 0
2 3 0
3 5 0
5 8 0
You can use histograms for a number of different purposes. For example, you can
use them to see the distribution of values, use them to identify outlying values, or
use them to compute averages and standard deviations. See “Scenario: Tuning a
DB2 workload manager configuration when capacity planning information is
unavailable” on page 290 and “Example: Computing averages and a standard
deviation from histograms in a DB2 workload manager configuration” on page 195
for examples of how to use histograms to better understand and characterize your
workload.
Histograms are available for service subclasses, workloads, and work classes,
through work actions. Histograms are collected for these objects when you specify
one of the COLLECT AGGREGATE ACTIVITY DATA clauses when creating or
altering the objects. For work classes, histograms are also collected if you apply a
COLLECT AGGREGATE ACTIVITY DATA work action to the work class. The
following histograms are available: the activity lifetime (CoordActLifetime), activity
execution time (CoordActExecTime), activity queue time (CoordActQueueTime),
activity estimated cost (CoordActEstCost), and activity inter-arrival time
(CoordActInterArrivalTime) histograms.
Histogram templates
You can optionally specify a histogram template that is used to determine what a
particular histogram looks like, including the high bin value. A histogram template
is a unitless object, meaning that there is no predefined measurement unit assigned
to it.
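For example, the template used in this discussion can be created with the CREATE HISTOGRAM TEMPLATE statement. A high bin value of 3 000 000 is consistent with the bin boundaries listed below, so the statement would resemble the following sketch:
CREATE HISTOGRAM TEMPLATE TEMPLATE1 HIGH BIN VALUE 3000000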
This statement creates a histogram template with the following bin values:
Low Bin High Bin
0 1
1 2
2 3
3 4
4 6
6 9
9 13
13 19
19 28
28 41
41 60
60 87
87 127
127 184
184 268
268 389
389 565
565 821
821 1192
1192 1732
For example, to use the TEMPLATE1 histogram template for the existing activity
lifetime histogram of service subclass MYSUBCLASS under the service superclass
MYSUPERCLASS, issue the following statement:
ALTER SERVICE CLASS MYSUBCLASS UNDER MYSUPERCLASS
ACTIVITY LIFETIME HISTOGRAM TEMPLATE TEMPLATE1
After you commit the ALTER SERVICE CLASS statement, the activity lifetime
histogram that is collected for the MYSUBCLASS service subclass has high bin
values that are determined by the TEMPLATE1 histogram template instead of by
the SYSDEFAULTHISTOGRAM histogram template.
You can drop a histogram template by using the DROP HISTOGRAM TEMPLATE
statement.
Some DB2 service subclass, work class activity, and request statistics are collected
using histograms. All histograms have a set number of bins, and each bin
represents a range in which activities or requests are counted. The type of units
used for the bins depends on the type of histogram that you create. The histogram
template describes the high value of the second-to-last bin in the histogram, which
affects the values of all of the bins in the histogram. For more information on
histograms, see “Histograms in workload management” on page 189.
See “DDL statements for DB2 workload manager” on page 18 for more information
about prerequisites.
Suppose that you have a single-partition environment and histogram with the
following bins. There are more bins in the real histograms, but this example is
limited to eight bins to make the example simpler.
Bin 1 - 0 to 2 seconds
Bin 2 - 2 to 4 seconds
Bin 3 - 4 to 8 seconds
Bin 4 - 8 to 16 seconds
Bin 5 - 16 to 32 seconds
Bin 6 - 32 to 64 seconds
Bin 7 - 64 to 128 seconds
Bin 8 - 128 seconds to infinity
You can compute an approximation of the average by assuming that the average
response time for a query that falls into a bin with the range x to y is (x + y)/2.
You can then multiply this number by the number of queries that fell into the bin,
sum across all bins, then divide the sum by the total count. For the preceding
example, assume that the average response time for each bin is:
Bin 1 average lifetime = (0+2)/2 = 1
Bin 2 average lifetime = (2+4)/2 = 3
Bin 3 average lifetime = (4+8)/2 = 6
Bin 4 average lifetime = (8+16)/2 = 12
Bin 5 average lifetime = (16+32)/2 = 24
Bin 6 average lifetime = (32+64)/2 = 48
Bin 7 average lifetime = (64+128)/2 = 96
Assume that the following histogram was collected during the measurement
period:
Bin 1 Bin 2 Bin 3 Bin 4 Bin 5 Bin 6 Bin 7 Bin 8
count count count count count count count count
20 30 80 10 5 3 2 0
To calculate average lifetime, bin 8 must be empty. Bin 8 only exists to let you
know when you need to change the upper boundary of your range. For this
reason, you must specify the upper bound for the range.
You can approximate the average lifetime for database partition 1 as follows:
average lifetime = (20 x 1 + 30 x 3 + 80 x 6 + 10 x 12 + 5 x 24 + 3 x 48 + 2 x 96) / 150
= (20 + 90 + 480 + 120 + 120 + 144 + 192) / 150
= 1166 / 150
= 7.77 seconds
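If the histograms have been written to a statistics event monitor, you can compute the same approximation directly in SQL. The following is a minimal sketch that assumes a statistics event monitor named DB2STATISTICS and a histogram bin table with BOTTOM, TOP, and NUMBER_IN_BIN columns (number_in_bin and top are referred to earlier in this chapter; BOTTOM is an assumption), and that the unbounded last bin is empty, as the approximation requires:
SELECT HISTOGRAM_TYPE,
       SUM(((BOTTOM + TOP) / 2.0) * NUMBER_IN_BIN) / SUM(NUMBER_IN_BIN) AS APPROX_AVG
FROM HISTOGRAMBIN_DB2STATISTICS
WHERE HISTOGRAM_TYPE = 'CoordActLifetime'
GROUP BY HISTOGRAM_TYPE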
For example, assume that the database has two partitions, the histogram bin sizes
are as described above, and the histogram has the following data:
Database Bin 1 Bin 2 Bin 3 Bin 4 Bin 5 Bin 6 Bin 7 Bin 8
partition count count count count count count count count
1 20 30 80 10 5 3 2 0
2 1 5 20 20 4 0 0 0
From the combined histogram, you can calculate the overall lifetime average and
standard deviation in a similar way to how they were computed for a
single-partition environment:
Average lifetime = (21 x 1 + 35 x 3 + 100 x 6 + 30 x 12 + 9 x 24 + 3 x 48 + 2 x 96) / 200
= (21 + 105 + 600 + 360 + 216 + 144 + 192) / 200
= 1638 / 200
= 8.19 seconds
These scripts provide historical analysis functionality similar to the Query Patroller
historical analysis feature by using information captured by the workload
management activities event monitor. The workload management historical
analysis tool was written in Perl; you can use these scripts as is or you can modify
them to produce additional historical analysis reports to suit your needs.
The workload management historical analysis tool consists of two scripts, which
can be found in the samples/perl path of your installation directory:
v wlmhist.pl - generates historical data
v wlmhistrep.pl - produces reports from the historical data.
A DB2WlmHist.pm file, which contains common Perl routines used by the two
scripts, is included also.
Refer to the README_WLMHIST file found in the same file directory for more
information on how to set up and run the scripts.
The sample migration script provides an automated means to migrate your Query
Patroller setup to the workload manager environment. The Query Patroller
migration tool was written in Perl; you can use this script as is or you can modify
it to suit your needs.
The Query Patroller migration tool consists of one script, which can be found in
the samples/perl path of your installation directory:
v qpwlmmig.pl - generates two DDL script files:
– outputfile contains the DDL statements to create the WLM objects that most
closely reflect the current Query Patroller setup
– outputfile.DROP contains the DDL statements to drop the WLM objects created
by the first script
Refer to the README_QPWLMMIG file found in the same file directory for more
information on how to set up and run the script.
The QP to WLM migration sample script files are located in the following
directory:
Windows
install_path\sqllib\samples\perl
UNIX install_path/sqllib/samples/perl
Copy the sample files from this directory to a working directory prior to running
the sample programs. The sample program directories are typically read-only on
most platforms and some samples produce output files that require write
permission on the directory.
Restrictions
This script generates a file which contains the DDL statements to create the WLM
objects that most closely reflect the current QP setup. Some QP features do not
have a direct WLM mapping. The following QP features will either not generate
WLM DDL statements or generate WLM DDL statements that are commented out
in the output scripts to control their use, because there are some differences in the
behavior:
v min_cost_to_manage setting in QP submitter profiles: This setting has no
equivalent WLM setting. This setting will be ignored and no WLM DDL
statements will be generated for it.
v max_cost_allowed setting for a QP submitter profile: If the qpwlmmig.pl script is
run on DB2 V9.7 or later, this setting will cause an ESTIMATEDSQLCOST
threshold DDL statement to be added for the associated WLM workload object.
If the script is run on DB2 V9.5, this setting will be ignored and no WLM DDL
statements will be generated for it.
v max_queries_allowed setting in QP submitter profiles: In WLM, to restrict the
number of activities that can be run in a workload occurrence, you can use the
CONCURRENTWORKLOADACTIVITIES threshold. However, this is not a
queuing threshold. In addition, this threshold controls the number of activities
that can run concurrently in an occurrence of a workload, while the
max_queries_allowed setting in QP controls the number of DML statements that
can be run by a specific submitter profile. Therefore, if this setting is used in QP,
then a CONCURRENTWORKLOADACTIVITIES threshold will be generated but
will be commented out. You can uncomment it if required.
v include_applications setting in QP system settings: This setting specifies which
applications should be intercepted by QP. This setting will be ignored.
The generated output file contains the DDL that creates the WLM objects that are
set up to collect either activity or aggregate information. In order to capture this
information, create the WLM event monitors using the wlmevmon.ddl script
contained in the sqllib/misc directory.
To undo the changes made from running the outputfile file, run the generated
outputfile.DROP file:
db2 -tf outputfile.DROP
You can use statistics to understand the behavior of your system over time (for
example, what is the average lifetime of activities, how much time do activities
spend queued, what is the distribution of large compared to small activities, and
so on), set thresholds (for example, find the upper boundary for concurrent
activities), and detect problems (for example, detect whether the average lifetime
that users are experiencing is higher than normal). See “Statistics for DB2 workload
manager objects” on page 178 for a description of which statistics are collected for
each DB2 workload manager object.
After you perform the preceding steps, workload management statistics are written
to the statistics event monitor every wlm_collect_int minutes. Each record written
to the statistics event monitor has a STATISTICS_TIMESTAMP value and a
LAST_WLM_RESET value. The interval of time from LAST_WLM_RESET to
STATISTICS_TIMESTAMP defines the collection interval (that is, interval of time
over which the statistics in that record were collected).
If the wlm_collect_int parameter is set to 0 (the default) statistics are not sent to
the statistics event monitor automatically. You can manually send statistics to the
statistics event monitor for later historical analysis by using the
WLM_COLLECT_STATS stored procedure. When this procedure is invoked, it
performs the same actions that occur with an automatic statistics collection
interval. That is, the in-memory statistics are sent to the statistics event monitor
and the in-memory statistics are reset. If there is no active statistics event monitor,
the in-memory values are reset, but data is not collected. If you only want to reset
statistics, you can invoke the WLM_COLLECT_STATS procedure while there is no
active statistics event monitor.
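For example, to trigger a manual collection and reset, call the procedure with no arguments:
CALL WLM_COLLECT_STATS()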
Manual collection of statistics does not interfere with the automatic collection of
statistics. For example, assume that you have wlm_collect_int set to 60. Statistics
are sent to the statistics event monitor every hour. Now assume that the last time
the statistics were collected was 5:00 AM. You can invoke the
WLM_COLLECT_STATS procedure at 5:55 AM, which sends the in-memory values
of the statistics to the event monitor and resets the statistics. The next automatic
statistics collection still occurs at 6:00 AM, one hour after the last automated
collection. The collection interval is not affected by any manual collection and
resetting of statistics that occurs during the interval.
Note: Resetting statistics applies only to DB2 workload manager statistics; metrics
reported by monitoring interfaces will be collected, but not reset.
Four events will reset the in-memory statistics stored for each DB2 workload
manager object. (For a description of the statistics maintained for each object, see
“Statistics for DB2 workload manager objects” on page 178.)
v The WLM_COLLECT_STATS stored procedure is invoked. See “Collecting
workload management statistics using a statistics event monitor” on page 200
for details.
v The automatic DB2 workload manager statistics collection and reset process
controlled by the wlm_collect_int database configuration parameter causes a
collection and reset. See “Collecting workload management statistics using a
statistics event monitor” on page 200 for details.
v The database is reactivated. Every time the database is activated on a database
partition, the statistics for all DB2 workload manager objects on that database
partition are reset.
v A DB2 workload manager object is altered. In this case, only the in-memory
statistics for that object are reset.
You can determine the last time that the statistics were reset for a given DB2
workload manager object by using the statistics table functions and looking at the
timestamp in the LAST_RESET column. For example, to see the last time the statistics were reset for
the service subclass SYSDEFAULTSUBCLASS under the SYSDEFAULTUSERCLASS
service superclass, you could issue a query such as:
SELECT LAST_RESET
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97( 'SYSDEFAULTUSERCLASS',
'SYSDEFAULTSUBCLASS', -2)) AS T
All statistics table functions return the statistics that accumulated since the last
time that the statistics were reset. A statistics reset occurs when a database is
activated or reactivated, when you alter a DB2 workload manager object (only the
statistics for that object are reset), and when you call the WLM_COLLECT_STATS
stored procedure. Statistics are also reset automatically according to the time
period defined by the wlm_collect_int database configuration parameter, if you set
this parameter to a nonzero value.
Metrics are maintained for a number of DB2 database objects. These metrics reside
in memory and can be viewed in real-time using DB2 monitoring metrics table
functions, or the metrics can be collected and sent to an event monitor where they
can be viewed later for historical analysis.
You can obtain system-level monitoring metrics aggregated by service classes and
workloads using:
v The statistics event monitor (DETAILS_XML column in the wlstats and scstats
logical groups)
v The MON_GET_SERVICE_SUBCLASS,
MON_GET_SERVICE_SUBCLASS_DETAILS, MON_GET_WORKLOAD and
MON_GET_WORKLOAD_DETAILS table functions
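For example, a sketch of a query against the MON_GET_SERVICE_SUBCLASS table function that returns a few of these metrics aggregated by service subclass (the column selection is illustrative):
SELECT VARCHAR(SERVICE_SUPERCLASS_NAME, 30) AS SUPERCLASS,
       VARCHAR(SERVICE_SUBCLASS_NAME, 30) AS SUBCLASS,
       TOTAL_CPU_TIME,
       TOTAL_WAIT_TIME
FROM TABLE(MON_GET_SERVICE_SUBCLASS('', '', -2)) AS T
ORDER BY TOTAL_CPU_TIME DESC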
Monitoring metrics for requests to the data server, including those requests that are
part of an activity, are controlled by the mon_req_metrics database configuration
parameter and the COLLECT REQUEST METRICS clause on a service superclass.
Metrics are collected for a request if the database configuration parameter is set to
a value other than NONE, or if the request is submitted by a connection that is
mapped to a subclass under a superclass whose COLLECT REQUEST METRICS
setting is a value other than NONE.
The DB2 workload manager table functions and the snapshot monitor table
functions share the following fields. You can perform joins on these fields to derive
data that you need to perform diagnostic and performance-tuning activities. Note
that, unlike the snapshot table functions, the WLM table functions do not obtain
their information from the snapshot monitor; the information that they return is
therefore not available through the snapshot monitor.
When a threshold violation occurs for a threshold that has a REMAP ACTIVITY
action defined for it, a threshold violation record is optional. Whether or not a
threshold violation record is recorded is determined by the NO EVENT MONITOR
RECORD or LOG EVENT MONITOR RECORD clause of your CREATE
THRESHOLD statement.
You can optionally have detailed activity information (including statement text)
written to an active activities event monitor if the threshold violation is caused by
an activity. The activity information is written when the activity completes, not
when the threshold is violated. To specify that activity information should be
collected when a threshold is violated, use the COLLECT ACTIVITY DATA
keyword on the CREATE THRESHOLD or ALTER THRESHOLD statement, or on
the CREATE WORK ACTION SET or ALTER WORK ACTION SET statement.
Note: If you create any thresholds, you should create and activate a threshold
violations event monitor so you can monitor any threshold violations that
occur. A threshold violations event monitor does not have any impact unless
thresholds are violated.
This example shows how you can determine what remappings of a particular
activity occurred as the result of a threshold violation that included a REMAP
ACTIVITY action. To find the activities that were remapped, use a statement like
the following:
SELECT VARCHAR(APPL_ID, 30) AS APPLID,
UOW_ID,
ACTIVITY_ID,
VARCHAR(T.PARENTSERVICECLASSNAME,20) AS SERVICE_SUPERCLASS,
VARCHAR(T.SERVICECLASSNAME,20) AS FROM_SERVICE_SUBCLASS,
VARCHAR(S.SERVICECLASSNAME,20) AS TO_SERVICE_SUBCLASS
FROM THRESHOLDVIOLATIONS_TH1,
SYSCAT.SERVICECLASSES AS T,
SYSCAT.SERVICECLASSES AS S
WHERE SOURCE_SERVICE_CLASS_ID = T.SERVICECLASSID AND
DESTINATION_SERVICE_CLASS_ID = S.SERVICECLASSID AND
THRESHOLD_ACTION = 'REMAP'
ORDER BY APPLID, ACTIVITY_ID, UOW_ID, TIME_OF_VIOLATION ASC;
In this example, two remappings occurred for the activity submitted by the
application with the ID *N0.swalkty.080613140844, which is identified by activity ID
1 and unit of work (UOW) ID 1; the query returns two records.
The output is ordered by the time of threshold violation and shows that the
activity was remapped twice after it started executing. Although not shown in the
output, the initial service subclass the activity was mapped to is likely a high
priority service subclass, typical of a three-tiered configuration that permits shorter
running queries to complete more quickly. Because the activity did not complete
quickly enough in the high priority service subclass, it violated a threshold and
was remapped to a medium priority service subclass, and then remapped again to
a low priority service subclass after a second threshold violation later on.
To implement this email notification approach, you must have DB2 Version 9.7 or
later installed; the SMTP support that is used here has been available since DB2
Version 9.7.
Upon completion of this task, email notifications are sent if WLM threshold
violations occur during the 10 minutes since the threshold notification procedure
was last run. The DB2 Administrative Task Scheduler is used to schedule the
threshold notification procedure to run every 10 minutes in this example.
1. Update the smtp_server database configuration parameter by issuing the
following command:
UPDATE DB CONFIG USING SMTP_SERVER smtp_server_name
2. Create a write-to-table event monitor for threshold violations and write
violations to the TEST.THRESHOLDVIOLATIONS_T table by issuing the following
statement:
CREATE EVENT MONITOR T FOR THRESHOLD VIOLATIONS WRITE TO TABLE
THRESHOLDVIOLATIONS( TABLE TEST.THRESHOLDVIOLATIONS_T )
3. Activate the write-to-table event monitor T for threshold violations by issuing
the following statement:
SET EVENT MONITOR T STATE 1
4. Create a control table to track the last threshold for which an alert was
generated by issuing the following statement:
CREATE TABLE TEST.THRESHOLD_NOTIFY_CONTROL( LAST_NOTIFICATION TIMESTAMP )
5. Create a stored threshold notification procedure to generate threshold violation
messages. The following example procedure iterates over the threshold
violations table and builds a report listing all threshold violations that have
occurred since the last time the procedure was invoked. The report is emailed
using the DB2 SMTP procedures.
CREATE PROCEDURE TEST.NOTIFY_ON_THRESHOLD_VIOLATION()
LANGUAGE SQL
BEGIN
DECLARE NEWEST_VIOLATION TIMESTAMP;
DECLARE LAST_VIOLATION_SEEN TIMESTAMP;
DECLARE VIOLATION_TIME TIMESTAMP;
DECLARE VIOLATED_PREDICATE VARCHAR(128);
DECLARE NOT_FOUND INTEGER DEFAULT 0;
DECLARE SENDER VARCHAR(128) DEFAULT 'dba@example.com';        -- hypothetical address
DECLARE RECIPIENTS VARCHAR(128) DEFAULT 'oncall@example.com'; -- hypothetical address
DECLARE MESSAGE VARCHAR(8192);
DECLARE SUBJECT VARCHAR(128) DEFAULT 'WLM threshold violation report';
-- C1: the most recent violation written by the event monitor
DECLARE C1 CURSOR FOR
  SELECT MAX(TIME_OF_VIOLATION) FROM TEST.THRESHOLDVIOLATIONS_T;
-- C2: the time up to which violations have already been reported
DECLARE C2 CURSOR FOR
  SELECT LAST_NOTIFICATION FROM TEST.THRESHOLD_NOTIFY_CONTROL;
-- C3: the violations that are newer than the last reported one
DECLARE C3 CURSOR FOR
  SELECT THRESHOLD_PREDICATE, TIME_OF_VIOLATION
  FROM TEST.THRESHOLDVIOLATIONS_T
  WHERE LAST_VIOLATION_SEEN IS NULL
     OR TIME_OF_VIOLATION > LAST_VIOLATION_SEEN;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET NOT_FOUND = 1;
OPEN C1;
FETCH C1 INTO NEWEST_VIOLATION;
CLOSE C1;
IF ( NOT_FOUND = 0 AND NEWEST_VIOLATION IS NOT NULL ) THEN
  OPEN C2;
  FETCH C2 INTO LAST_VIOLATION_SEEN;
  CLOSE C2;
  IF ( NOT_FOUND = 1 ) THEN
    SET LAST_VIOLATION_SEEN = NULL;
  END IF;
  SET NOT_FOUND = 0;
  SET MESSAGE = '';
  OPEN C3;
  FETCH C3 INTO VIOLATED_PREDICATE, VIOLATION_TIME;
  WHILE ( NOT_FOUND = 0 ) DO
    -- append one line per violation to the report
    SET MESSAGE = MESSAGE || 'Threshold on ' || VIOLATED_PREDICATE ||
                  ' violated at ' || CHAR(VIOLATION_TIME) || CHR(10);
    FETCH C3 INTO VIOLATED_PREDICATE, VIOLATION_TIME;
  END WHILE;
  CLOSE C3;
  -- remember the newest violation that has been reported
  DELETE FROM TEST.THRESHOLD_NOTIFY_CONTROL;
  INSERT INTO TEST.THRESHOLD_NOTIFY_CONTROL VALUES ( NEWEST_VIOLATION );
  IF ( MESSAGE <> '' ) THEN
    -- one way to send the report: the UTL_MAIL module, which uses the
    -- smtp_server configuration parameter set in step 1 (assumed here)
    CALL UTL_MAIL.SEND( SENDER, RECIPIENTS, NULL, NULL, SUBJECT, MESSAGE );
  END IF;
  COMMIT;
END IF;
END@
6. Enable the DB2 Administrative Task Scheduler by running the following
command:
db2set DB2_ATS_ENABLE=YES
7. Schedule the threshold notification procedure to execute every 10 minutes. To
schedule the procedure, you must have execute privileges on the procedure.
The following is an example of how this can be done:
CALL SYSPROC.ADMIN_TASK_ADD(
'CHECK THRESHOLD VIOLATIONS EVERY 10 MINUTES',
NULL,
NULL,
NULL,
'0-59/10 * * * *',
'TEST',
'NOTIFY_ON_THRESHOLD_VIOLATION',
NULL,
NULL,
NULL )@
You can collect information about individual activities for service subclasses,
workloads, work classes (through work actions), and threshold violations. You
enable this collection by specifying the COLLECT ACTIVITY DATA keyword on
the CREATE or ALTER statements for these DB2 workload manager objects.
The COLLECT ACTIVITY DATA keyword also controls the amount of information
that is sent to the ACTIVITIES event monitor. If the keyword specifies WITH
DETAILS, statement information (such as statement text) is collected. If the
keyword specifies WITH DETAILS AND VALUES, data values are collected as
well.
You might not always know in advance that you will want to capture an activity.
For example, you might have a query that is taking a long time to run and you
want to collect information about it for later analysis. In this situation, it is too late
to specify the COLLECT ACTIVITY DATA keyword on the DB2 workload manager
objects, because the activity has already entered the system. In this situation, you
can use the WLM_CAPTURE_ACTIVITY_IN_PROGRESS stored procedure. The
WLM_CAPTURE_ACTIVITY_IN_PROGRESS stored procedure sends information
about an executing activity to the active ACTIVITIES event monitor. You identify
the activity to be collected using the application handle, unit of work identifier,
and activity identifier. Information about the activity is sent to the active
ACTIVITIES event monitor immediately when the procedure is invoked; you do
not need to wait for the activity to complete.
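For example, assuming that the activity of interest has an application handle of 1, a unit of work identifier of 2, and an activity identifier of 3 (hypothetical values), and that an ACTIVITIES event monitor is active, a sketch of the call is:
CALL WLM_CAPTURE_ACTIVITY_IN_PROGRESS(1, 2, 3)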
Activities imported into the Design Advisor must have been collected using the
COLLECT ACTIVITY DATA WITH DETAILS or COLLECT ACTIVITY DATA
WITH DETAILS AND VALUES options. The COLLECT ACTIVITY DATA
WITHOUT DETAILS option is not sufficient because it does not capture the
statement text, which the Design Advisor requires.
To import activity information from the activity event monitor tables into the
Design Advisor, run the db2advis command with the -wlm parameter, followed by
additional parameters:
1. The activities event monitor name
2. Optional: the workload or service class name
3. Optional: the start time and end time
For example, to import information about all the activities collected by the
DB2ACTIVITIES event monitor in the SAMPLE database, use the following
command:
db2advis -d SAMPLE -wlm DB2ACTIVITIES
Note: You can only import information from activities event monitor tables
through the Design Advisor command line interface.
All user activities are cancellable, including the load utility and stored procedures.
To cancel an activity:
1. Identify the activity that you want to cancel. You can use the
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 table function to
identify the activities running in an application. You can also use the
MON_GET_ACTIVITY_DETAILS_COMPLETE table function to view additional
details about a particular activity if the information in
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 is not sufficient to
identify the work that the activities are performing.
2. Cancel the activity using the WLM_CANCEL_ACTIVITY stored procedure. The
stored procedure takes the following arguments: application_handle, uow_id, and
activity_id. For an example of how to use this stored procedure, see “Scenario:
Identifying activities that are taking too long to complete” on page 281.
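To illustrate these two steps, the following sketch first lists the running activities for all applications (the column selection is illustrative) and then cancels the activity identified by the hypothetical values application handle 1, UOW ID 2, and activity ID 3:
SELECT APPLICATION_HANDLE, UOW_ID, ACTIVITY_ID, ACTIVITY_TYPE
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(CAST(NULL AS BIGINT), -2)) AS T

CALL WLM_CANCEL_ACTIVITY(1, 2, 3)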
First establish a set of criteria for what you would consider a rogue activity. For
example:
v An activity that runs in a service class intended for activities with a low
estimated cost but that runs for more than 1 hour
v An activity that returns an unusually large number of rows
v An activity that consumes an unusually high amount of temporary table space
Then create thresholds that describe these criteria and contain a COLLECT
ACTIVITY DATA WITH DETAILS action. When the threshold is violated,
information about the activity that violated the threshold is sent to the active
ACTIVITIES event monitor when the activity completes.
For example, to collect information about any database activity that runs for more
than 3 hours, create a threshold such as the following threshold:
CREATE THRESHOLD LONGRUNNINGACTIVITIES
FOR DATABASE ACTIVITIES ENFORCEMENT DATABASE
WHEN ACTIVITYTOTALTIME > 3 HOURS COLLECT ACTIVITY DATA WITH DETAILS
CONTINUE
The service class created for the threshold is assigned low agent and prefetch
priority because it is intended to be used for long running queries (the SQL
statement that creates it works on UNIX and Linux operating systems; on Windows
operating systems, substitute an agent priority of -6).
After your data server has performed some work, you can analyze the information
that is written to the threshold violations and activities event monitors. DML
activities also have their statement text and compilation environment information
written to the activities event monitor, so you can run DB2 explain on them to
further investigate the performance of the activity.
Inter-arrival time is the time between the arrival of one activity and the arrival of
the next activity. Service time is the time that an activity spends executing on the
system. For example, if you submit a query at time 0 seconds, it spends 2 seconds
in a queue, and it finishes at time 5 seconds, the service time is 5 - 2 = 3 seconds.
Service time assumes no other work executing on the system (that is, it is not the
observed execution time, but rather the time it would take to execute the activity
in isolation). The service time distribution can be approximated for DML activities
using the estimated cost in timerons, which considers both processor and I/O time
for an activity.
You can build a workload model for your system by measuring the inter-arrival
time distribution and the service time distribution of the activities on the system.
Inter-arrival time distributions and approximate service time distributions (using
estimated cost) can be obtained by using extended aggregate activity statistics for
service subclasses or work classes (using work actions) and a statistics event
monitor. These statistics are not collected by default. See “Statistics for DB2
workload manager objects” on page 178 for more information.
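For example, a sketch that enables the extended aggregate activity statistics (which include the inter-arrival time and estimated cost histograms) for the default user service subclass; substitute your own service class names as needed:
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA EXTENDED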
Assuming that you have an active activities event monitor called DB2ACTIVITIES,
you can create a work class for CALL statements that apply to the schema of the
MYSCHEMA.MYSLOWSTP stored procedure. Then you can create a work action
to map the CALL activity and all nested activities to a service class that has
activity collection enabled. The CALL activity, and any activities nested in it, are
sent to the event monitor. Following are examples of the DDL required to create
the DB2 workload manager objects:
CREATE SERVICE CLASS SC1;
CREATE WORKLOAD WL1 APPLNAME ('DB2BP') SERVICE CLASS SC1;
CREATE SERVICE CLASS PROBLEMQUERIESSC UNDER SC1 COLLECT ACTIVITY DATA ON COORDINATOR WITH DETAILS;
CREATE WORK CLASS SET PROBLEMQUERIES (WORK CLASS CALLSTATEMENTS WORK TYPE CALL ROUTINES IN SCHEMA MYSCHEMA);
CREATE WORK ACTION SET DATABASEACTIONS FOR SERVICE CLASS SC1 USING WORK CLASS SET PROBLEMQUERIES
(WORK ACTION CAPTURECALL ON WORK CLASS CALLSTATEMENTS MAP ACTIVITY WITH NESTED TO PROBLEMQUERIESSC);
After the MYSCHEMA.MYSLOWSTP stored procedure runs, you can issue the
following query to obtain the application handle, the unit of work identifier, and
the activity identifier for the activity:
SELECT AGENT_ID,
UOW_ID,
ACTIVITY_ID
FROM ACTIVITY_DB2ACTIVITIES
WHERE SC_WORK_ACTION_SET_ID = (SELECT ACTIONSETID
FROM SYSCAT.WORKACTIONSETS
WHERE ACTIONSETNAME = 'DATABASEACTIONS')
AND SC_WORK_CLASS_ID = (SELECT WORKCLASSID
FROM SYSCAT.WORKCLASSES
WHERE WORKCLASSNAME = 'CALLSTATEMENTS'
AND WORKCLASSSETID =
(SELECT WORKCLASSSETID FROM SYSCAT.WORKACTIONSETS WHERE ACTIONSETNAME
= 'DATABASEACTIONS'));
Assuming that the captured activity has an application handle of 1, a unit of work
identifier of 2, and an activity identifier of 3, the following results are generated:
AGENT_ID UOW_ID ACTIVITY_ID
===================== =========== ===========
1 2 3
Using this information, you can issue the following query against the
ACTIVITY_DB2ACTIVITIES and the ACTIVITYSTMT_DB2ACTIVITIES tables to
determine where the activity spent its time:
WITH RAH (LEVEL, APPL_ID, PARENT_UOW_ID, PARENT_ACTIVITY_ID,
UOW_ID, ACTIVITY_ID, STMT_TEXT, TIME_CREATED, TIME_COMPLETED) AS
(SELECT 1, ROOT.APPL_ID, ROOT.PARENT_UOW_ID,
ROOT.PARENT_ACTIVITY_ID, ROOT.UOW_ID, ROOT.ACTIVITY_ID,
ROOTSTMT.STMT_TEXT, ROOT.TIME_CREATED, ROOT.TIME_COMPLETED
FROM ACTIVITY_DB2ACTIVITIES ROOT, ACTIVITYSTMT_DB2ACTIVITIES ROOTSTMT
WHERE ROOT.APPL_ID = ROOTSTMT.APPL_ID AND ROOT.AGENT_ID = 1
AND ROOT.UOW_ID = ROOTSTMT.UOW_ID AND ROOT.UOW_ID = 2
AND ROOT.ACTIVITY_ID = ROOTSTMT.ACTIVITY_ID AND ROOT.ACTIVITY_ID = 3
UNION ALL
SELECT PARENT.LEVEL +1, CHILD.APPL_ID, CHILD.PARENT_UOW_ID,
CHILD.PARENT_ACTIVITY_ID, CHILD.UOW_ID,
CHILD.ACTIVITY_ID, CHILDSTMT.STMT_TEXT, CHILD.TIME_CREATED,
CHILD.TIME_COMPLETED
FROM RAH PARENT, ACTIVITY_DB2ACTIVITIES CHILD,
ACTIVITYSTMT_DB2ACTIVITIES CHILDSTMT
WHERE CHILD.APPL_ID = CHILDSTMT.APPL_ID AND
CHILD.UOW_ID = CHILDSTMT.UOW_ID AND
CHILD.ACTIVITY_ID = CHILDSTMT.ACTIVITY_ID AND
CHILD.APPL_ID = PARENT.APPL_ID AND
CHILD.PARENT_UOW_ID = PARENT.UOW_ID AND
CHILD.PARENT_ACTIVITY_ID = PARENT.ACTIVITY_ID AND
PARENT.LEVEL < 64)
SELECT UOW_ID, ACTIVITY_ID, SUBSTR(STMT_TEXT, 1, 40) AS STMT_TEXT,
TIMESTAMPDIFF(2, CHAR(TIME_COMPLETED - TIME_CREATED)) AS LIFE_TIME_SECONDS
FROM RAH
ORDER BY UOW_ID, ACTIVITY_ID
The results indicate that the stored procedure is spending most of its time querying
the MYHUGETABLE table. Your next step is to investigate what changes to the
MYHUGETABLE table might cause queries running against it to slow down.
The point of integration between DB2 workload manager and operating system
workload managers is the DB2 service class. You create a mapping between a DB2
service class and an operating system workload manager class when you define a
DB2 service class by using the OUTBOUND CORRELATOR option of the CREATE
SERVICE CLASS or the ALTER SERVICE CLASS statement.
If the outbound correlator is set, all threads in the DB2 service class are associated
with the operating system workload manager using the outbound correlator when
the next activity begins.
Implementing AIX WLM controls may not be needed to meet your performance
objectives, but even if you do not need to exercise AIX WLM, the operating system
statistics provided by AIX WLM per AIX class are often useful for monitoring and
tuning efforts.
Use a 1:1 mapping of DB2 service classes to AIX Workload Manager service classes
to take advantage of AIX WLM processor controls. By having a 1:1 mapping
between DB2 service classes and AIX Workload Manager service classes, you can
adjust the AIX processor resource for each DB2 service class individually to meet
your business priority goals.
The following figure shows the integration of the DB2 workload manager with the
AIX Workload Manager. Note the 1:1 mapping between each DB2 service class and
AIX Workload Manager service class at the service superclass and service subclass
levels.
Figure 29. Integration of the DB2 workload manager with the AIX Workload Manager
In situations where the DB2 environment consists of multiple databases and DB2
instances, several levels might be candidates for resource control. Because the AIX
Workload Manager supports a two-level hierarchy, that is, superclass and subclass,
only two levels of a DB2 environment can be mapped to AIX Workload Manager
classes at any time. The following figure shows one way to achieve a 1:1 mapping
with multiple databases, each with multiple superclasses. Here, each database has
its own AIX Workload Manager superclass and each DB2 service superclass is
mapped to an AIX Workload Manager subclass.
Figure 30. DB2 service classes mapped to AIX classes (with DB2 service superclasses only)
An alternative configuration is to map each DB2 service superclass to its own AIX
Workload Manager superclass, which results in four superclasses in this example.
In this situation, the database level of resource control is represented explicitly in
the AIX Workload Manager service class definitions.
The following figure shows one way to achieve the 1:1 mapping in the situation
where you have multiple databases, each with service superclasses and service
subclasses. Here, each database corresponds to an AIX superclass and each DB2
service subclass is mapped to an AIX Workload Manager subclass. The DB2 service
superclass is not shown explicitly in the AIX Workload Manager service class
definitions.
Figure 31. DB2 service classes mapped to AIX Workload Manager classes (with DB2 service
subclasses)
Mapping between DB2 service classes and AIX Workload Manager classes is
specified for the DB2 service class using the OUTBOUND CORRELATOR keyword
of the CREATE SERVICE CLASS or the ALTER SERVICE CLASS statements.
The steps for setting up the AIX Workload Manager classes with the DB2 data
server are:
1. Create the DB2 service superclasses and service subclasses, and specify the
OUTBOUND CORRELATOR tags.
2. Create the corresponding AIX classes.
3. Create the associated AIX Workload Manager rules files to contain the DB2
workload manager to AIX Workload Manager mappings using the
OUTBOUND CORRELATOR tags under the tag columns.
4. Start the AIX Workload Manager.
5. If required, set this AIX Workload Manager configuration as active.
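To illustrate step 1, the following sketch creates a service superclass and subclass with hypothetical names and outbound correlator tags; the correlator values must match the application tags that you place in the tag columns of the AIX Workload Manager rules file:
CREATE SERVICE CLASS HIGHPRIO OUTBOUND CORRELATOR '_HIGHPRIO'
CREATE SERVICE CLASS HIGHPRIO_SUB UNDER HIGHPRIO OUTBOUND CORRELATOR '_HIGHPRIO_SUB'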
When a thread joins a DB2 service class, the DB2 data server calls the appropriate
AIX Workload Manager API to associate the thread to the corresponding AIX
service class. The DB2 data server sends the thread's target AIX service class to the
AIX Workload Manager by passing it the application tag set in the OUTBOUND
CORRELATOR parameter.
You must ensure that the AIX Workload Manager is properly installed, configured,
and active. If the DB2 data server cannot communicate with the AIX Workload
Manager, a message is logged to the db2diag log files and DB2 administrator log.
The database activity continues.
The DB2 data server cannot detect whether the OUTBOUND CORRELATOR value
that it passes to the AIX Workload Manager is recognized by the AIX Workload
Manager. You must verify that the value specified for the DB2 service class
matches the application tags that map DB2 threads to the AIX service classes. If the
OUTBOUND CORRELATOR value is not recognized by the AIX Workload
Manager, the database activity continues to execute.
The AIX Workload Manager can be used to control the amount of processor
resource allocated to each service class. Options include setting a minimum,
maximum, or relative proportion share of processor resource for each service class.
When integrating the AIX Workload Manager with DB2 Workload Management,
only processor resource allocation is supported. You should not set memory and
I/O settings for the AIX classes. DB2 database-level memory is shared among all
agents from different DB2 service classes, so you cannot divide memory allocation
between different service classes. AIX-level I/O control does not support the DB2
engine threaded model. To control I/O, you can use the prefetcher priority
attribute of a DB2 service class to differentiate I/O priorities between different DB2
service classes.
If you use AIX to control the amount of processor resource allocated to a service
class, do not also change the agent priority setting for that DB2 service class. Use
only one of these mechanisms to govern the access to processor resource. You
cannot set both the AGENT PRIORITY and the OUTBOUND CORRELATOR value
for a service class. See “Agent priority of service classes” on page 74 for more
information.
To make use of Linux workload management support, you require a Linux kernel
version 2.6.26 or later and the libcgroup library package.
You should use a 1:1 mapping between DB2 service classes and Linux classes,
which permits you to adjust the Linux processor shares assigned to activities in
each DB2 service class individually according to business priority. It is important
that you associate every DB2 service class with a Linux WLM class, either by
setting an outbound correlator for each service superclass and subclass, or through
inheritance from the parent service class for subclasses. This includes the default
SYSDEFAULTSYSTEMCLASS, SYSDEFAULTMAINTENANCECLASS and
SYSDEFAULTUSERCLASS service classes.
The following figure shows how two DB2 service subclasses under the same user
defined service superclass can get mapped 1:1 to Linux subclasses under a
common superclass. In this example, the work identified and assigned by two
workloads for each DB2 service subclass is subject to the processor resource
controls imposed by the corresponding Linux subclasses (_DB2_SUBCLASSA,
_DB2_SUBCLASSB). Also shown are three Linux classes that correspond to the
default DB2 workload manager service classes (_DB2_DEF_USER,
_DB2_DEF_SYSTEM, _DB2_DEF_MAINT). If you integrate DB2 workload manager
with Linux workload management, you should always create these additional
Linux classes to match the default DB2 service classes. To avoid any bottleneck, the
Linux class corresponding to the DB2 default system class should receive more
processor shares than any other Linux class that DB2 activities map to, while the
Linux class corresponding to the default maintenance class should receive fewer
processor shares.
Figure 32. Integration of the DB2 workload manager with Linux workload management
The steps for integrating DB2 workload manager with Linux workload
management, which runs as an operating system service, are as follows:
1. Define the Linux classes, class permissions, and processor shares by editing the
/etc/cgconfig.conf control groups configuration file. What Linux classes you
create depends on the conditions dictated by your business priorities for the
work your data server performs. If you want to apply processor resource based
on the source of certain work, for example, create a Linux class to match the
DB2 service class that work is going to be assigned to by the workload
identifying the work. Define an entry for each Linux class corresponding to the
DB2 service class to be created that you want to use for the mapping. The
following sections must be provided in the /etc/cgconfig.conf configuration
file:
v group: The Linux class name. For example, if you specify group _class1, you
create a superclass _class1. If you specify group _class1/_subclass1, you
create the subclass _subclass1 under the superclass _class1.
– perm: The permissions section that determines who can control which
threads are assigned to a Linux class and who can change the processor
shares of classes in the /etc/cgconfig.conf configuration file.
– cpu: The processor shares section that assigns the relative processor
shares to the Linux class. For example:
cpu
{
cpu.shares = 1024;
}
2. Start the Linux workload management service daemon with the service
cgconfig start command, then start your DB2 data server with the db2start
command.
3. To map a DB2 service class to one of the Linux classes, include the Linux class
name in the OUTBOUND CORRELATOR clause when you create or alter the
service class, which associates threads from the DB2 service class with the
external Linux class.
4. If you want to find out what threads are assigned to a particular Linux class,
you can use the cat command on the /cgroup/class_name/tasks file, where
class_name represents the name of the Linux class you are interested in. All
threads that are not mapped to a user-defined Linux class are assigned to the
Linux default class, which you can find at MOUNTPOINT/sysdefault, where
MOUNTPOINT is defined in the cgconfig.conf configuration file.
5. To add or remove Linux classes, you must stop the Linux workload
management service with the service cgconfig stop command, make your
changes, and then restart the service. Note that stopping the service affects the
entire system, because all tasks are moved to the default class. If you used the
/etc/init.d/cgred script to start the service daemon, issue /etc/init.d/cgred
stop to stop it.
For the integration with DB2 workload manager to work, you must ensure that the
Linux workload management service is properly installed, configured, and active.
If the DB2 data server cannot communicate with the Linux workload management
service, a message is logged to the db2diag log files and the DB2 administrator
log, and the database activity continues.
The DB2 data server cannot detect whether the outbound correlator that it passes
to external workload managers is recognized by Linux workload management. You
must verify that the OUTBOUND CORRELATOR value specified for a DB2 service
class matches the Linux class name so that DB2 threads are mapped to the Linux
class. If an outbound correlator is not recognized, database activities will continue
to execute.
Example
The following example illustrates how you can make use of Linux workload
management processor controls by integrating with DB2 workload manager. In this
example, we create two user-defined DB2 service classes, one for batch applications
(BATCHAPPS) and one for online applications (ONLINEAPPS). For simplicity, this
example does not show the default service classes, which should be included in an
implementation that creates the recommended 1:1 mapping between DB2 service
classes and Linux classes. Because response time is critical for the online
applications, we want the ONLINEAPPS service class to receive three times the
amount of processor shares relative to work that runs in the Linux default class (3
x 1024 = 3072 shares). Batch applications have a lower business priority, and the
BATCHAPPS class should be assigned half the processor resource of work that
runs in the Linux default class (1024 / 2 = 512 shares). All other work on the
system will run in the Linux default class. Note that this example does not create
Linux classes corresponding to the three default DB2 workload manager service
classes.
To create this setup, first create the two corresponding Linux classes _BATCHAPPS
and _ONLINEAPPS and set their relative processor shares by editing the
/etc/cgconfig.conf configuration file. After editing, the file contains the following
two entries, one for each Linux class:
# Superclass ONLINEAPPS
group _ONLINEAPPS
{
perm
{
task
{
uid = db2inst1;
gid = db2iadm1;
}
admin
{
uid = db2inst1;
gid = db2iadm1;
}
}
cpu
{
# 3 x 1024 = 3072 shares
cpu.shares = 3072;
}
}
# Superclass BATCHAPPS
group _BATCHAPPS
{
perm
{
task { uid = db2inst1; gid = db2iadm1; }
admin { uid = db2inst1; gid = db2iadm1; }
}
cpu
{
# 1024 / 2 = 512 shares
cpu.shares = 512;
}
}
The absolute processor time in percent assigned to each Linux class as processor
shares is as follows:
Table 57. Processor shares and absolute processor time assigned to Linux classes

Linux class      Shares                 Absolute processor time in percent
Default class    1024 (default)         1024 / 4608 = 22%
_ONLINEAPPS      1024 x 3 = 3072        3072 / 4608 = 67%
_BATCHAPPS       1024 x ½ = 512         512 / 4608 = 11%

Total = 1024 + 3072 + 512 = 4608 shares
Once the Linux WLM classes are created, you can start the Linux workload
management service:
service cgconfig start
Next, create the associated DB2 service classes with the following statements:
DB2 CREATE SERVICE CLASS BATCHAPPS OUTBOUND CORRELATOR '_BATCHAPPS'
DB2 CREATE SERVICE CLASS ONLINEAPPS OUTBOUND CORRELATOR '_ONLINEAPPS'
To find out which threads are running in a Linux class, issue the cat command. For
the business-critical _ONLINEAPPS Linux class, the command and its output look
as follows; you can see that there are six threads running in this Linux class:
cat /cgroup/_ONLINEAPPS/tasks
1056
1087
1107
985
1036
1205
These exercises provide some guidance for using DB2 workload manager features,
which you can adapt for your own purposes. Note, however, that the initial
configuration you choose for your own data server may differ and should be based
on your specific workload management objectives.
This tutorial is designed to be run against the SAMPLE database and, unless noted
otherwise, requires DBADM or WLMADM authority (or SQLADM authority if
only the COLLECT ACTIVITY DATA clause is specified). You should also start the
instance and activate the SAMPLE database before continuing:
db2start
db2 activate db sample
Some of the command and query statements shown in these exercises are quite
long. You can find most of these statements in the text file wlm-tutorial-
steps.txt, which you can copy from when working through the exercises. The
scripts representing the workloads that are required for the different exercises are
also included.
There are two separate features of monitoring that are demonstrated by this
exercise:
1. The ability to collect aggregate statistics for all activities that run in a service
class. Aggregate activity statistics provide an inexpensive way of looking at
work in a service class as a whole. They show information like the number of
activities that ran in the service class, and the average lifetime of those
activities.
2. The ability to capture information about individual activities. Activity
information can be useful when investigating the performance or behavior of a
particular activity.
Connect to the database and create and enable event monitors for activities and
statistics.
CONNECT TO SAMPLE
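The event monitor DDL is not shown above; a minimal sketch, assuming write-to-table event monitors named DB2ACTIVITIES and DB2STATISTICS with the default target tables (these are the names used in the remaining exercises), is:
CREATE EVENT MONITOR DB2ACTIVITIES FOR ACTIVITIES WRITE TO TABLE
CREATE EVENT MONITOR DB2STATISTICS FOR STATISTICS WRITE TO TABLE
SET EVENT MONITOR DB2ACTIVITIES STATE 1
SET EVENT MONITOR DB2STATISTICS STATE 1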
For this exercise, you will specify the WITH DETAILS clause so that the statement
text information is captured.
ALTER WORKLOAD SYSDEFAULTUSERWORKLOAD
COLLECT ACTIVITY DATA ON COORDINATOR WITH DETAILS
In this example activity data is collected for the default user workload. This results
in information about all user activities being collected since no other user defined
workloads are currently active. This would be too expensive in a production
environment. A better approach would be to isolate the activities of interest using a
specific user defined workload or service class and apply the COLLECT ACTIVITY
DATA clause to that workload or service class only.
Enable collection of aggregate activity statistics for the default subclass under the
default user service class using the COLLECT AGGREGATE ACTIVITY DATA
clause. When this clause is specified, aggregate statistics will be maintained in
memory for the corresponding service class (for example, statistics such as average
activity lifetime). The in-memory statistics can be viewed using the service subclass
statistics table function, or can be collected and sent to the active statistics event
monitor for later analysis.
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA BASE
Additional Information: There is a set of statistics collected by default for all DB2
workload manager objects. The COLLECT AGGREGATE ACTIVITY DATA clause
enables collection of a number of additional optional statistics, such as the activity
lifetime histogram.
Run some activities, which will result in statistics being updated and the activities
being collected.
db2 -o -tvf work1.db2
db2 -o -tvf work2.db2
You can view the in-memory service class statistics using the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function. For example:
CONNECT TO SAMPLE
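The query itself does not appear above; a sketch that returns the columns shown in the output below, passing empty strings and -2 to request all service classes on all database partitions, is:
SELECT VARCHAR(SERVICE_SUPERCLASS_NAME, 30) AS SUPERCLASS,
       VARCHAR(SERVICE_SUBCLASS_NAME, 30) AS SUBCLASS,
       LAST_RESET,
       COORD_ACT_COMPLETED_TOTAL,
       COORD_ACT_REJECTED_TOTAL,
       COORD_ACT_ABORTED_TOTAL,
       COORD_ACT_LIFETIME_AVG
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97('', '', -2)) AS T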
The output from this query will look something like the following:
SUPERCLASS SUBCLASS LAST_RESET
COORD_ACT_COMPLETED_TOTAL COORD_ACT_REJECTED_TOTAL COORD_ACT_ABORTED_TOTAL
COORD_ACT_LIFETIME_AVG
------------------------------ ------------------------------ -------------------------- --
----------------------- ------------------------ ----------------------- ------------------
------
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 2007-07-18-16.03.51.752190
74 0 0 +1.40288000000000E+002
1 record(s) selected.
Additional Information: If there is no active statistics event monitor, you can still
use the WLM_COLLECT_STATS procedure to reset the in-memory statistics, but
the current in-memory values will be lost. It is possible to automate workload
management statistics collection using the WLM_COLLECT_INT database
configuration parameter. If you set this parameter to a nonzero value, workload
management statistics will be collected automatically every wlm_collect_int
minutes (as if you manually invoked the WLM_COLLECT_STATS procedure every
wlm_collect_int minutes).
Every time statistics are sent to the event monitor, a statistics record will be created
for each DB2 workload manager object. Note the two timestamps
LAST_WLM_RESET and STATISTICS_TIMESTAMP. The interval of time from
LAST_WLM_RESET to STATISTICS_TIMESTAMP indicates the period of time over
which the statistics in that record were collected. The STATISTICS_TIMESTAMP
indicates when the statistics were collected. Note that the average lifetime for
activities on the coordinator is -1 for the default system and maintenance service
classes. The average activity lifetime statistic is only maintained for a service class
if aggregate activity statistics are enabled using the COLLECT AGGREGATE
ACTIVITY DATA clause.
Information about every individual activity associated with the default user
workload was also collected by the activities event monitor, due to the
specification of the COLLECT ACTIVITY DATA clause on the default workload in
step 2. You can look at this activity information using a query such as the
following:
SELECT VARCHAR(A.APPL_NAME, 15) as APPL_NAME,
VARCHAR(A.TPMON_CLIENT_APP, 20) AS CLIENT_APP_NAME,
VARCHAR(A.APPL_ID, 30) as APPL_ID,
A.ACTIVITY_ID,
A.UOW_ID,
VARCHAR(S.STMT_TEXT, 300) AS STMT_TEXT
FROM ACTIVITY_DB2ACTIVITIES AS A,
...
CALL WLM_COLLECT_STATS()
Service classes are the primary point of resource control for database activities.
They are also useful for monitoring. For example, you can collect statistics for
activities in a particular service class to determine whether the performance goals
for that service class are being met. By default, three default service classes
(SYSDEFAULTSYSTEMCLASS, SYSDEFAULTMAINTENANCECLASS, and
SYSDEFAULTUSERCLASS) are created for each database. If no user defined
service classes are created, user activities are run under the default user service
class (SYSDEFAULTUSERCLASS).
A workload is an entity that groups one or more units of work based on criteria
such as system user ID, session user ID, etc. Workloads provide a means of
assigning work to a service class so that the work can later be managed. A default
user workload (SYSDEFAULTUSERWORKLOAD) and a default administration
workload (SYSDEFAULTADMWORKLOAD) are created for each database. If no
user defined workloads are created, all user activities are associated with the
default user workload.
There are four separate features that are demonstrated in this exercise:
v How to create a service class.
v How to create a workload.
v How to examine basic workload statistics.
v How to collect activity information for activities run under an individual
workload.
First examine where activities are executed if there is no user defined service class
or workload. All DB2 activities are assigned to a workload and run in a service
class. If no user defined service classes are created, activities run in the default
subclass (SYSDEFAULTSUBCLASS) under the default user service class
(SYSDEFAULTUSERCLASS) and if no user defined workloads are created,
activities run under the default user workload (SYSDEFAULTUSERWORKLOAD).
Run the work1.db2 and work2.db2 scripts and then examine the in-memory
statistics for the SYSDEFAULTSUBCLASS of SYSDEFAULTUSERCLASS using the
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function.
db2 -o -tvf work1.db2
db2 -o -tvf work2.db2
CONNECT TO SAMPLE
3 record(s) selected.
Note all the activities are run in the SYSDEFAULTUSERCLASS service super class.
2 record(s) selected.
Note that there is one completed workload occurrence for each of the scripts
(work1.db2 and work2.db2), as well as a workload occurrence for the connection
that was used to execute the previous command.
Create a service class and then create a workload such that all activities run from
the work1.db2 script get mapped to the newly created service class. When CLP
executes a script, the CURRENT CLIENT_APPLNAME special register value is set
to "CLP script name".
CREATE SERVICE CLASS work1_sc
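The matching workload definition is not shown above; a sketch that follows the same pattern used later for the workth_wl workload is:
CREATE WORKLOAD work1_wl
CURRENT CLIENT_APPLNAME('CLP work1.db2')
SERVICE CLASS work1_sc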
Note that one workload occurrence completed under WORK1_WL, which
corresponds to the work1.db2 script. One workload occurrence completed under
SYSDEFAULTUSERWORKLOAD, which corresponds to the work2.db2 script.
You might also see a second completed workload occurrence for the
SYSDEFAULTUSERWORKLOAD; this is the connection that was used to call the
WLM_COLLECT_STATS procedure. WLM_COLLECT_STATS is an asynchronous
procedure, so the workload occurrence that called it might complete before the
statistics are actually collected and therefore be included in the count.
Note the activities that completed under the WORK1_SC due to the WORK1_WL
workload mapping.
Create a second service class and then create a workload such that all activities run
from the work2.db2 application get mapped to the newly created service class. In
addition, set up the workload so that it will collect some activity data. For this
example, we just collect activity data without any additional details or values.
CREATE SERVICE CLASS work2_sc
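The matching workload definition is likewise not shown; a sketch that maps the work2.db2 script and collects basic activity data is:
CREATE WORKLOAD work2_wl
CURRENT CLIENT_APPLNAME('CLP work2.db2')
SERVICE CLASS work2_sc
COLLECT ACTIVITY DATA ON COORDINATOR WITHOUT DETAILS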
Use the WLM_COLLECT_STATS stored procedure to reset the statistics again and
run the work1.db2 and work2.db2 scripts again.
CALL SYSPROC.WLM_COLLECT_STATS()
Note that this time both workload definitions have a completed workload
occurrence, one for each script.
You may or may not see a completed workload occurrence for the
SYSDEFAULTUSERWORKLOAD, depending on whether the workload occurrence
over which the call to the WLM_COLLECT_STATS procedure was submitted closes
before the statistics are collected.
Note that this time the work2_sc service super class has some activities run under
it because of the WORK2_WL mapping. The one activity under
SYSDEFAULTUSERCLASS is the query that was previously run on
WLM_GET_WORKLOAD_STATS_V97.
Query the activity table for information on the activities that have been run. Note
that only the activities from the work2.db2 script have been collected because only
the work2_wl workload definition has the COLLECT ACTIVITY DATA attribute
specified.
SELECT SUBSTR(WORKLOADNAME, 1, 20) WL_DEF_NAME,
SUBSTR(APPL_NAME, 1, 20) APPL_NAME,
SUBSTR(ACTIVITY_TYPE, 1, 10) ACT_TYPE
FROM SYSCAT.WORKLOADS, ACTIVITY_DB2ACTIVITIES
WHERE WORKLOADID = WORKLOAD_ID
Now that you have isolated the activities issued by these two scripts into separate
service classes, you can assign resources to the service classes or monitor the
activities that run in those service classes. A few examples: If the work performed
by the script work2.db2 is more important than the work performed by the script
work1.db2, you could increase the priority of agents running in the WORK2_SC
service class using a statement such as the following.
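A sketch of such a statement, targeting the default subclass under WORK2_SC and assuming a UNIX or Linux system (where a negative AGENT PRIORITY value raises the priority of the agents relative to the default; on Windows operating systems a positive value would be used instead), is:
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER work2_sc AGENT PRIORITY -10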
If you wanted to capture details about every individual activity that executes in
the WORK2_SC service class, you could enable activity collection for that service
class using the following:
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER WORK2_SC
COLLECT ACTIVITY DATA ON COORDINATOR WITH DETAILS
Update workload work2_wl so that no activity data is collected, disable the event
monitor and clean up the event monitor table, and call WLM_COLLECT_STATS()
to reset the statistics.
ALTER WORKLOAD work2_wl
COLLECT ACTIVITY DATA NONE
CALL WLM_COLLECT_STATS()
This exercise demonstrates how thresholds can be used to detect or prevent rogue
activities from running on your system and using up system resources. A rogue
activity is any activity that uses an unexpectedly high amount of resources. For
example, a query that runs for an abnormally long time, or returns an
unexpectedly large result set.
Create and enable a write-to-table event monitor that will be used to capture the
threshold violation information and enable the activity event monitor that was
created in Exercise 1.
CREATE EVENT MONITOR threvio FOR THRESHOLD VIOLATIONS WRITE TO TABLE
THRESHOLDVIOLATIONS(IN userspace1),
CONTROL(IN userspace1)
Create a workload such that all activities run from the workth.db2 script will get
mapped to the work1_sc service class.
The work1_sc service class already exists since it was created in Exercise 2.
CREATE WORKLOAD workth_wl
CURRENT CLIENT_APPLNAME('CLP workth.db2')
SERVICE CLASS work1_sc
The th_estcost threshold specifies an upper bound of 10000 timerons for the
optimizer-estimated cost of an activity running in the work1_sc service class. If any
query with an estimated cost greater than 10000 timerons tries to execute in the
work1_sc service class, this threshold is violated and the query is not permitted to
run.
The th_sqlrows threshold specifies that any activity running in the work1_sc
service class can return at most 30 rows from the data server. If any query tries to
return more than 30 rows, this threshold is violated: only 30 rows are returned to
the client and the query is stopped. In addition, data about the activity that caused
the threshold violation is collected.
In either case, when an activity violates the threshold, a threshold violation record
is written to the THRESHOLD VIOLATIONS event monitor as defined in step 1
and the execution of the activity is stopped (because of the STOP EXECUTION
action). The application that submitted the activity will receive an SQL4712N error.
CREATE THRESHOLD th_estcost
FOR SERVICE CLASS work1_sc ACTIVITIES
ENFORCEMENT DATABASE
WHEN ESTIMATEDSQLCOST > 10000
STOP EXECUTION
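The th_sqlrows threshold described earlier is not shown above; a sketch that matches that description (a SQLROWSRETURNED condition with activity collection and a STOP EXECUTION action) is:
CREATE THRESHOLD th_sqlrows
FOR SERVICE CLASS work1_sc ACTIVITIES
ENFORCEMENT DATABASE
WHEN SQLROWSRETURNED > 30
COLLECT ACTIVITY DATA WITH DETAILS
STOP EXECUTION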
Run some activities, some of which violate the threshold upper bounds defined in
the previous step.
db2 -o -tvf workth.db2
Note that the statements which violate the thresholds defined above fail with an
error of SQL4712N/SQLSTATE 5U026.
2 record(s) selected.
Activity information is collected for any activity that violates a threshold that is
defined with a COLLECT clause. Show the detailed information about the
activities that violated a threshold using the following query:
SELECT VARCHAR(A.APPL_NAME, 15) as APPL_NAME,
VARCHAR(A.TPMON_CLIENT_APP, 20) AS CLIENT_APP_NAME,
A.ACTIVITY_ID,
A.ACTIVITY_TYPE,
A.WORKLOAD_ID,
T.THRESHOLD_PREDICATE,
A.QUERY_CARD_ESTIMATE,
T.THRESHOLD_MAXVALUE,
T.TIME_OF_VIOLATION,
VARCHAR(AS.STMT_TEXT, 100) AS STMT_TEXT
Note that the activity that violated the th_estcost (EstimatedSqlCost) threshold is
not shown. The reason is that the threshold did not specify the COLLECT
ACTIVITY DATA clause, so that no activity data was collected for that activity.
Disable the event monitors that were enabled. Also disable and drop the th_estcost
and th_sqlrows thresholds that were created.
SET EVENT MONITOR threvio STATE 0
SET EVENT MONITOR db2activities STATE 0
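The threshold cleanup statements do not appear above; a sketch is:
ALTER THRESHOLD th_estcost DISABLE
ALTER THRESHOLD th_sqlrows DISABLE
DROP THRESHOLD th_estcost
DROP THRESHOLD th_sqlrows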
Also clean up the activities event monitor tables and the threshold violations table:
DELETE from ACTIVITY_DB2ACTIVITIES
DELETE from ACTIVITYSTMT_DB2ACTIVITIES
DELETE from THRESHOLDVIOLATIONS_THREVIO
CALL WLM_COLLECT_STATS()
Work action sets are used to apply an action to an activity based on what the
activity is doing rather than who submitted it (as is done with workloads).
Additional Information: There are other actions that can be applied, such as
collecting statistics for activities of a certain type; those actions are not covered in
this exercise.
First, create a work class set containing work classes that will represent the specific
types of activities you are interested in. This work class set will be used in
conjunction with work action sets to perform actions on the selected types of
activities. Below is an example that creates a work class set containing work classes
of all possible types, but if you were interested only in one activity type, your
work class set could be created to only contain that one work class.
CREATE WORK CLASS SET all_class_types
(WORK CLASS read_wc WORK TYPE READ,
WORK CLASS write_wc WORK TYPE WRITE,
WORK CLASS ddl_wc WORK TYPE DDL,
WORK CLASS call_wc WORK TYPE CALL,
WORK CLASS load_wc WORK TYPE LOAD,
WORK CLASS all_wc WORK TYPE ALL POSITION LAST)
Enable the event monitor for activities that was created in Exercise 1.
SET EVENT MONITOR DB2ACTIVITIES STATE 1
If you want to perform a particular action on all activities of a specific type (such
as applying a threshold or collecting activity information), use a database work
action set.
Create a work action set at the database level that contains work actions for the
specific work class representing the type of activities you want isolated. For this
example, we want to collect activity data for all DDL, READ and LOAD activities
that run on the system and we also want to stop any large read activity from
running. For this exercise, a large read activity is any select statement that has an
estimated cost (in timerons) of greater than 10000.
CREATE WORK ACTION SET db_was FOR DATABASE
USING WORK CLASS SET all_class_types
(WORK ACTION collect_load_wa ON WORK CLASS load_wc
COLLECT ACTIVITY DATA WITH DETAILS AND VALUES,
WORK ACTION collect_ddl_wa ON WORK CLASS ddl_wc
COLLECT ACTIVITY DATA WITH DETAILS AND VALUES,
WORK ACTION collect_read_wa ON WORK CLASS read_wc
COLLECT ACTIVITY DATA WITH DETAILS AND VALUES,
WORK ACTION stop_large_read_wa on WORK CLASS read_wc
WHEN ESTIMATEDSQLCOST > 10000 STOP EXECUTION )
4 record(s) selected.
Information about every individual DDL, READ, and LOAD activity was collected
by the activities event monitor, due to the specification of the COLLECT ACTIVITY
DATA work action that was applied to the ddl_wc, read_wc, and load_wc work
classes in step 3. Below are a couple of examples of how you might want to look at
this activity information.
To get some basic information about the activities, you can simply query the
activity monitor table with a statement such as the following:
SELECT ACTIVITY_ID,
SUBSTR(ACTIVITY_TYPE, 1, 8) AS ACTIVITY_TYPE,
VARCHAR(APPL_ID, 30) AS APPL_ID,
VARCHAR(APPL_NAME, 10) AS APPL_NAME
FROM ACTIVITY_DB2ACTIVITIES
26 record(s) selected.
To obtain additional information about each activity, such as activity text and what
service class it ran under, you can perform a query similar to this one:
SELECT VARCHAR(A.APPL_NAME, 15) as APPL_NAME,
VARCHAR(A.TPMON_CLIENT_APP, 20) AS CLIENT_APP_NAME,
VARCHAR(A.APPL_ID, 30) as APPL_ID,
VARCHAR(A.SERVICE_SUPERCLASS_NAME, 20) as SUPER_CLASS,
VARCHAR(A.SERVICE_SUBCLASS_NAME, 20) as SUB_CLASS,
SQLCODE,
VARCHAR(S.STMT_TEXT, 300) AS STMT_TEXT
FROM ACTIVITY_DB2ACTIVITIES AS A, ACTIVITYSTMT_DB2ACTIVITIES AS S
WHERE A.APPL_ID = S.APPL_ID AND
A.ACTIVITY_ID = S.ACTIVITY_ID AND
A.UOW_ID = S.UOW_ID
:
:
db2bp CLP work1.db2 *LOCAL.karenam.070815192418
SYSDEFAULTUS
ERCLASS SYSDEFAULTSUBCLASS 0 drop procedure stp2
:
:
Note that one of the activities has an SQLCODE of -4712. This indicates execution
of the activity was stopped due to a threshold violation. The threshold defined for
the stop_large_read_wa work action will prevent any SELECT statement with an
estimated cost of greater than 10000 from executing.
Additional information: Load activities (not including load from a cursor) do not
have an entry in the activity statement event monitor table
(activitystmt_db2activities table) which explains why there is no record for the
single load activity that is run by the work1.db2 script in the output shown in the
last query above. The reason for this is that load activities are not SQL statements.
For load from cursor activities, there is an entry for the cursor statement in the
activity statement event monitor table because the cursor itself is a separate
activity. There is an entry for all load activities in the activities event monitor table
(activity_db2activities).
Before moving on to the service class work action set, drop the database work
action set.
DROP WORK ACTION SET db_was
If you want to apply a particular action, such as a threshold, to all the activities of
a certain type running in a service super class, you should consider using a service
class work action set. You can create a mapping work action to map specific types
of activities to a specific service subclass and then apply a threshold to that service
subclass. The following steps demonstrate how service class work action sets
might be used.
Create a service subclass under the work1_sc service super class that was created
in Exercise 2 Step 2.
The service super class work1_sc is the service class that the activities will be
mapped to through the workloads. The service subclass work1_sc_read is the
service class that the read activities will be mapped to through the work action.
CREATE SERVICE CLASS work1_sc_read UNDER work1_sc
Create a work action set at the service class level that contains work actions that
apply to the specific work classes representing the types of activities you want
isolated. For this example, we want to collect activity data for all DDL, read, and
load activities that run under the work1_sc service class and we also want to map
read activities to a separate service subclass so that we can treat them differently;
in this case, a threshold will be applied to the service subclass to stop any large
SELECT statements from running.
CREATE WORK ACTION SET sc_was FOR SERVICE CLASS work1_sc
USING WORK CLASS SET all_class_types (
WORK ACTION collect_load_wa ON WORK CLASS load_wc
COLLECT ACTIVITY DATA ON ALL DATABASE PARTITIONS WITH DETAILS AND VALUES,
WORK ACTION collect_ddl_wa ON WORK CLASS ddl_wc
COLLECT ACTIVITY DATA ON ALL DATABASE PARTITIONS WITH DETAILS AND VALUES,
WORK ACTION collect_read_wa ON WORK CLASS read_wc
COLLECT ACTIVITY DATA ON ALL DATABASE PARTITIONS WITH DETAILS AND VALUES,
WORK ACTION map_read_wa on WORK CLASS read_wc
MAP ACTIVITY TO work1_sc_read)
To get an effect similar to the stop_large_read_wa work action that prevented any
large SELECT statements from running, create an ESTIMATEDSQLCOST threshold
and apply it to the work1_sc_read service subclass.
CREATE THRESHOLD stop_large_activities FOR SERVICE CLASS work1_sc_read
UNDER work1_sc
ACTIVITIES ENFORCEMENT DATABASE
WHEN ESTIMATEDSQLCOST >10000 STOP EXECUTION
Step 10: Clear the activity tables, reset the statistics, and run
activities
Clear out all of the activity tables so that you can start afresh before running the
script again. Then call the wlm_collect_stats() stored procedure to reset the
statistics:
DELETE FROM activity_db2activities
DELETE FROM activitystmt_db2activities
DELETE FROM activityvals_db2activities
CALL wlm_collect_stats()
Note the SQL4712N error for activities that caused the threshold to be exceeded.
4 record(s) selected.
Now query the activity tables again to get information about the individual
activities. Note the service subclass that the activities were run under.
SELECT VARCHAR(A.APPL_NAME, 15) as APPL_NAME,
VARCHAR(A.TPMON_CLIENT_APP, 20) AS CLIENT_APP_NAME,
VARCHAR(A.APPL_ID, 30) as APPL_ID,
VARCHAR(A.SERVICE_SUPERCLASS_NAME, 20) as SUPER_CLASS,
VARCHAR(A.SERVICE_SUBCLASS_NAME, 20) as SUB_CLASS,
SQLCODE,
VARCHAR(S.STMT_TEXT, 300) AS STMT_TEXT
FROM ACTIVITY_DB2ACTIVITIES AS A, ACTIVITYSTMT_DB2ACTIVITIES AS S
WHERE A.APPL_ID = S.APPL_ID AND
A.ACTIVITY_ID = S.ACTIVITY_ID AND
A.UOW_ID = S.UOW_ID
Disable the event monitor, drop the service class threshold, and drop the service
class work action set.
SET EVENT MONITOR DB2ACTIVITIES STATE 0
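The remaining cleanup statements do not appear above; a sketch that disables and drops the threshold and drops the work action set is:
ALTER THRESHOLD stop_large_activities DISABLE
DROP THRESHOLD stop_large_activities
DROP WORK ACTION SET sc_was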
Clear out all of the activity tables so that you can start afresh, before running the
script again.
DELETE FROM activity_db2activities
DELETE FROM activitystmt_db2activities
DELETE from activityvals_db2activities
Disable all of the workloads that have been created so that all activities will run
under the default user workload and get mapped to the default service super class.
ALTER WORKLOAD work1_wl DISABLE
ALTER WORKLOAD work2_wl DISABLE
ALTER WORKLOAD work3_wl DISABLE
ALTER WORKLOAD workth_wl DISABLE
These three histograms are useful for knowing more than just the average lifetime,
execution time, or queue time of the activities run on the system, since they can be
used to calculate standard deviations and can reveal outliers. For more information
on histograms, see “Histograms in workload management” on page 189.
Histograms are accessed through the statistics event monitor. This exercise reuses
the statistics event monitor created in Exercise 1 Step 1.
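The first of the four views, HISTOGRAMTYPES, is queried later in this exercise but its DDL does not appear above; a minimal sketch, assuming the default HISTOGRAMBIN_DB2STATISTICS target table, is:
CREATE VIEW HISTOGRAMTYPES AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) AS HISTOGRAM_TYPE
FROM HISTOGRAMBIN_DB2STATISTICS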
A second view makes it easier to find out which service classes are having
histograms collected for them. The HISTOGRAMBIN_DB2STATISTICS table
identifies the service classes for which histograms are being collected using the
service class ID. Joining this table with the SERVICECLASSES catalog table permits
the service class information to be presented with the service super class name and
service subclass name instead of the service class ID.
CREATE VIEW HISTOGRAMSERVICECLASSES AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) AS HISTOGRAM_TYPE,
SUBSTR(PARENTSERVICECLASSNAME,1,24) AS SERVICE_SUPERCLASS,
SUBSTR(SERVICECLASSNAME,1,24) AS SERVICE_SUBCLASS
FROM HISTOGRAMBIN_DB2STATISTICS AS H,
SYSCAT.SERVICECLASSES AS S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
The third view lists all of the times that a histogram of a given type was collected
for a given service class. Like the HISTOGRAMSERVICECLASSES view, it joins the
HISTOGRAMBIN_DB2STATISTICS table with the SERVICECLASSES catalog table.
The difference is that it includes the STATISTICS_TIMESTAMP column as one of
the columns in the view.
CREATE VIEW HISTOGRAMTIMES AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) AS HISTOGRAM_TYPE,
SUBSTR(PARENTSERVICECLASSNAME,1,24) AS SERVICE_SUPERCLASS,
SUBSTR(SERVICECLASSNAME,1,24) AS SERVICE_SUBCLASS,
STATISTICS_TIMESTAMP AS TIMESTAMP
FROM HISTOGRAMBIN_DB2STATISTICS AS H,
SYSCAT.SERVICECLASSES AS S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
The fourth and final view will be used to show the histograms themselves. It also
demonstrates something that one often needs to do when dealing with histograms,
which is to aggregate them over time. This view shows the top of each bin and the
number of activities that were counted towards each bin. For the three histograms
in this exercise, the BIN_TOP field measures the number of milliseconds in the
activity lifetime, execution time, or queue time. If BIN_TOP is, say, 3000
milliseconds, the BIN_TOP of the previous bin is 2000 milliseconds, and the
NUMBER_IN_BIN is ten for a lifetime histogram, you know that ten activities had
a lifetime between 2 and 3 seconds.
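The definition of this fourth view is not repeated at this point in the excerpt; a
sketch, matching the HISTOGRAMS view definition given in the extended histogram
exercise later in this section, is:
CREATE VIEW HISTOGRAMS(HISTOGRAM_TYPE, SERVICE_SUPERCLASS,
                       SERVICE_SUBCLASS, BIN_TOP, NUMBER_IN_BIN) AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) AS HISTOGRAM_TYPE,
       SUBSTR(PARENTSERVICECLASSNAME,1,24) AS SERVICE_SUPERCLASS,
       SUBSTR(SERVICECLASSNAME,1,24) AS SERVICE_SUBCLASS,
       TOP AS BIN_TOP,
       SUM(NUMBER_IN_BIN) AS NUMBER_IN_BIN
FROM HISTOGRAMBIN_DB2STATISTICS AS H,
     SYSCAT.SERVICECLASSES AS S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
GROUP BY HISTOGRAM_TYPE, PARENTSERVICECLASSNAME, SERVICECLASSNAME, TOP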
The activity lifetime, queue time, and execution time histograms are collected for a
service subclass when the base collect aggregate activity data option is enabled for
the subclass. Enable the base aggregate activity data collection for the default
subclass under the default user super class using the COLLECT AGGREGATE
ACTIVITY DATA clause.
Note that all activities will be run in the default user service class since all the user
defined workloads were disabled at the end of the previous exercise.
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS
UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA BASE
Activate the statistics event monitor that was created earlier so that it may receive
the aggregate data whenever it is collected.
SET EVENT MONITOR DB2STATISTICS STATE 1
Now some activities can be run. After the activities have finished, the
WLM_COLLECT_STATS stored procedure is called to send the statistics (including
the activity lifetime, execution time and queue time histograms for the default user
service class) to the active statistics event monitor. These histograms contain data
about all activities that executed in the default user service class since aggregate
activity statistics were enabled. Calling this stored procedure also resets the
statistics. To show changes in database activity over time, three collection intervals
are created. In the first interval, run two scripts, work1.db2 and work2.db2, and
then collect and reset the statistics.
db2 -o -tvf work1.db2
db2 -o -tvf work2.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS()
In the second interval, run only the work1.db2 script once and then collect and
reset the statistics.
db2 -o -tvf work1.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS()
In the third interval, run work1.db2 twice and run work2.db2 script once and then
collect and reset the statistics.
db2 -o -tvf work1.db2
db2 -o -tvf work2.db2
db2 -o -tvf work1.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS()
Collecting data periodically in this way lets you watch how the work on your
system changes over time.
Now that statistics have been collected, the views created earlier can be used to
look at the statistics. The HISTOGRAMTYPES view just returns the types of
histograms available.
SELECT * FROM HISTOGRAMTYPES
HISTOGRAM_TYPE
------------------------
CoordActExecTime
CoordActLifetime
CoordActQueueTime
3 record(s) selected.
Since the BASE option was used when altering the service class, there are three
histograms: lifetime, execution time, and queue time. The HISTOGRAMSERVICECLASSES
view permits you to see the service classes for which a histogram was collected.
The example below restricts the output to that of the CoordActLifetime histogram
only. Since aggregate activity collection was only turned on for the default user
service class's default subclass, only that class is shown when selecting from the
HISTOGRAMSERVICECLASSES view.
SELECT * FROM HISTOGRAMSERVICECLASSES
WHERE HISTOGRAM_TYPE = 'CoordActLifetime'
ORDER BY SERVICE_SUPERCLASS, SERVICE_SUBCLASS
1 record(s) selected.
The HISTOGRAMTIMES view shows the times when histograms were collected.
Since the WLM_COLLECT_STATS procedure was run three times, there are three
timestamps for the lifetime histogram shown.
3 record(s) selected.
The last view, HISTOGRAMS, is for looking at the histograms themselves. Unlike
the HISTOGRAMTIMES view that lists each collection interval as its own row, this
view aggregates histogram data across multiple intervals to produce a single
histogram of a given type for a given service class.
SELECT BIN_TOP, NUMBER_IN_BIN FROM HISTOGRAMS
WHERE HISTOGRAM_TYPE = 'CoordActLifetime'
AND SERVICE_SUPERCLASS = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS = 'SYSDEFAULTSUBCLASS'
ORDER BY BIN_TOP
BIN_TOP NUMBER_IN_BIN
-------------------- --------------------
-1 0
1 88
2 0
3 0
5 2
8 6
12 7
19 8
29 12
44 13
68 23
103 11
158 2
241 5
369 0
562 0
858 0
1309 0
1997 0
3046 0
4647 0
7089 0
10813 0
16493 0
25157 0
38373 0
58532 0
89280 0
136181 0
207720 0
316840 0
483283 0
737162 0
1124409 0
1715085 0
2616055 0
41 record(s) selected.
The output from the histograms can then be used as input to a graphing tool to
generate a graph. The diagram below shows a graph that was created with Gruff, a
Ruby graphing library.
[Figure: bar graph of NUMBER_IN_BIN against BIN_TOP for the CoordActLifetime histogram]
Running the query above will produce output that is not exactly the same as what
is shown here, because activity lifetimes depend on the performance of the system.
In the output above, there are 41 bins and all of the largest bins are empty. At
the top, there is a bin whose BIN_TOP is -1. This bin represents all of the
activities whose lifetime was too large to fit in the histogram. A NUMBER_IN_BIN
greater than zero when the BIN_TOP is -1 indicates that you should probably
increase the high bin value of your histogram. In the output above, the
NUMBER_IN_BIN is 0, so there is no need to make such a change. A large number of
activities, 88 in this case, were counted in the bin with a BIN_TOP of 1. This is
the lowest bin, and it means that 88 activities had a lifetime between 0 and 1
milliseconds. Another piece of information that can be extracted from the
histogram is that, because the largest BIN_TOP with a nonzero NUMBER_IN_BIN is
241, the largest lifetime of any activity in the workloads collected in this
histogram was between 158 milliseconds and 241 milliseconds. The
COORD_ACT_LIFETIME_TOP column in the SCSTATS_DB2STATISTICS table gives a more
precise measurement of the lifetime of the activity with the largest lifetime.
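For example, a query along the following lines (a sketch, not taken from the
original text) would retrieve that high watermark for the default subclass across
the collection intervals:
SELECT STATISTICS_TIMESTAMP,
       COORD_ACT_LIFETIME_TOP
FROM SCSTATS_DB2STATISTICS
WHERE SERVICE_SUPERCLASS_NAME = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS_NAME = 'SYSDEFAULTSUBCLASS'
ORDER BY STATISTICS_TIMESTAMP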
BIN_TOP NUMBER_IN_BIN
-------------------- --------------------
-1 0
1 112
2 0
3 0
5 0
8 5
12 7
19 5
29 12
44 7
68 11
103 11
158 2
241 5
369 0
562 0
858 0
1309 0
1997 0
3046 0
4647 0
7089 0
10813 0
16493 0
25157 0
38373 0
58532 0
89280 0
136181 0
207720 0
316840 0
483283 0
737162 0
1124409 0
1715085 0
2616055 0
3990325 0
6086529 0
9283913 0
14160950 0
21600000 0
41 record(s) selected.
[Figure: bar graph of NUMBER_IN_BIN against BIN_TOP for the CoordActExecTime histogram]
Once again, a large number of activities are counted in the first bin and the highest
execution time of any activity is at most 241 milliseconds.
BIN_TOP NUMBER_IN_BIN
-------------------- --------------------
-1 0
1 177
2 0
3 0
5 0
8 0
12 0
19 0
29 0
44 0
68 0
103 0
158 0
241 0
369 0
562 0
858 0
1309 0
1997 0
3046 0
4647 0
7089 0
10813 0
16493 0
25157 0
38373 0
58532 0
89280 0
136181 0
41 record(s) selected.
[Figure: bar graph of NUMBER_IN_BIN against BIN_TOP for the CoordActQueueTime histogram]
Every activity was counted in the 0 to 1 millisecond bin because every activity
spent zero milliseconds queuing.
The last several queries looked at activity lifetimes, execution times, and queue
times broken down into bins but aggregated across multiple intervals. The
following query presents the same information from a different perspective: it
shows averages instead of histograms and, rather than combining the intervals, it
shows each interval individually. It also reports the number of activities that
completed in each interval. It uses the SCSTATS_DB2STATISTICS table instead of the
HISTOGRAMBIN_DB2STATISTICS table.
SELECT STATISTICS_TIMESTAMP,
COORD_ACT_LIFETIME_AVG AS LIFETIMEAVG,
COORD_ACT_EXEC_TIME_AVG AS EXECTIMEAVG,
COORD_ACT_QUEUE_TIME_AVG AS QUEUETIMEAVG,
COORD_ACT_COMPLETED_TOTAL AS COMPLETED_TOTAL
FROM SCSTATS_DB2STATISTICS
WHERE SERVICE_SUPERCLASS_NAME = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS_NAME = 'SYSDEFAULTSUBCLASS'
ORDER BY STATISTICS_TIMESTAMP
3 record(s) selected.
The result shows that the average lifetimes are slightly higher than the average
execution times for each interval, and all three are just over half a second or
less. The average queue time, as expected, is zero. The counts of completed
activities in each interval are as expected: workloads 1 and 2 ran in the first
interval, which resulted in 77 activities; workload 1 ran alone in the second
interval, which resulted in 39 activities; and workload 1 ran twice and workload 2
ran once in the third interval, which resulted in 113 activities.
The final step is to turn off collection of aggregate activities on the default user
service class and drop the views and delete the information in the statistics tables.
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS
UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA NONE
The DB2 WLM monitoring facilities provide information and statistics for work in
a database. Once the cause of a slow-down is identified, you can remedy the
situation.
Two applications are used in this exercise, app1.db2 and app2.db2. Both
applications perform DML operations on the SAMPLE database. Run the app1.db2
script in one window followed immediately by the app2.db2 script in a second
window.
db2 -tvf app1.db2
db2 -tvf app2.db2
The app2.db2 script should now be hanging. From a third window, use the
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table function to find
the states of all applications running on the database. For this example, you can
query the table function with wildcard arguments, as sketched below.
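A sketch of such a query follows; the WORKLOAD_OCCURRENCE_STATE column name is an
assumption, while the other columns and the table function arguments appear
elsewhere in this section:
SELECT SUBSTR(CHAR(APPLICATION_HANDLE),1,7) AS APPL_HANDLE,
       SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS,
       SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS,
       SUBSTR(WORKLOAD_OCCURRENCE_STATE,1,10) AS STATE
FROM TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97('', '', -2)) AS SCINFO
ORDER BY APPL_HANDLE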
4 record(s) selected.
From the output, we can tell that the application handle for app2.db2 is 17.
To find out what the agents for app2.db2 are doing use the
WLM_GET_SERVICE_CLASS_AGENTS_V97 table function. This table function
shows information on agents working in a service class. Since we want to see the
agents working for application handle 17, we specify this in the application_handle
input parameter. For this example, we are not interested in agents for a particular
service class, so we specify wildcards for the service_superclass_name and
service_subclass_name input parameters.
SELECT INTEGER(APPLICATION_HANDLE) AS APPL_HANDLE,
UOW_ID, ACTIVITY_ID,
VARCHAR(AGENT_TYPE, 15) AS AGENT_TYPE,
VARCHAR(AGENT_STATE, 10) AS AGENT_STATE,
VARCHAR(EVENT_TYPE, 10) AS EVENT_TYPE,
VARCHAR(EVENT_OBJECT, 10) AS EVENT_OBJ,
VARCHAR(EVENT_STATE, 10) AS EVENT_STATE
FROM TABLE
(WLM_GET_SERVICE_CLASS_AGENTS_V97('', '', 17, -2))
From the output, you can see that the coordinator agent for application 17 is idle
and waiting to acquire a lock. This is the reason why app2.db2 appears to be
hanging.
Now that we know why the application is hanging, we can remedy the situation.
We know the application is waiting on a lock. To find out which lock this
application is waiting on and which application is holding the lock, we can use the
db2pd tool. First, we need to find out the current transaction number for our
hanging application: issue db2pd -transactions for application handle 17.
db2pd -db sample -transactions app=17
From the output, we can tell that application 17 has transaction handle 7. We can
now find which locks this transaction is waiting on by issuing the db2pd -locks
command for transaction handle 7.
db2pd -db sample -locks 7 wait
The output shows that the application is waiting on a row lock. The owner of the
lock has transaction handle 2. This transaction is holding the lock and causing our
hang. The final step is to determine the corresponding application handle for
transaction handle 2. Issue the db2pd -transactions command for transaction handle 2.
db2pd -db sample -transactions 2
From the output, we can see that transaction handle 2 corresponds to application
handle 12. Referring back to the results from table function
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97, you can see
that application 12 refers to app1.db2. This application is holding a row lock that is
needed by app2.db2. To let app2.db2 proceed, you can commit, roll back, or
terminate the unit of work or process from the window running app1.db2.
Alternatively, you can force off app1.db2 by issuing FORCE APPLICATION
on application handle 12.
db2 force application (12)
From a CLP window, run the following script that issues a long running query
db2 -tvf longquery.db2
By joining the result of the table function with the APPLICATIONS administrative
view, we can find the cursor activity that is run from within longquery.db2. The
output would look something like the following:
APPLICATION_HANDLE UOW_ID ACTIVITY_ID ACTIVITY_TYPE
-------------------- ----------- ----------- ----------------------------
----
267 1 1 READ_DML
1 record(s) selected.
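A query producing output of this shape might resemble the following sketch; the
join of WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 with the SYSIBMADM.APPLICATIONS
administrative view on the application handle, and the filter on the CLP
application name 'db2bp', are assumptions rather than the manual's exact statement:
SELECT T.APPLICATION_HANDLE,
       T.UOW_ID,
       T.ACTIVITY_ID,
       VARCHAR(T.ACTIVITY_TYPE, 28) AS ACTIVITY_TYPE
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(CAST(NULL AS BIGINT), -2)) AS T,
     SYSIBMADM.APPLICATIONS AS A
WHERE T.APPLICATION_HANDLE = A.AGENT_ID
  AND A.APPL_NAME = 'db2bp'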
From the same CLP window, call the WLM_CANCEL_ACTIVITY stored procedure
to cancel the cursor activity above, using the application handle, unit of work ID,
and activity ID obtained from the previous step:
CONNECT TO SAMPLE
CALL WLM_CANCEL_ACTIVITY(267, 1, 1)
CONNECT RESET
Note that in your case, the application handle, unit of work ID, and activity ID
will be different.
In the first CLP window, you will see the following output returned by the long
running query issued by longquery.db2.
SQL4725N The activity has been cancelled. SQLSTATE=57014
You might want to know the number of large activities or load utilities that are
being run concurrently on your system, for example. Understanding the types of
work being run on the system is important as different types of work will have
different resource requirements and impacts on system performance.
Before starting, you might want to show the number of activities of a certain type
that are currently running by using the
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 table function:
CONNECT TO SAMPLE
SELECT ACTIVITY_TYPE,
COUNT(*) AS NUMBER_RUNNING
FROM TABLE (
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(CAST(NULL AS BIGINT), -2)) AS T
GROUP BY ACTIVITY_TYPE
To get information about the different types of activities that have run on your
system over a given period of time, you can use work class sets and work actions.
To count the number of times an activity of a specific type has been run over a
period of time, a work action set needs to be created. In this example, because we
are interested in the activities that are run on the entire system, the work action set
will be created at the database level and is associated with the all_class_types work
class set that was created in Exercise 4 Step 1. This work class set contains work
classes for all types of recognized activities. If we were only interested in the
activities being run in a specific service class, we would create a work action set at
the service class level. For this example, we are also interested in the information
for all types of activities so that the work action set contains a COUNT ACTIVITY
work action for each work class in the all_class_types work class set.
CREATE WORK ACTION SET work1_was FOR DATABASE
USING WORK CLASS SET all_class_types
(WORK ACTION count_read_wa ON WORK CLASS read_wc COUNT ACTIVITY,
WORK ACTION count_write_wa ON WORK CLASS write_wc COUNT ACTIVITY,
WORK ACTION count_ddl_wa ON WORK CLASS ddl_wc COUNT ACTIVITY,
WORK ACTION count_call_wa ON WORK CLASS call_wc COUNT ACTIVITY,
WORK ACTION count_load_wa ON WORK CLASS load_wc COUNT ACTIVITY,
WORK ACTION count_all_wa ON WORK CLASS all_wc COUNT ACTIVITY)
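After some work has been run, the counters can be read with the
WLM_GET_WORK_ACTION_SET_STATS table function. The following query is a sketch (the
ACT_TOTAL column name is an assumption), but the empty-string and -2 arguments are
the ones described in the note that follows:
SELECT SUBSTR(WORK_ACTION_SET_NAME,1,20) AS WORK_ACTION_SET_NAME,
       SUBSTR(WORK_CLASS_NAME,1,15) AS WORK_CLASS_NAME,
       LAST_RESET,
       SUM(ACT_TOTAL) AS TOTAL_ACTS
FROM TABLE(WLM_GET_WORK_ACTION_SET_STATS('', -2)) AS WASSTATS
GROUP BY WORK_ACTION_SET_NAME, WORK_CLASS_NAME, LAST_RESET
ORDER BY WORK_ACTION_SET_NAME, WORK_CLASS_NAME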
Additional Information: The blank (empty string) included with the statement means
that the result is not to be restricted by that argument (in this example, we want
the information for all of the work action sets). The value of the last argument,
dbpartitionnum, is the wildcard character -2, which means that data from all
database partitions is to be returned.
The output from this query will look something like the following where "*"
represents all activities that do not fall into any of the defined work classes or that
fall into work classes with no work actions.
WORK_ACTION_SET_NAME WORK_CLASS_NAME LAST_RESET                 TOTAL_ACTS
-------------------- --------------- -------------------------- ----------
WORK1_WAS * 2007-08-14-13.55.30.725886 0
WORK1_WAS ALL_WC 2007-08-14-13.55.30.725886 2
WORK1_WAS CALL_WC 2007-08-14-13.55.30.725886 4
WORK1_WAS DDL_WC 2007-08-14-13.55.30.725886 12
WORK1_WAS LOAD_WC 2007-08-14-13.55.30.725886 1
WORK1_WAS READ_WC 2007-08-14-13.55.30.725886 12
WORK1_WAS WRITE_WC 2007-08-14-13.55.30.725886 6
7 record(s) selected.
You can separate out activities by more than just their types. For example, you
might want to know how many large queries are being run.
Alter the work class set to add a new read work class that will represent large
queries. For this example, a large query is any query that has a cardinality greater
than 40.
ALTER WORK CLASS SET all_class_types
ADD WORK CLASS large_wc WORK TYPE READ FOR CARDINALITY FROM 41 POSITION AT 1
Alter the work action set to add a COUNT ACTIVITY work action and apply it to
the new work class.
ALTER WORK ACTION SET work1_was
ADD WORK ACTION count_large_reads ON WORK CLASS large_wc COUNT ACTIVITY
Call the WLM_COLLECT_STATS stored procedure to reset the statistics that are
stored in memory so that you start fresh; when you later query the workload
management statistics held in memory, they will contain information only for the
activities that have run from this point on.
CALL WLM_COLLECT_STATS()
8 record(s) selected.
Note that this time four of the activities from the script are considered large
activities.
Activity information you capture is sent to the active event monitor for activities.
Previous tasks showed how the COLLECT ACTIVITY DATA clause is used for
workloads, service classes, work actions, and thresholds to capture detailed activity
information. This clause must be specified before an activity begins
executing; the information is sent to the activities event monitor when the activity
completes. The WLM_CAPTURE_ACTIVITY_IN_PROGRESS procedure permits
you to capture information reactively when you notice a problem with an activity
that is already in progress. When this procedure is used, information about an
activity is sent to the activities event monitor immediately. Both basic and
statement activity information are collected, but not input data.
Enable the existing event monitor for activities you created in Exercise 1.
CONNECT TO SAMPLE
SET EVENT MONITOR DB2ACTIVITIES STATE 1
From the CLP, run the following script that issues a long running query with a
problematic cursor:
db2 -tvf longquery.db2
1 record(s) selected.
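The capture call itself does not appear in this excerpt. A minimal sketch, assuming
hypothetical identifiers (application handle 1, unit of work ID 1, activity ID 1)
found for the running cursor, would be:
CALL WLM_CAPTURE_ACTIVITY_IN_PROGRESS(1, 1, 1)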
CONNECT RESET
This step sends information about the activity to the active event monitor for
activities. Note that in your case, the application handle, unit of work ID, and
activity ID you specify may be different.
Look at the information that was collected for the activity using a statement such
as the following:
SELECT VARCHAR(A.APPL_NAME, 15) as APPL_NAME,
VARCHAR(A.TPMON_CLIENT_APP, 20) AS CLIENT_APP_NAME,
VARCHAR(A.APPL_ID, 30) as APPL_ID,
A.ACTIVITY_ID,
A.UOW_ID,
A.PARTIAL_RECORD,
A.TIME_STARTED,
A.TIME_COMPLETED,
VARCHAR(S.STMT_TEXT, 300) AS STMT_TEXT
FROM ACTIVITY_DB2ACTIVITIES AS A,
ACTIVITYSTMT_DB2ACTIVITIES AS S
WHERE A.APPL_ID = S.APPL_ID AND
A.ACTIVITY_ID = S.ACTIVITY_ID AND
A.UOW_ID = S.UOW_ID
DB2 Query Patroller, through its historical analysis, provides information about
which tables, indexes and columns have been accessed, and which have not. DB2
includes a set of Perl scripts as a sample that provides functionality similar to the
Query Patroller Historical Analysis feature using information captured by the
WLM activities event monitor. This WLM Historical Analysis Tool was written in
Perl so you can see or even modify the scripts to produce additional historical
analysis reports to suit your needs.
In order to generate some historical data, the explain tables must exist under the
schema of the user running the tool. To create the explain tables, go to the
/sqllib/misc directory and run the following:
db2 CONNECT TO SAMPLE
db2 -tf EXPLAIN.DDL
Since the activities event monitor was created in Exercise 1 Step 1, enable it now if
it is not enabled already.
SET EVENT MONITOR DB2ACTIVITIES STATE 1
Run some activities so that activity data is collected to generate historical data on.
db2 -tvf work1.db2
db2 -tvf work2.db2
It is highly recommended that you turn off the event monitor for activities before
generating historical data. If you do not do this, any DML activities that are run as
a result of the historical data generator may also be captured and put into the DB2
event monitor activity tables, thereby dramatically increasing the number of actual
activities for which activity data is generated.
CONNECT TO SAMPLE
SET EVENT MONITOR DB2ACTIVITIES STATE 0
Run the historical data generator script, wlmhist.pl, to generate historical data for
activities that are captured in the activities event monitor tables. The format is as
follows:
wlmhist.pl dbname user password [fromTime toTime workloadid
serviceClassName serviceSubclassName activityTable activityStmtTable]
For this exercise, generate historical data for all activities that have been captured
in the activities event monitor.
perl wlmhist.pl sample db2inst1 password
When generating historical data, explain is run on the actual statement. In some
cases, explain cannot be run on some statements with parameter markers and an
error is returned. Any activity that shows such an error will not have historical
data generated for it.
Once the tool has completed generating historical data, it will tell you how many
activities it has successfully generated historical data for.
Run the historical data report script wlmhistrep.pl to generate reports based on the
data that was generated in step 1. The format is as follows:
wlmhistrep.pl dbAlias userId passwd [outputFile report schemaName fromTime toTime submitter]
The report parameter can be any combination from the following letters:
v A: Tables hit
v B: Tables not hit
v C: Indexes hit
v D: Indexes not hit
v E: Submitters
If the userId parameter you specify is not the same as what was used to run the
wlmhist.pl script when the wlmhist table was created, you must specify the
correct schemaName. The fromTime and toTime parameters must be specified in
timestamp format (for example 2007-06-06-17.00.00).
For this exercise, generate reports for tables hit and indexes not hit:
perl wlmhistrep.pl sample db2inst1 password - AD
TABLE NAME TABLE SCHEMA INDEX NAME INDEX SCHEMA INDEX TYPE
__________________ _______________ __________________ _______________ __________
EXPLAIN_ARGUMENT KARENAM ARG_I1 KARENAM REG
HMON_ATM_INFO SYSTOOLS ATM_UNIQ SYSTOOLS REG
CUSTOMER KARENAM CUST_CID_XMLIDX KARENAM XVIL
CUSTOMER KARENAM CUST_NAME_XMLIDX KARENAM XVIL
CUSTOMER KARENAM CUST_PHONES_XMLIDX KARENAM XVIL
CUSTOMER KARENAM CUST_PHONET_XMLIDX KARENAM XVIL
EXPLAIN_DIAGNOSTIC KARENAM EXP_DIAG_DAT_I1 KARENAM REG
HMON_COLLECTION SYSTOOLS HI_OBJ_UNIQ SYSTOOLS REG
ADVISE_INDEX KARENAM IDX_I1 KARENAM REG
ADVISE_INDEX KARENAM IDX_I2 KARENAM REG
SYSATTRIBUTES SYSIBM INDATTRIBUTES01 SYSIBM REG
SYSATTRIBUTES SYSIBM INDATTRIBUTES02 SYSIBM REG
:
:
Disable activity collection for the default service subclass of the default user service
super class, and clean up the activity tables.
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT ACTIVITY DATA NONE
The inter-arrival time is the time between the arrival of one activity into the
system and the arrival of the next activity. The estimated cost of an activity is
an estimate of the system resources that will be used to execute the activity; it
applies only to DML activities. An inter-arrival time histogram is useful when
correlated with a lifetime histogram or other lifetime statistics: it helps
determine whether a change in lifetime statistics was caused by a change in the
arrival rate of the workload, by a change in the complexity of the workload (more
complex queries), or by a change in the system. The estimated cost histogram can
similarly be correlated with the inter-arrival time and lifetime histograms to see
whether a change in the lifetime histogram is due to a change in the complexity of
the workload (more complex queries with higher estimated costs being submitted), a
change in the arrival rate of activities (determined from the inter-arrival time
distribution), or a change in the system itself, such as the introduction of a new
threshold, a change in the priority given to a service class, or a change in
hardware.
Histograms are accessed through the statistics event monitor. This exercise reuses
the statistics event monitor created in Exercise 1 Step 1.
A second view makes it easier to find out what service classes are having
histograms collected for them. The HISTOGRAMBIN_DB2STATISTICS table reports
the service class for which the histogram is being collected by giving the service
class ID. Joining this table with the SERVICECLASSES catalog table will permit the
service class information to be presented with the service super class name and
service subclass name instead of the service class ID.
CREATE VIEW HISTOGRAMSERVICECLASSES AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) HISTOGRAM_TYPE,
SUBSTR(PARENTSERVICECLASSNAME,1,24) SERVICE_SUPERCLASS,
SUBSTR(SERVICECLASSNAME,1,24) SERVICE_SUBCLASS
FROM HISTOGRAMBIN_DB2STATISTICS H,
SYSCAT.SERVICECLASSES S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
The third view lists all of the times that a histogram of a given type was collected
for a given service class. Like the HISTOGRAMSERVICECLASSES view, it joins
the HISTOGRAMBIN_DB2STATISTICS table with the SERVICECLASSES catalog
table. The difference is that it also includes the STATISTICS_TIMESTAMP column
as one of the columns in the view.
CREATE VIEW HISTOGRAMTIMES AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) HISTOGRAM_TYPE,
SUBSTR(PARENTSERVICECLASSNAME,1,24) SERVICE_SUPERCLASS,
SUBSTR(SERVICECLASSNAME,1,24) SERVICE_SUBCLASS,
STATISTICS_TIMESTAMP TIMESTAMP
FROM HISTOGRAMBIN_DB2STATISTICS H,
SYSCAT.SERVICECLASSES S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
The fourth and final view will be used to show the histograms themselves. It also
demonstrates a common task when dealing with histograms, which is to aggregate
them over time. This view shows the top of each bin and the number of activities
that were counted towards each bin. For the two histograms covered below, the
BIN_TOP field measures the number of milliseconds in the activity inter-arrival
time and the number of timerons in the estimated cost. For example, if BIN_TOP is
3000 milliseconds, the BIN_TOP of the previous bin is 2000 milliseconds, and the
NUMBER_IN_BIN is ten for an inter-arrival time histogram, you know that ten
activities each arrived into the system between 2 and 3 seconds after the arrival
of the previous activity.
CREATE VIEW HISTOGRAMS(HISTOGRAM_TYPE, SERVICE_SUPERCLASS,
SERVICE_SUBCLASS, BIN_TOP, NUMBER_IN_BIN) AS
SELECT DISTINCT SUBSTR(HISTOGRAM_TYPE,1,24) HISTOGRAM_TYPE,
SUBSTR(PARENTSERVICECLASSNAME,1,24) SERVICE_SUPERCLASS,
SUBSTR(SERVICECLASSNAME,1,24) SERVICE_SUBCLASS,
TOP AS BIN_TOP,
SUM(NUMBER_IN_BIN) AS NUMBER_IN_BIN
FROM HISTOGRAMBIN_DB2STATISTICS H,
SYSCAT.SERVICECLASSES S
WHERE H.SERVICE_CLASS_ID = S.SERVICECLASSID
GROUP BY HISTOGRAM_TYPE, PARENTSERVICECLASSNAME, SERVICECLASSNAME, TOP
Turning on the collection of histograms is done for the default user service class by
altering its default subclass to collect aggregate activity data with the EXTENDED
option. This provides both the three histograms available in the BASE option
(lifetime, execution time, and queue time) and the two histograms available only
when using the EXTENDED option (inter-arrival time and estimated cost).
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA EXTENDED
If not already active, activate the event monitor that was created earlier so that it
can receive aggregate data whenever it is collected.
SET EVENT MONITOR DB2STATISTICS STATE 1
First run some activities; after the activities have finished the
WLM_COLLECT_STATS stored procedure is called to send the service class
statistics to the active statistics event monitor (including the activity lifetime,
execution time, queue time, inter-arrival time and estimated cost histograms for the
default user service class). These histograms contain data about all activities that
executed in the default user service class since aggregate activity statistics were
enabled. Calling the stored procedure also resets the statistics. To show changes in
database activity over time, three collection intervals are created.
In the first interval, run two scripts, work1.db2 and work2.db2, then collect and
reset the statistics.
db2 -o- -tvf work1.db2
db2 -o- -tvf work2.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS
In the second interval, run only the work1.db2 script once, then collect and reset
the statistics.
db2 -o- -tvf work1.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS
In the third interval, run the work1.db2 script twice and the work2.db2 script once,
then collect and reset the statistics.
db2 -o- -tvf work1.db2
db2 -o- -tvf work2.db2
db2 -o- -tvf work1.db2
CONNECT TO SAMPLE
CALL WLM_COLLECT_STATS
Collecting data periodically in this way lets you watch how the work on your
system changes over time.
Now that statistics have been collected, the views created earlier can be used to
look at the statistics. The HISTOGRAMTYPES view just returns the types of
histograms available.
SELECT * FROM HISTOGRAMTYPES
HISTOGRAM_TYPE
------------------------
CoordActEstCost
CoordActExecTime
CoordActInterArrivalTime
CoordActLifetime
CoordActQueueTime
5 record(s) selected.
Since the EXTENDED option was used when altering the service class, there are
five histograms.
1 record(s) selected.
The HISTOGRAMTIMES view shows the times when histograms were collected.
Since the WLM_COLLECT_STATS procedure was run three times, there are three
timestamps for the inter-arrival time histogram shown.
SELECT * FROM HISTOGRAMTIMES
WHERE HISTOGRAM_TYPE = 'CoordActInterArrivalTime'
AND SERVICE_SUPERCLASS = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS = 'SYSDEFAULTSUBCLASS'
ORDER BY TIMESTAMP
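The three timestamps themselves are not reproduced in this excerpt. The bin listing
that follows comes from the HISTOGRAMS view; a query paralleling the earlier
lifetime example would produce it:
SELECT BIN_TOP, NUMBER_IN_BIN FROM HISTOGRAMS
WHERE HISTOGRAM_TYPE = 'CoordActInterArrivalTime'
AND SERVICE_SUPERCLASS = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS = 'SYSDEFAULTSUBCLASS'
ORDER BY BIN_TOP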
BIN_TOP NUMBER_IN_BIN
-------------------- --------------------
-1 0
1 10
2 6
3 7
5 14
8 7
12 32
19 2
29 9
44 24
68 11
103 8
158 8
241 9
369 1
562 10
858 5
1309 5
1997 0
3046 0
4647 0
7089 0
10813 2
16493 2
25157 0
38373 0
58532 0
89280 0
136181 0
207720 0
316840 0
483283 0
737162 0
1124409 0
1715085 0
2616055 0
3990325 0
6086529 0
9283913 0
14160950 0
21600000 0
41 record(s) selected.
Running this query produces output that will not be exactly the same as what is
shown above, because activity inter-arrival times depend on the performance of the
system. In the output above, there are 41 bins and all of the largest bins are
empty. At the top, there is a bin whose BIN_TOP is -1. This bin represents all of
the activities whose inter-arrival time was too large to fit in the histogram. A
NUMBER_IN_BIN greater than zero when the BIN_TOP is -1 indicates that you should
probably increase the high bin value of your histogram. In the output above, the
NUMBER_IN_BIN is 0, so there is no need to make such a change.
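The listing that follows is the estimated cost histogram for the same service
subclass; a query of the same shape as the earlier ones, using the CoordActEstCost
histogram type from the HISTOGRAMTYPES output above, would produce it:
SELECT BIN_TOP, NUMBER_IN_BIN FROM HISTOGRAMS
WHERE HISTOGRAM_TYPE = 'CoordActEstCost'
AND SERVICE_SUPERCLASS = 'SYSDEFAULTUSERCLASS'
AND SERVICE_SUBCLASS = 'SYSDEFAULTSUBCLASS'
ORDER BY BIN_TOP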
BIN_TOP NUMBER_IN_BIN
-------------------- --------------------
-1 0
1 39
2 0
3 0
5 0
8 30
12 0
19 30
29 0
44 0
68 0
103 0
158 0
241 0
369 0
562 0
858 0
1309 0
1997 0
3046 0
4647 0
7089 0
10813 0
16493 0
25157 0
38373 0
58532 0
89280 0
136181 0
207720 0
316840 0
483283 0
737162 0
1124409 0
1715085 0
2616055 0
3990325 0
6086529 0
9283913 0
14160950 0
21600000 0
41 record(s) selected.
A histogram such as this is typical for a small workload. With a small workload,
there is not much variation in the size of activities, so there are only three different
bins that had activities counted towards them. Slightly more than 60% of the
activities had a cost estimate between 5 and 19 timerons, with the rest having cost
estimates of less than 1 timeron.
The final step is to turn off collection of aggregate activities on the default user
service class, to drop the views and to delete the information in the statistics
tables.
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT AGGREGATE ACTIVITY DATA NONE
The workload management sample application demonstrates how you can use DB2
workload manager features to achieve the following objectives:
Protect the system from runaway queries
Runaway queries are costly and cause poor performance. The workload
management sample application identifies queries with the potential to
become runaway queries, and then stops these queries from running after
they have violated a specified threshold.
Limit concurrent resource consumption by individual applications
The sample application shows how to use DB2 workload manager features
to prevent applications that submit large amounts of concurrent work from
negatively affecting the performance of other applications.
Achieve a specific response time
Workload management features permit you to achieve a specific response
time objective of the form: "transaction X from application Y shall complete
within 1 second in 90% of cases," regardless of what other activity is
running concurrently on the system. The sample application will
demonstrate how to achieve a response time objective.
Consistent response time for short queries
Queries that typically have a response time of less than 1 second should
have a relatively consistent response time regardless of what other
workloads are running on the system. The sample application uses the
query execution time histogram to monitor consistency.
Protect the system during periods of peak demand
Workload management policy features protect the system from capacity
overload during bursts of peak demand by queuing work once the system
is sufficiently loaded.
Enable concurrent batch extract, transform, and load (ETL) processing and user
queries
Workload management features permit you to run ETL jobs (like loading
data into tables) while controlling the performance impact for users
running queries concurrently.
First, create a query that aggregates data across service classes and database
partitions using data from the WLM_GET_SERVICE_SUBCLASS_STATS_V97 table
function. Set the first and second arguments to empty strings and the third
argument to -2 (a wildcard character) to indicate that data is to be gathered for all
service classes on all database partitions.
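The exact query used by the sample application is not reproduced here. A sketch
follows; the column names COORD_ACT_COMPLETED_TOTAL, COORD_ACT_LIFETIME_AVG, and
CONCURRENT_ACT_TOP are assumptions about the table function's output rather than
details taken from the original text:
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
       SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
       SUM(COORD_ACT_COMPLETED_TOTAL) AS ACTS_COMPLETED,
       MAX(CONCURRENT_ACT_TOP) AS CONCURRENT_ACT_HIGH_WTRMRK,
       MAX(COORD_ACT_LIFETIME_AVG) AS MAX_AVG_LIFETIME_MS
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97('', '', -2)) AS SCSTATS
GROUP BY SERVICE_SUPERCLASS_NAME, SERVICE_SUBCLASS_NAME
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME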
In the preceding example data, the SUB1 service subclass in the SUP1 service
superclass is running more simultaneous activities than usual. To investigate
further, you might want to examine the statistics for workloads that map to this
service class. Your query might resemble the following one:
SELECT SUBSTR(WLSTATS.WORKLOAD_NAME,1,22) AS WL_NAME,
SUBSTR(CHAR(WLSTATS.DBPARTITIONNUM),1,4) AS PART,
CONCURRENT_WLO_TOP AS WLO_HIGH_WTRMRK,
CONCURRENT_WLO_ACT_TOP AS WLO_ACT_HIGH_WTRMRK
FROM TABLE(WLM_GET_WORKLOAD_STATS_V97('', -2)) AS WLSTATS,
TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97('', '', -2)) AS SCWLOS
WHERE WLSTATS.WORKLOAD_NAME = SCWLOS.WORKLOAD_NAME
AND SCWLOS.SERVICE_SUPERCLASS_NAME = 'SUP1'
AND SCWLOS.SERVICE_SUBCLASS_NAME = 'SUB1'
ORDER BY WL_NAME, PART;
For example, the activity might be queued, executing, or waiting on a lock. If the
activity were queued, the result would be:
APPLICATION_HANDLE UOW_ID ACTIVITY_ID REQUEST_TYPE EVENT_TYPE EVENT_OBJECT
------------------ ------ ----------- ------------ ---------- ------------
1 2 5 OPEN WAIT WLM_QUEUE
When you know what the activity is doing, you can proceed appropriately:
v If the activity is queued and either the user indicates that the query is taking
so long that they no longer care about the results or you think the query is
consuming too many resources, you can cancel it.
v If the activity is important and it is queued, consider cancelling some other,
less important work that is currently running (reducing the concurrency so that
activities leave the queue); alternatively, the user might be satisfied to know
that the work is not hanging and is just waiting for a chance to run.
v If the activity is waiting for a lock, you can use the snapshot monitor to
investigate which locks the application is waiting for.
v If the activity is waiting for a lock held by a lower-priority activity, consider
cancelling that activity.
You might also find it useful to know the DML statement that activity 5 is running.
Assuming that you have an active activities event monitor, you can run the
WLM_CAPTURE_ACTIVITY_IN_PROGRESS procedure to capture information
about the DML statement and other information about activity 5 while it is
running. Unlike the statement event monitor, the
WLM_CAPTURE_ACTIVITY_IN_PROGRESS procedure permits you to capture
information about a specific query, as opposed to every statement running at the
time. You can also obtain the statement text by using
MON_GET_ACTIVITY_DETAILS.
If you decide that you must cancel the activity, you can use the
WLM_CANCEL_ACTIVITY routine to cancel the activity without having to end
the application that issued it:
CALL WLM_CANCEL_ACTIVITY (1, 2, 5)
The application that issued the activity receives an SQL4725N error. Any
application that handles negative SQL codes is able to handle this SQL code.
The results indicate that agent 1 is waiting on a remote response. Looking at the
agent on the remote partition that is working on the same activity, the
EVENT_OBJECT field indicates that the agent is waiting to obtain a lock.
The next step is to determine who owns the lock. You can obtain this information
by turning on the monitor switches and using the snapshot monitor table function,
as shown in the following example:
SELECT AGENT_ID AS WAITING_FOR_LOCK,
SUBSTR(APPL_ID_HOLDING_LK,1,40) AS HOLDING_LOCK,
CAST(LOCK_MODE_REQUESTED AS SMALLINT) AS WANTED,
CAST(LOCK_MODE AS SMALLINT) AS HELD
FROM TABLE(SNAPSHOT_LOCKWAIT('SAMPLE',-1)) AS SLW
You can also determine the lock owner by using the following sequence of
commands:
db2pd -db database alias -locks
db2pd -db database alias -transactions
If you want to cancel the long-running activity, you can use the
WLM_CANCEL_ACTIVITY procedure. If the successful completion of the
long-running application is more important than the successful completion of the
lock-owning application, you can force the lock-owning application.
The statements contained in the example procedure are themselves activities and
subject to threshold control (depending on how thresholds are configured on your
system). Consider running the example queued-activity-cancelling procedure in a
service class that does not have any queuing thresholds applied.
1. Copy the following example script, that creates the procedure to cancel
activities queued for more than 1 hour, into a file you have created (for
example, a file named x.clp):
-- Simple history table to track cancelled
-- activities
WHILE AT_END = 0 DO
-- Now use activity entry time to estimate the time spend queued.
-- Queuing occurs before an activity begins execution, so queue
-- time is approximated using current time - entry time
OPEN QTIMECUR;
FETCH QTIMECUR INTO QUEUETIME;
CLOSE QTIMECUR;
END IF;
END WHILE;
CLOSE QUEUEDAPPS;
END@
2. Create the queued-activity-cancelling procedure by executing script x.clp using
the following command:
db2 -td@ -f x.clp
3. Execute the queued-activity-cancelling procedure by issuing the following
command:
db2 "call sample.cancel_queued_activities()"
Any activities that have been queued for more than 1 hour will be cancelled.
4. The following example script schedules the queued-activity-cancelling
procedure to run every 10 minutes using the DB2 Administrative Task
Scheduler. Copy the example script into a file you have created (for example, a
file named y.clp):
---------------------------------------
-- Enable DB2 Admin Task Scheduler if
-- not already enabled.
---------------------------------------
!db2set DB2_ATS_ENABLE=YES@
---------------------------------------
-- Create SYSTOOLSPACE tablespace.
-- Enable if SYSTOOLSPACE does not already
-- exist on your database.
---------------------------------------
---------------------------------------
-- Add a task to automatically cancel
-- activities that have been queued
-- for more than 1 hour. Task is scheduled
-- to run every 10 minutes. Adjust the
-- schedule as necessary using the
-- schedule input parameter (specified in
-- cron format).
---------------------------------------
CALL SYSPROC.ADMIN_TASK_ADD(
'CANCEL ACTIVITIES QUEUED FOR MORE THAN 1 HOUR',
NULL,
NULL,
NULL,
'*/10 * * * *',
'SAMPLE',
'CANCEL_QUEUED_ACTIVITIES',
NULL,
NULL,
NULL )@
The first step is to create a work class set with a work class that will be used to
identify activities with a low estimated cost. For example:
CREATE WORK CLASS SET WCS1
(WORK CLASS SMALLDML WORK TYPE DML FOR TIMERONCOST FROM 0 TO 500)
Then, you would create a database work action set with a work action that applies
an activity-total-time threshold to the SMALLDML work class. The threshold
action is CONTINUE and the COLLECT ACTIVITY DATA option is specified so
that an activity that violates the threshold is sent to the activities event monitor on
completion:
CREATE WORK ACTION SET WAS1 FOR DATABASE USING WORK CLASS SET WCS1
(WORK ACTION WA1 ON WORK CLASS SMALLDML WHEN ACTIVITYTOTALTIME > 15 MINUTES
COLLECT ACTIVITY DATA WITH DETAILS CONTINUE)
Finally, you would create and activate a threshold violations event monitor and an
activities event monitor:
CREATE EVENT MONITOR THVIOLATIONS FOR THRESHOLD VIOLATIONS WRITE TO TABLE
SET EVENT MONITOR THVIOLATIONS STATE 1
CREATE EVENT MONITOR DB2ACTIVITIES FOR ACTIVITIES WRITE TO TABLE
SET EVENT MONITOR DB2ACTIVITIES STATE 1
Now when a DML activity with an estimated cost of less than 500 timerons runs
for greater than 15 minutes, a threshold violation record is written to the
THVIOLATIONS event monitor (indicating that the total time threshold was
violated), and details about the DML activity are collected when the activity
completes and sent to the DB2ACTIVITIES event monitor. You can use the
information collected about the activity in the DB2ACTIVITIES event monitor to
investigate further. For example, you can run the EXPLAIN statement on the query
and examine the access plan. You should also consider the system load and
queuing at the time the activity was collected, as a long lifetime can be a result of
insufficient system resources or the activity being queued. The long lifetime does
not necessarily indicate out-of-date statistics.
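To see which activities tripped the threshold, you could query the table written by
the THVIOLATIONS event monitor. The following is a sketch; the default table name
THRESHOLDVIOLATIONS_THVIOLATIONS and the columns shown are assumptions about the
write-to-table target rather than details taken from the original text:
SELECT VARCHAR(APPL_ID, 30) AS APPL_ID,
       UOW_ID,
       ACTIVITY_ID,
       TIME_OF_VIOLATION,
       VARCHAR(THRESHOLD_ACTION, 10) AS ACTION
FROM THRESHOLDVIOLATIONS_THVIOLATIONS
ORDER BY TIME_OF_VIOLATION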
FETCH_LOOP:
LOOP
END@
2. Run the following CLP command:
db2 -td@ -f cancelall.ddl
After the procedure has been created, execute the procedure (for example,
canceling all activities in the service subclass which has ID = 15) using the
following statement:
CALL CANCELLALL( 15 )
OPEN C1;
FETCH_LOOP:
LOOP
-- Now force any connections that are mapped to the service class, but which
-- don't currently have any activities running
OPEN C2;
FETCH_LOOP2:
LOOP
END@
2. Run the following CLP command:
db2 -td@ -f forceall.ddl
After the procedure has been created, execute the procedure (for example,
disconnecting all applications that are either mapped to or currently executing
activities in a particular service class with ID = 15) using the following statement:
CALL FORCEALLINSC( 15 )
Assume that you performed capacity planning and that the data in the following
table represents the results of this exercise for work types and response time goals:
Table 58. Results of capacity planning
Type of work                   Application             Goal                                           Importance  Expected throughput
Order entry                    orderentryapp.exe       Obtain an average response time < 1 second     High        10 000 (both inserts and updates) per day
Business intelligence queries  businessobjects.exe     Obtain an average response time < 10 seconds   High        100 queries per day
Batch processing               batchapp.exe            Maximize throughput                            Low         5000 updates per day
Other                          All other applications  Best effort                                    Low         100 activities per day
Based on the data in the preceding table, you might create three service classes
(ORDER_ENTRY_SC, BI_QUERIES_SC, and BATCH_SC) and three workloads
(ORDER_ENTRY_WL, BI_QUERIES_WL, and BATCH_WL) to assign work to the
service classes. After creating the service classes and workloads, you might create a
statistics event monitor to collect aggregate activity information, such as the
activity lifetime histogram for each service class. Assume that the data in the
following table compares the average daily count of activities in each service class
(computed from the activity lifetime histogram) with the volumes that were
predicted in the capacity planning exercise:
Table 59. Activities each day
Service class Predicted activities per day Actual activities per day
ORDER_ENTRY_SC 10 000 9700
BI_QUERIES_SC 100 115
BATCH_SC 5000 5412
SYSDEFAULTUSERCLASS 100 85
The observed data indicates that the capacity planning estimates were accurate.
The data in the following table compares the average activity lifetimes (obtained
from the activity lifetime histogram) with the response time goals determined
during capacity planning and shows that activities in the BI_QUERIES_SC service
class are not meeting their response time objectives.
Table 60. Response times
Service class Response time goal Actual average lifetime
ORDER_ENTRY_SC < 1 second 0.8 seconds
With DB2 workload manager, you can use different approaches when addressing
the problem of the business intelligence queries not meeting their response time
goals:
v Limiting the concurrency of lower-importance service classes
v Allowing the operating system workload manager to provide less processor
resource to less-important service classes
v Modifying the agent and I/O prefetcher priorities for the service classes
v Using any combination of the previous three approaches
Assume that processor time is the resource that is causing the business intelligence
queries to fail to meet their goals. Also assume that you use the operating system
workload manager to give the SYSDEFAULTUSERCLASS service class less
processor resources than other service classes. You can then capture aggregate
activity information over a period of days to observe whether the changes to the
CPU allocation provide the results that you expect. The data in the following table
shows another comparison between response time goals and actual average
lifetimes computed from the histograms after you made the operating system
workload manager changes. All service classes are now meeting their response
time objectives and, because of the processor time reallocation, activities in the
SYSDEFAULTUSERCLASS service class have had their response times doubled.
Table 61. Response times after reconfiguration
Service class Response time goal Actual average lifetime
ORDER_ENTRY_SC < 1 second 0.6 seconds
BI_QUERIES_SC < 10 seconds 9.5 seconds
BATCH_SC 1.5 seconds
SYSDEFAULTUSERCLASS 20 minutes
Assume that you do not initially know which workloads and service classes to
create because either you do not have full knowledge of the workload on the
system or you do not yet know which workloads are required for stable execution
results. Also assume that you know that some applications have response time
requirements but that you do not yet know how many other applications are
competing for resources with such time-critical applications. You can use the
workload management monitoring capabilities to determine this.
The sections that follow provide information about how to perform these steps.
Assume that you have two important business intelligence applications, BI1 and
BI2 and that you need to minimize the response times for these applications. You
can create workloads for these two applications and map them to a service class
called MOSTIMPORTANT for which you can assign system resources.
On the AIX operating system, you use the AIX Workload Manager to create a
service class called MOSTIMPORTANT, and give this service class a guaranteed set
of resources.
On the DB2 data server, you create the required service classes and workloads:
CREATE SERVICE CLASS MOSTIMPORTANT OUTBOUND CORRELATOR 'MOSTIMPORTANT'
CREATE WORKLOAD BI1WORKLOAD APPLNAME ('BI1') SERVICE CLASS MOSTIMPORTANT
CREATE WORKLOAD BI2WORKLOAD APPLNAME ('BI2') SERVICE CLASS MOSTIMPORTANT
For the purposes of this example, assume that even after you account for the
known applications, a significant portion of the system workload is unaccounted
for. You therefore need to better understand and possibly control this workload.
TOP_IN_MINUTES PERCENT_IN_BIN
---------------- --------------
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.000 0.00
0.001 0.00
0.001 0.00
0.002 0.00
0.004 0.00
0.006 0.00
0.009 0.00
0.014 0.00
0.021 0.00
0.033 0.00
0.050 0.00
0.077 0.00
0.118 0.00
0.180 0.00
0.274 0.00
0.419 0.00
0.639 0.00
0.975 0.00
1.488 0.00
2.269 0.00
3.462 0.00
The following figure shows the results of the preceding query plotted as a graph:
[Figure: bar graph of the percentage of total activities against activity lifetime in minutes, from 0 to 400]
In this example, 30% of the activities fall into the 101 minutes or greater lifetime
range. To capture information about these activities, create an activity lifetime
threshold of 100 minutes with the CONTINUE and COLLECT ACTIVITY DATA
options as shown in the following example. If this threshold is violated, activity
information is sent to an active activities event monitor.
CREATE THRESHOLD COLLECTLONGESTRUNNING30PERCENT
FOR SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
ACTIVITIES ENFORCEMENT DATABASE ENABLE
WHEN ACTIVITYTOTALTIME > 100 MINUTES COLLECT ACTIVITY DATA CONTINUE
You can analyze the information you collected about activities in the previous step
according to the application that submitted them. You might specify the following
query:
SELECT SUBSTR(APPL_NAME, 1, 16) AS APPLICATION_NAME,
       AVG(TIMESTAMPDIFF(4, CHAR(TIME_COMPLETED - TIME_CREATED)))
         AS AVG_LIFETIME_MINUTES,
       COUNT(*) AS ACTIVITY_COUNT
FROM ACTIVITY_DB2ACTIVITIES
GROUP BY APPL_NAME
ORDER BY APPL_NAME
Now that you have the two important applications running in the
MOSTIMPORTANT service class and the unimportant application running in the
BESTEFFORT service class, much less work is running in the default user service
class. In this situation, it might be inexpensive to collect information about every
activity in this service class. Alternatively, you might not need to further subdivide
the work and can stop here. Assume that you want to collect information about the
remaining activities, in case the remaining workload contains surprises. You can
accomplish this task by setting COLLECT ACTIVITY DATA for the default user
service class and creating an activities event monitor:
ALTER SERVICE CLASS SYSDEFAULTSUBCLASS UNDER SYSDEFAULTUSERCLASS
COLLECT ACTIVITY DATA ON COORDINATOR WITHOUT DETAILS
Allow the system to run so that data is collected. You can analyze the results as in
step 3.
SELECT SUBSTR(APPL_NAME, 1, 16) AS APPLICATION_NAME,
       AVG(TIMESTAMPDIFF(4, CHAR(TIME_COMPLETED - TIME_CREATED)))
         AS AVG_LIFETIME_MINUTES,
       COUNT(*) AS ACTIVITY_COUNT
FROM ACTIVITY_DB2ACTIVITIES
GROUP BY APPL_NAME
ORDER BY APPL_NAME
The results show that the ONLYSMALL application produces the majority of the
unclassified activities. Because this application was not included in the results
when you collected information about the largest activities, you can assume that
ONLYSMALL did not produce any large queries during the period of data
collection.
Syntax
Procedure parameters
application_handle
An input argument of type BIGINT that specifies the application handle whose
activity is to be cancelled. If the argument is null, no activity will be found and
an SQL4702N with SQLSTATE 5U035 is returned.
uow_id
An input argument of type INTEGER that specifies the unit of work ID of the
activity that is to be cancelled. If the argument is null, no activity will be found
and an SQL4702N with SQLSTATE 5U035 is returned.
activity_id
An input argument of type INTEGER that specifies the activity ID which
uniquely identifies the activity within the unit of work that is to be cancelled.
If the argument is null, no activity will be found and an SQL4702N with
SQLSTATE 5U035 is returned.
Authorization
Example
Usage notes
v If no activity can be found, an SQL4702N with SQLSTATE 5U035 is returned.
v If the activity cannot be cancelled because it is not in the correct state (not
initialized), an SQL4703N (reason code 1) with SQLSTATE 5U016 is returned.
v If the activity is successfully cancelled, an SQL4725N with SQLSTATE 57014 is
returned to the cancelled application.
When you apply this procedure to an activity with child activities, the procedure
recursively generates a record for each child activity. This information is collected
and sent when you call the procedure; the procedure does not wait until the parent
activity completes execution. The record of the activity in the event monitor is
marked as a partial record.
Syntax
WLM_CAPTURE_ACTIVITY_IN_PROGRESS ( application_handle ,
uow_id , activity_id )
Procedure parameters
If you do not specify all of the following parameters, no activity is found, and
SQL4702N with SQLSTATE 5U035 is returned.
application_handle
An input argument of type BIGINT that specifies the handle of the application
whose activity information is to be captured.
uow_id
An input argument of type INTEGER that specifies the unit of work ID of the
activity whose information is to be captured.
activity_id
An input argument of type INTEGER that specifies the activity ID that
uniquely identifies the activity within the unit of work whose information is to
be captured.
Authorization
Example
After the procedure is completed, the administrator can use the following table
function to find out where the activity spent its time. The function retrieves the
information from the DB2ACTIVITIES event monitor.
CREATE FUNCTION SHOWCAPTUREDACTIVITY(APPHNDL BIGINT,
UOWID INTEGER,
ACTIVITYID INTEGER)
RETURNS TABLE (UOW_ID INTEGER, ACTIVITY_ID INTEGER, STMT_TEXT VARCHAR(40),
LIFE_TIME DOUBLE)
LANGUAGE SQL
READS SQL DATA
NO EXTERNAL ACTION
DETERMINISTIC
RETURN WITH RAH (LEVEL, APPL_ID, PARENT_UOW_ID, PARENT_ACTIVITY_ID,
UOW_ID, ACTIVITY_ID, STMT_TEXT, ACT_EXEC_TIME) AS
(SELECT 1, ROOT.APPL_ID, ROOT.PARENT_UOW_ID,
ROOT.PARENT_ACTIVITY_ID, ROOT.UOW_ID, ROOT.ACTIVITY_ID,
ROOTSTMT.STMT_TEXT, ACT_EXEC_TIME
FROM ACTIVITY_DB2ACTIVITIES ROOT, ACTIVITYSTMT_DB2ACTIVITIES ROOTSTMT
WHERE ROOT.APPL_ID = ROOTSTMT.APPL_ID AND ROOT.AGENT_ID = APPHNDL
AND ROOT.UOW_ID = ROOTSTMT.UOW_ID AND ROOT.UOW_ID = UOWID
AND ROOT.ACTIVITY_ID = ROOTSTMT.ACTIVITY_ID AND ROOT.ACTIVITY_ID = ACTIVITYID
UNION ALL
SELECT PARENT.LEVEL +1, CHILD.APPL_ID, CHILD.PARENT_UOW_ID,
CHILD.PARENT_ACTIVITY_ID, CHILD.UOW_ID,
CHILD.ACTIVITY_ID, CHILDSTMT.STMT_TEXT, CHILD.ACT_EXEC_TIME
FROM RAH PARENT, ACTIVITY_DB2ACTIVITIES CHILD,
ACTIVITYSTMT_DB2ACTIVITIES CHILDSTMT
WHERE PARENT.APPL_ID = CHILD.APPL_ID AND
CHILD.APPL_ID = CHILDSTMT.APPL_ID AND
PARENT.UOW_ID = CHILD.PARENT_UOW_ID AND
CHILD.UOW_ID = CHILDSTMT.UOW_ID AND
PARENT.ACTIVITY_ID = CHILD.PARENT_ACTIVITY_ID AND
CHILD.ACTIVITY_ID = CHILDSTMT.ACTIVITY_ID AND
PARENT.LEVEL < 64
)
SELECT UOW_ID, ACTIVITY_ID, SUBSTR(STMT_TEXT,1,40),
       ACT_EXEC_TIME AS LIFE_TIME
FROM RAH
Usage notes
Activity information is collected only on the coordinator partition for the activity.
Syntax
WLM_COLLECT_STATS ( )
Authorization
Examples
Usage notes
If you call the procedure while another collection and reset request is in progress
(for example, while another invocation of the procedure is running or automatic
collection is occurring), SQL1632W with SQLSTATE 01H53 is returned, and your
new request is ignored.
The WLM_COLLECT_STATS procedure only starts the collection and reset process.
The procedure might return to the caller before all statistics have been written to
the active statistics event monitor. Depending on how quickly the statistics
collection and reset occur, the call to the WLM_COLLECT_STATS procedure
(which is itself an activity) is counted in the statistics for either the prior collection
interval or the new collection interval that has just started.
This function returns detailed information about a specific activity identified by its
application handle, unit of work ID, and activity ID. This information includes
details about any thresholds that the activity has violated.
Syntax
WLM_GET_ACTIVITY_DETAILS ( application_handle , uow_id ,
activity_id , dbpartitionnum )
Authorization
Example
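As a minimal illustration, the following query returns all available details for the
activity with application handle 1, unit of work ID 1, and activity ID 1 (placeholder
values), across all database partitions (-2):
SELECT *
FROM TABLE(WLM_GET_ACTIVITY_DETAILS(1, 1, 1, -2)) AS ACTDETAILS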
Usage note
The following elements are returned only if the corresponding thresholds apply to
the activity.
Table 64. Elements returned if applicable
Element Name Description
ACTIVITYTOTALTIME_THRESHOLD_ID The ID of the ACTIVITYTOTALTIME threshold that
was applied to the activity.
ACTIVITYTOTALTIME_THRESHOLD_VALUE A timestamp that is computed by adding the
ACTIVITYTOTALTIME threshold duration to the
activity entry time. If the activity is still executing
when this timestamp is reached, the threshold will
be violated.
WLM_GET_QUEUE_STATS - Return threshold queue statistics
The WLM_GET_QUEUE_STATS function returns basic statistics for one or more
threshold queues.
Syntax
WLM_GET_QUEUE_STATS ( threshold_predicate , threshold_domain ,
threshold_name , threshold_id )
Table function parameters
threshold_predicate
An input argument of type VARCHAR(27) that specifies a threshold predicate.
If the argument is null or an empty string, data is returned for all thresholds
that meet the other criteria.
The threshold_predicate values match those of the THRESHOLDPREDICATE
column in the SYSCAT.THRESHOLDS view.
threshold_domain
An input argument of type VARCHAR(18) that specifies a threshold domain.
The possible values are as follows:
DB Database
SB Service subclass
SP Service superclass
WA Work action set
If the argument is null or an empty string, data is returned for all thresholds
that meet the other criteria.
The threshold_domain values match those of the DOMAIN column in the
SYSCAT.THRESHOLDS view.
threshold_name
An input argument of type VARCHAR(128) that specifies a threshold name. If
the argument is null or an empty string, data is returned for all thresholds that
meet the other criteria. The threshold_name values match those of the
THRESHOLDNAME column in the SYSCAT.THRESHOLDS view.
threshold_id
An input argument of type INTEGER that specifies a threshold ID. If the
argument is null or -1, data is returned for all thresholds that meet the other
criteria. The threshold_id values match those of the THRESHOLDID column in
the SYSCAT.THRESHOLDS view.
Authorization
Example
The following query displays the basic statistics for all the queues on a system,
across all partitions:
SELECT substr(THRESHOLD_NAME, 1, 6) THRESHNAME,
THRESHOLD_PREDICATE,
THRESHOLD_DOMAIN,
DBPARTITIONNUM PART,
QUEUE_SIZE_TOP,
QUEUE_TIME_TOTAL,
QUEUE_ASSIGNMENTS_TOTAL QUEUE_ASSIGN
FROM table(WLM_GET_QUEUE_STATS('', '', '', -1)) as QSTATS
Usage note
The function does not aggregate data across queues (on a partition) or across
partitions (for one or more queues). However, you can use SQL queries to
aggregate data, as shown in the previous example.
Information returned
Table 65. Information returned for WLM_GET_QUEUE_STATS
Column name Data type Description
THRESHOLD_PREDICATE VARCHAR(27) Threshold predicate of the threshold
responsible for this queue. The
possible values are as follows:
CONCDBC
Concurrent database
coordinator activities
threshold
DBCONN
Total database partition
connections threshold
SCCONN
Total service class partition
connections threshold
The threshold predicate values match
those of the THRESHOLDPREDICATE
column in the SYSCAT.THRESHOLDS
view.
WLM_GET_SERVICE_CLASS_AGENTS_V97 - List agents running in a service class
The WLM_GET_SERVICE_CLASS_AGENTS_V97 function returns the list of agents, fenced
mode processes, and system entities on a specified partition that are running in a
specified service class or working on behalf of a specified application.
Syntax
WLM_GET_SERVICE_CLASS_AGENTS_V97 ( service_superclass_name ,
service_subclass_name , application_handle , dbpartitionnum )
Authorization
Example
Example 1
The following query returns a list of agents that are associated with application
handle 1 for all database partitions. You can determine the application handle by
using the LIST APPLICATIONS command or the
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 table function.
SELECT SUBSTR(CHAR(APPLICATION_HANDLE),1,7) AS APPHANDLE,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
SUBSTR(CHAR(AGENT_TID),1,9) AS AGENT_TID,
SUBSTR(AGENT_TYPE,1,11) AS AGENTTYPE,
SUBSTR(AGENT_STATE,1,10) AS AGENTSTATE,
SUBSTR(REQUEST_TYPE,1,12) AS REQTYPE,
SUBSTR(CHAR(UOW_ID),1,6) AS UOW_ID,
SUBSTR(CHAR(ACTIVITY_ID),1,6) AS ACT_ID
FROM TABLE(WLM_GET_SERVICE_CLASS_AGENTS_V97(CAST(NULL AS VARCHAR(128)),
CAST(NULL AS VARCHAR(128)), 1, -2)) AS SCDETAILS
ORDER BY APPHANDLE, PART, AGENT_TID
Example 2
21 record(s) selected.
Using the same query at a later time shows that the WLM threshold has queued
an agent:
EVENT_OBJECT EVENT_TYPE EVENT_STATE EVENT_OBJECT_NAME
--------------- ----------------- ------------------- --------------------------
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
WLM_QUEUE WAIT IDLE MYCONCDBCOORDTH
ROUTINE PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
REQUEST PROCESS EXECUTING -
21 record(s) selected.
Usage note
The parameters are, in effect, ANDed together. That is, if you specify conflicting
input parameters, such as a service superclass SUP_A and a subclass SUB_B such
that SUB_B is not a subclass of SUP_A, no rows are returned.
Information returned
Table 66. Information returned by WLM_GET_SERVICE_CLASS_AGENTS_V97
Column name Data type Description
SERVICE_SUPERCLASS_NAME VARCHAR (128) Name of the service superclass from which this
record was collected.
SERVICE_SUBCLASS_NAME VARCHAR (128) Name of the service subclass from which this
record was collected.
APPLICATION_HANDLE BIGINT System-wide unique ID for the application. On a
single-partitioned database, this identifier consists of a 16-bit
counter. On a multi-partitioned database, this identifier
consists of the coordinating partition number concatenated
with a 16-bit counter. In addition, this identifier is the same
on every partition where the application makes a secondary
connection.
DBPARTITIONNUM SMALLINT Partition number from which this record was collected.
ENTITY VARCHAR (32) One of the following values:
v If the type of entity is an agent, the value is db2agent.
v If the type of entity is a fenced mode process, the value is
db2fmp (pid) where pid is the process ID of the fenced
mode process.
v Otherwise, the value is the name of the system entity.
WORKLOAD_NAME VARCHAR (128) Name of the workload from which this record was
collected.
WORKLOAD_OCCURRENCE_ID INTEGER ID of the workload occurrence. This ID does not uniquely
identify the workload occurrence unless it is coupled with
the coordinator database partition number and the workload
name.
UOW_ID INTEGER Unique ID of the unit of work that this activity started in.
ACTIVITY_ID INTEGER Unique activity ID within a unit of work.
PARENT_UOW_ID INTEGER Unique ID of the unit of work that the parent activity of the
activity started in. The value of the column is null if this
activity has no parent.
PARENT_ACTIVITY_ID INTEGER Unique activity ID within a unit of work for the parent of
the activity whose ID is the same as activity_id. The value of
this column is null if this activity has no parent.
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 - List workload occurrences
The WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 function
returns the list of all workload occurrences running in a specified service class on a
particular partition. A workload occurrence is a specific database connection whose
attributes match the definition of a workload and hence is associated with or
assigned to the workload.
Syntax
WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97 ( service_superclass_name ,
service_subclass_name , dbpartitionnum )
Authorization
Example
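For illustration, a query along the following lines lists the workload occurrences in all
service classes on all partitions (the column substring lengths are chosen only to keep
the output readable):
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
       SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
       SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
       SUBSTR(CHAR(COORD_PARTITION_NUM),1,9) AS COORDPART,
       SUBSTR(CHAR(APPLICATION_HANDLE),1,7) AS APPHANDLE,
       SUBSTR(WORKLOAD_NAME,1,22) AS WORKLOAD_NAME
FROM TABLE(WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97
       (CAST(NULL AS VARCHAR(128)), CAST(NULL AS VARCHAR(128)), -2)) AS SCINFO
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME, PART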
If the system has four database partitions and is currently running two workloads,
the previous query produces results such as the following ones:
SUPERCLASS_NAME SUBCLASS_NAME PART COORDPART ...
------------------- ------------------ ---- --------- ...
SYSDEFAULTMAINTENAN SYSDEFAULTSUBCLASS 0 0 ...
SYSDEFAULTSYSTEMCLA SYSDEFAULTSUBCLASS 0 0 ...
Usage note
The parameters are, in effect, ANDed together. That is, if you specify conflicting
input parameters, such as a service superclass SUP_A and a subclass SUB_B such
that SUB_B is not a subclass of SUP_A, no rows are returned.
Information returned
Table 68. Information returned for WLM_GET_SERVICE_CLASS_WORKLOAD_OCCURRENCES_V97
Column name Data type Description
SERVICE_SUPERCLASS_NAME VARCHAR(128) Name of the service superclass from which this
record was collected.
SERVICE_SUBCLASS_NAME VARCHAR(128) Name of the service subclass from which this
record was collected.
DBPARTITIONNUM SMALLINT Partition number from which this record was
collected.
COORD_PARTITION_NUM SMALLINT Partition number of the coordinator partition of
the specified workload occurrence.
APPLICATION_HANDLE BIGINT System-wide unique ID for the application. On a
single-partitioned database, this identifier
consists of a 16-bit counter. On a
multi-partitioned database, this identifier
consists of the coordinating partition number
concatenated with a 16-bit counter. In addition,
this identifier is the same on every partition
where the application makes a secondary
connection.
WORKLOAD_NAME VARCHAR(128) Name of the workload from which this record
was collected.
WLM_GET_SERVICE_SUBCLASS_STATS_V97 - Return statistics of service subclasses
Syntax
WLM_GET_SERVICE_SUBCLASS_STATS_V97 ( service_superclass_name ,
service_subclass_name , dbpartitionnum )
Authorization
Examples
Example 1: Because every activity must be mapped to a DB2 service class before
being run, you can monitor the global state of the system by using the service class
statistics table functions and querying all of the service classes on all partitions. In
the following example, a null value is passed for service_superclass_name and
service_subclass_name to return statistics for all service classes, and the value -2 is
specified for dbpartitionnum to return statistics for all partitions:
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME,1,19) AS SUPERCLASS_NAME,
SUBSTR(SERVICE_SUBCLASS_NAME,1,18) AS SUBCLASS_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
CAST(COORD_ACT_LIFETIME_AVG / 1000 AS DECIMAL(9,3))
AS AVGLIFETIME,
CAST(COORD_ACT_LIFETIME_STDDEV / 1000 AS DECIMAL(9,3))
AS STDDEVLIFETIME,
SUBSTR(CAST(LAST_RESET AS VARCHAR(30)),1,16) AS LAST_RESET
FROM TABLE(WLM_GET_SERVICE_SUBCLASS_STATS_V97(CAST(NULL AS VARCHAR(128)),
CAST(NULL AS VARCHAR(128)), -2)) AS SCSTATS
ORDER BY SUPERCLASS_NAME, SUBCLASS_NAME, PART
The statement returns service class statistics such as average activity lifetime and
standard deviation in seconds, as shown in the following sample output:
SUPERCLASS_NAME SUBCLASS_NAME PART ...
------------------- ------------------ ---- ...
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 0 ...
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 1 ...
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 2 ...
SYSDEFAULTUSERCLASS SYSDEFAULTSUBCLASS 3 ...
... AVGLIFETIME STDDEVLIFETIME LAST_RESET
... ----------- -------------- ----------------
... 691.242 34.322 2006-07-24-11.44
... 644.740 22.124 2006-07-24-11.44
... 612.431 43.347 2006-07-24-11.44
... 593.451 28.329 2006-07-24-11.44
By checking the average execution times and numbers of activities in the output of
this table function, you can get a good high-level view of the load on each
partition for a specific database. Any significant variations in the high-level gauges
returned by this table function might indicate a change in the load on the system.
Usage notes
Some statistics are returned only if you set the COLLECT AGGREGATE ACTIVITY
DATA and COLLECT AGGREGATE REQUEST DATA parameters for the
corresponding service subclass to a value other than NONE.
The parameters are, in effect, ANDed together. That is, if you specify conflicting
input parameters, such as a superclass named SUPA and a subclass named SUBB
such that SUBB is not a subclass of SUPA, no rows are returned.
The COORD_ACT_LIFETIME_STDDEV
value of a service subclass is unaffected by
activities that pass through the service
subclass but are remapped to a different
subclass before they are completed.
WLM_GET_SERVICE_SUPERCLASS_STATS - Return statistics of service superclasses
Syntax
WLM_GET_SERVICE_SUPERCLASS_STATS ( service_superclass_name ,
dbpartitionnum )
Authorization
Example
The following query displays the basic statistics for all the service superclasses on
the system, across all database partitions:
SELECT SUBSTR(SERVICE_SUPERCLASS_NAME, 1, 26) SERVICE_SUPERCLASS_NAME,
DBPARTITIONNUM,
LAST_RESET,
CONCURRENT_CONNECTION_TOP CONCURRENT_CONN_TOP
FROM TABLE(WLM_GET_SERVICE_SUPERCLASS_STATS('', -2)) as SCSTATS
Usage note
Information returned
Table 70. Information returned for WLM_GET_SERVICE_SUPERCLASS_STATS
Column name Data type Description
SERVICE_SUPERCLASS_NAME VARCHAR(128) Name of the service superclass from which this
record was collected.
DBPARTITIONNUM SMALLINT Partition number from which this record was
collected.
LAST_RESET TIMESTAMP Time when statistics were last reset. There are four
events that trigger a reset of statistics:
v You call the WLM_COLLECT_STATS procedure.
v The wlm_collect_int configuration parameter
causes a collection and reset.
v You reactivate the database.
v You modify the service superclass for which
statistics are being reported and commit the
change.
The LAST_RESET time stamp is in local time.
CONCURRENT_CONNECTION_TOP INTEGER Highest number of concurrent coordinator
connections in this class since the last reset.
WLM_GET_WORK_ACTION_SET_STATS - Return work action set statistics
Syntax
WLM_GET_WORK_ACTION_SET_STATS ( work_action_set_name ,
dbpartitionnum )
Authorization
EXECUTE privilege on the WLM_GET_WORK_ACTION_SET_STATS function.
Example
Assume that there are three work classes: ReadClass, WriteClass, and LoadClass.
There is a work action associated with ReadClass and a work action associated
with LoadClass, but there is no work action associated with WriteClass. On
partition 0, the number of activities currently running or queued are as follows:
v ReadClass class: eight
v WriteClass class: four
v LoadClass class: two
v Unassigned: three
The following query displays these activity counts for each work action set, work
class, and partition:
SELECT SUBSTR(WORK_ACTION_SET_NAME,1,18) AS WORK_ACTION_SET_NAME,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
SUBSTR(WORK_CLASS_NAME,1,15) AS WORK_CLASS_NAME,
LAST_RESET,
SUBSTR(CHAR(ACT_TOTAL),1,14) AS ACT_TOTAL
FROM TABLE(WLM_GET_WORK_ACTION_SET_STATS
(CAST(NULL AS VARCHAR(128)), -2)) AS WASSTATS
ORDER BY WORK_ACTION_SET_NAME, WORK_CLASS_NAME, PART
Sample output is as follows. Because there is no work action associated with the
WriteClass work class, the four activities to which it applies are counted in the
artificial class denoted by an asterisk (*) in the output. The three activities that
were not assigned to any work class are also included in the artificial class.
WORK_ACTION_SET_NAME PART WORK_CLASS_NAME LAST_RESET ACT_TOTAL
-------------------- ---- --------------- -------------------------- --------------
AdminActionSet 0 ReadClass 2005-11-25-18.52.49.343000 8
AdminActionSet 1 ReadClass 2005-11-25-18.52.50.478000 0
AdminActionSet 0 LoadClass 2005-11-25-18.52.49.343000 2
AdminActionSet 1 LoadClass 2005-11-25-18.52.50.478000 0
AdminActionSet 0 * 2005-11-25-18.52.49.343000 7
AdminActionSet 1 * 2005-11-25-18.52.50.478000 0
Information returned
Table 71. Information returned for WLM_GET_WORK_ACTION_SET_STATS
Column name Data type Description
WORK_ACTION_SET_NAME VARCHAR(128) Name of the work action set. A name is returned only if
you enable the work action set.
DBPARTITIONNUM SMALLINT Partition number from which this record was collected.
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 - Return a list of activities
The WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 function returns
the list of all activities that were submitted by the specified application on the
specified partition and have not yet been completed.
Syntax
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 ( application_handle ,
dbpartitionnum )
Authorization
Example
After you identify the application handle, you can look up all the activities
currently running in this application. For example, suppose that an administrator
wants to list the activities of an application whose application handle, determined
by using the LIST APPLICATIONS command, is 1. The administrator runs the
following query:
SELECT SUBSTR(CHAR(COORD_PARTITION_NUM),1,5) AS COORD,
SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
SUBSTR(CHAR(UOW_ID),1,5) AS UOWID,
SUBSTR(CHAR(ACTIVITY_ID),1,5) AS ACTID,
SUBSTR(CHAR(PARENT_UOW_ID),1,8) AS PARUOWID,
SUBSTR(CHAR(PARENT_ACTIVITY_ID),1,8) AS PARACTID,
ACTIVITY_TYPE AS ACTTYPE,
SUBSTR(CHAR(NESTING_LEVEL),1,7) AS NESTING
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97(1, -2)) AS WLOACTS
ORDER BY PART, UOWID, ACTID
Information returned
Table 72. Information returned by WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97
Column name Data type Description
APPLICATION_HANDLE BIGINT System-wide unique ID for the application.
On a single-partitioned database, this
identifier consists of a 16-bit counter. On a
multi-partitioned database, this identifier
consists of the coordinating partition number
concatenated with a 16-bit counter. In
addition, this identifier is the same on every
partition where the application makes a
secondary connection.
WLM_GET_WORKLOAD_STATS_V97 - Return workload statistics
Syntax
WLM_GET_WORKLOAD_STATS_V97 ( workload_name , dbpartitionnum )
Authorization
Example
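As an illustrative example, the following query displays basic statistics for all
workloads on all partitions:
SELECT SUBSTR(WORKLOAD_NAME,1,22) AS WL_DEF_NAME,
       SUBSTR(CHAR(DBPARTITIONNUM),1,4) AS PART,
       COORD_ACT_COMPLETED_TOTAL,
       CONCURRENT_WLO_ACT_TOP,
       CONCURRENT_WLO_TOP
FROM TABLE(WLM_GET_WORKLOAD_STATS_V97(CAST(NULL AS VARCHAR(128)), -2)) AS WLSTATS
ORDER BY WL_DEF_NAME, PART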
Usage note
The function does not aggregate data across workloads, partitions, or service
classes. However, you can use SQL queries to aggregate data.
Information returned
Table 73. Information returned by WLM_GET_WORKLOAD_STATS_V97
Column name Data type Description
WORKLOAD_NAME VARCHAR(128) Name of the workload from which this record was
collected.
DBPARTITIONNUM SMALLINT Partition number from which this record was
collected
LAST_RESET TIMESTAMP Time when statistics were last reset. There are four
events that trigger a reset of statistics:
v You call the WLM_COLLECT_STATS procedure.
v The wlm_collect_int configuration parameter
causes a collection and reset.
v You reactivate the database.
v You modify the workload for which statistics are
being reported and commit the change.
The LAST_RESET timestamp is in local time.
CONCURRENT_WLO_TOP INTEGER Highest number of concurrent occurrences of the
specified workload on this partition since the last
reset.
CONCURRENT_WLO_ACT_TOP INTEGER Highest number of concurrent activities (both
coordinator and nested) in either executing state
(which includes idle and waiting) or queued state
that has been reached in any occurrence of this
workload since the last reset. The value of the
column is updated by each workload occurrence at
the end of its unit of work.
COORD_ACT_COMPLETED_TOTAL BIGINT Total number of coordinator activities at any
nesting level that were assigned to any occurrence
of this workload that were completed since the last
reset. The value of this column is updated by each
workload occurrence at the end of its unit of work.
WLM_SET_CLIENT_INFO - Set client information
By using this procedure, you can set the client's user ID, application name,
workstation name, accounting information, or workload information at the DB2
server. Calling this procedure changes the stored values of the relevant transaction
processor (TP) monitor client information fields and special register settings for
this connection.
Unlike the sqleseti API, this procedure does not set client information at the client
but instead sets the corresponding client attributes on the DB2 server. Therefore,
you cannot use the sqleqry API to query the client information that is set at the
DB2 server using this procedure.
The data values provided with the procedure are converted to the appropriate
database code page before being stored in the related TP monitor fields or special
registers. Any data value which exceeds the maximum supported size after
conversion to the database code page is truncated before being stored at the server.
The truncated values are returned by both the TP monitor fields and the special
registers when those stored values are queried.
Syntax
WLM_SET_CLIENT_INFO ( client_userid , client_wrkstnname , client_applname ,
client_acctstr , client_workload )
Procedure parameters
client_userid
An input argument of type VARCHAR(255) that specifies the user ID for the
client. If you specify NULL, the value remains unchanged. If you specify an
empty string, which is the default value, the user ID for the client is reset to
the default value, which is blank.
client_wrkstnname
An input argument of type VARCHAR(255) that specifies the workstation
name for the client. If you specify NULL, the value remains unchanged. If you
specify an empty string, which is the default value, the workstation name for
the client is reset to the default value, which is blank.
client_applname
An input argument of type VARCHAR(255) that specifies the application name
for the client. If you specify NULL, the value remains unchanged. If you
specify an empty string, which is the default value, the application name for
the client is reset to the default value, which is blank.
client_acctstr
An input argument of type VARCHAR(255) that specifies the accounting string
for the client. If you specify NULL, the value remains unchanged. If you
specify an empty string, which is the default value, the accounting string for
the client is reset to the default value, which is blank.
client_workload
An input argument of type VARCHAR(255) that specifies the workload
assignment mode for the client (for example, AUTOMATIC or
SYSDEFAULTADMWORKLOAD). If you specify NULL, the value remains unchanged.
Authorization
EXECUTE privilege on the WLM_SET_CLIENT_INFO procedure.
Examples
The following procedure call sets the user ID, workstation name, application name,
accounting string, and workload assignment mode for the client:
CALL SYSPROC.WLM_SET_CLIENT_INFO('db2user', 'machine.torolab.ibm.com',
'auditor', 'Accounting department', 'AUTOMATIC')
The following procedure call sets the user ID to db2user2 for the client without
setting the other client attributes:
CALL SYSPROC.WLM_SET_CLIENT_INFO('db2user2', NULL, NULL, NULL, NULL)
The following procedure call resets the user ID for the client to blank without
modifying the values of the other client attributes:
CALL SYSPROC.WLM_SET_CLIENT_INFO('', NULL, NULL, NULL, NULL)
For service classes, when you remap activities between service subclasses with a
REMAP ACTIVITY action, only the act_cpu_time_top high watermark of the
service subclass where an activity completes is updated, provided that a new high
watermark is reached. The act_cpu_time_top high watermarks of other service
subclasses an activity is mapped to but does not complete in are unaffected.
Usage
Use this element to determine the highest amount of processor time used by an
activity on a partition for a service class, workload, or work class during the time
interval collected.
Usage
This element can be used on its own to determine the elapsed time that DB2 spent
executing the activity on each partition. This element can also be used together with
time_started and time_completed monitor elements on the coordinator partition to
compute the idle time for cursor activities. You can use the following formula:
Cursor idle time = (time_completed - time_started) - act_exec_time
Usage
Use this count to determine whether the remapping of activities into the service
subclass is occurring as desired.
Usage
Use this count to determine whether the remapping of activities out of the service
subclass is occurring as desired.
For service classes, when you remap activities between service subclasses with a
REMAP ACTIVITY action only the act_rows_read_top high watermark of the
service subclass where an activity completes is updated, provided that a new high
watermark is reached. The act_rows_read_top high watermarks of service
subclasses an activity is mapped to but does not complete in are unaffected.
Table 78. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this element to determine the highest number of rows read by an activity on a
partition for a service class, workload, or work class during the time interval
collected.
Usage
Every time an activity has one or more work actions associated with a work class
applied to it, a counter for the work class is updated. This counter is exposed
using the act_total monitor element. The counter can be used to judge the
effectiveness of the work action set (for example, how many activities have had one
or more work actions applied to them).
Usage
Use this element to correlate information returned by the above event types.
Usage
Use this element to determine whether to expect an activity event for the activity
that violated the threshold to be written to the activity event monitor.
When an activity finishes or aborts while the activity event monitor is active, the
activity that violated this threshold is collected if the value of this monitor element
is 'Y'. If the value is 'N', the activity is not collected.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity.
To uniquely identify an activity outside its unit of work, use the combination of
activity_id and uow_id plus one of the following: appl_id or agent_id.
Usage
Use this element with activity_id, uow_id, and appl_id monitor elements to
uniquely identify activity records when information about the same activity has
been written to the activities event monitor multiple times.
For example, information about an activity would be sent to the activities event
monitor twice in the following case:
v the WLM_CAPTURE_ACTIVITY_IN_PROGRESS stored procedure was used to
capture information about the activity while it was running
v information about the activity was collected when the activity completed,
because the COLLECT ACTIVITY DATA clause was specified on the service
class with which the activity is associated
Usage
The value OTHER is returned for SET statements that do not perform SQL (for
example, SET special register, or SET EVENT MONITOR STATE) and the LOCK
TABLE statement.
Usage
Use this element to determine the highest aggregate DML activity system
temporary table space usage reached on a partition for a service subclass in the
time interval collected.
This element can be used to link an activity collected by the activities event
monitor to the applications associated with the activity, if such applications also
support the Application Response Measurement (ARM) standard.
Usage
Usage
Use this element with the corresponding top element to determine the range of a
bin within a histogram.
Usage
Use this element to determine the highest concurrency of activities (including nested
activities) reached on a partition for a service subclass in the time interval
collected.
Usage
Usage
Use this element to determine the highest number of concurrent activities reached on a
partition for any occurrence of this workload in the time interval collected.
Usage
Use this element to determine the highest concurrency of workload occurrences reached
on a partition for a workload in the time interval collected.
Usage
concurrentdbcoordactivities_db_threshold_queued -
Concurrent database coordinator activities database threshold
queued monitor element
This monitor element returns 'Yes' to indicate that the activity was queued by the
CONCURRENTDBCOORDACTIVITIES database threshold. 'No' indicates that the
activity was not queued.
Table 96. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
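One way to examine this element for a running activity is to extract it from the
DETAILS XML document that the MON_GET_ACTIVITY_DETAILS table function returns. The
following sketch assumes an application handle, unit of work ID, and activity ID of 1,
and that the element appears under the db2_activity_details document in the standard
monitor namespace; verify the exact path against the DETAILS document on your system:
SELECT X.DB_THRESHOLD_QUEUED
FROM TABLE(MON_GET_ACTIVITY_DETAILS(1, 1, 1, -2)) AS A,
     XMLTABLE(XMLNAMESPACES(DEFAULT 'http://www.ibm.com/xmlns/prod/db2/mon'),
              '$d/db2_activity_details' PASSING XMLPARSE(DOCUMENT A.DETAILS) AS "d"
              COLUMNS DB_THRESHOLD_QUEUED VARCHAR(8)
                 PATH 'concurrentdbcoordactivities_db_threshold_queued') AS X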
concurrentdbcoordactivities_db_threshold_value -
Concurrent database coordinator activities database threshold
value monitor element
This monitor element returns the upper bound of the
CONCURRENTDBCOORDACTIVITIES database threshold that was applied to the
activity.
Table 97. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
concurrentdbcoordactivities_db_threshold_violated -
Concurrent database coordinator activities database threshold
violated monitor element
This monitor element returns 'Yes' to indicate that the activity violated the
CONCURRENTDBCOORDACTIVITIES database threshold. 'No' indicates that the
activity has not yet violated the threshold.
Table 98. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_subclass_threshold_id -
Concurrent database coordinator activities service subclass
threshold ID monitor element
This monitor element returns the ID of the CONCURRENTDBCOORDACTIVITIES
service subclass threshold that was applied to the activity.
Table 99. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_subclass_threshold_queued -
Concurrent database coordinator activities service subclass
threshold queued monitor element
This monitor element returns 'Yes' to indicate that the activity was queued by the
CONCURRENTDBCOORDACTIVITIES service subclass threshold. 'No' indicates
that the activity was not queued.
Usage
concurrentdbcoordactivities_subclass_threshold_value -
Concurrent database coordinator activities service subclass
threshold value monitor element
This monitor element returns the upper bound of the
CONCURRENTDBCOORDACTIVITIES service subclass threshold that was applied
to the activity.
Table 101. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_subclass_threshold_violated -
Concurrent database coordinator activities service subclass
threshold violated monitor element
This monitor element returns 'Yes' to indicate that the activity violated the
CONCURRENTDBCOORDACTIVITIES service subclass threshold. 'No' indicates
that the activity has not yet violated the threshold.
Table 102. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
Usage
concurrentdbcoordactivities_superclass_threshold_queued -
Concurrent database coordinator activities service superclass
threshold queued monitor element
This monitor element returns 'Yes' to indicate that the activity was queued by the
CONCURRENTDBCOORDACTIVITIES service superclass threshold. 'No' indicates
that the activity was not queued.
Table 104. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_superclass_threshold_value -
Concurrent database coordinator activities service superclass
threshold value monitor element
The upper bound of the CONCURRENTDBCOORDACTIVITIES service superclass
threshold that was applied to the activity.
Table 105. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
concurrentdbcoordactivities_superclass_threshold_violated -
Concurrent database coordinator activities service superclass
threshold violated monitor element
This monitor element returns 'Yes' to indicate that the activity violated the
CONCURRENTDBCOORDACTIVITIES service superclass threshold. 'No' indicates
that the activity has not yet violated the threshold.
Table 106. Table Function Monitoring Information
Table Function Monitor Element Collection Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_wl_was_threshold_id -
Concurrent database coordinator activities workload work
action set threshold ID monitor element
The identifier of the CONCURRENTDBCOORDACTIVITIES workload work action
set threshold that was applied to the activity.
Table 107. Table Function Monitoring Information
Table Function Monitor Element Collection Command and Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_wl_was_threshold_queued -
Concurrent database coordinator activities workload work
action set threshold queued monitor element
This monitor element returns 'Yes' to indicate that the activity was queued by the
CONCURRENTDBCOORDACTIVITIES workload work action set threshold. 'No'
indicates that the activity was not queued.
Usage
concurrentdbcoordactivities_wl_was_threshold_value -
Concurrent database coordinator activities workload work
action set threshold value monitor element
The upper bound of the CONCURRENTDBCOORDACTIVITIES workload work
action set threshold that was applied to the activity.
Table 109. Table Function Monitoring Information
Table Function Monitor Element Collection Command and Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
Usage
concurrentdbcoordactivities_wl_was_threshold_violated -
Concurrent database coordinator activities workload work
action set threshold violated monitor element
This monitor element returns 'Yes' to indicate that the activity violated the
CONCURRENTDBCOORDACTIVITIES workload work action set threshold. 'No'
indicates that the activity has not yet violated the threshold.
Table 110. Table Function Monitoring Information
Table Function Monitor Element Collection Command and Level
MON_GET_ACTIVITY_DETAILS table Always collected
function - Get complete activity details
(reported in DETAILS XML document)
For service classes, if you remap an activity to a different subclass with a REMAP
ACTIVITY action before it aborts, then this activity counts only towards the total
of the subclass it aborts in.
Table 111. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wlstats -
Usage
For service classes, if you remap an activity to a different subclass with a REMAP
ACTIVITY action before it completes, then this activity counts only towards the
total of the subclass it completes in.
Table 112. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_wlstats -
Statistics event_scstats -
Usage
This element can be used to determine the throughput of activities in the system or
to aid in calculating average activity lifetime across multiple partitions.
For service classes, the estimated cost of an activity is counted only towards the
service subclass in which the activity enters the system. When you remap activities
between service subclasses with a REMAP ACTIVITY action, the
coord_act_est_cost_avg mean of the service subclass you remap an activity to is
unaffected.
Table 113. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this statistic to determine the arithmetic mean of the estimated costs of
coordinator DML activities at nesting level 0 that are associated with this service
subclass, workload, or work class that completed or aborted since the last statistics
reset.
This average can also be used to determine whether or not the histogram template
used for the activity estimated cost histogram is appropriate. Compute the average
activity estimated cost from the activity estimated cost histogram. Compare the
computed average with this monitor element. If the computed average deviates
from the true average reported by this monitor element, consider altering the
histogram template for the activity estimated cost histogram, using a set of bin
values that are more appropriate for your data.
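For example, assuming a histogram template named ESTCOSTTEMP (a hypothetical name),
the following statement changes the high bin value from which the template's bin
boundaries are generated:
ALTER HISTOGRAM TEMPLATE ESTCOSTTEMP HIGH BIN VALUE 100000000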
Usage
Use this statistic to determine the arithmetic mean of execution time for
coordinator activities associated with a service subclass, workload, or work class
that completed or aborted.
This average can also be used to determine whether or not the histogram template
used for the activity execution time histogram is appropriate. Compute the average
activity execution time from the activity execution time histogram. Compare the
computed average with this monitor element. If the computed average deviates
from the true average reported by this monitor element, consider altering the
histogram template for the activity execution time histogram, using a set of bin
values that are more appropriate for your data.
For service classes, the inter-arrival time mean is calculated for service subclasses
through which activities enter the system. When you remap activities between
service subclasses with a REMAP ACTIVITY action, the
coord_act_interarrival_time_avg of the service subclass you remap an activity to is
unaffected.
Table 115. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this statistic to determine the arithmetic mean of the time between arrivals of coordinator
activities at nesting level 0 associated with this service subclass, workload, or work
class.
The inter-arrival time can be used to determine arrival rate, which is the inverse of
inter-arrival time. This average can also be used to determine whether or not the
histogram template used for the activity inter-arrival time histogram is appropriate.
Compute the average activity inter-arrival time from the activity inter-arrival time
histogram. Compare the computed average with this monitor element. If the
computed average deviates from the true average reported by this monitor
element, consider altering the histogram template for the activity inter-arrival time
histogram, using a set of bin values that are more appropriate for your data.
For service classes, when you remap activities between service subclasses with a
REMAP ACTIVITY action, only the coord_act_lifetime_avg mean of the final
service class where the activity completes is affected.
Table 116. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this statistic to determine the arithmetic mean of the lifetime for coordinator
activities associated with a service subclass, workload, or work class that
completed or aborted.
This statistic can also be used to determine whether or not the histogram template
used for the activity lifetime histogram is appropriate. Compute the average
activity lifetime from the activity lifetime histogram. Compare the computed
average with this monitor element. If the computed average deviates from the true
average reported by this monitor element, consider altering the histogram template
for the activity lifetime histogram, using a set of bin values that are more
appropriate for your data.
To effectively use this statistic with service classes when you also remap activities
between service subclasses with a REMAP ACTIVITY action, you must aggregate
the coord_act_lifetime_top high watermark of any given service subclass with that
of other subclasses affected by the same remapping threshold or thresholds. This is
because an activity will complete after it has been remapped to a different service
subclass by a remapping threshold, and the time the activity spends in other
service subclasses before being remapped is counted only towards the service class
in which it completes.
Table 117. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_wcstats -
Statistics event_scstats -
Statistics event_wlstats -
Usage
This element can be used to help determine whether thresholds on activity
lifetime are effective and can also help to determine how to configure such
thresholds.
For service classes, the queue time counts only towards the service subclass in
which the activity completes or is aborted. When you remap activities between
service subclasses with a REMAP ACTIVITY action, the coord_act_queue_time_avg
mean of service subclasses an activity is mapped to but does not complete in is
unaffected.
Table 118. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Usage
Use this statistic to determine the arithmetic mean of the queue time for
coordinator activities associated with a service subclass, workload, or work class
that completed or aborted.
This statistic can also be used to determine whether or not the histogram template
used for the activity queue time histogram is appropriate. Compute the average
activity queue time from the activity queue time histogram. Compare the
computed average with this monitor element. If the computed average deviates
from the true average reported by this monitor element, consider altering the
histogram template for the activity queue time histogram, using a set of bin values
that are more appropriate for your data.
Usage
This element can be used to help determine whether predictive thresholds and
work actions that prevent execution are effective and whether they are too
restrictive.
This element allows the coordinator partition to be identified for activities or units
of work that have records on partitions other than the coordinator.
For service classes, the estimated cost of DML activities is counted only towards
the service subclass in which the activity enters the system. When you remap
activities between service subclasses with a REMAP ACTIVITY action, the
cost_estimate_top of the service subclass you remap an activity to is unaffected.
Table 121. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this element to determine the highest DML activity estimated cost reached on
a partition for a service class, workload, or work class in the time interval
collected.
Usage
This element can be used with the db_work_class_id element to uniquely identify
the database work class of the activity, if one exists.
Usage
Usage
Use this element to trace the path of an activity through the service classes to
which it was remapped. This element can also be used to compute aggregates of
how many activities were mapped into a given service subclass.
Usage
Use this element to identify the type of histogram. Several histograms can belong
to the same statistics record, but only one of each type.
Usage
Usage
Use this information to verify whether the activity was remapped the expected
number of times.
Usage
This element can be used to help determine whether or not thresholds are effective
for this particular application or whether the threshold violations are excessive.
Usage
Usage
Use this element along with the parent_uow_id element and appl_id element to
uniquely identify the parent activity of the activity described in this activity record.
Usage
Use this element along with the parent_activity_id element and appl_id element to
uniquely identify the parent activity of the activity described in this activity record.
Usage
The prep_time monitor element indicates how much time was spent preparing the
SQL statement, if this activity was an SQL statement, when the statement was first
introduced to the DB2 package cache. This preparation time is not part of the
activity lifetime nor does it represent time spent during a specific invocation of the
statement if the statement has already been cached in the package cache prior to
that invocation.
Usage
This element can be used to determine the number of times any connection or
activity was queued in this particular queue in a given period of time determined
by the statistics collection interval. This can help to determine the effectiveness of
queuing thresholds.
Use this element to gauge the effectiveness of queuing thresholds and to detect
when queuing is excessive.
This element is used to gauge the effectiveness of queuing thresholds and to detect
when queuing is excessive.
Usage notes
When you remap activities between service subclasses with a REMAP ACTIVITY
action, the request_exec_time_avg mean counts the partial request in each subclass
involved in remapping.
Table 143. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Usage
Use this statistic to quickly understand the average amount of time that is spent
processing each request on a database partition in this service subclass.
This average can also be used to determine whether or not the histogram template
used for the request execution time histogram is appropriate. Compute the average
request execution time from the request execution time histogram. Compare the
computed average with this monitor element. If the computed average deviates
from the true average reported by this monitor element, consider altering the
histogram template for the request execution time histogram, using a set of bin
values that are more appropriate for your data.
Usage
The value of this element matches a value from column ROUTINEID of view
SYSCAT.ROUTINES.
Note: This monitor element reports only the values for the database partition for
which this information is recorded. On DPF systems, these values may not reflect
the correct totals for the whole activity.
Table 146. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Activities event_activity Statement
Usage
Usage
Usage
This element can be used to help determine thresholds for rows returned to the
application or can be used to verify that such a threshold is configured correctly
and doing its job.
For service classes, when you remap activities between service subclasses with a
REMAP ACTIVITY action, only the rows_returned_top high watermark of the
service subclass where an activity completes is updated. High watermarks of
service subclasses an activity is mapped to but does not complete in are
unaffected.
Table 151. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this element to determine the highest number of rows returned by a DML activity
on a partition for a service class, workload, or work class in the time interval
collected.
Usage
This element can be used with the sc_work_class_id element to uniquely identify
the service class work class of the activity, if one exists.
Usage
Use this element with the section explain procedures to explain the statement and
view the access plan for the statement.
Usage
The value of this element matches a value from column SERVICECLASSID of view
SYSCAT.SERVICECLASSES. Use this element to look up the service subclass name,
or link information about a service subclass from different sources. For example,
join service class statistics with histogram bin records.
Usage
Use this element in conjunction with other activity elements for analysis of the
behavior of an activity or with other statistics elements for analysis of a service
class or threshold queue.
Usage
Use this element in conjunction with other activity elements for analysis of the
behavior of an activity or with other statistics elements for analysis of a service
class or threshold queue.
Usage
Use this element to trace the path of an activity through the service classes to
which it was remapped. It can also be used to compute aggregates of how many
activities were mapped out of a given service subclass.
Usage
Use this element to determine when this statistics record was generated.
Use this element along with the last_wlm_reset element to identify the time
interval over which the statistics in this statistics record were generated.
This monitor element can also be used to group together all statistics records that
were generated for the same collection interval.
1 This option has been deprecated. Its use is no longer recommended and
might be removed in a future release. Use the CREATE EVENT MONITOR
FOR LOCKING statement to monitor lock-related events, such as lock
timeouts, lock waits, and deadlocks.
Usage
You can use this element to uniquely identify the invocation in which a particular
SQL statement has been executed. You can also use this element in conjunction
with other statement history entries to see the sequence of SQL statements that
caused the deadlock.
For service classes, when you remap activities between service subclasses with a
REMAP ACTIVITY action, only the temp_tablespace_top high watermark of the
service subclass where an activity completes is changed. High watermarks of
service subclasses an activity is mapped to but does not complete in are
unaffected.
Table 167. Event Monitoring Information
Event Type Logical Data Grouping Monitor Switch
Statistics event_scstats -
Statistics event_wcstats -
Statistics event_wlstats -
Usage
Use this element to determine the highest DML activity system temporary table
space usage reached on a partition for a service class, workload, or work class in
the time interval collected.
This element is only updated by activities that have a temporary table space
threshold applied to them. If no temporary table space threshold is applied to an
activity, a value of 0 is returned.
Usage
Use this element to quickly determine whether any WLM thresholds have been
violated. If thresholds have been violated, you can use the threshold violations
event monitor (if it is created and active) to obtain details about the threshold
violations.
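For example, a threshold violations event monitor that writes to tables can be created
and activated as follows (the monitor name VIOLATIONS is arbitrary):
CREATE EVENT MONITOR VIOLATIONS FOR THRESHOLD VIOLATIONS WRITE TO TABLE
SET EVENT MONITOR VIOLATIONS STATE 1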
Usage
Use this element to determine whether the activity that violated the threshold was
stopped when the violation occurred, was allowed to continue executing, or was
remapped to another service subclass. If the activity was stopped, the application
that submitted the activity will have received an SQL4712N error. If the activity
was remapped to another service subclass, agents working for the activity on the
partition will be moving to the target service subclass of the threshold.
Usage
This element can be used for distinguishing the queue statistics of thresholds that
have the same predicate but different domains.
Usage
For activity thresholds, this element provides a historical record of what the
threshold's maximum value was at the time the threshold was violated. This is
useful when the threshold's maximum value has changed since the time of the violation.
Usage
Use this element to uniquely identify the queuing threshold whose statistics this
record represents.
Usage
Use this monitor element in conjunction with other statistics or threshold violation
monitor elements for analysis of a threshold violation.
Usage
Use this element to determine the number of activities or connections in the queue
for this threshold at the time the threshold was violated.
Usage
Use this monitor element in conjunction with other activity history monitor
elements for analysis of a threshold queue or for analysis of the activity that
violated a threshold.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity.
Use this element in conjunction with other threshold violations monitor elements
for analysis of a threshold violation.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity.
If the activity was rejected, the value of the act_exec_time monitor element is 0. In
this case, the value of the time_started monitor element equals the value of the
time_completed monitor element.
Usage
Use this element with the corresponding bottom element to determine the range of
a bin within a histogram.
Usage You may use this element to determine if the unit of work ended due to a
deadlock or abnormal termination. It may have been:
v Committed due to a commit statement
v Rolled back due to a rollback statement
v Rolled back due to a deadlock
v Rolled back due to an abnormal termination
v Committed at normal application termination.
v Unknown as a result of a FLUSH EVENT MONITOR command for
which units of work were in progress.
Note: API users should refer to the header file (sqlmon.h) containing
definitions of database system monitor constants.
Usage Use this element as an indicator of the time it takes for units of work to
complete.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity.
You can also use this element with the activity_id and appl_id monitor elements
to uniquely identify an activity.
Usage This element can help you determine the severity of the resource
contention problem.
Usage
You may use this element to understand the logging requirements at the unit of
work level.
Usage
This resource requirement occurs at the first SQL statement execution of that unit
of work:
v For the first unit of work, it is the time of the first database request (SQL
statement execution) after conn_complete_time.
v For subsequent units of work, it is the time of the first database request (SQL
statement execution) after the previous COMMIT or ROLLBACK.
The database system monitor excludes the time spent between the
COMMIT/ROLLBACK and the next SQL statement from its definition of a unit of
work. This measurement method reflects the time spent by the database manager
in processing database requests, separate from time spent in application logic
before the first SQL statement of that unit of work. The unit of work elapsed time
does include the time spent running application logic between SQL statements
within the unit of work.
You may use this element with the uow_stop_time monitor element to calculate
the total elapsed time of the unit of work and with the prev_uow_stop_time
monitor element to calculate the time spent in the application between units of
work.
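For example, the elements can be combined as follows:
Total unit of work elapsed time = uow_stop_time - uow_start_time
Time spent in the application between units of work = uow_start_time - prev_uow_stop_time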
You can use the uow_stop_time and the prev_uow_stop_time monitor elements to
calculate the elapsed time for the SQL Reference definition of a unit of work.
Usage You may use this element to determine the status of a unit of work. API
users should refer to the sqlmon.h header file containing definitions of
database system monitor constants.
Use this element with the prev_uow_stop_time monitor element to calculate the
total elapsed time between COMMIT/ROLLBACK points, and with the
uow_start_time monitor element to calculate the elapsed time of the latest unit of
work.
As a new unit of work is started, the contents of this element are moved to the
prev_uow_stop_time monitor element.
Usage
This element can be used to help determine whether or not the UOWTOTALTIME
threshold is effective and can also help to determine how to configure such a
threshold.
For service classes, this monitor element returns -1 when COLLECT AGGREGATE
ACTIVITY DATA for the service class is set to NONE.
For a service class, measurements taken for this high watermark are computed for
the service class assigned by the workload. Any mapping by a work action set to
change the service class of an activity does not affect this high watermark.
Usage
Use this monitor element, together with the wl_work_class_id monitor element, to
uniquely identify the workload work class of the activity, if one exists.
Usage
Usage
Use this element to determine how many occurrences of a given workload are
driving work into the system.
Usage
Use this element in conjunction with other activity history elements for analysis of
the behavior of an activity or with other statistics elements for analysis of a work
class.
Usage
Use this element along with the work_class_name element to uniquely identify the
work class whose statistics are being shown in this record or to uniquely identify
the work class which is the domain of the threshold queue whose statistics are
shown in this record.
Usage
Use this element in conjunction with other statistics elements for analysis of a
work class.
Usage
Usage
Use this ID to uniquely identify the workload to which this activity, application,
histogram bin, or workload statistics record belongs.
In the statistics event monitor and workload table functions, the workload name
identifies the workload for which statistics or metrics are being collected and
reported. In the unit of work event monitor and unit of work table functions, the
workload name identifies the workload that the unit of work was associated with.
Use the workload name to identify units of work or sets of information that apply
to a particular workload of interest.
Usage
Use this to identify the workload occurrence that submitted the activity.
Usage
Commands
SET WORKLOAD
Specifies the workload to which the database connection is to be assigned. This
command can be issued prior to connecting to a database or it can be used to
reassign the current connection once the connection has been established. If the
connection has been established, the workload reassignment will be performed at
the beginning of the next unit of work.
Authorization
Required connection
None
Command syntax
SET WORKLOAD TO { AUTOMATIC | SYSDEFAULTADMWORKLOAD }
Command parameters
AUTOMATIC
Specifies that the database connection will be assigned to a workload chosen
by the workload evaluation that is performed automatically by the server.
SYSDEFAULTADMWORKLOAD
Specifies that the database connection will be assigned to the default
administration workload, SYSDEFAULTADMWORKLOAD.
Examples
To reset the workload assignment so that it uses the workload that is chosen by the
workload evaluation performed by the server:
SET WORKLOAD TO AUTOMATIC
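To assign the database connection to the default administration workload,
SYSDEFAULTADMWORKLOAD (subject to the authority requirements described in the
usage notes):
SET WORKLOAD TO SYSDEFAULTADMWORKLOAD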
Usage notes
If the session authorization ID of the database connection does not have accessctrl,
dataaccess, wlmadm, secadm or dbadm authority, the connection cannot be assigned to
the SYSDEFAULTADMWORKLOAD and an SQL0552N error will be returned. If
the SET WORKLOAD TO SYSDEFAULTADMWORKLOAD command is issued
prior to connecting to a database, the SQL0552N error will be returned after the
database connection has been established, at the beginning of the first unit of
work. If the command is issued when the database connection has been
established, the SQL0552N error will be returned at the beginning of the next unit
of work, when the workload reassignment is supposed to take place.
Configuration parameters
The collect and reset process is initiated from the catalog partition. The
wlm_collect_int parameter must be specified on the catalog partition. It is not used
on other partitions.
The interval is configured once per database; it does not apply separately to
individual SQL requests, command invocations, or applications. There are no other
configuration parameters that need to be considered.
Note: All WLM statistics table functions return statistics that have been
accumulated since the last time the statistics were reset. The statistics will be reset
regularly on the interval specified by this configuration parameter.
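As a minimal sketch, the following command sets the collection and reset interval to 20 minutes for a database named SALES (the database name is illustrative):
UPDATE DATABASE CONFIGURATION FOR SALES USING WLM_COLLECT_INT 20
A value of 0 (the default) disables automatic collection; statistics can still be collected and reset manually by calling the WLM_COLLECT_STATS procedure.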
Catalog views
SYSCAT.HISTOGRAMTEMPLATEBINS
Each row represents a histogram template bin.
Table 219. SYSCAT.HISTOGRAMTEMPLATEBINS Catalog View
Column Name Data Type Nullable Description
TEMPLATENAME VARCHAR (128) Y Name of the histogram template.
TEMPLATEID INTEGER Identifier for the histogram template.
BINID INTEGER Identifier for the histogram template bin.
BINUPPERVALUE BIGINT The upper value for a single bin in the
histogram template.
SYSCAT.HISTOGRAMTEMPLATES
Each row represents a histogram template.
Table 220. SYSCAT.HISTOGRAMTEMPLATES Catalog View
Column Name Data Type Nullable Description
TEMPLATEID INTEGER Identifier for the histogram template.
TEMPLATENAME VARCHAR (128) Name of the histogram template.
CREATE_TIME TIMESTAMP Time at which the histogram template was
created.
ALTER_TIME TIMESTAMP Time at which the histogram template was
last altered.
SYSCAT.HISTOGRAMTEMPLATEUSE
Each row represents a relationship between a workload management object that
can use histogram templates and a histogram template.
Table 221. SYSCAT.HISTOGRAMTEMPLATEUSE Catalog View
Column Name Data Type Nullable Description
TEMPLATENAME VARCHAR (128) Y Name of the histogram template.
TEMPLATEID INTEGER Identifier for the histogram template.
HISTOGRAMTYPE CHAR (1) The type of information collected by
histograms based on this template.
v C = Activity estimated cost histogram
v E = Activity execution time histogram
v I = Activity interarrival time histogram
v L = Activity life time histogram
v Q = Activity queue time histogram
v R = Request execution time histogram
OBJECTTYPE CHAR (1) The type of WLM object.
v b = Service class
v k = Work action
v w = Workload
OBJECTID INTEGER Identifier of the WLM object.
SERVICECLASSNAME VARCHAR (128) Y Name of the service class.
PARENTSERVICECLASSNAME VARCHAR (128) Y The name of the parent service class of the
service subclass that uses the histogram
template.
WORKACTIONNAME VARCHAR (128) Y The name of the work action that uses the
histogram template.
WORKACTIONSETNAME VARCHAR (128) Y The name of the work action set containing
the work action that uses the histogram
template.
WORKLOADNAME VARCHAR (128) Y The name of the workload that uses the
histogram template.
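For example, a query such as the following (illustrative only) lists which workload management objects use each histogram template and what type of information the histograms collect:
SELECT TEMPLATENAME, HISTOGRAMTYPE, OBJECTTYPE,
       SERVICECLASSNAME, WORKACTIONNAME, WORKLOADNAME
  FROM SYSCAT.HISTOGRAMTEMPLATEUSE
  ORDER BY TEMPLATENAME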
SYSCAT.SERVICECLASSES
Each row represents a service class.
Table 222. SYSCAT.SERVICECLASSES Catalog View
Column Name Data Type Nullable Description
SERVICECLASSNAME VARCHAR (128) Name of the service class.
SYSCAT.THRESHOLDS
Each row represents a threshold.
Table 223. SYSCAT.THRESHOLDS Catalog View
Column Name Data Type Nullable Description
THRESHOLDNAME VARCHAR (128) Name of the threshold.
THRESHOLDID INTEGER Identifier for the threshold.
ORIGIN CHAR (1) Origin of the threshold.
v U = Threshold was created by a user
v W = Threshold was created through a
work action set
THRESHOLDCLASS CHAR (1) Classification of the threshold.
v A = Aggregate threshold
v C = Activity threshold
SYSCAT.WORKACTIONSETS
Each row represents a work action set.
Table 225. SYSCAT.WORKACTIONSETS Catalog View
Column Name Data Type Nullable Description
ACTIONSETNAME VARCHAR (128) Name of the work action set.
ACTIONSETID INTEGER Identifier for the work action set.
WORKCLASSSETNAME VARCHAR (128) Y Name of the work class set.
WORKCLASSSETID INTEGER The identifier of the work class set that is to
be mapped to the object specified by the
OBJECTID. This column refers to
WORKCLASSSETID in the
SYSCAT.WORKCLASSSETS view.
CREATE_TIME TIMESTAMP Time at which the work action set was
created.
ALTER_TIME TIMESTAMP Time at which the work action set was last
altered.
ENABLED CHAR (1) v N = This work action set is disabled.
v Y = This work action set is enabled.
OBJECTTYPE CHAR (1) v b = Service superclass
v w = Workload
v Blank = Database
OBJECTNAME VARCHAR (128) Y Name of the service class or workload.
OBJECTID INTEGER The identifier of the object to which the
work class set (specified by the
WORKCLASSSETID) is mapped. If the
OBJECTTYPE is 'b', the OBJECTID is the ID
of the service superclass. If the OBJECTTYPE
is 'w', the OBJECTID is the ID of the
workload. If the OBJECTTYPE is blank, the
OBJECTID is -1.
REMARKS VARCHAR (254) Y User-provided comments, or the null value.
SYSCAT.WORKCLASSES
Each row represents a work class defined for a work class set.
Table 226. SYSCAT.WORKCLASSES Catalog View
Column Name Data Type Nullable Description
WORKCLASSNAME VARCHAR (128) Name of the work class.
WORKCLASSSETNAME VARCHAR (128) Y Name of the work class set.
SYSCAT.WORKCLASSSETS
Each row represents a work class set.
Table 227. SYSCAT.WORKCLASSSETS Catalog View
Column Name Data Type Nullable Description
WORKCLASSSETNAME VARCHAR (128) Name of the work class set.
WORKCLASSSETID INTEGER Identifier for the work class set.
CREATE_TIME TIMESTAMP Time at which the work class set was
created.
SYSCAT.WORKLOADAUTH
Each row represents a user, group, or role that has been granted USAGE privilege
on a workload.
Table 228. SYSCAT.WORKLOADAUTH Catalog View
Column Name Data Type Nullable Description
WORKLOADID INTEGER Identifier for the workload.
WORKLOADNAME VARCHAR (128) Name of the workload.
GRANTOR VARCHAR (128) Grantor of the privilege.
GRANTORTYPE CHAR (1) v U = Grantor is an individual user
GRANTEE VARCHAR (128) Holder of the privilege.
GRANTEETYPE CHAR (1) v G = Grantee is a group
v R = Grantee is a role
v U = Grantee is an individual user
USAGEAUTH CHAR (1) Indicates whether grantee holds USAGE
privilege on the workload.
v N = Not held
v Y = Held
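For example, a query such as the following (illustrative only) lists the users, groups, and roles that hold the USAGE privilege on each workload:
SELECT WORKLOADNAME, GRANTEE, GRANTEETYPE
  FROM SYSCAT.WORKLOADAUTH
  WHERE USAGEAUTH = 'Y'
  ORDER BY WORKLOADNAME, GRANTEE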
SYSCAT.WORKLOADCONNATTR
Each row represents a connection attribute in the definition of a workload.
Table 229. SYSCAT.WORKLOADCONNATTR Catalog View
Column Name Data Type Nullable Description
WORKLOADID INTEGER Identifier for the workload.
WORKLOADNAME VARCHAR (128) Name of the workload.
CONNATTRTYPE VARCHAR (30) Type of the connection attribute.
v 1 = APPLNAME
v 2 = SYSTEM_USER
v 3 = SESSION_USER
v 4 = SESSION_USER GROUP
v 5 = SESSION_USER ROLE
v 6 = CURRENT CLIENT_USERID
v 7 = CURRENT CLIENT_APPLNAME
v 8 = CURRENT CLIENT_WRKSTNNAME
v 9 = CURRENT CLIENT_ACCTNG
v 10 = ADDRESS
CONNATTRVALUE VARCHAR (1000) Value of the connection attribute.
For example, regarding the use of upper and lowercase letters in the names of
objects that are visible in the file system (databases, instances, and so on):
v On UNIX platforms, names are case-sensitive. For example, /data1 is not the
same directory as /DATA1 or /Data1.
v On Windows platforms, names are not case-sensitive. For example, \data1 is the
same as \DATA1 and \Data1.
Unless otherwise specified, all names can include the following characters:
v The letters A through Z, and a through z, as defined in the basic (7-bit) ASCII
character set. When used in identifiers for objects created with SQL statements,
lowercase characters “a” through “z” are converted to uppercase unless they are
delimited with quotation marks ("), as shown in the example after this list.
v 0 through 9.
v ! % ( ) { } . - ^ ~ _ (underscore) @, #, $, and space.
v \ (backslash).
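The following statements illustrate the case-folding rule for identifiers; the service class names are hypothetical:
CREATE SERVICE CLASS marketing_sc     -- stored in the catalog as MARKETING_SC
CREATE SERVICE CLASS "marketing_sc"   -- delimited; stored as marketing_sc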
Restrictions
v Do not begin names with a number or with the underscore character.
v Do not use SQL reserved words to name tables, views, columns, indexes, or
authorization IDs.
v Use only the letters defined in the basic ASCII character set for directory and file
names. While your computer's operating system might support different code
pages, non-ASCII characters might not work reliably. Using non-ASCII
characters can be a particular problem in a distributed environment, where
different computers might be using different code pages.
v There are other special characters that might work, depending on your
operating system and where you are working with the DB2 database. However,
while they might work, there is no guarantee that they will work. Using these
other special characters when naming objects in your database is not
recommended.
v User and group names also must follow the rules imposed by specific operating
systems. For example, on Linux and UNIX platforms, characters for user names
and primary group names must be lowercase a through z, 0 through 9, and _
(underscore), for names not starting with 0 through 9.
v Lengths must be less than or equal to the lengths listed in “SQL and XML
limits” in the SQL Reference.
v Restrictions on the AUTHID identifier: Version 9.5, and later, of the DB2
database system allows you to have a 128-byte authorization ID, but when the
authorization ID is interpreted as an operating system user ID or group name,
the operating system naming restrictions apply (for example, Linux and UNIX
operating systems have a limitation of 8 characters and Windows operating
systems have a limitation of 30 characters for user IDs and group names).
Therefore, while you can grant a 128-byte authorization ID, it is not possible to
connect as a user that has that authorization ID. If you write your own security
plug-in, you can take full advantage of the extended sizes for the authorization ID.
You also must consider object naming rules, naming rules in an NLS environment,
and naming rules in a Unicode environment.
A role is a database object that groups together one or more privileges and can be
assigned to users, groups, PUBLIC, or other roles by using a GRANT statement, or
can be assigned to a trusted context by using a CREATE TRUSTED CONTEXT or
ALTER TRUSTED CONTEXT statement. A role can be specified for the
SESSION_USER ROLE connection attribute in a workload definition.
All DB2 privileges and authorities that can be granted within a database can be
granted to a role. For example, a role can be granted any of the following
authorities and privileges:
v DBADM, SECADM, DATAACCESS, ACCESSCTRL, SQLADM, WLMADM,
LOAD, and IMPLICIT_SCHEMA database authorities
v CONNECT, CREATETAB, CREATE_NOT_FENCED, BINDADD,
CREATE_EXTERNAL_ROUTINE, or QUIESCE_CONNECT database authorities
v Any database object privilege (including CONTROL)
A role does not have an owner. The security administrator can use the WITH
ADMIN OPTION clause of the GRANT statement to delegate management of the
role to another user, so that the other user can control the role membership.
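For example (the role, table, user, service class, and workload names below are hypothetical, and the service class is assumed to already exist), the following sequence defines a role, grants privileges and membership in it, and then references the role in a workload definition:
CREATE ROLE reports_role
GRANT SELECT ON sales.orders TO ROLE reports_role
GRANT ROLE reports_role TO USER rptuser1
CREATE WORKLOAD reports_wl SESSION_USER ROLE ('REPORTS_ROLE')
  SERVICE CLASS reports_sc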
Restrictions
The three-tiered application model extends the standard two-tiered client and
server model by placing a middle tier between the client application and the
database server. It has gained great popularity in recent years particularly with the
emergence of web-based technologies and the Java™ 2 Enterprise Edition (J2EE)
platform. An example of a software product that supports the three-tier application
model is IBM® WebSphere Application Server (WAS).
While the three-tiered application model has many benefits, having all interactions
with the database server (for example, a user request) occur under the middle tier's
authorization ID raises several security concerns, which can be summarized as
follows:
v Loss of user identity
Some enterprises prefer to know the identity of the actual user accessing the
database for access control purposes.
v Diminished user accountability
Accountability through auditing is a basic principle in database security. Not
knowing the user's identity makes it difficult to distinguish the transactions
performed by the middle tier for its own purpose from those performed by the
middle tier on behalf of a user.
v Over granting of privileges to the middle tier's authorization ID
The middle tier's authorization ID must have all the privileges necessary to
execute all the requests from all the users. This has the security issue of enabling
users who do not need access to certain information to obtain access anyway.
v Weakened security
In addition to the privilege issue raised in the previous point, the current
approach requires that the authorization ID used by the middle tier to connect
must be granted privileges on all resources that might be accessed by user
requests. If that middle-tier authorization ID is ever compromised, then all those
resources will be exposed.
v "Spill over" between users of the same connection
Changes by a previous user can affect the current user.
Clearly, there is a need for a mechanism whereby the actual user's identity and
database privileges are used for database requests performed by the middle tier on
behalf of that user. The most straightforward approach to achieving this goal
would be for the middle tier to establish a new connection using the user's ID and
password, and then direct the user's requests through that connection. Although
simple, this approach suffers from several drawbacks, which include the following:
v Inapplicability for certain middle tiers. Many middle-tier servers do not have
the user authentication credentials needed to establish a connection.
v Performance overhead. There is an obvious performance overhead associated
with creating a new physical connection and re-authenticating the user at the
database server.
v Maintenance overhead. In situations where you are not using a centralized
security set up or are not using single sign-on, there is maintenance overhead in
having two user definitions (one on the middle tier and one at the server). This
requires changing passwords at different places.
The trusted contexts capability addresses this problem. The security administrator
can create a trusted context object in the database that defines a trust relationship
between the database and the middle tier. The middle tier can then establish an
explicit trusted connection to the database, which gives the middle tier the ability
to switch the current user ID on the connection to a different user ID, with or
without authentication. In addition to solving the end-user identity assertion
problem, trusted contexts offer another advantage: the ability to control
when a privilege is made available to a database user. The lack of control over
when privileges are available to a user can weaken overall security.
Enhancing performance
When you use trusted connections, you can maximize performance because of the
following advantages:
v No new connection is established when the current user ID of the connection is
switched.
v If the trusted context definition does not require authentication of the user ID to
switch to, then the overhead associated with authenticating a new user at the
database server is not incurred.
Suppose that the security administrator creates the following trusted context object:
CREATE TRUSTED CONTEXT CTX1
BASED UPON CONNECTION USING SYSTEM AUTHID USER2
ATTRIBUTES (ADDRESS '192.0.2.1')
DEFAULT ROLE managerRole
ENABLE
If user user1 requests a trusted connection from IP address 192.0.2.1, the DB2
database system returns a warning (SQLSTATE 01679, SQLCODE +20360) to
indicate that a trusted connection could not be established, and that user user1
simply got a non-trusted connection. However, if user user2 requests a trusted
connection from IP address 192.0.2.1, the request is honored because the connection
attributes are satisfied by the trusted context CTX1. Now that user user2 has
established a trusted connection, he or she can acquire all the privileges and
authorities associated with the trusted context role managerRole. These privileges
and authorities may not be available to user user2 outside the scope of this trusted
connection.
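As a minimal sketch (not part of the preceding example), and assuming the ADD USE FOR clause of the ALTER TRUSTED CONTEXT statement, the security administrator could later allow the user ID on this trusted connection to be switched to user1 without authentication:
ALTER TRUSTED CONTEXT CTX1
  ADD USE FOR USER1 WITHOUT AUTHENTICATION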
Note: The DB2 Information Center topics are updated more frequently than either
the PDF or the hardcopy books. To get the most current information, install the
documentation updates as they become available, or refer to the DB2 Information
Center at ibm.com.
You can access additional DB2 technical information such as technotes, white
papers, and IBM Redbooks® publications online at ibm.com. Access the DB2
Information Management software library site at
http://www.ibm.com/software/data/sw-library/.
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for
how to improve the DB2 documentation, send an e-mail to [email protected].
The DB2 documentation team reads all of your feedback, but cannot respond to
you directly. Provide specific examples wherever possible so that we can better
understand your concerns. If you are providing feedback on a specific topic or
help file, include the topic title and URL.
Do not use this e-mail address to contact DB2 Customer Support. If you have a
DB2 technical issue that the documentation does not resolve, contact your local
IBM service center for assistance.
Although the tables identify books available in print, the books might not be
available in your country or region.
Note: The DB2 Information Center is updated more frequently than either the PDF
or the hard-copy books.
Table 231. DB2 technical information
Name | Form Number | Available in print | Last updated
Administrative API Reference | SC27-2435-02 | Yes | September, 2010
Administrative Routines and Views | SC27-2436-02 | No | September, 2010
Call Level Interface Guide and Reference, Volume 1 | SC27-2437-02 | Yes | September, 2010
Call Level Interface Guide and Reference, Volume 2 | SC27-2438-02 | Yes | September, 2010
Command Reference | SC27-2439-02 | Yes | September, 2010
Data Movement Utilities Guide and Reference | SC27-2440-00 | Yes | August, 2009
Data Recovery and High Availability Guide and Reference | SC27-2441-02 | Yes | September, 2010
Database Administration Concepts and Configuration Reference | SC27-2442-02 | Yes | September, 2010
Database Monitoring Guide and Reference | SC27-2458-02 | Yes | September, 2010
Database Security Guide | SC27-2443-01 | Yes | November, 2009
DB2 Text Search Guide | SC27-2459-02 | Yes | September, 2010
Developing ADO.NET and OLE DB Applications | SC27-2444-01 | Yes | November, 2009
Developing Embedded SQL Applications | SC27-2445-01 | Yes | November, 2009
Developing Java Applications | SC27-2446-02 | Yes | September, 2010
Developing Perl, PHP, Python, and Ruby on Rails Applications | SC27-2447-01 | No | September, 2010
Developing User-defined Routines (SQL and External) | SC27-2448-01 | Yes | November, 2009
Getting Started with Database Application Development | GI11-9410-01 | Yes | November, 2009
Getting Started with DB2 Installation and Administration on Linux and Windows | GI11-9411-00 | Yes | August, 2009
Printed versions of many of the DB2 books available on the DB2 PDF
Documentation DVD can be ordered for a fee from IBM. Depending on where you
are placing your order from, you may be able to order books online, from the IBM
Publications Center. If online ordering is not available in your country or region,
you can always order printed DB2 books from your local IBM representative. Note
that not all books on the DB2 PDF Documentation DVD are available in print.
Note: The most up-to-date and complete DB2 documentation is maintained in the
DB2 Information Center at http://publib.boulder.ibm.com/infocenter/db2luw/v9r7.
To start SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the
first two digits of the SQL state.
For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help
for the 08 class code.
For DB2 Version 9.7 topics, the DB2 Information Center URL is
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/.
For DB2 Version 9.5 topics, the DB2 Information Center URL is
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5.
For DB2 Version 9.1 topics, the DB2 Information Center URL is
http://publib.boulder.ibm.com/infocenter/db2luw/v9/.
For DB2 Version 8 topics, go to the DB2 Information Center URL at
http://publib.boulder.ibm.com/infocenter/db2luw/v8/.
Note: Adding a language does not guarantee that the computer has the
fonts required to display the topics in the preferred language.
– To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Refresh the page to display the DB2 Information Center in your preferred
language.
v To display topics in your preferred language in a Firefox or Mozilla browser:
1. Select the button in the Languages section of the Tools —> Options —>
Advanced dialog. The Languages panel is displayed in the Preferences
window.
2. Ensure your preferred language is specified as the first entry in the list of
languages.
– To add a new language to the list, click the Add... button to select a
language from the Add Languages window.
– To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Refresh the page to display the DB2 Information Center in your preferred
language.
On some browser and operating system combinations, you must also change the
regional settings of your operating system to the locale and language of your
choice.
A DB2 Version 9.7 Information Center must already be installed. For details, see
the “Installing the DB2 Information Center using the DB2 Setup wizard” topic in
Installing DB2 Servers. All prerequisites and restrictions that applied to installing
the Information Center also apply to updating the Information Center.
The DB2 Information Center restarts automatically. If updates were available, the
Information Center displays the new and updated topics. If Information Center
updates were not available, a message is added to the log. The log file is located
in the doc\eclipse\configuration directory. The log file name is a randomly generated
number. For example, 1239053440785.log.
Updating your locally-installed DB2 Information Center manually requires that you:
1. Stop the DB2 Information Center on your computer, and restart the Information
Center in stand-alone mode. Running the Information Center in stand-alone
mode prevents other users on your network from accessing the Information
Center, and allows you to apply updates. The Workstation version of the DB2
Information Center always runs in stand-alone mode.
2. Use the Update feature to see what updates are available. If there are updates
that you must install, you can use the Update feature to obtain and install them.
Note: On Windows 2008, Windows Vista (and higher), the commands listed later
in this section must be run as an administrator. To open a command prompt or
graphical tool with full administrator privileges, right-click the shortcut and then
select Run as administrator.
To update the DB2 Information Center installed on your computer or intranet server:
1. Stop the DB2 Information Center.
v On Windows, click Start → Control Panel → Administrative Tools → Services.
Then right-click DB2 Information Center service and select Stop.
v On Linux, enter the following command:
/etc/init.d/db2icdv97 stop
2. Start the Information Center in stand-alone mode.
v On Windows:
a. Open a command window.
b. Navigate to the path where the Information Center is installed. By
default, the DB2 Information Center is installed in the
Program_Files\IBM\DB2 Information Center\Version 9.7 directory,
where Program_Files represents the location of the Program Files
directory.
c. Navigate from the installation directory to the doc\bin directory.
d. Run the help_start.bat file:
help_start.bat
v On Linux:
a. Navigate to the path where the Information Center is installed. By
default, the DB2 Information Center is installed in the /opt/ibm/db2ic/V9.7
directory.
b. Navigate from the installation directory to the doc/bin directory.
c. Run the help_start script:
help_start
The system's default Web browser opens to display the stand-alone Information
Center.
3. Click the Update button. (JavaScript™ must be enabled in your browser.)
On the right panel of the Information Center, click Find Updates. A list of
updates for existing documentation displays.
4. To initiate the installation process, check the selections you want to install, then
click Install Updates.
5. After the installation process has completed, click Finish.
6. Stop the stand-alone Information Center:
v On Windows, navigate to the installation directory's doc\bin directory, and
run the help_end.bat file:
help_end.bat
Note: The help_end batch file contains the commands required to safely stop
the processes that were started with the help_start batch file. Do not use
Ctrl-C or any other method to stop help_start.bat.
Note: The help_end script contains the commands required to safely stop the
processes that were started with the help_start script. Do not use any other
method to stop the help_start script.
7. Restart the DB2 Information Center.
v On Windows, click Start → Control Panel → Administrative Tools → Services.
Then right-click DB2 Information Center service and select Start.
v On Linux, enter the following command:
/etc/init.d/db2icdv97 start
The updated DB2 Information Center displays the new and updated topics.
DB2 tutorials
The DB2 tutorials help you learn about various aspects of DB2 products. Lessons
provide step-by-step instructions.
You can view the XHTML version of the tutorial from the Information Center at
http://publib.boulder.ibm.com/infocenter/db2help/.
Some lessons use sample data or code. See the tutorial for a description of any
prerequisites for its specific tasks.
Terms and conditions
Personal use: You may reproduce these Publications for your personal,
noncommercial use provided that all proprietary notices are preserved. You may not
distribute, display or make derivative work of these Publications, or any portion
thereof, without the express consent of IBM.
Commercial use: You may reproduce, distribute and display these Publications
solely within your enterprise provided that all proprietary notices are preserved.
You may not make derivative works of these Publications, or reproduce, distribute
or display these Publications or any portion thereof outside your enterprise,
without the express consent of IBM.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the Publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information that has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at “Copyright and
trademark information” at www.ibm.com/legal/copytrade.shtml.
Index 437
F migrating
Query Patroller
file names to workload manager 197
general 413 monitor element
functions thresholds
table thresh_violations 379
WLM_GET_ACTIVITY_DETAILS 301 monitor elements
WLM_GET_QUEUE_STATS 308 act_exec_time 344
WLM_GET_SERVICE_CLASS_AGENTS_V97 311 act_remapped_in
WLM_GET_SERVICE_CLASS_WORKLOAD details 344
_OCCURRENCES_V97 319 act_remapped_out
WLM_GET_SERVICE_SUBCLASS_STATS_V97 322 details 345
WLM_GET_SERVICE_SUPERCLASS_STATS 330 activation time
WLM_GET_WORK_ACTION_SET_STATS 331 last_wlm_reset 365
WLM_GET_WORKLOAD_OCCURRENCE activities
_ACTIVITIES_V97 333 act_total 345
WLM_GET_WORKLOAD_STATS_V97 338 activity_collected 346
activity_id 346
activity_secondary_id 347
H activity_type 347
help coord_act_aborted_total 357
configuring language 425 coord_act_completed_total 357
SQL statements 425 coord_act_rejected_total 362
histogram templates parent_activity_id 367
altering 194, 202 agg_temp_tablespace_top 348
creating 194 CONCURRENTDBCOORDACTIVITIES threshold
dropping 195 concurrentdbcoordactivities_wl_was _threshold_id 355
histograms concurrentdbcoordactivities_wl_was
example 195 _threshold_queued 356
monitor elements concurrentdbcoordactivities_wl_was
histogram_type 364 _threshold_value 356
number_in_bin 367 concurrentdbcoordactivities_wl_was
top 384 _threshold_violated 356
overview 189 coord_act_est_cost_avg 358
coord_act_exec_time_avg 359
coord_act_interarrival_time_avg 359
I coord_act_lifetime_avg 360
coord_act_queue_time_avg 361
identifiers destination_service_class_id 364
monitor elements histograms
arm_correlator 348 histogram_type 364
bin_id 349 number_in_bin 367
db_work_action_set_id 363 top 384
db_work_class_id 364 identifiers
sc_work_action_set_id 374 arm_correlator 348
sc_work_class_id 374 bin_id 349
service_class_id 375 db_work_action_set_id 363
work_action_set_id 393 db_work_class_id 364
work_class_id 394 sc_work_action_set_id 374
in-service-class thresholds 117 sc_work_class_id 374
service_class_id 375
work_action_set_id 393
L work_class_id 394
Linux locks
workload management integration with DB2 workload uow_lock_wait_time 386
manager 222 log space
locks uow_log_space_used 386
monitor elements names
uow_lock_wait_time 386 service_subclass_name 376
logs service_superclass_name 376
monitor elements work_action_set_name 393
uow_log_space_used 386 work_class_name 394
num_remaps 366
partitions
M coord_partition_num 362
queries
metrics queue_assignments_total 368
DB2 workload manager objects 203
Query Patroller service subclasses (continued)
migrating to DB2 workload manager 198 creating 78
script 197 dropping 83
queues monitoring data 175
prefetch 75 service superclasses
altering 80
creating 78
R dropping 83
monitoring data 175
ranges
SET WORKLOAD command
monitor elements
assigning connection to default administration
bottom 349
workload 28
REMAP ACTIVITY action
details 397
defining 117
snapshot monitoring
sample scripts 121
supplementing table functions 204
remapping activities
SQL statements
details 127
help
sample scripts 121
displaying 425
revoking
monitor elements
USAGE privilege on workload 36
stmt_invocation_id 378
roles
sqleseti API
details 415
workload assignment 37
routines
SQLROWSREAD activity threshold
monitor elements
details 102
routine_id 370
SQLROWSREADINSC activity threshold 103
WLM_CANCEL_ACTIVITY example 281
SQLROWSRETURNED activity threshold 104
rows
SQLTEMPSPACE activity threshold
monitor elements
details 104
rows_fetched 370
statement invocation identifier monitor element 378
rows_modified 370
statistics
rows_returned 372
collection
rows_returned_top 373
workload management 200
DB2 workload manager objects 178
event monitor 169
S stored procedures
sample scripts WLM_CANCEL_ACTIVITY 177
QP to WLM migration 198 WLM_CAPTURE_ACTIVITY_IN_PROGRESS 177
scenarios WLM_COLLECT_STATS 177
cancelling WLM_SET_CLIENT_INFO 177
activities 283 SYSDEFAULTMAINTENANCECLASS service superclass
schemas overview 68
classification of CALL statement 50 SYSDEFAULTSYSTEMCLASS service superclass
sections overview 68
monitor elements SYSDEFAULTUSERCLASS service superclass
section_env 374 overview 68
security
trusted contexts 417
service classes
activity states 76
T
table functions
agent priority 74
aggregating data 167
altering
determining WLM threshold queue information
changes occur at statistics reset 202
example 168
procedure 80
example of using 162
analyzing system slowdown 89
monitoring at different levels
buffer pool priority 75
example 163
connection states 76
snapshot monitor 204
creating 78
WLM_COLLECT_STATS 202
default service subclasses 68
table spaces
default service superclasses 68
SQLTEMPSPACE threshold 104
dropping 83
terms and conditions
entities not tracked by 78
publications 430
examples 84, 89
threshold violations
mapping activities 70
email notifications 207
point-in-time statistics 166
threshold violations event monitor 169
prefetch priority 75
thresholds
service subclasses
action 91
altering 80
activity 98
WLM_GET_SERVICE_SUBCLASS_STATS_V97 table function workload management
details 322 examples
examples cancellation of all activities 286
aggregating data 167 disconnection of all applications 287
analyzing system slowdown 89, 280 threshold violations
obtaining point-in-time statistics 166 email notifications 207
WLM_GET_SERVICE_SUPERCLASS_STATS table workloads
function 330 altering 31
WLM_GET_WORK_ACTION_SET_STATS table function assignment
analyzing workloads (examples) 59 details 23
details 331 examples 37
WLM_GET_WORKLOAD_OCCURRENCE _ACTIVITIES_V97 connection assignment to the default administration
table function workload 28
description 333 creating 30
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES_V97 default 25
table function disabling 35
aggregating data (examples) 167 dropping 36
examples enabling 34
identifying long-running activities 281 evaluation order 23
WLM_GET_WORKLOAD_STATS_V97 table function 338 examples
WLM_SET_CLIENT_INFO procedure 341 analyzing system slowdown 280
WLMADM (workload administration) authority assignment when multiple workloads exist 43
details 3 assignment when workload attributes have multiple
work action sets values 46
altering 146 assignment when workload attributes have single
concurrency control 140 values 41
creating 144 monitor elements
disabling 146 wlo_completed_total 393
domain and permitted work actions 134 workload_id 394
dropping 147 workload_name 395
examples workload_occurrence_id 396
association with other objects 130 workload_occurrence_state 396
determining types of work being run 154 monitoring data 175
work action set and database threshold 152 overview 19
overview 132 permitting database access 33
work actions specifying thresholds 138 position in workload list 23
workload level preventing database access 33
concurrency control 140 USAGE privilege
work actions granting 35
altering 150 revoking 36
assigning to database activities 138 work action set comparison 142
association with other objects (example) 130
creating 147
disabling 152
dropping 152
thresholds 138
work action sets 134
work class sets
altering 58
association with other objects (example) 130
creating 58
dropping 58
managing DML activities (example) 60
overview 50
work class evaluation order 52
work classes
altering 57
assigning activities 53
creating 54
dropping 57
evaluation order 52
examples
association with other objects 130
defined with ALL keyword 61
overview 47
supported thresholds 53