OSM Performance Testing and Tuning Guidelines
Version 3.9
Contents
1. Overview .................................................................................................................................................................. 5
1.1 Introduction ...................................................................................................................................................... 5
1.2 Objective ........................................................................................................................................................... 5
1.3 Assumptions...................................................................................................................................................... 5
2. Understanding OSM Performance ............................................................................................................................. 6
2.1 Goals of OSM Performance Testing ................................................................................................................... 6
2.2 Measuring OSM Throughput ............................................................................................................................. 6
2.3 Factors Impacting OSM Performance................................................................................................................. 7
2.4 Planning for OSM Performance Testing ............................................................................................................. 8
3. Performance Testing Guidelines................................................................................................................................ 9
3.1 Preparing for the Performance Test ................................................................................................................... 9
3.1.1 Establish Objectives ................................................................................................................................... 9
3.1.2 Performance Test Environment Selection Guidelines ................................................................................. 9
3.1.3 Prepare the Performance Test Environment Line-up ................................................................................ 10
3.1.4 Establish Warm-up Criteria ...................................................................................................................... 10
3.2 Setting up the Performance Test Environment ................................................................................................ 10
3.2.1 Synchronize Time across Servers.............................................................................................................. 10
3.2.2 Database Size .......................................................................................................................................... 11
3.2.3 Test Client Setup for Load Generation...................................................................................................... 11
3.2.4 Emulator Setup ........................................................................................................................................ 11
3.2.5 Setup Monitoring Tools, Scripts, Desired Log Levels ................................................................................. 12
3.3 Running Tests and Analyzing Performance ...................................................................................................... 12
3.4 Single-Order Performance Testing ................................................................................................................... 14
4. Tuning Guidelines ................................................................................................................................................... 17
4.1 Storage and I/O System ................................................................................................................................... 17
4.2 Stack for OSM Installation ............................................................................................................................... 17
4.3 OSM Application Tier ....................................................................................................................................... 18
4.3.1 Server/Operating System ......................................................................................................................... 18
4.3.2 Domain .................................................................................................................................................... 19
4.3.3 WebLogic Server ...................................................................................................................................... 20
4.3.4 WebLogic Server JVM .............................................................................................................................. 21
4.3.5 Cluster ..................................................................................................................................................... 25
4.3.6 JMS Messaging ........................................................................................................................................ 25
4.3.7 JDBC ........................................................................................................................................................ 26
4.3.8 Coherence ............................................................................................................................................... 27
4.3.9 Work Managers ....................................................................................................................................... 27
4.4 Database Configuration ................................................................................................................................... 28
4.4.1 Tuning for Redo Logs ............................................................................................................................... 29
4.4.2 Database Schema Partitioning Configuration ........................................................................................... 30
4.4.3 Multi Data Source Configuration Using N-RAC Nodes ............................................................................... 30
4.4.4 Database Storage..................................................................................................................................... 31
4.4.5 Database Statistics Gathering .................................................................................................................. 31
4.5 Design Studio .................................................................................................................................................. 31
4.5.1 Cartridge Deployment Timeout................................................................................................................ 31
4.6 Use of OSS Configuration Compliance Verification Tool ................................................................................... 32
5. Performance Analysis Guidelines ............................................................................................................................ 33
5.1 Managed Server Monitoring ............................................................................................................................ 33
5.1.1 WebLogic Monitoring .............................................................................................................................. 33
5.1.2 Operating System Monitoring .................................................................................................................. 34
5.1.3 Heap Dump Analysis ................................................................................................................................ 35
5.1.4 Thread Dump Analysis ............................................................................................................................. 35
5.1.5 Garbage Collection Analysis ..................................................................................................................... 36
5.1.6 Class Loading Analysis .............................................................................................................................. 36
5.1.7 Coherence Datagram Test........................................................................................................................ 36
5.2 Database Monitoring ....................................................................................................................................... 36
6. Case Study: Performance Testing of an OSM Implementation ................................................................................. 39
6.1 Performance Test Environment Line-up........................................................................................................... 39
6.2 Solution Design Overview ................................................................................................................................ 39
6.3 Deployment Details ......................................................................................................................................... 40
6.4 Multi Data Source Configuration Using 4 RAC Nodes ....................................................................................... 41
6.5 Database Tuning for OSM ................................................................................................................................ 42
6.6 WebLogic Domain Changes and Tuning ........................................................................................................... 43
Appendix A: SQL Statements Used for Gathering Data from OSM ................................................................................ 44
Appendix B: Tools for Performance Testing, Tuning and Troubleshooting .................................................................... 46
B.1 OSW Black Box ................................................................................................................................................ 47
B.2 Remote Diagnostics Agent ............................................................................................................................... 47
Operating System RDA Report................................................................................................................................. 48
Oracle WebLogic RDA Report .................................................................................................................................. 49
Oracle Database RDA Report ................................................................................................................................... 49
Oracle RAC Cluster RDA Report ............................................................................................................................... 50
OSM RDA Report ..................................................................................................................................................... 50
B.3 GCViewer ........................................................................................................................................................ 51
B.4 Java VisualVM ................................................................................................................................................. 51
B.5 JMap ............................................................................................................................................................... 52
B.6 JStack .............................................................................................................................................................. 52
B.7 ThreadLogic ..................................................................................................................................................... 53
B.8 Oracle WebLogic Server Administration Console ............................................................................................. 53
System Status ......................................................................................................................................................... 54
Server Logs ............................................................................................................................................................. 54
JMS Queues ............................................................................................................................................................ 54
OSM Web Services .................................................................................................................................................. 55
B.9 Oracle Enterprise Manager .............................................................................................................................. 55
Performance ........................................................................................................................................................... 55
Top SQL Statements ................................................................................................................................................ 55
AWR Reports with ADDM Analysis .......................................................................................................................... 56
B.10 SoapUI............................................................................................................................................................. 56
B.11 OSS Product Compliance Tool .......................................................................................................................... 56
B.12 OSM Task Web Client ...................................................................................................................................... 57
Reporting Page ....................................................................................................................................................... 57
B.13 Design Studio .................................................................................................................................................. 57
B.14 Software Load Balancer ................................................................................................................................... 57
1. Overview
1.1 Introduction
Oracle Communications Order and Service Management (OSM) provides great flexibility and is used in many different solution spaces, offering support for several different topologies, which can create challenges in performance testing and tuning. Because each customer implementation is unique, there is generally not a one-size-fits-all configuration for OSM.
This document provides a set of guidelines, recommendations, and tools to reach target performance, stress, and stability requirements for a solution. For this reason, it is more of a guideline than a cookbook with exact measurements.
1.2 Objective
The objective of this document is to enable performance engineers to set up OSM for high performance, scalability,
and reliability. The high level objectives of this document are:
• Provide tips, tools, and techniques to identify and resolve performance bottlenecks
• Demonstrate the importance of the right hardware, operating system, and network configuration
• Discuss key tuning parameters for the JVM, Oracle WebLogic Server, and Oracle Database as they relate
to OSM
1.3 Assumptions
This document is based on the following assumptions:
1. The reader already has a basic understanding of OSM, WebLogic Server, the database, and UNIX-based operating systems.
2. Installation of all components of OSM is a well-understood process and therefore is not covered in this document.
3. Any specific examples of tuning parameters must be adapted to the target system, based on its capabilities and resource limits. Directly applying tuning parameters from one environment to another without testing might not result in the same performance.
2. Understanding OSM Performance
2.1 Goals of OSM Performance Testing
Similar to performance testing for any other enterprise software solution, the business goal is to ensure that the production deployment will be able to handle the expected load. There are typically two characteristics in this goal: order throughput (how many orders the system can process in a given period of time) and response time (how quickly individual transactions, such as order creation or task completion, are processed).
There is often an engineering trade-off between the two goals. For example, when maximum order throughput is
achieved, all transactions may not have the best response time. It is sometimes necessary to tune the OSM
configuration such that it responds faster, at the expense of order throughput – this is more applicable to real-time
or near real-time service fulfillment solutions such as for mobile services.
It is not uncommon to have the performance goal tied to a set of technical safety boundaries that the system must stay within for the sake of robustness, such as limits on hardware resource utilization (for example, CPU percentage or heap size) or the ability to process a certain number of orders and manual user interactions concurrently. There are other
technical, secondary goals to ensure robustness in stress scenarios such as handling a large burst of orders, or
handling outages of other systems, or failover and recovery of OSM after hardware failure. All these can be viewed
as operational goals that need to be achieved and verified in performance testing.
2.2 Measuring OSM Throughput
A task-transitions-per-second (TPS) based metric is employed because order fulfillment can be widely different among deployments. Orders for simpler products and services typically average a small number of tasks, and orders for complex products and services average a larger number of tasks. While the inevitable performance requirement from the business would typically be stated as a number of orders fulfilled per unit of time, using TPS to quantify performance provides the atomic denomination needed for a fair engineering comparison. The TPS metric also allows business-meaningful performance qualifications to be translated or projected, which is useful because the deployment will evolve in complexity during its software lifecycle.
While this will vary for each deployment, it is useful to consider the following rules of thumb, and adjust when
necessary for specific circumstances:
• Simple Orders, such as for the consumer mobile market: less than 10 tasks per order.
• Moderate Orders, such as for typical fixed line consumer market: approximately 25 tasks/order
• Complex Orders such as AIA Orders, used in business services for SOHO market: approximately 100
tasks/order
• Very Complex Orders, such as business services for the large enterprise market: approximately 1000
tasks/order
Based on the above, order throughput per second can be calculated using the simple formula:
order throughput (orders per second) = TPS / average number of tasks per order
From there, order throughput can be calculated as hourly (multiply by 3600 seconds/hour) or daily (multiply by 3600 seconds/hour and by the number of operational hours per day).
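As a purely illustrative worked example (the numbers below are assumptions, not benchmark figures), the conversion from TPS to order throughput can be scripted as follows:

# Illustrative projection from task transitions per second (TPS) to order throughput.
# All input values are assumptions for the sake of the example.
tps = 50.0                # measured task transitions per second
tasks_per_order = 25.0    # "moderate" orders, approximately 25 tasks per order
operational_hours = 16    # assumed operational hours per day

orders_per_second = tps / tasks_per_order              # 2.0 orders per second
orders_per_hour = orders_per_second * 3600             # 7,200 orders per hour
orders_per_day = orders_per_hour * operational_hours   # 115,200 orders per day

print(orders_per_second, orders_per_hour, orders_per_day)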
Another metric that can be a fairly atomic measurement of OSM performance is order lines per second. While the
metric can be skewed by a service’s specifics – such as the number of transitions required to fulfill the order lines – it
is a useful metric for comparison when baselines and performance testing are conducted against the same, or
slightly changed, set of cartridges.
OSM product benchmark reports, called Performance Summary Reports, are released for every major release of OSM. They are a useful reference for setting rough expectations as the performance testing is being planned. TPS is the most common performance metric reported in the OSM Performance Summary Reports.
2.3 Factors Impacting OSM Performance
OSM performance is influenced by three broad categories of factors:
• Hardware – OSM performance is bounded by the hardware that runs it. Hardware limits – when CPU, memory, or other resources are highly saturated – are essentially hard boundaries to performance improvements.
• Software – OSM heavily relies on its technology stack, and thus proper tuning of these elements,
including the operating system, JVM, database, and WebLogic is imperative for providing the
performance characteristics that are appropriate for running OSM.
• Solution – OSM cartridges provide the metadata instructions for the OSM server to fulfill orders as per
business requirements. The level of complexity defined in OSM cartridges heavily impacts how many
orders OSM can process in a given period of time.
Let’s have a closer look at the solution factors impacting OSM performance. Not only is the number of tasks
executed for the fulfillment of an order an important factor – which can vary from 5 tasks (high-volume wireless
orders processing) to over 1000 tasks (B2B enterprise orders for IP provisioning) – there are many other factors that
can influence the performance of OSM, including but not limited to:
• average percentage of incoming orders that are revision orders
• number of rule and delay tasks
This diverse nature in customer deployments poses a serious challenge in creating a unified Performance Testing
model.
2.4 Planning for OSM Performance Testing
Planning and executing OSM performance testing typically involves the following steps:
1. Identifying the solution architecture surrounding OSM, and identifying performance acceptance criteria
2. Securing hardware for performance testing
3. Setting up the performance test environment
4. Designing the performance test strategy
5. Designing the performance test cases based on the strategy
6. Securing or developing test harnesses, such as external systems' emulators, order creation scripts, system
monitoring and test data extraction scripts
7. Executing the test plan, analyzing the outcome, and reporting results
8. Improving and tuning the system based on the analysis
9. Repeating steps 7 and 8 until the desired performance goals are achieved
3. Performance Testing Guidelines
This section introduces general guidelines for conducting performance testing on OSM. The methodology involves
testing and tuning various layers until the desired performance goals are reached.
3.1 Preparing for the Performance Test
3.1.1 Establish Objectives
As part of the performance test planning, prepare to document the order complexity and other performance factors (as listed in section 2.3) involved in each test dataset. This will help you understand the performance outcome and communicate performance impacts to the broader team effectively.
Note:
Even if you cannot come up with explicit target performance requirements, it is still valuable to conduct
performance testing in order to obtain benchmark numbers that can be used as a baseline for comparisons with
future releases of the solution. Comparison for this release can also be compared against the product benchmark
numbers detailed in the Performance Summary Report.
3.1.2 Performance Test Environment Selection Guidelines
The actual selection of a performance test environment is based on the availability of environments and the timeline of the performance testing. When the performance test environment is smaller than production, a conservative
approach must be taken to extrapolate the results, considering that the results in production may be substantially
different than what is being observed in the test environment. It is dangerous to assume a simple, linear
extrapolation based on hardware. Often, usage and data contention bottlenecks do not manifest themselves until
the system is large enough.
Conversely, it is also possible that an undersized system may magnify issues that otherwise would not exist – for
instance, using an Oracle RAC database with a slow interconnect or slow storage.
As a side note, the performance test environment may also serve as a preproduction environment, used for tasks such as validating upgrade plans or new cartridges. Plan ahead for the availability of the environment. It is
recommended to keep the performance test environment available not only prior to the first go-live of the
deployment, but throughout the lifespan of the OSM solution, so that performance testing can be conducted on any
new enhancements, fixes, and workarounds, and any other changes introduced to the implementation.
3.1.3 Prepare the Performance Test Environment Line-up
As part of the performance test planning, think about what the performance test environment should look like.
Similar to the goal of hardware parity in performance test environment planning, the technology stack and solution
architecture should mimic the production environment as closely as possible, such as how RAC is set up or how shared storage is used.
The following software and files should be readied and the line-up information should be captured as part of the
performance test plan write-up:
1. Versions (including patches) of OSM, WebLogic Server, the JDK, the database, Oracle Communications Design
Studio, the OSM Design Studio plug-ins, and the operating system
3. Latest cartridges for performance and stress tests. If interactions with external systems are required, prepare to
stub them out with emulators in order to be able to pinpoint OSM issues in test environments.
4. Configuration of WebLogic Server (including all files in the <domain>/config folder), JVM heap, Coherence
configuration XML files, and the OSM oms-config.xml file.
5. OSM Database configuration including whether either RAC or partitioning (or both) are used, and if they are,
their configuration.
3.1.4 Establish Warm-up Criteria
Establish criteria for warming up the system before measurements are taken; for example, process a representative batch of orders first so that caches, compiled classes, and database statistics are primed. This ensures that these resources do not need to be loaded or initialized during the actual test run. Without a proper warm-up of the system, the lack of statistics that would otherwise be gathered during the warm-up period can heavily skew the results. In the database, performance issues arise when starting with an empty schema and no statistics. You should also ensure that cursors are invalidated when gathering statistics (for example, by gathering statistics with no_invalidate set to FALSE).
3.2 Setting up the Performance Test Environment
3.2.1 Synchronize Time across Servers
Note that the database server must be installed on a server that runs in a time zone that does not use daylight saving time. The WebLogic servers can run in any time zone as long as the database_timezone_offset parameter in the oms-config.xml file is adjusted accordingly.
3.2.2 Database Size
Because the size of the database has an impact on performance, it is recommended that testing occur against the
real world target size. This could occur via seeding, migration or execution of the tests themselves. Initial testing
against an empty database will only highlight the most serious problems. With an empty schema, database
performance problems that relate to inaccurate optimizer statistics gathering will never present themselves (until
you go into production!).
Oracle recommends that you back up the OSM schema before conducting a performance test iteration. After the test, the schema can be restored so that no order purging work is required. Alternatively, you can drop the entire partition(s) and rebuild the seed data, which can be faster than a backup-and-restore procedure.
3.2.3 Test Client Setup for Load Generation
Consider the following guidelines when setting up test clients for load generation:
1. It is best to run test clients on different hardware from where OSM is deployed, so that the test client does not consume vital resources of the test system.
2. The number of test client threads should be configurable and supported in the test client machine. This is
essential for load and scalability testing. The number of concurrent users that the system could support is
determined based on the number of test client threads. If high loading is required, you may need to use
multiple test client machines to generate sufficiently high load.
3. Exclude start-up time and setup times from time measurements. The start-up time refers to the actual time
at which the test client is invoked.
4. The test client should be able to provide vital statistics on performance data such as average, maximum, standard deviation, and 90th-percentile numbers (a small sketch for computing these follows this list).
5. The test client should be able to run long-running tests with steady load.
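If the chosen test client does not report these statistics out of the box, they can be computed from the raw response times it records. The following is a minimal Python sketch; the sample response times are made up for illustration:

import statistics

def summarize(response_times_ms):
    # Summarize raw response times (in milliseconds) collected by a test client.
    ordered = sorted(response_times_ms)
    p90_index = int(round(0.9 * (len(ordered) - 1)))
    return {
        "count": len(ordered),
        "avg_ms": statistics.mean(ordered),
        "max_ms": ordered[-1],
        "stddev_ms": statistics.pstdev(ordered),
        "p90_ms": ordered[p90_index],
    }

# Example with made-up response times for a web service call
print(summarize([120, 135, 150, 180, 210, 95, 160, 175, 140, 900]))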
For third-party tools like JMeter, there are additional plug-ins available to add monitoring around server and database calls. Useful plug-ins include:
http://code.google.com/p/jmeter-plugins/wiki/PerfMon
http://code.google.com/p/jmeter-plugins/wiki/JMeterPluginsCMD
To find the maximum order throughput of the OSM implementation, change the number of threads and test delay in
the test client to adjust load generation to match creation and completion rate of the OSM orders. Order creation
and completion rate can be monitored using the SQL described in the Appendix.
3.2.4 Emulator Setup
Performance testing often has to be conducted when an external interface, such as a billing or an inventory system, is not yet ready. This is where emulators are useful.
Based on need, you can create a simple message-driven bean (MDB) that reads a request from a queue, parses it, and responds based on a pre-configured mapping file.
Sample source code for an emulator of ASAP (asap_mdb_emulator.7z) is attached to this KM note.
Emulators should not impact performance testing by consuming OSM resources, so they should run on separate hardware from the OSM application server. Also, it is recommended to set up the same underlying messaging infrastructure that is in the production environment, such as any SAF queues that are necessary for JMS interactions when crossing WebLogic Server domain boundaries, even if an emulator would otherwise not require it. Such a setup mimics the real scenario as closely as possible.
3.2.5 Set Up Monitoring Tools, Scripts, and Desired Log Levels
Set up the monitoring tools, scripts, and log levels that will be used during the tests. Note that collecting diagnostic dumps (such as heap and thread dumps) has a moderate impact on performance. The procedures and tools to be used for monitoring are discussed in detail in chapter 5 of this document.
3.3 Running Tests and Analyzing Performance
Before starting a test run:
• Ensure that the system has been warmed up as described in section 3.1.4.
• Gather operating system information for the OSM application machines during testing.
• If the duration of the test is short (say, less than 30 minutes), perform a checkpoint in the database and flush the buffer cache before warm-up.
During the performance test run, as well as at the end, you can monitor and gather the following statistics to help analyze the results:
• Gather AWR, Automatic Database Diagnostic Monitor (ADDM), and Active Session History (ASH) reports from the database for the exact duration of the test.
• Gather garbage collection and server logs.
• Monitor WebLogic Server CPU, heap, and threads using tools like VisualVM, and JConsole.
• Monitor WebLogic Server activities – JMS queues, JDBC connections, the execute thread pool, and so on. The best tool for this is probably the WebLogic Scripting Tool (WLST); a sample monitoring sketch appears at the end of this section.
• Monitor network and storage for lower latency and consistent throughput based on the documented service
times for the hardware.
• Gather multiple thread dumps regularly, especially during the occurrence of any issues.
Chapter 5 of this document provides the above information in greater detail. Some of the tools and statistics gathering listed above can affect the performance of the system, so they should be used only for brief periods of time, when needed to troubleshoot the cause of an issue.
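The following is a minimal WLST (Jython) monitoring sketch along the lines of the bullets above. The administration URL, credentials, managed server name, JMS server name, and data source name are placeholders that must be adapted to your environment; it is a starting point, not a complete monitoring solution.

# Minimal WLST monitoring sketch (run with: java weblogic.WLST monitor.py).
# All names and credentials below are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
domainRuntime()

# JMS destination depths on one managed server
cd('/ServerRuntimes/osm_ms1/JMSRuntime/osm_ms1.jms/JMSServers/osm_jms_server_ms1')
for dest in cmo.getDestinations():
    print(dest.getName() + ' current=' + str(dest.getMessagesCurrentCount())
          + ' pending=' + str(dest.getMessagesPendingCount()))

# Execute thread pool health
cd('/ServerRuntimes/osm_ms1/ThreadPoolRuntime/ThreadPoolRuntime')
print('hogging threads: ' + str(cmo.getHoggingThreadCount()))
print('pending requests: ' + str(cmo.getPendingUserRequestCount()))

# JDBC connection pool usage for the OSM data source
cd('/ServerRuntimes/osm_ms1/JDBCServiceRuntime/osm_ms1/JDBCDataSourceRuntimeMBeans/osm_pool')
print('active connections: ' + str(cmo.getActiveConnectionsCurrentCount()))
print('connection high-water mark: ' + str(cmo.getActiveConnectionsHighCount()))

disconnect()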
The task transitions per second (TPS) metric described earlier in this document can be calculated using a SQL query similar to the following (see the footnote below about which task types are counted):
select
count(*),
min(timestamp_in),
max(timestamp_in),
24*3600*(max(timestamp_in)-min(timestamp_in)) duration,
to_char((count(*)/(24*3600*(max(timestamp_in)-min(timestamp_in)))), '999.99') tps
from
om_hist$order_header
where
hist_order_state_id = 4 and
task_type in ('A','M','C') and
timestamp_in between '12-apr-2012 12:30:00' and '12-apr-2012 19:30:00';
Note: Change the timestamp values to match the period for which you want to measure throughput.
Footnote: Prior to OSM 7.2, the TPS figure in performance reports included all types of tasks. From OSM 7.2 on, the measurement has been improved to only measure the "real" tasks in orchestration scenarios, which have a large number of sub-process tasks as wrappers. Specifically, the current measurement of TPS includes automation, manual, and creation tasks, but excludes join and rule tasks. If the legacy method of measurement is desired, it can be achieved by removing the "task_type in ('A','M','C')" line from the sample SQL statement.
To gather other types of statistics in the database in a performance test, see SQL Statements Used for Gathering Data
from OSM.
3.4 Single-Order Performance Testing
Note that single-order performance testing is subject to the same procedure as throughput testing, such as system warm-up.
Note:
To handle manual tasks for performance tests, either design the flow to bypass manual tasks in the cartridge
(similar to breakpoint bypass in O2A cartridges) or use the OSM XML API to complete manual tasks.
Query Entry:
Query Result:
On the Query Result page, select the "Process History" option. Click the ellipsis (…) button to the left of the desired
order. Then click the Detailed Table button.
Detailed Table:
On the resulting page, above the table with the order details there is a link labeled "Open new window to print/save". Click that link. Save the output in HTML format by right-clicking in the new window and selecting "Save As".
The data shows the time taken to complete each task and the time taken to transition between tasks, giving an overall view of single-order performance. This data is a critical first step in identifying performance bottlenecks.
4. Tuning Guidelines
Performance testing and tuning is an iterative process with test-tune-analyze-retest cycles. OSM is a multi-tier, Java
EE application that leverages a number of technologies to implement a scalable, cluster-enabled architecture. The
performance optimizations need to be applied to various tiers of the system. These are summarized in the following
sections.
OSM is considered an Online Transaction Processing (OLTP) application, and as such, tuning efforts should be based
on this assumption. Everything from disks to operating system tuning must be optimized for high-volume
processing.
Note:
Oracle recommends that you use a configuration management tool or process (such as CVS) that enforces strong change control, so that changes are easily reversible and traceable. Changing too many parameters in a single test iteration makes the results difficult to analyze and interpret.
4.1 Storage and I/O System
• The OSM domain for a clustered setup should be hosted on high-performance shared storage.
• For the database, plan to have at least 500GB of free space for the OSM schema. This may need to be higher
depending on the design and the order volume. This must be included in the planning during the hardware sizing
exercise.
• Use RAID 1+0 (Normal redundancy) backed shared storage for installing Fusion Middleware (FMW) packages,
creating the OSM domain, and deploying applications and server log files. See also I/O Tuning with Different
RAID Configurations, in KM note 30286.1.
• Similarly, reliable and high-performance storage of at least 5 GB should also be set aside for JMS persistent file stores. This may need to be higher depending on the design and the order volume.
• The usual latency requirement from storage is “service time of less than 5ms.” The Input/Output per second
(IOPS) requirement must be decided during the hardware sizing exercise.
4.2 Stack for OSM Installation
• Ensure that the 64-bit version of the JDK is installed and available. Also make sure that a 64-bit JDK is used to run the installers. Use of a 32-bit JDK for a production system of any size, or for performance testing, is strongly discouraged, as it will not support adequate heap sizes for performance and scalability growth demands.
• Plan to use 64-bit versions of the FMW components. When installing and creating FMW Home, (e.g., WebLogic
Server, Oracle Application Development Framework (ADF)), use 64-bit JDK to run the installers. This will ensure
that FMW installations will be set up to support 64-bit, including native library support (these are also referred
to as WebLogic Server Native IO Performance Packs).
• Make sure the JDK and FMW installations are updated to recommended patch levels.
4.3 OSM Application Tier
4.3.1 Server/Operating System
Incorrect – or less than ideal – configuration of the underlying operating system can have a significant impact on overall OSM performance.
The following system and user-limit configurations should be set (a small verification sketch follows this list):
• Core file size: Limit the core file size to zero. Because of the large heaps involved, you do not want tens of gigabytes of memory to be dumped to disk if the JVM crashes. Set a positive value only if a crash has been encountered and needs to be debugged should it happen again.
• Number of open files: OSM typically references and loads a large number of internal as well as third-party JAR archives. Additionally, each of the applications opens and maintains several configuration files, log files, and numerous network sockets and JDBC connections. All of these activities consume a large number of open file handles, both during startup and during potential (re)deployments of the applications involved. Thus, it is a good idea to increase the open file descriptor limit for the OSM user.
• Number of user processes: increase for the same reasons as the parameter above.
• Socket buffers: To help minimize packet loss for the Coherence cluster, the operating system socket buffers need to be large enough to handle the traffic. At least 2 MB is recommended. See the Coherence tuning section of the Oracle Coherence Developer's Guide on OTN for details.
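Before a test run, the effective limits for the OSM user can be verified with a small script. The following Python sketch uses the standard resource module; the expected values shown are illustrative assumptions and should be replaced with the values chosen during your sizing exercise.

import resource

# Illustrative expected values; replace with your own sizing decisions.
checks = [
    ("core file size", resource.RLIMIT_CORE, 0),       # 0 disables core dumps
    ("open files", resource.RLIMIT_NOFILE, 65536),     # generous open file limit
    ("user processes", resource.RLIMIT_NPROC, 65536),  # generous process limit
]

for name, limit, expected in checks:
    soft, hard = resource.getrlimit(limit)
    print("%-15s soft=%s hard=%s (expected about %s)" % (name, soft, hard, expected))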
Note:
With Oracle Engineered Systems such as the SPARC SuperCluster or Exalogic, most of the operating-system-level tuning has already been done by default.
4.3.2 Domain
Optimizations for creating WebLogic domains for OSM include the following:
• Create the Domain in Production Mode. This provides some optimization for production environments by
default. You can verify that production mode is enabled for all the servers in a domain using the WebLogic
Server Administration Console, by navigating to Home, Domain, Configuration, General and confirm that
Production Mode is selected. You can also specify production mode in the command line used to start WebLogic
using the -Dweblogic.ProductionModeEnabled=true argument.
• Transaction Timeouts: In OSM, global transactions are leveraged to coordinate transactions spanning multiple transaction sources such as JMS and JDBC. Since multiple tiers are involved in the transactions, it is important to ensure the timeouts are set appropriately for correct rollback handling.
• The Java EE Transaction Manager's global transaction timeout – which is the longest time that a transaction can be active – is determined by the Java Transaction API (JTA) timeout. To avoid premature rollbacks, the default JTA timeout of 30 seconds should be increased. For OSM, a value of 2400 seconds should avoid problems, but you may want to determine the optimal level for your system based on performance and stress testing.
o The JTA timeout can be set using the WebLogic Server Administration Console; navigate to Home,
Domain, Configuration, JTA and set Timeout Seconds.
• If you are using Exalogic, you should enable the following optimizations for improved thread management and
request processing and reduced lock contention.
4.3.3 WebLogic Server
Best practices and optimizations for creating and configuring Managed Servers for OSM include the following:
• 64-Bit flags: Ensure you are using the 64-bit JDK. On Windows and Linux, the JDK installer has separate 32-bit
and 64-bit installation packages. If both packages are installed on a system, select the 64-bit JDK by adding the
appropriate "bin" directory to your path. If the JDK installation includes both 32-bit and 64-bit implementations
and you want the 64-bit runtime (e.g. on Solaris), pass in the ‘-d64’ option when starting each WebLogic server.
This ensures that available native libraries or performance packs are used at start up. (Note: the Java
implementations on Linux accept the -d64 option as well, to be consistent with its Solaris implementation).
• Logging:
o Consider limiting the number of log files, the maximum log file size, and the rotation policy. For example, the number of log files can be limited to 10, and log files can be rotated at startup to ensure that a fresh log file is available. (A WLST sketch covering these logging settings appears at the end of this section.)
§ However, for initial test phases, log files may be valuable for troubleshooting, and thus the log size and rotation policy should be relaxed according to debugging needs.
o Consider turning off broadcast of log messages to the domain. WebLogic Server instances send
messages to a number of destinations including the local server log file and the centralized domain
log. To avoid the performance impact associated with message broadcasts to the domain log, it is
recommended that you disable the domain log broadcaster. For each managed server, navigate to
Home, Environment, Servers, <server name>, Logging, expand Advanced and set Domain log
broadcaster, Severity level to Off. Please see note in Domain configuration best practices. For
details, also see the WebLogic documentation:
http://docs.oracle.com/cd/E23943_01/web.1111/e13739/logging_services.htm#i1173689.
• JDBC Logging:
o Disable JDBC logging in production systems. JDBC logging has a substantial impact on performance. This can be done using the WebLogic Server Administration Console by navigating to Home, Environment, Servers, <server name>, Debug, weblogic, jdbc, or via the command line used to start WebLogic, using the -Dweblogic.debug.DebugJDBCSQL=false argument.
• WebLogic Networking
o Enable Native I/O: WebLogic uses software modules called multiplexers (muxers) to read incoming
requests on the server and incoming responses on the client. To maximize network socket
performance, make sure that native-platform-optimized-muxers are used. In the WebLogic Server
Administration Console, navigate to Environment, Servers, Configuration, Tuning and confirm that
Enable Native IO is selected. Always make sure this box is checked for each server in production.
• Stuck Thread Max Time
o WebLogic considers a thread to be stuck when the thread takes more than a specified amount of
time to process a single request. As the default value of 600 seconds is most likely too high, it is
recommended that you set this value to an optimal level based on performance and stress testing.
Ensure that you define the WebLogic Server listen address if the computer uses multiple IP addresses. Otherwise, the server binds to all available IP addresses, which unnecessarily slows down server startup.
A WebLogic server should be bound to a fully qualified hostname rather than to an IP address. This ensures that SSL server-to-server communication works correctly without requiring hostname verification. It also enables administrators to change IP addresses without the need to reconfigure WebLogic.
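The logging recommendations above can also be applied with a script instead of the console. The following is a minimal WLST (Jython) sketch; the administration URL, credentials, and server names are placeholders, and the values should be relaxed during early test phases as noted above.

# Minimal WLST sketch: tighten log rotation and stop broadcasting to the domain log.
# Connection details and server names are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
for name in ['osm_ms1', 'osm_ms2']:
    cd('/Servers/' + name + '/Log/' + name)
    cmo.setFileCount(10)                        # keep at most 10 rotated log files
    cmo.setRotateLogOnStartup(true)             # start each test run with a fresh log
    cmo.setDomainLogBroadcastSeverity('Off')    # disable domain log broadcasting
save()
activate()
disconnect()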
Tuning Question | For Information, See
How big should the JDBC connection pool be? | Tune Pool Sizes
How should you use JDBC caches? | Use the Prepared Statement Cache
What optimizations are there for transactional database applications? | Use Logging Last Resource Optimization
How many connections should WebLogic Server accept? | Tune Connection Backlog Buffering
What is the optimal size of the WebLogic Server network layer? | Tune the Chunk Size
What type of Entity Bean cache should be used? | Use Optimistic or Read-only Concurrency
How should you avoid serialization when one EJB calls another? | Use Local Interfaces
How should you load related beans using a single SQL statement? | Use eager-relationship-caching
4.3.4 WebLogic Server JVM
An extensive treatise on the interplay of the many factors that influence JVM tuning is out of scope for this paper. But in general, optimal configuration of Java HotSpot revolves around how the JVM:
• Sizes and manages heap space between old and new objects
The JVM dynamically allocates memory to Java objects from the heap. The size of the heap has a significant impact on the frequency and duration of garbage collection and, as a result, on overall system throughput. Tuning the heap size involves finding the correct balance between garbage collection time and the amount of memory required to store the Java objects created by your application, including objects created by the underlying platform. To this end, you should specify the same value for the minimum and maximum sizes of the various memory categories, to avoid having the JVM frequently grow and shrink the heap. These values are configured using the -Xms<size> and -Xmx<size> HotSpot parameters. This follows the standard recommendation from JVM performance tuning to keep heap management simple and efficient. The following additional options should be used:
• -server: Always specify and use the Server version of the HotSpot VM. The Server VM is optimized for better workload performance for long-running processes with large heaps and long-lived network connections. The JVM will maximize the compilation of hot methods into assembly code for subsequent reuse. While this will cause the JVM to perform more slowly at first, performance will improve measurably over time for long-running processes. This HotSpot parameter is particularly important for commands used to start managed WebLogic Server instances.
§ Note that the -server parameter is used by default with the 64-bit version of HotSpot. However, this is subject to change in a future release.
• -Xms<hh>g -Xmx<hh>g: Specifies the same size <hh> for the total JVM object heap, as described above. This helps performance by avoiding constant resizing of the heap, which is costly.
• -XX:PermSize=<pp>m -XX:MaxPermSize=<pp>m: Specifies the same size <pp> for the JVM permanent generation area. This area is primarily used for keeping loaded classes. OSM loads a very large number of classes and also makes heavy use of dynamic class generation for JSP and ADF. The default value for 64-bit JVMs is 256m, and this has been found to be inadequate for Managed Servers hosting the OSM application. Without the higher (512m) value, Managed Servers may experience JVM thread hang-ups. This is optional for Administration and Proxy Servers.
• -XX:+UseCompressedOops: An "oop", or ordinary object pointer in Java Hotspot parlance, is a managed pointer
to an object. An oop is normally the same size as a native machine pointer, which means 64 bits on an LP64
system.
On an ILP32 system, maximum heap size is somewhat less than 4 gigabytes, which is insufficient for many
applications.
On an LP64 system, the heap used by a given program might have to be around 1.5 times larger than when it is
run on an ILP32 system. This requirement is due to the expanded size of managed pointers.
Memory is inexpensive, but these days bandwidth and cache are in short supply, so significantly increasing the
size of the heap and only getting just over the 4 gigabyte limit is undesirable.
Compressed oops is supported and enabled by default in Java SE 1.6.0u23 and later (Oracle Java) and 1.6.0.10
and later (HP Java). In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't
specified and for values of -Xmx less than 32 gigabytes.
For JDK 6 before the 1.6.0u23 (Oracle Java) and 1.6.0.10 (HP Java) release, use the -XX:+UseCompressedOops
argument with the java command to enable the feature.
Refer to My Oracle Support (MOS) note for more details: Java HotSpot Virtual Machine Performance
Enhancements and Need Performance/Load Test Results For OSM 7.2 [ID 1552842.1].
Use this option with 64-bit JVMs whose heap size is less than 32 GB. It does not hurt to specify this option with heap sizes of 32 GB or larger; a warning is printed and the option is ignored. This setting provides performance close to that of a 32-bit JVM, with its 32-bit pointer mechanics, while retaining the benefit of 64-bit heap sizes.
For many deployment scenarios, the use of parallel algorithms is the best option for OSM given that throughput is
often the most important consideration. For HotSpot, this is achieved through the use of the -XX:+UseParallelGC
argument. Presently, multi-core systems are widely available and form the basis of most production level
deployments. This setting controls the number of GC threads which work in parallel to reduce GC pause times.
Specify and tune number of ParallelGCThreads, when deploying the solution to multi-core systems. The maximum
effective value for this parameter is equal to number of processor cores on the system. This maximum threshold
needs to be adjusted downward if there are additional JVM instances running on the same system.
By default, on multi-core (CPU) systems, when the -XX:+UseParallelGC throughput collector is specified, the JVM will allocate as many GC threads as there are CPUs. However, the JVM's determination of the number of CPUs is not always appropriate: for instance, the number of GC threads needs to be reduced if there are multiple JVM instances running on the same system, and whether those JVMs are running something other than OSM is also a significant factor. Hence, the recommendation is to tune this value explicitly. The GC threads should not be set to more than the total number of hardware threads available to the OSM application; otherwise, garbage collection may intrude on the performance of other applications as a result of a large number of garbage collection threads executing at the same time. In other words, set the JVM option -XX:ParallelGCThreads=n, where the recommended number of GC threads (n) ≤ (number of CPUs * number of hardware threads per core)/(number of JVMs on the same machine). For example, on a server with 16 cores, 2 hardware threads per core, and two JVM instances, n should be at most (16 * 2)/2 = 16. The tuning process should start with a smaller number than this maximum. For instance, setting the maximum can be very dangerous on large SPARC T-series servers, with core counts reaching 128 for a T5-8. On such very large machines, a value of 32 is a realistic maximum rather than the available hardware thread count.
-XX:+ParallelRefProcEnabled: Turn on this setting by passing this argument. This option reduces the reference
collection pause time, as this can take place in parallel, efficiently leveraging available multiple cores.
You may need to use the concurrent mark sweep collector (specified using the -XX:+UseConcMarkSweepGC argument instead of the -XX:+UseParallelGC argument) if there are long GC pauses that cause the OSM Coherence cluster to become unstable. This might be the case if your deployment is dealing mostly with large orders. One of the stability concerns is due to Coherence: OSM uses Coherence as a simple local cache, for the invocation service, and very sparingly as a read-mainly distributed cache, and long GC pauses can cause the Coherence cluster, and hence OSM, to become unstable. In these scenarios, it is recommended to use concurrent mark sweep instead of the parallel collector to minimize long GC pauses.
Use the -XX:+DisableExplicitGC argument to disable explicit garbage collection triggered by application calls to
System.gc(). Otherwise, garbage collection might end up being called too often. A better option is to let the JVM
perform garbage collection when necessary.
Given the wide variety of OSM deployment and usage scenarios, optimal values should be determined by analyzing
garbage collection efficiency at different settings. It is recommended that you enable verbose garbage collection to
facilitate garbage collection tuning. Verbose garbage collection generates logs detailing the timing and duration of
garbage collection activities. This option is set using the -verbose:gc Hotspot argument. The -Xloggc:logfilename
argument redirects the associated logs to a file. The -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps, and -XX:+PrintTenuringDistribution arguments should also be specified to facilitate analysis and tuning.
You can visualize data produced by verbose garbage collection using the GCViewer tool (see Appendix B for an
instructional summary). Pay particular attention to garbage collection duration and frequency. While keeping the number of garbage collection threads and the garbage collection frequency at a reasonable level, consider adding threads or reducing heap size – or both – if garbage collection takes more than a few seconds. If you reduce heap size, you may also consider deploying additional WebLogic instances on the same server.
Given that verbose garbage collection is relatively lightweight, this is the preferred approach for production systems (a small sketch for summarizing GC pause times from these logs appears below).
A more involved approach is the live profiling of running code. The VisualVM tool can be used when a more detailed
analysis is required. See Appendix B for an instructional summary of VisualVM.
For more information on garbage collection tuning, see the following information on the Oracle Technology Network
(OTN) Web site:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
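For quick checks without a GUI, the pause times recorded in a verbose GC log can be summarized with a small script. The following Python sketch assumes the common HotSpot format in which each collection record includes an elapsed time such as "0.0106300 secs"; adjust the pattern to your actual log format.

import re
import sys

# Matches elapsed-time entries such as "0.0106300 secs" in -verbose:gc output.
PAUSE = re.compile(r"(\d+\.\d+) secs")

def summarize_gc_log(path):
    pauses = []
    with open(path) as log:
        for line in log:
            match = PAUSE.search(line)   # first elapsed time on the line
            if match:
                pauses.append(float(match.group(1)))
    if not pauses:
        print("no GC records found")
        return
    print("collections : %d" % len(pauses))
    print("total pause : %.2f s" % sum(pauses))
    print("max pause   : %.3f s" % max(pauses))
    print("avg pause   : %.3f s" % (sum(pauses) / len(pauses)))

if __name__ == "__main__":
    summarize_gc_log(sys.argv[1])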
A quick summary of the initial WebLogic Server JVM arguments is listed below:
Configuration Item | Value
-server | enabled
-Xms<nn>g -Xmx<nn>g | 24
-XX:PermSize=<nn>m | 512
-XX:+UseCompressedOops | enabled
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path> | enabled
-XX:+UseParallelGC | enabled
-XX:+UseParallelOldGC | enabled
-XX:+ParallelRefProcEnabled | enabled
-verbose:gc | enabled
4.3.5 Cluster
• Messaging Mode: In general, use multicast messaging mode when setting up OSM. This is based on WebLogic Server's recommendation and the OSM Installation Guide. The Installation Guides for OSM 7.2.2 and later include detailed guidance on using multicast or unicast in WebLogic; see the OSM 7.2.4 Installation Guide on OTN for more information on this subject.
• Load Balancing: Set up clusters to use “Round-Robin” algorithm.
4.3.6 JMS Messaging
Best practices for configuring JMS messaging for OSM include the following:
• JMS Message Persistence needs to use JMS persistent file store (default option in OSM Installer). File stores
generally offer better throughput than a JDBC store. For optimal - and transactionally safe - performance, it is
recommended that you use Direct-Write-With-Cache file stores if this option is supported in your environment.
The following are the best practices for setting up a file store (a WLST sketch of this configuration appears at the end of this section):
o Provision a custom file store, following WebLogic Server’s documented procedures. The physical
location of the file store should be on high-performance shared storage. When using the Direct-Write-
With-Cache policy, the cache directory should exist on a RAM disk. Using shared storage provides higher
availability.
o Switch all Default Persistent Store, JMS Server and SAF Agents in each Managed Server to use the
appropriate custom file stores.
o For more information on WebLogic file store configuration, see the OTN Web site:
http://docs.oracle.com/cd/E23943_01/web.1111/e13701/store.htm
• In a clustered deployment, WebLogic uses load balancing to distribute the workload across cluster instances. For
OSM, the load balancing policy for distributed JMS queues should be set to Random. For more information on
WebLogic JMS Random distribution, see the OTN Web site:
http://docs.oracle.com/cd/E23943_01/web.1111/e13738/advance_config.htm#JMSAD210
• OSM uses messaging bridges with Asynchronous Mode Enabled and Exactly-once quality of service. For this
type of messaging bridge, throughput can usually be improved by increasing the Batch Size attribute as this
reduces the number of transaction commits.
• WebLogic supports the Store-and-Forward (SAF) service for reliable delivery of messages between distributed
applications running on different WebLogic Server instances. It is recommended that you set the Conversation
Idle Time Maximum on SAF agents to a non-zero, positive value to allow messages to be forwarded to other
active members when the original target is down or unavailable. For more information on the WebLogic SAF
service, see the OTN Web site: http://docs.oracle.com/cd/E14571_01/web.1111/e13742/overview.htm
• The default JMS time-to-deliver parameter has a relatively high value, in order to avoid potential "automation context not found" exceptions that can occur, depending on the solution. This does not affect throughput, but it does affect order completion time. If fast order completion time is paramount, a lower time-to-deliver setting should be attempted.
In the case study described in Chapter 6, the file store was set up to use a high-performance shared storage
mechanism, using a Network-attached storage array.
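A custom file store of this kind can also be provisioned with WLST rather than through the console. The following is a minimal WLST (Jython) sketch; the store name, managed server name, and directory paths are placeholders, and the configuration should be validated against your WebLogic Server version.

# Minimal WLST sketch: custom file store with the Direct-Write-With-Cache policy.
# Names, paths, and connection details are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
store = cmo.createFileStore('osmFileStore_ms1')
store.setDirectory('/shared/osm/stores/ms1')            # high-performance shared storage
store.setSynchronousWritePolicy('Direct-Write-With-Cache')
store.setCacheDirectory('/dev/shm/osm_store_cache')     # cache directory on a RAM disk
store.addTarget(getMBean('/Servers/osm_ms1'))
save()
activate()
disconnect()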
4.3.7 JDBC
Optimizations for configuring various JDBC data sources are listed below:
• Initial Capacity: For RAC, this value has to be 0. Otherwise the managed server will not start if a RAC node is
down. For non-RAC, setting this value equal to Max Capacity is recommended for production deployments to
avoid shrinking and growing of the JDBC pool size. Note that the Initial Capacity value will impact available
resources on both WebLogic Server and the database server by consuming additional resources at startup and
will also lengthen the server startup time.
• Maximum Capacity: This needs to be set to the highest safely sustainable value that can be supported by both
WebLogic Server (additional memory and processing) and the database server (session resources and
concurrency). Because JDBC connection pools are deployed uniformly across the cluster, note that the Maximum
Capacity setting applies to each WebLogic Server node individually, not to the whole cluster. Hence, the
Maximum Capacity should be carefully selected such that ((<number of nodes in WebLogic Server cluster> X
Maximum Capacity) <= number of peak, concurrent and safely sustainable sessions to the database server).
This is a key parameter and will require iterative tuning based on scenario and workload. One approach is to
set it to a high value for peak load tests, monitor what percentage of it has been used, and then adjust the
Maximum Capacity to at least that high-water mark (a WLST sketch for reading the pool high-water mark follows
this list).
• Statement Cache Size: The prepared statement cache size is used to keep a cache of compiled SQL statements in
memory, thus improving the performance by avoiding repeated processing in WebLogic Server and the
database. Set this to the number of unique prepared and callable statements, which may be parsed and handled
by each tuned (heavily used) data source. For lightly used data sources, the default value of ‘10’ should be
sufficient.
Note that each JDBC connection in the connection pool creates its own prepared statement cache. Hence, any
tuning of this parameter needs to consider the additional memory consumption demand caused by ({steady size
of Connection Pool} X {Prepared Statement Cache Size}). If the result of this equation is too high, it may cause
“Out of Memory” exceptions on the WebLogic server and may disable the connection pool altogether, rendering
the server useless. Hence, tuning of Statement Cache Size is achieved by an iterative process, influenced by
the factors of the scenario, workload, and steady-state size of the connection pool for the given data source. If
the “Parse CPU to Parse Elapsd %” efficiency metric in the AWR is close to 100 then the statement cache size is
adequate. However, this efficiency metric also depends on the session_cached_cursors db initialization
parameter. Past experience has shown that there is no observable benefit of increasing the statement cache
size beyond 40.
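As a complement to the Maximum Capacity and Statement Cache Size guidance above, the following sketch shows one
way to read the JDBC pool high-water mark with WLST so that Maximum Capacity can be adjusted to observed usage.
The administration URL, credentials, managed server name, and data source name are assumptions; substitute your
own values.
# Run from a shell where the WebLogic environment has been sourced,
# for example: . $DOMAIN_HOME/bin/setDomainEnv.sh
java weblogic.WLST <<'EOF'
connect('weblogic', 'welcome1', 't3://adminhost:7001')   # hypothetical credentials and URL
domainRuntime()
# Hypothetical managed server and data source names
cd('ServerRuntimes/osm_ms01/JDBCServiceRuntime/osm_ms01/JDBCDataSourceRuntimeMBeans/osm_pool')
print 'Current capacity             :', cmo.getCurrCapacity()
print 'Active connections (current) :', cmo.getActiveConnectionsCurrentCount()
print 'Active connections (high)    :', cmo.getActiveConnectionsHighCount()
print 'Waiting for connection (high):', cmo.getWaitingForConnectionHighCount()
disconnect()
EOF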
For Engineered Systems with InfiniBand, use the SDP protocol over InfiniBand. It enables multiple
performance enhancements in areas such as input/output, thread management, and request-handling efficiency. Typical
steps to enable the SDP protocol include:
• Make sure a physical InfiniBand connection exists and is operational between the WebLogic Server host and the
database host
• Set up an SDP listener on the InfiniBand network
• In the JDBC URL, replace the TCP protocol with the SDP protocol.
• Change the port number in the JDBC URL to match the SDP listener’s port, if necessary.
• Manually add the following arguments to the startWebLogic.sh (or startManagedServer_XX.sh) script:
o -Djava.net.preferIPv4Stack=true
o -Doracle.net.SDP=true.
SDP does not support Oracle Single Client Access Name (SCAN) addresses, but it supports VIP addresses over
InfiniBand. Please see the Oracle documentation for more instructions on configuring Engineered Systems with SDP.
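For illustration only, a JDBC URL and server start arguments after switching to SDP might look like the following
sketch. The host name, port, and service name are assumptions; use the values of your own SDP listener, and keep
the URL on a single line in the actual data source configuration.
# JDBC URL using the SDP protocol (hypothetical host, port and service name;
# wrapped here for readability only)
# jdbc:oracle:thin:@(DESCRIPTION=
#   (ADDRESS=(PROTOCOL=SDP)(HOST=db-ib-vip.example.com)(PORT=1522))
#   (CONNECT_DATA=(SERVICE_NAME=osmdb)))

# Arguments added manually to startWebLogic.sh (or startManagedServer_XX.sh)
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true -Doracle.net.SDP=true"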
4.3.8 Coherence
OSM uses Coherence for several purposes. In general, OSM uses it as a simple local cache, for the invocation service,
and very sparingly as a read-mainly distributed cache. For better reliability, especially when OSM nodes reside on
different systems, nodes in an OSM cluster must be set up with Coherence Well Known Address (WKA) (a point-to-
point unicast) configuration. Each OSM node will then have its own unique Coherence WKA configuration, in which
addresses of all other nodes must also be listed.
The Installation Guides for OSM 7.2.2 and later include Coherence configuration file parameters for securing
managed server IP addresses with a list of authorized hosts. Information on this subject can be found in the OSM
7.2.4 Installation Guide here on OTN.
For details on Coherence tuning refer to the Oracle Coherence Developer’s Guide here on OTN.
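Purely as an illustration of the WKA concept (not the installer-generated OSM configuration), the sketch below
writes a Coherence well-known-addresses override for a two-node cluster and shows the related start arguments.
The file location, addresses, and ports are assumptions, and each node would point tangosol.coherence.localhost
and tangosol.coherence.localport at its own entry; follow the OSM Installation Guide for the supported procedure.
# Hypothetical override file listing both OSM nodes as well-known addresses
cat > /u01/oracle/osm/tangosol-coherence-override.xml <<'EOF'
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.0.2.11</address>
          <port>17001</port>
        </socket-address>
        <socket-address id="2">
          <address>192.0.2.12</address>
          <port>17001</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
EOF
# Reference the override and this node's own address in the server start arguments, for example:
# -Dtangosol.coherence.override=/u01/oracle/osm/tangosol-coherence-override.xml
# -Dtangosol.coherence.localhost=192.0.2.11 -Dtangosol.coherence.localport=17001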
OSM GA releases define the following WebLogic execute queues:
• oms.automation: used to process work performed by automated tasks. Usually these have the highest number
of threads and a normal priority.
• oms.xml: used to process requests coming in from external clients for the OMS XML API. Usually these have a
low number of threads (< 20) and a normal priority.
• oms.web: used to process requests from manual users using the OSM Task Client. Usually these have a low
number of threads (< 20) and a higher priority than the other execute queues (6).
These OSM GA releases do not define explicit execute queues for the following services:
• Order Management Web Client
• OSM WebService API (HTTP or JMS)
Because of this, the above services are mapped to the default execute queue for the WebLogic server.
The use of execute queues means that more tuning is needed, and scenarios have been encountered in
performance testing where the internal WebLogic queues were starved. To avoid this scenario, execute
queues should be changed to work managers.
The configuration steps to change from execute queues to work managers can be found in KM note 1613129.1 on
My Oracle Support.
Properly tuned work managers are an effective way to defend against system overload. By limiting the threads
configured in OSM’s work managers, you can ensure that your system does not accept more load than you have
tested for. In an overloaded system, minor problems can spiral to become serious system stability problems very
quickly.
Through performance testing, you can push the system past its breaking point in a controlled environment so that
you know where that breaking point is. You can subsequently set the maximum number of threads available in
OSM’s work managers below the thread counts observed in those test runs, to protect the system from going beyond
that point. Assuming that the hardware has been dimensioned to handle peak load with a safety factor below the
observed breaking load, there should still be enough threads in the work managers to handle peak loads.
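The authoritative configuration steps are in KM note 1613129.1. Purely as an illustration of the concept, the
WLST sketch below creates a max threads constraint and ties it to a work manager; the names, thread count,
target, URL, and credentials are assumptions.
java weblogic.WLST <<'EOF'
connect('weblogic', 'welcome1', 't3://adminhost:7001')     # hypothetical credentials and URL
edit(); startEdit()
cd('/SelfTuning/osm_domain')                               # hypothetical domain name
mtc = create('osmAutomationMaxThreads', 'MaxThreadsConstraint')
mtc.setCount(25)                                           # thread ceiling established by testing
mtc.addTarget(getMBean('/Clusters/osm_cluster'))           # hypothetical cluster name
wm = create('osmAutomationWorkManager', 'WorkManager')
wm.setMaxThreadsConstraint(mtc)
wm.addTarget(getMBean('/Clusters/osm_cluster'))
save(); activate()
disconnect()
EOF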
Note: filesystemio_options is not applicable if ASM is used.
The following table lists database initialization parameter values used for the SPARC SuperCluster (SSC) and
Exadata environments:
Parameter Name               SPARC SuperCluster (SSC)    Exadata
pga_aggregate_target         0                           8589934592
processes                    5000                        5000
sessions                     7680                        7536
sga_max_size                 77846282240                 34359738368
sga_target                   77846282240                 34359738368
undo_retention               1800                        1800
deferred_segment_creation    true                        true
Please note that the commit_wait value is not changed from its default in either case. For SSC, set sga_max_size
equal to sga_target to avoid a Solaris kernel panic.
Implementing huge pages is one option to increase database performance, as it reduces the overhead of operating
system page-state maintenance and increases the translation lookaside buffer (TLB) hit ratio. More details about
large pages are in the Oracle Database documentation on OTN here. The value for huge pages can be
calculated from the script in KM note 401749.1 on My Oracle Support.
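The supported calculation is the script in KM note 401749.1. As a rough back-of-the-envelope check only, the
number of huge pages is approximately the combined SGA size divided by the huge page size, as in this sketch
(the SGA value shown is an assumption):
# Rough estimate only - use the KM note 401749.1 script for the real value
SGA_BYTES=34359738368                                    # assumed sga_max_size (32 GB)
HUGEPAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo "vm.nr_hugepages approx.: $(( SGA_BYTES / 1024 / HUGEPAGE_KB + 2 ))"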
Enhancements in OSM as well as best practice guidelines have been made to allow for collecting OSM database
statistics more accurately. Please see KM note 1662447.1 on My Oracle Support for details.
First, the RAID configuration for the redo logs should be specifically considered. See I/O Tuning with Different
RAID Configurations (KM note Doc ID 30286.1 on My Oracle Support).
Second, check the "log switches (derived)" statistic in the AWR report. A rough guide is to switch log files at most
once every 20 minutes. If the log file switch frequency is much higher than this, the logs are undersized. Increase the
log file size and/or number of redo groups. Note that checkpoint frequency is affected by several factors, including
log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the
FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle Database automatically
tries to checkpoint as frequently as necessary. The optimal size can be obtained by querying the
OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. If FAST_START_MTTR_TARGET is not set,
OPTIMAL_LOGFILE_SIZE is not set either. For more information see
http://docs.oracle.com/cd/E11882_01/server.112/e41573/build_db.htm#PFGRF94147.
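For example, the current redo log sizes and the optimal size suggested by the instance (populated only when
FAST_START_MTTR_TARGET is set) can be checked with a query such as the following:
sqlplus -s / as sysdba <<'EOF'
-- Current redo log groups and their sizes in MB
SELECT group#, members, bytes/1024/1024 AS size_mb, status FROM v$log;
-- Suggested redo log file size in MB (NULL if FAST_START_MTTR_TARGET is not set)
SELECT optimal_logfile_size FROM v$instance_recovery;
EOF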
If and only if the above remedies do not work, try to renice LGWR to run at a higher priority, or run LGWR in the
RT class by adding LGWR to the parameter: _high_priority_processes='VKTM|LGWR'.
Warning:
_high_priority_processes should be changed only in consultation with database support. For example, more
processes may need to be added, such as PMON. And if the database is RAC, LMS should be added to this
parameter. Be prepared to test this change thoroughly.
Finally, as a last resort, set the _log_parallelism_max hidden parameter, again after consultation with database
support. See About _log_parallelism_dynamic and _log_parallelism_max (Doc ID 457966.1).
Note:
As of OSM 7.2.x, the OSM installer sets up only two active Oracle RAC nodes by default. Additional nodes can be
added based on the details provided in this section.
The steps required to manually create and configure additional data sources for an Oracle RAC instance, as well as
information about adding the data source to multi-data source can be found in the OSM Installation Guide here. A
sample configuration based on these procedures is described in this document’s case study in section 6.4.
Tip:
As thoroughly explained in the OSM product documentation, the multi data source algorithm must be Failover in
all cases. A common mistake in active-active RAC deployments is to use the Load-Balancing algorithm.
4.4.4 Database Storage
Following are the best practices used for OSM database storage setup:
• Automatic Storage Management (ASM) should be used for managing data disk storage.
• For storage reliability, Normal (two-way mirrored) Redundancy should be used. Ensure that the tablespace,
datafiles and redo logs are placed on this storage.
• Redo logs are very sensitive to storage response time and should be placed on storage with service time less
than 5ms.
• Specify large redo log files to avoid frequent redo log switching. Also, we recommend redo logs be mirrored for
redundancy.
• The requirements on latency and IOPS must be finalized during hardware sizing.
Tablespace configuration:
• Automatic Segment Space Management (ASSM) should be used for each tablespace.
• When disk space permits, leverage BIGFILE for tablespace creation. This simplifies management of tablespace
allocation with a single datafile for the tablespace (see the example after this list).
• If there is a need to use SMALLFILE tablespace, then plan for a fairly large number of datafiles to be created for
the OSM schema. Plan a naming convention for the datafiles and plan the storage location with future growth
requirements in mind.
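As an illustration of the BIGFILE and ASSM bullets above, a tablespace could be created as follows; the
tablespace name, ASM disk group, and sizes are assumptions and must come from your own sizing exercise.
sqlplus / as sysdba <<'EOF'
-- Hypothetical names and sizes; SEGMENT SPACE MANAGEMENT AUTO enables ASSM
CREATE BIGFILE TABLESPACE osm_data
  DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
EOF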
By default, automatic statistics collection is enabled, which collects statistics during a predefined maintenance
window. The KM note 1662447.1 (Best Practices for Managing Optimizer Statistics for OSM) on My Oracle Support
provides a detailed explanation and recommendations on how best to configure the database to collect optimizer
statistics efficiently.
Not having correct database statistics is a common cause of database performance problems in OSM. With respect
to performance testing, this highlights the value of having a realistic amount of in-progress and completed orders in
the system in the testing environment, so that these database performance problems can be identified before the
deployment goes into production.
Configuring an environment properly is not an easy task. Official product documentation must be read thoroughly
and steps must be followed carefully. Even so, steps may be missed, or may not be applicable to certain environments.
There are also Knowledge Management notes in Oracle Support and product patches to be considered. If any one of
the steps is not performed in your configuration, problems may occur that can take a long time to diagnose.
The Oracle Communications OSS Product Compliance Tool is designed to capture configuration snapshots and to
evaluate their compliance based on product documentation, product usage best practices and guidelines. While the
tool has proven tremendously useful in troubleshooting production environments, it also provides a first level of
information when troubleshooting a performance test environment that is not performing as expected.
See Appendix B.11 for more information about the OSS Product Compliance Tool.
5. Performance Analysis Guidelines
This section describes several tools specific to OSM as well as general tools to gather data when performance
related issues are encountered. While some of the content here can also be applied to monitoring or
troubleshooting a production environment, the focus in this document is on observing performance bottlenecks
during performance test iterations.
• Collecting GC logs for the scenario runs reveals helpful details regarding heap usage for the scenario in
question, for later analysis and fine tuning of the JVM heap and garbage collection (sample logging settings
follow this list). Use the GCViewer tool to determine the duration and frequency of garbage collection pauses
and the impact of garbage collection on application throughput.
• Monitoring the JVM can be accomplished via JConsole or JVisualVM, bundled with standard JDK distribution.
• Use the jstackscript.sh shell script to generate a series of thread dumps and run the results through the
ThreadLogic tool. Review ThreadLogic advisories and follow recommendations.
• In the WebLogic Administration Console, make sure all servers are marked as “OK” in the System Status pane. If
any servers are listed with any other status (for example, Overloaded or Critical), investigate immediately to
determine which particular sub-system of the server is under stress. Also check server logs for signs of trouble.
• Use the “Reporting” screen in the OSM Task Web client to monitor orders in progress and task statistics. The
Reporting Page has the following tabs:
o Pending Orders: make sure that when under load the number of pending orders is stable, so the system
is able to keep up with incoming orders.
o Order Volume: provides a table or chart displaying the number of orders received and completed for a
specified time period (such as hour, day, week, or month). Again, make sure that received and
completed bars are equal or reasonably close in size.
o Completed Order Statistics: make sure that orders’ Average and Highest time is within expected
standards of service.
o Completed Task Statistics: examine all tasks, make sure Average and Highest time for all tasks is within
expectations. If a task’s duration is too high, it might indicate delays either in external systems or inside
of OSM.
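As referenced in the first bullet of the list above, GC logging has to be switched on before the run. A typical
set of HotSpot (Sun JDK) options is sketched below; the log file path is an assumption, and the exact flags
depend on the JVM vendor and version in use.
# Appended to the managed server start arguments (HotSpot JVM assumed)
JAVA_OPTIONS="${JAVA_OPTIONS} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/u01/oracle/osm/logs/gc_osm_ms01.log"
# After the run, open the log in GCViewer (see Appendix B.3), for example:
# java -jar gcviewer-1.33.jar /u01/oracle/osm/logs/gc_osm_ms01.log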
In some instances, it may be necessary to enable the WebLogic Diagnostics Framework (WLDF) to gather further
information about the application and application server performance. Logging levels can be adjusted on both the
application and application server if necessary.
To analyze OSM order completion rates, other than using OSM Task Web client to view its reports, you can execute
the scripts in Appendix A: SQL Statements Used for Gathering Data from OSM as the OSM database user.
The following sections list the monitoring tools and methods that can be used during performance testing.
1. WLST
2. JMX
Note:
There is overhead in using these tools and they must be used on an as-needed basis. Start with WLST and go to
other tools only when needed.
For monitoring using JMX calls from the server, there are open-source projects available. They monitor WebLogic
Server activities - JMS queues, JDBC connections, execute thread pool, etc.
The system should be engineered so that CPU utilization does not exceed 90%. If minimizing latency becomes a
primary concern, the CPU utilization should be targeted a little lower. Additionally, you should account for peak
utilization as well as increased load on remaining servers in the event of a cluster node failure.
On memory utilization, OSM systems should be engineered so as to avoid paging and swapping. See WebLogic
Server JVM for details.
On network utilization, note that bandwidth and latency are the primary network characteristics that can impact
OSM performance and throughput. In particular, OSM is very sensitive to latency between Oracle database servers
and WebLogic servers. Monitor the amount of data being transferred across the network by checking the data
transferred between WebLogic servers, and between WebLogic servers and Oracle database servers. This amount
should not exceed your network bandwidth; otherwise, the network will become a bottleneck. You can check for
symptoms of insufficient bandwidth by monitoring for retransmission and duplicate packets.
You should also monitor the WebLogic servers to make sure that IO waits are low; anything above a few percentage
points of IO waits should be investigated.
Common operating system commands are available to measure and monitor CPU, memory, network and storage
performance and utilization on Linux and Solaris systems. This includes top on Linux systems or prstat on Solaris
operating systems, as well as sar, mpstat, iostat, netstat, vmstat and ping on both platforms.
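For example, the following commands (Linux assumed; the sampling interval and counts are arbitrary) capture CPU,
memory, I/O and network statistics for the duration of a one-hour test run:
# 120 samples at 30-second intervals = 1 hour of data per tool
vmstat 30 120 > vmstat_$(hostname).log &
mpstat -P ALL 30 120 > mpstat_$(hostname).log &
iostat -x 30 120 > iostat_$(hostname).log &
sar -n DEV 30 120 > sar_net_$(hostname).log &
# Check for retransmissions as a symptom of insufficient bandwidth
netstat -s | grep -i retrans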
OSWatcher (OSW) Black Box is the tool you can use to monitor operating-system-level statistics. See OSW Black Box
in Appendix B for a brief instructional summary.
The Remote Diagnostics Agent (RDA) tool can also be used to generate a report that includes a comprehensive suite
of operating system measurements. See Remote Diagnostics Agent in Appendix B for a brief instructional summary.
Now, let’s discuss the tools used for analysis after the performance test run.
Note:
To get a heap dump on a Linux or UNIX machine, use the following command, where pid is the process ID of the OSM
WebLogic server:
JDK_home/bin/jmap -dump:format=b,file=heap_dump.out pid
Refer to the jmap documentation for more details.
While a single thread dump can sometimes reveal the problem (e.g., a thread deadlock), a series of thread dumps
generated at regular intervals (e.g., 10 thread dumps at 30-second intervals) is often required to troubleshoot more
complex conditions (e.g., threads not progressing and waiting on other processes).
The Java VisualVM tool can be used to generate and analyze thread dumps. The JStack tool can be used to generate
thread dumps from a command line or using a shell script.
Once a series of thread dumps has been generated, it is recommended that you run the results through the
ThreadLogic tool. This should expedite the analysis of thread dumps given that ThreadLogic implements built-in
analysis algorithms and provides recommended solutions (advisories) for common problems. These advisories include
advanced analysis and recommendations for WebLogic applications. This tool builds upon the popular TDA (Thread
Dump Analyzer) by adding logic for common patterns found in application servers. ThreadLogic is able to parse Sun,
JRockit, and IBM thread dumps and provide advice based on predefined and externally defined patterns.
Note:
You can also trigger a thread dump by sending the SIGQUIT signal to the OSM WebLogic server process:
kill -3 <pid>. The thread dump is written to the server's standard output.
o It is possible that the workload is too high and may need to be reduced for the given database server
processing power.
o Examine if there was a recent change in relevant database server init.ora parameters that could be
affecting the query performance. This could be true especially if the query has not been problematic
before.
o Any disproportionate spikes in Session Activity in one of the areas of CPU, CPU Wait, User I/O, Commit,
Configuration or Concurrency must be examined further for root cause, and remedial action must be taken
in accordance with your local database administration practices.
o Create Automatic Workload Repository (AWR) snapshots immediately before and after the test. The
AWR report can be generated from those snapshots after the test. ASH reports should be generated as
needed during the test. ASH and AWR reports characterize the typical workloads. A database
administrator (DBA) well-versed in reading these reports would then be able to analyze the data for
performance bottlenecks. (A minimal example of creating snapshots and generating the report follows
later in this section.)
o Monitor the Cluster – Performance screen. If running Active-Active RAC, make sure that both nodes are
equally loaded and exhibit a similar usage pattern. Any imbalance must be investigated.
• Monitor the operating system on the database server, making sure the system has enough idle CPU and that I/O
waits are low (no more than a few percent of I/O wait is acceptable).
• The AWR is a background database monitoring process which collects information about overall database server
performance statistics in a top-down manner. Some of the key measures which will be collected in the AWR
Report at periodic intervals are shown below.
• Top 5 Waits: Shows database bottlenecks. Use this to guide further investigations.
• Table Scans: Table scans show potential problematic queries or poor execution plans.
• Instance Efficiency Percentages (Target 100%): Monitor the various efficiency percentages with a target of
reaching 100%.
• Load Profile: Observe read, write and transaction rates.
• Log switches (derived): A rough guide is to switch log files at most once every 20 minutes.
If bottlenecks are observed in the database, be sure to check if the optimizer statistics are being gathered
accurately. Please see KM note 1662447.1 on My Oracle Support on how to configure the database to gather
optimizer statistics efficiently.
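As referenced in the AWR bullet earlier in this section, snapshots can be created manually around the test window
and a report generated afterwards. A minimal sequence, run as a suitably privileged database user, is sketched
below; adapt it to your own snapshot and reporting practices.
sqlplus / as sysdba <<'EOF'
-- Immediately before the test
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
EOF
# ... run the performance test, then take a second snapshot the same way ...
# Afterwards, generate the AWR report interactively from SQL*Plus between the two snapshots:
#   @?/rdbms/admin/awrrpt.sql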
6. Case Study: Performance Testing of an OSM Implementation
Based on the performance testing, tuning and monitoring guidelines suggested in the earlier chapters, Exalogic-
Exadata scalability performance testing was conducted to understand OSM’s ability to scale on a larger/faster
infrastructure. As reference, the following illustrates sample test plan information that was captured, as well as the
analysis and tuning procedures that were conducted.
Components              Exalogic X2-2 – APP Quarter Rack x1 Server    Exadata X2-2 – DB Quarter Rack x ½ Server
Processor               X86 E5675 (3.09 GHz)                          X86 E5675 (3.09 GHz)
CPU Core/server         96                                            48
vCPU/server             192                                           96
Memory (GB)             1536                                          768
Storage                 Sun ZFS 40 TB                                 21.6 TB
Network & IO Options    1/10/40 Gbps                                  1/10/40 Gbps
Fiber                   40 Gbps                                       1/10/40 Gbps
Virtualization          NO                                            NO
OS Version              OEL 5.5                                       OEL 5.5
The OSM application solution was split into two domains to provision the five customer order types. Siebel sent
orders into the first domain and the orders were provisioned right up to the task that updated Siebel Order Header
Status to Completed. Before completing the Order and moving the order status to Completed in Siebel, the first
OSM domain would create orders referencing the same Siebel order number and send them to the second domain.
(Similar to OSM in the central order management role creating orders into OSM in the service order management
role).
The figure shows the relationship between orders and domains. Orders are submitted into the domain and
Fulfillment Functions are triggered based on the incoming Order Data that is mapped to the Product Specification.
Subsequently, each function will trigger Automated Tasks that in turn will communicate with external systems via
Oracle Service Bus or a WebLogic Integration wrapper using JMS SAF for the outbound messages and JMS Module
Queue for the inbound messages.
Order Details
At a high level, the tasks processing Siebel orders in OSM domain-1 are executed mostly in sequence and the orders
created by OSM for domain-2 have tasks that are executed in parallel. The statistics shown above can be gathered
using the monitoring SQL statements given in Appendix A: SQL Statements Used for Gathering Data from OSM.
When the testing started, we could easily achieve 30 orders/second (1,800 orders/minute or 108,000 orders/hour)
and identified that this version of OSM would not scale beyond 30 orders/second due to the following factors:
• Exadata compute node (active) was becoming CPU bound (reaching 80-90% CPU utilization).
• OSM 7.0.3 supports only Active-Passive Oracle RAC.
Owing to these reasons, the OSM version was upgraded to OSM 7.2 to take advantage of Active-Active Oracle RAC
support. The solution cartridges were migrated as-is.
The results show that the Exalogic Quarter Rack is able to achieve 55 orders/sec and is stable after several
optimizations in different areas. The hardware that was allocated for OSM was 4 compute nodes of Exadata and 4
compute nodes each for the two OSM domains (1 & 2).
With more hardware, we could scale even higher, as illustrated in the following diagram:
The following sections capture the tuning settings used, different issues faced, and the tools used to debug the
issues.
6.5 Database Tuning for OSM
The following table outlines the database changes that were done (in addition to the guidelines described in the
tuning chapter) to improve the efficiency and performance of the OSM application:
The heap arguments were tuned for Sun JDK. The changes are outlined below.
Note: -Duser.timezone=US/Eastern -Dweblogic.threadpool.MinPoolSize=100
Appendix A: SQL Statements Used for Gathering Data from OSM
SQL Description
BEGIN
om_part_config_pkg.init(true);
om_part_config_pkg.add_hist_partition;
om_part_config_pkg.exec;
-- This is just a precaution, since all indexes should remain usable
om_part_config_pkg.rebuild_ind_partitions;
om_part_config_pkg.exec;
END;
For 7.2.x:
om_part_maintain.add_partitions(1);
-- The above adds 1 partition. The int parameter specifies the
number of partitions to add.
Appendix B
B.1 OSW Black Box
Not everyone is skilled with reading and analyzing data from operating system performance utilities. In some
scenarios, the amount of data to analyze can be quite large. The interpretation of the results can also be made more
complex by the fact that the meaning of that data often varies across different platforms.
OSW Black Box is an Oracle utility that addresses these issues by providing platform-independent, real-time analysis
of a large volume of operating system data. OSW Black Box consists of two separate components: the data collector,
which gathers operating system statistics at regular intervals, and the OSW Black Box Analyzer, which analyzes the
collected archives.
For example, the following command executes OSW Black Box as a background process, collecting data at 30-second
intervals and keeping the last 48 hours of data in archive files:
nohup ./startOSWbb.sh 30 48 &
To stop data collection, run:
./stopOSWbb.sh
For easy upload to a Service Request (SR), a text-based OSW Black Box Analyzer report can be generated by running
the OSW Black Box Analyzer (oswbba) against the collected archive directory; see the analyzer user guide
referenced below for the exact command.
OSW Black Box Analyzer reports provide recommendations on how to resolve problems identified through
automated analysis. The System Status section provides a quick summary status for the CPU, MEMORY, I/O and NET
subsystems; status can be Critical, Warning, OK or Unknown.
For more information about OSW Black Box - including download, installation and execution instructions, see the
following document on My Oracle Support:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=301137.1
For the OSW Black Box Analyzer user guide, see the following document on My Oracle Support:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=461053.1
B.2 Remote Diagnostics Agent
The Remote Diagnostics Agent (RDA) collects system configuration and diagnostic data that you can attach to a
service request (SR). By reducing the need for additional questions about your technology stack and application
environment, RDA can help speed time-to-resolution when working through SRs with Oracle Support.
While you can use RDA only when a problem occurs, it is recommended that you practice using RDA beforehand.
RDA supports diagnostics and other proactive tools that could actually help prevent problems from occurring in the
first place. RDA reports include both system configuration and performance data. Taking regular RDA snapshots
could also help you spot configuration changes that might explain differences in system behavior or performance.
Moreover, being able to observe how your system performs under normal conditions may help troubleshoot a faulty
or degraded system.
Starting with 11gR8, RDA is installed with Oracle Fusion Middleware and is available from the
MW_home/oracle_common/rda directory. To facilitate the use of RDA, it is recommended that you define
the $RDA_HOME environment variable as MW_home/oracle_common/rda; you may also want to include this
directory in your path.
rda.sh -cv
If RDA is not installed in your environment, see the following document on My Oracle Support:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=314422.1
Use rda.sh -L modules to list available modules. Use rda.sh -L profiles to list available profiles.
Once RDA is set up, you can generate and package the operating system report using the command:
rda.sh -vCRP
The report is stored in the directory from which RDA was executed. With the command above, both an output
folder and an RDA_output_hostname.zip archive are created. The report can be viewed with your default web
browser by double clicking on the file named RDA__start.htm in the output directory. For troubleshooting tasks, the
Performance, System Performance Overview and Network, Network Performance hyperlinks are particularly
relevant.
Note that all the information from this operating system report is included in the report generated using the
Com_OSM profile. See the OSM RDA Report section for more details. While this standalone operating system report
can be used when you want to focus on the operating system itself, you do not have to attach this report to OSM
SRs.
To regenerate the WebLogic Server report, use:
$RDA_HOME/rda.sh -s wls_report
The report is stored in the directory from which RDA was executed. With the command above, both a wls_report
folder and an RDA_wls_report_hostname.zip archive are created.
To regenerate the database report, use:
$RDA_HOME/rda.sh -s db_report
The report is stored in the directory from which RDA was executed. With the command above, both a db_report
folder and an RDA_db_report_hostname.zip archive are created.
Oracle RAC Cluster RDA Report
When creating an OSM SR, you may be asked to attach an Oracle RAC Cluster RDA report. Before running RDA for
your RAC cluster, log into one of the cluster nodes and check that ssh can be used to reach remote nodes:
$RDA_HOME/rda.sh -T ssh
Then set up and run RDA using the Rac profile:
$RDA_HOME/rda.sh -p Rac
When prompted to connect as sysdba, you should override the default and answer Y.
To add remote nodes to the RDA setup, use the following command:
$RDA_HOME/rda.sh -v -e
Note that data collected from each RAC node will be packaged separately in its own zip file; the zip files are listed in
the RDA report under Remote Data Collection, Collected Data.
RDA reports can be particularly useful for understanding and troubleshooting RAC problems. The report includes all
relevant log files, init files and diagnostic files as well as your network and cluster configurations. As is generally the
case for RDA, it is a good idea to baseline your system by running this report when the system is operating normally.
This way, when a problem occurs, you can regenerate the report and compare it with the baseline results.
For more details on RAC Cluster RDA reports, see the following document on My Oracle Support:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=359395.1
OSM RDA Report
To regenerate the OSM RDA report, use:
$RDA_HOME/rda.sh -s osm_report
The report is stored in the directory from which RDA was executed. With the command above, both an osm_report
folder and an RDA_osm_report_hostname.zip archive are created.
You may want to specify a larger number of logs than the default to ensure that enough information is included in
the OSM RDA report.
For more information on using RDA with OSM and other Oracle Communications Applications, see the following
document on My Oracle Support:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=1057563.1
B.3 GCViewer
GCViewer is a free open source tool to visualize data produced by verbose garbage collection. For more information
on the original version of GCViewer, see the following Web site:
http://www.tagtraum.com/gcviewer.html
While the original version of GCViewer is no longer supported, a forked version is still being maintained. For more
information on the forked version of GCViewer, see the following Web site:
https://github.com/chewiebug/GCViewer
For example, version 1.33 of the forked GCViewer can be downloaded from the following Web site:
http://sourceforge.net/projects/gcviewer/files/gcviewer-1.33.jar/download
The latest version can be downloaded from the following Web site:
https://github.com/chewiebug/GCViewer/wiki/Changelog
B.4 Java VisualVM
Java VisualVM is part of the JDK. For example, on UNIX and Linux systems, VisualVM can be invoked using the
following command:
JDK_home/bin/jvisualvm
Select Monitor, and then Heap Dump to generate a heap dump. Select Threads, and then Thread Dump to generate
a thread dump. To save a heap dump or a thread dump to a specific location, right-click on its name and select
Save As.
For more information about VisualVM, see the following Web site:
http://visualvm.java.net/
B.5 JMap
JMap is a utility that can be used to generate a Java heap dump. JMap is part of the JDK. For example, on UNIX and
Linux systems, JMap can generate a heap dump to the file heap_dump.out using the following command:
JDK_home/bin/jmap -dump:format=b,file=heap_dump.out processid
Note that using JMap will pause the JVM. As a result, you should be very careful about running JMap on a
production OSM system as this will likely cause problems. See section 4.3.4 for details on using the -
XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=path parameters to generate a heap dump in the
specified directory when an out of memory condition is reached.
You should also monitor available space in the directory where you are saving the heap dump - make sure that
available space exceeds the JVM size.
B.6 JStack
JStack is a utility that can be used to generate a thread dump. JStack is part of the JDK. For example, on UNIX and
Linux systems, JStack can be invoked using the following command:
JDK_home/bin/jstack -l processid
You should monitor available space in the directory where you are saving thread dumps.
The advantage of using JStack instead of VisualVM to generate thread dumps is that JStack can be scripted. For
example, the following shell script will generate count thread dumps every delay seconds.
#!/bin/bash
if [ $# -eq 0 ]; then
echo >&2 "Usage: jstackSeries <pid> [<count> [<delay>]]"
echo >&2 " Defaults: count = 180, delay = 60 (seconds)"
exit 1
fi
pid=$1 # required
count=${2:-180} # defaults to 180 times
delay=${3:-60} # defaults to 60 seconds
# Take one thread dump per iteration, then wait before taking the next one
while [ $count -gt 0 ]; do
jstack -l $pid > jstack.$pid.$(date +%Y%m%d%H%M%S).txt
sleep $delay
count=$((count - 1))
done
For example, this can be used to save three hours' worth of thread dumps (180 dumps at 60-second intervals) to
files with:
./jstackSeries.sh pid 180 60
B.7 ThreadLogic
ThreadLogic is an open source visual Thread Dump Analyzer developed by the Oracle FMW Architects Team. It is
based on a fork of the TDA open source tool with new capabilities to analyze and provide advice based on a set of
extensible advisories. ThreadLogic also provides a thorough and in-depth analysis of WebLogic Server thread dumps.
Moreover, ThreadLogic can merge and analyze multiple thread dumps and provide enhanced reporting that lets the
user see at a glance whether threads are progressing across thread dumps.
For more information about ThreadLogic, and to download the software, see the following Web site:
http://java.net/projects/threadlogic
One or more thread dump files can then be opened using File, Open. To merge thread dumps, select the thread
dumps to be merged, right-click and select Diff Selection.
The following Web site provides a useful overview of ThreadLogic by a member of the Oracle FMW Architects Team:
http://www.ateam-oracle.com/analyzing-thread-dumps-in-middleware-part-4-2/
System Status
Navigate to the Home screen and check the System Status, Health of Running Servers. Make sure that all servers
are marked as OK. If a server is listed with an Overloaded, Critical or Failed status, investigate immediately to
determine what particular sub-system is under stress.
You can access the performance monitor by navigating to Home, Environment, Servers, server name, Monitoring.
Select the Health tab to view the health status for all OSM related sub-systems. If the status is not OK, review the
reason and, if required, review the server logs for more information.
Select the Performance tab to view JVM memory utilization statistics for the server. If the memory usage statistics
are high, you may need to allocate more memory to the Java runtime by increasing the -Xms and -Xmx parameter
values.
Select the Threads tab to monitor thread activity for the server. Important columns to monitor are Queue Length
and Pending User Request Count. A count of 0 is optimal, meaning no user requests are stuck or waiting to be
processed. If any of the counts are unusually high in the pool, go to the second table to troubleshoot individual
threads.
Select the Workload tab to monitor the Work Managers configured for the server. If the Pending Requests count is
not 0, you should review server logs for more information. A thread dump analysis may also be required.
Select the JDBC tab to monitor the database pool connections configured for the server. If the Active Connections
Average Count is half that of the Active Connections High Count, you may need to increase the number of
connections. See Managing Database Connections in the OSM Administrator's guide for more details.
Select the JTA tab to monitor transaction activity on the server. Note that, for OSM, it is normal for Rolled Back
statistics to not be 0 in the summary; there are cases when OSM intentionally triggers rollbacks due to contention
on order data access.
Server Logs
Navigate to Home, Diagnostics, Log Files, ServerLog, View to view server logs.
JMS Queues
If messages sent to a JMS queue are not being processed, or if the incoming message rate is higher than the rate at
which messages are processed, JMS queues can get backlogged. To monitor JMS queues, navigate to Home,
Messaging, JMS Servers, omsjmsservername, Monitoring, Active Destinations. A list of active destinations targeted
to the server is displayed.
The default view of the table does not contain the Consumers Current column. Oracle recommends that you
customize the table using the Customize this table link to include this column. The Consumers Current column
shows the current number of listeners on the destination. If a destination does not have any listeners, messages will
not be received.
The Messages Current column shows the current number of unprocessed messages in the JMS destination. A large
and growing number indicates that messages are not being processed - or not processed fast enough - or that
messages are being processed but errors are occurring and messages are being put back on the destination. Pay
special attention to ws_request and oms_event queues.
There are two versions of EM that you can use to manage Oracle 11g databases:
• EM Database Control
• EM Grid Control
EM Database Control is installed with the Oracle database. It can manage a single database, including RAC
databases.
EM Grid Control is a separately licensed product that can manage multiple databases and offers enhanced RAC
support. Note that EM Grid Control can also be used to manage WebLogic Server and Coherence.
This document focuses on features available with EM Database Control with separately licensed Diagnostics and
Tuning Packs.
Performance
The Performance tab provides a graphical view of database performance and throughput over time.
Unusual spikes in session activity in one or more areas - including CPU Used, User I/O, Commit, Configuration and
Concurrency - may indicate the need for further investigation.
If running in an Active-Active RAC configuration, verify that nodes are equally loaded and exhibit a similar usage
pattern. Imbalances should be investigated.
The Top Activity screen displays the Top SQL statements and the associated Top Sessions. If the OSM system is
performing poorly, SQL statements that consume a disproportionately large amount of CPU may need to be
investigated further. A properly tuned OSM schema usually has insert, update and delete statements on top. Any
select statement in the top 3 would be unusual and may indicate a sub-optimal SQL execution plan; run the SQL
Tuning Advisor on problematic SQL statements and review recommendations.
It is a good idea to become familiar with this page when OSM operates normally. This way, if OSM system
performance degrades, you can check this screen and look for unusual spikes or differences in usage patterns. When
a problem occurs, determine if there was a recent change in relevant database server parameters that could be
affecting the query performance. This could be true especially if the query has not been problematic in the past.
B.10 SoapUI
You can use tools like SoapUI to submit an XML order to a run-time OSM environment. This can be useful as a way to
confirm OSM's ability to receive and respond to order requests. For that purpose, you will probably want to submit
test orders associated with a test cartridge.
When submitting sample orders to run-time environments, the root level of the sample order XML document must
be either a CreateOrder or CreateOrderBySpec request.
For more information about submitting orders with SoapUI, see the OSM Developer's guide.
For more information about SoapUI, and to download the software, see the following Web site:
http://www.soapui.org/
SoapUI provides out-of-the-box support for HTTP and HTTPS. The HermesJMS plug-in can also be used to send JMS
requests. For more information about using HermesJMS with SoapUI, see the following Web site:
http://www.soapui.org/JMS/getting-started.html
B.11 OSS Product Compliance Tool
The Oracle Communications OSS Product Compliance Tool is used to:
• Capture configuration snapshots of the OSM environment.
• Evaluate compliance with documented configuration requirements, best practices and guidelines.
The Compliance tool uses an extensible set of compliance rules to determine if a configuration value is properly set
or, if a range of valid values is permitted, whether the configured value falls within that range. In some cases, the
tool flags configuration left to its default value, as this may be an indication that it might not have been properly
tested or tuned. The tool also verifies that required or recommended patches have been applied.
For every compliance rule, reports include a general description of the rule, an indication of whether the rule passed
or failed, and the rationale for the compliance rule. For non-compliant results, a severity level - Error, Warning and
Information - and the reason for the failure are also included.
The OSS Product Compliance Tool is under controlled availability. Contact Oracle Support to request access to
this tool.
Reporting Page
You can use the Reporting page to monitor orders and tasks.
Using the Pending Orders tab or the Order Volume tab, verify that OSM is able to keep up with the incoming order
flow. In the Pending Orders tab, confirm that the number of pending orders is stable or decreasing. In the Order
Volume tab, check that the received and completed bars are reasonably close in size.
In the Completed Order Statistics and the Completed Task Statistics tabs, verify that Average and Highest times are
within expected ranges.
You can use Design Studio's Cartridge Management view to get a list of cartridges deployed to your OSM system.
The Deployed Versions table lists which cartridge version and build combination is currently deployed in the target
environment for the selected cartridge.