ABAP Tools for Performance Measurements and Analyses
Summary
Manfred Mensch
SAP SE
Performance and Scalability
March 01, 2021
Contents
These lessons give an introduction to the most frequently used tools for measuring and
analyzing the performance of ABAP applications:
• Code Inspector (transaction SCI)
• Performance Monitor (transaction STATS)
• Performance Trace (transaction ST05)
• Runtime Analysis (transaction SAT)
After a brief explanation of the various aspects of performance, the main features and
capabilities of the individual tools are presented in detail. The overall purpose of the analysis
tools is to identify the root causes of an application’s poor performance. Exercises provide
hands-on experience with each tool. Finally, the combined usage of the tools for finding
optimization potential in an application is described. Performance optimization is not in the
scope of these lessons.
Goals
After these modules you will …
• … understand that performance has multiple facets
• … appreciate the basic features of the tools
• … be able to measure the performance of ABAP applications
• … know how to identify optimization potential
1
ABAP Tools for Performance Measurements and Analyses
Table of Contents
Tools-Oriented Approach
• III. Performance Monitor (STATS): Assess an application’s resource consumption.
• IV. Performance Trace (ST05): Evaluate events that leave the ABAP work process.
Problem-Oriented Approach
• VI. Performance Analysis: Combine the tools to analyze an application’s performance.
2
Introduction and Overview
Abstract
3
Introduction and Overview
Performance
Performance …
… pertains to a computer system’s overall efficiency
… has various perspectives
o End user: fast end-to-end response time
o IT administrator: high data throughput, low resource consumption, linear scalability
… is characterized by 2 quantities
o Response time: speed of task completion
o Throughput: amount of work done in a given period of time
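As a tiny numeric illustration of the two quantities defined above (all numbers are invented for the example):

```python
# Throughput: amount of work done in a given period of time.
tasks_completed = 1800
period_s = 3600.0
throughput_per_s = tasks_completed / period_s  # 0.5 tasks/s

# Response time: speed of task completion, measured per task.
response_times_ms = [400, 600, 500]
avg_response_ms = sum(response_times_ms) / len(response_times_ms)  # 500 ms
```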
Customers assume that their IT infrastructure runs with maximum efficiency. They take this non-
functional quality for granted.
Performance has multiple aspects:
• The time-centric view focuses on minimizing an application’s end-to-end response time for
individual users.
• The resource-oriented approach attempts to maximize data throughput and the number of
concurrent users while minimizing resource consumption. To reach this goal, linear scalability
is a prerequisite.
• In recent years, the issue of sustainability, with the environmental goals to reduce energy
consumption and greenhouse gas emissions, has gained prominence.
Performance and scalability are relevant in multi-process client-server systems:
• Processes interact with core resources (especially the central database server), which must be
spared as much as possible.
• Many concurrent users compete for a limited amount of resources.
• Applications request services from remote processes.
• In distributed systems, network issues (latency, bandwidth) become relevant.
• Various types of front ends handle parts of the application and access the back end.
Resources relevant for performance include:
• CPU time
• DB time
• Memory
• Network bandwidth
• Physical I/O
Especially critical with respect to throughput are singular, central resources, which are
required by, and have to be shared among, many applications or users. If such a resource must
be locked for exclusive use to guarantee data consistency, the lock must be held for as short a
time as possible to maximize throughput.
4
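The impact of lock duration can be estimated with a simplified serialization bound (an illustrative model, not a formula from the slides): if every transaction must hold the same exclusive lock for t seconds, no more than 1/t transactions per second can complete, regardless of how many work processes run in parallel.

```python
def max_throughput_per_s(t_hold_s):
    # With full serialization on one exclusive central lock, completions
    # per second cannot exceed one per lock-hold interval.
    return 1.0 / t_hold_s

slow = max_throughput_per_s(0.100)  # 100 ms lock hold: at most ~10 tx/s
fast = max_throughput_per_s(0.010)  # 10 ms lock hold: at most ~100 tx/s
```

Shortening the lock-hold time is therefore a direct lever on system-wide throughput.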
Perceived Performance
Conclusions
• Good perceived performance leads to higher user motivation and thus better overall
performance.
• Tolerance for waiting times can be increased by applying perceived performance tricks (e.g.,
by providing timely feedback).
• Improving perceived performance may lead to worse objective performance, but still increase
the user’s overall satisfaction.
5
Perceived Performance
Task Types
[Chart: response-time expectations per task type on a scale from 0.1 s to 15 s: instant
feedback (IF), simple tasks (dialog), complex tasks, exceptionally complex tasks.]
6
Introduction and Overview
SAP Software Architecture
[Diagram: multi-tier architecture with typical access times and data volumes as seen from an
ABAP work process: internal mode (~5 GB, <1 µs); application server shared memory and buffers
(~50 GB, ~10 µs); server-to-server communication (~10 ms). The application server comprises
the ICM, the ABAP dispatcher, and work processes; the central services comprise the enqueue
server and the message server.]
Schematic overview of the multi-tier client-server software architecture of the ABAP stack of
SAP Solutions.
• The presentation layer consists of the front end software (mobile app, web browser, SAP GUI)
used for login and user interaction.
• The SAP Web Dispatcher is a load balancer. It distributes web-based requests over the
available application servers. For SAP GUI accesses, the message server acts as a load
balancer.
• Via the ICM (Internet Communication Manager), HTTP requests from mobile apps or from a
browser can be handled within the ABAP stack of the application server: The ICM sends the
request via the appropriate dispatcher to a work process where it is processed.
• The application layer consists of one or more application server instances (each with multiple
work processes) where the business logic of the software is running. This is the most important
layer for the scalability of the system.
• The persistence layer contains the central database (DBMS; with database processes,
database cache, and database files). This is a central resource and it should be conserved as
much as possible. It does not scale as well as the application layer.
Typical access times from the R/3 work process to the various components are shown together
with typical amounts of data held in each component (either in memory or on disk). The
numbers are representative for OLTP systems with dialog users. BW systems can be much
larger, especially their data persistency. The absolute numbers depend on the application and
on the system’s hardware and configuration. The main focus is on the relative orders of
magnitude.
To some minor extent the time to access the database depends on whether application server
and database server run on the same physical hardware. Analogously, the times for RFC and
HTTP communication depend slightly on the overall system configuration. Calls within one
physical server will be faster than identical calls to another server.
7
For in-depth analyses traces are available in transaction ST05:
• SQL database accesses
• Buffer accesses to the buffers on the ABAP application server
• Enqueue requests to the central enqueue server
• RFC communication via RFC protocol
• HTTP communication via HTTP protocol
• APC communication via WebSocket
• AMC publish/subscribe channels for message exchange
ABAP runtime analysis within a work process is done with transaction SAT.
Introduction and Overview
Optimization Potential
[Diagram: typical contributions to the end-to-end response time: front end (memory, CPU,
controls; WAN) ≈ 35 %; internet middleware (compression, templates; memory, CPU) ≈ 35 %;
application server (algorithms, data structures, memory, CPU; LAN) ≈ 25 %.]
Typical contributions to the end-to-end response time of an OLTP application with optimized
performance. The numbers refer to a simple user interaction step with a target time of 1 s.
Significant deviations may indicate a performance problem.
Front End:
• The performance on the presentation layer is determined by the consumption of CPU time and
main memory.
Internet:
• The performance-relevant resources in internet middleware (e.g., SAP Gateway, SAP Mobile
Platform, SAP ITS, SAP Web Dispatcher) are CPU and memory.
Application Server:
• The resources that control performance on the application server are CPU and memory. They
are consumed in the application’s coding, and in generic frameworks and runtime engines.
Database:
• To minimize physical I/O for reading or writing data to database files in the database server’s
file system, main memory is used to cache the data in shared memory. Processing SQL
statements on the database server consumes CPU time.
Network:
• The relevant network qualities are its latency and its bandwidth. To optimize network
communication, the number of data transfers (round trips) and the amount of transferred data
must be minimized.
• The network traffic between the front end and the back end across the internet has strongly
increased in recent years. This growth started with the GUI controls and accelerated with
browser-based applications, cloud computing, and mobile devices.
• The network communication between database and application servers is in most cases not
critical. Customers usually have high-performance networks between these servers.
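The trade-off between round trips and data volume described above can be sketched with a simple cost model (function name, units, and numbers are illustrative assumptions, not from the slides):

```python
def transfer_time_s(round_trips, payload_kb, latency_s, bandwidth_kb_per_s):
    # Total network time = per-round-trip latency cost
    #                    + serialization of the payload over the bandwidth.
    return round_trips * latency_s + payload_kb / bandwidth_kb_per_s

# 50 ms latency, 10 MB/s bandwidth, 100 kB payload:
chatty  = transfer_time_s(round_trips=10, payload_kb=100,
                          latency_s=0.050, bandwidth_kb_per_s=10_000)
batched = transfer_time_s(round_trips=2, payload_kb=100,
                          latency_s=0.050, bandwidth_kb_per_s=10_000)
```

On a high-latency WAN, cutting the number of round trips typically helps far more than shrinking the payload.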
8
Product Standard Performance
Overview
The Product Standard Performance provides pragmatic guidance during the entire software
lifecycle. It supports the design and development of performance-optimized software with a set
of requirements that are independent of programming language, computing platform, or specific
techniques. Software that follows the guidelines is guaranteed to have good performance and
scalability. The Product Standard Performance may be enhanced by specific requirements, and
non-applicable requirements may be ignored. It is not a formal checklist.
Rules of SAP’s Product Standard Performance are grouped into …
• Architecture Requirements
For the software to comply with the programming guidelines (architecture requirements) of the
Product Standard Performance, these have to be considered already in the design phase.
• Quality Requirements
The key performance indicator (KPI) measurements demanded by the standard’s quality
requirements indicate how well the application adheres to the architecture requirements.
Additionally, they allow the comparison of the application’s performance in the current release
with that in the previous release. Finally, they serve as input data for sizing calculations.
Detailed information on the Product Standard Performance is available at SAP Corporate Portal.
The document Requirements for HANA addresses the specific circumstances of applications
built on HANA.
Recently, the Product Standard Performance has been enhanced with performance-related
architecture principles to reflect changes required for cloud development and to explicitly cover
essential cloud qualities.
9
Product Standard Performance
Architecture Requirements
Previous description of PERF-14: Parallel processing through efficient lock design and proper
splitting and size of work packages shall be enabled.
10
Product Standard Performance
Quality Requirements
Full description of PERF-02: Average and maximum throughput, end-to-end response time for a
user interaction step (UIS), or smooth animation performance shall be planned and provided.
The planned KPI values shall be competitive (default: 150 ms for instant feedback, 1000 ms for
simple UIS, 3000 ms for complex UIS, 20 images/s for smooth animation performance). The
measurement method shall be planned and provided.
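A minimal sketch of checking measured values against the default KPI targets quoted above (the function and dictionary names are illustrative, not part of the standard):

```python
# Default KPI targets from PERF-02, in milliseconds.
KPI_TARGETS_MS = {
    "instant_feedback": 150,
    "simple_uis": 1000,
    "complex_uis": 3000,
}

def meets_kpi(task_type, measured_ms):
    # A measurement is competitive if it does not exceed the default target.
    return measured_ms <= KPI_TARGETS_MS[task_type]

ok_simple = meets_kpi("simple_uis", 800)        # within the 1000 ms target
slow_feedback = meets_kpi("instant_feedback", 200)  # misses the 150 ms target
```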
The measurements demanded by quality requirements PERF-05 to PERF-10 form the basis for
the non-regression requirement PERF-11.
Units of measure are the kB for transferred data volumes, the MB for memory consumption, and
the ms for times or durations.
While the quality requirements of the Product Standard Performance call for objective
measurement values, users judge performance subjectively. Improving perceived performance
may worsen the objective performance, but may increase user satisfaction and productivity.
11
Introduction and Overview
Summary
12
Introduction and Overview
ABAP Tools for Performance Measurements and Analyses
• Static Analyzers
o Inspect source code and underlying data structures
o Do not execute software
• System Monitors
o Gather runtime information on an integral level
o Create little overhead
o Observe resources all the time
• Traces
o Measure in detail and with high granularity
o Give specific information on individual resources
o Cause considerable overhead
o Record only on demand
Several tools are available to check adherence to architecture requirements and to measure KPI
values mandated by quality requirements.
This training presents only ABAP tools for single user performance measurements. It does not
cover tools for volume, load, or stress testing.
13
Code Inspector
Abstract
This lesson …
… introduces the Code Inspector as SAP’s tool for static checks
… explains how to use Code Inspector to identify less than optimal software
… helps to interpret its results
… clarifies Code Inspector’s value for adhering to architecture requirements
A message from Code Inspector does not prove that the software has a defect.
The absence of Code Inspector messages does not prove that the software is free of defects.
14
Code Inspector
Introduction
15
Code Inspector
Scope and Purpose
16
Code Inspector
Summary
Code Inspector is integrated into the development workbench tools, CheckMan, the ABAP Test
Cockpit (ATC), and the Transport Organizer (SE09).
17
Code Inspector
Best Practice
In general
o Use Code Inspector to check your objects
o Execute inspections periodically
In a system with ATC
o Subset of performance checks is integrated into ATC
o Check objects out of workbench with ATC
o Execute other performance checks through Code Inspector
Code Inspector shall be executed frequently and regularly during the development process.
Especially with respect to messages about accesses to database tables, a local analysis and
optimization of the SQL statement may not be enough. Consider other accesses to the same
table, review its definition, indexes, technical settings, expected amount of data, etc.
A message from Code Inspector does not prove that the software has a defect.
The absence of Code Inspector messages does not prove that the software is free of defects.
18
Performance Monitor
Abstract
This lesson …
… introduces statistics records
… explains how to analyze these statistics records
… clarifies what the data indicate about performance and resource consumption
19
Performance Monitor
Introduction
As a system monitor, transaction STATS (also known as the Performance Monitor) provides
information on an integral level (e.g., per transaction step, per background job execution). It
creates only little overhead and thus can observe resources all the time―no explicit activation is
necessary. This is especially helpful when investigating problems retrospectively.
During the execution of any task (e.g., dialog step, batch job run, update task) in any work
process of an ABAP application server instance, the SAP kernel continuously collects data on
various KPIs.¹ At the task’s end (even when it terminates with an ABAP short dump) the
collected data are combined into a statistics record and stored in chronological order into a
shared buffer on the application server. From there, they are moved to a file on the server’s file
system. The file’s name is determined by the value of profile parameter stat/file (e.g.,
/usr/sap/YI3/D33/data/stat). Every hour this file is copied to another file whose name is
derived from the original file’s name by appending a two-digit running number (e.g.,
/usr/sap/YI3/D33/data/stat42). Since one file is written per hour, the total number of files
equals the number of hours that the statistics records reach back into the past. It can be
configured via profile parameter stat/max_files; the default value is 48, and the maximum
value is 99.
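The rotation scheme can be sketched as follows (a hypothetical helper for illustration; the exact numbering of the copies, e.g. whether it starts at 00, and whether the base file counts toward stat/max_files are assumptions):

```python
def stat_file_names(stat_file, max_files=48):
    """Return the base statistics file plus its hourly rotation copies.

    Rotation appends a two-digit running number to the base name,
    e.g. /usr/sap/YI3/D33/data/stat -> .../stat00 ... .../stat47.
    """
    if not 1 <= max_files <= 99:
        raise ValueError("stat/max_files must be between 1 and 99")
    return [stat_file] + [f"{stat_file}{i:02d}" for i in range(max_files)]

files = stat_file_names("/usr/sap/YI3/D33/data/stat", max_files=48)
```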
Statistics records roughly reflect how an application works on the technical level.
STATS is the main tool to display statistics records. It is the successor of transaction STAD
(Business Transaction Analysis), which is still available.
¹ Profile parameter stat/level must have its default value 1 to activate the data collection, which can be configured via
profile parameters stat/*.
20
Performance Monitor
Scope and Purpose
Investigate dialog steps of an application with respect to their response time and
their resource consumption.
Obtain measurements for quality requirements and data for sizing calculations.
Statistics records help to …
… decide whether an application needs further performance analyses
… identify critical dialog steps
… define next steps for more detailed investigations
Statistics records are the entry point for any performance analysis. Every UI step of a transaction
is recorded individually. This supports holistic analyses of transactions, including the response
times of each step and the user’s think times between steps. The statistics records provide a
very good overview of where the time of a dialog step was spent (database, ABAP processing ≈
CPU time, RFC, GUI communication, ...) and indicate the dialog step’s resource consumption.
Measurements mandated by quality requirements of SAP’s Product Standard Performance can
be extracted from statistics records.
Looking at the entirety of statistics records during a given period of time, the overall performance
of the system can be judged and situations with excessive load can be identified.
Statistics records provide the data basis for a workload analysis with transaction ST03, which
supports historical performance analyses. For this purpose, the records are aggregated hourly,
weekly, monthly, and stored in a database table.
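The hourly aggregation can be imitated with a minimal sketch (the data model is an illustrative assumption, not ST03’s implementation; records are simplified to pairs of hour and response time):

```python
from collections import defaultdict

def aggregate_hourly(records):
    # Group statistics records by the hour they fall into, then average
    # the response times per bucket.
    buckets = defaultdict(list)
    for hour, resp_ms in records:
        buckets[hour].append(resp_ms)
    return {hour: sum(v) / len(v) for hour, v in buckets.items()}

profile = aggregate_hourly([(9, 800), (9, 1200), (10, 300)])
```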
21
Performance Monitor
Summary
They indicate the overall resource consumption and the distribution of the response
time across the involved components.
22
Performance Monitor
Best Practice
Transaction STATS uses the ALV grid control for displaying the list of statistics records. Users
can customize the list via layouts to meet their requirements. These custom layouts can be
saved and reused.
23
Performance Trace
Abstract
This lesson …
… introduces the Performance Trace as SAP’s interface trace tool
… shows that traces capture an application’s requests to external resources
… explains how to record and analyze traces
… indicates that traces identify optimization potential
This lesson focuses mainly on SQL traces, as this is the most relevant trace type offered by
the Performance Trace.
24
Performance Trace
Introduction
By default, all trace types are switched off, so that the system’s performance is not impaired.
An SQL trace does not record SQL statements that are handled by the application server’s table
buffer. It exclusively captures requests that are processed by the database server. DB trace
would be a more appropriate name.
25
Performance Trace
Scope and Purpose
Investigate applications with respect to the resources they request from outside of
the work process.
Check adherence to architecture requirements.
Get KPIs mandated by quality requirements.
Identify hot spots in an application, e.g., due to …
… expensive SQL statements
… unnecessary accesses to the persistence layer
… long running SAP locks
… RFC issues
Deduce approaches for optimization.
ST05 is not the appropriate tool if the database server or the entire system is slow.
The optimization of performance problems discovered with the help of ST05 traces is not in the
scope of this lesson.
26
Performance Trace
Summary
Traces …
… help to check adherence to architecture requirements
… supply KPIs mandated by quality requirements
… reveal hot spots in an application
… indicate where optimization is required
27
Performance Trace
Best Practice
Keep the trace size as small as possible to facilitate trace analyses and to minimize the risk
of losing trace records.
28
Runtime Analysis
Abstract
This lesson …
… introduces the Runtime Analysis as SAP’s application trace tool
… explains how to use the Runtime Analysis to record and analyze traces
… shows that traces capture an application’s expensive events
… indicates that traces identify optimization potential
… demonstrates that ABAP traces allow to investigate an application’s scaling behavior
29
Runtime Analysis
Introduction
ABAP statements that request services from external resources (e.g., database, remote
servers) are also traced, but the recorded information is rather limited―especially when
compared to a dedicated interface trace as captured by ST05.
The set of events that can be traced is defined in the kernel. Events that cannot be traced
include assignments, comparisons, and computations. The time of such events is assigned to
their caller. Some of these events might be expensive in some situations (e.g., implicit SORTs
when assigning a standard table to a sorted table; (lazy) create or update of secondary keys for
internal tables; calls to XSLTs or simple transformations).
By comparing two ABAP traces, changes in the program flow can be detected.
By default, the ABAP trace is switched off, so that the system’s performance is not impaired.
In previous releases, SAT was called SE30. Starting with SAP_BASIS 7.10 this transaction code
is used again.
30
Runtime Analysis
Scope and Purpose
Investigate ABAP statements called by applications with respect to run time and
memory consumption.
SAT is not the appropriate tool if the ABAP application server instance or the entire system is
slow.
The optimization of performance problems discovered with the help of an SAT trace is not in the
scope of this lesson.
An SAT trace can also be used to analyze program flow, e.g., for reverse engineering purposes.
31
Runtime Analysis
Summary
ABAP traces …
… help to check adherence to architecture requirements
… detect hot spots in an application
… indicate where optimization is required
32
Runtime Analysis
Best Practice
33
Performance Analysis
Abstract
This lesson …
… explains the procedure for performance analyses
… stresses the importance of preparation
… demonstrates how to find optimization approaches
34
Performance Analysis
Introduction
The previous lessons have used the tools-oriented approach to present the features and
capabilities of the relevant transactions (SCI, STATS, ST05, SAT) in a self-contained fashion.
The current chapter employs the task-oriented approach to explain the joint use of the tools for
analyzing an application with the goal to find optimization potential.
Before running and measuring or tracing an application, Code Inspector should be used to verify
that the application’s static coding and its underlying data structures adhere to the architecture
requirements of the Product Standard Performance.
35
Performance Analysis
Scope and Purpose
Monitoring and trace tools measure and analyze the performance of applications.
Their data identify hot spots in an application’s resource consumption. These offer
the highest potential for optimization.
Identification of an application’s performance hot spots relies mostly on ST05’s list of Structure-
identical SQL statements and on SAT’s Hit List tool. The sequential order of the statements
(ST05’s Detailed Trace List) or events (SAT’s Call Hierarchy or Processing Blocks tools) will not
be used in these analyses.
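ST05’s grouping by structure-identical statements can be approximated by normalizing away literal values before counting (a rough imitation for illustration, not the actual ST05 algorithm; the sample statements are made up):

```python
import re
from collections import Counter

def normalize(stmt):
    # Replace literal values so that statements differing only in their
    # parameters collapse onto one "structure-identical" form.
    stmt = re.sub(r"'[^']*'", "?", stmt)   # quoted literals
    stmt = re.sub(r"\b\d+\b", "?", stmt)   # bare numeric literals
    return re.sub(r"\s+", " ", stmt).strip().upper()

trace = [
    "SELECT * FROM mara WHERE matnr = '100'",
    "SELECT * FROM mara WHERE matnr = '200'",
    "SELECT * FROM vbak WHERE vbeln = '9000'",
]
hot_spots = Counter(normalize(s) for s in trace)
```

The statement with the highest aggregated count (or time, in a real trace) is the hot spot to investigate first.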
Optimization is the process of removing or improving the hot spots. For optimization purposes,
considering the sequential order of the events might be very valuable, e.g., to decide which of
the identical SELECTs can be removed.
Optimization requires more knowledge than covered in this lesson.
36
Performance Analysis
Summary
From the resulting high-quality data, optimization strategies can be derived.
Focus on
1. Elimination of unnecessary events
2. Tuning of required events
3. Verification of linear scalability
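Step 3 above can be sketched as a simple check on runtimes measured at several data volumes: under linear scaling, time per unit of volume stays roughly constant (the tolerance and the sample data are illustrative assumptions):

```python
def scales_linearly(volumes, times_ms, tolerance=0.25):
    # Linear scaling means time/volume is roughly constant across the
    # measurement series; allow some deviation via the tolerance.
    rates = [t / v for v, t in zip(volumes, times_ms)]
    return max(rates) / min(rates) <= 1 + tolerance

linear    = scales_linearly([100, 200, 400], [10.0, 21.0, 41.0])
quadratic = scales_linearly([100, 200, 400], [10.0, 40.0, 160.0])
```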
37
Performance Analysis
Best Practice
Optimize
Repeat performance analysis to confirm improvements
38
© 2021 SAP SE. All rights reserved
39