10-Performance Systems Analysis

Part 1: An Overview of Performance Evaluation

Ch. 1 Introduction
Ch. 2 Common Mistakes and How to Avoid Them
Ch. 3 Selection of Techniques and Metrics
CH. 1 INTRODUCTION

Performance is a key criterion in the design, procurement, and use of computer systems.

The goal is to get the highest performance for a given cost.

This requires a basic knowledge of performance evaluation terminology and techniques.
1.1 Outline of Topics (1)
Performance Evaluation of system design alternatives

System Tuning : determining the optimal value of a parameter

Bottleneck Identification : finding the performance bottleneck

Workload Characterization

Capacity Planning : determining the number/size of components

Forecasting : predicting the performance at future loads

Six examples of these types of problems follow.


1.1 Outline of Topics (2)

1. Select appropriate evaluation techniques, performance metrics, and workloads for a system.
 The techniques for performance evaluation
: Measurement, Simulation, and Analytical modeling
 The metric : the criteria used to evaluate the performance
(ex) Response time – the time to service a request
(ex) Throughput – transactions per second
 The workload : the requests made by the users of the system

Ex. (1.1) What performance metrics should be used to compare the performance of the following systems?
(a) Two disk drives
(b) Two transaction processing systems
(c) Two packet retransmission algorithms
1.1 Outline of Topics (3)

2. Conduct performance measurements correctly.


 Load Generator : a tool to load the system
(ex) Remote Terminal Emulator for a timesharing system
 Monitor : a tool to measure the results

Ex. (1.2) Which type of monitor (software or hardware) would be more suitable for measuring each of the following quantities?
(a) Number of instructions executed by a processor
(b) Degree of multiprogramming on a timesharing system
(c) Response time of packets on a network
1.1 Outline of Topics (4)
3. Use proper statistical techniques to compare several
alternatives.
 Most performance evaluation problems basically consist of finding
the best among a number of alternatives.
 Simply comparing the average result of a number of repeated
trials does not lead to correct conclusions, particularly if the
variability of the result is high.

Ex. (1.3) The number of packets lost on two links was measured for four file sizes as shown in Table 1.1. Which link is better?

TABLE 1.1 Packets Lost on Two Links

File Size   Link A   Link B
1000        5        10
1200        7        3
1300        3        0
50          0        1
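A minimal sketch (Python, standard library only) of looking at the Table 1.1 data as paired observations rather than bare averages; the high variability of the per-file-size differences is exactly what makes a simple comparison of averages unreliable. The script and its variable names are illustrative, not part of the text.

```python
import statistics

# Packets lost per file size (Table 1.1)
file_sizes = [1000, 1200, 1300, 50]
link_a = [5, 7, 3, 0]
link_b = [10, 3, 0, 1]

# Paired differences (A - B), one per file size
diffs = [a - b for a, b in zip(link_a, link_b)]

mean_diff = statistics.mean(diffs)     # 0.25
stdev_diff = statistics.stdev(diffs)   # about 4.1, large relative to the mean

print("Differences (A - B):", diffs)   # [-5, 4, 3, -1]
print("Mean difference:", mean_diff)
print("Std. deviation:", round(stdev_diff, 2))
# A mean difference near zero with high variability suggests that neither
# link can be declared better from these four observations alone.
```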
1.1 Outline of Topics (5)

4. Design measurement and simulation experiments to provide the


most information with the least effort.
 Given a number of factors that affect the system performance, it is
useful to separate out the effects of individual factors.

Ex. (1.4) The performance of a system depends on the following three factors:
(a) Garbage collection technique used: G1, G2, or none.
(b) Type of workload: editing, computing, or artificial intelligence (AI).
(c) Type of CPU: C1, C2, or C3

How many experiments are needed? How does one estimate the
performance impact of each factor?
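For the first question, a full factorial design runs every combination of the listed levels. A small illustrative sketch (Python, standard library only) simply enumerates them:

```python
from itertools import product

# Factors and levels from Example 1.4
garbage_collection = ["G1", "G2", "none"]
workload = ["editing", "computing", "AI"]
cpu = ["C1", "C2", "C3"]

# Full factorial design: every combination, 3 * 3 * 3 = 27 experiments
experiments = list(product(garbage_collection, workload, cpu))
print(len(experiments))        # 27
for gc, wl, c in experiments[:3]:
    print(gc, wl, c)           # first few combinations
```

Estimating the impact of each factor from such a design is the subject of experimental design techniques discussed later.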
1.1 Outline of Topics (6)
5. Perform simulations correctly.
 In designing a simulation model, one has to select a language for
simulation, select seeds and algorithms for random-number
generation, decide the length of simulation run, and analyze the
simulation results.

Ex. (1.5) In order to compare the performance of two cache replacement algorithms:
(a) What type of simulation model should be used?
(b) How long should the simulation be run?
(c) What can be done to get the same accuracy with a shorter run?
(d) How can one decide if the random-number generator in the
simulation is a good generator?
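For (a), a trace-driven simulation is a natural candidate. Below is a minimal illustrative Python sketch (the trace, cache size, and helper names are ours, not from the text) comparing hit counts of LRU and FIFO replacement on the same reference trace:

```python
from collections import OrderedDict, deque

def lru_hits(trace, size):
    """Count hits for an LRU cache of `size` entries on an address trace."""
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)           # mark as most recently used
        else:
            if len(cache) >= size:
                cache.popitem(last=False)     # evict least recently used
            cache[addr] = True
    return hits

def fifo_hits(trace, size):
    """Count hits for a FIFO cache of `size` entries on the same trace."""
    cache, order, hits = set(), deque(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
        else:
            if len(cache) >= size:
                cache.discard(order.popleft())  # evict oldest entry
            cache.add(addr)
            order.append(addr)
    return hits

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4, 5, 1]    # hypothetical reference trace
print("LRU hits:", lru_hits(trace, size=3))      # 4
print("FIFO hits:", fifo_hits(trace, size=3))    # 2
```

Questions (b)-(d) about run length, variance reduction, and random-number generator quality are taken up in the simulation chapters.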
1.1 Outline of Topics (7)
6. Use simple queueing models to analyze the performance of systems.
 Queueing models are commonly used for analytical modeling of
computer systems.

Ex. (1.6) The average response time of a database system is 3 seconds. During a 1-minute observation interval, the idle time on the system was 10 seconds. Using a queueing model for the system, determine the following:
(a) System Utilization (b) Average service time per query
(c) Number of queries completed during the observation interval
(d) Average number of jobs in the system
(e) Probability of number of jobs in the system being greater than 10
(f) 90-percentile response time (g) 90-percentile waiting time
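As a preview of the kind of reasoning involved, the sketch below (Python, standard library) computes the directly observed quantities and then the model-based ones under a single-server M/M/1-style assumption with exponentially distributed times. The formulas are standard M/M/1 results, not taken from this excerpt, and the code is illustrative only.

```python
import math

# Observed quantities from Example 1.6
T = 60.0          # observation interval (s)
idle = 10.0       # observed idle time (s)
R = 3.0           # mean response time (s)

busy = T - idle
rho = busy / T                      # (a) utilization = 50/60 ~ 0.833

# The remaining results assume a single-server M/M/1-style model.
S = R * (1 - rho)                   # (b) mean service time: R = S/(1-rho) -> 0.5 s
C = busy / S                        # (c) queries completed ~ 100
N = rho / (1 - rho)                 # (d) mean number of jobs in system = 5
p_gt_10 = rho ** 11                 # (e) P(n > 10) = rho^11 ~ 0.134

# (f), (g): assuming exponentially distributed response and waiting times
r90 = R * math.log(10)              # 90-percentile response time ~ 6.9 s
w90 = R * math.log(10 * rho)        # 90-percentile waiting time ~ 6.4 s

print(f"utilization={rho:.3f} service={S:.2f}s completed={C:.0f} "
      f"N={N:.1f} P(n>10)={p_gt_10:.3f} r90={r90:.1f}s w90={w90:.1f}s")
```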
1.2 The Art of Performance Evaluation(1)

Some requirements for performance evaluation
- An intimate knowledge of the system being modeled
- A careful selection of the methodology, workload, and tools

Given the same problem, two analysts may choose different performance metrics and evaluation methodologies.

Given the same data, two analysts may interpret them differently.
1.2 The Art of Performance Evaluation(2)

Example 1.7
The throughputs of two systems A and B were measured in
transactions per second.
The results are shown in Table 1.2

TABLE 1.2 Throughput in Transactions per Second
System Workload 1 Workload 2

A 20 10

B 10 20

 There are three ways to compare the performance of the two systems.
1.2 The Art of Performance Evaluation(3)

Example 1.7 (Cont.)
The first way is to take the average of the performance on the two workloads.

System Workload 1 Workload 2 Average


A 20 10 15
B 10 20 15

 The second way is to consider the ratio of the performances with system B as the base.
System Workload 1 Workload 2 Average
A 2 0.5 1.25
B 1 1 1
1.2 The Art of Performance Evaluation(4)

Example 1.7 (Cont.)
The third way is to consider the performance ratio with system A as the base.

System Workload 1 Workload 2 Average


A 1 1 1
B 0.5 2 1.25

Example 1.7 illustrates a technique known as the ratio game.
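The sketch below (illustrative Python, using the Table 1.2 figures) shows the ratio game mechanically: averaging normalized ratios makes whichever system is not chosen as the base look better.

```python
# Throughputs from Table 1.2 (transactions per second)
throughput = {"A": {"W1": 20, "W2": 10},
              "B": {"W1": 10, "W2": 20}}

def avg_ratio(base):
    """Average of per-workload throughput ratios, normalized to `base`."""
    return {name: sum(vals[w] / throughput[base][w] for w in vals) / len(vals)
            for name, vals in throughput.items()}

print(avg_ratio("B"))   # {'A': 1.25, 'B': 1.0}  -> A looks better
print(avg_ratio("A"))   # {'A': 1.0,  'B': 1.25} -> B looks better
# Averaging ratios lets either system "win" depending on the chosen base.
```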
1.3 Professional Organizations,
Journals, and Conferences (1)
ACM SIGMETRICS
: for researchers engaged in developing methodologies and users seeking new or improved techniques for the analysis of computer systems

IEEE Computer Society
: a number of technical committees – the technical committee on simulation may be of interest to performance analysts

ACM SIGSIM
: Special Interest Group on SIMulation – Simulation Digest

CMG
: Computer Measurement Group, Inc. – CMG Transactions
1.3 Professional Organizations,
Journals, and Conferences (2)

IFIP Working Group 7.3


: AFIPS (American Federation of Information Processing Societies) – ACM, IEEE, etc.

The Society for Computer Simulation


: Simulation (monthly), Transactions of the Society for Computer Simulation (quarterly)

SIAM
: SIAM Review, SIAM Journal on Control & Optimization, SIAM Journal on Numerical Analysis, SIAM Journal on Computing, SIAM Journal on Scientific and Statistical Computing, and Theory of Probability & Its Applications
1.3 Professional Organizations,
Journals, and Conferences (3)
ORSA
: Operations Research, ORSA Journal on Computing, Mathematics
of Operations Research, Operations Research Letters, and
Stochastic Models

Each of the organizations organizes annual conferences.

Students interested in taking additional courses on performance evaluation techniques may consider courses on statistical inference, operations research, stochastic processes, decision theory, time series analysis, design of experiments, system simulation, queueing theory, and other related subjects.
1.4 Performance Projects
Select a computer subsystem, for example, a network mail program, an operating system, a language compiler, a text editor, a processor, or a database.

Perform some measurements.

Analyze the collected data.

Simulate or analytically model the subsystem.

Predict its performance.

Validate the model.


Chapter 2 Common Mistakes and How to Avoid Them
2.1 Common Mistakes in
Performance Evaluation (1)
No goals
Any endeavor without goals is bound to fail.
Each model must be developed with a particular goal in mind.
The metrics, workloads, and methodology all depend upon
the goal.

2.1 Common Mistakes in
Performance Evaluation (2)
Biased Goals
The problem of stating the goals becomes that of finding the right metrics and workloads for comparing the two systems, not that of finding the metrics and workloads such that our system turns out better.
(Like a jury, the analyst must remain unbiased rather than set out to show that "our system is better.")
2.1 Common Mistakes in
Performance Evaluation (3)
Unsystematic Approach (Section 2.2)
Often analysts adopt an unsystematic approach whereby they select system parameters, factors, metrics, and workloads arbitrarily.
2.1 Common Mistakes in
Performance Evaluation (4)
Analysis without Understanding the Problem
Defining a problem often takes up to 40% of the total effort.
A problem well stated is half solved.
Of the remaining 60%, a large share goes into designing alternatives, interpretation of the results, and presentation of conclusions.
2.1 Common Mistakes in
Performance Evaluation (5)

Incorrect Performance Metrics


A metric refers to the criterion used to quantify the
performance of the system.
The choice of correct performance metrics depends upon the
services provided by the system being modeled.

(ex) Comparing a RISC and a CISC processor on the basis of MIPS is meaningless.


2.1 Common Mistakes in
Performance Evaluation (6)
Unrepresentative Workload
The workload used to compare two systems should be
representative of the actual usage of the systems in the
field.
The choice of the workload has a significant impact on the
results of a performance study.

(ex) A network tested with only short packets or only long packets, when real traffic contains a mix of both.
2.1 Common Mistakes in
Performance Evaluation (7)
Wrong Evaluation Technique
There are three evaluation techniques: measurement, simulation, and analytical modeling.
Analysts often have a preference for one evaluation technique that they use for every performance evaluation problem.
An analyst should have a basic knowledge of all three techniques.
2.1 Common Mistakes in
Performance Evaluation (8)
Overlooking Important Parameters
It is a good idea to make a complete list of system and workload characteristics that affect the performance of the system.
System parameters
- quantum size : CPU allocation
- working set size : memory allocation
Workload parameters
- the number of users
- request arrival patterns
- priority
2.1 Common Mistakes in
Performance Evaluation (9)
Ignoring Significant Factors
Parameters that are varied in the study are called factors.
Not all parameters have an equal effect on the performance.
: if packet arrival rate rather than packet size affects the response time of a network gateway, it would be better to use several different arrival rates in studying its performance.
It is important to identify those parameters, which, if varied, will
make a significant impact on the performance.
It is important to understand the randomness of various system and
workload parameters that affect the performance.
The choice of factors should be based on their relevance and not on
the analyst’s knowledge of the factors.
For unknown parameters, a sensitivity analysis, which shows the effect of changing those parameters from their assumed values, should be done to quantify the impact of the uncertainty.
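As an illustration of such a sensitivity analysis, the sketch below perturbs an assumed arrival rate by +/-20% and observes the spread of a hypothetical model's output; the model function, parameter values, and names are ours, purely for illustration.

```python
def predicted_response_time(arrival_rate, service_time):
    """Hypothetical single-server model used only for illustration."""
    rho = arrival_rate * service_time
    if rho >= 1:
        raise ValueError("model is unstable for rho >= 1")
    return service_time / (1 - rho)

baseline = 1.5                      # assumed (uncertain) arrival rate, requests/s
for rate in (0.8 * baseline, baseline, 1.2 * baseline):
    r = predicted_response_time(rate, service_time=0.5)
    print(f"arrival rate {rate:.2f}/s -> predicted response time {r:.2f} s")
# A large spread in the outputs means the conclusion is sensitive to this
# parameter; a small spread means the uncertainty matters little.
```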
2.1 Common Mistakes in
Performance Evaluation (10)
Inappropriate Experimental Design
Experimental design relates to the number of measurement or simulation experiments to be conducted and the parameter values used in each experiment.
A simple design in which one parameter is varied at a time may lead to wrong conclusions if the parameters interact, so that the effect of one parameter depends upon the values of other parameters.
Better alternatives are full factorial experimental designs and fractional factorial designs.
2.1 Common Mistakes in
Performance Evaluation (11)
Inappropriate Level of Detail
The level of detail used in modeling a system has a significant impact on the problem formulation.
Avoid formulations that are either too narrow or too broad.
A common mistake is to take the detailed approach when a high-level model will do and vice versa.
It is clear that the goals of a study have a significant impact on what is modeled and how it is analyzed.
2.1 Common Mistakes in
Performance Evaluation (12)
No Analysis
One of the common problems with measurement projects is that they are often run by performance analysts who are good in measurement techniques but lack data analysis expertise.
They collect enormous amounts of data but do not know how to analyze or interpret it.
2.1 Common Mistakes in
Performance Evaluation (13)
Erroneous Analysis
There are a number of mistakes analysts commonly make in measurement, simulation, and analytical modeling, for example, taking the average of ratios and running simulations that are too short.
2.1 Common Mistakes in
Performance Evaluation (14)
No Sensitivity Analysis
Often analysts put too much emphasis on the results of their analysis, presenting them as fact rather than evidence.
Without a sensitivity analysis, one cannot be sure whether the conclusions would change if the analysis were done in a slightly different setting.
Without a sensitivity analysis, it is difficult to assess the relative importance of various parameters.
2.1 Common Mistakes in
Performance Evaluation (15)
Ignoring Errors in Input
Often the parameters of interest cannot be measured.
The analyst needs to adjust the level of confidence on the
model output obtained from input data.
Input errors are not always equally distributed about the
mean.

2.1 Common Mistakes in
Performance Evaluation (16)
Improper Treatment of Outliers
Values that are too high or too low compared to a majority of
values in a set are called outliers.
Outliers in the input or model output present a problem.
If an outlier is not caused by a real system phenomenon, it
should be ignored.
Deciding which outliers should be ignored and which should be included is part of the art of performance evaluation and requires careful understanding of the system being modeled.
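For illustration only, one common mechanical screening rule (not prescribed by the text) flags values lying outside 1.5 interquartile ranges of the quartiles; whether a flagged value reflects a real system phenomenon must still be judged by the analyst. The data and names below are hypothetical.

```python
import statistics

def flag_outliers(values, k=1.5):
    """Flag values outside k interquartile ranges of the quartiles."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

response_times = [2.1, 2.3, 2.2, 2.4, 2.2, 9.8, 2.3]   # hypothetical measurements
print(flag_outliers(response_times))                    # [9.8]
# Flagging is mechanical; deciding whether 9.8 s is a real system
# phenomenon (and must be kept) is the analyst's judgment.
```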
2.1 Common Mistakes in
Performance Evaluation (17)
Assuming No Change in the Future
It is often assumed that the future will be the same as the past.
A model based on the workload and performance observed in the past is used to predict performance in the future.
The future workload and system behavior are assumed to be the same as those already measured.
The analyst and the decision makers should discuss this assumption and limit the amount of time into the future that predictions are made.
2.1 Common Mistakes in
Performance Evaluation (18)
Ignoring Variability
It is common to analyze only the mean performance since
determining variability is often difficult, if not impossible.
If the variability is high, the mean alone may be misleading to
the decision makers.

(Figure: daily load demand over the week varies widely around the weekly mean of 80, so reporting the mean alone is not useful.)
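A small illustrative computation (the daily figures are hypothetical, chosen so that the weekly mean is 80 as in the figure) showing how reporting the spread alongside the mean reveals what the mean alone hides:

```python
import statistics

# Hypothetical daily load figures whose weekly mean is 80
daily_load = {"MON": 110, "TUE": 120, "WED": 100, "THU": 90,
              "FRI": 100, "SAT": 20, "SUN": 20}

values = list(daily_load.values())
mean = statistics.mean(values)          # 80
stdev = statistics.stdev(values)        # ~42, large compared to the mean

print(f"mean={mean:.0f}, stdev={stdev:.0f}, range={min(values)}-{max(values)}")
# "mean = 80" hides the fact that weekday load is roughly 90-120 while
# weekend load is near 20; decisions based on the mean alone would mislead.
```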


2.1 Common Mistakes in
Performance Evaluation (19)
Too Complex Analysis
Performance analysts should convey final conclusions in as
simple a manner as possible.
It is better to start with simple models or experiments, get some results or insights, and then introduce complications.
Decision deadlines often lead to choosing simple models; thus, a majority of day-to-day performance problems in the real world are solved by simple models.
(A simple model is easier for the analyst to explain and easier for the decision maker to understand.)
2.1 Common Mistakes in
Performance Evaluation (20)
Improper Presentation of Results
The eventual aim of every performance study is to help in
decision making.
The right metric to measure the performance of an analyst is
not the number of analyses performed but the number of
analyses that helped the decision makers.

(The analyst explains the results of the analysis using words, pictures, and graphs.)
2.1 Common Mistakes in
Performance Evaluation (21)
Ignoring Social Aspects
Successful presentation of the analysis results requires two
types of skills: social and substantive.
- Writing and speaking : Social skills
- Modeling and data analysis : Substantive skills.
Acceptance of the analysis results requires developing a trust
between the decision makers and the analyst and
presentation
of the results to the decision makers in a manner
understandable to them.
Social skills are particularly important in presenting results that
are counter to the decision maker’s beliefs and values or that
require a substantial change in the design.
2.1 Common Mistakes in
Performance Evaluation (21)
Ignoring Social Aspects (cont.)
The presentation to the decision makers should have minimal analysis jargon and emphasize the final results, while the presentation to other analysts should include all the details of the analysis techniques.
Combining these two presentations into one could make it meaningless for both audiences.
2.1 Common Mistakes in
Performance Evaluation (22)
Omitting Assumptions and Limitations
Assumptions and limitations of the analysis are often omitted
from the final report.
This may lead the user to apply the analysis to another context where the assumptions will not be valid.

2.2 A Systematic Approach to
Performance Evaluation (1)
State Goals and Define the System
Given the same set of hardware and software, the definition of the system may vary depending upon the goals of the study.
The choice of system boundaries affects the performance metrics as well as the workloads used to compare the systems.
(ex) To compare two timesharing systems, the system is the complete timesharing system and the parts include the components external to the CPU; to compare two CPUs (e.g., with different ALUs), the system is the CPU and the parts are the components internal to it.
2.2 A Systematic Approach to
Performance Evaluation (2)
List Services and Outcomes
Each system provides a set of services.
(ex) A network allows users to send packets, a processor performs a number of different instructions, and a database system answers queries; each request for service produces a response.
2.2 A Systematic Approach to
Performance Evaluation (3)
Select Metrics
Select criteria to compare the performance.
Choose the metrics(criteria).
In general, the metrics are related to the speed, accuracy, and availability of services.
The performance of a network
: the speed (throughput, delay), accuracy (error rate), and availability of the packets sent
The performance of a processor
: the speed of (time taken to execute) various instructions
2.2 A Systematic Approach to
Performance Evaluation (4)
List Parameters
Make a list of all the parameters that affect performance.
The list can be divided into system parameters and workload
parameters.
System parameters
: Hardware/Software parameters
: These generally do not vary among various installations of the system.
Workload parameters
: Characteristics of users' requests
: These vary from one installation to the next.
2.2 A Systematic Approach to
Performance Evaluation (5)

Select Factors to Study


The list of parameters can be divided into two parts: those that will be varied during the evaluation and those that will not.
The parameters to be varied are called factors and their values are called levels.
It is better to start with a short list of factors and a small number of levels for each factor and to extend the list in the next phase of the project if resources permit.
It is important to consider the economic, political, and technological constraints that exist, as well as the limitations imposed by the decision makers' control and the time available for the decision.
2.2 A Systematic Approach to
Performance Evaluation (6)
Select Evaluation Technique
The right selection among analytical modeling, simulation, and measurement depends upon the time and resources available to solve the problem and the desired level of accuracy.
2.2 A Systematic Approach to
Performance Evaluation (7)
Select Workload
The workload consists of a list of service requests to the system.
For analytical modeling, the workload is usually expressed as a probability of various requests.
For simulation, one could use a trace of requests measured on a real system.
For measurement, the workload may consist of user scripts to be executed on the systems.
To produce representative workloads, one needs to measure and characterize the workload on existing systems.
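As a sketch of the simulation case, the snippet below generates a tiny synthetic request trace assuming Poisson arrivals and a small/large request mix; the rates, mix, and function name are assumptions for illustration, not part of the text.

```python
import random

random.seed(42)   # fixed seed so the trace is reproducible

def synthetic_trace(n_requests, mean_interarrival=0.1, p_large=0.2):
    """Generate (arrival_time, request_type) pairs assuming Poisson arrivals."""
    t, trace = 0.0, []
    for _ in range(n_requests):
        t += random.expovariate(1.0 / mean_interarrival)  # exponential gaps
        kind = "large" if random.random() < p_large else "small"
        trace.append((t, kind))
    return trace

for arrival, kind in synthetic_trace(5):
    print(f"{arrival:8.3f}s  {kind}")
```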
2.2 A Systematic Approach to
Performance Evaluation (8)
Design Experiments
Once you have a list of factors and their levels, you need to decide on a sequence of experiments that offer maximum information with minimal effort.
In the first phase, the number of factors may be large but the number of levels is small; the goal is to determine the relative effect of the various factors.
In the second phase, the number of factors is reduced and the number of levels of those factors that have significant impact is increased.
2.2 A Systematic Approach to
Performance Evaluation (9)
Analyze and Interpret Data
It is important to recognize that the outcomes of measurements and simulations are random quantities in that the outcome would be different each time the experiment is repeated.
In comparing two alternatives, it is necessary to take into account the variability of the results.
The analysis only produces results and not conclusions; the results provide the basis on which the analysts or decision makers can draw conclusions.
2.2 A Systematic Approach to
Performance Evaluation (10)
Present Results
It is important that the results be presented in a manner that is easily understood.
This usually requires presenting the results in graphic form and without statistical jargon.
The knowledge gained by the study may require the analysts to go back and reconsider some of the decisions made in the previous steps.
The complete project consists of several cycles through the steps rather than a single sequential pass.
Case Study 2.1 (1)
Consider the problem of comparing remote pipes with
remote procedure calls.
Procedure calls
The calling program is blocked, control is passed to the called procedure along with a few parameters, and when the procedure is complete, the results as well as the control return to the calling program.
Remote pipes
When called, the caller is not blocked.
The execution of the pipe occurs concurrently with the continued execution of the caller. The results, if any, are later returned asynchronously.
Case Study 2.1 (2)
System Definition
Goal : to compare the performance of applications using
remote pipes to those of similar applications using
remote procedure calls.
Key component : Channel (either a procedure or a pipe)
(Figure: a client system and a server system connected by a network.)
Case Study 2.1 (3)
Services
Two types of channel calls
: remote procedure call and remote pipe
The resources used by the channel calls depend upon the number of parameters passed and the action required on those parameters.
Data transfer is chosen as the application, and the calls will be classified simply as small or large depending upon the amount of data to be transferred to the remote machine.
The system offers only two services
: small data transfer or large data transfer
Case Study 2.1 (4)

Metrics
Due to resource limitations, the errors and failures will not be
studied. Thus, the study will be limited to correct operation
only.
Resources : local computer(client), the remote computer(server),
and the network link
Performance Metrics
- Elapsed time per call
- Maximum call rate per unit of time or equivalently, the time
required to complete a block of n successive calls
- Local CPU time per call
- Remote CPU time per call
- Number of bytes sent on the link per call
Case Study 2.1 (5)
Parameters
System Parameter
Speed of the local CPU, the remote CPU, and the network
Operating system overhead for interfacing with the
channels
Operating system overhead for interfacing with the
networks
Reliability of the network affecting the number of
retransmissions required
Workload Parameters
Time between successive calls
Number and sizes of the call parameters
Number and sizes of the results
Type of channel
Other loads on the local and remote CPUs
Other loads on the network
Case Study 2.1 (6)
Factors
Type of channel
: Two types – remote pipes and remote procedure calls
Speed of the network
: Two locations of the remote hosts will be used – short distance (on campus) and long distance (across the country)
Sizes of the call parameters to be transferred
: Two levels will be used – small and large
Number n of consecutive calls
: Eleven different values of n – 1, 2, 4, 8, 16, 32, ..., 512, 1024
All other parameters will be fixed.
The retransmissions due to network errors will be ignored.
Experiments will be conducted when there is very little other load on the hosts and the network.
Case Study 2.1 (7)
Evaluation Technique
Since prototypes of both types of channels have already been
implemented, measurements will be used for evaluation.
Analytical modeling will be used to justify the consistency of
measured values for different parameters.

Workload
A synthetic program generating the specified types of channel requests
This program will also monitor the resources consumed and log the measured results (using Null channel requests).
Case Study 2.1 (8)
Experimental Design
A full factorial experimental design with 2 × 2 × 2 × 11 = 88 experiments will be used for the initial study.
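A short sketch enumerating this design (assuming the eleven values of n are the powers of two from 1 to 1024, consistent with the endpoints listed under Factors):

```python
from itertools import product

# Factors and levels from Case Study 2.1
channel_type = ["remote pipe", "remote procedure call"]
network = ["short distance", "long distance"]
size = ["small", "large"]
block_sizes = [2 ** i for i in range(11)]      # n = 1, 2, 4, ..., 1024 (assumed)

design = list(product(channel_type, network, size, block_sizes))
print(len(design))     # 2 * 2 * 2 * 11 = 88 experiments
```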

Data Analysis
Analysis of variance will be used to quantify the effects of the first three factors, and regression will be used to quantify the effects of the number n of successive calls.

Data Presentation
The final results will be plotted as a function of the block size n.
