Amdahl's Law - Advanced Computer Architecture

This document discusses Amdahl's Law, which models the theoretical speedup from parallel processing based on the serial and parallel portions of a program. It defines speedup as the time to execute serially divided by the time to execute in parallel. Amdahl's Law formula shows that speedup is limited by the serial portion of the program and levels off as more processors are added. A speedup curve below the line of ideal speedup S=N illustrates this limiting effect.


Imdad Hussain

AMDAHL'S LAW
Amdahl's Law governs the speedup obtained by using parallel processors on a
problem, compared with using only one serial processor. Before we examine
Amdahl's Law, we should gain a better understanding of what is meant by speedup.

Speedup:
The speed of a program is the time it takes the program to execute. This could be
measured in any increment of time. Speedup is defined as the time it takes a program
to execute in serial (with one processor) divided by the time it takes to execute in
parallel (with many processors). The formula for speedup is:

         T(1)
    S = ------
         T(j)

Where T(j) is the time it takes to execute the program when using j processors.
Efficiency is the speedup divided by the number of processors used. This is an
important factor to consider: because multiprocessor supercomputers are
expensive, a company wants to get the most computation for its money.
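As a concrete illustration, speedup and efficiency can be computed directly from measured run times. This is a minimal sketch; the function names and the timing values below are my own, invented for the example:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T(1) / T(j): serial run time over parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, num_processors):
    """Efficiency = speedup divided by the number of processors used."""
    return speedup(t_serial, t_parallel) / num_processors

# Hypothetical timings: 100 s on one processor, 30 s on 8 processors.
s = speedup(100.0, 30.0)        # about 3.33
e = efficiency(100.0, 30.0, 8)  # about 0.42, i.e. 42% of the ideal
print(f"speedup = {s:.2f}, efficiency = {e:.2f}")
```

Note that although 8 processors were used, the speedup is only about 3.33, so more than half of the machine's potential is wasted.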

To explore speedup further, we shall do a bit of analysis. If there are N workers
working on a project, we may assume that they could do the job in 1/N of the time
taken by one worker working alone. Now, if we assume the strictly serial part of the
program is performed in B*T(1) time (where B is the serial fraction of the program),
then the strictly parallel part is performed in ((1-B)*T(1)) / N time. Adding these
gives T(N) = B*T(1) + ((1-B)*T(1)) / N, and substituting into the definition of
speedup yields:

               N
    S = -----------------
         (B*N) + (1-B)

This formula is known as Amdahl's Law. The following is a quote from Gene Amdahl
in 1967:
For over a decade prophets have voiced the contention that the
organization of a single computer has reached its limits and that truly
significant advances can be made only by interconnection of a multiplicity
of computers in such a manner as to permit co-operative solution...The
nature of this overhead (in parallelism) appears to be sequential so that it
is unlikely to be amenable to parallel processing techniques. Overhead
alone would then place an upper limit on throughput of five to seven times
the sequential processing rate, even if the housekeeping were done in a
separate processor...At any point in time it is difficult to foresee how the
previous bottlenecks in a sequential computer will be effectively overcome.
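The formula above can be sketched as a small function to see the limit Amdahl describes. This is an illustrative sketch; the function and variable names are my own, and the 10% serial fraction is an assumed example value:

```python
def amdahl_speedup(n, b):
    """Amdahl's Law: S = N / (B*N + (1 - B)).

    n: number of processors
    b: serial fraction of the program (0 <= b <= 1)
    """
    return n / (b * n + (1.0 - b))

# With an assumed 10% serial fraction, adding processors soon stops helping:
print(amdahl_speedup(10, 0.1))     # about 5.26
print(amdahl_speedup(100, 0.1))    # about 9.17
print(amdahl_speedup(10000, 0.1))  # approaches 1/0.1 = 10, never reaches it
```

Even with ten thousand processors, a 10% serial fraction caps the speedup below 10, which is the arithmetic behind Amdahl's skepticism in the quote above.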

Let us investigate speedup curves:


Now that we have defined speedup and efficiency, let us use this information to
make sense of Amdahl's Law. We will refer to a speedup curve to do this. A speedup
curve is simply a graph with an X-axis of the number of processors, plotted against a
Y-axis of the speedup. The best speedup we could reasonably hope for, S = N, would
yield a 45-degree line: if there were ten processors, we would realize a ten-fold
speedup. Conversely, a speedup below S = 1 would mean that the program ran faster
on a single processor than in parallel, which would make it a poor candidate for
parallel computing. When B is held constant (recall that B is the fraction of the
program that is strictly serial), Amdahl's Law yields a speedup curve that rises
quickly at first, then flattens out below the line S = N, approaching the asymptote
1/B. This law shows that it is indeed the algorithm, and not the number of
processors, which limits the speedup. Also note that as the curve begins to flatten
out, efficiency is drastically reduced.
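The flattening of the curve, and the accompanying drop in efficiency, can be seen numerically. This is a sketch using the formula derived above; the 5% serial fraction is an assumed example value:

```python
def amdahl_speedup(n, b):
    """Amdahl's Law: S = N / (B*N + (1 - B)), with b the serial fraction."""
    return n / (b * n + (1.0 - b))

B = 0.05  # assumed serial fraction of 5%
for n in (1, 2, 8, 32, 128, 1024):
    s = amdahl_speedup(n, B)
    # Efficiency is speedup divided by the number of processors.
    print(f"N={n:5d}  speedup={s:6.2f}  efficiency={s / n:.2f}")

# The speedup climbs toward the asymptote 1/B = 20 and no further,
# while efficiency falls toward zero as processors are added.
```

Past a few dozen processors the extra hardware buys almost no additional speedup, which is exactly the flattening the speedup curve shows.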
