Week - 01 - Lec - 2 - 05-03-2021 (Types of Parallelism)

The document discusses parallel computing concepts including Amdahl's law, Karp-Flatt metric, and types of parallelism such as data, functional, and pipelining parallelism. Amdahl's law defines the theoretical maximum speedup from parallelizing a program and the document provides examples and derivations. The Karp-Flatt metric calculates the serial fraction of a parallel program given its speedup and number of processors. Types of parallelism addressed include distributing data across processors, concurrent independent tasks, and pipelining computations through stages.


CS416 – Parallel and Distributed Computing
Lecture # 02
Spring 2021
Agenda

 Amdahl’s Law of Parallel Speedup

 Karp-Flatt Metric

 Types of Parallelism
 Data-parallelism
 Functional-parallelism
 Pipelining

 Multi-processor vs Multi-computer

 Cluster vs Network of workstations


CS443
Amdahl’s Law

 Amdahl’s law was formulated by Gene Amdahl in 1967


 It shows an upper bound on the maximum speedup that can be achieved
 Suppose you are going to design a parallel algorithm for a problem
 Further suppose that the fraction of total time that the algorithm must spend in serial execution is ‘F’
 This implies that the fraction of the parallel portion is (1 − F)
 Now, Amdahl’s law states that

Speedup ≤ 1 / (F + (1 − F)/p)

 Here ‘p’ is the total number of available processing nodes.


Amdahl’s Law

Derivation
 Let’s suppose you have a sequential code for a problem that can be executed in total time T(s)
 Let T(p) be the parallel time for the same algorithm over p processors
 Then speedup can be calculated using:

Speedup = T(s) / T(p)

 T(p) can be calculated as:

T(p) = F · T(s) + (1 − F) · T(s) / p
Amdahl’s Law

Derivation
 Again, substituting T(p) into the speedup expression:

Speedup = T(s) / (F · T(s) + (1 − F) · T(s)/p) = 1 / (F + (1 − F)/p)

 What if you have an infinite number of processors? As p → ∞, the term (1 − F)/p vanishes, so the speedup is bounded by 1/F
 What do you have to do for further speedup? Reduce the serial fraction F

Amdahl’s Law

 Example 1: Suppose 70% of a sequential algorithm is the parallelizable portion. The remaining part must be executed sequentially. Calculate the maximum theoretical speedup for a parallel variant of this algorithm using i) 4 processors and ii) infinitely many processors.
 With F = 0.3 and 1 − F = 0.7, use Amdahl’s law to calculate the theoretical speedups.
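As a quick check, the formula above can be evaluated in a few lines of Python (an illustrative sketch, not part of the original slides; `amdahl_speedup` is our own helper name):

```python
def amdahl_speedup(f, p):
    """Maximum theoretical speedup for serial fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)

# Example 1: 70% parallelizable, so the serial fraction is F = 0.3
print(round(amdahl_speedup(0.3, 4), 3))  # 2.105 with 4 processors
print(round(1.0 / 0.3, 3))               # 3.333 with infinitely many processors
```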

Amdahl’s Law

 Example 2: Suppose 25% of a sequential algorithm is the parallelizable portion. The remaining part must be executed sequentially. Calculate the maximum theoretical speedup for a parallel variant of this algorithm using 5 processors and infinitely many processors.
 ???

 Little challenge: Determine, according to Amdahl’s law, how many processors are needed to achieve the maximum theoretical speedup while the sequential portion remains the same.
 The answer may be surprising!
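A short sketch (again illustrative Python; the function name is ours) answers Example 2 and hints at the challenge: with F = 0.75 the speedup caps out near 1.33x, and adding processors quickly stops helping:

```python
def amdahl_speedup(f, p):
    # f: serial fraction, p: number of processors
    return 1.0 / (f + (1.0 - f) / p)

f = 0.75  # Example 2: only 25% is parallelizable
print(round(amdahl_speedup(f, 5), 3))  # 1.25 with 5 processors
print(round(1.0 / f, 3))               # 1.333: the ceiling with infinitely many processors

# Diminishing returns: each increase in p buys less and less
for p in (2, 8, 32, 1024):
    print(p, round(amdahl_speedup(f, p), 3))
```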

Karp-Flatt Metric

Karp-Flatt Metric

 The metric is used to calculate the serial fraction of a given parallel configuration.
 i.e., if a parallel program exhibits a speedup S while using P processing units, then the experimentally determined serial fraction e is given by:

e = (1/S − 1/P) / (1 − 1/P)

 Example task: Suppose that in a parallel program you gained a speedup of 1.25x on 5 processors; determine the sequential fraction of your program.
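The example task can be checked numerically (an illustrative Python sketch; `karp_flatt` is our own helper name). For S = 1.25 on P = 5 processors the metric gives e = 0.75:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for speedup S on p processors."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

print(round(karp_flatt(1.25, 5), 3))  # 0.75: three quarters of the work is effectively serial
```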

Types of Parallelism

Types of Parallelism

1. Data-parallelism
 When there are independent sub-tasks applying the same operation to different elements of a data set
 When just distributing the data provides sufficient parallelism
 Example code:

for i = 0 to 99 do
    a[i] = b[i] + c[i]
endfor

 Here the same operation (i.e., addition) is being performed on the first 100 elements of ‘b’ and ‘c’
 All 100 iterations of the loop could be executed simultaneously
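The loop above can be sketched in Python (illustrative only; in CPython, true CPU parallelism would need processes rather than threads, but the data-parallel structure is the same):

```python
from concurrent.futures import ThreadPoolExecutor

b = list(range(100))       # b[i] = i
c = list(range(100, 200))  # c[i] = 100 + i

def add_at(i):
    # The same operation (addition) applied to different data elements
    return b[i] + c[i]

# All 100 iterations are independent, so they may run concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    a = list(pool.map(add_at, range(100)))

print(a[0], a[99])  # 100 298
```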

Types of Parallelism

2. Functional-parallelism
 When there are independent tasks applying different operations to different data elements
 Example code:

1) a = 2
2) b = 3
3) m = (a + b)/2
4) s = (a² + b²)/2
5) v = s − m²

 Here statements (1, 2) and (3, 4) could be performed concurrently
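A possible Python rendering of this example (an illustrative sketch; the two independent statements are submitted as separate concurrent tasks):

```python
from concurrent.futures import ThreadPoolExecutor

a, b = 2, 3  # statements 1 and 2: independent assignments

def mean():
    return (a + b) / 2        # statement 3

def mean_of_squares():
    return (a**2 + b**2) / 2  # statement 4: a different operation on the same data

# Statements 3 and 4 are independent of each other, so they can run concurrently
with ThreadPoolExecutor(max_workers=2) as pool:
    fm = pool.submit(mean)
    fs = pool.submit(mean_of_squares)
    m, s = fm.result(), fs.result()

v = s - m**2  # statement 5 must wait for both results
print(m, s, v)  # 2.5 6.5 0.25
```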

Types of Parallelism

3. Pipelining
 Usually used for problems where a single instance of the problem cannot be parallelized
 The output of one stage is the input of the next stage
 Divide the whole computation of each instance into multiple stages (when there are multiple instances of the problem)
 An effective method of attaining parallelism (concurrency) even on uniprocessor architectures
 Also depends on the pipelining abilities of the processor
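One way to picture stages whose output feeds the next stage is a chain of Python generators (an illustrative sketch; a real pipeline would run the stages concurrently on different problem instances):

```python
def stage_fetch(items):
    # Stage 1: produce one work item at a time
    for x in items:
        yield x

def stage_square(stream):
    # Stage 2: consume stage 1's output and transform it
    for x in stream:
        yield x * x

def stage_sum(stream):
    # Stage 3: consume stage 2's output, keeping a running total
    total = 0
    for x in stream:
        total += x
        yield total

pipeline = stage_sum(stage_square(stage_fetch(range(1, 5))))
print(list(pipeline))  # [1, 5, 14, 30]
```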

Types of Parallelism
3. Pipelining
 Example: assembly-line analogy (figure: sequential execution)
Types of Parallelism
3. Pipelining
 Example: assembly-line analogy (figure: pipelining)
Types of Parallelism
3. Pipelining
 Example: overlap instructions within a single instruction cycle to achieve parallelism (figure: 4-stage pipelining)
Questions