CS621 Week 1

Dr. Muhammad Anwaar Saeed
Dr. Said Nabi
Ms. Hina Ishaq

CS621 Parallel and Distributed Computing
What is Computing?

Objectives
- Introduction to Computing.
- History of Computing.
What is Computing

"Computing is the process of completing a given goal-oriented task by using computer technology."

Computing may include the design and development of software and hardware systems for a broad range of purposes.
What is Computing Cont...

Computing is used for structuring, processing, and managing any kind of information, to aid in the pursuit of scientific studies and in making intelligent systems.
History of Computing
- Batch Era
- Time Sharing Era
- Desktop Era
- Network Era
History of Computing Cont...

Batch Era: Execution of a series of programs on a computer without manual intervention.

Time Sharing Era: Sharing of a computing resource among many users by means of multiprogramming and multitasking.

Desktop Era: A personal computer provides computing power to one user.

Network Era: Systems with shared memory and distributed memory.
Serial vs. Parallel Computing

Objectives
- Serial Computing.
- Parallel Computing.
- Difference between serial and parallel computing.
Serial Computing

"Serial computing is a type of processing in which one task is completed at a time and all the tasks are executed by the processor in a sequence."
Parallel Computing

"Parallel computing is a type of computing architecture in which several processors simultaneously execute multiple, smaller calculations broken down from an overall larger, complex problem."
Serial vs. Parallel Computing Cont...

Difference between serial and parallel computing (a code sketch contrasting the two follows below):

Serial Computing:
- Uniprocessor systems.
- Can execute one instruction at a time.
- Speed is limited.
- Lower performance.
- Examples: EDVAC, BINAC, and LGP-30.

Parallel Computing:
- Multiprocessor systems.
- Can execute multiple instructions at a time.
- No limitation on speed.
- Higher performance.
- Examples: Windows 7, 8, and 10.
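To make the contrast concrete, here is a minimal Python sketch; the squaring task, the input list, and the pool size of 4 are illustrative assumptions, not taken from the slides.

```python
from multiprocessing import Pool

def square(n):
    # One small, independent task.
    return n * n

if __name__ == "__main__":
    numbers = list(range(10))

    # Serial: the processor completes one task at a time, in sequence.
    serial_results = [square(n) for n in numbers]

    # Parallel: a pool of worker processes executes several tasks at once.
    with Pool(processes=4) as pool:
        parallel_results = pool.map(square, numbers)

    assert serial_results == parallel_results
```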
Introduction to Parallel Computing

Objectives
- Parallel Computing.
- Multi-Processor.
- Multi-Core.
Introduction to Parallel Computing

"Parallel computing is the simultaneous execution of the same task (split up and adapted) on multiple processors in order to obtain faster results."
Introduction to Parallel Computing Cont...

Parallel computing is a kind of computing architecture where large problems are broken into independent, smaller, usually similar parts that can be processed in one go. This is done by multiple CPUs communicating via shared memory, with results combined upon completion. It helps in performing large computations by dividing the large problem among more than one processor.

Related terms: HPC (High Performance/Productivity Computing), Technical Computing, Cluster Computing.
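The break-up, process, combine pattern described above can be sketched as follows. The summation problem and the worker count are illustrative assumptions; note that Python's Pool workers communicate by message passing rather than literal shared memory, though the structure of the pattern is the same.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes one independent, smaller part of the problem.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4  # assumed worker count

    # Break the large problem into smaller, similar parts.
    size = len(data) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    with Pool(processes=n_workers) as pool:
        # Process the parts simultaneously, then combine the results.
        total = sum(pool.map(partial_sum, chunks))

    assert total == sum(data)
```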
Introduction to Parallel Computing Cont...

The term parallel computing architecture is sometimes used for a computer with more than one processor available for processing.

Recent multicore processors (chips with more than one processor core) are commercial examples that bring parallel computing to the desktop.
Introduction to Parallel Computing Cont...

Multi-processor: More than one CPU works together to carry out computer instructions or programs.

Multi-core: A microprocessor on a single integrated circuit with two or more separate processing units, called cores, each of which reads and executes program instructions.
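A small sketch of how a program can discover the processing units available to it; the task function is an illustrative assumption, and the mapping of worker processes to cores is left to the operating system.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def worker(task_id):
    # Report which process handled the task; the operating system
    # schedules each worker process onto an available core.
    return f"task {task_id} ran in process {os.getpid()}"

if __name__ == "__main__":
    cores = os.cpu_count()
    print(f"available processing units: {cores}")

    with ProcessPoolExecutor(max_workers=cores) as pool:
        for line in pool.map(worker, range(8)):
            print(line)
```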
Principles of Parallel Computing

Objectives
- Principles of Parallel Computing.
Principles of Parallel Computing
- Finding enough parallelism
- Scale
- Locality
- Load balance
- Coordination and synchronization
- Performance modeling
Principles of Parallel Computing Cont...

Finding Enough Parallelism: Conventional architectures coarsely comprise a processor, a memory system, and a datapath. Each of these components presents a significant performance bottleneck. Parallelism addresses each of these components in significant ways.

Scale: Parallelism overhead includes the cost of starting a thread, accessing data, communicating shared data, synchronization, and extra computation. Algorithms need sufficiently large units of work to run fast in parallel; the sketch below illustrates this.
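A minimal sketch of making units of work large enough to outweigh the parallelism overhead; the tiny task and the chunk sizes are illustrative assumptions.

```python
from multiprocessing import Pool

def tiny_task(n):
    return n + 1

if __name__ == "__main__":
    data = range(100_000)

    with Pool(processes=4) as pool:
        # chunksize=1 ships each tiny task to a worker separately, so
        # startup and communication overhead dominates the useful work.
        fine = pool.map(tiny_task, data, chunksize=1)

        # A large chunksize batches many tiny tasks into one unit of
        # work per message, amortizing the parallelism overhead.
        coarse = pool.map(tiny_task, data, chunksize=10_000)

    assert fine == coarse
```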

Locality: Parallel processors collectively have a large and fast cache. Memory addresses are distributed across the processors, so a processor may have faster access to memory locations mapped locally than to memory locations mapped to other processors. The sketch below shows the same access-pattern idea on a single machine.
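A pure-Python sketch of locality inside one machine; the grid size is an illustrative assumption, and the effect is far smaller in Python than in a compiled language, but the access-pattern idea is the same.

```python
import time

N = 2_000
grid = [[1] * N for _ in range(N)]  # 2-D table, stored row by row

def traverse(row_major):
    total = 0
    for i in range(N):
        for j in range(N):
            # Row-major order visits neighbouring elements of one inner
            # list consecutively; column-major jumps between lists.
            total += grid[i][j] if row_major else grid[j][i]
    return total

t0 = time.perf_counter()
traverse(row_major=True)
t1 = time.perf_counter()
traverse(row_major=False)
t2 = time.perf_counter()
print(f"row-major: {t1 - t0:.2f}s, column-major: {t2 - t1:.2f}s")
```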
Principles of Parallel Computing Cont...

Load Balance: With static load balancing, the workload is determined and divided up evenly before starting; with dynamic load balancing, the workload changes at run time and the work must be rebalanced dynamically (a sketch of both follows).

Coordination and Synchronization: Several kinds of synchronization are needed by processes cooperating to perform a computation.
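A sketch of static versus dynamic load balancing; the deliberately uneven task durations and the pool size are illustrative assumptions.

```python
import time
from multiprocessing import Pool

def job(duration):
    time.sleep(duration)  # stand-in for work of varying size
    return duration

if __name__ == "__main__":
    # Deliberately uneven task sizes.
    tasks = [0.4, 0.01, 0.01, 0.01, 0.4, 0.01, 0.01, 0.01]

    with Pool(processes=2) as pool:
        # Static: the tasks are divided into fixed chunks up front, so
        # one worker can end up holding most of the slow tasks.
        pool.map(job, tasks, chunksize=len(tasks) // 2)

        # Dynamic: each worker pulls the next task as it finishes,
        # rebalancing the load at run time.
        list(pool.imap_unordered(job, tasks, chunksize=1))
```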

Performance Modeling: More efficient programming models and tools are formulated for massively parallel supercomputers.
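The slide names no formula, but a classic model often used in such performance discussions is Amdahl's law; a minimal sketch, assuming a program that is 95% parallelizable.

```python
def amdahl_speedup(p, n):
    # Amdahl's law: with a parallelizable fraction p and n processors,
    # speedup is bounded by the remaining serial fraction (1 - p).
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 1_000_000):
    # Even with near-unlimited processors, the serial 5% caps speedup.
    print(n, round(amdahl_speedup(0.95, n), 2))
```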
Why Use Parallel Computing?

Objectives
- Identifying the aspects that make parallel computing more useful.
Why Use Parallel Computing?
- Computing power
- Performance
- Scalability
- Solve large problems
- Provide concurrency
Why Use Parallel Computing?

Computing Power: Modern consumer-grade computing hardware comes equipped with multiple central processing units (CPUs) and/or graphics processing units (GPUs) that can process many sets of instructions simultaneously.

Performance: Theoretical performance has steadily increased, because performance is proportional to the product of the clock frequency and the number of cores (see the back-of-the-envelope sketch after this slide).

Scalability: Problems can be scaled up to sizes that were out of reach with a serial application. The larger problem sizes are enabled by larger amounts of main memory, disk storage, bandwidth over networks and to disk, and more CPUs.
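A back-of-the-envelope sketch of that proportionality; the clock rate, core count, and operations per cycle are assumed figures, not measurements from the slides.

```python
clock_hz = 3.0e9       # assumed 3 GHz clock
cores = 8              # assumed core count
ops_per_cycle = 1      # simplifying assumption

# Peak throughput grows with the product of clock frequency and cores.
peak_ops_per_second = clock_hz * cores * ops_per_cycle
print(f"peak: {peak_ops_per_second:.1e} operations/second")  # 2.4e+10
```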
Why Use Parallel Computing Cont...

Solve Large Problems: Large problems are solved by breaking them down into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory. This makes it possible to handle workloads such as web search engines processing millions of transactions per second.

Cost: The cost of computation is reduced by deploying parallel computation rather than sequential computation.

Provide Concurrency: Parallelism leads naturally to concurrency. For example, several processes may try to print a file on a single printer, which requires coordination; a sketch follows.
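A minimal sketch of the printer example; the printer is simulated with print statements, and the process names are illustrative choices.

```python
from multiprocessing import Lock, Process

def print_file(printer_lock, owner):
    # Only one process may use the (simulated) printer at a time.
    with printer_lock:
        for page in range(1, 4):
            print(f"{owner}: printing page {page}")

if __name__ == "__main__":
    lock = Lock()
    procs = [Process(target=print_file, args=(lock, f"process-{i}"))
             for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```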
