CS621 Week 1
Objectives
History of Computing
What is Computing?
History of Computing
Batch Era
Time Sharing Era
Desktop Era
Network Era

History of Computing Cont…
Batch Era: Execution of a series of programs on a computer without manual intervention.
Objectives
Parallel Computing
Multi-Processor
Multi-Core
Introduction to Parallel Computing

Multi-processor: More than one CPU works together to carry out computer instructions or programs.

Multi-core: A microprocessor on a single integrated circuit with two or more separate processing units, called cores, each of which reads and executes program instructions.
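A minimal sketch of putting multiple cores to work, using Python's standard os and multiprocessing modules; the square function and the input range are placeholders for a real per-core task:

    import os
    from multiprocessing import Pool

    def square(x):
        # Placeholder for a real per-core task.
        return x * x

    if __name__ == "__main__":
        cores = os.cpu_count()  # processing units visible to the OS
        print(f"Available cores: {cores}")
        with Pool(processes=cores) as pool:  # one worker process per core
            print(pool.map(square, range(8)))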
Introduction to Parallel Computing Cont…
Principles of Parallel Computing
Performance modeling
Scale
Locality
Coordination and Synchronization
Load balance
Principles of Parallel Computing Cont…

Finding Enough Parallelism: Conventional architectures coarsely comprise a processor, a memory system, and a data path. Each of these components presents a significant performance bottleneck. Parallelism addresses each of these components in significant ways.

Scale: Parallelism overhead includes the cost of starting a thread, accessing data, communicating shared data, synchronization, and extra computation. Algorithms need sufficiently large units of work to run fast in parallel.

Locality: Parallel processors collectively have a large and fast cache. Memory addresses are distributed across the processors, so a processor may have faster access to memory locations mapped locally than to memory locations mapped to other processors.
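A rough way to observe this overhead, assuming Python's multiprocessing.Pool: the same job is run with tiny and then large units of work (the sizes here are arbitrary), and the tiny units typically run slower because per-task overhead dominates:

    import time
    from multiprocessing import Pool

    def square(x):
        return x * x  # a deliberately tiny unit of work

    if __name__ == "__main__":
        data = range(200_000)
        with Pool() as pool:
            t0 = time.perf_counter()
            pool.map(square, data, chunksize=1)       # tiny tasks: overhead dominates
            small = time.perf_counter() - t0
            t0 = time.perf_counter()
            pool.map(square, data, chunksize=10_000)  # large tasks: overhead amortized
            large = time.perf_counter() - t0
        print(f"chunksize=1: {small:.2f}s  chunksize=10000: {large:.2f}s")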
Principles of Parallel Computing Cont…

Load Balance: Determine the workload and divide it up evenly before starting in the case of static load balancing; with dynamic load balancing the workload changes at run time, so it must be rebalanced dynamically.

Coordination and Synchronization: Several kinds of synchronization are needed by processes cooperating to perform a computation.

Performance Modeling: More efficient programming models and tools are formulated for massively parallel supercomputers.
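A small sketch of both ideas, using Python threads: a shared queue hands out hypothetical, uneven jobs dynamically (idle workers pull the next job), and a lock synchronizes access to the shared result list:

    import queue
    import threading

    # Hypothetical jobs of uneven size: a static even split by index would
    # leave some workers with far more work than others.
    jobs = queue.Queue()
    for size in [5, 1, 9, 2, 2, 8, 1, 7]:
        jobs.put(size)

    results = []
    lock = threading.Lock()  # synchronization around the shared result list

    def worker():
        while True:
            try:
                size = jobs.get_nowait()  # dynamic balance: take a job when free
            except queue.Empty:
                return
            value = sum(range(size * 100_000))  # stand-in for real work
            with lock:
                results.append(value)

    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(results), "jobs completed")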
Why Use Parallel Computing?
Provide concurrency
Performance
Solve large problems
Scalability
Why Use Parallel Computing?

Computing power: Modern consumer-grade computing hardware comes equipped with multiple central processing units (CPUs) and/or graphics processing units (GPUs) that can process many sets of instructions simultaneously.

Performance: Theoretical performance has steadily increased, because performance is proportional to the product of the clock frequency and the number of cores.

Scalability: Problems can be scaled up from one size to sizes that were out of reach with a serial application. The larger problem sizes are enabled by larger amounts of main memory, disk storage, bandwidth over networks and to disk, and CPUs.
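As a back-of-the-envelope illustration of that proportionality (all figures hypothetical):

    # All figures hypothetical, for illustration only.
    cores = 8             # number of cores
    clock_hz = 3.0e9      # 3 GHz clock frequency
    flops_per_cycle = 16  # floating-point ops per cycle (varies by architecture)

    peak = cores * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak / 1e9:.0f} GFLOP/s")  # 384 GFLOP/s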
Why Use Parallel Computing Cont…

Solve large problems: Break larger problems down into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory. This makes it possible to tackle large problems such as Web search engines processing millions of transactions per second.

Cost: The cost of computation can be reduced by deploying parallel computation rather than sequential computation.
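A minimal sketch of this decomposition, using Python's multiprocessing (worker processes rather than literal shared memory): a large summation is split into smaller, independent, similar chunks that run simultaneously:

    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))  # one small, independent part of the problem

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        # Break the big summation into similar, independent chunks.
        chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
                  for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total == n * (n - 1) // 2)  # True: matches the closed-form answer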
Provide concurrency: Parallelism leads naturally to concurrency. For example, several processes may be trying to print a file on a single printer.
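A small sketch of the printer example, with Python threads standing in for processes and a lock standing in for the single printer:

    import threading
    import time

    printer = threading.Lock()  # the single shared printer

    def print_file(name):
        with printer:  # only one job prints at a time
            for page in (1, 2):
                print(f"{name}: page {page}")
                time.sleep(0.01)  # simulate printing a page

    jobs = [threading.Thread(target=print_file, args=(f"job{i}",)) for i in range(3)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()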