Parallel Processing: sp2016 Lec#5

This document discusses parallel processing architectures. It begins by describing explicitly parallel processor configurations including task-level parallelism. The key elements of parallel architectures are then outlined, including processor configurations, memory configurations, and inter-processor communication approaches. Different parallel platforms are examined based on their physical and logical memory configurations as well as their data exchange and synchronization methods. Specific architectures like shared memory multiprocessors, clusters, and vector/array processors are then detailed. The document concludes by summarizing the different memory and interconnect configurations of parallel platforms.


Parallel Processing

sp2016
lec#5
Dr M Shamim Baig

1.1

Explicitly Parallel Processor Architectures:
Task-level Parallelism

1.2

Elements of (Explicit) Parallel Architectures
Processor configurations:
- Instruction/Data stream based
Memory configurations:
- Physical & Logical based
- Access-delay based
Inter-processor communication:
- Communication-interface design
- Data exchange/synch approach
1.3

Parallel Platforms:
Memory (Physical vs Logical) Configurations
Physical memory config: SM (shared), DM (distributed), CSM (centralized shared)
Logical address space config: SAS (shared address space), NSAS (non-shared address space)
Combinations:
- CSM + SAS (SMP; UMA)
- DM + SAS (DSM; NUMA)
- DM + NSAS (Multicomputer/Clusters)

1.4

Shared Memory (SM) Multiprocessor

It is important to note the difference between Shared Memory and Shared Address Space: the former is a physical memory configuration, while the latter is the logical memory (address) view seen by the program.
It is possible to provide a Shared Address Space using physically distributed memory.
SM-multiprocessor systems are SAS-configured, using a physical memory configuration of either CSM or DM (i.e. DSM).
1.5

UMA vs NUMA
SM-multiprocessors are further categorized by memory access delay as UMA (uniform memory access) and NUMA (non-uniform memory access).
A UMA system is based on the (CSM + SAS) config, where each processor sees the same delay for accessing any memory location.
A NUMA system is based on the (DM + SAS = DSM) config, where a processor may see different delays for accessing different memory locations.

1.6
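The UMA/NUMA distinction above can be captured in a one-line delay model: average access time is a locality-weighted mix of local and remote delays. A minimal sketch follows; the delay and locality numbers (4 cycles local, 40 cycles remote, 80% locality) are hypothetical choices for illustration, not values from the slides.

```python
# Effective memory access time for a NUMA system (simple model):
#   t_eff = locality * t_local + (1 - locality) * t_remote
# All numbers below are hypothetical, chosen only to illustrate the formula.

def effective_access_time(t_local, t_remote, locality):
    """Average access delay given the fraction of accesses that stay local."""
    return locality * t_local + (1 - locality) * t_remote

# Hypothetical delays: 4 cycles local, 40 cycles remote, 80% locality.
t_eff = effective_access_time(4, 40, 0.80)
print(t_eff)  # approximately 11.2 cycles on average
```

Under UMA (CSM + SAS) every access behaves like the local case, so the model degenerates to a single constant delay.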

UMA & NUMA Arch Block Diagrams

Both are SM-multiprocessors, differing in memory access delay:
UMA (CSM + SAS) vs NUMA (DM + SAS = DSM)

[Figure] Typical shared-address-space architectures: (a) uniform-memory-access shared-address-space computer; (b) uniform-memory-access shared-address-space computer with caches and memories; (c) non-uniform-memory-access shared-address-space computer with local memory only.

1.7

Simplistic view of a small shared-memory Symmetric Multi-Processor (SMP):
(CSM + SAS + Bus)

[Figure: processors connected to a shared memory over a common bus]

Examples: Dual Pentiums, Quad Pentiums
1.8

Quad Pentium Shared-Memory SMP

[Figure: four processors, each with its own L1 cache, L2 cache, and bus interface, attached to a shared processor/memory bus; a memory controller connects the shared memory, and an I/O interface bridges to the I/O bus]
1.9

Multicomputer (Cluster) Platform

Complete computers P (CU + PE) & DM with NSAS; the interconnection network interface attaches at the I/O bus level.

[Figure: computers, each with a processor and local memory, exchanging messages over an interconnection network]

These platforms comprise a set of processors, each with its own (exclusive/distributed) memory.
Instances of such a view come naturally from non-shared-address-space (NSAS) multicomputers, e.g. clustered workstations.

1.10

Data Exchange/Synch Approaches:
Shared Data vs Message-Passing

There are two primary approaches to data exchange/synchronization in parallel systems:
- Shared Memory model
- Message-Passing model
SM-multiprocessors use the shared-data approach for data exchange/synch.
Multicomputers (clusters) use the message-passing approach for data exchange/synch.
1.11

Data Exchange/Synch Platforms:
Shared-Memory vs Message-Passing

Shared-memory platforms have low communication overhead and can support finer grain levels, while message-passing platforms have more communication overhead and are therefore better suited to coarse grain levels.
SM multiprocessors are faster, but have poor scalability.
Message-passing multicomputer platforms are slower but have higher scalability.
1.12
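The two models above can be sketched with Python threads: a shared counter guarded by a lock (shared-data model) versus a queue acting as a message channel (message-passing model). This is only an illustrative software analogy for the hardware platforms being compared, not an implementation of either.

```python
# Two threads exchanging data under each model, sketched with Python threads.
import threading, queue

# --- Shared-data model: both threads touch the same variable, guarded by a lock.
counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # synchronization via mutual exclusion
            counter += 1

t1 = threading.Thread(target=add_many, args=(1000,))
t2 = threading.Thread(target=add_many, args=(1000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)              # 2000

# --- Message-passing model: no shared variable; data moves through a channel.
chan = queue.Queue()

def producer():
    for i in range(5):
        chan.put(i)         # send
    chan.put(None)          # end-of-stream marker

def consumer(out):
    while True:
        item = chan.get()   # receive (blocking get gives implicit synchronization)
        if item is None:
            break
        out.append(item)

received = []
p = threading.Thread(target=producer)
c = threading.Thread(target=consumer, args=(received,))
p.start(); c.start(); p.join(); c.join()
print(received)             # [0, 1, 2, 3, 4]
```

Note how the lock protects a shared location (as a bus or cache-coherence protocol would in an SM-multiprocessor), while the queue moves data explicitly between otherwise independent workers (as messages do between cluster nodes).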

Clusters as a Computing Platform

Clusters: in the early 1990s a network of computers became a very attractive alternative to the expensive supercomputers used for high-performance computing.
Several early projects, notably:
- NASA Beowulf project
- Berkeley NOW (network of workstations) project

1.13

Beowulf Clusters*
A group of interconnected commodity computers achieving high performance at low cost, typically using commodity interconnects (e.g. high-speed Ethernet) and a commodity OS (e.g. Linux).
* The name Beowulf comes from the NASA Goddard Space Flight Center cluster project.

1.14

Advantages of Cluster Computing (NOW-like)
- Processing nodes are high-performance PCs/workstations readily available at low cost
- Interconnection of processing nodes uses high-performance LANs/SANs
- Easily upgradable by incorporating the latest processors as they become available
- Easily scalable to bigger & more powerful systems
- Existing software can be easily adapted for parallel execution on a cluster system

Cluster Interconnects: LAN vs SAN

LANs: fast/Gbit/10-Gbit Ethernet
SANs: Myrinet, Quadrics, InfiniBand

Comparison, LAN vs SAN:
- Distance: LANs cover longer distances (km vs m), causing more delay
- Reliability: LANs are designed for less reliable networks, so they include overhead (error correction etc.) which adds to delays
- Processing speed: LANs use OS calls, causing more processing delay
1.16
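The LAN/SAN trade-off above reduces to a startup-latency-plus-bandwidth transfer model. A small sketch follows; the latency and bandwidth figures are illustrative assumptions in the spirit of the comparison, not benchmarks of any particular interconnect.

```python
# Point-to-point transfer time model: t = startup_latency + size / bandwidth.
# The latency/bandwidth figures below are illustrative assumptions.

def transfer_time_us(size_bytes, latency_us, bandwidth_MBps):
    """Time in microseconds; bytes divided by MB/s yields microseconds."""
    return latency_us + size_bytes / bandwidth_MBps

# Hypothetical numbers: a LAN with OS-stack latency vs a low-latency SAN.
lan_us = transfer_time_us(4096, latency_us=50.0, bandwidth_MBps=125.0)   # ~1 Gbit/s
san_us = transfer_time_us(4096, latency_us=5.0,  bandwidth_MBps=1000.0)
print(round(lan_us, 2), round(san_us, 2))
```

For small messages the startup latency dominates, which is why SANs (low latency, user-level access) suit fine-grained communication better than LANs (OS calls, error-handling overhead).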

Vector/Array Data Processors

Vector processor: 1-D temporal parallelism using a pipelined arithmetic unit & vector chaining.
Float add pipeline: compare exponents, align mantissas, add mantissas, normalize.
Array processor: 1-D spatial parallelism using an ALU array as SIMD.
Systolic array: combines 2-D spatial parallelism with a pipelined computational wavefront.
Block diagrams of vector/array & systolic processing:
?????
1.17
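The spatial-parallel (array/SIMD) idea above can be sketched in plain Python: one "instruction" applies the same add across a fixed number of lanes in lockstep. The 4-lane width is an arbitrary choice for illustration; real array processors apply the operation across all lanes simultaneously in hardware.

```python
# A toy 4-lane SIMD add: the array processor applies the same operation
# to every lane of a chunk "at once" (modelled here per chunk).

LANES = 4

def simd_add(a, b):
    """Add two equal-length vectors one LANES-wide chunk at a time."""
    out = []
    for i in range(0, len(a), LANES):
        # One 'instruction': all lanes of this chunk add in lockstep.
        out.extend(x + y for x, y in zip(a[i:i+LANES], b[i:i+LANES]))
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8], [10, 20, 30, 40, 50, 60, 70, 80]))
# [11, 22, 33, 44, 55, 66, 77, 88]
```

A vector processor achieves the same effect temporally instead: successive elements stream through the stages of a pipelined arithmetic unit rather than occupying parallel lanes.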

Summary: Parallel Platforms;
Memory & Interconnect Configurations

Memory config (Physical vs Logical):
- Physical memory config (SM, DM, CSM)
- Logical address space config (SAS, NSAS)
Combinations:
- CSM + SAS (SMP; UMA)
- DM + SAS (DSM; NUMA)
- DM + NSAS (Multicomputer/Clusters)

Interconnection network:
o Interface level: memory bus (using MBEU) in SM-multiprocessors (UMA, NUMA) vs I/O bus (using NIU) in multicomputer/cluster
o Data exchange/sync: Shared Data model vs Message Passing model
1.18

Homework:
Self-assessed problems.
Please mark your solution & note the marks you achieved.
???????

1.19

Problems:
Explicit Parallel Architectures

1.20

Example Problem 1:
Bus-based SM-Multiprocessor,
Limit of Parallelism
Consider an SM-Multiprocessor using 32-bit RISC processors running at 150 MHz, each carrying out one instruction per clock cycle. Assume 15% data-load & 10% data-store instructions using a shared bus with 2 GB/sec bandwidth.
Compute the max number of processors that can be connected to the above bus for the following parallel configurations:

1.21

Example Problem 1 (contd):
Bus-based SM-Multiprocessor,
Limit of Parallelism
(a) SMP (without cache memory)
(b) SMP with cache memory having a hit ratio of 95% & a memory write-through policy
(c) NUMA with program locality factor = 80%
1.22
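One possible worked solution, under assumptions the problem leaves open: each load/store moves one 4-byte word over the bus, instruction fetches are served locally and never use the bus, write-through sends every store to the bus, and 1 GB = 10^9 bytes. Different assumptions (e.g. counting instruction fetches in the no-cache case) give different answers.

```python
# Hedged worked sketch for Problem 1; see the stated assumptions above.
import math

CLOCK_HZ   = 150e6   # 150 MHz, one instruction per clock cycle
LOAD_FRAC  = 0.15
STORE_FRAC = 0.10
WORD_BYTES = 4       # assumption: each load/store moves one 32-bit word
BUS_BW     = 2e9     # 2 GB/sec shared-bus bandwidth (1 GB = 1e9 bytes)

def max_processors(bytes_per_sec_per_proc):
    """How many processors the bus can feed before saturating."""
    return math.floor(BUS_BW / bytes_per_sec_per_proc)

# (a) No cache: every data reference crosses the bus.
traffic_a = CLOCK_HZ * (LOAD_FRAC + STORE_FRAC) * WORD_BYTES   # 150 MB/s per proc
max_a = max_processors(traffic_a)

# (b) 95%-hit cache with write-through: load misses and all stores use the bus.
traffic_b = CLOCK_HZ * (LOAD_FRAC * 0.05 + STORE_FRAC) * WORD_BYTES
max_b = max_processors(traffic_b)

# (c) NUMA with 80% locality: only the 20% remote data references use the bus.
traffic_c = CLOCK_HZ * (LOAD_FRAC + STORE_FRAC) * 0.20 * WORD_BYTES
max_c = max_processors(traffic_c)

print(max_a, max_b, max_c)  # 13 31 66
```

The pattern in the three cases is the same: anything that filters data references away from the bus (a cache, program locality) raises the saturation point roughly in proportion.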

SMP (SM & Shared Bus IN)

[Figure] Bus-based interconnects: (a) with no local caches; (b) with local memory/caches.
Since much of the data accessed by processors is local to the processor, a local memory can improve the performance of bus-based machines. Example??
1.23

UMA & NUMA Arch Block Diagrams

Both are SM-multiprocessors, differing in memory access delay:
UMA (CSM + SAS) vs NUMA (DM + SAS = DSM)

[Figure] Typical shared-address-space architectures: (a) uniform-memory-access shared-address-space computer; (b) uniform-memory-access shared-address-space computer with caches and memories; (c) non-uniform-memory-access shared-address-space computer with local memory only.
1.24

Homework:
Self-assessed problem.
Please mark your solution & note the marks you achieved.
???????

1.25

Example Problem 2:
Message-Passing Multicomputer,
Local vs Remote Memory Data Access Delays
Consider a 64-node multicomputer; each node comprises a 32-bit RISC processor with a 250 MHz clock rate & 8 MB local memory. A local memory access requires 4 clock cycles, the remote communication initiation (setup) overhead is 15 clock cycles, & the interconnection network bandwidth is 80 MB/sec.
The total number of instructions executed is 200,000.
If memory data loads & stores are 15% & 10% of the instructions respectively, compute:
(a) Load/store time if all accesses are to local nodes
(b) Load/store time if 20% of accesses are to remote nodes
Note: Assume packet lengths are variable (depending on address & data bytes) & the communication protocol given???.
Sizes of packet fields are in multiples of bytes.
1.26

Example Problem 2 (contd):
Message-Passing Multicomputer,
Local vs Remote Memory Data Access Delays

[Figure: computers, each with a processor and local memory, exchanging messages over an interconnection network]

1.27
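A hedged worked sketch for Problem 2: part (a) follows directly from the given numbers, while part (b) is left parameterized because the packet format in the slide is elided ("???"). PACKET_BYTES below is therefore a placeholder assumption, not the value the original problem intends.

```python
# Hedged worked sketch for Problem 2(a), plus a parameterized formula for (b).

CLOCK_HZ     = 250e6               # 250 MHz
INSTRUCTIONS = 200_000
MEM_FRAC     = 0.15 + 0.10         # load + store fraction of instructions
LOCAL_CYCLES = 4                   # local memory access delay
SETUP_CYCLES = 15                  # remote communication setup overhead
NET_BW       = 80e6                # interconnection network BW, bytes/sec

mem_ops = INSTRUCTIONS * MEM_FRAC  # 50,000 loads/stores

# (a) All accesses local: 4 cycles each.
t_a = mem_ops * LOCAL_CYCLES / CLOCK_HZ
print(t_a * 1e3, "ms")             # approximately 0.8 ms

# (b) 20% remote: local accesses + per-access setup + network transfer time.
PACKET_BYTES = 8                   # HYPOTHETICAL packet size (protocol is elided)
remote_ops = 0.20 * mem_ops        # 10,000 remote accesses
local_ops  = mem_ops - remote_ops  # 40,000 local accesses
t_b = (local_ops * LOCAL_CYCLES + remote_ops * SETUP_CYCLES) / CLOCK_HZ \
      + remote_ops * PACKET_BYTES / NET_BW
print(t_b * 1e3, "ms")             # depends on the assumed packet size
```

Whatever packet size the intended protocol implies, the structure of the answer is fixed: a cycle-count term paid at the processor clock plus a byte-count term paid at the network bandwidth.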
