
Parallel Architecture Fundamentals

CS 740
September 22, 2003

Topics
• What is Parallel Architecture?
• Why Parallel Architecture?
• Evolution and Convergence of Parallel Architectures
• Fundamental Design Issues


What is Parallel Architecture?

A parallel computer is a collection of processing elements that cooperate to solve large problems fast

Some broad issues:
• Resource Allocation:
  – how large a collection?
  – how powerful are the elements?
  – how much memory?
• Data access, Communication and Synchronization
  – how do the elements cooperate and communicate?
  – how are data transmitted between processors?
  – what are the abstractions and primitives for cooperation?
• Performance and Scalability
  – how does it all translate into performance?
  – how does it scale?

Why Study Parallel Architecture?

Role of a computer architect:
• To design and engineer the various levels of a computer system to maximize performance and programmability within limits of technology and cost.

Parallelism:
• Provides alternative to faster clock for performance
• Applies at all levels of system design
• Is a fascinating perspective from which to view architecture
• Is increasingly central in information processing


Why Study it Today?

History: diverse and innovative organizational structures, often tied to novel programming models

Rapidly maturing under strong technological constraints
• The “killer micro” is ubiquitous
• Laptops and supercomputers are fundamentally similar!
• Technological trends cause diverse approaches to converge

Technological trends make parallel computing inevitable
• In the mainstream

Need to understand fundamental principles and design tradeoffs, not just taxonomies
• Naming, Ordering, Replication, Communication performance


Conventional Processors No Longer Scale Performance by 50% Each Year

[Plot: Perf (ps/Inst) vs. year, 1980-2020, log scale. Curve labels: 52%/year, ps/gate 19%/year, gates/clock 9%/year, clocks/inst 18%/year; gap marked 30:1. Source: Bill Dally.]


Future Potential of Novel Architecture Is Large (1,000 vs. 30)

[Plot: Perf (ps/Inst) and Delay/CPUs vs. year, 1980-2020, log scale. Curve labels: 52%/year and 19%/year; gaps marked 30:1, 1,000:1, and 30,000:1. Source: Bill Dally.]

Inevitability of Parallel Computing

Application demands: our insatiable need for cycles
• Scientific computing: CFD, Biology, Chemistry, Physics, ...
• General-purpose computing: Video, Graphics, CAD, Databases, TP...

Technology Trends
• Number of transistors on chip growing rapidly
• Clock rates expected to go up only slowly

Architecture Trends
• Instruction-level parallelism valuable but limited
• Coarser-level parallelism, as in MPs, the most viable approach

Economics

Current trends:
• Today’s microprocessors have multiprocessor support
• Servers & even PCs becoming MP: Sun, SGI, COMPAQ, Dell, ...
• Tomorrow’s microprocessors are multiprocessors


Application Trends

Demand for cycles fuels advances in hardware, and vice versa
• Cycle drives exponential increase in microprocessor performance
• Drives parallel architecture harder: most demanding applications

Range of performance demands
• Need range of system performance with progressively increasing cost
• Platform pyramid

Goal of applications in using parallel machines: Speedup

  Speedup(p processors) = Performance(p processors) / Performance(1 processor)

For a fixed problem size (input data set), performance = 1/time

  Speedup_fixed-problem(p processors) = Time(1 processor) / Time(p processors)
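As a concrete illustration of the fixed-problem-size speedup formula above, a minimal Python sketch; the runtimes and processor count are hypothetical, not measurements from the course.

```python
# Minimal sketch: fixed-problem-size speedup and parallel efficiency from
# measured wall-clock times. All numbers below are made up for illustration.

def speedup(time_1proc: float, time_pproc: float) -> float:
    """Speedup_fixed-problem(p) = Time(1 processor) / Time(p processors)."""
    return time_1proc / time_pproc

def efficiency(time_1proc: float, time_pproc: float, p: int) -> float:
    """Fraction of ideal (linear) speedup actually achieved on p processors."""
    return speedup(time_1proc, time_pproc) / p

if __name__ == "__main__":
    t1, t32 = 640.0, 25.0                      # hypothetical runtimes in seconds
    print(f"speedup on 32 processors: {speedup(t1, t32):.1f}x")
    print(f"parallel efficiency:      {efficiency(t1, t32, 32):.2f}")
```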
Scientific Computing Demand

Engineering Computing Demand

Large parallel machines a mainstay in many industries
• Petroleum (reservoir analysis)
• Automotive (crash simulation, drag analysis, combustion efficiency)
• Aeronautics (airflow analysis, engine efficiency, structural mechanics, electromagnetism)
• Computer-aided design
• Pharmaceuticals (molecular modeling)
• Visualization
  – in all of the above
  – entertainment (films like Toy Story)
  – architecture (walk-throughs and rendering)
• Financial modeling (yield and derivative analysis)
• etc.

Learning Curve for Parallel Programs

• AMBER molecular dynamics simulation program
• Starting point was vector code for Cray-1
• 145 MFLOP on Cray90; 406 MFLOP for final version on 128-processor Paragon; 891 MFLOP on 128-processor Cray T3D


Commercial Computing

Also relies on parallelism for high end
• Scale not so large, but use much more widespread
• Computational power determines scale of business that can be handled

Databases, online transaction processing, decision support, data mining, data warehousing ...

TPC benchmarks (TPC-C order entry, TPC-D decision support)
• Explicit scaling criteria provided
• Size of enterprise scales with size of system
• Problem size no longer fixed as p increases, so throughput is used as a performance measure (transactions per minute, or tpm)
TPC-C Results for March 1996

[Chart: throughput (tpmC), 0 to 25,000, vs. number of processors, 0 to 120, for Tandem Himalaya, DEC Alpha, SGI PowerChallenge, HP PA, IBM PowerPC, and other systems.]

• Parallelism is pervasive
• Small to moderate scale parallelism very important
• Difficult to obtain snapshot to compare across vendor platforms


Summary of Application Trends

Transition to parallel computing has occurred for scientific and engineering computing

In rapid progress in commercial computing
• Database and transactions as well as financial
• Usually smaller-scale, but large-scale systems also used

Desktop also uses multithreaded programs, which are a lot like parallel programs

Demand for improving throughput on sequential workloads
• Greatest use of small-scale multiprocessors

Solid application demand exists and will increase

Technology Trends

[Chart: relative performance vs. year, 1965-1995, log scale (0.1 to 100), for supercomputers, mainframes, minicomputers, and microprocessors. Commodity microprocessors have caught up with supercomputers.]


Architectural Trends

Architecture translates technology’s gifts to performance and capability

Resolves the tradeoff between parallelism and locality
• Current microprocessor: 1/3 compute, 1/3 cache, 1/3 off-chip connect
• Tradeoffs may change with scale and technology advances

Understanding microprocessor architectural trends
• Helps build intuition about design issues of parallel machines
• Shows fundamental role of parallelism even in “sequential” computers

Four generations of architectural history: tube, transistor, IC, VLSI
• Here focus only on VLSI generation

Greatest delineation in VLSI has been in type of parallelism exploited
Arch. Trends: Exploiting Parallelism

Greatest trend in VLSI generation is increase in parallelism
• Up to 1985: bit-level parallelism: 4-bit -> 8-bit -> 16-bit
  – slows after 32 bit
  – adoption of 64-bit almost complete, 128-bit far (not performance issue)
  – great inflection point when 32-bit micro and cache fit on a chip
• Mid 80s to mid 90s: instruction-level parallelism
  – pipelining and simple instruction sets, + compiler advances (RISC)
  – on-chip caches and functional units => superscalar execution
  – greater sophistication: out-of-order execution, speculation, prediction
    » to deal with control transfer and latency problems
• Next step: thread-level parallelism


Phases in VLSI Generation

[Chart: transistors per chip vs. year, 1970-2005, log scale (1,000 to 100,000,000), tracing the i4004, i8008, i8080, i8086, i80286, i80386, R2000, R3000, Pentium, and R10000. Three phases: bit-level parallelism, instruction-level parallelism, thread-level parallelism (?).]

• How good is instruction-level parallelism?
• Thread-level needed in microprocessors?

Architectural Trends: ILP

Reported speedups for superscalar processors
• Horst, Harris, and Jardine [1990] .................... 1.37
• Wang and Wu [1988] ........................................ 1.70
• Smith, Johnson, and Horowitz [1989] ............. 2.30
• Murakami et al. [1989] ...................................... 2.55
• Chang et al. [1991] ........................................... 2.90
• Jouppi and Wall [1989] ..................................... 3.20
• Lee, Kwok, and Briggs [1991] .......................... 3.50
• Wall [1991] ........................................................ 5
• Melvin and Patt [1991] ...................................... 8
• Butler et al. [1991] ............................................ 17+

• Large variance due to difference in
  – application domain investigated (numerical versus non-numerical)
  – capabilities of processor modeled


ILP Ideal Potential

[Charts: (left) fraction of total cycles (%) vs. number of instructions issued, 0 to 6+; (right) speedup, 0 to 3, vs. instructions issued per cycle, 0 to 15.]

• Infinite resources and fetch bandwidth, perfect branch prediction and renaming
  – real caches and non-zero miss latencies
Results of ILP Studies

[Chart: speedup (1x to 4x) for 4-issue machines across six studies (Jouppi_89, Smith_89, Murakami_89, Chang_91, Butler_91, Melvin_91), comparing perfect branch prediction against 1 branch unit with real prediction.]

• Concentrate on parallelism for 4-issue machines
• Realistic studies show only 2-fold speedup
• Recent studies show that for more parallelism, one must look across threads


Economics

Commodity microprocessors not only fast but CHEAP
• Development cost is tens of millions of dollars (5-100 typical)
• BUT, many more are sold compared to supercomputers
• Crucial to take advantage of the investment, and use the commodity building block
• Exotic parallel architectures no more than special-purpose

Multiprocessors being pushed by software vendors (e.g. database) as well as hardware vendors

Standardization by Intel makes small, bus-based SMPs commodity

Desktop: few smaller processors versus one larger one?
• Multiprocessor on a chip

Summary: Why Parallel Architecture?

Increasingly attractive
• Economics, technology, architecture, application demand

Increasingly central and mainstream

Parallelism exploited at many levels
• Instruction-level parallelism
• Thread-level parallelism within a microprocessor
• Multiprocessor servers
• Large-scale multiprocessors (“MPPs”)

Same story from memory system perspective
• Increase bandwidth, reduce average latency with many local memories

Wide range of parallel architectures make sense
• Different cost, performance and scalability


Convergence of Parallel Architectures
History

Historically, parallel architectures tied to programming models
• Divergent architectures, with no predictable pattern of growth

[Figure: application software and system software layered over divergent architectures: systolic arrays, SIMD, message passing, dataflow, shared memory.]

Uncertainty of direction paralyzed parallel software development!


Today

Extension of “computer architecture” to support communication and cooperation
• OLD: Instruction Set Architecture
• NEW: Communication Architecture

Defines
• Critical abstractions, boundaries, and primitives (interfaces)
• Organizational structures that implement interfaces (hw or sw)

Compilers, libraries and OS are important bridges today

Modern Layered Framework

[Figure: layered framework. Parallel applications (CAD, database, scientific modeling) run on programming models (multiprogramming, shared address, message passing, data parallel); compilation or library maps these onto the communication abstraction at the user/system boundary; operating systems support and communication hardware sit below the hardware/software boundary, over the physical communication medium.]


Programming Model

What programmer uses in coding applications

Specifies communication and synchronization

Examples:
• Multiprogramming: no communication or synch. at program level
• Shared address space: like bulletin board
• Message passing: like letters or phone calls, explicit point to point
• Data parallel: more regimented, global actions on data
  – Implemented with shared address space or message passing


Communication Abstraction

User level communication primitives provided
• Realizes the programming model
• Mapping exists between language primitives of programming model and these primitives

Supported directly by hw, or via OS, or via user sw

Lot of debate about what to support in sw and gap between layers

Today:
• Hw/sw interface tends to be flat, i.e. complexity roughly uniform
• Compilers and software play important roles as bridges today
• Technology trends exert strong influence

Result is convergence in organizational structure
• Relatively simple, general purpose communication primitives


Communication Architecture

= User/System Interface + Implementation

User/System Interface:
• Comm. primitives exposed to user-level by hw and system-level sw

Implementation:
• Organizational structures that implement the primitives: hw or OS
• How optimized are they? How integrated into processing node?
• Structure of network

Goals:
• Performance
• Broad applicability
• Programmability
• Scalability
• Low cost

Evolution of Architectural Models

Historically, machines tailored to programming models
• Programming model, communication abstraction, and machine organization lumped together as the “architecture”

Evolution helps understand convergence
• Identify core concepts

Most Common Models:
• Shared Address Space, Message Passing, Data Parallel

Other Models:
• Dataflow, Systolic Arrays

Examine programming model, motivation, intended applications, and contributions to convergence


Shared Address Space Architectures

Any processor can directly reference any memory location
• Communication occurs implicitly as result of loads and stores

Convenient:
• Location transparency
• Similar programming model to time-sharing on uniprocessors
  – Except processes run on different processors
  – Good throughput on multiprogrammed workloads

Naturally provided on wide range of platforms
• History dates at least to precursors of mainframes in early 60s
• Wide range of scale: few to hundreds of processors

Popularly known as shared memory machines or model
• Ambiguous: memory may be physically distributed among processors


Shared Address Space Model

Process: virtual address space plus one or more threads of control

Portions of address spaces of processes are shared

[Figure: virtual address spaces for a collection of processes communicating via shared addresses. The private portions (P0 private ... Pn private) map to distinct physical addresses, while the shared portion of each address space maps to common physical addresses; loads and stores to shared addresses reach the same machine physical memory.]

• Writes to shared address visible to other threads, processes
• Natural extension of uniprocessor model: conventional memory operations for comm.; special atomic operations for synchronization
• OS uses shared memory to coordinate processes


Communication Hardware

Also a natural extension of a uniprocessor

Already have processor, one or more memory modules and I/O controllers connected by hardware interconnect of some sort

[Figure: processors, memory modules (Mem), and I/O controllers with I/O devices attached to a shared interconnect.]

• Memory capacity increased by adding modules, I/O by controllers
• Add processors for processing!
• For higher-throughput multiprogramming, or parallel programs
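A minimal sketch of the shared address space model above, using Python threads within one process; the array size, thread count, and workload are illustrative. Ordinary loads and stores communicate through shared memory, and a lock stands in for the special atomic operations used for synchronization.

```python
# Minimal sketch: threads of one process share an address space, so plain
# loads/stores communicate; a lock stands in for special atomic operations.
# Thread count and values are illustrative only.
import threading

shared = [0] * 8            # data in the shared portion of the address space
total = 0
lock = threading.Lock()     # synchronization primitive

def worker(tid: int) -> None:
    global total
    shared[tid] = tid * tid          # a plain store, visible to the other threads
    with lock:                       # mutual exclusion around the shared sum
        total += shared[tid]         # load + store on shared data

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared:", shared, "total:", total)
```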

History

“Mainframe” approach:
• Motivated by multiprogramming
• Extends crossbar used for mem bw and I/O
• Originally processor cost limited to small scale
  – later, cost of crossbar
• Bandwidth scales with p
• High incremental cost; use multistage instead

[Figure: processors and I/O controllers connected to memory modules (M) through a crossbar.]

“Minicomputer” approach:
• Almost all microprocessor systems have bus
• Motivated by multiprogramming, TP
• Used heavily for parallel computing
• Called symmetric multiprocessor (SMP)
• Latency larger than for uniprocessor
• Bus is bandwidth bottleneck
  – caching is key: coherence problem
• Low incremental cost

[Figure: processors with caches ($), memory modules (M), and I/O controllers sharing a single bus.]


Example: Intel Pentium Pro Quad

[Figure: four P-Pro processor modules, each with CPU, interrupt controller, 256-KB L2 $, and bus interface, on the P-Pro bus (64-bit data, 36-bit address, 66 MHz); PCI bridges with PCI buses and I/O cards; memory controller and MIU to 1-, 2-, or 4-way interleaved DRAM.]

• All coherence and multiprocessing glue in processor module
• Highly integrated, targeted at high volume
• Low latency and bandwidth
Example: SUN Enterprise

[Figure: Gigaplane bus (256 data, 41 address, 83 MHz) connecting CPU/mem cards (two processors, each with $ and $2 caches, plus memory controller and memory) and I/O cards (bus interface, 2 FiberChannel, 100bT, SCSI, SBUS slots).]

• 16 cards of either type: processors + memory, or I/O
• All memory accessed over bus, so symmetric
• Higher bandwidth, higher latency bus


Scaling Up

[Figure: “dance hall” organization, with processor-cache nodes on one side of a network and memory modules on the other, versus distributed memory, with a memory module attached to each processor-cache node on the network.]

• Problem is interconnect: cost (crossbar) or bandwidth (bus)
• Dance-hall: bandwidth still scalable, but lower cost than crossbar
  – latencies to memory uniform, but uniformly large
• Distributed memory or non-uniform memory access (NUMA)
  – Construct shared address space out of simple message transactions across a general-purpose network (e.g. read-request, read-response)
• Caching shared (particularly nonlocal) data?

Example: Cray T3E

[Figure: node with processor (P), cache ($), memory, and a combined memory controller and network interface (NI), connected to the X/Y/Z switch of the interconnect; external I/O.]

• Scale up to 1024 processors, 480MB/s links
• Memory controller generates comm. request for nonlocal references
• No hardware mechanism for coherence (SGI Origin etc. provide this)


Message Passing Architectures

Complete computer as building block, including I/O
• Communication via explicit I/O operations

Programming model:
• directly access only private address space (local memory)
• communicate via explicit messages (send/receive)

High-level block diagram similar to distributed-memory SAS
• But comm. integrated at I/O level, need not put into memory system
• Like networks of workstations (clusters), but tighter integration
• Easier to build than scalable SAS

Programming model further from basic hardware ops
• Library or OS intervention
Message Passing Abstraction

[Figure: process P executes Send X, Q, t on local address X; process Q executes Receive Y, P, t into local address Y; the send and receive are matched on process and tag t across the two local address spaces.]

• Send specifies buffer to be transmitted and receiving process
• Recv specifies sending process and application storage to receive into
• Memory to memory copy, but need to name processes
• Optional tag on send and matching rule on receive
• User process names local data and entities in process/tag space too
• In simplest form, the send/recv match achieves pairwise synch event
  – Other variants too
• Many overheads: copying, buffer management, protection


Evolution of Message Passing

Early machines: FIFO on each link
• Hardware close to programming model
  – synchronous ops
• Replaced by DMA, enabling non-blocking ops
  – Buffered by system at destination until recv

[Figure: hypercube topology with nodes numbered 000 through 111.]

Diminishing role of topology
• Store & forward routing: topology important
• Introduction of pipelined routing made it less so
• Cost is in node-network interface
• Simplifies programming
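A minimal sketch of the send/receive abstraction described on the Message Passing Abstraction slide, written against mpi4py as an assumed (not course-mandated) library; the tag plays the matching role noted above.

```python
# Minimal sketch of explicit send/receive with a matching tag, using mpi4py
# (an assumed library choice). Run with e.g.:  mpirun -n 2 python sendrecv.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
TAG = 7                                   # optional tag used by the matching rule

if rank == 0:
    x = {"payload": list(range(4))}       # local data, named only by process 0
    comm.send(x, dest=1, tag=TAG)         # send: buffer + receiving process + tag
elif rank == 1:
    y = comm.recv(source=0, tag=TAG)      # recv: sending process + local storage
    print("process 1 received:", y)
```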

Example: IBM SP-2

[Figure: SP-2 node built from an essentially complete RS6000 workstation: Power 2 CPU, L2 $, memory bus, 4-way interleaved DRAM with memory controller, and a network interface card (i860, NI, DRAM, DMA) on the MicroChannel I/O bus; nodes connected by a general interconnection network formed from 8-port switches.]

• Made out of essentially complete RS6000 workstations
• Network interface integrated in I/O bus (bw limited by I/O bus)


Example: Intel Paragon

[Figure: Paragon node with two i860 processors and L1 $ on a 64-bit, 50 MHz memory bus, 4-way interleaved DRAM with memory controller, and a driver, DMA, and NI; nodes attached to a 2-D grid network with a processing node at every switch; links are 8 bits wide at 175 MHz, bidirectional. Shown: Sandia’s Intel Paragon XP/S-based supercomputer.]
Toward Architectural Convergence

Evolution and role of software have blurred boundary
• Send/recv supported on SAS machines via buffers
• Can construct global address space on MP using hashing
• Page-based (or finer-grained) shared virtual memory

Hardware organization converging too
• Tighter NI integration even for MP (low-latency, high-bandwidth)
• At lower level, even hardware SAS passes hardware messages

Even clusters of workstations/SMPs are parallel systems
• Emergence of fast system area networks (SAN)

Programming models distinct, but organizations converging
• Nodes connected by general network and communication assists
• Implementations also converging, at least in high-end machines


Data Parallel Systems

Programming model:
• Operations performed in parallel on each element of data structure
• Logically single thread of control, performs sequential or parallel steps
• Conceptually, a processor associated with each data element

Architectural model:
• Array of many simple, cheap processors with little memory each
  – Processors don’t sequence through instructions
• Attached to a control processor that issues instructions
• Specialized and general communication, cheap global synchronization

[Figure: control processor broadcasting instructions to a 2-D grid of PEs.]

Original motivation:
• Matches simple differential equation solvers
• Centralize high cost of instruction fetch & sequencing

Application of Data Parallelism

• Each PE contains an employee record with his/her salary
    If salary > 100K then
        salary = salary * 1.05
    else
        salary = salary * 1.10
• Logically, the whole operation is a single step
• Some processors enabled for arithmetic operation, others disabled

Other examples:
• Finite differences, linear algebra, ...
• Document searching, graphics, image processing, ...

Some recent machines:
• Thinking Machines CM-1, CM-2 (and CM-5)
• Maspar MP-1 and MP-2


Evolution and Convergence

Rigid control structure (SIMD in Flynn taxonomy)
• SISD = uniprocessor, MIMD = multiprocessor

Popular when cost savings of centralized sequencer high
• 60s when CPU was a cabinet; replaced by vectors in mid-70s
• Revived in mid-80s when 32-bit datapath slices just fit on chip
• No longer true with modern microprocessors

Other reasons for demise
• Simple, regular applications have good locality, can do well anyway
• Loss of applicability due to hardwiring data parallelism
  – MIMD machines as effective for data parallelism and more general

Programming model converges with SPMD (single program multiple data)
• Contributes need for fast global synchronization
• Structured global address space, implemented with either SAS or MP
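The salary update from the Application of Data Parallelism slide, written as a whole-array operation; NumPy is only an illustrative vehicle here, with the boolean mask playing the role of enabling and disabling PEs, and the salary values are made up.

```python
# Minimal sketch of the salary example as one logical data-parallel step.
# The mask stands in for enabling/disabling PEs; values are illustrative.
import numpy as np

salary = np.array([60_000, 95_000, 120_000, 150_000, 80_000], dtype=float)

high = salary > 100_000      # "enabled" PEs for the first arithmetic operation
salary[high] *= 1.05         # salary = salary * 1.05 where salary > 100K
salary[~high] *= 1.10        # salary = salary * 1.10 everywhere else

print(salary)
```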
Dataflow Architectures

Represent computation as a graph of essential dependences
• Logical processor at each node, activated by availability of operands
• Message (tokens) carrying tag of next instruction sent to next processor
• Tag compared with others in matching store; match fires execution

Example dataflow graph:
    a = (b + 1) × (b − c)
    d = c × e
    f = a × d

[Figure: dataflow graph for the three statements above, and a node pipeline of token store, program store, waiting/matching, instruction fetch, execute, and form token, with a token queue and network.]


Evolution and Convergence

Key characteristics:
• Ability to name operations, synchronization, dynamic scheduling

Problems:
• Operations have locality across them, useful to group together
• Handling complex data structures like arrays
• Complexity of matching store and memory units
• Exposes too much parallelism (?)

Converged to use conventional processors and memory
• Support for large, dynamic set of threads to map to processors
• Typically shared address space as well
• But separation of programming model from hardware (like data parallel)

Lasting contributions:
• Integration of communication with thread (handler) generation
• Tightly integrated communication and fine-grained synchronization
• Remained useful concept for software (compilers etc.)
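A minimal sketch of operand-driven firing for the example graph above (a = (b+1)×(b−c), d = c×e, f = a×d); the dictionary encoding of nodes and edges is purely illustrative, not a description of any dataflow machine.

```python
# Minimal sketch: a node fires as soon as all of its operand tokens are
# present, for the graph a = (b+1)*(b-c), d = c*e, f = a*d.
# The encoding below is illustrative, not an architecture description.
import operator

graph = {                                  # node -> (operation, input edges)
    "t1": (operator.add, ["b", "one"]),    # b + 1
    "t2": (operator.sub, ["b", "c"]),      # b - c
    "a":  (operator.mul, ["t1", "t2"]),
    "d":  (operator.mul, ["c", "e"]),
    "f":  (operator.mul, ["a", "d"]),
}

def run(inputs: dict) -> dict:
    tokens = dict(inputs)                  # edge -> value (the "matching store")
    fired = True
    while fired:
        fired = False
        for node, (op, ins) in graph.items():
            if node not in tokens and all(i in tokens for i in ins):
                tokens[node] = op(*(tokens[i] for i in ins))  # match fires execution
                fired = True
    return tokens

print(run({"b": 4, "c": 1, "e": 3, "one": 1}))   # a = 15, d = 3, f = 45
```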

Systolic Architectures

• Replace single processor with array of regular processing elements
• Orchestrate data flow for high throughput with less memory access

[Figure: memory feeding a single PE, versus memory feeding a linear array of PEs.]

Different from pipelining:
• Nonlinear array structure, multidirection data flow, each PE may have (small) local instruction and data memory

Different from SIMD: each PE may do something different

Initial motivation: VLSI enables inexpensive special-purpose chips

Represent algorithms directly by chips connected in regular pattern


Systolic Arrays (Cont)

Example: Systolic array for 1-D convolution

[Figure: linear array of PEs holding weights W(1), W(2), ..., W(k); inputs x(i+1), x(i), x(i-1), ..., x(i-k) stream through in one direction while results y(i), y(i+1), ..., y(i+k) stream through in the other.]

    y(i) = Σ (j = 1..k) w(j) * x(i − j)

• Practical realizations (e.g. iWARP) use quite general processors
  – Enable variety of algorithms on same hardware
• But dedicated interconnect channels
  – Data transfer directly from register to register across channel
• Specialized, and same problems as SIMD
  – General purpose systems work well for same algorithms (locality etc.)
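A minimal sketch of the 1-D convolution y(i) = Σ (j = 1..k) w(j)*x(i−j), with the partial sum visiting one weight-holding PE after another; the per-cycle timing and skew of a real systolic schedule are not modeled, and the weights and inputs are made up.

```python
# Minimal sketch of y(i) = sum_{j=1..k} w(j) * x(i-j): a partial sum passes
# through a linear array of PEs, where PE j holds w(j) and adds w(j)*x(i-j).
# Real systolic timing/skew is not modeled; inputs are illustrative.

def systolic_conv(w, x):
    k = len(w)
    y = []
    for i in range(k, len(x)):                 # outputs where all x(i-j) exist
        partial = 0.0
        for j, wj in enumerate(w, start=1):    # partial sum visits PE 1..k in turn
            partial += wj * x[i - j]           # PE j contributes w(j) * x(i-j)
        y.append(partial)
    return y

w = [0.5, 0.25, 0.25]                          # weights held in the PEs
x = [1, 2, 3, 4, 5, 6]
print(systolic_conv(w, x))                     # [2.25, 3.25, 4.25]
```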


Convergence: General Parallel Architecture

A generic modern multiprocessor

[Figure: nodes, each with processor(s), cache ($), memory (Mem), and a communication assist (CA), connected by a scalable network.]

Node: processor(s), memory system, plus communication assist
• Network interface and communication controller
• Scalable network
• Convergence allows lots of innovation, now within framework
• Integration of assist with node, what operations, how efficiently...


Fundamental Design Issues

Understanding Parallel Architecture

Traditional taxonomies not very useful

Programming models not enough, nor hardware structures
• Same one can be supported by radically different architectures

Architectural distinctions that affect software
• Compilers, libraries, programs

Design of user/system and hardware/software interface
• Constrained from above by progr. models and below by technology

Guiding principles provided by layers
• What primitives are provided at communication abstraction
• How programming models map to these
• How they are mapped to hardware


Fundamental Design Issues

At any layer, interface (contract) aspect and performance aspects
• Naming: How are logically shared data and/or processes referenced?
• Operations: What operations are provided on these data?
• Ordering: How are accesses to data ordered and coordinated?
• Replication: How are data replicated to reduce communication?
• Communication Cost: Latency, bandwidth, overhead, occupancy

Understand at programming model first, since that sets requirements

Other issues:
• Node Granularity: How to split between processors and memory?
• ...


Sequential Programming Model

Contract
• Naming: Can name any variable in virtual address space
  – Hardware (and perhaps compilers) does translation to physical addresses
• Operations: Loads and Stores
• Ordering: Sequential program order

Performance
• Rely on dependences on single location (mostly): dependence order
• Compilers and hardware violate other orders without getting caught
• Compiler: reordering and register allocation
• Hardware: out of order, pipeline bypassing, write buffers
• Transparent replication in caches


SAS Programming Model

Naming:
• Any process can name any variable in shared space

Operations:
• Loads and stores, plus those needed for ordering

Simplest Ordering Model:
• Within a process/thread: sequential program order
• Across threads: some interleaving (as in time-sharing)
• Additional orders through synchronization
• Again, compilers/hardware can violate orders without getting caught
  – Different, more subtle ordering models also possible (discussed later)

Synchronization

Mutual exclusion (locks)
• Ensure certain operations on certain data can be performed by only one process at a time
• Room that only one person can enter at a time
• No ordering guarantees

Event synchronization
• Ordering of events to preserve dependences
  – e.g. producer -> consumer of data
• 3 main types:
  – point-to-point
  – global
  – group


Message Passing Programming Model

Naming: Processes can name private data directly
• No shared address space

Operations: Explicit communication via send and receive
• Send transfers data from private address space to another process
• Receive copies data from process to private address space
• Must be able to name processes

Ordering:
• Program order within a process
• Send and receive can provide pt-to-pt synch between processes
• Mutual exclusion inherent

Can construct global address space:
• Process number + address within process address space
• But no direct operations on these names
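A minimal sketch of the two kinds of synchronization on the Synchronization slide, using Python threads; the buffer, values, and thread structure are illustrative. A lock provides mutual exclusion, and an event provides producer -> consumer ordering (point-to-point event synchronization).

```python
# Minimal sketch: a lock for mutual exclusion, an event for producer ->
# consumer (point-to-point) ordering. Values are illustrative only.
import threading

counter = 0
lock = threading.Lock()          # mutual exclusion: one thread at a time
data_ready = threading.Event()   # event synchronization: preserves dependence
shared_buffer = []

def producer() -> None:
    shared_buffer.append(42)     # produce the data ...
    data_ready.set()             # ... then signal that it is ready

def consumer() -> None:
    global counter
    data_ready.wait()            # consume only after the producer has signaled
    with lock:                   # only one thread updates counter at a time
        counter += shared_buffer[0]

threads = [threading.Thread(target=f) for f in (consumer, producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter =", counter)
```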


Design Issues Apply at All Layers

Programming model’s position provides constraints/goals for system

In fact, each interface between layers supports or takes a position on:
• Naming model
• Set of operations on names
• Ordering model
• Replication
• Communication performance

Any set of positions can be mapped to any other by software

Let’s see issues across layers:
• How lower layers can support contracts of programming models
• Performance issues


Naming and Operations

Naming and operations in programming model can be directly supported by lower levels, or translated by compiler, libraries or OS

Example: Shared virtual address space in programming model

Hardware interface supports shared physical address space
• Direct support by hardware through v-to-p mappings, no software layers

Hardware supports independent physical address spaces
• Can provide SAS through OS, so in system/user interface
  – v-to-p mappings only for data that are local
  – remote data accesses incur page faults; brought in via page fault handlers
  – same programming model, different hardware requirements and cost model
• Or through compilers or runtime, so above sys/user interface
  – shared objects, instrumentation of shared accesses, compiler support

Naming and Operations (Cont)

Example: Implementing Message Passing

Direct support at hardware interface
• But match and buffering benefit from more flexibility

Support at system/user interface or above in software (almost always)
• Hardware interface provides basic data transport (well suited)
• Send/receive built in software for flexibility (protection, buffering)
• Choices at user/system interface:
  – OS each time: expensive
  – OS sets up once/infrequently, then little software involvement each time
• Or lower interfaces provide SAS, and send/receive built on top with buffers and loads/stores

Need to examine the issues and tradeoffs at every layer
• Frequencies and types of operations, costs


Ordering

Message passing: no assumptions on orders across processes except those imposed by send/receive pairs

SAS: How processes see the order of other processes’ references defines semantics of SAS
• Ordering very important and subtle
• Uniprocessors play tricks with orders to gain parallelism or locality
• These are more important in multiprocessors
• Need to understand which old tricks are valid, and learn new ones
• How programs behave, what they rely on, and hardware implications
Replication

Very important for reducing data transfer/communication

Again, depends on naming model

Uniprocessor: caches do it automatically
• Reduce communication with memory

Message Passing naming model at an interface
• A receive replicates, giving a new name; subsequently use new name
• Replication is explicit in software above that interface

SAS naming model at an interface
• A load brings in data transparently, so can replicate transparently
• Hardware caches do this, e.g. in shared physical address space
• OS can do it at page level in shared virtual address space, or objects
• No explicit renaming, many copies for same name: coherence problem
  – in uniprocessors, “coherence” of copies is natural in memory hierarchy


Communication Performance

Performance characteristics determine usage of operations at a layer
• Programmer, compilers etc make choices based on this

Fundamentally, three characteristics:
• Latency: time taken for an operation
• Bandwidth: rate of performing operations
• Cost: impact on execution time of program

If processor does one thing at a time: bandwidth ∝ 1/latency
• But actually more complex in modern systems

Characteristics apply to overall operations, as well as individual components of a system, however small

We will focus on communication or data transfer across nodes

Communication Cost Model

Communication Time per Message
  = Overhead + Assist Occupancy + Network Delay + Size/Bandwidth + Contention
  = ov + oc + l + n/B + Tc

Overhead and assist occupancy may be f(n) or not

Each component along the way has occupancy and delay
• Overall delay is sum of delays
• Overall occupancy (1/bandwidth) is biggest of occupancies

Comm Cost = frequency * (Comm time - overlap)

General model for data transfer: applies to cache misses too


Summary of Design Issues

Functional and performance issues apply at all layers

Functional: Naming, operations and ordering

Performance: Organization, latency, bandwidth, overhead, occupancy

Replication and communication are deeply related
• Management depends on naming model

Goal of architects: design against frequency and type of operations that occur at communication abstraction, constrained by tradeoffs from above or below
• Hardware/software tradeoffs
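A minimal Python sketch of the cost model on the Communication Cost Model slide above; every parameter value is a made-up illustration, not a measurement from the course.

```python
# Minimal sketch of the communication cost model:
#   time per message = ov + oc + l + n/B + Tc
#   comm cost        = frequency * (comm time - overlap)
# All parameter values below are hypothetical.

def comm_time_per_message(ov, oc, l, n, B, Tc):
    """Overhead + assist occupancy + network delay + size/bandwidth + contention."""
    return ov + oc + l + n / B + Tc

def comm_cost(frequency, comm_time, overlap):
    """Contribution of communication to execution time."""
    return frequency * (comm_time - overlap)

if __name__ == "__main__":
    # hypothetical: 1 KB messages, microsecond-scale components, 400 MB/s link
    t = comm_time_per_message(ov=1.0e-6, oc=0.5e-6, l=2.0e-6,
                              n=1024, B=400e6, Tc=0.3e-6)
    print("time per message (s):", t)
    print("cost of 10,000 messages with 1 us overlap each (s):",
          comm_cost(10_000, t, overlap=1.0e-6))
```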
Recap

Parallel architecture is an important thread in the evolution of architecture
• At all levels
• Multiple processor level now in mainstream of computing

Exotic designs have contributed much, but given way to convergence
• Push of technology, cost and application performance
• Basic processor-memory architecture is the same
• Key architectural issue is in communication architecture

Fundamental design issues:
• Functional: naming, operations, ordering
• Performance: organization, replication, performance characteristics

Design decisions driven by workload-driven evaluation
• Integral part of the engineering focus
