Lecture 14 Basic Communication Operations.pptx

The lecture covers basic communication operations in parallel and distributed computing, including one-to-all broadcast and all-to-one reduction, along with their applications in various structures like linear arrays, meshes, and hypercubes. It discusses assumptions for operations, cost estimation, and provides exercises for practical understanding. Additional resources for further reading are also suggested.


CS 3006

Parallel and Distributed Computing


Lecture 14
Danyal Farhat
FAST School of Computing
NUCES Lahore
Basic Communication Operations
Lecture’s Agenda
•Basic Communication Operations
Preliminaries
Assumptions for the Operations
One-to-All Broadcast & All-to-One Reduction
✔ Linear Array or Ring
✔ Matrix-Vector Multiplication (An Application)
✔ Mesh Broadcast and Reduction
✔ Balanced Binary Tree
✔ Hypercube Broadcast (3-D Structure)
✔ Cost Estimation
Lecture’s Agenda (Cont.)
•All-to-All Broadcast and All-to-All Reduction
Linear Ring Broadcast
Linear Ring Reduction
All-to-All Broadcast on 2D Mesh
All-to-All Broadcast on HyperCube
Cost Estimation
•Exercise
•Summary
•Additional Resources
Basic Communication Operations

Basic Communication Operations (Cont.)
Assumptions for the Operations
• Interconnections support cut-through routing
• Communication time between any pair of nodes in the network is the
same (regardless of the number of intermediate nodes)
• Links are bi-directional
The directly connected nodes can simultaneously send messages of m words
without any congestion
• Single-port communication model
A node can send on only one of its links at a time
A node can receive on only one of its links at a time
• However, a node can receive a message while sending another
message at the same time on the same or a different link
Basic Communication Operations (Cont.)
One-to-All Broadcast
• A single process sends identical data to all other processes
Initially, one process has data of size m.
After the broadcast operation, each process has its own copy of the m-sized
data.
All-to-One Reduction
• Dual of one-to-all broadcast
• The m-sized data from all processes are combined through an
associative operator and accumulated at a single destination process
into one buffer of size m
• Applications: matrix-vector multiplication, shortest paths
calculation, and inner vector product
Basic Communication Operations (Cont.)
One-to-All Broadcast and All-to-One
Reduction
Linear Array or Ring
• Naïve solution
sequentially send p - 1 messages from the source to the other p - 1
processes
✔ Bottleneck at the source, and underutilization of the communication network
Solution?

• Recursive doubling
Source process sends the message to another process
In the next communication phase, both processes can simultaneously
propagate the message
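The recursive-doubling idea can be sketched as a small simulation. This is a minimal sketch, assuming p is a power of two and that nodes are numbered 0 to p−1 around the ring; the function name and the payload "m" are illustrative, not from the lecture. Sending to the farthest node first and halving the distance each step is what keeps the ring links congestion-free.

```python
def recursive_doubling_broadcast(p, source, data):
    """Simulate the broadcast; return per-step (sender, receiver) pairs."""
    buffers = {source: data}            # only the source holds the message
    steps = []
    stride = p // 2                     # farthest destination first ...
    while stride >= 1:
        round_pairs = []
        for s in list(buffers):
            r = (s + stride) % p        # partner at the current distance
            if r not in buffers:
                buffers[r] = data       # all current holders send simultaneously
                round_pairs.append((s, r))
        steps.append(round_pairs)
        stride //= 2                    # ... then halve the distance each step
    return steps, buffers

steps, buffers = recursive_doubling_broadcast(8, 0, "m")
print(len(steps))        # 3 steps for p = 8 (log2 p)
print(sorted(buffers))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Note that the number of holders doubles every step, so the broadcast finishes in log2(p) steps instead of the p−1 steps of the naïve method.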
One-to-All Broadcast and All-to-One
Reduction
Linear Array or Ring (One Dimensional Structure)
• Recursive Doubling Broadcast
Select destination such that no congestions occur
One-to-All Broadcast and All-to-One
Reduction
Linear Array or Ring (One Dimensional Structure)
• Recursive Doubling Reduction
Odd-numbered nodes send their messages to even-numbered nodes, and the associative operator combines the results
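The reduction is the broadcast run in reverse: partial results are combined over short distances first, and the set of active nodes halves each step. A minimal sketch, assuming p is a power of two and node 0 is the destination; the input values and the function name are illustrative.

```python
def all_to_one_reduction(values, op):
    """Recursive-halving all-to-one reduction; result accumulates at node 0."""
    p = len(values)
    buf = list(values)                 # each node's local m-sized buffer
    active = list(range(p))            # nodes still holding partial results
    while len(active) > 1:
        senders = active[1::2]         # "odd" active positions send ...
        receivers = active[0::2]       # ... to their "even" partners
        for s, r in zip(senders, receivers):
            buf[r] = op(buf[r], buf[s])   # associative operator combines
        active = receivers
    return buf[active[0]]

print(all_to_one_reduction([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b))  # 36
```

With SUM as the operator this computes the global sum in log2(p) combining steps, the dual of the recursive-doubling broadcast above.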
Matrix-Vector Multiplication (An
Application)
Mesh Broadcast and Reduction
• We can regard each row and column of a square mesh of p nodes as
a linear array of nodes
• Communication algorithms on the mesh are simple extensions of
their linear array counterparts

• Broadcast and Reduction


Two-step breakdown:
✔ The operation is first performed along one row, treating the row as a linear array
✔ Then all the columns are treated similarly
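The two phases can be sketched by reusing a 1-D recursive-doubling routine, first on the source's row and then on every column. A rough simulation, assuming a square mesh with side a power of two and row-major node numbering; all names are illustrative.

```python
def ring_broadcast(nodes, holders):
    """Recursive-doubling broadcast restricted to `nodes`; mutates `holders`."""
    n = len(nodes)
    pos = {node: i for i, node in enumerate(nodes)}
    stride = n // 2
    while stride >= 1:
        for node in list(holders & set(nodes)):
            holders.add(nodes[(pos[node] + stride) % n])  # partner at distance
        stride //= 2

def mesh_broadcast(side, source):
    """One-to-all broadcast on a side x side mesh via row phase + column phase."""
    holders = {source}
    row = source // side
    # Phase 1: broadcast along the source's row (a linear array of `side` nodes)
    ring_broadcast([row * side + c for c in range(side)], holders)
    # Phase 2: every column now has one holder; broadcast down each column
    for c in range(side):
        ring_broadcast([r * side + c for r in range(side)], holders)
    return holders

print(mesh_broadcast(4, 5) == set(range(16)))  # True: all 16 nodes covered
```

Each phase costs log2(side) steps, so the whole mesh broadcast takes about log2(p) steps in total, matching the linear-array analysis applied twice.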
Mesh Broadcast and Reduction (Cont.)
Balanced Binary Tree
•Broadcast
Hypercube Broadcast (3-D Structure)
•Source node first sends data to one node in the highest
dimension
•The communication successively proceeds along lower
dimensions in the subsequent steps
•The algorithm is the same as that used for the linear array
But here, changing the order of dimensions does not congest the network
✔ i.e. one can move from x-axis to z-axis or from z-axis to x-axis
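On a hypercube each dimension corresponds to one bit of the node address, so "send along a dimension" is just flipping that bit. A minimal sketch for a d-dimensional cube, assuming the source is node 0 and nodes are addressed 0 to 2^d − 1; names are illustrative.

```python
def hypercube_broadcast(d, source):
    """One-to-all broadcast on a d-dimensional hypercube, one bit per step."""
    holders = {source}
    steps = []
    for bit in reversed(range(d)):                      # highest dimension first
        new = {node ^ (1 << bit) for node in holders}   # partner across this dimension
        steps.append(sorted(new - holders))
        holders |= new
    return steps, holders

steps, holders = hypercube_broadcast(3, 0)
print(steps)          # [[4], [2, 6], [1, 3, 5, 7]]
print(len(holders))   # all 8 nodes after d = 3 steps
```

Because every step uses a disjoint set of links (one dimension of the cube), iterating the dimensions in any order works equally well, which is the point the slide makes about moving from x-axis to z-axis or vice versa.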
Hypercube Broadcast (Cont.)
• Communication done along z-axis, then y-axis, and then x-axis
Hypercube Broadcast General
• Assigned Reading
Hypercube Broadcast General (Cont.)
Hypercube Reduction
• Assigned Reading
Cost Estimation

All-to-All Broadcast and All-to-All Reduction
All-to-All Broadcast
• A generalization to one-to-all broadcast
• Every process broadcasts an m-word message
The broadcast message for each process can be different from the others

All-to-All Reduction
• Dual of all-to-all broadcast
• Each node is the destination of one all-to-one reduction, out of p total
reductions
• Applications in different matrix operations including matrix
multiplication and matrix-vector multiplication
All-to-All Broadcast and All-to-All Reduction
(Cont.)

•A naïve broadcast method is to perform p one-to-all
broadcasts, one for each node. This results in p·log(p)·(ts + m·tw)
communication time.
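To see why the naïve method is wasteful, it helps to compare it against the standard ring all-to-all broadcast covered in the assigned reading, which costs (p − 1)(ts + m·tw). A small sketch using the usual ts (startup time) and tw (per-word transfer time) notation; the numeric values below are illustrative only.

```python
import math

def naive_cost(p, ts, tw, m):
    """p sequential one-to-all broadcasts, each costing log2(p)*(ts + m*tw)."""
    return p * math.log2(p) * (ts + m * tw)

def ring_cost(p, ts, tw, m):
    """Ring all-to-all broadcast: p - 1 nearest-neighbour steps."""
    return (p - 1) * (ts + m * tw)

print(naive_cost(8, 1.0, 0.1, 10))   # 48.0
print(ring_cost(8, 1.0, 0.1, 10))    # 14.0
```

The gap grows with p: the ring algorithm keeps every link busy in every step, while the naïve method serializes the p broadcasts.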
All-to-All Broadcast and All-to-All Reduction
(Cont.)
Linear Ring Broadcast (Assigned Reading)
All-to-All Reduction
Linear Array or Ring
• Reduction
Draw an All-to-All Broadcast on a p-node linear ring
Reverse the directions in each step without changing the
messages
After each communication step, combine messages having the same broadcast
destination with the associative operator

• Now, it's your turn to draw:
Draw an All-to-All Broadcast on a 4-node linear ring
Reverse the directions and combine the results using 'SUM'
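The broadcast half of this exercise can be checked with a small simulation: on a ring, each node forwards the most recently received message to its right neighbour, and after p − 1 steps every node holds all p messages. A sketch with illustrative node values; summing the gathered messages at a node shows what the reversed pattern with SUM would accumulate there.

```python
def ring_all_to_all_broadcast(values):
    """All-to-all broadcast on a p-node ring; p - 1 shift-and-store steps."""
    p = len(values)
    received = [{i: v} for i, v in enumerate(values)]   # each node knows its own
    carry = list(enumerate(values))                     # (origin, payload) in transit
    for _ in range(p - 1):
        carry = [carry[(i - 1) % p] for i in range(p)]  # shift right by one hop
        for i, (origin, payload) in enumerate(carry):
            received[i][origin] = payload               # store what just arrived
    return received

result = ring_all_to_all_broadcast([1, 2, 3, 4])
print(all(len(r) == 4 for r in result))   # True: every node holds all 4 messages
print(sum(result[0].values()))            # 10, the SUM over all nodes' values
```

Drawing the four steps by hand and then reversing the arrows, as the exercise asks, should reproduce exactly this communication pattern.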
Linear Ring Reduction (Assigned Reading)
All-to-All Broadcast on 2D Mesh
All-to-All Broadcast on 2D Mesh Algorithm
(Assigned Reading)
All-to-All Broadcast on HyperCube
•Along x-axis then y-axis then z-axis
All-to-All Broadcast on HyperCube Algorithm
(Assigned Reading)
All-to-All Broadcast and All-to-All Reduction

All-to-All Broadcast and All-to-All Reduction
(Cont.)

All-to-All Broadcast and All-to-All Reduction
(Cont.)

Homework Exercise
•Dry run one-to-all broadcast and all-to-one reduction on a
16-node ring structure.
Summary
•Basic Communication Operations
Preliminaries
✔ Exchanging data is a fundamental requirement for most parallel algorithms

Assumptions for the Operations


✔ Interconnections support cut-through routing
✔ Communication time between any pair of nodes in the network is the same
✔ Links are bi-directional
✔ Single-port communication model
Summary (Cont.)
•One-to-All Broadcast & All-to-One Reduction
One-to-All Broadcast - A single process sends identical data to all other
processes

All-to-One Reduction - The m-sized data from all processes are combined
through an associative operator, accumulated at a single destination process
into one buffer of size m
✔ Linear Array or Ring
✔ Matrix-Vector Multiplication (An Application)
✔ Mesh Broadcast and Reduction
✔ Balanced Binary Tree
✔ Hypercube Broadcast (3-D Structure)
✔ Cost Estimation
Summary (Cont.)
•All-to-All Broadcast and All-to-All Reduction
All-to-All Broadcast - Every process broadcasts an m-word message

All-to-All Reduction - Each node is the destination of one all-to-one reduction,
out of p total reductions
✔ Linear Ring Broadcast
✔ Linear Ring Reduction
✔ All-to-All Broadcast on 2D Mesh
✔ All-to-All Broadcast on HyperCube
✔ Cost Estimation

•Exercise
Additional Resources
•Introduction to Parallel Computing by Ananth Grama and
Anshul Gupta

Chapter 4: Basic Communication Operations


Questions?
