
Sanjivani Rural Education Society’s

Sanjivani College of Engineering, Kopargaon-423 603


(An Autonomous Institute, Affiliated to Savitribai Phule Pune University, Pune)
NAAC ‘A’ Grade Accredited, ISO 9001:2015 Certified

Department of Computer Engineering


(NBA Accredited)

Course - High Performance Computing (410241)


Unit 3 - Basic Communication Operations

Prof. B. J. Dange
Assistant Professor
E-mail :
Contact No: 91301 91301 Ext :145, 9604146122
Contents
• Basic Communication Operations- One-to-All Broadcast and All-to-One Reduction

• All-to-All Broadcast and Reduction

• All-Reduce and Prefix-Sum Operations

• Scatter and Gather

• All-to-All Personalized Communication

• Circular Shift

• Improving the Speed of Some Communication Operations



Basic Communication Operations: Introduction
• Many interactions in practical parallel programs occur in well-defined patterns involving
groups of processors.

• Efficient implementations of these operations can improve performance, reduce development effort and cost, and improve software quality.

• Efficient implementations must leverage the underlying architecture. For this reason, we refer to specific architectures here.

• We select a descriptive set of architectures to illustrate the process of algorithm design.



Basic Communication Operations: Introduction
• Group communication operations are built using point-to-point messaging primitives.

• Recall from our discussion of architectures that communicating a message of size m over an uncongested network takes time ts + tw m.

• We use this as the basis for our analyses. Where necessary, we take congestion into account
explicitly by scaling the tw term.

• We assume that the network is bidirectional and that communication is single-ported.



One-to-All Broadcast and All-to-One Reduction

• One processor has a piece of data (of size m) it needs to send to everyone.

• The dual of one-to-all broadcast is all-to-one reduction.

• In all-to-one reduction, each processor has m units of data. These data items must be
combined piece-wise (using some associative operator, such as addition or min), and the
result made available at a target processor.
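
MPI exposes this dual pair directly as the MPI_Bcast and MPI_Reduce collectives. The following minimal sketch (placeholder values; node 0 chosen as source and target) illustrates both operations:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One-to-all broadcast: node 0 sends x to every node. */
    double x = (rank == 0) ? 42.0 : 0.0;
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* All-to-one reduction: each node contributes one value; the piecewise
       sum (an associative operator) becomes available at target node 0. */
    double mine = (double)rank, sum = 0.0;
    MPI_Reduce(&mine, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value = %g, reduced sum = %g\n", x, sum);
    MPI_Finalize();
    return 0;
}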



One-to-All Broadcast and All-to-One Reduction

One-to-all broadcast and all-to-one reduction among processors.



One-to-All Broadcast and All-to-One Reduction on Rings

• The simplest way is to send p - 1 messages sequentially from the source to the other p - 1 processors; this is not very efficient, because the source becomes a bottleneck.

• Instead, use recursive doubling: the source sends the message to a selected processor, leaving two independent broadcast problems, each over half of the machine.

• Reduction can be performed in an identical fashion by inverting the process.



One-to-All Broadcast

One-to-all broadcast on an eight-node ring. Node 0 is the source of the broadcast. Each message transfer
step is shown by a numbered, dotted arrow from the source of the message to its destination. The
number on an arrow indicates the time step during which the message is transferred.



All-to-One Reduction

Reduction on an eight-node ring with node 0 as the destination of the reduction.



Broadcast and Reduction: Example
Consider the problem of multiplying a matrix with a vector.

• The n x n matrix is assigned to an n x n (virtual) processor grid. The vector is assumed to be on the first row of processors.

• The first step of the product requires a one-to-all broadcast of the vector element along
the corresponding column of processors. This can be done concurrently for all n columns.

• Each processor computes the local product of the vector element and its local matrix entry.

• In the final step, the results of these products are accumulated to the first row using n
concurrent all-to-one reduction operations along the columns (using the sum operation).
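
The pattern can be sketched with MPI sub-communicators. Everything below is illustrative: it assumes p = n x n processes numbered row-major with one matrix entry each and placeholder values, and it shows only the communication structure (broadcast down each column, local multiply, sum-reduction back to the first row):

#include <mpi.h>
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int n = (int)(sqrt((double)p) + 0.5);     /* assumes p is a perfect square */
    int row = rank / n, col = rank % n;

    /* One communicator per grid column, ranked by row, so row 0 of the
       grid is rank 0 in each column communicator. */
    MPI_Comm col_comm;
    MPI_Comm_split(MPI_COMM_WORLD, col, row, &col_comm);

    double a = 1.0;                           /* placeholder local matrix entry */
    double x = (row == 0) ? col + 1.0 : 0.0;  /* vector element lives on row 0 */

    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, col_comm);   /* broadcast down the column */
    double prod = a * x, colsum = 0.0;
    MPI_Reduce(&prod, &colsum, 1, MPI_DOUBLE, MPI_SUM, 0, col_comm);

    if (row == 0)                             /* row 0 now holds its column's sum */
        printf("column %d: accumulated product = %g\n", col, colsum);
    MPI_Finalize();
    return 0;
}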



Broadcast and Reduction: Matrix-Vector Multiplication
Example

One-to-all broadcast and all-to-one reduction in the multiplication of a 4 x 4 matrix with


a 4 x 1 vector.
Broadcast and Reduction on a Mesh
• We can view each row and column of a square mesh of p nodes as a linear array of √p
nodes.

• Broadcast and reduction operations can be performed in two steps - the first step does the
operation along a row and the second step along each column concurrently.

• This process generalizes to higher dimensions as well.



Broadcast and Reduction on a Mesh: Example

One-to-all broadcast on a 16-node mesh.



Broadcast and Reduction on a Hypercube

• A hypercube with 2^d nodes can be regarded as a d-dimensional mesh with two nodes in each dimension.

• The mesh algorithm can be generalized to a hypercube and the operation is carried out in d
(= log p) steps.



Broadcast and Reduction on a Hypercube: Example

One-to-all broadcast on a three-dimensional hypercube. The binary representations of node labels are shown in parentheses.



Broadcast and Reduction on a Balanced Binary Tree
• Consider a binary tree in which processors are (logically) at the leaves and internal nodes
are routing nodes.

• Assume that source processor is the root of this tree. In the first step, the source sends the
data to the right child (assuming the source is also the left child). The problem has now been
decomposed into two problems with half the number of processors.



Broadcast and Reduction on a Balanced Binary Tree

One-to-all broadcast on an eight-node tree.



Broadcast and Reduction Algorithms
• All of the algorithms described above are adaptations of the same algorithmic template.

• We illustrate the algorithm for a hypercube, but the algorithm, as has been seen, can be
adapted to other architectures.

• The hypercube has 2^d nodes and my_id is the label of a node.

• X is the message to be broadcast, which initially resides at the source node 0.



Broadcast and Reduction Algorithms

One-to-all broadcast of a message X from source on a hypercube.
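
The procedure follows the recursive-doubling template. As a hedged transliteration into C with MPI point-to-point calls (assuming p = 2^d processes, source node 0, and every node holding an allocated buffer of m doubles):

#include <mpi.h>

/* One-to-all broadcast of buffer X (m doubles) from node 0 on a hypercube
   of p = 2^d nodes. In step i, nodes whose lower i bits are zero pair up
   across dimension i; holders send, non-holders receive. */
void one_to_all_bc(double *X, int m, int d, MPI_Comm comm) {
    int my_id;
    MPI_Comm_rank(comm, &my_id);
    int mask = (1 << d) - 1;                  /* all d bits set */
    for (int i = d - 1; i >= 0; i--) {
        mask ^= (1 << i);                     /* clear bit i of the mask */
        if ((my_id & mask) == 0) {            /* lower i bits of my_id are 0 */
            int partner = my_id ^ (1 << i);
            if ((my_id & (1 << i)) == 0)      /* this node already holds X */
                MPI_Send(X, m, MPI_DOUBLE, partner, 0, comm);
            else                              /* this node receives X */
                MPI_Recv(X, m, MPI_DOUBLE, partner, 0, comm, MPI_STATUS_IGNORE);
        }
    }
}

Reduction uses the same pairing in the reverse order, combining each incoming buffer with the local one instead of overwriting it.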


Broadcast and Reduction Algorithms

Single-node accumulation on a d-dimensional hypercube. Each node contributes a message X containing m words, and node 0 is the destination.
Cost Analysis

• The broadcast or reduction procedure involves log p point-to-point message transfers, each at a time cost of ts + tw m.

• The total time is therefore: T = (ts + tw m) log p.



All-to-All Broadcast and Reduction

• Generalization of broadcast in which each processor is the source as well as destination.

• A process sends the same m-word message to every other process, but different
processes may broadcast different messages.



All-to-All Broadcast and Reduction

All-to-all broadcast and all-to-all reduction.



All-to-All Broadcast and Reduction on a Ring

• Simplest approach: perform p one-to-all broadcasts. This is not the most efficient way,
though.

• Each node first sends to one of its neighbors the data it needs to broadcast.

• In subsequent steps, it forwards the data received from one of its neighbors to its other
neighbor.

• The algorithm terminates in p-1 steps.
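
A sketch of this ring algorithm in C with MPI, assuming one double per node; MPI_Sendrecv pairs the send and receive so neighbors do not deadlock:

#include <mpi.h>

/* All-to-all broadcast on a logical ring: after p - 1 steps, result[j]
   holds node j's value on every node. */
void ring_all_to_all_bc(double my_msg, double *result, MPI_Comm comm) {
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    int left  = (rank - 1 + p) % p;
    int right = (rank + 1) % p;

    result[rank] = my_msg;
    double pass = my_msg;                     /* message currently being forwarded */
    for (int step = 1; step < p; step++) {
        double incoming;
        MPI_Sendrecv(&pass, 1, MPI_DOUBLE, right, 0,
                     &incoming, 1, MPI_DOUBLE, left, 0,
                     comm, MPI_STATUS_IGNORE);
        result[(rank - step + p) % p] = incoming;  /* originated step hops to the left */
        pass = incoming;                      /* forward it in the next step */
    }
}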



All-to-All Broadcast and Reduction on a Ring

All-to-all broadcast on an eight-node ring.
All-to-All Broadcast and Reduction on a Ring

All-to-all broadcast on a p-node ring.



All-to-all Broadcast on a Mesh
• Performed in two phases - in the first phase, each row of the mesh performs an all-to-all
broadcast using the procedure for the linear array.

• In this phase, all nodes collect √p messages corresponding to the √p nodes of their
respective rows. Each node consolidates this information into a single message of size
m√p.

• The second communication phase is a column-wise all-to-all broadcast of the consolidated messages, as sketched below.
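
A hedged two-phase sketch with MPI, using per-row and per-column communicators and MPI_Allgather for each phase (assumes p is a perfect square, one double per node, and √p <= 64 for the fixed buffer):

#include <mpi.h>
#include <math.h>

/* Two-phase all-to-all broadcast on a √p x √p mesh. After phase 2,
   all[j] holds node j's value on every node (row-major rank order). */
void mesh_all_to_all_bc(double mine, double *all, MPI_Comm comm) {
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    int n = (int)(sqrt((double)p) + 0.5);

    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(comm, rank / n, rank % n, &row_comm);  /* nodes in my row */
    MPI_Comm_split(comm, rank % n, rank / n, &col_comm);  /* nodes in my column */

    double rowbuf[64];                        /* consolidated row message of size m*sqrt(p) */
    /* Phase 1: all-to-all broadcast within each row. */
    MPI_Allgather(&mine, 1, MPI_DOUBLE, rowbuf, 1, MPI_DOUBLE, row_comm);
    /* Phase 2: column-wise all-to-all broadcast of the consolidated message. */
    MPI_Allgather(rowbuf, n, MPI_DOUBLE, all, n, MPI_DOUBLE, col_comm);
}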



All-to-all Broadcast on a Mesh

All-to-all broadcast on a 3 x 3 mesh. The groups of nodes communicating with each other in each phase are enclosed by dotted boundaries. By the end of the second phase, all nodes get (0,1,2,3,4,5,6,7) (that is, a message from each node).
All-to-all Broadcast on a Mesh

All-to-all broadcast on a square mesh of p nodes.


All-to-all broadcast on a Hypercube

• Generalization of the mesh algorithm to log p dimensions.

• Message size doubles at each of the log p steps.



All-to-all broadcast on a Hypercube

All-to-all broadcast on an eight-node hypercube.


All-to-all broadcast on a Hypercube

All-to-all broadcast on a d-dimensional hypercube.
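
An illustrative C/MPI version of that procedure, assuming p = 2^d nodes, one double contributed per node, and a result array indexed by the originating node:

#include <mpi.h>

/* All-to-all broadcast on a hypercube: in step i, each node exchanges its
   accumulated block with its partner across dimension i, so the message
   size doubles each step. result[j] ends up holding node j's value. */
void hcube_all_to_all_bc(double my_msg, double *result, int d, MPI_Comm comm) {
    int my_id;
    MPI_Comm_rank(comm, &my_id);
    result[my_id] = my_msg;                   /* start with my own one-word block */
    for (int i = 0; i < d; i++) {
        int partner = my_id ^ (1 << i);
        int blk = 1 << i;                     /* current block size */
        int my_off = my_id & ~(blk - 1);      /* start of my block */
        int pa_off = partner & ~(blk - 1);    /* start of partner's block */
        MPI_Sendrecv(&result[my_off], blk, MPI_DOUBLE, partner, 0,
                     &result[pa_off], blk, MPI_DOUBLE, partner, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}

In MPI, this whole operation is available as the MPI_Allgather collective.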



All-to-all Reduction
• Similar communication pattern to all-to-all broadcast, except in the reverse order.

• On receiving a message, a node must combine it with the local copy of the message that
has the same destination as the received message before forwarding the combined
message to the next neighbor.



Cost Analysis
• On a ring, the time is given by: T = (ts + tw m)(p - 1).

• On a mesh, the time is given by: T = 2ts(√p - 1) + tw m(p - 1).

• On a hypercube, summing over the log p steps with doubling message sizes gives: T = Σ i=1..log p (ts + 2^(i-1) tw m) = ts log p + tw m(p - 1).



All-to-all broadcast: Notes
• All of the algorithms presented above are asymptotically optimal in message size.

• It is not possible to port algorithms for higher-dimensional networks (such as a hypercube) onto a ring, because this would cause contention on the ring's channels.



All-to-all broadcast: Notes

Contention for a channel when the hypercube is mapped onto a ring.



All-Reduce and Prefix-Sum Operations
• In all-reduce, each node starts with a buffer of size m, and the final result of the operation is an identical buffer of size m on each node, formed by combining the original p buffers using an associative operator.

• All-reduce is semantically identical to an all-to-one reduction followed by a one-to-all broadcast, but that formulation is not the most efficient. A better implementation uses the pattern of all-to-all broadcast instead; the only difference is that the message size does not grow, because incoming messages are combined rather than concatenated. The time for this operation is (ts + tw m) log p.

• All-reduce is different from all-to-all reduction, in which p simultaneous all-to-one reductions take place, each with a different destination for the result.
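
In MPI, all-reduce is the single collective MPI_Allreduce; a minimal runnable example:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double mine = (double)rank, total = 0.0;
    /* Every node contributes one value; every node receives the combined sum. */
    MPI_Allreduce(&mine, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    printf("node %d: total = %g\n", rank, total);
    MPI_Finalize();
    return 0;
}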



The Prefix-Sum Operation
• Given p numbers n0, n1, …, np-1 (one on each node), the problem is to compute the sums sk = Σ i=0..k ni for all k between 0 and p - 1.

• Initially, nk resides on the node labeled k, and at the end of the procedure, the same node holds sk.



The Prefix-Sum Operation

Computing prefix sums on an eight-node hypercube. At each node, square brackets show the local prefix sum
accumulated in the result buffer and parentheses enclose the contents of the outgoing message buffer for the
next step.
The Prefix-Sum Operation
• The operation can be implemented using the all-to-all broadcast kernel.

• We must account for the fact that in prefix sums, node k uses information only from the nodes whose labels are less than or equal to k.

• This is implemented using an additional result buffer. The content of an incoming message
is added to the result buffer only if the message comes from a node with a smaller label
than the recipient node.

• The contents of the outgoing message (denoted by parentheses in the figure) are updated
with every incoming message.



The Prefix-Sum Operation

Prefix sums on a d-dimensional hypercube.
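
A hedged C/MPI transliteration of that procedure, assuming p = 2^d nodes and one number per node. The result buffer absorbs only messages that originated at smaller labels, while the outgoing buffer absorbs everything:

#include <mpi.h>

/* Prefix sums on a hypercube: node k returns n_0 + n_1 + ... + n_k. */
double hcube_prefix_sum(double my_number, int d, MPI_Comm comm) {
    int my_id;
    MPI_Comm_rank(comm, &my_id);
    double result = my_number;   /* local prefix sum (square brackets in the figure) */
    double msg = my_number;      /* outgoing message buffer (parentheses in the figure) */
    for (int i = 0; i < d; i++) {
        int partner = my_id ^ (1 << i);
        double incoming;
        MPI_Sendrecv(&msg, 1, MPI_DOUBLE, partner, 0,
                     &incoming, 1, MPI_DOUBLE, partner, 0,
                     comm, MPI_STATUS_IGNORE);
        msg += incoming;                          /* outgoing buffer takes everything */
        if (partner < my_id) result += incoming;  /* result only from smaller labels */
    }
    return result;
}

MPI packages this operation directly as the MPI_Scan collective.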



Scatter and Gather
• In the scatter operation, a single node sends a unique message of size m to every other node
(also called a one-to-all personalized communication).

• In the gather operation, a single node collects a unique message from each node.

• While the scatter operation is fundamentally different from broadcast, the algorithmic
structure is similar, except for differences in message sizes (messages get smaller in scatter
and stay constant in broadcast).

• The gather operation is exactly the inverse of the scatter operation and can be executed as such.
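
In MPI, these are the MPI_Scatter and MPI_Gather collectives; a small illustrative program with placeholder values:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    /* Scatter: node 0 holds p distinct values and sends a unique one to each node. */
    double *all = NULL;
    if (rank == 0) {
        all = malloc(p * sizeof(double));
        for (int i = 0; i < p; i++) all[i] = 10.0 * i;
    }
    double mine;
    MPI_Scatter(all, 1, MPI_DOUBLE, &mine, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    mine += 1.0;                              /* some local work */

    /* Gather: the exact inverse pattern, collecting one value from each node. */
    MPI_Gather(&mine, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < p; i++) printf("%g ", all[i]);
        printf("\n");
        free(all);
    }
    MPI_Finalize();
    return 0;
}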



Gather and Scatter Operations

Scatter and gather operations.



Example of the Scatter Operation

The scatter operation on an eight-node hypercube.


Cost of Scatter and Gather
• There are log p steps; in each step, the machine size halves and the data size halves.

• The time for this operation is therefore: T = ts log p + tw m(p - 1).

• This time holds for a linear array as well as a 2-D mesh.

• These times are asymptotically optimal in message size.



All-to-All Personalized Communication

• Each node has a distinct message of size m for every other node.

• This is unlike all-to-all broadcast, in which each node sends the same message to all other
nodes.

• All-to-all personalized communication is also known as total exchange.
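
MPI exposes total exchange as MPI_Alltoall; a minimal sketch (the fixed-size buffers and the p <= 64 assumption are only for brevity):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    double send[64], recv[64];                /* assumes p <= 64 */
    for (int j = 0; j < p; j++)
        send[j] = 100.0 * rank + j;           /* distinct message {rank, j} for node j */

    /* Entry j of send goes to node j; entry i of recv arrived from node i. */
    MPI_Alltoall(send, 1, MPI_DOUBLE, recv, 1, MPI_DOUBLE, MPI_COMM_WORLD);

    printf("node %d received %g from node 0\n", rank, recv[0]);
    MPI_Finalize();
    return 0;
}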



All-to-All Personalized Communication

All-to-all personalized communication.



All-to-All Personalized Communication: Example

• Consider the problem of transposing a matrix.

• Each processor contains one full row of the matrix.

• The transpose operation in this case is identical to an all-to-all personalized communication operation.



All-to-All Personalized Communication: Example

All-to-all personalized communication in transposing a 4 x 4 matrix using four processes.



All-to-All Personalized Communication on a Ring
• Each node sends all pieces of data as one consolidated message of size m(p – 1) to one of
its neighbors.

• Each node extracts the information meant for it from the data received, and forwards the
remaining (p – 2) pieces of size m each to the next node.

• The algorithm terminates in p – 1 steps.

• The size of the message reduces by m at each step.



All-to-All Personalized Communication on a Ring

All-to-all personalized communication on a six-node ring. The label of each message is of the form {x,y}, where x is the label of the node that originally owned the message, and y is the label of the node that is the final destination of the message. The label ({x1,y1}, {x2,y2}, …, {xn,yn}) indicates a message formed by concatenating n individual messages.
All-to-All Personalized Communication on a Ring: Cost

• We have p - 1 steps in all.

• In step i, the message size is m(p - i).

• The total time is given by: T = Σ i=1..p-1 (ts + tw m(p - i)) = (ts + tw m p/2)(p - 1).

• The tw term in this equation can be reduced by a factor of 2 by communicating messages in both directions.



All-to-All Personalized Communication on a Mesh
• Each node first groups its p messages according to the columns of their destination nodes.

• All-to-all personalized communication is performed independently in each row with clustered messages of size m√p.

• Messages in each node are sorted again, this time according to the rows of their destination
nodes.

• All-to-all personalized communication is performed independently in each column with clustered messages of size m√p.



All-to-All Personalized Communication on a Mesh

The distribution of messages at the beginning of each phase of all-to-all personalized communication on a
3 x 3 mesh. At the end of the second phase, node i has messages ({0,i},…,{8,i}), where 0 ≤ i ≤ 8. The
groups of nodes communicating together in each phase are enclosed in dotted boundaries.
All-to-All Personalized Communication on a Mesh: Cost

• The time for the first phase is identical to that on a ring with √p processors, i.e., (ts + tw m p/2)(√p - 1).

• The time for the second phase is identical to the first. The total time is therefore: T = (2ts + tw m p)(√p - 1).

• It can be shown that the time for the local rearrangement of messages is much less than this communication time.



All-to-All Personalized Communication on a Hypercube

• Generalize the mesh algorithm to log p steps.

• At any stage in all-to-all personalized communication, every node holds p packets of size m
each.

• While communicating in a particular dimension, every node sends p/2 of these packets
(consolidated as one message).

• A node must rearrange its messages locally before each of the log p communication steps.



All-to-All Personalized Communication on a Hypercube

An all-to-all personalized communication algorithm on a three-dimensional hypercube.


All-to-All Personalized Communication on a Hypercube: Cost

• We have log p iterations, and mp/2 words are communicated in each iteration. The cost is therefore: T = (ts + tw m p/2) log p.

• This is not optimal! Each node has only m(p - 1) words of its own data to send and receive, so the tw term has a lower bound of tw m(p - 1), which is smaller than the tw m(p/2) log p term above.



All-to-All Personalized Communication on a Hypercube: Optimal Algorithm
• Each node simply performs p – 1 communication steps, exchanging m words of data with a
different node in every step.

• A node must choose its communication partner in each step so that the hypercube links do
not suffer congestion.

• In the jth communication step, node i exchanges data with node (i XOR j).

• In this schedule, all paths in every communication step are congestion-free, and none of the
bidirectional links carry more than one message in the same direction.



All-to-All Personalized Communication on a Hypercube: Optimal Algorithm

Seven steps in all-to-all personalized communication on an eight-node hypercube.


All-to-All Personalized Communication on a Hypercube: Optimal Algorithm

A procedure to perform all-to-all personalized communication on a d-dimensional hypercube. The message Mi,j initially resides on node i and is destined for node j.
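
A hedged C/MPI sketch of that schedule with one double per {i, j} message; MPI_Sendrecv pairs each exchange, and the XOR pairing keeps the underlying E-cube routes congestion-free:

#include <mpi.h>

/* Optimal all-to-all personalized communication on a hypercube:
   p - 1 pairwise exchanges. msg[j] is the message destined for node j;
   recv[j] ends up holding the message that originated at node j. */
void hcube_total_exchange(double *msg, double *recv, MPI_Comm comm) {
    int my_id, p;
    MPI_Comm_rank(comm, &my_id);
    MPI_Comm_size(comm, &p);
    recv[my_id] = msg[my_id];                 /* my message to myself */
    for (int j = 1; j < p; j++) {
        int partner = my_id ^ j;              /* step j: exchange with (i XOR j) */
        MPI_Sendrecv(&msg[partner], 1, MPI_DOUBLE, partner, 0,
                     &recv[partner], 1, MPI_DOUBLE, partner, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}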



All-to-All Personalized Communication on a Hypercube: Cost Analysis of Optimal Algorithm

• There are p - 1 steps, and each step involves a congestion-free message transfer of m words.

• We have: T = (ts + tw m)(p - 1).

• This is asymptotically optimal in message size.



Circular Shift

• A special permutation in which node i sends a data packet to node (i + q) mod p in a p-node ensemble, for some 0 < q < p.
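
On top of MPI, a circular q-shift is one paired send/receive per node; a minimal runnable sketch (q = 5 is an arbitrary example value):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p, q = 5;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int dest = (rank + q) % p;                /* node i sends to (i + q) mod p */
    int src  = (rank - q % p + p) % p;        /* and receives from (i - q) mod p */
    double out = (double)rank, in;
    MPI_Sendrecv(&out, 1, MPI_DOUBLE, dest, 0,
                 &in, 1, MPI_DOUBLE, src, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("node %d received %g from node %d\n", rank, in, src);
    MPI_Finalize();
    return 0;
}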



Circular Shift on a Mesh
• The implementation on a ring is intuitive: a q-shift can be performed in min{q, p - q} neighbor-to-neighbor communication steps.

• Mesh algorithms follow from this: all processors first shift along the rows, then along the columns.

• The associated time has an upper bound of: T = (ts + tw m)(√p + 1).



Circular Shift on a Mesh

The communication steps in a circular 5-shift on a 4 x 4 mesh.


Circular Shift on a Hypercube

• Map a linear array with 2^d nodes onto a d-dimensional hypercube.

• To perform a q-shift, we expand q as a sum of distinct powers of 2.

• If q is the sum of s distinct powers of 2, then the circular q-shift on a hypercube is performed in s phases, as sketched below.

• The time for this is upper bounded by: T = (ts + tw m)(2 log p - 1).

• If E-cube routing is used, this time can be reduced to: T = ts + tw m.
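
A sketch of the phase decomposition (hypothetical helper; the Gray-code mapping of the linear array onto the cube is abstracted away by shifting MPI ranks directly):

#include <mpi.h>

/* Circular q-shift on p = 2^d nodes: expand q into distinct powers of 2
   and perform one elementary shift per set bit of q (s phases in total). */
void q_shift(double *val, int q, MPI_Comm comm) {
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    q %= p;
    for (int k = 1; k < p; k <<= 1) {         /* k runs over powers of 2 */
        if (q & k) {                          /* one phase per 2^j term of q */
            int dest = (rank + k) % p;
            int src  = (rank - k + p) % p;
            double in;
            MPI_Sendrecv(val, 1, MPI_DOUBLE, dest, 0,
                         &in, 1, MPI_DOUBLE, src, 0,
                         comm, MPI_STATUS_IGNORE);
            *val = in;
        }
    }
}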



Circular Shift on a Hypercube

The mapping of an eight-node linear array onto a three-dimensional hypercube to perform a circular 5-shift as
a combination of a 4-shift and a 1-shift.
Circular Shift on a Hypercube

Circular q-shifts on an 8-node hypercube for 1 ≤ q < 8.


Improving Performance of Operations
• Splitting and routing messages in parts: if the message can be split into p parts, a one-to-all broadcast can be implemented as a scatter operation followed by an all-to-all broadcast operation. The time for this is: T = 2 x (ts log p + tw m(p - 1)/p) ≈ 2(ts log p + tw m).

• All-to-one reduction can be performed by an all-to-all reduction (the dual of all-to-all broadcast) followed by a gather operation (the dual of scatter).
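
A hedged MPI sketch of the first idea, assuming the message length m is divisible by p and every node has the full m-word buffer allocated:

#include <mpi.h>
#include <stdlib.h>

/* One-to-all broadcast of a large message as scatter + all-to-all broadcast:
   node 0 scatters m/p-word pieces, then MPI_Allgather reassembles the full
   m-word message on every node. */
void big_bcast(double *X, int m, MPI_Comm comm) {
    int p;
    MPI_Comm_size(comm, &p);
    int chunk = m / p;                        /* assumes m % p == 0 */
    double *piece = malloc(chunk * sizeof(double));
    MPI_Scatter(X, chunk, MPI_DOUBLE, piece, chunk, MPI_DOUBLE, 0, comm);
    MPI_Allgather(piece, chunk, MPI_DOUBLE, X, chunk, MPI_DOUBLE, comm);
    free(piece);
}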



Improving Performance of Operations

• Since an all-reduce operation is semantically equivalent to an all-to-one reduction followed by a one-to-all broadcast, the asymptotically optimal algorithms for these two operations can be used to construct a similar algorithm for the all-reduce operation.

• The intervening gather and scatter operations cancel each other. Therefore, an all-reduce operation requires only an all-to-all reduction followed by an all-to-all broadcast.



Summary



References

• Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, "Introduction to Parallel Computing", 2nd edition, Addison-Wesley, 2003. ISBN 0-201-64865-2.



Thank You.

