Send and Receive

MPI_Isend and MPI_Irecv allow for non-blocking send and receive operations in MPI. After calling these functions, the program is free to perform other computations while the communication is happening in the background. When the data in the send/receive buffers is needed, MPI_Wait or MPI_Test must be used to check that the communication has completed. This allows for overlapping of communication and computation for improved performance compared to the blocking send/receive functions.


Send and receive:

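A minimal sketch of the pattern this exercise covers, assuming rank 0 sends a single integer to rank 1 with the blocking calls (the value and variable names are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    int value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send: returns once the send buffer can safely be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: returns once the data is in the receive buffer */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}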
Output:
Send and receive message:

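A sketch of one way the message version might look, assuming rank 0 sends a character string and rank 1 prints it (the buffer size and message text are assumptions):

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    char message[32];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(message, "Hello from rank 0");
        /* Send the string including its terminating '\0' */
        MPI_Send(message, (int)strlen(message) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* count gives the maximum number of characters the buffer can hold */
        MPI_Recv(message, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received: %s\n", message);
    }

    MPI_Finalize();
    return 0;
}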
Output:
Non-blocking communications:

In one of the previous lessons we used the MPI_Send and MPI_Recv functions to communicate between the ranks. We saw that these functions are blocking: MPI_Send will only return when the program can safely modify the send buffer, and MPI_Recv will only return once the data has been received and written to the receive buffer. This is safe and usually straightforward, but it causes the program to wait while the communication is happening. Usually there is computation that we could perform while waiting for data.

The MPI standard includes non-blocking versions of the send and receive
functions, MPI_Isend and MPI_Irecv. These functions return immediately, giving you more
control over the flow of the program. After calling them, it is not safe to modify the send or the
receive buffer, but the program is free to continue with other operations. When it needs the
data in the buffers, it must make sure the communication has completed, using
the MPI_Wait and MPI_Test functions.
MPI_Isend():
int MPI_Isend(
    void* data,
    int count,
    MPI_Datatype datatype,
    int destination,
    int tag,
    MPI_Comm communicator,
    MPI_Request* request)

data           Pointer to the start of the data being sent
count          Number of elements to send
datatype       The type of the data being sent
destination    The rank number of the rank the data will be sent to
tag            A message tag (integer)
communicator   The communicator
request        Pointer for writing the request structure

MPI_Irecv():
int MPI_Irecv(
    void* data,
    int count,
    MPI_Datatype datatype,
    int source,
    int tag,
    MPI_Comm communicator,
    MPI_Request* request)

data           Pointer to where the received data should be written
count          Maximum number of elements received
datatype       The type of the data being received
source         The rank number of the rank sending the data
tag            A message tag (integer)
communicator   The communicator
request        Pointer for writing the request structure
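
A minimal sketch of how the two calls combine with MPI_Wait, assuming ranks 0 and 1 exchange one integer each: both transfers are started, other work could run in between, and the requests are completed before the buffers are used again:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, send_value, recv_value;
    MPI_Request send_req, recv_req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {
        int other = 1 - rank;    /* ranks 0 and 1 are each other's partner */
        send_value = 100 + rank;

        /* Start both transfers; neither call blocks */
        MPI_Isend(&send_value, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &send_req);
        MPI_Irecv(&recv_value, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &recv_req);

        /* ... computation that does not touch the buffers could run here ... */

        /* Complete the communication before using the buffers */
        MPI_Wait(&send_req, MPI_STATUS_IGNORE);
        MPI_Wait(&recv_req, MPI_STATUS_IGNORE);

        printf("Rank %d received %d\n", rank, recv_value);
    }

    MPI_Finalize();
    return 0;
}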
Output:

Collective:

The following program creates an array called vector that contains a list of n_numbers on each
rank. The first rank contains the numbers from 1 to n_numbers, the second rank from
n_numbers + 1 to 2*n_numbers, and so on. It then calls the find_max and find_sum functions,
which should calculate the sum and maximum of the vector.
These functions are not implemented in parallel and only return the sum and the maximum of
the local vectors. Modify the find_sum and find_max functions to work correctly in parallel
using MPI_Reduce or MPI_Allreduce.
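
One possible parallel version using MPI_Allreduce, so that every rank ends up with the global result (the vector is assumed to hold doubles; the exact signatures in the original exercise skeleton are assumptions):

#include <mpi.h>

/* Sum of the distributed vector: local sum, then MPI_SUM across all ranks */
double find_sum(double *vector, int n_numbers) {
    double local_sum = 0.0, global_sum;
    for (int i = 0; i < n_numbers; i++)
        local_sum += vector[i];
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global_sum;
}

/* Maximum of the distributed vector: local maximum, then MPI_MAX */
double find_max(double *vector, int n_numbers) {
    double local_max = vector[0], global_max;
    for (int i = 1; i < n_numbers; i++)
        if (vector[i] > local_max)
            local_max = vector[i];
    MPI_Allreduce(&local_max, &global_max, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    return global_max;
}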
Output:
Ring:

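A sketch of a typical ring exchange, assuming each rank passes its rank number to the next rank and receives from the previous one. The non-blocking calls avoid the deadlock that could occur if every rank made a blocking send at the same time:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, n_ranks, recv_value;
    MPI_Request send_req, recv_req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

    int next = (rank + 1) % n_ranks;            /* neighbour to send to */
    int prev = (rank + n_ranks - 1) % n_ranks;  /* neighbour to receive from */

    /* Pass this rank's number one step around the ring */
    MPI_Isend(&rank, 1, MPI_INT, next, 0, MPI_COMM_WORLD, &send_req);
    MPI_Irecv(&recv_value, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &recv_req);
    MPI_Wait(&send_req, MPI_STATUS_IGNORE);
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recv_value, prev);

    MPI_Finalize();
    return 0;
}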
Output:
For reference, the parameters of the blocking MPI_Recv:

data           Pointer to where the received data should be written
count          Maximum number of elements received
datatype       The type of the data being received
source         The rank number of the rank sending the data
tag            A message tag (integer)
communicator   The communicator (we have used MPI_COMM_WORLD in earlier examples)
status         A pointer for writing the exit status of the MPI command

• Many problems can be distributed across several processors and solved faster.
• mpiexec runs copies of the program.
• The copies are separate MPI ranks.
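
For example, assuming the source file is called program.c, it can be compiled and started as four copies (four ranks) with:

mpicc program.c -o program
mpiexec -n 4 ./program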
