PDC Lecture 16 MPI - Net-New
• Open the “Build” menu and click “Build Solution”, or press F6, to
compile the program.
Executing the MPI program
• We have already compiled the MPI program.
• To keep things simple, we will run the program using
command prompt.
• The executable generated by Visual Studio can usually be found in
the “bin\Debug” folder of the project folder.
• Navigate to this folder using command:
– cd <drive_letter>:\Folder name\
– cd D:\MPI Sample Programs\MPITestApp\MPITestApp\bin\Debug
Command to execute MPI program
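Assuming MS-MPI is installed and `mpiexec` is on the PATH, a typical launch looks like the following (the process count and executable name are illustrative):

```shell
# Launch 10 copies of the compiled program as MPI processes
mpiexec -n 10 MPITestApp.exe
```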
Communicators
• Types:
– World
– Self
Communicator.world
• Contains all of the processes started with the program; nearly
every MPI.NET program uses it.
Communicator.self
• Contains only the calling process. Rarely used.
Custom communicator
• Users can create their own communicators.
• By cloning a communicator
– Produces a communicator with the same processes, same
ranks, but a separate communicator space.
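Cloning can be sketched with MPI.NET's `Communicator.Clone()` (the cast and variable names are illustrative):

```csharp
using System;
using MPI;

class CloneExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Same processes and the same ranks, but messages sent on the
            // clone can never be confused with messages sent on the original.
            Intracommunicator clone = (Intracommunicator)world.Clone();

            Console.WriteLine("Rank " + clone.Rank + " of " + clone.Size);
        }
    }
}
```

A library can communicate on such a clone without its messages colliding with the application's own traffic on `Communicator.world`.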
• Collective communication
– One to all
– All to one
– All to all
Point-to-Point MPI Communication
Blocking calls
Point-to-Point Communication
• Most basic form of communication.
• A process can send/receive data to/from another process
over a given communicator.
• Each message consists of
– Source process (rank; int value)
– Target process (rank; int value)
– Tag (int value)
– The message itself
Blocking point-to-point
• The blocking point-to-point operations will wait until a
communication has completed before continuing.
• Example:
– A blocking Send operation will not return until the message has
entered into MPI's internal buffers to be transmitted.
– A blocking Receive operation will wait until a message has
been received and completely decoded before returning.
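The blocking behavior described above can be sketched as follows (the message text is illustrative; run with at least 2 processes):

```csharp
using System;
using MPI;

class BlockingExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            if (comm.Rank == 0)
            {
                // Blocks until the message is safely inside MPI's buffers.
                comm.Send("Hello from rank 0", 1, 0);
            }
            else if (comm.Rank == 1)
            {
                // Blocks until a matching message (source 0, tag 0) has
                // arrived and been completely decoded.
                string msg = comm.Receive<string>(0, 0);
                Console.WriteLine("Rank 1 received: " + msg);
            }
        }
    }
}
```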
Non-blocking point-to-point
Note: At receiving end, instead of source rank and tag, we can use the
following code:
comm.Receive<string>(Communicator.anySource, Communicator.anyTag);
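A minimal non-blocking sketch, assuming MPI.NET's `ImmediateSend`/`ImmediateReceive` (the message text is illustrative; run with 2 processes):

```csharp
using System;
using MPI;

class NonBlockingExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            if (comm.Rank == 0)
            {
                // Returns immediately; the transfer completes in the background.
                Request req = comm.ImmediateSend("hello", 1, 0);
                // ... useful computation could overlap the communication here ...
                req.Wait();   // block only when the send must be complete
            }
            else if (comm.Rank == 1)
            {
                // Match a message from any source with any tag.
                ReceiveRequest req = comm.ImmediateReceive<string>(
                    Communicator.anySource, Communicator.anyTag);
                req.Wait();
                Console.WriteLine("Received: " + (string)req.GetValue());
            }
        }
    }
}
```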
Task (SPMD model)
• Problem: Compute sum of all integers in an array.
Steps:
• Declare and initialize an array of length 1000.
• Divide this array into 10 equal parts.
• Using 10 MPI processes, compute sum for each divided
part.
• Combine all the sub-solutions to generate a final result.
Note: Implement the solution using blocking point-to-point
communication.
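One possible solution sketch using only blocking point-to-point calls (chunk size and tags are illustrative; run with 10 processes):

```csharp
using System;
using MPI;

class ArraySum
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;  // expects 10 processes
            int chunk = 1000 / comm.Size;                 // 100 elements each
            int[] myPart = new int[chunk];

            if (comm.Rank == 0)
            {
                // Root declares and initializes the full array: 1, 2, ..., 1000.
                int[] data = new int[1000];
                for (int i = 0; i < data.Length; i++) data[i] = i + 1;

                // Keep the first chunk; send one chunk to each other rank.
                Array.Copy(data, 0, myPart, 0, chunk);
                for (int r = 1; r < comm.Size; r++)
                {
                    int[] part = new int[chunk];
                    Array.Copy(data, r * chunk, part, 0, chunk);
                    comm.Send(part, r, 0);
                }
            }
            else
            {
                comm.Receive(0, 0, ref myPart);  // blocking receive of my chunk
            }

            // Every process sums its own chunk (SPMD: same program, own data).
            int partial = 0;
            foreach (int v in myPart) partial += v;

            if (comm.Rank == 0)
            {
                // Combine the sub-solutions into the final result.
                int total = partial;
                for (int r = 1; r < comm.Size; r++)
                    total += comm.Receive<int>(r, 1);
                Console.WriteLine("Sum = " + total);  // 500500 for 1..1000
            }
            else
            {
                comm.Send(partial, 0, 1);
            }
        }
    }
}
```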
Data Types
• There are three kinds of types that can be transmitted via
MPI.NET:
– Primitive types: These are the basic types in C#, such as
integers and floating-point numbers.
– Public structures: user-defined value types (structs) whose
fields can themselves be transmitted.
– Serializable classes: objects of any class marked with the
[Serializable] attribute.
Output (without barrier): no synchronization. Rank 4 completed all 5
iterations before any other rank could even start the first one.
Sample code (with barrier)
Clearly, at any given time, all the processes are at the same step.
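The barrier pattern described above can be sketched as follows (the loop count and message text are illustrative):

```csharp
using System;
using MPI;

class BarrierExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            for (int step = 0; step < 5; step++)
            {
                Console.WriteLine("Rank " + comm.Rank + " at step " + step);

                // No process proceeds to the next iteration until every
                // process in the communicator has reached this point.
                comm.Barrier();
            }
        }
    }
}
```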
All to one
Collective Communication
Gathering the data: All to one
• The MPI Gather operation collects data provided by all of
the processes in a communicator on a single (root)
process.
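A minimal Gather sketch, assuming MPI.NET's `Intracommunicator.Gather` (the gathered strings are illustrative):

```csharp
using System;
using MPI;

class GatherExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // Every process contributes one value; only the root (rank 0)
            // receives the full array, other ranks get null.
            string mine = "greeting from rank " + comm.Rank;
            string[] all = comm.Gather(mine, 0);

            if (comm.Rank == 0)
                foreach (string s in all)
                    Console.WriteLine(s);
        }
    }
}
```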
One to all: Broadcast
• Example: Consider a system that takes user input on a single
process (rank 0) and distributes it to all of the processes
so that they all execute the command concurrently.
Broadcast sample code
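A minimal broadcast sketch along the lines described above (the message text is illustrative):

```csharp
using System;
using MPI;

class BroadcastExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // All processes define the same variable, but only the root
            // gives it a meaningful value before the broadcast.
            string msg = null;
            if (comm.Rank == 0)
                msg = "run the command";

            comm.Broadcast(ref msg, 0);  // after this, every rank holds the value

            Console.WriteLine("Rank " + comm.Rank + " got: " + msg);
        }
    }
}
```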
Output
Broadcast(ref msg, 0)
• It takes two arguments:
– The first is the variable to broadcast, passed by reference;
the root supplies its value and every other process receives it.
– The second is the rank of the root process, which will supply
the value.
• Here all processes define the same variable, but only the
root process gives it a meaningful value.
Scatter vs Broadcast
• The Scatter collective is similar to Broadcast, but instead of
sending the same value to every process, the root sends a
different piece of an array to each process.
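A minimal Scatter sketch, assuming MPI.NET's `Intracommunicator.Scatter` (the array contents are illustrative):

```csharp
using System;
using MPI;

class ScatterExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // Only the root needs to supply the array; it must hold one
            // element per process in the communicator.
            int[] values = null;
            if (comm.Rank == 0)
            {
                values = new int[comm.Size];
                for (int i = 0; i < values.Length; i++)
                    values[i] = i * 10;
            }

            // Unlike Broadcast, each rank receives a *different* element.
            int mine = comm.Scatter(values, 0);
            Console.WriteLine("Rank " + comm.Rank + " received " + mine);
        }
    }
}
```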