
MPI.NET
Develop parallel applications using the Message Passing Interface
Introduction
• The MPI programming model is based on message passing.
• Unlike multithreading, where multiple threads share the same program state, each MPI process has its own local program state.
• These processes communicate with each other to solve their parts of the bigger problem.
• These processes can run on different machines with different architectures or on a single machine.
Installation
• Download the MPI.NET software development kit using this link.
• Using the executable file, install the SDK.


Test Installation Success
Open “Command Prompt”, type “mpiexec”, and hit Enter.


Creating MPI project using Visual Studio 2019
Open VS2019 and click the button to create a new project.
Creating MPI project (cont.)
Select the appropriate template and click “Next” to create a “Console” application.
Creating MPI project (cont.)
Name your project and click the “Create” button.


Creating MPI project (cont.)
• Once the project is created, click the “Project” menu and select “Manage NuGet Packages...”
Creating MPI project (cont.)
Once the NuGet tab is open, click “Browse” and search for “mpi.net” in the search bar.
Creating MPI project (cont.)
Select the mentioned package and hit install.
Creating MPI project (cont.)
Once the package has finished installing, go to “Solution Explorer” and expand the “References” region to make sure that “MPI” has been installed successfully.
Creating MPI project (cont.)
Add a using directive to import the namespace and classes from the “MPI” library.
Hello World MPI program
using System;
using MPI;

class Program
{
    static void Main(string[] args)
    {
        // Initialize the MPI environment; it is finalized when the using block exits.
        using (new MPI.Environment(ref args))
        {
            Console.WriteLine($"Hello world! from rank {Communicator.world.Rank} running on {MPI.Environment.ProcessorName}");
        }
    }
}

• Open the “Build” menu and click “Build Solution”, or just hit ‘F6’, to compile the program.
Executing the MPI program
• We have already compiled the MPI program.
• To keep things simple, we will run the program using the command prompt.
• Usually the executable generated by VS can be found in the “Debug” folder under “bin” in the project folder.
• Navigate to this folder using the command:
– cd <drive_letter>:\Folder name\
– cd D:\MPI Sample Programs\MPITestApp\MPITestApp\bin\Debug
Command to execute MPI program
• Once navigated to the target folder, execute the following command:
– mpiexec -n <no_of_processes> <executable_file_name>
– mpiexec -n 10 MPITestApp.exe
Output
The complete flow of executing the MPI program can be observed.
MPI Environment
• All MPI programs should initialize the MPI environment and finalize it at the end of the program.
• The rest of the program lies within.
• We already did this with the help of the “using” block.


MPI Communicator
• The communicator is an essential part of MPI programs.
• It enables communication between different MPI processes.
• A communicator is simply a distinctive namespace in which processes belonging to that communicator can interact without colliding with the messages of other communicators.
• An MPI program can have more than one communicator collectively working on a task or sub-tasks.
MPI Communicator
• Properties:
– Rank: Process identifier
– Size: Gives the total number of processes inside the communicator

• Types:
– World
– Self
Communicator.world
• The world communicator contains all the processes that the MPI program started with.
• All these processes belong to the same communicator, and hence they can communicate with each other.
Communicator.self

• The “self” communicator is quite a bit more limited.

• Each process has its own communicator.

• So, each communicator contains only one process.

• Rarely used.
Custom communicator
• Users can create their own communicators:

• By cloning a communicator
– Produces a communicator with the same processes and the same ranks, but a separate communication space.

• By selecting subgroups of those processes (see the sketch below).


Example
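A minimal sketch of both approaches, assuming MPI.NET's Clone() and Split() methods; the even/odd grouping is an illustrative choice:

using System;
using MPI;

class CustomCommunicators
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            // Clone: same processes and ranks, but a separate communication space.
            Intracommunicator cloned = (Intracommunicator)world.Clone();

            // Split: subgroup communicators, here grouping even and odd ranks (illustrative).
            Intracommunicator group = (Intracommunicator)world.Split(world.Rank % 2, world.Rank);

            Console.WriteLine($"World rank {world.Rank} is rank {group.Rank} in a subgroup of size {group.Size}");
        }
    }
}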
Output of the example
A little modification in the code
Output
Communication between
Processes
Types
Types of communication
• Point-to-Point communication
– Blocking
– Non-blocking

• Collective communication
– One to all
– All to one
– All to all
Point to Point MPI
Communication
Blocking calls
Point-to-Point Communication
• Most basic form of communication.
• A process can send/receive data to/from another process
over a given communicator.
• Each message consists of
– Source process (rank; int value)
– Target process (rank; int value)
– Tag (int value)
– The message itself
Blocking point-to-point
• The blocking point-to-point operations will wait until a
communication has completed before continuing.

• Example:
– A blocking Send operation will not return until the message has
entered into MPI's internal buffers to be transmitted.
– A blocking Receive operation will wait until a message has
been received and completely decoded before returning.
Non-blocking point-to-point
• In a non-blocking point-to-point operation, a process continues after initiating the communication, without waiting for it to be completed.
• The call to a non-blocking operation returns as soon as the operation is begun, not when it completes, as the sketch below shows.
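A minimal sketch of a non-blocking exchange, assuming MPI.NET's ImmediateSend/ImmediateReceive calls and that ReceiveRequest.GetValue() retrieves the delivered payload:

using System;
using MPI;

class NonBlockingExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                // Returns immediately; the send proceeds in the background.
                Request send = comm.ImmediateSend("Hello from rank 0", 1, 0);
                // ... useful work could overlap with the communication here ...
                send.Wait();   // block only when completion actually matters
            }
            else if (comm.Rank == 1)
            {
                ReceiveRequest recv = comm.ImmediateReceive<string>(0, 0);
                // ... useful work could overlap with the communication here ...
                recv.Wait();
                Console.WriteLine((string)recv.GetValue());
            }
        }
    }
}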
Blocking point-to-point example
Output - Blocking point-to-point example

In this example, only rank 0 and rank 1 are communicating.

Steps:
– Rank 0 wants to send a string message to rank 1.
– Rank 0 prepares the message by mentioning the receiver rank and tags the message with “0”.
– Rank 1 expects a message of type string from rank 0 with tag 0.

Note: At the receiving end, instead of the source rank and tag, we can use the following code:
comm.Receive<string>(Communicator.anySource, Communicator.anyTag);
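Based on the steps above, a minimal sketch of the blocking exchange (the message text is illustrative):

using System;
using MPI;

class BlockingExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                // Send a string to rank 1, tagged with 0; returns once the
                // message has entered MPI's internal buffers.
                comm.Send("Hello from rank 0", 1, 0);
            }
            else if (comm.Rank == 1)
            {
                // Block until a string from rank 0 with tag 0 has arrived.
                string msg = comm.Receive<string>(0, 0);
                Console.WriteLine($"Rank 1 received: {msg}");
            }
        }
    }
}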
Task (SPMD model)
• Problem: Compute the sum of all integers in an array.
Steps:
• Declare and initialize an array of length 1000.
• Divide this array into 10 equal parts.
• Using 10 MPI processes, compute the sum of each part.
• Combine all the sub-solutions to generate the final result.
Note: Implement the solution using blocking point-to-point communication.
Data Types
• There are three kinds of types that can be transmitted via MPI.NET:
– Primitive types: These are the basic types in C#, such as integers and floating-point numbers.
– Public structures: C# structs whose fields are publicly accessible, as shown below.
– Serializable classes: A class can be made serializable by attaching the “Serializable” attribute, as shown below.
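A minimal sketch of the two user-defined kinds; the type names are illustrative:

using System;

// A public structure with public fields can be transmitted directly.
public struct Point
{
    public double X;
    public double Y;
}

// A class becomes transmittable by attaching the Serializable attribute.
[Serializable]
public class Employee
{
    public string Name;
    public int Id;
}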
Sending an array from rank 0 to 1
Rank 1 declares an array of sufficient length to receive the data from rank 0.
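A minimal sketch, assuming MPI.NET's ref-based Receive overload for pre-allocated arrays (the array contents are illustrative):

using System;
using MPI;

class ArraySendExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                int[] data = { 1, 2, 3, 4, 5 };
                comm.Send(data, 1, 0);
            }
            else if (comm.Rank == 1)
            {
                // The receive buffer must already be long enough for the incoming data.
                int[] data = new int[5];
                comm.Receive(0, 0, ref data);
                Console.WriteLine($"Rank 1 received: {string.Join(", ", data)}");
            }
        }
    }
}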
Collective Communication
Between MPI processes
Collective communication
• A more structured alternative to point-to-point communication.
• All the processes within a communicator collaborate on a single communication operation.
• It is often easier to write, maintain, and debug a parallel application written using collective communication than one written using point-to-point communication.
– Think of the case where all processes are communicating with one another.
• MPI implementations contain optimized algorithms for collective operations, so they offer better performance.
Barrier
• Usually processes of parallel applications work
independently.
• At some point we may want all the processes to be on the
same step.
• Barrier is a collective operation used for the purpose.
• When a process enters the barrier, it does not exit the
barrier until all processes have entered the barrier.
Sample code (without barrier)
Output

No synchronization. Rank 4 completed all 5 iterations before any other rank could
even start the first one.
Sample code (with barrier)
Place the barrier before or after the instruction.
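A minimal sketch with the barrier placed after the instruction (the iteration count is illustrative):

using System;
using MPI;

class BarrierExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine($"Rank {comm.Rank} is on iteration {i}");
                // No process moves to iteration i + 1 until all have finished iteration i.
                comm.Barrier();
            }
        }
    }
}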
Output

Clearly, at any given time, all the processes are at the same step.
All to one
Collective Communication
Gathering the data: All to one
• The MPI Gather operation collects data provided by all of the processes in a communicator on a single (root) process.
• Sometimes, this root process is the one responsible for presenting all the collected information to the user.
– Think of the example where 10 processes calculated the sums of different parts of an array. The root process can gather all the (sub)solutions and compute the final sum.
Rank 0 receiving rank numbers from all of the processes
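A minimal sketch matching this description:

using System;
using MPI;

class GatherExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            // Every process contributes its rank; only rank 0 receives the full array.
            int[] ranks = comm.Gather(comm.Rank, 0);
            if (comm.Rank == 0)
                Console.WriteLine($"Gathered ranks: {string.Join(", ", ranks)}");
        }
    }
}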
Output
Gather(comm.Rank, 0)
• The first parameter of this function is the value transmitted by each of the MPI processes.
– Here the rank number is being transmitted by all the processes.

• The second parameter defines the root node at which the data is gathered.
– Here the root node is rank 0.

• The “Gather” function returns an array.
Gather(comm.Rank, 0) (cont.)
• The type of the array is determined by the type of the 1st parameter.
– Here the 1st parameter (comm.Rank) is of integer type.

• The ith value in the array corresponds to the value provided by the process with rank i.

• All the processes other than the root process receive a null array.
Allgather vs Gather
• Allgather is similar to Gather, with two differences:
1. There is no parameter defining the root node.
2. All processes receive the same array containing the contributions from every process (instead of the null array that Gather gives non-root processes).

• An Allgather is, therefore, the same as a Gather followed by a Broadcast.

• Syntax: int[] ranksArray = comm.Allgather(comm.Rank);


One to All
Collective Communication
Spreading the message: One to All
• Whereas the Gather and Allgather collectives bring data from all of the processes, the Broadcast and Scatter collectives distribute data from one process to all of the processes.

• Example:
• Consider a system that takes user input on a single process (rank 0) and distributes it to all of the processes so that they all execute the command concurrently.
Broadcast sample code
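A minimal sketch of a broadcast from rank 0 (the message contents are illustrative):

using System;
using MPI;

class BroadcastExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            string msg = null;
            if (comm.Rank == 0)
                msg = "run command";   // only the root supplies a meaningful value
            comm.Broadcast(ref msg, 0);
            Console.WriteLine($"Rank {comm.Rank} received: {msg}");
        }
    }
}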
Output
Broadcast(ref msg, 0)
• It requires two arguments:
– The first argument contains the value to send.
– The second is the rank of the root process, which will supply
the value.

• Here all processes define the same variable, but only the
root process gives it a meaningful value.
Scatter vs Broadcast
• The Scatter collective is similar to Broadcast.

• Scatter, however, sends different values to each of the processes, allowing the root to hand out different tasks to each of the other processes.

• The root process provides an array of values, in which the ith value will be sent to the process with rank i, as the sketch below shows.
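A minimal sketch, assuming MPI.NET's Scatter call, where the root provides one value per process (the task numbers are illustrative):

using System;
using MPI;

class ScatterExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            int[] tasks = null;
            if (comm.Rank == 0)
            {
                // The ith entry will be delivered to the process with rank i.
                tasks = new int[comm.Size];
                for (int i = 0; i < comm.Size; i++)
                    tasks[i] = i * 100;
            }
            int myTask = comm.Scatter(tasks, 0);
            Console.WriteLine($"Rank {comm.Rank} got task {myTask}");
        }
    }
}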
All to all
Collective Communication
Something for everyone: All to all
• The Alltoall collective transmits data from every process to every other process.

• Each (sending) process provides an array whose ith value will be sent to the process with rank i.

• Each (receiving) process then receives in return a different array, whose jth value is the value received from the process with rank j.
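A minimal sketch, assuming MPI.NET's Alltoall call (the message strings are illustrative):

using System;
using MPI;

class AlltoallExample
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            // outgoing[i] is sent to the process with rank i.
            string[] outgoing = new string[comm.Size];
            for (int i = 0; i < comm.Size; i++)
                outgoing[i] = $"from {comm.Rank} to {i}";

            // incoming[j] holds the value received from the process with rank j.
            string[] incoming = comm.Alltoall(outgoing);
            Console.WriteLine($"Rank {comm.Rank} received: {string.Join(" | ", incoming)}");
        }
    }
}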
