
Parallel Programming

Using Basic MPI

Presented by
Timothy H. Kaiser, Ph.D.
San Diego Supercomputer Center
Talk Overview
• Background on MPI
• Documentation
• Hello world in MPI
• Basic communications
• Simple send and receive program

Background on MPI
• MPI - Message Passing Interface
– Library standard defined by a committee of
vendors, implementers, and parallel
programmers
– Used to create parallel programs based on
message passing
• 100% portable: one standard, many
implementations
• Available on almost all parallel machines in C
and Fortran
• Over 100 advanced routines, but only 6 basic ones are needed

Documentation
• MPI home page (contains the library
standard): www.mcs.anl.gov/mpi
• Books
"MPI: The Complete Reference" by Snir, Otto,
Huss-Lederman, Walker, and Dongarra, MIT
Press (also in Postscript and html)
"Using MPI" by Gropp, Lusk and Skjellum, MIT
Press
• Tutorials
many online, just do a search

MPI Implementations
• Most parallel supercomputer vendors provide
optimized implementations
• Others:
– www.lam-mpi.org
– www-unix.mcs.anl.gov/mpi/mpich
– GLOBUS:
www.globus.org/mpi/

Key Concepts of MPI
• Used to create parallel programs based on
message passing
– Normally the same program is running on
several different processors
– Processors communicate using message
passing
• Typical methodology:
start job on n processors
do i=1 to j
each processor does some calculation
pass messages between processors
end do
end job

Messages
• Simplest message: an array of data of one
type.
• Predefined types correspond to commonly
used types in a given language
– MPI_REAL (Fortran), MPI_FLOAT (C)
– MPI_DOUBLE_PRECISION (Fortran),
MPI_DOUBLE (C)
– MPI_INTEGER (Fortran), MPI_INT (C)
• Users can also define more complex types and send them as
packaged messages (see the sketch below).

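• The slides do not show how a user-defined type is built; as a rough sketch (not one of the 6 basic calls), MPI_Type_contiguous groups several elements of a predefined type into one new type that can then be sent as a single element. The function name and "partner" rank below are placeholders for illustration only:

#include <mpi.h>
/* Sketch only: build a derived type covering 3 doubles and send it as
   one element.  "partner" stands for a valid destination rank. */
void send_triplet(double xyz[3], int partner)
{
    MPI_Datatype triplet;                        /* new user-defined type */
    MPI_Type_contiguous(3, MPI_DOUBLE, &triplet);
    MPI_Type_commit(&triplet);                   /* must commit before use */
    MPI_Send(xyz, 1, triplet, partner, 0, MPI_COMM_WORLD);
    MPI_Type_free(&triplet);
}
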
Communicators
• Communicator
– A collection of processors working on some
part of a parallel job
– Used as a parameter for most MPI calls
– MPI_COMM_WORLD includes all of the
processors in your job
– Processors within a communicator are
assigned numbers (ranks) 0 to n-1
– Can create subsets of MPI_COMM_WORLD

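• How a subset of MPI_COMM_WORLD is created is not covered in this deck; one standard way is MPI_Comm_split. A minimal sketch follows (splitting by rank parity is just an example of a "color" value):

#include <mpi.h>
/* Sketch: split MPI_COMM_WORLD into two communicators, one holding the
   even-ranked processors and one holding the odd-ranked ones. */
void split_by_parity(void)
{
    int rank;
    MPI_Comm half;                               /* new sub-communicator */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half);
    /* within "half", processors are renumbered 0 .. (size of half)-1 */
    MPI_Comm_free(&half);
}
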
Include files
• The MPI include file
– C: mpi.h
– Fortran: mpif.h (an f90 module is a good
place for this)
• Defines many constants used within MPI
programs
• In C, also defines the interfaces for the functions
• Compilers know where to find the include files

Minimal MPI program
• Every MPI program needs these…
– C version
/* the mpi include file */
#include <mpi.h>
int main(int argc, char **argv)
{
    int nPEs, ierr, iam;
    /* Initialize MPI */
    ierr = MPI_Init(&argc, &argv);
    /* How many processors (nPEs) are there? */
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
    /* What processor am I (what is my rank)? */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);
    /* ... */
    ierr = MPI_Finalize();
    return 0;
}

• In C MPI routines are functions and return an error value

Minimal MPI program
• Every MPI program needs these…
– Fortran version
program minimal
! MPI include file
include 'mpif.h'
integer nPEs, ierr, iam
! Initialize MPI
call MPI_Init(ierr)
! How many processors (nPEs) are there?
call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)
! What processor am I (what is my rank)?
call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)
! ...
call MPI_Finalize(ierr)
end program minimal

• In Fortran, MPI routines are subroutines, and the last parameter is an error value

Exercise 1 : Hello World
• Write a parallel “hello world” program
– Initialize MPI
– Have each processor print out “Hello,
World” and its processor number (rank)
– Quit MPI

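• One possible C answer to Exercise 1 (the exact wording of the output line is up to you):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nPEs, iam;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);
    printf("Hello, World from processor %d of %d\n", iam, nPEs);
    MPI_Finalize();
    return 0;
}
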
Basic Communication
• Data values are transferred from one
processor to another
– One processor sends the data
– Another receives the data
• Synchronous
– Call does not return until the message is
sent or received
• Asynchronous
– Call indicates a start of send or receive, and
another call is made to determine if finished

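• The asynchronous calls are not among the 6 basic routines covered in this talk; for reference only, a nonblocking receive looks roughly like this (MPI_Irecv starts the transfer, MPI_Wait completes it; "source" stands for a valid sending rank):

#include <mpi.h>
/* Sketch only: start a receive, overlap it with other work, then wait. */
void async_recv_sketch(int source)
{
    double buffer[100];
    MPI_Request request;
    MPI_Status status;
    MPI_Irecv(buffer, 100, MPI_DOUBLE, source, 0, MPI_COMM_WORLD, &request);
    /* ... useful computation can overlap the transfer here ... */
    MPI_Wait(&request, &status);     /* returns once the data is in buffer */
}
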
Synchronous Send
• C
– MPI_Send(&buffer, count, datatype, destination,
tag, communicator);
• Fortran
– Call MPI_Send(buffer, count, datatype,
destination, tag, communicator, ierr)
• Call blocks until the message is on the way

Call MPI_Send(buffer, count, datatype,
destination, tag, communicator, ierr)

• Buffer: The data array to be sent
• Count : Length of data array (in elements, 1 for scalars)
• Datatype : Type of data, for example :
MPI_DOUBLE_PRECISION, MPI_INT, etc.
• Destination : Destination processor number (within
given communicator)
• Tag : Message type (arbitrary integer)
• Communicator : Your set of processors
• Ierr : Error return (Fortran only)

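• Putting the arguments together, a C fragment (inside main, with mpi.h included) that sends 10 double-precision values to processor 1 with tag 17 could look like this; the buffer, destination, and tag are example values only:

double work[10];                 /* the data array to be sent */
int dest = 1, tag = 17, ierr;
ierr = MPI_Send(work, 10, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
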
Synchronous Receive
• C
– MPI_Recv(&buffer, count, datatype, source, tag,
communicator, &status);
• Fortran
– Call MPI_RECV(buffer, count, datatype,
source, tag, communicator, status, ierr)
• Call blocks the program until the message is in the buffer
• Status - contains information about the incoming message
– C
• MPI_Status status;
– Fortran
• Integer status(MPI_STATUS_SIZE)

Call MPI_Recv(buffer, count, datatype,
source, tag, communicator,
status, ierr)
• Buffer: The data array to be received
• Count : Maximum length of data array (in elements, 1 for
scalars)
• Datatype : Type of data, for example :
MPI_DOUBLE_PRECISION, MPI_INT, etc.
• Source : Source processor number (within given
communicator)
• Tag : Message type (arbitrary integer)
• Communicator : Your set of processors
• Status: Information about message
• Ierr : Error return (Fortran only)

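• A matching C receive fragment (again assuming mpi.h and stdio.h are included and the values match the send example above); afterwards the status structure records who actually sent the message and with what tag:

double work[10];                 /* room for up to 10 values */
MPI_Status status;
int source = 0, tag = 17, ierr;
ierr = MPI_Recv(work, 10, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &status);
printf("message from %d with tag %d\n", status.MPI_SOURCE, status.MPI_TAG);
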
Exercise 2 : Basic Send and Receive
• Write a parallel program to send & receive
data
– Initialize MPI
– Have processor 0 send an integer to
processor 1
– Have processor 1 receive an integer from
processor 0
– Both processors print the data
– Quit MPI

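• One possible C answer to Exercise 2 (run it on at least 2 processors; the value 42 and the tag are arbitrary choices):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nPEs, iam, value, tag = 10;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);

    if (nPEs < 2) {
        if (iam == 0) printf("run this with at least 2 processors\n");
    } else if (iam == 0) {
        value = 42;                              /* data to send */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        printf("processor %d sent %d\n", iam, value);
    } else if (iam == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("processor %d received %d\n", iam, value);
    }

    MPI_Finalize();
    return 0;
}
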
Summary
• MPI is used to create parallel programs based on
message passing
• Usually the same program is run on multiple
processors
• The 6 basic calls in MPI are:
– MPI_INIT( ierr )
– MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
– MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
– MPI_Send(buffer, count, MPI_INTEGER, destination, tag,
MPI_COMM_WORLD, ierr)
– MPI_Recv(buffer, count, MPI_INTEGER, source, tag,
MPI_COMM_WORLD, status, ierr)
– MPI_FINALIZE(ierr)

