
Matlab MPI

Parallel Programming with MatlabMPI


Reference Manual

By

Dr. Jeremy Kepner


Document Generated by Universal Report www.omegacomputer.com

Abstract

Matlab is the dominant programming language for implementing numerical computations and is widely used for algorithm development, simulation, data reduction, testing and system evaluation. Many of these computations could benefit from faster execution on a parallel computer.

There have been many previous attempts to provide an efficient mechanism for running Matlab programs on parallel computers. These efforts have faced numerous challenges and none have received widespread acceptance.

In the world of parallel computing the Message Passing Interface (MPI) is the de facto standard for implementing programs on multiple processors. MPI defines C and Fortran language functions for doing point-to-point communication in a parallel program. MPI has proven to be an effective model for implementing parallel programs and is used by many of the world's most demanding applications (weather modeling, weapons simulation, aircraft design, etc.).

MatlabMPI is a set of Matlab scripts that implement a subset of MPI and allow any Matlab program to be run on a parallel computer. The key innovation of MatlabMPI is that it implements the widely used MPI look and feel on top of standard Matlab file I/O, resulting in a pure Matlab implementation that is exceedingly small (about 100 lines of code). Thus, MatlabMPI will run on any combination of computers that Matlab supports. In addition, because of its small size, it is simple to download and use (and modify if you like).
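The core mechanism (send writes the message to a file, then creates a lock file; receive spins until the lock file appears, then loads the message) can be sketched in Python. This is an illustration of the protocol only, not the MatlabMPI API: the helper names are invented here, and pickle files stand in for Matlab .mat files.

```python
import os
import pickle
import tempfile
import time

def buffer_path(comm_dir, source, dest, tag):
    # Mirrors MatlabMPI's buffer naming scheme: p<source>_p<dest>_t<tag>_buffer.mat
    return os.path.join(comm_dir, f"p{source}_p{dest}_t{tag}_buffer.pkl")

def lock_path(comm_dir, source, dest, tag):
    # Matching lock file; its existence signals the buffer is complete.
    return os.path.join(comm_dir, f"p{source}_p{dest}_t{tag}_lock.pkl")

def send(comm_dir, source, dest, tag, data):
    # Write the payload first, then create the lock file, so a receiver
    # never reads a half-written message.
    with open(buffer_path(comm_dir, source, dest, tag), "wb") as f:
        pickle.dump(data, f)
    open(lock_path(comm_dir, source, dest, tag), "w").close()

def recv(comm_dir, source, dest, tag, poll=0.01):
    # Spin until the lock file appears, then load and clean up the message.
    lp = lock_path(comm_dir, source, dest, tag)
    while not os.path.exists(lp):
        time.sleep(poll)
    with open(buffer_path(comm_dir, source, dest, tag), "rb") as f:
        data = pickle.load(f)
    os.remove(buffer_path(comm_dir, source, dest, tag))
    os.remove(lp)
    return data

comm_dir = tempfile.mkdtemp()
send(comm_dir, 0, 1, 7, {"msg": "hello"})
print(recv(comm_dir, 0, 1, 7))  # {'msg': 'hello'}
```

Because both sides only touch files in a shared directory, this works across any machines that can see that directory, which is exactly the portability argument made above.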
REQUIREMENTS

- Matlab license
- File system visible to all processors

On shared memory systems, MatlabMPI only requires a single Matlab license, as each user is allowed to have many Matlab sessions. On distributed memory systems, MatlabMPI will require one Matlab license per machine. Because MatlabMPI uses file I/O for communication, there must be a directory that is visible to every machine (this is usually also required in order to install Matlab). This directory defaults to the directory that the program is launched from, but can be changed within the MatlabMPI program.

Contents

1  Global Information
2  Quantitative Information
3  List of Files
4  List of Routines
5  Routines Description
   5.1  MatMPI_Buffer_file()
   5.2  MatMPI_Comm_dir()
   5.3  MatMPI_Comm_init()
   5.4  MatMPI_Comm_settings()
   5.5  MatMPI_Commands()
   5.6  MatMPI_Delete_all()
   5.7  MatMPI_lock_file()
   5.8  MatMPI_Save_messages()
   5.9  MPI_Abort()
   5.10 MPI_Bcast()
   5.11 MPI_Comm_rank()
   5.12 MPI_Finalize()
   5.13 MPI_Init()
   5.14 MPI_Probe()
   5.15 MPI_Recv()
   5.16 MPI_Run()
   5.17 MPI_Send()
6  ROUTINES BODY
   6.1  MatMPI_Buffer_file()
   6.2  MatMPI_Comm_dir()
   6.3  MatMPI_Comm_init()
   6.4  MatMPI_Comm_settings()
   6.5  MatMPI_Commands()
   6.6  MatMPI_Delete_all()
   6.7  MatMPI_lock_file()
   6.8  MatMPI_Save_messages()
   6.9  MPI_Abort()
   6.10 MPI_Bcast()
   6.11 MPI_Comm_rank()
   6.12 MPI_Finalize()
   6.13 MPI_Init()
   6.14 MPI_Probe()
   6.15 MPI_Recv()
   6.16 MPI_Run()
   6.17 MPI_Send()
1 Global Information

Project Name             Matlab MPI
Project Owner            Dr. Jeremy Kepner
Starting Date
Ending Date
Programming Environment  Matlab
Technical Team           Dr. Jeremy Kepner
Overview Comment         Parallel Programming with MatlabMPI

Table 1: General Information

2 Quantitative Information

Total number of files     19 file(s)
Total number of routines  17 routine(s)
Total number of lines     1257 line(s)
Total size                44 Kbyte(s)

Table 2: Quantitative Information

3 List of Files

N   File Name                Location    Line(s)  Bytes
1   MatlabMPI.m              MatlabMPI   69       3296
2   MatMPI_Buffer_file.m     MatlabMPI   36       1566
3   MatMPI_Comm_dir.m        MatlabMPI   41       1488
4   MatMPI_Comm_init.m       MatlabMPI   112      4084
5   MatMPI_Comm_settings.m   MatlabMPI   60       2528
6   MatMPI_Commands.m        MatlabMPI   105      3935
7   MatMPI_Delete_all.m      MatlabMPI   66       2087
8   MatMPI_Lock_file.m       MatlabMPI   37       1547
9   MatMPI_Save_messages.m   MatlabMPI   42       1643
10  MPI_Abort.m              MatlabMPI   79       2503
11  MPI_Bcast.m              MatlabMPI   127      3806
12  MPI_Comm_rank.m          MatlabMPI   33       1341
13  MPI_Comm_size.m          MatlabMPI   33       1350
14  MPI_Finalize.m           MatlabMPI   34       1355
15  MPI_Init.m               MatlabMPI   34       1346
16  MPI_Probe.m              MatlabMPI   89       2943
17  MPI_Recv.m               MatlabMPI   64       2149
18  MPI_Run.m                MatlabMPI   145      4743
19  MPI_Send.m               MatlabMPI   51       1856
-   TOTAL                                1257     45566

Table 3: List of Files

4 List of Routines

N   Routine Name             Lines
1   MatMPI_Buffer_file()     36 lines
2   MatMPI_Comm_dir()        41 lines
3   MatMPI_Comm_init()       112 lines
4   MatMPI_Comm_settings()   60 lines
5   MatMPI_Commands()        105 lines
6   MatMPI_Delete_all()      66 lines
7   MatMPI_lock_file()       37 lines
8   MatMPI_Save_messages()   42 lines
9   MPI_Abort()              79 lines
10  MPI_Bcast()              127 lines
11  MPI_Comm_rank()          33 lines
12  MPI_Finalize()           34 lines
13  MPI_Init()               34 lines
14  MPI_Probe()              89 lines
15  MPI_Recv()               64 lines
16  MPI_Run()                145 lines
17  MPI_Send()               51 lines

Table 4: List of Routines

5 Routines Description

5.1 MatMPI_Buffer_file()

Routine Name:      MatMPI_Buffer_file()
Routine Location:  MatlabMPI\MatMPI_Buffer_file.m
Routine Arguments: { buffer_file, comm, dest, source, tag }
Routine Outputs:   { buffer_file }
Routine Size:      36 Line(s)
Routine Comment:   MatMPI_Buffer_file - Helper function for creating buffer
                   file name.
                   buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
Parent Routines:   MPI_Bcast(), MPI_Recv(), MPI_Send()
5.2 MatMPI_Comm_dir()

Routine Name:      MatMPI_Comm_dir()
Routine Location:  MatlabMPI\MatMPI_Comm_dir.m
Routine Arguments: { dir, new_comm, old_comm }
Routine Outputs:   { new_comm }
Routine Size:      41 Line(s)
Routine Comment:   MatMPI_Comm_dir - function for changing communication
                   directory.
                   new_comm = MatMPI_Comm_dir(old_comm,dir)

5.3 MatMPI_Comm_init()

Routine Name:      MatMPI_Comm_init()
Routine Location:  MatlabMPI\MatMPI_Comm_init.m
Routine Arguments: { machines, MPI_COMM_WORLD, n_proc }
Routine Outputs:   { MPI_COMM_WORLD }
Routine Size:      112 Line(s)
Routine Comment:   MatMPI_Comm_init - Creates generic communicator.
                   MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines)
Parent Routines:   MPI_Run()
Child Routines:    MatMPI_Comm_settings()

5.4 MatMPI_Comm_settings()

Routine Name:      MatMPI_Comm_settings()
Routine Location:  MatlabMPI\MatMPI_Comm_settings.m
Routine Arguments: { machine_db_settings }
Routine Outputs:   { machine_db_settings }
Routine Size:      60 Line(s)
Routine Comment:   Function for setting values in the MPI Communicator.
                   Users can edit these values to customize the internals of
                   MatlabMPI.
Parent Routines:   MatMPI_Comm_init(), MPI_Abort()

5.5 MatMPI_Commands()

Routine Name:      MatMPI_Commands()
Routine Location:  MatlabMPI\MatMPI_Commands.m
Routine Arguments: { defscommands, m_file, MPI_COMM_WORLD, rank,
                     unix_command }
Routine Outputs:   { defscommands, unix_command }
Routine Size:      105 Line(s)
Routine Comment:   MatMPI_Commands - Commands to launch a Matlab script
                   remotely.
                   [defscommands, unix_command] = ...
                       MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
Parent Routines:   MPI_Run()

5.6 MatMPI_Delete_all()

Routine Name:      MatMPI_Delete_all()
Routine Location:  MatlabMPI\MatMPI_Delete_all.m
Routine Size:      66 Line(s)
Routine Comment:   MatMPI_Delete_all - Deletes leftover MatlabMPI files.
                   MatMPI_Delete_all()

5.7 MatMPI_lock_file()

Routine Name:      MatMPI_lock_file()
Routine Location:  MatlabMPI\MatMPI_Lock_file.m
Routine Arguments: { comm, dest, lock_file, source, tag }
Routine Outputs:   { lock_file }
Routine Size:      37 Line(s)
Routine Comment:   MatMPI_lock_file - function for creating lock file name.
                   lock_file = MatMPI_lock_file(source,dest,tag,comm)

5.8 MatMPI_Save_messages()

Routine Name:      MatMPI_Save_messages()
Routine Location:  MatlabMPI\MatMPI_Save_messages.m
Routine Arguments: { new_comm, old_comm, save_message_flag }
Routine Outputs:   { new_comm }
Routine Size:      42 Line(s)
Routine Comment:   MatMPI_Save_messages - Toggles deleting or saving messages.
                   new_comm = MatMPI_Save_messages(old_comm,save_message_flag)
                   MatlabMPI helper function for setting the fate of messages.
                   save_message_flag = 1 (save messages).
                   save_message_flag = 0 (delete messages: default).

5.9 MPI_Abort()

Routine Name:      MPI_Abort()
Routine Location:  MatlabMPI\MPI_Abort.m
Routine Size:      79 Line(s)
Routine Comment:   MPI_Abort - Aborts any currently running MatlabMPI sessions
                   by looking for leftover Matlab jobs and killing them.
                   Cannot be used after MatMPI_Delete_all.
Child Routines:    MatMPI_Comm_settings()

5.10 MPI_Bcast()

Routine Name:      MPI_Bcast()
Routine Location:  MatlabMPI\MPI_Bcast.m
Routine Arguments: { comm, source, tag, varargin, varargout }
Routine Outputs:   { varargout }
Routine Size:      127 Line(s)
Routine Comment:   MPI_Bcast - broadcast variables to everyone in comm.
                   [var1, var2, ...] = ...
                       MPI_Bcast( source, tag, comm, var1, var2, ... )
                   The sender blocks until all the messages are received,
                   unless MatMPI_Save_messages(1) has been called.
Child Routines:    MPI_Comm_rank(), MPI_Recv(), MatMPI_Buffer_file()

5.11 MPI_Comm_rank()

Routine Name:      MPI_Comm_rank()
Routine Location:  MatlabMPI\MPI_Comm_rank.m
Routine Arguments: { comm, rank }
Routine Outputs:   { rank }
Routine Size:      33 Line(s)
Routine Comment:   MPI_Comm_rank - returns the rank of the current processor.
                   rank = MPI_Comm_rank(comm)
Parent Routines:   MPI_Bcast(), MPI_Probe(), MPI_Recv(), MPI_Send()

5.12 MPI_Finalize()

Routine Name:      MPI_Finalize()
Routine Location:  MatlabMPI\MPI_Finalize.m
Routine Size:      34 Line(s)
Routine Comment:   MPI_Finalize - Called at the end of a MatlabMPI program.
                   MPI_Finalize()
                   Called at the end of an MPI program (currently empty).

5.13 MPI_Init()

Routine Name:      MPI_Init()
Routine Location:  MatlabMPI\MPI_Init.m
Routine Size:      34 Line(s)
Routine Comment:   MPI_Init - Called at the start of an MPI program.
                   MPI_Init()
                   Called at the beginning of an MPI program (currently
                   empty).

5.14 MPI_Probe()

Routine Name:      MPI_Probe()
Routine Location:  MatlabMPI\MPI_Probe.m
Routine Arguments: { comm, message_rank, message_tag, source, tag }
Routine Outputs:   { message_rank, message_tag }
Routine Size:      89 Line(s)
Routine Comment:   MPI_Probe - Returns a list of all messages waiting to be
                   received.
                   [message_rank, message_tag] = MPI_Probe( source, tag, comm )
                   Source and tag can be an integer or a wildcard *.
Child Routines:    MPI_Comm_rank()

5.15 MPI_Recv()

Routine Name:      MPI_Recv()
Routine Location:  MatlabMPI\MPI_Recv.m
Routine Arguments: { comm, source, tag, varargout }
Routine Outputs:   { varargout }
Routine Size:      64 Line(s)
Routine Comment:   MPI_Recv - Receives message from source.
                   [var1, var2, ...] = MPI_Recv( source, tag, comm )
                   Receives the message from source with a given tag and
                   returns the variables in the message.
                   source can be an integer from 0 to comm_size-1.
                   tag can be any integer.
                   comm is an MPI Communicator (typically a copy of
                   MPI_COMM_WORLD).
Parent Routines:   MPI_Bcast()
Child Routines:    MPI_Comm_rank(), MatMPI_Buffer_file()

5.16 MPI_Run()

Routine Name:      MPI_Run()
Routine Location:  MatlabMPI\MPI_Run.m
Routine Arguments: { defscommands, m_file, machines, n_proc }
Routine Size:      145 Line(s)
Routine Comment:   MPI_Run - Run m_file on multiple processors.
                   defscommands = MPI_Run( m_file, n_proc, machines )
                   Runs n_proc copies of m_file on machines, where
                   machines = {};
                     Run on the local processor.
                   machines = {'machine1' 'machine2'};
                     Run on multiple processors.
                   machines = {'machine1:dir1' 'machine2:dir2'};
                     Run on multiple processors and communicate via dir1 and
                     dir2, which must be visible to both machines.
                   If machine1 is the local cpu, then defscommands will
                   contain the commands that need to be run locally, via
                   eval(defscommands).
Child Routines:    MatMPI_Comm_init(), MatMPI_Commands()
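The optional ':dir' suffix on each machines entry can be split off exactly as MatMPI_Comm_init does with findstr. A Python sketch of that parsing rule (the function name is illustrative, not part of MatlabMPI):

```python
def parse_machine(entry):
    # Split an optional ':dir' suffix off a machines entry:
    #   'node1'        -> ('node1', None)         use the default comm dir
    #   'node1:/gpfs'  -> ('node1', '/gpfs')      communicate via /gpfs
    host, sep, comm_dir = entry.partition(":")
    return (host, comm_dir if sep else None)

print(parse_machine("node1"))        # ('node1', None)
print(parse_machine("node1:/gpfs"))  # ('node1', '/gpfs')
```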

5.17 MPI_Send()

Routine Name:      MPI_Send()
Routine Location:  MatlabMPI\MPI_Send.m
Routine Arguments: { comm, dest, tag, varargin }
Routine Size:      51 Line(s)
Routine Comment:   MPI_Send - Sends variables to dest.
                   MPI_Send( dest, tag, comm, var1, var2, ... )
                   Sends a message containing the variables to dest with a
                   given tag.
                   dest can be an integer from 0 to comm_size-1.
                   tag can be any integer.
                   comm is an MPI Communicator (typically a copy of
                   MPI_COMM_WORLD).
Child Routines:    MPI_Comm_rank(), MatMPI_Buffer_file()

6 ROUTINES BODY

6.1 MatMPI_Buffer_file()

function buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
% MatMPI_Buffer_file - Helper function for creating buffer file name.
%
%   buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
%
  machine_id = comm.machine_id(1,dest+1);
  dir = comm.machine_db.dir{1,machine_id};
  buffer_file = [dir,'/p',num2str(source),'_p',num2str(dest),'_t', ...
                 num2str(tag),'_buffer.mat'];

6.2 MatMPI_Comm_dir()

function new_comm = MatMPI_Comm_dir(old_comm,dir)
% MatMPI_Comm_dir - function for changing communication directory.
%
%   new_comm = MatMPI_Comm_dir(old_comm,dir)
%
  new_comm = old_comm;
  n = new_comm.machine_db.n_machine;
  for i=1:n
    new_comm.machine_db.dir{1,i} = dir;
  end
  new_comm;

6.3 MatMPI_Comm_init()

function MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines)
% MatMPI_Comm_init - Creates generic communicator.
%
%   MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines)
%
  % Get number of machines to launch on.
  n_machines = size(machines,2);
  n_m = max(n_machines,1);
  % Set default target machine.
  host = getenv('HOST');
  machine = host;
  % Initialize comm.
  MPI_COMM_WORLD.rank = -1;
  MPI_COMM_WORLD.size = n_proc;
  MPI_COMM_WORLD.save_message_flag = 0;
  MPI_COMM_WORLD.group = (1:n_proc)-1;
  MPI_COMM_WORLD.machine_id = zeros(1,n_proc);
  % Initialize machine database.
  machine_db.n_machine = n_m;               % Number of machines.
  machine_db.type = cell(1,n_m);            % Unix or Windows.
  machine_db.machine = cell(1,n_m);         % Machine names.
  machine_db.dir = cell(1,n_m);             % Communication directory.
  machine_db.matlab_command = cell(1,n_m);  % Matlab command.
  machine_db.remote_launch = cell(1,n_m);   % Remote launch command.
  machine_db.remote_flags = cell(1,n_m);    % Remote launch flags.
  machine_db.n_proc = zeros(1,n_m);         % # processes on this machine.
  machine_db.id_start = zeros(1,n_m);       % Start index.
  machine_db.id_stop = zeros(1,n_m);        % Stop index.
  % Start setting up machine id.
  for i_rank=0:n_proc-1
    i_machine = mod(i_rank,n_m) + 1;
    machine_db.n_proc(1,i_machine) = machine_db.n_proc(1,i_machine) + 1;
  end
  % Get possibly user defined settings.
  machine_db_settings = MatMPI_Comm_settings;
  % Set machine_db values.
  for i=1:n_m
    machine_db.type{1,i} = machine_db_settings.type;
    machine_db.machine{1,i} = host;
    machine_db.dir{1,i} = [pwd '/MatMPI'];
    machine_db.matlab_command{1,i} = machine_db_settings.matlab_command;
    machine_db.remote_launch{1,i} = machine_db_settings.remote_launch;
    machine_db.remote_flags{1,i} = machine_db_settings.remote_flags;
    if (i == 1)
      machine_db.id_start(1,i) = 1;
      machine_db.id_stop(1,i) = machine_db.id_start(1,i) + ...
          machine_db.n_proc(1,i) - 1;
    else
      machine_db.id_start(1,i) = machine_db.id_stop(1,i-1) + 1;
      machine_db.id_stop(1,i) = machine_db.id_start(1,i) + ...
          machine_db.n_proc(1,i) - 1;
    end
    id_start = machine_db.id_start(1,i);
    id_stop = machine_db.id_stop(1,i);
    MPI_COMM_WORLD.machine_id(1,id_start:id_stop) = i;
    % Check if there is a machines list.
    if (n_machines > 0)
      machine = machines{i};
      machine_db.machine{1,i} = machine;
      % Check if there is a directory appended.
      dir_sep = findstr(machine,':');
      if (dir_sep)
        machine_piece = machine(1,1:dir_sep-1);
        dir_piece = machine(1,(dir_sep+1):end);
        machine_db.machine{1,i} = machine_piece;
        machine_db.dir{1,i} = dir_piece;
      end
    end
  end
  % Add machine_db to communicator.
  MPI_COMM_WORLD.machine_db = machine_db;
  % Write out.
  comm_mat_file = 'MatMPI/MPI_COMM_WORLD.mat';
  save(comm_mat_file,'MPI_COMM_WORLD');
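The rank-to-machine bookkeeping above (round-robin counts via mod(i_rank,n_m)+1, then contiguous id_start/id_stop ranges) can be checked with a small Python sketch; the function name and 0-based indexing are illustrative only:

```python
def assign_ranks(n_proc, n_machines):
    # Count processes per machine round-robin, as MatMPI_Comm_init does
    # with i_machine = mod(i_rank, n_m) + 1 (0-based here).
    counts = [0] * n_machines
    for rank in range(n_proc):
        counts[rank % n_machines] += 1
    # Convert the per-machine counts into contiguous 1-based (start, stop)
    # ranges, matching machine_db.id_start / machine_db.id_stop.
    ranges, start = [], 1
    for c in counts:
        ranges.append((start, start + c - 1))
        start += c
    return counts, ranges

counts, ranges = assign_ranks(5, 2)
print(counts)  # [3, 2]
print(ranges)  # [(1, 3), (4, 5)]
```

Note that only the counts are round-robin; the rank ids themselves are assigned to machines in contiguous blocks.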

6.4 MatMPI_Comm_settings()

function machine_db_settings = MatMPI_Comm_settings()
%
% Function for setting values in the MPI Communicator.
% User can edit these values to customize the internals of MatlabMPI.
%
  % Set to unix or windows.
  % windows currently doesn't work.
  machine_db_settings.type = 'unix';
  % Matlab command and launch flags.
  % Generic.
  machine_db_settings.matlab_command = 'matlab -display null -nojvm -nosplash ';
  % Lincoln cluster common.
  % machine_db_settings.matlab_command = '/tools/matlab/bin/matlab -display null -nojvm -nosplash ';
  % Lincoln cluster local.
  % machine_db_settings.matlab_command = '/local/matlab12.1/bin/matlab -display null -nojvm -nosplash ';
  % LCS Cluster local.
  % machine_db_settings.matlab_command = '/usr/local/bin/matlab -display null -nojvm -nosplash ';
  % Boston University.
  % machine_db_settings.matlab_command = '/usr/local/IT/matlab-6.1/bin/matlab -display null -nojvm -nosplash ';
  % MHPCC local copy.
  % machine_db.matlab_command{1,i} = '/scratch/tempest/users/kepner/matlab6/bin/matlab -display null -nojvm -nosplash ';
  % Remote launch command.
  % To use ssh, change rsh to ssh in line below.
  % machine_db_settings.remote_launch = 'ssh ';
  machine_db_settings.remote_launch = 'rsh ';
  % Remote launch flags.
  machine_db_settings.remote_flags = '-n ';

6.5 MatMPI_Commands()

function [defscommands, unix_command] = ...
    MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
% MatMPI_Commands - Commands to launch a Matlab script remotely.
%
%   [defscommands, unix_command] = ...
%       MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
%
  % Set newline string.
  nl = sprintf('\n');
  % Create filename each Matlab job will run at startup.
  defsbase = ['MatMPI/defs' num2str(rank)];
  defsfile = [defsbase '.m'];
  comm_mat_file = 'MatMPI/MPI_COMM_WORLD.mat';
  outfile = ['MatMPI/' m_file '.' num2str(rank) '.out'];
  % Get single quote character.
  q = strrep(' ',' ','''');
  % Create Matlab MPI setup commands.
  commands{1} = ['global MPI_COMM_WORLD; ' nl];
  commands{2} = ['load ' q comm_mat_file q '; ' nl];
  commands{3} = ['MPI_COMM_WORLD.rank = ' num2str(rank) '; ' nl];
  commands{4} = ['delete(' q defsfile q '); ' nl];
  commands{5} = [m_file '; ' nl];
  defscommands = '';
  % Get name of host.
  machine_id = MPI_COMM_WORLD.machine_id(1,rank+1);
  machine = MPI_COMM_WORLD.machine_db.machine{1,machine_id};
  remote_launch = MPI_COMM_WORLD.machine_db.remote_launch{1,machine_id};
  remote_flags = MPI_COMM_WORLD.machine_db.remote_flags{1,machine_id};
  matlab_command = MPI_COMM_WORLD.machine_db.matlab_command{1,machine_id};
  % Print name of machine we are launching on.
  disp(['Launching MPI rank: ' num2str(rank) ' on: ' machine]);
  % Create base matlab command.
  matlab_command = [matlab_command ' < ' defsfile ' > ' outfile ' '];
  % matlab_command = [matlab_command ' < ' defsfile ' > /dev/null'];
  % matlab_command = [matlab_command ' -r ' defsfile ' -logfile ' outfile ' '];
  % matlab_command = [matlab_command ' -r ' defsfile ' > ' outfile ' '];
  % Determine how to run script and where to send output.
  host = getenv('HOST');
  if (strcmp(machine,host))
    if (rank == 0)
      % Run defsfile script interactively.
      defscommands = [commands{1} commands{2} commands{3} commands{5}];
      unix_command = nl;
    else
      % Write commands to a .m text file.
      fid = fopen(defsfile,'wt');
      n_command = size(commands,2);
      for i_command=1:n_command
        fwrite(fid,commands{i_command});
      end
      fclose(fid);
      % Create command to run defsfile locally and pipe output to another file.
      unix_command = [matlab_command ' &' nl 'touch MatMPI/pid.' machine '.$!' nl];
    end
  else
    % Write commands to a .m text file.
    fid = fopen(defsfile,'wt');
    n_command = size(commands,2);
    for i_command=1:n_command
      fwrite(fid,commands{i_command});
    end
    fclose(fid);
    % Create command to run defsfile and pipe output to another file.
    unix_command = [matlab_command ' &' nl 'touch MatMPI/pid.' machine '.$!' nl];
  end
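The per-rank startup script assembled above (load the communicator, set the rank, delete the defs file, run the user's script) can be mirrored in a short Python sketch. The helper name is invented for illustration; the command strings follow the listing:

```python
def defs_commands(m_file, rank, comm_mat_file="MatMPI/MPI_COMM_WORLD.mat"):
    # Build the per-rank startup commands that MatMPI_Commands writes into
    # MatMPI/defs<rank>.m for each launched Matlab session.
    defsfile = f"MatMPI/defs{rank}.m"
    return defsfile, [
        "global MPI_COMM_WORLD;",
        f"load '{comm_mat_file}';",
        f"MPI_COMM_WORLD.rank = {rank};",   # each copy learns its own rank
        f"delete('{defsfile}');",           # the script removes itself
        f"{m_file};",                       # finally run the user's script
    ]

defsfile, cmds = defs_commands("my_job", 2)
print(defsfile)  # MatMPI/defs2.m
print(cmds[2])   # MPI_COMM_WORLD.rank = 2;
```

This is how every copy of the program runs the same m_file yet behaves differently: the only per-process state is the rank assigned in its defs file.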

6.6 MatMPI_Delete_all()

function MatMPI_Delete_all()
% MatMPI_Delete_all - Deletes leftover MatlabMPI files.
%
%   MatMPI_Delete_all()
%
  % First load MPI_COMM_WORLD.
  load MatMPI/MPI_COMM_WORLD.mat;
  % Set newline string.
  nl = sprintf('\n');
  % Get single quote character.
  q = strrep(' ',' ','''');
  % Get number of machines.
  n_m = MPI_COMM_WORLD.machine_db.n_machine;
  % Loop backwards over each machine.
  for i_m=n_m:-1:1
    % Get number of processes to launch on this machine.
    n_proc_i_m = MPI_COMM_WORLD.machine_db.n_proc(1,i_m);
    if (n_proc_i_m >= 1)
      % Get communication directory.
      comm_dir = MPI_COMM_WORLD.machine_db.dir{1,i_m};
      % Delete buffer and lock files in this directory.
      delete([comm_dir '/p*_p*_t*_buffer.mat']);
      delete([comm_dir '/p*_p*_t*_lock.mat']);
    end
  end
  % Delete MatMPI directory.
  delete('MatMPI/*');
  delete('MatMPI');
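The same wildcard cleanup can be sketched in Python with glob; the function name is illustrative and the patterns are the ones used above:

```python
import glob
import os
import tempfile

def delete_leftovers(comm_dir):
    # Remove leftover buffer and lock files using the same wildcard
    # patterns as MatMPI_Delete_all. Returns how many files were removed.
    removed = 0
    for pattern in ("p*_p*_t*_buffer.mat", "p*_p*_t*_lock.mat"):
        for path in glob.glob(os.path.join(comm_dir, pattern)):
            os.remove(path)
            removed += 1
    return removed

d = tempfile.mkdtemp()
open(os.path.join(d, "p0_p1_t5_buffer.mat"), "w").close()
open(os.path.join(d, "p0_p1_t5_lock.mat"), "w").close()
print(delete_leftovers(d))  # 2
```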

6.7 MatMPI_lock_file()

function lock_file = MatMPI_lock_file(source,dest,tag,comm)
% MatMPI_lock_file - function for creating lock file name.
%
%   lock_file = MatMPI_lock_file(source,dest,tag,comm)
%
  machine_id = comm.machine_id(1,dest+1);
  dir = comm.machine_db.dir{1,machine_id};
  lock_file = [dir,'/p',num2str(source),'_p',num2str(dest),'_t', ...
               num2str(tag),'_lock.mat'];

6.8 MatMPI_Save_messages()

function new_comm = MatMPI_Save_messages(old_comm,save_message_flag)
% MatMPI_Save_messages - Toggles deleting or saving messages.
%
%   new_comm = MatMPI_Save_messages(old_comm,save_message_flag)
%
%   MatlabMPI helper function for setting the fate of messages.
%   save_message_flag = 1 (save messages).
%   save_message_flag = 0 (delete messages: default).
%
  new_comm = old_comm;
  new_comm.save_message_flag = save_message_flag;
  new_comm;

6.9 MPI_Abort()

function MPI_Abort()
% MPI_Abort - Aborts any currently running MatlabMPI sessions.
%
%   MPI_Abort()
%
%   Will abort any currently running MatlabMPI sessions
%   by looking for leftover Matlab jobs and killing them.
%   Cannot be used after MatMPI_Delete_all.
%
  % Get possibly user defined settings.
  machine_db_settings = MatMPI_Comm_settings;
  % Get list of pid files.
  pid_files = dir('MatMPI/pid.*.*');
  s = size(pid_files);
  n_files = s(1);
  % Create single quote.
  q = strrep(' ',' ','''');
  % Check if there are any files.
  if (n_files < 1)
    disp('No pid files found');
  else
    % Loop over each file.
    for i_file=1:n_files
      % Get file name.
      file_name = pid_files(i_file).name;
      % Check if there is a directory appended.
      dir_sep = findstr(file_name,'.');
      if (dir_sep)
        % Parse file name.
        machine = file_name(1,(dir_sep(1)+1):(dir_sep(end)-1));
        pid = file_name(1,(dir_sep(end)+1):end);
        % Kill process.
        % To use ssh, change rsh to ssh in line below.
        % unix_command = ['rsh ' machine ' ' q 'kill -9 ' pid q];
        unix_command = [machine_db_settings.remote_launch machine ' ' q ...
                        'kill -9 ' pid q];
        disp(unix_command);
        unix(unix_command);
      end
    end
  end
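The pid file parsing above takes everything between the first and last dot as the machine name (so host names containing dots still work) and everything after the last dot as the process id. A Python sketch of that rule, with an illustrative function name:

```python
def parse_pid_file(file_name):
    # Parse 'pid.<machine>.<pid>' the way MPI_Abort does: the machine name
    # is the span between the first and last dot, the pid follows the last.
    first = file_name.index(".")
    last = file_name.rindex(".")
    return file_name[first + 1:last], file_name[last + 1:]

print(parse_pid_file("pid.node1.12345"))              # ('node1', '12345')
print(parse_pid_file("pid.node1.example.com.777"))    # ('node1.example.com', '777')
```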

MPI Bcast()
function varargout = MPI\_Bcast( source, tag, comm, varargin )
% MPI\_Bcast - broadcast variables to everyone.
%
%
[var1, var2, ...] = ...
%
MPI\_Bcast( source, tag, comm, var1, var2, ... )
%
%
Broadcast variables to everyone in comm.
%
%
Sender blocks until all the messages are received,
%
unless MatMMPI\_Save\_messages(1) has been called.
%
% Get processor rank.
my\_rank = MPI\_Comm\_rank(comm);
comm\_size = MPI\_Comm\_size(comm);
% If not the source, then receive the data.
if (my\_rank ~= source)
varargout = MPI\_Recv( source, tag, comm );
end
% If the source, then send the data.
if (my\_rank == source)
% Create data file.
buffer\_file = MatMPI\_Buffer\_file(my\_rank,source,tag,comm);
% Save varargin to file.
save(buffer\_file,varargin);
% Loop over everyone in comm and create link to data file.
link\_command = ;

Parallel Programming with MatlabMPI


33:
34:
35:
36:
37:
38:
39:
40:
41:
42:
43:
44:
45:
46:
47: %
48:
49:
50:
51:
52:
53:
54:
55:
56:
57:
58:
59:
60:
61:
62:
63:
64:
65:
66:
67:
68:
69:
70:
71:
72:
73:
74:
75:
76:
77:
78:

25

for i=0:comm\_size-1
% Dont do source.
if (i ~= source)
% Create buffer link name.
buffer\_link = MatMPI\_Buffer\_file(my\_rank,i,tag,comm);
% Append to link\_command.
link\_command = [link\_command ln -s buffer\_file buffer\_link
; ];
end
end
% Create symbolic link to data\_file.
unix(link\_command);
% Write commands unix commands to .sh text file
% to fix Matlabs problem with very long commands sent to unix().
unix\_link\_file = [MatMPI/Unix\_Link\_Commands\_t num2str(tag) .sh];
fid = fopen(unix\_link\_file,wt);
fwrite(fid,link\_command);
fclose(fid);
unix([/bin/sh unix\_link\_file]);
delete(unix\_link\_file);
% Loop over everyone in comm and create lock file.
for i=0:comm\_size-1
% Dont do source.
if (i ~= source)
% Get lock file name.
lock\_file = MatMPI\_Lock\_file(my\_rank,i,tag,comm);
% Create lock file.
fclose(fopen(lock\_file,w));
end
end

  % Check if the message is to be saved.
  if (not(comm.save_message_flag))
    % Loop over lock files;
    % delete buffer_file when the lock files are gone.
    for i=0:comm_size-1
      % Don't do source.
      if (i ~= source)
        % Get lock file name.
        lock_file = MatMPI_Lock_file(my_rank,i,tag,comm);
        % Spin on lock file until it is deleted.
        loop = 0;
        while exist(lock_file) ~= 0
          % pause(0.01);
          loop = loop + 1;
        end
      end
    end
    % Delete buffer file.
    if (not(comm.save_message_flag))
      delete(buffer_file);
    end
  end
end
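A minimal usage sketch of the broadcast above (not from the manual; it assumes the calling convention shown in the listing, with an arbitrary tag value and the communicator set up by MPI_Run):

```matlab
% Every rank calls MPI_Bcast with the same source and tag;
% only the source rank's copy of A is actually sent.
comm = MPI_COMM_WORLD;
my_rank = MPI_Comm_rank(comm);
A = zeros(4);                       % placeholder on non-source ranks
if (my_rank == 0)
  A = rand(4);                      % data to broadcast
end
A = MPI_Bcast( 0, 999, comm, A );   % all ranks now hold rank 0's A
```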

6.11  MPI_Comm_rank()

function rank = MPI_Comm_rank(comm)
% MPI_Comm_rank - Returns the rank of the current processor.
%
%   rank = MPI_Comm_rank(comm)
%
rank = comm.rank;

6.12  MPI_Finalize()

function MPI_Finalize()
% MPI_Finalize - Called at the end of a MatlabMPI program.
%
%   MPI_Finalize()
%
% Called at the end of an MPI program (currently empty).
%
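The two bookkeeping functions above frame every MatlabMPI program. A minimal skeleton (a sketch, assuming MPI_COMM_WORLD has already been created by MPI_Run):

```matlab
MPI_Init;                              % start of program (currently a no-op)
comm = MPI_COMM_WORLD;                 % communicator created for this run
my_rank = MPI_Comm_rank(comm);         % rank of this Matlab process
disp(['Hello from rank ' num2str(my_rank)]);
MPI_Finalize;                          % end of program (currently a no-op)
```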


6.13  MPI_Init()

function MPI_Init()
% MPI_Init - Called at the start of an MPI program.
%
%   MPI_Init()
%
% Called at the beginning of an MPI program (currently empty).
%

6.14  MPI_Probe()

function [message_rank, message_tag] = MPI_Probe( source, tag, comm )
% MPI_Probe - Returns a list of all messages waiting to be received.
%
%   [message_rank, message_tag] = MPI_Probe( source, tag, comm )
%
% Source and tag can be an integer or a wildcard '*'.
%
% Get processor rank.
my_rank = MPI_Comm_rank(comm);
% Get lock file names.
lock_file = MatMPI_Lock_file(source,my_rank,tag,comm);
% Check to see if there are any messages.
message_files = dir(lock_file);
n_files = length(message_files);
% Create single quote.
q = strrep(' ',' ','''');
% Check if there are any files.
if (n_files < 1)
  % Set default (negative) return values.
  message_rank = -1;
  message_tag = -1;
else
  % Create arrays to store rank and tag.
  message_rank = zeros(n_files,1);
  message_tag = message_rank;
  % Set strings to search for (THIS IS VERY BAD, SHOULD HIDE THIS).
  source_str = 'p';
  dest_str = ['_p' num2str(my_rank) '_t'];


  tag_str = '_lock.mat';
  source_len = length(source_str);
  dest_len = length(dest_str);
  tag_len = length(tag_str);
  % Step through each file name and strip out rank and tag.
  for i_file=1:n_files
    % Get file name.
    file_name = message_files(i_file).name;
    % Find location of each of the strings.
    source_pos = findstr(file_name,source_str);
    dest_pos = findstr(file_name,dest_str);
    tag_pos = findstr(file_name,tag_str);
    % If we have found the locations, then extract rank and tag.
    if (source_pos & dest_pos & tag_pos)
      message_rank(i_file) = str2num(file_name(1,(source_len+1):(dest_pos-1)));
      message_tag(i_file) = str2num(file_name(1,(dest_pos+dest_len):(tag_pos-1)));
    end
  end
end
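Because MPI_Recv blocks, MPI_Probe is the natural way to poll for work without stalling. A hedged sketch (the '*' wildcard follows the help text above; the negative-value check assumes the default return values shown in the listing):

```matlab
% Drain every message currently waiting, from any source and tag.
[ranks, tags] = MPI_Probe( '*', '*', comm );
if (ranks(1) >= 0)                    % negative means nothing is waiting
  for i=1:length(ranks)
    data = MPI_Recv( ranks(i), tags(i), comm );
    % ... process data ...
  end
end
```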

6.15  MPI_Recv()

function varargout = MPI_Recv( source, tag, comm )
% MPI_Recv - Receives a message from source.
%
%   [var1, var2, ...] = MPI_Recv( source, tag, comm )
%
% Receives a message from source with a given tag
% and returns the variables in the message.
%
% source can be an integer from 0 to comm_size-1
% tag can be any integer
% comm is an MPI Communicator (typically a copy of MPI_COMM_WORLD)
%


% Get processor rank.
my_rank = MPI_Comm_rank(comm);
% Get file names.
buffer_file = MatMPI_Buffer_file(source,my_rank,tag,comm);
lock_file = MatMPI_Lock_file(source,my_rank,tag,comm);
% Spin on lock file until it is created.
loop = 0;
while exist(lock_file) ~= 2
  loop = loop + 1;
end
% Read all data out of buffer_file.
buf = load(buffer_file);
% Delete buffer and lock files.
if (not(comm.save_message_flag))
  delete(buffer_file);
  delete(lock_file);
end
% Get variables out of buf.
varargout = buf.varargin;

6.16  MPI_Run()

function defscommands = MPI_Run( m_file, n_proc, machines )
% MPI_Run - Run m_file on multiple processors.
%
%   defscommands = MPI_Run( m_file, n_proc, machines )
%
% Runs n_proc copies of m_file on machines, where
%
%   machines = {};
%     Run on the local processor.
%
%   machines = {'machine1' 'machine2'};
%     Run on multiple processors.
%
%   machines = {'machine1:dir1' 'machine2:dir2'};
%     Run on multiple processors and communicate via dir1 and dir2,
%     which must be visible to both machines.
%
% If machine1 is the local cpu, then defscommands will contain


% the commands that need to be run locally, via eval(defscommands).
%

% Check if the directory MatMPI exists.
if exist('MatMPI','dir') ~= 0
  % error('MatMPI directory already exists: rename or remove with MatMPI_Delete_all');
else
  mkdir('MatMPI');
end

% Create working directory.
% mkdir('MatMPI');

% Get host.
host = getenv('HOST');

% Get number of machines to launch on.
n_machines = size(machines,2);

% Create generic comm.
MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines);

% Set newline string.
nl = sprintf('\n');
% Get single quote character.
q = strrep(' ',' ','''');

% Initialize unix command to launch on all the different machines.
unix_launch = '';

% Get number of machines.
n_m = MPI_COMM_WORLD.machine_db.n_machine;

% Loop backwards over each machine.
for i_m=n_m:-1:1

  % Get number of processes to launch on this machine.
  n_proc_i_m = MPI_COMM_WORLD.machine_db.n_proc(1,i_m);

  if (n_proc_i_m >= 1)

    % Get machine info.
    machine = MPI_COMM_WORLD.machine_db.machine{1,i_m};
    remote_launch = MPI_COMM_WORLD.machine_db.remote_launch{1,i_m};
    remote_flags = MPI_COMM_WORLD.machine_db.remote_flags{1,i_m};

    % Get starting and stopping rank.
    i_rank_start = MPI_COMM_WORLD.machine_db.id_start(1,i_m) - 1;
    i_rank_stop = MPI_COMM_WORLD.machine_db.id_stop(1,i_m) - 1;

    % Initialize unix command that will be run on each node.
    unix_matlab = '';

    % Loop backwards over number of processes.
    for i_rank=i_rank_stop:-1:i_rank_start

      % Build commands.
      [defscommands, unix_matlab_i_rank] = ...
        MatMPI_Commands(m_file,i_rank,MPI_COMM_WORLD);
      unix_matlab = [unix_matlab unix_matlab_i_rank];

    end

    % Create a file name.
    % unix_matlab_file = ['MatMPI/Unix_Commands.' machine '.sh'];
    unix_matlab_file = ['MatMPI/Unix_Commands.' machine '.' ...
      num2str(i_rank_start) '.sh'];

    % Append delete command.
    unix_matlab = [unix_matlab 'rm ' unix_matlab_file ';' nl];

    % Put commands in a file.
    fid = fopen(unix_matlab_file,'wt');
    fwrite(fid,unix_matlab);
    fclose(fid);

    % Create unix commands to launch this file.
    if (strcmp(machine,host))
      unix_launch_i_m = ['/bin/sh ./' unix_matlab_file ' &' nl];
    else
      unix_launch_i_m = [remote_launch machine remote_flags ...
        q 'cd ' pwd '; /bin/sh ./' unix_matlab_file ' &' q ' &' nl];
    end

    unix_launch = [unix_launch unix_launch_i_m];
  end
end

% Execute all launches in a single unix call.
unix_launch
% unix(unix_launch);


% Write the unix launch commands to a .sh text file
% to work around Matlab's problem with very long commands sent to unix().
unix_launch_file = 'MatMPI/Unix_Commands.sh';
fid = fopen(unix_launch_file,'wt');
fwrite(fid,unix_launch);
fclose(fid);
unix(['/bin/sh ' unix_launch_file]);
delete(unix_launch_file);
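A typical launch, per the help text above (the machine names and script name are placeholders):

```matlab
% Run 4 copies of xbasic.m across two machines; eval() executes the
% commands for any copy that must run in this local Matlab session.
eval( MPI_Run( 'xbasic', 4, {'machine1' 'machine2'} ) );
```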

6.17  MPI_Send()

function MPI_Send( dest, tag, comm, varargin )
% MPI_Send - Sends variables to dest.
%
%   MPI_Send( dest, tag, comm, var1, var2, ...)
%
% Sends a message containing the variables to dest with a given tag.
%
% dest can be an integer from 0 to comm_size-1
% tag can be any integer
% comm is an MPI Communicator (typically a copy of MPI_COMM_WORLD)
%
% Get processor rank.
my_rank = MPI_Comm_rank(comm);
% Create buffer and lock file names.
buffer_file = MatMPI_Buffer_file(my_rank,dest,tag,comm);
lock_file = MatMPI_Lock_file(my_rank,dest,tag,comm);
% Save variables to buffer file.
save(buffer_file,'varargin');
% Create lock file.
fclose(fopen(lock_file,'w'));
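MPI_Send and MPI_Recv are used in matched pairs. A minimal sketch (tag value arbitrary; the file operations noted in the comments follow the listings above):

```matlab
comm = MPI_COMM_WORLD;
my_rank = MPI_Comm_rank(comm);
tag = 1;
if (my_rank == 0)
  A = rand(4);
  MPI_Send( 1, tag, comm, A );      % rank 0 writes buffer + lock files
end
if (my_rank == 1)
  A = MPI_Recv( 0, tag, comm );     % rank 1 spins on the lock file, then loads
end
```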
