Matlab MPI: Parallel Programming with MatlabMPI Reference Manual
By Dr. Jeremy Kepner
Abstract
Matlab is the dominant programming language for implementing numerical computations and is widely used for algorithm development, simulation, data reduction, testing, and system evaluation. Many of these computations could benefit from faster execution on a parallel computer.
There have been many previous attempts to provide an efficient mechanism for running Matlab programs on parallel computers. These efforts have faced numerous challenges and none have received widespread acceptance.
In the world of parallel computing, the Message Passing Interface (MPI) is the de facto standard for implementing programs on multiple processors. MPI defines C and Fortran language functions for doing point-to-point communication in a parallel program. MPI has proven to be an effective model for implementing parallel programs and is used by many of the world's most demanding applications (weather modeling, weapons simulation, aircraft design, etc.).
MatlabMPI is a set of Matlab scripts that implement a subset of MPI and allow any Matlab program to be run on a parallel computer. The key innovation of MatlabMPI is that it implements the widely used MPI "look and feel" on top of standard Matlab file I/O, resulting in a pure Matlab implementation that is exceedingly small (approximately 100 lines of code). Thus, MatlabMPI will run on any combination of computers that Matlab supports. In addition, because of its small size, it is simple to download and use (and modify if you like).
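To give a flavor of this look and feel, here is a minimal sketch of a MatlabMPI program. It is not taken from the distribution: the variable names and the tag are illustrative, MPI_COMM_WORLD is made available to each Matlab session by the launch machinery of MPI_Run (Section 5.16), and the MPI_Send argument order is assumed by analogy with MPI_Recv and MPI_Bcast, which are documented below.

% Minimal illustrative MatlabMPI program (not part of the distribution).
MPI_Init;                              % start MPI (currently empty)
comm = MPI_COMM_WORLD;                 % communicator created by MPI_Run
my_rank = MPI_Comm_rank(comm);         % rank of this Matlab process
tag = 1;                               % illustrative message tag
if (my_rank == 0)
  A = rand(4);                         % data to pass to rank 1
  MPI_Send(1, tag, comm, A);           % assumed order: dest, tag, comm, data
elseif (my_rank == 1)
  A = MPI_Recv(0, tag, comm);          % source, tag, comm (Section 5.15)
end
MPI_Finalize;                          % finish MPI (currently empty)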
REQUIREMENTS
- Matlab license
- File system visible to all processors

On shared memory systems, MatlabMPI only requires a single Matlab license, since each user is allowed to have many Matlab sessions. On distributed memory systems, MatlabMPI requires one Matlab license per machine. Because MatlabMPI uses file I/O for communication, there must be a directory that is visible to every machine (this is usually also required in order to install Matlab). This directory defaults to the directory from which the program is launched, but can be changed within the MatlabMPI program.
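For example (the machine names and paths below are placeholders), the machine:dir form accepted by MPI_Run (Section 5.16) selects the shared communication directory explicitly:

% Hypothetical launch: machine names and directories are placeholders.
% The directories must be visible to both machines (see MPI_Run, Section 5.16).
eval( MPI_Run('my_script', 2, {'node1:/shared/MatMPI' 'node2:/shared/MatMPI'}) );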
Contents
1 Global Information
2 Quantitative Information
3 List of Files
4 List of Routines
5 Routines Description
5.1  MatMPI_Buffer_file()
5.2  MatMPI_Comm_dir()
5.3  MatMPI_Comm_init()
5.4  MatMPI_Comm_settings()
5.5  MatMPI_Commands()
5.6  MatMPI_Delete_all()
5.7  MatMPI_lock_file()
5.8  MatMPI_Save_messages()
5.9  MPI_Abort()
5.10 MPI_Bcast()
5.11 MPI_Comm_rank()
5.12 MPI_Finalize()
5.13 MPI_Init()
5.14 MPI_Probe()
5.15 MPI_Recv()
5.16 MPI_Run()
5.17 MPI_Send()
6 ROUTINES BODY
6.1  MatMPI_Buffer_file()
6.2  MatMPI_Comm_dir()
6.3  MatMPI_Comm_init()
6.4  MatMPI_Comm_settings()
6.5  MatMPI_Commands()
6.6  MatMPI_Delete_all()
6.7  MatMPI_lock_file()
6.8  MatMPI_Save_messages()
6.9  MPI_Abort()
6.10 MPI_Bcast()
6.11 MPI_Comm_rank()
6.12 MPI_Finalize()
6.13 MPI_Init()
6.14 MPI_Probe()
6.15 MPI_Recv()
6.16 MPI_Run()
6.17 MPI_Send()
1 Global Information

Project Name             Matlab MPI
Project Owner            Dr. Jeremy Kepner
Starting Date
Ending Date
Programming Environment  Matlab
Technical Team           Dr. Jeremy Kepner
Overview Comment         Parallel Programming with MatlabMPI
2 Quantitative Information

Total number of files     19 file(s)
Total number of routines  17 routine(s)
Total number of lines     1257 line(s)
Total size                44 Kbyte(s)
3 List of Files

N    File Name                 Location    Line(s)  Bytes
1    MatlabMPI.m               MatlabMPI   69       3296
2    MatMPI_Buffer_file.m      MatlabMPI   36       1566
3    MatMPI_Comm_dir.m         MatlabMPI   41       1488
4    MatMPI_Comm_init.m        MatlabMPI   112      4084
5    MatMPI_Comm_settings.m    MatlabMPI   60       2528
6    MatMPI_Commands.m         MatlabMPI   105      3935
7    MatMPI_Delete_all.m       MatlabMPI   66       2087
8    MatMPI_Lock_file.m        MatlabMPI   37       1547
9    MatMPI_Save_messages.m    MatlabMPI   42       1643
10   MPI_Abort.m               MatlabMPI   79       2503
11   MPI_Bcast.m               MatlabMPI   127      3806
12   MPI_Comm_rank.m           MatlabMPI   33       1341
13   MPI_Comm_size.m           MatlabMPI   33       1350
14   MPI_Finalize.m            MatlabMPI   34       1355
15   MPI_Init.m                MatlabMPI   34       1346
16   MPI_Probe.m               MatlabMPI   89       2943
17   MPI_Recv.m                MatlabMPI   64       2149
18   MPI_Run.m                 MatlabMPI   145      4743
19   MPI_Send.m                MatlabMPI   51       1856
-    TOTAL                                 1257     45566
4 List of Routines

N    Routine Name              Lines
1    MatMPI_Buffer_file()      36 lines
2    MatMPI_Comm_dir()         41 lines
3    MatMPI_Comm_init()        112 lines
4    MatMPI_Comm_settings()    60 lines
5    MatMPI_Commands()         105 lines
6    MatMPI_Delete_all()       66 lines
7    MatMPI_lock_file()        37 lines
8    MatMPI_Save_messages()    42 lines
9    MPI_Abort()               79 lines
10   MPI_Bcast()               127 lines
11   MPI_Comm_rank()           33 lines
12   MPI_Finalize()            34 lines
13   MPI_Init()                34 lines
14   MPI_Probe()               89 lines
15   MPI_Recv()                64 lines
16   MPI_Run()                 145 lines
17   MPI_Send()                51 lines

Table 4: List of Routines
5 Routines Description

5.1 MatMPI_Buffer_file()

Routine Name       MatMPI_Buffer_file()
Routine Location   MatlabMPI\MatMPI_Buffer_file.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       36 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_Buffer_file - Helper function for creating buffer file name.
                   buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
5.2 MatMPI_Comm_dir()

Routine Name       MatMPI_Comm_dir()
Routine Location   MatlabMPI\MatMPI_Comm_dir.m
Routine Objective
Routine Arguments  { dir, new_comm, old_comm }
Routine Outputs    { new_comm }
Routine Size       41 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_Comm_dir - function for changing communication directory.
                   new_comm = MatMPI_Comm_dir(old_comm,dir)
Parent Routines
Child Routines
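A sketch of how this routine might be used inside a MatlabMPI program, following the signature given above (the directory path is a placeholder and must be visible to all machines):

% Hypothetical: redirect message files to a faster shared file system.
% '/shared/scratch/MatMPI' is a placeholder path, not part of the distribution.
MPI_COMM_WORLD = MatMPI_Comm_dir(MPI_COMM_WORLD, '/shared/scratch/MatMPI');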
5.3 MatMPI_Comm_init()

Routine Name       MatMPI_Comm_init()
Routine Location   MatlabMPI\MatMPI_Comm_init.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       112 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_Comm_init - Creates generic communicator.
                   MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines)
Parent Routines    MPI_Run()
Child Routines     MatMPI_Comm_settings()
5.4 MatMPI_Comm_settings()

Routine Name       MatMPI_Comm_settings()
Routine Location   MatlabMPI\MatMPI_Comm_settings.m
Routine Objective
Routine Arguments  { machine_db_settings }
Routine Outputs    { machine_db_settings }
Routine Size       60 Line(s)
Routine Author
Routine Date
Routine Comment    Function for setting values in the MPI Communicator.
                   Users can edit these values to customize the internals of MatlabMPI.
Parent Routines    MatMPI_Comm_init()
                   MPI_Abort()
Child Routines
5.5 MatMPI_Commands()

Routine Name       MatMPI_Commands()
Routine Location   MatlabMPI\MatMPI_Commands.m
Routine Objective
Routine Arguments  { defscommands, m_file, MPI_COMM_WORLD, rank, unix_command }
Routine Outputs    { defscommands, unix_command }
Routine Size       105 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_Commands - Commands to launch a matlab script remotely.
                   [defscommands, unix_command] = MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
5.6 MatMPI_Delete_all()

Routine Name       MatMPI_Delete_all()
Routine Location   MatlabMPI\MatMPI_Delete_all.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       66 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_Delete_all - Deletes leftover MatlabMPI files.
                   MatMPI_Delete_all()
Parent Routines
Child Routines
5.7 MatMPI_lock_file()

Routine Name       MatMPI_lock_file()
Routine Location   MatlabMPI\MatMPI_Lock_file.m
Routine Objective
Routine Arguments  { comm, dest, lock_file, source, tag }
Routine Outputs    { lock_file }
Routine Size       37 Line(s)
Routine Author
Routine Date
Routine Comment    MatMPI_lock_file - function for creating lock file name.
                   lock_file = MatMPI_lock_file(source,dest,tag,comm)
Parent Routines
Child Routines
5.8 MatMPI_Save_messages()

Routine Name       MatMPI_Save_messages()
Routine Location   MatlabMPI\MatMPI_Save_messages.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       42 Line(s)
Routine Author
Routine Date
Routine Comment
5.9 MPI_Abort()

Routine Name       MPI_Abort()
Routine Location   MatlabMPI\MPI_Abort.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       79 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Abort - Aborts any currently running MatlabMPI sessions.
                   MPI_Abort()
                   Will abort any currently running MatlabMPI sessions
                   by looking for leftover Matlab jobs and killing them.
                   Cannot be used after MatMPI_Delete_all.
Parent Routines
Child Routines     MatMPI_Comm_settings()
5.10 MPI_Bcast()

Routine Name       MPI_Bcast()
Routine Location   MatlabMPI\MPI_Bcast.m
Routine Objective
Routine Arguments  { comm, source, tag, varargin, varargout }
Routine Outputs    { varargout }
Routine Size       127 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Bcast - broadcast variables to everyone.
                   [var1, var2, ...] = MPI_Bcast( source, tag, comm, var1, var2, ... )
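A sketch of the calling form given in the comment above (the data values and the tag are illustrative; every rank, including the source, makes the same call):

% Hypothetical broadcast of two variables from rank 0 to every rank in comm.
comm = MPI_COMM_WORLD;
tag = 10;                                   % illustrative tag
A = [];  params = [];
if (MPI_Comm_rank(comm) == 0)
  A = rand(8);                              % data defined only on the source
  params = [1 2 3];
end
[A, params] = MPI_Bcast(0, tag, comm, A, params);   % now defined on every rank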
5.11 MPI_Comm_rank()

Routine Name       MPI_Comm_rank()
Routine Location   MatlabMPI\MPI_Comm_rank.m
Routine Objective
Routine Arguments  { comm, rank }
Routine Outputs    { rank }
Routine Size       33 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Comm_rank - returns the rank of the current processor.
5.12 MPI_Finalize()

Routine Name       MPI_Finalize()
Routine Location   MatlabMPI\MPI_Finalize.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       34 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Finalize - Called at the end of a MatlabMPI program.
                   MPI_Finalize()
                   Called at the end of an MPI program (currently empty).
Parent Routines
Child Routines
5.13 MPI_Init()

Routine Name       MPI_Init()
Routine Location   MatlabMPI\MPI_Init.m
Routine Objective
Routine Arguments
Routine Outputs
Routine Size       34 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Init - Called at the start of an MPI program.
                   MPI_Init()
                   Called at the beginning of an MPI program (currently empty).
Parent Routines
Child Routines
5.14 MPI_Probe()

Routine Name       MPI_Probe()
Routine Location   MatlabMPI\MPI_Probe.m
Routine Objective
Routine Arguments  { comm, message_rank, message_tag, source, tag }
Routine Outputs    { message_rank, message_tag }
Routine Size       89 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Probe - Returns a list of all messages waiting to be received.
                   [message_rank, message_tag] = MPI_Probe( source, tag, comm )
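A sketch of polling with MPI_Probe and then draining whatever is waiting (the variable names are illustrative; the '*' wildcards follow the routine comment shown in Section 6.14):

% Hypothetical polling loop: receive every message currently waiting.
comm = MPI_COMM_WORLD;
[ranks, tags] = MPI_Probe('*', '*', comm);      % wildcard source and tag
for i = 1:length(ranks)
  data = MPI_Recv(ranks(i), tags(i), comm);     % receive each pending message
end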
5.15 MPI_Recv()

Routine Name       MPI_Recv()
Routine Location   MatlabMPI\MPI_Recv.m
Routine Objective
Routine Arguments  { comm, source, tag, varargout }
Routine Outputs    { varargout }
Routine Size       64 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Recv - Receives message from source.
                   [var1, var2, ...] = MPI_Recv( source, tag, comm )
5.16 MPI_Run()

Routine Name       MPI_Run()
Routine Location   MatlabMPI\MPI_Run.m
Routine Objective
Routine Arguments  { defscommands, m_file, machines, n_proc }
Routine Outputs
Routine Size       145 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Run - Run m_file on multiple processors.
                   machines = {};
                     Run on the local processor.
                   machines = {'machine1' 'machine2'};
                     Run on multiple processors.
                   machines = {'machine1:dir1' 'machine2:dir2'};
                     Run on multiple processors and communicate via dir1 and dir2,
                     which must be visible to both machines.
                   If machine1 is the local cpu, then defscommands will contain
                   the commands that need to be run locally, via eval(defscommands).
Parent Routines
Child Routines     MatMPI_Comm_init()
                   MatMPI_Commands()
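A sketch of a typical launch using the forms documented above (the script and machine names are placeholders):

% Hypothetical launch of 4 copies of my_script.m across two machines.
defscommands = MPI_Run('my_script', 4, {'machine1' 'machine2'});
eval(defscommands);   % runs the copy assigned to the local machine, as noted above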
5.17 MPI_Send()

Routine Name       MPI_Send()
Routine Location   MatlabMPI\MPI_Send.m
Routine Objective
Routine Arguments  { comm, dest, tag, varargin }
Routine Outputs
Routine Size       51 Line(s)
Routine Author
Routine Date
Routine Comment    MPI_Send - Sends variables to dest.
6 ROUTINES BODY

6.1 MatMPI_Buffer_file()

function buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
% MatMPI_Buffer_file - Helper function for creating buffer file name.
%
%   buffer_file = MatMPI_Buffer_file(source,dest,tag,comm)
%
machine_id = comm.machine_id(1,dest+1);
dir = comm.machine_db.dir{1,machine_id};
buffer_file = [dir,'/p',num2str(source),'_p',num2str(dest),'_t', ...
    num2str(tag),'_buffer.mat'];
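For illustration only (this call is not part of the listing; the communicator fields would normally be filled in by MatMPI_Comm_init), the helper maps a (source, dest, tag) triple to a file name in the destination's communication directory:

% Hypothetical call with a hand-built 2-process communicator.
comm.machine_id = [1 1];                  % both ranks live on machine 1
comm.machine_db.dir = {'./MatMPI'};       % assumed communication directory
f = MatMPI_Buffer_file(0,1,7,comm)        % f = './MatMPI/p0_p1_t7_buffer.mat'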
6.2 MatMPI_Comm_dir()

6.3 MatMPI_Comm_init()
machine = host;

% Initialize comm.
MPI_COMM_WORLD.rank = -1;
MPI_COMM_WORLD.size = n_proc;
MPI_COMM_WORLD.save_message_flag = 0;
MPI_COMM_WORLD.group = (1:n_proc)-1;
MPI_COMM_WORLD.machine_id = zeros(1,n_proc);

% Initialize machine database.
machine_db.n_machine = n_m;               % Number of machines.
machine_db.type = cell(1,n_m);            % Unix or Windows.
machine_db.machine = cell(1,n_m);         % Machine names.
machine_db.dir = cell(1,n_m);             % Communication directory.
machine_db.matlab_command = cell(1,n_m);  % Matlab command.
machine_db.remote_launch = cell(1,n_m);   % Remote launch command.
machine_db.remote_flags = cell(1,n_m);    % Remote launch flags.
machine_db.n_proc = zeros(1,n_m);         % # processes on this machine.
machine_db.id_start = zeros(1,n_m);       % Start index.
machine_db.id_stop = zeros(1,n_m);        % Stop index.

% Start setting up machine id.
for i_rank=0:n_proc-1
  i_machine = mod(i_rank,n_m) + 1;
  machine_db.n_proc(1,i_machine) = machine_db.n_proc(1,i_machine) + 1;
end

% Get possibly user settings.
machine_db_settings = MatMPI_Comm_settings;

% Set machine_db values.
for i=1:n_m
  machine_db.type{1,i} = machine_db_settings.type;
  machine_db.machine{1,i} = host;
  machine_db.dir{1,i} = [pwd '/MatMPI'];
  machine_db.matlab_command{1,i} = machine_db_settings.matlab_command;
  machine_db.remote_launch{1,i} = machine_db_settings.remote_launch;
  machine_db.remote_flags{1,i} = machine_db_settings.remote_flags;
  if (i == 1)
    machine_db.id_start(1,i) = 1;
    machine_db.id_stop(1,i) = machine_db.id_start(1,i) + ...
        machine_db.n_proc(1,i) - 1;
  else
    machine_db.id_start(1,i) = machine_db.id_stop(1,i-1) + 1;
    machine_db.id_stop(1,i) = machine_db.id_start(1,i) + ...
        machine_db.n_proc(1,i) - 1;
  end
  id_start = machine_db.id_start(1,i);
  id_stop = machine_db.id_stop(1,i);
  MPI_COMM_WORLD.machine_id(1,id_start:id_stop) = i;

  % Check if there is a machines list.
  if (n_machines > 0)
    machine = machines{i};
    machine_db.machine{1,i} = machine;
    % Check if there is a directory appended.
    dir_sep = findstr(machine,':');
    if (dir_sep)
      machine_piece = machine(1,1:dir_sep-1);
      dir_piece = machine(1,(dir_sep+1):end);
      machine_db.machine{1,i} = machine_piece;
      machine_db.dir{1,i} = dir_piece;
    end
  end
end

% Add machine_db to communicator.
MPI_COMM_WORLD.machine_db = machine_db;

% Write out.
comm_mat_file = 'MatMPI/MPI_COMM_WORLD.mat';
save(comm_mat_file,'MPI_COMM_WORLD');
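The last two lines above write the communicator to MatMPI/MPI_COMM_WORLD.mat; it is normally reloaded by the defs file that MatMPI_Commands() generates. For illustration only, the same file can be inspected by hand:

% Hypothetical manual inspection of the communicator written above
% (normally done by the generated defs file, not by the user).
load('MatMPI/MPI_COMM_WORLD.mat');      % restores the MPI_COMM_WORLD struct
MPI_COMM_WORLD.size                     % number of processes (n_proc)
MPI_COMM_WORLD.machine_db.dir           % communication directory per machine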
6.4 MatMPI_Comm_settings()

6.5 MatMPI_Commands()
function [defscommands, unix_command] = ...
    MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
% MatMPI_Commands - Commands to launch a matlab script remotely.
%
%   [defscommands, unix_command] = ...
%     MatMPI_Commands(m_file,rank,MPI_COMM_WORLD)
%

% Set newline string.
nl = sprintf('\n');

% Create filename each Matlab job will run at startup.
defsbase = ['MatMPI/defs' num2str(rank)];
defsfile = [defsbase '.m'];
comm_mat_file = 'MatMPI/MPI_COMM_WORLD.mat';
outfile = ['MatMPI/' m_file '.' num2str(rank) '.out'];
6.6 MatMPI_Delete_all()

6.7 MatMPI_lock_file()

6.8 MatMPI_Save_messages()
6.9 MPI_Abort()
function MPI_Abort()
% MPI_Abort - Aborts any currently running MatlabMPI sessions.
%
%   MPI_Abort()
%
%   Will abort any currently running MatlabMPI sessions
%   by looking for leftover Matlab jobs and killing them.
%   Cannot be used after MatMPI_Delete_all.
%

% Get possibly user defined settings.
machine_db_settings = MatMPI_Comm_settings;

% Get list of pid files.
pid_files = dir('MatMPI/pid.*.*');
s = size(pid_files);
n_files = s(1);

% Create single quote.
q = strrep(' ',' ','''');

% Check if there are any files.
if (n_files < 1)
  disp('No pid files found');
else
  % Loop over each file.
  for i_file=1:n_files
    % Get file name.
    file_name = pid_files(i_file).name;
    % Check if there is a directory appended.
    dir_sep = findstr(file_name,'.');
    if (dir_sep)
      % Parse file name.
      machine = file_name(1,(dir_sep(1)+1):(dir_sep(end)-1));
      pid = file_name(1,(dir_sep(end)+1):end);
      % Kill process.
6.10 MPI_Bcast()
function varargout = MPI_Bcast( source, tag, comm, varargin )
% MPI_Bcast - broadcast variables to everyone.
%
%   [var1, var2, ...] = ...
%     MPI_Bcast( source, tag, comm, var1, var2, ... )
%
%   Broadcast variables to everyone in comm.
%
%   Sender blocks until all the messages are received,
%   unless MatMPI_Save_messages(1) has been called.
%

% Get processor rank.
my_rank = MPI_Comm_rank(comm);
comm_size = MPI_Comm_size(comm);

% If not the source, then receive the data.
if (my_rank ~= source)
  varargout = MPI_Recv( source, tag, comm );
end

% If the source, then send the data.
if (my_rank == source)

  % Create data file.
  buffer_file = MatMPI_Buffer_file(my_rank,source,tag,comm);

  % Save varargin to file.
  save(buffer_file,'varargin');

  % Loop over everyone in comm and create link to data file.
  link_command = '';
  for i=0:comm_size-1
    % Don't do source.
    if (i ~= source)
      % Create buffer link name.
      buffer_link = MatMPI_Buffer_file(my_rank,i,tag,comm);
      % Append to link_command.
      link_command = [link_command 'ln -s ' buffer_file ' ' buffer_link ' ; '];
    end
  end

  % Create symbolic link to data_file.
  % unix(link_command);

  % Write the unix commands to a .sh text file
  % to fix Matlab's problem with very long commands sent to unix().
  unix_link_file = ['MatMPI/Unix_Link_Commands_t' num2str(tag) '.sh'];
  fid = fopen(unix_link_file,'wt');
  fwrite(fid,link_command);
  fclose(fid);
  unix(['/bin/sh ' unix_link_file]);
  delete(unix_link_file);

  % Loop over everyone in comm and create lock file.
  for i=0:comm_size-1
    % Don't do source.
    if (i ~= source)
      % Get lock file name.
      lock_file = MatMPI_Lock_file(my_rank,i,tag,comm);
      % Create lock file.
      fclose(fopen(lock_file,'w'));
    end
  end

  % Loop over everyone in comm and wait for each lock file to be deleted.
  for i=0:comm_size-1
    % Don't do source.
    if (i ~= source)
      % Get lock file name.
      lock_file = MatMPI_Lock_file(my_rank,i,tag,comm);

      % Spin on lock file until it is deleted.
      loop = 0;
      while exist(lock_file) ~= 0
        % pause(0.01);
        loop = loop + 1;
      end

    end
  end

  % Delete buffer file.
  if (not(comm.save_message_flag))
    delete(buffer_file);
  end

end
end
6.11 MPI_Comm_rank()

6.12 MPI_Finalize()
function MPI_Finalize()
% MPI_Finalize - Called at the end of a MatlabMPI program.
%
%   MPI_Finalize()
%
%   Called at the end of an MPI program (currently empty).
%
6.13 MPI_Init()

function MPI_Init()
% MPI_Init - Called at the start of an MPI program.
%
%   MPI_Init()
%
%   Called at the beginning of an MPI program (currently empty).
%
6.14 MPI_Probe()

function [message_rank, message_tag] = MPI_Probe( source, tag, comm )
% MPI_Probe - Returns a list of all messages waiting to be received.
%
%   [message_rank, message_tag] = MPI_Probe( source, tag, comm )
%
%   Source and tag can be an integer or a wildcard '*'.
%

% Get processor rank.
my_rank = MPI_Comm_rank(comm);

% Get lock file names.
lock_file = MatMPI_Lock_file(source,my_rank,tag,comm);

% Check to see if there are any messages.
message_files = dir(lock_file);
n_files = length(message_files);

% Create single quote.
q = strrep(' ',' ','''');

% Check if there are any files.
if (n_files < 1)
  % Set default (negative) return values.
  message_rank = '';
  message_tag = '';
else
  % Create arrays to store rank and tag.
  message_rank = zeros(n_files,1);
  message_tag = message_rank;
  % Set strings to search for (THIS IS VERY BAD, SHOULD HIDE THIS).
  source_str = 'p';
  dest_str = ['_p' num2str(my_rank) '_t'];
  tag_str = '_lock.mat';
  source_len = length(source_str);
  dest_len = length(dest_str);
  tag_len = length(tag_str);

  % Step through each file name and strip out rank and tag.
  for i_file=1:n_files
    % Get file name.
    file_name = message_files(i_file).name;
    % Find location of each of the strings.
    source_pos = findstr(file_name,source_str);
    dest_pos = findstr(file_name,dest_str);
    tag_pos = findstr(file_name,tag_str);
    % If we have found the locations, then extract rank and tag.
    if (source_pos & dest_pos & tag_pos)
      message_rank(i_file) = ...
          str2num(file_name(1,(source_len+1):(dest_pos-1)));
      message_tag(i_file) = ...
          str2num(file_name(1,(dest_pos+dest_len):(tag_pos-1)));
    end
  end
end
6.15 MPI_Recv()

function varargout = MPI_Recv( source, tag, comm )
% MPI_Recv - Receives message from source.
%
%   [var1, var2, ...] = MPI_Recv( source, tag, comm )
%
%   Receives message from source with a given tag
%   and returns the variables in the message.
%
%   source can be an integer from 0 to comm_size-1
%   tag can be any integer
%   comm is an MPI Communicator (typically a copy of MPI_COMM_WORLD)
%
6.16 MPI_Run()
function defscommands = MPI_Run( m_file, n_proc, machines )
% MPI_Run - Run m_file on multiple processors.
%
%   defscommands = MPI_Run( m_file, n_proc, machines )
%
%   Runs n_proc copies of m_file on machines, where
%
%   machines = {};
%     Run on the local processor.
%
%   machines = {'machine1' 'machine2'};
%     Run on multiple processors.
%
%   machines = {'machine1:dir1' 'machine2:dir2'};
%     Run on multiple processors and communicate via dir1 and dir2,
%     which must be visible to both machines.
%
%   If machine1 is the local cpu, then defscommands will contain
%   the commands that need to be run locally, via eval(defscommands).
%

% Check if the directory MatMPI exists.
if exist('MatMPI','dir') ~= 0
  % error('MatMPI directory already exists: rename or remove with MatMPI_Delete_all');
else
  mkdir('MatMPI');
end

% Create working directory.
% mkdir('MatMPI');

% Get host.
host = getenv('HOST');

% Get number of machines to launch on.
n_machines = size(machines,2);

% Create generic comm.
MPI_COMM_WORLD = MatMPI_Comm_init(n_proc,machines);

% Set newline string.
nl = sprintf('\n');
% Get single quote character.
q = strrep(' ',' ','''');

% Initialize unix command launch on all the different machines.
unix_launch = '';

% Get number of machines.
n_m = MPI_COMM_WORLD.machine_db.n_machine;

% Loop backwards over each machine.
for i_m=n_m:-1:1

  % Get number of processes to launch on this machine.
  n_proc_i_m = MPI_COMM_WORLD.machine_db.n_proc(1,i_m);

  if (n_proc_i_m >= 1)

    % Get machine info.
    machine = MPI_COMM_WORLD.machine_db.machine{1,i_m};
    remote_launch = MPI_COMM_WORLD.machine_db.remote_launch{1,i_m};
    remote_flags = MPI_COMM_WORLD.machine_db.remote_flags{1,i_m};

    % Get starting and stopping rank.
    i_rank_start = MPI_COMM_WORLD.machine_db.id_start(1,i_m) - 1;
    i_rank_stop = MPI_COMM_WORLD.machine_db.id_stop(1,i_m) - 1;

    % Initialize unix command that will be run on each node.
    unix_matlab = '';

    % Loop backwards over number of processes.
    for i_rank=i_rank_stop:-1:i_rank_start

      % Build commands.
      [defscommands, unix_matlab_i_rank] = ...
          MatMPI_Commands(m_file,i_rank,MPI_COMM_WORLD);
      unix_matlab = [unix_matlab unix_matlab_i_rank];

    end

    % Create a file name.
    % unix_matlab_file = ['MatMPI/Unix_Commands.' machine '.sh'];
    unix_matlab_file = ['MatMPI/Unix_Commands.' machine '.' ...
        num2str(i_rank_start) '.sh'];

    % Append delete command.
    unix_matlab = [unix_matlab 'rm ' unix_matlab_file ' ; ' nl];

    % Put commands in a file.
    fid = fopen(unix_matlab_file,'wt');
    fwrite(fid,unix_matlab);
    fclose(fid);

    % Create unix commands to launch this file.
    if (strcmp(machine,host))
      unix_launch_i_m = ['/bin/sh ./' unix_matlab_file ' &' nl];
    else
      unix_launch_i_m = [remote_launch machine ' ' remote_flags ' ' ...
          q 'cd ' pwd ' ; /bin/sh ./' unix_matlab_file ' &' q ' &' nl];
    end

    unix_launch = [unix_launch unix_launch_i_m];
  end
end

% Execute all launches in a single unix call.
unix_launch
% unix(unix_launch);
6.17 MPI_Send()