MPI
Install mpi4py
Download Open MPI, then build and install it:
tar -zxf openmpi-X.X.X.tar.gz
cd openmpi-X.X.X
./configure --prefix=/usr/local/openmpi
make all
make install
Then install the Python bindings:
pip install mpi4py
Overview
- MPI.Comm, the base class of communicators
- MPI.COMM_SELF and MPI.COMM_WORLD, the predefined communicators
- MPI.Comm.Get_size(), the number of processes in a communicator
- MPI.Comm.Get_rank(), the rank of the calling process
- MPI.Comm.Get_group(), returns an instance of the MPI.Group class
Point-to-Point Communications
- MPI provides a set of send and receive functions allowing the communication of typed data with an associated tag
Blocking Communications
- MPI provides basic send and receive functions that are blocking
- MPI.Comm.Send(), MPI.Comm.Recv() and MPI.Comm.Sendrecv()
- MPI.Comm.send(), MPI.Comm.recv() and MPI.Comm.sendrecv(), communicate general Python objects
Nonblocking Communications
- MPI provides nonblocking send and receive functions
- MPI.Comm.Isend() and MPI.Comm.Irecv(), initiate send and receive operations and return an MPI.Request instance
- MPI.Request.Test(), MPI.Request.Wait() and MPI.Request.Cancel(), manage completion
Persistent Communications
- MPI.Comm.Send_init() and MPI.Comm.Recv_init(), create persistent requests for a send and a receive operation, respectively, returning an MPI.Prequest instance
- MPI.Prequest.Start(), start the actual communication
Collective Communications
- Barrier synchronization across all group members
- Global communication functions
- Broadcast data from one member to all members of a group
- Gather data from all members to one member of a group
- Scatter data from one member to all members of a group
- MPI.Comm.Bcast(), MPI.Comm.Scatter(), MPI.Comm.Gather(), MPI.Comm.Allgather(), MPI.Comm.Alltoall() and MPI.Comm.Alltoallw(), collective communications of memory buffers
- MPI.Comm.bcast(), MPI.Comm.scatter(), MPI.Comm.gather(), MPI.Comm.allgather() and MPI.Comm.alltoall(), communicate general Python objects
- Global reduction operations such as sum, maximum, minimum, etc.
- MPI.Comm.Reduce(), MPI.Comm.Reduce_scatter(), MPI.Comm.Allreduce(), MPI.Intracomm.Scan() and MPI.Intracomm.Exscan(), communicate on memory buffers
- MPI.Comm.reduce(), MPI.Comm.allreduce(), MPI.Intracomm.scan() and MPI.Intracomm.exscan(), communicate general Python objects
Dynamic Process Management
- MPI.Intracomm.Spawn(), returns an MPI.Intercomm instance; in the child, MPI.Comm.Get_parent() retrieves the matching intercommunicator
- Server side: MPI.Open_port(), MPI.Publish_name(), MPI.Intracomm.Accept(), MPI.Unpublish_name(), MPI.Close_port()
- Client side: MPI.Lookup_name(), MPI.Intracomm.Connect(), which returns an MPI.Intercomm instance, and MPI.Comm.Disconnect()
Parallel Input/Output
- MPI.File, MPI.File.Open(), MPI.File.Close()
Timers
- MPI.Wtime() and MPI.Wtick(), wall-clock time in seconds and the timer resolution
Tutorials
# Point-to-point: send a Python object from rank 0 to rank 1
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
    print('%d of %d, Create data ...' % (rank, comm.Get_size()))
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print('%d of %d, Received data ...' % (rank, comm.Get_size()))
    print(data, type(data))
else:
    print('%d of %d, Not used ...' % (rank, comm.Get_size()))
# Relay a Python object along a chain of processes
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
if rank == 0:
    # initial data
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=rank+1, tag=11)
elif rank == size-1:
    # print out data in the last process
    data = comm.recv(source=rank-1, tag=11)
    print('%d of %d, Received data ...' % (rank, comm.Get_size()))
    print(data, type(data))
else:
    # relay data
    data = comm.recv(source=rank-1, tag=11)
    comm.send(data, dest=rank+1, tag=11)
# Buffer-based communication of a NumPy array (uppercase Send/Recv)
from mpi4py import MPI
import numpy
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
    data = numpy.arange(100, dtype=numpy.float64)
    comm.Send(data, dest=1, tag=13)
    print(data)
elif rank == 1:
    data = numpy.empty(100, dtype=numpy.float64)
    comm.Recv(data, source=0, tag=13)
    print('%d of %d, Received data ...' % (rank, comm.Get_size()))
    print(data)
# Broadcast a Python object from rank 0 to all processes
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
    print('%d of %d, Create data ...' % (rank, comm.Get_size()))
    data = {'a': 7, 'b': 3.14}
else:
    data = None
data = comm.bcast(data, root=0)
print('%d of %d, Received data ...' % (rank, comm.Get_size()))
print(data, type(data))