Parallel programming in high-performance computing -- MPI

Original link https://www.cnblogs.com/52mm/articles/a27.html

Commonly used MPI functions

  • MPI_Init(&argc, &argv)

Initializes the MPI execution environment, including any internal global state. It is the first call of an MPI program and performs all of the initialization the MPI program needs; the first executable MPI statement of every MPI program should be this one.

  • MPI_Comm_rank(communicator, &myid)

Obtains the rank (process ID) of the current process within the communicator. With it, each process can distinguish itself from the other processes, which is the basis for parallel work and cooperation among the processes.

  • MPI_Comm_size(communicator, &numprocs)

Obtains the number of processes contained in the communicator. Through this call, a process learns how many processes are running in parallel within the given communication domain.

  • MPI_Finalize()

Ends the parallel programming environment. It is the last MPI call of an MPI program and terminates the MPI run; it should be the last executable MPI statement of the program, otherwise the results of running the program are unpredictable.
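A minimal sketch of how these four calls frame an MPI program (the printed message is illustrative):

 #include <stdio.h>
 #include "mpi.h"

 int main(int argc, char **argv)
 {
  int myid, numprocs;
  MPI_Init(&argc, &argv);                   /* first MPI call */
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);     /* rank of this process */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs); /* total number of processes */
  printf("Hello from process %d of %d\n", myid, numprocs);
  MPI_Finalize();                           /* last MPI call */
  return 0;
 }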

  • int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

Sends count data elements of type datatype from the send buffer to the destination process. dest is the rank of the destination process within the communication domain, and tag is the tag of this message; with the tag, this send can be distinguished from other messages the same process sends to the same destination. The send buffer consists of count consecutive data elements of type datatype, starting at address buf. Note that the message length is specified in units of the data type, not in bytes.

  • int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag,
    MPI_Comm comm, MPI_Status *status)

Receives a message from the specified source process; the datatype and tag of the message must be consistent with the datatype and tag specified by the receive operation, and the number of data elements contained in the received message may not exceed count.
  The receive buffer consists of count consecutive elements of the type specified by datatype, starting at address buf. If a shorter message arrives, only the locations corresponding to that message are modified. count may be zero, in which case the data part of the message is empty.

Through the fields status.MPI_SOURCE, status.MPI_TAG and status.MPI_ERROR of the status returned by the receive operation, one can obtain the rank of the process that sent the data, the tag of the transmitted data, and the error code of the operation.
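A short sketch of a matched send/receive pair and of reading the returned status, assuming it runs between MPI_Init and MPI_Finalize (the ranks, tag and value are illustrative):

 int value, myid;
 MPI_Status status;
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 if (myid == 0) {
  value = 42;
  MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);          /* one int to rank 1, tag 99 */
 } else if (myid == 1) {
  MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status); /* type, source and tag match */
  printf("got %d from %d with tag %d\n",
         value, status.MPI_SOURCE, status.MPI_TAG);
 }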

Virtual process

  A virtual process (MPI_PROC_NULL) is a hypothetical process that does not exist; its main role is to act as the source or destination of MPI point-to-point communication. Virtual processes are introduced to make the communication statements easier to write in certain situations. When a real process sends data to, or receives data from, a virtual process, the real process returns successfully at once, as if it had executed a no-op.
  In many cases it is very convenient to specify a virtual source or destination: it can greatly simplify the code that handles boundaries, and it makes the program simpler and easier to understand. This technique is often used together with the combined send-receive operation. A real process that sends a message to the virtual process MPI_PROC_NULL returns successfully immediately; a real process that receives a message from MPI_PROC_NULL also returns successfully immediately, and its receive buffer is not modified.

Inter-process communication must be carried out through a communicator. When the MPI environment is initialized, two communicators are created automatically: MPI_COMM_WORLD, which contains all the processes of the program, and MPI_COMM_SELF, which each process has for itself and which contains only that process. MPI also provides a special process ID, MPI_PROC_NULL, which represents the empty process (a process that does not exist); communicating with MPI_PROC_NULL is equivalent to a no-op and has no effect on the execution of the program.
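A small sketch of the no-op behaviour (the values are illustrative): both calls below complete successfully and immediately, and the receive leaves its buffer unchanged.

 double out = 1.0, in = 0.0;
 MPI_Status status;
 MPI_Send(&out, 1, MPI_DOUBLE, MPI_PROC_NULL, 0, MPI_COMM_WORLD);         /* returns at once */
 MPI_Recv(&in, 1, MPI_DOUBLE, MPI_PROC_NULL, 0, MPI_COMM_WORLD, &status); /* 'in' is not modified */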

  • MPI_SENDRECV(sendbuf,sendcount,sendtype,dest,sendtag,recvbuf,recvcount,
    recvtype, source,recvtag,comm,status)

    sendbuf: initial address of the send buffer (choice)
    sendcount: number of elements to send (integer)
    sendtype: data type of the elements in the send buffer (handle)
    dest: rank of the destination process (integer)
    sendtag: send message tag (integer)
    recvbuf: initial address of the receive buffer (choice)
    recvcount: maximum number of elements to receive (integer)
    recvtype: data type of the elements in the receive buffer (handle)
    source: rank of the source process (integer)
    recvtag: receive message tag (integer)
    comm: communicator (handle)
    status: returned status object (status)
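A brief sketch of the combined send-receive used for a one-dimensional shift, with MPI_PROC_NULL at the boundaries as discussed above (variable names are illustrative; the fragment is assumed to run between MPI_Init and MPI_Finalize):

 int myid, numprocs, left, right;
 double out, in = 0.0;
 MPI_Status status;
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
 left  = (myid == 0)            ? MPI_PROC_NULL : myid - 1;
 right = (myid == numprocs - 1) ? MPI_PROC_NULL : myid + 1;
 out = (double)myid;
 /* each process sends to its right neighbour and receives from its left
    neighbour in a single call; boundary processes talk to MPI_PROC_NULL */
 MPI_Sendrecv(&out, 1, MPI_DOUBLE, right, 0,
              &in,  1, MPI_DOUBLE, left,  0,
              MPI_COMM_WORLD, &status);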

The MPI message-passing process can be divided into three stages:
  (1) Message assembly: the data to be sent is taken from the send buffer and combined with the message envelope to form a complete message.
  (2) Message transfer: the assembled message is transferred from the sender to the receiver.
  (3) Message disassembly: the data is extracted from the received message and placed into the receive buffer.
  Type matching is required in all three stages: (1) during message assembly, the types of the variables in the send buffer must match the type specified by the send operation; (2) during message transfer, the type specified by the send operation must match the type specified by the corresponding receive operation; (3) during message disassembly, the types of the variables in the receive buffer must match the type specified by the receive operation.
  In summary, the type-matching rules can be stated as follows:

For communication of typed data, the sender and the receiver use the same data type.
For communication of untyped data, both the sender and the receiver use MPI_BYTE as the data type.
For communication of packed data, both the sender and the receiver use MPI_PACKED.
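For the packed case, a minimal sketch using MPI_Pack/MPI_Unpack, where the transmitted data type is MPI_PACKED (the buffer size and values are illustrative; the fragment is assumed to run between MPI_Init and MPI_Finalize):

 char packbuf[100];
 int position = 0, myid, ival = 7;
 double dval = 3.14;
 MPI_Status status;
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 if (myid == 0) {
  /* pack an int and a double into one buffer, then send it as MPI_PACKED */
  MPI_Pack(&ival, 1, MPI_INT, packbuf, 100, &position, MPI_COMM_WORLD);
  MPI_Pack(&dval, 1, MPI_DOUBLE, packbuf, 100, &position, MPI_COMM_WORLD);
  MPI_Send(packbuf, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
 } else if (myid == 1) {
  /* receive as MPI_PACKED, then unpack in the same order */
  MPI_Recv(packbuf, 100, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
  MPI_Unpack(packbuf, 100, &position, &ival, 1, MPI_INT, MPI_COMM_WORLD);
  MPI_Unpack(packbuf, 100, &position, &dval, 1, MPI_DOUBLE, MPI_COMM_WORLD);
 }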

  • double MPI_Wtime(void)

Returns a double-precision floating-point number of seconds, representing the elapsed wall-clock time from some moment in the past to the time of the call.

How to time a section of code:
  double starttime, endtime;
  ...
  starttime = MPI_Wtime();
  ... /* section of code to be timed */
  endtime = MPI_Wtime();
  printf("That took %f seconds\n", endtime - starttime);

  • int MPI_Get_processor_name(char *name, int *resultlen)

In practice, when writing parallel programs with MPI, one often wants to write intermediate or final results to files created by the program itself. For processes running on different machines, it is often desirable to include the machine name in the name of the output file, or to perform different actions depending on the machine, and the rank of a process alone is not sufficient for this. MPI therefore provides a dedicated call so that each process can obtain, at run time, the name of the machine on which it is running. MPI_GET_PROCESSOR_NAME returns the name of the machine on which the calling process runs.
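A brief sketch of this call (the print format is illustrative; the fragment is assumed to run between MPI_Init and MPI_Finalize):

 char name[MPI_MAX_PROCESSOR_NAME];
 int myid, resultlen;
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 MPI_Get_processor_name(name, &resultlen);  /* name of the machine this process runs on */
 printf("process %d runs on %s\n", myid, name);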

Example

 #include "mpi.h"
 #include <stdio.h>
 int main(argc, argv)
 int argc;
 char **argv;
 {
  int rank, size, i, buf[1];
  MPI_Status status;
  MPI_Init( &argc, &argv );/*初始化*/
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );/*进程号*/
  MPI_Comm_size( MPI_COMM_WORLD, &size );/*总的进程个数*/
  if (rank == 0) {
   for (i=0; i<100*(size-1); i++) {
   MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, 
     MPI_ANY_TAG, MPI_COMM_WORLD, &status );/*使用任意源和任意标识接收*/
   printf( "Msg=%d from %d with tag %d\n", 
   buf[0], status.MPI_SOURCE, status.MPI_TAG );
   }
  }
 else {
 for (i=0; i<100; i++) {
  buf[0]=rank+i;
  MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD );/*发送*/
  }
 }
 MPI_Finalize();/*结束*/
 }
 

Explanation:

  In the receive operation, by using an arbitrary source and an arbitrary tag, the receive can accept data sent to this process by any process with any tag; however, the datatype of the message must be consistent with the datatype of the receive operation.
  This is an example of using an arbitrary source and an arbitrary tag. The ROOT process (process 0) receives messages from all other processes and then prints the content, source and tag of each message.

Reduction function

int MPI_Reduce(
void *input_data,      /* pointer to the memory block holding the data to send */
void *output_data,     /* pointer to the memory block receiving the (output) result */
int count,             /* number of data elements */
MPI_Datatype datatype, /* data type */
MPI_Op operator,       /* reduction operation */
int dest,              /* rank of the process that receives the (output) result */
MPI_Comm comm);        /* communicator, specifying the scope of the communication */

Example (trapezoidal-rule integration)

#include "stdafx.h"
#include <iostream>
#include<math.h>
#include "mpi.h"
using namespace std;
const double a = 0.0;
const double b = 3.1415926;
int n = 100;
double h = (b - a) / n;
double trap(double a, double b, int n, double h)
{
    double*x = new double[n + 1];
    double*f = new double[n + 1];
    double inte = (sin(a) + sin(b)) / 2;
    for (int i = 1; i<n + 1; i++) {
        x[i] = x[i - 1] + h;
        f[i] = sin(x[i]);
        inte += f[i];
    }
    inte = inte*h;
    return 0;
}
int main(int argc, char * argv[])
{
    int myid, nprocs;
    int local_n;
    double local_a;
    double local_b;
    double total_inte;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);   /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* get number of processes */
    local_n = n / nprocs;
    local_a = a + myid*local_n*h;
    local_b = local_a + local_n*h;
    double local_inte = trap(local_a, local_b, local_n, h);
    /*
    @local_inte:send buffer;
    @total_inte:receive buffer;
    @MPI_SUM:MPI_Op;
    @dest=0,rank of the process obtaining the result.
    */
    MPI_Reduce(&local_inte, &total_inte, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
    {
        printf("integral output is %d", total_inte);
    }
    MPI_Finalize();
    return 0;
}





Origin blog.csdn.net/qq_37703846/article/details/103994511