MPI Basics (Part 1)

Preface:

MPI (Message Passing Interface) is a cross-language communication protocol for writing parallel programs. Unlike OpenMP, which parallelizes within a shared-memory process, MPI is a parallel programming technique based on message passing. MPI is a standard programming interface, not a particular programming language: the standard defines a set of portable programming interfaces, and different vendors and organizations provide their own implementations of it, each with its own characteristics. Because MPI provides a unified programming interface, a programmer only needs to design the parallel algorithm and can then use any suitable MPI library to realize message-passing parallel computing. MPI supports a variety of operating systems, including Windows and most UNIX-like systems.

1. The first MPI program

Let's use the C language as an example and write your first MPI program!

First of all, include the header file <mpi.h>, which declares all of the MPI functions we will use. Beyond that, what distinguishes an MPI program from an ordinary C program is that a begin function and an end function mark off the MPI section; everything you want to run in parallel goes between them. Now give it a try!

Of particular note: this section and all of the following sections assume that the example programs are run with 4 processes.

Although the output is no different from that of an ordinary program, you have taken the first step!
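
If you have not built an MPI program before, one common way to compile and run the examples with 4 processes is shown below, assuming an MPI implementation such as MPICH or Open MPI is installed (the file name hello.c is illustrative, and some installations provide mpiexec instead of mpirun):

mpicc hello.c -o hello
mpirun -np 4 ./hello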

Function Description:

int MPI_Init(int *argc, char ***argv)

MPI_Init enters the MPI environment and completes all initialization; it marks the beginning of the parallel code.

int MPI_Finalize(void)

MPI_Finalize exits the MPI environment and marks the end of the parallel code; if it is not the last executable MPI statement in the program, the results are undefined.

#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv)
{ 
    
    MPI_Init(&argc, &argv);
    
    printf("Hello World!\n");
    
    MPI_Finalize();
   
    
    return 0;
}

2.

Gets the number of processes

In MPI programming, we often need to obtain the number of processes in a given communication domain (communicator) in order to determine the scale of the program.

A group of processes that can send messages to one another is called a communicator. Typically, MPI_Init() defines a communicator made up of all the processes started when the user launches the program; its default name is MPI_COMM_WORLD. This parameter is required by every MPI communication function, because it defines the set of processes that take part in the communication.

Function Description:

int MPI_Comm_size(MPI_Comm comm, int *size)

Gets the number of processes in the specified communication domain.
The first parameter is the communicator; the second returns the number of processes.

Experiment Description:

Use MPI_Comm_size to get the number of processes in the communicator and print it out.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int numprocs;
	MPI_Init(&argc, &argv);

	
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
	

	printf("Hello World! The number of processes is %d\n",numprocs);

	MPI_Finalize();
	return 0;
}

3.

Gets the process id

Similarly, we often need to obtain the id (rank) of the current process in order to decide which process should carry out which task.

This section also serves to reinforce the previous one.

Function Description:

int MPI_Comm_rank(MPI_Comm comm, int *rank)

Gets the id (rank) of the current process within the specified communication domain, so that it can distinguish itself from the other processes.
The first parameter is the communicator; the second returns the rank of the calling process.

Experiment Description:

In each process, use the function MPI_Comm_rank to get the id of the current process and print it out.

Output:

0
1
3
2

Because the execution order of a parallel program is nondeterministic, the order of your output may differ from the one shown here.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	MPI_Init(&argc, &argv);

    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	
	printf("Hello World!I'm rank %d of %d\n", myid, numprocs);

	MPI_Finalize();
	return 0;
}

4.

Gets the processor name

Sometimes in real applications a process may need to be migrated to a different processor. MPI provides a function for obtaining the processor's name, which makes supporting this kind of behavior simple.

Note that MPI itself does not define such migration.

Function Description:

int MPI_Get_processor_name(char *name, int *resultlen)

char *name : a unique identifier for the actual node;
int *resultlen : the length of the result returned in name;

Returns the name of the processor on which the call is made.

Experiment Description:

In each process, use the function MPI_Get_processor_name to get the name of the processor the current process is running on and print it out.

Output:

Hello, world. I am PROCESS_NAME.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int len;
	char name[MPI_MAX_PROCESSOR_NAME];
	MPI_Init(&argc, &argv);

    MPI_Get_processor_name (name, &len);

	printf("Hello, world. I am %s.\n", name);

	MPI_Finalize();
	return 0;
}

5.

Run time

In real programs, timing is a very useful capability.

In MPI we can use the MPI_Wtime function to measure the running time of parallel code, and MPI_Wtick to query the timer's resolution.

Function Description:

double MPI_Wtime(void)

double MPI_Wtick(void)

MPI_Wtime returns a floating-point number of seconds, representing the time elapsed since some arbitrary moment in the past.

MPI_Wtick returns the resolution of MPI_Wtime in seconds; it can be thought of as the length of one clock tick.

Experiment Description:

Use MPI_Wtime to measure the running time of the parallel code, and print the timer resolution obtained with MPI_Wtick between the two timing calls.

Output:

The output should be formatted as follows:

The precision is: 1e-06
Hello World!I'm rank ... of ..., running ... seconds.

#include<stdio.h>
#include<mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	double start, finish;
	
	MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

	//your code here
	start = MPI_Wtime();
	
	printf("The precision is: %g\n", MPI_Wtick());
	
	finish = MPI_Wtime();
	//your code here
	
	printf("Hello World!I'm rank %d of %d, running %f seconds.\n", myid, numprocs, finish-start);

	MPI_Finalize();
	return 0;
}
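
In the sample above the timed region contains only a printf call. In practice MPI_Wtime is placed around the real work; the following is a minimal sketch, where the local summation loop is a purely illustrative stand-in for the actual computation:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	long i;
	double sum = 0.0, start, finish;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

	start = MPI_Wtime();

	/* hypothetical workload: each process sums some numbers locally */
	for (i = 0; i < 10000000; i++)
		sum += i * 0.5;

	finish = MPI_Wtime();

	printf("Rank %d of %d: sum = %f, computed in %f seconds.\n",
	       myid, numprocs, sum, finish - start);

	MPI_Finalize();
	return 0;
}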

6.

Synchronization

In practice, we often need synchronization, for a number of reasons.

For example, we may want to ensure that all processes start a certain part of the parallel code at the same time, or that no process returns from a call before all of them have entered it.

In these cases we need the MPI_Barrier function.

Function Description:

int MPI_Barrier(MPI_Comm comm)

MPI_Comm comm : the communicator;

Blocks the caller until all processes in the communicator have entered the call; that is, the call returns in any one process only after every member of the communicator has reached it.

Experiment Description:

Call MPI_Barrier to synchronize the processes before the run-time measurement.

Output:

The output should be formatted as follows:

The precision is: 1e-06
Hello World!I'm rank ... of ..., running ... seconds.

In this sample program, whether or not the function is called may not change the final output, but that does not mean the effect is the same.

#include<stdio.h>
#include<mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	double start, finish;
	
	MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

	//your code here
	MPI_Barrier(MPI_COMM_WORLD);
	//end of your code
	
	start = MPI_Wtime();
	
	printf("The precision is: %g\n", MPI_Wtick());
	
	finish = MPI_Wtime();
	
	printf("Hello World!I'm rank %d of %d, running %f seconds.\n", myid, numprocs, finish-start);

	MPI_Finalize();
	return 0;
}
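
To see a case where the barrier clearly matters, consider the sketch below. The rank-dependent sleep (POSIX, from <unistd.h>) is only an illustrative stand-in for uneven workloads: every process reaches the barrier at a different time, and the measured wait shows that no process passes it until the slowest one has arrived.

#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	double start, finish;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

	sleep(myid);                     /* uneven "work": rank 0 finishes first, rank 3 last */

	start = MPI_Wtime();
	MPI_Barrier(MPI_COMM_WORLD);     /* no process continues until all ranks arrive */
	finish = MPI_Wtime();

	printf("Rank %d of %d waited about %f seconds at the barrier.\n",
	       myid, numprocs, finish - start);

	MPI_Finalize();
	return 0;
}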

7.

Message passing

In parallel programming, message passing accounts for a large share of the work. A good foundation is to have the basic send/receive operations between processes/nodes working correctly. Here we introduce the basic send and receive functions.

The most basic send/receive functions use a buffer as their endpoint and perform the operation specified by their parameters.

Function Description:

int MPI_Send(void* msg_buf_p, int msg_size, MPI_Datatype msg_type, int dest, int tag, MPI_Comm communicator)

Sends the contents of the send buffer to the destination process, where:

void* msg_buf_p : starting address of the send buffer;
int msg_size : number of data items to send;
MPI_Datatype msg_type : datatype of the data being sent;
int dest : id (rank) of the destination process;
int tag : message tag;
MPI_Comm communicator : communicator;
int MPI_Recv(void* msg_buf_p, int buf_size, MPI_Datatype msg_type, int source, int tag, MPI_Comm communicator, MPI_Status *status_p)

Receives a message from the source process into the receive buffer, where:

void* msg_buf_p : starting address of the receive buffer;
int buf_size : size of the receive buffer (the maximum number of items that can be received);
MPI_Datatype msg_type : datatype of the data to be received;
int source : id (rank) of the source process;
int tag : message tag;
MPI_Comm communicator : communicator;
MPI_Status *status_p : status object containing information about the message actually received

Experiment Description:

Here we take the process with id 0 as the root process. Every other process uses MPI_Send to send the string "hello world!" to the root, and the root then prints each message it receives.

Output:

A series of "hello world!".

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs, source;
	MPI_Status status;
	char message[100];

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    
    if(myid != 0) {
    	strcpy(message, "hello world!");
    	
    	//your code here
    	MPI_Send(message, strlen(message)+1, MPI_CHAR, 0, 99, MPI_COMM_WORLD);
    	//end of your code
	}
	else { //myid == 0
		for(source=1; source<numprocs; source++) {
			//your code here
			MPI_Recv(message, 100, MPI_CHAR, source, 99, MPI_COMM_WORLD, &status);
			//end of your code
			
			printf("%s\n", message);
		}
	}

	MPI_Finalize();
	return 0;
}
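
The status object filled in by MPI_Recv describes the message that actually arrived. As a small sketch of how it can be used, the root process could add the following lines right after the MPI_Recv call in the loop above to report the sender, the tag, and the actual number of characters received (recv_count is a hypothetical extra variable):

			int recv_count;
			MPI_Get_count(&status, MPI_CHAR, &recv_count);   /* items actually received */
			printf("received %d characters from rank %d (tag %d)\n",
			       recv_count, status.MPI_SOURCE, status.MPI_TAG);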

8.

Address offsets

In communication operations, we often need to pass or manipulate addresses, such as those of the send/receive buffers.

The address of a location in memory can be obtained with the MPI_Address function. (In later versions of the MPI standard this function was deprecated and replaced by MPI_Get_address; the original name is used here.)

Function Description:

int MPI_Address(void* location, MPI_Aint *address)

void* location : a location in the caller's memory;
MPI_Aint *address : the address of that location;

Experiment Description:

Given three temporary variables a, b and c, obtain the address offsets between a and b, and between a and c.

Output:

Since the variables used here are of type int, if their addresses are contiguous the output should be:
The distance between a and b is 4
The distance between a and c is 8

#include<stdio.h>
#include<mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs;
	MPI_Aint address1, address2, address3;
	int a, b, c, dist1, dist2;
	
	a = 1;
	b = 2;
	c = 3;
	
	MPI_Init(&argc, &argv);
	
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
	
	
	// your code here
	MPI_Address(&a, &address1);
	MPI_Address(&b, &address2);
	MPI_Address(&c, &address3);
	// end of your code
	
	dist1 = address2 - address1 ;
	dist2 = address3 - address1 ;
	
	if(myid == 0) {
		printf("The distance between a and b is %d\n", dist1);
		printf("The distance between a and c is %d\n", dist2);
	}

	MPI_Finalize();
	return 0;
}
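
Since MPI_Address is deprecated in newer versions of the MPI standard, here is a minimal sketch of the same experiment using its replacement, MPI_Get_address; the address differences are cast to long only so they can be printed portably:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid;
	int a = 1, b = 2, c = 3;
	MPI_Aint addr_a, addr_b, addr_c;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	MPI_Get_address(&a, &addr_a);
	MPI_Get_address(&b, &addr_b);
	MPI_Get_address(&c, &addr_c);

	if (myid == 0) {
		printf("The distance between a and b is %ld\n", (long)(addr_b - addr_a));
		printf("The distance between a and c is %ld\n", (long)(addr_c - addr_a));
	}

	MPI_Finalize();
	return 0;
}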

9.

Packing data (Pack)

Sometimes we want to send non-contiguous data, or data of different types, to other processes together, rather than sending the items inefficiently one by one.

One solution is to pack the data into a contiguous buffer, transmit that buffer, and then extract and unpack it from the receive buffer on the other side.

It is worth noting that the pack/unpack functions can sometimes be used in place of the system's buffering policy; they are also useful for building additional communication libraries layered on top of MPI.

Function Description:

int MPI_Pack(void* inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outcount, int *position, MPI_Comm comm)

void* inbuf : starting address of the input buffer;
int incount : number of input data items;
MPI_Datatype datatype : datatype of each data item;
void *outbuf : starting address of the output buffer;
int outcount : size of the output buffer, in bytes;
int *position : current position in the output buffer;
MPI_Comm comm : communicator;

Experiment Description:

Pack the data in the source process (process 0), send the packet to process 1, and have process 1 unpack the packet and print the data.

Output:

The number is 1 and 2

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs, source;
	MPI_Status status;
	int i, j, position;
	int buf[1000];

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    
    i = 1;
    j = 2;
    
    if(myid == 0) {
        
        position = 0 ;
        
    	// your code here
    	MPI_Pack(&i, 1, MPI_INT, buf, 1000, &position, MPI_COMM_WORLD); 
        MPI_Pack(&j, 1, MPI_INT, buf, 1000, &position, MPI_COMM_WORLD); 
    	// end of your code
    	
    	MPI_Send(buf, position, MPI_PACKED, 1, 99, MPI_COMM_WORLD); 
	}
	else if (myid == 1){
		/* receive the packed message as raw MPI_PACKED bytes, then unpack it;
		   buffer sizes for MPI_Recv and MPI_Unpack are given in bytes */
		MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 99, MPI_COMM_WORLD, &status);
		
		position = 0 ;
		
		MPI_Unpack(buf, sizeof(buf), &position, &i, 1, MPI_INT, MPI_COMM_WORLD);
		MPI_Unpack(buf, sizeof(buf), &position, &j, 1, MPI_INT, MPI_COMM_WORLD);
		
		printf("The number is %d and %d\n", i, j);
	}

	MPI_Finalize();
	return 0;
}
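
The example above packs two values of the same type. Since a common motivation for packing is to combine values of different types in one message, here is a minimal sketch (with illustrative values) that packs an int and a double together; note that the buffer sizes passed to MPI_Pack, MPI_Recv and MPI_Unpack are all given in bytes:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid;
	int n = 7;
	double x = 3.14;
	char buf[100];
	int position;
	MPI_Status status;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	if (myid == 0) {
		position = 0;
		MPI_Pack(&n, 1, MPI_INT,    buf, sizeof(buf), &position, MPI_COMM_WORLD);
		MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof(buf), &position, MPI_COMM_WORLD);
		MPI_Send(buf, position, MPI_PACKED, 1, 99, MPI_COMM_WORLD);
	}
	else if (myid == 1) {
		MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 99, MPI_COMM_WORLD, &status);
		position = 0;
		MPI_Unpack(buf, sizeof(buf), &position, &n, 1, MPI_INT,    MPI_COMM_WORLD);
		MPI_Unpack(buf, sizeof(buf), &position, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
		printf("The int is %d and the double is %f\n", n, x);
	}

	MPI_Finalize();
	return 0;
}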

10.

Unpacking data (Unpack)

Unpacking is the MPI operation corresponding to packing.

Note the difference between MPI_Recv and MPI_Unpack: in MPI_Recv, the count argument specifies the maximum number of items that can be received (the actual number is determined by the length of the incoming message), whereas in MPI_Unpack, the count argument specifies the actual number of items to be unpacked, and the "size" of the corresponding message is the increment in position. The reason for this difference is that the "size of the incoming message" cannot be determined in advance, since the user decides how much to unpack; nor is it easy to determine the "message size" from the number of items to be unpacked.

Function Description:

int MPI_Unpack(void* inbuf, int insize, int *position, void *outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm) 

void* inbuf : starting address of the input (packed) buffer;
int insize : size of the input buffer, in bytes;
int *position : current position in the input buffer;
void *outbuf : starting address of the output buffer;
int outcount : number of data items to unpack;
MPI_Datatype datatype : datatype of each data item;
MPI_Comm comm : communicator;

Experiment Description:

Pack the numbers in the source process (process 0) and send the packet to process 1; process 1 unpacks the packet and prints the numbers.

Output:

The number is 1 and 2

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, numprocs, source;
	MPI_Status status;
	int i, j, position;
	int buf[1000];

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    
    i = 1;
    j = 2;
    
    if(myid == 0) {
        
        position = 0 ;
        
    	MPI_Pack(&i, 1, MPI_INT, buf, 1000, &position, MPI_COMM_WORLD); 
        MPI_Pack(&j, 1, MPI_INT, buf, 1000, &position, MPI_COMM_WORLD); 
    	
    	MPI_Send(buf, position, MPI_PACKED, 1, 99, MPI_COMM_WORLD); 
	}
	else if (myid == 1){
		/* receive the packed message as raw MPI_PACKED bytes;
		   buffer sizes for MPI_Recv and MPI_Unpack are given in bytes */
		MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 99, MPI_COMM_WORLD, &status);
		
		position = 0 ;
		i = j = 0;
		
		// your code here
		MPI_Unpack(buf, sizeof(buf), &position, &i, 1, MPI_INT, MPI_COMM_WORLD);
		MPI_Unpack(buf, sizeof(buf), &position, &j, 1, MPI_INT, MPI_COMM_WORLD);
		// end of your code
		
		printf("The number is %d and %d\n", i, j);
	}

	MPI_Finalize();
	return 0;
}

Origin: blog.csdn.net/li123_123_/article/details/89057740