MPI_Scatterv Example

Copyright notice: writing this up took effort; likes are welcome. https://blog.csdn.net/weixin_40255793/article/details/84336181

Following the man page for MPI_Scatterv, I put together a demo program.

The effect is that the data in one array is scattered to 4 processes; each process receives a chunk of a different length, and adjacent chunks overlap:

[mindle@master shared_folder]$ mpicc MPIScatterv_demo.c
[mindle@master shared_folder]$ mpirun -n 4 ./a.out
Sum data at rank 0 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
Scattered data at rank 0 = {1, 2, 3}
Scattered data at rank 1 = {3, 4, 5, 6}
Scattered data at rank 2 = {6, 7, 8, 9, 10}
Scattered data at rank 3 = {10, 11, 12, 13, 14, 15}
The prototype of MPI_Scatterv:

int MPI_Scatterv(const void *sendbuf, const int *sendcounts, const int *displs,
                 MPI_Datatype sendtype, void *recvbuf, int recvcount,
                 MPI_Datatype recvtype,
                 int root, MPI_Comm comm)
  • sendbuf: address of send buffer (choice, significant only at root)
  • sendcounts: integer array (of length group size) specifying the number of elements to send to each processor
  • displs: integer array (of length group size); entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i
  • sendtype: data type of send buffer elements (handle)
  • recvbuf: address of receive buffer (choice)
  • recvcount: number of elements in receive buffer (integer)
  • recvtype: data type of receive buffer elements (handle)
  • root: rank of sending process (integer)
  • comm: communicator (handle)
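
With 4 processes, the demo below sets localnum = rank + 3 on each rank and makes consecutive chunks overlap by one element, so the arrays the root passes to MPI_Scatterv work out as follows (values match the output above):

  rank | sendcounts[i] | displs[i] | elements sent: sgn[displs[i]] .. sgn[displs[i] + sendcounts[i] - 1]
    0  |       3       |     0     | {1, 2, 3}
    1  |       4       |     2     | {3, 4, 5, 6}
    2  |       5       |     5     | {6, 7, 8, 9, 10}
    3  |       6       |     9     | {10, 11, 12, 13, 14, 15}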

Demo program (MPIScatterv_demo.c):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define MASTER 0

int main(void) {

	MPI_Init(NULL, NULL);

	int world_size, world_rank;
	MPI_Comm_size(MPI_COMM_WORLD, &world_size);
	MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

	int sumnum = 15;
	// Allocated on every rank for simplicity; only the root actually uses it.
	short *sgn = (short *)malloc(sumnum * sizeof(short));

	if (world_rank == MASTER) {
		printf("Sum data at rank %d = {", world_rank);
		for (int i = 0; i < sumnum; i++) {
			sgn[i] = i + 1;
			if (i == sumnum - 1) {
				printf("%d}\n", sgn[i]);
			} else {
				printf("%d, ", sgn[i]);
			}
		}
	}

	const int localnum = world_rank + 3; // rank i receives i + 3 elements
	int *localnums = NULL, *offsets = NULL;

	short *local_sgn = (short *)malloc(localnum * sizeof(short));

	if (world_rank == MASTER) {
		localnums = (int *)malloc(world_size * sizeof(int));
		// One extra entry so the loop below can also record
		// the last covered index in offsets[world_size].
		offsets = (int *)malloc((world_size + 1) * sizeof(int));
	}
	// everyone contributes their info
	MPI_Gather(&localnum, 1, MPI_INT, localnums, 1, MPI_INT, MASTER, MPI_COMM_WORLD);
	// the root constructs the offsets array
	if (world_rank == MASTER) {
		offsets[0] = 0;
		for (int i = 0; i < world_size; i++) {
			// Subtract 1 so that consecutive chunks overlap by one
			// element; offsets[world_size] ends up as the last index covered.
			offsets[i + 1] = offsets[i] + localnums[i] - 1;
		}

		if ((sumnum - 1) != offsets[world_size]) {
			fprintf(stderr, "The last array index is %d, but the "
				"scattered chunks only reach index %d.\n",
				sumnum - 1, offsets[world_size]);
		}
	}
	// everyone contributes their data
	// sendbuf, sendcounts and displs are significant only at the root
	MPI_Scatterv(sgn, localnums, offsets, MPI_SHORT,
		local_sgn, localnum, MPI_SHORT,
		MASTER, MPI_COMM_WORLD);
	free(localnums); // NULL on non-root ranks; free(NULL) is a no-op
	free(offsets);
	free(sgn);

	printf("Scattered data at rank %d = {", world_rank);
	for (int i = 0; i < localnum; i++) {
		if (i == localnum - 1) {
			printf("%d}\n", local_sgn[i]);
		} else {
			printf("%d, ", local_sgn[i]);
		}
	}

	free(local_sgn);
	MPI_Finalize();
	return 0;
}
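
For contrast, here is a minimal sketch (not part of the original demo) of the more common non-overlapping use of MPI_Scatterv, where each displacement is simply the running sum of the counts; all names here are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(void) {
	MPI_Init(NULL, NULL);

	int size, rank;
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	int *counts = NULL, *displs = NULL;
	short *data = NULL;
	int total = 0;

	if (rank == 0) {
		counts = (int *)malloc(size * sizeof(int));
		displs = (int *)malloc(size * sizeof(int));
		for (int i = 0; i < size; i++) {
			counts[i] = i + 1;   // rank i gets i + 1 elements
			displs[i] = total;   // running sum of counts, no overlap
			total += counts[i];
		}
		data = (short *)malloc(total * sizeof(short));
		for (int i = 0; i < total; i++)
			data[i] = (short)(i + 1);
	}

	int mycount = rank + 1; // must agree with counts[rank] at the root
	short *mydata = (short *)malloc(mycount * sizeof(short));

	MPI_Scatterv(data, counts, displs, MPI_SHORT,
		mydata, mycount, MPI_SHORT, 0, MPI_COMM_WORLD);

	printf("rank %d got %d element(s), first = %d\n",
		rank, mycount, mydata[0]);

	free(mydata); free(data); free(counts); free(displs);
	MPI_Finalize();
	return 0;
}

With 4 processes this sends {1}, {2, 3}, {4, 5, 6} and {7, 8, 9, 10} to ranks 0 through 3, respectively.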
