Parallel computing using MPI libraries

Author: Zen and the Art of Computer Programming

1. Introduction

What is parallel computing?

Parallel computing refers to a computing model in which two or more tasks execute simultaneously. It is realized in two main ways: as a distributed system (a cluster) or as a parallel processor (a multi-core CPU). A distributed system connects computer nodes over a network so that multiple machines jointly process one task, while a parallel processor runs multiple threads or processes at the same time on the separate cores of a single machine. Generally speaking, parallel computing increases computing speed, reduces latency, and saves cost.

What is MPI (Message Passing Interface)?

MPI (Message Passing Interface) is a standard programming interface designed for distributed-memory parallel computing. It specifies a complete library of functions, covering point-to-point communication, collective operations such as broadcast, synchronization (including exclusive locks for one-sided communication), timers, and more, with which users can build complex parallel programs. MPI has become one of the most popular parallel programming models. However, using MPI correctly is not easy, so understanding the following concepts is crucial to mastering parallel programming.
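To make this concrete, here is a minimal sketch of those facilities in C: a broadcast collective, point-to-point send/receive, and the MPI_Wtime wall-clock timer. The file name, process count, and the value 42 are illustrative choices, not part of the MPI standard itself.

```c
/* A minimal sketch of the MPI features mentioned above: a collective
 * broadcast, point-to-point communication, and the wall-clock timer.
 * Compile with: mpicc hello_mpi.c -o hello_mpi
 * Run with:     mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    double start = MPI_Wtime();            /* built-in wall-clock timer */

    /* Collective: rank 0 broadcasts a value to every process. */
    int value = (rank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Point-to-point: each nonzero rank sends its rank back to rank 0. */
    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        for (int src = 1; src < size; src++) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank %d\n", msg, src);
        }
    }

    printf("rank %d of %d: value=%d, elapsed=%.6f s\n",
           rank, size, value, MPI_Wtime() - start);

    MPI_Finalize();
    return 0;
}
```

After the broadcast, every rank holds value=42; rank 0 additionally collects one message from each of the other ranks.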

2. Explanation of basic concepts and terms

1. Process

A process is the basic execution unit in an operating system: an instance of a program that is currently running. The operating system is responsible for allocating resources (memory, file descriptors, CPU time) to processes and scheduling them onto the available processors.
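In MPI, each rank launched by mpirun is a separate operating-system process. The short sketch below (assuming a POSIX system for getpid) makes that visible by printing every rank's process ID:

```c
/* Each MPI rank is a distinct OS process with its own PID.
 * Compile: mpicc ranks_are_processes.c -o ranks_are_processes
 * Run:     mpirun -np 4 ./ranks_are_processes
 */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* getpid(), POSIX only */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank reports its own PID: mpirun started a separate process
     * for every rank, and the OS schedules each one independently. */
    printf("MPI rank %d is OS process %d\n", rank, (int)getpid());

    MPI_Finalize();
    return 0;
}
```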

2. Concurrency

Concurrency is when two or more tasks make progress during overlapping time periods. The tasks need not run at the same instant: on a single core, the operating system can interleave them so that each advances in turn. Parallelism is the stronger case in which the tasks literally execute at the same moment on different processors.
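Concurrency also shows up inside a single MPI process through nonblocking operations. The sketch below (a toy example; the loop is a stand-in for real work) posts an MPI_Irecv, does local computation while the message may still be in flight, and only then calls MPI_Wait:

```c
/* Concurrency within one MPI process: a nonblocking receive is posted,
 * useful work proceeds while the message is (potentially) in transit,
 * and MPI_Wait completes the overlap.
 * Run with two processes: mpirun -np 2 ./overlap
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 7;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Request req;
        /* Post the receive but do not wait for it yet. */
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        /* Concurrent "useful work": this runs while the message may
         * still be on its way. */
        long sum = 0;
        for (long i = 0; i < 1000000; i++) sum += i;

        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the receive */
        printf("rank 1: msg=%d, local work sum=%ld\n", msg, sum);
    }

    MPI_Finalize();
    return 0;
}
```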

