Want to understand parallel computing frameworks but don't know where to start?

Some notes on parallel computing frameworks and a suggested learning path

An introduction to the main approaches in parallel computing

Parallel computing is mainly divided into two big directions: CPU parallelism on the host side and GPU parallelism on the device side.
CPU-side parallelism mainly means OpenMP and MPI.
The mainstream GPU-side approach is NVIDIA's CUDA architecture. GPU acceleration is relatively mature by now: many TOP500 supercomputers use GPU accelerator cards in large numbers, including our own Tianhe-2 supercomputer, which previously used Intel Xeon Phi accelerators.

1. OpenMP

1. OpenMP is a relatively easy-to-use way to parallelize code (though you do need to enable OpenMP support in your compiler), and it supports three programming languages: C, C++, and Fortran.

2. There are plenty of learning materials for OpenMP in C. I personally learned the basic syntax on the Supercomputing Workshop platform: the most commonly used features, such as parallelizing loops and controlling the number of threads, plus some things I rarely used afterwards. The platform is completely open and free, and you can also learn other parallel frameworks such as MPI there (truly hand-holding, step-by-step teaching). A minimal sketch of the kind of loop parallelization it teaches is shown below.
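For illustration, here is my own minimal C sketch of a parallelized loop with a reduction; it is not taken from the course, and it assumes a compiler with OpenMP enabled (for example `gcc -fopenmp omp_demo.c`):

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    /* static keeps the large arrays off the stack */
    static double a[N], b[N], c[N];
    double sum = 0.0;

    /* Fill the input arrays serially. */
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
    }

    /* Split the loop iterations across threads; reduction(+:sum)
       gives each thread a private partial sum and adds them up
       at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
        sum += c[i];
    }

    printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}
```

Without the `#pragma` line this is still valid serial C, which is part of what makes OpenMP so easy to adopt.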

3. I only came into contact with OpenMP in Fortran recently, because my advisor's legacy code is written in Fortran (-_-), so I had no choice but to learn Fortran on my own. Fortunately, OpenMP differs very little between C and Fortran, and having a foundation in C OpenMP makes it much easier; a small Fortran sketch follows below.
Here I recommend a resource I found on Fcode, "Using OpenMP for Fortran95 Parallel Computing". There is a Chinese version linked at the bottom of the page; if your English is good enough, you can go straight to the English version.
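To show how close the two are, here is my own minimal Fortran sketch of the same kind of parallel loop (not from the Fcode tutorial; it assumes, for example, `gfortran -fopenmp omp_demo.f90`):

```fortran
program omp_demo
   use omp_lib
   implicit none
   integer, parameter :: n = 1000000
   integer :: i
   real(8), allocatable :: a(:), b(:), c(:)
   real(8) :: s

   allocate(a(n), b(n), c(n))

   ! Fill the input arrays serially.
   do i = 1, n
      a(i) = 0.5d0 * i
      b(i) = 2.0d0 * i
   end do

   s = 0.0d0
   ! Same idea as in C: the directive is just a comment line,
   ! so without OpenMP the code still compiles and runs serially.
   !$omp parallel do reduction(+:s)
   do i = 1, n
      c(i) = a(i) + b(i)
      s = s + c(i)
   end do
   !$omp end parallel do

   print *, 'sum =', s, '  max threads =', omp_get_max_threads()

   deallocate(a, b, c)
end program omp_demo
```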

2. MPI

I haven't learned MPI yet, so I'll leave this section as a placeholder for now and come back to fill it in later.

3. CUDA

[Figure: the programming languages supported by CUDA]

CUDA is the GPU computing framework launched by NVIDIA itself. It currently supports five programming languages (as shown in the figure above).
I am going to focus on CUDA, and I recommend the book CUDA Parallel Programming from the Machinery Industry Press, whose examples are described mainly in C.

1. I personally think CUDA C has the best applicability. I use VS 2019 to set up the CUDA C environment; the CUDA toolkit itself can be downloaded from the CUDA download page on NVIDIA's official website.
Note: install VS first and CUDA second, otherwise the CUDA installer is very likely to fail to integrate with Visual Studio and report an error.
A first pass over CUDA C can also be done on the Supercomputing Workshop platform mentioned above; a minimal kernel sketch is given below.
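As a taste of what CUDA C looks like, here is my own minimal vector-addition sketch (not from the recommended book; it assumes a working nvcc/VS 2019 setup, e.g. `nvcc vec_add.cu`):

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Kernel: each GPU thread adds one element of a and b. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    /* Host-side buffers. */
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    /* Device-side buffers. */
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    /* Copy inputs to the GPU, launch the kernel, copy the result back. */
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax and the `__global__` qualifier are the main additions CUDA C makes on top of ordinary C.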

2. For detailed steps on setting up a Fortran environment, you can refer to the article "Fortran environment setup".
I personally use the second method there, which again relies on VS 2019 (which I had already installed anyway a moment ago when setting up the CUDA C environment).
However, both the first and the second method share one very serious problem: they cannot use CUDA to parallelize Fortran code. The only solution I found, after searching many forums, is to use the PGI compiler; I did not see any other approach (which is really frustrating). On top of that, using the PGI compiler also requires a professional accelerator card under a Linux system, and I don't have a good way around that, so for the time being I parallelize the Fortran code on the CPU side with OpenMP.


Source: blog.csdn.net/weixin_46091928/article/details/112967227