PaddlePaddle (飞桨)
Official website: https://www.paddlepaddle.org.cn/
Official website introduction
Today's machines are becoming smarter and smarter thanks to the emergence of deep learning. As the most influential key technology in artificial intelligence, deep learning has shown great strength in fields such as image classification and speech recognition.
Something this remarkable must be very complicated to implement, right?
Indeed it is! However…
Now you can harness an open-source deep learning platform to do it!
- On an open-source deep learning platform, developers can assemble their own AI applications like building blocks, which greatly lowers the R&D barrier and improves efficiency.
- PaddlePaddle (飞桨) is a technology-leading, fully featured, industrial-grade open-source deep learning platform developed by Baidu.
- It integrates a deep learning core framework, basic model libraries, end-to-end development kits, tool components, and a service platform; it helps industries become intelligent and is committed to making deep learning easier to innovate with and apply.
- All of this is built on PaddlePaddle's four leading technologies.
- PaddlePaddle helps developers quickly turn AI ideas into working systems and quickly launch AI services, helping more and more industries complete AI empowerment and achieve intelligent industrial upgrading.
- As PaddlePaddle's empowerment of industry accelerates, it has been applied in scenarios ranging from small (intelligent peach-sorting machines, parts quality inspection) to large (urban planning, pest monitoring, autonomous driving, preventive healthcare), and in industries including manufacturing, agriculture, services, retail, telecommunications, real estate, healthcare, and the Internet.
User manual
paddle: PaddlePaddle's deep learning core framework
PaddlePaddle is an open-source deep learning framework developed by Baidu.
- It supports both dynamic-graph and static-graph modes,
- provides rich algorithm model libraries, end-to-end development kits, and tool components,
- and has ultra-large-scale parallel deep learning capabilities.
Installing and uninstalling paddlepaddle locally
Notice!!!
When a project is created on the Paddle platform, the matching version of paddlepaddle is installed automatically. This tutorial is for installing it manually on a local machine.
For details, see: PaddlePaddle Quick Installation
Install
To install the latest stable version of paddlepaddle, run one of the following commands:

```shell
# CPU:
# pip install paddlepaddle
# GPU:
pip install paddlepaddle-gpu
```
Check the currently installed version
Check the currently installed version of PaddlePaddle:
```python
import paddle
print(paddle.__version__)
```
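If you need to branch on the installed version in code, beware that comparing version strings lexically is a common pitfall (`'2.10.0' < '2.4.0'` as strings). A small helper, shown here on hard-coded sample strings so it runs without Paddle installed:

```python
def parse_version(v: str) -> tuple:
    """Turn a version string like '2.4.0' into a comparable integer tuple."""
    return tuple(int(part) for part in v.split('.')[:3])

# Plain string comparison gets this wrong; tuple comparison is correct.
print('2.10.0' > '2.4.0')                                 # False (lexicographic)
print(parse_version('2.10.0') > parse_version('2.4.0'))   # True
```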
Uninstall
To uninstall, run `pip uninstall paddlepaddle` (CPU build) or `pip uninstall paddlepaddle-gpu` (GPU build).
Start GPU training
Training with the GPU in Paddle requires that you:
- first install the GPU version of paddlepaddle;
- then specify the GPU device in your code, e.g. `paddle.device.set_device('gpu:0')`.
Specify the GPU
Use `paddle.device.get_device()` and `paddle.device.set_device()` to get and set the device.
```python
import paddle
print(paddle.device.get_device())
paddle.device.set_device('gpu:0')
print(paddle.device.get_device())
```
Notice!!! If the following error occurs:

```
ValueError: The device should not be 'gpu', since PaddlePaddle is not compiled with CUDA
```

check the CUDA version at this point.
Cause of the error:
- The message means that GPU computation was requested (for example, on a linear layer) but this PaddlePaddle build was not compiled with CUDA, so the GPU cannot be used. The error appears whenever GPU computing is not enabled in the installed PaddlePaddle and you try to train or run inference on the GPU.
- The installed `paddlepaddle` is the CPU version, which does not support GPU training.
Solution:
- First uninstall the CPU version of `paddlepaddle`,
- then install the GPU version, `paddlepaddle-gpu`.
- After the installation succeeds, use `paddle.device.set_device('gpu')` to select the GPU (locally, you need a physical GPU; for a project created on the Paddle platform, you need to choose GPU computing resources).
Creating a project on the Paddle platform
The following creates a Python 3.7 / PaddlePaddle 2.4.0 environment, using 0.5点/小时 (0.5 points/hour) computing resources:
- `nvcc` is the NVIDIA CUDA compiler, provided by NVIDIA for GPU parallel computing.
- When programming the GPU with CUDA, the CUDA code must be compiled with `nvcc` to produce a binary executable that can run on the GPU.
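To see which `nvcc` (if any) is available locally, you can query it from Python. A small sketch that degrades gracefully when no CUDA toolkit is on the PATH:

```python
import shutil
import subprocess

# Locate nvcc on PATH; shutil.which returns None when it is absent.
nvcc = shutil.which('nvcc')
if nvcc:
    # `nvcc --version` prints the CUDA compiler/toolkit version string.
    out = subprocess.run([nvcc, '--version'], capture_output=True, text=True)
    print(out.stdout)
else:
    print('nvcc not found: no CUDA toolkit on PATH')
```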
Comparison under PaddlePaddle 2.1.2
The following compares different computing resources in a Python 3.7 / PaddlePaddle 2.1.2 environment to see the differences.
- The following creates a Python 3.7 / PaddlePaddle 2.1.2 environment using 基础版 CPU (basic CPU, no GPU) computing resources:
- The following creates a Python 3.7 / PaddlePaddle 2.1.2 environment using 0.5点/小时 (0.5 points/hour) computing resources:
- The following creates a Python 3.7 / PaddlePaddle 2.1.2 environment using 1.0点/小时 (1.0 points/hour) computing resources:
Switching to paddlepaddle 2.4.0
Now change paddlepaddle 2.1.2 to paddlepaddle 2.4.0 and look at the differences again.
This is the same project; only the PaddlePaddle framework version is changed, and the environment is entered with the same 0.5点/小时 (0.5 points/hour) computing resources.
Note that a new line appears in the output: `Build cuda_11.2.r11.2/compiler.29618528_0`.
- This text describes the version of the CUDA Toolkit used in the build: `cuda_11.2.r11.2/compiler.29618528_0`.
- `cuda_11.2.r11.2` is the CUDA Toolkit version: `11.2` is the release number (major version 11, minor version 2), `r11.2` is the release tag, and `compiler.29618528_0` identifies the build of the compiler.
- Installing the proper version of the CUDA Toolkit is very important: it ensures that your code is compatible with the CUDA version on the machine it runs on, and that you get the best performance and functionality.
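The build string can also be picked apart programmatically. A toy parse of the exact string above (the field labels here are my own descriptions, not NVIDIA terminology):

```python
# nvcc build string from the output above.
build = 'cuda_11.2.r11.2/compiler.29618528_0'

toolkit, compiler_build = build.split('/')           # 'cuda_11.2.r11.2', 'compiler.29618528_0'
version = toolkit[len('cuda_'):toolkit.index('.r')]  # '11.2'
major, minor = (int(p) for p in version.split('.'))

print(major, minor)  # 11 2
```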
CUDA
CUDA (Compute Unified Device Architecture) is a parallel-computing platform and programming model developed by NVIDIA. It is built on the GPU (Graphics Processing Unit) and uses the GPU's powerful parallel computing capability to accelerate compute-intensive applications in fields such as scientific computing, machine learning, deep learning, computer vision, and natural language processing.
The CUDA platform provides a set of hardware and software tools that help developers write GPU-accelerated applications in programming languages such as C/C++ and Python. Its core is the CUDA Toolkit, which includes the CUDA compiler, standard math libraries, a debugger, and a performance analyzer to help developers build efficient parallel applications.
CUDA is especially suitable for applications with massive data parallelism, such as matrix multiplication, convolutional neural network (CNN), recurrent neural network (RNN), etc. Due to the high parallel processing capability and memory bandwidth of GPU, compared with traditional CPU computing, using CUDA can significantly improve the performance of these applications and shorten the running time, providing important support for scientific and engineering computing and other fields.
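To make the "massive data parallelism" concrete: in a matrix multiply, every output element depends only on one row of A and one column of B, so all elements can be computed independently. A plain-Python illustration of that structure (on a GPU, CUDA would compute each element on its own thread):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Each C[i][j] is an independent dot product of row i of A and column j
# of B. There are no dependencies between output elements, which is what
# makes the computation embarrassingly parallel on a GPU.
C = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

print(C)  # [[19, 22], [43, 50]]
```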