Checking whether several deep learning platforms on Windows can successfully invoke GPU acceleration

The prerequisite is that CUDA and cuDNN, as well as the corresponding deep learning framework, are installed correctly.

1、caffe

Caffe has a command-line subcommand, device_query, that displays information about a specified GPU.

For example, enter at the command line: caffe device_query -gpu 0
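If you prefer to run this check from Python, the same Caffe command can be invoked through the standard library. This is only a minimal sketch, assuming the caffe executable is on your PATH:

import subprocess

# Minimal sketch: run Caffe's device_query for GPU 0 (assumes "caffe" is on PATH)
result = subprocess.run(["caffe", "device_query", "-gpu", "0"],
                        capture_output=True, text=True)
print(result.stdout)
print("GPU query succeeded" if result.returncode == 0 else "GPU query failed")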

2、pytorch

Enter the following commands directly in a Python environment; if the output is True, PyTorch can use the GPU:

import torch
print(torch.cuda.is_available())
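For a slightly more detailed check, the sketch below also prints the device count and name and allocates a small tensor on the GPU (assuming the availability test passes):

import torch

if torch.cuda.is_available():
    # Report how many CUDA devices PyTorch can see and the name of the first one
    print("CUDA devices:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
    # Allocate a small tensor on the GPU to confirm it actually works
    x = torch.ones(3, 3, device="cuda")
    print(x.device)
else:
    print("No CUDA device available; PyTorch will run on the CPU")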


3、tensorflow

Enter the following commands in a Python session. If a GPU device appears among the listed devices, TensorFlow can use the GPU for acceleration; if only a CPU device is shown, the GPU cannot be used:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

If no GPU is available, the output will only list a CPU device.
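On TensorFlow 2.x there is also a shorter check through the public API. This is a sketch of that alternative, assuming a 2.x installation:

import tensorflow as tf

# List the physical GPUs TensorFlow can see (an empty list means no usable GPU)
gpus = tf.config.list_physical_devices("GPU")
print(gpus if gpus else "No GPU detected; TensorFlow will fall back to the CPU")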

------------------------------ Additional content ------------------------------

Finally, you can also check which processes are currently using the GPU with the following command:

nvidia-smi

If TensorFlow or PyTorch is using GPU acceleration, a python process will appear in the Processes section of the nvidia-smi output.

If Caffe is using the GPU, a caffe process will be shown there instead.
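To pull just the GPU process list from a script, nvidia-smi's query options can be used. This is a minimal sketch, assuming nvidia-smi is on your PATH and supports the pid and process_name query fields:

import subprocess

# Minimal sketch: list the PID and name of each compute process using the GPU
output = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name", "--format=csv"],
    capture_output=True, text=True,
).stdout
print(output)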


Source: blog.csdn.net/sinat_33486980/article/details/92806775