Checking whether the PyTorch, TensorFlow, Paddle, and MXNet deep learning frameworks correctly support GPU computation

1. PyTorch framework

import torch

# Check whether CUDA is available
a = torch.cuda.is_available()
print(a)

ngpu = 1
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
print(device)
# Name of the first GPU, e.g. "GeForce RTX 2080 Ti"
print(torch.cuda.get_device_name(0))
# Create a random 3x3 tensor and move it to the GPU
print(torch.rand(3, 3).cuda())

# Output

## Note: the matrix is randomly generated, so the values will differ

True
cuda:0
GeForce RTX 2080 Ti
tensor([[0.0074, 0.5685, 0.0860],
        [0.5375, 0.0253, 0.9158],
        [0.2480, 0.4524, 0.1458]], device='cuda:0')
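
Once the device object exists, the usual workflow is to move tensors and models onto it with .to(device) rather than calling .cuda() directly. The snippet below is a minimal sketch of that pattern; the tensor and layer sizes are arbitrary placeholders.

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Move an input tensor and a small model to the chosen device
x = torch.rand(4, 8).to(device)     # arbitrary 4x8 input
model = nn.Linear(8, 2).to(device)  # arbitrary layer sizes
y = model(x)

print(y.device)  # cuda:0 when a GPU is available, otherwise cpu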

2. TensorFlow framework

import tensorflow as tf

# Check whether TensorFlow can see a usable GPU
print('GPU', tf.test.is_gpu_available())

a = tf.constant(2.)
b = tf.constant(4.)

# With a GPU present, the multiplication is placed on the GPU by default
print(a * b)
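
Note that tf.test.is_gpu_available() is deprecated in TensorFlow 2.x. A sketch of the replacement check, assuming TF 2.1 or later:

import tensorflow as tf

# List the physical GPUs TensorFlow can see (TF 2.x API)
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# Optionally pin a computation to the first GPU explicitly
if gpus:
    with tf.device('/GPU:0'):
        print(tf.constant(2.) * tf.constant(4.))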

3. Baidu Paddle framework

import paddle.fluid

# Built-in check: runs a small program and reports whether Paddle
# is installed correctly and can use the GPU
paddle.fluid.install_check.run_check()
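
The fluid API above belongs to Paddle 1.x. On Paddle 2.x the equivalent check looks like the following sketch, using the 2.x entry points paddle.utils.run_check() and paddle.device.get_device():

import paddle

# Paddle 2.x installation check: verifies the install and GPU support
paddle.utils.run_check()

# Reports the active device, e.g. "gpu:0" or "cpu"
print(paddle.device.get_device())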

4. Apache MXNet framework

# Test whether MXNet can use the GPU
from mxnet import nd
import mxnet as mx

# Creating an NDArray in the GPU context fails if no GPU is usable
a = nd.array([1, 2, 3], ctx=mx.gpu())

print(a)
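
A slightly fuller check, as a sketch, counts the visible GPUs first so the script also runs on CPU-only machines (mx.context.num_gpus() is available in MXNet 1.x):

import mxnet as mx
from mxnet import nd

# Number of GPUs MXNet can see; 0 on a CPU-only machine
n = mx.context.num_gpus()
ctx = mx.gpu() if n > 0 else mx.cpu()

# Run a small computation in the chosen context
x = nd.ones((2, 2), ctx=ctx)
print((x * 2).asnumpy(), 'on', ctx)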

Reposted from blog.csdn.net/qq_38463737/article/details/114228574