PyTorch: What is PyTorch? (Part 1)

I have been using TensorFlow all along and had only heard of PyTorch as a deep learning framework; I had read bits of code written in it, but never really written any myself. It feels like something I should learn, or I won't keep up with the younger crowd. This series mostly follows the examples from the official PyTorch tutorial: I run them myself and add a few comments. The content is pretty basic, so experienced PyTorch users can safely skip it.

# load the required packages
from __future__ import print_function
import torch
# check the version
print(torch.__version__)
1.1.0
# torch.empty allocates an uninitialized matrix, so the initial values are whatever happens to be in memory...
x = torch.empty(5, 3)
print(x)
 tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  9.1084e-44,  0.0000e+00],
        [-1.5327e+33,  3.0915e-41,  2.5906e-15]])
# randomly initialize a matrix, much like in TensorFlow
x = torch.rand(5, 3)
print(x)
tensor([[0.9989, 0.1704, 0.8226],
        [0.0776, 0.9894, 0.5031],
        [0.3808, 0.3658, 0.4930],
        [0.0226, 0.0428, 0.9701],
        [0.2032, 0.0144, 0.8480]])
# initialize to zeros, with dtype=long
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
# create a tensor directly from data
x = torch.tensor([5.5, 3])
print(x)
tensor([5.5000, 3.0000])
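torch.tensor copies the given data and infers the dtype from it; nested lists work too, and the dtype can be set explicitly (a quick sketch):
# construct from a nested list with an explicit dtype
m = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
print(m)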
# build new tensors from an existing one: new_ones reuses its properties (dtype, device) unless overridden, and randn_like takes over its size
x = x.new_ones(5, 3, dtype=torch.double)
print(x)

x = torch.randn_like(x, dtype=torch.float)
print(x)
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
tensor([[-1.2955, -0.5346, -0.3851],
        [ 0.0423,  0.7751, -0.8816],
        [-1.0434, -0.6452,  1.6098],
        [-0.7102, -1.1221, -0.5180],
        [-0.0641,  0.0103,  0.2643]])
# print the size
print(x.size())
torch.Size([5, 3])
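As a side note, torch.Size is actually a tuple, so it supports the usual tuple operations:
# torch.Size can be unpacked like a tuple
rows, cols = x.size()
print(rows, cols)   # 5 3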
# now two tensors, so we can do arithmetic (add, subtract, multiply, divide...)
y = torch.rand(5, 3)
print(x, y)
tensor([[-1.2955, -0.5346, -0.3851],
        [ 0.0423,  0.7751, -0.8816],
        [-1.0434, -0.6452,  1.6098],
        [-0.7102, -1.1221, -0.5180],
        [-0.0641,  0.0103,  0.2643]]) tensor([[0.0126, 0.4379, 0.6608],
        [0.6111, 0.3980, 0.6987],
        [0.5446, 0.8241, 0.3343],
        [0.0104, 0.6765, 0.3375],
        [0.5786, 0.2635, 0.2601]])
y.add_(x)
print(y)
tensor([[-1.2829, -0.0967,  0.2757],
        [ 0.6534,  1.1731, -0.1829],
        [-0.4988,  0.1789,  1.9440],
        [-0.6999, -0.4456, -0.1805],
        [ 0.5145,  0.2738,  0.5244]])
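The trailing underscore in add_ means the operation is done in place (y itself is modified). The same addition can also be written in a few equivalent ways, e.g.:
# equivalent ways of adding two tensors
print(x + y)
print(torch.add(x, y))
result = torch.empty(5, 3)
torch.add(x, y, out=result)  # write the result into a pre-allocated tensor
print(result)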
print(x[:,1])
tensor([-0.5346,  0.7751, -0.6452, -1.1221,  0.0103])
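That slice picks out the second column; the rest of NumPy-style indexing works the same way, for example:
# NumPy-style indexing and slicing on a 5x3 tensor
print(x[0])        # first row
print(x[2:4, :2])  # rows 2 and 3, first two columns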
# reshaping with view
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
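The -1 passed to view tells PyTorch to infer that dimension from the others; the total number of elements always has to match the original tensor. A quick sketch:
# 4*4 elements can be viewed as 2x2x4, but not as 3x5
w = x.view(2, 2, 4)
print(w.size())   # torch.Size([2, 2, 4])
# x.view(3, 5) would raise a RuntimeError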
# read the value out of a tensor directly; in (graph-mode) TensorFlow you would need session.run for this
x = torch.randn(1)
print(x)
print(x.item())
tensor([-1.0967])
-1.0967001914978027
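.item() only works on tensors with exactly one element; it is handy after a reduction, for example (a small sketch):
# .item() turns a one-element tensor into a plain Python number
s = torch.randn(2, 2).sum()
print(s.item())

# Torch tensor -> NumPy array: on the CPU the two share the same underlying memory,
# so an in-place change to one shows up in the other (as the next lines demonstrate)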
a = torch.ones(5)
print(a)
tensor([1., 1., 1., 1., 1.])
b = a.numpy()
print(b)
[1. 1. 1. 1. 1.]
a.add_(1)
print(a)
print(b)
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
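# NumPy array -> Torch tensor: from_numpy also shares memory with the source array,
# which is why the in-place np.add below is reflected in the tensor b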
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
# move data between GPU and CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
    y = torch.ones_like(x, device=device)
    x = x.to(device)
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))
tensor([-0.0967], device='cuda:0')
tensor([-0.0967], dtype=torch.float64)
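If no GPU is available, the block above is simply skipped. As a side note (a small sketch, not part of the original tutorial run): .to() also accepts a plain device string, which makes falling back to the CPU easy.
# .to() with a device string; falls back to CPU when CUDA is not available
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)
print(x.device)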

The basic operations don't look too hard so far; more to come as I keep learning...

Reference:
https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py
