PyTorch Basics: Tensor Module Data Types, Basic Operations, and Working with NumPy Arrays Explained (with source code, simple and comprehensive)

If you need the source code files, please like, follow, and save the post, then leave a comment or send a private message~~~

I. The Tensor Module

A tensor (Tensor) is the most basic object PyTorch operates on: a multidimensional array with a uniform data type. Scalars, vectors, and matrices are all familiar, but when we want to describe higher-dimensional data they fall short, which is why tensors were introduced.

In the geometric definition, a tensor extends the concepts of scalar, vector, and matrix. Informally, a scalar can be viewed as a 0-dimensional tensor, a vector as a 1-dimensional tensor, and a matrix as a 2-dimensional tensor. In deep learning you can picture a tensor as a bucket of data: a single drop of water in the bucket is a 0-dimensional tensor, a row of drops is a 1-dimensional tensor, drops arranged into a plane form a 2-dimensional tensor, and so on up to n-dimensional tensors.

Unlike Python numbers and strings, PyTorch tensors are mutable: in-place operations (methods ending in an underscore, such as add_()) modify a tensor's contents directly, while their out-of-place counterparts return a new tensor.
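A minimal sketch of this difference (the tensor name t is only illustrative):

import torch
t = torch.ones(3)
u = t.add(1)  # out-of-place: returns a new tensor, t is unchanged
t.add_(1)     # in-place: modifies t itself
print(t, u)   # tensor([2., 2., 2.]) tensor([2., 2., 2.])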

1: Tensor data types

Creating tensors

Generating random numbers

torch.rand(): generates random numbers from a uniform distribution on [0, 1);

torch.randn(): generates random numbers from a standard normal distribution;

torch.normal(): generates random numbers from a normal distribution with the given mean and standard deviation;

torch.linspace(): generates evenly spaced values over a specified interval (deterministic, not random);

torch.manual_seed(): fixes the random seed so that the same random numbers can be reproduced;


torch.ones(), torch.zeros(), torch.eye(): create all-ones, all-zeros, and identity tensors.

The code for the basic operations above follows.

import torch
# Build a tensor from a Python list
a = [[1, 2, 3], [4, 5, 6]]
x = torch.Tensor(a)
print(x)
# Output:
# tensor([[1., 2., 3.],
#         [4., 5., 6.]])

# Build a tensor from a list literal
x = torch.Tensor([[1, 2]])
print(x)
# Output:
# tensor([[1., 2.]])
tensor1 = torch.rand(4)
tensor2 = torch.rand(2, 3)
print(tensor1, tensor2)
# tensor([0.7638, 0.3919, 0.9474, 0.6846]) 
# tensor([[0.3425, 0.0689, 0.6304],
#         [0.5676, 0.8049, 0.3459]])
tensor1 = torch.randn(5)
tensor2 = torch.randn(2, 4)
print(tensor1, tensor2)
# tensor([ 0.4315, -0.3812, 0.9554, -0.8051, -0.9421]) 
# tensor([[-0.6991, 0.0359, 1.2298, -0.1711],
#         [ 1.0056, 0.5772, 1.4460, -0.5936]])
tensor = torch.normal(mean=torch.arange(1., 11.), std= torch.arange(1, 0, -0.1))
print(tensor)
# tensor([0.0605, 2.5965, 3.3046, 4.2056, 5.0117, 6.7848, 6.3024, 7.9845, 9.4306, 9.7881])
# torch.arange(1, 0, -0.1) produces the decreasing std values used above
tensor = torch.normal(mean=0.5, std=torch.arange(1., 6.))
print(tensor)
# tensor([-0.0757, -0.5302, -1.1334, -4.3958, -5.8655])
tensor = torch.normal(mean=torch.arange(1., 6.), std=1.0)
print(tensor)
# tensor([1.6546, 2.7788, 2.4560, 3.2527, 4.1715])
tensor = torch.normal(2, 3, size=(1, 4))
print(tensor)
# tensor([[ 4.7555, -2.5026, -1.6333, -0.9256]])
tensor = torch.linspace(1, 10, steps=5)
print(tensor)
# tensor([ 1.0000,  3.2500,  5.5000,  7.7500, 10.0000])

torch.manual_seed(1)
temp1 = torch.rand(5)
print(temp1)  # tensor([0.7576, 0.2793, 0.4031, 0.7347, 0.0293])
torch.manual_seed(1)
temp2 = torch.rand(5)
print(temp2)  # tensor([0.7576, 0.2793, 0.4031, 0.7347, 0.0293])
temp3 = torch.rand(5)
print(temp3)  # tensor([0.7999, 0.3971, 0.7544, 0.5695, 0.4388])
tensor1 = torch.zeros(2, 3)
tensor2 = torch.ones(2, 3)
tensor3 = torch.eye(3)
print(tensor1, tensor2, tensor3)
# tensor([[0., 0., 0.],
#         [0., 0., 0.]]) 
#         tensor([[1., 1., 1.],
#         [1., 1., 1.]]) 
#         tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]])
# Method 1: specify the dtype when creating the tensor
x = torch.ones((2, 3, 4), dtype=torch.int64)  # create an all-ones tensor
print(x)
# Output:
# tensor([[[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]],
# 
#         [[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]]])

# Method 2: convert the dtype after the tensor has been created
x = torch.ones(2, 3, 4)  # create an all-ones tensor (float32 by default)
x = x.type(torch.int64)
print(x)
# Output:
# tensor([[[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]],
# 
#         [[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]]])
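Another common option (a minimal sketch, not from the original post) is Tensor.to(), which also accepts a target dtype; the dtype attribute shows the current type:

x = torch.ones(2, 3, 4)  # default dtype is torch.float32
print(x.dtype)           # torch.float32
y = x.to(torch.int64)    # convert with .to()
print(y.dtype)           # torch.int64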

2: Basic tensor operations

Changing a tensor's shape

x = torch.rand(3, 2)
print(x.shape)  # torch.Size([3, 2])
y = x.view(2, 3)
print(y.shape)  # torch.Size([2, 3])
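As a small supplementary sketch (variable names are illustrative): view() requires the total number of elements to stay the same, and a -1 lets PyTorch infer one dimension; reshape() behaves the same way but also handles non-contiguous tensors by copying when necessary.

z = x.view(-1)        # flatten to 1-D, shape torch.Size([6])
w = x.reshape(2, -1)  # -1 is inferred as 3, shape torch.Size([2, 3])
print(z.shape, w.shape)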

Adding and removing dimensions

# Add a dimension
a = torch.rand(3, 4)
b = torch.unsqueeze(a, 0)
c = a.unsqueeze(0)
print(b.shape)  # torch.Size([1, 3, 4])
print(c.shape)  # torch.Size([1, 3, 4])

# Remove dimensions of size 1
a = torch.rand(1, 1, 3, 4)
b = torch.squeeze(a)  # removes all dimensions of size 1
c = a.squeeze(1)      # removes only dimension 1
print(b.shape)  # torch.Size([3, 4])
print(c.shape)  # torch.Size([1, 3, 4])

Swapping dimensions
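The original post has no code for this part, so here is a minimal sketch (variable names are illustrative) using torch.transpose() and Tensor.permute(), the two standard ways to reorder dimensions.

a = torch.rand(2, 3, 4)
b = torch.transpose(a, 0, 1)  # swap dimensions 0 and 1
c = a.permute(2, 0, 1)        # reorder all dimensions at once
print(b.shape)  # torch.Size([3, 2, 4])
print(c.shape)  # torch.Size([4, 2, 3])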

Concatenation and splitting

# Concatenation with torch.cat() (a torch.split() sketch follows below):
a = torch.rand(1, 2)
b = torch.rand(1, 2)
c = torch.rand(1, 2)
output1 = torch.cat([a, b, c], dim=0)  # dim=0: concatenate along dimension 0 (rows)
print(output1.shape)  # torch.Size([3, 2])
output2 = torch.cat([a, b, c], dim=1)  # dim=1: concatenate along dimension 1 (columns)
print(output2.shape)  # torch.Size([1, 6])
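The heading also promises splitting, which the original code does not show; a minimal torch.split() sketch (tensor names are illustrative):

a = torch.rand(4, 6)
parts = torch.split(a, 2, dim=0)          # chunks of 2 rows each
print([p.shape for p in parts])           # [torch.Size([2, 6]), torch.Size([2, 6])]
parts = torch.split(a, [1, 2, 3], dim=1)  # or explicit section sizes
print([p.shape for p in parts])           # [torch.Size([4, 1]), torch.Size([4, 2]), torch.Size([4, 3])]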

Stacking and chunking

# Stacking with torch.stack():
a = torch.rand(1, 2)
b = torch.rand(1, 2)
c = torch.rand(1, 2)
output1 = torch.stack([a, b, c], dim=0)  # dim=0: stack along a new dimension 0
print(output1.shape)  # torch.Size([3, 1, 2])
output2 = torch.stack([a, b, c], dim=1)  # dim=1: stack along a new dimension 1
print(output2.shape)  # torch.Size([1, 3, 2])
# Chunking with torch.chunk():
a = torch.rand(3, 4)
output1 = torch.chunk(a, 2, dim=0)
print(output1)
# (tensor([[0.1943, 0.1760, 0.3022, 0.0746],
#         [0.5819, 0.7897, 0.2581, 0.0709]]), tensor([[0.2137, 0.5694, 0.1406, 0.0052]]))

output2 = torch.chunk(a, 2, dim=1)
print(output2)
# (tensor([[0.1943, 0.1760],
#         [0.5819, 0.7897],
#         [0.2137, 0.5694]]), tensor([[0.3022, 0.0746],
#         [0.2581, 0.0709],
#         [0.1406, 0.0052]]))

Indexing and slicing

x = torch.rand(2, 3, 4)
print(x[1].shape)  # torch.Size([3, 4])

y = x[1, 0:2, :]
print(y.shape)  # torch.Size([2, 4])

z = x[:, 0, ::2]
print(z.shape)  # torch.Size([2, 2])
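Boolean masks can also be used as indices; a minimal sketch (names are illustrative):

mask = x > 0.5      # boolean tensor with the same shape as x
selected = x[mask]  # 1-D tensor of the elements where the mask is True
print(selected.shape)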

Basic mathematical operations are shown below.

Element-wise sum, index-based sum, element-wise product, mean, variance, maximum, and minimum.

# Summing all elements, method 1
a = torch.rand(4, 3)
b = torch.sum(a)
print(b)  # tensor(6.4069)
# Summing along a dimension, method 2
a = torch.rand(4, 3)
b = torch.sum(a, dim=1, keepdim=True)
print(b, b.shape)
# tensor([[0.6594],
#         [1.5325],
#         [1.5375],
#         [1.7755]]) torch.Size([4, 1])
# Index-based sum with index_add_(), less commonly used
x = torch.Tensor([[1, 2],[3, 4]])
y = torch.Tensor([[3, 4],[5, 6]])
index = torch.LongTensor([0, 1])
output = x.index_add_(0, index, y)
print(output)
# tensor([[ 4.,  6.],
#         [ 8., 10.]])
# Product of elements, method 1
a = torch.rand(4, 3)
b = torch.prod(a)
print(b)  # tensor(2.0311e-05)
# Product of elements, method 2
a = torch.rand(4, 3)
b = torch.prod(a, dim=1, keepdim=True)
print(b, b.shape)
# tensor([[0.0194],
#         [0.1845],
#         [0.0336],
#         [0.4879]]) torch.Size([4, 1])
# Mean, method 1
a = torch.rand(4, 3)
b = torch.mean(a)
print(b)  # tensor(0.4836)
# Mean, method 2
a = torch.rand(4, 3)
b = torch.mean(a, dim=1, keepdim=True)
print(b, b.shape)
# tensor([[0.6966],
#         [0.6087],
#         [0.3842],
#         [0.1749]]) torch.Size([4, 1])
# Variance, method 1
a = torch.rand(4, 3)
b = torch.var(a)
print(b)  # tensor(0.0740)
# Variance, method 2
a = torch.rand(4, 3)
b = torch.var(a, dim=1, keepdim=True)
print(b, b.shape)
# tensor([[0.1155],
#         [0.0874],
#         [0.0354],
#         [0.0005]]) torch.Size([4, 1])
# Maximum, method 1
a = torch.rand(4, 3)
b = torch.max(a)
print(b)  # tensor(0.8765)
# Maximum, method 2
a = torch.rand(4, 3)
b = torch.max(a, dim=1, keepdim=True)
print(b)
# torch.return_types.max(
# values = tensor([[0.9875],
#          [0.6657],
#          [0.9412],
#          [0.7775]]),
# indices = tensor([[2],
#           [0],
#           [0],
#           [1]]))
# Minimum, method 1
a = torch.rand(4, 3)
b = torch.min(a)
print(b)  # tensor(0.0397)
# Minimum, method 2
a = torch.rand(4, 3)
b = torch.min(a, dim=1, keepdim=True)
print(b)
# torch.return_types.min(
# values = tensor([[0.0436],
#           [0.1586],
#           [0.4904],
#           [0.2536]]),
# indices = tensor([[0],
#           [1],
#           [2],
#           [1]]))
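When only the positions of the extrema are needed, torch.argmax() and torch.argmin() return just the indices; a minimal sketch (names are illustrative):

a = torch.rand(4, 3)
print(torch.argmax(a, dim=1))  # index of the maximum in each row, shape torch.Size([4])
print(torch.argmin(a, dim=1))  # index of the minimum in each row, shape torch.Size([4])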

Vector and matrix operations

These include the vector dot product, the element-wise (Hadamard) product, the inner product, and matrix multiplication; a note on true cross and outer products follows the code.

# Vector dot product; a and b must be 1-D
a = torch.Tensor([1, 2, 3])
b = torch.Tensor([1, 1, 1])
output = torch.dot(a, b)
print(output)  # equivalent to 1*1 + 2*1 + 3*1, tensor(6.)
# Element-wise (Hadamard) product of vectors (not a true cross product)
a = torch.Tensor([1, 2, 3])
b = torch.Tensor([1, 1, 1])
output = torch.multiply(a, b)
print(output)  # tensor([1., 2., 3.])
# Inner product
a = torch.Tensor([1, 2, 3])
b = torch.Tensor([1, 1, 1])
output = torch.inner(a, b)
print(output)  # tensor(6.)
# Matrix multiplication (the original calls this the matrix "outer product", but matmul() is ordinary matrix multiplication)
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
b = torch.Tensor([[1, 1], [2, 2], [3, 3]])
output = torch.matmul(a, b)
print(output)
# tensor([[14., 14.],
#         [32., 32.]])
# Batched matrix multiplication
a = torch.randn(10, 3, 4)
b = torch.randn(10, 4, 5)
output = torch.bmm(a, b)
print(output.shape)
# torch.Size([10, 3, 5])
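For a true 3-D cross product and a true outer product (which the code above approximates with multiply() and matmul()), recent PyTorch versions provide torch.linalg.cross() and torch.outer(); a minimal sketch:

a = torch.Tensor([1, 0, 0])
b = torch.Tensor([0, 1, 0])
print(torch.linalg.cross(a, b))  # tensor([0., 0., 1.]), the 3-D cross product
print(torch.outer(a, b))         # 3x3 outer-product matrix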

3: Tensors and NumPy arrays

Because NumPy's ndarray is so convenient for processing data, tensors and NumPy arrays are frequently converted back and forth, so it is worth mastering both conversion methods.

Tensor to NumPy array: tensor.numpy()

NumPy array to tensor: torch.from_numpy()

Tensor to NumPy array

a = torch.ones(1, 2)
b = a.numpy()  # convert
print(a, b)  # tensor([[1., 1.]]) [[1. 1.]]

a += 2
print(a, b)  # tensor([[3., 3.]]) [[3. 3.]] (b changes along with a: they share memory)
b += 2
print(a, b)  # tensor([[5., 5.]]) [[5. 5.]] (a changes along with b as well)

NumPy array to tensor

import numpy as np
a = np.ones([1, 2])
b = torch.from_numpy(a)  # convert
print(a, b)  # [[1. 1.]] tensor([[1., 1.]], dtype=torch.float64)

a += 2
print(a, b)  # [[3. 3.]] tensor([[3., 3.]], dtype=torch.float64) (b changes along with a: they share memory)
b += 2
print(a, b)  # [[5. 5.]] tensor([[5., 5.]], dtype=torch.float64) (a changes along with b as well)
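Both conversion directions share the underlying memory, which is why the in-place updates above change both objects. When an independent copy is needed, a minimal sketch (names are illustrative):

a = np.ones([1, 2])
b = torch.tensor(a)  # torch.tensor() copies the data instead of sharing memory
a += 2
print(a, b)  # a changes, b keeps its original values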

4: CUDA tensors and CPU tensors

GPUs can significantly accelerate deep learning workloads. PyTorch stores tensors on the CPU by default; if a GPU is available, a tensor can be moved onto it.

x = torch.rand(2, 4)
print(x.device)  # cpu

# Method 1 (raises an error if no GPU is available)
x = x.cuda()
print(x.device)  # cuda:0

# Method 2: choose the device dynamically
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.is_available():
    x = x.to(device)
    print(x.device)  # cuda:0

# Move back to the CPU
x = x.cpu()
print(x.device)  # cpu
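Tensors can also be created directly on the target device instead of being moved afterwards; most factory functions accept a device argument. A minimal sketch:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
y = torch.rand(2, 4, device=device)  # allocated on the GPU when one is available
print(y.device)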

Writing these posts takes effort; if you found this helpful, please like, follow, and save~~~


Reposted from blog.csdn.net/jiebaoshayebuhui/article/details/130438133