I recently started learning PyTorch, so I am collecting some simple, commonly used tensor operations here for my own future reference; if you know a better way, feel free to leave a message.
- tensor initialization
First, define the tensor's size, denoted by size
# size can be multi-dimensional
size = (dim1, dim2, ...)
Common initialization methods
# a: an empty (uninitialized) tensor of the given size
a = torch.empty(size)
# b: a tensor of the given size, filled with 0
b = torch.zeros(size)
# b: an all-zero tensor with the same size as the input tensor
b = torch.zeros_like(input)
# c: a tensor of the given size, filled with 1
c = torch.ones(size)
# c: an all-one tensor with the same size as the input tensor
c = torch.ones_like(input)
# d: a tensor of the given size, filled with uniform random numbers from [0, 1)
d = torch.rand(size)
# d: a tensor of the same size as the input tensor, filled with uniform random numbers from [0, 1)
d = torch.rand_like(input)
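Putting the constructors above together, a minimal sketch that checks shapes and value ranges (the concrete size `(2, 3)` is just an example):

```python
import torch

size = (2, 3)           # example: a 2x3 tensor
a = torch.empty(size)   # uninitialized values
b = torch.zeros(size)   # all zeros
c = torch.ones(size)    # all ones
d = torch.rand(size)    # uniform random values in [0, 1)

# the *_like variants copy the shape (and dtype) of an existing tensor
b2 = torch.zeros_like(d)

print(b.shape)         # torch.Size([2, 3])
print(b2.shape)        # torch.Size([2, 3])
print(c.sum().item())  # 6.0
```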
- tensor concatenation
Sometimes tensors need to be concatenated along a given dimension; the operation is very simple.
# concatenate tensor a and tensor b along dimension dim_k
a = torch.cat((a, b), dim_k)
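For example (the shapes here are chosen for illustration), concatenating two (2, 3) tensors along dim 0 versus dim 1:

```python
import torch

a = torch.ones((2, 3))
b = torch.zeros((2, 3))

# along dim 0: row counts add up -> shape (4, 3)
cat0 = torch.cat((a, b), 0)
# along dim 1: column counts add up -> shape (2, 6)
cat1 = torch.cat((a, b), 1)

print(cat0.shape)  # torch.Size([4, 3])
print(cat1.shape)  # torch.Size([2, 6])
```

Note that all dimensions except the one you concatenate along must match, otherwise torch.cat raises an error.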
- tensor file access
A tensor produced by training can be saved to a file for later reading, which avoids spending time on retraining.
The idea: tensor -> numpy -> save numpy -> load numpy -> tensor
import numpy as np
# step 1. ts_a is a tensor; convert it to numpy format and store it as np_a
np_a = ts_a.numpy()
# step 2. save the numpy array; the file name is held in the variable file_embns (with the .npy suffix)
np.save(file_embns, np_a)
# step 3. load the numpy array; returns numpy format, stored as np_a
np_a = np.load(file_embns)
# step 4. np_a is a numpy array; convert it to tensor format and store it as ts_a
ts_a = torch.from_numpy(np_a)
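A full round trip of the four steps above, as a self-contained sketch (the file name is just an example, written to a temporary directory):

```python
import os
import tempfile

import numpy as np
import torch

ts_a = torch.rand((2, 3))

# tensor -> numpy -> save to .npy file
np_a = ts_a.numpy()
file_embns = os.path.join(tempfile.gettempdir(), "emb.npy")
np.save(file_embns, np_a)

# load .npy file -> numpy -> tensor
np_loaded = np.load(file_embns)
ts_loaded = torch.from_numpy(np_loaded)

print(torch.equal(ts_a, ts_loaded))  # True
```

One caveat: .numpy() only works on CPU tensors that do not require gradients; for a GPU tensor or one in a computation graph, call .detach().cpu().numpy() first.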
- converting tensor data to Python types
# Tensor ----> a single Python value; data is a Tensor and must contain exactly one element
data.item()
# Tensor ----> Python list; data is a Tensor; returns a nested list with the same shape
data.tolist()
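A quick illustration of both conversions (values chosen arbitrarily):

```python
import torch

# .item() extracts the value of a single-element tensor as a Python number
scalar = torch.tensor(3.5)
print(scalar.item())  # 3.5

# .tolist() converts a tensor of any shape into a nested Python list
mat = torch.tensor([[1, 2], [3, 4]])
print(mat.tolist())   # [[1, 2], [3, 4]]

# calling .item() on a multi-element tensor like mat would raise a RuntimeError
```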
To be continued and updated ...