Torch notes
import torch
import numpy as np
import torch.nn as nn
a_np = np.random.rand(10,100)
NumPy knowledge review
a_np.dtype # data type
a_np.ndim # number of dimensions
a_np.shape # shape, a tuple of ints
a_np = a_np.astype(np.int32) # convert the data type (assigning to .dtype only reinterprets the raw bytes)
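A minimal sketch of the difference (variable names here are illustrative): assigning to .dtype or calling .view() reinterprets the existing bytes, while astype converts the values.
x = np.random.rand(2, 2)          # float64 values in [0, 1)
reinterpreted = x.view(np.int32)  # same bytes reinterpreted: garbage values, shape becomes (2, 4)
converted = x.astype(np.int32)    # true conversion: values truncated toward zero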
Getting basic tensor information
tensor_a = torch.from_numpy(a_np)
tensor_a.type() # get the tensor type (as a string)
tensor_a.size() # get the shape; returns a torch.Size
tensor_a.size(0) # get the size of the first dimension
tensor_a.dim() # get the number of dimensions
tensor_a.device # the device the tensor lives on
Tensor data type conversion
tensor_b = tensor_a.float()
tensor_b = tensor_a.to(torch.float)
Device type conversion
tensor_b = tensor_b.cuda()
tensor_b = tensor_b.cpu()
tensor_b = tensor_a.to(torch.device(0)) # an integer is legacy shorthand for the CUDA device with that index
Converting a tensor to NumPy
ndarray = tensor_b.numpy()
# Note: a tensor on the GPU cannot be converted to numpy directly; move it to the CPU first
ndarray = tensor_b.cpu().numpy()
# numpy to torch.Tensor
tensor = torch.from_numpy(ndarray)
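One caveat worth knowing (a minimal sketch; variable names are illustrative): on CPU, torch.from_numpy and Tensor.numpy() share the same underlying memory, so in-place changes are visible on both sides.
a = np.ones(3)
t = torch.from_numpy(a)  # shares memory with a
a[0] = 5.0
print(t[0].item())       # 5.0 -- the tensor sees the change
t.add_(1)                # in-place op on the tensor
print(a)                 # [6. 2. 2.] -- the ndarray sees it too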
Tensor dimension manipulation
tensor_a.t_().size() # in-place transpose (2-D tensors only)
tensor_a.t_().size() # transposing again restores the original shape
tensor_a.unsqueeze_(2) # insert a new dimension of size 1 at position 2, in place
unsqu_tensor = tensor_a.squeeze_() # squeeze out all dimensions of size 1, in place
tensor = torch.reshape(tensor_a, [50,20])
tensor.size()
Extracting the value of a single-element tensor
tensor_a = tensor_a.cuda()
tensor_a[1][1].item() # returns a Python number, even from a GPU tensor
tensor_a.device
Concatenating and stacking tensors
- For example, given three 10 × 5 tensors, torch.cat returns a 30 × 5 tensor (joining along an existing dimension), while torch.stack returns a 3 × 10 × 5 tensor (adding a new dimension).
temp1 = torch.from_numpy(np.random.rand(5,4))
temp2 = torch.from_numpy(np.random.rand(5,4))
temp1.size()
temp2.size()
temp3 = torch.stack([temp1,temp2],dim=0)
temp4 = torch.cat([temp1,temp2],dim=0)
temp3.size() # torch.Size([2, 5, 4]) -- stack adds a new leading dimension
temp4.size() # torch.Size([10, 4]) -- cat joins along dim 0
Matrix Multiplication
# Matrix multiplication: (m*n) * (n*p) -> (m*p).
result = torch.mm(tensor1, tensor2)
# Batch matrix multiplication: (b*m*n) * (b*n*p) -> (b*m*p).
result = torch.bmm(tensor1, tensor2)
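tensor1 and tensor2 above are placeholders; a minimal runnable sketch with concrete shapes (names chosen here for illustration):
t1 = torch.rand(2, 3)              # m*n
t2 = torch.rand(3, 4)              # n*p
torch.mm(t1, t2).size()            # torch.Size([2, 4])
bt1 = torch.rand(5, 2, 3)          # b*m*n
bt2 = torch.rand(5, 3, 4)          # b*n*p
torch.bmm(bt1, bt2).size()         # torch.Size([5, 2, 4])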
Print model information
- Number of parameters
- Model structure via torchsummary
class myNet(nn.Module):
    def __init__(self, *other_para):
        super(myNet, self).__init__()
        self.embedding_layer = nn.Embedding(10, 3)

    def forward(self, x):  # a forward pass is required for the model to be usable
        return self.embedding_layer(x)
net = myNet()
num_parameters = sum(torch.numel(parameter) for parameter in net.parameters())
from torchsummary import summary
# Note: torchsummary feeds random float inputs, so layers such as nn.Embedding
# that expect integer indices may fail with this call.
summary(net, input_size=(2,2))
Model initialization
# Common practice for initialization.
for layer in model.modules():
    if isinstance(layer, torch.nn.Conv2d):
        torch.nn.init.kaiming_normal_(layer.weight, mode='fan_out',
                                      nonlinearity='relu')
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.BatchNorm2d):
        torch.nn.init.constant_(layer.weight, val=1.0)
        torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.Linear):
        torch.nn.init.xavier_normal_(layer.weight)
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)
# Initialization with given tensor.
layer.weight = torch.nn.Parameter(tensor)
Computing classification accuracy from output scores
score = model(images)
prediction = torch.argmax(score, dim=1) # argmax over raw scores; applying softmax first would not change it
num_correct = torch.sum(prediction == labels).item()
accuracy = num_correct / labels.size(0)
Save Model
- torch.save(my_model.state_dict(), "params.pkl")
Load Model
- First instantiate a model with the same network structure
- model.load_state_dict(torch.load("params.pkl"))
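Putting save and load together with the myNet class from above (a minimal sketch; the file name follows the note, and net2 is just a fresh instance):
net = myNet()
torch.save(net.state_dict(), "params.pkl")    # save the weights only
net2 = myNet()                                # recreate the same architecture first
net2.load_state_dict(torch.load("params.pkl"))
net2.eval()                                   # switch to eval mode before inference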
Miscellaneous notes
- The input to torch.nn.CrossEntropyLoss does not need a Softmax; torch.nn.CrossEntropyLoss is equivalent to torch.nn.functional.log_softmax followed by torch.nn.NLLLoss (see the sketch after this list).
- In PyTorch, the target y does not need to be one-hot encoded; CrossEntropyLoss takes class indices and handles the rest internally. By contrast, Keras multi-class losses typically require one-hot targets to compute the loss.
- model.train(): enables BatchNormalization (batch statistics) and Dropout.
- model.eval(): disables Dropout and makes BatchNormalization use its running statistics.
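A minimal sketch verifying the CrossEntropyLoss equivalence noted above (logits and labels are made-up example data):
import torch.nn.functional as F
logits = torch.randn(4, 3)             # raw scores, no softmax applied
labels = torch.tensor([0, 2, 1, 2])    # class indices, not one-hot
loss_a = nn.CrossEntropyLoss()(logits, labels)
loss_b = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(loss_a, loss_b))  # True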
Tricks
- Promptly del intermediate variables that are no longer needed to save GPU memory.
- Use in-place operations to save GPU memory, e.g. the underscore-suffixed tensor methods (add_, relu_) or nn.ReLU(inplace=True).
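A minimal sketch of both tricks (shapes are arbitrary):
big = torch.rand(4096, 4096)
total = big.sum()
del big                 # drop the reference as soon as it is no longer needed
x = torch.rand(64, 128)
y = torch.relu(x)       # out-of-place: allocates a new tensor
x.relu_()               # in-place variant reuses x's memory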