PyTorch personal learning record summary 07

Table of contents

Neural Networks - Nonlinear Activation

Neural Networks - Introduction to Linear and Other Layers 


Neural Networks - Nonlinear Activation

Official document address: torch.nn — PyTorch 2.0 documentation 

Commonly used: Sigmoid, ReLU, LeakyReLU, etc.

 

Function: nonlinear activations introduce nonlinearity into the model, so that during training it can fit more complex feature relationships.

One of the parameters is inplace, which defaults to False and controls whether the input is modified in place. True means the input tensor itself is overwritten and no separate result is returned; False leaves the input unchanged and returns a new tensor (inplace=False is recommended).

import torch
from torch import nn

# 2x2 input containing both positive and negative values
input = torch.tensor([[3, -1],
                      [-0.5, 1]])
# reshape to (batch_size, channels, H, W) = (1, 1, 2, 2)
input = torch.reshape(input, (1, 1, 2, 2))

relu = nn.ReLU()
input_relu = relu(input)   # negative entries are clamped to 0

print('input={}\ninput_relu:{}'.format(input, input_relu))

# input=tensor([[[[ 3.0000, -1.0000],
#           [-0.5000,  1.0000]]]])
# input_relu:tensor([[[[3., 0.],
#           [0., 1.]]]])
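
As a brief extra sketch (reusing the same small tensor as above, without the reshape), Sigmoid and LeakyReLU are applied the same way, and inplace=True overwrites the input directly:

import torch
from torch import nn

x = torch.tensor([[3., -1.],
                  [-0.5, 1.]])

# Sigmoid squashes every element into the range (0, 1)
print(nn.Sigmoid()(x))

# LeakyReLU keeps a small slope (default 0.01) for negative values instead of zeroing them
print(nn.LeakyReLU()(x))

# With inplace=True the input tensor itself is overwritten and no copy is made
relu_inplace = nn.ReLU(inplace=True)
relu_inplace(x)
print(x)   # x now holds the ReLU result: negative entries replaced by 0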

Neural Networks - Introduction to Linear and Other Layers 

Linear Layer: torch.nn.Linear(in_features, out_features, bias=True), with bias=True by default. It applies a linear transformation to the incoming data.

Parameters

  • in_features – the size of each input sample
  • out_features – the size of each output sample
  • bias – if set to False, the layer will not learn an additive bias. Default: True

Shape: pay attention to the size of the last dimension of the input and output. During training, nn.Linear is usually used as the fully connected layer in the last few steps of a network, after the feature map has been flattened to one dimension, so at that point only the number of features matters; that is, the input and output are effectively one-dimensional per sample (see the sketch below).
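
A minimal sketch of nn.Linear; the layer sizes below are arbitrary, chosen only for illustration:

import torch
from torch import nn

# project 20 input features down to 5 output features per sample
linear = nn.Linear(in_features=20, out_features=5, bias=True)

x = torch.randn(8, 20)   # (batch_size, in_features)
y = linear(x)
print(y.shape)           # torch.Size([8, 5]) -> (batch_size, out_features)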

"Flatten to one-dimensional" is often used torch.nn.Flatten(start_dim=1, end_dim=- 1)

Note that start_dim means "flatten all dimensions from start_dim onward into a single dimension". In actual training the default start_dim=1 is used, because a training tensor is 4-dimensional, [batch_size, C, H, W], and the 0th dimension (batch_size) must not be touched, so flattening starts from dimension 1.

The more important remaining layers are torch.nn.BatchNorm2d, torch.nn.Dropout, and the loss functions (covered later). The Transformer Layers and Recurrent Layers are not used very often here.
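
As a rough illustration of how BatchNorm2d and Dropout are applied (the tensor sizes below are made up for the example):

import torch
from torch import nn

x = torch.randn(4, 3, 8, 8)   # (batch_size, C, H, W)

# BatchNorm2d normalizes each of the C channels over the batch and spatial dims
bn = nn.BatchNorm2d(num_features=3)
print(bn(x).shape)            # torch.Size([4, 3, 8, 8])

# Dropout randomly zeroes elements with probability p during training
dropout = nn.Dropout(p=0.5)
print(dropout(x).shape)       # torch.Size([4, 3, 8, 8])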

import torch

# Flatten a 4-D tensor; compare the default with start_dim=1

input = torch.arange(54)
input = torch.reshape(input, (2, 3, 3, 3))   # (batch_size, C, H, W)

y_0 = torch.flatten(input)                   # flattens everything into 1-D
y_1 = torch.flatten(input, start_dim=1)      # keeps the batch dimension

print(input.shape)
print(y_0.shape)
print(y_1.shape)

# torch.Size([2, 3, 3, 3])
# torch.Size([54])
# torch.Size([2, 27])
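
Putting the two together, a typical fully connected head flattens the feature map and then applies nn.Linear. This is only a sketch with made-up layer sizes:

import torch
from torch import nn

# feature map coming out of a conv stack: (batch_size, C, H, W)
features = torch.randn(2, 3, 3, 3)

head = nn.Sequential(
    nn.Flatten(start_dim=1),                      # (2, 3, 3, 3) -> (2, 27)
    nn.Linear(in_features=27, out_features=10),   # (2, 27) -> (2, 10)
)

print(head(features).shape)                       # torch.Size([2, 10])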


Origin blog.csdn.net/timberman666/article/details/131877740