PyTorch's nn.Linear() sets up a fully connected layer in a network. Note that the input and output of a fully connected layer are two-dimensional tensors with the general shape [batch_size, size]; this differs from a convolutional layer, which requires its input and output to be four-dimensional tensors. Its usage and parameters are explained as follows:
in_features is the size of each input sample, i.e., the size in the input shape [batch_size, size].
out_features is the size of each output sample, i.e., the output tensor has shape [batch_size, out_features]; it also equals the number of neurons in the fully connected layer.
In terms of tensor shapes, the layer transforms an input of shape [batch_size, in_features] into an output of shape [batch_size, out_features].
Example usage:
import torch as t
from torch import nn
# in_features is determined by the shape of the input tensor; out_features determines the shape of the output tensor
connected_layer = nn.Linear(in_features = 64*64*3, out_features = 1)
# Assume the input image has shape [64, 64, 3]
input = t.randn(1,64,64,3)
# The four-dimensional tensor must be flattened to two dimensions before it can be fed to the fully connected layer
input = input.view(1,64*64*3)
print(input.shape)
output = connected_layer(input) # apply the fully connected layer
print(output.shape)
The result of running this code is:
torch.Size([1, 12288])
torch.Size([1, 1])
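Under the hood, nn.Linear stores a weight matrix of shape [out_features, in_features] and a bias of shape [out_features], and computes output = input @ weight.T + bias. The following sketch (my own sanity check, not from the original post, using a small hypothetical 4-in/2-out layer) verifies this against a manual computation:

```python
import torch
from torch import nn

# A small layer for illustration: weight has shape [2, 4], bias has shape [2]
layer = nn.Linear(in_features=4, out_features=2)
x = torch.randn(3, 4)  # a batch of 3 samples with 4 features each

out = layer(x)                        # shape [3, 2]
manual = x @ layer.weight.T + layer.bias

print(out.shape)                      # torch.Size([3, 2])
print(torch.allclose(out, manual))   # True
```

This also explains the shapes in the example above: the layer's weight there is [1, 12288], so a [1, 12288] input produces a [1, 1] output.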
Reprinted from: https://blog.csdn.net/qq_42079689/article/details/102873766