"Hands-on deep learning" - 19 convolutional layers

Study notes for Mushen's (Mu Li's) "Hands-on Deep Learning" course, recording my learning process; please buy the book for the full details.

Bilibili video link
Open-source tutorial link

convolution


A 12-megapixel camera produces an RGB picture, so the input has roughly 36M elements (3 color channels × 12M pixels).
Problems encountered when using an MLP for classification:
There are far too many parameters; the GPU cannot even store them.
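To make this concrete, here is a rough back-of-the-envelope calculation (my own illustrative numbers, assuming a single hidden layer of 100 units):

# Rough sketch with assumed numbers: one fully connected hidden layer of 100 units
# on a 36M-dimensional input already needs billions of weights.
inputs = 36_000_000                # 12M pixels x 3 RGB channels
hidden = 100                       # an assumed, very modest hidden layer
weights = inputs * hidden          # 3.6 billion parameters
memory_gb = weights * 4 / 1024**3  # float32, 4 bytes per weight
print(f'{weights:.1e} weights, about {memory_gb:.1f} GB')  # 3.6e+09 weights, about 13.4 GB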
This motivates the introduction of convolution.
The principles we use to find patterns in pictures (translation invariance and locality) inspired the design of convolution.
Reformulate the fully connected layer so that both the input and the output are two-dimensional.
Following the MLP formulation, each output position would need its own weight block the size of the convolution kernel. Adding translation invariance and locality lets the same kernel be moved across the whole image (the pattern recognizer stays the same everywhere), which greatly reduces the number of parameters.
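By contrast, here is a small sketch (with my own assumed channel counts) showing that a convolutional layer's parameter count depends only on the kernel size and the number of channels, not on the image size:

from torch import nn

# Assumed example: mapping 3 input channels to 100 output channels with a 5x5 kernel
# costs 3 * 100 * 5 * 5 = 7500 weights (plus 100 biases), no matter how large the image is.
conv = nn.Conv2d(in_channels=3, out_channels=100, kernel_size=5)
print(conv.weight.numel(), conv.bias.numel())  # 7500 100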
Summary
Convolution is a special kind of fully connected layer, with local connections and shared weights.
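A small sketch of this point (my own illustration, using torch.nn.functional.unfold): applying one shared weight row to every unfolded patch reproduces the convolution, i.e. a convolution is a fully connected layer whose weights are local and tied across positions.

import torch
import torch.nn.functional as F

X = torch.arange(9.0).reshape(1, 1, 3, 3)   # the same 3x3 example input used below
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])  # the same 2x2 example kernel used below
patches = F.unfold(X, kernel_size=2)        # shape (1, 4, 4): each column is one flattened 2x2 patch
out = K.reshape(1, -1) @ patches            # one shared weight row applied to every patch
print(out.reshape(2, 2))                    # tensor([[19., 25.], [37., 43.]])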

convolutional layer

The convolution computation, with a 2×2 convolution kernel:
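For example, with the 3×3 input and the 2×2 kernel used in the code below, the top-left output element is $0 \cdot 0 + 1 \cdot 1 + 3 \cdot 2 + 4 \cdot 3 = 19$: the kernel is laid over the top-left 2×2 window, the elementwise products are summed, and then the kernel slides one step to produce the next element.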
For a two-dimensional convolutional layer, the output Y has shape $(n_h - k_h + 1) \times (n_w - k_w + 1)$; each spatial dimension shrinks by $k - 1$. For example, a 3×3 input with a 2×2 kernel gives a 2×2 output.
Different convolution kernels produce different effects; the task determines what the learned kernel finally looks like.
Although it is called a convolution layer, what is actually implemented is cross-correlation; the two differ only by a flip of the kernel, which makes no difference once the kernel is learned.
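A minimal sketch of that difference (my own example): a true convolution is just cross-correlation with the kernel flipped in both spatial dimensions, and since the kernel is learned anyway, the distinction has no practical effect.

import torch
import torch.nn.functional as F

X = torch.arange(9.0).reshape(1, 1, 3, 3)
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]]).reshape(1, 1, 2, 2)
cross_corr = F.conv2d(X, K)                          # what deep learning frameworks actually compute
true_conv = F.conv2d(X, torch.flip(K, dims=[2, 3]))  # mathematical convolution: flip the kernel first
print(cross_corr)
print(true_conv)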
One-dimensional convolution (e.g. for text, audio, and time series) and three-dimensional convolution (e.g. for video and medical images).
The kernel size controls locality and is a hyperparameter.
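A quick sketch of that hyperparameter (my own illustration): the kernel size determines how large a neighborhood each output element sees, and also how many weights the layer has.

from torch import nn

for k in (1, 3, 5, 7):
    conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=k)
    print(k, conv.weight.numel())  # k*k weights per input/output channel pair: 1, 9, 25, 49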

hands-on learning

cross-correlation

import torch
from torch import nn
from d2l import torch as d2l

def corr2d(X, K):  #@save
    """计算二维互相关运算"""
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y
X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
corr2d(X, K)
tensor([[19., 25.],
        [37., 43.]])

Implementing a 2D Convolutional Layer

class Conv2D(nn.Module):
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))  # weights initialized uniformly in [0, 1)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias
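As a quick sanity check (my own addition, not in the original notes), applying this layer to the 3×3 example input above yields a 2×2 output; the values differ from the corr2d example because the kernel is random and untrained.

conv = Conv2D(kernel_size=(2, 2))
conv(X).shape  # torch.Size([2, 2])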

Edge Detection of Objects in Images

X = torch.ones((6, 8))  # the input image
X[:, 2:6] = 0
X
tensor([[1., 1., 0., 0., 0., 0., 1., 1.],
        [1., 1., 0., 0., 0., 0., 1., 1.],
        [1., 1., 0., 0., 0., 0., 1., 1.],
        [1., 1., 0., 0., 0., 0., 1., 1.],
        [1., 1., 0., 0., 0., 0., 1., 1.],
        [1., 1., 0., 0., 0., 0., 1., 1.]])
K = torch.tensor([[1.0, -1.0]])  # convolution kernel for detecting vertical edges
K
tensor([[ 1., -1.]])
Y = corr2d(X, K)
Y
tensor([[ 0.,  1.,  0.,  0.,  0., -1.,  0.],
        [ 0.,  1.,  0.,  0.,  0., -1.,  0.],
        [ 0.,  1.,  0.,  0.,  0., -1.,  0.],
        [ 0.,  1.,  0.,  0.,  0., -1.,  0.],
        [ 0.,  1.,  0.,  0.,  0., -1.,  0.],
        [ 0.,  1.,  0.,  0.,  0., -1.,  0.]])
corr2d(X.t(), K)  # after transposing, the edges are horizontal and this kernel cannot detect them
tensor([[0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.]])
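As a follow-up sketch (my own addition): transposing the kernel as well gives a 2×1 kernel [[1], [-1]] that detects horizontal edges, so the edges in the transposed image are found again.

corr2d(X.t(), K.t())  # the now-horizontal edges show up as rows of 1 and -1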

Learning a Convolution Kernel

# Construct a 2D convolutional layer with 1 output channel and a kernel of shape (1, 2)
conv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)

# This layer uses a four-dimensional input/output format (batch size, channels, height, width),
# where the batch size and the number of channels are both 1
X = X.reshape((1, 1, 6, 8))
Y = Y.reshape((1, 1, 6, 7))
lr = 3e-2  # learning rate

for i in range(10):
    Y_hat = conv2d(X)
    l = (Y_hat - Y) ** 2
    conv2d.zero_grad()
    l.sum().backward()
    # update the kernel with one step of gradient descent
    conv2d.weight.data[:] -= lr * conv2d.weight.grad
    if (i + 1) % 2 == 0:
        print(f'epoch {i + 1}, loss {l.sum():.3f}')
epoch 2, loss 2.274
epoch 4, loss 0.508
epoch 6, loss 0.137
epoch 8, loss 0.044
epoch 10, loss 0.016
conv2d.weight.data.reshape((1, 2))  # the learned kernel, close to the true kernel [1, -1]
tensor([[ 0.9806, -1.0056]])


Origin blog.csdn.net/cjw838982809/article/details/132447856