Previous review: weight decay, regularization
1. The dropout method
1.1 Motivation
- A good model needs to be robust to perturbations of the input data.
- Training with noise-injected data is equivalent to Tikhonov regularization.
- The dropout method instead injects the noise between layers.
1.2 Unbiased noise addition
- For an input $\vec x$, we add noise to obtain $\vec x'$, and we require the noise to be unbiased:
$$E[\vec x'] = \vec x$$
- The dropout method perturbs each element as follows (unbiasedness is checked just below):
$$x_i' = \begin{cases} 0 & \text{with probability } p \\ \dfrac{x_i}{1-p} & \text{otherwise} \end{cases}$$
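Taking the expectation of a single element confirms that this perturbation is unbiased:
$$E[x_i'] = p \cdot 0 + (1-p) \cdot \frac{x_i}{1-p} = x_i,$$
so $E[\vec x'] = \vec x$ as required.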
1.3 Using dropout
1.3.1 Training
The dropout method is usually applied to the output of the hidden fully connected layer, for example:
$$\begin{aligned} \vec h &= \sigma(W_1 \vec x + \vec b_1) \\ \vec h' &= \mathrm{dropout}(\vec h) \\ \vec o &= W_2 \vec h' + \vec b_2 \\ \vec y &= \mathrm{softmax}(\vec o) \end{aligned}$$
When dropout is applied to a network with a single hidden layer in this way, some elements of the hidden layer output are set to 0.
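As a concrete illustration (the particular pattern of zeros is just one possible random draw), with $p = 0.5$ and hidden output $\vec h = (1, 2, 3, 4)$, one realization of the dropout layer is
$$\vec h' = (2, 0, 6, 0),$$
where the surviving elements are divided by $1 - p = 0.5$, i.e. doubled.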
1.3.2 Inference
- Regularizers are only used during training: they affect the update of model parameters.
- During inference, dropout returns the input directly:
$$\vec h = \mathrm{dropout}(\vec h)$$
- This also guarantees deterministic outputs (a short PyTorch sketch follows below).
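A minimal sketch of this behavior with PyTorch's built-in nn.Dropout (the same layer used in the concise implementation below): in training mode it zeroes elements at random, while in evaluation mode it is the identity.
import torch
from torch import nn

m = nn.Dropout(p=0.5)
x = torch.ones(8)
m.eval()                      # evaluation mode: dropout is the identity
print(torch.equal(m(x), x))   # True
m.train()                     # training mode: elements are zeroed at random
print(m(x))                   # some entries are 0, the rest are scaled to 2.0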
1.4 Summary
- Dropout randomly sets some of a layer's output elements to 0 in order to control model complexity.
- It usually acts on the outputs of the hidden layers of a multilayer perceptron.
- The dropout probability is a hyperparameter that controls model complexity.
2. Code implementation
2.1 Implementation from scratch
2.1.1 Dropout
We implement the dropout_layer function, which drops out elements of the input tensor X with probability dropout.
import torch
from torch import nn
from d2l import torch as d2l
def dropout_layer(X, dropout):
    assert 0 <= dropout <= 1
    # All elements are dropped
    if dropout == 1:
        return torch.zeros_like(X)
    # All elements are kept
    if dropout == 0:
        return X
    # Keep each element with probability 1 - dropout
    mask = (torch.rand(X.shape) > dropout).float()
    # Rescale the survivors so the expectation stays unchanged
    return mask * X / (1.0 - dropout)
In the code above, torch.rand(X.shape) draws uniform random samples between 0 and 1 with the same shape as X; an element is kept only when its sample exceeds dropout, so each element is zeroed with probability dropout. Let's test the function:
# Test the dropout_layer function
X = torch.arange(16, dtype=torch.float32).reshape((2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
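If the function is correct, dropout_layer(X, 0.) returns X unchanged and dropout_layer(X, 1.) returns all zeros, while dropout_layer(X, 0.5) zeroes roughly half of the entries at random and doubles the surviving ones, so its exact output differs from run to run.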
2.1.2 Define the model
We define a multilayer perceptron with two hidden layers, each containing 256 units.
# Define a multilayer perceptron with two hidden layers
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256
dropout1, dropout2 = 0.2, 0.5
class Net(nn.Module):
    def __init__(self, num_inputs, num_outputs,
                 num_hiddens1, num_hiddens2, is_training=True):
        super(Net, self).__init__()
        self.num_inputs = num_inputs
        self.training = is_training
        self.lin1 = nn.Linear(num_inputs, num_hiddens1)
        self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)
        self.lin3 = nn.Linear(num_hiddens2, num_outputs)
        self.relu = nn.ReLU()

    def forward(self, X):
        H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs))))
        # Apply dropout only while training
        if self.training:
            H1 = dropout_layer(H1, dropout1)
        H2 = self.relu(self.lin2(H1))
        if self.training:
            H2 = dropout_layer(H2, dropout2)
        out = self.lin3(H2)
        return out
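As a quick sanity check of the forward pass (the names net_check and X_check are mine, not part of the original notes):
net_check = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2)
X_check = torch.randn(2, 1, 28, 28)   # two fake Fashion-MNIST-sized images
print(net_check(X_check).shape)       # expected: torch.Size([2, 10])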
2.1.3 Training and testing
# Training and testing
num_epochs, lr, batch_size = 10, 0.5, 256
loss = nn.CrossEntropyLoss(reduction='none')
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2)
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
The running result is shown in the figure below:
2.2 Concise implementation
2.2.1 Define the model and dropout
# Concise implementation
net_concise = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
nn.Dropout(dropout1), nn.Linear(256, 256), nn.ReLU(),
nn.Dropout(dropout2), nn.Linear(256, 10))
# Concise implementation: initialize the weights
def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net_concise.apply(init_weights)
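Note that nn.Flatten turns each (1, 28, 28) image into a 784-dimensional vector, and each nn.Dropout layer sits directly after a ReLU activation, mirroring the from-scratch model. Unlike the hand-written dropout_layer, nn.Dropout switches itself off automatically when the model is put into evaluation mode.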
2.2.2 Training and testing
trainer = torch.optim.SGD(net_concise.parameters(), lr)
d2l.train_ch3(net_concise, train_iter, test_iter, loss, num_epochs, trainer)
The running results are shown in the figure below:
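After training finishes, here is a sketch of how predictions might be made on a test batch (the variable names X_batch, y_batch, and preds are mine); calling eval() first ensures the dropout layers act as the identity:
net_concise.eval()                          # disable dropout for inference
X_batch, y_batch = next(iter(test_iter))    # one batch of Fashion-MNIST test images
with torch.no_grad():
    preds = net_concise(X_batch).argmax(dim=1)
print(preds[:10], y_batch[:10])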
Next: [Hands-on Deep Learning v2 Li Mu] Study Notes 09: Numerical Stability, Model Initialization, Activation Function