Notes on "Siamese Neural Networks for One-shot Image Recognition"

Copyright notice: this is the author's original post and may not be reproduced without permission. https://blog.csdn.net/qq_24548569/article/details/82024652

1 Motivation

  • Machine learning models often break down when forced to make predictions about data for which little supervised information is available.

  • One-shot learning: we may only observe a single example of each possible class before making a prediction about a test instance.

2 Innovation

This paper uses siamese neural networks to deal with the problem of one-shot learning.

3 Advantages

  • Once a siamese neural network has been tuned, we can then capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions.
  • Using a convolutional architecture, we are able to achieve strong results which exceed those of other deep learning models with near state-of-the-art performance on one-shot classification tasks.

4 Related Work

  • Li Fei-Fei et al. developed a variational Bayesian framework for one-shot image classification.
  • Lake et al. addressed one-shot learning for character recognition with a method called Hierarchical Bayesian Program Learning (HBPL).

5 Model

Siamese Neural Network with L fully-connected layers


The paper experiments with 2-layer, 3-layer, and 4-layer networks.

$h_{1,l}$: the hidden vector in layer $l$ for the first twin.
$h_{2,l}$: the hidden vector in layer $l$ for the second twin.
for the first L-1 layers:

$$h_{1,l} = \max\big(0,\; W_{l-1,l}^{T} h_{1,(l-1)} + b_l\big)$$
$$h_{2,l} = \max\big(0,\; W_{l-1,l}^{T} h_{2,(l-1)} + b_l\big)$$

for the last layer:

$$p = \sigma\Big(\sum_j \alpha_j \big| h_{1,l}^{(j)} - h_{2,l}^{(j)} \big|\Big)$$

where $\sigma$ is the sigmoid activation function and the $\alpha_j$ are additional learned parameters weighting the componentwise distance.
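The forward pass above can be sketched in NumPy. This is a minimal sketch: the layer sizes, random weights, and random inputs are hypothetical stand-ins, not the paper's trained parameters.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def twin_forward(x, weights, biases):
    """Run one twin through the shared stack of L-1 fully-connected ReLU layers."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W.T @ h + b)
    return h

def predict(x1, x2, weights, biases, alpha):
    """Final layer: p = sigmoid(sum_j alpha_j * |h1_j - h2_j|)."""
    h1 = twin_forward(x1, weights, biases)
    h2 = twin_forward(x2, weights, biases)
    return sigmoid(alpha @ np.abs(h1 - h2))

# Hypothetical sizes: 784-dim input, hidden layers of 128 and 64 units.
rng = np.random.default_rng(0)
sizes = [784, 128, 64]
weights = [rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [rng.normal(0.5, 0.01, n_out) for n_out in sizes[1:]]
alpha = rng.normal(0.0, 0.2, sizes[-1])

p = predict(rng.random(784), rng.random(784), weights, biases, alpha)
```

Because the twins share weights, feeding the same image to both sides makes the L1 distance exactly zero and p exactly 0.5, which is a handy sanity check.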

Siamese Neural Network with CNN


first twin: (conv → ReLU → max-pool) × 3 → conv → FC → sigmoid
second twin: (conv → ReLU → max-pool) × 3 → conv → FC → sigmoid

$$h_{1,l}^{(k)} = \text{max-pool}\big(\max(0,\; W_{l-1,l}^{(k)} \star h_{1,(l-1)} + b_l),\, 2\big)$$
$$h_{2,l}^{(k)} = \text{max-pool}\big(\max(0,\; W_{l-1,l}^{(k)} \star h_{2,(l-1)} + b_l),\, 2\big)$$

where $k$ indexes the filter map and $\star$ denotes the convolution operation.

for the last fully connected layer:

$$p = \sigma\Big(\sum_j \alpha_j \big| h_{1,l}^{(j)} - h_{2,l}^{(j)} \big|\Big)$$

6 Learning

Loss function

$M$: minibatch size
$y(x_1^{(i)}, x_2^{(i)})$: the label for the $i$-th pair in the minibatch; $y(x_1^{(i)}, x_2^{(i)}) = 1$ if $x_1$ and $x_2$ are from the same class, otherwise $y(x_1^{(i)}, x_2^{(i)}) = 0$
loss function: regularized cross-entropy

$$\mathcal{L}(x_1^{(i)}, x_2^{(i)}) = y(x_1^{(i)}, x_2^{(i)}) \log p(x_1^{(i)}, x_2^{(i)}) + \big(1 - y(x_1^{(i)}, x_2^{(i)})\big) \log\big(1 - p(x_1^{(i)}, x_2^{(i)})\big) + \lambda^T |w|^2$$
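A per-pair sketch of this loss in NumPy. It is written as a quantity to minimize (the negative of the log-likelihood terms), and `lam` is a single hypothetical stand-in for the paper's per-layer regularization weights.

```python
import numpy as np

def pair_loss(p, y, weights, lam=1e-4):
    """Regularized cross-entropy for one pair (p, y).
    lam is a hypothetical scalar standing in for the per-layer lambdas."""
    eps = 1e-12  # numerical guard against log(0)
    ce = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    l2 = lam * sum(float(np.sum(W ** 2)) for W in weights)
    return ce + l2
```

A prediction close to the label gives a small loss; a confident wrong prediction gives a large one.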

Optimization

$\eta_j$: learning rate for layer $j$
$\mu_j$: momentum for layer $j$
$\lambda_j$: $L_2$ regularization weight for layer $j$

update rule at epoch T is as follows:

$$w_{kj}^{(T)}(x_1^{(i)}, x_2^{(i)}) = w_{kj}^{(T)} + \Delta w_{kj}^{(T)}(x_1^{(i)}, x_2^{(i)}) + 2\lambda_j |w_{kj}|$$
$$\Delta w_{kj}^{(T)}(x_1^{(i)}, x_2^{(i)}) = -\eta_j \nabla w_{kj}^{(T)} + \mu_j \Delta w_{kj}^{(T-1)}$$

where $\nabla w_{kj}^{(T)}$ is the partial derivative of the loss with respect to the weight between the $j$-th neuron in one layer and the $k$-th neuron in the successive layer.
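The update rule sketched in NumPy with layer-wise hyperparameters. One caveat: the $L_2$ term is applied here as standard weight decay (the gradient of $\lambda \|w\|^2$), which is one reading of the penalty term in the update rule above, not a literal transcription.

```python
import numpy as np

def momentum_update(w, grad, velocity, eta, mu, lam):
    """One epoch-T step with layer-wise eta_j, mu_j, lambda_j.
    velocity accumulates -eta * grad plus mu times the previous velocity;
    the L2 penalty is applied as weight decay: -2 * lam * w."""
    velocity = -eta * grad + mu * velocity
    w = w + velocity - 2.0 * lam * w
    return w, velocity
```

With momentum and regularization switched off, this reduces to plain SGD: w ← w − η · grad.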

Weight initialization

Siamese Neural Network with L fully-connected layers

$W$ of fully-connected layers: normal distribution, zero mean, standard deviation $1/\sqrt{\text{fan-in}}$ (fan-in $= n_{l-1}$, the number of units in the previous layer)
$b$ of fully-connected layers: normal distribution, mean 0.5, standard deviation 0.01

Siamese Neural Network with CNN

$W$ of fully-connected layers: normal distribution, zero mean, standard deviation 0.2
$b$ of fully-connected layers: normal distribution, mean 0.5, standard deviation 0.01
$W$ of convolutional layers: normal distribution, zero mean, standard deviation 0.01
$b$ of convolutional layers: normal distribution, mean 0.5, standard deviation 0.01
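The initialization schemes above, sketched in NumPy. The layer shapes passed in the example are hypothetical; the distributions follow the parameters listed in these notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_fc_mlp(fan_in, fan_out):
    """FC layer of the fully-connected network: W ~ N(0, 1/fan_in), b ~ N(0.5, 0.01^2)."""
    W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), (fan_in, fan_out))
    b = rng.normal(0.5, 0.01, fan_out)
    return W, b

def init_fc_conv_net(fan_in, fan_out):
    """FC layer of the convolutional network: W ~ N(0, 0.2^2), b ~ N(0.5, 0.01^2)."""
    return rng.normal(0.0, 0.2, (fan_in, fan_out)), rng.normal(0.5, 0.01, fan_out)

def init_conv(height, width, c_in, c_out):
    """Convolutional layer: W ~ N(0, 0.01^2), b ~ N(0.5, 0.01^2)."""
    W = rng.normal(0.0, 0.01, (height, width, c_in, c_out))
    b = rng.normal(0.5, 0.01, c_out)
    return W, b

# Hypothetical shapes for illustration.
Wc, bc = init_conv(10, 10, 64, 128)
Wf, bf = init_fc_conv_net(1000, 100)
```

Sampling enough weights lets you verify the empirical standard deviations match the intended ones.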

Learning schedule

Learning rates decay by 1% per epoch: $\eta_j^{(T)} = 0.99\,\eta_j^{(T-1)}$.
Momentum starts at 0.5 in every layer and increases linearly each epoch until it reaches the value $\mu_j$.

The paper trains the fully-connected siamese network for 300 epochs and the convolutional siamese network for 200 epochs.

The paper monitors one-shot validation error on a set of 320 one-shot learning tasks. When the validation error does not decrease for 20 epochs, training stops and the parameters from the best epoch (by one-shot validation error) are used.
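The learning schedule reduces to two small functions. The momentum ramp length below is a hypothetical choice; these notes do not pin down how many epochs the linear increase takes.

```python
def lr_schedule(eta0, epoch):
    """Learning rate decayed 1% per epoch: eta_j(T) = 0.99 * eta_j(T-1)."""
    return eta0 * 0.99 ** epoch

def momentum_schedule(mu_final, epoch, ramp_epochs=100):
    """Momentum grows linearly from 0.5 to mu_j.
    ramp_epochs is a hypothetical ramp length, not from the paper."""
    frac = min(epoch / ramp_epochs, 1.0)
    return 0.5 + frac * (mu_final - 0.5)
```

After many epochs the learning rate shrinks geometrically while the momentum saturates at its per-layer ceiling.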

Omniglot dataset

The Omniglot data set contains 50 alphabets, with roughly 15 to 40 characters per alphabet. Every character across these alphabets is produced a single time by each of 20 drawers.

examples in the Omniglot data set

Affine distortions

The paper augments the training set with small affine distortions. For each image pair $(x_1, x_2)$, it generates a pair of affine transformations $T_1, T_2$ to yield $x_1' = T_1(x_1)$, $x_2' = T_2(x_2)$, where $T = (\theta, \rho_x, \rho_y, s_x, s_y, t_x, t_y)$ (rotation, shear, scale, and translation components).

A sample of random affine distortions generated for a single character in the Omniglot data set.
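A sketch of sampling one such transformation and composing it into a 2×3 affine matrix. The sampling ranges below are the ones I believe the paper uses, but treat them as assumptions; the paper also includes each component only with some probability, which this sketch omits.

```python
import numpy as np

def random_affine(rng):
    """Sample T = (theta, rho_x, rho_y, s_x, s_y, t_x, t_y) and compose it
    into a 2x3 matrix: rotation @ shear @ scale, plus a translation column.
    Ranges are assumed: theta in [-10, 10] deg, shear in [-0.3, 0.3],
    scale in [0.8, 1.2], translation in [-2, 2] pixels."""
    theta = rng.uniform(-10, 10) * np.pi / 180.0
    rho_x, rho_y = rng.uniform(-0.3, 0.3, 2)
    s_x, s_y = rng.uniform(0.8, 1.2, 2)
    t_x, t_y = rng.uniform(-2, 2, 2)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Sh = np.array([[1.0, rho_x], [rho_y, 1.0]])
    Sc = np.diag([s_x, s_y])
    A = R @ Sh @ Sc
    return np.hstack([A, [[t_x], [t_y]]])
```

Each distorted image is then produced by applying the matrix to pixel coordinates with any standard image-warping routine.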

Train

The mini-batch size is 32. A sample mini-batch:

the samples of mini-batch
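A sketch of assembling such a mini-batch of labeled pairs, half same-class and half different-class. The `images_by_class` layout (class id → array of that class's images) is a hypothetical convenience, not the paper's data format.

```python
import numpy as np

def sample_pair_batch(images_by_class, batch_size, rng):
    """Build a batch of (x1, x2, y) pairs: the first half are same-class
    pairs (y=1), the second half different-class pairs (y=0)."""
    classes = list(images_by_class)
    x1, x2, y = [], [], []
    for i in range(batch_size):
        same = i < batch_size // 2
        c1 = rng.choice(classes)
        c2 = c1 if same else rng.choice([c for c in classes if c != c1])
        x1.append(images_by_class[c1][rng.integers(len(images_by_class[c1]))])
        x2.append(images_by_class[c2][rng.integers(len(images_by_class[c2]))])
        y.append(1.0 if same else 0.0)
    return np.stack(x1), np.stack(x2), np.array(y)
```

Balancing same and different pairs keeps the verification labels from collapsing toward the overwhelmingly more common "different" case.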

7 Experiment

Test

Testing uses N-way one-shot trials; the image below shows a 20-way trial.
20-way
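The N-way evaluation loop itself is simple: score the test image against one support example per class and pick the argmax. Here `score_fn` stands in for the trained network's same-pair probability $p(x_1, x_2)$; the dummy score in the test is only a placeholder.

```python
import numpy as np

def one_shot_predict(score_fn, test_image, support_set):
    """N-way one-shot trial: return the index of the support example
    (one per class) that score_fn ranks most similar to the test image."""
    scores = [score_fn(test_image, s) for s in support_set]
    return int(np.argmax(scores))

def evaluate(score_fn, trials):
    """Accuracy over (test_image, support_set, true_index) trials."""
    correct = sum(one_shot_predict(score_fn, x, s) == t for x, s, t in trials)
    return correct / len(trials)
```

With 20 classes, random guessing gives 5% accuracy, which is the natural baseline for the 20-way numbers reported below.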

Results

Siamese Neural Network with L fully-connected layers

accuracy on Omniglot verification task

Siamese Neural Network with CNN

accuracy on Omniglot verification task

One-shot Image Recognition

Example of the model’s top-5 classification performance on 1-versus-20 one-shot classification task.

One-shot accuracy on evaluation set:

Comparing best one-shot accuracy from each type of network against baselines:

Reference: https://blog.csdn.net/bryant_meng/article/details/80087079
Code: https://github.com/sorenbouma/keras-oneshot
