[Paper notes] Self-Supervised GAN: self-supervision for generative adversarial networks via an auxiliary rotation loss

This is a CVPR 2019 paper from UCLA and Google Brain. The model is very simple: an auxiliary rotation loss is used to address GAN training instability. Because the rotation classification task needs no external annotations, each image can be labeled directly by its own rotation class, removing the auxiliary classifier's dependence on labeled data.

Self-Supervised GANs via Auxiliary Rotation Loss
Paper address: https://arxiv.org/abs/1811.11212
GitHub code: https://github.com/vandit15/Self-Supervised-Gans-Pytorch

The paper points out that an important issue with GANs is training instability (divergence, cyclic behavior, or mode collapse). The discriminator learns the characteristics of the current data distribution, but as training progresses the generator's samples drift away from that distribution and the discriminator forgets what it learned earlier; left unchecked, this causes the model to underfit. To improve stability, researchers proposed the conditional GAN (CGAN), in which the generator and discriminator use data labels to remember the distribution seen earlier in training. However, the main problem with CGAN is that it relies on well-annotated data. Even when such labels exist, they are often sparse and capture only a small portion of the high-level abstract information in the images.

The authors give two examples to illustrate this discriminator forgetting problem. In the figure below, the blue dashed line is a standard GAN; because the model's memory of the distribution becomes confused as training proceeds, its accuracy decreases.

In the following figure, the left side shows a GAN whose data distribution is changed every 1K iterations. After each change, the original GAN makes large errors and falls almost back to its initial state, as if it had learned nothing.

Thus, the authors propose the self-supervised GAN (SS-GAN): adding self-supervision effectively prevents the instability caused by forgetting. They take inspiration from "Unsupervised Representation Learning by Predicting Image Rotations", which builds a self-supervised task from image rotation. The geometric transformation is defined as rotating the image by 0, 90, 180, or 270 degrees; for a convolutional network to recognize which rotation was applied, it must understand the objects depicted in the image. Although this self-supervision method is very simple, it provides a powerful alternative supervisory signal for feature learning.
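A minimal PyTorch sketch of the rotation pretext task described above (this is my own illustration, not code from the paper or the linked repo): each image is rotated by 0/90/180/270 degrees and labeled with the index of the applied rotation.

```python
import torch

def make_rotation_batch(images: torch.Tensor):
    """images: (N, C, H, W) -> (4N, C, H, W) rotated copies and (4N,) rotation labels."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns: 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))  # rotate spatial dims
        labels.append(torch.full((images.size(0),), k,
                                 dtype=torch.long, device=images.device))
    return torch.cat(rotated, dim=0), torch.cat(labels, dim=0)
```

The labels come for free from the transformation itself, which is exactly why no external annotation is needed.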

The overall SS-GAN architecture is shown below. Concretely (see the sketch after this list):

  1. The first discriminator head works as in a standard GAN and outputs a real/fake judgment;
  2. The second head takes the discriminator's penultimate-layer output as a feature and attaches a linear classifier that predicts which rotation was applied.
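
A hedged PyTorch sketch of this two-headed discriminator; the backbone layers and feature size here are illustrative assumptions (the paper uses a ResNet-style discriminator), only the two-head structure follows the description above.

```python
import torch
import torch.nn as nn

class TwoHeadDiscriminator(nn.Module):
    """Shared backbone with a real/fake head and a 4-way rotation head."""
    def __init__(self, in_channels: int = 3, feat_dim: int = 128):
        super().__init__()
        # Illustrative convolutional backbone; the output of this stack plays
        # the role of the "penultimate layer" features mentioned in the notes.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gan_head = nn.Linear(feat_dim, 1)   # real/fake logit
        self.rot_head = nn.Linear(feat_dim, 4)   # linear classifier over 4 rotations

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)                     # shared features
        return self.gan_head(h), self.rot_head(h)
```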

The authors note that SS-GAN combines adversarial training with self-supervised learning, obtaining the benefits of a CGAN without any labeled data. SS-GAN achieves large-scale unconditional ImageNet image generation, an important step toward high-quality, fully unsupervised synthesis of natural images.
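To make the combination concrete, here is a rough sketch of how the adversarial and rotation losses could be mixed, reusing `make_rotation_batch` and `TwoHeadDiscriminator` from the sketches above. The non-saturating BCE adversarial loss and the weights `alpha`/`beta` are assumptions for illustration; in the paper the discriminator's rotation loss is computed on rotated real images and the generator's on rotated generated images, and D and G are updated in separate optimization steps.

```python
import torch
import torch.nn.functional as F

def ssgan_losses(D, real, fake, alpha=0.2, beta=1.0):
    # Adversarial terms on the real/fake head (non-saturating BCE variant).
    d_real_logit, _ = D(real)
    d_fake_logit, _ = D(fake.detach())
    d_adv = F.binary_cross_entropy_with_logits(d_real_logit, torch.ones_like(d_real_logit)) \
          + F.binary_cross_entropy_with_logits(d_fake_logit, torch.zeros_like(d_fake_logit))
    g_fake_logit, _ = D(fake)
    g_adv = F.binary_cross_entropy_with_logits(g_fake_logit, torch.ones_like(g_fake_logit))

    # Rotation terms: D classifies rotations of REAL images; G is rewarded
    # when rotations of its FAKE images are recognizable.
    real_rot, real_rot_y = make_rotation_batch(real)
    _, real_rot_logits = D(real_rot)
    d_rot = F.cross_entropy(real_rot_logits, real_rot_y)

    fake_rot, fake_rot_y = make_rotation_batch(fake)
    _, fake_rot_logits = D(fake_rot)
    g_rot = F.cross_entropy(fake_rot_logits, fake_rot_y)

    return d_adv + beta * d_rot, g_adv + alpha * g_rot
```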
