Implementing a Variational Autoencoder (VAE) with the PyTorch framework

Table of contents

1. Understanding Variational Autoencoders (VAE)

2. The specific process of a variational autoencoder

(1) Specific process 

(2) Important details

3. Variational autoencoder model structure 

(1) Encoder

(2) Decoder

4. Reparameterization Trick

5. Variational autoencoder loss function

6. VAE code implementation 


Autoencoder principle and implementation using Pytorch framework (AutoEncoder)

Pytorch implements autoencoder variants

1. Understanding Variational Autoencoders (VAE)

        The basic autoencoder essentially learns a mapping between the input x and the latent variable z. It is a discriminative model, not a generative model. For the basic principles of autoencoders, please read the article linked above.

  • Given a prior distribution p(z) over the latent variable, if the conditional distribution p(x | z) can be learned, then samples can be drawn from the joint distribution p(x, z) = p(x | z) p(z), generating new data.
  • Variational autoencoders meet this requirement.
  • A variational autoencoder also has an encoder and a decoder: the encoder receives an input x and outputs a latent variable z, and the decoder receives z and outputs a variable x' that approximates x.

  • The VAE places an explicit constraint on the distribution of the latent variable z: z is expected to conform to a preset prior distribution p(z).
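The factorization above can be tried directly: draw z from the prior, then draw x from the conditional, which is known as ancestral sampling. A toy sketch, assuming a linear-Gaussian conditional purely for illustration:

```python
import torch

# Toy ancestral sampling from p(x, z) = p(x | z) p(z):
# first draw z from the prior, then draw x from the conditional.
torch.manual_seed(0)

z = torch.randn(5, 2)                  # z ~ p(z) = N(0, I): 5 samples, latent dim 2
W = torch.randn(2, 3)                  # toy "decoder" parameters (assumed, for illustration)
x = z @ W + 0.1 * torch.randn(5, 3)    # x ~ p(x | z): a linear-Gaussian conditional

print(x.shape)  # torch.Size([5, 3])
```

In a VAE, the hand-written conditional above is replaced by a learned neural decoder, but the sampling recipe is the same.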

2. The specific process of a variational autoencoder

(1) Specific process 

  • In a variational autoencoder, the encoder and decoder are not directly connected in the computation graph: the encoder's output is not passed straight to the decoder's input. Instead, the encoder outputs the parameters (mean and standard deviation) of a Gaussian distribution, and the decoder's input z is sampled from that distribution.
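The point above can be made concrete with a minimal end-to-end forward pass. The layer sizes and latent dimension here are assumptions for illustration, not the post's actual architecture:

```python
import torch
import torch.nn as nn

# Minimal VAE forward pass: the encoder output is NOT fed to the decoder
# directly; z is sampled from the Gaussian the encoder parameterizes.
encoder = nn.Linear(784, 40)   # outputs [mu | logvar], latent dim 20 (assumed sizes)
decoder = nn.Linear(20, 784)

x = torch.rand(8, 784)                                    # a batch of flattened inputs
mu, logvar = encoder(x).chunk(2, dim=1)                   # split into mean and log-variance
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # sample z, not a direct pass-through
x_hat = torch.sigmoid(decoder(z))                         # reconstruction x'

print(x_hat.shape)  # torch.Size([8, 784])
```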

(2) Important details

3. Variational autoencoder model structure 

(1) Encoder

        Assuming batchSize = b, the encoder produces a mean and a standard deviation for each sample; the encoder's computation is shown in the figure below.
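A minimal encoder sketch along these lines; the layer sizes, latent dimension, and names are assumptions, and log-variance is predicted rather than the standard deviation itself (a common choice that keeps the output unconstrained):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a flattened input x to the mean and log-variance of q(z | x)."""
    def __init__(self, in_dim=784, hidden_dim=256, latent_dim=20):
        super().__init__()
        self.fc = nn.Linear(in_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = torch.relu(self.fc(x))
        return self.fc_mu(h), self.fc_logvar(h)

enc = Encoder()
x = torch.randn(8, 784)           # batchSize b = 8
mu, logvar = enc(x)               # one mean and one log-variance vector per sample
print(mu.shape, logvar.shape)     # torch.Size([8, 20]) torch.Size([8, 20])
```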

       

(2) Decoder

        The mean and standard deviation output by the encoder parameterize a Gaussian distribution; the decoder takes as input a latent variable z randomly sampled from that Gaussian.
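A matching decoder sketch (again, the sizes and names are assumptions); the sigmoid output suits inputs normalized to [0, 1], such as pixel intensities:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps a latent sample z back to a reconstruction x' of the input."""
    def __init__(self, latent_dim=20, hidden_dim=256, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
            nn.Sigmoid(),   # squashes outputs into [0, 1]
        )

    def forward(self, z):
        return self.net(z)

dec = Decoder()
z = torch.randn(8, 20)      # z sampled from the latent Gaussian
x_hat = dec(z)
print(x_hat.shape)          # torch.Size([8, 784])
```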

4. Reparameterization Trick

        Note: the encoder and decoder of a variational autoencoder are not directly connected in the computation graph, and the random sampling step between them is not differentiable, so gradients cannot flow from the decoder back to the encoder. To solve this problem, a differentiable substitute is used, known as the Reparameterization Trick.

        The Reparameterization Trick works as follows: instead of sampling z directly from N(mu, sigma^2), sample eps from N(0, I) and compute z = mu + sigma * eps. The randomness is isolated in eps, so z becomes a deterministic, differentiable function of mu and sigma.
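A minimal PyTorch sketch of the trick (the helper name and shapes are assumptions); the key point is that gradients reach the encoder's outputs despite the sampling step:

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, with eps ~ N(0, I).
    The randomness lives in eps, so gradients flow through mu and logvar."""
    std = torch.exp(0.5 * logvar)   # logvar = log(sigma^2)  ->  sigma
    eps = torch.randn_like(std)
    return mu + std * eps

mu = torch.zeros(8, 20)
logvar = torch.zeros(8, 20, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()                  # backprop succeeds through the sampled z
print(z.shape)                      # torch.Size([8, 20])
```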

5. Variational autoencoder loss function
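The standard VAE loss combines a reconstruction term with the KL divergence between the approximate posterior q(z | x) and the prior N(0, I); for diagonal Gaussians the KL term has a closed form. A sketch, assuming a binary-cross-entropy reconstruction term (appropriate when inputs are in [0, 1]):

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar):
    """Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)).
    For diagonal Gaussians the KL term has the closed form
    -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)."""
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Dummy tensors just to exercise the function (shapes are assumptions).
x = torch.rand(8, 784)
x_hat = torch.sigmoid(torch.randn(8, 784))
mu = torch.zeros(8, 20)
logvar = torch.zeros(8, 20)    # mu = 0, logvar = 0  =>  KL term is exactly 0
loss = vae_loss(x_hat, x, mu, logvar)
print(loss.item() >= 0)        # True
```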

 

6. VAE code implementation 

GitHub code implementation: GitHub - KeepTryingTo/Pytorch-GAN: The process of using Pytorch to implement GAN



Origin blog.csdn.net/Keep_Trying_Go/article/details/130654962