VAE (Variational Autoencoder): quick notes

foreword

I run into VAE often but keep forgetting the details, so I might as well start an article and add small notes to it over time.
VAE -> VQ-VAE: the main addition is Vector Quantization.

For VQ-VAE, see the separate post on VQ-VAE in the generative-models series.

This article will be updated continuously…

Su Jianlin's article Variational Autoencoder VAE: So this is what it is | with open source code is well written; take a good look at it when you have time.


theory

training phase

Put simply: feed a sample (an image) into the encoder, let it extract features, and from those features learn two quantities: a mean and a variance.
feature = encoder(img)
mu, var = w_mu(feature), w_var(feature)
The latent variable z can then be sampled from the mean and variance (the reparameterization trick).
eps = torch.randn_like(mu)  # standard normal noise
z = mu + var ** 0.5 * eps

This latent variable z is then decoded back into an image.
img_generate = decoder(z)

If the dimensions do not match, add a fully connected layer to map them (the latent_mapping layer in the code below does exactly this).

inference stage

First sample a random latent variable z (pay attention to its dimension),
then feed it into the decoder.
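
For example, a minimal sketch (latent_dim is the latent size used during training; n_samples is just however many images you want to generate):

z = torch.randn(n_samples, latent_dim)  # the dimension of z must match the training setup
img_generate = decoder(z)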



the code

The source of the following code: https://zhuanlan.zhihu.com/p/151587288
The training code can be viewed at https://shenxiaohai.me/2018/10/20/pytorch-tutorial-advanced-02/
The loss is a binary cross-entropy between the reconstruction and the ground truth, plus a KL divergence loss.
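
For a Gaussian posterior $\mathcal{N}(\mu, \sigma^2)$ with diagonal covariance and a standard normal prior, the KL term has the closed form

$$\mathrm{KL} = -\frac{1}{2}\sum_{i}\left(1 + \log\sigma_i^2 - \mu_i^2 - \sigma_i^2\right)$$

which, with logvar standing for $\log\sigma^2$, is exactly what the kl_div line in the training loop below computes.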

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms


class VAE(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 256),
                                     nn.ReLU(),
                                     nn.Linear(256, 128))
        
        self.mu     = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        
        self.latent_mapping = nn.Linear(latent_dim, 128)
        
        self.decoder = nn.Sequential(nn.Linear(128, 256),
                                     nn.ReLU(),
                                     nn.Linear(256, 28 * 28))
        
        
    def encode(self, x):
        x = x.view(x.size(0), -1)
        encoder = self.encoder(x)
        mu, logvar = self.mu(encoder), self.logvar(encoder)
        return mu, logvar
        
    def sample_z(self, mu, logvar):
        # reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        eps = torch.randn_like(mu)
        return mu + eps * torch.exp(0.5 * logvar)
    
    def decode(self, z):
        latent_z = self.latent_mapping(z)
        out = self.decoder(latent_z)
        reshaped_out = torch.sigmoid(out).view(z.size(0), 1, 28, 28)
        return reshaped_out
        
    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.sample_z(mu, logvar)
        output = self.decode(z)
        # also return mu and logvar so the KL term can be computed in the training loop
        return output, mu, logvar

# hyperparameters and optimizer
num_epochs = 10
learning_rate = 1e-3
latent_dim = 20  # assumed latent size (not specified in the original snippet)

# assumed minimal setup: device, MNIST data loader and model instance
# (not defined in the original snippet, but needed to run the loop below)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dataset = datasets.MNIST(root='./data', train=True,
                         transform=transforms.ToTensor(), download=True)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
model = VAE(latent_dim).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    for i, (x, _) in enumerate(data_loader):
        # fetch a batch and run the forward pass
        x = x.to(device).view(-1, 28 * 28)
        x_predict, mu, log_var = model(x)

        # reconstruction loss plus KL divergence
        # (the KL term measures how far the approximate posterior is from the prior;
        #  see the links at the top of the article for the derivation)
        reconst_loss = F.binary_cross_entropy(x_predict.view(-1, 28 * 28), x, reduction='sum')
        kl_div = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())

        # backward pass and optimization
        loss = reconst_loss + kl_div
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
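
After training, new digits can be generated by sampling z from the standard normal prior and decoding it. A minimal sketch (the sample count of 16 and the output file name are arbitrary choices):

from torchvision.utils import save_image

with torch.no_grad():
    z = torch.randn(16, latent_dim).to(device)  # latent codes from the prior N(0, I)
    samples = model.decode(z)                   # (16, 1, 28, 28) images in [0, 1]
    save_image(samples, 'vae_samples.png', nrow=4)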

Original post: blog.csdn.net/weixin_43850253/article/details/128541132