DCGAN: Generating Anime Avatars

Personal blog: http://www.chenjianqu.com/

Original post: http://www.chenjianqu.com/show-55.html

The previous post, "Generative Adversarial Networks (GAN): Theory and Implementation", covered the theory behind GANs and implemented the original GAN algorithm. This post collects my notes from reading the DCGAN paper and, building on open-source TensorFlow code found online, implements a DCGAN that generates anime avatars.

Paper Notes

Paper: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

1. What problems does it address?

Applying CNNs to unsupervised learning.

The training instability of convolutional GANs.

Visualizing what GANs learn.

2. What methods does it use?

Proposes a set of techniques for stably training convolutional GANs, calling the resulting models DCGANs.

Uses the trained discriminator for image classification tasks.

Visualizes the GAN with guided backpropagation.

Shows that the generator's latent space supports meaningful vector arithmetic (a toy sketch follows).
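
For intuition, the arithmetic operates on the latent codes z rather than on pixels. A toy sketch (the z vectors here are random stand-ins for the averaged exemplar codes the paper uses):

import numpy as np
z_smiling_woman = np.random.uniform(-1, 1, 100)  # stand-in for an averaged latent code
z_neutral_woman = np.random.uniform(-1, 1, 100)
z_neutral_man = np.random.uniform(-1, 1, 100)
z_new = z_smiling_woman - z_neutral_woman + z_neutral_man
# per the paper, decoding z_new with the generator yields a smiling man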

3. How well does it work?

Both the generator and discriminator of a DCGAN learn a hierarchy of representations, from object parts up to whole scenes.

Using the GAN as a feature extractor: a DCGAN is trained on Imagenet-1k, and feeding its discriminator features to an SVM for CIFAR-10 classification reaches 82.8% accuracy, beating the K-means-based baselines.

The vector-arithmetic and visualization experiments also give good results.

4. Remaining problems

Some instability remains: during long training runs, a subset of filters sometimes collapses into a single oscillating mode.

Paper details:

Techniques for stable DCGAN training (a short code sketch follows the list):

1. Replace pooling layers with stride-2 convolutions in the discriminator and with fractional-strided (transposed) convolutions in the generator.

2. Use batch normalization in both the generator and the discriminator.

3. Remove fully connected hidden layers.

4. In the generator, use ReLU activations in all layers except the output layer, which uses tanh.

5. In the discriminator, use LeakyReLU activations in all layers.
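
As a quick illustration, a minimal TF-slim sketch of guidelines 1, 2, 4 and 5 (layer widths and shapes here are illustrative only; the full implementation appears later in this post):

import tensorflow as tf
slim = tf.contrib.slim

# discriminator side: stride-2 convolution instead of pooling (1), batch norm (2), LeakyReLU (5)
images = tf.placeholder(tf.float32, [None, 64, 64, 3])
net = slim.conv2d(images, 64, kernel_size=4, stride=2,
                  normalizer_fn=slim.batch_norm,
                  activation_fn=tf.nn.leaky_relu)

# generator side: fractional-strided (transposed) convolution for upsampling (1), ReLU (4)
z = tf.placeholder(tf.float32, [None, 1, 1, 100])
up = slim.conv2d_transpose(z, 512, kernel_size=4, stride=2,
                           normalizer_fn=slim.batch_norm,
                           activation_fn=tf.nn.relu)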

Datasets used:

Large-scale Scene Understanding (LSUN)

Imagenet-1k

A face dataset the authors scraped from the web themselves

Training details:

    Scale image pixel values to [-1, 1]. Train with mini-batch SGD with batch_size=128. Initialize all weights from a zero-mean Gaussian with standard deviation 0.02. Set the slope of the LeakyReLU leak to 0.2. Use the Adam optimizer with learning rate 0.0002 and beta1 = 0.5 (the paper found the suggested value of 0.9 caused oscillation and instability).
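
Expressed in TF 1.x code, those settings look roughly like this (a sketch only; note that the implementation below keeps slim's default initializers and uses different learning rates):

import tensorflow as tf
slim = tf.contrib.slim

weights_init = tf.random_normal_initializer(mean=0.0, stddev=0.02)  # zero-mean Gaussian, stddev 0.02
with slim.arg_scope([slim.conv2d, slim.conv2d_transpose],
                    weights_initializer=weights_init):
    pass  # build the generator and discriminator here
optimizer = tf.train.AdamOptimizer(learning_rate=0.0002, beta1=0.5)  # the paper's values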

    The generator network used for the LSUN dataset is shown below:

[Figure: the DCGAN generator architecture from the paper, for LSUN]

Implementation

1. Dataset

Code for scraping anime-girl images is easy to find online; once the images are scraped, you can crop out the faces with an OpenCV cascade classifier. I used a ready-made dataset found online instead, which I have uploaded to Baidu Netdisk: https://pan.baidu.com/s/1MjzVS0RZ8jHkOspgX2yFNw (extraction code: nj7l).
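
For reference, a hedged sketch of the cropping step ('lbpcascade_animeface.xml' is a third-party anime-face cascade you would download separately; all paths and parameters here are illustrative):

import cv2
import glob
import os

cascade = cv2.CascadeClassifier('lbpcascade_animeface.xml')
os.makedirs('faces', exist_ok=True)
for path in glob.glob('raw_images/*.jpg'):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # detect anime faces; these parameters are a reasonable starting point, not tuned
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(48, 48))
    for i, (x, y, w, h) in enumerate(boxes):
        face = cv2.resize(img[y:y+h, x:x+w], (96, 96))
        name = '%s_%d.jpg' % (os.path.splitext(os.path.basename(path))[0], i)
        cv2.imwrite(os.path.join('faces', name), face)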

    2. Define the DCGAN network and wrap it in a class.

The overall network looks like this:

[Figure: overall DCGAN structure]

    The two discriminator copies share their variables.

    The generator network:

[Figure: generator network]

    The discriminator network:

[Figure: discriminator network]

    The code:

import math
import tensorflow as tf
slim = tf.contrib.slim
class DCGAN(object):
    def __init__(self, 
                 is_training,  # whether the model is being trained
                 generator_depth=64,  # output depth of the generator's last transposed-convolution layer
                 discriminator_depth=64,  # depth of the discriminator's first convolution layer
                 final_size=32,  # size of the images the generator outputs
                 num_outputs=3,  # number of channels of the generated images
                 fused_batch_norm=False  # whether batch norm uses the fused implementation
                ):
        self._is_training = is_training
        self._generator_depth = generator_depth
        self._discirminator_depth = discriminator_depth
        self._final_size = final_size
        self._num_outputs = num_outputs
        self._fused_batch_norm = fused_batch_norm
        
    # Build the discriminator
    def discriminator(self, 
                      inputs,  # a batch of input images
                      depth=64,  # number of channels in the first convolution layer
                      is_training=True,
                      reuse=None,  # whether to reuse the variables
                      scope='Discriminator',  # variable scope name
                      fused_batch_norm=False  # whether to use the fused batch norm implementation
                     ):
        normalizer_fn = slim.batch_norm
        normalizer_fn_args = {'is_training': is_training,'zero_debias_moving_mean': True,'fused': fused_batch_norm}
        
        height = inputs.get_shape().as_list()[1]
        
        end_points = {}
        with tf.variable_scope(scope, values=[inputs], reuse=reuse) as scope:
            with slim.arg_scope([normalizer_fn], **normalizer_fn_args):
                # downsample with stride=2, kernel_size=4 convolutions; the default activation is leaky_relu
                with slim.arg_scope([slim.conv2d], stride=2, kernel_size=4,activation_fn=tf.nn.leaky_relu):
                    net = inputs
                    for i in range(int(math.log(height, 2))):  # downsample log2(height) times in total
                        scope = 'conv%i' % (i+1)
                        current_depth = depth * 2**i  # double the depth at each layer
                        normalizer_fn_ = None if i == 0 else normalizer_fn
                        net = slim.conv2d(net, num_outputs=current_depth, normalizer_fn=normalizer_fn_,scope=scope)
                        end_points[scope] = net  # record the tensor in the endpoints dict
                    # a 1x1 convolution replaces the fully connected output layer
                    logits = slim.conv2d(net, 1, kernel_size=1, stride=1,padding='VALID', normalizer_fn=None,activation_fn=None)
                    logits = tf.reshape(logits, [-1, 1])
                    end_points['logits'] = logits  # record the tensor in the endpoints dict
                    return logits, end_points
    # Build the generator
    def generator(self,
                  inputs,  # input latent vectors
                  depth=64,  # channel count of the final transposed convolution
                  final_size=32,  # size of the generated images
                  num_outputs=3,  # number of output channels
                  is_training=True,  # whether the model is being trained
                  reuse=None,  # whether to reuse the variables
                  scope='Generator',  # variable scope of the generator
                  fused_batch_norm=False  # if True, use the faster, fused implementation of batch norm
                 ):
        normalizer_fn = slim.batch_norm
        normalizer_fn_args = { 'is_training': is_training,'zero_debias_moving_mean': True, 'fused': fused_batch_norm}
        
        # final_size must be a power of 2
        if math.log(final_size, 2) != int(math.log(final_size, 2)):
            raise ValueError("'final_size' (%i) must be a power of 2." % final_size)
        
        # final_size must be at least 8
        if final_size < 8: raise ValueError("'final_size' (%i) must be at least 8." % final_size)
            
        end_points = {}
        num_layers = int(math.log(final_size, 2)) - 1  # number of transposed-conv layers, e.g. log2(64) - 1 = 5 for 64x64 output
        
        with tf.variable_scope(scope, values=[inputs], reuse=reuse) as scope:
            with slim.arg_scope([normalizer_fn], **normalizer_fn_args):  # shared arg scope for the normalizer
                with slim.arg_scope([slim.conv2d_transpose],normalizer_fn=normalizer_fn,stride=2, kernel_size=4):  # shared arg scope for the transposed convolutions
                    net = tf.expand_dims(tf.expand_dims(inputs, 1), 1)  # expand the input from [batch, n] to [batch, 1, 1, n]
                    current_depth = depth * 2 ** (num_layers - 1)  # channel count
                    scope = 'deconv1'
                    net = slim.conv2d_transpose(net, current_depth, stride=1, padding='VALID', scope=scope)  # the first transposed convolution uses stride=1 and VALID padding
                    end_points[scope] = net  # record the tensor
                    # stacked transposed convolutions, each doubling the spatial size
                    for i in range(2, num_layers):
                        scope = 'deconv%i' % i
                        current_depth = depth * 2 ** (num_layers - i)  # halve the depth at each layer
                        net = slim.conv2d_transpose(net, current_depth, scope=scope)
                        end_points[scope] = net
                        
                    # final transposed-convolution layer
                    scope = 'deconv%i' % num_layers
                    net = slim.conv2d_transpose(net, depth, normalizer_fn=None,activation_fn=None, scope=scope)
                    end_points[scope] = net
                    
                    # a 1x1 convolution maps the features to the output image
                    scope = 'logits'
                    logits = slim.conv2d(net,
                        num_outputs,
                        normalizer_fn=None,
                        activation_fn=tf.nn.tanh,
                        kernel_size=1,
                        stride=1,
                        padding='VALID',
                        scope=scope)
                    end_points[scope] = logits
                    
                    logits.get_shape().assert_has_rank(4)
                    logits.get_shape().assert_is_compatible_with([None, final_size, final_size, num_outputs])
                    
                    return logits, end_points
                
                
    # Build the full DCGAN
    def dcgan_model(self, 
                      real_data,  # a batch of real images
                      generator_inputs,  # input latent vectors
                      generator_scope='Generator',  # generator variable scope, useful when reusing trained weights
                      discirminator_scope='Discriminator',  # discriminator variable scope
                      check_shapes=True):  # whether to check the shape of the generated tensor
        # create the generator
        with tf.variable_scope(generator_scope) as gen_scope:
            generated_data, _ = self.generator(generator_inputs, self._generator_depth, self._final_size,
                                               self._num_outputs, self._is_training)
        # create the discriminator, applied to both generated and real data with shared variables
        with tf.variable_scope(discirminator_scope) as dis_scope:
            discriminator_gen_outputs, _ = self.discriminator(
                generated_data, self._discirminator_depth, self._is_training)
        with tf.variable_scope(dis_scope, reuse=True):
            discriminator_real_outputs, _ = self.discriminator(
                real_data, self._discirminator_depth, self._is_training)
        
        # check that the generated data has the same shape as the real data
        if check_shapes:
            if not generated_data.shape.is_compatible_with(real_data.shape):
                raise ValueError('Generator output shape (%s) must be the same shape as real data (%s).'% (generated_data.shape, real_data.shape))
                
        # collect the trainable variables
        generator_variables = slim.get_trainable_variables(gen_scope)
        discriminator_variables = slim.get_trainable_variables(dis_scope)
        
        # return the tensors
        return {'generated_data': generated_data,
                'discriminator_gen_outputs': discriminator_gen_outputs,
                'discriminator_real_outputs': discriminator_real_outputs,
                'generator_variables': generator_variables,
                'discriminator_variables': discriminator_variables}
        
    
    # Inference: generate images
    def predict(self, generator_inputs):
        logits, _ = self.generator(generator_inputs, self._generator_depth, self._final_size, self._num_outputs,is_training=False)
        return logits
        
    # Discriminator loss
    def discriminator_loss(self, 
                           discriminator_real_outputs,
                           discriminator_gen_outputs,
                           label_smoothing=0.25):  # one-sided label smoothing improves training stability
        # loss on real data: -(1-s)*log(sigmoid(D(x))) - s*log(1 - sigmoid(D(x))), with s = label_smoothing
        losses_on_real = slim.losses.sigmoid_cross_entropy(
            logits=discriminator_real_outputs,
            multi_class_labels=tf.ones_like(discriminator_real_outputs),
            label_smoothing=label_smoothing)
        loss_on_real = tf.reduce_mean(losses_on_real)
        
        # loss on generated data: -log(1 - sigmoid(D(G(z))))
        losses_on_generated = slim.losses.sigmoid_cross_entropy(
            logits=discriminator_gen_outputs,
            multi_class_labels=tf.zeros_like(discriminator_gen_outputs))
        loss_on_generated = tf.reduce_mean(losses_on_generated)
        
        loss = loss_on_real + loss_on_generated
        return {'dis_loss': loss,'dis_loss_on_real': loss_on_real, 'dis_loss_on_generated': loss_on_generated}
        
    # Generator loss: -log(sigmoid(D(G(z)))), the non-saturating form
    def generator_loss(self, discriminator_gen_outputs, label_smoothing=0.0):
        losses = slim.losses.sigmoid_cross_entropy(
            logits=discriminator_gen_outputs, 
            multi_class_labels=tf.ones_like(discriminator_gen_outputs),
            label_smoothing=label_smoothing)
        loss = tf.reduce_mean(losses)
        return loss
    
    # Combined DCGAN losses
    def loss(self, discriminator_real_outputs, discriminator_gen_outputs):
        with tf.name_scope('loss'):
            dis_loss_dict = self.discriminator_loss(discriminator_real_outputs,discriminator_gen_outputs)
            gen_loss = self.generator_loss(discriminator_gen_outputs)
            dis_loss_dict.update({'gen_loss': gen_loss})
        return dis_loss_dict
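
A quick sanity check of the class (a hypothetical snippet; run it in a fresh graph):

tf.reset_default_graph()
model = DCGAN(is_training=False, final_size=64)
z = tf.placeholder(tf.float32, [None, 64])
fake_images, _ = model.generator(z, final_size=64)
print(fake_images.get_shape())  # (?, 64, 64, 3): deconv1 maps 1x1 to 4x4, then four stride-2 deconvs reach 64x64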

3. Define utility functions

import cv2
import glob
import numpy as np
import os
import tensorflow as tf

DATA_DIR='D:/CV/datasets/cartoon_face/faces_not_oom'
IMAGE_SAVE_DIR='D:/Jupyter/GAN/gan_cartoon_image_data/save_pic'
LOG_DIR='D:/Jupyter/GAN/gan_cartoon_image_data/train_log'

def read_images(image_files):
    images = []
    for image_path in image_files:
        image = cv2.imread(image_path)
        image = cv2.resize(image, (64, 64))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = (image - 127.5) / 127.5  # scale pixel values to [-1, 1]
        images.append(image)
    return np.array(images)
# Get a batch of real images and random generator input vectors
def get_next_batch(batch_size=8):
    """Get a batch set of real images and random generated inputs."""
    images_path = os.path.join(DATA_DIR,'*.jpg')
    image_files_list = glob.glob(images_path)  # match all .jpg files
    image_files_arr = np.array(image_files_list)
    selected_indices = np.random.choice(len(image_files_list), batch_size)  # sample random indices
    selected_image_files = image_files_arr[selected_indices]  # pick one batch of image paths
    images = read_images(selected_image_files)  # load the images
    generated_inputs = np.random.uniform(low=-1, high=1.0, size=[batch_size, 64])  # sample 64-dimensional uniform random vectors
    return images, generated_inputs
# Save generated images
def write_images(generated_images, images_save_dir, num_step):
    #Scale images from [-1, 1] to [0, 255].
    generated_images = ((generated_images + 1) * 127.5).astype(np.uint8)  # back to uint8 in [0, 255]
    for j, image in enumerate(generated_images[:5]):
        image_name = 'generated_step{}_{}.jpg'.format(num_step+1, j+1)
        image_path = os.path.join(images_save_dir,image_name)
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        cv2.imwrite(image_path, image)
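
A quick usage check of these helpers (assumes DATA_DIR contains .jpg images):

images, vectors = get_next_batch(batch_size=8)
print(images.shape, vectors.shape)  # (8, 64, 64, 3) (8, 64)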

4. Train the DCGAN

STEPS=30000

# placeholders
real_data = tf.placeholder(tf.float32, shape=[None, 64, 64, 3], name='real_data')
generated_inputs = tf.placeholder(tf.float32, [None, 64], name='generated_inputs')

# build the GAN
dcgan_model = DCGAN(is_training=True, final_size=64)
outputs_dict = dcgan_model.dcgan_model(real_data, generated_inputs)
generated_data = outputs_dict['generated_data']  # output tensor of the generator
generated_data_ = tf.identity(generated_data, name='generated_data')  # tf.identity copies the tensor under a fixed name so it can be looked up after restoring the graph
discriminator_gen_outputs = outputs_dict['discriminator_gen_outputs']  # discriminator outputs on generated data
discriminator_real_outputs = outputs_dict['discriminator_real_outputs']  # discriminator outputs on real data
generator_variables = outputs_dict['generator_variables']  # trainable variables of the generator
discriminator_variables = outputs_dict['discriminator_variables']  # trainable variables of the discriminator

# fetch the DCGAN losses
loss_dict = dcgan_model.loss(discriminator_real_outputs,discriminator_gen_outputs)
discriminator_loss = loss_dict['dis_loss']
discriminator_loss_on_real = loss_dict['dis_loss_on_real']
discriminator_loss_on_generated = loss_dict['dis_loss_on_generated']
generator_loss = loss_dict['gen_loss']

# log the losses as TensorBoard summaries
tf.summary.scalar('discriminator_loss', discriminator_loss)
tf.summary.scalar('discriminator_loss_on_real', discriminator_loss_on_real)
tf.summary.scalar('discriminator_loss_on_generated',discriminator_loss_on_generated)
tf.summary.scalar('generator_loss', generator_loss)
merged_summary = tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)

# optimizer for the discriminator
discriminator_optimizer = tf.train.AdamOptimizer(learning_rate=0.0004,beta1=0.5)
discriminator_train_step = discriminator_optimizer.minimize(discriminator_loss, var_list=discriminator_variables)

# optimizer for the generator
generator_optimizer = tf.train.AdamOptimizer(learning_rate=0.0001,beta1=0.5)
generator_train_step = generator_optimizer.minimize(generator_loss, var_list=generator_variables)

saver = tf.train.Saver(var_list=tf.global_variables())

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    # write the graph for TensorBoard
    writer = tf.summary.FileWriter(LOG_DIR, sess.graph)
    # a fixed batch of images and vectors for monitoring progress
    fixed_images, fixed_generated_inputs = get_next_batch()
    
    # resume training from a checkpoint if one exists
    ckpt = tf.train.get_checkpoint_state(LOG_DIR)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
    for i in range(STEPS):
        # update the discriminator once
        batch_images, batch_generated_inputs = get_next_batch()
        train_dict = {real_data: batch_images, generated_inputs: batch_generated_inputs}
        sess.run(discriminator_train_step, feed_dict=train_dict)
        # then update the generator three times
        for _ in range(3):
            batch_images, batch_generated_inputs = get_next_batch()
            train_dict = {real_data: batch_images, generated_inputs: batch_generated_inputs}
            sess.run(generator_train_step, feed_dict=train_dict)
        
        # every 200 steps, evaluate on the fixed batch so the outputs are comparable
        if (i+1) % 200 == 0:
            batch_images = fixed_images
            batch_generated_inputs = fixed_generated_inputs
        else:
            batch_images, batch_generated_inputs = get_next_batch()
        train_dict = {real_data: batch_images, generated_inputs: batch_generated_inputs}
        # fetch the summary and the generated images
        summary, generated_images = sess.run([merged_summary, generated_data], feed_dict=train_dict)
        # write the summary
        writer.add_summary(summary, i+1)
        if i % 10 == 0:
            print(i, end=' ')
        if (i+1) % 200 == 0:
            # save the model
            model_save_path = os.path.join(LOG_DIR, 'model.ckpt')
            saver.save(sess, save_path=model_save_path, global_step=i+1)
            # save sample images
            write_images(generated_images, IMAGE_SAVE_DIR, i)
    writer.close()

Training hit OOM errors and was restarted several times, so the loss curves are too messy to be worth showing. Here is the output after an unknown number of training steps:

[Figure: generated anime avatars]
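
With a saved checkpoint, new avatars can be generated without the training script. A hedged sketch (it assumes LOG_DIR from the utility code above; the tensor names 'generated_inputs:0' and 'generated_data:0' come from the placeholder and the tf.identity op defined in the training script, and batch norm still runs in training mode because the graph was built with is_training=True):

import cv2
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state(LOG_DIR)
    saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
    saver.restore(sess, ckpt.model_checkpoint_path)
    graph = tf.get_default_graph()
    z = graph.get_tensor_by_name('generated_inputs:0')
    fake = graph.get_tensor_by_name('generated_data:0')
    noise = np.random.uniform(-1, 1, size=[8, 64])
    images = sess.run(fake, feed_dict={z: noise})
    images = ((images + 1) * 127.5).astype(np.uint8)  # scale back to [0, 255]
    for j, img in enumerate(images):
        cv2.imwrite('sample_%d.jpg' % j, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))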

References

[1] Alec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 2016-01-06.

[2] 公输睚信. TensorFlow 从零开始实现深度卷积生成对抗网络(DCGAN). https://www.jianshu.com/p/4d8473070ae7. 2018-05-27.
