CycleGAN: Personal Study Notes

Question 1: The "Generated Image Pool"

The blog post at https://hardikbansal.github.io/CycleGANBlog/ describes it as follows:

Calculating the discriminator loss for each generated image would be computationally prohibitive. To speed up training we store a collection of previously generated images for each domain and to use only one of these images for calculating the error. First, fill the image_pool one by one until its full and after that randomly replace an image from the pool and store the latest one and use the replaced image for training in that iteration.

Roughly, this means: if the discriminator had to compute its loss over every image the generator has ever produced, the computational cost would be prohibitive (too slow). So a "generated pool" (gen_pool) is defined that stores the images produced at each iteration; gen_pool[i] is the fake data generated at the i-th iteration. Each time the discriminator is trained, the generated data from one iteration is drawn from the pool. This greatly speeds up discriminator training.
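Under this reading, a minimal Python sketch of such a pool could look like the following. The class name `ImagePool`, the method name `query`, and the default size of 50 are illustrative choices of mine, not taken from the blog; real CycleGAN implementations differ in detail (for instance, the official code returns the freshly generated image half of the time instead of always returning a displaced one).

```python
import random

class ImagePool:
    """Pool of previously generated (fake) images, per the description above.

    Fills one image at a time until full; after that, each new fake image
    replaces a randomly chosen stored image, and the displaced (older)
    image is the one handed back for that iteration's discriminator update.
    """
    def __init__(self, pool_size=50):
        self.pool_size = pool_size
        self.images = []

    def query(self, image):
        # Phase 1: pool not yet full -- store the new image and train on it.
        if len(self.images) < self.pool_size:
            self.images.append(image)
            return image
        # Phase 2: pool is full -- swap the new image into a random slot
        # and return the older image that it displaced.
        idx = random.randrange(self.pool_size)
        old_image = self.images[idx]
        self.images[idx] = image
        return old_image
```

In a training loop one would then call something like `fake_for_D = pool.query(fake_B)` and compute the discriminator loss on `fake_for_D` rather than on `fake_B` directly; the generator update can still use the fresh `fake_B`.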

However, there is a different interpretation on Zhihu, based on the paper https://arxiv.org/pdf/1612.07828.pdf (Shrivastava et al., "Learning from Simulated and Unsupervised Images through Adversarial Training"), which writes:

2.3. Updating Discriminator using a History of Refined Images

Another problem of adversarial training is that the discriminator network only focuses on the latest refined images. This lack of memory may cause (i) divergence of the adversarial training, and (ii) the refiner network re-introducing the artifacts that the discriminator has forgotten about. Any refined image generated by the refiner network at any time during the entire training procedure is a 'fake' image for the discriminator. Hence, the discriminator should be able to classify all these images as fake. Based on this observation, we introduce a method to improve the stability of adversarial training by updating the discriminator using a history of refined images, rather than only the ones in the current mini-batch. We slightly modify Algorithm 1 to have a buffer of refined images generated by previous networks. Let B be the size of the buffer and b be the mini-batch size used in Algorithm 1. At each iteration of discriminator training, we compute the discriminator loss function by sampling b/2 images from the current refiner network, and sampling an additional b/2 images from the buffer to update parameters φ. We keep the size of the buffer, B, fixed. After each training iteration, we randomly replace b/2 samples in the buffer with the newly generated refined images. This procedure is illustrated in Figure 4. In contrast to our approach, Salimans et al. [32] used a running average of the model parameters to stabilize the training. Note that these two approaches are complementary and can be used together.

What this paper is saying: the discriminator should not be updated only on the latest batch of generated images; it should also remain able to classify earlier generated images as fake, so earlier generated images should also be sampled when training the discriminator. Concretely, each time this paper trains the discriminator, half of the fake training data comes from previously generated images and half from the latest batch of generated images.
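For comparison, here is a hedged Python sketch of this history-buffer scheme. The names `HistoryBuffer` and `get_fake_batch` are mine, not from the paper, and images are treated as opaque objects (e.g. tensors):

```python
import random

class HistoryBuffer:
    """Fixed-size history of refined images, following the quoted procedure:
    sample b/2 fresh + b/2 historical images for each discriminator update,
    then randomly overwrite b/2 buffer slots with the newest images.
    """
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size  # B in the paper
        self.images = []

    def get_fake_batch(self, current_batch):
        """Assemble the fake half of a discriminator mini-batch."""
        half = len(current_batch) // 2  # b/2
        fresh = current_batch[:half]
        if len(self.images) >= half:
            # Sample b/2 historical images without replacement.
            history = random.sample(self.images, half)
        else:
            # Early in training the buffer is still sparse; fall back
            # to using the rest of the current batch.
            history = current_batch[half:]
        self._store(current_batch[:half])
        return fresh + history

    def _store(self, new_images):
        # Append while the buffer is filling; once full, overwrite
        # randomly chosen entries so the size B stays fixed.
        for img in new_images:
            if len(self.images) < self.buffer_size:
                self.images.append(img)
            else:
                self.images[random.randrange(self.buffer_size)] = img
```

The random overwrite in `_store` keeps B fixed, as the paper specifies, while still letting old fakes survive in the buffer for many iterations.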

Both methods have their own rationale, but I lean toward the first. The premise in CycleGAN is that, without the "Generated Image Pool", training the discriminator would require taking the generated images from all iterations as negative data. In the second paper, by contrast, without its improvement the discriminator would be trained only on the data generated in the most recent iteration. The two viewpoints start from different premises.

I have only just started learning CycleGAN.
Comments and discussion are welcome.
To be continued, haha.

Reposted from blog.csdn.net/feng_jiakai/article/details/80873075