A brief introduction to CGAN

       This article briefly summarizes my understanding of a paper I read, Conditional Generative Adversarial Nets. My reading may well contain misunderstandings, so please bear with me.

       The model is largely the same as the original GAN; the key difference is that class labels are supplied during training. When training the discriminator, each real image must be paired with its label (this article uses the MNIST dataset). The label is first encoded as a one-hot vector: for example, if the label is 0, its one-hot vector is [1,0,0,0,0,0,0,0,0,0], and so on for the other digits. The generator is conditioned the same way: a random 100-dimensional noise vector is concatenated with the one-hot label, and the combined vector is fed into the generator. The specific implementation is as shown in the figure below.
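The conditioning step described above can be sketched in a few lines. This is a minimal illustration (not the paper's actual code), assuming a 100-dimensional noise vector and 10 MNIST classes; the function name `one_hot` is my own:

```python
import numpy as np

def one_hot(label, num_classes=10):
    # Convert an integer class label into a one-hot vector,
    # e.g. label 0 -> [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    vec = np.zeros(num_classes)
    vec[label] = 1.0
    return vec

# Generator input in a CGAN: a 100-dim noise vector z concatenated
# with the 10-dim one-hot label y, giving a 110-dim conditioned input.
rng = np.random.default_rng(0)
z = rng.standard_normal(100)
y = one_hot(3)
gen_input = np.concatenate([z, y])
print(gen_input.shape)  # (110,)
```

The discriminator input is built the same way: the (flattened) image is concatenated with the one-hot label before being passed in.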

       Next, let's look at the results of running the code, comparing the output without labels and with labels.

       As can be seen, after adding labels the model can generate the specific data we ask for, unlike the original GAN, which generates random samples.
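The point above is that at sampling time the label becomes a control knob: you fix the one-hot label for the digit you want and only the noise varies. A toy sketch of this (using a hypothetical, untrained single-layer generator with random weights, just to show the mechanism):

```python
import numpy as np

# Hypothetical stand-in for a trained CGAN generator: a single linear
# layer mapping the 110-dim (noise + one-hot label) input to a flattened
# 28x28 MNIST-sized image. Real generators are deeper and trained.
rng = np.random.default_rng(42)
W = rng.standard_normal((784, 110)) * 0.01

def generate(digit, rng, noise_dim=100, num_classes=10):
    """Sample an image conditioned on the requested digit."""
    z = rng.standard_normal(noise_dim)       # fresh random noise each call
    y = np.zeros(num_classes)
    y[digit] = 1.0                           # the label selects the class
    x = np.tanh(W @ np.concatenate([z, y]))  # tanh keeps pixels in [-1, 1]
    return x.reshape(28, 28)

img = generate(7, rng)  # request a "7"; a trained model would draw one
print(img.shape)  # (28, 28)
```

With a trained generator, calling `generate` repeatedly with the same digit but different noise yields varied images of that one digit, which is exactly the behavior the unconditioned GAN cannot offer.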


Source: blog.csdn.net/qq_45710342/article/details/121674587