The difference between ACGAN and CGAN

The differences between ACGAN and CGAN are as follows:


1 As in CGAN, the generator's input is the noise vector mixed with the label;
2 The difference is that the discriminator's input no longer mixes in the label. Instead, the label is used as a target for the discriminator's output, so the discriminator also learns to classify the image;
3 Another difference is that the generator and discriminator are no longer the fully connected networks of CGAN but deep convolutional networks (a change first introduced by DCGAN). Convolution extracts image features better, so the edges of images generated by ACGAN are more continuous and look more realistic (see the convolutional sketch after the snippet below). The generator model is built as follows, exactly the same as in CGAN:


        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        # Embed the integer label into a 100-dimensional vector matching the noise
        label_embedding = Flatten()(Embedding(self.num_classes, 100)(label))

        # Condition the noise on the label by element-wise multiplication
        model_input = multiply([noise, label_embedding])
        img = model(model_input)

        # The generator maps (noise, label) to an image
        return Model([noise, label], img)
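The `model` applied to `model_input` above is the convolutional body mentioned in point 3. As a minimal sketch of what it might look like, in the style of the Keras-GAN ACGAN implementation (layer sizes here are assumptions for 28x28 single-channel MNIST images, with `self.channels` assumed to be 1):

        model = Sequential()
        model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
        model.add(Reshape((7, 7, 128)))   # start from a small 7x7 feature map
        model.add(BatchNormalization(momentum=0.8))
        model.add(UpSampling2D())         # 7x7 -> 14x14
        model.add(Conv2D(128, kernel_size=3, padding="same", activation="relu"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(UpSampling2D())         # 14x14 -> 28x28
        model.add(Conv2D(64, kernel_size=3, padding="same", activation="relu"))
        model.add(BatchNormalization(momentum=0.8))
        # Output one channel in [-1, 1] via tanh
        model.add(Conv2D(self.channels, kernel_size=3, padding="same", activation="tanh"))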

The discriminator model is as follows. The input is still img, but the output contains two parts:
1 validity: the result of judging whether the image is real or forged;
2 label: activated with softmax, a 10-dimensional output indicating which digit the image belongs to.

        img = Input(shape=self.img_shape)

        # Extract feature representation
        features = model(img)

        # Determine validity and label of the image
        validity = Dense(1, activation="sigmoid")(features)
        label = Dense(self.num_classes, activation="softmax")(features)

        return Model(img, [validity, label])
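Because the discriminator has two output heads, it is compiled with two losses: binary cross-entropy for validity and sparse categorical cross-entropy for the integer class label. A minimal sketch, assuming an `optimizer` defined elsewhere:

        losses = ['binary_crossentropy', 'sparse_categorical_crossentropy']
        self.discriminator.compile(loss=losses,
                                   optimizer=optimizer,
                                   metrics=['accuracy'])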

Therefore, when training either the discriminator or the generator, the class targets img_labels and sampled_labels are provided along with the real/fake targets:

            # Train the discriminator: real images paired with their true labels,
            # generated images paired with the labels they were conditioned on
            d_loss_real = self.discriminator.train_on_batch(imgs, [valid, img_labels])
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, [fake, sampled_labels])
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------

            # Train the generator: it is rewarded when its fakes are scored
            # valid and classified as the sampled labels
            g_loss = self.combined.train_on_batch([noise, sampled_labels], [valid, sampled_labels])
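Here `self.combined` is the generator stacked on a frozen discriminator. For reference, the batch tensors used above are typically prepared like this; a sketch assuming an MNIST setup, where `X_train`, `y_train`, and `batch_size` are assumptions not shown in the excerpt:

            valid = np.ones((batch_size, 1))   # adversarial target for real images
            fake = np.zeros((batch_size, 1))   # adversarial target for generated images

            # Sample a batch of real images together with their true labels
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs, img_labels = X_train[idx], y_train[idx]

            # Generate fakes conditioned on randomly sampled class labels
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            sampled_labels = np.random.randint(0, 10, (batch_size, 1))
            gen_imgs = self.generator.predict([noise, sampled_labels])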
