See also this blog post on the underlying principle: "Understanding Generative Adversarial Networks (GANs) in One Article".
Here I only record some further thoughts on the design ideas.
The first thing to understand:
What is the ultimate goal? To make the samples output by the generator look real.
It is easy to see that the generator's output samples are not real at first; they are fake. But training can push these fake samples to the point where the discriminator can no longer tell, i.e., it judges a sample real or fake with probability 0.5.
So training drives the samples toward this real/fake balance, not toward one-hundred-percent "real" samples. This also explains why the fake samples are labeled 1.
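A minimal numeric sketch of this labeling trick (pure NumPy; the discriminator outputs in `d_fake` are made-up numbers, not from any trained model): labeling fake samples 1 means the generator's loss is the binary cross-entropy of D(G(z)) against the target 1, so the loss shrinks exactly as the discriminator is fooled.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy for probabilities in (0, 1)."""
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# hypothetical discriminator outputs D(G(z)) for three fake samples
d_fake = np.array([0.1, 0.5, 0.9])

# generator loss: the fake samples are labeled 1 ("real")
g_loss = bce(d_fake, 1.0)
print(g_loss)  # the closer D(G(z)) is to 1, the smaller the loss
```

Note that at the 0.5 equilibrium described above, each sample's loss is -ln 0.5 ≈ 0.693.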
The most ingenious part of the design
The generator is followed by a discriminator, whose role is to distinguish real data from fake. The error the discriminator produces on the generated "fake data" is backpropagated to the generator, so that the generator learns to generate more convincing fake data.
The fake samples are labeled 1 ("real"), and the next round of fake data then tests how convincing the generator has become after training.
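To make that error path concrete, here is a hand-derived sketch (pure NumPy; the tiny linear generator, logistic discriminator, and all constants are invented for illustration). The discriminator's error on a fake batch labeled 1 is backpropagated through the fixed discriminator into the generator's parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# discriminator D(x) = sigmoid(w_d * x + b_d); toy constants, held fixed
w_d, b_d = 2.0, -1.0
# generator G(z) = w_g * z; only this parameter gets updated
w_g = 0.1

z = rng.normal(size=8)            # noise batch
x_fake = w_g * z                  # generator output
p = sigmoid(w_d * x_fake + b_d)   # D(G(z))

# generator loss: BCE against label 1  ->  L = -log(p)
# chain rule: dL/dw_g = dL/dp * dp/d(logit) * d(logit)/dw_g = -(1 - p) * w_d * z
grad_w_g = np.mean(-(1 - p) * w_d * z)

lr = 0.1
w_g -= lr * grad_w_g              # the generator moves in response to D's error
# w_d, b_d are deliberately untouched: the discriminator stays frozen
print(w_g, (w_d, b_d))
```

The gradient reaches `w_g` only by flowing through the discriminator, which is exactly the "error returned to the generator" described above.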
The most ingenious part is how the model uses the error!
Why doesn't repeatedly feeding fake data labeled as real into training push the generator toward producing data that looks even more like the fake data?
Because the discriminator is already well trained, and its parameters are not changed while the generator is trained; that is why the discriminator keeps its ability to distinguish real from fake. (In standard GAN training the two networks are actually updated alternately, so the discriminator keeps improving too; the key point is that it is frozen during the generator's update.)
This is a superb application of functional decoupling: the generator is responsible only for generating data that is as realistic as possible, while the discriminator resolutely guards the gate, ensuring the final output is as close to real as possible.
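Putting the decoupling together, a complete toy training loop might look like this (pure NumPy, 1-D data; the Gaussian "real" distribution, the linear generator, the logistic discriminator, and the learning rate are all invented for illustration). The discriminator step labels real data 1 and fake data 0; the generator step labels fake data 1 and leaves the discriminator's parameters untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.5, 0.0
lr, batch = 0.05, 64

for step in range(500):
    # --- discriminator step: real labeled 1, fake labeled 0 ---
    x_real = rng.normal(3.0, 0.5, batch)      # the "true" data distribution
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_r = sigmoid(w * x_real + c)
    p_f = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - p_r) * x_real + p_f * x_fake)
    grad_c = np.mean(-(1 - p_r) + p_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator step: fake labeled 1, discriminator frozen ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_f = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - p_f) * w * z)
    grad_b = np.mean(-(1 - p_f) * w)
    a -= lr * grad_a
    b -= lr * grad_b                           # w, c untouched in this phase

print(np.mean(a * rng.normal(size=1000) + b))  # fake mean should drift toward 3
```

The generator never sees real data directly; it is steered toward the real distribution purely by the discriminator's error, which is the division of labor described above.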