Understanding the GAN Generator from the Perspective of Uncertainty

1. Generative Network

Most of the classification and regression networks we usually encounter are discriminative. A discriminative network is very clear about the answer it wants to give, and that answer is not "creative".
Generative networks, by contrast, can create something new.

2. Network as Generator

When the input is x together with a sample drawn from a simple distribution (there are many ways to combine the two, such as concatenation), the output is no longer a fixed value (not simply 0 or 1). Why is that? Because once we have a known, simple distribution, we can sample from it, which amounts to feeding the network a random input. Even though x is fixed, the output is no longer a single specific value; instead it forms a complex distribution that depends on the distribution we sample from.
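To make this concrete, here is a minimal PyTorch sketch. The layer sizes, the names, and the choice of concatenation are illustrative assumptions, not the only way to combine x and z:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, chosen only for illustration.
X_DIM, Z_DIM, OUT_DIM = 8, 4, 2

class Generator(nn.Module):
    """Maps a condition x plus a random sample z to an output y."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(X_DIM + Z_DIM, 32),
            nn.ReLU(),
            nn.Linear(32, OUT_DIM),
        )

    def forward(self, x, z):
        # One common way to combine them: concatenate x and z along the
        # feature dimension and feed the joint vector to the network.
        return self.net(torch.cat([x, z], dim=-1))

g = Generator()
x = torch.randn(1, X_DIM)   # a fixed input x
z = torch.randn(1, Z_DIM)   # a sample from a simple (normal) distribution
y = g(x, z)                 # the output depends on both x and the random z
```

Because z is resampled on every call, repeated calls with the same x trace out a distribution over outputs rather than one fixed value.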

3. Why a Distribution?

Why would we sometimes want the output to be a distribution rather than a single definite value? Because sometimes we don't want the network to be one-sided, seeing everything as black or white, and this matters even more for creative tasks. Suppose we predict whether it will rain tomorrow: after adding a sample from a simple distribution, the network can produce in-between outcomes such as light rain or sleet rather than a hard "rain" or "no rain". This is uncertainty. Uncertainty increases the variety, or creativity, of the output.
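The variety is easy to see in a tiny self-contained sketch (the sizes and the untrained linear "generator" here are purely illustrative): for one fixed x, each fresh sample of z gives a different output.

```python
import torch
import torch.nn as nn

# Hypothetical toy "generator": 1 feature for x, 3 noise dims, 1 output.
g = nn.Linear(1 + 3, 1)

x = torch.tensor([[0.5]])             # the same fixed input every time
for _ in range(5):
    z = torch.randn(1, 3)             # a fresh sample from a simple distribution
    y = g(torch.cat([x, z], dim=-1))  # same x, different z -> different output
    print(y.item())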

4. Unconditional Generation

Unconditional, as the name suggests, means generating without the conditioning input. When we remove the input x, we call it unconditional generation. The only input is then a low-dimensional vector sampled from a simple distribution, often a normal distribution (it does not have to be normal, but it must be simple enough to sample from, so that we can feed the network random inputs), and the output is a higher-dimensional vector (such as a picture).
It is worth noting that "high-dimensional" and "low-dimensional" here refer only to the number of entries in the vectors, not to dimensions in any deeper sense: for example, the input might be a 12x1 vector and the output a 1024x1 vector, which is then reshaped into a picture. The numbers 12 and 1024 are just vector lengths, so don't let this confusing wording trip you up.
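A minimal sketch of this unconditional setup, assuming a 12-dimensional latent vector and a 32x32 single-channel picture as in the numbers above (the hidden width and activations are arbitrary choices):

```python
import torch
import torch.nn as nn

# Hypothetical sizes matching the text: a 12-dim latent vector is mapped
# to 1024 values and reshaped into a 32x32 "picture".
Z_DIM, OUT_PIXELS = 12, 1024

unconditional_g = nn.Sequential(
    nn.Linear(Z_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, OUT_PIXELS),
    nn.Tanh(),                       # pixel values in [-1, 1]
)

z = torch.randn(1, Z_DIM)            # the only input: a sample from N(0, I)
flat = unconditional_g(z)            # shape (1, 1024)
image = flat.view(1, 1, 32, 32)      # reshape the 1024 values into a 32x32 image
```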

Conclusion

At this point you know how the data is generated, but to understand how convincing fake data is produced, we still need to look at the discriminator (Discriminator) and the adversarial training mechanism.

Origin blog.csdn.net/xiufan1/article/details/128149150