ElitesAI · Dive into Deep Learning, PyTorch Edition (Fourth Check-in, Task 10)

GANs (Generative Adversarial Networks)

Generative adversarial networks (GANs) are composed of two deep networks: the generator and the discriminator.
The generator tries to generate images as close to the true images as possible in order to fool the discriminator, by maximizing the cross-entropy loss, i.e., $\max \log D(x')$.
The discriminator tries to distinguish the generated images from the true images, by minimizing the cross-entropy loss, i.e., $\min\, -y \log D(x) - (1-y)\log(1 - D(x))$.
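
For reference, these two objectives combine into the standard GAN minimax game, where $x' = G(z)$ is the generated sample and $y \in \{0,1\}$ labels real versus generated inputs in the discriminator loss:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$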

Generator

from torch import nn

class net_G(nn.Module):
    def __init__(self):
        super(net_G, self).__init__()
        # A toy generator: a single linear layer mapping 2-D noise to 2-D samples
        self.model = nn.Sequential(
            nn.Linear(2, 2),
        )
        self.initialize_weights()

    def forward(self, x):
        x = self.model(x)
        return x

    def initialize_weights(self):
        # Initialize weights from N(0, 0.02) and biases to zero
        for m in self.modules():
            if isinstance(m, nn.Linear):
                m.weight.data.normal_(0, 0.02)
                m.bias.data.zero_()
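
As a quick smoke test (not from the original post; the batch size and variable names are illustrative), the generator can be driven with 2-D Gaussian noise:

import torch

G = net_G()
z = torch.randn(16, 2)    # a batch of 16 two-dimensional noise vectors
fake_x = G(z)             # generated samples
print(fake_x.shape)       # torch.Size([16, 2])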

Discriminator

class net_D(nn.Module):
    def __init__(self):
        super(net_D, self).__init__()
        # A small MLP that maps a 2-D sample to a real/fake probability
        self.model = nn.Sequential(
            nn.Linear(2, 5),
            nn.Tanh(),
            nn.Linear(5, 3),
            nn.Tanh(),
            nn.Linear(3, 1),
            nn.Sigmoid()
        )
        self.initialize_weights()

    def forward(self, x):
        x = self.model(x)
        return x

    def initialize_weights(self):
        # Initialize weights from N(0, 0.02) and biases to zero
        for m in self.modules():
            if isinstance(m, nn.Linear):
                m.weight.data.normal_(0, 0.02)
                m.bias.data.zero_()
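
The post does not include the training loop, so here is a minimal sketch of one training step for this toy pair, assuming real samples X of shape (batch, 2); the Adam optimizers, learning rates, and helper names are illustrative choices, not from the original:

import torch
from torch import nn

G, D = net_G(), net_D()
loss = nn.BCELoss()
opt_D = torch.optim.Adam(D.parameters(), lr=0.05)
opt_G = torch.optim.Adam(G.parameters(), lr=0.005)

def train_step(X):
    batch_size = X.shape[0]
    ones = torch.ones(batch_size, 1)
    zeros = torch.zeros(batch_size, 1)
    z = torch.randn(batch_size, 2)

    # Update D: minimize -y log D(x) - (1-y) log(1 - D(x'))
    opt_D.zero_grad()
    loss_D = loss(D(X), ones) + loss(D(G(z).detach()), zeros)
    loss_D.backward()
    opt_D.step()

    # Update G: maximize log D(x') by minimizing BCE against the "real" label
    opt_G.zero_grad()
    loss_G = loss(D(G(z)), ones)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()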

DCGANs (Deep Convolutional Generative Adversarial Networks)

The DCGAN architecture has four convolutional layers for the Discriminator and four "fractionally-strided" convolutional layers for the Generator.
The Discriminator is a stack of four strided convolutions with batch normalization (except at its input layer) and leaky ReLU activations.
Leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem and helps the gradients flow more easily through the architecture.
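
To make this concrete, here is a small comparison of ReLU and Leaky ReLU on negative inputs (the 0.2 slope is a common DCGAN choice, assumed here for illustration):

import torch
from torch import nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.0])
print(nn.ReLU()(x))          # tensor([0.0000, 0.0000, 0.0000, 1.0000])
print(nn.LeakyReLU(0.2)(x))  # tensor([-0.4000, -0.1000, 0.0000, 1.0000])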

Model structure:

- Replace pooling layers with convolutions: strided convolutions in the discriminator and fractionally-strided convolutions in the generator.
- Use batch normalization (BN) in both the generator and the discriminator. It mitigates problems caused by poor initialization, helps the gradient propagate through each layer, and prevents the generator from collapsing all samples to the same point. However, applying BN directly to every layer can cause sample oscillation and model instability; leaving BN out of the generator's output layer and the discriminator's input layer prevents this.
- Remove fully connected layers. Global pooling increases the stability of the model but hurts convergence speed.
- Use ReLU in all generator layers except the output layer, which uses tanh.
- Use LeakyReLU in all layers of the discriminator (all five guidelines are illustrated in the sketch after this list).
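
A minimal sketch of a generator/discriminator pair built along these guidelines, assuming 32x32 RGB images and a 100-dimensional noise vector; the channel widths and layer arguments are illustrative choices, not from the original post:

import torch
from torch import nn

# Generator: four fractionally-strided convolutions; BN on every layer except
# the output, ReLU hidden activations, tanh at the output.
netG = nn.Sequential(
    nn.ConvTranspose2d(100, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 1x1 -> 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # 4x4 -> 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),     # 8x8 -> 16x16
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                          # 16x16 -> 32x32
)

# Discriminator: four strided convolutions; no BN on the input layer,
# LeakyReLU activations, sigmoid output.
netD = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),                           # 32x32 -> 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),      # 16x16 -> 8x8
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),    # 8x8 -> 4x4
    nn.Conv2d(128, 1, 4, 1, 0), nn.Sigmoid(),                               # 4x4 -> 1x1
)

z = torch.randn(16, 100, 1, 1)
print(netG(z).shape)         # torch.Size([16, 3, 32, 32])
print(netD(netG(z)).shape)   # torch.Size([16, 1, 1, 1])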

Network structure of the DCGAN generator:

[Figure: network structure of the DCGAN generator]
The four conv layers here are fractionally-strided convolutions, which some papers also refer to as deconvolutions (transposed convolutions).
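
A fractionally-strided (transposed) convolution upsamples its input. The sketch below, with kernel size, stride, and padding chosen for illustration, shows PyTorch's nn.ConvTranspose2d doubling the spatial resolution:

import torch
from torch import nn

x = torch.randn(1, 16, 8, 8)   # (batch, channels, H, W)
up = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)
print(up(x).shape)             # torch.Size([1, 8, 16, 16])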

Image Classification Case Study 2

This part covers more familiar material, so no summary notes are given.

