Flow-based model

Paper 1: NICE: Non-linear Independent Components Estimation

Paper 2: Real-valued Non-Volume Preserving transformations (RealNVP)

Paper 3: Glow: Generative Flow with Invertible 1x1 Convolutions

NICE

Learning goal: find a transformation $h = f(x)$ such that, after the transformation, the components of the resulting distribution are mutually independent.

We assume $h$ and $x$ have the same dimension and $f$ is invertible; then by the change-of-variables formula,

$$p_X(x) = p_H\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right|.$$

We also want the Jacobian of $f$ and the inverse $f^{-1}$ to be easy to compute. If they are, we can sample from $p_X(x)$ directly: draw $h \sim p_H(h)$ and set $x = f^{-1}(h)$.

The key design insight of $f$ is to split $x$ into two parts $(x_1, x_2)$ and transform them into $(y_1, y_2)$:

$$y_1 = x_1, \qquad y_2 = x_2 + m(x_1).$$

$m$ may be an arbitrary function (e.g. a ReLU MLP). Note that the Jacobian of this transformation is triangular with unit diagonal, so its determinant is 1, and the inverse is very easy to compute: $x_1 = y_1$, $x_2 = y_2 - m(y_1)$.
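As an illustration, here is a minimal PyTorch sketch of such an additive coupling layer; the class name `AdditiveCoupling`, the `swap` flag, and the hidden size are assumptions for this example, not code from the paper.

```python
# A minimal sketch of a NICE-style additive coupling layer (assumes PyTorch).
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim, hidden=64, swap=False):
        super().__init__()
        assert dim % 2 == 0, "sketch assumes an even number of dimensions"
        self.d = dim // 2
        self.swap = swap  # alternate which half passes through unchanged
        # m may be an arbitrary function, e.g. a ReLU MLP
        self.m_net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.d),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        if self.swap:
            x1, x2 = x2, x1
        y1, y2 = x1, x2 + self.m_net(x1)      # y1 = x1, y2 = x2 + m(x1)
        if self.swap:
            y1, y2 = y2, y1
        # Jacobian is triangular with unit diagonal, so log|det J| = 0
        return torch.cat([y1, y2], dim=1), x.new_zeros(x.shape[0])

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        if self.swap:
            y1, y2 = y2, y1
        x1, x2 = y1, y2 - self.m_net(y1)      # x1 = y1, x2 = y2 - m(y1)
        if self.swap:
            x1, x2 = x2, x1
        return torch.cat([x1, x2], dim=1)
```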

Of course, we can define a more general framework: a stack of additive coupling layers whose split alternates from layer to layer. Composing such invertible transformations layer by layer is what gives the model its "flow" character.

 

 

We can define the maximum-likelihood objective:

$$\log p_X(x) = \log p_H\big(f(x)\big) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|.$$

$p_H(h)$: the prior distribution, which can be pre-defined, for example an isotropic Gaussian. Since the components of $h$ are independent, we can write

$$p_H(h) = \prod_d p_{H_d}(h_d).$$
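As a sketch of how the pieces fit together, the code below stacks the hypothetical `AdditiveCoupling` layers from above (alternating halves), evaluates the log-likelihood under a standard Gaussian prior, and samples by inverting the flow. The `NICE` class name and layer count are assumptions; NICE's final diagonal scaling layer is omitted for brevity.

```python
# A minimal sketch of the NICE log-likelihood and sampling procedure.
# Reuses the AdditiveCoupling class sketched above (assumes PyTorch).
import math
import torch
import torch.nn as nn

class NICE(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            [AdditiveCoupling(dim, swap=(i % 2 == 1)) for i in range(n_layers)]
        )

    def forward(self, x):
        h, log_det = x, x.new_zeros(x.shape[0])
        for layer in self.layers:
            h, ld = layer(h)
            log_det = log_det + ld            # each additive layer contributes 0
        return h, log_det

    def log_prob(self, x):
        h, log_det = self.forward(x)
        # isotropic Gaussian prior factorizes over components
        log_ph = -0.5 * (h ** 2 + math.log(2 * math.pi)).sum(dim=1)
        # change of variables: log p_X(x) = log p_H(f(x)) + log|det df/dx|
        return log_ph + log_det

    def sample(self, n):
        h = torch.randn(n, self.dim)          # h ~ p_H
        for layer in reversed(self.layers):
            h = layer.inverse(h)              # x = f^{-1}(h)
        return h
```

Training then simply maximizes `log_prob`, i.e. minimizes `-model.log_prob(x).mean()` with any standard optimizer.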

 

RealNVP 

Splitting approach:

$$y_{1:d} = x_{1:d}, \qquad y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}).$$

$s$ and $t$ are the scale and translation functions, mapping $\mathbb{R}^d \to \mathbb{R}^{D-d}$.

The corresponding Jacobian matrix is lower triangular, so its determinant is just the product of the diagonal scale terms:

$$\frac{\partial y}{\partial x^{T}} =
\begin{bmatrix}
\mathbb{I}_d & 0 \\
\frac{\partial y_{d+1:D}}{\partial x_{1:d}^{T}} & \operatorname{diag}\big(\exp(s(x_{1:d}))\big)
\end{bmatrix},
\qquad
\det \frac{\partial y}{\partial x^{T}} = \exp\Big(\sum_j s(x_{1:d})_j\Big).$$

PS: Since computing the inverse transformation and the Jacobian determinant never requires inverting $s$ or $t$ themselves, $s$ and $t$ can be arbitrarily complex, e.g. deep neural networks.
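To make this concrete, here is a minimal PyTorch sketch of a RealNVP-style affine coupling layer (not the authors' released code); the class name, layer sizes, and the `Tanh` used to stabilize the scale output are assumptions.

```python
# A minimal sketch of a RealNVP-style affine coupling layer (assumes PyTorch).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # s, t : R^d -> R^{D-d}; free-form networks, never inverted themselves
        self.s_net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim - self.d), nn.Tanh())
        self.t_net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.s_net(x1), self.t_net(x1)
        y2 = x2 * torch.exp(s) + t    # y_{d+1:D} = x_{d+1:D} * exp(s(x_{1:d})) + t(x_{1:d})
        log_det = s.sum(dim=1)        # log|det J| = sum_j s(x_{1:d})_j
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.s_net(y1), self.t_net(y1)
        x2 = (y2 - t) * torch.exp(-s)  # inversion uses s and t directly, not their inverses
        return torch.cat([y1, x2], dim=1)
```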

 

Glow

There are many more models; to be continued...

