Markov Chain Monte Carlo sampling methods

  • It can be used to sample from more complex distributions, and it also works in high-dimensional spaces
  • Markov Chain Monte Carlo (MCMC) method
    • Monte Carlo method: an approximate solution method based on numerical sampling
    • Markov chain: the means by which samples are generated
  • The basic idea of MCMC
    • For a target distribution, construct a Markov chain whose stationary distribution is the target distribution
    • Starting from an arbitrary initial state, transition along the Markov chain
    • The resulting sequence of state transitions eventually converges to the target distribution, yielding a series of samples
  • Core points: how the Markov chain is constructed, and how the state-transition sequence is determined

Metropolis-Hastings sampling

For the target distribution \(p(x)\), choose a reference conditional distribution \(q(x^*|x)\) that is easy to sample from, and let
\[A(x, x^*) = \min\left\{1, \frac{p(x^*)\,q(x|x^*)}{p(x)\,q(x^*|x)}\right\}\]
Then sample according to the following procedure:

  • Randomly select an initial sample \(x^{(0)}\)
  • For \(t = 1, 2, 3, \dots\):
    • Draw a sample \(x^*\) from the reference conditional distribution \(q(x^* | x^{(t-1)})\)
    • Generate a random number \(u\) from the uniform distribution \(U(0, 1)\)
    • If \(u < A(x^{(t-1)}, x^*)\), let \(x^{(t)} = x^*\); otherwise let \(x^{(t)} = x^{(t-1)}\)
  • The sample sequence \(\{\dots, x^{(t-1)}, x^{(t)}\}\) eventually converges to the target distribution \(p(x)\)
  • \(A\) is the acceptance probability that defines the transitions of the Markov chain constructed over the states \(x\)
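The procedure above can be sketched in Python. This is a minimal, illustrative implementation (not code from the original post), assuming a one-dimensional standard-normal target \(p(x) \propto e^{-x^2/2}\) and a symmetric Gaussian proposal, for which the \(q\) ratio in \(A\) cancels:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=1.0, seed=0):
    """Metropolis-Hastings with a symmetric Gaussian proposal q(x*|x).

    Because q is symmetric, q(x|x*)/q(x*|x) = 1, so the acceptance
    probability reduces to A = min{1, p(x*)/p(x)}.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        x_star = x + rng.gauss(0.0, step)           # draw x* from q(x*|x)
        log_a = min(0.0, log_p(x_star) - log_p(x))  # log of A(x, x*)
        if rng.random() < math.exp(log_a):          # accept with probability A
            x = x_star                              # x^(t) = x*
        samples.append(x)                           # else x^(t) = x^(t-1)
    return samples

# Target: standard normal, log p(x) = -x^2/2 (up to a constant)
log_p = lambda x: -0.5 * x * x
samples = metropolis_hastings(log_p, x0=10.0, n_samples=20000)
kept = samples[2000:]                               # discard burn-in
mean = sum(kept) / len(kept)                        # should be near 0
```

Note that a sample is recorded at every step, whether or not the proposal is accepted; rejection only means the chain stays at its current state.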

Gibbs sampling

  • A special case of the Metropolis-Hastings algorithm
  • Core idea: sample and update only one dimension at a time
  • For the target distribution \(p(x)\), where \(x = (x_1, \cdots, x_d)\) is a multidimensional vector, the sampling process is as follows:
    • Randomly select an initial state \(x^{(0)} = (x_1^{(0)}, \cdots, x_d^{(0)})\)
    • For \(t = 1, 2, 3, \cdots\):
      • Given the sample generated in the previous step, \(x^{(t-1)} = (x_1^{(t-1)}, \cdots, x_d^{(t-1)})\), sample and update the value of each dimension in turn, i.e. draw \(x_1^{(t)} \sim p(x_1 | x_2^{(t-1)}, x_3^{(t-1)}, \dots, x_d^{(t-1)})\), \(\dots\), \(x_d^{(t)} \sim p(x_d | x_1^{(t)}, x_2^{(t)}, \dots, x_{d-1}^{(t)})\)
      • This forms a new sample \(x^{(t)} = (x_1^{(t)}, \cdots, x_d^{(t)})\)
  • Each dimension is sampled from its conditional distribution given all the other dimensions; the conditioning values come from dimensions already updated in the current round, or from the previous round for dimensions not yet updated
  • The dimensions may be updated in a fixed sequence or in a random order
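As a concrete sketch of the procedure above (an illustrative example, not from the original post): for a bivariate standard normal target with correlation \(\rho\), the full conditionals are themselves one-dimensional Gaussians, so each Gibbs update is a simple draw:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    """Gibbs sampling for a bivariate standard normal with correlation rho.

    The full conditionals are known in closed form:
        x1 | x2 ~ N(rho * x2, 1 - rho^2)
        x2 | x1 ~ N(rho * x1, 1 - rho^2)
    Each iteration updates one dimension at a time, conditioning on the
    most recently updated value of the other dimension.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x1, x2 = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        x1 = rng.gauss(rho * x2, sd)  # x1^(t) ~ p(x1 | x2^(t-1))
        x2 = rng.gauss(rho * x1, sd)  # x2^(t) ~ p(x2 | x1^(t))
        samples.append((x1, x2))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)[1000:]
corr_est = sum(a * b for a, b in samples) / len(samples)  # should approach rho
```

Note that the update of \(x_2\) conditions on the current round's \(x_1\), not the previous round's, exactly as described above.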

Features

  • Unlike rejection sampling, MCMC produces a new sample at every step, although sometimes that sample is identical to the previous one
  • "Burn-in": discard the initial portion of the sample sequence, generated before the chain has converged

Obtaining independent samples

  • Adjacent samples in the sequence produced by MCMC are not independent (they form a Markov chain)
  • If only samples are needed (e.g. to estimate an expectation), independence between samples is not required
  • To produce (approximately) independent and identically distributed samples:
    • Run multiple Markov chains and take samples from each
    • Take every few samples from the same Markov chain (thinning); these are approximately independent
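Both strategies can be sketched on a toy chain. The AR(1) process below is a stand-in for any MCMC sampler (it is illustrative, not from the original post): each sample depends on the previous one, so adjacent samples are strongly correlated:

```python
import random

def run_chain(n, seed):
    """A toy Markov chain (an AR(1) process) standing in for an MCMC
    sampler: each state depends on the previous one, so adjacent
    samples are strongly correlated."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = 0.9 * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Strategy 1: run several chains; take one post-burn-in sample from each
one_per_chain = [run_chain(500, seed)[-1] for seed in range(100)]

# Strategy 2: thin a single long chain, keeping every 50th sample
# after discarding burn-in
chain = run_chain(50000, seed=0)
thinned = chain[1000::50]
```

After thinning, the lag-1 autocorrelation of the kept samples is close to zero, whereas adjacent samples in the raw chain remain highly correlated.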

Origin www.cnblogs.com/weilonghu/p/11922682.html