Machine Learning - Sampling

The role of sampling

1. Sampling is essentially the simulation of a random phenomenon: according to a given probability distribution, it generates random events (samples) corresponding to that distribution.

2. Samples can be viewed as a non-parametric model: a small number of samples is used to approximate the overall distribution and to characterize the uncertainty of that distribution.

3. Resampling existing data, as in the Bootstrap and the jackknife, lets us exploit an existing dataset to extract more information from it. In addition, resampling techniques can deliberately change the distribution of the samples while preserving the target information, so as to better suit subsequent model training and learning.
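The Bootstrap mentioned above can be sketched in a few lines: resample the dataset with replacement many times and recompute a statistic (here the mean) on each resample; the spread of the resampled statistics approximates its uncertainty. The dataset and parameter values below are illustrative assumptions, not from the original post.

```python
import random

def bootstrap_means(data, n_resamples=1000, seed=0):
    """Estimate the sampling distribution of the mean via the Bootstrap:
    draw resamples of the same size with replacement, record each mean."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]  # same size, with replacement
        means.append(sum(resample) / len(resample))
    return means

data = [2.1, 3.4, 1.8, 4.0, 2.9, 3.3]  # toy dataset (assumed for illustration)
means = bootstrap_means(data)
# The standard deviation of `means` estimates the standard error of the sample mean.
```

The jackknife works analogously, but instead of resampling with replacement it recomputes the statistic on each leave-one-out subset.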

4. Many models have complicated structures (for example, they contain hidden variables), so the corresponding equations are complex, have no explicit analytical solution, and are difficult to solve or reason about exactly. Random sampling methods can simulate such models, thereby solving them or performing approximate inference. The problem usually reduces to computing the integral or expectation of some function under a particular distribution, or to finding the posterior distribution of some random variables or parameters given the data.
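The reduction described in point 4 is the core Monte Carlo idea: an expectation E[f(X)] under a distribution is approximated by the average of f over samples drawn from that distribution. A minimal sketch, with the function, distribution, and sample count chosen purely for illustration:

```python
import random

def mc_expectation(f, sampler, n=100_000, seed=0):
    """Approximate E[f(X)] by averaging f over n samples of X."""
    rng = random.Random(seed)
    return sum(f(sampler(rng)) for _ in range(n)) / n

# Example: for X ~ N(0, 1), E[X^2] is the variance, which equals 1.
est = mc_expectation(lambda x: x * x, lambda rng: rng.gauss(0.0, 1.0))
```

By the law of large numbers the estimate converges to the true expectation, with error shrinking roughly as 1/sqrt(n).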

Common sampling methods

The most basic sampling task is generating uniformly distributed random numbers:

The linear congruential generator (LCG) is commonly used to produce discrete, uniformly distributed pseudo-random numbers, computed as $x_{t+1} = (a \cdot x_t + c) \bmod m$.
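The recurrence above is straightforward to implement. The constants a, c, and m below are one classic parameter choice (glibc-style), assumed here for illustration; any values satisfying the usual LCG conditions work:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{t+1} = (a * x_t + c) mod m.
    Yields an endless stream of pseudo-random integers in [0, m)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
ints = [next(gen) for _ in range(3)]
uniforms = [v / 2**31 for v in ints]  # rescale to floats in [0, 1)
```

Dividing by m maps the integer stream to approximately uniform samples on [0, 1), which is the starting point for sampling from other distributions.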

Origin www.cnblogs.com/wzhao-cn/p/11303473.html