PBRT_V2 Study Notes: Importance Sampling

1. Importance Sampling

Importance sampling is a variance reduction technique that exploits the fact that the
Monte Carlo estimator

F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}

converges more quickly if the samples are taken from a distribution p(x) that is similar
to the function f(x) in the integrand.
The basic idea is that by concentrating work where
the value of the integrand is relatively high, an accurate estimate is computed more
efficiently.

So long as the random variables are sampled from a probability distribution that is
similar in shape to the integrand, variance is reduced.

In practice, importance sampling is one of the most frequently used variance reduction
techniques in rendering, since it is easy to apply and is very effective when good sampling
distributions are used. It is one of the variance reduction techniques of choice in pbrt,
and therefore a variety of techniques for sampling from distributions defined by BSDFs, light
sources, and functions related to participating media will be derived in this chapter.

(Importance sampling reduces variance: if the sampling distribution p(x) is similar in shape to f(x), the variance of the estimator goes down.)

2. Multiple importance sampling

Monte Carlo provides tools to estimate integrals of the form
∫ f(x) dx. However, we are frequently faced with integrals that are the product of two or more functions:
∫ f(x)g(x) dx. If we have an importance sampling strategy for f(x) and a strategy for
g(x), which should we use? (Assume that we are not able to combine the two sampling
strategies to compute a PDF that is proportional to the product f(x)g(x) that can itself
be sampled easily.) As shown in the discussion of importance sampling, a bad choice of
sampling distribution can be much worse than just using a uniform distribution.

For example, consider the problem of evaluating direct lighting integrals of the form

L_o(p, \omega_o) = \int_{S^2} f(p, \omega_o, \omega_i) \, L_d(p, \omega_i) \, |\cos\theta_i| \, d\omega_i.

If we were to perform importance sampling to estimate this integral according to distributions
based on either L_d or f_r, one of these two will often perform poorly.

Unfortunately, the obvious solution of taking some samples from each distribution and
averaging the two estimators is hardly any better. Because the variance is additive in this
case, this approach doesn't help: once variance has crept into an estimator, we can't
eliminate it by averaging with another estimator, even if that other estimator itself has low variance.

(For an integral like ∫ f(x)g(x) dx, using the estimator from 《PBRT_V2 总结记录 <77> Monte Carlo Integration》, the pdf should ideally be proportional to the product f(x)g(x). If the pdf is matched only to f(x) or only to g(x), the result can be very inaccurate; and even drawing samples from the pdfs of f(x) and g(x) separately and averaging the two estimates does not approximate ∫ f(x)g(x) dx any better.)

Multiple importance sampling (MIS) addresses exactly these kinds of problems, with
a simple and easy-to-implement technique. The basic idea is that, when estimating an
integral, we should draw samples from multiple sampling distributions, chosen in the
hope that at least one of them will match the shape of the integrand reasonably well, even
if we don’t know which one this will be. MIS provides a method to weight the samples
from each technique that can eliminate large variance spikes due to mismatches between
the integrand’s value and the sampling density. Specialized sampling routines that only
account for unusual special cases are even encouraged, as they reduce variance when
those cases occur, with relatively little cost in general.

If two sampling distributions p_f and p_g are used to estimate the value of ∫ f(x)g(x) dx,
the new Monte Carlo estimator given by MIS is

F = \frac{1}{n_f} \sum_{i=1}^{n_f} \frac{f(X_i)\,g(X_i)\,w_f(X_i)}{p_f(X_i)} + \frac{1}{n_g} \sum_{j=1}^{n_g} \frac{f(Y_j)\,g(Y_j)\,w_g(Y_j)}{p_g(Y_j)},

where n_f is the number of samples taken from the p_f distribution, n_g is the
number of samples taken from p_g, and w_f and w_g are special weighting functions chosen
such that the expected value of this estimator is the value of the integral of f(x)g(x).

The weighting functions take into account all of the different ways that a sample Xi or
Yj could have been generated, rather than just the particular one that was actually used.
A good choice for this weighting function is the balance heuristic:

w_s(x) = \frac{n_s\,p_s(x)}{\sum_i n_i\,p_i(x)}

The balance heuristic is a provably good way to weight samples to reduce variance.
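A quick sanity check (my own addition, not from the book): at every x the balance heuristic weights sum to one, which is exactly what makes the MIS estimator unbiased:

```latex
\sum_s w_s(x) = \sum_s \frac{n_s\,p_s(x)}{\sum_i n_i\,p_i(x)} = 1,
\qquad\Rightarrow\qquad
E[F] = \int \Big(\sum_s w_s(x)\Big) f(x)\,g(x)\,dx = \int f(x)\,g(x)\,dx.
```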

(MIS solves the problem above of estimating integrals of the form ∫ f(x)g(x) dx.)

Here we provide an implementation of the balance heuristic for the specific case of two
distributions p_f and p_g. We will not need a more general multidistribution case in pbrt.

inline float BalanceHeuristic(int nf, float fPdf, int ng, float gPdf) {
    // Balance heuristic weight for a sample drawn from the pf strategy:
    // nf*pf / (nf*pf + ng*pg).
    return (nf * fPdf) / (nf * fPdf + ng * gPdf);
}

In practice, the power heuristic often reduces variance even further. For an exponent β,
the power heuristic is

w_s(x) = \frac{\left(n_s\,p_s(x)\right)^\beta}{\sum_i \left(n_i\,p_i(x)\right)^\beta}

Veach determined empirically that β = 2 is a good value. We have β = 2 hard-coded into
the implementation here.

inline float PowerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    // Power heuristic with the exponent β = 2 hard-coded: square each
    // weighted pdf term before forming the ratio.
    float f = nf * fPdf, g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}


Reprinted from blog.csdn.net/aa20274270/article/details/84876086