【High Quality Rendering】—Stochastic Screen Space Reflection (SSSR)

Stochastic Screen Space Reflections (SSSR)

  • Overview

Stochastic Screen Space Reflection evolved from Screen Space Reflection and is essentially a fusion of SSR and IBL. The idea is to use already-rendered images to make up for the indirect lighting that plain BRDF shading lacks. The difference is the source: IBL samples a sky sphere (environment map), while SSSR samples the screen. But precisely because of this, the acceptable compromises made in IBL become the most troublesome burden in SSSR. The emergence of SSSR can be seen as a complement to IBL: IBL supplies the broad ambient lighting, SSSR adds detailed specular reflections from the environment, and GI adds detailed diffuse lighting on top of IBL.

  • Technical details
  1. Screen-space stack
  2. Ray Marching
  3. Screen Space Reflection
  4. BRDF (GGX)
  5. IBL
  6. Stochastic / importance sampling

  • Implementation principles

3.1 Ray Marching sampling

Screen space reflection uses a technique called ray marching to determine the reflection for each fragment (pixel). Ray marching is the process of iteratively lengthening or shortening a vector in order to probe or sample some space for information. In screen space reflection, the ray is the position vector reflected about the surface normal.

Intuitively, light hits a point in the scene, bounces, travels opposite to the reflected position vector, bounces off the current fragment, travels opposite to the position vector, and finally hits the camera lens, which is how you see the color of that scene point mirrored in the current fragment. SSR traces this light path in reverse (Ray Trace, which is a form of ray marching): it tries to find the reflected point whose light bounces off the current fragment. On each iteration, the algorithm samples the scene's position or depth along the reflected ray, each time asking whether the ray intersects the scene's geometry. If there is an intersection, that location is a potential candidate for the current fragment's reflection (this mirrors the image-based principle behind IBL).
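Below is a minimal sketch of constructing that reflected ray in view space; the function name and sample values are illustrative, not taken from any particular engine.

import numpy as np

def reflect(incident, normal):
    # standard reflection of an incident vector about a unit normal
    return incident - 2.0 * np.dot(incident, normal) * normal

# hypothetical view-space inputs for one fragment
position = np.array([0.5, -0.2, -3.0])   # view-space fragment position
normal   = np.array([0.0, 1.0, 0.0])     # view-space unit normal

view_dir = position / np.linalg.norm(position)  # camera sits at the origin in view space
ray_dir  = reflect(view_dir, normal)            # the direction we will march along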

 

Figure 3.1.1 Behavior Demonstration


Ideally, there would be some analytical method that exactly determines the first point of intersection: the first intersection is the only valid reflection point for the current fragment (in practice this is too optimistic). Since you don't know where the intersection is, if any exists, you start at the base of the reflection ray and step along the reflection direction, iterating over the coordinates. At each iteration, you compare the current coordinate against the stored pixel depth. If you do hit something, you then probe points around that area, hoping to find the exact intersection (a second, finer iteration over the bracketed interval, which is an optimization scheme).
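A minimal sketch of that two-pass search, assuming a hypothetical scene_depth(p) lookup and treating larger z as farther from the camera (depth conventions differ between engines, so both are assumptions):

def march(origin, direction, steps=32, refine_steps=8):
    # `origin` and `direction` are numpy vectors whose z component is the
    # ray's depth; `direction` spans the full march distance
    prev = origin
    for i in range(1, steps + 1):
        cur = origin + direction * (i / steps)
        if cur[2] > scene_depth(cur):        # ray fell behind the stored surface: overshot
            lo, hi = prev, cur               # bracket the crossing ...
            for _ in range(refine_steps):    # ... and binary-search inside it
                mid = (lo + hi) * 0.5
                if mid[2] > scene_depth(mid):
                    hi = mid
                else:
                    lo = mid
            return hi                        # approximate first intersection
        prev = cur
    return None                              # no hit along the visible ray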

 

Figure 3.1.2 The red interval is iterated a second time to obtain the precise intersection

3.2 Screen Space Reflect

Ray marching is a means of comparing spatial coordinates. To realize reflection, we need ray marching to also give us the reflection source; that is, while marching we must identify which point is being reflected. (This stage is still just traditional SSR. A simulation renderer used by AMD still takes this approach: its result looks very similar to the SSSR discussed later, but it is not PBR.)


To obtain the reflection source, we only need one small change: iterate along the reflection direction.

Figure 3.2.1 Ray Marching sampling the reflection source

Note, however, that we are now working in the screen-space stack, which means the only data we have is two-dimensional screen data. This will also turn out to be the biggest flaw and regret of SSR, as will become clear once we finish.

Just like SSAO, SSR switches back and forth between screen space and view space. You need the camera's projection matrix to transform a point from view space to clip space. From clip space you convert the point to UV space. Once in UV space, you can sample a position from the scene, which will be the scene position closest to the sample. This is the "screen-space" part of screen space reflection, since the "screen" is the texture's UVs mapped onto a screen-shaped rectangle.
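A sketch of that chain of transforms, assuming a numpy 4x4 projection matrix `proj` (row/column and v-flip conventions vary by API and are assumptions here):

import numpy as np

def view_to_uv(p_view, proj):
    p_clip = proj @ np.append(p_view, 1.0)  # view space -> clip space
    p_ndc  = p_clip[:3] / p_clip[3]         # perspective divide -> NDC in [-1, 1]
    return p_ndc[:2] * 0.5 + 0.5            # NDC -> UV in [0, 1]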

The whole principle is very simple: the ray takes each step along the reflect direction; we sample the depth texture and compare the ray's depth with the stored depth at that pixel; if it hits, that point is the reflection source. Of course, this idea is optimistic and the raw result is poor: since we cannot guarantee every step lands flawlessly on a surface, we need to add a thickness tolerance. This article only introduces the principle; the initial implementation has many more details, and you can refer to the references for source code.
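A sketch of that hit test, assuming a hypothetical sample_depth(uv) read of the depth texture and the same larger-z-is-farther convention as before: a sample only counts as a hit when the ray sits behind the stored surface by no more than the tolerance.

def is_hit(ray_depth, uv, thickness=0.05):
    scene = sample_depth(uv)          # depth stored at this screen location
    delta = ray_depth - scene         # how far the ray is behind the surface
    return 0.0 < delta < thickness    # behind it, but within the tolerance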

With SSR implemented, you will see:

Figure 3.2.2 SSR reflection results


This is the result traditional SSR produces. Because of the limited iteration count and the thickness tolerance, the output shows very obvious banding. Traditional methods apply some blurring to turn the bands into grain, but that also reduces the accuracy of the reflection. This is one of the reasons we say SSR is too idealistic: the only way to reduce the banding is to increase the stepping precision, which in turn brings serious performance cost:

 

Figure 3.2.3 Reflection results with double the step count

But even so, this cannot solve the grazing-angle problem (just as shadow maps cannot solve their aliasing problem). Still, it has to be admitted that the SSR effect is striking: it achieves simple reflections on flat objects without ray tracing, an indispensable element for improving the quality of the whole scene.

3.3 Mip/Blur Map Sampler

Next comes the part of traditional SSR that is also the biggest difference between SSR and SSSR: how to compute the shading of the reflected light. SSR's approach is similar to IBL: build a multi-level MipMap chain, then fetch the corresponding pre-blurred reflection image based on roughness. This is not PBR rendering, so it has some unrealistic issues. SSSR combines the rendering methods of IBL and SSR: although it also uses MipMaps, they only serve to blur distant lighting, while rough reflections are computed with GGX sampling, in other words a screen-space variant of ray tracing (PBR). The data obtained is therefore real data rather than blurred data: traditional SSR does its processing on the final sampled image, while SSSR does it at the hit-UV level.

Below we introduce SSR's processing; SSSR's processing will be explained in the SSSR section, since the two are integrated.


The idea of SSR is very simple: sample different MipMap levels according to roughness (mip generation doubles as a commonly used fast image-blur technique), then paste the light fetched from the blurred image directly. That completes the process:
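A sketch of that lookup, assuming a hypothetical list of progressively blurred screen images `scene_color_mips` and a `sample(image, uv)` helper; the roughness-to-mip mapping is exactly the ad-hoc part criticized below.

def sample_reflection(uv, roughness, scene_color_mips):
    max_mip = len(scene_color_mips) - 1
    mip = roughness * max_mip                # ad-hoc: rougher surface -> blurrier level
    lo  = int(mip)
    hi  = min(lo + 1, max_mip)
    t   = mip - lo
    # blend the two nearest pre-blurred levels, as trilinear filtering would
    return (1.0 - t) * sample(scene_color_mips[lo], uv) \
         +        t  * sample(scene_color_mips[hi], uv)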

Figure 3.3.1 Reflection results obtained from blurred scene sampling

The reflection on the water surface is the final result we get. You will also have noticed that how SSR maps roughness to reflection is a very troublesome problem. Even if we can find a suitable value for sampling the different textures, one problem is unavoidable: we have no reasonable way to demonstrate whether the reflection we get is correct (physically based). Left unsolved, this buries hidden dangers for later lighting improvements. At least in terms of results, though, SSR and SSSR do not differ much (the human eye is not sensitive to secondary lighting), while SSSR's performance requirements and computation far exceed SSR's. This is why many renderers still use SSR.

3.4 IBL's understanding of the reflection term

IBL is similar to reflection probes and light probes in Unity.

The essence of SSSR is a complement to IBL, and many of its concepts involve IBL. Of course, we will not dwell on IBL here; we only pull out the reflection term and discuss it separately.

If we understand IBL, we know that IBL makes many "acceptable compromises"; in SSSR these compromises are infinitely magnified and become a fatal weakness. Understanding IBL is therefore necessary to understand the thinking behind SSSR.

The reflection term of IBL is:

$$L_o(p,\omega_o)=\int_{\Omega} L_i(p,\omega_i)\, f_r(p,\omega_i,\omega_o)\,(n\cdot\omega_i)\, d\omega_i$$

To evaluate this multidimensional integral, UE's method is an approximation: replace the integral of the product with the product of two separately estimated integrals (the "split sum"):

$$\frac{1}{N}\sum_{k=1}^{N}\frac{L_i(l_k)\,f(l_k,v)\,\cos\theta_{l_k}}{p(l_k,v)}\;\approx\;\left(\frac{1}{N}\sum_{k=1}^{N}L_i(l_k)\right)\left(\frac{1}{N}\sum_{k=1}^{N}\frac{f(l_k,v)\,\cos\theta_{l_k}}{p(l_k,v)}\right)$$
At the same time, because the reflection lobe itself is directional, the reflected light seen from different view angles should not be identical. To paper over this problem, IBL turns a blind eye and samples normally, just as it does for diffuse; this is exactly what later causes trouble for SSSR's implementation.

 

Figure 3.4.1 Reflection lobe shape

This idea from IBL is itself a core idea of SSSR: SSSR samples ambient light the same way, except that the light source changes from the cube map to the screen. At the same time, to make the result justified, SSSR is implemented by tracing; but to respect performance, we usually combine filtering, multi-sampling, and importance sampling to achieve a better effect with fewer steps. Since these three are essentially a mixing algorithm, the accuracy they can reach is limited; an infinite number of samples would be required to match the effect of true tracing.

Incidentally, IBL is a broad concept whose essence is dynamic programming (reuse). Within the screen-space stack, we use SSSR and SSGI to make up for IBL's lack of detail.

3.5 SSSR (Stochastic Screen-Space Reflections) as proposed by Frostbite

Frostbite's original intention in proposing SSSR was to improve SSR quality while using PBR to reduce parameter tweaking, so it borrowed the ray-tracing solution. To reduce the number of rays, it pairs multi-sampling with importance sampling; Frostbite also treats multi-sampling as part of the color resolve. The essence of SSSR is thus a cut-down version of ray-traced glossy reflection. Of course, compared with offline tracing, SSSR's algorithm is more complicated; offline, you only need to approach the limit with ever more samples.

3.5.1 Importance sampling

Importance sampling (we keep the standard name, even if its common Chinese rendering is debatable): its essence lies in the probability density function (PDF). To approximate the brute-force (exhaustive) result, each sample is multiplied by a weight, the reciprocal of its probability density, and the weighted samples are averaged as the final result. The core idea is the same as in IBL.
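A self-contained 1D demonstration of the weight-by-1/PDF idea (not SSSR itself): we estimate the integral of x^2 over [0, 1] (true value 1/3) by drawing samples from the non-uniform PDF p(x) = 2x and averaging f(x)/p(x).

import random

def f(x):
    return x * x                      # integrand

def estimate(n=10000):
    total = 0.0
    for _ in range(n):
        x = random.random() ** 0.5    # inverse-CDF sampling: CDF x^2 => PDF 2x
        total += f(x) / (2.0 * x)     # weight each sample by 1/PDF
    return total / n                  # converges to ~1/3

print(estimate())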

The GGX importance-sampling PDF (the half-vector distribution weighted by its projected solid angle):

$$p(h)=D(h)\,\cos\theta_h=\frac{\alpha^2\cos\theta_h}{\pi\left(\cos^2\theta_h(\alpha^2-1)+1\right)^2}$$

To draw random samples, the traditional method factors this solid-angle PDF into its two angular components, the marginal distribution in $\theta$ and the (uniform) distribution in $\phi$:

$$p(\theta)=\frac{2\alpha^2\cos\theta\sin\theta}{\left(\cos^2\theta\,(\alpha^2-1)+1\right)^2}$$

and

$$p(\phi)=\frac{1}{2\pi}$$

Each distribution can then be sampled with a uniform random number through its inverse CDF, which yields the basic parameters of importance sampling. The estimator is expressed as:

$$L_o\approx\frac{1}{N}\sum_{k=1}^{N}\frac{L_i(l_k)\,f_r(l_k,v)\,\cos\theta_k}{p(l_k)}$$

The meaning of this formula is that we shoot rays at random to estimate the average radiance of the ambient light. Because the incoming radiance covers the hemisphere above the surface, the ambient light received at the point is the continuous accumulation of the light arriving through each direction's solid angle. To put it in exaggerated terms: if the whole hemisphere emitted a single constant radiance L, the light received would be L times the integrated hemisphere area.

 

    Figure 3.5.1.1 Importance sampling

Of course, because we use GGX and the reflect direction as the sampling reference here, the integrated reflection area takes a lobe shape, as shown in Figure 3.4.1.
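A sketch of the GGX half-vector sampler in tangent space, following the widely used inverse-CDF formulation (as in Karis's 2013 course notes); u1 and u2 are uniform random numbers in [0, 1).

import math

def importance_sample_ggx(u1, u2, roughness):
    a = roughness * roughness
    phi = 2.0 * math.pi * u1
    cos_theta = math.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    # half-vector in tangent space, where z is the surface normal;
    # reflect the view vector about it to get the sample ray
    return (sin_theta * math.cos(phi),
            sin_theta * math.sin(phi),
            cos_theta)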

3.5.2 Multi-sampling


Multi-sampling was proposed by Frostbite in 2015 to solve the heavy noise caused by the low sample counts importance sampling can afford (in real-time rendering, about 4 rays is the practical limit). The idea is to reuse the neighbors' already-computed hit UV or resolved light as sampling data (reusing the UV is more physically faithful and looks more real, but it amplifies flaws), treating rays fired by adjacent points as if they had been fired by the current pixel. This buys more effective samples without firing more rays: suppose we reuse 4 points and each point fires 4 rays; the effective sample count rises from 4 to 16 rays, yet each pixel still only fires 4.

Figure 3.5.2.1 Frostbite's sample reuse


The original intention of this idea is good, but it clearly continues IBL's "acceptable compromise" on reflected light: it assumes the reflection lobes emitted by all nearby points have the same shape. Obviously, when the light source is small and the reflective area is comparatively large, visible problems appear. As shown in Figure 3.5.2.2, assume the surface is infinitely smooth, so the reflection lobe degenerates to a line (for clarity we ignore roughness):

 

Figure 3.5.2.2 The actual reflection situation

The actual situation can be far from what we assume. Adjacent pixels near the grazing angle differ more strongly in reflection direction when the light source is small; the size of the difference depends on the grazing angle and the distance to the reflected object. This means the neighbor's importance weight (PDF) we borrow may not be the weight this pixel should have:

$$L_o\approx\frac{1}{N}\sum_{k=1}^{N}\frac{L(\mathrm{hit}_k)\,f_r(l_k,v)\,\cos\theta_k}{p_k(l_k)}$$

As in this estimator, the $p_k$ of our adjacent rays may not leave the sum unbiased, which puts a lot of noise in the picture. (Here we directly use Frostbite's screenshot, since we knew from the start of our implementation that this defect existed:)

Figure 3.5.2.3 Results provided by Frostbite

To compensate for this defect, Frostbite used a little ingenuity, generally called the spatiofilter (spatial filter).

3.5.3 Spatiofilter: spatial filtering

Despite the name, spatial filtering here has little to do with space; its core lies in the replacement of weights.

Frostbite found that the large differences in reflection vectors at grazing angles made the PDFs differ too much, so that the final integral was no longer unbiased. So Frostbite came up with a fix: since the difference is caused by the PDF, kill the PDF, or at least weaken its influence. Conveniently, the PDF here is exactly the PDF of IBL's reflection term, that is, the GGX PDF, so it has a formal relationship with IBL.

Our original integral (the neighbor-reuse estimator) is expressed as:

$$L_o\approx\frac{1}{N}\sum_{k=1}^{N}\frac{L(\mathrm{hit}_k)\,f_k(l_k,v)}{p_k}$$

And to get rid of $p_k$, we weight each reused ray by the current pixel's own reflection term (the same GGX term as in IBL's reflection item) divided by the neighbor's PDF:

$$w_k=\frac{f_{\mathrm{local}}(l_k,v)}{p_k}$$

Finally it can be expressed as a normalized weighted sum:

$$L_o\approx\frac{\sum_k L(\mathrm{hit}_k)\,w_k}{\sum_k w_k}$$

Because $p_k$ and the local term are both GGX-shaped PDFs, this normalized approximation of the integral subtly cancels most of $p_k$'s influence (of course, it only reduces the impact rather than eliminating it).

This way we subtly reduce the noise (though some remains). Frostbite also mentioned another method: since the differences are caused by the normals, just use the average normal of the area instead. But this is always a hacky method, because you have no basis to prove that the reflection of the average normal is the reflection of the point.

Pseudocode is expressed as:

result    = 0.0
weightSum = 0.0
for pixel in neighborhood:
    # re-weight the neighbor's ray by our own BRDF at its hit direction,
    # divided by the PDF the neighbor actually used to generate the ray
    weight     = localBrdf(pixel.hit) / pixel.hitPdf
    result    += color(pixel.hit) * weight
    weightSum += weight
result /= weightSum   # the normalization is what cancels most of the PDF bias

The effect is as follows:

 

Figure 3.5.3.1 Spatial filtering effect


Compared with Figure 3.5.3.2, the change is dramatic:

 

Figure 3.5.3.2 SSR without the spatial filter

3.5.4 Temporal filtering

Temporal filtering has nothing to do with spatial filtering, except that its data source is the spatially filtered image; it has a lot to do with multi-sampling, however. In multi-sampling we stay within the same time dimension and borrow different spatial positions, so the sample ray count grows from N to N*M; with temporal filtering we hope to evolve from N*M to N*M*X, that is, a third dimension.

The idea is also very simple. We store the previous frame's data, which can be the hit UV (hit point) or the final color. If it is the hit UV, it is treated as additional hit points in the resolve, so that as long as the camera does not move, the sample count keeps accumulating. If instead it is the final color, it is treated as the average reflected light gathered by the rays fired over a period of time, and it only needs to be blended linearly according to the velocity.

As mentioned earlier, both methods assume the camera is not moving, so to make them more general we must also consider spatial continuity. We need to record the camera's motion to compute a per-pixel offset: you can compute it from depth and FOV, or obtain it by transforming through camera space; in short, there are many methods.
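One such method, sketched with numpy: reproject the pixel's world position through the previous frame's view-projection matrix; the resulting UV is where the history should be sampled, and its difference from the current UV is the per-pixel velocity (matrix conventions here are assumptions).

import numpy as np

def reproject(world_pos, prev_view_proj):
    p = prev_view_proj @ np.append(world_pos, 1.0)  # world -> previous clip space
    ndc = p[:3] / p[3]                              # perspective divide
    return ndc[:2] * 0.5 + 0.5                      # previous-frame UV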

The temporal blend itself, in shader-style pseudocode:

float4 OldColor;   // color at this point's reprojected position in the previous frame
float4 NewColor;   // color of the point in the current frame
float3 Velocity;   // per-pixel motion between frames

NewColor = AA_Filter(NewColor /*...*/);        // anti-aliasing filter on the current frame
float TempWeight = Resolve(Velocity /*...*/);  // blend weight from your own weighting scheme
return lerp(OldColor, NewColor, TempWeight);   // faster motion -> trust the current frame more


At the same time, to further reduce the noise of the picture, we also need to add a small filter on top of the history.
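One plausible choice for this filter (our assumption, not necessarily the original implementation) is TAA-style neighborhood clamping: clamp the history color to the min/max of the current frame's 3x3 neighborhood, so stale history cannot introduce colors the current frame never produced. A numpy sketch:

import numpy as np

def clamp_history(history, current, x, y):
    # current: (H, W, 3) image of this frame; history: one RGB sample
    patch = current[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
    lo = patch.reshape(-1, 3).min(axis=0)   # per-channel neighborhood min
    hi = patch.reshape(-1, 3).max(axis=0)   # per-channel neighborhood max
    return np.clip(history, lo, hi)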

Of course, these are only sketches; the actual AA filtering and weight calculation are troublesome problems in their own right. For details, please refer to the project source code.

The effect obtained is as follows:

 

Figure 3.5.4.1 Temporal filtering effect

  • Future work

4.1 Antialiasing

In SSSR, following IBL's approach, we sample ambient light with a PBR method to form the reflected light. However, because the ray count is low, black-and-white noise often appears. To balance quality and performance, Unreal blurs its low-precision sampling directly. If we want higher precision in our own implementation, we need to blur the final rendered reflection image before compositing it to the screen; another method is screen-space anti-aliasing, and combining the two may give even better results.

4.2 SSSR defects

SSSR and SSR share one common problem: whatever is not on the screen cannot be reflected, and this cannot be solved. SSSR also inherits the defects of ray tracing, IBL, and ray marching: IBL behaves poorly at grazing angles; ray tracing's neighboring integrals differ across the screen-space reflections and produce white-noise speckle; and ray marching's limited precision produces banding. There is no particularly good way to solve these problems; as with anti-aliasing, we can only blur them away.

References

[1] SSR Screen Space Reflection | 3D Game Shaders For Beginners (lettier.github.io)

[2] Frostbite, SIGGRAPH 2015: Stochastic Screen-Space Reflections.pptx (live.com)

[3] IBL: Basics of Graphics | PBR Review, Sang Lai 93's Blog, CSDN

[4] Importance Sampling, Zhihu (zhihu.com)
