Hundred Talents Program, Part 4

Chapter Five

1. PBR

        Overview: PBR (Physically Based Rendering) is rendering grounded in physical principles.

        Physically based in three respects: physically based materials (Material), physically based lighting (Lighting), and a physically adapted camera (Camera).

  • Physically Based Materials (Lighting Model) Overview

        Physically based rendering is still an approximation of the real world. Three conditions need to be met: a microfacet-based surface model, energy conservation, and the application of a physically based BRDF.

        1. Microfacet theory: the surface of an object is modeled, at a microscopic scale, as countless tiny facets, each acting as an ideal mirror with a random orientation.

        It is a good way to describe the difference between a rough surface and a smooth surface.

         2. Conservation of energy: The energy of the outgoing light can never be greater than the energy of the incoming light.

                ① In practice: as the roughness increases, the area of the specular highlight grows; as a balance, the average brightness of the highlight region decreases.

                ② How to conserve energy:

                        Use the reflectance equation. A simple reading of it:

                                Intensity of reflected light = intensity of incident light × reflection ratio × attenuation of incident light

                                Intensity of reflected light + self-emission intensity = final intensity of outgoing light

                                Attenuation of incident light: the dot product of the incident light direction and the normal (in Blinn-style specular terms, the half-angle vector appears instead)

                        The difficulty lies in determining the reflection ratio (the BRDF)
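In standard notation, the reading above is the reflectance equation (the same equation appears later as the Kajiya rendering equation):

$$L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{\Omega} f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$

where $L_e$ is the self-emission, $f_r$ is the reflection ratio (the BRDF), and $(n\cdot\omega_i)$ is the attenuation of the incident light.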

        3. BRDF (Bidirectional Reflectance Distribution Function)

                The bidirectional reflectance distribution function describes what proportion of the light arriving from an incoming direction leaves along a given outgoing direction.

                *BSDF = BRDF + BTDF (the scattering distribution function is the sum of the reflectance and transmittance distribution functions)

                 ② How is the BRDF calculated?

                        (1) Diffuse reflection: basically the Lambert illumination model (an empirical model).

         Kd is the diffuse reflection coefficient; it and the specular reflection coefficient sum to 1. In the (I/r²) term, r is the distance from the light source to the surface: light intensity is inversely proportional to the square of the distance, so the greater the distance, the lower the intensity. The intensity is also related to the angle of incidence: the closer the light is to perpendicular, the greater the intensity.
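A minimal sketch of the Lambert diffuse term described above (Python; the vector helpers and parameter names are illustrative assumptions, not from the original notes):

```python
import numpy as np

def lambert_diffuse(kd, light_intensity, light_pos, point, normal, albedo):
    """Lambert diffuse term: kd * albedo * (I / r^2) * max(0, n.l)."""
    to_light = light_pos - point
    r2 = np.dot(to_light, to_light)        # squared distance to the light
    l = to_light / np.sqrt(r2)             # normalized light direction
    n = normal / np.linalg.norm(normal)
    cos_theta = max(0.0, np.dot(n, l))     # angle attenuation
    return kd * albedo * (light_intensity / r2) * cos_theta
```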

                        (2) Specular reflection (empirical models, e.g. Phong and Blinn-Phong):

        The Blinn-Phong model is cheaper to evaluate and closer to real-world appearance.
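A hedged Blinn-Phong sketch (Python; `shininess` and the helper names are illustrative assumptions):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong_specular(ks, light_dir, view_dir, normal, shininess=32.0):
    """Blinn-Phong specular: ks * max(0, n.h)^shininess, where h is the
    half-angle vector between the light and view directions."""
    h = normalize(normalize(light_dir) + normalize(view_dir))
    n = normalize(normal)
    return ks * max(0.0, np.dot(n, h)) ** shininess
```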

                       (3) Physically based specular reflection:

         Kd + Ks = 1 (energy conservation between the diffuse and specular parts)

        Ⅰ  Normal distribution function (D): describes the statistical distribution of microfacet normals, i.e. what fraction of the microfacets are aligned with the half-angle vector.

         Ⅱ  Geometry (shadowing) function (G): accounts for microfacets shadowing and masking one another, which attenuates the light reflected from rough surfaces.

         Ⅲ  Fresnel equation (F): gives the proportion of light that is reflected as a function of the viewing angle; reflection grows stronger at grazing angles.
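A minimal Cook-Torrance sketch using common choices for the three terms (GGX for D, Schlick-GGX/Smith for G, Schlick's approximation for F); Python with illustrative names, assuming unit-length vectors:

```python
import numpy as np

def ggx_ndf(n_dot_h, roughness):
    """GGX / Trowbridge-Reitz normal distribution function D."""
    a2 = roughness ** 4                       # alpha = roughness^2, squared
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def smith_g1(n_dot_x, roughness):
    """Schlick-GGX geometry term for one direction."""
    k = (roughness + 1.0) ** 2 / 8.0          # direct-lighting remapping
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def fresnel_schlick(v_dot_h, f0):
    """Schlick approximation of the Fresnel equation."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def cook_torrance_specular(n, v, l, roughness, f0):
    """Specular BRDF: D * F * G / (4 (n.l)(n.v))."""
    h = (v + l) / np.linalg.norm(v + l)       # half-angle vector
    n_dot_l = max(np.dot(n, l), 1e-4)
    n_dot_v = max(np.dot(n, v), 1e-4)
    D = ggx_ndf(max(np.dot(n, h), 0.0), roughness)
    G = smith_g1(n_dot_v, roughness) * smith_g1(n_dot_l, roughness)
    F = fresnel_schlick(max(np.dot(v, h), 0.0), f0)
    return D * F * G / (4.0 * n_dot_l * n_dot_v)
```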

         4. Summary:

         The specular term (highlighted in the source's equation) is the core algorithm of the BRDF; the diffuse part in front of it is computed with the Lambert model.

        5. The Disney "principled" BRDF

  1. Intuitive, artist-friendly parameters should be used instead of obscure physical quantities.
  2. There should be as few parameters as possible.
  3. Parameters should run from 0 to 1 over their plausible range.
  4. Parameters may go outside the normal range when doing so is meaningful.
  5. All parameter combinations should be as robust and plausible as possible.

With about 11 parameters, the lighting of metals, non-metals and materials of different roughness can be simulated very realistically.

         6. Reference:

        https://learnopengl-cn.github.io/07%20PBR/01%20Theory/#pbr

        https://aras-p.info/texts/files/201403-GDC_UnityPhysicallyBasedShading_notes.pdf

        https://zhuanlan.zhihu.com/p/33464301

        https://www.zhihu.com/column/c_1249465121615204352

        https://www.zhihu.com/column/game-programming

        https://blog.uwa4d.com/archives/1582.html

  • Physically Based Camera Overview

        1. Exposure triangle

                ① Exposure is mainly controlled by three parameters:

                        Shutter Speed → Motion Blur

                        Aperture → Depth of Field

                        Sensitivity (ISO) → Noise (Grain)

         ② How they work together:

        These three parameters jointly regulate the amount of light reaching the sensor (controlled by the aperture and the shutter speed) and the sensitivity of the sensing surface (traditional film, or ISO on a modern digital sensor). Light first passes through the aperture (Aperture), then through the shutter (Shutter), and finally reaches the sensor (Sensor).

(Figure: the light path from the camera lens to the sensor)

         ③Camera assembly line:

         Scene Luminance → Lens Aperture → Sensor Illuminance → Shutter → Sensor Exposure → CCD → Analog Voltages → ADC → Digital Values → Remapping → Pixel Values

        The photographer sets the exposure by controlling three main parameters: the relative aperture, the shutter time, and the sensor sensitivity/gain.

        An analogy with the human eye:

        The aperture (Aperture) functions like the iris of the eye, expanding and contracting the diameter of its opening to limit the amount of light allowed to enter.

        Shutter speed is similar to blinking, except that the eyelids usually stay open while we are awake. Imagine your eyelids opening momentarily to capture a single image before closing, like a camera shutter.

        Sensitivity to light (ISO) is similar to the sensitivity of the rods and cones at the back of the eye.

        ④Exposure Compensation

        Exposure compensation is an exposure control method, generally within ±2~3 EV. In the film era only about 1 EV could be adjusted; modern digital cameras allow 1~3 EV. Every 1.0 increase in the EV value doubles the amount of incoming light; every 1.0 decrease halves it. If the ambient light is too dark for the three parameters above to achieve the desired result, or manual adjustment fails, you can raise the exposure value (e.g. to +1 EV or +2 EV) to bring out the picture. Exposure compensation deliberately overrides the "appropriate" exposure the camera computes automatically, making the photo brighter or darker, so photographers can tune brightness to their own intent and create distinctive visual effects. In practice the exposure value is changed by adjusting the aperture or the shutter speed; exposure compensation is applied last.

        ⑤Histogram

        A histogram is a bar chart representing the brightness distribution of a digital image: it plots the number of pixels at each brightness value. You can use it to see how the brightness distribution needs to be adjusted.

        On the horizontal axis, the left side corresponds to pure black and dark areas, while the right side corresponds to bright and pure white areas. A dark image therefore has its histogram data concentrated on the left and center; the opposite holds for an overall bright image with only a few shadows.

         The histogram is an important tool for judging whether a photo is correctly exposed, and modern digital cameras have one built in. Its main purpose is to give us a reference while shooting, ensuring the photo loses no detail in the highlights or shadows.

        Viewing photos on the camera screen is quite different from viewing them on a computer, and the camera screen is also affected by interference such as ambient light. The built-in histogram therefore provides an objective reference for judging, at capture time, whether the photo has lost detail or is overexposed.

        2. Shutter speed

                ① Definition: shutter speed measures how long the camera's shutter is open, i.e. the amount of time light is allowed to hit the photosensitive sensor.

                ② Motion blur: because of how they work, focal-plane shutters and rolling electronic shutters can produce interesting distortions in images when there is rapid motion on the image plane.

        Therefore, when shooting fast-moving subjects such as cars or running people, the shutter speed should be raised to counteract the afterimage effect. But a faster shutter means the sensor receives less light; how to compensate for that is discussed below. In a game, the same effect appears when we pause while rapidly rotating the camera, and it is used mainly to accentuate the subject's sense of movement and speed. (The source shows motion blur produced by the physical camera under the Unity HDRP pipeline.)

        3. Aperture

        ① The aperture is a device that controls how much light passes through the lens and reaches the sensor inside the body; it usually sits inside the lens.

        We cannot change the diameter of a lens once it is manufactured, but we can control the amount of light passing through it by adding a polygonal or circular diaphragm of variable area inside the lens. This device is called the aperture.

        ② The size of the aperture is expressed with the f-number N, written F/. It is the ratio of the lens focal length f to the aperture diameter D: N = f/D. The larger the diameter D, the more light enters the camera. The aperture is therefore not the f-number; on the contrary, the aperture size is inversely proportional to the f-number (also called the aperture number): a lens with a large aperture has a small f-number, and a lens with a small aperture has a large f-number.

        So: the aperture value and the aperture size are opposites. A small "aperture value" means a large aperture, and a large "aperture value" means a small aperture. For example, F/1.2, F/1.8 and F/2.8 all denote large apertures (F/ followed by a value denotes the aperture value), F/1.2 being the largest, while F/16, F/22 and F/32 denote small apertures, F/32 being the smallest.

        ③ The aperture can be thought of as a circle, and the area of a circle is π times the radius squared, so the light admitted scales with the square of the opening's diameter.

        So if you set the aperture to F/8, take a picture, and then open up to F/5.6, you double the amount of light passing through the lens. Going from F/8 to F/4 quadruples it; going from F/11 to F/16 halves it. This follows from the square-law relationship between the f-number and the area of the opening.
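For reference, these stop relationships follow from the standard exposure-value definition (a hedged LaTeX note; EV here is the usual camera-settings EV):

$$\mathrm{EV} = \log_2\frac{N^2}{t}$$

where $N$ is the f-number and $t$ the shutter time in seconds. Each +1 EV step admits half the light (one stop), and multiplying $N$ by $\sqrt{2}\approx 1.4$ (e.g. F/8 → F/11) raises the EV by exactly one stop.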

        ④ The size of the aperture affects depth of field (focus). The larger the aperture, the stronger the background blur; the smaller the aperture, the weaker the background blur.

         ⑤ Diffraction: the aperture not only controls how much light passes through the lens but also affects the angle at which it passes. Light bends slightly as it passes the edge of the aperture diaphragm. This bending is called "diffraction" and is a wave property of light.

        The smaller the aperture opening, the more the diffracted light spreads across the image. When photographers first learn about aperture, the effect of aperture on depth of field leads many to think the key to maximum sharpness is a small aperture. This is incorrect: although stopping down increases depth of field, it also increases the amount of diffraction in the image, causing a loss of sharpness.

        The middle range of apertures, where depth of field and diffraction are both under control, is known as the "sweet spot" of the lens, typically between f/4 and f/11 depending on the lens design.

        Diffraction can also be observed when viewing spider webs after rain, or optical discs, from the side, because the micro-surface of these objects is not uniform (similar to anisotropy).

        4. Sensitivity (ISO)

                ① ISO is linear, like shutter speed. ISO 200 is half as sensitive to light as ISO 400: double the ISO, double the sensitivity; halve the ISO, halve the sensitivity. Different ISO settings therefore give the picture different brightness.

        So, all else being equal, changing the camera's ISO from 400 to 200 gives a -1 EV (exposure) shift, and going from ISO 800 to ISO 1600, doubling the sensitivity, gives a +1 EV shift.

         ② Film grain (Grain):

                There are two ways a digital camera can increase ISO:

                1. Forcibly increase the brightness and contrast read from each pixel;

                2. Use multiple pixels together to do the job that one pixel would normally do.

        Both lead to a side effect, digital noise (grain): the higher the ISO, the more noise appears in the image. While camera manufacturers keep adding large numbers of pixels to new cameras, they have also worked hard to reduce noise at high ISO. A small amount of film-like grain can improve the feel and texture of a photo, but it is not an effect you can precisely control.

        5. Camera and engine

        ①Physically-based camera environment under HDRP:

        In the HDRP rendering pipeline, find Camera Body, Lens and Aperture under the Camera component and check Physical Camera. At the same time, add Exposure and Depth of Field overrides to a Volume component and set their Mode to use the physical camera. The physically based camera is then enabled. The "physics" here is based on camera and photography theory, but it is not complete at the current stage: Grain does not appear automatically from high ISO (you must set it up on the Volume yourself), and no diffraction results from a small aperture. What is currently supported is depth of field and the different exposures obtained from different value settings.

        ② Physically based camera shutter speed (Shutter Speed): with other attributes unchanged, the faster the shutter speed, the lower the exposure; the falloff is linear.

         ③ Physically based camera aperture (Aperture): with other attributes unchanged, the larger the aperture number, the smaller the exposure (decreasing exponentially) and the weaker the depth-of-field effect.

         ④ Physically based camera sensitivity (ISO): with other attributes unchanged, the smaller the sensitivity, the smaller the exposure value. Because HDRP has no built-in grain response, high sensitivity does not add a grain effect.

         ⑤ Physically based camera exposure compensation (Exposure Compensation): exposure compensation can correct the over- or under-exposure caused by insufficient settings of the other parameters.

         ⑥ Physically based camera focal length (Focal Length): the focal length is mainly used for focusing. SLR cameras usually focus automatically or manually; here manual focus is simulated. Different focal lengths simulate different types of lenses, from macro lenses for shooting flowers, fish and insects to telephoto lenses for birding and shooting the moon.

         ⑦ Self-illumination and Bloom effects:

                Causes of Bloom: 1. High brightness values saturate the sensor of a digital camera and leak into adjacent sensor cells. 2. Light is reflected inside the camera lens. 3. There is dust inside or on the surface of the camera lens.

        The final brightness of a pixel depends on the camera settings, and the EV value determines whether Bloom appears: Bloom occurs when the incident brightness exceeds the maximum brightness value of the sensor itself.

        The conundrum: effects designers want a Bloom that looks the same during the day and at night. For self-illuminating (emissive) surfaces, designers can be given tools to control Bloom:

  1. Define an "exposure compensation" for emissive surfaces that adjusts their intensity to ensure it exceeds the saturation point.
  2. This is a scene-global value; extra exposure is injected into the scene to make emissive surfaces brighter or darker.

        HDRP's intensity control for self-illumination: _EmissiveIntensity is not an independent property. The shader only uses _EmissiveIntensity to serialize the property in the UI and stores the final result in the _EmissiveColor property.

         ⑧ Sunny-16 rule: in photography, a way to estimate aperture and shutter settings without the help of an electronic light meter.

        Outdoors in sunlight, if the aperture is f/16, the shutter speed should be the reciprocal of the ISO rating of the film in use.

        For example, at f/16 in outdoor sunlight with ISO 100, the shutter speed should be 1/100 s (or 1/125 s). The "16" does not mean the aperture must be f/16: if f/11 is selected, the shutter speed should be doubled to about 1/200 s, and so on. The rule is also adjusted to the weather: if it is cloudy rather than sunny and the ISO is 100, keeping the shutter at 1/100 s means the aperture should be opened one stop from f/16 to f/11; and so on.
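A small sketch of the rule (Python; the helper name and the stop arithmetic are illustrative assumptions):

```python
import math

def sunny16_shutter(iso, f_number=16.0):
    """Sunny-16: at f/16 the shutter time is 1/ISO; every stop the
    aperture opens (one factor of sqrt(2) in f-number) halves it."""
    stops_open = 2.0 * math.log2(16.0 / f_number)   # stops relative to f/16
    return (1.0 / iso) / (2.0 ** stops_open)

print(sunny16_shutter(100, 16.0))   # 0.01   -> 1/100 s
print(sunny16_shutter(100, 11.0))   # ~0.005 -> about 1/200 s
```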

  • Physically Based Lighting

        1. Basic theory of radiometry

                ① Definition: radiometry is the science of measuring electromagnetic radiation energy across the entire electromagnetic spectrum. The radiometry used in computer graphics focuses on computing radiant energy in the visible part of the optical spectrum.

                 ② Solid angle: the measure of the cone of directions corresponding to an area on the unit sphere.

        When the area of a region on a sphere equals the square of the sphere's radius, the solid angle that region subtends at the center is 1 sr.

         Unit solid angle: the solid angle subtended by an area of 1 square meter on a sphere of radius 1 meter, i.e. one steradian.

                ③ Radiant flux: to measure the total energy carried per unit time by the photons crossing a surface, physics introduces the concept of radiant flux (Radiant Flux). Radiant flux is the power emitted, transmitted or received in the form of radiation, i.e. radiant energy per unit time; its unit is W and it is denoted Φ.

                 ④ Radiant intensity: in a given direction, the radiant flux emitted by a light source per unit solid angle.

                 ⑤ Radiance: the radiant flux emitted by a radiating surface per unit solid angle per unit projected area.

        Radiance characterizes the directional emission of a surface element of a light source. For example, it is meaningless to describe the emission of one patch of an incandescent bulb's surface; the bulb as a whole should be treated as a point source and described by its radiant intensity in a given viewing direction.

                ⑥ Irradiance: irradiance measures the flux of light arriving per unit area of a surface; it can also denote the flux leaving a surface, i.e. the incident and exitant cases.
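In standard notation (a compact summary of the quantities above):

$$\Phi=\frac{dQ}{dt}\ \mathrm{[W]},\quad I=\frac{d\Phi}{d\omega}\ \mathrm{[W/sr]},\quad E=\frac{d\Phi}{dA}\ \mathrm{[W/m^2]},\quad L=\frac{d^2\Phi}{d\omega\,dA\cos\theta}\ \mathrm{[W/(sr\cdot m^2)]}$$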

         2. Basic theory of photometry

         3. Light types

                ① Punctual lights:

        Frostbite and Unity3D support only two types of punctual light: point lights and spot lights. For a punctual light to be physically correct, it must obey the inverse-square law: the observed intensity from a light source of constant brightness falls off in proportion to the square of the distance to the object. The inverse-square law is only valid for punctual lights.

It is not suitable for:

  1. Floodlights and searchlights with a narrow distribution, because the beam is highly focused.
  2. Area lights, or special lights such as Fresnel lenses.

A common non-physically-based hack limits a light's effect to a falloff radius; the windowing function is squared to keep the falloff smooth, as sketched below.
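A hedged sketch of this windowed inverse-square attenuation (Python; the window form follows the commonly cited Frostbite-style falloff, and all names are illustrative):

```python
def punctual_light_attenuation(distance, falloff_radius):
    """Inverse-square falloff windowed to reach zero at falloff_radius;
    the window is squared so the falloff stays smooth."""
    inv_sqr = 1.0 / max(distance * distance, 1e-4)   # inverse-square law
    ratio4 = (distance / falloff_radius) ** 4
    window = max(0.0, 1.0 - ratio4) ** 2             # squared smooth window
    return inv_sqr * window
```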

         ② IES: a light's intensity distribution is described by a photometric profile stored in a photometric file. Two common formats exist: IES (.ies) and EULUMDAT (.ldt). Computer graphics tools mostly support only the IES format, and Unity and Frostbite are no exception.

         Why use IES profiles? IES profiles are more common in lighting and interior design than in games, but being able to use a light profile as a mask yields interesting effects. IES profiles can be authored with dedicated tools and used to simulate complex light-and-shadow effects, similar to light cookies.

         ③ Sun

        The sun is a very important light source, especially for outdoor environments, and the perceived result is very sensitive to changes in its direction and illuminance. Treating it as a punctual light is an acceptable approximation for the diffuse part of a material, but doing so for specular materials is problematic. To partially alleviate these problems, Frostbite treats the sun as a disc area light that is always perpendicular to the outer hemisphere. The artist specifies the solar illuminance, in lux, for a surface perpendicular to the sun's direction. (The source reproduces the illuminance reference table used.)

         4. Lighting parameters

(Reference tables in the source: recommended parameters for natural and artificial indoor light, and Unity lighting and exposure charts.)

  • PBR workflow

        The two most commonly used PBR workflows are Metallic/Roughness and Specular/Glossiness.

        1. Metallic/Roughness workflow (Metallic/Roughness):

        The maps used by this workflow are Base Color (base color map), Metallic (metallic map), Roughness (roughness map), Ambient Occlusion (AO map), Normal (normal map), and Height (height map).

       F0 is the value that controls the reflectivity of the object. For non-metallic objects, the reflectance value is uniformly set to 0.04 (linear space); some M/R implementations expose a control over the range (0.0-0.08).

        ① Base Color | base color map (RGB map, sRGB)

        The Base Color map is an RGB map containing two types of data: diffuse color and reflectance color. The color represents the wavelengths of light the surface reflects; white reflects strongly, black weakly. If an area is marked as metal in the metallic map (white in the metallic map), the Base Color there is interpreted as the metal's reflectance value.

        The tone of a Base Color map generally looks flat, and its contrast is lower than that of a traditional Diffuse map (which bakes in light and shadow information). Note: values that are too bright or too dark may harm subsequent lighting and shading.

         ② Metallic | Metal map (grayscale map-Linear)

        The metallic map can be understood as a mask that tells the shader how to interpret the RGB data in the Base Color map. 0.0 (pure black, 0 sRGB) denotes non-metal, while 1.0 (pure white, 255 sRGB) denotes raw metal (metal with no corrosion).

        Raw metal's grayscale range in the metallic map is roughly 235-255 sRGB. Corroded metal carries rust or paint, and those rusted or painted parts are still treated as non-metal.

        ③  Roughness | roughness map (grayscale map-Linear)

         The roughness map describes how rough the object's surface is; a rough surface scatters light diffusely. The rougher a stone's surface, the more diffused its reflected highlights; the smoother a glass surface, the more concentrated its specular reflection and the stronger the highlight appears. Pure black (0.0) represents a smooth surface, and pure white (1.0) represents a rough surface.

         ④Summary:

        Advantages: in the M/R workflow, since the F0 of dielectrics (non-conductors) is fixed by convention, designers are less likely to make mistakes when assigning dielectric F0; texture memory pressure is lower, because the metallic and roughness maps are grayscale; and it is currently the most widely supported workflow.

        Disadvantages: the dielectric F0 is fixed at 4% and cannot be adjusted, although most implementations provide a control that can override this value, so it is a minor flaw; white edge artifacts are more visible, especially at low resolutions.
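A hedged sketch of how a shader typically derives its shading inputs from M/R maps (Python; the fixed 0.04 dielectric F0 comes from the text above, everything else is illustrative):

```python
import numpy as np

def metallic_roughness_inputs(base_color, metallic, roughness):
    """Derive diffuse color and specular F0 from M/R workflow maps.
    base_color: linear RGB; metallic, roughness: grayscale in 0..1."""
    base_color = np.asarray(base_color, dtype=float)
    dielectric_f0 = np.array([0.04, 0.04, 0.04])     # fixed dielectric F0
    # Metals have no diffuse color; dielectrics keep the base color.
    diffuse = base_color * (1.0 - metallic)
    # Metals tint their specular reflectance with the base color.
    f0 = dielectric_f0 * (1.0 - metallic) + base_color * metallic
    return diffuse, f0, roughness

# A half-metallic red surface:
print(metallic_roughness_inputs([0.8, 0.1, 0.1], metallic=0.5, roughness=0.4))
```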

        2. Specular/Glossiness workflow (Specular/Glossiness):

         The textures unique to the S/G workflow are Diffuse (diffuse map), Specular (specular reflection map), and Glossiness (glossiness map). In this workflow, both the reflectance values of metals and the F0 values of dielectrics are stored in the specular reflection map (Specular), so you can control the F0 of dielectric materials directly (which also makes it easy to give the map wrong values).

        ① Diffuse | Diffuse texture (RGB texture-sRGB)

        Only the albedo color (Albedo) is included in the diffuse map, so areas of raw metal are marked black (0.0), since metal has no diffuse color.

       ② Specular | Specular reflection map (RGB map-sRGB)

        The specular reflection map defines metal reflectance values and dielectric (non-conductor) F0 values. Because it is an RGB map, dielectric materials can be given different F0 values; the F0 of both metals and dielectrics is written into its RGB channels.

        The F0 of raw metal should be set from real-world measured data. If a metal is oxidized, or some covering layer defines it as non-metal, the reflectance of that area is reduced. In the S/G workflow, dirt and oxide layers thus get lighter color values in the diffuse map and lower values in the specular map.

         ③ Glossiness | Glossiness map (grayscale map-Linear)

        Glossiness maps describe surface unevenness, which causes light to scatter. In this map, pure black (0.0) represents a rough surface and pure white (1.0) a smooth surface, exactly the opposite of the roughness map in the M/R workflow, although the design principles are similar.

         ④Summary:

        Advantages: less pronounced edge artifacts; the F0 of dielectric materials can be adjusted freely in the specular map.

        Disadvantages: because dielectric F0 can be set freely in the specular map, designers can easily enter wrong values, and those values may break energy conservation once interpreted by the shader, producing incorrect rendering. The additional RGB specular map also costs more memory and bandwidth.

        Some S/G terminology is very similar to the traditional workflow while the underlying data may differ, which invites misunderstanding or mistakes. The workflow therefore demands better knowledge of PBR theory from designers: knowing correct dielectric F0 values, that metals appear pure black in the diffuse color, and the basics of energy conservation when the shader does not auto-correct.

        3. General textures:

        ① Ambient Occlusion | Ambient Occlusion / Ambient Light Absorption Map (AO) 

        AO maps define how exposed each point on a surface is to ambient light, and are often used in post-style effects (emphasizing creases and shadow details at interfaces, etc.).

        ② Height | height map

        Height maps are often used to render displacement (Displacement) effects; a height map can serve as a parallax map to add more convincing depth to a texture, more realistic than normal or bump maps alone.

        ③ Normal | Normal map

        Normal maps can simulate surface detail. The RGB channels of the texture correspond to the X, Y and Z components of the surface normal at each point; they can bake high-poly detail and map it onto a low-poly model.

 F0: F0 is fixed in the M/R workflow. Regarding the F0 range of dielectric (non-conducting) materials for modern PBR shaders: non-conducting materials reflect less light than metals. Common dielectrics have an F0 of 2-5%, corresponding to an sRGB range of roughly 40-75, i.e. a linear range of 0.02-0.05. The S/G workflow is well suited to gems, plastics and other reflective but non-metallic objects.

Source: URP | PBR Material (1) - Introduction to 2 Workflows - 哔哩哔哩 (bilibili.com)

2. Ray tracing, path tracing, ray casting

  • Ray Tracing

        1. Concept: a rendering framework parallel to rasterized rendering.

        2. Rasterized rendering: the scene is decomposed level by level, object → triangle → pixel, so global information is lost, and accurately achieving global effects (such as soft shadows and indirect lighting) hits a bottleneck.

        Ray tracing: starting from the camera, a ray is projected into the scene through each pixel until it intersects the first object in the scene. At the intersection, the color is computed from the object's properties, the light source's properties, and the lighting model. Then, following the reflection or refraction at the intersection, the reflected or refracted ray is traced until it hits the next surface, where the color is again computed, and so on until the ray reaches a light source or escapes the scene. Ray tracing decomposes rendering by rays; the rays are independent and can be processed in parallel, and ray tracing can recover the global information of a scene.

         3. Computing ray-object intersections (recursive ray tracing):

                ① Solve the ray equation and the surface equation simultaneously;

                ② Intersect the ray with the triangles that make up the object: first compute the intersection of the ray with the plane containing the triangle, then test whether that point lies inside the triangle; or use the Möller-Trumbore algorithm to compute the ray-triangle intersection directly, as sketched below.
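A minimal Möller-Trumbore sketch (Python; returns the hit distance t or None; the function and parameter names are illustrative):

```python
import numpy as np

def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-8):
    """Direct ray-triangle intersection: solves
    origin + t*direction = (1-u-v)*v0 + u*v1 + v*v2 via Cramer's rule."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None       # hit distance along the ray
```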

                ③ Fast rejection: surround the object with a bounding box (usually a cuboid) that contains it entirely. If the ray does not intersect the bounding box, it certainly does not intersect the object. Usually an axis-aligned bounding box (AABB) is used, i.e. the edges of the box are parallel to the coordinate axes.

                Space partitioning: although the bounding box contains the object, there are still empty gaps inside it. The box can be divided into grid cells to record which cells actually contain geometry. (Data structures: octree, kd-tree, BSP tree, etc.)

        4. Drawbacks of recursive (Whitted-style) ray tracing: it is not physically rigorous, and its reflection and refraction are idealized. It suits mirror-like reflectors and the diffuse result of direct lighting, but does not account for light reflected from other objects.

  • Ray Casting

        1. Concept: starting from the camera, a ray is cast into the scene through each pixel until it intersects the first object in the scene. At the intersection, the color is computed from the object's properties, the light source's properties, and the lighting model.

        2. Disadvantage: only the cast ray is considered; the ray is not traced further (no subsequent reflection, refraction, etc.).

         3. Volumetric ray casting: volume rendering (Volume Rendering) technique

        Ray casting is used to sample the 3D textures of volume rendering, as in the sketch below:
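A hedged sketch of front-to-back compositing along a cast ray (Python; the `volume` sampling callback and all names are illustrative assumptions):

```python
import numpy as np

def volume_ray_cast(volume, origin, direction, step=0.01, max_steps=512):
    """Front-to-back 'over' compositing along a ray through a volume.
    `volume(p)` returns (rgb, alpha) sampled at point p (illustrative)."""
    color, alpha = np.zeros(3), 0.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(max_steps):
        sample_rgb, sample_a = volume(p)
        color += (1.0 - alpha) * sample_a * np.asarray(sample_rgb)
        alpha += (1.0 - alpha) * sample_a
        if alpha > 0.99:                    # early ray termination
            break
        p = p + d * step
    return color, alpha
```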

  • Path Tracing

        1. Concept: when a ray meets a surface and needs to be reflected, construct a hemisphere around the surface normal, emit a bundle of rays in several directions over that hemisphere, and recursively compute each ray's lighting contribution to the surface point.

         2. Kajiya rendering equation:

$$L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{\Omega} f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$

        In words: the color finally observed at point p along direction $\omega_o$ consists of two parts: the self-emitted term plus the sum of the light reflected toward $\omega_o$ from all incoming directions.

        3. Problems and solutions:

                ① Solving the integral → use Monte Carlo integration to evaluate the hemisphere integral.

                ② Too many rays would need to be sampled → emit multiple rays per pixel, and let each ray spawn only one reflected ray per bounce.

                 ③ Termination condition for the ray recursion (if a ray has bounced tens of thousands of times without hitting a light source, continuing is unrealistic) → Russian roulette: generate a random number at each bounce and decide from it whether to continue.

                ④ When the light source area is small, the probability of a camera ray reaching it becomes small; much of the light fails to reach the source and is wasted, producing heavy noise → sample the light source directly (the sampling domain changes from the hemisphere to the light's surface).
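A compact one-sample path-tracing estimator with Russian roulette (Python pseudocode made concrete; `scene`, the `hit` record, and the material interface are illustrative assumptions):

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sample_hemisphere(normal):
    """Uniformly sample a direction on the hemisphere around `normal`;
    returns (direction, pdf) with pdf = 1 / (2*pi)."""
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        if sum(x * x for x in d) <= 1.0:
            break
    n = math.sqrt(sum(x * x for x in d))
    d = [x / n for x in d]
    if dot(d, normal) < 0.0:                # flip into the normal's hemisphere
        d = [-x for x in d]
    return d, 1.0 / (2.0 * math.pi)

def trace_path(scene, ray, rr_prob=0.8):
    """One-sample path tracing; Russian roulette ends the recursion
    with probability 1 - rr_prob."""
    hit = scene.intersect(ray)              # closest hit, or None
    if hit is None:
        return 0.0
    radiance = hit.emission                 # L_e: self-emission term
    if random.random() > rr_prob:           # Russian roulette termination
        return radiance
    wi, pdf = sample_hemisphere(hit.normal)
    brdf = hit.material.brdf(wi, ray.direction, hit.normal)
    cos_theta = max(0.0, dot(wi, hit.normal))
    li = trace_path(scene, hit.spawn_ray(wi), rr_prob)
    # Monte Carlo estimate of the reflection integral, divided by the
    # roulette survival probability to keep the estimator unbiased.
    radiance += brdf * li * cos_theta / (pdf * rr_prob)
    return radiance
```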

  • Ray Marching

  • Summary

5. Volume Rendering

  • Media and Optical Transport

  • Phase Function

  • Volume Rendering Equation

  • Volumetric Path Tracing

  • Volume Radiance Cache

  • Volumetric Photon Mapping

  • Bidirectional Path Tracing

  • Volumetric Bidirectional Path Tracing

6. Water body rendering

  • Governing equations and non-divergent flow fields (basics)

        1. Fluid description method

                ① Lagrangian method:

                ② Euler method:

  • Convolution (Tools)
  • Fourier and Laplace Transforms (Tools)
  • z-Transform and DFT (Tools)
  • FFT/IFFT (Tools)
  • FFT Water Simulation (Applied)
  • Lagrangian/Eulerian Methods

  • Rendering

        1. Basic idea:

                ① Shading: diffuse reflection, specular reflection, normal maps, refraction, white foam

                ② Effect improvements: transparency (depth-based lookup table, subsurface scattering), flow behavior, underwater fog

        2. LOD:

        ① Geometry Clipmaps

        ② Rendering performance optimization

        LOD 0: render refraction with the complete shader, for the shallow-water part only; LOD 1: no reflection or transparency processing.

        3. Specular highlights:

                ① Minimal Cook-Torrance

        The original Cook-Torrance (a shading model based on microfacet theory) has D, F and G terms. The simplified version keeps D (the normal distribution function), F (Fresnel) and V (the visibility term, a practice from UE that folds the G term, the geometry function, together with the original denominator), plus the diffuse ratio kd and the specular ratio ks.
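In LaTeX form, the folding described above is the standard identity (not specific to these notes):

$$V(l,v) = \frac{G(l,v,h)}{4\,(n\cdot l)(n\cdot v)}, \qquad f_{\mathrm{spec}} = D\,F\,V$$

so the specular BRDF $\frac{D\,F\,G}{4(n\cdot l)(n\cdot v)}$ can be evaluated as the product of the three terms.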

                 ② URP

         If you want to output only the specular highlight in URP, you can manually modify the properties filled in by InitializeBRDFData so that only the specular color and smoothness are produced, ensuring the subsequent DirectBRDF functions output only the highlight.

         4. Rendering process

                 ① Foam control

        Using the wave height, regions far above the mean water surface are marked as white foam and then blended with a white-foam map to achieve the wave-crest effect. White foam at the shore should likewise be generated from the distance between the water surface and the bottom.

                ② Subsurface effect

        First, a depth-based color lookup (linear or exponential) gives the water a base color; a disturbance color is then added on top and blended according to waveHeight to achieve a volumetric effect.

        As the foam moves, the color changes with the rolling waves and foam, so you can take one channel of the foam texture and blend between the depth shading and the disturbance shading.

                ③ Normal

        In addition to the water body's own GeometricNormal, a BumpedNormal is needed to perturb the surface and add detail.

                ④Refraction and reflection

        Use the Fresnel term to mix refraction and reflection. You can sample a value uniformly in 0-1: when it is greater than the Fresnel term, the sample contributes to reflection; otherwise it contributes to refraction. The water's refraction also depends on the water depth. A deterministic blend is sketched below.
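A hedged sketch of a deterministic Fresnel blend (Python; Schlick's approximation, with illustrative names; f0 ≈ 0.02 follows from water's index of refraction of about 1.33):

```python
def schlick_fresnel(cos_theta, f0=0.02):
    """Schlick approximation of the Fresnel term."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade_water(reflection_rgb, refraction_rgb, n_dot_v):
    """Blend reflection and refraction colors by the Fresnel term:
    grazing angles (small n.v) reflect more, head-on views refract more."""
    f = schlick_fresnel(max(0.0, n_dot_v))
    return [f * rl + (1.0 - f) * rf
            for rl, rf in zip(reflection_rgb, refraction_rgb)]
```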

                ⑤ Flow

        Scrolling the normal-map UVs with a vector field (flowmap) achieves the flowing effect.

        5. Reflection

                ①Fresnel

                ② Disturbance

        6. Refraction and caustics

        GPU Gems caustics simulation: one of its assumptions is that the sun is directly overhead, so the incident ray v is computed from the normal n via the law of refraction, and the angle between v and n is then used to estimate the density of refracted sunlight, i.e. the caustic intensity.

        7. Foam

                ① Wave crest

                 ② Shore

         8. Subsurface Scattering

                 ①BSSRDF

                 ② Depth-based lookup table

        Changing the water color according to the water depth approximates a pseudo-subsurface effect; a disturbance can be added on top of it.

                 ③ Fast subsurface

Origin: blog.csdn.net/weixin_56784984/article/details/128585895