Irradiance map generation algorithm analysis

To follow this post, readers should already have some basic IBL (image-based lighting) background: radiometry concepts such as radiance, irradiance and solid angles, and how spherical coordinates convert to and from Cartesian coordinates.
The official reference is: https://learnopengl.com/PBR/IBL/Diffuse-irradiance
The source code is here: https://github.com/JoeyDeVries/LearnOpenGL.git
The complete sample code for this chapter lives in that repository; readers are free to download it and analyze it alongside this post.

What follows are my notes from re-reading the tutorial.

1. If there are only a limited number of light sources (point lights, spotlights or directional lights), then computing how each light affects a surface point in the scene is intuitive and simple: take the Lambert dot product of n and l, apply the corresponding attenuation, and the lighting calculation is done.
This is direct lighting.
Its advantage is that the calculation is simple and intuitive; its disadvantage is that the influence of the surroundings is ignored (specifically, the effect of the environment map on the object). A minimal sketch of this direct term is shown right after this list.
2. If the surrounding environment (an environment map or skybox) is taken into account, what does that mean? It means that every texel of the environment map acts as a point light source contributing to the lighting of each point on the object. This is the core of IBL.
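
As mentioned in point 1, the direct term is just an n·l Lambert factor plus attenuation. A minimal illustrative sketch for a single point light (using glm; the function and parameter names are made up for this example):

#include <glm/glm.hpp>

// Lambert diffuse from one point light with inverse-square attenuation (illustrative only).
glm::vec3 directLambert(const glm::vec3& albedo, const glm::vec3& n, const glm::vec3& fragPos,
                        const glm::vec3& lightPos, const glm::vec3& lightColor)
{
    glm::vec3 l     = glm::normalize(lightPos - fragPos);              // direction from the surface to the light
    float     ndotl = glm::max(glm::dot(glm::normalize(n), l), 0.0f);  // the n . l Lambert term
    float     dist  = glm::length(lightPos - fragPos);
    float     atten = 1.0f / (dist * dist);                            // simple inverse-square falloff
    return albedo * lightColor * ndotl * atten;                        // summed over every light in a real shader
}

In a real renderer this is evaluated once per light and accumulated, which is exactly why the approach stops scaling once every texel of the environment map becomes a light source.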

So how is this implemented in practice?
The answer is to preprocess the environment map. First we need to be clear about the algorithm's flow:

  1. Take a point p on the object as an example:
    it needs to take into account the incoming light from every direction that can influence it, so an integration over all of those directions is required.
    Such an integral is impractical to evaluate during real-time lighting, so the question is whether the environment map can be preprocessed instead, so that at run time, given a direction, we simply sample the precomputed result.

The answer is yes, and explaining how is the main purpose of this section.
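
In formula form, the diffuse part of the reflectance equation that would have to be solved per shading point is (as given in the referenced tutorial):

L_o(p, \omega_o) \;=\; k_d \, \frac{c}{\pi} \int_{\Omega} L_i(p, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

The irradiance map precomputes this hemisphere integral for every possible normal direction N, so the PBR shader only needs a single cube-map lookup in the direction of N at run time.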

  1. Prepare the shader that pre-convolves the environment map:
Shader irradianceShader("2.1.2.cubemap.vs", "2.1.2.irradiance_convolution.fs");
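
For context, here is a condensed sketch of how this shader is driven to bake a small irradiance cubemap, based on the reference implementation linked above (not verbatim); envCubemap, captureFBO, captureRBO, captureProjection, captureViews and renderCube() are assumed to exist as in the LearnOpenGL sample:

// Convolve the HDR environment cubemap into a small 32x32 irradiance cubemap.
unsigned int irradianceMap;
glGenTextures(1, &irradianceMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, irradianceMap);
for (unsigned int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 32, 32, 0, GL_RGB, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32);

irradianceShader.use();
irradianceShader.setInt("environmentMap", 0);
irradianceShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);

glViewport(0, 0, 32, 32);   // a tiny target is enough: irradiance varies very smoothly
for (unsigned int i = 0; i < 6; ++i)
{
    irradianceShader.setMat4("view", captureViews[i]);   // look down each of the 6 cube faces
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, irradianceMap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderCube();   // a cube centered at the origin, so aPos below doubles as a direction
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);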

The vertex shader is:

#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 WorldPos;
uniform mat4 projection;
uniform mat4 view;
void main()
{
    WorldPos = aPos;  
    gl_Position =  projection * view * vec4(WorldPos, 1.0);
}

The vertex shader has a single input, aPos, and two outputs: WorldPos and gl_Position.
Since the geometry being rendered is a cube centered at the origin, the local position aPos is passed straight through as WorldPos, which the fragment shader will treat as a direction.

The fragment shader looks like this:

#version 330 core
out vec4 FragColor;
in vec3 WorldPos;
uniform samplerCube environmentMap;
const float PI = 3.14159265359;
void main()
{		
	// The world vector acts as the normal of a tangent surface
    // from the origin, aligned to WorldPos. Given this normal, calculate all
    // incoming radiance of the environment. The result of this radiance
    // is the radiance of light coming from -Normal direction, which is what
    // we use in the PBR shader to sample irradiance.
    vec3 N = normalize(WorldPos);
    vec3 irradiance = vec3(0.0);   
    // tangent space calculation from origin point
    vec3 up    = vec3(0.0, 1.0, 0.0);
    vec3 right = cross(up, N);
    up            = cross(N, right);

WorldPos is used directly as the normal direction N of the tangent space.
The up vector of the tangent space is initially defined as (0.0, 1.0, 0.0).
OpenGL uses a right-handed coordinate system, so the cross product of up and N gives the right vector,
and then the cross product of N and right gives the corrected up vector.
In this way right, up and N are mutually orthogonal and together form a tangent space.
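
As a quick host-side check (illustrative only, not part of the original shader or sample), the same construction with glm confirms that the three axes really are mutually orthogonal:

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    glm::vec3 N     = glm::normalize(glm::vec3(0.3f, 0.8f, 0.5f));  // any unit normal
    glm::vec3 up    = glm::vec3(0.0f, 1.0f, 0.0f);
    glm::vec3 right = glm::cross(up, N);    // degenerates to zero if N is parallel to up
    up              = glm::cross(N, right);

    std::printf("right.N = %g, up.N = %g, right.up = %g\n",
                glm::dot(right, N), glm::dot(up, N), glm::dot(right, up));  // all ~0
    return 0;
}

Back in the fragment shader, the convolution loop over the hemisphere follows: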

	float sampleDelta = 0.025;
    float nrSamples = 0.0;
    for(float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
    {
        for(float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
        {
            // spherical to cartesian (in tangent space)
            vec3 tangentSample = vec3(sin(theta) * cos(phi),  sin(theta) * sin(phi), cos(theta));
            // tangent space to world
            vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N; 

            irradiance += texture(environmentMap, sampleVec).rgb * cos(theta) * sin(theta);
            nrSamples++;
        }
    }
    irradiance = PI * irradiance * (1.0 / float(nrSamples));
    FragColor = vec4(irradiance, 1.0);
}

The first for loop: the increment is sampleDelta = 0.025 and the range is 0 to 2π (the azimuth angle φ).
The second for loop: the increment is sampleDelta = 0.025 and the range is 0 to 0.5π (the zenith angle θ).
Each spherical-coordinate sample is then converted to Cartesian coordinates:

x = r * sinθ * cosφ
y = r * sinθ * sinφ
z = r * cosθ

This is the standard conversion between Cartesian coordinates (x, y, z) and spherical coordinates (r, θ, φ); here r = 1, so tangentSample is a unit direction on the hemisphere around the local +Z axis.
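The cosθ·sinθ weight and the final line irradiance = PI * irradiance * (1.0 / float(nrSamples)) come from discretizing the hemisphere integral. Writing the solid-angle element as dω = sinθ dθ dφ and taking n1 samples over φ and n2 samples over θ (so nrSamples = n1·n2):

\int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi/2} L_i(\phi,\theta)\,\cos\theta\,\sin\theta \, d\theta \, d\phi
\;\approx\; \frac{2\pi}{n_1}\,\frac{\pi/2}{n_2} \sum_{j=1}^{n_1} \sum_{k=1}^{n_2} L_i(\phi_j,\theta_k)\,\cos\theta_k\,\sin\theta_k

The right-hand side equals π²/(n1·n2) times the double sum, while the shader stores only π/(n1·n2) times the sum, i.e. the integral divided by π. That missing 1/π is exactly the Lambertian BRDF factor c/π, folded into the precomputation so that the PBR shader can later just multiply the sampled value by the albedo.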
We can visualize these sample directions in Unity with a small gizmo script (attach it to any GameObject and set the normal field in the Inspector to a non-zero direction):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    public Vector3 normal;
    public float length = 4;
    public float sampleDelta = 1.0f;
    private const float PI = 3.14159265359f;

    public void OnDrawGizmos()
    {
        Vector3 N = Vector3.Normalize(normal);    // the normal set in the Inspector
        Vector3 up = new Vector3(0.0f, 1.0f, 0.0f);
        Vector3 right = Vector3.Cross(up, N);     // same tangent-frame construction as the shader
        up = Vector3.Cross(N, right);
        Gizmos.color = Color.blue;
        Gizmos.DrawLine(Vector3.zero, N.normalized * 2 * length);
        Gizmos.color = Color.red;
        Gizmos.DrawLine(Vector3.zero, right.normalized * 2 * length);
        Gizmos.color = Color.green;
        Gizmos.DrawLine(Vector3.zero, up.normalized * 2 * length);

        // Walk the hemisphere exactly like the fragment shader, drawing one white line per sample direction.
        Gizmos.color = Color.white;
        for (float phi = 0.0f; phi < 2.0 * PI; phi += sampleDelta)
        {
            for (float theta = 0.0f; theta < 0.5 * PI; theta += sampleDelta)
            {
                Vector3 tangentSample = new Vector3(
                    Mathf.Sin(theta) * Mathf.Cos(phi), 
                    Mathf.Sin(theta) * Mathf.Sin(phi), 
                    Mathf.Cos(theta));

                Vector3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N;
                Gizmos.DrawLine(Vector3.zero, sampleVec.normalized * length);
            }
        }
    }
}

(Scene-view screenshots: the blue N axis, red right axis, green up axis, and the fan of white lines covering the hemisphere of sample directions.)
What is happening here is really just a coordinate (basis) transformation, and a small change of perspective makes it even simpler to understand.

Look at the vector (1,1): in the standard coordinate system, with x axis (1,0) and y axis (0,1), its coordinates are (1,1).
Now imagine rotating the coordinate system by 45 degrees, so that the new x' axis lies along (1,-1) and the new y' axis along (1,1).
What are the coordinates of the vector (1,1) in this rotated coordinate system?
They follow naturally from dot products with the new axes:
x' projection: (1,1) · (1,-1) = 0
y' projection: (1,1) · (1,1) = 2
The raw projections are therefore (0,2); dividing each by the squared length of its axis (which is 2) gives the actual coordinates (0,1), i.e. (1,1) = 0·(1,-1) + 1·(1,1).
So this is a change of basis: the vector itself does not move, only its coordinates change along with the coordinate system.

Now back to the question above. If we instead apply the formula used in the shader:

Vector3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N;

what vector do the local coordinates (1,1) evaluate to?
1·(1,-1) + 1·(1,1) = (2,0)
This time we have gone the other way: coordinates given in the local (tangent) frame have been rebuilt into a vector in the original (world) frame.

So now we can see what the sampling process above really does: it builds a local coordinate frame around the normal, defines the sample directions on the hemisphere in that local frame,
and then transforms those local coordinates into world-space directions.
When the normal of the hemisphere changes, the sample directions rotate along with it, but they always stay in the hemisphere above the surface. This is exactly what makes the convolution precomputable.
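
To make both directions of this basis change concrete, here is a small illustrative sketch using glm (not part of the original post), reusing the raw axes (1,-1) and (1,1) from the example above:

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    // The 2D example embedded in 3D (z unused). The axes are orthogonal but not unit length.
    glm::vec3 right(1.0f, -1.0f, 0.0f);   // rotated x' axis
    glm::vec3 up   (1.0f,  1.0f, 0.0f);   // rotated y' axis
    glm::vec3 N    (0.0f,  0.0f, 1.0f);   // completes the frame

    // World -> local: project onto each axis with a dot product.
    glm::vec3 world(1.0f, 1.0f, 0.0f);
    std::printf("projections = (%g, %g, %g)\n",
                glm::dot(world, right), glm::dot(world, up), glm::dot(world, N));   // (0, 2, 0)

    // Local -> world: the linear combination used in the irradiance shader.
    glm::vec3 sample(1.0f, 1.0f, 0.0f);
    glm::vec3 sampleWorld = sample.x * right + sample.y * up + sample.z * N;        // (2, 0, 0)
    std::printf("sampleVec   = (%g, %g, %g)\n", sampleWorld.x, sampleWorld.y, sampleWorld.z);
    return 0;
}

With unit-length axes these two operations are exact inverses of each other, which is why the shader can keep the hemisphere samples as fixed tangent-space coordinates and simply rotate them into place around whatever normal it is currently convolving for.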
