OpenGL.Shader: 9-learning light-normal map (calculating TBN matrix)

This article looks at normal maps. Normal maps are widely used in game development and GIS development: they are highly expressive, the rendered result is very close to reality, and, more importantly, they let a very cheap, low-polygon model look richly detailed. That matters a great deal in big game productions.

(figure: two textured cubes, the left with only an ordinary texture map, the right with a normal map added)

Look at the two cubes above. The left one uses only an ordinary texture map; the right one adds a normal map. The visual difference between the two is huge, and that is the charm of normal mapping.

Before showing the code, here is a brief overview of the background behind normal maps.

1. Normal mapping

When we simulate lighting, what makes a surface look like it is being lit as a completely flat plane? The answer is the surface's normal vector. Take a brick wall as an example: from the lighting algorithm's point of view, the wall has only one normal vector, and the whole surface is lit uniformly according to it, so the rendered detail tends to be rather plain. What if each fragment had its own normal? Then we could vary the normal vector according to the fine details of the surface, producing a visual result that looks far more intricate:

Since each fragment uses its own normal, we can treat a surface as being made up of many tiny planes, each with a different normal vector, which greatly improves the apparent detail of the object's surface. This technique of giving each fragment its own normal, instead of using one normal for every fragment on a surface, is called normal mapping.

Since a normal vector has three components, a 2D texture, which normally stores color and lighting data, can store normal vectors just as well. Think about it: the color components r, g, b in a texture are represented as a vec3, and the x, y, z components of a normal vector form a vec3 too, so they can be stored in place of r, g, b to make a normal texture. We can then sample the normal vector at the corresponding position from the normal texture using the same set of texture coordinates. In this way the normal map works just like an ordinary texture map.
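To make the packing concrete, here is a tiny sketch of my own (not from the project) showing how one normal component maps to a color byte and back. Note that the flat normal (0, 0, 1) packs to roughly (128, 128, 255), which is exactly the bluish tint discussed next:

#include <cstdint>

// Pack a normal component from [-1, 1] into a color byte in [0, 255].
uint8_t packComponent(float n)
{
    return (uint8_t)((n * 0.5f + 0.5f) * 255.0f);
}

// Unpack it again; this is what the shader's "* 2.0 - 1.0" does later.
float unpackComponent(uint8_t b)
{
    return (b / 255.0f) * 2.0f - 1.0f;
}

// packComponent(0.0f) == 127 and packComponent(1.0f) == 255,
// so the flat normal (0, 0, 1) becomes the typical blue (127, 127, 255).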

2. Tangent space

Since we map the normal vector (x, y, z) onto the (r, g, b) components of a texture, the first intuition is that each fragment's normal vector points out of the texture plane (the plane formed by the UV coordinates), with most of its weight on the z (b) component; this is why normal textures mostly look bluish (as shown below). But this raises a serious problem: our cube has six faces, and only one of them has the normal (0, 0, 1). How can the faces pointing in other directions use the same normal texture?

Think about it: how do we transform model vertices and texture coordinates into world coordinates? How does the normal vector follow the model's transformations? All of this is done through coordinate-system matrix operations, and this is where the tangent-space coordinate system comes in. Ordinary 2D texture coordinates consist of U and V. The direction in which U increases is the tangent direction of tangent space (the tangent axis), and the direction in which V increases is the bitangent direction (the bitangent axis). Each face of the model has its own corresponding tangent space, with the tangent and bitangent axes lying in the plane being drawn. Combined with the corresponding normal direction, the coordinate system formed by the tangent axis (T), the bitangent axis (B), and the normal axis (N) is what we call tangent space (TBN) (as shown below).

With the TBN tangent-space coordinate system, a normal vector extracted from the normal texture is expressed in TBN coordinates; we then multiply it by a TBN transformation matrix to convert it into the correctly oriented normal vector that the model needs.
(For the calculation of TBN matrix and the principle of deeper mathematical conversion, please refer to the following link)
https://blog.csdn.net/jxw167/article/details/58671953    
https://blog.csdn.net/yuchenwuhen/article/details/71055602
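Briefly, the idea behind the calculation that convertTBN implements below: each triangle edge in position space is a linear combination of T and B, weighted by that edge's UV deltas:

E1 = deltaUV1.x * T + deltaUV1.y * B
E2 = deltaUV2.x * T + deltaUV2.y * B

Solving this 2x2 linear system, with r = 1 / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x), the reciprocal of the UV determinant, gives

T = (E1 * deltaUV2.y - E2 * deltaUV1.y) * r
B = (E2 * deltaUV1.x - E1 * deltaUV2.x) * r

which is exactly the tangent/binormal computation in convertTBN, with E1 and E2 playing the role of deltaPos1 and deltaPos2.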

 

3. The code

class CubeTBN {
    struct V3N3UV2 {
        float x, y, z;    // position coordinates
        float nx, ny, nz; // normal vector
        float u, v;       // texture coordinates
    };

    struct V3N3UV2TB6
    {
        float x, y, z;
        float nx, ny, nz;
        float u, v;
        float tx,ty,tz;
        float bx,by,bz;
    };
    // ...
};

First, we define two structures. V3N3UV2 is the standard layout we have been using all along (position vec3, normal vec3, texture coordinate vec2). V3N3UV2TB6 adds two more vec3s: the tangent direction (tangent axis) and the bitangent direction (bitangent axis). Their concrete values are computed by the convertTBN method, shown below.
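One detail worth guarding (my own addition, not in the original source): render() below uses sizeof(V3N3UV2TB6) as the vertex stride, which assumes the struct is tightly packed as 14 floats.

// Layout sanity check (a sketch; assumes V3N3UV2TB6 is visible at this scope).
static_assert(sizeof(V3N3UV2TB6) == 14 * sizeof(float),
              "V3N3UV2TB6 must be tightly packed (56 bytes)");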

public:
    V3N3UV2TB6       _data[36];
    
    void        init(const CELL::float3 &halfSize)
    {
        // The standard data from before; the cube's size is determined by halfSize.
        V3N3UV2 verts[] =
        {
                // front
                {-halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 0.0f,0.0f},
                {-halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 1.0f,0.0f},
                {+halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 0.0f,1.0f},
                {-halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 1.0f,0.0f},
                {+halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 1.0f,1.0f},
                {+halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  0.0f,  +1.0f, 0.0f,1.0f},
                // back
                {+halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 1.0f,0.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 1.0f,1.0f},
                {+halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 0.0f,0.0f},
                {-halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 1.0f,0.0f},
                {+halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 0.0f,0.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  0.0f,  -1.0f, 1.0f,1.0f},
                // left
                {-halfSize.x, +halfSize.y, +halfSize.z, -1.0f, 0.0f,  0.0f,  0.0f,0.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, -1.0f, 0.0f,  0.0f,  1.0f,1.0f},
                {-halfSize.x, -halfSize.y, +halfSize.z, -1.0f, 0.0f,  0.0f,  1.0f,0.0f},
                {-halfSize.x, +halfSize.y, -halfSize.z, -1.0f, 0.0f,  0.0f,  0.0f,1.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, -1.0f, 0.0f,  0.0f,  1.0f,1.0f},
                {-halfSize.x, +halfSize.y, +halfSize.z, -1.0f, 0.0f,  0.0f,  0.0f,0.0f},
                // right
                {+halfSize.x, +halfSize.y, -halfSize.z, +1.0f, 0.0f,  0.0f,  0.0f,0.0f},
                {+halfSize.x, +halfSize.y, +halfSize.z, +1.0f, 0.0f,  0.0f,  0.0f,1.0f},
                {+halfSize.x, -halfSize.y, +halfSize.z, +1.0f, 0.0f,  0.0f,  1.0f,1.0f},
                {+halfSize.x, -halfSize.y, -halfSize.z, +1.0f, 0.0f,  0.0f,  1.0f,0.0f},
                {+halfSize.x, +halfSize.y, -halfSize.z, +1.0f, 0.0f,  0.0f,  0.0f,0.0f},
                {+halfSize.x, -halfSize.y, +halfSize.z, +1.0f, 0.0f,  0.0f,  1.0f,1.0f},
                // top
                {-halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  +1.0f, 0.0f,  0.0f,1.0f},
                {+halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  +1.0f, 0.0f,  1.0f,1.0f},
                {+halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  +1.0f, 0.0f,  1.0f,0.0f},
                {-halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  +1.0f, 0.0f,  0.0f,0.0f},
                {-halfSize.x, +halfSize.y, +halfSize.z, 0.0f,  +1.0f, 0.0f,  0.0f,1.0f},
                {+halfSize.x, +halfSize.y, -halfSize.z, 0.0f,  +1.0f, 0.0f,  1.0f,0.0f},
                // bottom
                {+halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  -1.0f, 0.0f,  1.0f,1.0f},
                {+halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  -1.0f, 0.0f,  1.0f,0.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  -1.0f, 0.0f,  0.0f,1.0f},
                {+halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  -1.0f, 0.0f,  1.0f,0.0f},
                {-halfSize.x, -halfSize.y, +halfSize.z, 0.0f,  -1.0f, 0.0f,  0.0f,0.0f},
                {-halfSize.x, -halfSize.y, -halfSize.z, 0.0f,  -1.0f, 0.0f,  0.0f,1.0f}
        };
        // derive the TBN data from the positions/texture coordinates
        convertTBN(verts, _data);
    }

    void convertTBN(V3N3UV2* vertices, V3N3UV2TB6* nmVerts)
    {
        for (size_t i = 0; i < 36; i += 3) // process one triangle (three vertices) per iteration
        {
            // copy xyz normal uv
            nmVerts[i + 0].x  = vertices[i + 0].x;
            nmVerts[i + 0].y  = vertices[i + 0].y;
            nmVerts[i + 0].z  = vertices[i + 0].z;
            nmVerts[i + 0].nx = vertices[i + 0].nx;
            nmVerts[i + 0].ny = vertices[i + 0].ny;
            nmVerts[i + 0].nz = vertices[i + 0].nz;
            nmVerts[i + 0].u  = vertices[i + 0].u;
            nmVerts[i + 0].v  = vertices[i + 0].v;

            nmVerts[i + 1].x  = vertices[i + 1].x;
            nmVerts[i + 1].y  = vertices[i + 1].y;
            nmVerts[i + 1].z  = vertices[i + 1].z;
            nmVerts[i + 1].nx = vertices[i + 1].nx;
            nmVerts[i + 1].ny = vertices[i + 1].ny;
            nmVerts[i + 1].nz = vertices[i + 1].nz;
            nmVerts[i + 1].u  = vertices[i + 1].u;
            nmVerts[i + 1].v  = vertices[i + 1].v;

            nmVerts[i + 2].x  = vertices[i + 2].x;
            nmVerts[i + 2].y  = vertices[i + 2].y;
            nmVerts[i + 2].z  = vertices[i + 2].z;
            nmVerts[i + 2].nx = vertices[i + 2].nx;
            nmVerts[i + 2].ny = vertices[i + 2].ny;
            nmVerts[i + 2].nz = vertices[i + 2].nz;
            nmVerts[i + 2].u  = vertices[i + 2].u;
            nmVerts[i + 2].v  = vertices[i + 2].v;

            // Shortcuts for vertices
            CELL::float3  v0  = CELL::float3(vertices[i + 0].x,vertices[i + 0].y,vertices[i + 0].z);
            CELL::float3  v1  = CELL::float3(vertices[i + 1].x,vertices[i + 1].y,vertices[i + 1].z);
            CELL::float3  v2  = CELL::float3(vertices[i + 2].x,vertices[i + 2].y,vertices[i + 2].z);
            CELL::float2  uv0 = CELL::float2(vertices[i + 0].u, vertices[i + 0].v);
            CELL::float2  uv1 = CELL::float2(vertices[i + 1].u, vertices[i + 1].v);
            CELL::float2  uv2 = CELL::float2(vertices[i + 2].u, vertices[i + 2].v);
            // build the triangle's edge vectors in position space (position deltas)
            CELL::float3  deltaPos1 = v1 - v0;
            CELL::float3  deltaPos2 = v2 - v0;
            // build the corresponding edge vectors in UV space (uv deltas)
            CELL::float2 deltaUV1   = uv1 - uv0;
            CELL::float2 deltaUV2   = uv2 - uv0;

            float   r  = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);  // reciprocal of the UV determinant (2D cross product)
            CELL::float3 tangent    = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r; // the tangent
            CELL::float3 binormal   = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r; // the bitangent

            // write the results into the t/b fields of all three vertices
            nmVerts[i + 0].tx = tangent.x;  nmVerts[i + 0].bx = binormal.x;
            nmVerts[i + 0].ty = tangent.y;  nmVerts[i + 0].by = binormal.y;
            nmVerts[i + 0].tz = tangent.z;  nmVerts[i + 0].bz = binormal.z;

            nmVerts[i + 1].tx = tangent.x;  nmVerts[i + 1].bx = binormal.x;
            nmVerts[i + 1].ty = tangent.y;  nmVerts[i + 1].by = binormal.y;
            nmVerts[i + 1].tz = tangent.z;  nmVerts[i + 1].bz = binormal.z;

            nmVerts[i + 2].tx = tangent.x;  nmVerts[i + 2].bx = binormal.x;
            nmVerts[i + 2].ty = tangent.y;  nmVerts[i + 2].by = binormal.y;
            nmVerts[i + 2].tz = tangent.z;  nmVerts[i + 2].bz = binormal.z;
        }
    }
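As a quick sanity check of the math (a worked example of my own, using the first triangle of the front face from init): v0 = (-x, +y, +z) with uv (0, 0), v1 = (-x, -y, +z) with uv (1, 0), v2 = (+x, +y, +z) with uv (0, 1).

// deltaPos1 = v1 - v0 = (0, -2y, 0)      deltaUV1 = (1, 0)
// deltaPos2 = v2 - v0 = (2x, 0, 0)       deltaUV2 = (0, 1)
// r = 1 / (1*1 - 0*0) = 1
// tangent  = deltaPos1 * 1 - deltaPos2 * 0 = (0, -2y, 0)  -> along increasing U
// binormal = deltaPos2 * 1 - deltaPos1 * 0 = (2x, 0, 0)   -> along increasing V
// cross(normalize(tangent), normalize(binormal)) = (0, 0, 1), the face normal,
// so T, B, N form a right-handed basis. The tangent and binormal come out
// unnormalized here; the vertex shader normalizes them.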

So far, the TBN basis of every vertex has been calculated. Once we have the data, we can start learning how to write the shader programs.

First we look at the vertex shader part:

#version 320 es
in vec3 _position; // external input
in vec3 _normal;
in vec2 _uv;
in vec3 _tagent;
in vec3 _biTagent;
uniform mat4 _mvp; // mvp matrix
uniform mat3 _normalMatrix; // normal matrix
uniform mat4 _matModel; // model transformation matrix
out vec2 _outUV;
out vec3 _outPos;
out mat3 _TBN;
void main()
{
    _outUV = _uv; // output texture coordinates to the fragment shader, for sampling the texture map and the normal map
    vec4 pos = _matModel * vec4(_position, 1.0);
    _outPos = pos.xyz; // output the world-space vertex position so each fragment can get its own refined light direction
    vec3 normal = normalize(_normalMatrix * _normal); // multiply by the normal matrix to stay consistent after model transformations
    vec3 tagent = normalize(_normalMatrix * _tagent);
    vec3 biTagent = normalize(_normalMatrix * _biTagent);
    _TBN = mat3x3(tagent, biTagent, normal); // build the TBN matrix and output it to the fragment shader
    gl_Position = _mvp * vec4(_position, 1.0); // final drawn vertex position
}
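On the C++ side, the five attributes and the matrix uniforms above need locations. The actual code lives in CubeTbnProgram.hpp; presumably it fetches them roughly like this (a sketch; _programId is an assumed member name):

// Sketch of the location lookups inside the program wrapper's initialization:
_position  = glGetAttribLocation(_programId, "_position");
_normal    = glGetAttribLocation(_programId, "_normal");
_uv        = glGetAttribLocation(_programId, "_uv");
_tagent    = glGetAttribLocation(_programId, "_tagent");
_biTagent  = glGetAttribLocation(_programId, "_biTagent");
_mvp          = glGetUniformLocation(_programId, "_mvp");
_normalMatrix = glGetUniformLocation(_programId, "_normalMatrix");
_matModel     = glGetUniformLocation(_programId, "_matModel");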

em... the comments are in the shader above. But why output the (world-coordinate) vertex positions after the model transformation to the fragment shader at all? They are used to calculate the light direction, the main input to the light-intensity computation. We used to do this directly in the vertex shader; that was because, before mastering normal maps, we had no way to refine the normal vector down to individual fragments. Once the vertex position is output from the vertex shader to the fragment shader, it gets interpolated, so every fragment carries its own interpolated position. We should therefore recompute a finer-grained, per-fragment light direction to match the normal map and get a better result. Now the fragment shader:

#version 320 es
precision mediump float;
in vec2 _outUV;
in vec3 _outPos;
in mat3 _TBN;
uniform vec3 _lightColor;
uniform vec3 _lightDiffuse;
uniform sampler2D _texture;
uniform sampler2D _texNormal;
uniform vec3 _lightPos;
uniform vec3 _cameraPos;
out vec3 _fragColor;
void main()
{
    vec3 lightDir = normalize(_lightPos - _outPos); // per-fragment light direction
    // extract the normal from the normal map, remap it from [0,1] to [-1,1],
    // then transform it into the final normal vector through the TBN matrix
    vec3 normal = normalize(_TBN * (texture(_texNormal, _outUV).xyz * 2.0 - 1.0));
    vec4 materialColor = texture(_texture, _outUV);
    float lightStrength = max(dot(normal, lightDir), 0.0); // light intensity
    vec4 lightColor = vec4(_lightColor * lightStrength + _lightDiffuse, 1.0); // intensity * light color + diffuse term
    _fragColor.rgb = materialColor.rgb * 0.2 + 0.8 * lightColor.rgb; // blend and output
}

The _outUV texture coordinates are used both to extract the color value from the texture map and to extract the normal vector from the normal map. Note that the bytes stored in the normal map lie in [0, 255]; sampling normalizes them to [0, 1], but a normal component must lie in [-1, 1], so we have to remap it ourselves with * 2.0 - 1.0. For example, the typical bluish texel (128, 128, 255) samples as roughly (0.5, 0.5, 1.0) and remaps to roughly (0, 0, 1), the flat normal. The final blend of material color and lighting is not fixed; adjust it as needed for the effect you want.

 

Finally, the CubeTBN::render method.

void        render(Camera3D& camera)
{
    _program.begin();
    static  float   angle = 0;
    angle += 0.1f;
    CELL::matrix4   matRot;
    matRot.rotateYXZ(angle, 0.0f, 0.0f);
    CELL::matrix4   model   =   _modelMatrix * matRot;
    glUniformMatrix4fv(_program._matModel, 1, GL_FALSE, model.data());
    CELL::matrix4   vp = camera.getProject() * camera.getView();
    CELL::matrix4   mvp = (vp * model);
    glUniformMatrix4fv(_program._mvp, 1, GL_FALSE, mvp.data());
    CELL::matrix3   matNormal = mat4_to_mat3(model)._inverse()._transpose();
    glUniformMatrix3fv(_program._normalMatrix, 1, GL_FALSE, matNormal.data());

    glUniform3f(_program._lightDiffuse, 0.1f, 0.1f, 0.1f); // diffuse/ambient light
    glUniform3f(_program._lightColor, 1.0f, 1.0f, 1.0f);   // directional light color
    glUniform3f(_program._lightPos, camera._eye.x, camera._eye.y, camera._eye.z); // light position (the camera eye here)

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, _texMaterial);
    glUniform1i(_program._texture, 0); // texture map on unit 0
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, _texNormal);
    glUniform1i(_program._texNormal, 1); // normal map on unit 1

    glVertexAttribPointer(_program._position, 3, GL_FLOAT, GL_FALSE, sizeof(V3N3UV2TB6), _data);
    glVertexAttribPointer(_program._normal, 3, GL_FLOAT, GL_FALSE, sizeof(V3N3UV2TB6), &_data[0].nx);
    glVertexAttribPointer(_program._uv, 2, GL_FLOAT, GL_FALSE, sizeof(V3N3UV2TB6), &_data[0].u);
    glVertexAttribPointer(_program._tagent, 3, GL_FLOAT, GL_FALSE, sizeof(V3N3UV2TB6), &_data[0].tx);
    glVertexAttribPointer(_program._biTagent, 3, GL_FLOAT, GL_FALSE, sizeof(V3N3UV2TB6), &_data[0].bx);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    _program.end();
}
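For context, a minimal usage sketch (my own; it assumes a ready GL context, a Camera3D, and that the textures and program were loaded elsewhere):

CubeTBN cube;
cube.init(CELL::float3(1.0f, 1.0f, 1.0f)); // build the vertex data and per-triangle TBN

// each frame:
cube.render(camera); // uploads matrices and light uniforms, binds both textures, draws 36 vertices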

Project demo source code: see the files CubeTBN.hpp and CubeTbnProgram.hpp in the repository:

https://github.com/MrZhaozhirong/NativeCppApp      

Original article: https://blog.csdn.net/a360940265a/article/details/94719015