OpenGL.Shader: 8-Learning Light-Normal Matrix

 The previous article covered the basics of lighting: normal vectors and the principle of a simple lighting calculation. The screenshots showed that the light source follows the camera position, so each face of the cube is shaded differently. But what happens when the model transforms while the light source stays fixed? From what we have learned so far, a vertex position is multiplied by the MVP matrix when the model transforms, so can the normal vector simply be multiplied by the MVP matrix as well?

Incorrect approach: using the MVP matrix directly as the normal matrix

#version 320 es
uniform mat4   _mvp;
uniform mat3   _normalMatrix;
uniform vec3   _lightDir;
uniform vec3   _lightColor;
uniform vec3   _lightDiffuse;
in      vec3   _position;
in      vec3   _normal;
in      vec2   _uv;
out     vec2   _outUV;
out     vec4   _outComposeColor;
void main()
{
    _outUV                =   _uv;
    vec3    normal        =   normalize(_normalMatrix * _normal); // multiply the input normal by the normal matrix, then normalize
    float lightStrength   =   max(dot(normal, -_lightDir), 0.0);
    _outComposeColor =   vec4(_lightColor * lightStrength + _lightDiffuse, 1);
    gl_Position      =   _mvp * vec4(_position,1.0);
}

#version 320 es
precision mediump float;
in      vec4        _outComposeColor;
in      vec2        _outUV;
uniform sampler2D   _texture;
out     vec4        _fragColor;
void main()
{
    vec4    color   =   texture(_texture,_outUV);
    _fragColor      =   color * _outComposeColor;
}

First we list the shader pair used here. It introduces uniform mat3 _normalMatrix; the normal matrix used to transform the normal vector. normalize(_normalMatrix * _normal); produces the transformed normal, which then feeds into the lighting calculation.

    void    render(Camera3D& camera)
    {
        sprogram.begin();
        static  float   angle = 0;
        angle += 0.3f;
        CELL::matrix4   matRot;
        matRot.rotateYXZ(angle, 0.0f, 0.0f);
        // The model matrix here only performs a simple rotation.
        CELL::matrix4   model   =   mModelMatrix * matRot; // mModelMatrix is just an identity matrix
        CELL::matrix4   vp = camera.getProject() * camera.getView();
        CELL::matrix4   mvp = (vp * model);
        glUniformMatrix4fv(sprogram._mvp, 1, GL_FALSE, mvp.data());
        // (Incorrect) set the normal matrix to the upper-left 3x3 of the MVP matrix
        glUniformMatrix3fv(sprogram._normalMatrix, 1, GL_FALSE, mat4_to_mat3(mvp).data());

        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,  mCubeSurfaceTexId);
        glUniform1i(sprogram._texture, 0);

        glUniform3f(sprogram._lightDiffuse, 0.1f, 0.1f, 0.1f); // diffuse / ambient light
        glUniform3f(sprogram._lightColor, 1.0f, 1.0f, 1.0f); // color of the directional light
        glUniform3f(sprogram._lightDir, // direction of the directional light
                    static_cast<GLfloat>(camera._dir.x),
                    static_cast<GLfloat>(camera._dir.y),
                    static_cast<GLfloat>(camera._dir.z));

        glVertexAttribPointer(static_cast<GLuint>(sprogram._position), 3, GL_FLOAT, GL_FALSE,
                              sizeof(CubeIlluminate::V3N3T2), &_data[0].x);
        glVertexAttribPointer(static_cast<GLuint>(sprogram._normal),   3, GL_FLOAT, GL_FALSE,
                              sizeof(CubeIlluminate::V3N3T2), &_data[0].nx);
        glVertexAttribPointer(static_cast<GLuint>(sprogram._uv),       2, GL_FLOAT, GL_FALSE,
                              sizeof(CubeIlluminate::V3N3T2), &_data[0].u);
        glDrawArrays(GL_TRIANGLES, 0, 36);
        sprogram.end();
    }
//-------------------------------
    tmat3x3<T> mat4_to_mat3(const tmat4x4<T> & m)
    {
        return  tmat3x3<T>(
                tvec3<T>(m[0][0],m[0][1],m[0][2])
                ,tvec3<T>(m[1][0],m[1][1],m[1][2])
                ,tvec3<T>(m[2][0],m[2][1],m[2][2]));
    }

Then we list the render-parameter setup; the model simply rotates around the Y axis. One thing to note: the normal vector is a vec3, which represents only a direction and carries no position information, while mvp is a 4x4 matrix, so the two cannot be multiplied directly. The custom mat4_to_mat3 method extracts the upper-left 3x3 block, discarding the w component.

The rendering shows that the lighting calculation goes wrong once the cube rotates automatically; only after manually sliding the camera to a particular angle do the lighting and the model appear normal again. Why is that? Let's briefly analyze the underlying math:

In OpenGL ES, a vertex is converted to the eye coordinate system by:
                                             vertexEyeSpace = view_Matrix * model_Matrix * vertex;
So why can't we do the same for the normal vector? The normal is a vector of 3 floats, while the modelView matrix is a 4x4 matrix; that mismatch can be bridged with the following code:
                                            normalEyeSpace = vec3(view_Matrix * model_Matrix * vec4(normal, 0.0));
But there is a hidden problem: a vertex (x,y,z) really stands for the homogeneous point (x,y,z,1), whereas a normal is a pure direction with no fixed position. A normal-direction vector can be obtained by subtracting two points (x1,y1,z1,1) and (x2,y2,z2,1) lying along it, which gives w = 0. Vertices and normals are therefore different kinds of quantities, and this difference causes a transformation error that is easy to miss: transforming a normal like a vertex keeps it consistent only in special cases, and in general its direction relative to the surface is lost.

The picture above showed a triangle with its normal vector and a surface tangent vector. The next picture showed the same scene after a scaling transform. If we still use the code above:

As shown above, the transform affects all vertices and normals alike, and the result is clearly wrong: the normal is no longer perpendicular to the surface tangent. So the model-view matrix cannot be applied to normals as-is. What matrix should transform the normal vector, then? Consider the derivation in the table below.

From the derivation in the box above: if matrix A transforms the tangent vector u, then to keep the normal perpendicular to the transformed tangent, the normal must be transformed by the transpose of the inverse of A. With that, the situation in the picture above can no longer occur, and the inverse-transpose transform gives exactly the result we expect.
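The boxed derivation did not survive extraction; the standard argument it contains can be reconstructed as follows (u is a surface tangent, n the normal, M the model matrix, and G the normal matrix we are looking for):

```latex
\begin{aligned}
&\text{start from perpendicularity:} & n^{T} u &= 0 \\
&\text{transform } u \text{ by } M,\ n \text{ by } G: & (Gn)^{T}(Mu) &= n^{T} G^{T} M\, u \\
&\text{choose } G \text{ so that } G^{T} M = I: & G &= (M^{-1})^{T} \\
&\text{then perpendicularity is preserved:} & (Gn)^{T}(Mu) &= n^{T} u = 0
\end{aligned}
```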

The final conclusion: normal matrix = transpose of the inverse of the model matrix.

    void        render(Camera3D& camera)
    {
        sprogram.begin();
        static  float   angle = 0;
        angle += 0.1f;
        CELL::matrix4   matRot;
        matRot.rotateYXZ(angle, 0.0f, 0.0f);
        // The model matrix here only performs a simple rotation.
        CELL::matrix4   model   =   mModelMatrix * matRot;
        // normal matrix = transpose of the inverse of the model matrix
        CELL::matrix3   matNormal=   mat4_to_mat3(model)._inverse()._transpose();
        glUniformMatrix3fv(sprogram._normalMatrix, 1, GL_FALSE, matNormal.data());

        CELL::matrix4   vp = camera.getProject() * camera.getView();
        CELL::matrix4   mvp = (vp * model);
        glUniformMatrix4fv(sprogram._mvp, 1, GL_FALSE, mvp.data());

        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,  mCubeSurfaceTexId);
        glUniform1i(sprogram._texture, 0);
        ... ...
    }

The updated effect is shown below: the lighting is now correct both when the model rotates and when the camera is slid manually to change the light source's position.

Demo project link: https://github.com/MrZhaozhirong/NativeCppApp (see LightRenderer.cpp, CubeIlluminate.hpp, CubeIlluminateProgram.hpp)

Origin blog.csdn.net/a360940265a/article/details/93780849