OpenGL.Shader: 11 - Shadow Implementation: Directional Light Shadows


It has been a while since I last wrote a study article; work and life have kept me busy. With the National Day holiday over, there is not much of 2019 left. Keep fighting for yourself and your loved ones.

1. The theory of shadow mapping

A shadow is the result of blocked light: when light from a source cannot reach an object's surface because other objects are in the way, that surface is in shadow. Shadows make a scene look much more realistic and help the observer judge the spatial relationships between objects.

This article builds on the depth texture from the previous section (the link is here). A depth texture is a texture object whose main stored content is depth values. The principle: build a light-space matrix (projection matrix * view matrix) looking from the light source toward the observed target, run the depth test in that light space, and record how every observed object is occluded. Shadow mapping is then performed according to that occlusion information.

The idea behind shadow mapping is very simple: we render from the position of the light; everything the light can see is lit, and everything it cannot see must be in shadow. Suppose there is a floor with a large box between it and the light source. Looking along the light direction, the light sees the box but not the part of the floor behind it, so that part should be in shadow. (In the diagram, the blue lines represent fragments the light source can see; the black lines represent the occluded fragments.)

 

That wraps up the theory. From the above it is easy to see that shadow mapping is a general algorithm: light source + object = shadow map. The shader code below shows how shadows are actually implemented.

#version 320 es
in  vec3    position;
in  vec3    normal;
in  vec2    uv;

out VS_OUT {
    vec3 FragPosWorldSpace;
    vec3 Normal;
    vec2 TexCoords;
    vec4 FragPosLightSpace;
} vs_out;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat4 lightSpaceMatrix;

void main()
{     gl_Position = projection * view * model * vec4(position, 1.0f);     vs_out.TexCoords = uv; // The texture uv coordinates are passed to the fragment shader for interpolation     vs_out.Normal = transpose(inverse(mat3 (model))) * normal;     vs_out.FragPosWorldSpace = vec3(model * vec4(position, 1.0));     vs_out.FragPosLightSpace = lightSpaceMatrix * vec4(vs_out.FragPosWorldSpace, 1.0); }

The vertex shader is straightforward; the VS_OUT structure gathers all of the relevant outputs. The normal vector must be transformed by the normal matrix (the transpose of the inverse of the model matrix). FragPosWorldSpace is the fragment's absolute position in world coordinates, saved for the fragment shader to use. lightSpaceMatrix is the light-space matrix mentioned above; multiplying it by the world-space position gives the vertex position in light space.

Next comes the key part: the fragment shader.

#version 320 es
precision mediump float;
uniform vec3             _lightColor;
uniform sampler2D   _texture;
uniform sampler2D   _shadowMap;
uniform vec3             _lightPos;
uniform vec3             _viewPos;

in  VS_OUT {
    vec3 FragPosWorldSpace;
    vec3 Normal;
    vec2 TexCoords;
    vec4 FragPosLightSpace;
} fs_in;

out     vec4        fragColor;

float ShadowCalculation(vec4 fragPosLightSpace)
{
    [...]
}

void main()
{
    vec3 color = texture(_texture, fs_in.TexCoords).rgb;
    vec3 normal = normalize(fs_in.Normal);
    vec3 lightColor = _lightColor;
    // ambient
    vec3 ambient = 0.5 * color;
    // diffuse
    vec3 lightDir = normalize(_lightPos - fs_in.FragPosWorldSpace);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * lightColor;
    // shadow bias (against shadow distortion, discussed later)
    //float bias = max(0.01 * (1.0 - dot(normal, lightDir)), 0.0005);
    // calculate the shadow factor
    float shadow = ShadowCalculation(fs_in.FragPosLightSpace);

    vec3 lighting = (ambient + (1.0 - shadow) * diffuse) * color;
    fragColor = vec4(lighting, 1.0f);
}

The fragment shader renders the scene with the Blinn-Phong lighting model, then computes a shadow factor that is 1.0 when the fragment is in shadow and 0.0 when it is not. The diffuse color is multiplied by (1.0 - shadow). Since shadows are never completely black (light scatters), the ambient component is left out of the multiplication.

To check whether a fragment is in shadow, first convert its light-space position into normalized device coordinates (in plain terms: convert the 3D position back to 2D coordinates that can be compared against the depth texture). When we write a clip-space vertex position to gl_Position in the vertex shader, OpenGL automatically performs the perspective division, dividing x, y and z by the w component to map clip-space coordinates from [-w, w] into [-1, 1]. Since FragPosLightSpace is not passed to the fragment shader through gl_Position, we must do the perspective division ourselves:

// Perform perspective division
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;

The xyz components of projCoords are now in [-1, 1] (this holds only for points inside the light's frustum, a caveat that comes up later). To compare against the depth map, the z component must be mapped to [0, 1], and to use xy as sampling coordinates they must be mapped to [0, 1] as well. So the whole projCoords vector is transformed to the [0, 1] range:

projCoords = projCoords * 0.5 + 0.5;

With these projection coordinates we can sample the depth texture, whose values are in [0, 1]; the projCoords from the first render pass correspond directly to the transformed NDC coordinates. This gives us the closest depth seen from the light's point of view:

float closestDepth = texture(_shadowMap, projCoords.xy).r;

To get the current depth of the fragment, we simply take the z component of the projection vector, which equals the fragment's depth from the light's point of view.

float currentDepth = projCoords.z; 

The actual comparison then simply checks whether currentDepth is greater than closestDepth; if it is, the fragment is occluded and lies in shadow.

float shadow = currentDepth > closestDepth ? 1.0 : 0.0; 

The complete ShadowCalculation function looks like this:

float ShadowCalculation(vec4 fragPosLightSpace)
{
    // perform the perspective division
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // transform to the [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // closest depth from the light's point of view (sampling with [0,1]-range coords)
    float closestDepth = texture(_shadowMap, projCoords.xy).r;
    // depth of the current fragment from the light's point of view
    float currentDepth = projCoords.z;
    // check whether the current fragment is in shadow
    float shadow = currentDepth > closestDepth ? 1.0 : 0.0;
    return shadow;
}

2. Shadow implementation

The key theory is covered; now for the practical part. The plan: a fixed point light illuminates the green grass and cube from earlier articles, producing a shadow effect. The light's position is marked by a small white cube.

void ShadowFBORender::surfaceCreated(ANativeWindow *window)
{
    if (mEglCore == NULL) {
        mEglCore = new EglCore(NULL, FLAG_TRY_GLES3);
    }
    mWindowSurface = new WindowSurface(mEglCore, window, true);
    mWindowSurface->makeCurrent();

    char res_name[250]={0};
    sprintf(res_name, "%s%s", res_path, "land.jpg");
    GLuint land_texture_id = TextureHelper::createTextureFromImage(res_name);
    sprintf(res_name, "%s%s", res_path, "test.jpg");
    GLuint texture_cube_id = TextureHelper::createTextureFromImage(res_name);
    // cube with shadow effect
    cubeShadow.init(CELL::float3(1,1,1));
    cubeShadow.setSurfaceTexture(texture_cube_id);
    // grass floor with shadow effect
    landShadow.init(10, -1);
    landShadow.setSurfaceTexture(land_texture_id);
    // actual light source position
    mLightPosition = CELL::real3(5, 5, 2);
    // small cube marking the light source position
    lightPositionCube.init(CELL::real3(0.15f,0.15f,0.15f), 0);
    lightPositionCube.mModelMatrix.translate(mLightPosition);
}

Next, implement the process of drawing and rendering.

void ShadowFBORender::renderOnDraw(double elpasedInMilliSec)
{
    mWindowSurface->makeCurrent();

    matrix4 cProj(mCamera3D.getProject());
    matrix4 cView(mCamera3D.getView());
    mLightProjectionMatrix = CELL::perspective(45.0f, (float)mViewWidth/(float)mViewHeight, 0.1f, 30.0f);
    mLightViewMatrix       = CELL::lookAt(mLightPosition, CELL::real3(0,0,0), CELL::real3(0,1.0,0));
    // Note: render the depth texture
    renderDepthFBO();
    
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glClear(GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT);
    glViewport(0,0, mViewWidth, mViewHeight);
    // draw the small cube marking the light source position
    lightPositionCube.render(mCamera3D);
    // draw the shadowed floor
    landShadow.setShadowMap(depthFBO.getDepthTexId());
    landShadow.render(cProj,cView, mLightPosition, mLightProjectionMatrix, mLightViewMatrix);
    // draw the shadowed cube
    cubeShadow.setShadowMap(depthFBO.getDepthTexId());
    cubeShadow.render(cProj,cView, mLightPosition, mLightProjectionMatrix, mLightViewMatrix);

    mWindowSurface->swapBuffers();
}

void ShadowFBORender::renderDepthFBO()
{
    depthFBO.begin();
    {
        glEnable(GL_DEPTH_TEST);
        glClear(GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT);
        //glEnable(GL_CULL_FACE);
        //glCullFace(GL_FRONT);
        landShadow.render(mLightProjectionMatrix,mLightViewMatrix,
                          mLightPosition,
                          mLightProjectionMatrix,mLightViewMatrix);
        cubeShadow.render(mLightProjectionMatrix,mLightViewMatrix,
                          mLightPosition,
                          mLightProjectionMatrix,mLightViewMatrix);
        //glCullFace(GL_BACK);
        //glDisable(GL_CULL_FACE);
    }
    depthFBO.end();
}

The difference from before lies in how the LandShadow / CubeShadow objects are drawn. Previously a Camera3D object was passed in; in renderDepthFBO the parameters are different, because the depth texture needs depth values in light space, while the real render pass uses the camera's view space. LandShadow shows the specific code:

class LandShadow {
public:
    struct V3N3T2 {
        float x, y, z;    // position
        float nx, ny, nz; // normal
        float u, v;       // texture coordinates
    };
public:
    V3N3T2                  _data[6];
    CELL::matrix4           _modelMatrix;
    GLuint                  _texId;
    GLuint                  _ShadowMapId;
    IlluminateWithShadow    sprogram;

    void        setShadowMap(GLuint texId) {
        _ShadowMapId = texId;
    }
    void        setSurfaceTexture(GLuint texId) {
        _texId = texId;
    }
    void        init(const float size, const float y_pos)
    {
        float   gSizeX = 10;
        float   gSizeZ = 10;
        V3N3T2 verts[] =
        {
            {-gSizeX, y_pos, -gSizeZ, 0,1,0,  0.0f, 0.0f}, // left far
            { gSizeX, y_pos, -gSizeZ, 0,1,0,  size, 0.0f}, // right far
            { gSizeX, y_pos,  gSizeZ, 0,1,0,  size, size}, // right near
            {-gSizeX, y_pos, -gSizeZ, 0,1,0,  0.0f, 0.0f}, // left far
            { gSizeX, y_pos,  gSizeZ, 0,1,0,  size, size}, // right near
            {-gSizeX, y_pos,  gSizeZ, 0,1,0,  0.0f, size}  // left near
        };
        memcpy(_data, verts, sizeof(verts));
        _modelMatrix.identify();
        sprogram.initialize();
    }

    void        render(matrix4 currentProjectionMatrix, matrix4 currentViewMatrix,
                       real3& lightPos,
                       matrix4 lightProjectionMatrix, matrix4 lightViewMatrix)
    {
        sprogram.begin();
        // bind the material texture
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, _texId);
        glUniform1i(sprogram._texture, 0);
        // bind the shadow-map depth texture
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D,  _ShadowMapId);
        glUniform1i(sprogram._shadowMap, 1);
        // matrices for transforming the object's vertex positions
        glUniformMatrix4fv(sprogram._projection, 1, GL_FALSE, currentProjectionMatrix.data());
        glUniformMatrix4fv(sprogram._view, 1, GL_FALSE, currentViewMatrix.data());
        glUniformMatrix4fv(sprogram._model, 1, GL_FALSE, _modelMatrix.data());
        
        glUniform3f(sprogram._lightColor, 1.0f, 1.0f, 1.0f);
        glUniform3f(sprogram._lightPos, lightPos.x, lightPos.y, lightPos.z);
        // light-space matrix
        matrix4 lightSpaceMatrix = lightProjectionMatrix * lightViewMatrix;
        glUniformMatrix4fv(sprogram._lightSpaceMatrix, 1, GL_FALSE, lightSpaceMatrix.data());
        // draw
        glVertexAttribPointer(static_cast<GLuint>(sprogram._position), 3, GL_FLOAT, GL_FALSE,
                              sizeof(LandShadow::V3N3T2), &_data[0].x);
        glVertexAttribPointer(static_cast<GLuint>(sprogram._normal),   3, GL_FLOAT, GL_FALSE,
                              sizeof(LandShadow::V3N3T2), &_data[0].nx);
        glVertexAttribPointer(static_cast<GLuint>(sprogram._uv),       2, GL_FLOAT, GL_FALSE,
                              sizeof(LandShadow::V3N3T2), &_data[0].u);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        sprogram.end();
    }
};

In the render call, the first two parameters supply the projection and view matrices used to place the object's vertices; the third is the light source position; the fourth and fifth build the light-space matrix that computes each vertex's position in light space, so the fragments it covers can be tested for shadow.

If nothing went wrong in the code above, the running effect looks roughly like this:

Why does it look like this? Because of severe shadow distortion (shadow acne) and redundant shadow-occlusion tests. How to deal with it? Stay tuned for the next installment!


Origin blog.csdn.net/a360940265a/article/details/102516439