OpenGL.Shader #4: GPU transition animations, an analysis of parallel operations


This article continues the study of the shader coding specification through two simple examples, and introduces the concept behind how shaders execute: parallel operation. Corresponding project address: https://github.com/MrZhaozhirong/NativeCppApp

The first example renders the same cube model data as the previous article, so that code will not be repeated here; readers who need it can find it on GitHub. Let us focus on analyzing the shader program itself.


#include "GPUMixShaderProgram.h"
#include "ShaderHelper.h"

/**
 * Cube dynamic cross-fade shader program
 */
GPUMixShaderProgram::GPUMixShaderProgram()
{
    const char * vertexShaderResourceStr  = "uniform mat4    u_Matrix;\n\
                                             attribute vec4  a_Position;\n\
                                             attribute vec2  a_uv;\n\
                                             varying vec2    out_uv;\n\
                                             void main()\n\
                                             {\n\
                                                 out_uv      =   a_uv;\n\
                                                 gl_Position =   u_Matrix * a_Position;\n\
                                             }";
    const char * fragmentShaderResourceStr= "precision mediump float;\n\
                                             uniform sampler2D _texture0;\n\
                                             uniform sampler2D _texture1;\n\
                                             uniform float     _mix;\n\
                                             varying vec2      out_uv;\n\
                                             void main()\n\
                                             {\n\
                                                 vec4 color0    =  texture2D(_texture0, out_uv);\n\
                                                 vec4 color1    =  texture2D(_texture1, out_uv);\n\
                                                 vec4 dstColor  =  color0 * (1.0 - _mix)  + color1 * _mix;\n\
                                                 gl_FragColor   =  mix(color0, color1, _mix);\n\
                                             }";

    // Note: "gl_FragColor = dstColor;" would produce exactly the same result as mix().
    programId = ShaderHelper::buildProgram(vertexShaderResourceStr, fragmentShaderResourceStr);

    uMatrixLocation     = glGetUniformLocation(programId, "u_Matrix");
    aPositionLocation   = glGetAttribLocation(programId, "a_Position");
    uMixLocation        = glGetUniformLocation(programId, "_mix");
    aTexUvLocation      = glGetAttribLocation(programId, "a_uv");
    uTextureUnit0       = glGetUniformLocation(programId, "_texture0");
    uTextureUnit1       = glGetUniformLocation(programId, "_texture1");
}

void GPUMixShaderProgram::setMVPUniforms(float* matrix){
    glUniformMatrix4fv(uMatrixLocation, 1, GL_FALSE, matrix);
}

void GPUMixShaderProgram::setMixUniform(double mix){
    glUniform1f(uMixLocation, static_cast<GLfloat>(mix));
}

The calling process is as follows:

void NativeGLRender::renderOnDraw(double elpasedInMilliSec)
{
    mWindowSurface->makeCurrent();
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    // needed by GPUMixShaderProgram and GPUFlatSlidingProgram
    double _hasElasped = elpasedInMilliSec/1000 * 0.1f;
    if (_hasElasped > 1.0f)
    {
        _hasElasped = 1.0f;
    }
    gpuMixShaderProgram->ShaderProgram::userProgram();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture_0_id);
    glUniform1i(gpuMixShaderProgram->uTextureUnit0, 0);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture_1_id);
    glUniform1i(gpuMixShaderProgram->uTextureUnit1, 1);

    CELL::Matrix::multiplyMM(modelViewProjectionMatrix, viewProjectionMatrix, cube->modelMatrix);
    gpuMixShaderProgram->setMVPUniforms(modelViewProjectionMatrix);
    gpuMixShaderProgram->setMixUniform(_hasElasped); // set the mix factor
    cube->bindData(gpuMixShaderProgram); // bind vertex data to the program in use
    cube->draw();
    mWindowSurface->swapBuffers();
}

The work of this shader program is concentrated in the fragment shader; the vertex shader simply passes the relevant data through to it. The effect is a gradual cross-fade from the first image into the second, target image: the mix factor _mix is updated over time through the OpenGL API.

precision mediump float;
uniform sampler2D _texture0;
uniform sampler2D _texture1;
uniform float _mix;
varying vec2 out_uv;
void main()
{
    vec4 color0   = texture2D(_texture0, out_uv); // sample the color of texture 0 at this point
    vec4 color1   = texture2D(_texture1, out_uv); // sample the color of texture 1 at this point
    vec4 dstColor = color0 * (1.0 - _mix) + color1 * _mix; // hand-written linear blend
    gl_FragColor  = mix(color0, color1, _mix); // the built-in mix function does the same blend
}




Second, the third line of the main function hand-implements the mix function; it may take a moment to fully digest. One thing worth adding: the built-in function texture2D samples the texture (with linear filtering) at the coordinate out_uv. Although the cube face has only four texture coordinates, after rasterization the number of sampling points equals the number of shading points (fragments). This example may not make that obvious yet; if it is unclear, do not worry, and continue to the second example.

 

The second example's rendering corresponds to the following shader program:

#include "GPUFlatSlidingProgram.h"
#include "ShaderHelper.h"

GPUFlatSlidingProgram::GPUFlatSlidingProgram()
{
    const char * vertexShaderResourceStr  = "uniform mat4    u_Matrix;\n\
                                             attribute vec4  a_Position;\n\
                                             attribute vec2  a_uv;\n\
                                             varying vec2    out_uv;\n\
                                             void main()\n\
                                             {\n\
                                                 out_uv      =   a_uv;\n\
                                                 gl_Position =   u_Matrix * a_Position;\n\
                                             }";

    const char * fragmentShaderResourceStr= "precision mediump float;\n\
                                             uniform sampler2D _texture0;\n\
                                             uniform sampler2D _texture1;\n\
                                             uniform float     offset;\n\
                                             varying vec2      out_uv;\n\
                                             void main()\n\
                                             {\n\
                                                 vec4 color = vec4(0,0,0,1);\n\
                                                 if(out_uv.x <= offset )\n\
                                                    color = texture2D(_texture1, vec2(out_uv.x + (1.0 - offset), out_uv.y));\n\
                                                 else\n\
                                                    color = texture2D(_texture0, vec2(out_uv.x - offset, out_uv.y));\n\
                                                 gl_FragColor   =  color; \n\
                                             }";

    programId = ShaderHelper::buildProgram(vertexShaderResourceStr, fragmentShaderResourceStr);

    uMatrixLocation     = glGetUniformLocation(programId, "u_Matrix");
    aPositionLocation   = glGetAttribLocation(programId,  "a_Position");
    aTexUvLocation      = glGetAttribLocation(programId,  "a_uv");
    uOffset             = glGetUniformLocation(programId, "offset");
    uTextureUnit0       = glGetUniformLocation(programId, "_texture0");
    uTextureUnit1       = glGetUniformLocation(programId, "_texture1");
}

void GPUFlatSlidingProgram::setMVPUniforms(float* matrix){
    glUniformMatrix4fv(uMatrixLocation, 1, GL_FALSE, matrix);
}

void GPUFlatSlidingProgram::setOffsetUniform(double offset){
    glUniform1f(uOffset, static_cast<GLfloat>(offset));
}

The calling process is the same as above, except that instead of the mix factor _mix, this time a time-driven offset is set. Again, focus on analyzing the fragment shader.

precision mediump float;
uniform sampler2D _texture0;
uniform sampler2D _texture1;
uniform float     offset;
varying vec2      out_uv;
void main()
{
    vec4 color = vec4(0,0,0,1);
    if(out_uv.x <= offset ) {
       color = texture2D(_texture1, vec2(out_uv.x + (1.0 - offset), out_uv.y));
    } else {
       color = texture2D(_texture0, vec2(out_uv.x - offset, out_uv.y));
    }
    gl_FragColor   =  color;
}

The offset gradually changes from 0 to 1, and each fragment's texture x coordinate is compared against it. In the screen region to the left of offset (out_uv.x <= offset), we sample texture1 at x + (1.0 - offset), i.e. the (1.0 - offset) ~ 1.0 portion of texture1; in the region to the right, we sample texture0 at x - offset, i.e. the 0 ~ (1.0 - offset) portion of texture0. If that is still unclear, walk through the numbers below.

When offset = 0.4, look first at the right-hand region of the screen: every texture coordinate greater than 0.4 (0.5, 0.6, 0.7, 0.8, 0.9, 1.0) first subtracts offset, giving (0.1, 0.2, 0.3, 0.4, 0.5, 0.6), and samples texture0 at that coordinate. When offset grows to 0.5, the sampled range shrinks to 0 ~ 0.5, so the visible region of texture0 contracts toward its left edge, and the visual effect is the image sliding off to the right.

Still at offset = 0.4, in the left-hand region of the screen (out_uv.x <= offset), every texture coordinate at or below 0.4 (0, 0.1, 0.2, 0.3, 0.4) is first shifted by one full texture width, +1, becoming (1.0, 1.1, 1.2, 1.3, 1.4), and then 0.4 is subtracted, giving (0.6, 0.7, 0.8, 0.9, 1.0); texture1 is sampled with this set of coordinates. When offset grows to 0.5, one more coordinate joins the set: (0 ~ 0.5) => (1.0 ~ 1.5) => (0.5 ~ 1.0). Anchored at its right edge, texture1's visible region keeps expanding to the left, so it appears to slide in from the left.

If this example has not made you dizzy, congratulations: you have grasped the basic idea of parallel computing, and at two levels here. Because the fragment shader executes once per shading point, and the GPU hardware does not shade fragments one by one in sequence, writing a fragment shader fundamentally requires parallel thinking. The kernel functions at the heart of NVIDIA's CUDA parallel computing platform are essentially the counterpart of fragment shader programs; interested readers can look up CUDA to broaden their horizons.

Through these two examples we have learned the coding conventions of fragment shader programs, picked up some application techniques, and along the way derived a few concepts that are well worth understanding.

Origin blog.csdn.net/a360940265a/article/details/89083798