OpenGL.Shader: 3-GPU texture animation, vertex/fragment shader relearning


First, the project address: https://github.com/MrZhaozhirong/NativeCppApp, along with screenshots of the effects covered in this article.

With this article, the OpenGL.Shader series officially begins: from analyzing the effect, to the underlying theory, dissecting GLSL step by step.

Following the previous article, OpenGL.Shader: 2, we have already rendered a textured cube, as shown in the upper-left picture. The underlying basics were covered in "Simple practice of OpenGL.ES on Android: 11-panorama (index + depth test)". Now let's look at the Cpp version of CubeIndex:

CubeIndex::CubeIndex() {
    modelMatrix = new float[16];
    CELL::Matrix::setIdentityM(modelMatrix, 0);

    CUBE_VERTEX_DATA = new int8_t[60];
    int8_t * p = CUBE_VERTEX_DATA;
    p[0]=-1;   p[1]=1;    p[2]=1;    p[3]=0;   p[4]=0;
    p[5]=1;    p[6]=1;    p[7]= 1;   p[8]=1;   p[9]=0;
    p[10]=-1;  p[11]=-1;  p[12]= 1;  p[13]=0;  p[14]=1;
    p[15]=1;   p[16]=-1;  p[17]= 1;  p[18]=1;  p[19]=1;
    p[20]=-1;  p[21]= 1;  p[22]=-1;  p[23]=1;  p[24]=0;
    p[25]=1;   p[26]=1;   p[27]=-1;  p[28]=0;  p[29]=0;
    p[30]=-1;  p[31]=-1;  p[32]=-1;  p[33]=1;  p[34]=1;
    p[35]=1;   p[36]=-1;  p[37]=-1;  p[38]=0;  p[39]=1;

    p[40]=-1;  p[41]= 1;  p[42]=-1;  p[43]=0;  p[44]=0;
    p[45]=1;   p[46]=1;   p[47]=-1;  p[48]=1;  p[49]=0;
    p[50]=-1;  p[51]=1;   p[52]=1;   p[53]=0;  p[54]=1;
    p[55]=1;   p[56]=1;   p[57]= 1;  p[58]=1;  p[59]=1;

    //{
    //        //x,   y,  z    s, t,
    //        -1,   1,   1,   0, 0,  // 0 left top near
    //        1,   1,   1,    1, 0,  // 1 right top near
    //        -1,  -1,   1,   0, 1,  // 2 left bottom near
    //        1,  -1,   1,    1, 1,  // 3 right bottom near
    //        -1,   1,  -1,   1, 0,  // 4 left top far
    //        1,   1,  -1,    0, 0,  // 5 right top far
    //        -1,  -1,  -1,   1, 1,  // 6 left bottom far
    //        1,  -1,  -1,    0, 1,  // 7 right bottom far
    //        With the texture coordinates arranged this way, the four side
    //        faces map correctly, but the top and bottom do not,
    //        so the top and bottom need their own extra set of vertices:
    //        -1,   1,  -1,   0, 0,  // 8  left top far
    //        1,   1,  -1,    1, 0,  // 9  right top far
    //        -1,   1,   1,   0, 1,  // 10 left top near
    //        1,   1,   1,    1, 1,  // 11 right top near
    //};

    CUBE_INDEX = new int8_t[24];
    CUBE_INDEX[0 ]= 8;  CUBE_INDEX[1 ]= 9;  CUBE_INDEX[2 ]=10;  CUBE_INDEX[3 ]=11;
    CUBE_INDEX[4 ]= 6;  CUBE_INDEX[5 ]= 7;  CUBE_INDEX[6 ]=2;   CUBE_INDEX[7 ]=3;
    CUBE_INDEX[8 ]= 0;  CUBE_INDEX[9 ]= 1;  CUBE_INDEX[10]=2;   CUBE_INDEX[11]=3;
    CUBE_INDEX[12]= 4;  CUBE_INDEX[13]= 5;  CUBE_INDEX[14]=6;   CUBE_INDEX[15]=7;
    CUBE_INDEX[16]= 4;  CUBE_INDEX[17]= 0;  CUBE_INDEX[18]=6;   CUBE_INDEX[19]=2;
    CUBE_INDEX[20]= 1;  CUBE_INDEX[21]= 5;  CUBE_INDEX[22]=3;   CUBE_INDEX[23]=7;
    //{
    //    //top
    //    8,9,10,11,
    //    //bottom
    //    6,7,2,3,
    //    //front
    //    0,1,2,3,
    //    //back
    //    4,5,6,7,
    //    //left
    //    4,0,6,2,
    //    //right
    //    1,5,3,7,
    //};
}

CubeIndex::~CubeIndex() {
    delete [] CUBE_VERTEX_DATA;
    delete [] CUBE_INDEX;
    delete [] modelMatrix;
}

void CubeIndex::bindData(CubeShaderProgram* shaderProgram) {
    glVertexAttribPointer(static_cast<GLuint>(shaderProgram->aPositionLocation),
                          POSITION_COMPONENT_COUNT, GL_BYTE,
                          GL_FALSE, STRIDE,
                          CUBE_VERTEX_DATA);
    glEnableVertexAttribArray(static_cast<GLuint>(shaderProgram->aPositionLocation));

    glVertexAttribPointer(static_cast<GLuint>(shaderProgram->aTexUvLocation),
                          TEXTURE_COORDINATE_COMPONENT_COUNT, GL_BYTE,
                          GL_FALSE, STRIDE,
                          &CUBE_VERTEX_DATA[POSITION_COMPONENT_COUNT]);
    glEnableVertexAttribArray(static_cast<GLuint>(shaderProgram->aTexUvLocation));
}

void CubeIndex::draw() {
    // Cube: six faces, two triangles per face, three vertices per triangle
    //glDrawElements(GL_TRIANGLES, 6*2*3, GL_UNSIGNED_BYTE, CUBE_INDEX );
    // Cube: six faces, four vertices per face
    glDrawElements(GL_TRIANGLE_STRIP, 6*4, GL_UNSIGNED_BYTE, CUBE_INDEX );
}

A brief description of the code:

The array CUBE_VERTEX_DATA stores the position (x, y, z) and texture coordinates (s, t) of 12 vertices (indices 0 to 11). Vertices 4 and 8 share the same position but have different texture coordinates, as do 5 and 9, 0 and 10, and 1 and 11. Why do the texture coordinates differ? If it is not obvious, sketch the cube and match up the texture coordinates on each face; I won't go into it here.

CUBE_INDEX stores, for each face, the indices of the 4 vertices that make it up. Previously we drew plain triangles (GL_TRIANGLES); this time we make a subtle optimization and draw triangle strips (GL_TRIANGLE_STRIP), saving 36-24=12 indices.

Don't underestimate those 12 saved vertices. With that, let's move on to the first piece of Shader fundamentals: the rendering pipeline.


The execution flow of the rendering pipeline

To render a cube, do you know exactly what the rendering process executes? How many times does the vertex shader (VertexShader) run? How many times does the fragment shader (FragmentShader) run? First, take a look at the picture below:

As the figure shows, the OpenGL API and shader workflow is: 1. The OpenGL client API (the code we write) uploads the vertex data to memory/GPU memory; 2. The vertex shader runs and, after primitive assembly, each vertex gets its attribute data; 3. Rasterization: after the cube is projected onto the screen through the MVP matrix, it becomes a diamond-like drawing area; 4. The fragment shader computes the shading of each fragment and determines what color should appear at the corresponding point of the cube; 5. The rendered image is output to the frame buffer for display.

Okay, enough theory. In this example, how many times does our vertex shader run? The answer is the count parameter of glDrawXXXX! Drawing triangles (GL_TRIANGLES), the vertex shader runs 36 times; drawing triangle strips (GL_TRIANGLE_STRIP), it runs 24 times. Don't dismiss these small differences. Imagine a map like the canyon in Honor of Kings, with hundreds or thousands of rendered objects: if each object saves 10 vertices, then 1k objects save 10k vertex shader executions per frame. How much performance does that buy?

CubeShaderProgram::CubeShaderProgram()
{
    const char * vertexShaderResourceStr = const_cast<char *>(" uniform mat4    u_Matrix;\n\
                                                                attribute vec4  a_Position;\n\
                                                                attribute vec2  a_uv;\n\
                                                                varying vec2    out_uv;\n\
                                                                void main()\n\
                                                                {\n\
                                                                      out_uv = a_uv;\n\
                                                                      gl_Position = u_Matrix * a_Position;\n\
                                                                }");

    const char * fragmentShaderResourceStr= const_cast<char *>("precision mediump float;\n\
                                                                uniform sampler2D _texture;\n\
                                                                varying vec2      out_uv;\n\
                                                                void main()\n\
                                                                {\n\
                                                                   gl_FragColor = texture2D(_texture, out_uv);\n\
                                                                }");

    programId = ShaderHelper::buildProgram(vertexShaderResourceStr, fragmentShaderResourceStr);

    uMatrixLocation     = glGetUniformLocation(programId, "u_Matrix");
    aPositionLocation   = glGetAttribLocation(programId, "a_Position");
    aTexUvLocation      = glGetAttribLocation(programId, "a_uv");
    uTextureUnit        = glGetUniformLocation(programId, "_texture");
}

void CubeShaderProgram::setUniforms(float* matrix){
    glUniformMatrix4fv(uMatrixLocation, 1, GL_FALSE, matrix);
}
CubeShaderProgram::~CubeShaderProgram() {

}

Read vertexShaderResourceStr alongside the previous paragraph to deepen the understanding. glDrawElements(GL_TRIANGLE_STRIP, 6*4, GL_UNSIGNED_BYTE, CUBE_INDEX); triggers the transfer of vertex data into the vertex shader program. Take vertex index 0 as an example: attribute vec4 a_Position = {-1,1,1} and attribute vec2 a_uv = {0,0}; after the custom logic runs, the relevant data is passed on to the corresponding fragment shader through the built-in variables. For vertex index 1: a_Position = {1,1,1} and a_uv = {1,0}. When the third and fourth vertices are processed, each completed triangle of the strip triggers fragment shading; but that does not mean the fragment shader program executes just once!

So, when it comes to fragment shading, how many times does the fragment shader program execute? Honestly, there is no exact answer. What?! You walked me through all of that, and now you tell me this? I can't give a number, but I can illustrate it with a picture.

For the fragment shading triggered by the first triangle, the number of fragment shader executions depends on how many shading points lie in the yellow area of the figure above. Shading points are similar to pixels, but not identical: pixels belong to the screen, while shading points belong to the GPU rendering pipeline, and one pixel may cover more than one shading point.

If the cube's model matrix scales it down, its footprint on the screen shrinks, and the fragment shader executes fewer times in the current frame. Likewise, with the current camera's view matrix and depth testing enabled, the bottom and back faces are not visible, so their triangle strips never trigger shading, and naturally the fragment shader program never runs for them.


GPU texture animation

With the theory covered, let's get to practice: how do we achieve the effect on the right side of the opening image? Some students will propose this solution: as time passes, keep uploading a new texture to produce the animation. That is indeed feasible, but the drawbacks are obvious: if the animation has many frames per cycle, the resource package grows, and so does the traffic from main memory to GPU memory. Here is a more efficient way: drive the playback of a texture animation inside the shader. Typical 2D and 2.5D games use this method for character animation.

First, with the help of Linux's gettimeofday function, we can obtain the application's running time, wrapped simply as CELL::TimeCounter. Then add a running-time parameter to the renderOnDraw callback of the GLThread from before. The relevant code:

void *glThreadImpl(void *context)
{
    GLThread *glThread = static_cast<GLThread *>(context);

    CELL::TimeCounter tm;
    while(true)
    {
        // ... ...
        double  second  =   tm.getElapsedTimeInMilliSec();
        if(glThread->isStart)
        {
            //LOGD("GLThread onDraw.");
            glThread->mRender->renderOnDraw(second);
        }
        //tm.update();
        // Without update(), we measure the total running time of the app;
        // after update(), the counter resets, measuring the time elapsed between calls
    }
    return 0;
}

Then load the following resource image into the texture buffer.

This is a sprite sheet: all the frames needed for one cycle of the animation, arranged neatly in a grid. The row and column counts need not be equal, but you must know what they are. Seeing this picture, I think everyone can guess the method I'm about to introduce: change the texture coordinates over time so that different cells of the grid are displayed, achieving animation without ever replacing the texture ID.

First question: as time passes, how do we determine which frame we are currently on?

void NativeGLRender::renderOnDraw(double elpasedInMilliSec)
{
    if (mEglCore==NULL || mWindowSurface==NULL) {
        LOGW("Skipping drawFrame after shutdown");
        return;
    }
    mWindowSurface->makeCurrent();
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

    double elpasedInSec = elpasedInMilliSec/1000; // running time: milliseconds to seconds
    // With a 1-second cycle, all frames have played when elpasedInSec==1, texture index row*col==16
    // With a 2-second cycle, all frames have played when elpasedInSec==2, texture index row*col==16
    // So: running time / cycle time * (row*col) = current texture index
    int  cycleTimeInSec = 1;
    // After 1 second the index wraps back to 0, so mod by (row*col) to keep it in range
    int    frame        = int(elpasedInSec/cycleTimeInSec * 16)%16;
    
    gpuAnimationProgram->ShaderProgram::userProgram();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, animation_texure);
    glUniform1i(gpuAnimationProgram->uTextureUnit, 0);
    CELL::Matrix::multiplyMM(modelViewProjectionMatrix, viewProjectionMatrix, cube->modelMatrix);
    gpuAnimationProgram->setMVPUniforms(modelViewProjectionMatrix);
    gpuAnimationProgram->setAnimUniforms(4,4,frame);
    cube->bindData(gpuAnimationProgram);
    cube->draw();
    mWindowSurface->swapBuffers();
}

The math behind it is fairly simple and is already spelled out in the comments; if it still isn't clear, em... sketch it out on paper. After that comes boilerplate: use the shader, bind the texture, bind the MVP matrix, bind the vertex data, and draw.

Next, let's analyze the protagonist of this article: GPUAnimationProgram.

GPUAnimationProgram::GPUAnimationProgram()
{
    const char * vertexShaderResourceStr = const_cast<char *> ("uniform mat4    u_Matrix;\n\
                                                                attribute vec4  a_Position;\n\
                                                                uniform vec3    u_AnimInfor;\n\
                                                                attribute vec2  a_uv;\n\
                                                                varying vec2    out_uv;\n\
                                                                void main()\n\
                                                                {\n\
                                                                      float uS  =  1.0/u_AnimInfor.y;\n\
                                                                      float vS  =  1.0/u_AnimInfor.x;\n\
                                                                      out_uv    =  a_uv * vec2(uS,vS);\n\
                                                                      int    row  =  int(u_AnimInfor.z)/int(u_AnimInfor.y);\n\
                                                                      float  col  =  mod((u_AnimInfor.z), (u_AnimInfor.x));\n\
                                                                      out_uv.x    +=  float(col) * uS;\n\
                                                                      out_uv.y    +=  float(row) * vS;\n\
                                                                      gl_Position = u_Matrix * a_Position;\n\
                                                                }");

    const char * fragmentShaderResourceStr= const_cast<char *>("precision mediump float;\n\
                                                                uniform sampler2D _texture;\n\
                                                                varying vec2      out_uv;\n\
                                                                void main()\n\
                                                                {\n\
                                                                   vec4 texture_color = texture2D(_texture, out_uv);\n\
                                                                   vec4 background_color = vec4(1.0, 1.0, 1.0, 1.0);\n\
                                                                   gl_FragColor = mix(background_color,texture_color, 0.9);\n\
                                                                }");

    programId = ShaderHelper::buildProgram(vertexShaderResourceStr, fragmentShaderResourceStr);

    uMatrixLocation     = glGetUniformLocation(programId, "u_Matrix");
    uAnimInforLocation  = glGetUniformLocation(programId, "u_AnimInfor");

    aPositionLocation   = glGetAttribLocation(programId,  "a_Position");
    aTexUvLocation      = glGetAttribLocation(programId,  "a_uv");

    uTextureUnit        = glGetUniformLocation(programId, "_texture");
}


void GPUAnimationProgram::setMVPUniforms(float* matrix){
    glUniformMatrix4fv(uMatrixLocation, 1, GL_FALSE, matrix);
}

void GPUAnimationProgram::setAnimUniforms(int row,int col,int frame){
    glUniform3f(uAnimInforLocation, row, col, frame);
}

Now let's walk through the vertex shader program line by line.

uniform vec3 u_AnimInfor; // (1)
// A new custom input variable, like vec3(x,y,z), whose elements are filled
// from the client with glUniform3f(GLint location, GLfloat v0, GLfloat v1, GLfloat v2).
// Here it carries (row, col, frame): row and col are fixed values, the number of
// rows and columns of the sprite sheet above; frame is the dynamically changing
// texture index, i.e. which cell of the 4*4 grid is current.
uniform mat4 u_Matrix;
attribute vec4 a_Position;
attribute vec2 a_uv;
varying vec2 out_uv;
void main()
{
      float uS = 1.0/u_AnimInfor.y;
      float vS = 1.0/u_AnimInfor.x;
      out_uv = a_uv * vec2(uS,vS); // (2)
      // Normal texture coordinates address the whole image. With a sprite sheet,
      // we first shrink them by the row/column ratio:
      // the u coordinate is multiplied by 1/col, the v coordinate by 1/row.
      int row = int(u_AnimInfor.z)/int(u_AnimInfor.y);
      float col = mod((u_AnimInfor.z), (u_AnimInfor.x));
      // Then work out which row and column the current frame index falls in.
      out_uv.x += float(col) * uS; // u: offset by that many columns
      out_uv.y += float(row) * vS; // v: offset by that many rows
      gl_Position = u_Matrix * a_Position;
}

I think the comments make this clear enough. The one thing to watch is the offset calculation of the texture coordinates; it confused me for a few minutes at first, but once I saw the point it was easy. That covers the vertex shader program; next, the fragment shader program.

precision mediump float;
uniform sampler2D _texture;
varying vec2 out_uv;
void main()
{
    vec4 texture_color = texture2D(_texture, out_uv); // the normal sampled texture color
    vec4 background_color = vec4(1.0, 1.0, 1.0, 1.0); // a plain white color
    gl_FragColor = mix(background_color, texture_color, 0.9); // blend the two colors;
    // otherwise the fully transparent parts of the cube are invisible on a black background
}
Worth mentioning here is GLSL's built-in function T mix(T x, T y, float a): it returns the linear blend of x and y, computed as x*(1-a)+y*a.

On some very old versions, blending must also be enabled through the OpenGL.ES API: glEnable(GL_BLEND);

For more on blending, continue with "Simple practice of OpenGL.ES on Android: 20-watermark recording (preview + transparent watermark emoji bullet screen gl_blend)".

Origin blog.csdn.net/a360940265a/article/details/88977764