Android OpenGL ES 3.0: 3D Model Introduction, Loading, and Rendering

1. OpenGLES 3D model

An OpenGL ES 3D model is essentially constructed from a series of triangles in 3D space (in the OpenGL coordinate system), together with information such as textures, lighting, and materials that describe the surfaces of those triangles.

Using 3D modeling software, designers can build complex shapes and apply textures to them without paying attention to low-level technical details. When the model file is exported, the modeling tool generates all vertex coordinates, vertex normals, and texture coordinates automatically.

Commonly used model file formats include .obj, .max, .fbx, .3ds, etc. Among them, .obj is a geometry file format developed by Wavefront Technologies. It stores data such as the position of each vertex, texture coordinates, normals, and the polygon faces built from vertex index lists, and it is widely used.

2. The structure of the OBJ file

# Blender v2.83.15 OBJ File: 'monkey.blend'
# www.blender.org
mtllib monkey.mtl
o face_Plane
v 0.156846 -0.166171 1.299656
v 0.040169 -0.151694 1.361770
v 0.167955 -0.148006 1.292944
...
vt 0.211637 0.867755
vt 0.220409 0.881854
vt 0.205920 0.875706
vt 0.225888 0.873355
vt 0.235953 0.888705
vt 0.197610 0.862036
...
vn 0.7356 -0.5550 0.3885
vn 0.8016 -0.5069 0.3170
vn 0.7547 -0.5032 0.4209
vn 0.7271 -0.6192 0.2964
vn 0.7657 -0.5795 0.2791
vn 0.6553 -0.4363 0.6166
...
f 313/720/711 1850/727/718 316/722/713
f 1623/337/337 317/728/719 1873/729/720
f 311/719/710 1851/730/721 313/720/711
f 317/728/719 1839/717/708 1871/704/695
f 1847/721/712 1874/726/717 1873/729/720
f 1875/725/716 1625/327/327 1874/726/717
f 1850/727/718 1877/731/722 1876/732/723

The model contains a large amount of data; only excerpts are shown here.

A brief description of the OBJ file data structure:

  • #: a line beginning with # is a comment
  • mtllib: specifies the .mtl file (material file) used by this OBJ file
  • v: a vertex position; the three following numbers are the (x, y, z) coordinates of one vertex
  • vn: a vertex normal; the three following numbers are the (x, y, z) components of one normal vector
  • vt: a texture coordinate; the following numbers are the (s, t, p) components, where the p component is generally only used for 3D textures (and is often omitted, as above)
  • f: one triangular face; the three groups of data describe the face's three vertices, each in the format vertex position index/texture coordinate index/normal vector index
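To make the f format concrete, one such token can be decoded with a small helper (the function name parseFaceToken is hypothetical, not from the article):

```cpp
#include <cassert>
#include <cstdio>

// Parse one "pos/tex/norm" face token into three 1-based indices.
// Returns true only if all three indices are present.
bool parseFaceToken(const char* token, int& posIdx, int& texIdx, int& normIdx)
{
    return std::sscanf(token, "%d/%d/%d", &posIdx, &texIdx, &normIdx) == 3;
}
```

Note that the indices are 1-based, which is why the parsing code later subtracts 1 before indexing into the vertex arrays.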

3. Model loading

Model loading can be done with Assimp (the Open Asset Import Library), which can parse dozens of different model file formats (and can also export some of them). Assimp itself is a C++ library and can be used across platforms.

Assimp converts these formats into a unified data structure, so no matter which format we import, we can access the model data in the same way.

Since the OBJ data format is known, it can also easily be parsed directly in C++. This article parses the file with plain C++ and does not use Assimp for now.

std::string fileName = dirPath + "/monkey.obj";

std::ifstream inputStream(fileName, std::ifstream::in | std::ifstream::binary);

if (!inputStream.is_open()){
    std::cerr << "Error opening file:"<<fileName<<std::endl;
}


std::vector<glm::vec3> coords;         // vertex positions
std::vector<glm::vec2> texturCoords;   // texture (UV) coordinates
std::vector<glm::vec3> normals;        // vertex normals

std::vector<MSVertexData> vertexes;
std::vector<GLuint> indexes;

MSMesh *mesh = nullptr;

std::string mtlName;
std::string lineString;

while (std::getline(inputStream,lineString))
{

    std::vector<std::string> list = ccStringSplit(lineString," ");
    if (list.empty()) {
        continue;                 // skip blank lines
    } else if (list[0] == "#") {
        std::cout<< "This is comment:" << lineString;
        continue;
    } else if (list[0]  == "mtllib") {
        std::cout<< "File with materials:" << list[1];
        continue;
    } else if (list[0] == "v") { // vertex position
        coords.emplace_back(glm::vec3(atof(list[1].c_str()), atof(list[2].c_str()), atof(list[3].c_str())));
        continue;
    } else if (list[0] == "vt") { // texture coordinate
        texturCoords.emplace_back(glm::vec2(atof(list[1].c_str()), atof(list[2].c_str())));
        continue;
    } else if (list[0] == "vn") { // vertex normal
        normals.emplace_back(glm::vec3(atof(list[1].c_str()), atof(list[2].c_str()), atof(list[3].c_str())));
        continue;
    } else if (list[0] == "f") { // position index / texcoord index / normal index
        for (int i = 1; i <= 3; ++i){
            std::vector<std::string> vert = ccStringSplit(list[i],"/");
            vertexes.emplace_back(MSVertexData(
                    coords[static_cast<int>(atol(vert[0].c_str())) - 1],
                    texturCoords[static_cast<int>(atol(vert[1].c_str())) - 1],
                    normals[static_cast<int>(atol(vert[2].c_str())) -1 ])
                    );
            indexes.emplace_back(static_cast<unsigned>(indexes.size()));
        }
        continue;
    } else if (list[0] == "usemtl") {
        mtlName = list[1];
        std::cout<< "This is used material:" << mtlName;

    }
}

Parsing the different field types in this way extracts the model data, which is saved in vertexes and indexes.
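The code above relies on a ccStringSplit helper whose implementation is not shown; a plausible sketch (an assumption, the real helper may differ) is:

```cpp
#include <string>
#include <vector>

// Split a string on a delimiter, skipping empty tokens
// (so runs of spaces do not produce empty entries).
std::vector<std::string> ccStringSplit(const std::string& s, const std::string& delim)
{
    std::vector<std::string> tokens;
    std::string::size_type start = 0;
    while (start < s.size()) {
        std::string::size_type end = s.find(delim, start);
        if (end == std::string::npos) end = s.size();
        if (end > start)                      // skip empty tokens
            tokens.push_back(s.substr(start, end - start));
        start = end + delim.size();
    }
    return tokens;
}
```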

4. Perform TBN space calculation

void MSModelLoader::calculateTBN(std::vector<MSVertexData> &vertData)
{
    for (int i = 0; i < (int)vertData.size(); i += 3) {

        glm::vec3 &v1 = vertData[i].position;
        glm::vec3 &v2 = vertData[i + 1].position;
        glm::vec3 &v3 = vertData[i + 2].position;

        glm::vec2 &uv1 = vertData[i].textCoord;
        glm::vec2 &uv2 = vertData[i + 1].textCoord;
        glm::vec2 &uv3 = vertData[i + 2].textCoord;

        // https://youtu.be/ef3XR0ZttDU?t=1097
        // deltaPos1 = deltaUV1.x * T + deltaUV1.y * B;
        // deltaPos2 = deltaUV2.x * T + deltaUV2.y * B;

        glm::vec3 deltaPos1 = v2 - v1;
        glm::vec3 deltaPos2 = v3 - v1;

        glm::vec2 deltaUV1 = uv2 - uv1;
        glm::vec2 deltaUV2 = uv3 - uv1;

        float det = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x;
        if (std::fabs(det) < 1e-8f) continue;   // skip triangles with degenerate UVs
        float r = 1.0f / det;
        glm::vec3 tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
        glm::vec3 bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;

        vertData[i].tangent = tangent;
        vertData[i + 1].tangent = tangent;
        vertData[i + 2].tangent = tangent;

        vertData[i].bitangent = bitangent;
        vertData[i + 1].bitangent = bitangent;
        vertData[i + 2].bitangent = bitangent;

    }
}

The model file only stores vertex positions, texture coordinates, normals, and vertex indices, while normal mapping also requires tangent and bitangent data.

The tangent and bitangent are therefore obtained through the TBN-space calculation above, so that a normal map can be applied.
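The two equations in the comment inside calculateTBN can be solved for T and B with Cramer's rule. As a sanity check, here is the same tangent formula on plain stand-in structs (Vec3/Vec2 replace the glm types so the snippet is self-contained):

```cpp
#include <cmath>

// Tiny stand-ins for glm::vec3 / glm::vec2.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Solve  deltaPos1 = deltaUV1.x*T + deltaUV1.y*B
//        deltaPos2 = deltaUV2.x*T + deltaUV2.y*B   for the tangent T
// (the same formula as in calculateTBN above).
Vec3 computeTangent(Vec3 dP1, Vec3 dP2, Vec2 dUV1, Vec2 dUV2)
{
    float r = 1.0f / (dUV1.x * dUV2.y - dUV1.y * dUV2.x);
    return { (dP1.x * dUV2.y - dP2.x * dUV1.y) * r,
             (dP1.y * dUV2.y - dP2.y * dUV1.y) * r,
             (dP1.z * dUV2.y - dP2.z * dUV1.y) * r };
}
```

For an axis-aligned triangle whose UVs match its positions, the tangent comes out as the U direction, as expected.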

5. Load vertices and vertex sequence data

void MSMesh::InitRenderResources(AAssetManager *pManager, const std::vector<MSVertexData> &vertData, const std::vector<GLuint> &indexes)
{
    if(pManager == NULL){
        return;
    }

    loadTextureResources(pManager);
    loadShaderResources(pManager);

    m_indexBuffSize = indexes.size() ;


    m_pVAO->Create();
    m_pVAO->Bind();
    /* upload the vertex data */
    m_pVBO->Create();
    m_pVBO->Bind();
    m_pVBO->SetBufferData(vertData.data(),vertData.size() * sizeof (MSVertexData));

    /* upload the index data */
    m_pEBO->Create();
    m_pEBO->Bind();
    m_pEBO->SetBufferData(indexes.data(), indexes.size() * sizeof (GLuint));

    int offset = 0;
    /* vertex position a_position */
    m_pOpenGLShader->SetAttributeBuffer(0, GL_FLOAT, (void *)offset, 3, sizeof(MSVertexData));
    m_pOpenGLShader->EnableAttributeArray(0);

    offset += sizeof (glm::vec3);
    /* texture coordinate a_texturCoord */
    m_pOpenGLShader->SetAttributeBuffer(1, GL_FLOAT, (void *)offset, 2, sizeof(MSVertexData));
    m_pOpenGLShader->EnableAttributeArray(1);

    offset += sizeof (glm::vec2);

    /* normal a_normal */
    m_pOpenGLShader->SetAttributeBuffer(2, GL_FLOAT, (void *)offset, 3, sizeof (MSVertexData));
    m_pOpenGLShader->EnableAttributeArray(2);

    offset += sizeof (glm::vec3);
    /* tangent a_tangent */
    m_pOpenGLShader->SetAttributeBuffer(3, GL_FLOAT, (void *)offset, 3, sizeof (MSVertexData));
    m_pOpenGLShader->EnableAttributeArray(3);

    offset += sizeof (glm::vec3);

    /* bitangent a_bitangent */
    m_pOpenGLShader->SetAttributeBuffer(4, GL_FLOAT, (void *)offset, 3, sizeof (MSVertexData));
    m_pOpenGLShader->EnableAttributeArray(4);


    m_pVAO->Release();
    m_pVBO->Release();
    m_pEBO->Release();
}

Since the number of vertices in a 3D model is very large (complex models may reach millions of vertices or more), a VBO, EBO, and VAO are needed; otherwise, repeatedly transferring data from the CPU to the GPU every frame would cause stuttering.

This step uploads the parsed model's vertex data and index data into GPU buffers for the shader.
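For reference, the attribute offsets above (0, 12, 20, 32, 44 bytes, with a 56-byte stride) imply a vertex layout like the following. This struct is an inferred sketch, since the real MSVertexData definition is not shown; plain float arrays are used so there are no padding surprises:

```cpp
#include <cstddef>

// Interleaved vertex layout matching the SetAttributeBuffer calls.
struct VertexLayout {
    float position[3];   // offset  0, attribute location 0
    float texCoord[2];   // offset 12, attribute location 1
    float normal[3];     // offset 20, attribute location 2
    float tangent[3];    // offset 32, attribute location 3
    float bitangent[3];  // offset 44, attribute location 4
};
static_assert(sizeof(VertexLayout) == 56, "stride must match the attribute setup");
```

Interleaving all attributes in one buffer keeps each vertex's data contiguous in GPU memory, which is why a single stride value works for every attribute.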

6. Perform rendering operations

void MSMesh::Render(MSGLCamera* pCamera)
{


    glm::mat4x4  modelMatrix = glm::mat4x4(1.0);

    glm::mat4x4  objectTransMat = glm::translate(glm::mat4(1.0f), glm::vec3(m_Objx, m_Objy, m_Objz));
    glm::mat4x4  objectScaleMat = glm::scale(glm::mat4(1.0f),glm::vec3(0.25f*m_ObjScale, 0.25f*0.6*m_ObjScale, 0.25f*m_ObjScale) );

    modelMatrix = objectTransMat * objectScaleMat;  // model matrix


    m_pOpenGLShader->Bind();  // use the shader program

    // upload the three matrices to the shader
    m_pOpenGLShader->SetUniformValue("u_modelMatrix", modelMatrix);
    m_pOpenGLShader->SetUniformValue("u_viewMatrix", pCamera->viewMatrix);
    m_pOpenGLShader->SetUniformValue("u_projectionMatrix", pCamera->projectionMatrix);

    // bind the diffuse sampler
    m_pOpenGLShader->SetUniformValue("texture_diffuse", 0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D,m_diffuseId);

    // bind the normal map sampler
    m_pOpenGLShader->SetUniformValue("texture_normal", 1);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D,m_normalId);

    // bind the specular sampler
    m_pOpenGLShader->SetUniformValue("texture_specular", 2);
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D,m_specularId);

    m_pOpenGLShader->SetUniformValue("m_shiness", 32.0f);
    /* viewer (camera) position */
    m_pOpenGLShader->SetUniformValue("u_viewPos", pCamera->GetEyePosition());

    /* light color components */
    m_pOpenGLShader->SetUniformValue("myLight.m_ambient", glm::vec3(0.5,0.5,0.5));
    m_pOpenGLShader->SetUniformValue("myLight.m_diffuse", glm::vec3(0.8,0.8,0.8));
    m_pOpenGLShader->SetUniformValue("myLight.m_specular", glm::vec3(0.9,0.9,0.9));

    m_pOpenGLShader->SetUniformValue("myLight.m_pos", glm::vec3(5.0,5.0,5.0));
    m_pOpenGLShader->SetUniformValue("myLight.m_c", 1.0f);
    m_pOpenGLShader->SetUniformValue("myLight.m_l", 0.09f);
    m_pOpenGLShader->SetUniformValue("myLight.m_q", 0.032f);


    m_pVAO->Bind();

    // indices are read from the bound EBO; the last argument is a byte offset
    glDrawElements(GL_TRIANGLES, m_indexBuffSize, GL_UNSIGNED_INT, (const void *)0);

    glBindTexture(GL_TEXTURE_2D,0);

    m_pOpenGLShader->Release();
    m_pVAO->Release();

}

Most of the rendering code consists of passing data to the shader through uniforms.

Because the VBO, EBO, and VAO were set up in advance, the draw itself only requires a single glDrawElements call.

7. Vertex shader

#version 300 es

layout(location = 0) in   vec3 a_position;      // vertex position
layout(location = 1) in   vec2 a_texturCoord;   // texture coordinate
layout(location = 2) in   vec3 a_normal;        // normal
layout(location = 3) in   vec3 a_tangent;       // tangent
layout(location = 4) in   vec3 a_bitangent;     // bitangent

uniform  mat4 u_projectionMatrix;           // projection matrix
uniform  mat4 u_viewMatrix;                 // view matrix
uniform  mat4 u_modelMatrix;                // model matrix

out  vec4 vary_pos;
out  vec2 vary_texCoord;
out  mat3 vary_tbnMatrix;


void main(void)
{
    mat4 mv_matrix = u_viewMatrix * u_modelMatrix;    // combine view and model matrices
    gl_Position = u_projectionMatrix * mv_matrix * vec4(a_position,1.0);

    vary_texCoord = a_texturCoord;

    vary_pos = u_modelMatrix *  vec4(a_position,1.0);
    // transpose: transposed matrix; inverse: inverse matrix; normalize: unit length
    vec3 normal = normalize(mat3(transpose(inverse(u_modelMatrix))) * a_normal);
    vec3 tangent = normalize(mat3(transpose(inverse(u_modelMatrix))) * a_tangent);
    vec3 bitangent = normalize(mat3(transpose(inverse(u_modelMatrix))) * a_bitangent);

    vary_tbnMatrix = mat3(tangent, bitangent, normal);  // the TBN matrix, passed to the fragment shader

}
  • The MVP matrix multiplied by the vertex position yields the clip-space coordinates written to gl_Position
  • transpose computes the transposed matrix
  • inverse computes the inverse matrix
  • normalize scales a vector to unit length

8. Fragment shader

#version 300 es
precision highp float;


struct Light
{
    vec3 m_pos;
    vec3 m_ambient;     // ambient light
    vec3 m_diffuse;     // diffuse light
    vec3 m_specular;    // specular light

    float m_c;          // constant attenuation term
    float m_l;          // linear attenuation term
    float m_q;          // quadratic attenuation term
};

uniform Light myLight;

uniform sampler2D   texture_diffuse;
uniform sampler2D   texture_normal;       // normal map sampler
uniform sampler2D   texture_specular;

uniform float       m_shiness;

uniform vec3       u_viewPos;

in vec4 vary_pos;
in vec2 vary_texCoord;
in  mat3 vary_tbnMatrix;

out vec4 fragColor;    // the GPU ultimately only needs this 4-component color

void main(void)
{
    vec3 normal = texture(texture_normal, vary_texCoord).rgb;  // sample the normal map
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(vary_tbnMatrix * normal);

    float dist = length(myLight.m_pos - vary_pos.xyz);

    float attenuation = 1.0f / (myLight.m_c + myLight.m_l * dist + myLight.m_q *dist * dist);

    // ambient light
    vec3 ambient = myLight.m_ambient * vec3(texture(texture_diffuse , vary_texCoord).rgb);
    // diffuse light
    vec3 lightDir = normalize(myLight.m_pos - vary_pos.xyz);
    float diff = max(dot(normal , lightDir) , 0.0f);
    vec3 diffuse = myLight.m_diffuse * diff * vec3(texture(texture_diffuse , vary_texCoord).rgb);

    // specular (mirror) reflection
    float specular_strength = 0.5;
    vec3 viewDir = normalize(u_viewPos - vary_pos.xyz);
    vec3 reflectDir = reflect(-lightDir , normal);

    float spec =  pow(max(dot(viewDir , reflectDir) , 0.0f) , m_shiness);

    vec3 specular = specular_strength * myLight.m_specular * spec;

    vec3 result = ambient + diffuse + specular;   // sum the three lighting components
    fragColor = vec4(result, 1.0f);

}
  • dot: the dot product; for unit vectors it gives the cosine of the angle between them
  • reflect: computes the reflection vector, used for the specular term
  • The essence of the fragment shader is to output one 4-component color per fragment to the GPU
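GLSL's reflect(I, N) computes I - 2·dot(N, I)·N (with N unit length); a self-contained check of the mirror direction used for the specular term:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

float dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Same definition as GLSL's built-in reflect(I, N); N must be normalized.
V3 reflect3(V3 i, V3 n)
{
    float d = 2.0f * dot3(n, i);
    return { i.x - d * n.x, i.y - d * n.y, i.z - d * n.z };
}
```

An incoming direction hitting a flat surface with normal (0, 0, 1) bounces off with its z component flipped, which is exactly the mirror direction compared against the view direction in the spec term.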

Origin blog.csdn.net/u014078003/article/details/128010355