LearnOpenGL Notes->Model

Model loading library

A very popular model import library is Assimp, which stands for Open Asset Import Library. Assimp can import many different model file formats (and can also export some), loading all of the model's data into Assimp's common data structures. Once Assimp has loaded the model, we can extract all the data we need from those data structures. Because Assimp's data structures stay the same regardless of the imported file format, they abstract us away from the different formats and let us access the data we need in a uniform way.

When using Assimp to import a model, it usually loads the entire model into a scene object that contains all the data of the imported model/scene. Assimp loads the scene as a hierarchy of nodes (Node). Each node contains indices into data stored in the scene object, and each node can have any number of child nodes. A (simplified) model of Assimp's data structure looks like this:

  • All the data of the scene/model, like the materials and the meshes, is contained in the Scene object. The Scene object also contains a reference to the root node of the scene.

  • The scene's Root node may contain child nodes (like all other nodes) and has a set of indices that point to mesh data stored in the scene object's mMeshes array. The mMeshes array of the Scene stores the actual Mesh objects; the mMeshes array of a node only stores indices into the scene's mesh array.

  • A Mesh object itself contains all the relevant data needed for rendering, such as vertex positions, normal vectors, texture coordinates, faces, and object materials.

  • A mesh contains several faces. A Face represents a render primitive of the object (triangles, squares, points). A face contains the indices of the vertices that form the primitive. Because the vertices and the indices are separated, rendering via an index buffer is very simple (see the Hello Triangle chapter).

  • Finally, a mesh also contains a Material object with functions that let us retrieve the material properties of the object, such as colors and texture maps (e.g. diffuse and specular maps).

So, the first thing we need to do is load an object into a Scene object, iterate over the nodes to get the corresponding Mesh objects (we need to recursively search each node's children), and process each Mesh object to retrieve its vertex data, indices, and material properties. The result is a collection of mesh data that we will contain in a single Model object.

Mesh

When modeling objects in modeling tools, artists generally don't create an entire model out of a single shape; usually each model consists of several sub-models/shapes. Each single shape of such a combined model is called a mesh. Take a humanoid character, for example: artists usually model the head, limbs, clothes, and weapons as separate components, and the combination of all these meshes forms the final model. A mesh is the minimal unit we need to draw an object in OpenGL (vertex data, indices, and material properties). A model (usually) consists of several meshes.


By using Assimp we can load many different models into the program, but once loaded they are all stored in Assimp's data structures. Eventually we still have to convert that data to a format OpenGL understands so that we can render the objects. We learned in the previous section that a mesh represents a single drawable entity, so let's start by defining a mesh class of our own.

First, let's review what we have learned so far and think about the minimum data a mesh needs. A mesh should at least need a set of vertices, where each vertex contains a position vector, a normal vector, and a texture coordinate vector. A mesh should also contain indices for indexed drawing, and material data in the form of textures (diffuse/specular maps).

Now that we have the minimum requirements for a mesh class, we can define a vertex in OpenGL:

struct Vertex {
    glm::vec3 Position;
    glm::vec3 Normal;
    glm::vec2 TexCoords;
};

We store all the needed vectors into a structure called Vertex, which we can use to index each vertex attribute. In addition to the Vertex structure, we also need to organize the texture data into a Texture structure.

struct Texture {
    unsigned int id;
    string type;
};

We store the texture id and its type, such as a diffuse map or a specular map.

Knowing the implementation of vertices and textures, we can start to define the structure of the mesh class:

class Mesh {
    public:
        /*  Mesh data  */
        std::vector<Vertex> vertices;
        std::vector<unsigned int> indices;
        std::vector<Texture> textures;
        /*  Functions  */
        Mesh(std::vector<Vertex> vertices, std::vector<unsigned int> indices, std::vector<Texture> textures);
        void Draw(Shader shader);
    private:
        /*  Render data  */
        unsigned int VAO, VBO, EBO;
        /*  Functions  */
        void setupMesh();
};  

In the Mesh constructor we give the mesh all the necessary data, in the setupMesh function we initialize the buffers, and finally we draw the mesh with the Draw function. Note that we pass a shader to the Draw function; passing the shader to the mesh class lets us set some uniforms before drawing (such as linking samplers to texture units).

The content of the constructor is straightforward: we simply set the class's public variables from the constructor's parameters. We also call the setupMesh function in the constructor:

Mesh(vector<Vertex> vertices, vector<unsigned int> indices, vector<Texture> textures)
{
    // copy the three arrays that were passed in;
    this->vertices = vertices;
    this->indices = indices;
    this->textures = textures;

    setupMesh(); // used to initialize the buffers;
}

We discuss the setupMesh function next.

Initialization

Thanks to the constructor, we now have large lists of mesh data that we can use for rendering. Before that, we still need to configure the appropriate buffers and specify the vertex shader layout via vertex attribute pointers. These concepts should all be familiar to you by now, but this time we spice it up a bit by using vertex data stored in a struct:

void setupMesh(){
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);

    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);

    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW);  

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), 
                 &indices[0], GL_STATIC_DRAW);

    // vertex positions
    glEnableVertexAttribArray(0);   
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
    // vertex normals
    glEnableVertexAttribArray(1);   
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, Normal));
    // vertex texture coordinates
    glEnableVertexAttribArray(2);   
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, TexCoords));

    glBindVertexArray(0);
}  

A great property of structs in C++ is that their memory layout is sequential: if we were to represent a struct as an array of data, it would contain the struct's variables in order, which translates directly to the array of floats (actually bytes) that we need for an array buffer. For example, for a filled Vertex struct, its memory layout is equivalent to:

Vertex vertex;
vertex.Position  = glm::vec3(0.2f, 0.4f, 0.6f);
vertex.Normal    = glm::vec3(0.0f, 1.0f, 0.0f);
vertex.TexCoords = glm::vec2(1.0f, 0.0f);
// = [0.2f, 0.4f, 0.6f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f];

Thanks to this useful property, we can directly pass a pointer to a large list of Vertex structs as the buffer's data, and they translate perfectly to what glBufferData expects as its argument:

// here we pass a pointer to a list of Vertex structs as the buffer's data;
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW);

Naturally, the sizeof operation can also be used on a structure to calculate its byte size. This should be 32 bytes (8 floats * 4 bytes each).

Another great use of structs is the offsetof(s, m) preprocessor macro, whose first argument is a struct and whose second argument is the name of a member of that struct. The macro returns the byte offset of that member from the start of the struct. This is perfect for defining the offset parameter of the glVertexAttribPointer function:

glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, Normal)); 
// (void*)offsetof(Vertex, Normal) means the attribute starts at the offset of Normal within the Vertex struct;

Using offsetof we set the byte offset of the normal vector to the offset of the normal member within the struct, which is 3 floats, i.e. 12 bytes. Note that we also set the stride parameter to the size of the Vertex struct.

Using a struct like this not only makes the code more readable, it also lets us easily extend it: if we want another vertex attribute, we simply add it to the struct, and because of this flexibility the rendering code won't break.

Rendering

The last function we need to define for the Mesh class is its Draw function. Before actually rendering the mesh, we need to bind the appropriate textures before calling glDrawElements. However, this is somewhat difficult, because we don't know beforehand how many textures the mesh has (if any) or which types they are. So how do we set the texture units and samplers in the shader?

To solve this problem we define a naming convention: each diffuse texture is named texture_diffuseN, and each specular texture is named texture_specularN, where N is any number from 1 up to the maximum allowed number of texture samplers. Say we have 3 diffuse textures and 2 specular textures for a particular mesh; their texture samplers would then be called:

uniform sampler2D texture_diffuse1;
uniform sampler2D texture_diffuse2;
uniform sampler2D texture_diffuse3;
uniform sampler2D texture_specular1;
uniform sampler2D texture_specular2;
// not all of these are necessarily needed in practice, and some meshes may have more; keep this in mind when naming the texture units later;

Following this convention we can define as many texture samplers in the shader as we want, and if a mesh actually contains (that many) textures we know exactly what their names will be. It also lets us process any number of textures on a single mesh, and the developer is free to choose how many to use; he only has to define the right samplers (although defining fewer would waste a few bindings and uniform calls).

Final rendering code:

void Draw(Shader shader){
    unsigned int diffuseNr = 1;
    unsigned int specularNr = 1;
    for(unsigned int i = 0; i < textures.size(); i++)
    {
        glActiveTexture(GL_TEXTURE0 + i); // activate the proper texture unit before binding
        // retrieve the texture number (the N in diffuse_textureN)
        string number;
        string name = textures[i].type;
        if(name == "texture_diffuse")
            number = std::to_string(diffuseNr++);
        else if(name == "texture_specular")
            number = std::to_string(specularNr++);

        shader.setInt(("material." + name + number).c_str(), i);
        glBindTexture(GL_TEXTURE_2D, textures[i].id);
    }
    glActiveTexture(GL_TEXTURE0);

    // draw the mesh
    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}

We first calculate the N suffix for each texture type and concatenate it to the texture type string to get the corresponding uniform name. We then look up the matching sampler, set its location value to the currently active texture unit, and bind the texture. This is also why we need the shader in the Draw function. We additionally prefixed the final uniform name with "material." because we store the textures in a material struct (this may differ per implementation).

Here is the current code of the Mesh class:

#ifndef MESH_H
#define MESH_H
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <string>
#include <vector>
#include <iostream>
#include "shader_s.h"

struct Vertex {
    glm::vec3 Position;
    glm::vec3 Normal;
    glm::vec2 TexCoords;
}; // we store all needed vectors in a struct called Vertex, which we can use to index each vertex attribute.

// we also gather the texture data into a Texture struct;
// here we store the texture's id and its type;
struct Texture {
    unsigned int id;   // the texture's id;
    std::string type;  // the texture's type, e.g. diffuse or specular map; texture_specular/texture_diffuse;
    std::string path;  // the texture's file path;
};

// the Mesh class;
class Mesh {
    public:
        // Mesh data;
        std::vector<Vertex> vertices;      // vertex array;
        std::vector<unsigned int> indices; // index array;
        std::vector<Texture> textures;     // texture array;
        // functions; (the Mesh constructor takes the three arrays);
        Mesh(std::vector<Vertex> vertices, std::vector<unsigned int> indices, std::vector<Texture> textures)
        {
            this->vertices = vertices;
            this->indices = indices;
            this->textures = textures;

            setupMesh(); // call setupMesh to initialize the buffers!!!;
        }
        // next comes Draw, which needs the shader; [passing the shader into the mesh class lets us set some uniforms before drawing (like linking samplers to texture units)]
        
        void Draw(Shader shader) // takes the shader;
        {
            // two main steps:
            // 1. bind each texture to its corresponding texture unit;
            // 2. draw all of the mesh's triangles using the VAO, EBO and other buffers set up earlier;
            // we name every specular map texture_specularN (N = 1, 2, 3...) and likewise every diffuse map texture_diffuseN;
            unsigned int diffuseNr = 1; // counters matching the sampler numbers in the material;
            unsigned int specularNr = 1;
            for (unsigned int i = 0; i < textures.size(); i++) // for every texture passed in;
            {
                glActiveTexture(GL_TEXTURE0 + i); // first activate the texture unit; we can simply add i;
                std::string number;
                std::string name = textures[i].type; // get the texture type;
                if (name == "texture_diffuse")       // if it is a diffuse map;
                    number = std::to_string(diffuseNr++);
                else if (name == "texture_specular") // if it is a specular map;
                    number = std::to_string(specularNr++); // advance the specular sampler counter;

                shader.setInt(("material."+name+number).c_str(), i); // point this texture's sampler at the current texture unit;
                glBindTexture(GL_TEXTURE_2D, textures[i].id); // bind the texture to the GL_TEXTURE_2D target;
                // in preparation for the draw call;
            }
            // draw the mesh;
            glBindVertexArray(VAO); // bind the vertex array; (this binds the EBO and VBO);
            glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0); // draw the triangles based on the index array;
            glBindVertexArray(0); // unbind;
        }
   
    private:
        // render data;
        unsigned int VAO, VBO, EBO; // the three buffer/array objects;
        // the setupMesh function; (initializes the buffers);
        void setupMesh()
        {
            glGenVertexArrays(1, &VAO);
            glBindVertexArray(VAO); // bind the VAO;

            glGenBuffers(1, &VBO);
            glBindBuffer(GL_ARRAY_BUFFER, VBO);
            glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], GL_STATIC_DRAW);

            glGenBuffers(1, &EBO);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), &indices[0], GL_STATIC_DRAW);

            // enable the position, normal, and texture coordinate attributes;
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, Normal));
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, TexCoords));
            glEnableVertexAttribArray(2);

            glBindVertexArray(0); // unbind;
        }
};

#endif

Model

class Model {
    public:
        // constructor; loads the file;
        Model(std::string path)
        {
            loadModel(path);
        }
        void Draw(const Shader& shader)
        {
            for (unsigned int i = 0; i < meshes.size(); i++)
                meshes[i].Draw(shader);
        }
    private:
        /* model data */ // the file directory, the meshes, and a container of loaded texture paths;
        std::string directory;                // directory of the loaded file;
        std::vector<Mesh> meshes;             // container of type Mesh holding the model file's meshes;
        std::vector<Texture> textures_loaded; // records the textures that have already been loaded;
        /* the individual functions */
        void loadModel(std::string path);     // loads the model data into a scene with Assimp;
        void processNode(aiNode* node, const aiScene* scene); // processes each node recursively;
        Mesh processMesh(aiMesh* mesh, const aiScene* scene); // processes each mesh;
        std::vector<Texture> loadMaterialTextures(aiMaterial* mat, aiTextureType type, std::string typeName);
};

The Model class contains a vector of Mesh objects, and its constructor loads the file directly via loadModel;

The Draw function traverses all meshes and calls their respective Draw functions.

void Draw(Shader shader){
    for(unsigned int i = 0; i < meshes.size(); i++)
        meshes[i].Draw(shader);
    // call each Mesh's own Draw function to draw every mesh;
}

Import 3D models into OpenGL

To import a model and convert it into our own data structure, we first need to include the header file corresponding to Assimp.

#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

The first function that needs to be called is loadModel, which is called directly from the constructor . In loadModel, we use Assimp to load the model into a data structure called scene in Assimp . You may remember from the first tutorial in the model loading chapter that this is the root object of the Assimp data interface. Once we have this scene object, we have access to all the data we need in the loaded model.

The great thing about Assimp is that it abstracts away all the technical details of loading different file formats and does it all with just one line of code:

Assimp::Importer importer; // create an importer;
const aiScene *scene = importer.ReadFile(path, aiProcess_Triangulate | aiProcess_FlipUVs);
// ReadFile's first parameter is the file path, the second is a set of post-processing options;
// note that the aiProcess_FlipUVs post-processing option fixes upside-down textures;

We first declare an Importer from the Assimp namespace and then call its ReadFile function. The function expects a file path and, as its second parameter, several post-processing options. Besides loading the file, Assimp allows us to specify options that force it to do extra calculations/operations on the imported data. By setting aiProcess_Triangulate we tell Assimp that if the model does not (entirely) consist of triangles, it should transform all of the model's primitive shapes into triangles. aiProcess_FlipUVs flips the texture coordinates on the y-axis during processing (you may remember from the texture tutorial that most images in OpenGL are reversed around the y-axis, so this post-processing option fixes that). A few other useful options are:

  • aiProcess_GenNormals : If the model does not contain normal vectors, create normals for each vertex.

  • aiProcess_SplitLargeMeshes : Split a larger mesh into smaller sub-meshes. This is very useful if your rendering has a maximum number of vertices and can only render smaller meshes.

  • aiProcess_OptimizeMeshes : Contrary to the previous option, it will splice multiple small meshes into one large mesh, reducing draw calls for optimization.

Assimp provides many useful post-processing instructions; you can find them all here. Loading a model with Assimp is actually (as you can see) that easy. The hard work is using the returned scene object to convert the loaded data to an array of Mesh objects.

The complete loadModel function will look like this:

void loadModel(string path){
    Assimp::Importer import;
    const aiScene *scene = import.ReadFile(path, aiProcess_Triangulate | aiProcess_FlipUVs);    

    if(!scene || scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !scene->mRootNode) 
    {
        cout << "ERROR::ASSIMP::" << import.GetErrorString() << endl;
        return;
    }
    directory = path.substr(0, path.find_last_of('/')); // get the file's directory, i.e. the parent directory of the file's location;

    processNode(scene->mRootNode, scene); // pass in the root node and recursively traverse all child nodes;
}

If no errors occurred, we want to process all of the scene's nodes, so we pass the first node (the root node) to the recursive processNode function. Because each node (possibly) contains several child nodes, we want to process the node passed as parameter first and then continue processing all of its children, and so on. This fits a recursive structure, so we define a recursive function: a function that does some processing and then calls itself with different parameters until a certain condition is met. In our case the exit condition (Exit Condition) is that all nodes have been processed.

You may remember that in Assimp's structure each node contains a set of mesh indices, where each index points to a specific mesh in the scene object. What we want to do next is retrieve these mesh indices, fetch each mesh, process it, and then repeat the whole process for each of the node's children. The content of the processNode function is as follows:

// index all nodes, look up the corresponding Mesh objects in the scene, and return them;
// don't forget to recurse into the child nodes; in the end this function finds the meshes of all nodes;

void processNode(aiNode *node, const aiScene *scene){
    // process all of the node's meshes (if any)
    for(unsigned int i = 0; i < node->mNumMeshes; i++)
    {
        aiMesh *mesh = scene->mMeshes[node->mMeshes[i]]; 
        meshes.push_back(processMesh(mesh, scene));         
    }
    // then repeat the process for each of its children
    for(unsigned int i = 0; i < node->mNumChildren; i++)
    {
        processNode(node->mChildren[i], scene);
    }
}
// the returned Mesh objects end up stored in the meshes list;

We first check the mesh index of each node and index the scene's mMeshes array to get the corresponding mesh. The returned mesh will be passed to the processMesh function, which will return a Mesh object, which we can store in the meshes list/vector.

Once all the meshes have been processed, we iterate over all of the node's children and call the same processNode function for each of them. This process stops once a node no longer has any children.

// the processMesh function mentioned above returns a Mesh object;
// processMesh handles each mesh of the passed-in scene, configuring the vertex, normal, and material attributes;

Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene)
{
    // set up three temporary arrays;
    std::vector<Vertex> tempVertices;
    std::vector<unsigned int> tempIndices;
    std::vector<Texture> tempTextures;

    // Vertices;
    // walk through each of the mesh's vertices; >----> vertex processing;
    for (unsigned int i = 0; i < mesh->mNumVertices; i++)
    {
        Vertex vertex;    // a temporary struct;
        glm::vec3 vector; // used to receive the data;
        // position;
        vector.x = mesh->mVertices[i].x;
        vector.y = mesh->mVertices[i].y;
        vector.z = mesh->mVertices[i].z;
        vertex.Position = vector; // store the vertex position;
        // normal;
        vector.x = mesh->mNormals[i].x;
        vector.y = mesh->mNormals[i].y;
        vector.z = mesh->mNormals[i].z;
        vertex.Normal = vector; // store the vertex normal;
        // TexCoords; note that for texture coordinates we first check whether the first set of texture coordinates exists;
        if (mesh->mTextureCoords[0])
        {
            glm::vec2 temVector;
            temVector.x = mesh->mTextureCoords[0][i].x;
            temVector.y = mesh->mTextureCoords[0][i].y;
            // here we take the i-th texture coordinate of the mesh's first set of texture coordinates;
            vertex.TexCoords = temVector; // store the texture coordinates;
        }
        else
        {
            // if there are no texture coordinates, set them to 0.0f, 0.0f;
            vertex.TexCoords = glm::vec2(0.0f);
        }
        tempVertices.push_back(vertex); // finally push the vertex struct into the array;
    }
    // Indices;
    // process the indices; the indices come from the faces;
    for (unsigned int i = 0; i < mesh->mNumFaces; i++)
    {
        // walk through every face and push each face's indices into the indices array;
        aiFace face = mesh->mFaces[i]; // one face;
        // a face has several indices; loop below to add each one to the index array;
        for (unsigned int j = 0; j < face.mNumIndices; j++)
        {
            tempIndices.push_back(face.mIndices[j]); // add each face's indices to the index array;
        }
    }

    // Textures; materials;
    // just like with nodes, a mesh only contains an index to a material object.
    if (mesh->mMaterialIndex >= 0) // if the mesh has a material index;
    {
        // index the scene object to get the mesh's material data
        aiMaterial* material = scene->mMaterials[mesh->mMaterialIndex]; // get the mesh's material via the index;
        // below we handle both texture types, adding the resulting Texture structs to the textures array;
        std::vector<Texture> diffuseMap = loadMaterialTextures(material, aiTextureType_DIFFUSE, "texture_diffuse");
        tempTextures.insert(tempTextures.end(), diffuseMap.begin(), diffuseMap.end());
        std::vector<Texture> specularMap = loadMaterialTextures(material, aiTextureType_SPECULAR, "texture_specular");
        tempTextures.insert(tempTextures.end(), specularMap.begin(), specularMap.end());
        // note how insert works: taking specular as the example, we append at the end of tempTextures the whole specularMap array, from its begin to its end;
    }
    return Mesh(tempVertices, tempIndices, tempTextures); // finally return a Mesh;
}

Processing a mesh is a three-part process: retrieve all the vertex data, retrieve the mesh's indices, and retrieve the relevant material data. The processed data is stored in three vectors, which we use to build a Mesh object and return it to the function's caller.

Retrieving the vertex data is very simple: we define a Vertex struct that we add to the vertices array after each loop iteration. We loop over all vertices in the mesh (retrieved via mesh->mNumVertices). Within each iteration we fill the struct with all the relevant data. For vertex positions this is done as follows:

glm::vec3 vector; 
vector.x = mesh->mVertices[i].x;
vector.y = mesh->mVertices[i].y;
vector.z = mesh->mVertices[i].z; 
vertex.Position = vector;

Note that we define a temporary vec3 for transferring Assimp's data. This temporary variable is needed because Assimp maintains its own data types for vectors, matrices, strings, etc., and they don't convert perfectly to GLM's data types.

Assimp calls its vertex position array mVertices, which is admittedly not very intuitive.

The steps for processing normals are similar:

vector.x = mesh->mNormals[i].x;
vector.y = mesh->mNormals[i].y;
vector.z = mesh->mNormals[i].z;
vertex.Normal = vector;

Texture coordinates are handled similarly, but Assimp allows a model to have up to 8 different sets of texture coordinates per vertex. We won't use that many; we only care about the first set. We also want to check whether the mesh actually contains texture coordinates (which may not always be the case):

if(mesh->mTextureCoords[0]) // does the mesh contain texture coordinates?
{
    glm::vec2 vec;
    vec.x = mesh->mTextureCoords[0][i].x; 
    vec.y = mesh->mTextureCoords[0][i].y;
    vertex.TexCoords = vec;
}
else
    vertex.TexCoords = glm::vec2(0.0f, 0.0f);

The vertex structure is now filled with the required vertex attributes, and we will push it into the tail of the vertices vector at the end of the iteration. This process is repeated for each mesh vertex.

Indices

Assimp's interface defines each mesh as having an array of faces, where each face represents a single primitive, which in our case (due to the aiProcess_Triangulate option) are always triangles. A face contains the indices that define which vertices we need to draw, and in what order, for each primitive. So if we iterate over all the faces and store each face's indices in the indices vector, we're all set:

for(unsigned int i = 0; i < mesh->mNumFaces; i++)
{
    aiFace face = mesh->mFaces[i];
    for(unsigned int j = 0; j < face.mNumIndices; j++)
        indices.push_back(face.mIndices[j]);
}

Once the outer loop has finished, we have a complete set of vertex and index data that we can use to draw the mesh via the glDrawElements function. However, to finish the discussion and add some detail to the mesh, we also want to process the mesh's material.

Material

Like nodes, a mesh only contains an index into a material object. To retrieve the mesh's actual material we need to index the scene's mMaterials array. The mesh's material index is set in its mMaterialIndex property, which we can also query to check whether the mesh contains a material at all:

// first check the mesh's material index; if one exists, retrieve the actual material from the scene;
if(mesh->mMaterialIndex >= 0)
{
    aiMaterial *material = scene->mMaterials[mesh->mMaterialIndex];
    std::vector<Texture> diffuseMaps = loadMaterialTextures(material, 
                                        aiTextureType_DIFFUSE, "texture_diffuse");
    textures.insert(textures.end(), diffuseMaps.begin(), diffuseMaps.end());
    std::vector<Texture> specularMaps = loadMaterialTextures(material, 
                                        aiTextureType_SPECULAR, "texture_specular");
    textures.insert(textures.end(), specularMaps.begin(), specularMaps.end());
}

We first retrieve the aiMaterial object from the scene's mMaterials array. Then we want to load the mesh's diffuse and/or specular maps. A material object internally stores an array of texture locations for each texture type. The different texture types are prefixed with aiTextureType_. We use a utility function called loadMaterialTextures to retrieve the textures from the material. The function returns a vector of Texture structs that we store at the end of the model's textures vector.

The loadMaterialTextures function iterates over all the texture locations of the given texture type, retrieves the texture's file location, then loads and generates the texture, and stores the information in a Texture struct. It looks like this:

std::vector<Texture> loadMaterialTextures(aiMaterial *mat, aiTextureType type, string typeName)
{
    std::vector<Texture> textures;
    for(unsigned int i = 0; i < mat->GetTextureCount(type); i++)
    {
        aiString str;
        mat->GetTexture(type, i, &str);
        Texture texture;
        texture.id = TextureFromFile(str.C_Str(), directory); // load the texture from its file location;
        // fill in the three fields of the texture;
        texture.type = typeName;
        texture.path = str.C_Str();
        textures.push_back(texture);
    }
    return textures;
}

We first check the number of textures stored in the material through the GetTextureCount function , which requires a texture type . We will use GetTexture to get the file location of each texture , which will store the result in an aiString . We next use another utility function called TextureFromFile , which will load a texture (using stb_image.h ) and return the ID of the texture .

The TextureFromFile function is simply the sequence of operations from the earlier texture chapter; it loads the image and returns the texture's ID:

unsigned int TextureFromFile(const char* path, const std::string& directory)
{
    std::string filename = path; // convert the C string to a std::string;
    filename = directory + "/" + filename; // build the full path;

    unsigned int textureID;
    glGenTextures(1, &textureID);

    int width, height, nrchannels; // first load the image data;
    unsigned char* data = stbi_load(filename.c_str(), &width, &height, &nrchannels, 0);
    if (data)
    {
        GLenum format;
        if (nrchannels == 1)
            format = GL_RED;
        else if (nrchannels == 3)
            format = GL_RGB;
        else if (nrchannels == 4)
            format = GL_RGBA;

        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // free the image data;
        stbi_image_free(data);
    }
    else
    {
        std::cout << "Failed to load image: " << filename << std::endl;
        stbi_image_free(data); // remember to free the data here as well;
    }
    return textureID; // return the texture's ID;
}

Major optimization

We will store every loaded texture in a vector kept as a member of the Model class, so that on later lookups we can skip the querying and loading steps and save time;

Next we store all loaded textures in another vector, declared as a private variable at the top of the model class:

vector<Texture> textures_loaded;

Then, in the loadMaterialTextures function, we compare the texture path against all textures stored in the textures_loaded vector to see whether the current texture's path equals any of them. If so, we skip the texture loading/generation part and simply use the located texture struct as the mesh's texture. The updated function looks like this:

vector<Texture> loadMaterialTextures(aiMaterial *mat, aiTextureType type, string typeName)
{
    vector<Texture> textures;
    for(unsigned int i = 0; i < mat->GetTextureCount(type); i++)
    {
        aiString str;
        mat->GetTexture(type, i, &str);
        bool skip = false;
        for(unsigned int j = 0; j < textures_loaded.size(); j++)
        {
            // note how the paths are compared as C strings here;
            if(std::strcmp(textures_loaded[j].path.data(), str.C_Str()) == 0)
            { 
                // if it already exists, just reuse the cached texture;
                textures.push_back(textures_loaded[j]);
                skip = true; 
                break;
            }
        }
        if(!skip)
        {   // the texture has not been loaded yet, so load it
            Texture texture;
            texture.id = TextureFromFile(str.C_Str(), directory);
            texture.type = typeName;
            texture.path = str.C_Str();
            textures.push_back(texture);
            textures_loaded.push_back(texture); // add it to the loaded-texture cache
        }
    }
    return textures;
}
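As a side note, the linear scan with std::strcmp works fine for a handful of textures, but a hash map keyed by the path string makes the cache lookup O(1) on average. The sketch below is only an illustration of that idea: the Texture struct here is a minimal stand-in for the one in the mesh class, and the id assignment is a placeholder for the real TextureFromFile call.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Minimal stand-in for the tutorial's Texture struct.
struct Texture {
    unsigned int id = 0;
    std::string type;
    std::string path;
};

// Cache keyed by path: average O(1) lookup instead of a linear
// scan with std::strcmp over textures_loaded.
std::unordered_map<std::string, Texture> textureCache;

Texture getOrLoadTexture(const std::string& path) {
    auto it = textureCache.find(path);
    if (it != textureCache.end())
        return it->second; // cache hit: reuse the existing texture
    Texture t;
    t.id = static_cast<unsigned int>(textureCache.size() + 1); // placeholder for TextureFromFile(path, directory)
    t.path = path;
    textureCache[path] = t;
    return t;
}
```

With this layout, requesting the same path twice returns the same cached entry instead of re-reading the file.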

Lighting and mapping

Here is the code after I added the specular map and lighting:

fragment shader

#version 330 core
out vec4 FragColor;

struct Material {
    sampler2D diffuse;
    sampler2D specular;    
    float shininess;
}; 

// directional light struct;
struct DirLight{
    vec3 direction;// direction of the directional light;

    vec3 ambient;
    vec3 diffuse;
    vec3 specular;// the three components of the directional light;
};
// point light struct;
struct PointLight{
    vec3 position;// a point light needs no single direction; the light direction is the line from the light to the fragment;
    
    float constant;
    float linear;
    float quadratic;
    
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

// spotlight struct;
struct SpotLight{
    vec3 position;
    vec3 direction;
    float cutOff;
    float outerCutOff;
  
    float constant;
    float linear;
    float quadratic;
  
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;       
};

#define NR_POINT_LIGHTS 2 // must match the number of point lights actually set in main.cpp; an unset light has constant = 0 and divides by zero in the attenuation term

in vec3 FragPos;  
in vec3 Normal;  
in vec2 TexCoords;
  
uniform vec3 viewPos;
uniform Material material;
uniform DirLight dirlight;// a directional-light struct variable;
uniform PointLight pointlights[NR_POINT_LIGHTS];// an array of NR_POINT_LIGHTS PointLight structs;
uniform SpotLight spotlight;// a spotlight struct variable;

vec3 CalcDirLight(DirLight light,vec3 normal,vec3 viewDir);// directional light;
vec3 CalcPointLight(PointLight light,vec3 normal,vec3 fragPos,vec3 viewDir);// point light;
vec3 CalcSpotLight(SpotLight light,vec3 normal,vec3 fragPos,vec3 viewDir);// spotlight;

// diffuse samplers;
uniform sampler2D texture_diffuse0;
uniform sampler2D texture_diffuse1;
uniform sampler2D texture_diffuse2;
uniform sampler2D texture_diffuse3;

// specular samplers;
uniform sampler2D texture_specular0;
uniform sampler2D texture_specular1;
uniform sampler2D texture_specular2;
uniform sampler2D texture_specular3;

void main()
{
    vec3 norm = normalize(Normal);// normal direction;
    vec3 viewDir = normalize(viewPos - FragPos);// view direction;
    // directional light;
    vec3 result = CalcDirLight(dirlight,norm,viewDir);
    // point lights;
    for(int i = 0; i < NR_POINT_LIGHTS; i++)
        result += CalcPointLight(pointlights[i],norm, FragPos, viewDir);    
    // spotlight;
    result += CalcSpotLight(spotlight, norm, FragPos, viewDir); 
    // output the final fragment;
    FragColor = vec4(result,1.0);// the final fragment color is the sum of all three light types;
} 

// computes the final color contribution of the directional light (returns a vec3);
vec3 CalcDirLight(DirLight light,vec3 normal,vec3 viewDir)// takes the directional-light struct, the fragment's normalized normal, and the viewDir vector;
{
    vec3 lightDir = normalize(-light.direction);
    // diffuse shading;
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading;
    vec3 reflectDir = reflect(-lightDir,normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
    // combine the results;
    vec3 ambient = light.ambient * vec3(texture(material.diffuse,TexCoords));
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.diffuse,TexCoords));
    vec3 specular = light.specular * spec * vec3(texture(material.specular,TexCoords));
    return (ambient + diffuse + specular);
}

// computes the final color contribution of a point light;
vec3 CalcPointLight(PointLight light,vec3 normal,vec3 fragPos,vec3 viewDir)// normal is already normalized;
{
    vec3 lightDir = normalize(light.position - fragPos);
    // diffuse shading
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
    // attenuation
    float distance    = length(light.position - fragPos);
    float attenuation = 1.0 / (light.constant + light.linear * distance + 
                 light.quadratic * (distance * distance));    
    // combine the results
    vec3 ambient  = light.ambient  * vec3(texture(material.diffuse, TexCoords));
    vec3 diffuse  = light.diffuse  * diff * vec3(texture(material.diffuse, TexCoords));
    vec3 specular = light.specular * spec * vec3(texture(material.specular, TexCoords));
    ambient  *= attenuation;
    diffuse  *= attenuation;
    specular *= attenuation;
    return (ambient + diffuse + specular);
}
// computes the final color contribution of the spotlight;
// takes the spotlight struct, the normalized normal, the fragment position, and the viewDir view direction;
vec3 CalcSpotLight(SpotLight light,vec3 normal,vec3 fragPos,vec3 viewDir)
{
    vec3 lightDir = normalize(light.position - fragPos);// light direction;
    // diffuse shading;
    float diff = max(0.0,dot(lightDir,normal));
    // specular shading;
    vec3 reflectDir = reflect(-lightDir,normal);
    float spec = pow(max(0.0,dot(reflectDir,viewDir)),material.shininess);
    
    // attenuation;
    float distance = length(light.position - fragPos);
    float atten = 1.0 / (light.constant+distance * light.linear+light.quadratic*(distance*distance));
    // soft edges;
    float theta = dot(lightDir,normalize(-light.direction));
    float epsilon = light.cutOff - light.outerCutOff;
    float intensity = clamp((theta - light.outerCutOff) / epsilon,0.0,1.0);// clamp the intensity to the range [0, 1];
    // combine the results;
    vec3 ambient = light.ambient * vec3(texture(material.diffuse,TexCoords));
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.diffuse,TexCoords));
    vec3 specular = light.specular * spec * vec3(texture(material.specular,TexCoords));
    
    // apply the soft-edge intensity and the attenuation;
    diffuse *= intensity * atten;
    specular *= intensity * atten;
    ambient *= intensity * atten;
    return (ambient+diffuse+specular);
}
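To get a feel for the attenuation term used in CalcPointLight and CalcSpotLight, here is a small C++ sketch of the same formula, using the constants set in main.cpp below (constant = 1.0, linear = 0.09, quadratic = 0.032):

```cpp
#include <cmath>

// 1 / (Kc + Kl*d + Kq*d^2), the same attenuation formula as the shader.
float attenuation(float distance,
                  float constant = 1.0f,
                  float linear = 0.09f,
                  float quadratic = 0.032f) {
    return 1.0f / (constant + linear * distance
                   + quadratic * distance * distance);
}
```

At distance 0 the factor is exactly 1.0 (no attenuation); at distance 10 it is 1 / (1 + 0.9 + 3.2) ≈ 0.196 and keeps falling with distance. Note that a constant term of 0 would divide by zero right at the light's position, which is why leaving point-light uniforms unset is dangerous.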

main.cpp

#include <glad/glad.h> 
#include <GLFW/glfw3.h>
#include <iostream>
#include<E:\OpenGl\练习1.1\3.3.shader_class\shader s.h>
//the next three lines include the GLM headers;
#include <E:\OpenGl\glm\glm-master\glm\glm.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\matrix_transform.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\type_ptr.hpp>

//#define STB_IMAGE_IMPLEMENTATION
//#include <E:/OpenGl/stb_image.h/stb-master/stb_image.h>//these two lines would pull in the stb_image library;

#include"Model s.h"
#include"Mesh s.h"

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow* window);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void scroll_back(GLFWwindow* window, double xoffset, double yoffset);

//three global variables controlling the camera position;
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);

float deltatime = 0.0f;//time between the current frame and the last frame;
float lastime = 0.0f;//time of the last frame;

//stores the mouse position of the previous frame, initialized to the screen center;
float lastX = 400.0;
float lastY = 300.0;

//pitch and yaw angles;
float pitch = 0.0f;
float yaw = -90.0f;//start at -90 degrees so the camera initially faces -Z;

float fov = 45.0f;//field of view;

glm::vec3 lightPos(1.2f, 1.0f, 2.0f);//position of a light source in world space;


int main()
{

    //initialize GLFW first;
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);//major version 3;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);//minor version 3;
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "MY OPENGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Failed to create a window" << std::endl;
        glfwTerminate();//release resources;
        return -1;
    }
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
    //after creating the window, make its context the current context;

    //the next two calls set up the camera input; since they operate on the window, they belong here;
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);//tell GLFW to hide and capture the cursor;
    glfwSetCursorPosCallback(window, mouse_callback);//whenever the mouse moves, mouse_callback updates the two Euler angles,
    //which updates the cameraFront direction vector and so enables 3D look-around;
    //we also let the scroll wheel change the field of view fov, for zooming in and out;
    glfwSetScrollCallback(window, scroll_back);//scrolling the mouse wheel calls this function to change fov and thus the perspective projection matrix,
    //which produces the zoom-in/zoom-out effect;


    //initialize glad;
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    {
        std::cout << "Failed to initialize GLAD" << std::endl;
        return -1;
    }
    //the shader logic is encapsulated in class Shader;
    //we create two shader programs: one for the model and one for the light sources;

    //cube vertices used to draw the light sources
    float vertices[] = {
        -0.5f, -0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
         0.5f,  0.5f, -0.5f,
         0.5f,  0.5f, -0.5f,
        -0.5f,  0.5f, -0.5f,
        -0.5f, -0.5f, -0.5f,

        -0.5f, -0.5f,  0.5f,
         0.5f, -0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,
        -0.5f,  0.5f,  0.5f,
        -0.5f, -0.5f,  0.5f,

        -0.5f,  0.5f,  0.5f,
        -0.5f,  0.5f, -0.5f,
        -0.5f, -0.5f, -0.5f,
        -0.5f, -0.5f, -0.5f,
        -0.5f, -0.5f,  0.5f,
        -0.5f,  0.5f,  0.5f,

         0.5f,  0.5f,  0.5f,
         0.5f,  0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
         0.5f, -0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,

        -0.5f, -0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
         0.5f, -0.5f,  0.5f,
         0.5f, -0.5f,  0.5f,
        -0.5f, -0.5f,  0.5f,
        -0.5f, -0.5f, -0.5f,

        -0.5f,  0.5f, -0.5f,
         0.5f,  0.5f, -0.5f,
         0.5f,  0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,
        -0.5f,  0.5f,  0.5f,
        -0.5f,  0.5f, -0.5f,
    };

    //light-source VAO/VBO;
    unsigned int lightVAO;
    unsigned int lightVBO;
    glGenVertexArrays(1, &lightVAO);
    glBindVertexArray(lightVAO);

    glGenBuffers(1, &lightVBO);
    glBindBuffer(GL_ARRAY_BUFFER, lightVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    
    glEnable(GL_DEPTH_TEST);//enable depth testing;

    Shader lightshader("3.2.shader2.vs", "3.2.shader2.fs");
    Shader lightCubeshader("3.2.shader.light.vs", "3.2.shader.light.fs");//shader program for the light sources; two light cubes are drawn later;

    //load the model:
    Model OurModel("E:/OpenGl/Model/Model1/nanosuit.obj");
    
    //light positions;
    glm::vec3 pointLightPositions[] = {
        glm::vec3(-2.0f,6.0f,1.0f),
        glm::vec3(2.0f,15.0f,0.0f)
    };

    //render loop:
    while (!glfwWindowShouldClose(window))
    {
        float currentFrame = static_cast<float>(glfwGetTime());
        deltatime = currentFrame - lastime;
        lastime = currentFrame;

        processInput(window);

        glClearColor(0.05f, 0.05f, 0.05f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // don't forget to enable shader before setting uniforms
        lightshader.useProgram();//activate the shader;
        lightshader.setFloat("material.shininess", 32.0f);//set the material options;

        //viewer position: set to the camera position;
        lightshader.setVec3("viewPos", cameraPos);
        //directional light parameters;
        lightshader.setVec3("dirlight.direction", glm::vec3(-2.0f, -0.3f, -1.0f));//note: the DirLight struct has a direction member, not a position;
        lightshader.setVec3("dirlight.ambient", glm::vec3(0.2f, 0.2f, 0.2f));
        lightshader.setVec3("dirlight.specular", glm::vec3(1.0f, 1.0f, 1.0f));
        lightshader.setVec3("dirlight.diffuse", glm::vec3(0.5f));

        //point-light parameters; two point lights are configured below;
        //point light 1;
        lightshader.setVec3("pointlights[0].position", pointLightPositions[0]);
        lightshader.setVec3("pointlights[0].ambient", glm::vec3(0.2f, 0.2f, 0.2f));
        lightshader.setVec3("pointlights[0].specular", glm::vec3(50.0f, 50.0f, 50.0f));
        lightshader.setVec3("pointlights[0].diffuse", glm::vec3(0.5f, 0.5f, 0.5f));
        lightshader.setFloat("pointlights[0].constant", 1.0f);
        lightshader.setFloat("pointlights[0].linear", 0.09f);
        lightshader.setFloat("pointlights[0].quadratic", 0.032f);
        //point light 2;
        lightshader.setVec3("pointlights[1].position", pointLightPositions[1]);
        lightshader.setVec3("pointlights[1].ambient", glm::vec3(0.2f, 0.2f, 0.2f));
        lightshader.setVec3("pointlights[1].specular", glm::vec3(10.0f, 10.0f, 10.0f));
        lightshader.setVec3("pointlights[1].diffuse", glm::vec3(0.5f, 0.5f, 0.5f));
        lightshader.setFloat("pointlights[1].constant", 1.0f);
        lightshader.setFloat("pointlights[1].linear", 0.09f);
        lightshader.setFloat("pointlights[1].quadratic", 0.032f);
        
        //spotlight parameters;
        lightshader.setVec3("spotlight.position", cameraPos);
        lightshader.setVec3("spotlight.direction", cameraFront);
        lightshader.setVec3("spotlight.ambient", glm::vec3(0.2f, 0.2f, 0.2f));
        lightshader.setVec3("spotlight.specular", glm::vec3(1.0f, 1.0f, 1.0f));
        lightshader.setVec3("spotlight.diffuse", glm::vec3(0.5f, 0.5f, 0.5f));
        lightshader.setFloat("spotlight.constant", 1.0f);
        lightshader.setFloat("spotlight.linear", 0.09f);
        lightshader.setFloat("spotlight.quadratic", 0.032f);
        lightshader.setFloat("spotlight.cutOff", glm::cos(glm::radians(12.5f)));//the shader compares against a dot product (a cosine), so pass the cosine of the angle;
        lightshader.setFloat("spotlight.outerCutOff", glm::cos(glm::radians(17.5f)));

        // view/projection transformations
        glm::mat4 projection = glm::perspective(glm::radians(fov), (float)800 / (float)600, 0.1f, 100.0f);
        glm::mat4 view = glm::lookAt(cameraPos,cameraPos+cameraFront,cameraUp);
        lightshader.setMat4("projection", projection);
        lightshader.setMat4("view", view);

        // render the loaded model
        glm::mat4 model = glm::mat4(1.0f);
        model = glm::translate(model, glm::vec3(0.0f, 0.0f, 0.0f)); // keep the model at the origin of the scene
        model = glm::scale(model, glm::vec3(1.0f, 1.0f, 1.0f));    // keep the original scale (shrink here if the model is too big)
        lightshader.setMat4("model", model);
        OurModel.Draw(lightshader);//call Draw to render the model;


        //now draw the light sources: two white cubes marking the light positions;
        lightCubeshader.useProgram();//switch to the other shader;
        glBindVertexArray(lightVAO);
        lightCubeshader.setMat4("view", view);
        lightCubeshader.setMat4("projection", projection);
        for (unsigned int i = 0; i < 2; i++)
        {
            model = glm::translate(glm::mat4(1.0f), pointLightPositions[i]);
            model = glm::rotate(model, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
            model = glm::scale(model, glm::vec3(0.25f, 0.25f, 0.25f));
            lightCubeshader.setMat4("model", model);
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}

void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height);
}
void processInput(GLFWwindow* window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)//close the window when Escape is pressed;
        glfwSetWindowShouldClose(window, true);
    float cameraSpeed = 10.0f * deltatime;//movement speed;
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraUp * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraUp * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
}

bool firstMouse = true;

void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    //1. compute the mouse offset since the last frame;
    //2. add the offset to the camera's pitch and yaw angles;
    //3. clamp the yaw and pitch angles to their limits;
    //4. compute the direction vector;
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;//otherwise this branch would run on every call;
    }
    //1. compute the mouse offset since the last frame;
    float xoffset = xpos - lastX;
    float yoffset = lastY - ypos;
    lastX = xpos;
    lastY = ypos;//update the stored values for the next frame;
    float sensitivity = 0.1f;//mouse sensitivity;
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    //2. add the offset to the camera's pitch and yaw angles;
    pitch = pitch + yoffset;
    yaw = yaw + xoffset;

    //3. clamp the pitch angle to its limits

    if (pitch > 89.0f)
        pitch = 89.0f;
    if (pitch < -89.0f)
        pitch = -89.0f;
    //4. compute the direction vector;
    glm::vec3 direction;
    direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
    direction.y = sin(glm::radians(pitch));
    direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
    cameraFront = glm::normalize(direction);
}
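A quick sanity check of the Euler-angle formula in mouse_callback (a sketch using plain floats instead of glm): with pitch = 0 and yaw = -90 degrees, the direction should come out as (0, 0, -1), matching the initial cameraFront.

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

// Same formula as mouse_callback: convert pitch/yaw (in degrees)
// to a direction vector.
Vec3f directionFromEuler(float pitchDeg, float yawDeg) {
    const float d2r = 3.14159265358979f / 180.0f; // degrees to radians
    Vec3f dir;
    dir.x = std::cos(pitchDeg * d2r) * std::cos(yawDeg * d2r);
    dir.y = std::sin(pitchDeg * d2r);
    dir.z = std::cos(pitchDeg * d2r) * std::sin(yawDeg * d2r);
    return dir;
}
```

The result is already (near) unit length for any pitch/yaw, but mouse_callback still normalizes it to absorb floating-point error.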
void scroll_back(GLFWwindow* window, double xoffset, double yoffset)
{
    //clamp fov to the range 1.0 to 45.0;
    if (fov >= 1.0f && fov <= 45.0f)
    {
        fov -= yoffset;
    }
    if (fov >= 45.0f)
    {
        fov = 45.0f;
    }
    if (fov <= 1.0f)
    {
        fov = 1.0f;
    }
}
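The three ifs in scroll_back can be collapsed into a single std::clamp (C++17); a sketch of the equivalent zoom logic:

```cpp
#include <algorithm>

// Equivalent to scroll_back: subtract the scroll offset,
// then keep fov within [1.0, 45.0].
float zoomFov(float fov, float yoffset) {
    return std::clamp(fov - yoffset, 1.0f, 45.0f);
}
```

Subtracting yoffset means scrolling up (positive offset) narrows the field of view, i.e. zooms in.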

Result: (screenshot omitted)

But it looks like the specular map is not being applied... If anyone reading this spots the problem in the code, please let me know; it is a bit strange. (A likely cause: the lighting functions sample material.diffuse and material.specular, but the Mesh class binds the loaded textures to the texture_diffuseN / texture_specularN uniforms, so the material samplers all fall back to texture unit 0.)

That's all for this one!

Origin blog.csdn.net/2201_75303014/article/details/128905929