OpenGL Introductory Tutorial: Camera

introduction

 In the previous tutorial we discussed the view matrix and how to use it to move the scene. OpenGL itself has no concept of a camera, but we can simulate one by moving every object in the scene in the opposite direction, creating the illusion that we are moving rather than the scene.
 In this tutorial we discuss how to simulate such a camera in OpenGL, building an FPS-style camera that can move freely through a 3D scene. We also cover keyboard and mouse input, which together form the basis of a custom camera class.

camera/view space

 When we talk about camera/view space (Camera/View Space), we mean the coordinates of all visible vertices as seen from the camera's point of view, with the camera as the origin of the scene. The view matrix transforms all world coordinates into view coordinates that are relative to the camera's position and orientation. To define a camera we therefore need its position in world space, the direction it is looking in, a vector pointing to its right, and a vector pointing upwards from it. Careful readers may have noticed that this actually describes a coordinate system with three mutually perpendicular unit axes and the camera's position as the origin.

(1) Camera position

 Getting the camera position is simple: the camera position is just a vector in world space pointing to where the camera is.

glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);

 Don't forget that the positive z-axis points out of the screen towards you, so if we want the camera to move backwards, we move it along the positive z-axis.

(2) Camera direction

 The next vector we need is the camera's direction, i.e. which direction it is pointing in. For now we let the camera point at the scene's origin: (0, 0, 0). Subtracting the target vector from the camera position vector gives a vector pointing from the target to the camera. Because the camera, by convention, looks down its negative z-axis, we want a direction vector that points along the camera's positive z-axis, and that is exactly what this subtraction order yields (swapping the order would instead give the actual viewing direction).
 Note that "direction vector" is therefore not the best name, since it points in the exact opposite direction of where the camera is looking:

glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);
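
 For example, with the values above: cameraPos - cameraTarget = (0, 0, 3) - (0, 0, 0) = (0, 0, 3), which normalizes to (0, 0, 1). The vector points along the positive z-axis, away from what the camera is looking at.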

(3) Right axis

 The other vector we need is a right vector (Right Vector), which represents the positive x-axis of camera space. To get it we use a little trick: first define an up vector (Up Vector), then take the cross product of the up vector and the camera direction vector from the previous step. The cross product of two vectors is a vector perpendicular to both of them, so we get a vector pointing in the positive x-axis direction (swapping the order of the two operands would give a vector pointing in the negative x-axis direction instead):

// In this example the up vector is (0, 1, 0)
glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f); 
glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection));
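
 Again with concrete numbers: cross((0, 1, 0), (0, 0, 1)) = (1, 0, 0), so cameraRight is the unit vector along the camera's positive x-axis.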

(4) Up axis

 Now that we have both the x-axis and the z-axis vectors, getting the vector for the camera's positive y-axis is relatively easy: we take the cross product of the direction vector and the right vector:

glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);

 With the help of the cross product and a few tricks we have now created all the vectors of view/camera space. Using these camera vectors we can construct the LookAt matrix, which is very useful for creating a camera.

 You may be wondering: isn't the up axis simply (0, 1, 0)? I wondered the same, so I ran the following code:

glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);
glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection));
glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);
std::cout << cameraUp.x << ", " << cameraUp.y << ", " << cameraUp.z << std::endl;

 The output result is: (0, 1, 0)

Look At

 One of the nice things about matrices is that if you define a coordinate space with three mutually perpendicular axes, you can use those three axes plus a translation vector to create a matrix, and multiplying any vector by that matrix transforms it into that coordinate space. This is exactly what the LookAt matrix does: now that we have three mutually perpendicular axes and a position that define camera space, we can construct our own LookAt matrix.
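 Written out (this is the matrix given in the original LearnOpenGL text), the LookAt matrix combines a rotation whose rows are the right vector R, the up vector U and the direction vector D with a translation by the negated camera position P:

$$
\mathrm{LookAt} =
\begin{bmatrix}
R_x & R_y & R_z & 0 \\
U_x & U_y & U_z & 0 \\
D_x & D_y & D_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & -P_x \\
0 & 1 & 0 & -P_y \\
0 & 0 & 1 & -P_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$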
 Luckily, GLM already does all this work for us. We only have to specify a camera position, a target position and a vector representing the up vector in world space (GLM uses it to compute the right vector). GLM then creates the LookAt matrix for us, which we can use as our view matrix:

glm::mat4 view;
view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
	glm::vec3(0.0f, 0.0f, 0.0f),
	glm::vec3(0.0f, 1.0f, 0.0f));

 Before we get to user input, let's first do something fun: rotate the camera around the scene, keeping the point we look at fixed at (0, 0, 0).
 Every frame we compute new x and z coordinates using a little trigonometry. x and z describe a point on a circle, and we use that point as the camera's position. By recalculating x and z every frame we traverse all points of the circle, so the camera orbits the scene. We predefine the radius of this circle and use glfwGetTime as the angle that advances over time, creating a new view matrix each render iteration.
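 A minimal per-frame sketch of this orbiting camera (the same lines appear inside the render loop of the full listing below):

GLfloat radius = 10.0f;
GLfloat camX = sin(glfwGetTime()) * radius;	// x coordinate on the circle
GLfloat camZ = cos(glfwGetTime()) * radius;	// z coordinate on the circle
// The camera orbits the origin while always looking at (0, 0, 0)
glm::mat4 view = glm::lookAt(glm::vec3(camX, 0.0f, camZ),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));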
 Vertex shader code:

#version 330 core
layout (location = 0) in vec3 position;	// vertex position
layout (location = 1) in vec2 texCoord;	// vertex texture coordinates

uniform mat4 model; 	  // model matrix
uniform mat4 view;	  // view matrix
uniform mat4 projection;  // projection matrix

// Output variable (passed to the fragment shader)
out vec2 TexCoord;	// vertex texture coordinates

void main()
{
    // Note: read from right to left (model -> view -> projection)
    gl_Position = projection * view * model * vec4(position, 1.0f);
    // Texture coordinate (the image is loaded upside down, so "flip" the y value)
    TexCoord = vec2(texCoord.x , 1.0 - texCoord.y);
}

 Fragment shader code:

#version 330 core
in vec2 TexCoord;

out vec4 color;

uniform float mixValue;

// Texture samplers
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;

void main()
{
	// Linearly interpolate between both textures (second texture is only slightly combined)
	color = mix(texture(ourTexture1, TexCoord), texture(ourTexture2, TexCoord), mixValue);
}

 Full code:

#include <iostream>

// GLEW
#define GLEW_STATIC
#include <GL/glew.h>

// GLFW
#include <GLFW/glfw3.h>

// Other Libs
#include <SOIL.H>

// Other includes
#include "Shader.h"

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>


// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);

// Window dimensions
const GLuint WIDTH = 800, HEIGHT = 600;

GLfloat mixValue = 0.2f;

// The MAIN function, from here we start the application and run the game loop
int main()
{
    
    

    // Init GLFW
    glfwInit();
    // Set all the required options for GLFW
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // Create a GLFWwindow object that we can use for GLFW's functions
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Set the required callback functions
    glfwSetKeyCallback(window, key_callback);

    // Set this to true so GLEW knows to use a modern approach to retrieving function pointers and extensions
    glewExperimental = GL_TRUE;
    // Initialize GLEW to setup the OpenGL Function pointers
    glewInit();

    // Define the viewport dimensions
    glViewport(0, 0, WIDTH, HEIGHT);
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);

    // Build and compile our shader program
    Shader ourShader("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\vertexShader.txt", "C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\fragmentShader.txt");


    // Cube vertex data (positions + texture coordinates)
    GLfloat vertices[] = {
    
    
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
    };
    glm::vec3 cubePositions[] = {
    
    
          glm::vec3(0.0f,  0.0f,  0.0f),
          glm::vec3(2.0f,  5.0f, -15.0f),
          glm::vec3(-1.5f, -2.2f, -2.5f),
          glm::vec3(-3.8f, -2.0f, -12.3f),
          glm::vec3(2.4f, -0.4f, -3.5f),
          glm::vec3(-1.7f,  3.0f, -7.5f),
          glm::vec3(1.3f, -2.0f, -2.5f),
          glm::vec3(1.5f,  2.0f, -2.5f),
          glm::vec3(1.5f,  0.2f, -1.5f),
          glm::vec3(-1.3f,  1.0f, -1.5f)
    };

    // Configure how the vertex data is read
    GLuint VBO, VAO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);

    glBindVertexArray(VAO);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Position attribute: 3 floats per vertex, stride of 5 floats, starting at offset 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // Texture-coordinate attribute: 2 floats per vertex, stride of 5 floats, offset of 3 floats
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindVertexArray(0); // Unbind VAO

    // Load and create a texture 
    GLuint texture1;
    GLuint texture2;
    // ====================
    // Texture 1
    // ====================
    glGenTextures(1, &texture1);
    glBindTexture(GL_TEXTURE_2D, texture1); // All upcoming GL_TEXTURE_2D operations now have effect on our texture object
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);	// Set texture wrapping to GL_CLAMP_TO_EDGE
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Load, create texture and generate mipmaps
    int width, height;
    unsigned char* image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\container.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0); // Unbind texture when done, so we won't accidentily mess up our texture.
    // ===================
    // Texture 2
    // ===================
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

    float borderColor[] = { 1.0f, 1.0f, 0.0f, 1.0f };
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);

    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Load, create texture and generate mipmaps
    image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\awesomeface.png", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Game loop
    while (!glfwWindowShouldClose(window))
    {
    
    
        // Check if any events have been activiated (key pressed, mouse moved etc.) and call corresponding response functions
        glfwPollEvents();

        // Render
        // Clear the colorbuffer
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Bind Textures using texture units
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture2);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);

        // Activate shader
        ourShader.Use();

        // Set up the view matrix
        glm::mat4 view = glm::mat4(1.0f);
        GLfloat radius = 10.0f;
        GLfloat camX = sin(glfwGetTime()) * radius;
        GLfloat camZ = cos(glfwGetTime()) * radius;
        view = glm::lookAt(glm::vec3(camX, 0.0f, camZ), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
        
        GLint viewLoc = glGetUniformLocation(ourShader.Program, "view");
        glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));

        // Set up the projection matrix
        glm::mat4 projection = glm::mat4(1.0f);
        projection = glm::perspective(glm::radians(45.0f), (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);
        GLint proLoc = glGetUniformLocation(ourShader.Program, "projection");
        glUniformMatrix4fv(proLoc, 1, GL_FALSE, glm::value_ptr(projection));
        GLint modeLoc = glGetUniformLocation(ourShader.Program, "model");

        // Set the texture mix factor
        glUniform1f(glGetUniformLocation(ourShader.Program,"mixValue"),mixValue);

        // Draw container
        glBindVertexArray(VAO);
        for (int i = 0; i < 10; i++)
        {
    
    
            glm::mat4 model = glm::mat4(1.0f);
            model = glm::translate(model, cubePositions[i]);
            GLfloat angle = 20.0f * i;
            model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
            
            glUniformMatrix4fv(modeLoc, 1, GL_FALSE, glm::value_ptr(model));

            glDrawArrays(GL_TRIANGLES, 0, 36);
        }
        glBindVertexArray(0);

        // Swap the screen buffers
        glfwSwapBuffers(window);
    }
    // Properly de-allocate all resources once they've outlived their purpose
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    // Terminate GLFW, clearing any resources allocated by GLFW.
    glfwTerminate();
    return 0;
}

// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    
    
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
    if (key == GLFW_KEY_UP && action == GLFW_PRESS)
    {
    
    
        mixValue += 0.1f;
        if (mixValue >= 1.0f)
            mixValue = 1.0f;
    }
    if (key == GLFW_KEY_DOWN && action == GLFW_PRESS)
    {
    
    
        mixValue -= 0.1f;
        if (mixValue <= 0.0f)
            mixValue = 0.0f;
    }
}

free movement

 It's fun to have the camera orbit the scene, but it's even more fun to move the camera ourselves! First we need to set up a camera system, so it is useful to define a few camera variables near the top of our program:

glm::vec3 cameraPos   = glm::vec3(0.0f, 0.0f,  3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp    = glm::vec3(0.0f, 1.0f,  0.0f);

 The LookAt function now becomes:

view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);

 We first set the previously defined cameraPos as the camera position. The target is the current position plus the direction vector we just defined, which ensures that however we move, the camera keeps looking in the target direction. We update the cameraPos vector whenever one of the movement keys is pressed.
 We have already defined a key_callback function for GLFW's keyboard input, so let's add a few new key bindings:

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    ...
    GLfloat cameraSpeed = 0.05f;
    if(key == GLFW_KEY_W)
        cameraPos += cameraSpeed * cameraFront;
    if(key == GLFW_KEY_S)
        cameraPos -= cameraSpeed * cameraFront;
    if(key == GLFW_KEY_A)
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
    if(key == GLFW_KEY_D)
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;  
}

 When we press one of the WASD keys, the camera's position is updated accordingly. If we want to move forward or backward, we add or subtract the direction vector from the position vector. If we want to move sideways, we take a cross product to create a right vector and move along it. This creates the familiar strafing effect of moving sideways as well as back and forth.
 Note that we normalize the resulting right vector. If we didn't, the cross product would return vectors of varying length depending on the cameraFront variable, and the camera would move faster or slower depending on its orientation; with normalization the movement speed stays uniform.
 For handling multiple simultaneous key presses and for deltaTime-based movement, please refer to the original tutorial (both are already implemented in the listings below and won't be elaborated here); a short sketch of the deltaTime idea follows.
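
 A minimal sketch of frame-rate-independent movement as used in the final listing: measure how long the last frame took and scale the camera speed by it.

GLfloat deltaTime = 0.0f;	// time between the current frame and the last frame
GLfloat lastFrame = 0.0f;	// timestamp of the last frame

// inside the render loop:
GLfloat currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;

// inside do_movement(): the speed now scales with the frame time
GLfloat cameraSpeed = 5.0f * deltaTime;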
 Complete code:

#include <iostream>

// GLEW
#define GLEW_STATIC
#include <GL/glew.h>

// GLFW
#include <GLFW/glfw3.h>

// Other Libs
#include <SOIL.H>

// Other includes
#include "Shader.h"

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>


// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);

void do_movement();

// Window dimensions
const GLuint WIDTH = 800, HEIGHT = 600;

GLfloat mixValue = 0.2f;

bool keys[1024];

glm::mat4 view = glm::mat4(1.0f);
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 5.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);

// The MAIN function, from here we start the application and run the game loop
int main()
{
    
    

    // Init GLFW
    glfwInit();
    // Set all the required options for GLFW
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // Create a GLFWwindow object that we can use for GLFW's functions
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Set the required callback functions
    glfwSetKeyCallback(window, key_callback);

    // Set this to true so GLEW knows to use a modern approach to retrieving function pointers and extensions
    glewExperimental = GL_TRUE;
    // Initialize GLEW to setup the OpenGL Function pointers
    glewInit();

    // Define the viewport dimensions
    glViewport(0, 0, WIDTH, HEIGHT);
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);

    // Build and compile our shader program
    Shader ourShader("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\vertexShader.txt", "C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\fragmentShader.txt");


    // Cube vertex data (positions + texture coordinates)
    GLfloat vertices[] = {
    
    
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
    };
    glm::vec3 cubePositions[] = {
    
    
          glm::vec3(0.0f,  0.0f,  0.0f),
          glm::vec3(2.0f,  5.0f, -15.0f),
          glm::vec3(-1.5f, -2.2f, -2.5f),
          glm::vec3(-3.8f, -2.0f, -12.3f),
          glm::vec3(2.4f, -0.4f, -3.5f),
          glm::vec3(-1.7f,  3.0f, -7.5f),
          glm::vec3(1.3f, -2.0f, -2.5f),
          glm::vec3(1.5f,  2.0f, -2.5f),
          glm::vec3(1.5f,  0.2f, -1.5f),
          glm::vec3(-1.3f,  1.0f, -1.5f)
    };

    // Configure how the vertex data is read
    GLuint VBO, VAO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);

    glBindVertexArray(VAO);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Position attribute: 3 floats per vertex, stride of 5 floats, starting at offset 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // Texture-coordinate attribute: 2 floats per vertex, stride of 5 floats, offset of 3 floats
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindVertexArray(0); // Unbind VAO

    // Load and create a texture 
    GLuint texture1;
    GLuint texture2;
    // ====================
    // Texture 1
    // ====================
    glGenTextures(1, &texture1);
    glBindTexture(GL_TEXTURE_2D, texture1); // All upcoming GL_TEXTURE_2D operations now have effect on our texture object
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);	// Set texture wrapping to GL_CLAMP_TO_EDGE
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Load, create texture and generate mipmaps
    int width, height;
    unsigned char* image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\container.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0); // Unbind texture when done, so we won't accidentily mess up our texture.
    // ===================
    // Texture 2
    // ===================
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

    float borderColor[] = { 1.0f, 1.0f, 0.0f, 1.0f };
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);

    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Load, create texture and generate mipmaps
    image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\awesomeface.png", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Game loop
    while (!glfwWindowShouldClose(window))
    {
    
    
        // Check if any events have been activiated (key pressed, mouse moved etc.) and call corresponding response functions
        glfwPollEvents();
        do_movement();

        // Render
        // Clear the colorbuffer
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Bind Textures using texture units
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture2);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);

        // Activate shader
        ourShader.Use();

        // Set up the view matrix
        view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
        
        GLint viewLoc = glGetUniformLocation(ourShader.Program, "view");
        glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));

        // Set up the projection matrix
        glm::mat4 projection = glm::mat4(1.0f);
        projection = glm::perspective(glm::radians(45.0f), (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);
        GLint proLoc = glGetUniformLocation(ourShader.Program, "projection");
        glUniformMatrix4fv(proLoc, 1, GL_FALSE, glm::value_ptr(projection));
        GLint modeLoc = glGetUniformLocation(ourShader.Program, "model");

        // Set the texture mix factor
        glUniform1f(glGetUniformLocation(ourShader.Program,"mixValue"),mixValue);

        // Draw container
        glBindVertexArray(VAO);
        for (int i = 0; i < 10; i++)
        {
    
    
            glm::mat4 model = glm::mat4(1.0f);
            model = glm::translate(model, cubePositions[i]);
            GLfloat angle = 20.0f * i;
            model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
            
            glUniformMatrix4fv(modeLoc, 1, GL_FALSE, glm::value_ptr(model));

            glDrawArrays(GL_TRIANGLES, 0, 36);
        }
        glBindVertexArray(0);

        // Swap the screen buffers
        glfwSwapBuffers(window);
    }
    // Properly de-allocate all resources once they've outlived their purpose
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    // Terminate GLFW, clearing any resources allocated by GLFW.
    glfwTerminate();
    return 0;
}

// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    
    
    if (action == GLFW_PRESS)
        keys[key] = true;
    else if (action == GLFW_RELEASE)
        keys[key] = false;
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
    if (key == GLFW_KEY_UP && action == GLFW_PRESS)
    {
    
    
        mixValue += 0.1f;
        if (mixValue >= 1.0f)
            mixValue = 1.0f;
    }
    if (key == GLFW_KEY_DOWN && action == GLFW_PRESS)
    {
    
    
        mixValue -= 0.1f;
        if (mixValue <= 0.0f)
            mixValue = 0.0f;
    }
}
void do_movement()
{
    
    
    // Camera movement controls
    GLfloat cameraSpeed = 0.01f;
    if (keys[GLFW_KEY_W])
        cameraPos += cameraSpeed * cameraFront;
    if (keys[GLFW_KEY_S])
        cameraPos -= cameraSpeed * cameraFront;
    if (keys[GLFW_KEY_A])
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
    if (keys[GLFW_KEY_D])
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
}

look around

 It's not much fun to just move around with the keyboard, especially since we can't turn yet; time to use the mouse!
 To be able to change the viewing direction we have to change the cameraFront vector based on the mouse input. However, changing the direction vector according to mouse rotation is a bit more involved and requires some trigonometry. If you don't understand the trigonometry, don't worry: you can skip this part, simply copy and paste the code, and come back later when you want to learn more.

Euler angle calculation formula

 Given a yaw angle (horizontal rotation, driven by horizontal mouse movement) and a pitch angle (vertical rotation, driven by vertical mouse movement), the camera's direction vector follows from a little trigonometry:

direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));	// note that we convert the angles to radians first
direction.y = sin(glm::radians(pitch));
direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
// translator's note: direction is the camera's "front" axis, which points opposite to the cameraDirection vector defined earlier

mouse input

 The yaw and pitch angles are obtained from mouse movement: horizontal mouse movement affects the yaw, vertical movement affects the pitch. The idea is to store the mouse position of the last frame and, in the current frame, calculate how much it has changed; the larger the difference, the more the pitch or yaw angle changes.
 First we have to tell GLFW to hide the cursor and capture (Capture) it. Capturing the mouse means the cursor stays within the window while the application has focus (unless the application loses focus or quits). We can do this with one simple configuration call:

glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

 After this call, wherever we move the mouse it won't be visible and it won't leave the window, which is perfect for an FPS camera system.
 To calculate the pitch and yaw angles we need to tell GLFW to listen for mouse-movement events. We do this (similarly to keyboard input) by creating a callback function with the following prototype:

void mouse_callback(GLFWwindow* window, double xpos, double ypos);

 Here xpos and ypos are the current mouse position. Once we register the callback function with GLFW, mouse_callback is called every time the mouse moves:

glfwSetCursorPosCallback(window, mouse_callback);

 When dealing with FPS-style camera mouse input, we must do the following steps before getting the final direction vector:

1. Calculate the mouse's offset since the last frame.
2. Add the offset values to the camera's yaw and pitch angles.
3. Constrain the yaw and pitch angles to maximum and minimum values.
4. Calculate the direction vector.

 The first step is to calculate the mouse's offset since the last frame. We first have to store the last mouse position in the application, which we initialize to the center of the screen (the screen size is 800 by 600):

GLfloat lastX = 400, lastY = 300;

 Then, in the mouse callback, we calculate the offset between the current and the previous frame's mouse position:

GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos; // reversed since y-coordinates range from bottom to top
lastX = xpos;
lastY = ypos;

GLfloat sensitivity = 0.05f;
xoffset *= sensitivity;
yoffset *= sensitivity;

 Note that we multiply the offset values by a sensitivity value. If we omitted it, the mouse movement would be far too strong; tweak the sensitivity value to your liking.
 Next we add the offset values to the globally declared pitch and yaw variables:

yaw   += xoffset;
pitch += yoffset;  

 In the third step we add some constraints to the camera so that it can't make strange movements. The pitch needs to be constrained so the user can't look higher than 89 degrees (at 90 degrees the view flips, so we take 89 as the limit) and also not below -89 degrees. This ensures the user can look up to the sky or down to their feet but not further. The constraints work like this:

if(pitch > 89.0f)
  pitch =  89.0f;
if(pitch < -89.0f)
  pitch = -89.0f;

 Note that we set no constraint on the yaw angle, since we don't want to limit the user's horizontal rotation; it is easy to add such a constraint if you wish, though. The fourth and final step is to calculate the actual direction vector from the pitch and yaw angles, as mentioned earlier:

glm::vec3 front;
front.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
front.y = sin(glm::radians(pitch));
front.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
cameraFront = glm::normalize(front);
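
 A quick sanity check with the initial values used in the full listing below (yaw = -90°, pitch = 0°): front.x = cos(0°)·cos(-90°) = 0, front.y = sin(0°) = 0 and front.z = cos(0°)·sin(-90°) = -1, so front = (0, 0, -1). This matches the initial cameraFront and is why yaw is initialized to -90.0f rather than 0.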

 The computed direction vector now contains all rotations from the mouse movement. Since cameraFront is already used in the glm::lookAt call, we simply set it and we're done.
 If you run the code now, you'll notice the camera jumps the moment the window first captures the mouse. The reason is that as soon as the cursor enters the window, the mouse callback is called with an xpos and ypos that usually lie far from the center of the screen, producing a large offset and therefore a jump. We can fix this with a boolean that checks whether this is the first mouse input; if so, we first update the stored mouse position to the new xpos and ypos, and subsequent mouse movements then use that position to compute the offset:

if(firstMouse) // this bool variable is initially set to true
{
  lastX = xpos;
  lastY = ypos;
  firstMouse = false;
}

 The final code should look like this:

void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if(firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }

    GLfloat xoffset = xpos - lastX;
    GLfloat yoffset = lastY - ypos; 
    lastX = xpos;
    lastY = ypos;

    GLfloat sensitivity = 0.05;
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    yaw   += xoffset;
    pitch += yoffset;

    if(pitch > 89.0f)
        pitch = 89.0f;
    if(pitch < -89.0f)
        pitch = -89.0f;

    glm::vec3 front;
    front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
    front.y = sin(glm::radians(pitch));
    front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
    cameraFront = glm::normalize(front);
}  

zoom

 We still want to add something to the camera system: a zoom interface. In the previous tutorial we said that the Field of View (fov) defines how much of the scene we can see; when it gets smaller, the scene's projected space shrinks and we get the feeling of zooming in. We'll use the mouse scroll wheel to zoom. As with mouse movement and keyboard input, we need a callback function for the mouse wheel:

void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
  if(aspect >= 1.0f && aspect <= 45.0f)
    aspect -= yoffset;
  if(aspect <= 1.0f)
    aspect = 1.0f;
  if(aspect >= 45.0f)
    aspect = 45.0f;
}

 The yoffset value tells us the amount we scrolled. When scroll_callback is called, we change the content of the globally declared aspect variable. Since 45.0f is the default fov, we constrain the zoom level between 1.0f and 45.0f.
 We now have to upload the perspective projection matrix to the GPU every frame, but this time using the aspect variable as its fov:

projection = glm::perspective(glm::radians(aspect), (GLfloat)WIDTH/(GLfloat)HEIGHT, 0.1f, 100.0f);

Finally don't forget to register the scroll callback function:

glfwSetScrollCallback(window, scroll_callback);

 Complete code:
 Note the camera's target in view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp); the target is cameraPos + cameraFront.

#include <iostream>

// GLEW
#define GLEW_STATIC
#include <GL/glew.h>

// GLFW
#include <GLFW/glfw3.h>

// Other Libs
#include <SOIL.H>

// Other includes
#include "Shader.h"

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>


// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);

void do_movement();

void mouse_callback(GLFWwindow* window, double xpos, double ypos);

void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);

// Window dimensions
const GLuint WIDTH = 800, HEIGHT = 600;

GLfloat mixValue = 0.2f;

bool keys[1024];

glm::mat4 view = glm::mat4(1.0f);
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 5.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);

GLfloat deltaTime = 0.0f;
GLfloat lastFrame = 0.0f;

GLfloat lastX = WIDTH / 2, lastY = HEIGHT / 2;
bool firstMouse = true;
GLfloat yaw = -90.0f;	
GLfloat pitch = 0.0f;

GLfloat aspect = 45.0f;

// The MAIN function, from here we start the application and run the game loop
int main()
{
    
    

    // Init GLFW
    glfwInit();
    // Set all the required options for GLFW
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // Create a GLFWwindow object that we can use for GLFW's functions
    GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Set the required callback functions
    glfwSetKeyCallback(window, key_callback);
    glfwSetScrollCallback(window, scroll_callback);
    glfwSetCursorPosCallback(window, mouse_callback);
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

    // Set this to true so GLEW knows to use a modern approach to retrieving function pointers and extensions
    glewExperimental = GL_TRUE;
    // Initialize GLEW to setup the OpenGL Function pointers
    glewInit();

    // Define the viewport dimensions
    glViewport(0, 0, WIDTH, HEIGHT);
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);

    // Build and compile our shader program
    Shader ourShader("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\vertexShader.txt", "C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Shader\\fragmentShader.txt");


    // Cube vertex data (positions + texture coordinates)
    GLfloat vertices[] = {
    
    
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
    };
    glm::vec3 cubePositions[] = {
    
    
          glm::vec3(0.0f,  0.0f,  0.0f),
          glm::vec3(2.0f,  5.0f, -15.0f),
          glm::vec3(-1.5f, -2.2f, -2.5f),
          glm::vec3(-3.8f, -2.0f, -12.3f),
          glm::vec3(2.4f, -0.4f, -3.5f),
          glm::vec3(-1.7f,  3.0f, -7.5f),
          glm::vec3(1.3f, -2.0f, -2.5f),
          glm::vec3(1.5f,  2.0f, -2.5f),
          glm::vec3(1.5f,  0.2f, -1.5f),
          glm::vec3(-1.3f,  1.0f, -1.5f)
    };

    // Configure how the vertex data is read
    GLuint VBO, VAO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);

    glBindVertexArray(VAO);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Position attribute: 3 floats per vertex, stride of 5 floats, starting at offset 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // Texture-coordinate attribute: 2 floats per vertex, stride of 5 floats, offset of 3 floats
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindVertexArray(0); // Unbind VAO

    // Load and create a texture 
    GLuint texture1;
    GLuint texture2;
    // ====================
    // Texture 1
    // ====================
    glGenTextures(1, &texture1);
    glBindTexture(GL_TEXTURE_2D, texture1); // All upcoming GL_TEXTURE_2D operations now have effect on our texture object
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);	// Set texture wrapping to GL_CLAMP_TO_EDGE
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Load, create texture and generate mipmaps
    int width, height;
    unsigned char* image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\container.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0); // Unbind texture when done, so we won't accidentily mess up our texture.
    // ===================
    // Texture 2
    // ===================
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

    float borderColor[] = { 1.0f, 1.0f, 0.0f, 1.0f };
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);

    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Load, create texture and generate mipmaps
    image = SOIL_load_image("C:\\Users\\32156\\source\\repos\\LearnOpenGL\\Resource\\awesomeface.png", &width, &height, 0, SOIL_LOAD_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    SOIL_free_image_data(image);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Game loop
    while (!glfwWindowShouldClose(window))
    {
    
    
        GLfloat currentFrame = glfwGetTime();
        deltaTime = currentFrame - lastFrame;
        lastFrame = currentFrame;

        // Check if any events have been activiated (key pressed, mouse moved etc.) and call corresponding response functions
        glfwPollEvents();
        do_movement();

        // Render
        // Clear the colorbuffer
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Bind Textures using texture units
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture2);
        glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);

        // Activate shader
        ourShader.Use();

        // Set up the view matrix
        view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
        
        GLint viewLoc = glGetUniformLocation(ourShader.Program, "view");
        glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));

        // Set up the projection matrix
        glm::mat4 projection = glm::mat4(1.0f);
        projection = glm::perspective(glm::radians(aspect), (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);
        GLint proLoc = glGetUniformLocation(ourShader.Program, "projection");
        glUniformMatrix4fv(proLoc, 1, GL_FALSE, glm::value_ptr(projection));
        GLint modeLoc = glGetUniformLocation(ourShader.Program, "model");

        // Set the texture mix factor
        glUniform1f(glGetUniformLocation(ourShader.Program,"mixValue"),mixValue);

        // Draw container
        glBindVertexArray(VAO);
        for (int i = 0; i < 10; i++)
        {
    
    
            glm::mat4 model = glm::mat4(1.0f);
            model = glm::translate(model, cubePositions[i]);
            GLfloat angle = 20.0f * i;
            model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
            
            glUniformMatrix4fv(modeLoc, 1, GL_FALSE, glm::value_ptr(model));

            glDrawArrays(GL_TRIANGLES, 0, 36);
        }
        glBindVertexArray(0);

        // Swap the screen buffers
        glfwSwapBuffers(window);
    }
    // Properly de-allocate all resources once they've outlived their purpose
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    // Terminate GLFW, clearing any resources allocated by GLFW.
    glfwTerminate();
    return 0;
}

// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    
    
    if (action == GLFW_PRESS)
        keys[key] = true;
    else if (action == GLFW_RELEASE)
        keys[key] = false;
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
    if (key == GLFW_KEY_UP && action == GLFW_PRESS)
    {
    
    
        mixValue += 0.1f;
        if (mixValue >= 1.0f)
            mixValue = 1.0f;
    }
    if (key == GLFW_KEY_DOWN && action == GLFW_PRESS)
    {
    
    
        mixValue -= 0.1f;
        if (mixValue <= 0.0f)
            mixValue = 0.0f;
    }
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    
    
    if (firstMouse)
    {
    
    
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }

    GLfloat xoffset = xpos - lastX;
    GLfloat yoffset = lastY - ypos;
    lastX = xpos;
    lastY = ypos;

    GLfloat sensitivity = 0.1;
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    yaw += xoffset;
    pitch += yoffset;

    if (pitch > 89.0f)
        pitch = 89.0f;
    if (pitch < -89.0f)
        pitch = -89.0f;

    glm::vec3 front;
    front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
    front.y = sin(glm::radians(pitch));
    front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
    cameraFront = glm::normalize(front);
}
void do_movement()
{
    
    
    // Camera movement controls
    GLfloat cameraSpeed = 5.0f * deltaTime;
    if (keys[GLFW_KEY_W])
        cameraPos += cameraSpeed * cameraFront;
    if (keys[GLFW_KEY_S])
        cameraPos -= cameraSpeed * cameraFront;
    if (keys[GLFW_KEY_A])
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
    if (keys[GLFW_KEY_D])
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    
    
    if (aspect >= 1.0f && aspect <= 45.0f)
        aspect -= yoffset;
    if (aspect <= 1.0f)
        aspect = 1.0f;
    if (aspect >= 45.0f)
        aspect = 45.0f;
}


Origin blog.csdn.net/qq_51563654/article/details/130365931