LearnOpenGL->Getting Started->Camera

As usual, let's start today's study with a nice picture.

Document address: https://learnopengl-cn.github.io/01%20Getting%20started/09%20Camera/

1.Camera

First of all, we need to know that OpenGL itself has no concept of a camera, but we can simulate one by moving all objects in the scene in the opposite direction, creating the feeling that we are moving rather than that the scene is moving.
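To make this concrete, here is a tiny sketch (my own illustration, assuming the setup from the previous coordinate-systems chapter): "moving the camera back by 3 units" is implemented by translating the whole scene 3 units in the opposite direction.

glm::mat4 view = glm::mat4(1.0f);
view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f)); // move the scene towards -Z, which feels like the camera moved towards +Z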

In this section we'll discuss how to configure a camera in OpenGL, focusing on an FPS-style camera that lets you move freely through a 3D scene. We'll also discuss keyboard and mouse input, culminating in a custom camera class.

2. Camera/View Space

When we talk about camera/view space we are talking about all the vertex coordinates in the scene as seen from the camera's perspective as the origin of the scene: the view matrix transforms all world coordinates into view coordinates that are relative to the camera's position and direction. To define a camera we need its position in world space, the direction it is looking in, a vector pointing to its right and a vector pointing upwards from the camera. In effect we create a coordinate system with three perpendicular unit axes with the camera's position as the origin.

That is, the required values include:

camera position, camera direction, right axis, up axis

2.1 Camera position

Getting the camera position is easy. The camera position is a vector in world space that points to the camera's position. We set the camera at the same position as in the previous chapter (the coordinate systems chapter):

glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f); // on the +Z axis, 3.0f away from the origin!!;

[Don't forget that the positive z-axis points out of the screen towards you, so if we want the camera to move backwards, we move it along the positive z-axis.]

2.2 Camera direction

The next vector required is the camera's direction, i.e. which direction it is pointing in. For now we let the camera point at the origin of the scene: (0, 0, 0). Remember that subtracting two vectors gives the vector difference between them? Subtracting the camera position from the scene origin would give the vector the camera actually looks along. However, since the camera looks towards the negative z-axis and we want the direction vector to point along the camera's own positive z-axis, we swap the order of the subtraction and subtract the scene origin from the camera position, which yields a vector pointing in the camera's positive z-axis direction:

glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f); // camera position;
glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f); // scene origin;
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget); // points from the scene origin towards the camera position;
// Normally we would subtract the camera position from the scene origin, but since we want the direction vector to point towards +Z, we swap the order of the subtraction, i.e. camera position minus scene origin, to get a vector pointing in the +Z direction: the camera direction vector we are after;
// Also don't forget to normalize: the camera direction should be a unit vector;

[The name "direction vector" is not the best choice, because it actually points in the reverse direction of what the camera is targeting (translator's note: in the earlier figure, the blue direction vector points roughly along the positive z-axis, exactly opposite to the direction the camera is actually facing).]

2.3 Right axis

The other vector we need is a right vector (Right Vector) that represents the positive x-axis of camera space. To get it we use a little trick: first define an up vector (Up Vector), then take the cross product of that up vector and the direction vector from the previous step. Since the result of a cross product is perpendicular to both input vectors, we get a vector pointing in the positive x-axis direction (if we swapped the order of the cross product we would get the opposite vector, pointing in the negative x-axis direction):

glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f); // create an up vector;
glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection));
// here we create an up vector and cross it with the camera direction vector to obtain the vector pointing to the right;

2.4 Up axis

From the steps above we already have a unit vector pointing along the +Z axis (cameraDirection) and a unit vector pointing along +X (cameraRight), so we only need to take their cross product to get the up unit vector we want;

[Pay attention to the right-hand rule!!];

glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);
// mind the right-hand rule, don't mix up the order;

With the help of the cross product and a few tricks we have created all the vectors that make up view/camera space. For readers who want to dig deeper into the math: in linear algebra this process is known as the Gram-Schmidt process. Using these camera vectors we can now create a LookAt matrix, which is very useful for creating a camera.

The total code is as follows:

glm::vec3 cameraPos = glm::vec3(0.0f,0.0f,3.0f);
glm::vec3 cameraTarget = glm::vec3(0.0f,0.0f,0.0f);
glm::vec3 cameraDirection = glm::normalize(cameraPos-cameraTarget);
glm::vec3 up = glm::vec3(0.0f,1.0f,0.0f);
glm::vec3 cameraRight = glm::normalize(glm::cross(up,cameraDirection));
glm::vec3 cameraUp = glm::cross(cameraDirection,cameraRight);

3. Look At

One of the nice things about matrices is that if you define a coordinate space using 3 perpendicular (or non-linear) axes, you can use those 3 axes plus a translation vector to build a matrix, and multiplying any vector by this matrix transforms it into that coordinate space. This is exactly what the LookAt matrix does. Now that we have 3 perpendicular axes and a position that define the camera space, we can create our own LookAt matrix:
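The matrix itself (shown as an image in the original tutorial) is:

$$
LookAt =
\begin{bmatrix}
R_x & R_y & R_z & 0 \\
U_x & U_y & U_z & 0 \\
D_x & D_y & D_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
1 & 0 & 0 & -P_x \\
0 & 1 & 0 & -P_y \\
0 & 0 & 1 & -P_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$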

where R is the right vector, U is the up vector, D is the direction vector and P is the camera's position vector. Note that the position vector is negated, since we ultimately want to translate the world in the opposite direction of where we want the camera to move. Using this LookAt matrix as the view matrix efficiently transforms all world coordinates to the view space we just defined. The LookAt matrix does exactly what its name says: it creates a view matrix that looks at a given target.
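As a small illustration (my own sketch, not from the original post), this is roughly what such a matrix looks like when built by hand from the vectors computed above; it assumes the cameraPos, cameraRight, cameraUp and cameraDirection variables from the earlier snippet and uses GLM's column-major m[column][row] indexing:

glm::mat4 rotation = glm::mat4(1.0f);
rotation[0][0] = cameraRight.x;     rotation[1][0] = cameraRight.y;     rotation[2][0] = cameraRight.z;     // first row: R
rotation[0][1] = cameraUp.x;        rotation[1][1] = cameraUp.y;        rotation[2][1] = cameraUp.z;        // second row: U
rotation[0][2] = cameraDirection.x; rotation[1][2] = cameraDirection.y; rotation[2][2] = cameraDirection.z; // third row: D
glm::mat4 translation = glm::mat4(1.0f);
translation[3][0] = -cameraPos.x; // fourth column: -P
translation[3][1] = -cameraPos.y;
translation[3][2] = -cameraPos.z;
glm::mat4 lookAt = rotation * translation; // should match glm::lookAt(cameraPos, cameraTarget, up)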

!!!Fortunately, GLM already does all this work for us. We only have to specify a camera position, a target position and a vector that represents the up vector in world space (the up vector we used for calculating the right vector). GLM then creates the LookAt matrix that we can use as our view matrix:

// We can use the glm::lookAt function to generate a LookAt matrix that efficiently transforms all world coordinates into the view space we defined!! The parameters to pass in are: a camera position, a target position, and a vector representing the up vector in world space;

glm::mat4 view = glm::mat4(1.0f);
view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));

The glm::lookAt function requires a position, a target and an up vector. This creates the same view matrix we used in the previous chapter.

Before delving into user input, let's first do something fun: rotate the camera around the scene, keeping its gaze fixed on the point (0, 0, 0):

We need a bit of trigonometry to create an x and z coordinate each frame that represents a point on a circle, and we will use these as the camera position. By recalculating the x and z coordinates we traverse all the points of the circle, making the camera rotate around the scene. We pre-define the radius of this circle and recreate the view matrix every render iteration using GLFW's glfwGetTime function.

float radius = 10.0f;
float camX = sin(glfwGetTime()) * radius;
float camZ = cos(glfwGetTime()) * radius;

// the two lines above give a point on a circle of radius 10;

glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, 0.0, camZ), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0)); 

The code for each loop iteration becomes:

for (int i = 0; i < 10; i++)
        {
            //create a matrix that rotates over time:
            shader.useProgram();//activate the shader program;
            //create the three transformation matrices;
            //first initialize them as identity matrices;
            glm::mat4 projection = glm::mat4(1.0f);
            glm::mat4 view = glm::mat4(1.0f);
            glm::mat4 model = glm::mat4(1.0f);
            model = glm::translate(model, cubePositions[i]);
            float angle = 20.0f;
            model = glm::rotate(model, (float)glfwGetTime() * glm::radians(angle * (i + 1)), glm::vec3(0.4f, 0.2f, 0.0f));

//-------------------------------- modification start ------------------------------------//
            float radius = 10.0f;//radius of the circle;
            float camX = sin(glfwGetTime()) * radius;
            float camZ = cos(glfwGetTime()) * radius;
            view = glm::lookAt(glm::vec3(camX, 1.0f, camZ), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
//-------------------------------- modification end -----------------------------------//
            projection = glm::perspective(glm::radians(45.0f), (float)800 / (float)600, 0.1f, 100.0f);

            //pass the transformation matrices to the shader; since the transforms change every frame, this has to be done in every iteration, i.e. inside the loop;
            int Location1 = glGetUniformLocation(shader.ID, "model");
            glUniformMatrix4fv(Location1, 1, GL_FALSE, glm::value_ptr(model));
            int Location2 = glGetUniformLocation(shader.ID, "view");
            glUniformMatrix4fv(Location2, 1, GL_FALSE, glm::value_ptr(view));
            int Location3 = glGetUniformLocation(shader.ID, "projection");
            glUniformMatrix4fv(Location3, 1, GL_FALSE, glm::value_ptr(projection));
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

The output shows the camera rotating around the scene center (0.0f, 0.0f, 0.0f); I captured a few screenshots:

4. Free Movement

Rotating the camera around the scene is fun, but moving the camera ourselves is even more fun! First we need to set up a camera system, so it is useful to define some camera variables near the top of our program:

glm::vec3 cameraPos = glm::vec3(0.0f,0.0f,3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f,0.0f,-1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f,1.0f,0.0f);
//set up this way for the forward/backward and sideways movement later;

The LookAt matrix now becomes:

glm::mat4 view = glm::mat4(1.0f);
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);

We first set the camera position to the previously defined cameraPos. The target direction is the current position plus the direction vector we just defined. This ensures that however we move, the camera keeps looking in the target direction. Let's play with these vectors a bit and update the cameraPos vector when certain (WASD) keys are pressed.

We already defined a processInput function for GLFW's keyboard input, so let's add a few extra key checks:

void processInput(GLFWwindow* window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)//if the escape key is pressed;
        glfwSetWindowShouldClose(window, true);
    float cameraSpeed = 0.005f;//movement speed;
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
}

Remember to define these as global variables:

glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
//put these near the top of the program!!!;

5. Movement Speed

Currently our movement speed is a constant. In theory that is fine, but in practice, depending on the power of their processor, some people render many more frames per second than others, i.e. they call processInput at a higher frequency. The result is that, depending on the setup, some people move very fast while others move very slowly. When you ship your program you have to make sure it moves at the same speed on all hardware.

Graphics applications and games usually keep track of a deltatime variable that stores the time it took to render the previous frame. We then multiply all velocities by this deltaTime value. The result is that when deltaTime is large, meaning the last frame took longer to render, the velocity of this frame is also higher to balance things out. With this approach the camera moves at a balanced speed whether your computer is fast or slow, so every user gets the same experience.

We keep track of two global variables to calculate the deltaTime value:

float deltaTime = 0.0f; // time between the current frame and the last frame
float lastFrame = 0.0f; // time of the last frame

Within each frame we then calculate the new deltaTime value for later use:

float currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;

Now that we have deltaTime we can take it into account when calculating the speed:

void processInput(GLFWwindow* window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)//if the escape key is pressed;
        glfwSetWindowShouldClose(window, true);
//-------------------------------- modification start ---------------------------------//
    float cameraSpeed = 2.5f * deltaTime;//movement speed;
//-------------------------------- modification end ---------------------------------//
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
}

Combined with the previous sections, this gives us a smoother camera system.

main.cpp

The main.cpp code at this point looks like this:

#include <glad/glad.h> 
#include <GLFW/glfw3.h>
#include <iostream>
#include<E:\OpenGl\练习1.1\3.3.shader_class\shader s.h>
//the next three lines include the GLM headers;
#include <E:\OpenGl\glm\glm-master\glm\glm.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\matrix_transform.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\type_ptr.hpp>

#define STB_IMAGE_IMPLEMENTATION
#include <E:/OpenGl/stb_image.h/stb-master/stb_image.h>//these two lines add the stb_image library;

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow* window);


glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);

float deltatime = 0.0f;//time difference between the previous frame and this frame;
float lastime = 0.0f;//time of the previous frame;

int main()
{

    //first initialize glfw;
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);//set the major version to 3;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);//set the minor version to 3;
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "MY OPENGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Fail to create a window" << std::endl;
        glfwTerminate();//release resources;
        return -1;
    }
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
    //after creating the window, make its context the current context;

    //initialize glad;
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    {
        std::cout << "Failed to initialize glad" << std::endl;
        return -1;
    }
    //bring in the shader class; the shader is encapsulated in class Shader;
    Shader shader("3.3.shader.vs", "3.3.shader.fs");

    float vertices[] = {
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
    };


    unsigned int VAO, VBO;
    glGenVertexArrays(1, &VAO);//create the VAO;
    glGenBuffers(1, &VBO);//create the VBO;
    //glGenBuffers(1, &EBO);
    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);


    //link the vertex attributes;
    //position attribute;
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);//the stride here means we move 5 floats forward to reach the next vertex's attributes;
    glEnableVertexAttribArray(0);
    //texture attribute;
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(1);//enable the texture attribute;

    //加入纹理1:
    unsigned int texture1;
    glGenTextures(1, &texture1);
    glBindTexture(GL_TEXTURE_2D, texture1);
    //修改纹理的环绕和过滤方式;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //加载入纹理:
    stbi_set_flip_vertically_on_load(true);//防止上下颠倒;
    int width, height, nrchannels;
    unsigned char* data = stbi_load("E:/OpenGl/textures/v2-6e8b14becd4699a1e02421670e25ec74_r.jpg", &width, &height, &nrchannels, 0);
    //生成纹理;
    if (data)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Fail to load a image" << std::endl;
    }
    stbi_image_free(data);

    //加入纹理2:
    unsigned int texture2;
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    //修改纹理的环绕和过滤方式;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //加载入纹理:
    stbi_set_flip_vertically_on_load(true);//防止上下颠倒;
    int width1, height1, nrchannels1;
    unsigned char* data2 = stbi_load("E:/OpenGl/textures/2e3c98d3bf204d029be74443398e0c87.jpeg", &width1, &height1, &nrchannels1, 0);
    //生成纹理;
    if (data2)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, data2);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Fail to load a image" << std::endl;
    }
    stbi_image_free(data2);

    //we also need to tell each sampler which texture unit it belongs to;
    shader.useProgram();
    shader.setInt("Tex1", 0);//sampler 1 corresponds to texture unit 0;
    shader.setInt("Tex2", 1);

    glEnable(GL_DEPTH_TEST);

    glm::vec3 cubePositions[] = {
  glm::vec3(0.0f,  0.0f,  0.0f),
  glm::vec3(2.0f,  5.0f, -15.0f),
  glm::vec3(-1.5f, -2.2f, -2.5f),
  glm::vec3(-3.8f, -2.0f, -12.3f),
  glm::vec3(2.4f, -0.4f, -3.5f),
  glm::vec3(-1.7f,  3.0f, -7.5f),
  glm::vec3(1.3f, -2.0f, -2.5f),
  glm::vec3(1.5f,  2.0f, -2.5f),
  glm::vec3(1.5f,  0.2f, -1.5f),
  glm::vec3(-1.3f,  1.0f, -1.5f)
    };

    //start the render loop:
    while (!glfwWindowShouldClose(window))
    {
//--------------------- below are the deltatime-related operations ----------------//
        float currentime = glfwGetTime();
        deltatime = currentime - lastime;
        lastime = currentime;
//-------------------------------------------------------------//
        //handle input:
        processInput(window);
        //update the buffer color every frame before drawing the next frame;
        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);//clear the color buffer;
        glClear(GL_DEPTH_BUFFER_BIT);

        //bind the two textures; each texture unit has to be activated first;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);//bind this texture to the currently active texture unit;
        glActiveTexture(GL_TEXTURE1);//activate texture unit 1;
        glBindTexture(GL_TEXTURE_2D, texture2);
        //activate the Shader object's program;

        glBindVertexArray(VAO);//bind the VAO;
        //loop ten times;
        for (int i = 0; i < 10; i++)
        {
            //create a matrix that rotates over time:
            shader.useProgram();//activate the shader program;
            //create the three transformation matrices;
            //first initialize them as identity matrices;
            glm::mat4 projection = glm::mat4(1.0f);
            glm::mat4 view = glm::mat4(1.0f);
            glm::mat4 model = glm::mat4(1.0f);
            model = glm::translate(model, cubePositions[i]);
            float angle = 20.0f;
            model = glm::rotate(model, (float)glfwGetTime() * glm::radians(angle * (i + 1)), glm::vec3(0.4f, 0.2f, 0.0f));

            view = glm::lookAt(cameraPos,cameraPos+cameraFront,cameraUp);

            projection = glm::perspective(glm::radians(45.0f), (float)800 / (float)600, 0.1f, 100.0f);

            //pass the transformation matrices to the shader; since the transforms change every frame, this has to be done in every iteration, i.e. inside the loop;
            int Location1 = glGetUniformLocation(shader.ID, "model");
            glUniformMatrix4fv(Location1, 1, GL_FALSE, glm::value_ptr(model));
            int Location2 = glGetUniformLocation(shader.ID, "view");
            glUniformMatrix4fv(Location2, 1, GL_FALSE, glm::value_ptr(view));
            int Location3 = glGetUniformLocation(shader.ID, "projection");
            glUniformMatrix4fv(Location3, 1, GL_FALSE, glm::value_ptr(projection));
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

        //开始绘制;
        //glDrawArrays(GL_TRIANGLES, 0, 3);//第一个参数是要进行绘制的类型,第二个参数制定了顶点数组的开始索引,
        //glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
        //注意:6是指一个要绘制六个顶点;!!
        //第三个参数是要进行绘制的顶点的长度;

        //更新缓冲;
        glfwSwapBuffers(window);
        //进行检查;
        glfwPollEvents();
    }
    //at the end, delete the VAO, the VBO, and the shader program;
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    //glDeleteProgram(shaderProgram);//delete the shader program;
    //clean up resources at the end;
    glfwTerminate();
    return 0;
}

void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height);
}
void processInput(GLFWwindow* window)
{
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)//if the escape key is pressed;
        glfwSetWindowShouldClose(window, true);
    float cameraSpeed = 2.5f * deltatime;//movement speed;
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
}

6. Look Around

Only being able to move with the keyboard isn't that interesting. Especially since we can't turn, our movement is quite limited. Time to add the mouse!

To be able to change the viewing angle we need to change the cameraFront vector based on mouse input. However, changing the direction vector based on mouse movement is a bit complicated and requires some trigonometry.

7. Euler Angles

Euler angles (Euler Angle) are 3 values that can represent any rotation in 3D space, introduced by Leonhard Euler in the 18th century. There are 3 Euler angles: pitch, yaw and roll; the following pictures show what they mean:

1. The pitch angle describes how much we look up or down, as shown in the first image.

2. The second image shows the yaw angle, which represents how much we look to the left or to the right. The roll angle represents how much we roll the camera and is mostly used for space-flight cameras. Each Euler angle is represented by a single value, and combining all three of them lets us compute any rotation vector in 3D space.

For our camera system we only care about the pitch and yaw angles, so we won't discuss the roll angle here. Given a pitch and a yaw, we can convert them into a 3D vector that represents a new direction vector. The conversion from pitch and yaw to a direction vector requires a bit of trigonometry; we start with the most basic case:

If we define the hypotenuse to have a length of 1, we know the length of the adjacent side is cos x / h = cos x / 1 = cos x and the opposite side is sin y / h = sin y / 1 = sin y. This gives us general formulas for the lengths in the x and y directions, depending on the given angle, and we use them to compute the components of the direction vector:

This triangle looks similar to the previous one, so if we imagine ourselves sitting on the xz plane and looking towards the y axis, we can calculate the length/strength of the y direction (how much we look up or down) based on the first triangle. From the image we can see that the y value for a given pitch equals sin θ:

direction.y = sin(glm::radians(pitch));//note the degree-to-radian conversion!!!;

Here we only update the y value, but if you look carefully the x and z components are also affected. From the triangle (the figure above) we can see that their values equal:

direction.x = direction.z = cos(glm::radians(pitch));//pitch is the pitch angle;

Next let's see if we can find the required components for the yaw angle as well:

Just like the pitch triangle, we can see that the x component depends on cos(yaw), and the z component likewise depends on the sine of the yaw angle. Adding this to the previous values gives a final direction vector based on the pitch and yaw angles:

direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
direction.y = sin(glm::radians(pitch));
direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));

 // Translator's note: direction represents the camera's front axis (Front), which is the opposite of the direction vector of the second camera in the first image of this chapter.

This gives us a formula to convert pitch and yaw values into a 3-dimensional direction vector that we can use to freely rotate the camera's view. You may be wondering: how do we get the pitch and yaw values? That's where mouse input comes in!!;

8. Mouse Input

The yaw and pitch values are obtained from mouse (or controller) movement, where horizontal movement affects the yaw and vertical movement affects the pitch. The idea is to store the mouse position of the last frame and, in the current frame, calculate how much the mouse position has changed. The larger the horizontal/vertical difference, the more the pitch or yaw changes, i.e. the more the camera should move.

First we have to tell GLFW that it should hide the cursor and capture it. Capturing the cursor means that once the application has focus (translator's note: i.e. you are currently using the program; in Windows the focused program usually has a colored title bar, while unfocused programs have a gray one), the cursor stays within the window (unless the application loses focus or quits). We can do this with one simple configuration call:

glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

After this call, wherever we move the mouse, the cursor won't be displayed and it won't leave the window. Perfect for an FPS camera system. [Suggestion: add this call in the window-setup part of the code];

To calculate the pitch and yaw values we need to tell GLFW to listen to mouse-movement events. We do this (similar to keyboard input) with a callback function, whose prototype looks like this:

void mouse_callback(GLFWwindow* window, double xpos, double ypos);

Here xpos and ypos represent the current mouse position. As soon as we register the callback function with GLFW, the mouse_callback function is called whenever the mouse moves:

glfwSetCursorPosCallback(window, mouse_callback);
//this step is key: it connects the mouse to the callback function, which in turn updates cameraFront and therefore the view matrix;

When handling mouse input for an FPS-style camera, there are several steps we have to take before we can fully compute the camera's direction vector:

  1. Calculate the mouse's offset since the last frame.

  2. Add the offset values to the camera's pitch and yaw angles.

  3. Add some constraints to the minimum/maximum pitch and yaw values.

  4. Calculate the direction vector.

The first step is to calculate the mouse's offset since the last frame. We first have to store the mouse position of the last frame in the application, which we initialize to the center of the screen (the screen size is 800x600):

float lastX = 400, lastY = 300;

Then in the mouse's callback function we calculate the offset between the current frame's and the last frame's mouse position:

float xoffset = xpos - lastX;
float yoffset = lastY - ypos; // reversed since y-coordinates range from bottom to top
lastX = xpos;
lastY = ypos;

float sensitivity = 0.05f;
xoffset *= sensitivity;
yoffset *= sensitivity;

Note that we multiply the offset values by a sensitivity value. If we omit this multiplication the mouse movement would be far too strong; fiddle around with the sensitivity value to find one that suits you.

Next we add the offset values to the global pitch and yaw variables:

yaw   += xoffset;
pitch += yoffset;

In the third step we'd like to add some constraints to the camera so it can't make any weird movements (this also avoids a few strange issues). For the pitch, the user shouldn't be able to look higher than 89 degrees (at 90 degrees the view flips, so we take 89 as the limit), and also not lower than -89 degrees. This ensures the user can look up to the sky or down to their feet, but no further. We enforce it by replacing the value with its limit whenever it exceeds the constraint:

if(pitch > 89.0f)
  pitch =  89.0f;
if(pitch < -89.0f)
  pitch = -89.0f;

The fourth and final step is to calculate the actual direction vector from the pitch and yaw values:

glm::vec3 front;
front.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
front.y = sin(glm::radians(pitch));
front.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
cameraFront = glm::normalize(front);

The computed direction vector then contains all the rotations calculated from the mouse's movement. Since the cameraFront vector is already part of the arguments to GLM's lookAt function, we're all set.

If you run the code now, you'll notice the camera makes a large sudden jump the moment the window first receives focus. The cause is that as soon as your cursor enters the window, the mouse callback is called with an xpos and ypos equal to wherever the mouse entered the screen. This is often far away from the center of the screen, resulting in a large offset and thus a big jump. We can fix this with a simple bool that checks whether this is the first time we receive mouse input; if so, we first update the stored initial mouse position to the new xpos and ypos values, and the following mouse movements then use the newly entered position to compute the offsets:

if(firstMouse) // this bool variable is initially set to true
{
    lastX = xpos;
    lastY = ypos;
    firstMouse = false;
}

The final code should look like this:

void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    //if this is the first time we receive mouse input, make some adjustments:
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }
    //1. first obtain the Euler angle offsets from the difference between the two frames;
    float xoffer = xpos - lastX;
    float yoffer = lastY - ypos; // reversed since y-coordinates range from bottom to top;
    lastX = xpos;
    lastY = ypos;
    float sensitivity = 0.05f;//sensitivity setting;
    xoffer *= sensitivity; //remember to take the sensitivity into account;
    yoffer *= sensitivity;
    //2. add the offsets to the camera's two Euler angles;
    yaw += xoffer;
    pitch += yoffer;
    //3. constrain the pitch angle;
    if (pitch > 89.0f)
        pitch = 89.0f;
    if (pitch < -89.0f)
        pitch = -89.0f;
    //4. feed the Euler angles into cameraFront to change the direction and therefore the view matrix;
    glm::vec3 direction;
    direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
    direction.y = sin(glm::radians(pitch));
    direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
    cameraFront = glm::normalize(direction);
}

9. Zoom

As an extra to our camera system we'll also implement a zoom interface. In earlier tutorials we said that the Field of View (fov) defines how much of the scene we can see. When the field of view becomes smaller, the scene's projected space gets smaller, giving the illusion of zooming in. We'll use the mouse's scroll wheel to zoom. Just like mouse movement and keyboard input, we need a callback function for mouse scrolling:

void scroll_callback(GLFWwindow* window, double xoffset, double yoffset){
  if(fov >= 1.0f && fov <= 45.0f)
    fov -= yoffset;
  if(fov <= 1.0f)
    fov = 1.0f;
  if(fov >= 45.0f)
    fov = 45.0f;
}

When scrolling, the yoffset value tells us the amount we scrolled vertically. When the scroll_callback function is called, we change the content of the global fov variable. Since 45.0f is the default field-of-view value, we constrain the zoom level between 1.0f and 45.0f.

We now have to upload the perspective projection matrix to the GPU every frame, but this time using the fov variable as its field of view:

projection = glm::perspective(glm::radians(fov), 800.0f / 600.0f, 0.1f, 100.0f);

And finally, don't forget to register the scroll callback function:

glfwSetScrollCallback(window, scroll_callback);

The complete code is as follows:

#include <glad/glad.h> 
#include <GLFW/glfw3.h>
#include <iostream>
#include<E:\OpenGl\练习1.1\3.3.shader_class\shader s.h>
//以下三行为glm的头文件代码;
#include <E:\OpenGl\glm\glm-master\glm\glm.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\matrix_transform.hpp>
#include <E:\OpenGl\glm\glm-master\glm\gtc\type_ptr.hpp>

#define STB_IMAGE_IMPLEMENTATION
#include <E:/OpenGl/stb_image.h/stb-master/stb_image.h>//这两行代码加入了stb_image库;

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow* window);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void scroll_back(GLFWwindow* window, double xoffset, double yoffset);

//three global variables that control the camera position;
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);

float deltatime = 0.0f;//time difference between the previous frame and this frame;
float lastime = 0.0f;//time of the previous frame;

//used to store the mouse position of the previous frame, initialized to the screen center;
float lastX = 400.0;
float lastY = 300.0;

//pitch and yaw angles;
float pitch = 0.0f;
float yaw = -90.0f;//-90 so the camera initially looks down the -Z axis;

float fov = 45.0f;//field-of-view angle;

int main()
{

    //先进行初始化glfw;
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);//主版本设置为3;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);//次版本设置为3;
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "MY OPENGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Fail to create a window" << std::endl;
        glfwTerminate();//释放资源;
        return -1;
    }
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
    //创建完告诉将上下文设置为进程上下文;
    
    //the following calls are camera-related settings; since they operate on the window, they are placed here!!;
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);//tell GLFW to hide the cursor and capture it;
    glfwSetCursorPosCallback(window, mouse_callback);//whenever the mouse moves, mouse_callback is called to update the two Euler angles,
    //which in turn updates the cameraFront direction vector, giving us 3D rotation;
    //we also change the field-of-view angle fov so we can zoom in and out;
    glfwSetScrollCallback(window, scroll_back);//when the scroll wheel is used, this callback changes fov and therefore the perspective projection matrix,
    //which gives us the zoom in/out effect!!;


    //对glad进行初始化;
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    {
        std::cout << "Fail to initnite glad" << std::endl;
        return -1;
    }
    //引入着色器类,着色器被封装到了class Shader里面;
    Shader shader("3.3.shader.vs", "3.3.shader.fs");

    float vertices[] = {
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
    };


    unsigned int VAO, VBO;
    glGenVertexArrays(1, &VAO);//创建VAO;
    glGenBuffers(1, &VBO);//创建VBO;
    //glGenBuffers(1, &EBO);
    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);


    //链接顶点属性;
    //位置属性;
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);//这里的步长为获得下一个属性值,应该右移六个长度的单位;
    glEnableVertexAttribArray(0);
    //纹理属性;
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(1);//启用纹理属性;

    //加入纹理1:
    unsigned int texture1;
    glGenTextures(1, &texture1);
    glBindTexture(GL_TEXTURE_2D, texture1);
    //修改纹理的环绕和过滤方式;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //加载入纹理:
    stbi_set_flip_vertically_on_load(true);//防止上下颠倒;
    int width, height, nrchannels;
    unsigned char* data = stbi_load("E:/OpenGl/textures/v2-6e8b14becd4699a1e02421670e25ec74_r.jpg", &width, &height, &nrchannels, 0);
    //生成纹理;
    if (data)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Fail to load a image" << std::endl;
    }
    stbi_image_free(data);

    //加入纹理2:
    unsigned int texture2;
    glGenTextures(1, &texture2);
    glBindTexture(GL_TEXTURE_2D, texture2);
    //修改纹理的环绕和过滤方式;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //加载入纹理:
    stbi_set_flip_vertically_on_load(true);//防止上下颠倒;
    int width1, height1, nrchannels1;
    unsigned char* data2 = stbi_load("E:/OpenGl/textures/2e3c98d3bf204d029be74443398e0c87.jpeg", &width1, &height1, &nrchannels1, 0);
    //生成纹理;
    if (data2)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, data2);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else
    {
        std::cout << "Fail to load a image" << std::endl;
    }
    stbi_image_free(data2);

    //还要指定纹理单元对应的采样器;
    shader.useProgram();
    shader.setInt("Tex1", 0);//采样器1对应纹理单元0;
    shader.setInt("Tex2", 1);

    glEnable(GL_DEPTH_TEST);

    glm::vec3 cubePositions[] = {
  glm::vec3(0.0f,  0.0f,  0.0f),
  glm::vec3(2.0f,  5.0f, -15.0f),
  glm::vec3(-1.5f, -2.2f, -2.5f),
  glm::vec3(-3.8f, -2.0f, -12.3f),
  /*glm::vec3(2.4f, -0.4f, -3.5f),
  glm::vec3(-1.7f,  3.0f, -7.5f),
  glm::vec3(1.3f, -2.0f, -2.5f),
  glm::vec3(1.5f,  2.0f, -2.5f),
  glm::vec3(1.5f,  0.2f, -1.5f),
  glm::vec3(-1.3f,  1.0f, -1.5f)*/
    };

    //准备引擎:
    while (!glfwWindowShouldClose(window))
    {
        float currentime = glfwGetTime();
        deltatime = currentime - lastime;
        lastime = currentime;

        //准备输入:
        processInput(window);

        //每一次更新缓冲颜色,之后继续画下一帧;
        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);//清除颜色缓冲单元;
        glClear(GL_DEPTH_BUFFER_BIT);

        //绑定两个纹理,要先激活没一个纹理单元;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);//绑定这个纹理到当前激活的纹理单元;
        glActiveTexture(GL_TEXTURE1);//激活纹理单元1;
        glBindTexture(GL_TEXTURE_2D, texture2);
        //启用Shader对象shader的程序启动函数;

        //create a matrix that rotates over time:
        //shader.useProgram();//activate the shader program;
        //create the three transformation matrices;
        //glm::mat4 projection = glm::mat4(1.0f);
        //glm::mat4 view = glm::mat4(1.0f);
        //glm::mat4 model = glm::mat4(1.0f);
        //model = glm::rotate(model, (float)glfwGetTime()*glm::radians(-55.0f), glm::vec3(0.5f, 1.0f, 0.0f));
        //view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
        //projection = glm::perspective(glm::radians(45.0f), (float)800 / (float)600, 0.1f, 100.0f);

        //pass the transformation matrices to the shader; since the transforms change every frame, this has to happen inside the loop;
        //int Location1 = glGetUniformLocation(shader.ID, "model");
        //glUniformMatrix4fv(Location1, 1, GL_FALSE, glm::value_ptr(model));
        //int Location2 = glGetUniformLocation(shader.ID, "view");
        //glUniformMatrix4fv(Location2, 1, GL_FALSE, glm::value_ptr(view));
        //int Location3 = glGetUniformLocation(shader.ID, "projection");
        //glUniformMatrix4fv(Location3, 1, GL_FALSE, glm::value_ptr(projection));

        //create the three transformation matrices;
            //first initialize them as identity matrices;
        glm::mat4 projection = glm::mat4(1.0f);
        glm::mat4 view = glm::mat4(1.0f);
        glm::mat4 model = glm::mat4(1.0f);
        float angle = 20.0f;

        glBindVertexArray(VAO);//bind the VAO;
        //loop over the cubes (only four are enabled this time);
        for (int i = 0; i < 4; i++)
        {
            //create a matrix that rotates over time:
            shader.useProgram();//activate the shader program;
            model = glm::translate(model, cubePositions[i]);
            model = glm::rotate(model, (float)glfwGetTime() * glm::radians(angle * (i + 1)), glm::vec3(0.4f, 0.2f, 0.0f));

            view = glm::lookAt(cameraPos,cameraPos+cameraFront,cameraUp);

            projection = glm::perspective(glm::radians(fov), (float)800 / (float)600, 0.1f, 100.0f);

            //将变换矩阵传入着色器;注意到由于变换随时发生,要每一次迭代都进行,因此要放在里面;
            int Location1 = glGetUniformLocation(shader.ID, "model");
            glUniformMatrix4fv(Location1, 1, GL_FALSE, glm::value_ptr(model));
            int Location2 = glGetUniformLocation(shader.ID, "view");
            glUniformMatrix4fv(Location2, 1, GL_FALSE, glm::value_ptr(view));
            int Location3 = glGetUniformLocation(shader.ID, "projection");
            glUniformMatrix4fv(Location3, 1, GL_FALSE, glm::value_ptr(projection));
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

        //开始绘制;
        //glDrawArrays(GL_TRIANGLES, 0, 3);//第一个参数是要进行绘制的类型,第二个参数制定了顶点数组的开始索引,
        //glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
        //注意:6是指一个要绘制六个顶点;!!
        //第三个参数是要进行绘制的顶点的长度;

        //更新缓冲;
        glfwSwapBuffers(window);
        //进行检查;
        glfwPollEvents();
    }
    //结束要删除VAO,VBO,以及删除着色器程序; 
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    //glDeleteProgram(shaderProgram);//删除着色器程序;
    //结束清楚资源;
    glfwTerminate();
    return 0;
}

void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height);
}
void processInput(GLFWwindow* window)
{ 
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)//if the escape key is pressed;
        glfwSetWindowShouldClose(window, true);
    float cameraSpeed = 2.5f * deltatime;//movement speed;
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        cameraPos += cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        cameraPos -= cameraFront * cameraSpeed;
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        cameraPos -= cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        cameraPos += cameraSpeed * glm::normalize(glm::cross(cameraFront, cameraUp));
}

bool firstMouse = true;

void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    //calculate the mouse's offset since the last frame.
        //add the offset values to the camera's pitch and yaw angles.
        //constrain the minimum/maximum pitch and yaw values.
        //calculate the direction vector.
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;//otherwise this branch would run every time;
    }
    //1. calculate the mouse's offset since the last frame.
    float xoffset = xpos - lastX;
    float yoffset = lastY - ypos;
    lastX = xpos;
    lastY = ypos;//update the stored values of the previous frame;
    float sensitivity = 0.1f;//set the sensitivity;
    xoffset *= sensitivity;
    yoffset *= sensitivity;

    //2. add the offsets to the camera's pitch and yaw angles.
    pitch = pitch + yoffset;
    yaw = yaw + xoffset;

    //3. constrain the minimum/maximum pitch and yaw values

    if (pitch > 89.0f)
        pitch = 89.0f;
    if (pitch < -89.0f)
        pitch = -89.0f;
    //4. calculate the direction vector.
    glm::vec3 direction;
    direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
    direction.y = sin(glm::radians(pitch));
    direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
    cameraFront = glm::normalize(direction);
}
void scroll_back(GLFWwindow* window, double xoffset, double yoffset)
{
    //we constrain fov between 1.0 and 45.0!!;
    if (fov >= 1.0f && fov <= 45.0f)
    {
        fov -= yoffset;
    }
    if (fov >= 45.0f)
    {
        fov = 45.0f;
    }
    if (fov <= 1.0f)
    {
        fov = 1.0f;
    }
}

The result is a camera system that can rotate and zoom freely and move forward/backward and sideways with the WASD keys!!!;
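The introduction mentioned wrapping all of this into a custom camera class; the post stops short of that, but as a rough, hypothetical sketch (my own illustration, not the author's code; class and member names such as SimpleCamera and GetViewMatrix are made up), the globals above could be grouped roughly like this:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

class SimpleCamera
{
public:
    glm::vec3 Position = glm::vec3(0.0f, 0.0f, 3.0f);
    glm::vec3 Front    = glm::vec3(0.0f, 0.0f, -1.0f);
    glm::vec3 WorldUp  = glm::vec3(0.0f, 1.0f, 0.0f);
    float Yaw   = -90.0f; // so that Front initially points towards -Z
    float Pitch = 0.0f;
    float Fov   = 45.0f;

    // same lookAt call as used in the render loop above
    glm::mat4 GetViewMatrix() const
    {
        return glm::lookAt(Position, Position + Front, WorldUp);
    }

    // feed the (already sensitivity-scaled) offsets from mouse_callback here
    void ProcessMouse(float xoffset, float yoffset)
    {
        Yaw   += xoffset;
        Pitch += yoffset;
        if (Pitch >  89.0f) Pitch =  89.0f;
        if (Pitch < -89.0f) Pitch = -89.0f;
        glm::vec3 direction;
        direction.x = cos(glm::radians(Pitch)) * cos(glm::radians(Yaw));
        direction.y = sin(glm::radians(Pitch));
        direction.z = cos(glm::radians(Pitch)) * sin(glm::radians(Yaw));
        Front = glm::normalize(direction);
    }
};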

10. Getting Started: Closing Words

Congratulations on finishing this chapter. By now you should be able to create a window, create and compile shaders, send vertex data to your shaders via buffer objects or uniforms, draw objects, use textures, understand vectors and matrices, and combine all of that knowledge to create a full 3D scene that you can move through with a camera.

Below is a list of terms:

Glossary


  • OpenGL: a formal specification of a graphics API that defines the layout and output of its functions.

  • GLAD: an extension loading library that loads and sets up all OpenGL function pointers for us so we can use all (modern) OpenGL functions.

  • Viewport: the window region we render to.

  • Graphics Pipeline: the entire process a vertex goes through before being presented as pixels.

  • Shader: a small program that runs on the graphics card. Several stages of the graphics pipeline can use custom shaders to replace the built-in functionality.

  • Normalized Device Coordinates (NDC): the coordinate system a vertex ends up in after clipping and perspective division in clip space. All vertex positions between -1.0 and 1.0 in NDC will not be discarded and will be visible.

  • Vertex Buffer Object (VBO): a buffer object that allocates GPU memory and stores all the vertex data there for the graphics card to use.

  • Vertex Array Object (VAO): stores buffer and vertex attribute state.

  • Element Buffer Object (EBO), also called Index Buffer Object (IBO): a buffer object that stores element indices for indexed drawing.

  • Uniform: a special type of GLSL variable. It is global (every shader in a shader program can access it) and only needs to be set once.

  • Texture: a special type of image wrapped around objects to give them fine visual detail.

  • Texture Wrapping: defines the mode that specifies how OpenGL should sample a texture when its coordinates fall outside the range (0, 1).

  • Texture Filtering: defines the mode that specifies how OpenGL should sample a texture when there are several texels to choose from. This usually happens when the texture is magnified.

  • Mipmaps: stored smaller versions of a texture; the appropriate size is chosen based on the distance to the viewer.

  • stb_image.h: an image loading library.

  • Texture Units: allow multiple textures to be rendered on a single object by binding textures to different texture units.

  • Vector: a mathematical entity that defines a direction and/or position in space.

  • Matrix: a rectangular array of mathematical expressions.

  • GLM: a mathematics library tailored for OpenGL.

  • Local Space: the space an object starts in. All coordinates are relative to the object's origin.

  • World Space: all coordinates are relative to a global origin.

  • View Space: all coordinates are viewed from a camera's point of view.

  • Clip Space: all coordinates as viewed from the camera's perspective, but with projection applied. This is the space vertex coordinates should end up in as output of the vertex shader; OpenGL handles the rest (clipping/perspective division).

  • Screen Space: all coordinates as viewed from the screen. Coordinates range from 0 to the screen's width/height.

  • LookAt matrix: a special type of view matrix that creates a coordinate system where all coordinates are rotated and translated in such a way that the user is looking at a given target from a given position.

  • Euler Angles: defined as yaw, pitch and roll, allowing us to form any 3D direction from these 3 values.
