OpenGL camera

1. Introduction

OpenGL itself has no concept of a camera. We can simulate one by moving every object in the scene in the opposite direction, creating the illusion that we are moving rather than the scene.

To define a camera, we need its position in world space, the direction it is looking, a vector pointing to its right, and a vector pointing upward from it.

Camera position:

Getting the camera position is simple: it is just a vector in world space pointing at the camera's location. Don't forget that the positive z-axis points out of the screen toward you, so if we want the camera to move backwards we move it along the positive z-axis.

QVector3D cameraPos = QVector3D( 0.0f,  0.0f,  2.0f);// camera position

Camera Direction:

This is the direction the camera points in. For now, let's point the camera at the scene origin (0, 0, 0). Subtracting the camera position from the origin would give the actual viewing direction, but by convention the camera looks down its own negative z-axis, so we want a direction vector (Direction Vector) that points along the camera's positive z-axis. Swapping the order of the subtraction (camera position minus target) gives exactly that vector:

    cameraTarget = QVector3D( 0.0f,  0.0f,  0.0f);// position the camera looks at
    cameraDirection = QVector3D(cameraPos - cameraTarget);// camera direction
    cameraDirection.normalize();

Right axis: 

It represents the positive x-axis of camera space. To get the right vector we use a little trick: first define an up vector (Up Vector), then take the cross product of the up vector and the camera direction vector obtained in the previous step. The cross product of two vectors is perpendicular to both of them, so we get the vector pointing along the positive x-axis (swapping the operands of the cross product would give the opposite vector, pointing along the negative x-axis):

    up = QVector3D(0.0f,  1.0f,  0.0f);
    cameraRight = QVector3D::crossProduct(up,cameraDirection);// the cross product is perpendicular to both vectors, giving the vector along the positive x-axis
    cameraRight.normalize();

Up axis:

Taking the cross product of the direction vector and the right vector gives the camera's up vector:

cameraUp = QVector3D::crossProduct(cameraDirection,cameraRight);

Look At:

One of the nice things about matrices is that if you define a coordinate space with three mutually perpendicular (or at least non-collinear) axes, you can build a matrix from those three axes plus a translation vector, and then use that matrix to transform any vector into that coordinate space.

    QTime gtime;
    gtime.start();// elapsed() needs a started timer (the full example starts it in the constructor)
    QMatrix4x4 view;

    float radius = 10.0f;    // radius of the circular camera path
    float time = gtime.elapsed()/1000.0;
    float camx = sin(time) * radius;
    float camz = cos(time) * radius;

    view.lookAt(QVector3D(camx,0.0,camz),cameraTarget,up);

Parameters:

A camera position, a target position, and a vector representing up in world space (the one we used to compute the right vector).

2. Examples

#ifndef MYOPENGLWIDGET_H
#define MYOPENGLWIDGET_H
#include <QOpenGLWidget>
#include <QOpenGLFunctions_3_3_Core>
#include <QOpenGLTexture>
#include <QImage>
#include <QOpenGLShaderProgram>
#include <QVector3D>
#include <QVector>

class MyOpenGLWidget : public QOpenGLWidget,public QOpenGLFunctions_3_3_Core
{
public:
    MyOpenGLWidget(QWidget *parent = nullptr);

protected:
    virtual void initializeGL();
    virtual void paintGL();
    virtual void resizeGL(int w, int h);

private:
    QOpenGLTexture *m_wall;

    QOpenGLTexture *m_face;

    QOpenGLShaderProgram *m_program;

    QVector3D cameraPos;
    QVector3D cameraTarget;
    QVector3D cameraDirection;
    QVector3D up;
    QVector3D cameraRight;
    QVector3D cameraUp;
};

#endif // MYOPENGLWIDGET_H



#include "myopenglwidget.h"
#include <QMatrix4x4>
#include <QTime>
#include <QTimer>
#include <math.h>


float vertices[] = {
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,

    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,

    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,

    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,

    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f
};

GLuint indices[] = {
    0, 1, 3,
    1, 2, 3
};

// vertex shader source
const GLchar* vertexShaderSource = "#version 330 core\n"
"layout (location = 0) in vec3 position;\n"
"layout (location = 1) in vec2 texCoord;\n"
"out vec2 outTexCoord;\n"
"uniform mat4 model;\n"
"uniform mat4 view;\n"
"uniform mat4 projection;\n"
"void main()\n"
"{\n"
"gl_Position = projection * view * model * vec4(position,1.0);\n"
"outTexCoord = texCoord;\n"
"}\n\0";

// fragment shader source
// texture() samples a color value using the texture parameters set earlier
// mix() blends the two texture colors at the given ratio
const GLchar* fragmentShaderSource = "#version 330 core\n"
"out vec4 color;\n"
"uniform sampler2D ourTexture1;\n"
"uniform sampler2D ourTexture2;\n"
"in vec2 outTexCoord;\n"
"void main()\n"
"{\n"
"color = mix(texture(ourTexture1, outTexCoord),texture(ourTexture2, vec2(outTexCoord.x, outTexCoord.y)),0.45);\n"
"}\n\0";

GLuint VBO, VAO,EBO;
GLuint shaderProgram;

QTimer *timer;
QTime gtime;

QVector<QVector3D> cubePositions = {
  QVector3D( 0.0f,  0.0f,  0.0f),
  QVector3D( 2.0f,  5.0f, -15.0f),
  QVector3D(-1.5f, -2.2f, -2.5f),
  QVector3D(-3.8f, -2.0f, -12.3f),
  QVector3D( 2.4f, -0.4f, -3.5f),
  QVector3D(-1.7f,  3.0f, -7.5f),
  QVector3D( 1.3f, -2.0f, -2.5f),
  QVector3D( 1.5f,  2.0f, -2.5f),
  QVector3D( 1.5f,  0.2f, -1.5f),
  QVector3D(-1.3f,  1.0f, -1.5f)
};

MyOpenGLWidget::MyOpenGLWidget(QWidget *parent)
    : QOpenGLWidget(parent)
{
    timer = new QTimer();
    timer->start(50);
    connect(timer,&QTimer::timeout,[=]{
        update();
    });

    gtime.start();

    cameraPos = QVector3D( 0.0f,  0.0f,  2.0f);// camera position
    cameraTarget = QVector3D( 0.0f,  0.0f,  0.0f);// position the camera looks at
    cameraDirection = QVector3D(cameraPos - cameraTarget);// camera direction
    cameraDirection.normalize();

    up = QVector3D(0.0f,  1.0f,  0.0f);
    cameraRight = QVector3D::crossProduct(up,cameraDirection);// the cross product is perpendicular to both vectors, giving the vector along the positive x-axis
    cameraRight.normalize();

    cameraUp = QVector3D::crossProduct(cameraDirection,cameraRight);
}

void MyOpenGLWidget::initializeGL()
{
    initializeOpenGLFunctions();

    m_program = new QOpenGLShaderProgram();
    m_program->addShaderFromSourceCode(QOpenGLShader::Vertex,vertexShaderSource);
    m_program->addShaderFromSourceCode(QOpenGLShader::Fragment,fragmentShaderSource);
    m_program->link();

    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);

    glBindVertexArray(VAO);// bind the VAO
    glBindBuffer(GL_ARRAY_BUFFER, VBO);// a vertex buffer object's buffer type is GL_ARRAY_BUFFER
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);// copy the vertex data into the buffer's memory; GL_STATIC_DRAW: the data will not (or rarely) change

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);

    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glGenBuffers(1, &EBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    glBindVertexArray(0);// unbind the VAO

    m_wall = new QOpenGLTexture(QImage("./container.jpg").mirrored());
    m_face = new QOpenGLTexture(QImage("./awesomeface.png").mirrored());

    m_program->bind();
    m_program->setUniformValue("ourTexture1",0);
    m_program->setUniformValue("ourTexture2",1);

    // set the perspective projection matrix
    QMatrix4x4 projection;
    projection.perspective(60,(float)( width())/(height()),0.1,100);
    m_program->setUniformValue("projection",projection);

}

void MyOpenGLWidget::paintGL()
{
    glClearColor(0.2f,0.3f,0.3f,1.0f);
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    QMatrix4x4 model;
    QMatrix4x4 view;

    // x and z coordinates that vary with time
    float radius = 10.0f;
    float time = gtime.elapsed()/1000.0;
    float camx = sin(time) * radius;
    float camz = cos(time) * radius;

    view.lookAt(QVector3D(camx,0.0,camz),cameraTarget,up);

    m_program->bind();

    glBindVertexArray(VAO);// bind the VAO

    m_wall->bind(0);
    m_face->bind(1);

    // set the view matrix
    m_program->setUniformValue("view",view);

    foreach(auto pos , cubePositions)
    {
        model.setToIdentity();
        model.translate(pos);
        //model.rotate(time,1.0f,5.0f,3.0f);
        // set the model matrix
        m_program->setUniformValue("model",model);
        glDrawArrays(GL_TRIANGLES,0,36);
    }

}

void MyOpenGLWidget::resizeGL(int w, int h)
{

}

3. Freedom to move

Now the lookAt call becomes the following: the position is the cameraPos defined earlier, and the target is the current position plus a front vector, cameraFront (the negation of the direction vector we defined above). This ensures that no matter how we move, the camera always looks along its front direction.

    QMatrix4x4 view;
    view.lookAt(cameraPos,cameraPos + cameraFront,cameraUp);

Keyboard events:

void MyOpenGLWidget::keyPressEvent(QKeyEvent *event)
{
    qDebug()<<event->key();
    float cameraSpeed = 2.5 * 100 / 1000.0;// 0.25 units per key press
    switch (event->key()) {
    case Qt::Key_W:{
        cameraPos += cameraSpeed * cameraFront;
    }
        break;
    case Qt::Key_S:{
        cameraPos -= cameraSpeed * cameraFront;
    }
        break;
    case Qt::Key_A:{
        cameraPos -= cameraSpeed * cameraRight;
    }
        break;
    case Qt::Key_D:{
        cameraPos += cameraSpeed * cameraRight;
    }
        break;
    default:
        break;

    }
    update();
}

When any of the WASD keys is pressed, the camera position is updated accordingly. To move forward or backward, we add or subtract the front vector to or from the position vector. To move left or right, we use the cross product to create a right vector and move along it, which produces the familiar strafe effect.

4. Angle of view movement

Moving around with only the keyboard isn't much fun, especially since we can't turn yet, so movement is very limited.

To change the viewing angle, we need to update the cameraFront vector according to the mouse input.

If you are interested, you can read more about Euler angles; here is the code directly.

float PI = 3.1415926;
QPoint deltaPos;
void MyOpenGLWidget::mouseMoveEvent(QMouseEvent *event)
{
    static float yaw = -90;
    static float pitch = 0;
    // previous position
    static QPoint lastPos(width()/2,height()/2);
    
    // current position
    auto currentPos = event->pos();
    
    // offset from the last position
    deltaPos = currentPos-lastPos;
    lastPos = currentPos;
    
    // sensitivity
    float sensitivity = 0.1f;
    deltaPos *= sensitivity;
    yaw += deltaPos.x();
    pitch -= deltaPos.y();
    
    if(pitch > 89.0f) 
        pitch = 89.0f;
    
    if(pitch < -89.0f) 
        pitch = -89.0f;
    
    cameraFront.setX(cos(yaw*PI/180.0) * cos(pitch *PI/180));
    cameraFront.setY(sin(pitch*PI/180));
    cameraFront.setZ(sin(yaw*PI/180) * cos(pitch *PI/180));
    cameraFront.normalize();
    
    update();
}

5. Zoom

The field of view (Field of View), or fov, defines how much of the scene we can see. When the field of view becomes smaller, the scene is projected into a smaller region, which feels like zooming in (Zoom In). We'll use the mouse scroll wheel to control the zoom.

We now upload the perspective projection matrix to the GPU every frame, using the fov variable as the field of view:

projection.perspective(fov,(float)( width())/(height()),0.1,100);

Wheel events:


void MyOpenGLWidget::wheelEvent(QWheelEvent *event)
{
    if(fov >= 1.0f && fov <= 75.0f)
        fov -= event->angleDelta().y()/120;
    if(fov <= 1.0f)
        fov = 1.0f;
    if(fov >= 75.0f)
        fov = 75.0f;

    update();
}

6. Complete source code

https://download.csdn.net/download/wzz953200463/87887281

Origin blog.csdn.net/wzz953200463/article/details/131134080