golang, OpenGL, computer graphics (2)

code repository

https://github.com/phprao/go-graphic

transform

Matrix operations and vector operations: https://learnopengl-cn.github.io/01%20Getting%20started/07%20Transformations/

In OpenGL we usually work with 4×4 transformation matrices, chiefly because our position vectors have 4 components.

In go-gl, a matrix is actually just a flat array:

type Mat4 [16]float32

func Ident4() Mat4 {
	return Mat4{
		1, 0, 0, 0,
		0, 1, 0, 0,
		0, 0, 1, 0,
		0, 0, 0, 1}
}
vector scaling


vector displacement


Homogeneous Coordinates

The w component of a vector is also known as its homogeneous coordinate. To get the 3D vector back from a homogeneous vector, we divide the x, y, and z coordinates by w. We usually don't notice this because w is usually 1.0. Homogeneous coordinates have several advantages; most importantly, they let us translate a 3D vector (without the w component, a matrix cannot translate a vector at all).

If the homogeneous coordinate of a vector is 0, the vector is a direction vector: because w is 0, it cannot be translated (this is what we mean when we say a direction cannot be displaced).

With a translation matrix we can move an object along the x, y, and z axes, which makes it one of the most useful matrices in our transformation toolbox.

vector rotation


rotation around an arbitrary axis


When combining matrices, it is recommended to scale first, then rotate, and translate last; otherwise the transformations interfere with each other (for example, translating first and then scaling would also scale the translation).
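A quick way to see why the order matters is to compose a scale and a translation both ways and apply them to the same point. This is a sketch using plain Go arrays (row-major matrices, vectors multiplied on the right, so the matrix nearest the vector is applied first), not mgl32:

```go
package main

import "fmt"

// matMul multiplies two 4x4 matrices (row-major).
func matMul(a, b [4][4]float32) [4][4]float32 {
	var r [4][4]float32
	for i := 0; i < 4; i++ {
		for j := 0; j < 4; j++ {
			for k := 0; k < 4; k++ {
				r[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return r
}

// apply multiplies a matrix by a column vector.
func apply(m [4][4]float32, v [4]float32) [4]float32 {
	var r [4]float32
	for i := 0; i < 4; i++ {
		for j := 0; j < 4; j++ {
			r[i] += m[i][j] * v[j]
		}
	}
	return r
}

func main() {
	scale := [4][4]float32{{2, 0, 0, 0}, {0, 2, 0, 0}, {0, 0, 2, 0}, {0, 0, 0, 1}}
	trans := [4][4]float32{{1, 0, 0, 1}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
	p := [4]float32{1, 0, 0, 1}

	// Recommended order: scale first, then translate (matrices compose right-to-left).
	fmt.Println(apply(matMul(trans, scale), p)) // scale then move: [3 0 0 1]
	fmt.Println(apply(matMul(scale, trans), p)) // move then scale: [4 0 0 1]
}
```

The same point ends up in two different places, which is exactly the "affect each other" problem the text warns about.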

GLM is short for OpenGL Mathematics. Starting from version 0.9.9, GLM initializes matrix types to the zero matrix (all elements 0) by default, rather than the identity matrix (1s on the diagonal, 0s elsewhere).

The corresponding package in Go is github.com/go-gl/mathgl/mgl32.

// Create a vector
v4 := mgl32.Vec4{1, 1, 1, 1}
v3 := mgl32.Vec3{3, 3, 3}
v4 = v3.Vec4(1)
v2 := v3.Vec2()

// Vector operations: Add, Sub, Mul, Dot, Cross, Len

// Matrix-matrix multiplication
A.Mul4(B)

// Create a 4x4 identity matrix
model := mgl32.Ident4()

// Create a transformation matrix trans3d that translates along the vector (3,4,5)
/*
[1, 0, 0, 3]
[0, 1, 0, 4]
[0, 0, 1, 5]
[0, 0, 0, 1]
*/
trans3d := mgl32.Translate3D(3, 4, 5)
// Move the vector vec3(1,2,3) along (3,4,5)
// Result: (4, 6, 8)
mgl32.TransformCoordinate(mgl32.Vec3{1, 2, 3}, trans3d)

// Create a scaling matrix with factors (2,2,2)
// A negative scale factor flips the image
/*
[2, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 2, 0]
[0, 0, 0, 1]
*/
scale3d := mgl32.Scale3D(2, 2, 2)
// Scale the vector vec3(1,2,3) by (2,2,2)
// Result: (2, 4, 6)
mgl32.TransformCoordinate(mgl32.Vec3{1, 2, 3}, scale3d)

// Transformation matrix: rotate 20 degrees around the axis (3,3,3)
// (note: HomogRotate3D expects a unit axis; (3,3,3) is not normalized,
// which is why the matrix below is not a pure rotation)
/*
[5.735343 2.588426 8.066097 0.000000]
[8.066097 5.735343 2.588426 0.000000]
[2.588426 8.066097 5.735343 0.000000]
[0.000000 0.000000 0.000000 1.000000]
*/
rotate3d := mgl32.HomogRotate3D(mgl32.DegToRad(20), mgl32.Vec3{3, 3, 3})
// Rotate the vector vec3(1,2,3) 20 degrees around the axis (3,3,3)
// Result: (35.11049, 27.302063, 35.92665)
mgl32.TransformCoordinate(mgl32.Vec3{1, 2, 3}, rotate3d)

Because the underlying math.Sin(angle) takes radians rather than degrees, angles must be passed here in radians; mgl32 provides the conversion functions mgl32.RadToDeg() and mgl32.DegToRad().

The transformation matrix has type mgl32.Mat4; we pass it into the shader as a uniform:

model := mgl32.Ident4()
modelUniform := gl.GetUniformLocation(program, gl.Str("model\x00"))
gl.UniformMatrix4fv(modelUniform, 1, false, &model[0])

And in the vertex shader:

uniform mat4 model;

Example: take an existing texture and apply scaling first, then rotation, then translation.


The effect after applying the transformations:


Main code:

for !window.ShouldClose() {
    ......
    gl.UseProgram(program)

    rotate := mgl32.HomogRotate3D(mgl32.DegToRad(90), mgl32.Vec3{0, 0, 1})
    scale := mgl32.Scale3D(0.5, 0.5, 0.5)
    translate := mgl32.Translate3D(0.5, -0.5, 0)
    // Read the order from right to left: scale, then rotate, then translate
    transe := translate.Mul4(rotate).Mul4(scale)
    gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])
    ......
}

vertex shader

......
uniform mat4 transe;
......
void main() {
    gl_Position = transe * vec4(vPosition, 1.0);
    ......
}

We can make the rotation angle change over time, so that the image keeps rotating:

rotate := mgl32.HomogRotate3D(float32(glfw.GetTime()), mgl32.Vec3{0, 0, 1})

The time returned by glfw.GetTime() is measured in seconds from the moment the window was created; sample values:

0.19938110100045076
0.3682488961570842
0.8834820281800326
...
1.0471016692995818
1.2141550765655154
1.380787958668221
...

Indicates how many seconds the window has been running.

Example 2: draw two boxes in one window; one rotates continuously while the other repeatedly shrinks and grows.

for !window.ShouldClose() {
    gl.ClearColor(0.2, 0.3, 0.3, 1.0)
    gl.Clear(gl.COLOR_BUFFER_BIT)
    gl.UseProgram(program)

    gl.ActiveTexture(gl.TEXTURE0)
    gl.BindTexture(gl.TEXTURE_2D, texture1)
    gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("ourTexture1"+"\x00")), 0)

    gl.ActiveTexture(gl.TEXTURE1)
    gl.BindTexture(gl.TEXTURE_2D, texture2)
    gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("ourTexture2"+"\x00")), 1)

    gl.BindVertexArray(vao)

    // First box
    rotate := mgl32.HomogRotate3D(float32(glfw.GetTime()), mgl32.Vec3{0, 0, 1}) // rotation effect
    scale := mgl32.Scale3D(0.5, 0.5, 0.5)
    translate := mgl32.Translate3D(0.5, -0.5, 0)
    transe := translate.Mul4(rotate).Mul4(scale)
    gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])
    gl.DrawElements(gl.TRIANGLES, pointNum, gl.UNSIGNED_INT, gl.Ptr(indices))

    // Second box
    rotate2 := mgl32.HomogRotate3D(mgl32.DegToRad(90), mgl32.Vec3{0, 0, 1})
    s := float32(math.Sin(glfw.GetTime()))
    scale2 := mgl32.Scale3D(s, s, s)
    translate2 := mgl32.Translate3D(-0.5, 0.5, 0)
    transe2 := translate2.Mul4(rotate2).Mul4(scale2)
    gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe2[0])
    gl.DrawElements(gl.TRIANGLES, pointNum, gl.UNSIGNED_INT, gl.Ptr(indices))

    glfw.PollEvents()
    window.SwapBuffers()
}



coordinate system

A vertex passes through several coordinate spaces before it is finally turned into a fragment:

  • Local Space, also known as Object Space
  • World Space
  • View Space, also known as Eye Space or Camera Space
  • Clip Space
  • Screen Space

To transform coordinates from one space to the next, we use several transformation matrices, the most important being the Model, View, and Projection matrices. Vertex coordinates start in local space as local coordinates, then become world coordinates, view coordinates, and clip coordinates, finally ending up as screen coordinates. The figure below shows the whole pipeline and what each step does:


matrix transformations
  • From local space to world space: the Model matrix.
  • From world space to view space: the View matrix.
  • From view space to clip space: the Projection matrix.

Once all vertices have been transformed into clip space, a final operation called perspective division is performed: the x, y, and z components of the position vector are divided by its homogeneous w component. Perspective division is what turns 4D clip-space coordinates into 3D normalized device coordinates. This step runs automatically at the end of each vertex shader run.
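Perspective division itself is just three divisions. A minimal sketch in Go (for illustration only; in OpenGL the hardware does this step for you):

```go
package main

import "fmt"

// perspectiveDivide converts a 4D clip-space coordinate into 3D normalized
// device coordinates by dividing x, y, and z by w.
func perspectiveDivide(clip [4]float32) [3]float32 {
	w := clip[3]
	return [3]float32{clip[0] / w, clip[1] / w, clip[2] / w}
}

func main() {
	clip := [4]float32{2, 4, 6, 2}
	fmt.Println(perspectiveDivide(clip)) // [1 2 3]
}
```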

The projection matrix that transforms view coordinates into clip coordinates can take two forms, each defining its own frustum: an orthographic projection matrix (Orthographic Projection Matrix) or a perspective projection matrix (Perspective Projection Matrix).

orthographic projection

It is specified by a width, a height, and the near and far planes.

// The near plane is the plane closest to the viewer
// Arguments 1-2: left and right coordinates of the near plane
// Arguments 3-4: bottom and top coordinates of the near plane
// Arguments 5-6: distances of the near and far planes from the viewer
mgl32.Ortho(0, 800, 0, 600, 0.1, 100)


Orthographic projection treats near and distant objects alike: the w component of every vertex stays 1. This is inconsistent with reality, where objects of the same size look smaller the farther they are from the eye, an effect determined by the structure of the eye.

perspective projection

// The first argument is the field-of-view angle, usually 45 degrees
// The second argument is the viewport's aspect ratio
// The last two arguments are the distances to the near and far planes; typically near is 0.1 and far is 100
mgl32.Perspective(mgl32.DegToRad(45.0), float32(windowWidth)/windowHeight, 0.1, 10.0)


The farther something is from the camera, the more of the scene fits into view; since the screen size is fixed, distant objects appear smaller.

The w component is modified: the farther a vertex is from the viewer, the larger its w becomes. Since x, y, and z are eventually divided by w, distant objects end up smaller.

Characteristics of the field-of-view angle: the smaller the angle, the narrower the visible scene and the more magnified its projection on screen; conversely, a larger angle produces a shrinking effect.

The final transformation process:

V_clip = M_projection * M_view * M_model * V_local

concrete practice

Model Matrix

Suppose we draw two tables in a room (world space), one to the left and one to the right of the world origin. When we draw the left table, we place the coordinate origin at its center for convenience; at that point we are in its local space. Once it is painted, we return to world space to view the overall effect. We first shrink the table (while painting it we let it fill most of the screen), after which we can see the whole scene; but the table's center is still at the world origin, so we translate it, and if it should sit at an angle we rotate it first. In other words, the Model matrix adjusts an individual object, and the three operations must be applied in order: scale --> rotate --> translate.

View Matrix

Once in world space, from what angle do we observe it? In other words, from which viewpoint should a graphics application present the scene to the user? This is where the camera comes in; think of it as the user's eyes. Moving the camera backward is equivalent to moving the whole scene forward; in other words, the View matrix adjusts the entire scene. OpenGL uses a right-handed coordinate system, so to move the camera along the positive z-axis we translate the scene along the negative z-axis, which gives the impression that we are moving backwards. Imagine a static three-dimensional scene that a person can choose to observe from different points. Here the camera does not rotate; it always looks perpendicular to the screen. Rotating the camera is of course possible too, but that is a separate topic.

Projection Matrix

Projects the scene onto the window area, which determines which part of the scene is projected onto the screen.

model := mgl32.HomogRotate3D(mgl32.DegToRad(-55), mgl32.Vec3{1, 0, 0})
view := mgl32.Translate3D(0, 0, -3)
projection := mgl32.Perspective(mgl32.DegToRad(45), float32(width)/float32(height), 0.1, 100)
transe := projection.Mul4(view).Mul4(model)
gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])

For a 3D effect you also need to enable depth testing, which handles occlusion; otherwise the result looks strange.

gl.Enable(gl.DEPTH_TEST)

gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

camera

The coordinate-system discussion above assumed the camera stays still while the scene and its objects move. Another way to observe is to move the camera itself.

Defining a camera means creating a coordinate system whose origin is the camera's position, with three mutually perpendicular unit axes.


Steps to define a camera:

  • The position of the camera: a point P1(x, y, z) in world space where the camera is placed. The farther away it is, the smaller objects appear.

    cameraPos := mgl32.Vec3{0, 0, 3}
    
  • The direction of the camera: the point in world space the camera looks at. Since only a direction is needed, we simply pick a point along it; if the camera points at the origin P2(0, 0, 0), the camera's direction vector is P1 - P2 (the blue arrow in the picture). Note that this vector points opposite to the direction the camera actually faces.

    cameraTarget := mgl32.Vec3{0, 0, 0}
    cameraDirection := cameraPos.Sub(cameraTarget)
    
  • Right axis: the positive x-axis of camera space. To obtain the right vector we use a small trick: first define an up vector (Up Vector), then take the cross product of the up vector and the direction vector from the previous step. The cross product of two vectors is perpendicular to both, so the result points along the camera's positive x-axis (swapping the operands of the cross product would give a vector pointing along the negative x-axis instead).

    up := mgl32.Vec3{0, 1, 0}
    cameraRight := up.Cross(cameraDirection)
    
  • Up axis: now that we have both the x-axis and z-axis vectors, recovering the camera's positive y-axis is simple: take the cross product of the direction vector and the right vector.

    cameraUp := cameraDirection.Cross(cameraRight)
    

So given just the three vectors cameraPos, cameraTarget, and up, we can construct the camera's coordinate system; that is exactly what the LookAt function does.

camera := mgl32.LookAtV(cameraPos, cameraTarget, up)

In practice the camera matrix plays the role of the view matrix, so the multiplication order is projection * camera * model.

Example 1

A cube rotates around the Y axis. From the default viewpoint we would see it flat on; after adding a camera we can see a three-dimensional effect.

model := mgl32.HomogRotate3D(float32(glfw.GetTime()), mgl32.Vec3{0, 1, 0})
camera := mgl32.LookAtV(mgl32.Vec3{2, 2, 2}, mgl32.Vec3{0, 0, 0}, mgl32.Vec3{0, 1, 0})
projection := mgl32.Perspective(mgl32.DegToRad(45), float32(width)/height, 0.1, 100)
transe := projection.Mul4(camera).Mul4(model)
gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])


Example 2

The scene is stationary, while the camera position circles the origin on a circle of radius 3:

radius := 3.0
cx := float32(math.Sin(glfw.GetTime()) * radius)
cz := float32(math.Cos(glfw.GetTime()) * radius)
camera := mgl32.LookAtV(mgl32.Vec3{cx, 2, cz}, mgl32.Vec3{0, 0, 0}, mgl32.Vec3{0, 1, 0})
projection := mgl32.Perspective(mgl32.DegToRad(45), float32(width)/height, 0.1, 100)
transe := projection.Mul4(camera)
gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])


Example 3

Use the W, S, A, and D keys to move the camera:

cameraPos := mgl32.Vec3{0, 0, 3}
cameraFront := mgl32.Vec3{0, 0, -1}
cameraUp := mgl32.Vec3{0, 1, 0}

func KeyPressAction(window *glfw.Window) {
	keyCallback := func(w *glfw.Window, key glfw.Key, scancode int, action glfw.Action, mods glfw.ModifierKey) {
		cameraSpeed := float32(0.05)
		if key == glfw.KeyW && action == glfw.Press {
			cameraPos = cameraPos.Sub(cameraFront.Mul(cameraSpeed))
		}
		if key == glfw.KeyS && action == glfw.Press {
			cameraPos = cameraPos.Add(cameraFront.Mul(cameraSpeed))
		}
		if key == glfw.KeyA && action == glfw.Press {
			// Normalize rescales the vector to unit length
			cameraPos = cameraPos.Add(cameraFront.Cross(cameraUp).Normalize().Mul(cameraSpeed))
		}
		if key == glfw.KeyD && action == glfw.Press {
			cameraPos = cameraPos.Sub(cameraFront.Cross(cameraUp).Normalize().Mul(cameraSpeed))
		}
		// log.Println(cameraPos, cameraPos.Add(cameraFront))
	}
	window.SetKeyCallback(keyCallback)
}

func Run10() {
	......
	KeyPressAction(window)

	for !window.ShouldClose() {
		......
		// This guarantees that no matter how we move, the camera keeps looking in the target direction
		camera := mgl32.LookAtV(cameraPos, cameraPos.Add(cameraFront), cameraUp)
		projection := mgl32.Perspective(mgl32.DegToRad(45), float32(width)/height, 0.1, 100)
		transe := projection.Mul4(camera)
		gl.UniformMatrix4fv(gl.GetUniformLocation(program, gl.Str("transe\x00")), 1, false, &transe[0])

		......

		glfw.PollEvents()
		window.SwapBuffers()
	}
}

A and D need to move perpendicular to the viewing direction, which is why the cross product of the vectors is used.

How should we understand the comment above, "this guarantees that no matter how we move, the camera keeps looking in the target direction"? Add some printing inside keyCallback and look closely: the WSAD keys only change cameraPos, while cameraFront always points along the Z direction, so cameraTarget = cameraPos + cameraFront keeps the camera's orientation parallel to the Z axis, the same as at the start.


changing the view direction
Euler angles

Euler angles are three values that can represent any rotation in 3D space; they were described by Leonhard Euler in the 18th century. The three Euler angles are the pitch angle (Pitch), the yaw angle (Yaw), and the roll angle (Roll); the picture below shows their meaning:


Pitch angle is the angle that describes how we look up or down and can be seen in the first image. The second image shows the yaw angle, which represents how far we are looking to the left and right. Roll angle represents how we roll the camera and is commonly used in spacecraft cameras. Each Euler angle has a value, and by combining the three angles we can calculate any rotation vector in 3D space.


The final direction-vector formula:

// direction is the camera's front axis (Front); it points opposite to the
// direction vector of the second camera in the first picture of this article
direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
direction.y = sin(glm::radians(pitch));
direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
Mouse control

The origin of the mouse coordinate system is the upper left corner of the screen; X is positive to the right and Y is positive downward, so the Y-axis increment must be negated.

There is a jump when the cursor first enters the window: cursorX and cursorY default to the center of the screen, but the mouse does not start there, so the starting point must be initialized on the first event.

The pitch angle must be clamped so the user cannot look higher than 89 degrees (the view flips at 90 degrees), and likewise not lower than -89 degrees. This way the user can look up at the sky or down at their feet but cannot go past those limits. By analogy with the human eye looking up or down, anything seen beyond 90 degrees appears reversed.

The yaw angle can be a 360-degree rotation.

If you set the initial values of yaw and pitch to 0, you will find that the view jumps as soon as you move the mouse. That is because the camera's orientation, which is determined by these angles, then disagrees with cameraFront. So how should the initial values be chosen? cameraFront starts as (0, 0, -1), and after the mouse enters the screen the orientation should change continuously rather than jump, so we need the pitch and yaw values that reproduce cameraFront = (0, 0, -1). Plugging into the formula above gives pitch = 0, yaw = -90.

var firstMouse = true // cleared after the first mouse event
var cursorX float64 = 400
var cursorY float64 = 300
var yaw float64 = -90
var pitch float64
sensitivity := 0.05 // mouse sensitivity
cursorPosCallback := func(w *glfw.Window, xpos float64, ypos float64) {
	if firstMouse {
		cursorX = xpos
		cursorY = ypos
		firstMouse = false
	}

	xoffset := sensitivity * (xpos - cursorX)
	yoffset := sensitivity * (cursorY - ypos)
	cursorX = xpos
	cursorY = ypos
	yaw += xoffset
	pitch += yoffset
	if pitch > 89 {
		pitch = 89
	}
	if pitch < -89 {
		pitch = -89
	}

	cameraFront = mgl32.Vec3{
		float32(math.Cos(float64(mgl32.DegToRad(float32(pitch)))) * math.Cos(float64(mgl32.DegToRad(float32(yaw))))),
		float32(math.Sin(float64(mgl32.DegToRad(float32(pitch))))),
		float32(math.Cos(float64(mgl32.DegToRad(float32(pitch)))) * math.Sin(float64(mgl32.DegToRad(float32(yaw))))),
	}.Normalize()
}
window.SetCursorPosCallback(cursorPosCallback)

As mentioned before about the skybox, the texture is applied to both sides by default, which means the inside of the cube is textured too. But if you look closely, the inside image is mirrored left-to-right relative to the outside. What happens if we place the camera at the center of the cube? We get exactly the effect of a skybox: by controlling the mouse we can look around an enclosed space. Of course, a more detailed skybox would use a different texture for each of the six faces.

The correct way to implement a skybox, however, is with cube maps, i.e. three-dimensional texture coordinates. See the reference implementation at https://learnopengl-cn.github.io/04%20Advanced%20OpenGL/06%20Cubemaps

First, create the cube-map texture:

func MakeTextureCube(filepathArray []string) uint32 {
	var texture uint32
	gl.GenTextures(1, &texture)
	gl.BindTexture(gl.TEXTURE_CUBE_MAP, texture)

	for i := 0; i < len(filepathArray); i++ {
		imgFile2, _ := os.Open(filepathArray[i])
		defer imgFile2.Close()
		img2, _, _ := image.Decode(imgFile2)
		rgba2 := image.NewRGBA(img2.Bounds())
		draw.Draw(rgba2, rgba2.Bounds(), img2, image.Point{0, 0}, draw.Src)

		// right, left, top, bottom, back, front
		//
		// TEXTURE_CUBE_MAP_POSITIVE_X   = 0x8515
		// TEXTURE_CUBE_MAP_NEGATIVE_X   = 0x8516
		// TEXTURE_CUBE_MAP_POSITIVE_Y   = 0x8517
		// TEXTURE_CUBE_MAP_NEGATIVE_Y   = 0x8518
		// TEXTURE_CUBE_MAP_POSITIVE_Z   = 0x8519
		// TEXTURE_CUBE_MAP_NEGATIVE_Z   = 0x851A
		gl.TexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X+uint32(i), 0, gl.RGBA, int32(rgba2.Rect.Size().X), int32(rgba2.Rect.Size().Y), 0, gl.RGBA, gl.UNSIGNED_BYTE, gl.Ptr(rgba2.Pix))
	}

	gl.TexParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
	gl.TexParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
	gl.TexParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_R, gl.CLAMP_TO_EDGE)
	gl.TexParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
	gl.TexParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR)

	return texture
}

Next comes the vertex data. We do not need separate texture coordinates; the vertex positions themselves are used directly as texture coordinates.

#version 410

in vec3 vPosition;
out vec3 textureDir;

uniform mat4 transe;

void main() {
	gl_Position = transe * vec4(vPosition, 1.0);
	textureDir = vPosition;
}

In the fragment shader, use a samplerCube:

#version 410

in vec3 textureDir;

out vec4 frag_colour;

uniform samplerCube cubemap;

void main() {
	frag_colour = texture(cubemap, textureDir);
}

When passing in the images, they must also be in the order right, left, top, bottom, back, front.

Scroll wheel controls zoom

The field of view (fov) defines how much of the scene we can see. When it becomes smaller, the projected portion of the scene shrinks, producing a zoom-in feeling. We will use the scroll wheel to zoom. As with mouse movement and keyboard input, we need a scroll callback. When the wheel is rolled, yoff holds the amount of vertical scrolling. Inside scrollCallback we update the global fov variable; since 45.0 is the default field-of-view value, we clamp the zoom level between 1.0 and 45.0.

var fov float64 = 45
scrollCallback := func(w *glfw.Window, xoff float64, yoff float64) {
    if fov >= 1.0 && fov <= 45.0 {
        fov -= yoff
    }
    if fov <= 1.0 {
        fov = 1.0
    }
    if fov >= 45.0 {
        fov = 45.0
    }
}
window.SetScrollCallback(scrollCallback)
......
projection := mgl32.Perspective(mgl32.DegToRad(float32(fov)), float32(width)/height, 0.1, 100)

save picture

We can save the graphics in the current window as an image, for example by binding a ctrl+s keyboard event that saves once per press. Use the gl.ReadPixels() function; since glfw uses double buffering, this function reads the data from the front buffer.

func ReadPixels(x int32, y int32, width int32, height int32, format uint32, xtype uint32, pixels unsafe.Pointer)

x and y are the starting coordinates; the lower left corner of the window is (0, 0), with Y positive upward and X positive to the right. Then come the width and height of the region to capture; the last three parameters are the same as in gl.TexImage2D().

However, the image libraries we use generally put (0, 0) in the upper left corner, so the saved picture would be upside down; we must flip the Y axis ourselves.

func (c *Camera) SavePng(filepath string) {
	img := image.NewRGBA(image.Rect(0, 0, c.WindowWidth, c.WindowHeight))

	gl.ReadPixels(0, 0, int32(c.WindowWidth), int32(c.WindowHeight), gl.RGBA, gl.UNSIGNED_BYTE, gl.Ptr(img.Pix))

	// Flip the Y coordinate
	for x := 0; x < c.WindowWidth; x++ {
		for y := 0; y < c.WindowHeight/2; y++ {
			s := img.RGBAAt(x, y)
			t := img.RGBAAt(x, c.WindowHeight-1-y)
			img.SetRGBA(x, y, t)
			img.SetRGBA(x, c.WindowHeight-1-y, s)
		}
	}

	if filepath == "" {
		filepath = strconv.Itoa(int(time.Now().Unix())) + ".png"
	}
	f, _ := os.Create(filepath)
	b := bufio.NewWriter(f)
	png.Encode(b, img)
	b.Flush()
	f.Close()
}

Origin: blog.csdn.net/raoxiaoya/article/details/131429420