golang, OpenGL, computer graphics (1)

Development environment and dependencies

github.com/go-gl/gl/v4.1-core/gl
github.com/go-gl/glfw/v3.2/glfw

OpenGL only provides drawing functions; you have to create windows yourself. That means learning each operating system's own window-creation API, which is complicated and different on every platform. To simplify window creation, dedicated window libraries such as GLUT and GLFW can be used. Since GLUT dates from the 1990s (it was later succeeded by freeglut) while GLFW is modern, GLFW is the recommended choice.

GLFW, short for Graphics Library Framework, is a lightweight utility library used with OpenGL. Its main job is to create and manage windows and OpenGL contexts; it also provides keyboard, mouse, and joystick input handling and event processing.

Both gl and glfw use cgo, so a gcc toolchain must be installed; on Windows that means MinGW. Installation reference: https://blog.csdn.net/raoxiaoya/article/details/130820906

UI control library: Nuklear is a UI widget library written in C. It has Go bindings and can handle simple interface display needs: github.com/golang-ui/nuklear

Regarding glad (not used here yet):
glad is a web-based generator that produces OpenGL function-loader code for the API version and profile you select; the generated loader resolves OpenGL function pointers at runtime.
Generate a loader at: https://glad.dav1d.de/

Does glfw need to be installed?
Go uses cgo to call C code. You can either link against compiled dynamic or static libraries, or put the C sources inside the Go project so they are compiled directly. The github.com/go-gl/glfw/v3.2/glfw package obviously takes the latter approach, so there is no need to install GLFW on your machine; its README.md explains this as well:

  • GLFW C library source is included and built automatically as part of the Go package. But you need to make sure you have dependencies of GLFW:
    • On macOS, you need Xcode or Command Line Tools for Xcode (xcode-select --install) for required headers and libraries.
    • On Ubuntu/Debian-like Linux distributions, you need libgl1-mesa-dev and xorg-dev packages.
    • On CentOS/Fedora-like Linux distributions, you need libX11-devel libXcursor-devel libXrandr-devel libXinerama-devel mesa-libGL-devel libXi-devel libXxf86vm-devel packages.
    • On FreeBSD, you need the package pkgconf. To build for X, you also need the package xorg; and to build for Wayland, you need the package wayland.
    • On NetBSD, to build for X, you need the X11 sets installed. These are included in all graphical installs, and can be added to the system with sysinst(8) on non-graphical systems. Wayland support is incomplete, due to missing wscons support in upstream GLFW. To attempt to build for Wayland, you need to install the wayland libepoll-shim packages and set the environment variable PKG_CONFIG_PATH=/usr/pkg/libdata/pkgconfig.
    • On OpenBSD, you need the X11 sets. These are installed by default, and can be added from the ramdisk kernel at any time.
    • See here for full details.
  • Go 1.4+ is required on Windows (otherwise you must use MinGW v4.8.1 exactly, see Go issue 8811).

However, the README also notes that GLFW itself depends on other things, and those dependencies must be installed on the specific systems listed above; on Windows, no additional installation is required.

As for OpenGL itself, every current Windows system ships with opengl32.dll, so nothing needs to be installed; the DLL can be found in the C:\Windows\System32 or C:\Windows\SysWOW64 directory. The requirements are:

  • A cgo compiler (typically gcc).
  • On Ubuntu/Debian-based systems, the libgl1-mesa-dev package.

OpenGL is just a set of programming interfaces and a specification; the actual implementations are provided by the graphics card vendors for each operating system (similar to the relationship between an instruction set and a CPU). Bluntly put, the CPU could perform the matrix operations, but since GPUs were designed for graphics work they have a natural advantage at matrix math, so using the GPU accelerates graphics. Likewise, today's popular machine-learning neural networks are essentially matrix computation, and running them on a GPU greatly improves efficiency.

golang + opengl --> graphics driver --> monitor

So when developing, you should pay attention to the opengl version, graphics card driver, and graphics card version. Most of the time, there should be no problems, unless your machine is too old.

Model importing, via the Go port of Assimp: github.com/raedatoui/assimp

OpenGL-Go utility classes: github.com/raedatoui/glutils

OpenGL-Go text rendering: github.com/go-gl/gltext, github.com/raedatoui/glfont

Text rendering with FreeType: http://www.freetype.org/

Reference documentation

opengl official documentation: https://www.opengl.org/ --> Documentation --> Current OpenGL Version --> OpenGL 4.1 --> API Core Profile --> https://registry.khronos.org/OpenGL/specs/gl/glspec41.core.pdf

opengl official tutorial: http://www.opengl-tutorial.org/cn/

Websites for learning OpenGL:
https://learnopengl-cn.github.io/
https://blog.csdn.net/weixin_42050609?type=blog
https://github.com/raedatoui/learn-opengl-golang

GLFW documentation (the best one to consult): https://www.glfw.org/docs/latest/

In the GLFW official documentation, all function names start with glfw; for example, Go's glfw.SwapInterval() corresponds to C's glfwSwapInterval().
In the official OpenGL specification, functions are listed without the gl prefix; for example, Go's gl.BindVertexArray() corresponds to BindVertexArray().

code repository

https://github.com/phprao/go-graphic

Initialize glfw

glfw manages the windows, so it must be initialized first.
GLFW functions must run on the main thread, so call runtime.LockOSThread() in the main goroutine before making any glfw calls.

func initGlfw() *glfw.Window {
	if err := glfw.Init(); err != nil {
		panic(err)
	}
	glfw.WindowHint(glfw.Resizable, glfw.False)
	glfw.WindowHint(glfw.ContextVersionMajor, 4)
	glfw.WindowHint(glfw.ContextVersionMinor, 1)
	glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)
	glfw.WindowHint(glfw.OpenGLForwardCompatible, glfw.True)
	window, err := glfw.CreateWindow(width, height, "Conway's Game of Life", nil, nil)
	if err != nil {
		panic(err)
	}
	window.MakeContextCurrent()
	return window
}
  • glfw.WindowHint(target Hint, hint int)
    The corresponding C function is glfwWindowHint(); it sets attribute values of the window and the OpenGL context. For the specific attributes available, see:

    https://www.glfw.org/docs/latest/window_guide.html#window_hints

GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR specify the client API version that the created context must be compatible with. The exact behavior of these hints depend on the requested client API.

While there is no way to ask the driver for a context of the highest supported version, GLFW will attempt to provide this when you ask for a version 1.0 context, which is the default for these hints.

Do not confuse these hints with GLFW_VERSION_MAJOR and GLFW_VERSION_MINOR, which provide the API version of the GLFW header.

That is, these hints set the OpenGL version number. Here we use OpenGL 4.1.

GLFW_OPENGL_PROFILE specifies which OpenGL profile to create the context for. Possible values are one of GLFW_OPENGL_CORE_PROFILE or GLFW_OPENGL_COMPAT_PROFILE, or GLFW_OPENGL_ANY_PROFILE to not request a specific profile. If requesting an OpenGL version below 3.2, GLFW_OPENGL_ANY_PROFILE must be used. If OpenGL ES is requested, this hint is ignored.

Regarding immediate mode (also associated with the fixed-function pipeline): drawing in this mode is very convenient, since most of OpenGL's functionality is hidden inside the library, but developers have little control over how OpenGL performs its calculations, and they wanted more flexibility. Over time the specification became more flexible, giving developers more control over the details of drawing. Immediate mode is indeed easy to use and understand, but it is too inefficient. Therefore, starting from OpenGL 3.2, the specification deprecated immediate mode and encouraged developers to work in core-profile mode (Core-profile), a branch of the specification that removes the old features entirely.

GLFW_OPENGL_FORWARD_COMPAT specifies whether the OpenGL context should be forward-compatible, i.e. one where all functionality deprecated in the requested version of OpenGL is removed. This must only be used if the requested OpenGL version is 3.0 or above. If OpenGL ES is requested, this hint is ignored.

  • glfw.CreateWindow(width, height int, title string, monitor *Monitor, share *Window)
    Creates a window, and its associated OpenGL or OpenGL ES context, using the parameters previously set through the WindowHint function.

Regarding the share parameter, the official site explains that it is another window object; passing it means the two windows share the same OpenGL context objects. For example:
second_window = glfwCreateWindow(640, 480, "Second Window", NULL, first_window)
The shared data includes textures, vertex and element buffers, and so on; exactly how sharing is implemented depends on the operating system and graphics driver.

Regarding the monitor parameter, the official site explains that it refers to the display device. A monitor shows an image because data is fed to it frame after frame: even when what you see is a static picture, it is actually being rendered repeatedly. Our monitor currently shows the Windows 10 desktop because the computer keeps outputting the desktop image to it. Could we feed the monitor a single picture instead? Obviously yes; in that case the monitor displays that picture full screen, with no window elements visible, and the picture must still be fed repeatedly to stay on screen. That is what full screen means, and it is different from the maximize button in the upper-right corner of many applications.

Get the primary monitor with glfwGetPrimaryMonitor and the list of connected monitors with glfwGetMonitors; see the documentation for details.

If we specify a monitor, the graphics card will directly send your data to the monitor, which is the full-screen effect.

window, err := glfw.CreateWindow(width, height, name, glfw.GetPrimaryMonitor(), nil)

Of course, if the requested width and height differ from those of the monitor, black borders will appear. We can optimize for this:

monitor := glfw.GetPrimaryMonitor()
videoMode := monitor.GetVideoMode()
glfw.WindowHint(glfw.RedBits, videoMode.RedBits)
glfw.WindowHint(glfw.GreenBits, videoMode.GreenBits)
glfw.WindowHint(glfw.BlueBits, videoMode.BlueBits)
glfw.WindowHint(glfw.RefreshRate, videoMode.RefreshRate)
window, err := glfw.CreateWindow(videoMode.Width, videoMode.Height, name, monitor, nil)

This is reminiscent of a video player, where you can click to enter or leave full screen. It is easy to implement; we add two key handlers.

// Key K: go full screen
if window.GetKey(glfw.KeyK) == glfw.Press {
    // Arguments: monitor, X coordinate, Y coordinate, image width and height, and finally the refresh rate.
    // Here we start at the top-left corner with a full-screen size.
    window.SetMonitor(glfw.GetPrimaryMonitor(), 0, 0, 1920, 1080, 1)
}
// Key M: back to windowed mode
if window.GetKey(glfw.KeyM) == glfw.Press {
    // Position and size of the restored window
    window.SetMonitor(nil, 100, 100, 500, 500, 1)
}

To adapt to different monitors, let's optimize it:

if window.GetKey(glfw.KeyK) == glfw.Press {
    monitor := glfw.GetPrimaryMonitor()
    videoMode := monitor.GetVideoMode()
    window.SetMonitor(monitor, 0, 0, videoMode.Width, videoMode.Height, videoMode.RefreshRate)
}
if window.GetKey(glfw.KeyM) == glfw.Press {
    monitor := glfw.GetPrimaryMonitor()
    videoMode := monitor.GetVideoMode()
    window.SetMonitor(nil, 100, 100, 500, 500, videoMode.RefreshRate)
}

If you do not specify a monitor, you get windowed mode; on Windows 10 the effect is a black window into which your data is rendered. The monitor used in that case is still the primary one.

The window position can be set with glfwSetWindowPos(window, 100, 100), with the screen's upper-left corner as the origin, and retrieved with glfwGetWindowPos.

To center a window, we obviously need the monitor's width and height first. We create a non-full-screen window, which uses the primary monitor by default, and query its video mode, which can be understood as resolution-related information; for example, my video mode is 1920x1080.

sw := glfw.GetPrimaryMonitor().GetVideoMode().Width // 1920
sh := glfw.GetPrimaryMonitor().GetVideoMode().Height // 1080
window.SetPos((sw-width)/2, (sh-height)/2)

In fact, the window becomes visible as soon as CreateWindow returns, so when SetPos runs you can see the window jump to the center. To make it appear centered immediately, with no visible move, hide the window first and show it after positioning:

glfw.WindowHint(glfw.Visible, glfw.False)

window, err := glfw.CreateWindow(width, height, name, nil, nil)
if err != nil {
    panic(err)
}

sw := glfw.GetPrimaryMonitor().GetVideoMode().Width
sh := glfw.GetPrimaryMonitor().GetVideoMode().Height
window.SetPos((sw-width)/2, (sh-height)/2)

window.Show()
  • window.MakeContextCurrent()
    Searching the GLFW official site for glfwMakeContextCurrent gives the following explanation:

void glfwMakeContextCurrent(GLFWwindow *window)

This function makes the OpenGL or OpenGL ES context of the specified window current on the calling thread. A context must only be made current on a single thread at a time and each thread can have only a single current context at a time.

When moving a context between threads, you must make it non-current on the old thread before making it current on the new one.

By default, making a context non-current implicitly forces a pipeline flush. On machines that support GL_KHR_context_flush_control, you can control whether a context performs this flush by setting the GLFW_CONTEXT_RELEASE_BEHAVIOR hint.

The specified window must have an OpenGL or OpenGL ES context. Specifying a window without a context will generate a GLFW_NO_WINDOW_CONTEXT error.

Before you can use the OpenGL API, you must have a current OpenGL context.
The context will remain current until you make another context current or until the window owning the current context is destroyed.

Before using the OpenGL API you must set a context; here we bind the current window's context as the current context. A newly created context cannot be used directly: it must first be bound as the Current Context. At any moment, a thread can have only one Current Context bound, and a Current Context can be bound to only one thread.

OpenGL itself is a huge state machine: a series of variables describing how OpenGL should run at this moment. The state of OpenGL is usually called the OpenGL context. We usually use setting options and operating buffers to change the OpenGL state. Finally, we use the current OpenGL context for rendering.

The relationship between window, OpenGL context, and threads:
1. Each window has an OpenGL context.
2. Multiple windows can share an OpenGL context.
3. A thread can only bind one OpenGL context at the same time, as the Current Context.

Window resizing is disabled here. If resizing were allowed, you would see the viewport size diverge from the window size, so you need to listen for the resize event and let OpenGL update the viewport:

sizeCallback := func(w *glfw.Window, width int, height int) {
    gl.Viewport(0, 0, int32(width), int32(height))
}
window.SetSizeCallback(sizeCallback)

Initialize opengl

OpenGL related functions need to run in the main thread.

func initOpenGL() uint32 {
	if err := gl.Init(); err != nil {
		panic(err)
	}
	version := gl.GoStr(gl.GetString(gl.VERSION))
	log.Println("OpenGL version", version)

	vertexShader, err := compileShader(vertexShaderSource, gl.VERTEX_SHADER)
	if err != nil {
		panic(err)
	}
	fragmentShader, err := compileShader(fragmentShaderSource, gl.FRAGMENT_SHADER)
	if err != nil {
		panic(err)
	}

	prog := gl.CreateProgram()

	gl.AttachShader(prog, vertexShader)
	gl.AttachShader(prog, fragmentShader)

	gl.LinkProgram(prog)
	return prog
}
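initOpenGL calls compileShader, which is not defined in this section. A sketch along the lines of the go-gl example code (it assumes the go-gl bindings and a current OpenGL context, so it cannot run standalone) might look like:

```go
import (
	"fmt"
	"strings"

	"github.com/go-gl/gl/v4.1-core/gl"
)

// compileShader compiles a single GLSL source string (which must be
// NUL-terminated, hence the trailing "\x00" on the sources above) and
// returns the shader handle, or the driver's compile log on failure.
func compileShader(source string, shaderType uint32) (uint32, error) {
	shader := gl.CreateShader(shaderType)

	csources, free := gl.Strs(source)
	gl.ShaderSource(shader, 1, csources, nil)
	free()
	gl.CompileShader(shader)

	var status int32
	gl.GetShaderiv(shader, gl.COMPILE_STATUS, &status)
	if status == gl.FALSE {
		var logLength int32
		gl.GetShaderiv(shader, gl.INFO_LOG_LENGTH, &logLength)

		infoLog := strings.Repeat("\x00", int(logLength+1))
		gl.GetShaderInfoLog(shader, logLength, nil, gl.Str(infoLog))

		return 0, fmt.Errorf("failed to compile shader: %v", infoLog)
	}
	return shader, nil
}
```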
Build VBO, VAO, EBO

Both VAO and VBO are used to store vertex information and send this information to the vertex shader.

In an OpenGL program, only one VAO is bound to opengl at the same time. Of course, you can bind to another VAO after operating one.

VBO is the Vertex Buffer Object, which contains the vertices' 3D coordinates, color, texture coordinates, and other information. The values are stored as a flat array in a block of video memory; the program has no idea which of those numbers are 3D coordinates and which are colors.

VAO is the Vertex Array Object, which describes what attribute of a vertex each number represents. For example, values 1-3 of a vertex's numbers might be its 3D x, y, z coordinates, and values 4-7 its RGB color and alpha.

EBO is the Element Buffer Object. Like a VBO, it is a buffer; it stores the indices OpenGL uses to decide which vertices to draw, and in what order.
We use VBO to store data, and VAO to tell the computer what attributes and functions these data have.

The vertex coordinates are of type []float32, in X, Y, Z order. The window center is the origin, +X points right, +Y points up, and values range over [-1,1].

triangle = []float32{
    0, 0.5, 0,
    -0.5, -0.5, 0,
    0.5, -0.5, 0,
}
func makeVao(points []float32) uint32 {
	var vbo uint32

    // Allocate space on the GPU: create one vertex buffer object; vbo receives its ID.
	gl.GenBuffers(1, &vbo)

    // Bind vbo to gl.ARRAY_BUFFER. Note that this target can be bound to different VBOs over time, so what it refers to changes.
    // Possible targets: GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_PIXEL_PACK_BUFFER, GL_PIXEL_UNPACK_BUFFER
	gl.BindBuffer(gl.ARRAY_BUFFER, vbo)

    // Copy the data from main memory to the gl.ARRAY_BUFFER target on the GPU, i.e. into the VBO currently bound to it.
    // 4*len(points) is the total number of bytes, since each value is 32 bits.
	gl.BufferData(gl.ARRAY_BUFFER, 4*len(points), gl.Ptr(points), gl.STATIC_DRAW)

	var vao uint32
    // Create one vertex array object; vao receives its ID.
	gl.GenVertexArrays(1, &vao)

    // The two calls below operate on a specific VAO, so it must be bound first.
    // To unbind: gl.BindVertexArray(0) — in OpenGL, unbinding is usually done by passing 0.
	gl.BindVertexArray(vao)

    // Make the VAO reference the VBO bound to gl.ARRAY_BUFFER. Once done, the VAO holds a reference to that specific VBO, so later changes to gl.ARRAY_BUFFER do not affect it.
	gl.VertexAttribPointer(0, 3, gl.FLOAT, false, 0, nil)
    // Enable the vertex attribute; attributes are disabled by default (explained below).
	gl.EnableVertexAttribArray(0)

	return vao
}

The fourth parameter of BufferData specifies how we want the graphics card to manage the given data. It takes one of three forms:

  • GL_STATIC_DRAW : The data will never or rarely change.
  • GL_DYNAMIC_DRAW: The data will be changed a lot.
  • GL_STREAM_DRAW: The data changes every time it is plotted.

The position data of the triangle will not change and will remain the same for each rendering call, so its best use type is GL_STATIC_DRAW. If, for example, the data in a buffer will be changed frequently, then the type used is GL_DYNAMIC_DRAW or GL_STREAM_DRAW, which ensures that the graphics card places the data in a part of memory that can be written at high speed.

  • VertexAttribPointer(index uint32, size int32, xtype uint32, normalized bool, stride int32, pointer unsafe.Pointer)
    Each vertex has multiple attributes. For example, the commonly used ones are:
    1. Position attribute, that is, coordinates, including three values: X, Y, and Z.
    2. Color attribute, if it is RGB, it has three values, if it is RGBA, it has four values.
    3. Texture coordinates, S and T, are two values.
    4. Other custom attributes.

When we define vertex data, we will inject all these data into an array, that is, VBO, for example:

vertices = []float32{
    0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0,
    0.5, -0.5, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0,
    -0.5, -0.5, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0,
}

So how many points there are in this pile of data depends on what attributes you define. For example, I can say that it has three points, and each point has three attributes: 1. Position coordinates (three values), 2. Color (three values), 3. Texture coordinates (two values).

I can also say that it has eight points, and each point has an attribute: 1. Position coordinates (three values).

Therefore, the role of VertexAttribPointer is to specify how the vertex data in the VBO is split up and used. Its parameters are:

  • index: since a vertex can have several attributes, each is given a number starting from 0. In the vertex shader, layout(location = 0) sets the attribute's position to 0.
  • size: how many values this attribute has. For example, the position attribute has three values, so here it is 3.
  • xtype: the type of the attribute's values, for example gl.FLOAT.
  • normalized: whether the data should be normalized. If TRUE, all unsigned data is mapped into [0,1] and signed data into [-1,1]. Usually FALSE.
  • stride: the step size in bytes, computed as the number of bytes occupied by all attributes of one vertex. For example, with only a position attribute it is 3 float32s, i.e. 12 bytes, so fill in 12. You can also pass 0 and let OpenGL compute it (valid only for tightly packed data).
  • pointer: this one is harder to grasp; it is the byte offset of this attribute within one vertex. Take the vertices data above, with three attributes: 1. position (three values), 2. color (three values), 3. texture coordinates (two values). The first attribute has offset 0, the second has offset 3 float32s = 12 bytes, and the third has offset 6 float32s = 24 bytes.

For convenience we use VertexAttribPointerWithOffset instead of VertexAttribPointer. For example, the three attributes above can be set like this:

// Position attribute
gl.VertexAttribPointerWithOffset(0, 3, gl.FLOAT, false, 8*4, 0)
gl.EnableVertexAttribArray(0)
// Color attribute
gl.VertexAttribPointerWithOffset(1, 3, gl.FLOAT, false, 8*4, 3*4)
gl.EnableVertexAttribArray(1)
// TexCoord attribute
gl.VertexAttribPointerWithOffset(2, 2, gl.FLOAT, false, 8*4, 6*4)
gl.EnableVertexAttribArray(2)

Here the attribute positions are the default ones, but if the vertex shader has many variables the positions are easy to get wrong. Testing suggests the positions follow the order in which the variables are used in main, for example:

#version 410

in vec3 vert;
in vec2 vertTexCoord;

uniform mat4 projection;
uniform mat4 camera;
uniform mat4 model;

out vec2 fragTexCoord;

void main() {
    
    
	fragTexCoord = vertTexCoord;
    gl_Position = projection * camera * model * vec4(vert, 1);
}

My data is ordered position first, then texture coordinate. In the vertex shader, vert is declared before vertTexCoord, but in main, vertTexCoord is used before vert. Querying the locations with vertAttrib := uint32(gl.GetAttribLocation(program, gl.Str("vert\x00"))) shows that vert is 1 and vertTexCoord is 0. There may be some other internal assignment mechanism at work that I have not identified, but blindly hard-coding the index is unreliable. It is therefore best to query each attribute's location first and then set it, which also means the vertex shader's variables must be declared first.

for example

#version 410

......

in vec3 vert;
in vec3 color;
in vec2 vertTexCoord;

......
vertAttrib := uint32(gl.GetAttribLocation(program, gl.Str("vert\x00")))
gl.EnableVertexAttribArray(vertAttrib)
gl.VertexAttribPointerWithOffset(vertAttrib, 3, gl.FLOAT, false, 8*4, 0)

colorAttrib := uint32(gl.GetAttribLocation(program, gl.Str("color\x00")))
gl.EnableVertexAttribArray(colorAttrib)
gl.VertexAttribPointerWithOffset(colorAttrib, 3, gl.FLOAT, false, 8*4, 3*4)

texCoordAttrib := uint32(gl.GetAttribLocation(program, gl.Str("vertTexCoord\x00")))
gl.EnableVertexAttribArray(texCoordAttrib)
gl.VertexAttribPointerWithOffset(texCoordAttrib, 2, gl.FLOAT, false, 8*4, 6*4)

Although several attributes are set for each vertex, they are inactive (disabled) by default for performance reasons, which means the shader cannot see the data even though it has been uploaded to the GPU. You must therefore call EnableVertexAttribArray to enable them one by one; its index parameter is the same as in VertexAttribPointer.

There is an upper limit to the vertex attributes we can declare, which is generally determined by the hardware. OpenGL ensures that at least 16 4-component vertex attributes are available, but some hardware may allow more vertex attributes. You can query GL_MAX_VERTEX_ATTRIBS to get the specific upper limit. Normally it will return at least 16, which is enough in most cases.

So, should glEnableVertexAttribArray be called before or after glVertexAttribPointer? Either order works, as long as both are called before the draw call (the glDraw* family of functions).

At this point, the reference of the current VAO to the current VBO is completed.

vertex shader

Shaders are small programs inside OpenGL and are written in GLSL (OpenGL Shader Language) language.

Vertex shaders contain basic processing of some vertex attributes (data).

vertexShaderSource = `
    #version 410
    in vec3 vp;
    void main() {
        gl_Position = vec4(vp, 1.0);
    }
` + "\x00"

Types such as vec2, vec3, and vec4 appear often in shaders; the number indicates how many components they hold. Here vp is the coordinate, with three components: x, y, and z.
The keyword in marks an input parameter; out marks an output parameter.

vec stands for the vector type. Similarly, mat is the matrix type, such as mat4.

type | meaning
vecn | the default vector of n float components
bvecn | a vector of n bool components
ivecn | a vector of n int components
uvecn | a vector of n unsigned int components
dvecn | a vector of n double components

The input parameters come from all the attributes of a vertex in the VAO. There are only coordinate attributes here, so there is only one variable. If there are multiple attributes, multiple in variables are needed.

If there are color and texture attributes in the vertex attributes, then the out variable needs to be defined, and then the out variable will be passed to the in variable of the fragment shader.

The four components of vec4 are x, y, z, and w; w can be understood as the homogeneous coordinate. It is not used for now, but it must be set to 1.0.

A more detailed explanation of shaders: Shaders (LearnOpenGL chapter)

fragment shader

The function of the fragment shader is to calculate the final color of each pixel. Usually the fragment shader will contain some additional data of the 3D scene, such as texture, light, shadow, etc.

We define the graphic's color through a vec4 in RGBA form; each of the four components is in [0, 1].

fragmentShaderSource = `
    #version 410
    out vec4 frag_colour;
    void main() {
        frag_colour = vec4(1, 1, 1, 1);
    }
` + "\x00"

Note also that both programs target #version 410. If you are using OpenGL 2.1, change it to #version 120.

The program has no input variables, so the color is fixed and is output to the downstream stages.

Below is an abstract overview of the stages of the graphics rendering pipeline (the original figure is omitted); the programmable stages are the ones where we can inject our own shaders.
The vertex shader's main purpose is to transform 3D coordinates into other 3D coordinates, and it lets us do some basic processing on the vertex attributes.

The primitive assembly (Primitive Assembly) stage takes as input all the vertices output by the vertex shader (or a single vertex, in the case of GL_POINTS) and assembles them into the specified primitive shape.

The geometry shader takes as input a collection of vertices forming a primitive, and can generate other shapes by emitting new vertices to construct new (or other) primitives.

The output of the geometry shader is passed to the rasterization stage (Rasterization Stage), which maps the primitives to the corresponding pixels on the final screen, producing fragments for the fragment shader. Before the fragment shader runs, clipping is performed: clipping discards all pixels outside your view, to improve performance. A fragment, in OpenGL, is all the data OpenGL needs to render one pixel.

The fragment shader's main purpose is to compute the final color of a pixel, and this is where all the advanced OpenGL effects happen. Typically the fragment shader contains 3D scene data (such as lighting, shadows, and light color) used to compute the final pixel color. It only computes colors for the part of the graphic visible in the window.

After all the color values have been determined, the final object is passed to the last stage, which we call the alpha test and blending stage. This stage checks the fragment's depth (and stencil) values (discussed later), uses them to decide whether the fragment is in front of or behind other objects, and discards it accordingly. It also checks the alpha value (which defines an object's transparency) and blends objects. So even though a pixel's output color was computed in the fragment shader, the final pixel color can be entirely different when rendering multiple triangles.

As you can see, the graphics rendering pipeline is very complex and contains many configurable parts. However, for most situations, we only need to configure the vertex and fragment shaders. The geometry shader is optional, usually just use its default shader.

In modern OpenGL we have to define at least one vertex shader and one fragment shader (because there are no default vertex and fragment shaders in GPU).

Draw a picture

All graphics in OpenGL are drawn by decomposing them into triangles.

The coordinate system of OpenGL is the right-hand rule, and the coordinates are normalized to [-1,1].
The OpenGL program draws using the VAO as its data source; the drawing and color filling are done by the programmable pipeline (the vertex and fragment shaders), and the result is presented in the window.

func draw(vao uint32, window *glfw.Window, prog uint32) {
	gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

    // Use this program
	gl.UseProgram(prog)

    // Bind the VAO. This may seem odd — makeVao already called gl.BindVertexArray(vao) —
    // so why bind again here? Because a bind only serves the calls that follow it, and
    // another VAO may have been bound in the meantime, it is safest to bind before each
    // group of VAO-related calls.
	gl.BindVertexArray(vao)

    // Draw mode:
    //   1. gl.TRIANGLES: a triangle for every three vertices, unconnected.
    //   2. gl.TRIANGLE_FAN: triangles as V0,V1,V2; V0,V2,V3; V0,V3,V4; ...
    //   3. gl.TRIANGLE_STRIP: triangles as V0,V1,V2; V1,V2,V3; V2,V3,V4; ...
    // first: usually start from the first vertex
    // count: the number of vertices (here the array length divided by 3)
	gl.DrawArrays(gl.TRIANGLES, 0, int32(len(triangle)/3))

	glfw.PollEvents()

	window.SwapBuffers()
}

GLFW and OpenGL share data in memory through the context they are both associated with.

glfw.PollEvents()

It can be understood as a consumer: if we register events or callback functions, each trigger is added to an event queue, and this function consumes the queue and executes the corresponding callbacks. If the queue is empty it returns immediately, so it should be placed inside the loop body. If it is never called, the window will be reported as "not responding". On some platforms, moving or resizing the window can block PollEvents; if necessary, use glfwSetWindowRefreshCallback to redraw — the corresponding Go function is window.SetRefreshCallback().

gl.ClearColor(1.0, 0.0, 0.0, 1.0)

Assigns a value to a specific object in the OpenGL context — call it ColorObj here — representing an RGBA color with components in [0, 1]. Calling the function again overwrites ColorObj; otherwise the value persists.
When ColorObj is used, every pixel in the frame gets the same RGBA value: a solid-color screen, which can also be understood as the clear color or background color.

gl.ClearDepth(1.0)

Assigns a value to a specific object in the OpenGL context — call it DepthObj here — representing a depth value in [0, 1]. Calling the function again overwrites DepthObj; otherwise the value persists.
When DepthObj is used, every pixel in the frame gets the same depth value: a flat plane.

The so-called depth value is each point's position along the Z axis in 3D space, which tells us which object is in front and which is behind. The Z axis is relative to the window screen.

If GL_LESS (the default) is used as the comparison rule, i.e. gl.DepthFunc(gl.LESS), the Z axis points into the screen: smaller depth values are closer to the viewer, and larger depth values are farther away and get occluded. With gl.GREATER the result is the opposite. By default the depth value is 0, i.e. the zero point of the Z axis lies on the window screen. Setting the clear depth value effectively shifts the XOY plane along the Z axis, changing what you see.

To enable depth testing, simply call gl.Enable(gl.DEPTH_TEST). Note that even when depth testing is disabled, if a depth buffer exists OpenGL still writes the depth values of all color fragments written to the color buffer into it. To temporarily prohibit writing to the depth buffer during depth testing, use gl.DepthMask(mask bool): passing false disables depth writes (values already in the depth buffer are still used for testing), and passing true re-enables them, which is the default.
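The GL_LESS rule plus the depth mask can be illustrated with a tiny software model of a depth buffer (a pure-Go sketch; the `depthBuffer` type is our own, not OpenGL API):

```go
package main

import "fmt"

// depthBuffer mimics a cleared depth buffer plus a GL_LESS depth test:
// a fragment is kept only when its depth is less than the stored value.
type depthBuffer struct{ depth []float64 }

func newDepthBuffer(n int, clear float64) *depthBuffer {
	d := make([]float64, n)
	for i := range d {
		d[i] = clear // like gl.ClearDepth(clear) + gl.Clear(gl.DEPTH_BUFFER_BIT)
	}
	return &depthBuffer{d}
}

// test applies GL_LESS at pixel i; on pass it also writes the new depth
// (i.e. DepthMask is true).
func (b *depthBuffer) test(i int, z float64) bool {
	if z < b.depth[i] {
		b.depth[i] = z
		return true
	}
	return false
}

func main() {
	b := newDepthBuffer(1, 1.0)
	fmt.Println(b.test(0, 0.5)) // true: closer than the cleared 1.0
	fmt.Println(b.test(0, 0.8)) // false: behind the fragment already written
}
```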

gl.ClearStencil(1)

The stencil buffer can hold an unsigned integer value for each pixel on the screen. The specific meaning of this value depends on the specific application of the program.

During rendering, this value can be compared with a preset reference value, and the comparison result decides whether the corresponding pixel's color is updated. This comparison is called the stencil test. The stencil test happens after the alpha test and before the depth test. If the stencil test passes, the pixel is updated; otherwise it is not. It works like masking spray paint with a piece of cardboard: when the stencil test is enabled, fragments that pass it are written to the color buffer and displayed, while those that fail are not — a filtering mechanism.
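The cardboard analogy can be sketched in a few lines of plain Go (the `stencilPass` helper is our own; real stencil tests support several comparison functions, and GL_EQUAL-style comparison is assumed here):

```go
package main

import "fmt"

// stencilPass mimics the "cardboard stencil" idea: a fragment may reach the
// color buffer only where the stencil buffer matches the reference value
// (an equality comparison, like GL_EQUAL).
func stencilPass(stencil []uint8, ref uint8) []bool {
	pass := make([]bool, len(stencil))
	for i, s := range stencil {
		pass[i] = s == ref
	}
	return pass
}

func main() {
	// A 1×4 stencil "mask": only the middle two pixels may be painted.
	fmt.Println(stencilPass([]uint8{0, 1, 1, 0}, 1)) // [false true true false]
}
```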

gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

Clears buffer data: the color buffer (gl.COLOR_BUFFER_BIT), the depth buffer (gl.DEPTH_BUFFER_BIT), and the stencil buffer (gl.STENCIL_BUFFER_BIT). Multiple flags can be passed at once, combined with the OR operator. If the graphics card supports clearing them simultaneously, they are cleared together; otherwise they are cleared one by one.

A buffer is simply temporary storage for data. When OpenGL renders an image it puts the parameter data into buffers: pixel colors go into the color buffer and pixel depths into the depth buffer; the data is then handed to the graphics card for drawing. Once drawn, the frame is placed in the window's back buffer.

If you want to display a static picture — obviously a single frame — how do you keep it on screen? The usual approach is to treat it as many identical frames and redraw the same picture repeatedly in the for loop. A static image also has no depth data.

The data in a buffer persists until it is overwritten, unless you actively call gl.Clear(). A sensible approach is therefore to call gl.Clear() before drawing each frame.

When gl.Clear() executes, it first checks whether ColorObj and DepthObj have been set in the OpenGL context. If so, the buffers are initialized with those values; if not, the defaults are used: black and depth 0.

Note, however, that calling gl.Clear() itself triggers a render on the graphics card — i.e. a draw — and that frame is placed in the back buffer. Example:

var count int

// during setup: glfw.SwapInterval(10)

func draw(vao uint32, window *glfw.Window, prog uint32) {
	gl.Clear(gl.COLOR_BUFFER_BIT)
	count++
	if count%10 == 0 {
		gl.UseProgram(prog)

		gl.BindVertexArray(vao)
		gl.DrawArrays(gl.TRIANGLES, 0, int32(len(triangle)/3)) // the triangle
		log.Println(count)
	}
	if count >= 100000 {
		count = 0
	}

	glfw.PollEvents()
	window.SwapBuffers()
	log.Println("ok")
}

You will see a flashing triangle: it appears only once every ten frames, showing that the cleared, pure-black screen is being presented the rest of the time.

So the steps to draw each frame are:

  1. Initialize the OpenGL buffers — the Clear operation, which triggers a draw.
  2. Call OpenGL's Draw-related functions to draw.
  3. OpenGL places the drawn picture in the window's back buffer.
  4. Call GLFW's SwapBuffers to display the back buffer on screen.

window.SwapBuffers()

The corresponding C function is glfwSwapBuffers(GLFWwindow *window). A GLFW window object has two buffers, the front buffer and the back buffer. When rendering to the window with OpenGL or OpenGL ES, this function swaps the two: the front buffer holds the current frame and the back buffer holds the next one (if any). The swap merely changes which pointer points where, so the front becomes the back and vice versa, over and over. The window must have an OpenGL or OpenGL ES context, otherwise an error is reported.

In addition, glfw.SwapInterval(10) sets the swap interval, which can also be understood as the refresh frequency. It is normally set to 1. The default is 0, meaning swap immediately; a value of n means one swap per n screen refreshes. You can also control the frequency yourself with a sleep.

The advantage of GLFW's double buffering is efficiency: it avoids the awkwardness of displaying a frame while it is still being drawn, since rendering is not instantaneous — render in advance, then switch.

Note that although GLFW switches between the two buffers, it does not clear them; the graphics card simply draws new frames over them. If no new frame arrives, the two buffered frames are shown alternately.

KeyPressAction(window)

glfw.SwapInterval(100)

for !window.ShouldClose() {
	glfw.PollEvents()
	window.SwapBuffers()
	log.Println("ok")
}

func KeyPressAction(window *glfw.Window) {
	keyCallback := func(w *glfw.Window, key glfw.Key, scancode int, action glfw.Action, mods glfw.ModifierKey) {
		if window.GetKey(glfw.KeyR) == glfw.Press {
			log.Println("R")
			gl.ClearColor(1.0, 0.0, 0.0, 1.0)
			gl.Clear(gl.COLOR_BUFFER_BIT)
		}
	}

	window.SetKeyCallback(keyCallback)
}
gl.PolygonMode(face uint32, mode uint32)

Before calling gl.DrawArrays or a similar draw function, we can also set the polygon mode. The face parameter is effectively always gl.FRONT_AND_BACK, meaning it applies to both the front and back of the polygons being drawn. mode takes one of three values — gl.POINT, gl.LINE, gl.FILL — which draw the three vertices as points, the triangle's outline, or the filled triangle respectively: the difference between points, lines, and surfaces. The default mode is gl.FILL.

The above example is to draw a triangle. If you want to draw a square, you only need to add three more vertices.

square = []float32{
	-0.5, 0.5, 0,
	-0.5, -0.5, 0,
	0.5, -0.5, 0,
	-0.5, 0.5, 0,
	0.5, 0.5, 0,
	0.5, -0.5, 0,
}

If you want to draw the square with only four points, that is certainly possible — you then need an EBO (element buffer object) to hold the vertex indices.

square2 = []float32{
	-0.5, 0.5, 0,
	-0.5, -0.5, 0,
	0.5, -0.5, 0,
	0.5, 0.5, 0,
}

// index data
indexs = []uint32{
	0, 1, 2, // draw the first triangle with vertices 0, 1, 2
	0, 2, 3, // draw the second triangle with vertices 0, 2, 3
}
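The saving the EBO enables can be computed mechanically: deduplicate the six-vertex list and record indices instead. A pure-Go sketch (the `buildIndexed` helper is our own):

```go
package main

import "fmt"

// buildIndexed converts a flat triangle list (3 floats per vertex) into a
// deduplicated vertex array plus an index array — the layout an EBO expects.
func buildIndexed(verts []float32) (unique []float32, indices []uint32) {
	seen := map[[3]float32]uint32{}
	for i := 0; i+2 < len(verts); i += 3 {
		key := [3]float32{verts[i], verts[i+1], verts[i+2]}
		idx, ok := seen[key]
		if !ok {
			idx = uint32(len(unique) / 3)
			seen[key] = idx
			unique = append(unique, key[0], key[1], key[2])
		}
		indices = append(indices, idx)
	}
	return
}

func main() {
	// The six-vertex square from above: two triangles sharing an edge.
	square := []float32{
		-0.5, 0.5, 0, -0.5, -0.5, 0, 0.5, -0.5, 0,
		-0.5, 0.5, 0, 0.5, 0.5, 0, 0.5, -0.5, 0,
	}
	unique, indices := buildIndexed(square)
	fmt.Println(len(unique)/3, indices) // 4 [0 1 2 0 3 2]
}
```

Six vertices collapse to four plus six small indices — the same idea as the hand-written square2/indexs pair (the exact index order depends on vertex order).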

At the same time, modify the makeVao method:

func MakeVaoWithEbo(points []float32, indexs []uint32) uint32 {
	// Create and bind the VAO first: the ELEMENT_ARRAY_BUFFER binding is
	// recorded in the currently bound VAO.
	var vao uint32
	gl.GenVertexArrays(1, &vao)
	gl.BindVertexArray(vao)

	var vbo uint32
	gl.GenBuffers(1, &vbo)
	gl.BindBuffer(gl.ARRAY_BUFFER, vbo)
	gl.BufferData(gl.ARRAY_BUFFER, 4*len(points), gl.Ptr(points), gl.STATIC_DRAW)

	var ebo uint32
	gl.GenBuffers(1, &ebo)
	gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, ebo)
	gl.BufferData(gl.ELEMENT_ARRAY_BUFFER, 4*len(indexs), gl.Ptr(indexs), gl.STATIC_DRAW)

	gl.EnableVertexAttribArray(0)
	gl.VertexAttribPointer(0, 3, gl.FLOAT, false, 0, nil)

	return vao
}

Note that although we defined only four vertices, the draw still processes six, reusing two of them. This time we draw with DrawElements.

gl.BindVertexArray(vao)
// With an EBO bound in the VAO, the last argument is a byte offset into the
// index buffer, so pass nil rather than a client-side pointer.
gl.DrawElements(gl.TRIANGLES, int32(len(indexs)), gl.UNSIGNED_INT, nil)

A shader program can only have one set of vertex, geometry, and fragment shaders attached. Sometimes, however, different parts of a scene need different shaders; in that case, create multiple programs and switch to whichever one you need.


Keyboard events

If a key callback is set, it is triggered by key presses while the window has focus.

window := util.InitGlfw(width, height, "keyboard")

// scancode is a platform-specific scan code for the key.
// action reports whether the key was pressed or released: a press fires action=1,
// holding the key fires action=2 repeatedly, and release fires action=0.
// mods reports modifier keys: 1-Shift, 2-Ctrl, 4-Alt, 8-Super (Win).
keyCallback := func(w *glfw.Window, key glfw.Key, scancode int, action glfw.Action, mods glfw.ModifierKey) {
	log.Printf("key:%d, scancode:%d, action:%d, mods:%v, name:%s\n", key, scancode, action, mods, glfw.GetKeyName(key, scancode))
	// Close the window when ESC is pressed.
	if key == glfw.KeyEscape && action == glfw.Press {
		window.SetShouldClose(true)
	}
}
// or: glfw.GetCurrentContext().SetKeyCallback(keyCallback)
window.SetKeyCallback(keyCallback)

Cancel keyboard event

window.SetKeyCallback(nil)

There is a gotcha here. Sometimes we use window.GetKey(glfw.KeyW) to query a key's state. A key has three states — 0, 1, 2 — but GetKey only returns 0 or 1. Its doc comment reads:

// GetKey returns the last reported state of a keyboard key. The returned state
// is one of Press or Release. The higher-level state Repeat is only reported to
// the key callback.
//
// If the StickyKeys input mode is enabled, this function returns Press the first
// time you call this function after a key has been pressed, even if the key has
// already been released.
//
// The key functions deal with physical keys, with key tokens named after their
// use on the standard US keyboard layout. If you want to input text, use the
// Unicode character callback instead.
func (w *Window) GetKey(key Key) Action
keyCallback := func(w *glfw.Window, key glfw.Key, scancode int, action glfw.Action, mods glfw.ModifierKey) {
	if window.GetKey(glfw.KeyW) == glfw.Press {
		// ...
	}

	log.Printf("key:%d, scancode:%d, action:%d, mods:%v\n", key, scancode, action, mods)
}

In other words, if you press a key and hold it, the action in the callback is 2, but window.GetKey(glfw.KeyW) returns 1. Keep this in mind.

Character input events: focus the window, then type (input-method input works too).

charCallback := func(w *glfw.Window, char rune) {
	log.Printf("char:%s", string(char))
}
window.SetCharCallback(charCallback)
2023/05/29 09:30:48 char:我
2023/05/29 09:30:48 char:们
2023/05/29 09:31:02 char:a
2023/05/29 09:31:02 char:s
2023/05/29 09:31:02 char:d

Mouse events

The mouse button event mirrors the keyboard event, but the keys become the left button, right button, and wheel button.

// Left button:  button=0; press action=1, release action=0; no hold event.
// Right button: button=1; press action=1, release action=0; no hold event.
// Wheel button: button=2; press action=1, release action=0; no hold event.
mouseCallback := func(w *glfw.Window, button glfw.MouseButton, action glfw.Action, mod glfw.ModifierKey) {
	log.Printf("button:%d, action:%d, mod:%d\n", button, action, mod)
}
window.SetMouseButtonCallback(mouseCallback)

Cursor movement event; the upper left corner of the window is (0, 0).

cursorPosCallback := func(w *glfw.Window, xpos float64, ypos float64) {
	log.Printf("x:%f, y:%f", xpos, ypos)
}
window.SetCursorPosCallback(cursorPosCallback)

Measuring how far the cursor moves during a left-button drag:

var x0, y0, x1, x2, y1, y2 float64

mouseCallback := func(w *glfw.Window, button glfw.MouseButton, action glfw.Action, mod glfw.ModifierKey) {
	log.Printf("button:%d, action:%d, mod:%d\n", button, action, mod)

	if button == glfw.MouseButtonLeft && action == glfw.Press {
		x1, y1 = x0, y0
		log.Printf("x1:%f, y1:%f", x1, y1)
	}

	if button == glfw.MouseButtonLeft && action == glfw.Release {
		x2, y2 = x0, y0
		log.Printf("x2:%f, y2:%f", x2, y2)
		log.Printf("x move:%f, y move:%f", x2-x1, y2-y1)
	}
}
window.SetMouseButtonCallback(mouseCallback)

cursorPosCallback := func(w *glfw.Window, xpos float64, ypos float64) {
	x0 = xpos
	y0 = ypos
}
window.SetCursorPosCallback(cursorPosCallback)

Mouse wheel or touchpad scrolling: a mouse wheel only reports yoff (how much was scrolled vertically), while a touchpad reports both xoff and yoff.

scrollCallback := func(w *glfw.Window, xoff float64, yoff float64) {
	log.Printf("xoff:%f, yoff:%f", xoff, yoff)
}
window.SetScrollCallback(scrollCallback)

Drag-and-drop event: drop files onto the window (multiple selection works); names holds the absolute paths of the dropped files.

dropCallback := func(w *glfw.Window, names []string) {
	// names: [D:\dev\php\magook\trunk\server\go-graphic\demo5\square.png]
	log.Printf("names:%v", names)
}
window.SetDropCallback(dropCallback)

Cursor entering/leaving the window:

// CursorEnterCallback is the cursor boundary crossing callback.
type CursorEnterCallback func(w *Window, entered bool)

// SetCursorEnterCallback the cursor boundary crossing callback which is called
// when the cursor enters or leaves the client area of the window.
func (w *Window) SetCursorEnterCallback(cbfun CursorEnterCallback) (previous CursorEnterCallback)

Joystick events

// JoystickCallback is the joystick configuration callback.
type JoystickCallback func(joy, event int)

// SetJoystickCallback sets the joystick configuration callback, or removes the
// currently set callback. This is called when a joystick is connected to or
// disconnected from the system.
func SetJoystickCallback(cbfun JoystickCallback) (previous JoystickCallback)

// JoystickPresent reports whether the specified joystick is present.
func JoystickPresent(joy Joystick) bool

// GetJoystickAxes returns a slice of axis values.
func GetJoystickAxes(joy Joystick) []float32

// GetJoystickButtons returns a slice of button values.
func GetJoystickButtons(joy Joystick) []byte

// GetJoystickName returns the name, encoded as UTF-8, of the specified joystick.
func GetJoystickName(joy Joystick) string

Texture

Texture targets

GL_TEXTURE_1D、GL_TEXTURE_2D、GL_TEXTURE_3D

Texture coordinates

Texture coordinates are two-dimensional and indicate which point on the texture image a vertex corresponds to. After normalization to [0, 1], the lower left corner is (0,0) and the upper right corner is (1,1). Note that most image-reading programs scan from the top left and treat the top left as (0,0), so the Y-axis direction is actually reversed.

We set the texture coordinate attributes for each vertex, which is to paste the texture image onto the graphics we want to draw.

Texture wrapping

As mentioned earlier, texture coordinates should lie in [0, 1], as follows.

vertices = []float32{
	// Positions   // Colors      // Texture Coords
	0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, // Top Right
	0.5, -0.5, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, // Bottom Right
	-0.5, -0.5, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, // Bottom Left
	-0.5, 0.5, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, // Top Left
}

The target polygon here is a quadrilateral, the texture is also a quadrilateral, and the texture coordinates go up to 1, so this data stretches the texture to cover the quad completely. If the size or aspect ratio does not match, the texture image is automatically scaled until it covers the polygon. Take tiling a room as an analogy: to fit two tiles along the X axis, we need the texture to repeat twice along X. In other words, by changing the texture-coordinate values we change the size ratio between the texture and the target polygon.

So what happens when a texture coordinate exceeds this range? We can tell OpenGL what to do when texture coordinates fall outside it.

Wrap mode — Description
GL_REPEAT — The default behavior. Repeats the texture image.
GL_MIRRORED_REPEAT — Like GL_REPEAT, but the image is mirrored on each repeat.
GL_CLAMP_TO_EDGE — Clamps the coordinates to [0, 1]; coordinates beyond that repeat the texture's edge, producing a stretched-edge effect.
GL_CLAMP_TO_BORDER — Coordinates beyond the range get a user-specified border color; if none is defined, there is no texture color.
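The first three wrap modes are easy to model as functions of a single coordinate (a pure-Go sketch; `wrap` is our own helper, and GL_CLAMP_TO_BORDER is omitted because it substitutes a border color rather than remapping the coordinate):

```go
package main

import (
	"fmt"
	"math"
)

// wrap maps a texture coordinate outside [0, 1] back into range the way the
// three coordinate-remapping wrap modes do.
func wrap(t float64, mode string) float64 {
	switch mode {
	case "REPEAT":
		return t - math.Floor(t) // keep the fractional part
	case "MIRRORED_REPEAT":
		f := t - 2*math.Floor(t/2) // period-2 sawtooth in [0, 2)
		if f > 1 {
			return 2 - f // mirror the second half
		}
		return f
	case "CLAMP_TO_EDGE":
		return math.Max(0, math.Min(1, t))
	}
	return t
}

func main() {
	fmt.Println(wrap(1.25, "REPEAT"))          // 0.25
	fmt.Println(wrap(1.25, "MIRRORED_REPEAT")) // 0.75
	fmt.Println(wrap(1.25, "CLAMP_TO_EDGE"))   // 1
}
```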

Specific effects, examples will be given below.

gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT)
gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT)

The i here means the value being set is an int. For 3D wrapping there is also an r coordinate.

Texture filtering

Texture coordinates are independent of resolution — they can be any floating-point value — so the coordinate you define may not fall exactly on the center of a texture pixel. OpenGL therefore needs to know how to map texture pixels (texels) to texture coordinates. This matters when you have a large object but a low-resolution texture. As you may have guessed, OpenGL has texture filtering options for this. There are many, but for now we discuss only the two most important: GL_NEAREST and GL_LINEAR.

A texture pixel is also called a texel. Imagine opening a .jpg and zooming in until you see it is composed of countless pixels — each one is a texel. Be careful not to confuse texels with texture coordinates: texture coordinates are the per-vertex values you set for your model, which OpenGL uses to locate positions on the texture image and then sample the texel colors there.

GL_NEAREST (nearest-neighbor filtering) is OpenGL's default texture filtering method. With GL_NEAREST, OpenGL selects the texel whose center is closest to the texture coordinate. Picture four texels with a plus sign marking the texture coordinate: if the upper-left texel's center is closest, it is chosen as the sample color.

GL_LINEAR ((bi)linear filtering) computes an interpolated color from the texels near the texture coordinate. The closer a texel's center is to the texture coordinate, the more that texel's color contributes to the final sample, so the returned color is a blend of the neighboring texels.

GL_NEAREST produces a blocky pattern in which the individual texels are clearly visible, while GL_LINEAR produces a smoother result in which they are hard to make out. GL_LINEAR looks more realistic, but some developers prefer the 8-bit look and choose GL_NEAREST.
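The difference is easy to see in a software model: sample a 2×2 grayscale texture at its exact center with both filters. A pure-Go sketch — the texel-center convention (i+0.5)/size follows OpenGL, but `nearest` and `bilinear` are our own simplified helpers that ignore wrap modes:

```go
package main

import (
	"fmt"
	"math"
)

// A 2×2 grayscale "texture"; tex[y][x] is a texel value.
var tex = [2][2]float64{{0, 100}, {100, 200}}

func clampi(i, lo, hi int) int {
	if i < lo {
		return lo
	}
	if i > hi {
		return hi
	}
	return i
}

// nearest mimics GL_NEAREST: pick the texel whose center is closest to (u, v).
func nearest(u, v float64) float64 {
	return tex[clampi(int(v*2), 0, 1)][clampi(int(u*2), 0, 1)]
}

// bilinear mimics GL_LINEAR: blend the four texels around (u, v), weighting
// each by its closeness to the sample point (texel centers at (i+0.5)/2).
func bilinear(u, v float64) float64 {
	fx, fy := u*2-0.5, v*2-0.5 // position in texel space
	x0, y0 := clampi(int(math.Floor(fx)), 0, 1), clampi(int(math.Floor(fy)), 0, 1)
	x1, y1 := clampi(x0+1, 0, 1), clampi(y0+1, 0, 1)
	tx, ty := fx-math.Floor(fx), fy-math.Floor(fy)
	top := tex[y0][x0]*(1-tx) + tex[y0][x1]*tx
	bot := tex[y1][x0]*(1-tx) + tex[y1][x1]*tx
	return top*(1-ty) + bot*ty
}

func main() {
	// Sampling dead center: GL_NEAREST snaps to a single texel,
	// GL_LINEAR averages all four.
	fmt.Println(nearest(0.5, 0.5))  // 200
	fmt.Println(bilinear(0.5, 0.5)) // 100
}
```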

Texture filtering can be set separately for magnification and minification — for example, nearest-neighbor when the texture is shrunk and linear when it is enlarged. Specify the filter for each case with the glTexParameter* functions; the code looks very similar to the wrap-mode settings:

gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR)

Available texture filters

Filter — Description
GL_NEAREST — take the nearest texel
GL_LINEAR — linear interpolation between texels
GL_NEAREST_MIPMAP_NEAREST — nearest texel from the nearest mipmap level
GL_NEAREST_MIPMAP_LINEAR — nearest texel, interpolated linearly between the two nearest mipmap levels
GL_LINEAR_MIPMAP_NEAREST — linear interpolation within the nearest mipmap level
GL_LINEAR_MIPMAP_LINEAR — linear interpolation both within and between mipmap levels
Mipmaps (multi-level progressive textures)

Imagine a large room with thousands of objects, each textured. Distant objects have textures of the same high resolution as nearby ones, but since a distant object may produce only a few fragments, OpenGL struggles to pick the right colors from a high-resolution texture: each fragment spans a large part of the texture yet yields only a single sample. This looks unrealistic on small distant objects, not to mention the memory wasted on high-resolution textures for them.

OpenGL solves this with mipmaps (multi-level progressive textures): a series of texture images, each half the size of the previous one. The idea is simple: beyond a certain viewer distance, OpenGL switches to the mipmap level best suited to the object's distance. Because the object is far away, the lower resolution goes unnoticed — and mipmaps perform very well, too.

Creating the series of mipmap images by hand for every texture is cumbersome. Fortunately OpenGL provides glGenerateMipmap: call it after creating a texture and OpenGL takes care of the rest. You will see how to use it in a later tutorial.
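The mipmap chain itself is just repeated halving down to 1×1. A pure-Go sketch of the level sizes (the `mipSizes` helper is our own):

```go
package main

import "fmt"

// mipSizes lists each mipmap level's dimensions: every level halves the
// previous one (rounding down, minimum 1) until the chain reaches 1×1 —
// the chain glGenerateMipmap builds.
func mipSizes(w, h int) [][2]int {
	sizes := [][2]int{{w, h}}
	for w > 1 || h > 1 {
		if w > 1 {
			w /= 2
		}
		if h > 1 {
			h /= 2
		}
		sizes = append(sizes, [2]int{w, h})
	}
	return sizes
}

func main() {
	fmt.Println(mipSizes(8, 8)) // [[8 8] [4 4] [2 2] [1 1]]
}
```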

When OpenGL switches mipmap levels during rendering, it can produce unrealistic hard edges between two levels. Just as with normal texture filtering, NEAREST and LINEAR filtering can also be applied between two mipmap levels. To specify the filtering between levels, replace the original filter method with one of these four options:

Filter method — Description
GL_NEAREST_MIPMAP_NEAREST — use the nearest mipmap level and sample it with nearest-neighbor interpolation
GL_LINEAR_MIPMAP_NEAREST — use the nearest mipmap level and sample it with linear interpolation
GL_NEAREST_MIPMAP_LINEAR — linearly interpolate between the two mipmap levels that best match the pixel size, sampling each with nearest-neighbor interpolation
GL_LINEAR_MIPMAP_LINEAR — linearly interpolate between the two nearest mipmap levels, sampling each with linear interpolation

A common mistake is to set the magnification filter to one of the mipmap filtering options. This has no effect, because mipmaps are used only when the texture is minified; using a mipmap option for the magnification filter generates a GL_INVALID_ENUM error code.

Create a 2D texture image

TexImage2D(target uint32, level int32, internalformat int32, width int32, height int32, border int32, format uint32, xtype uint32, pixels unsafe.Pointer)

  • target: the texture target: GL_TEXTURE_1D, GL_TEXTURE_2D, or GL_TEXTURE_3D.
  • level: the mipmap level, for manually setting each level individually. We pass 0, the base level.
  • internalformat: the format OpenGL should store the texture in, i.e. which color model, e.g. RGBA.
  • width, height: the width and height of the texture.
  • border: always 0 (a historical artifact).
  • format: the color model of the source image, e.g. RGBA.
  • xtype: the data type of the pixel data.
  • pixels: pointer to the pixel data array.

Example

// imports needed for decoding:
// _ "image/jpeg"
// _ "image/png"
func MakeTexture(filepath string) uint32 {
	var texture uint32
	gl.GenTextures(1, &texture)
	gl.BindTexture(gl.TEXTURE_2D, texture)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR)

	imgFile, err := os.Open(filepath)
	if err != nil {
		log.Fatalln(err)
	}
	defer imgFile.Close()
	img, _, err := image.Decode(imgFile)
	if err != nil {
		log.Fatalln(err)
	}
	// Convert to RGBA so the pixel layout matches what we tell TexImage2D.
	rgba := image.NewRGBA(img.Bounds())
	draw.Draw(rgba, rgba.Bounds(), img, image.Point{0, 0}, draw.Src)

	gl.TexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, int32(rgba.Rect.Size().X), int32(rgba.Rect.Size().Y), 0, gl.RGBA, gl.UNSIGNED_BYTE, gl.Ptr(rgba.Pix))

	gl.GenerateMipmap(gl.TEXTURE_2D)

	return texture
}
Use textures

The texture is attached to the polygon, so you first need to define a vertex array of the polygon, and define texture coordinate attributes for each vertex.

width  = 800
height = 600

vertices = []float32{
	// Positions   // Colors      // Texture Coords
	0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, // Top Right
	0.5, -0.5, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, // Bottom Right
	-0.5, -0.5, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, // Bottom Left
	-0.5, 0.5, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, // Top Left
}

indices = []uint32{
	0, 1, 3, // First Triangle
	1, 2, 3, // Second Triangle
}

Each vertex is laid out as position (3 floats), color (3 floats), and texture coordinates (2 floats): a stride of 8 floats (32 bytes), with offsets of 0, 12, and 24 bytes respectively.

func Run() {
	runtime.LockOSThread()
	window := util.InitGlfw(width, height, "texture2d")
	defer glfw.Terminate()

	program, _ := util.InitOpenGL(vertexShaderSource, fragmentShaderSource)
	vao := util.MakeVaoWithAttrib(program, vertices, indices, []util.VertAttrib{
		{Name: "vPosition", Size: 3},
		{Name: "vColor", Size: 3},
		{Name: "vTexCoord", Size: 2},
	})
	pointNum := int32(len(indices))
	texture1 := util.MakeTexture("demo4/container.jpg")

	for !window.ShouldClose() {
		gl.ClearColor(0.2, 0.3, 0.3, 1.0)
		gl.Clear(gl.COLOR_BUFFER_BIT)
		gl.UseProgram(program)

		gl.ActiveTexture(gl.TEXTURE0)
		gl.BindTexture(gl.TEXTURE_2D, texture1)

		gl.BindVertexArray(vao)
		gl.DrawElements(gl.TRIANGLES, pointNum, gl.UNSIGNED_INT, gl.Ptr(indices))

		glfw.PollEvents()
		window.SwapBuffers()
	}
}

In the example above there is a single texture and a single texture variable in the fragment shader, so we did not need to specify the correspondence — by default it maps to TEXTURE0. What if we define multiple texture variables? Suppose we want two textures blended together by linear interpolation at some ratio. First, modify the fragment shader:

#version 410

in vec3 fColor;
in vec2 fTexCoord;

out vec4 frag_colour;

uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;

void main() {
    frag_colour = mix(texture(ourTexture1, fTexCoord), texture(ourTexture2, fTexCoord), 0.2);
}

GLSL's built-in mix function takes two values and linearly interpolates between them based on a third argument. If the third value is 0.0 it returns the first input; if 1.0, the second. A value of 0.2 returns 80% of the first input color plus 20% of the second — the blended color of the two textures.
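mix is plain linear interpolation and can be reproduced in one line of Go (per scalar; the shader applies it per color channel):

```go
package main

import "fmt"

// mix reproduces GLSL's mix(a, b, t): linear interpolation,
// t=0 returns a, t=1 returns b.
func mix(a, b, t float64) float64 {
	return a*(1-t) + b*t
}

func main() {
	// t = 0.2 keeps 80% of the first value and takes 20% of the second.
	fmt.Println(mix(0, 100, 0.2)) // 20
	fmt.Println(mix(0, 100, 1))   // 100
}
```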

Then we need to specify which uniform sampler corresponds to which texture.

func Run() {
	runtime.LockOSThread()
	window := util.InitGlfw(width, height, "texture2d")
	defer glfw.Terminate()

	program, _ := util.InitOpenGL(vertexShaderSource, fragmentShaderSource)
	vao := util.MakeVaoWithAttrib(program, vertices, indices, []util.VertAttrib{
		{Name: "vPosition", Size: 3},
		{Name: "vColor", Size: 3},
		{Name: "vTexCoord", Size: 2},
	})
	pointNum := int32(len(indices))
	texture1 := util.MakeTexture("demo4/container.jpg")
	texture2 := util.MakeTexture("demo4/awesomeface.png")

	for !window.ShouldClose() {
		gl.ClearColor(0.2, 0.3, 0.3, 1.0)
		gl.Clear(gl.COLOR_BUFFER_BIT)
		gl.UseProgram(program)

		gl.ActiveTexture(gl.TEXTURE0)
		gl.BindTexture(gl.TEXTURE_2D, texture1)
		// Set ourTexture1 to 0: it samples from unit 0, i.e. gl.TEXTURE0.
		gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("ourTexture1"+"\x00")), 0)

		gl.ActiveTexture(gl.TEXTURE1)
		gl.BindTexture(gl.TEXTURE_2D, texture2)
		// Set ourTexture2 to 1: it samples from unit 1, i.e. gl.TEXTURE1.
		gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("ourTexture2"+"\x00")), 1)

		gl.BindVertexArray(vao)
		gl.DrawElements(gl.TRIANGLES, pointNum, gl.UNSIGNED_INT, gl.Ptr(indices))

		glfw.PollEvents()
		window.SwapBuffers()
	}
}

As you can see, the smiley face is upside down (in fact, so is the box). This is because image pixels are scanned with the origin at the top left corner — any image-reading program does this — while OpenGL expects texture coordinate y=0 at the bottom of the image. We can fix it in the vertex shader by flipping the y of the texture coordinate with 1-y.

#version 410

in vec3 vPosition;
in vec3 vColor;
in vec2 vTexCoord;

out vec3 fColor;
out vec2 fTexCoord;

void main() {
    gl_Position = vec4(vPosition, 1.0);
    fColor = vColor;
    fTexCoord = vec2(vTexCoord.x, 1.0 - vTexCoord.y);
}


Texture wrap effects

To demonstrate texture wrapping, we need to modify the texture coordinates

0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 2.0, 2.0,
0.5, -0.5, 0.0, 0.0, 1.0, 0.0, 2.0, 0.0,
-0.5, -0.5, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0,
-0.5, 0.5, 0.0, 1.0, 1.0, 0.0, 0.0, 2.0,

With these coordinates, the original image occupies the lower left quadrant of the quad.

GL_REPEAT

GL_MIRRORED_REPEAT

GL_CLAMP_TO_EDGE

GL_CLAMP_TO_BORDER

gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_BORDER)
gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_BORDER)
c := []float32{1.0, 1.0, 0.0, 1.0}
gl.TexParameterfv(gl.TEXTURE_2D, gl.TEXTURE_BORDER_COLOR, &c[0])



Origin blog.csdn.net/raoxiaoya/article/details/131391468