three.js (10): WebGL and GPU rendering principles (the hard parts; can be digested later)

1. Rendering pipeline

What is WebGL

WebGL (Web Graphics Library) is a JavaScript API that renders high-performance, interactive 3D and 2D graphics in any compatible web browser, without plug-ins. WebGL does this by exposing an API that closely conforms to OpenGL ES 2.0 and can be used inside HTML5 canvas elements. This conformance allows the API to take advantage of the hardware graphics acceleration provided by the user's device.

WebGL Development History

The development of WebGL can be traced back to 2006. WebGL originated in a Canvas 3D experiment by Mozilla engineer Vladimir Vukićević, who first demonstrated a Canvas 3D prototype that year. By the end of 2007 the technology had been implemented in both Firefox and Opera. In early 2009 the Khronos Group created the WebGL working group, whose initial members included Apple, Google, Mozilla and Opera. The WebGL 1.0 specification was released in March 2011. Work on the WebGL 2 specification began in 2013 and was completed in January 2017; WebGL 2 was first supported in Firefox 51, Chrome 56 and Opera 43.

rendering pipeline

WebGL's rendering relies on the rendering capabilities of the underlying GPU, so the WebGL rendering process matches the rendering pipeline inside the GPU. The job of the rendering pipeline is to convert a 3D model into a 2D image.

In the early days the rendering pipeline was not programmable and was called a fixed-function pipeline: its detailed behavior was fixed, and the only way to modify it was to adjust a few parameters. The pipeline in a modern GPU is a programmable rendering pipeline: you can control details of the rendering stages by writing programs in the GLSL shader language. To put it simply, with shaders we can process every pixel on the canvas, which makes all kinds of cool effects possible.

2. The rendering process in detail

vertex shader

WebGL talks to the GPU. The code that runs on the GPU is a pair of shaders: a vertex shader and a fragment shader. Each time the shader program runs, the vertex shader executes first, followed by the fragment shader.
The job of a vertex shader is to generate clip space coordinate values, usually in the following form:

const vertexShaderSource = `
	attribute vec3 position; 
	void main() {
		gl_Position = vec4(position, 1.0);
	}
`

The vertex shader is called once for each vertex. Each call must set the special global variable gl_Position, whose value is a clip-space coordinate. At this point some readers ask: what are clip-space coordinates?
I've mentioned it before, but it bears repeating:
no matter how big your canvas is, clip-space coordinates always range from -1 to 1 on each axis.

For example, one run of the vertex shader might set gl_Position to (-0.5, -0.5, 0, 1). Remember that it is always a vec4, with components corresponding to x, y, z, w; even if you only use some of them, set default values for the rest. This is how the 3D model makes it onto our screen.
The data a vertex shader needs can be obtained in the following four ways (see the sketch after this list).

  1. attributes (read per-vertex data from a buffer)
  2. uniforms (global values, generally used for whole-object changes such as translation, rotation and scaling)
  3. textures (read data from pixels/texels)
  4. varyings (pass values from the vertex shader to the fragment shader)
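
To make these concrete, here is a minimal vertex shader sketch that uses all four kinds of input (the names a_position, a_uv, u_matrix, u_texture and v_uv are illustrative, not required):

attribute vec3 a_position;    // attribute: per-vertex data read from a buffer
attribute vec2 a_uv;          // another attribute, e.g. texture coordinates
uniform mat4 u_matrix;        // uniform: a single value shared by every vertex
uniform sampler2D u_texture;  // texture: sampled data (more often used in the fragment shader)
varying vec2 v_uv;            // varying: handed to the fragment shader, interpolated

void main() {
  v_uv = a_uv;
  gl_Position = u_matrix * vec4(a_position, 1.0);
}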

Primitive assembly and rasterization

What are primitives?
The elements that describe various graphics are called primitives, and those that describe geometry are called geometric primitives (points, line segments or polygons). Points and lines are the simplest geometric primitives. After the vertex shader computes the coordinates, the vertices are assembled into primitives.
In plain terms: a primitive is a point, a line segment, or a polygon.
What is primitive assembly?
Simply put, it is the process of assembling the vertices, colors, textures and other content we set into renderable polygons.
The assembly type depends on the primitive type you pass when drawing:

gl.drawArrays(gl.TRIANGLES, 0, 3)

For a triangle drawn with gl.TRIANGLES, the vertex shader is executed three times, once per vertex.

rasterization

What is rasterization?
It takes the polygons produced by primitive assembly, computes which pixels they cover and fills them, removes invisible parts, and clips away anything outside the visible range; finally, visible, color-carrying fragments are generated and drawn.
(figure: rasterization process diagram)

Culling and clipping

Culling: in everyday life, the back side of an opaque object is invisible to the observer. Similarly, in WebGL we can mark the back faces of an object as invisible, so that during rendering the invisible parts are removed and never participate in drawing, saving rendering overhead. A sketch of enabling this follows.
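
In code, back-face culling is opt-in; a minimal sketch (assuming gl is a WebGL context):

gl.enable(gl.CULL_FACE) // culling is disabled by default
gl.cullFace(gl.BACK)    // cull back faces (also the default choice)
// A front face is one whose vertices wind counter-clockwise by default;
// this can be flipped with gl.frontFace(gl.CW).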

Clipping: in daily life, whether we are watching TV or observing objects, we have a limited field of view and cannot see things outside it. Similarly, after a graphic is generated, some parts may fall outside the visible range; those parts are clipped away and do not participate in drawing, which improves performance. The visible region is the view frustum: only what falls inside the frustum is drawn.

fragment shader

After rasterization, each covered pixel carries color, depth and texture data; this is called a fragment.
Tip: the color of each pixel is provided by the fragment shader's gl_FragColor.
The fragment shader receives the fragments generated during the rasterization stage, where each fragment's interpolated data has already been computed. The fragments are processed one by one, and the processed fragments are passed on to the subsequent stages. The number of times the fragment shader runs is determined by how many fragments the graphic has.

Fragment-by-fragment selection
determines whether each fragment should be displayed, through the stencil test and the depth test. During testing, useless fragments are discarded, and the survivors form the drawable two-dimensional image that is displayed (a snippet showing how to enable the tests follows this list).

  • Depth test: tests the fragment's z value; a fragment with a smaller value covers one with a larger value (much like a near object blocking a far one).
  • Stencil test: compares fragments against a stencil mask, marking which ones may appear in the image (useful for effects such as mirrors); only marked fragments are drawn.
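
A minimal sketch of enabling these tests (assuming gl is a WebGL context; both are off by default, and a stencil buffer must be requested at context creation with getContext('webgl', { stencil: true })):

gl.enable(gl.DEPTH_TEST)   // keep the fragment closest to the camera
gl.enable(gl.STENCIL_TEST) // keep only fragments that pass the stencil mask
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT | gl.STENCIL_BUFFER_BIT)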

3. Drawing a triangle with WebGL


Initialize the canvas

Create a canvas element:

<canvas id="webgl" width="500" height="500"></canvas>

Create the WebGL context:

const gl = document.getElementById('webgl').getContext('webgl')

Create a shader program

The code for creating a shader program is largely boilerplate and always follows the same sequence of steps: create the shaders, bind their source code, compile them, create a program, attach the shaders, link the program, and use it.
Let's walk through that flow step by step.

Create shader

 const vertexShader = gl.createShader(gl.VERTEX_SHADER)
 const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER)

gl.VERTEX_SHADER and gl.FRAGMENT_SHADER are constants identifying the vertex shader and the fragment shader respectively.

Bind data source

As the name suggests: the data source, which is our shader code.
There are many ways to write shader code:

  1. In a script tag with a non-JavaScript type (e.g. type="notjs") whose text content is read at runtime
  2. As a template string (my preferred approach)
    Let's write the vertex shader first:
const vertexShaderSource = `
    attribute vec4 a_position;
    void main() {
        gl_Position = a_position;
    }
 `

The vertex shader must have a main function. GLSL is a strongly typed language, and don't forget the semicolons; this isn't JavaScript, folks. My shader here is very simple: it declares a vec4 vertex position and passes it to gl_Position.
Some friends may ask: does the name a_position have to be written like this?
Not exactly. When naming variables, we usually use a prefix to mark the kind of variable: a_ for attributes, u_ for uniforms, v_ for varyings. For example:

uniform mat4 u_mat;

This represents a matrix uniform. The prefix is not required, but it keeps things professional and helps avoid bugs.
We then write the fragment shader:

const fragmentShaderSource = `
    void main() {
        gl_FragColor = vec4(1.0,0.0,0.0,1.0);
    }
`

This is very easy to understand: the color of every pixel is red. gl_FragColor corresponds to rgba, the representation of a color.
After you have the data source, start binding:

// Create the shaders
const vertexShader = gl.createShader(gl.VERTEX_SHADER)
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER)
// Bind the data source
gl.shaderSource(vertexShader, vertexShaderSource)
gl.shaderSource(fragmentShader, fragmentShaderSource)

Pretty simple, right? Haha, I think you've got it.

Some operations behind the shader

In fact, compiling the shaders, attaching them, linking the shader program and using it are each handled by a single API call. Rather than belabor it, just look at the code:

// Compile the shaders
gl.compileShader(vertexShader)
gl.compileShader(fragmentShader)
// Create the shader program
const program = gl.createProgram()
gl.attachShader(program, vertexShader)
gl.attachShader(program, fragmentShader)
// Link and use the shader program
gl.linkProgram(program)
gl.useProgram(program)

In this way we have created a shader program.
At this point someone asks: how do I know whether the shader I created is correct? What if I'm careless? Here is how to debug it:

const success = gl.getProgramParameter(program, gl.LINK_STATUS)
if (success) {
  gl.useProgram(program)
  return program
}
console.error(gl.getProgramInfoLog(program))
gl.deleteProgram(program)

The getProgramParameter method tells us whether the GLSL we wrote linked into a working program; if it didn't, getProgramInfoLog returns the error log, much like ordinary logging.
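
The same idea works for each individual shader: gl.getShaderParameter reports the compile status, and gl.getShaderInfoLog returns the compiler's error messages. A small sketch:

if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(vertexShader))
  gl.deleteShader(vertexShader)
}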

Data is stored in the buffer

With the shader in place, the only thing we still need is the data, right?
We used the attribute qualifier when writing the vertex shader above, which means that variable reads its data from a buffer. Next we store the data in a buffer.
First create a Vertex Buffer Object (VBO):

const buffer = gl.createBuffer()

The gl.createBuffer() function creates a buffer object and returns a handle to it. Next, this buffer needs to be bound in WebGL.

gl.bindBuffer(gl.ARRAY_BUFFER, buffer)

The gl.bindBuffer() function makes buffer the current buffer for the gl.ARRAY_BUFFER target; all subsequent data operations on that target go to this buffer, until bindBuffer is called with a different buffer.
We create a typed array and store its data in the buffer.

const data = new Float32Array([0.0, 0.0, -0.3, -0.3, 0.3, -0.3])
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW)

Because communication between JavaScript and WebGL must be binary rather than traditional text, typed arrays backed by ArrayBuffer are used to hand the data over in binary form. Vertex data are floating-point numbers that don't need much precision, so Float32Array is sufficient. This is an efficient way to exchange large amounts of data between JavaScript and the GPU in real time.
gl.STATIC_DRAW specifies the intended use of the data store: the buffer's contents may be used often but will not change.
gl.DYNAMIC_DRAW indicates the contents are used often and will change often.
gl.STREAM_DRAW indicates the contents may not be used often.
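
As an example of when the other hints fit: if the vertex positions were rewritten every frame, you might allocate the store with gl.DYNAMIC_DRAW and update it with gl.bufferSubData (a sketch; not needed for this demo):

// Allocate space once with the DYNAMIC_DRAW hint...
gl.bufferData(gl.ARRAY_BUFFER, data.byteLength, gl.DYNAMIC_DRAW)
// ...then overwrite the contents each frame, starting at byte offset 0
gl.bufferSubData(gl.ARRAY_BUFFER, 0, data)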

Read data from buffer

The only input to our GLSL shader program is the attribute a_position. The first thing to do is look up the location of this attribute in the shader program we just created.

const aposlocation = gl.getAttribLocation(program, 'a_position')

Next we need to tell WebGL how to take the data out of the buffer we prepared and hand it to the attribute in the shader. First, enable the attribute:

gl.enableVertexAttribArray(aposlocation)

Finally, describe how the data is read out of the buffer and bound to the enabled aposlocation:

gl.vertexAttribPointer(aposlocation, 2, gl.FLOAT, false, 0, 0)

The gl.vertexAttribPointer() function has six parameters:

  1. The attribute location the data should be bound to.
  2. How many components to take from the buffer per vertex, from 1 to 4. Here we take 2 at a time; the 6 values declared earlier are exactly the two-dimensional coordinates of 3 vertices.
  3. The data type. Options include gl.BYTE (signed 8-bit integer), gl.SHORT (signed 16-bit integer), gl.UNSIGNED_BYTE (unsigned 8-bit integer), gl.UNSIGNED_SHORT (unsigned 16-bit integer) and gl.FLOAT (32-bit IEEE floating point).
  4. Whether integer values should be normalized into a specific range. This parameter has no effect for gl.FLOAT.
  5. The stride: how many bytes separate the start of one vertex's data from the next. 0 means the data is tightly packed and WebGL computes the stride itself (see the interleaved-data sketch after this list).
  6. The offset in bytes at which to start reading, which must be a multiple of the type's byte size. 0 means start at the beginning.
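
To see parameters 5 and 6 in action, imagine interleaving a hypothetical color attribute with the positions, five floats per vertex ([x, y, r, g, b]); a sketch:

const FSIZE = Float32Array.BYTES_PER_ELEMENT // 4 bytes per float
// positions: 2 components per vertex, one vertex every 5 * 4 = 20 bytes, starting at byte 0
gl.vertexAttribPointer(aposlocation, 2, gl.FLOAT, false, 5 * FSIZE, 0)
// colors: 3 components, same 20-byte stride, starting 2 floats (8 bytes) in
// (acolorLocation would come from gl.getAttribLocation(program, 'a_color'))
gl.vertexAttribPointer(acolorLocation, 3, gl.FLOAT, false, 5 * FSIZE, 2 * FSIZE)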

rendering

Now that the shader program and the data are ready, it's time to render. Before rendering, just as with a 2D canvas, we clear the canvas first:

// Clear the canvas
gl.clearColor(0, 0, 0, 0)
gl.clear(gl.COLOR_BUFFER_BIT)

We clear the canvas with 0, 0, 0, 0, corresponding to the r, g, b, alpha (red, green, blue, alpha) values respectively, so in this example the canvas becomes transparent.
Now draw the triangles:

gl.drawArrays(gl.TRIANGLES, 0, 3)

  1. The first parameter indicates the type of drawing
  2. The second parameter indicates which vertex to start drawing from.
  3. The third parameter indicates how many vertices to draw. The buffer holds 6 values, taken 2 at a time, giving 3 vertices.
    There are several drawing modes: gl.POINTS, gl.LINES, gl.LINE_STRIP, gl.LINE_LOOP, gl.TRIANGLES, gl.TRIANGLE_STRIP and gl.TRIANGLE_FAN.
    Run this and you should see a red triangle.
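
For example, the same three vertices drawn with a different mode produce an outline instead of a filled triangle:

gl.drawArrays(gl.LINE_LOOP, 0, 3) // connect the 3 vertices and close the loop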

All code

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Document</title>
    <style>
      * {
        margin: 0;
        padding: 0;
      }
      canvas {
        width: 100vw;
        height: 100vh;
        display: block;
      }
    </style>
  </head>
  <body>
    <canvas></canvas>

    <script>
      // Get the canvas element
      let canvas = document.querySelector("canvas");
      // Set the canvas width and height
      canvas.width = window.innerWidth;
      canvas.height = window.innerHeight;
      // Get the WebGL context
      let gl = canvas.getContext("webgl");

      // Create the vertex shader
      const vShader = gl.createShader(gl.VERTEX_SHADER);

      // Vertex shader source
      gl.shaderSource(
        vShader,
        `
        attribute vec4 v_position;
        void main(){
          gl_Position = v_position;
        }
      `
      );
      // Compile the vertex shader
      gl.compileShader(vShader);

      // Create the fragment shader
      const fShader = gl.createShader(gl.FRAGMENT_SHADER);
      // Fragment shader source
      gl.shaderSource(
        fShader, // vec4 -> rgba parameters
        `
          void main(){
            gl_FragColor = vec4(1.0,0.0,0.0,1.0);
          }
        `
      );

      // Compile the fragment shader
      gl.compileShader(fShader);

      // Create a shader program linking the vertex and fragment shaders
      const program = gl.createProgram();
      // Attach the vertex shader
      gl.attachShader(program, vShader);
      // Attach the fragment shader
      gl.attachShader(program, fShader);
      // Link the shader program
      gl.linkProgram(program);
      // Use the shader program
      gl.useProgram(program);

      // Look up the vertex attribute location
      const position = gl.getAttribLocation(program, "v_position");
      // Create a buffer
      const pBuffer = gl.createBuffer();
      // Bind the buffer
      gl.bindBuffer(gl.ARRAY_BUFFER, pBuffer);

      // Set the vertex data
      gl.bufferData(
        gl.ARRAY_BUFFER,
        new Float32Array([0.0, 0.5, -0.5, -0.5, 0.5, -0.5]),
        gl.STATIC_DRAW // static draw: the data will not change
      );

      // Supply the vertex data to the attribute variable
      gl.vertexAttribPointer(
        // tell the attribute where to get its data
        position,
        2, // take 2 components per iteration
        gl.FLOAT, // each component is a 32-bit float
        false, // do not normalize the data
        0, // stride 0: the data is tightly packed
        0 // offset: start reading at the beginning of the buffer
      );

      // Enable the attribute
      gl.enableVertexAttribArray(position);

      // Draw
      gl.drawArrays(gl.TRIANGLES, 0, 3);
    </script>
  </body>
</html>

The data we created earlier ([0.0, 0.0, -0.3, -0.3, 0.3, -0.3]) was drawn
on a 500 * 500 canvas. Converted from clip space into canvas pixels, the vertices map like this:

0.0, 0.0    ====>  250, 250
-0.3, -0.3  ====>  175, 325
0.3, -0.3   ====>  325, 325
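
The conversion itself is simple arithmetic; here is a sketch (clipToPixel is a hypothetical helper, with pixel y measured from the top of the canvas):

function clipToPixel(clipX, clipY, width, height) {
  const pixelX = (clipX + 1) / 2 * width
  const pixelY = (1 - clipY) / 2 * height
  return [pixelX, pixelY]
}

clipToPixel(-0.3, -0.3, 500, 500) // [175, 325]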

4. Scaling matrix, uniform variables and varying variables

An example that animates the triangle's scale:

<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Document</title>
  <style>
    * {
      margin: 0;
      padding: 0;
    }

    canvas {
      width: 100vw;
      height: 100vh;
      display: block;
    }
  </style>
</head>

<body>
  <canvas id="canvas"></canvas>
  <script>
    const canvasEl = document.querySelector("#canvas");
    canvasEl.width = document.body.clientWidth; // Set the canvas width
    canvasEl.height = document.body.clientHeight; // Set the canvas height

    const gl = canvasEl.getContext("webgl"); // Get the WebGL context
    gl.viewport(0, 0, canvasEl.width, canvasEl.height)

    // Create the vertex shader. Syntax: gl.createShader(type), where type is one of
    // the enum values gl.VERTEX_SHADER or gl.FRAGMENT_SHADER
    const vShader = gl.createShader(gl.VERTEX_SHADER);
    // Write the vertex shader's GLSL code. Syntax: gl.shaderSource(shader, source);
    // shader - the WebGLShader object to set the code on; source - a string of GLSL code
    gl.shaderSource(
      vShader,
      `
          attribute vec4 a_Position;
          uniform mat4 u_Mat;
          varying vec4 v_Color;
          void main() {
            gl_Position = u_Mat * a_Position; // set the vertex position
            v_Color = gl_Position;
          }
        `
    );
    gl.compileShader(vShader); // Compile the shader code

    const fShader = gl.createShader(gl.FRAGMENT_SHADER);

    gl.shaderSource(
      fShader,
      `
          precision mediump float;
          varying vec4 v_Color;
          void main() {
            gl_FragColor = v_Color; // set the fragment color
          }
        `
    ); // Write the fragment shader code
    gl.compileShader(fShader); // Compile the shader code

    // Create a program to link the vertex and fragment shaders
    const program = gl.createProgram();
    gl.attachShader(program, vShader); // Attach the vertex shader
    gl.attachShader(program, fShader); // Attach the fragment shader
    gl.linkProgram(program); // Link the shaders in the program

    gl.useProgram(program); // Tell WebGL to render with this program


    // Create a vertex buffer object, returning its handle; it will hold the triangle's vertices
    const pBuffer = gl.createBuffer();

    // Bind the vertex buffer object to gl.ARRAY_BUFFER;
    // subsequent operations on gl.ARRAY_BUFFER are applied to this buffer
    gl.bindBuffer(gl.ARRAY_BUFFER, pBuffer);
    // Put the vertex data into the buffer we just created
    gl.bufferData(
      gl.ARRAY_BUFFER,
      new Float32Array([0, 0.5, 0.5, 0, -0.5, -0.5]), // the triangle's three vertices
      // The data goes to the GPU as a Float32Array, so no parsing is needed on the way
      gl.STATIC_DRAW // the buffer's contents will not change often
    );
    // Get the location of a_Position
    const a_Position = gl.getAttribLocation(program, "a_Position");

    gl.vertexAttribPointer(
      // Tell WebGL how to pull data out of the buffer
      a_Position, // index of the vertex attribute
      2, // number of components, must be 1, 2, 3 or 4; we only provide x and y
      gl.FLOAT, // data type of each element
      false, // whether to normalize into a given range; ignored for FLOAT data
      0, // stride: byte length of one row; 0 means tightly packed, let WebGL work it out
      0 // offset in bytes; must be a multiple of the type's byte length
    );
    gl.enableVertexAttribArray(a_Position);
    // Enable the attribute so the vertex shader can access the buffer data


    const scale = {
      x: 0.5,
      y: 0.5,
      z: 0.5
    };

    // const mat = new Float32Array([
    //   scale.x, 0.0, 0.0, 0.0,
    //   0.0, scale.x, 0.0, 0.0,
    //   0.0, 0.0, scale.x, 0.0,
    //   0.0, 0.0, 0.0, 1.0,
    // ])
    // const u_Mat = gl.getUniformLocation(program, 'u_Mat');
    // gl.uniformMatrix4fv(u_Mat, false, mat)

    // gl.clearColor(0.0, 0.0, 0.0, 0.0); // Set the color used to clear the color buffer
    // gl.clear(gl.COLOR_BUFFER_BIT); // Clear the color buffer, i.e. wipe the canvas
    // // Syntax: gl.drawArrays(mode, first, count); mode - how to draw the primitives;
    // // first - which vertex to start from; count - how many vertices to draw
    // gl.drawArrays(gl.TRIANGLES, 0, 3);

    function animate() {
      scale.x -= 0.01;
      // A uniform scaling matrix (column-major); the same factor scales x, y and z
      const mat = new Float32Array([
        scale.x, 0.0, 0.0, 0.0,
        0.0, scale.x, 0.0, 0.0,
        0.0, 0.0, scale.x, 0.0,
        0.0, 0.0, 0.0, 1.0,
      ])
      // The uniform location never changes, so this lookup could be hoisted out of the loop
      const u_Mat = gl.getUniformLocation(program, 'u_Mat');
      gl.uniformMatrix4fv(u_Mat, false, mat);
      gl.drawArrays(gl.TRIANGLES, 0, 3);
      requestAnimationFrame(animate)
    }
    animate()
  </script>
</body>

</html>

5. Basic GLSL shader specifications

What is a fragment shader?
We described shaders as the Gutenberg printing press of graphics. Why? And, more importantly: what is a shader?

If you already have experience drawing with a computer, you know that in that process you draw a circle, then a rectangle, a line, some triangles, until you compose the image you want. That process is very similar to writing a letter or a book by hand: it is a set of instructions that perform one task after another.
A shader is also a set of instructions, but the instructions are executed all at once for every single pixel on the screen. That means the code you write has to behave differently depending on the pixel's position on the screen. Like a type press, your program works as a function that receives a position and returns a color, and when compiled it runs extraordinarily fast.

Why are shaders so fast?

To answer this question, I'll introduce the wonders of parallel processing.
Think of your computer's CPU as a big industrial pipe, with every task flowing through it like a factory production line. Some tasks are bigger than others, meaning they take more time and effort to handle; we say they need more processing power. Because of the computer's architecture, jobs are forced to run in series: each job must be completed one at a time. Modern computers usually have groups of four processors that work like these pipes, completing tasks one after another to keep things running smoothly. Each pipe is also known as a thread.
Video games and other graphics applications require more processing power than other programs. Due to their graphical content, they must undergo extensive pixel-by-pixel operations. Every pixel on the screen needs to be calculated, and in 3D games the geometry and perspective also need to be calculated.
Let's go back to the metaphor of pipes and tasks. Each pixel on the screen represents a simple little task. Individually, a per-pixel task is no problem for the CPU, but (and here's the problem) the tiny task must be done for every pixel on the screen! That means that on an old 800x600 screen, 480,000 pixels had to be processed per frame, which at 30 frames per second means 14,400,000 calculations per second! Yes! That is a problem big enough to overload a microprocessor. On a modern 2880x1800 Retina display running at 60 frames per second, that works out to 311,040,000 calculations per second. How do graphics engineers solve this problem?
This is where parallel processing becomes a great solution. Rather than having a few big, powerful microprocessors (or pipes), it is smarter to have lots of tiny microprocessors running in parallel at the same time. That is what a Graphics Processing Unit (GPU) is.


Think of the tiny microprocessors as a table of pipes and the data of each pixel as a ping-pong ball. 14,400,000 ping-pong balls per second would clog almost any single pipe, but a table of 800x600 tiny pipes can take 30 waves of 480,000 pixels per second without trouble. The same holds at higher resolutions: the more parallel hardware you have, the bigger the stream it can manage.
Another "superpower" of the GPU is special mathematical functions that are accelerated by hardware, so complex mathematical operations are solved directly by the microchip rather than by software. This means ultra-fast trigonometric and matrix operations - as fast as electricity.

What is GLSL?

GLSL stands for OpenGL Shading Language, a specific standard for shader programs that you will see in the following chapters. Other types of shaders exist, depending on the hardware and the operating system. Here we will work with the OpenGL specification regulated by the Khronos Group. Understanding OpenGL's history helps in understanding most of its strange conventions, for which I recommend reading: openglbook.com/chapter-0-preface-what-is-opengl.html

Why are shaders notoriously painful?

As Uncle Ben said, "with great power comes great responsibility," and parallel computing follows this rule: the GPU's powerful architectural design comes with its own constraints and restrictions.
In order to run in parallel, each pipeline or thread must be independent of every other thread. We say that a thread is blind to the work of other threads. This restriction means that all data must flow in the same direction. So it is not possible to check the results of another thread, modify the input data, or pass the results of one thread to another thread. Allowing thread-to-thread communication puts the integrity of the data at risk.
The GPU also keeps the parallel microprocessors (pipelines) busy; once they are free, they receive new information to process. It is impossible for a thread to know what it was doing at the previous moment. It might be drawing a button from the operating system's UI, then rendering part of the sky in the game, and then displaying the text of the email. Each thread is not only blind, but also memoryless. In addition to the abstraction required to code a general function that changes the result pixel by pixel based on position, the blind and memoryless constraints make shaders less popular among junior programmers.

"Hello world!" is often the first example of learning a new language. This is a very simple, one-line program. It serves as both a warm welcome and a message of the possibilities that programming can bring.
However, in the world of GPUs, rendering a line of text in the first step is too difficult, so we choose a bright welcome color instead, let’s get excited!

#ifdef GL_ES
precision mediump float;
#endif

uniform float u_time;

void main() {
	gl_FragColor = vec4(1.0,0.0,1.0,1.0);
}

If you read the original book online, the code above is interactive: you can click and change any part of it to explore. Thanks to the GPU's architecture, shaders compile and update very quickly, so changes appear before your eyes immediately; try changing the color values and see what happens.
Although these few lines of code don't look like much, we can infer quite a few points from them:

  1. The shader language has a single main function which, at the end, assigns a color value. This is very similar to C.
  2. The final pixel color depends on the reserved global variable gl_FragColor.
  3. This C-like language has built-in variables (like gl_FragColor), functions and data types. In this example we've just met vec4 (a four-component floating-point vector). Later we will see more types, like vec3 (three-component floating-point vector) and vec2 (two-component floating-point vector), along with the very famous float (single-precision floating point), int (integer) and bool (boolean).
  4. Looking closely at the vec4 type, we can surmise that its four arguments correspond to the red, green, blue and alpha channels. We can also see that the values are normalized, meaning they run from 0 to 1. Later we'll learn how normalizing values makes it easier to map between variables.
  5. Another important C-like feature visible in this example is the preprocessor macro. Macros are part of a precompilation step: with them you can #define global values and do basic conditional checks (with #ifdef and #endif). All macro commands begin with #. Precompilation happens right before compilation: it expands everything set with #define and checks the #ifdef (defined) and #ifndef (not defined) conditions. In our "hello world!" example, we check whether GL_ES is defined (the #ifdef GL_ES at the top), which is mostly true on mobile devices and in browsers.
  6. The float type is vital in shaders, so precision matters. Lower precision means faster rendering at the cost of quality. You can choose the precision of every floating-point value; in the first line (precision mediump float;) we set all floats to medium precision, but we could also set them to low (precision lowp float;) or high (precision highp float;).
  7. The last, and perhaps most important, detail is that the GLSL specification does not guarantee automatic type casting. What does that mean? Graphics-card manufacturers accelerate in different ways, but they are only required to support a minimal language specification, and automatic casting is not part of it. In our "hello world!" example, vec4 expects single-precision floating-point values, so it should be given floats. If you want consistent code and don't want to spend hours debugging later, get into the habit of writing the decimal point in your float values. Code like the following may not run properly:
void main() {
    gl_FragColor = vec4(1,0,0,1);   // ERROR
}

There are many ways to construct the vec4 type; try a few others. Here is one, with a couple more in the sketch below:

vec4 color = vec4(vec3(1.0,0.0,1.0),1.0);
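
A few other valid constructions, as a quick sketch:

vec4 a = vec4(1.0);                       // every component set to 1.0
vec4 b = vec4(vec2(1.0, 0.0), 0.0, 1.0);  // a vec2 plus two floats
vec4 c = vec4(color.rgb, 0.5);            // another vec4's rgb components (swizzling)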

Although this example doesn't look very exciting, it is very fundamental: we are changing every pixel on the canvas to one exact color. In the following chapters we'll see how to change pixel colors using two kinds of input: space (the pixel's position on the screen) and time (the seconds since the page loaded).

Origin: blog.csdn.net/woyebuzhidao321/article/details/134148002