WebGPU Learning (II): Studying the "Draw a Triangle" Example

Hello everyone. In this article we study the helloTriangle example from the webgpu-samples repo, running in Chrome.

Previous article: WebGPU Learning (I): Introduction

Prepare Sample Code

Clone the webgpu-samples GitHub repo locally.
(Note: the version used here is 0.0.2.)

The sample code lives in the src/examples/ folder and is written in TypeScript.

Learn helloTriangle.ts

Open the helloTriangle.ts file and look at the contents of the init function.

First comes the shader code:

    const vertexShaderGLSL = `#version 450
      const vec2 pos[3] = vec2[3](vec2(0.0f, 0.5f), vec2(-0.5f, -0.5f), vec2(0.5f, -0.5f));

      void main() {
          gl_Position = vec4(pos[gl_VertexIndex], 0.0, 1.0);
      }
    `;

    const fragmentShaderGLSL = `#version 450
      layout(location = 0) out vec4 outColor;

      void main() {
          outColor = vec4(1.0, 0.0, 0.0, 1.0);
      }
    `;

These are the GLSL source strings for the vertex shader and the fragment shader.

(WebGPU supports three kinds of shader: vertex, fragment, and compute; only the first two are used here.)

"#version 450" declares GLSL version 4.5 (it must appear on the first line of the shader).

Line 2 defines the coordinates of the triangle's three vertices, using an array of three vec2 elements. Because the triangle lies in a plane, each vertex only needs x and y coordinates (z is set to 0.0 when gl_Position is built).

gl_VertexIndex on line 5 is the index of the current vertex. The vertex shader runs three times (once per vertex, since there are only three vertices), receiving 0, 1, 2 in order. (See the analysis of draw at the end of this article.)
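To make the execution model concrete, here is an illustrative plain-TypeScript sketch (not real shader code, and the function name vertexMain is made up): the pos array is indexed by gl_VertexIndex, and z and w are filled in as 0.0 and 1.0.

```typescript
// The triangle's three 2D vertex positions, as in the GLSL above.
const pos: Array<[number, number]> = [
  [0.0, 0.5],
  [-0.5, -0.5],
  [0.5, -0.5],
];

// Mimic one vertex-shader invocation: look up pos by gl_VertexIndex
// and expand to a vec4 with z = 0.0 and w = 1.0.
function vertexMain(gl_VertexIndex: number): [number, number, number, number] {
  const [x, y] = pos[gl_VertexIndex];
  return [x, y, 0.0, 1.0];
}

// The shader runs once per vertex, with gl_VertexIndex = 0, 1, 2.
const positions = [0, 1, 2].map(vertexMain);
console.log(positions[0]); // -> [0, 0.5, 0, 1]
```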

Line 9 begins the fragment shader, which colors the triangle; since the whole triangle is a single solid color, every fragment is given the same fixed value (red).

Then we continue with the code below:

    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();
    // load the library used to compile GLSL
    const glslang = await glslangModule();
    // get the WebGPU context
    const context = canvas.getContext('gpupresent');

glslangModule on line 4 is imported from a third-party library:

    import glslangModule from '../glslang';

Reading on:

    // define the swap chain format: 8-bit-per-channel BGRA, unsigned normalized
    const swapChainFormat = "bgra8unorm";

    // @ts-ignore:
    const swapChain: GPUSwapChain = context.configureSwapChain({
      device,
      format: swapChainFormat,
    });

@ts-ignore tells the TypeScript compiler to ignore the error on the next line. The declared type of context is RenderingContext, which does not define a configureSwapChain method, so TypeScript would otherwise report an error on that line.

Line 5 configures the swap chain. The Vulkan tutorial explains the concept:
the swap chain is a buffering structure; WebGPU first renders into a swap chain buffer, which is then displayed on the screen.
In essence, the swap chain is a queue of images waiting to be presented on the screen.
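One simple mental model of this (an illustrative sketch only, using a made-up FakeSwapChain class, not the real API) is a first-in-first-out queue of images:

```typescript
// A hypothetical mental model (not the real WebGPU API): the swap chain
// as a FIFO queue of images waiting to be presented on screen.
class FakeSwapChain {
  private queue: string[] = [];
  private nextId = 0;

  // Rendering targets a fresh image, which then joins the presentation queue.
  getCurrentTexture(): string {
    const image = `image-${this.nextId++}`;
    this.queue.push(image);
    return image;
  }

  // Presentation takes the oldest queued image and shows it.
  present(): string | undefined {
    return this.queue.shift();
  }
}

const chain = new FakeSwapChain();
chain.getCurrentTexture();    // render into "image-0"
chain.getCurrentTexture();    // render into "image-1"
console.log(chain.present()); // "image-0" is presented first
```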

The next step is to create a render pipeline

    const pipeline = device.createRenderPipeline({
      layout: device.createPipelineLayout({ bindGroupLayouts: [] }),

      vertexStage: {
        module: device.createShaderModule({
          code: glslang.compileGLSL(vertexShaderGLSL, "vertex"),

          // @ts-ignore
          source: vertexShaderGLSL,
          transform: source => glslang.compileGLSL(source, "vertex"),
        }),
        entryPoint: "main"
      },
      fragmentStage: {
        module: device.createShaderModule({
          code: glslang.compileGLSL(fragmentShaderGLSL, "fragment"),

          // @ts-ignore
          source: fragmentShaderGLSL,
          transform: source => glslang.compileGLSL(source, "fragment"),
        }),
        entryPoint: "main"
      },

      primitiveTopology: "triangle-list",

      colorStates: [{
        format: swapChainFormat,
      }],
    });

Learn the pipeline

WebGPU has two kinds of pipeline: the render pipeline and the compute pipeline. Only the render pipeline is used here.

The code creates the render pipeline from a render pipeline descriptor, which is defined as follows:

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    required GPUPipelineLayout layout;
};

...

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    unsigned long sampleCount = 1;
    unsigned long sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
    // TODO: other properties
};

The render pipeline lets you specify the bound-resource layout, the compiled shaders, and the fixed-function state (such as the blend formats in colorStates, vertexState, depth/stencil state, cullMode, and the vertex data layout) in one place. Compared with WebGL, where each piece of state is set by a separate API call (e.g. gl.cullFace sets the cull mode), this improves performance (state is set statically up front rather than at run time) and makes state easier to manage (related settings are grouped together).

Analyze the render pipeline descriptor

vertexStage and fragmentStage set the vertex shader and fragment shader respectively:
the third-party library compiles the GLSL into bytecode (SPIR-V format);
the source and transform fields are redundant and can be removed.

Because the shaders bind no resources (such as uniform buffers or textures), bindGroupLayouts on line 2 is an empty array, and no bind group or bind group layout is needed.

primitiveTopology on line 25 specifies the primitive topology; here it is a triangle list.
It can take the following values:

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};
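As an aside (a plain TypeScript sketch with a made-up assemble function, not part of WebGPU), the difference between "triangle-list" and "triangle-strip" is how consecutive vertices are grouped into triangles:

```typescript
// Group a flat list of vertex indices into triangles, per topology.
function assemble(
  indices: number[],
  topology: "triangle-list" | "triangle-strip"
): number[][] {
  const tris: number[][] = [];
  if (topology === "triangle-list") {
    // every 3 vertices form an independent triangle
    for (let i = 0; i + 2 < indices.length; i += 3) {
      tris.push([indices[i], indices[i + 1], indices[i + 2]]);
    }
  } else {
    // each new vertex forms a triangle with the previous two
    for (let i = 0; i + 2 < indices.length; i += 1) {
      tris.push([indices[i], indices[i + 1], indices[i + 2]]);
    }
  }
  return tris;
}

console.log(assemble([0, 1, 2, 3], "triangle-list"));  // [[0, 1, 2]] (vertex 3 is leftover)
console.log(assemble([0, 1, 2, 3], "triangle-strip")); // [[0, 1, 2], [1, 2, 3]]
```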

We ignore colorStates for now.

We continue with the rest of the code, which defines the frame function.

frame contains the logic executed every frame:

    function frame() {
      const commandEncoder = device.createCommandEncoder({});
      const textureView = swapChain.getCurrentTexture().createView();

      const renderPassDescriptor: GPURenderPassDescriptor = {
        colorAttachments: [{
          attachment: textureView,
          loadValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
        }],
      };

      const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
      passEncoder.setPipeline(pipeline);
      passEncoder.draw(3, 1, 0, 0);
      passEncoder.endPass();

      device.defaultQueue.submit([commandEncoder.finish()]);
    }

    return frame;

Learn the command buffer

We cannot operate on a command buffer directly; we must create a command encoder, use it to record commands (such as a render pass's draw) into the command buffer, and then submit the command buffer to the GPU driver's queue.

According to the WebGPU design document "Command Submission":

Command buffers carry sequences of user commands on the CPU side. They can be recorded independently of the work done on GPU, or each other. They go through the following stages:
creation -> "recording" -> "ready" -> "executing" -> done

So a command buffer has five states:
creation, recording, ready, executing, and done.

Combining the document with the code, the command buffer's life cycle is as follows:
when line 2 creates the command encoder, a command buffer should be created in the creation state;
line 12 begins the render pass (WebGPU also supports compute passes, but none is used here), and the command buffer enters the recording state;
lines 13-14 record the "set pipeline" and "draw" commands into the command buffer;
line 15 ends the render pass (another pass, such as a compute pass, could follow, but there is only one pass here);
line 17, commandEncoder.finish(), moves the command buffer to the ready state;
submit then moves it to the executing state and hands it to the GPU driver's queue, after which the CPU side can no longer operate on it;
if the submission succeeds, the GPU processes it at some point of its own choosing.

Analyze the render pass

renderPassDescriptor on line 5 describes the render pass; its type is defined as:

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
};

Only colorAttachments is used here. It is similar to the color attachments of a WebGL framebuffer. This example uses a single color attachment.

Let's look at the definition of a color attachment:

dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};

The attachment field is set to the texture view obtained from the swap chain:

          attachment: textureView,

We ignore resolveTarget for now.

loadValue and storeOp decide how the attachment's data is handled before and after rendering.
Let's look at their types:

enum GPULoadOp {
    "load"
};
enum GPUStoreOp {
    "store",
    "clear"
};

...
dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

If loadValue is of type GPULoadOp, it has only one possible value, "load", which means the attachment's existing data is kept before rendering;
if it is of type GPUColor (as here: {r: 0.0, g: 0.0, b: 0.0, a: 1.0}), the attachment is instead cleared to that value before rendering, similar to WebGL's clearColor.

If storeOp is "store", the rendered contents are saved to memory and can be read back later;
if it is "clear", the rendered contents are discarded.
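A minimal sketch of the loadValue semantics described above (plain TypeScript, with a made-up beginPass helper that treats an "attachment" as just an array of colors, not the real API):

```typescript
// Hypothetical model: an "attachment" is just an array of RGBA colors.
type Color = { r: number; g: number; b: number; a: number };

// Before a pass begins, loadValue decides the attachment's initial contents:
// "load" keeps the previous data; a GPUColor clears every pixel to that color.
function beginPass(attachment: Color[], loadValue: "load" | Color): Color[] {
  if (loadValue === "load") return attachment.slice(); // keep previous data
  return attachment.map(() => ({ ...loadValue }));     // clear to the color
}

const prev: Color[] = [
  { r: 1, g: 0, b: 0, a: 1 },
  { r: 0, g: 1, b: 0, a: 1 },
];

const kept = beginPass(prev, "load");
const cleared = beginPass(prev, { r: 0, g: 0, b: 0, a: 1 });

console.log(kept[0]);    // { r: 1, g: 0, b: 0, a: 1 }
console.log(cleared[0]); // { r: 0, g: 0, b: 0, a: 1 }
```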

Now let's look back at colorStates in the render pipeline:

      colorStates: [{
        format: swapChainFormat,
      }],

colorStates corresponds one-to-one with colorAttachments; there is only one here, and its format must match the swap chain's format.

We continue with the render pass code:

      const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
      passEncoder.setPipeline(pipeline);
      passEncoder.draw(3, 1, 0, 0);
      passEncoder.endPass();

draw is defined as:

void draw(unsigned long vertexCount, unsigned long instanceCount,
              unsigned long firstVertex, unsigned long firstInstance);

A triangle has three vertices, we draw only one instance, and both the first vertex and the first instance start from 0 (which is why gl_VertexIndex in the vertex shader takes the values 0, 1, 2 in turn), hence the call draw(3, 1, 0, 0).
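To illustrate (a plain TypeScript sketch, not part of WebGPU), the vertex-shader invocations that a call like draw(vertexCount, instanceCount, firstVertex, firstInstance) conceptually produces can be enumerated as:

```typescript
// Hypothetical sketch: enumerate the (instanceIndex, vertexIndex) pairs
// that a non-indexed draw call conceptually produces.
function draw(
  vertexCount: number,
  instanceCount: number,
  firstVertex: number,
  firstInstance: number
): Array<[number, number]> {
  const invocations: Array<[number, number]> = [];
  for (let i = firstInstance; i < firstInstance + instanceCount; i++) {
    for (let v = firstVertex; v < firstVertex + vertexCount; v++) {
      invocations.push([i, v]); // one vertex-shader invocation each
    }
  }
  return invocations;
}

// draw(3, 1, 0, 0): one instance, three vertices -> gl_VertexIndex 0, 1, 2
console.log(draw(3, 1, 0, 0)); // [[0, 0], [0, 1], [0, 2]]
```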

The final rendering result:

(Screenshot: a red triangle rendered on a black background.)

Reference material

webgpu-samples GitHub repo
Vulkan Tutorial
WebGPU design documents: Command Submission
WebGPU-4


Origin www.cnblogs.com/chaogex/p/11993144.html