1. Basic knowledge of WebGPU

This article will try to introduce you to the basics of WebGPU.

Before reading this article, you should have a basic understanding of mapping arrays, destructuring assignment, spreading values, async/await, and ES6 modules, because they will be used extensively below.

If you already know WebGL, please read this article.

WebGPU is an API that allows you to perform 2 basic operations.

  1. Draw triangles/points/lines to textures
  2. Run calculations on the GPU

that's it!

After that everything about WebGPU is up to you. It's like learning a computer language like JavaScript, Rust, or C++. First, you learn the fundamentals, and then it's up to you to use those fundamentals creatively to solve your problems.

WebGPU is a very low-level API. While you can make small examples, for many applications it can require a lot of code and some serious data organization. For example, three.js, which supports WebGPU, is ~600k of minified JavaScript, and that's just its base library. It does not include loaders, controls, post-processing, and many other features.

The point is, if you just want to get something on the screen, you're far better off choosing a library that provides the large amount of code you would otherwise have to write yourself.

On the other hand, maybe you have a custom use case, or maybe you want to modify an existing library, or maybe you're just curious about how it works. If this is the case, read on!

1. Getting Started

It's hard to decide where to start. At a certain level, WebGPU is a very simple system. All it does is run 3 types of functions on the GPU: vertex shaders, fragment shaders, and compute shaders.

A vertex shader computes vertices. The shader returns vertex positions. For every group of 3 positions returned, a triangle is drawn between those 3 positions [see note 1].

A fragment shader computes colors [see note 2]. When a triangle is drawn, the GPU calls your fragment shader for each pixel to be drawn. The fragment shader then returns a color.

A compute shader is more generic. It's effectively just a function you call and say "execute this function N times". The GPU passes the iteration number each time it calls your function, so you can use that number to do something unique on each iteration.

If you squint, you can think of these functions as similar to the functions you pass to array.forEach or array.map. The functions you run on the GPU are just functions, like JavaScript functions. The difference is that they run on the GPU, so to run them you need to copy all the data you want them to access to the GPU in the form of buffers and textures, and they only output to those buffers and textures. In the functions you specify the bindings or locations where the function will look for the data. And, back in JavaScript, you bind the buffers and textures holding your data to those bindings or locations. Once you've done that, you tell the GPU to execute the function.
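
As a rough, runnable analogy in plain JavaScript (no real GPU or WebGPU API involved; every name here is made up for illustration): the "shader" only sees data that was explicitly placed in the "buffers" bound to it, and it only writes to another bound "buffer".

// our "GPU-side" resources (plain arrays standing in for buffers)
const inputBuffer = Float32Array.from([1, 2, 3]);   // data copied "to the GPU"
const outputBuffer = new Float32Array(3);           // pre-allocated output

// the "shader": it gets an iteration number and only touches bound buffers
function shaderLikeFn(i, bindings) {
  bindings.output[i] = bindings.input[i] * 2;
}

// "bind" the buffers, then "execute this function N times"
const bindings = { input: inputBuffer, output: outputBuffer };
for (let i = 0; i < 3; ++i) {
  shaderLikeFn(i, bindings);
}

console.log(outputBuffer);  // Float32Array [2, 4, 6]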

Maybe a picture will help. Here is a simplified diagram of a WebGPU setup for drawing a triangle using a vertex shader and a fragment shader.

[diagram: simplified WebGPU setup for drawing a triangle]

Things to notice in this diagram:

  • This is a render pipeline. It contains the vertex shader and fragment shader the GPU will run. You can also have pipelines with compute shaders.
  • The shaders reference resources (buffers, textures, samplers) indirectly through bind groups.
  • The pipeline defines attributes that reference buffers indirectly through the internal state.
  • Attributes pull data out of buffers and feed the data into the vertex shader.
  • The vertex shader may feed data into the fragment shader.
  • The fragment shader writes to textures indirectly through the render pass description.

To execute shaders on the GPU, you need to create all these resources and set this state. The creation of resources is relatively simple. One interesting thing is that most WebGPU resources cannot be changed after they are created. You can change their content, but not their size, usage, format, etc... If you want to change anything, you create a new resource and destroy the old one.
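
For example, you can't resize a buffer. To "resize" one, you create a new, larger buffer, copy the old contents over, and destroy the old one. A minimal sketch, assuming device is a GPUDevice and oldBuffer is an existing GPUBuffer created with the COPY_SRC usage:

// "Resizing" a buffer really means making a new one and copying into it.
const newBuffer = device.createBuffer({
  label: 'bigger replacement buffer',
  size: oldBuffer.size * 2,   // the new, larger size in bytes
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});

// copy the old contents into the new buffer on the GPU
const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(oldBuffer, 0, newBuffer, 0, oldBuffer.size);
device.queue.submit([encoder.finish()]);

// the old buffer is no longer needed
oldBuffer.destroy();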

Some state is set up by creating command buffers and then executing them. A command buffer is what its name suggests: a buffer of commands. You create an encoder, and the encoder encodes commands into the command buffer. You then finish the encoder, which gives you the command buffer it created. You can then submit that command buffer and have WebGPU execute the commands.

Below is some pseudocode to encode a command buffer, followed by a representation of the command buffer created.

encoder = device.createCommandEncoder()
// draw something
{
  pass = encoder.beginRenderPass(...)
  pass.setPipeline(...)
  pass.setVertexBuffer(0, ...)
  pass.setVertexBuffer(1, ...)
  pass.setIndexBuffer(...)
  pass.setBindGroup(0, ...)
  pass.setBindGroup(1, ...)
  pass.draw(...)
  pass.end()
}
// draw something else
{
  pass = encoder.beginRenderPass(...)
  pass.setPipeline(...)
  pass.setVertexBuffer(0, ...)
  pass.setBindGroup(0, ...)
  pass.draw(...)
  pass.end()
}
// compute something
{
  pass = encoder.beginComputePass(...)
  pass.setBindGroup(0, ...)
  pass.setPipeline(...)
  pass.dispatchWorkgroups(...)
  pass.end()
}
commandBuffer = encoder.finish()

[diagram: representation of the encoded command buffer]

After creating the command buffer, you can submit it for execution

device.queue.submit([commandBuffer]);

The diagram above represents the state some draw commands in the command buffer set up. Executing the commands will set up the internal state, then the draw command will tell the GPU to execute the vertex shader (and indirectly the fragment shader). The dispatchWorkgroups command will tell the GPU to execute a compute shader.

I hope this gives some overview of the state you need to set up. As mentioned above, there are 2 basic things WebGPU can do:

  1. Draw triangles/points/lines to textures
  2. Run calculations on the GPU

We'll do each of these things with a small example. Other articles will show the various ways of providing data for these things. Note that this is going to be very basic: we need to build up a foundation of these fundamentals first. Later we'll show how to use them to do things people typically do with GPUs, like 2D graphics, 3D graphics, etc...

2. Drawing triangles to textures

WebGPU can draw triangles to textures. For the purposes of this article, a texture is a two-dimensional rectangle of pixels [see note 3]. A <canvas> element represents a texture on a web page. In WebGPU, we can request a texture from the canvas and render to that texture.

To draw triangles with WebGPU, we must supply 2 "shaders". Again, shaders are functions that run on the GPU. These 2 shaders are:

  1. Vertex shaders
     Vertex shaders are functions that compute vertex positions for drawing triangles/lines/points.

  2. Fragment shaders
     Fragment shaders are functions that compute the color (or other data) of each pixel to be drawn/rasterized when drawing triangles/lines/points.

Let's start with a very small WebGPU program that draws a triangle.

We need a canvas to display our triangle

<canvas></canvas>

Then we need a <script> tag to hold our JavaScript.

<canvas></canvas>
<script type="module">
 
... javascript goes here ...
 
</script>

All the JavaScript below goes inside this script tag.

WebGPU is an asynchronous API, so it's easiest to use inside an async function. We first request an adapter, and then request a device from the adapter.

async function main() {
  const adapter = await navigator.gpu?.requestAdapter();
  const device = await adapter?.requestDevice();
  if (!device) {
    fail('need a browser that supports WebGPU');
    return;
  }
}
main();

The code above is fairly self-explanatory. First we request an adapter using the ?. optional chaining operator, so if navigator.gpu does not exist, adapter will be undefined. If it does exist, then we call requestAdapter. requestAdapter returns its result asynchronously, so await is required. An adapter represents a specific GPU. Some devices have multiple GPUs.

We request the device from the adapter, but again using ?. so that if the adapter happens to be undefined, the device will be undefined as well.

If device is not set, it's likely the user is on an old browser (WebGPU is supported in Chrome 113 and later).

Next we look up the canvas and create a webgpu context for it. This will let us get a texture to render to, which will then be used to display the canvas on the web page.

  // Get a WebGPU context from the canvas and configure it
  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('webgpu');
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({
    device,
    format: presentationFormat,
  });

Again, the code above is fairly self-explanatory. We get a "webgpu" context from the canvas and ask the system what its preferred canvas format is. This will be either "rgba8unorm" or "bgra8unorm". It doesn't matter much which one it is, but querying it means we use the fastest format for the user's system.

We pass it as format into the webgpu canvas context by calling configure. We also pass in device which associates this canvas with the device we just created.

Next we create a shader module. A shader module contains one or more shader functions. In our case we will write 1 vertex shader function and 1 fragment shader function.

  const module = device.createShaderModule({
    label: 'our hardcoded red triangle shaders',
    code: `
      @vertex fn vs(
        @builtin(vertex_index) vertexIndex : u32
      ) -> @builtin(position) vec4f {
        var pos = array<vec2f, 3>(
          vec2f( 0.0,  0.5),  // top center
          vec2f(-0.5, -0.5),  // bottom left
          vec2f( 0.5, -0.5)   // bottom right
        );
 
        return vec4f(pos[vertexIndex], 0.0, 1.0);
      }
 
      @fragment fn fs() -> @location(0) vec4f {
        return vec4f(1.0, 0.0, 0.0, 1.0);
      }
    `,
  });

Shaders are written in a language called WebGPU Shading Language (WGSL), usually pronounced wig-sil. WGSL is a strongly typed language, which we will try to cover in more detail in another article. For now, I hope that with a little explanation you can infer the basics.

Above we saw that a function named vs was declared with the @vertex attribute. This specifies it as a vertex shader function.

      @vertex fn vs(
        @builtin(vertex_index) vertexIndex : u32
      ) -> @builtin(position) vec4f {
        ...

It takes one parameter named vertexIndex. vertexIndex is a u32, a 32-bit unsigned integer. It gets its value from the builtin vertex_index (that's what @builtin(vertex_index) means). vertex_index is like an iteration number, similar to index in JavaScript's Array.map(function(value, index) { ... }). If we tell the GPU to execute this function 10 times by calling draw, the first time vertex_index will be 0, the second time 1, the third time 2, and so on... [see note 4]

Our vs function is declared as returning a vec4f, which is a vector of four 32-bit floating point values. Think of it as an array of 4 values or an object with 4 properties like {x: 0, y: 0, z: 0, w: 0}. The return value will be assigned to the position builtin (that's what @builtin(position) means). In "triangle-list" mode, every 3 times the vertex shader is executed, a triangle is drawn connecting the 3 position values we return.

Positions in WebGPU need to be returned in clip space, where X goes from -1.0 on the left to +1.0 on the right, and Y goes from -1.0 at the bottom to +1.0 at the top. This is true regardless of the size of the texture we draw.
[diagram: clip space coordinates]
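
To make the mapping concrete, here is a small JavaScript sketch (not part of the example) of how a clip-space position corresponds to a pixel position for a texture of a given size. Note that clip space +Y is up, while pixel rows are usually counted down from the top.

// Convert a clip-space coordinate (-1..+1 on both axes) to a pixel coordinate.
function clipToPixel(clipX, clipY, textureWidth, textureHeight) {
  const pixelX = (clipX * 0.5 + 0.5) * textureWidth;
  const pixelY = (1 - (clipY * 0.5 + 0.5)) * textureHeight;  // Y flips
  return [pixelX, pixelY];
}

console.log(clipToPixel( 0.0,  0.5, 300, 150));  // [150, 37.5]  top center vertex
console.log(clipToPixel(-0.5, -0.5, 300, 150));  // [75, 112.5]  bottom left vertex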

The vs function declares an array of 3 vec2f's. Each vec2f consists of two 32-bit floating point values. The code then fills the array with 3 vec2f's.

        var pos = array<vec2f, 3>(
          vec2f( 0.0,  0.5),  // top center
          vec2f(-0.5, -0.5),  // bottom left
          vec2f( 0.5, -0.5)   // bottom right
        );

Finally, it uses vertexIndex to return one of the 3 values from the array. Since the function requires 4 floating point values for its return type, and since pos is an array of vec2f, the code supplies 0.0 and 1.0 for the remaining 2 values.

        return vec4f(pos[vertexIndex], 0.0, 1.0);

The shader module also declares a function called fs, which is declared with the @fragment attribute, making it a fragment shader function.

      @fragment fn fs() -> @location(0) vec4f {

This function takes no arguments and returns a vec4f at location(0). This means it will be written to the first render target. We'll use the first render target as our canvas later on.

        return vec4f(1, 0, 0, 1);

The code returns 1, 0, 0, 1, which is red. Colors in WebGPU are usually specified as floating point values from 0.0 to 1.0, where the 4 values above correspond to red, green, blue, and alpha respectively.

When the GPU rasterizes a triangle (draws it with pixels), it calls the fragment shader to find out the color of each pixel. In our case we just return red.

Another thing to note is the label. Almost every object you can create with WebGPU can take a label. Labels are completely optional, but it's a good idea to label everything you make. The reason is that when you encounter an error, most WebGPU implementations will print an error message that includes the label of the thing related to the error.

In a normal application you'd have hundreds or thousands of buffers, textures, shader modules, pipelines, etc... If you get an error like "WGSL syntax error in shaderModule at line 10" and you have 100 shader modules, which one has the error? If you label the module, you'll get an error more like "WGSL syntax error in shaderModule('our hardcoded red triangle shaders') at line 10", which is a far more useful error message and will save you a ton of time tracking down the issue.

Now that we have created a shader module, we next need to make a render pipeline.

  const pipeline = device.createRenderPipeline({
    label: 'our hardcoded red triangle pipeline',
    layout: 'auto',
    vertex: {
      module,
      entryPoint: 'vs',
    },
    fragment: {
      module,
      entryPoint: 'fs',
      targets: [{ format: presentationFormat }],
    },
  });

In this case there isn't much to see. We set layout to 'auto', which means we ask WebGPU to derive the data layout from the shaders. We aren't using any data, though.

We then tell the pipeline to use the vs function in the shader module for the vertex shader and the fs function for the fragment shader. Additionally, we tell it the format of the first render target. A "render target" means the texture we will render to. When we create a pipeline, we have to specify the format of the texture(s) this pipeline will eventually be used to render to.

Element 0 of the targets array corresponds to the location(0) we specified for the fragment shader's return value. Later, we set that target to the texture of the canvas.

Next we prepare a GPURenderPassDescriptor that describes which textures we want to draw to and how to use them.

  const renderPassDescriptor = {
    label: 'our basic canvas renderPass',
    colorAttachments: [
      {
        // view: <- to be filled out when we render
        clearValue: [0.3, 0.3, 0.3, 1],
        loadOp: 'clear',
        storeOp: 'store',
      },
    ],
  };

A GPURenderPassDescriptor has an array of colorAttachments that lists the textures we will render to and how to treat them. We'll wait to fill in which texture we actually want to render to. For now, we set up a clear value of semi-dark gray, a loadOp, and a storeOp. loadOp: 'clear' specifies that the texture should be cleared to the clear value before drawing. The other option is 'load', which means load the existing contents of the texture into the GPU so we can draw over what's already there. storeOp: 'store' means store the result of what we draw. We could also pass 'discard', which throws away what we draw. We'll cover why we might want to do that in another article.
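
For comparison, here is a sketch of the same color attachment set up to draw on top of whatever is already in the texture, instead of clearing it first (just swapping the options described above):

  const colorAttachmentThatLoads = {
    // view: <- still filled out at render time
    loadOp: 'load',    // keep the texture's existing contents
    storeOp: 'store',  // keep the result of this pass
  };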

Now it's time to render.


  function render() {
    // Get the current texture from the canvas context and
    // set it as the texture to render to.
    renderPassDescriptor.colorAttachments[0].view =
        context.getCurrentTexture().createView();

    // make a command encoder to start encoding commands
    const encoder = device.createCommandEncoder({ label: 'our encoder' });

    // make a render pass encoder to encode render specific commands
    const pass = encoder.beginRenderPass(renderPassDescriptor);
    pass.setPipeline(pipeline);
    pass.draw(3);  // call our vertex shader 3 times
    pass.end();

    const commandBuffer = encoder.finish();
    device.queue.submit([commandBuffer]);
  }

  render();
 
  render();

First we call context.getCurrentTexture() to get the texture that will appear in the canvas. Calling createView lets you view a specific part of the texture, but without arguments it will return the default part, which is what we want in this case. In this case, our only colorAttachment is the texture view from the canvas, which we get through the context created at the beginning. Likewise, element 0 of the colorAttachments array corresponds to the location(0) we specified for the fragment shader's return value.

Next we create a command encoder. A command encoder is used to create a command buffer. We use it to encode commands and then "submit" the command buffer it created to have the commands executed.

We then use the command encoder to create a render pass encoder by calling beginRenderPass. Render pass encoders are specific encoders used to create rendering-related commands. We pass it a renderPassDescriptor to tell it which texture we want to render to.

We encode the command setPipeline to set our pipeline, then tell it to execute our vertex shader 3 times by calling draw with 3. By default, every 3 times our vertex shader is executed, a triangle will be drawn by connecting the 3 values just returned from the vertex shader.
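
For example, if the shader's pos array held 6 positions instead of 3, calling draw with 6 would run the vertex shader 6 times and draw 2 triangles. A hypothetical sketch, not part of the example above:

    // Hypothetical: assumes the vertex shader has 6 hard-coded positions.
    // In 'triangle-list' mode, 6 vertex shader invocations = 2 triangles.
    pass.draw(6);

    // draw also accepts optional arguments:
    // pass.draw(vertexCount, instanceCount, firstVertex, firstInstance);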

Finally we end the render pass, and then end the encoder. This gives us a command buffer representing the steps we just specified. Finally we submit the command buffer to be executed.

This will be our state when the draw command is executed:

[diagram: state when the draw command executes]

We have no textures, no buffers, no binding groups, but we have a pipeline, a vertex and fragment shader, and a render pass descriptor which tells our shader to render to the canvas texture.

Here is all the code, followed by the result of running it:


<canvas></canvas>
<script type="module">
async function main() {
  const adapter = await navigator.gpu?.requestAdapter();
  const device = await adapter?.requestDevice();
  if (!device) {
    fail('need a browser that supports WebGPU');
    return;
  }

  // Get a WebGPU context from the canvas and configure it
  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('webgpu');
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({
    device,
    format: presentationFormat,
  });

  const module = device.createShaderModule({
    label: 'our hardcoded red triangle shaders',
    code: `
      @vertex fn vs(
        @builtin(vertex_index) vertexIndex : u32
      ) -> @builtin(position) vec4f {
        var pos = array<vec2f, 3>(
          vec2f( 0.0,  0.5),  // top center
          vec2f(-0.5, -0.5),  // bottom left
          vec2f( 0.5, -0.5)   // bottom right
        );

        return vec4f(pos[vertexIndex], 0.0, 1.0);
      }

      @fragment fn fs() -> @location(0) vec4f {
        return vec4f(1, 0, 0, 1);
      }
    `,
  });

  const pipeline = device.createRenderPipeline({
    label: 'our hardcoded red triangle pipeline',
    layout: 'auto',
    vertex: {
      module,
      entryPoint: 'vs',
    },
    fragment: {
      module,
      entryPoint: 'fs',
      targets: [{ format: presentationFormat }],
    },
  });

  const renderPassDescriptor = {
    label: 'our basic canvas renderPass',
    colorAttachments: [
      {
        // view: <- to be filled out when we render
        clearValue: [0.3, 0.3, 0.3, 1],
        loadOp: 'clear',
        storeOp: 'store',
      },
    ],
  };

  function render() {
    // Get the current texture from the canvas context and
    // set it as the texture to render to.
    renderPassDescriptor.colorAttachments[0].view =
        context.getCurrentTexture().createView();

    // make a command encoder to start encoding commands
    const encoder = device.createCommandEncoder({ label: 'our encoder' });

    // make a render pass encoder to encode render specific commands
    const pass = encoder.beginRenderPass(renderPassDescriptor);
    pass.setPipeline(pipeline);
    pass.draw(3);  // call our vertex shader 3 times.
    pass.end();

    const commandBuffer = encoder.finish();
    device.queue.submit([commandBuffer]);
  }

  render();
}

function fail(msg) {
  // eslint-disable-next-line no-alert
  alert(msg);
}

main();
</script>

[screenshot: a red triangle on a dark gray background]

It's important to emphasize that all of these functions we call, like setPipeline and draw, only add commands to the command buffer. They don't actually execute the commands. The commands are executed when we submit the command buffer to the device queue.

So, now we've seen a very small working WebGPU example. It should be pretty obvious that hard-coding a triangle inside a shader is not very flexible. We need ways to provide data, and we'll cover those in the following articles. The points to take away from the code above:

  • WebGPU just runs shaders. It's up to you to fill them with code to do useful things.
  • Shaders are specified in shader modules, which are then used in a render pipeline.
  • WebGPU can draw triangles.
  • WebGPU draws to textures (we happened to get our texture from the canvas).
  • WebGPU works by encoding commands and then submitting them.

3. Run computations on the GPU

Let's write a basic example that does some calculations on the GPU

We start with the same code to get the WebGPU device

async function main() {
  const adapter = await navigator.gpu?.requestAdapter();
  const device = await adapter?.requestDevice();
  if (!device) {
    fail('need a browser that supports WebGPU');
    return;
  }

Then we create a shader module

  const module = device.createShaderModule({
    label: 'doubling compute module',
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
 
      @compute @workgroup_size(1) fn computeSomething(
        @builtin(global_invocation_id) id: vec3<u32>
      ) {
        let i = id.x;
        data[i] = data[i] * 2.0;
      }
    `,
  });

First, we declare a variable called data, of type storage, which we want to be able to read and write.

      @group(0) @binding(0) var<storage, read_write> data: array<f32>;

We declare its type as array<f32>, which means an array of 32-bit floating point values. We tell it we're going to specify this array on binding location 0 (the binding(0)) in bind group 0 (the @group(0)).

We then declare a function called computeSomething with the @compute attribute, making it a compute shader.

      @compute @workgroup_size(1) fn computeSomething(
        @builtin(global_invocation_id) id: vec3u
      ) {
        ...

Compute shaders are required to declare a workgroup size, which we will cover later. For now we'll just set it to 1 with the attribute @workgroup_size(1). We declare it to have one parameter, id, which is a vec3u. A vec3u is three unsigned 32-bit integer values. Like our vertex shader above, this is the iteration number. The difference is that compute shader iteration numbers are 3-dimensional (have 3 values). We declare id to get its value from the builtin global_invocation_id.

You can think of compute shaders as working like this. This is an oversimplification, but it will do for now.

// pseudo code
for (z = 0; z < depth; ++z) {
  for (y = 0; y < height; ++y) {
    for (x = 0; x < width; ++x) {
      const global_invocation_id = { x, y, z };
      computeShaderFn(global_invocation_id);
    }
  }
}

Finally we index data using the x attribute of id and multiply each value by 2

        let i = id.x;
        data[i] = data[i] * 2.0;

Above, i is just the first of 3 iteration numbers.

Now that we have created our shaders, we need to create a pipeline

  const pipeline = device.createComputePipeline({
    label: 'doubling compute pipeline',
    layout: 'auto',
    compute: {
      module,
      entryPoint: 'computeSomething',
    },
  });

Here we just tell it we're using the compute stage from the shader module we created and that we want to call the computeSomething function. layout is 'auto' again, telling WebGPU to figure out the layout from the shaders. [see note 5]

Next we need some data

  const input = new Float32Array([1, 3, 5]);

This data exists only in JavaScript. For WebGPU to use it, we need to create a buffer that exists on the GPU and copy the data into that buffer.

  // create a buffer on the GPU to hold our computation
  // input and output
  const workBuffer = device.createBuffer({
    label: 'work buffer',
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  // Copy our input data to that buffer
  device.queue.writeBuffer(workBuffer, 0, input);

Above we call device.createBuffer to create a buffer. size is the size in bytes; in this case it will be 12, because the byte size of a Float32Array with 3 values is 12. If you're not familiar with Float32Array and typed arrays, see this article.

Every WebGPU buffer we create must specify a usage. There are a bunch of flags we can pass, but not all of them can be used together. Here we say we want this buffer to be usable as storage by passing GPUBufferUsage.STORAGE. That makes it compatible with var<storage,...> in the shader. Additionally, we want to be able to copy data to this buffer, so we include the GPUBufferUsage.COPY_DST flag. And finally, we want to be able to copy data from the buffer, so we include GPUBufferUsage.COPY_SRC.

Note that you cannot directly read the contents of a WebGPU buffer from JavaScript. Instead you have to "map" it, which is another way of requesting access to the buffer from WebGPU, because the buffer might be in use and because it might only exist on the GPU.

WebGPU buffers that can be mapped in JavaScript can't be used for much else. In other words, we cannot map the buffer we just created above, and if we try to add the flag to make it mappable, we'll get an error that it is not compatible with usage STORAGE.
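
In other words, something like the following is invalid and would only produce a validation error (a sketch of what not to do; the exact error text varies by browser):

  // Invalid: a buffer can't combine STORAGE with MAP_READ.
  const badBuffer = device.createBuffer({
    label: 'invalid usage example',
    size: 12,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.MAP_READ,  // not allowed together
  });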

So, in order to see the result of our computation, we'll need another buffer. After running the computation, we'll copy the buffer above to this result buffer and set its flags so that we can map it.

  // create a buffer on the GPU to get a copy of the results
  const resultBuffer = device.createBuffer({
    label: 'result buffer',
    size: input.byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST
  });

MAP_READ means we want to be able to map this buffer for reading data.

In order to tell our shader which buffer we want it to work on, we need to create a bindGroup

  // Setup a bindGroup to tell the shader which
  // buffer to use for the computation
  const bindGroup = device.createBindGroup({
    label: 'bindGroup for work buffer',
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: workBuffer } },
    ],
  });

We get the layout for the bindGroup from the pipeline. Then we set up the bindGroup entries. The 0 in pipeline.getBindGroupLayout(0) corresponds to the @group(0) in the shader. The {binding: 0 ... of the entries corresponds to the @group(0) @binding(0) in the shader.

Now we can start encoding commands

  // Encode commands to do the computation
  const encoder = device.createCommandEncoder({
    label: 'doubling encoder',
  });
  const pass = encoder.beginComputePass({
    label: 'doubling compute pass',
  });
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(input.length);
  pass.end();

We create a command encoder. We start a compute pass. We set the pipeline, then we set the bindGroup. Here, the 0 in pass.setBindGroup(0, bindGroup) corresponds to @group(0) in the shader. We then call dispatchWorkgroups, and in this case we pass it input.length, which is 3, telling WebGPU to run the compute shader 3 times. We then end the pass.
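
Note that dispatchWorkgroups can take up to 3 dimensions, which is convenient for 2D or 3D data. A sketch, reusing the pass compute pass encoder from above and assuming a shader that indexes data with id.x and id.y:

  // one invocation per texel of a hypothetical 256x256 image
  // (with @workgroup_size(1), id.x runs 0..255 and id.y runs 0..255)
  pass.dispatchWorkgroups(256, 256);

  // the full 3D form
  pass.dispatchWorkgroups(16, 16, 4);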

Here is the state when dispatchWorkgroups is executed:

[diagram: state when dispatchWorkgroups executes]

After the computation is done, we ask WebGPU to copy from workBuffer to resultBuffer

  // Encode a command to copy the results to a mappable buffer.
  encoder.copyBufferToBuffer(workBuffer, 0, resultBuffer, 0, resultBuffer.size);

Now we can finish the encoder to get the command buffer and then submit that command buffer.

  // Finish encoding and submit the commands
  const commandBuffer = encoder.finish();
  device.queue.submit([commandBuffer]);


We then map the results buffer and get a copy of the data

  // Read the results
  await resultBuffer.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(resultBuffer.getMappedRange());
 
  console.log('input', input);
  console.log('result', result);
 
  resultBuffer.unmap();

To map the result buffer, we call mapAsync and have to await it to finish. Once mapped, we can call resultBuffer.getMappedRange(), which with no parameters will return an ArrayBuffer of the entire buffer. We put that in a Float32Array typed array view, and then we can look at the values. One important detail: the ArrayBuffer returned by getMappedRange is only valid until we call unmap. After unmap, its length will be set to 0 and its data will no longer be accessible.
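
If you need the values after unmapping, copy them out first, for example with slice(). A small sketch:

  await resultBuffer.mapAsync(GPUMapMode.READ);
  const view = new Float32Array(resultBuffer.getMappedRange());
  const copy = view.slice();   // slice() copies the data into a new Float32Array
  resultBuffer.unmap();        // `view` is now unusable, but `copy` still works

  console.log(copy);           // Float32Array [2, 6, 10]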

Running this, we can see we got the result: all the numbers have been doubled.

[screenshot: console output showing the input and the doubled result]

We'll cover how to actually use compute shaders in another article. By now, you hopefully have some understanding of what WebGPU can do. Everything else is up to you! Think of WebGPU like other programming languages. It provides some basic functionality, and the rest is up to you to get creative.

What makes WebGPU programming unique are these functions, Vertex Shaders, Fragment Shaders and Compute Shaders, which run on your GPU. A GPU can have over 10,000 processors, which means they can perform over 10,000 calculations in parallel, which is probably 3 or more orders of magnitude more than your CPU can do in parallel.

4. Simple Canvas Resizing

Before we continue, let's go back to our triangle drawing example and add some basic support for resizing the canvas. Resizing a canvas is actually a topic with so many potential subtleties that there is an entire article devoted to it. For now, let's just add some basic support.

First we add some CSS to make our canvas fill the page

<style>
html, body {
  margin: 0;       /* remove the default margin          */
  height: 100%;    /* make the html,body fill the page   */
}
canvas {
  display: block;  /* make the canvas act like a block   */
  width: 100%;     /* make the canvas fill its container */
  height: 100%;
}
</style>

The CSS above will make the canvas appear to cover the page, but it won't change the resolution of the canvas itself, so if you make the example below larger, for example by clicking the fullscreen button, you'll notice the edges of the triangle are blocky.

[screenshot: the triangle with blocky edges]

By default, a canvas has a resolution of 300x150 pixels. We want to adjust the canvas resolution to match the size it is displayed at. A good way to do this is with a ResizeObserver. You create a ResizeObserver and give it a function to call whenever the elements you've asked it to observe change their size. You then tell it which elements to observe.

    ...
    // render();  // <- this line is removed

    const observer = new ResizeObserver(entries => {
      for (const entry of entries) {
        const canvas = entry.target;
        const width = entry.contentBoxSize[0].inlineSize;
        const height = entry.contentBoxSize[0].blockSize;
        canvas.width = Math.min(width, device.limits.maxTextureDimension2D);
        canvas.height = Math.min(height, device.limits.maxTextureDimension2D);
        // re-render
        render();
      }
    });
    observer.observe(canvas);

In the code above, we iterate over all the entries, but there should only ever be one because we're only observing our canvas. We need to limit the size of the canvas to the largest size our device supports, otherwise WebGPU will start generating errors because we'd be trying to make a texture that is too large.

We call render to re-render the triangle at the new resolution. We removed the old call to render because it wasn't needed. The ResizeObserver will always call the callback at least once to report the element's size when the element begins to be observed.

A texture with the new size is created when we call context.getCurrentTexture() in render, so there is nothing else to do.

[screenshot: the resized canvas example]

In the next articles, we'll cover the various ways of passing data to shaders.

Then we'll cover the basics of WGSL.

I'm a little worried these articles will be boring at first. Feel free to jump around if you want. Just remember that if you don't understand something, you may need to read or review these basics. Once we get the basics down, we'll start going over actual techniques.

One other thing: all the sample programs can be edited live on the web page. They can also easily be exported to jsfiddle, codepen, and even stackoverflow. Just click "Export".

The code above gets a WebGPU device in a very terse way. A more verbose way would be something like this:

async function start() {
  if (!navigator.gpu) {
    fail('this browser does not support WebGPU');
    return;
  }

  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    fail('this browser supports webgpu but it appears disabled');
    return;
  }

  const device = await adapter?.requestDevice();
  device.lost.then((info) => {
    console.error(`WebGPU device was lost: ${info.message}`);

    // 'reason' will be 'destroyed' if we intentionally destroy the device.
    if (info.reason !== 'destroyed') {
      // try again
      start();
    }
  });

  main(device);
}
start();

function main(device) {
  ... do webgpu ...
}

device.lost is a promise that starts off unresolved. It will resolve if and when the device is lost. There are many reasons a device can be lost. Maybe the user ran a very intensive application and it crashed their GPU. Maybe the user updated their drivers. Maybe the user had an external GPU and unplugged it. Maybe another page used a lot of GPU, your tab was in the background, and the browser decided to free up some memory by losing the device for background tabs. The point is that for any serious application, you probably want to handle losing the device.

Note that requestDevice always returns a device. It's just possible that the device starts out lost. WebGPU is designed so that, for the most part, the device will appear to work, at least from the API level. Calls to create things and use them will appear to succeed, but they won't actually function. It's up to you to decide what to do when the lost promise resolves.

5. Notes

[ Note 1 ] There are actually 5 modes.

'point-list' : for each position, draw a point
'line-list' : for every 2 positions, draw a line
'line-strip' : draw a line connecting the latest point with the previous point
'triangle-list' : for every 3 positions, draw a triangle (default)
'triangle-strip' : for each new position, draw a triangle from it and the last two positions
↩︎
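
The mode is chosen when creating a render pipeline, via the primitive option. A sketch, reusing the module and presentationFormat from the example above:

const linePipeline = device.createRenderPipeline({
  label: 'line-list pipeline',
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: { module, entryPoint: 'fs', targets: [{ format: presentationFormat }] },
  primitive: { topology: 'line-list' },   // draw a line for every 2 positions
});
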
[ Note 2 ]
Fragment shaders write data to textures indirectly. That data does not have to be a color. For example, it's common to output the direction the surface a pixel represents is facing. ↩︎

[ Note 3 ]
Textures can also be 3d rectangles of pixels, cubemaps (6 pixel squares forming a cube) and a few other things, but the most common textures are 2d rectangles of pixels. ↩︎

[ Note 4 ]
We can also use an index buffer to specify vertex_index. This is covered in the article on vertex buffers. ↩︎

[ Note 5 ]
layout: 'auto' is convenient, but with layout: 'auto' it is not possible to share bind groups across pipelines. Most of the examples on this site never use a bind group with multiple pipelines. We'll cover explicit layouts in another article. ↩︎
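
As a preview, an explicit layout looks roughly like this sketch (matching the compute example's @group(0) @binding(0) storage buffer; the details are left for that article):

// the bind group layout is created once and can be shared by multiple
// pipelines and bind groups
const bindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,                          // matches @binding(0)
    visibility: GPUShaderStage.COMPUTE,  // which shader stages can see it
    buffer: { type: 'storage' },         // matches var<storage, read_write>
  }],
});

const pipelineLayout = device.createPipelineLayout({
  bindGroupLayouts: [bindGroupLayout],   // index 0 matches @group(0)
});

// pass `pipelineLayout` as `layout` instead of 'auto' when creating pipelines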


Origin: blog.csdn.net/xuejianxinokok/article/details/130820952