Three.js WebXR Immersive Rendering Concise Tutorial

In the previous article, we learned about VR concepts and how they map to WebXR. That background should help you think about the experience you want to provide your users. In this article, we describe how to use WebXR with Three.JS to create immersive experiences for large and heterogeneous user bases.

Warning: The WebXR API is still being polished (the first public working draft has only just been released), so I'll do my best to update this series to reflect the changes and keep these articles evergreen. If new features in the WebXR Device API specification simplify the code, I will update this article accordingly.



1. A quick overview of the Three.js rendering pipeline

I won't spend too much time discussing how the Three.JS rendering pipeline works, as it's well documented on the internet (e.g., here). I'll lay out the basics in the diagram below to make it easier to understand where the various pieces fit.

(Diagram: overview of the Three.js rendering pipeline)

2. Getting Started with the WebXR Device API

Before we dive into the WebXR API itself, you should know that the WebXR Device API Polyfill helps developers in two main ways:

If the device/browser doesn't support the WebXR Device API, it will attempt to polyfill it using available sensors such as the gyroscope and accelerometer, allowing developers to provide a basic Cardboard-style experience or inline rendering.

If the browser supports the legacy WebVR API, it will implement the WebXR Device API on top of WebVR, allowing developers to take advantage of all the work done to support WebVR in the first place (and thus of the VR runtime underneath it).
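
Wiring up the polyfill usually amounts to loading it before any feature detection runs. Here's a minimal sketch, assuming the `webxr-polyfill` npm package; `needsPolyfill` is a hypothetical helper name, not part of any API:

```javascript
// Hypothetical helper: native WebXR support exposes `navigator.xr`,
// so the polyfill is only needed when that property is absent.
function needsPolyfill(nav) {
  return !('xr' in nav);
}

// In a real page you would then conditionally instantiate the polyfill:
//   import WebXRPolyfill from 'webxr-polyfill';
//   if (needsPolyfill(navigator)) { new WebXRPolyfill(); }
```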

The main types of 3D experiences a user can enter include:

  • Desktop, keyboard/mouse based, without any immersive support.
  • Inline rendering or magic-window mode, driven by your phone's sensors. Inline rendering is a great way to "tease" the user with your content: show them a preview of your experience and, hopefully, get them to click a button and enter a more immersive experience inside an HMD. Here is an example:

(Screenshot: inline rendering / magic-window example)

  • Immersive VR with a dedicated VR system, mobile-based VR, or a cardboard-like experience.

The WebXR Device API is based on the concept of sessions: you request a session, and the browser takes care of starting rendering in the HMD. When you end the session, rendering stops inside the HMD, and you can resume rendering on the page as usual. There are 3 types of sessions: immersive VR, immersive AR (not covered in this article), and inline.

The very simple flowchart below can help you decide which experience to provide based on factors such as device capabilities or WebXR Device API support.

(Flowchart: choosing which experience to provide)

3. Request a WebXR session and render the content

This section describes the high-level flow of the steps required to request a session and render content using the WebXR Device API. We'll go into detail in some steps, focusing on the progressive aspects rather than the rendering itself.

In this article, we will refer to a simple demo here. It uses WebXR and Three.JS and is the basis for the code snippets in this article. The full source is here.

4. Check WebXR support

It's not uncommon to find browsers or devices that don't support the WebXR Device API (even with the WebXR polyfill). In that case, rather than presenting the user with an empty page, consider a keyboard- and mouse-based fallback so they can still navigate through the experience (much like a 3D game).

In Three.JS, supporting this is relatively simple. You can use the PointerLockControls class to map mouse movement to camera rotation (the same way as in first-person shooters). The nice thing about using a pointer lock is that, once the lock is acquired, you receive the deltas of the mouse movements rather than its absolute position in the viewport. Another benefit is that unless the user releases the lock (usually with the Escape key, which gives you a natural way to pause the experience), the cursor cannot leave the browser window, and it is hidden. This is perfect for our needs.

this._controls = new THREE.PointerLockControls(this._camera);
this._scene.add(this._controls.getObject());

Note that THREE.PointerLockControls does not lock the pointer for you. Typically, you trigger the lock from an interaction such as a button click, so the user knows something is about to happen. Here's a simplified piece of code that does this:

document.body.addEventListener('click', _ => {
  // Ask the browser to lock the pointer
  document.body.requestPointerLock = document.body.requestPointerLock ||
    document.body.mozRequestPointerLock ||
    document.body.webkitRequestPointerLock;
  document.body.requestPointerLock();
}, false);
THREE.PointerLockControls will then take care of updating the camera whenever the mouse moves.

The last part to handle is the keyboard movement, which is very simple:

document.addEventListener('keydown', event => {this._onKeyDown(event)}, false);
document.addEventListener('keyup', event => {this._onKeyUp(event)}, false);

In your handlers, just update the camera's position stored in some variable. In the demo, I store the intended direction of motion and smooth the movement through a velocity, so the user doesn't appear to jump between positions.
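
The smoothing idea can be sketched as a tiny pure function: accelerate toward the intended direction each frame, then damp the velocity. The function name and the damping factor here are illustrative, not taken from the demo:

```javascript
// Illustrative velocity smoothing: push toward the intended direction,
// then damp so the motion glides instead of jumping between positions.
function stepVelocity(velocity, direction, acceleration, damping, dt) {
  return {
    x: (velocity.x + direction.x * acceleration * dt) * damping,
    z: (velocity.z + direction.z * acceleration * dt) * damping,
  };
}
```

Each frame, the resulting x/z offsets would be applied to the camera position.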

Then, inside the render function, apply the updated position through THREE.PointerLockControls:

let controls_yaw = this._controls.getObject();
controls_yaw.translateX(new_position.x);
controls_yaw.translateZ(new_position.z);

Then render the scene as usual and loop again using requestAnimationFrame:

this._renderer.render(this._scene, this._camera);
return requestAnimationFrame(this._update);

5. Check supported WebXR session modes

If your browser supports WebXR (whether natively or via a polyfill), you need to query the supported modes of the XR session to determine next steps, for example, adding a button to enter immersive mode. Here's an example that tries to determine if immersive VR mode is supported:

navigator.xr.supportsSession('immersive-vr').then(() => {
    this._createPresentationButton();
}).catch((err) => {
    console.log("Immersive VR is not supported: " + err);
});

If the Promise resolves, you can add a button to the page informing the user that they can enter VR mode using the HMD they own. You can run the same query with 'inline' to see whether you can render inline content (aka magic windows) within the page.
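
If you need to check several modes, the pattern above generalizes to a small helper. This is a sketch that assumes the draft-era `supportsSession` (resolving when a mode is supported, rejecting otherwise); `detectSupportedModes` is a hypothetical name:

```javascript
// Query a list of session modes and resolve with the subset the
// browser reports as supported.
function detectSupportedModes(xr, modes) {
  return Promise.all(modes.map(mode =>
    xr.supportsSession(mode).then(() => mode).catch(() => null)
  )).then(results => results.filter(mode => mode !== null));
}
```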

6. Adjusting Three.JS to support the WebXR Device API

If the Three.JS render loop is set up to render normally on a 2D screen, it needs to be adjusted to support rendering with WebXR. First, here's the basic flow of how rendering works with the WebXR Device API, whether it's an immersive session or an inline session.

(Flowchart: rendering with the WebXR Device API, immersive and inline sessions)

Let me expand on some of the boxes in this diagram:

request session

navigator.xr.requestSession(mode)

The mode parameter can be immersive-vr, immersive-ar, or inline. Remember to handle rejection gracefully in case the XR device is no longer available between the time you check for support and the time you request the session.
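
A hedged sketch of that graceful handling, wrapped in a hypothetical `tryRequestSession` helper that surfaces `null` instead of an unhandled rejection:

```javascript
// Request a session; if the device went away (or the request is
// otherwise rejected), log it and return null so callers can fall back.
async function tryRequestSession(xr, mode) {
  try {
    return await xr.requestSession(mode);
  } catch (err) {
    console.log('Could not start ' + mode + ' session: ' + err);
    return null;
  }
}
```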

request reference space

When the requestSession Promise is successfully resolved, the reference space can be requested using:

xrSession.requestReferenceSpace({ type:'type' })

The various possible types and subtypes were discussed earlier in this article. If you want to specify a subtype, use the following code:

xrSession.requestReferenceSpace({ type:'type', subtype:'subtype' })

I recommend that you request the reference space with the fewest features required for your experience, as it allows you to support a wider range of existing XR devices.
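
One way to apply that advice is to try reference space types in order, from the one your experience ideally wants down to the least demanding one. This is a sketch; `requestBestReferenceSpace` is a hypothetical helper, and the dictionary-style argument follows the draft API used in this article:

```javascript
// Try reference space types in preference order, resolving with the
// first one the session grants.
function requestBestReferenceSpace(session, types) {
  return types.reduce(
    (chain, type) => chain.catch(() => session.requestReferenceSpace({ type: type })),
    Promise.reject(new Error('no reference space type tried yet'))
  );
}
```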

7. Set the XR layer

I won't go into the details here, as this is covered in detail in the WebXR Device API explainer here. However, I'll cover the specific case where inline rendering is taking place and the user then wants to "upgrade" to a more immersive mode.

Now, if you want to use inline mode and then switch to immersive mode, you need two canvases: one for rendering immersive content, and one for rendering inline content. Each canvas has its own WebGL context. The main reason for needing two contexts is that when you ask for a GL context to be XR-compatible (using makeXRCompatible), the underlying implementation ensures that the context is created on the GPU connected to the HMD. This is very relevant for computers with multiple GPUs, where the HMD may be attached to the discrete GPU.

Note: There has been discussion within the WebXR Device API community about avoiding the need for two contexts, making it easier to switch from one experience to another. There is also work in progress to avoid having to request two sessions (one inline, one immersive), which would simplify the code quite a bit.

Here is the code to set up the XR layer using Three.JS:

await this._renderer.context.makeXRCompatible();
this._xrSession.baseLayer = new XRWebGLLayer(this._xrSession, this._renderer.context);

8. Adjusting the render loop for immersive VR rendering

Whenever you render using the WebXR Device API, the render loop needs to be updated. First and foremost, you don't request new animation frames on the window, but on the session itself. The reason is that the XRSession will call your code at the correct frequency based on the refresh rate suggested by the HMD.

In VR, the refresh rate will most likely be different than that of the main screen. For example, some all-in-one HMDs have a 72 Hz refresh rate, while high-end VR HMDs like the Oculus Rift or HTC Vive have a 90 Hz refresh rate, and we expect to see 120 Hz HMDs in the near future. One of the main reasons for rendering VR experiences at higher refresh rates is to provide users with a smoother, more comfortable experience (which also helps combat motion sickness).
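
Those refresh rates translate directly into a per-frame rendering budget, which is worth keeping in mind when profiling your scene:

```javascript
// Milliseconds available to render one frame at a given refresh rate.
function frameBudgetMs(refreshRateHz) {
  return 1000 / refreshRateHz;
}
// At 72 Hz you have roughly 13.9 ms per frame; at 90 Hz only about 11.1 ms.
```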

Usually you'd have something like this:

this._xrSession.requestAnimationFrame(this._render);

The render function looks like this:

function _render(timestamp, xrFrame);

The xrFrame parameter is an object carrying the information required to render the current frame for the current XR device. Before proceeding with rendering, Three.JS needs a bit of configuration to make sure the rendering will work with WebXR:

// Disable autoupdating because these values will be coming from the
// xrFrame data directly.
this._scene.matrixAutoUpdate = false;

// Make sure not to clear the renderer automatically, because we will need
// to render it ourselves twice, once for each eye.
this._renderer.autoClear = false;

// Clear the canvas manually.
this._renderer.clear();

The next step is to finally get into the WebXR specific rendering bits. First, you need to bind the layer, which means you will tell the Three.JS renderer to render to the XRSession layer:

let xrLayer = this._xrSession.baseLayer;
this._renderer.setSize(xrLayer.framebufferWidth, xrLayer.framebufferHeight, false);
this._renderer.context.bindFramebuffer(this._renderer.context.FRAMEBUFFER, xrLayer.framebuffer);

Then you need to get the pose of the device (the pose is the rotation and position, if any):

let pose = xrFrame.getViewerPose(xrReferenceSpace);
if (!pose)
    return;

This pose will give you all the information you need to render the scene for your XR device. Then you will need to render the views. The WebXR Device API has a concept of views, which typically map to the displays inside a VR HMD. Typically, you will have 2 views (one for each eye):

(Diagram: two views, one per eye)

From there you can iterate over the views and render each eye:

for (let view of pose.views) {
    let viewport = this._xrSession.baseLayer.getViewport(view);
    this._renderEye(pose.getViewMatrix(view), view.projectionMatrix, viewport);
}

Rendering each eye would look like this:

_renderEye(viewMatrixArray, projectionMatrix, viewport) {
  // Set the left or right eye half.
  this._renderer.setViewport(viewport.x, viewport.y, viewport.width, viewport.height);

  let viewMatrix = new THREE.Matrix4();
  viewMatrix.fromArray(viewMatrixArray);

  // Update the scene and camera matrices.
  this._camera.projectionMatrix.fromArray(projectionMatrix);
  this._camera.matrixWorldInverse.copy(viewMatrix);
  this._scene.matrix.copy(viewMatrix);

  // Tell the scene to update (otherwise it will ignore the change of matrix).
  this._scene.updateMatrixWorld(true);
  this._renderer.render(this._scene, this._camera);
  // Ensure that left eye calcs aren't going to interfere.
  this._renderer.clearDepth();
}

The best part of the "view" approach is that you can keep reusing the same rendering code for inline sessions, since the only difference (besides the matrix values) is that you only have one view to render.


Original Link: Getting Started with Three.js WebXR Rendering—BimAnt

Origin blog.csdn.net/shebao3333/article/details/132210088