Picking objects and computing the intersection position with the GPU in three.js

Ray Casting

Picking an object with three.js's own Raycaster is very simple. The code looks like this:

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

function onMouseMove(event) {
    // Compute the mouse position in normalized device coordinates
    // (all three components range from -1 to 1)
    mouse.x = event.clientX / window.innerWidth * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

function pick() {
    // Update the picking ray using the camera and mouse position
    raycaster.setFromCamera(mouse, camera);

    // Compute the intersections between the ray and the objects in the scene
    var intersects = raycaster.intersectObjects(scene.children);
}

Internally, it first filters with bounding boxes, then computes whether the projected ray intersects each face of the remaining objects.

However, when the model is very large, for example 400,000 faces, picking objects and computing the intersection position by traversal is very slow, and the user experience is poor.

GPU picking does not have this problem: no matter how large the scene and the models are, the object under the mouse and the position of the intersection point can be obtained within a single frame.

Picking objects with the GPU

The implementation idea is simple:

1. Create picking materials, replacing each model's material in the scene with one in a different color.

2. Read the color of the pixel at the mouse position, and determine the object under the mouse from that color.

Concrete implementation:

1. Create the picking materials: traverse the scene and replace each model's material with one in a different color.

let maxHexColor = 1;

// Replace the materials with picking materials
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    n.oldMaterial = n.material;
    if (n.pickMaterial) { // the picking material was already created
        n.material = n.pickMaterial;
        return;
    }
    let material = new THREE.ShaderMaterial({
        vertexShader: PickVertexShader,
        fragmentShader: PickFragmentShader,
        uniforms: {
            pickColor: {
                value: new THREE.Color(maxHexColor)
            }
        }
    });
    n.pickColor = maxHexColor;
    maxHexColor++;
    n.material = n.pickMaterial = material;
});

2. Render the scene to a WebGLRenderTarget, read the color at the mouse position, and determine the picked object.

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

// Draw and read the pixel
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel); // read the color at the mouse position

// Restore the original materials and find the picked object
const currentColor = pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2];

let selected = null;

scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    if (n.pickMaterial && n.pickColor === currentColor) { // the colors match
        selected = n; // the object under the mouse
    }
    if (n.oldMaterial) {
        n.material = n.oldMaterial;
        delete n.oldMaterial;
    }
});

Note: offsetX and offsetY are the mouse position, and height is the canvas height. The readRenderTargetPixels line reads the color of a region 1 pixel wide and 1 pixel high at the mouse position (offsetX, height - offsetY).

pixel is a Uint8Array(4); its four channels store the rgba color, and each channel ranges from 0 to 255.
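To see why the color comparison works, the packing can be checked outside WebGL. The sketch below mirrors the constants used in the decode above with two hypothetical helpers (they are not part of the original code); note that the multipliers match the snippet above, and the round trip is exact for the index counts used in practice:

```javascript
// Pack an object index into r/g/b bytes, inverting the decode used above:
// index = r * 0xffff + g * 0xff + b
function encodePickColor(index) {
    const r = Math.floor(index / 0xffff);
    const g = Math.floor((index - r * 0xffff) / 0xff);
    const b = index - r * 0xffff - g * 0xff;
    return [r, g, b];
}

// Mirror of the decode in the traversal above.
function decodePickColor(pixel) {
    return pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2];
}
```

For example, encodePickColor(300) gives a g/b pair that decodePickColor turns back into 300, which is exactly what the n.pickColor === currentColor comparison relies on.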

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Obtaining the intersection position with the GPU

The implementation idea is also simple:

1. Create a depth shader material and render the scene's depth to a WebGLRenderTarget.

2. Read the depth at the mouse position, then compute the intersection position from the mouse position and the depth.

Concrete implementation:

1. Create the depth shader material, encoding the depth information in a specific way and rendering it to the WebGLRenderTarget.

Depth Material:

const depthMaterial = new THREE.ShaderMaterial({
    vertexShader: DepthVertexShader,
    fragmentShader: DepthFragmentShader,
    uniforms: {
        far: {
            value: camera.far
        }
    }
});
DepthVertexShader:
precision highp float;

uniform float far;

varying float depth;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    depth = gl_Position.z / far;
}
DepthFragmentShader:
precision highp float;

varying float depth;

void main() {
    float hex = abs(depth) * 16777215.0; // 0xffffff

    float r = floor(hex / 65535.0);
    float g = floor((hex - r * 65535.0) / 255.0);
    float b = floor(hex - r * 65535.0 - g * 255.0);
    float a = sign(depth) >= 0.0 ? 1.0 : 0.0; // 1.0 if depth >= 0, 0.0 if depth < 0

    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, a);
}

Important notes:

a. gl_Position.z is the depth in camera space. It is linear, ranging from cameraNear to cameraFar, and can be interpolated directly as a varying in the shader.

b. gl_Position.z / far converts the value into the range 0 to 1, which is convenient to output as a color.

c. Screen-space depth cannot be used: under perspective projection it ranges from -1 to 1, with most values very close to 1 (above 0.9). It is non-linear and almost constant, so the output color barely changes and the result is very inaccurate.

d. Obtaining depth in the fragment shader: screen-space depth is gl_FragCoord.z, and camera-space depth is gl_FragCoord.z / gl_FragCoord.w.

e. The above applies to perspective projection. With orthographic projection, gl_Position.w is 1, and camera-space and screen-space depth are the same.

f. To output the depth as accurately as possible, all three rgb components are used. gl_Position.z / far is in the range 0 to 1; multiplying by 0xffffff converts it to an rgb color value, where one unit of the r component represents 65535, one unit of g represents 255, and one unit of b represents 1.
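The encoding can be checked outside the shader. Below is a plain-JavaScript sketch that mirrors the fragment shader's packing and the decode step in the next section (the function names are illustrative, not part of the original code):

```javascript
// Mirror of DepthFragmentShader: pack a normalized depth (gl_Position.z / far)
// into the r/g/b/a bytes that readRenderTargetPixels will return.
function encodeDepth(depth) {
    const hex = Math.abs(depth) * 16777215.0; // 0xffffff
    const r = Math.floor(hex / 65535.0);
    const g = Math.floor((hex - r * 65535.0) / 255.0);
    const b = Math.floor(hex - r * 65535.0 - g * 255.0);
    const a = depth >= 0.0 ? 255 : 0; // alpha reads back as a 0-255 byte
    return [r, g, b, a];
}

// Mirror of the decode step: recover the camera-space depth from the pixel.
function decodeDepth(pixel, far) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) {
        hex = -hex;
    }
    return -hex * far; // camera-space depth is negative in front of the camera
}
```

With far = 1000 and a normalized depth of 0.5, the round trip recovers -500 to within the resolution of this three-byte packing (about far / 0xffffff).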

 

Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

 

2. Read the color at the mouse position, and decode the color value back into a camera-space depth.

a. Draw the "encoded" depth onto the WebGLRenderTarget and read the color:

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

scene.overrideMaterial = this.depthMaterial;

renderer.setRenderTarget(renderTarget);

renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel);

As before, offsetX and offsetY are the mouse position and height is the canvas height; the readRenderTargetPixels call reads the 1×1 pixel at (offsetX, height - offsetY) into the Uint8Array(4) pixel.

 

b. "Decode" the encoded value to recover the correct camera-space depth.

let cameraDepth = 0;

if (pixel[0] !== 0 || pixel[1] !== 0 || pixel[2] !== 0) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;

    if (pixel[3] === 0) {
        hex = -hex;
    }

    cameraDepth = -hex * camera.far; // depth of the point under the mouse, in the camera coordinate system (note: camera-space depth is negative)
}

 

3. From the mouse position on the screen and the camera-space depth, interpolate to recover the coordinates of the intersection point in the world coordinate system.

let nearPosition = new THREE.Vector3(); // mouse position on the camera's near plane
let farPosition = new THREE.Vector3(); // mouse position on the camera's far plane
let world = new THREE.Vector3(); // world coordinates computed by interpolation

// device coordinates
const deviceX = this.offsetX / width * 2 - 1;
const deviceY = -this.offsetY / height * 2 + 1;

// near point
nearPosition.set(deviceX, deviceY, 1); // screen coordinate system: (0, 0, 1)
nearPosition.applyMatrix4(camera.projectionMatrixInverse); // camera coordinate system: (0, 0, -far)

// far point
farPosition.set(deviceX, deviceY, -1); // screen coordinate system: (0, 0, -1)
farPosition.applyMatrix4(camera.projectionMatrixInverse); // camera coordinate system: (0, 0, -near)

// In camera space, compute the x and y values proportionally from the depth.
const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);

// Convert the intersection point from camera coordinates to world coordinates.
world.set(
    nearPosition.x + (farPosition.x - nearPosition.x) * t,
    nearPosition.y + (farPosition.y - nearPosition.y) * t,
    cameraDepth
);
world.applyMatrix4(camera.matrixWorld);
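The interpolation at the end can be exercised without three.js by using plain objects in place of the two unprojected vectors (the helper and the numbers below are illustrative only):

```javascript
// Given the two unprojected points and the camera-space depth of the hit,
// interpolate x and y linearly in camera space; z is the depth itself.
function interpolateHit(nearPosition, farPosition, cameraDepth) {
    const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);
    return {
        x: nearPosition.x + (farPosition.x - nearPosition.x) * t,
        y: nearPosition.y + (farPosition.y - nearPosition.y) * t,
        z: cameraDepth
    };
}
```

A depth halfway between the two plane depths gives t = 0.5, so x and y land halfway between the two unprojected points, which is the behavior the snippet above relies on before converting to world coordinates.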

 

Complete code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

Related Applications

Picking objects and computing the intersection position on the GPU is mostly useful when performance requirements are very high, for example:

1. Hover effects as the mouse moves over a 3D model.

2. When adding a model, the model follows the mouse and the scene shows a real-time preview of where it will be placed.

3. Distance and area measurement tools, where lines and polygons follow the mouse across a plane, previewing in real time while length and area are computed.

4. Very large scenes and models, where picking by ray casting is slow and the user experience is very poor.

Here is a selection and mouse-hover effect implemented with GPU picking: the red border is the selection effect, and the translucent yellow is the hover effect.

 

 

 

Still confused? Perhaps you are not familiar with the various projection operations in three.js. The projection formulas used by three.js are given below.

 

Projection operations in three.js

1. modelViewMatrix = camera.matrixWorldInverse * object.matrixWorld

2. viewMatrix = camera.matrixWorldInverse

3. modelMatrix = object.matrixWorld

4. project = applyMatrix4( camera.matrixWorldInverse ).applyMatrix4( camera.projectionMatrix )

5. unproject = applyMatrix4( camera.projectionMatrixInverse ).applyMatrix4( camera.matrixWorld )

6. gl_Position = projectionMatrix * modelViewMatrix * position
                      = projectionMatrix * camera.matrixWorldInverse * matrixWorld * position
                      = projectionMatrix * viewMatrix * modelMatrix * position
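Formula 6 chains the matrices from right to left. That ordering can be verified numerically with a small row-major mat4 sketch (hypothetical helpers, not three.js's column-major Matrix4): transforming a point by the model matrix and then by the view matrix must equal transforming it once by their product.

```javascript
// Multiply two 4x4 row-major matrices: returns a * b.
function mat4Multiply(a, b) {
    const out = new Array(16).fill(0);
    for (let row = 0; row < 4; row++) {
        for (let col = 0; col < 4; col++) {
            for (let k = 0; k < 4; k++) {
                out[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
            }
        }
    }
    return out;
}

// Apply a row-major 4x4 matrix to a point (w = 1), with perspective divide.
function applyMat4(m, p) {
    const x = m[0] * p.x + m[1] * p.y + m[2] * p.z + m[3];
    const y = m[4] * p.x + m[5] * p.y + m[6] * p.z + m[7];
    const z = m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11];
    const w = m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15];
    return { x: x / w, y: y / w, z: z / w };
}
```

With a translation standing in for modelMatrix and a scale standing in for viewMatrix, applyMat4(mat4Multiply(view, model), p) matches applying model first and view second, which is exactly the ordering in formula 6.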

 

References:

1. Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js

2. Drawing the depth value in OpenGL using shaders: https://stackoverflow.com/questions/6408851/draw-the-depth-value-in-opengl-using-shaders

3. Getting the real fragment depth in GLSL: https://gamedev.stackexchange.com/questions/93055/getting-the-real-fragment-depth-in-glsl

 


Origin www.cnblogs.com/tengge/p/11924663.html