UE Blueprint Study Notes: Drawing the Rendering Pipeline

This is first mapped into scene (world) space, then converted into the camera's local coordinates, after which the calculation in Figure 1 can be performed.

The depth and width in Figure 1 are easy to obtain, as are the FOV angles. Projection scales the scene content into a cube ranging from -1 to 1. Finally, using the screen resolution, the X and Y axes of the cube are mapped onto the screen, and then Z-buffer processing is performed: the scene depth is computed and compared along the Z axis to resolve occlusion.
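The steps above can be sketched in plain Python. This is an illustrative outline only, not the blueprint nodes themselves: the function names, the +Z-forward camera convention, and the near/far depth mapping are assumptions for the sketch (a real engine does this with a 4x4 projection matrix).

```python
import math

def perspective_project(p_cam, fov_y_deg, aspect, near, far):
    """Project a camera-space point into the -1..1 NDC cube.

    Assumes the camera looks down +Z (a convention chosen for this
    sketch). The perspective divide by z is what makes distant
    objects smaller; depth is remapped so near -> -1 and far -> +1.
    """
    x, y, z = p_cam
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # cotangent of half-FOV
    ndc_x = (f / aspect) * x / z
    ndc_y = f * y / z
    ndc_z = ((far + near) - 2.0 * far * near / z) / (far - near)
    return ndc_x, ndc_y, ndc_z

def ndc_to_screen(ndc_x, ndc_y, width, height):
    """Tile the -1..1 XY range of the cube onto pixel coordinates."""
    sx = (ndc_x * 0.5 + 0.5) * width
    sy = (-ndc_y * 0.5 + 0.5) * height  # screen Y grows downward
    return sx, sy
```

For example, a point centered in view (`ndc_x = ndc_y = 0`) lands at the middle of a 1920x1080 screen, `(960, 540)`. The remapped `ndc_z` is what a Z-buffer compares per pixel to decide which surface is in front.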

The blueprint node that obtains vertex information uses a procedural mesh.

The internal implementation of commonly used material functions is similar.

Construct a Cartesian coordinate system at the Camera position. The nodes above compute the position of the material's world position relative to the camera position.

The scene is then transformed into the view shown in this picture.

The dot product computes the projected length of the relative position along each of the camera's X and Y axes.
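A minimal sketch of that dot-product step, assuming the camera's basis vectors are available (the function and parameter names here are hypothetical, not blueprint node names):

```python
def camera_relative(pos, cam_pos, cam_right, cam_up, cam_forward):
    """Express a world-space position in the camera's own axes.

    Each dot product measures how far the relative vector extends
    along one camera axis, i.e. its projection onto that axis.
    """
    rel = tuple(p - c for p, c in zip(pos, cam_pos))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(rel, cam_right), dot(rel, cam_up), dot(rel, cam_forward))
```

With the camera at the origin and an identity basis, a world point comes back unchanged, which is a quick sanity check that the projection is doing what the dot products claim.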

Then the depth along the Z axis is handled, and X and Y are scaled by the screen resolution, projecting them to the size of the screen.

Here, the camera coordinate system is converted into the UV coordinate system.

This is the camera's coordinate system.

This is the UV coordinate system.

Why multiply by 0.5 in the first step?

Because perspective projection compresses objects in the scene into the range -1 to 1, while UV values range from 0 to 1. So we first multiply by 0.5, then translate (by adding 0.5) into the UV coordinate system.

Why is Y multiplied by -0.5?

Because the Y axis of the camera's coordinate system points in the opposite direction to the Y axis of the UV coordinate system, the Y component must also be negated.
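Both answers together amount to one small remapping. A sketch of it (the function name is an assumption for illustration):

```python
def ndc_to_uv(ndc_x, ndc_y):
    """Map projected XY in [-1, 1] to UV in [0, 1].

    The 0.5 scale shrinks the [-1, 1] range to a width of 1, the
    +0.5 shift moves it to start at 0, and the minus sign on Y
    flips the axis because UV's V grows downward while the
    camera's Y grows upward.
    """
    u = ndc_x * 0.5 + 0.5
    v = ndc_y * -0.5 + 0.5
    return u, v
```

So the top-left corner of the view, (-1, 1) after projection, maps to UV (0, 0), and the bottom-right corner (1, -1) maps to UV (1, 1).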


Origin blog.csdn.net/qqQQqsadfj/article/details/134875535