3D graphics rendering technology

How to display 3D graphics on a 2D plane

2D graphics

Given two points in a plane with known X/Y coordinates, you can connect them and draw a line.

A line segment can be moved and reshaped by changing the X/Y coordinates of its endpoints A and B.

In 3D graphics, each point has an additional Z coordinate for depth.

But a 2D screen has no Z axis; it only has X/Y pixel coordinates.

Therefore, a graphics algorithm is required to "flatten" the 3D coordinates and display them on the 2D screen. This is called 3D projection.

After converting the 3D points into 2D points, connect them with lines just as before. This is called wireframe rendering.

Projection intuition: two projection methods

Recommended article: zhuanlan.zhihu.com/p/473031788

Generally speaking, projection maps a 3D shape into the 2D coordinate system, with the shape's center aligned to the origin; the 2D coordinates of the 3D points can then be read off.

If you shine light on a 3D object and catch its projection on a flat surface, then as the object rotates, the projection still looks three-dimensional, even though the projection surface is flat.

Computers do the same thing to convert 3D into 2D: the screen is the 2D projection plane, and a projection algorithm converts the 3D coordinates into 2D coordinates.

Orthographic projection

In an orthographic projection, the parallel edges of a cube remain parallel in the projection. In other words, mathematics is used to map the 3D coordinates straight into the 2D coordinate system.

Orthographic projection is a kind of parallel projection, which is similar to using a beam of parallel light to project the image of an object vertically onto the ground.
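As a minimal sketch of this idea (the function name is my own), an orthographic projection can simply drop the Z coordinate:

```python
def orthographic_project(point3d):
    """Parallel projection: drop the Z coordinate entirely.

    Parallel edges in 3D stay parallel on the 2D screen,
    like parallel light rays casting a shadow straight down.
    """
    x, y, z = point3d
    return (x, y)
```

Note that two points at different depths project to the same 2D point, which is why an orthographic view carries no depth cue by itself.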

Perspective projection

Perspective projection can produce near-large and far-small effects, similar to the way humans observe the world.

In the real 3D world, parallel line segments appear to converge to a point in the distance.
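A minimal sketch of perspective projection (names and the `focal_length` parameter are my own simplification): dividing X and Y by the depth Z produces the near-large, far-small effect.

```python
def perspective_project(point3d, focal_length=1.0):
    """Perspective projection: divide X and Y by the depth Z.

    Farther points (larger Z) land closer to the center of the
    screen, so equal-sized objects shrink with distance.
    """
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)
```

For example, the same (1, 1) offset projects to (1.0, 1.0) at depth 1 but only (0.5, 0.5) at depth 2, which is exactly why parallel lines appear to converge.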

Why triangles are used to draw complex shapes

In 3D graphics, we call triangles "polygons"

A collection of polygons is called a "mesh"

The denser the mesh, the smoother the surface and the more detail

First, let’s talk about why triangles are used instead of squares

In space, three (non-collinear) points define a plane

Given three 3D points, you can always draw a plane through them. That is not necessarily true of four points.

Two points are not enough to define a plane — they only define a line segment; four points may not all lie in a single plane; so three is the perfect number.
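This can be checked with a cross product: the two edge vectors of three non-collinear points give a unique perpendicular direction, i.e. a unique plane (a sketch; the function name is mine):

```python
def surface_normal(a, b, c):
    """Normal of the plane through three 3D points, via the cross
    product of two edge vectors. Collinear points yield (0, 0, 0),
    meaning no unique plane exists."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
```

The same cross product reappears later: it is how a polygon's "surface normal" is computed for shading.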

Polygon fill algorithms

Scanline rendering

Wireframe renderings look cool, but solid 3D images need their polygons filled in

Steps:
  1. First lay down a grid of pixels.

  2. The scanline algorithm reads the three points of the polygon and finds the highest and lowest Y values; it only works between those two rows.

  3. The algorithm then works top to bottom, one row at a time, computing the two points where each row intersects the polygon's edges, and fills in the pixels between the two intersection points. Because the shape is a triangle, a row that crosses one edge must cross another.
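The steps above can be sketched as a toy rasterizer on a small pixel grid (all names are mine; a minimal sketch, not production code):

```python
def fill_triangle(pts, width, height):
    """Scanline-fill a triangle given as three (x, y) integer points.

    Returns a height x width grid of 0/1 pixel values.
    """
    grid = [[0] * width for _ in range(height)]
    ys = [p[1] for p in pts]
    y_min, y_max = max(0, min(ys)), min(height - 1, max(ys))
    edges = [(pts[i], pts[(i + 1) % 3]) for i in range(3)]
    for y in range(y_min, y_max + 1):          # work top to bottom, one row at a time
        xs = []
        for (x0, y0), (x1, y1) in edges:
            if y0 == y1:
                continue                        # horizontal edge: no single crossing
            if min(y0, y1) <= y < max(y0, y1):
                t = (y - y0) / (y1 - y0)        # where this row crosses the edge
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:
            for x in range(round(min(xs)), round(max(xs)) + 1):
                if 0 <= x < width:
                    grid[y][x] = 1              # fill between the two intersections
    return grid
```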

Anti-aliasing

Triangles filled this way look ugly, because their edges are jagged.

One method of reducing this aliasing is called anti-aliasing.

Anti-aliasing: instead of painting every pixel a uniform color, adjust each pixel's color according to how much of it the polygon covers.

If a pixel is entirely inside the polygon, it gets the full color; if the polygon only cuts across part of the pixel, the color is lighter.
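One common way to estimate that coverage is supersampling: test several points inside each pixel and average the results (a sketch of the general technique, not taken from the article; names are mine):

```python
def pixel_coverage(inside, px, py, samples=4):
    """Fraction of pixel (px, py) covered by a shape, estimated by
    testing a samples x samples grid of points within the pixel.

    `inside(x, y)` reports whether a point lies inside the polygon.
    """
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples   # sample point inside the unit pixel
            y = py + (j + 0.5) / samples
            if inside(x, y):
                hits += 1
    return hits / (samples * samples)
```

The pixel's color is then scaled by this fraction, so edge pixels come out lighter and the jagged staircase softens.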

Occlusion rendering algorithm

There are many polygons in a 3D scene, but only some of them are visible because others are obscured.

Sorting Algorithm (Painter’s Algorithm)

The simplest way to handle occlusion

Sort the polygons from far to near and render them in that order. This is called the painter's algorithm, because a painter also paints the background first and then paints nearer things on top.

Steps:
  1. First sort the polygons from far to near (e.g. three triangles at different distances: A yellow, B blue, C green).

  2. After sorting, fill the polygons with the scanline algorithm, one at a time, in far-to-near order.
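The sorting step can be sketched like this (a minimal sketch; the depth measure — average vertex distance from the camera — and all names are my own choices):

```python
def painters_order(polygons, camera):
    """Sort polygons far-to-near by average vertex distance from the
    camera; drawing in this order lets near polygons overwrite far ones."""
    def avg_distance(poly):
        return sum(
            ((x - camera[0]) ** 2 + (y - camera[1]) ** 2 + (z - camera[2]) ** 2) ** 0.5
            for x, y, z in poly
        ) / len(poly)
    return sorted(polygons, key=avg_distance, reverse=True)
```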

Depth buffer (Z-buffering)

This algorithm has the same idea as the painter's algorithm, but the method is different.

The depth buffer algorithm does not need sorting, so it is faster.

The Z-buffering algorithm records the distance from the camera to each pixel in the scene, storing the values as a matrix of numbers in memory.

Steps:
  1. First, every pixel's distance is initialized to "infinity"; Z-buffering then starts with the first polygon in the list, say A. The logic is the same as the scanline algorithm, but instead of filling pixels with color, it compares the polygon's distance at each pixel with the distance already in the Z-buffer, and always keeps the lower value.

  2. Once the Z-buffer is complete, it is used together with an improved version of the scanline algorithm, which not only detects edge intersections but also knows whether each pixel is visible in the final scene. If a pixel is not visible, the scanline algorithm skips it.
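The core comparison can be sketched as follows (here "fragments" are precomputed per-pixel candidates; names are mine):

```python
def zbuffer_render(fragments, width, height):
    """fragments: (x, y, depth, color) pixel candidates from all
    polygons, in any order. The nearest color wins at each pixel."""
    depth = [[float("inf")] * width for _ in range(height)]  # init to "infinity"
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # strictly nearer wins; on a tie, first seen stays
            depth[y][x] = z
            color[y][x] = c
    return color
```

Note the strict `<` comparison: when two fragments have equal depth, whichever arrived first stays, so the winner of a depth tie depends on processing order.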

But this raises a question: if two polygons are at the same distance, which one is drawn on top?

Polygons are constantly moved around in memory and their access order keeps changing, so which one wins is often unpredictable.

An optimization for 3D games: backface culling

A triangle has two sides, front and back.

Only the outward-facing side of a game character's head or of the ground is ever seen, so to save processing time, the back-facing polygons are ignored, reducing the number of polygon faces to process.

But this has a bug: if the camera gets inside the model and looks out, the head and the ground seem to disappear.
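A sketch of the culling test, assuming screen-space triangles whose front faces wind counter-clockwise (a common convention, not stated in the article; names are mine):

```python
def is_backface(a, b, c):
    """True if a screen-space triangle (x, y) winds clockwise, i.e.
    faces away from the camera under the counter-clockwise-front
    convention. The sign of twice the signed area gives the winding."""
    signed_area = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    return signed_area <= 0
```

A renderer simply skips any triangle for which this returns True, which is why surfaces viewed from behind vanish.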

3D scene shading

In a 3D scene, the surface of an object should have changes in light and dark.

This time a teapot is used for the experiment, which differs from the earlier example: now we must consider the direction each polygon faces. The polygons are not parallel to the screen but face different directions. The direction a surface faces is called its "surface normal".

Use a small arrow perpendicular to the surface to show this direction

Add a light source. Different polygons face the light source at different angles: the more closely a polygon's normal aligns with the direction of the incoming light, the brighter that polygon is.
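This brightness rule is diffuse (Lambertian) shading, and it reduces to a dot product (a sketch assuming unit-length vectors; names are mine):

```python
def lambert_brightness(normal, light_dir):
    """Diffuse brightness: dot product of the unit surface normal and
    the unit direction toward the light. Surfaces turned away from
    the light are clamped to zero rather than going negative."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)
```

A polygon facing the light head-on gets brightness 1.0; one edge-on to the light gets 0.0; one facing away is clamped to 0.0.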

Textures

Texture in graphics refers to appearance, not feel.

Textures also have multiple algorithms

Texture mapping

The simplest usage

When the scanline algorithm fills in pixels, it can consult a texture image in memory to decide what color each pixel should be.

To do this, polygon coordinates must be mapped to texture coordinates

  1. Map polygon coordinates to texture coordinates
  2. When deciding what color to fill the current pixel, the texturing algorithm looks up the corresponding region of the texture, takes the average color there, and fills the polygon with it.
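The lookup step can be sketched with the simplest sampling scheme, nearest-neighbour (a single texel rather than the regional average the article describes; names are mine):

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour texture lookup: map texture coordinates
    (u, v) in [0, 1] to a texel in a row-major 2D texture."""
    h, w = len(texture), len(texture[0])
    tx = min(w - 1, int(u * w))   # clamp so u = 1.0 stays in range
    ty = min(h - 1, int(v * h))
    return texture[ty][tx]
```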

GPU: graphics processor

First, we can build specialized hardware for these specific operations to speed them up.

Secondly, we can break the 3D scene into multiple small parts and render them in parallel instead of sequentially.

The CPU was not designed for this, so it is not fast at graphics operations; computer engineers therefore built a special processor for graphics, called a GPU (Graphics Processing Unit).

The GPU is on the graphics card, surrounded by dedicated RAM where all the meshes and textures are, allowing the multiple cores of the GPU to access it at high speed

Reference link

www.bilibili.com/video/av213…

Original link: 3D graphics rendering technology - Nuggets (juejin.cn)


Origin: blog.csdn.net/m0_65909361/article/details/132989033