1.12 Learning Unity Game Development from Zero: Rendering Concepts

Before we continue expanding our control over scene objects, I'd like to talk about some concepts related to rendering in Unity. If you don't work in graphics programming or technical art, you may not use them directly for a long time in game development, but since we are learning systematically from zero, you should at least understand the most basic material.

In this article, we start from a beginner's perspective and begin with the camera to understand how our commonly used rendering components and logic fit together. We will explain, very roughly, what factors affect the picture we see and how it is finally presented.

The Camera component

Select the Main Camera and take a look at its Camera component. You can see that it is quite long, with a lot of configuration items, but today we will only talk about the most basic part: the Projection section.

If you look at the Scene window with the camera selected, you can see some lines like this:

These lines are auxiliary gizmos that show you what the camera's visible area looks like in 3D space. It is not very clear yet, so let's change Far under Clipping Planes, in the Projection section of the Camera component, to 10:

You can see that the camera's viewing volume is a pyramid lying on its side, or more precisely a flat-topped pyramid, like a Mayan one. If we also set Near (next to Far) to 5, it becomes clear that the shape is a truncated pyramid in 3D, which is called a frustum:

So what is this for?

In the real 3D world, objects appear larger when they are near and smaller when they are far away, so to create a believable 3D scene in a game, we must reproduce that effect. This is done through a mathematical transformation called perspective projection, which maps objects onto our so-called screen. And where is that screen? It is the flat top of the truncated pyramid we saw above, the red part in the picture below:

The preview in the lower right corner is the image formed by flattening our cube onto that red plane through perspective projection.

Perspective projection is computed with a matrix. This matrix is determined jointly by the Near (near clipping plane) and Far (far clipping plane) values we just modified, together with FOV (field of view) and Aspect (screen aspect ratio).

As for how this matrix is derived, this article only introduces the basic concepts, so we won't elaborate. It is enough to understand that once we obtain this matrix from the camera, we can use matrix multiplication from linear algebra to map any point (x, y, z) in the 3D world into that truncated-pyramid region; any point outside the region is clipped and will not be drawn.
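To make the idea concrete, here is a minimal sketch in plain Python of an OpenGL-style perspective matrix built from the same four values (FOV, Aspect, Near, Far). This is an illustration of the math, not Unity's actual code; the function names are made up for this example:

```python
import math

def perspective_matrix(fov_deg, aspect, near, far):
    # The matrix is determined entirely by FOV, Aspect, Near and Far,
    # just like the Projection settings on the Camera component.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(matrix, point):
    # Multiply the 3D point (as a homogeneous vector) by the matrix,
    # then divide by w to get normalized device coordinates.
    x, y, z = point
    v = [x, y, z, 1.0]
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return [out[0] / w, out[1] / w, out[2] / w]

def inside_frustum(ndc):
    # A point is drawn only if it lands inside the [-1, 1] cube.
    return all(-1.0 <= c <= 1.0 for c in ndc)

m = perspective_matrix(fov_deg=60, aspect=16 / 9, near=0.3, far=10)
# Camera looks down -Z in this convention. A point 5 units away is kept,
# a point beyond the Far plane is clipped:
print(inside_frustum(project(m, (0.0, 0.0, -5.0))))   # True
print(inside_frustum(project(m, (0.0, 0.0, -20.0))))  # False

# The same sideways offset x = 1 shrinks on screen as the point moves
# farther away: this is exactly the "near large, far small" effect.
print(project(m, (1.0, 0.0, -2.0))[0])  # larger on screen
print(project(m, (1.0, 0.0, -8.0))[0])  # smaller on screen
```

Note the division by w at the end: that perspective divide is what makes distant points shrink toward the center of the screen.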

Through this transformation, we get the "near large, far small" visual effect we want.

If you want to adjust the perspective of the picture, you can directly modify the Projection section of the Camera component mentioned above; Projection does in fact mean projection.

Of course, perspective projection is not the only option. When we play RTS, tower defense, and similar games, there is often no "near large, far small" effect at all: objects are drawn the same size wherever they are placed. In that case we use orthographic projection, which likewise constructs a matrix to transform positions from the 3D world.

If you need orthographic projection, you can switch it directly on the panel:

Perspective means perspective projection, and Orthographic means orthographic projection.
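For comparison, here is an equally minimal sketch of an orthographic matrix. The `size` parameter (half the vertical extent of the view volume, comparable to the camera's Size field) and all function names are illustrative assumptions, not Unity's actual API. Because w stays 1, there is no perspective divide, so depth does not change an object's on-screen size:

```python
def orthographic_matrix(size, aspect, near, far):
    # Maps a box-shaped view volume straight onto the screen.
    top, right = size, size * aspect
    return [
        [1.0 / right, 0.0, 0.0, 0.0],
        [0.0, 1.0 / top, 0.0, 0.0],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(matrix, point):
    # Same matrix-times-point step as before, then divide by w
    # (which is always 1 for an orthographic matrix).
    x, y, z = point
    v = [x, y, z, 1.0]
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]
    return [out[0] / out[3], out[1] / out[3], out[2] / out[3]]

m = orthographic_matrix(size=5, aspect=16 / 9, near=0.3, far=10)
# The same x = 1 offset lands at the same screen position whether the
# point is 2 or 8 units away: no "near large, far small".
print(apply(m, (1.0, 0.0, -2.0))[0] == apply(m, (1.0, 0.0, -8.0))[0])  # True
```

This is exactly why RTS and tower defense views look "flat": the bottom row of the matrix is (0, 0, 0, 1) instead of (0, 0, -1, 0), so distance never divides the coordinates.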

Object rendering components

With the camera, we have determined how any position in the scene should map onto the screen, but that alone is certainly not enough: we don't yet know which positions need to be drawn on the screen, or what color to draw them.

We do already have a Cube that appears on the screen, so let's take a look at its components:

Besides Transform and the two components we wrote ourselves, there is also a BoxCollider, which is actually used for physics calculations; we'll set it aside for now.

The remaining two components are Mesh Filter and Mesh Renderer, and they are the reason our Cube can actually be displayed on the screen.

To prove this, try clicking the checkbox next to Mesh Renderer; this checkbox is how you enable and disable a component. Once we uncheck it, the component stops taking effect, and we can no longer see the silver-gray surfaces of the Cube; only the wireframe remains:

The Game window loses the Cube entirely, since it does not draw these auxiliary wireframes.

So let's look at these two components, starting with Mesh Filter:

Double-click the small dot next to the Cube input field; as we learned in the previous article, this dot locates the asset we are currently referencing. Looking at the picture, we can see that a cube is selected and its type is Mesh.

Mesh translates as grid or net. What does rendering have to do with a grid? The mesh is actually what we call the shape of a model in modeling, and when we hear that a model has a certain number of faces, those faces are usually triangles (or occasionally other shapes). If you look carefully at the Mesh preview of the Cube at the bottom of the Select window, you can see the lines showing that it is composed of triangular faces.

So now it is clearer: the Mesh Filter component simply stores a reference to a Mesh asset.

Next, look at the second component, Mesh Renderer:

This looks odd: if our Mesh is the model used for rendering, why does Mesh Renderer have no field referencing it?

I personally think this is a legacy of Unity's design history. We will see other Renderer components later, and some of them configure the Mesh resource directly on the Renderer component itself.

Mesh Renderer instead follows an implicit rule: it automatically finds the Mesh configured in the Mesh Filter component on the same GameObject and uses it as the Mesh for its own drawing.

And if we want to render a Mesh, triangles alone are certainly not enough; at the very least we need to apply a texture to it. The place where textures are used is the Material. That rough description will do for now.

That is, this unassuming Lit entry is the Material we use for rendering. As before, we can click the dot on the right to locate which asset is being used, but because material data is viewed and modified so frequently, Unity displays it by default at the very bottom of the panel:

Do you suddenly understand why some GameObjects have a whole bunch of extra content at the bottom of the Inspector while others don't? That section is display content unique to the Mesh Renderer component.

You can see that there are many configuration parameters, including the textures we need to set, but this Lit is a built-in Unity material, and you are not allowed to change it. You have to make your own copy and replace the Lit in the Mesh Renderer with it; only then can you modify all the settings shown above.

If you have been exposed to rendering before, for example through an introductory OpenGL tutorial, you will be quick to notice that this extra panel begins with a Shader field.

Of course, once the material has been copied and is editable, the drop-down box here can also be used to select other Shaders.

As for what a Shader is, this article won't go into depth; it is just mentioned in passing.

The overall process

Above we covered three components:

  1. Camera component, which defines the projection
  2. Mesh Filter component, which holds a reference to the model's Mesh asset
  3. Mesh Renderer component, which is responsible for rendering the object's Mesh

Each frame, Unity collects the enabled Mesh Renderer components in all scenes and removes the parts that are not within the visible range of the Camera component.

The Mesh Renderer component provides Unity with the Material data and Mesh data needed to render the object; Unity combines these with the Camera's projection matrix and passes everything to the GPU. The GPU processes the Mesh, converting each position on it to screen coordinates through the projection matrix, while the Material data provides the color information. Finally, our picture is presented.
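The per-frame flow just described can be sketched in plain Python. This is a deliberately simplified model with made-up names (dicts standing in for GameObjects, a vertex list standing in for a Mesh), not Unity's actual engine code:

```python
import math

def perspective(fov_deg, aspect, near, far):
    # Projection matrix built from the Camera's Projection settings.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def to_ndc(m, p):
    v = (p[0], p[1], p[2], 1.0)
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    return [out[i] / out[3] for i in range(3)]

def render_frame(objects, projection):
    # Each dict mimics a GameObject: an enabled flag on its Mesh
    # Renderer, a Mesh (vertex list) from its Mesh Filter, and a
    # material color from its Mesh Renderer.
    drawn = []
    for obj in objects:
        if not obj["renderer_enabled"]:       # disabled Mesh Renderer: skipped
            continue
        for vertex in obj["mesh"]:            # Mesh comes via the Mesh Filter
            ndc = to_ndc(projection, vertex)
            if all(-1 <= c <= 1 for c in ndc):     # clip what the camera can't see
                drawn.append((ndc[0], ndc[1], obj["color"]))  # Material gives color
    return drawn

proj = perspective(60, 16 / 9, 0.3, 10)
scene = [
    {"renderer_enabled": True,  "mesh": [(0, 0, -5), (0, 0, -50)], "color": "gray"},
    {"renderer_enabled": False, "mesh": [(0, 0, -5)],              "color": "red"},
]
print(render_frame(proj and scene, proj) if False else render_frame(scene, proj))
# Only the visible vertex of the enabled object survives.
```

The disabled renderer is skipped entirely (our unchecked-checkbox experiment), the vertex beyond the Far plane is clipped, and only the surviving point reaches the "screen" with its material color attached.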

Next chapter

Having covered the basic rendering-related components in this chapter, we now have a general picture of where the scene controls our visual output. There is still a long way to go before we can achieve all the picture effects we imagine, and rendering-related content will be explained in a separate series.

In the next chapter, we will temporarily set rendering aside to further enhance our control over the scene: we will learn how to create objects dynamically and how to load resources dynamically. Prefab, Unity's most important concept for managing resources, will also be explained there.


Origin: blog.csdn.net/z175269158/article/details/129932609