A detailed explanation of physically based rendering (PBR) in Cocos Creator 3.0

With the release of Cocos Creator 3.0, Cocos Creator has finally been upgraded into a full 2D/3D game engine suitable for both 2D and 3D development. 3D games place high demands on visual quality and on customizing the rendering pipeline, and the PBR art workflow is the mainstream solution for next-generation graphics. Today we will analyze in detail how physically based rendering works in Cocos Creator.

1: How computers display colors

To discuss this, let's first look at how the human eye sees the world. The color the eye perceives from an object is the superposition of the object's self-illumination and the ambient light it reflects. Self-illumination refers to light emitted by the object itself, as with desk lamps and traffic lights. Generally speaking, objects in the natural world rarely emit light themselves; they reflect light from light sources (the sun, lamps). For example, we cannot see a book in a dark room without a light source; once the light is switched on, we can see it, because the book reflects light into our eyes and onto the retina. Why do objects that all reflect light from the same source (such as sunlight) look red in one case and blue in another? Because the material on an object's surface absorbs parts of the spectrum: a red object absorbs blue and green light and reflects red, while a green object absorbs red and blue and reflects green. We call this reflected color the object's intrinsic color (baseColor/Diffuse/Albedo). When an object reflects light coming straight from a light source, we call it direct reflection; when it reflects light bounced off the surfaces of other objects, we call it indirect reflection, as shown in the figure:

What makes the natural world look so real is that it is built from extremely complex layers of reflection.

How does a 3D game engine produce the color that reaches the camera? Essentially, it simulates the real world while making some trade-offs:

  1. Self-illumination is simulated with an emissive color or an emissive map.
  2. The object's intrinsic color is simulated with a base color map (BaseColor/Diffuse/Albedo).
  3. Reflection is simulated only for light coming directly from light sources (direct reflection); indirect reflection is not considered.
  4. Light that is blocked before it can be reflected (e.g., at the bottom of a bottle) is simulated with an ambient occlusion map.

The color the computer finally displays is the value computed from these components, roughly as sketched below.
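To make the trade-offs concrete, here is a minimal sketch of how the four components above combine into a final color, assuming simple RGB vectors; all names are illustrative, not engine API:

```typescript
// Illustrative composition of the four components listed above.
type Vec3 = [number, number, number];

const mul = (a: Vec3, b: Vec3): Vec3 => [a[0] * b[0], a[1] * b[1], a[2] * b[2]];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];

// (1) emissive, (2) albedo lit by (3) direct light, (4) dimmed by occlusion.
function finalColor(emissive: Vec3, albedo: Vec3, directLight: Vec3, ao: number): Vec3 {
  const reflected = mul(albedo, directLight); // direct reflection only
  return add(emissive, scale(reflected, ao)); // self-illumination is not occluded
}
```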

2: Empirical models vs. PBR physical rendering

  Of the four processes above, (1), (2), and (4) are all settled; only (3), reflected lighting, is the hardest part and the one that most affects the rendering result. It is also a central research focus in computer graphics. Light reflects in two ways: specular reflection and diffuse reflection, as shown in the figure:

To simulate the reflection of light, mainstream game development takes one of two directions: empirical models or physically based rendering (PBR). Let's start with empirical models. When handling reflection, they compute the lighting color with a formula derived from experience rather than physics. Diffuse reflection can be handled with the Lambert lighting model, while specular color can be handled with the Phong or Blinn-Phong models (this also comes up often in interviews: customized stylized pipelines, such as cartoon rendering, often use empirical models instead of PBR). Physically based rendering, by contrast, simulates real-world light reflection from physical principles, using a Bidirectional Reflectance Distribution Function (BRDF) and respecting energy conservation, so rendered objects look close to the real world. These algorithms are mature; we do not need to derive them ourselves and can simply plug the formulas in when writing the Shader. A sketch of the two empirical models follows.
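Here is a minimal sketch of the two empirical formulas just named, Lambert for diffuse and Blinn-Phong for specular, assuming simple vector helpers (illustrative code, not engine API):

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const mul = (a: Vec3, b: Vec3): Vec3 => [a[0] * b[0], a[1] * b[1], a[2] * b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.sqrt(dot(a, a)));

// Lambert diffuse: brightness falls off with the angle between the
// surface normal N and the light direction L.
function lambertDiffuse(N: Vec3, L: Vec3, albedo: Vec3, lightColor: Vec3): Vec3 {
  const nDotL = Math.max(dot(N, L), 0);
  return scale(mul(albedo, lightColor), nDotL);
}

// Blinn-Phong specular: a highlight driven by the half vector H between
// the light direction L and view direction V; shininess controls its size.
function blinnPhongSpecular(N: Vec3, L: Vec3, V: Vec3, lightColor: Vec3, shininess: number): Vec3 {
  const H = normalize(add(L, V));
  return scale(lightColor, Math.pow(Math.max(dot(N, H), 0), shininess));
}
```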

3: Normals, normal maps, and height maps

The normal is a crucial piece of data: it is needed when computing light reflection, as shown in the figure.

A 3D model is composed of triangular faces, the faces of edges, and the edges of vertices. Besides its position, each vertex also carries a normal, as shown in the figure:

During shading, we use normals to compute light reflection. As the figure above shows, every point on a triangle's surface reflects light, and every point needs a normal, but only the model's vertices store normals. Where does the normal at an arbitrary surface point come from? It is obtained by interpolation, as shown below:

The yellow normal is interpolated from the two blue normals, and the blue normals are interpolated from the green vertex normals; this is how every point on the surface gets a normal. Now let's talk about high-poly and low-poly models. To capture finer object detail, the more faces we use when modeling, the better the detail and the more delicate the lighting; however, more faces mean more computation and worse performance. How can we get better lighting detail without increasing the face count? A commonly used technique is normal mapping. Artists build a high-precision model; within each triangle of the low-poly model, the high-poly model has many more triangles and therefore many more vertex normals. These normals are baked into a texture. When shading each point of a low-poly triangle, instead of interpolating, we fetch the normal from this baked normal map, recovering the high-poly lighting detail. Normal mapping is standard for next-generation graphics. Besides normals, a height map works on a similar principle to add even more depth to the details. The image below compares the same model with a normal map and with a height map:

Left: no maps; middle: normal map; right: normal map + height map
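As a concrete illustration of the normal-map fetch described above, here is a minimal sketch assuming a tangent-space normal map and a per-point TBN (tangent, bitangent, normal) basis; all names are illustrative:

```typescript
type Vec3 = [number, number, number];

// Map an RGB texel (components in [0,1]) back to a unit-length normal,
// assuming the usual [0,1] -> [-1,1] encoding.
function unpackNormal(texel: Vec3): Vec3 {
  const n: Vec3 = [texel[0] * 2 - 1, texel[1] * 2 - 1, texel[2] * 2 - 1];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}

// Rotate the tangent-space normal into world space with the TBN basis;
// the result is used in lighting instead of the interpolated normal.
function tangentToWorld(n: Vec3, T: Vec3, B: Vec3, N: Vec3): Vec3 {
  return [
    n[0] * T[0] + n[1] * B[0] + n[2] * N[0],
    n[0] * T[1] + n[1] * B[1] + n[2] * N[1],
    n[0] * T[2] + n[1] * B[2] + n[2] * N[2],
  ];
}
```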

4: Ambient occlusion map

An example makes ambient occlusion easy to understand. The picture below shows a ceramic bottle viewed down through its mouth. Because the mouth is small, it blocks part of the light that would be reflected from the bottom, so seen from above the bottom should appear somewhat darkened, and that darkness feels more real (compare the bottle bottoms in the left and right images: on the right the bottom is darkened and barely visible, which looks more realistic than the left).

The picture below is the ambient occlusion map of this bottle:
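In shading, the sampled occlusion value simply attenuates the lighting at each point. A minimal sketch follows; note that by common convention AO mainly attenuates ambient/indirect light, which is an assumption here rather than something from this article:

```typescript
type Vec3 = [number, number, number];

// aoSample is a single channel read from the AO map:
// 1 = fully lit, 0 = fully occluded.
function applyOcclusion(lighting: Vec3, aoSample: number): Vec3 {
  return [lighting[0] * aoSample, lighting[1] * aoSample, lighting[2] * aoSample];
}
```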

5: The PBR art workflow

After all this preparation, we can finally explain the PBR art workflow. The PBR algorithm is fixed, and algorithm + data = effect, so all that remains is tuning the data. Building on the groundwork above, let's look at the pieces of data that drive a PBR result.
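As an aside, the "fixed algorithm" in most engines is a microfacet BRDF. A common textbook form (not necessarily the exact formulation Cocos uses) pairs a Lambertian diffuse term with a Cook-Torrance specular term:

$$
f(l, v) \;=\; \frac{c_{\mathrm{diff}}}{\pi} \;+\; \frac{D(h)\,F(v, h)\,G(l, v, h)}{4\,(n \cdot l)\,(n \cdot v)}
$$

where $D$ is the normal distribution function, $F$ the Fresnel term, $G$ the geometry (shadowing/masking) term, and $h$ the half vector between the light direction $l$ and view direction $v$.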

(1) Self-illumination: controlled by an emissive color or an emissive map;

(2) The object's intrinsic color: a base color value or a base color map (Albedo/Diffuse);

(3) Reflection: control of both specular and diffuse reflection;

(4) Ambient occlusion: an ambient occlusion map;

(5) Detail enhancement: a normal map and a height map.

Of the five kinds of data above, everything except reflection is basically settled by the art workflow and export. Next, let's look at how the reflection effect is tuned. Reflection splits into specular and diffuse, and PBR offers two methods of controlling it: one based on metallic + roughness, the other on specular (reflectance) + glossiness. The data of the two methods can be converted into each other, much like a color can be expressed either by its RGB components or by YUV brightness and chroma values and converted between the two; a sketch of one such conversion is given below.
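As an illustration of that interchangeability, here is a hedged sketch of converting metallic/roughness data to specular/glossiness, using the common 0.04 dielectric reflectance and gloss = 1 − roughness conventions (assumptions, not taken from this article):

```typescript
type Vec3 = [number, number, number];

const lerp3 = (a: Vec3, b: Vec3, t: number): Vec3 =>
  [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t, a[2] + (b[2] - a[2]) * t];

function metallicToSpecGloss(baseColor: Vec3, metallic: number, roughness: number) {
  const dielectricF0: Vec3 = [0.04, 0.04, 0.04];            // typical non-metal reflectance
  const specular = lerp3(dielectricF0, baseColor, metallic); // metals tint the highlight
  const diffuse = lerp3(baseColor, [0, 0, 0], metallic);     // pure metals have no diffuse
  const glossiness = 1 - roughness;
  return { diffuse, specular, glossiness };
}
```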

   The metallic + roughness method controls an object's reflectance by describing how metallic it is. If an object's surface is uniform, metallicity can be expressed as a single value: 0 for non-metals and a higher value for metals; the higher the metallicity, the stronger the reflection.

If the surface is more complex (say, a character wearing metal armor), a single value no longer suffices, and the per-pixel metallicity must be stored in a texture (a metallic map). Roughness controls how smooth the object's surface is, as follows:

For a uniform surface, roughness can likewise be a single value; for complex surfaces it too is stored in a texture.

Roughness and metallicity are each a single value, so each needs only one color channel (of RGBA). This lets metallicity and roughness be packed into one texture to save memory, as sketched below.
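A minimal sketch of reading such a packed texel; the channel layout here follows the common glTF convention (G = roughness, B = metallic), which is an assumption and varies by engine:

```typescript
interface Texel { r: number; g: number; b: number; a: number; } // each in [0,1]

function readMetallicRoughness(texel: Texel) {
  return {
    roughness: texel.g, // green channel, by this assumed convention
    metallic: texel.b,  // blue channel
    // r and a stay free for other data, e.g. ambient occlusion in r.
  };
}
```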

The specular + glossiness workflow instead adjusts the reflectance and glossiness of the object's surface directly, which maps more closely to the real-world situation, so control is flexible and the results are good.

So which method do we usually use in game development? Let's first list the advantages and disadvantages of the two workflows (another common interview question):

Metallic + roughness workflow

Advantages:
  1. Easier to author; each texture is independent
  2. Textures use less memory (single channel, can be packed)
  3. More widely adopted

Disadvantages:
  1. Edge artifacts are more noticeable

Specular + glossiness workflow

Advantages:
  1. Edge artifacts are less noticeable
  2. Flexible control

Disadvantages:
  1. That flexibility can break energy conservation and violate the PBR premise
  2. Textures are full RGB and occupy more memory

In game development we mostly use the metallic + roughness workflow. Once the workflow is settled, we still need a Shader that renders the final effect from the data. This Shader can be written and customized by ourselves, or we can use a built-in one: Unity ships built-in Shaders for both workflows, while Cocos and UE4 ship built-in Shaders for the metallic + roughness workflow.

6: Detailed explanation of Cocos PBR Shader parameters

  After the explanation above, the data required for PBR physical rendering is as follows (summarized as a data shape in the sketch after the list):

  1. Emissive color and emissive map;
  2. The object's intrinsic color: a baseColor/Diffuse/Albedo map;
  3. Metallic and roughness values and maps (optional); when maps are used they can usually be packed into one texture;
  4. Normal map and height map;
  5. Ambient occlusion map.
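As a summary, the five kinds of data can be written down as a plain data shape; this interface is illustrative only, not a Cocos engine type:

```typescript
type Color = [number, number, number];

interface PBRMaterialData {
  emissiveColor: Color;
  emissiveMap?: string;          // path/handle to an emissive texture
  albedoColor: Color;            // baseColor / Diffuse / Albedo
  albedoMap?: string;
  metallic: number;              // scalar fallback when no map is used
  roughness: number;
  metallicRoughnessMap?: string; // optional packed texture
  normalMap?: string;
  heightMap?: string;
  occlusionMap?: string;
}
```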

With this understood, the Cocos PBR Shader parameters are easy to read, as shown below:

[Material inspector screenshots. The labeled texture parameters are: normal map; object base color map; metallic and roughness maps; a packed metallic + roughness + ambient occlusion map; emissive map; and ambient occlusion map. The labeled value parameters are: object base color; occlusion strength; metallic and roughness values; and emissive color.]
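For reference, these parameters can also be assigned from script. `Material.setProperty` is real Cocos Creator 3.x API, but the property names below are assumptions based on the builtin-standard effect and should be verified against your engine version:

```typescript
import { Material, Texture2D, Color } from 'cc';

function setupPBRMaterial(mat: Material, albedo: Texture2D, normal: Texture2D, packed: Texture2D) {
  mat.setProperty('mainTexture', albedo); // base color map (assumed property name)
  mat.setProperty('normalMap', normal);   // normal map (assumed property name)
  mat.setProperty('pbrMap', packed);      // packed AO/roughness/metallic (assumed property name)
  mat.setProperty('emissive', new Color(0, 0, 0, 255)); // emissive color (assumed property name)
}
```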

Finally, here is an explanatory diagram of the data the official Shader computes with.

When authoring textures, artists can produce the corresponding maps according to this guide and use them with the Cocos PBR Shader. And because Cocos is open source, you can also study how its PBR Shader is implemented, laying a good foundation for customizing the rendering pipeline.
