Unity3D Study Notes 12 - Render Textures

1. Overview

In the article "Unity3D Study Notes 11 - Post-processing", we saw that post-processing is one implementation of framebuffer techniques; another is the render texture. Normally, the scene we render is written directly into the screen's color buffer, but a texture is two-dimensional just like the screen, so by rendering the scene into a texture instead, many special 3D effects become possible. In a 3D rendering engine, the camera is usually encapsulated with a render target (Render Target) interface: if it is left unset, the camera renders to the screen; if it is set to a texture object, the camera renders into that texture.

2. Detailed discussion

A classic example of a render texture is the mirror effect. The principle is this: besides rendering the scene normally, the scene in front of the mirror is rendered off-screen into an extra texture; this render texture is then applied to the mirror object and drawn flipped left to right.

The case is so simple that it doesn't even need a script. First we create a quad mesh as the mirror and place some 3D objects in front of it:
imglink1

Then create a render texture:
imglink2

Then create a second camera in the scene to render to the texture. Set this camera's render target to the render texture just created, and adjust its position and rotation so that it faces opposite the main camera's viewing direction:
imglink4

Assign the mirror object a material that uses the following shader:

Shader "Custom/Mirror"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
   
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;    
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                // Flip the horizontal texture coordinate so the
                // render texture appears mirrored left to right.
                o.uv.x = 1 - o.uv.x;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv);         
                return col;
            }
            ENDCG
        }
    }
}

The shader itself is very simple: it samples the render texture passed in through _MainTex, but with the horizontal texture coordinate inverted, which yields a left-right mirrored image:
imglink3
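The flip can be sanity-checked outside the shader: sampling a texture at u' = 1 - u is equivalent to flipping the image horizontally. A minimal NumPy sketch, where the 4x4 toy texture and the nearest-neighbour sampler are illustrative assumptions, not Unity API:

```python
import numpy as np

# A toy 4x4 "texture": each texel stores its own column index, so the
# horizontal orientation is easy to see.
tex = np.tile(np.arange(4), (4, 1))

def sample(texture, u, v):
    """Nearest-neighbour sample at normalized UV coordinates (u, v)."""
    h, w = texture.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

# Re-sample the texture, flipping U exactly as the vertex shader does
# (o.uv.x = 1 - o.uv.x). Sampling at texel centres avoids edge cases.
flipped = np.array([
    [sample(tex, 1 - (x + 0.5) / 4, (y + 0.5) / 4) for x in range(4)]
    for y in range(4)
])

# The result matches a left-right flip of the original image.
assert np.array_equal(flipped, np.fliplr(tex))
```

This is the same transformation the fragment stage sees after the vertex shader rewrites `o.uv.x`; only the coordinate changes, the texture data itself is untouched.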

3. Questions

Most articles introducing render textures cover essentially this case, but on its own it is only mildly interesting. Two points deserve closer attention:

  1. In this case, the render texture depends on a second camera, and that camera's position and rotation determine the final mirror image. We can tweak them by eye until the result looks right, but it is better to compute the correct parameters from the imaging principle of a mirror: the virtual camera is the main camera reflected across the mirror plane.
  2. A render texture effectively renders the scene a second time through the extra camera, so draw calls roughly double and render textures are often performance-intensive. When some objects should appear in the mirror and others should not, Unity's Layer settings (together with the camera's culling mask) are the way to control it.
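Point 1 can be made concrete: the mirror camera should be the main camera's reflection across the mirror plane. A minimal NumPy sketch of that reflection math, where the plane, positions, and directions are made-up example values:

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect a world-space point across a plane: p' = p - 2((p-o)·n)n."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def reflect_dir(d, plane_normal):
    """Reflect a direction vector across a plane: d' = d - 2(d·n)n."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return d - 2.0 * np.dot(d, n) * n

# Hypothetical mirror: the plane z = 0, normal facing the scene (+z).
mirror_point = np.array([0.0, 0.0, 0.0])
mirror_normal = np.array([0.0, 0.0, 1.0])

# Main camera in front of the mirror, looking toward it.
cam_pos = np.array([0.0, 1.0, 5.0])
cam_forward = np.array([0.0, 0.0, -1.0])

# The mirror camera is the main camera's reflection across the plane.
mirror_cam_pos = reflect_point(cam_pos, mirror_point, mirror_normal)
mirror_cam_forward = reflect_dir(cam_forward, mirror_normal)

print(mirror_cam_pos)      # [0. 1. -5.]: behind the mirror plane
print(mirror_cam_forward)  # [0. 0. 1.]: looking back out through the mirror
```

In Unity, the resulting position and forward direction would be applied to the mirror camera's transform each frame; doing this per frame keeps the mirror correct as the main camera moves.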

code address

Origin blog.csdn.net/charlee44/article/details/126438057