Shaders Give Me a Headache Every Time - Starting from a Simple Feature

  Recently I needed a feature that renders, from the main camera's point of view, which areas a second camera can and cannot see, roughly as in the figure below: 

  Simply put, wherever the main camera's view overlaps the observer camera's view, mark what the observer camera can and cannot see. The principle is the same as shadow mapping: it comes down to a depth map and conversions through world-space coordinates. Features like this are always a sad story: the logic is simple, but getting it done in Unity3D is a lot of trouble...

  Principle: render the observer camera's depth into a RenderTexture. Then, for every fragment the main camera renders, reconstruct the fragment's world-space position and transform it into the observer camera's viewport. If the point lies inside the observer's field of view, convert the viewport coordinates to UVs, sample the stored depth from the RenderTexture at those UVs, convert the stored value back to an actual depth, and compare: if the fragment's depth is less than or equal to the depth from the depth map, it is visible to the observer; otherwise it is not.
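
  In shader terms, the per-fragment test looks roughly like the sketch below. This is my own illustration, not code from the original post: it assumes the observer's world-to-clip matrix _ObserverVP and its depth map _ObserverDepthTex (hypothetical names) are uploaded from script, and that the fragment's world position has already been reconstructed:

    // Minimal sketch of the visibility test. _ObserverVP and
    // _ObserverDepthTex are illustrative names, assumed to be set from C#.
    float4x4 _ObserverVP;
    sampler2D _ObserverDepthTex;
    float _cameraNear;   // set globally from script, see OnRenderImage below
    float _cameraFar;

    float ObserverVisibility(float3 worldPos)
    {
        // World space -> observer clip space
        float4 clipPos = mul(_ObserverVP, float4(worldPos, 1.0));
        // Points behind the observer can never be visible
        if (clipPos.w <= 0.0)
            return 0.0;
        // Perspective divide -> NDC in [-1, 1]
        float3 ndc = clipPos.xyz / clipPos.w;
        // Outside the observer's frustum counts as "not seen"
        if (abs(ndc.x) > 1.0 || abs(ndc.y) > 1.0)
            return 0.0;
        // NDC -> viewport UV in [0, 1]
        // (a platform-dependent UV flip may be needed here)
        float2 uv = ndc.xy * 0.5 + 0.5;
        // Depth stored by the pass below: (eyeDepth - near) / (far - near)
        float storedDepth = tex2D(_ObserverDepthTex, uv).r;
        // This fragment's depth from the observer, remapped the same way;
        // for a perspective projection clipPos.w is the eye-space depth
        float fragDepth = (clipPos.w - _cameraNear) / (_cameraFar - _cameraNear);
        // Visible if not behind the recorded surface (small bias against
        // acne, just like shadow mapping)
        return fragDepth <= storedDepth + 0.005 ? 1.0 : 0.0;
    }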

 

  First, how do we get the depth map? Whatever rendering path you use, you can mark a camera so that it generates a depth texture:

        _cam = GetComponent<Camera>();
        _cam.depthTextureMode |= DepthTextureMode.Depth;
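
  For context, these lines would live in a small component on the observer camera, together with the fields used in the snippets below; a minimal skeleton (the class name is mine, not from the post):

    using UnityEngine;
    
    [RequireComponent(typeof(Camera))]
    public class ObserverDepth : MonoBehaviour
    {
        private Camera _cam;
        private RenderTexture renderTexture;
        [SerializeField]
        private Material _material;   // assign the depth shader material in the Inspector
    
        private void Awake()
        {
            _cam = GetComponent<Camera>();
            // Ask Unity to generate _CameraDepthTexture for this camera
            _cam.depthTextureMode |= DepthTextureMode.Depth;
        }
    }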

  

  Then create a RenderTexture to hold the depth map; for higher precision, use a single-channel float format:

        renderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);
        renderTexture.hideFlags = HideFlags.DontSave;
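
  Since GetTemporary draws from a shared pool, it is worth releasing the texture when you are done with it; a minimal sketch of the cleanup (my addition, not from the post):

    private void OnDisable()
    {
        if (renderTexture != null)
        {
            // Return the temporary RenderTexture to Unity's pool
            RenderTexture.ReleaseTemporary(renderTexture);
            renderTexture = null;
        }
    }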

  

  After that, render the depth into the texture from the camera's post-processing callback: 

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_material && _cam)
        {
            // Pass the clip planes so the shader can remap depth explicitly
            Shader.SetGlobalFloat("_cameraNear", _cam.nearClipPlane);
            Shader.SetGlobalFloat("_cameraFar", _cam.farClipPlane);

            // Write the linearized depth into our single-channel texture
            Graphics.Blit(source, renderTexture, _material);
        }
        Graphics.Blit(source, destination);
    }
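
  For the main camera's shaders to run the visibility test sketched earlier, the depth map and the observer's view-projection matrix also have to be published somewhere. One hedged way is to set them globally right after the blit in OnRenderImage, reusing the hypothetical names from the sketch:

    // Expose the depth map and the observer's world-to-clip matrix
    // under the names assumed in the earlier sketch
    Shader.SetGlobalTexture("_ObserverDepthTex", renderTexture);
    // renderIntoTexture flip handling may need adjustment per platform
    Shader.SetGlobalMatrix("_ObserverVP",
        GL.GetGPUProjectionMatrix(_cam.projectionMatrix, false) * _cam.worldToCameraMatrix);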

  The material uses a simple shader that just reads out the depth: 

    sampler2D _CameraDepthTexture;
    uniform float _cameraFar;
    uniform float _cameraNear;
    
    // Remap an eye-space distance to [0, 1] over [near, far]
    float DistanceToLinearDepth(float d, float near, float far)
    {
        float z = (d - near) / (far - near);
        return z;
    }
    
    fixed4 frag(v2f i) : SV_Target
    {
        // Raw (non-linear) depth from the depth texture
        float depth  = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        // Convert to eye-space distance, then apply our explicit remap
        depth = LinearEyeDepth(depth);
        depth = DistanceToLinearDepth(depth, _cameraNear, _cameraFar);
        return float4(depth, depth, depth, 1);
    }
    
    

 

  One thing may look odd here: why return neither the raw depth from the depth map nor the normalized Linear01Depth(depth), but a depth remapped by hand? The reason will become clear below: the comparison happens while a different camera is rendering, so the remap can only rely on values we pass in ourselves.

  And why is the whole approach this roundabout? Blame the missing official documentation........ The API actually does have a method to bind RenderBuffers to a camera directly: 

        _cam.SetTargetBuffers(renderTexture.colorBuffer, renderTexture.depthBuffer);

  But there is no documentation and no examples, so who knows how to consume what it renders, or how renderTexture.depthBuffer is supposed to be passed to a shader as a Texture... I tried converting between them through IntPtr, as I had done before, and every attempt failed...

  So the only safe route left was to render the depth map out through a shader myself. The value returned by SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv) is an NDC depth, a non-linear value in the range [0, 1]. If you want to recover the actual depth while some other camera is rendering, you would have to implement LinearEyeDepth(float z) yourself:

inline float LinearEyeDepth(float z)
{
    // Inverts the projection's depth mapping using _ZBufferParams
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}

  The catch is that _ZBufferParams holds different values on different platforms, so reproducing it yourself is troublesome, and in my tests the results still came out wrong...

double x, y;

// OpenGL:
x = (1.0 - m_FarClip / m_NearClip) / 2.0;
y = (1.0 + m_FarClip / m_NearClip) / 2.0;

// Direct3D:
x = 1.0 - m_FarClip / m_NearClip;
y = m_FarClip / m_NearClip;

_ZBufferParams = float4(x, y, x / m_FarClip, y / m_FarClip);
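
  As a sanity check (my own derivation, not from the post): on Direct3D the values above give _ZBufferParams.z = 1/far - 1/near and _ZBufferParams.w = 1/near, so

    LinearEyeDepth(z) = 1.0 / ((1/far - 1/near) * z + 1/near)

  which correctly returns near at z = 0 and far at z = 1 for a standard (non-reversed) depth buffer.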

  

 

  What came out in the end is just the basic feature. There is no distance-based estimation when sampling the depth, so the edges are as jagged as hard shadow edges, and it also looks as if the texture has no anisotropic filtering; all of that still needs to be dealt with...

Origin: www.cnblogs.com/tiancaiwrk/p/11928333.html