Ray Marching for Volumetric Light Rendering

Ray marching a volumetric light effect (directional light)


Foreword

The core code of this demo comes from GitHub: https://github.com/AsehesL/VolumetricLight
This demo simplifies some of the original code (including the custom model-adjustment code), uses cubes in place of models throughout, and adds lighting calculations and a volumetric fog effect. The volumetric light here is computed for a directional (parallel) light.


Tip: the following is the main body of this article; the examples below are for reference.

1. What is volumetric light?

In games, when an occluding object is lit by a light source, the shafts of light that appear to radiate around it are called volumetric light. For example, when sunlight hits a tree, it passes through gaps in the leaves and forms visible beams. It is called volumetric light because, compared to the lighting in earlier games, the effect gives the light a visible sense of spatial volume, which feels more realistic to players. The term sometimes also refers to the technique used to achieve this effect.
[Figure: volumetric light example]

Game effects

[Figure: volumetric light in a game scene]

2. Implementation steps

The general process is as follows

[Figure: flowchart of the implementation steps]

1. First, let's implement the first two steps

Scene placement

[Figure: scene layout: main camera on the left, depth camera in the upper right, volumetric light volume in the white area]

The image above shows the approximate layout of our scene elements. First, define a volumetric light volume, which can simply be a cube; it occupies the white area. Next, we need two cameras. One is the main camera (on the left), which renders the scene elements. The other is a depth camera inside the light volume (upper right in the image above), which generates the depth information within the volumetric light region. Since we use a directional light here, the depth camera uses an orthographic projection. To make sure the camera captures exactly the volumetric light region, we set its near and far clipping planes and attributes such as its orthographic size. With this we obtain a depth map like the following:
[Figure: depth map rendered by the depth camera]

Depth map implementation

using UnityEngine;

/// <summary>
/// Manages the depth camera that renders the volumetric light's shadow map.
/// </summary>
public class VolumetricLightDepthCamera
{
    public Camera depthRenderCamera { get { return m_DepthRenderCamera; } }

    private Camera m_DepthRenderCamera;
    private RenderTexture m_ShadowMap;
    private Shader m_ShadowRenderShader;
    private const string shadowMapShaderPath = "Shaders/VolumetricLight/ShadowMapRenderer";
    private int m_InternalShadowMapID;
    private bool m_IsSupport = false;

    public bool CheckSupport()
    {
        m_ShadowRenderShader = Resources.Load<Shader>(shadowMapShaderPath);
        if (m_ShadowRenderShader == null || !m_ShadowRenderShader.isSupported)
            return false;
        m_IsSupport = true;
        return m_IsSupport;
    }

    public void InitCamera(VolumetricLight light)
    {
        if (!m_IsSupport)
            return;
        m_InternalShadowMapID = Shader.PropertyToID("internalShadowMap");

        if (m_DepthRenderCamera == null)
        {
            m_DepthRenderCamera = light.gameObject.GetComponent<Camera>();
            if (m_DepthRenderCamera == null)
                m_DepthRenderCamera = light.gameObject.AddComponent<Camera>();

            // The near/far clipping planes are used later to linearize depth.
            m_DepthRenderCamera.aspect = light.aspect;
            m_DepthRenderCamera.backgroundColor = new Color(0, 0, 0, 0);
            m_DepthRenderCamera.clearFlags = CameraClearFlags.SolidColor;
            m_DepthRenderCamera.depth = 0;
            m_DepthRenderCamera.farClipPlane = light.range;
            m_DepthRenderCamera.nearClipPlane = 0.01f;
            m_DepthRenderCamera.fieldOfView = light.angle;
            m_DepthRenderCamera.orthographic = light.directional;   // directional light => orthographic
            m_DepthRenderCamera.orthographicSize = light.size;
            m_DepthRenderCamera.cullingMask = light.cullingMask;
            // Render everything with a matching "RenderType" tag using the depth shader.
            m_DepthRenderCamera.SetReplacementShader(m_ShadowRenderShader, "RenderType");
        }

        if (m_ShadowMap == null)
        {
            int size = 0;
            switch (light.quality)
            {
                case VolumetricLight.Quality.High:
                case VolumetricLight.Quality.Middle:
                    size = 1024;
                    break;
                case VolumetricLight.Quality.Low:
                    size = 512;
                    break;
            }
            m_ShadowMap = new RenderTexture(size, size, 16);
            m_DepthRenderCamera.targetTexture = m_ShadowMap;
            Shader.SetGlobalTexture(m_InternalShadowMapID, m_ShadowMap);
        }
    }

    public void Destroy()
    {
        if (m_ShadowMap)
            Object.Destroy(m_ShadowMap);
        m_ShadowMap = null;
        if (m_ShadowRenderShader)
            Resources.UnloadAsset(m_ShadowRenderShader);
        m_ShadowRenderShader = null;
    }
}

Here, after adding the camera component, we configure the camera parameters, including the near and far clipping planes. The near clipping distance is set to 0.01 and the far clipping distance to range, which is the height range of our volumetric light. These two parameters matter because later we use them to recover linear depth in the depth camera's space. Finally, the key API is Camera.SetReplacementShader.
The official definition: after calling this function, the camera renders its view with the replacement shader. It takes two parameters, the replacement shader and a tag name. In effect, every object the camera renders whose shader carries a matching tag is drawn with the replacement shader instead.
[Figure: Unity documentation for Camera.SetReplacementShader]
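As a minimal sketch of how this API is used (the shader path is the one from this demo; the class name is ours):

using UnityEngine;

public class ReplacementExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        Shader depthShader = Resources.Load<Shader>("Shaders/VolumetricLight/ShadowMapRenderer");
        // Every object whose shader declares a "RenderType" tag is drawn with
        // the subshader of depthShader that carries the same tag value.
        cam.SetReplacementShader(depthShader, "RenderType");
        // Passing "" as the tag would replace shaders unconditionally;
        // cam.ResetReplacementShader() restores normal rendering.
    }
}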
Here we use the depth-map rendering shader, "ShadowMapRenderer", with RenderType as the tag. Let's take a look at this shader's implementation.

Shader "Hidden/ShadowMapRenderer"
{
    
    
	Properties
	{
    
     
		_MainTex("", 2D) = "white" {
    
    }
		_Cutoff("", Float) = 0.5
		_Color("", Color) = (1,1,1,1)
	}
	SubShader
	{
    
    
		Tags{
    
     "RenderType" = "Opaque" }
		Pass
		{
    
    
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#include "UnityCG.cginc"

			struct v2f
			{
    
    
				float4 vertex : SV_POSITION;
				float depth : TEXCOORD0;				
			};
				
			v2f vert(appdata_base v)
			{
    
    
				v2f o;
				UNITY_INITIALIZE_OUTPUT(v2f, o);
				o.vertex = UnityObjectToClipPos(v.vertex);
				o.depth = COMPUTE_DEPTH_01;			
				return o;
			}
				
			fixed4 frag(v2f i) : SV_Target
			{
    
    
				return EncodeFloatRGBA(i.depth);
			}
			ENDCG
		}
	}
}

The code here is simple: the RenderType tag marks the rendered objects as Opaque. To obtain each object's depth in the vertex shader, we use the COMPUTE_DEPTH_01 macro.
[Figure: definition of the COMPUTE_DEPTH_01 macro in UnityCG.cginc]
The depth of an object comes from its z coordinate in camera space. Unity's camera space, with the camera at the origin, is right-handed (the camera looks down the negative z axis), while world space is left-handed, so after the transform the z of anything visible becomes negative. The macro multiplies this z by _ProjectionParams.w, a built-in shader variable that stores 1/far (the reciprocal of the far clipping distance). Since camera-space depth runs from 0 to far, this maps it into the 0-1 range, and negating makes it positive: depth01 = -z_view * (1 / far). The fragment shader then simply calls EncodeFloatRGBA to pack the depth value into the RGBA output. In game we get a rendering like the following:
[Figure: the resulting depth map in game]

2. Next, let's implement the lighting calculation (steps 3 to 6)

[Figure: ray marching through the light volume]
Next we start the ray marching, which happens inside the white region of the image above. We define the marching in the light's clip space, which makes it convenient to use the depth map later. The step direction is obtained by subtracting the camera position from the current fragment position (both transformed into that space); we then march a fixed number of steps (the demo uses 64), computing the light intensity and the occlusion at each step. Lighting of the volumetric light = light scattering + light absorption.

The first is light scattering.

When light from a source passes through a medium, particles such as dust reflect it in all directions, and only part of it reaches the eye; this is what we loosely call the scattering of light. Here we use the Henyey-Greenstein phase function to compute the scattered intensity.
hg(a, g) = (1 - g^2) / (4π * (1 + g^2 - 2g * a)^1.5), where a = cos θ
The input a is the dot product (the cosine) of the view direction and the light direction, so the scattered intensity is largest in the backlight direction, when looking toward the light. The parameter g is the scattering coefficient: the larger g, the less the light is scattered and the brighter the beam; the smaller g, the more it is scattered and the darker the beam. We will not derive the formula itself here.
The corresponding code implementation is the hg() and phase() functions in the full volume light shader below.
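For intuition, here is a minimal CPU-side sketch of the same phase function (using UnityEngine.Mathf; the class and method names are ours, not part of the demo):

using UnityEngine;

public static class PhaseFunctions
{
    // Henyey-Greenstein phase function.
    // a = dot(viewDir, lightDir) = cos(theta); g = scattering coefficient.
    public static float HenyeyGreenstein(float a, float g)
    {
        float g2 = g * g;
        return (1f - g2) / (4f * Mathf.PI * Mathf.Pow(1f + g2 - 2f * g * a, 1.5f));
    }
}

For example, with g = 0.5 the function returns about 0.48 at a = 1 and about 0.02 at a = -1, which is the strong peak toward the light described above; g = 0 gives isotropic scattering, 1 / (4π) in every direction.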

Next comes light absorption

For the absorption of light we use the Lambert-Beer law (Beer's law). It is the basic law of light absorption and applies to all electromagnetic radiation and all absorbing substances, including gases, solids, liquids, molecules, atoms, and ions. For volumetric lights it gives a reliable way to compute transmittance from the density of the medium:
T = exp(-c * d): the transmitted intensity falls off exponentially, where c is the medium density and d is the distance the light travels.
[Figure: exponential falloff of transmittance under the Lambert-Beer law]
We can add a volumetric fog effect on top of this.
To produce volumetric fog we simulate the absorption of light by the varying media in the air, applying the Lambert-Beer law directly. The two parameters we need are a density and a distance. For the density, we take the current step position (its clip-space coordinates), offset it by a time-based value, and sample a 3D noise texture with it to obtain a randomized density; the distance is our current step depth. Substituting both into the law yields the volumetric fog effect.
[Figure: volumetric fog effect]
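A minimal CPU-side sketch of this absorption step, mirroring the shader shown later (the class and method names are ours):

using UnityEngine;

public static class Absorption
{
    // Lambert-Beer law: the fraction of light surviving after travelling
    // a given distance through a medium of a given density.
    public static float Transmittance(float density, float distance)
    {
        return Mathf.Exp(-density * distance);
    }
}

Per march step the shader effectively computes col *= Transmittance(shapeNoise.r * d, cdep / far), with the density taken from the animated 3D noise sample.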

The next step is to deal with the occlusion relationship

Next we compare depths to determine the occluded areas.
[Figure: occluded region found by depth comparison]
There are three occlusion cases for an object: no occlusion, partial occlusion, and full occlusion, and their visual results differ. Per pixel, only the unoccluded lighting along the camera direction is accumulated. What we compare is the depth of the current step position against the depth recorded by the depth camera.
[Figure: the three occlusion cases: none, partial, full]

Depth acquisition: obtaining the current step depth

The depth of the current step position is its z coordinate, but it lies in clip space, remapped into the [-1, 1] range, so we need to convert it back into a camera-space depth, pushing the z coordinate back into camera space. Since the depth camera here is orthographic, this is a simple linear remap. Let's first look at the orthographic projection matrix.
[Orthographic projection matrix; its z row gives] z_clip = (-2 * z_view - (far + near)) / (far - near)
We only need to pay attention to the z coordinate. Rearranging the terms, we get the following formula.
eye depth = -z_view = (z_clip * (far - near) + far + near) / 2
far and near are our far and near clipping distances, respectively.
[Figure: the LinearLightEyeDepth function; see the full shader below]

The 1 / internalProjectionParams.w term and the constant 0.01 here are the far and near clipping distances we set on the depth camera earlier (internalProjectionParams.w stores 1/far). We substitute the negated z value, since we want a positive depth, and obtain the camera-space depth of the current step position.
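As a sketch with explicit near/far parameters, the same remap reads as follows (mirroring the shader's LinearLightEyeDepth, where 1 / internalProjectionParams.w is far and 0.01 is near; the helper name is ours):

public static class OrthoDepth
{
    // Converts a negated clip-space z from an orthographic camera back to a
    // positive eye-space depth in [near, far]; the shader calls it with -curpos.z.
    public static float LinearEyeDepthOrtho(float negatedClipZ, float near, float far)
    {
        return (-negatedClipZ * (far - near) + far + near) / 2f;
    }
}

Sanity check: on the near plane z_clip = -1, so negatedClipZ = 1 and the function returns near; on the far plane it returns far.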

Depth value in camera space

We pass the depth map rendered by the depth camera into the shader (our lower-left image); the next step is to sample it. We call ComputeScreenPos(), passing in the clip-space coordinates to get screen coordinates, sample the 0-1 depth value at those coordinates, and map it into the 0-far range by multiplying by far.
[Figure: depth-map sampling code; see the snippet below]
Note that the code divides instead: internalProjectionParams.w stores 1/far, so dividing by it is the same as multiplying by far.
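For reference, a minimal sketch of what this mapping does, ignoring the platform-dependent y flip that the shader handles with pjuv.y = 1 - pjuv.y (the helper name is ours):

using UnityEngine;

public static class ScreenUV
{
    // Maps a clip-space position into a [0,1] screen UV: the same idea as
    // ComputeScreenPos followed by the divide by w in the shader.
    public static Vector2 ClipToScreenUV(Vector4 clipPos)
    {
        float u = (clipPos.x / clipPos.w) * 0.5f + 0.5f;
        float v = (clipPos.y / clipPos.w) * 0.5f + 0.5f;
        return new Vector2(u, v);
    }
}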

Comparing against the depth map

With the step depth and the depth-map depth known, we call the step() function. The resulting occlusion factor is 1 where the step point is unoccluded and 0 where it is fully occluded, and it is further attenuated by (1 - normalized depth). Multiplying it into the light contribution gives the volumetric light effect.

The code is as follows (example):

half cdep = LinearLightEyeDepth(-curpos.z);
curpos = ComputeScreenPos(curpos);
half2 pjuv = curpos.xy / curpos.w;
pjuv.y = 1 - pjuv.y;
half dep = DecodeFloatRGBA(tex2D(internalShadowMap, pjuv)) / internalProjectionParams.w;
float shadow = step(cdep, dep) * (1 - saturate(cdep * internalProjectionParams.w));

col += delta * shadow * phaseVal;
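Read as plain code, the shadow factor amounts to the following (a CPU-side paraphrase of the two lines above; the helper name is ours):

using UnityEngine;

public static class Occlusion
{
    // Mirrors step(cdep, dep) * (1 - saturate(cdep / far)): the first factor
    // is 1 while the step point lies in front of the occluder stored in the
    // depth map, and the second fades the contribution with depth.
    public static float ShadowFactor(float stepDepth, float mapDepth, float far)
    {
        float unoccluded = stepDepth <= mapDepth ? 1f : 0f;
        float fade = 1f - Mathf.Clamp01(stepDepth / far);
        return unoccluded * fade;
    }
}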

The overall code is as follows

Volume light shader

Shader "Unit/VolumetricLight"
{
    
    
	Properties
	{
    
    
	}
	SubShader
	{
    
    
		Tags {
    
     "RenderType" = "Transparent" "Queue" = "Transparent" "IgnoreProjector"="true" }
		LOD 100

		Pass
		{
    
    
			zwrite off
			blend srcalpha one
			colormask rgb
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#pragma multi_compile_fog
			#pragma multi_compile __ USE_COOKIE
			#pragma multi_compile VOLUMETRIC_LIGHT_QUALITY_LOW VOLUMETRIC_LIGHT_QUALITY_MIDDLE VOLUMETRIC_LIGHT_QUALITY_HIGH
			
			#include "UnityCG.cginc"

			#if  VOLUMETRIC_LIGHT_QUALITY_LOW
				#define RAY_STEP 16
			#elif VOLUMETRIC_LIGHT_QUALITY_MIDDLE
				#define RAY_STEP 32
			#elif VOLUMETRIC_LIGHT_QUALITY_HIGH
				#define RAY_STEP 64
			#endif

			struct appdata
			{
    
    
				float4 vertex : POSITION;
				float3 color : COLOR;
			};

			struct v2f
			{
    
    
				UNITY_FOG_COORDS(0)
				float4 vertex : SV_POSITION;
				float4 viewPos : TEXCOORD1;
				float4 viewCamPos : TEXCOORD2;
				float3 vcol : COLOR;
			};

			uniform float4 internalWorldLightColor;
			uniform float4 internalWorldLightPos;

			sampler2D internalShadowMap;
#ifdef USE_COOKIE
			sampler2D internalCookie;
#endif
			float4x4 internalWorldLightVP;
			float4 internalProjectionParams;
			float4x4 internalWorldLightMV;
			float4 m_InternalLightPosID;
			 float4 _phaseParams;
			  float d;
			sampler3D _noise3d;
			float m_noise_speed;
			float LinearLightEyeDepth(float z)
			{
    
    
				float oz = (-z*(1 / internalProjectionParams.w - 0.01) + 1 / internalProjectionParams.w + 0.01) / 2;
				///float pz = 1.0 / (internalProjectionParams.y * z + internalProjectionParams.z);
				return oz;
			}

			 // Henyey-Greenstein
            float hg(float a, float g) {
    
    
                float g2 = g * g;
                return (1 - g2) / (4 * 3.1415 * pow(1 + g2 - 2 * g * (a), 1.5));
            }

            float phase(float a) {
    
                  
                float hgBlend = hg(a, _phaseParams.x);
                return hgBlend * _phaseParams.w;
            }
			
			v2f vert (appdata v)
			{
    
    
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				UNITY_TRANSFER_FOG(o,o.vertex);

				//o.viewPos = float4(v.vertex.xyz, 1);

				o.viewPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1));
				o.viewCamPos = float4(_WorldSpaceCameraPos.xyz, 1);
			
				o.vcol = v.color;
				
				return o;
			}
			
			fixed4 frag (v2f i) : SV_Target
			{
    
    
				float delta = 2.0 / 64;
				float col = 0;
				float4 beginPjPos = mul(internalWorldLightMV, i.viewPos);
				beginPjPos = mul(internalWorldLightVP, beginPjPos);
				beginPjPos /= beginPjPos.w;

				float4 pjCamPos = mul(internalWorldLightMV, i.viewCamPos);
				pjCamPos = mul(internalWorldLightVP, pjCamPos);
				pjCamPos /= pjCamPos.w;

				float3 pjViewDir = normalize(beginPjPos.xyz - pjCamPos.xyz);
				

				float4 pjLightPos = mul(internalWorldLightMV, internalWorldLightPos);
				pjLightPos = mul(internalWorldLightVP, pjLightPos);
			//	float4 pjLightPos = mul(internalWorldLightVP, internalWorldLightPos);
                float cosAngle = dot(pjViewDir, pjLightPos.xyz);
				float phaseVal = phase(cosAngle);
				float speedShape = _Time.y * m_noise_speed;
				
				for (float k = 0; k< 64; k++) {
    
    
					float4 curpos = beginPjPos;
					float3 vdir = pjViewDir.xyz*k*delta;
					curpos.xyz += vdir;

					half cdep = LinearLightEyeDepth(-curpos.z);					
					curpos = ComputeScreenPos(curpos);
					half2 pjuv = curpos.xy / curpos.w;
					pjuv.y = 1- pjuv.y;
					half dep = DecodeFloatRGBA(tex2D(internalShadowMap, pjuv))/ internalProjectionParams.w;
					float shadow = step(cdep , dep) * (1 - saturate(cdep*internalProjectionParams.w));
					//float shadow =  (1 - saturate(cdep*internalProjectionParams.w));
					
					 col += delta * shadow *phaseVal ;


					float4 uvwShape = curpos + float4(speedShape, speedShape * 0.2,0, 0);
					float4 shapeNoise = tex3Dlod(_noise3d, uvwShape);
					float noise = shapeNoise.r * d;
					col *= exp(-noise  * cdep *internalProjectionParams.w)  ;	

					
				}
				//col = col *2  ;	

				return fixed4(col,col,col,1);
				// return shapeNoise;
			}
			ENDCG
		}
	}
}

Volumetric light control script (C#)

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

/// <summary>
/// Volumetric light rendering script
/// </summary>
public class VolumetricLight : MonoBehaviour
{
    public enum Quality
    {
        High,
        Middle,
        Low,
    }

    /// <summary>
    /// Whether this is a directional (parallel) light
    /// </summary>
    public bool directional
    {
        get { return this.m_Directional; }
        set { ResetDirectional(value); }
    }

    /// <summary>
    /// Shadow bias
    /// </summary>
    public float shadowBias
    {
        get { return this.m_ShadowBias; }
        set { ResetShadowBias(value); }
    }

    /// <summary>
    /// Medium density
    /// </summary>
    public float D
    {
        get { return this.d; }
        set { d = value; }
    }

    /// <summary>
    /// Render range
    /// </summary>
    public float range
    {
        get { return this.m_Range; }
        set { ResetRange(value); }
    }

    /// <summary>
    /// Light cone angle (non-directional light)
    /// </summary>
    public float angle
    {
        get { return this.m_Angle; }
        set { ResetAngle(value); }
    }

    /// <summary>
    /// Light area size (directional light)
    /// </summary>
    public float size
    {
        get { return this.m_Size; }
        set { ResetSize(value); }
    }

    public float aspect
    {
        get { return this.m_Aspect; }
        set { ResetAspect(value); }
    }

    /// <summary>
    /// Light color
    /// </summary>
    public Color color
    {
        get { return this.m_Color; }
        set { ResetColor(value, m_Intensity); }
    }

    /// <summary>
    /// Light intensity
    /// </summary>
    public float intensity
    {
        get { return m_Intensity; }
        set { ResetColor(m_Color, value); }
    }

    public Texture2D cookie
    {
        get { return m_Cookie; }
        set { ResetCookie(value); }
    }

    public Texture3D noise3d
    {
        get { return _noise3; }
        set { ResetNoise3D(value); }
    }

    public LayerMask cullingMask
    {
        get { return m_CullingMask; }
        set { ResetCullingMask(value); }
    }

    public Quality quality
    {
        get { return this.m_Quality; }
    }

    public bool vertexBased
    {
        get { return m_VertexBased; }
    }

    public Vector4 _PhaseParams
    {
        get { return _phaseParams; }
        set { _phaseParams = value; }
    }

    public float _NoiseSpeed
    {
        get { return _noiseSpeed; }
        set { _noiseSpeed = value; }
    }

    [SerializeField]
    private bool m_Directional;
    [SerializeField]
    private float m_ShadowBias;
    [SerializeField]
    private float m_Range;
    [SerializeField]
    private float m_Angle;
    [SerializeField]
    private float m_Size;
    [SerializeField]
    private float m_Aspect;
    [SerializeField]
    private Color m_Color = new Color32(255, 247, 216, 255);
    [SerializeField]
    private float m_Intensity;
    [SerializeField]
    private Texture2D m_Cookie;
    [SerializeField]
    private LayerMask m_CullingMask;
    [SerializeField]
    private Quality m_Quality;
    [SerializeField]
    private bool m_VertexBased;
    [SerializeField]
    private float m_Subdivision = 0.7f;
    [SerializeField]
    private Vector4 _phaseParams;
    [SerializeField]
    private Texture3D _noise3;
    [SerializeField]
    private float d;
    [SerializeField]
    private float _noiseSpeed;

    private VolumetricLightDepthCamera m_DepthCamera;

    private int m_InternalWorldLightVPID;
    private int m_InternalWorldLightMVID;
    private int m_InternalProjectionParams;
    private int m_InternalBiasID;
    private int m_InternalCookieID;
    private int m_InternalLightPosID;
    private int m_InternalLightPosID2;
    private int m_InternalLightColorID;
    private int m_PhaseParamsPosID;
    private int m_DID;
    private int m_NoiseDID;
    private int m_NoiseSpeedDID;

    private Matrix4x4 m_Projection;
    private Matrix4x4 m_WorldToCam;
    private Vector4 m_LightPos;

    private bool m_IsInitialized;

    void Start()
    {
        m_DepthCamera = new VolumetricLightDepthCamera();

        m_Subdivision = Mathf.Clamp(m_Subdivision, 0.1f, m_Range * 0.9f);

        if (!CheckSupport())
            return;

        m_InternalWorldLightVPID = Shader.PropertyToID("internalWorldLightVP");
        m_InternalWorldLightMVID = Shader.PropertyToID("internalWorldLightMV");
        m_InternalProjectionParams = Shader.PropertyToID("internalProjectionParams");
        m_InternalBiasID = Shader.PropertyToID("internalBias");
        m_InternalCookieID = Shader.PropertyToID("internalCookie");
        m_InternalLightPosID = Shader.PropertyToID("internalWorldLightPos");
        m_InternalLightPosID2 = Shader.PropertyToID("internalWorldLightPos2");
        m_InternalLightColorID = Shader.PropertyToID("internalWorldLightColor");
        m_PhaseParamsPosID = Shader.PropertyToID("_phaseParams");
        m_NoiseDID = Shader.PropertyToID("_noise3d");
        m_DID = Shader.PropertyToID("d");
        m_NoiseSpeedDID = Shader.PropertyToID("m_noise_speed");
        m_DepthCamera.InitCamera(this);

        m_Projection = m_DepthCamera.depthRenderCamera.projectionMatrix;
        Shader.SetGlobalMatrix(m_InternalWorldLightVPID, m_Projection);
        Shader.SetGlobalMatrix("internalProjectionInv", m_Projection.inverse);
        m_WorldToCam = m_DepthCamera.depthRenderCamera.worldToCameraMatrix;
        Shader.SetGlobalMatrix(m_InternalWorldLightMVID, m_WorldToCam);
        SetLightProjectionParams();
        Shader.SetGlobalFloat(m_InternalBiasID, m_ShadowBias);

        Shader.SetGlobalColor(m_InternalLightColorID, new Color(m_Color.r * m_Intensity, m_Color.g * m_Intensity, m_Color.b * m_Intensity, m_Color.a));
        if (m_Cookie && !m_VertexBased)
        {
            Shader.EnableKeyword("USE_COOKIE");
            Shader.SetGlobalTexture(m_InternalCookieID, m_Cookie);
        }
        else
            Shader.DisableKeyword("USE_COOKIE");
        if (_noise3)
            Shader.SetGlobalTexture(m_NoiseDID, _noise3);
        ResetQuality(m_Quality == Quality.Low, m_Quality == Quality.Middle, m_Quality == Quality.High);
        m_IsInitialized = true;
    }

    void OnDestroy()
    {
        if (m_DepthCamera != null)
            m_DepthCamera.Destroy();
        m_DepthCamera = null;
    }

    void OnPreRender()
    {
        if (!m_IsInitialized)
            return;
        if (m_Projection != m_DepthCamera.depthRenderCamera.projectionMatrix)
        {
            m_Projection = m_DepthCamera.depthRenderCamera.projectionMatrix;
            Shader.SetGlobalMatrix(m_InternalWorldLightVPID, m_Projection);
        }
        if (m_WorldToCam != m_DepthCamera.depthRenderCamera.worldToCameraMatrix)
        {
            m_WorldToCam = m_DepthCamera.depthRenderCamera.worldToCameraMatrix;
            Shader.SetGlobalMatrix(m_InternalWorldLightMVID, m_WorldToCam);
        }
        Shader.SetGlobalVector(m_PhaseParamsPosID, _phaseParams);
        Shader.SetGlobalFloat(m_DID, d);
        Shader.SetGlobalFloat(m_NoiseSpeedDID, _noiseSpeed);
        if (LightPosChange())
        {
            if (!m_Directional)
            {
                // Point/spot light: pass the world position (w = 1).
                // Cache the value so LightPosChange() can detect the next change.
                m_LightPos = new Vector4(transform.position.x, transform.position.y, transform.position.z, 1);
                Shader.SetGlobalVector(m_InternalLightPosID, m_LightPos);
            }
            else
            {
                // Directional light: pass the light direction (w = 0).
                m_LightPos = new Vector4(transform.forward.x, transform.forward.y, transform.forward.z, 0);
                Shader.SetGlobalVector(m_InternalLightPosID, m_LightPos);
                Shader.SetGlobalVector(m_InternalLightPosID2, new Vector4(transform.position.x, transform.position.y, transform.position.z, 0));
            }
        }
    }

    private bool CheckSupport()
    {
        if (m_DepthCamera == null)
            return false;
        if (!m_DepthCamera.CheckSupport())
            return false;
        return true;
    }

    private void ResetDirectional(bool directional)
    {
        if (m_Directional == directional) return;
        m_Directional = directional;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.orthographic = m_Directional;
    }

    private void ResetShadowBias(float shadowBias)
    {
        if (m_ShadowBias == shadowBias) return;
        m_ShadowBias = shadowBias;
        if (!m_IsInitialized) return;
        Shader.SetGlobalFloat(m_InternalBiasID, m_ShadowBias);
    }

    private void ResetRange(float range)
    {
        if (m_Range == range) return;
        m_Range = range;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.farClipPlane = m_Range;
        SetLightProjectionParams();
    }

    private void ResetAngle(float angle)
    {
        if (m_Angle == angle) return;
        m_Angle = angle;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.fieldOfView = m_Angle;
    }

    private void ResetSize(float size)
    {
        if (m_Size == size) return;
        m_Size = size;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.orthographicSize = m_Size;
    }

    private void ResetAspect(float aspect)
    {
        if (m_Aspect == aspect) return;
        m_Aspect = aspect;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.aspect = m_Aspect;
    }

    private void ResetColor(Color color, float intensity)
    {
        if (m_Color == color && m_Intensity == intensity) return;
        m_Color = color;
        m_Intensity = intensity;
        if (!m_IsInitialized) return;
        Shader.SetGlobalColor(m_InternalLightColorID, new Color(m_Color.r * m_Intensity, m_Color.g * m_Intensity, m_Color.b * m_Intensity, m_Color.a));
    }

    private void ResetCookie(Texture2D cookie)
    {
        if (m_Cookie == cookie) return;
        if (m_VertexBased) return;
        m_Cookie = cookie;
        if (!m_IsInitialized) return;
        if (m_Cookie && !m_VertexBased)
        {
            Shader.EnableKeyword("USE_COOKIE");
            Shader.SetGlobalTexture(m_InternalCookieID, m_Cookie);
        }
        else
            Shader.DisableKeyword("USE_COOKIE");
    }

    private void ResetNoise3D(Texture3D noise3d)
    {
        if (_noise3 == noise3d) return;
        _noise3 = noise3d;
        if (!m_IsInitialized) return;
        if (_noise3)
        {
            Shader.SetGlobalTexture(m_NoiseDID, _noise3);
        }
    }

    private void ResetCullingMask(LayerMask cullingMask)
    {
        if (m_CullingMask == cullingMask) return;
        m_CullingMask = cullingMask;
        if (!m_IsInitialized) return;
        if (m_DepthCamera != null) m_DepthCamera.depthRenderCamera.cullingMask = m_CullingMask;
    }

    private void ResetQuality(bool low, bool middle, bool high)
    {
        if (low)
            Shader.EnableKeyword("VOLUMETRIC_LIGHT_QUALITY_LOW");
        else
            Shader.DisableKeyword("VOLUMETRIC_LIGHT_QUALITY_LOW");
        if (middle)
            Shader.EnableKeyword("VOLUMETRIC_LIGHT_QUALITY_MIDDLE");
        else
            Shader.DisableKeyword("VOLUMETRIC_LIGHT_QUALITY_MIDDLE");
        if (high)
            Shader.EnableKeyword("VOLUMETRIC_LIGHT_QUALITY_HIGH");
        else
            Shader.DisableKeyword("VOLUMETRIC_LIGHT_QUALITY_HIGH");
    }

    private void SetLightProjectionParams()
    {
        float x = -1 + m_Range / 0.01f;
        Shader.SetGlobalVector(m_InternalProjectionParams, new Vector4(x, (m_Range - 0.01f) / (2 * m_Range * 0.01f), (m_Range + 0.01f) / (2 * m_Range * 0.01f), 1 / m_Range));
    }

    private bool LightPosChange()
    {
        if (m_LightPos.w == 1 && m_Directional)
            return true;
        if (m_LightPos.w == 0 && !m_Directional)
            return true;
        if (m_Directional)
        {
            if (m_LightPos.x != transform.forward.x)
                return true;
            if (m_LightPos.y != transform.forward.y)
                return true;
            if (m_LightPos.z != transform.forward.z)
                return true;
        }
        else
        {
            if (m_LightPos.x != transform.position.x)
                return true;
            if (m_LightPos.y != transform.position.y)
                return true;
            if (m_LightPos.z != transform.position.z)
                return true;
        }
        return false;
    }

    void OnDrawGizmosSelected()
    {
        if (m_Directional)
        {
            GizmosEx.DrawOrtho(transform, m_Aspect, m_Size, 0.01f, m_Range,
                new Color(0.5f, 0.5f, 0.5f, 0.7f));
        }
        else
        {
            GizmosEx.DrawPerspective(transform, m_Aspect, m_Angle, 0.01f, m_Range,
                new Color(0.5f, 0.5f, 0.5f, 0.7f));
        }
    }
}
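Finally, a hypothetical setup sketch showing how the pieces fit together; in the actual demo these parameters are tuned in the Inspector, and nothing below is part of the original project:

using UnityEngine;

public class VolumetricLightSetupExample : MonoBehaviour
{
    void Start()
    {
        // The light volume: a cube whose material uses the volume light shader.
        GameObject volume = GameObject.CreatePrimitive(PrimitiveType.Cube);
        volume.transform.localScale = new Vector3(5f, 10f, 5f);
        volume.GetComponent<Renderer>().material = new Material(Shader.Find("Unit/VolumetricLight"));

        // The light: VolumetricLight adds a depth Camera to its own GameObject
        // and renders the shadow map with the replacement shader.
        GameObject lightGo = new GameObject("VolumetricLight");
        lightGo.transform.position = new Vector3(0f, 10f, 0f);
        lightGo.transform.rotation = Quaternion.Euler(90f, 0f, 0f); // point straight down
        VolumetricLight vl = lightGo.AddComponent<VolumetricLight>();
        vl.directional = true;                            // orthographic depth camera
        vl.range = 10f;                                   // far clip plane of the depth camera
        vl.size = 5f;                                     // orthographic size
        vl.aspect = 1f;
        vl.intensity = 1f;
        vl.cullingMask = -1;                              // render everything into the depth map
        vl._PhaseParams = new Vector4(0.5f, 0f, 0f, 1f);  // g = 0.5, overall scale = 1
    }
}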



Summary

The above is my sharing on volumetric light; exchanges are welcome. I am also planning a topic on basic graphics concepts, so stay tuned for follow-up updates.

Origin: blog.csdn.net/weixin_39289457/article/details/124356098