Event System of UGUI Source Code Analysis in Unity (6)-RayCaster (Part 2)

Continuing from the previous article, this post introduces the remaining raycasters.

GraphicRaycaster

GraphicRaycaster inherits from BaseRaycaster and is its concrete implementation for UGUI elements. It requires a Canvas component on the same GameObject.

It is worth mentioning that GraphicRaycaster, PhysicsRaycaster and Physics2DRaycaster live in different directories: the latter two are placed in the EventSystem directory, while GraphicRaycaster sits in the UI directory. Perhaps the authors wanted to convey that GraphicRaycaster is only meant for UI use.

GraphicRaycaster relies mainly on the rectangle defined by each RectTransform for hit detection and depends very little on the camera.
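
A minimal sketch of that idea (not part of the UGUI source): RectTransformUtility.RectangleContainsScreenPoint is the helper the raycaster uses internally (it appears in the source below), while the target and camera fields here are assumptions made for the example.

using UnityEngine;

// Minimal sketch: the same rectangle-based hit test GraphicRaycaster performs for each Graphic.
public class RectHitTestExample : MonoBehaviour
{
    public RectTransform target;   // the UI element to test (assumption for this sketch)
    public Camera uiCamera;        // leave null for a Screen Space - Overlay canvas

    void Update()
    {
        if (target != null &&
            RectTransformUtility.RectangleContainsScreenPoint(target, Input.mousePosition, uiCamera))
        {
            Debug.Log("Pointer is over " + target.name);
        }
    }
}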

Usually, when we add a Canvas, this component is added by default, as shown in the figure:

(Figure: the GraphicRaycaster component shown in the Inspector alongside the Canvas.)

Panel properties

  • Ignore Reversed Graphics: ignore back-facing graphics (checked by default). A dot product decides whether the ray is hitting the graphic from behind; when checked, back-facing graphics are excluded from raycasting
  • Blocking Objects: the type of object that blocks the ray, i.e. the ray stops when it hits an object of the specified type and does not reach anything behind it
    • None: nothing blocks (the default)
    • Two D: blocked by 2D objects, meaning objects from the GameObject->2D Object menu, which must also have a 2D collider (Collider2D)
    • Three D: blocked by 3D objects, meaning objects from the GameObject->3D Object menu, which must also have a 3D collider (Collider)
    • All: Two D + Three D
  • Blocking Mask: a layer mask; only objects on the selected layers (Layer) participate in blocking. The default is all layers
  • The "blocking" above requires the blocking object to carry the corresponding collider (a 2D or 3D Collider); a configuration sketch follows this list
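
These options can also be set from script through the public properties shown in the source below. Here is a minimal sketch, assuming the script sits on the same GameObject as the Canvas; the blocking mask itself is a serialized field and is normally set in the Inspector.

using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: adjust GraphicRaycaster blocking options at runtime.
public class GraphicRaycasterSetupExample : MonoBehaviour
{
    void Start()
    {
        var raycaster = GetComponent<GraphicRaycaster>();   // lives next to the Canvas

        // Skip back-facing graphics (checked by default in the Inspector).
        raycaster.ignoreReversedGraphics = true;

        // Let both 2D and 3D colliders block UI rays; only meaningful for
        // Screen Space - Camera or World Space canvases.
        raycaster.blockingObjects = GraphicRaycaster.BlockingObjects.All;
    }
}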

Below is the relevant code.

[AddComponentMenu("Event/Graphic Raycaster")]
[RequireComponent(typeof(Canvas))]
public class GraphicRaycaster : BaseRaycaster
{
    protected const int kNoEventMaskSet = -1;
    public enum BlockingObjects
    {
        None = 0,
        TwoD = 1,
        ThreeD = 2,
        All = 3,
    }

    [FormerlySerializedAs("ignoreReversedGraphics")]
    [SerializeField] private bool m_IgnoreReversedGraphics = true;
    
    [FormerlySerializedAs("blockingObjects")]
    [SerializeField] private BlockingObjects m_BlockingObjects = BlockingObjects.None;

    public bool ignoreReversedGraphics { get {return m_IgnoreReversedGraphics; } set { m_IgnoreReversedGraphics = value; } }
    public BlockingObjects blockingObjects { get {return m_BlockingObjects; } set { m_BlockingObjects = value; } }

    [SerializeField]
    protected LayerMask m_BlockingMask = kNoEventMaskSet;

    private Canvas m_Canvas;
}

Properties, Fields and Methods

//---------------------------------------------------------
// Overrides BaseRaycaster's sort-order priority property
public override int sortOrderPriority
{
    get
    {
        // If the Canvas render mode is ScreenSpaceOverlay (i.e. it always stays on top of the screen), use the canvas sorting order to sort among multiple canvases
        // We need to return the sorting order here as distance will all be 0 for overlay.
        if (canvas.renderMode == RenderMode.ScreenSpaceOverlay)
            return canvas.sortingOrder;

        return base.sortOrderPriority;
    }
}

public override int renderOrderPriority
{
    get
    {
        // Same as above
        // We need to return the sorting order here as distance will all be 0 for overlay.
        if (canvas.renderMode == RenderMode.ScreenSpaceOverlay)
            return canvas.rootCanvas.renderOrder;

        return base.renderOrderPriority;
    }
}
//-----------------------------------------------------------------
// GraphicRaycaster relies mainly on the Canvas for its work
private Canvas m_Canvas;
private Canvas canvas
{
    get
    {
        if (m_Canvas != null)
            return m_Canvas;

        m_Canvas = GetComponent<Canvas>();
        return m_Canvas;
    }
}

// The camera used for casting rays
// If the Canvas render mode is ScreenSpaceOverlay, or ScreenSpaceCamera with no camera assigned, screen space is used (null is returned)
public override Camera eventCamera
{
    get
    {
        if (canvas.renderMode == RenderMode.ScreenSpaceOverlay || (canvas.renderMode == RenderMode.ScreenSpaceCamera && canvas.worldCamera == null))
            return null;

        return canvas.worldCamera != null ? canvas.worldCamera : Camera.main;
    }
}

Ray casting

Next comes the key and most complex part.

[NonSerialized] static readonly List<Graphic> s_SortedGraphics = new List<Graphic>();

// Cast a ray against the given graphics and collect every graphic the ray passes through
private static void Raycast(Canvas canvas, Camera eventCamera, Vector2 pointerPosition, IList<Graphic> foundGraphics, List<Graphic> results)
{
    int totalCount = foundGraphics.Count;
    for (int i = 0; i < totalCount; ++i)
    {
        Graphic graphic = foundGraphics[i];

        // ----------------------------
        // -- Graphic-related filtering conditions
        // depth == -1 means the graphic is not handled (i.e. drawn) by this Canvas
        // 
        if (graphic.depth == -1 || !graphic.raycastTarget || graphic.canvasRenderer.cull)
            continue;

        if (!RectTransformUtility.RectangleContainsScreenPoint(graphic.rectTransform, pointerPosition, eventCamera))
            continue;
        // ----------------------------

        // Ignore graphics whose z value is beyond the camera's far clip plane, so a graphic can opt out of raycasting by pushing its z value out of range
        if (eventCamera != null && eventCamera.WorldToScreenPoint(graphic.rectTransform.position).z > eventCamera.farClipPlane)
            continue;

        // Does the ray hit the graphic?
        if (graphic.Raycast(pointerPosition, eventCamera))
        {
            s_SortedGraphics.Add(graphic);
        }
    }

    // Sort by depth, descending
    s_SortedGraphics.Sort((g1, g2) => g2.depth.CompareTo(g1.depth));
    //      StringBuilder cast = new StringBuilder();
    totalCount = s_SortedGraphics.Count;
    for (int i = 0; i < totalCount; ++i)
        results.Add(s_SortedGraphics[i]);
    //      Debug.Log (cast.ToString());

    s_SortedGraphics.Clear();
}

// [public] Cast a ray for the given event data and collect all graphics the ray passes through
[NonSerialized] private List<Graphic> m_RaycastResults = new List<Graphic>();
public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
{
    if (canvas == null)
        return;

    // Collect all graphics managed by this canvas
    var canvasGraphics = GraphicRegistry.GetGraphicsForCanvas(canvas);
    if (canvasGraphics == null || canvasGraphics.Count == 0)
        return;

    int displayIndex;
    var currentEventCamera = eventCamera; // Propery can call Camera.main, so cache the reference

    // Choose the targetDisplay according to the canvas render mode
    if (canvas.renderMode == RenderMode.ScreenSpaceOverlay || currentEventCamera == null)
        displayIndex = canvas.targetDisplay;
    else
        displayIndex = currentEventCamera.targetDisplay;

    // Get the screen coordinates; multi-display output is supported
    var eventPosition = Display.RelativeMouseAt(eventData.position);
    if (eventPosition != Vector3.zero)
    {
        // Determine the targetDisplay from the screen coordinates
        // We support multiple display and display identification based on event position.
        int eventDisplayIndex = (int)eventPosition.z;

        // Discard events that are not on the current targetDisplay
        // Discard events that are not part of this display so the user does not interact with multiple displays at once.
        if (eventDisplayIndex != displayIndex)
            return;
    }
    else
    {
        // The multiple display system is not supported on all platforms, when it is not supported the returned position
        // will be all zeros so when the returned index is 0 we will default to the event data to be safe.
        eventPosition = eventData.position;

        // We dont really know in which display the event occured. We will process the event assuming it occured in our display.
    }

    // Convert to viewport coordinates
    // Convert to view space
    Vector2 pos;
    if (currentEventCamera == null)
    {
        // Multiple display support only when not the main display. For display 0 the reported
        // resolution is always the desktops resolution since its part of the display API,
        // so we use the standard none multiple display method. (case 741751)
        float w = Screen.width;
        float h = Screen.height;
        if (displayIndex > 0 && displayIndex < Display.displays.Length)
        {
            w = Display.displays[displayIndex].systemWidth;
            h = Display.displays[displayIndex].systemHeight;
        }
        pos = new Vector2(eventPosition.x / w, eventPosition.y / h);
    }
    else
        pos = currentEventCamera.ScreenToViewportPoint(eventPosition);

    // Discard positions outside the viewport
    // If it's outside the camera's viewport, do nothing
    if (pos.x < 0f || pos.x > 1f || pos.y < 0f || pos.y > 1f)
        return;

    float hitDistance = float.MaxValue;

    // Create the ray
    Ray ray = new Ray();

    // Built from the camera when one is available
    if (currentEventCamera != null)
        ray = currentEventCamera.ScreenPointToRay(eventPosition);

    // 2D/3D blocking: record the blocking hit distance; anything farther than it is considered blocked
    if (canvas.renderMode != RenderMode.ScreenSpaceOverlay && blockingObjects != BlockingObjects.None)
    {
        float distanceToClipPlane = 100.0f;

        if (currentEventCamera != null)
        {
            float projectionDirection = ray.direction.z;
            distanceToClipPlane = Mathf.Approximately(0.0f, projectionDirection)
                ? Mathf.Infinity
                : Mathf.Abs((currentEventCamera.farClipPlane - currentEventCamera.nearClipPlane) / projectionDirection);
        }

        // Use reflection to get the 3D physics raycast interface
        if (blockingObjects == BlockingObjects.ThreeD || blockingObjects == BlockingObjects.All)
        {
            if (ReflectionMethodsCache.Singleton.raycast3D != null)
            {
                var hits = ReflectionMethodsCache.Singleton.raycast3DAll(ray, distanceToClipPlane, (int)m_BlockingMask);
                if (hits.Length > 0)
                    hitDistance = hits[0].distance;
            }
        }

        // Use reflection to get the 2D physics raycast interface
        if (blockingObjects == BlockingObjects.TwoD || blockingObjects == BlockingObjects.All)
        {
            if (ReflectionMethodsCache.Singleton.raycast2D != null)
            {
                var hits = ReflectionMethodsCache.Singleton.getRayIntersectionAll(ray, distanceToClipPlane, (int)m_BlockingMask);
                if (hits.Length > 0)
                    hitDistance = hits[0].distance;
            }
        }
    }

    // Collect all objects the ray passes through
    m_RaycastResults.Clear();
    Raycast(canvas, currentEventCamera, eventPosition, canvasGraphics, m_RaycastResults);

    int totalCount = m_RaycastResults.Count;
    for (var index = 0; index < totalCount; index++)
    {
        var go = m_RaycastResults[index].gameObject;
        bool appendGraphic = true;

        // Use a dot product to decide whether back-facing graphics take part in raycasting
        if (ignoreReversedGraphics)
        {
            if (currentEventCamera == null)
            {
                // If we dont have a camera we know that we should always be facing forward
                var dir = go.transform.rotation * Vector3.forward;
                appendGraphic = Vector3.Dot(Vector3.forward, dir) > 0;
            }
            else
            {
                // If we have a camera compare the direction against the cameras forward.
                var cameraFoward = currentEventCamera.transform.rotation * Vector3.forward;
                var dir = go.transform.rotation * Vector3.forward;
                appendGraphic = Vector3.Dot(cameraFoward, dir) > 0;
            }
        }

        // Append the graphic if it passed the facing check
        if (appendGraphic)
        {
            float distance = 0;

            if (currentEventCamera == null || canvas.renderMode == RenderMode.ScreenSpaceOverlay)
                distance = 0;
            else
            {
                // Discard objects behind the camera
                Transform trans = go.transform;
                Vector3 transForward = trans.forward;
                // http://geomalgorithms.com/a06-_intersect-2.html
                distance = (Vector3.Dot(transForward, trans.position - currentEventCamera.transform.position) / Vector3.Dot(transForward, ray.direction));

                // Check to see if the go is behind the camera.
                if (distance < 0)
                    continue;
            }

            // Discard the object if the hit point is farther than the blocking hit distance
            if (distance >= hitDistance)
                continue;

            // Package the raycast result
            var castResult = new RaycastResult
            {
                gameObject = go,
                module = this,
                distance = distance,
                screenPosition = eventPosition,
                index = resultAppendList.Count,
                depth = m_RaycastResults[index].depth,
                sortingLayer = canvas.sortingLayerID,
                sortingOrder = canvas.sortingOrder
            };
            resultAppendList.Add(castResult);
        }
    }
}
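
In practice we rarely call GraphicRaycaster.Raycast directly; the EventSystem drives it every frame. For debugging, the same path can be exercised through EventSystem.current.RaycastAll, which asks every registered raycaster (including this one) to append its results. A minimal sketch; the null check and the per-frame logging are just for illustration.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: ask all registered raycasters what is under the mouse.
public class ManualRaycastExample : MonoBehaviour
{
    void Update()
    {
        if (EventSystem.current == null)
            return;

        var pointerData = new PointerEventData(EventSystem.current)
        {
            position = Input.mousePosition
        };

        var results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(pointerData, results);   // GraphicRaycaster.Raycast runs in here

        foreach (var result in results)
            Debug.Log(result.gameObject.name + "  depth=" + result.depth + "  distance=" + result.distance);
    }
}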

Graphic-related content will be covered in a later article.

PhysicsRaycaster

When we need to receive events on 3D objects, a PhysicsRaycaster component must exist in the scene.

PhysicsRaycaster relies on the camera for ray detection; apart from the detection itself, it is structured much like GraphicRaycaster.
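
A minimal setup sketch, assuming the scene already has an EventSystem; the component and object names are only for illustration. The camera gets a PhysicsRaycaster, and the 3D object needs a Collider plus an event handler:

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: receive pointer clicks on a 3D object through PhysicsRaycaster.
// The object itself needs a 3D Collider.
public class CubeClickHandler : MonoBehaviour, IPointerClickHandler
{
    void Start()
    {
        // Make sure the main camera can cast rays into the 3D scene.
        if (Camera.main != null && Camera.main.GetComponent<PhysicsRaycaster>() == null)
            Camera.main.gameObject.AddComponent<PhysicsRaycaster>();
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log(name + " was clicked at " + eventData.pointerCurrentRaycast.worldPosition);
    }
}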

Panel properties

  • Event Mask: a layer mask, similar to the camera's culling mask, that determines which layers are tested. It is combined with the camera's culling mask by a bitwise AND, so an object must also be "seen" by the camera to be detected. 0 stands for "Nothing", -1 stands for "Everything"
  • Max Ray Intersections: the maximum number of ray hits, i.e. how many objects a single ray may report. The default is 0, which means unlimited; that version has to allocate extra memory in the unmanaged (C++) heap, because the C# side cannot know the count in advance. Any other (positive) value only allocates a fixed array in the managed heap. It cannot be negative, because the value is used to size that array and a negative value throws an error. A configuration sketch follows this list
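
A minimal configuration sketch for these two properties, assuming the raycaster sits on the main camera and a layer named "Interactable" exists (both assumptions are only for this example):

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: restrict PhysicsRaycaster to one layer and cap the hit count.
public class PhysicsRaycasterConfigExample : MonoBehaviour
{
    void Start()
    {
        var raycaster = Camera.main.GetComponent<PhysicsRaycaster>();
        if (raycaster == null)
            return;

        // Only objects on the "Interactable" layer (and visible to the camera) are tested.
        raycaster.eventMask = LayerMask.GetMask("Interactable");

        // Cap the number of hits; a fixed managed array of this size gets reused,
        // avoiding the unmanaged allocation of the unlimited (0) version.
        raycaster.maxRayIntersections = 8;
    }
}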


The following is the relevant code for these properties; how they are used is shown in the detection section below.

[AddComponentMenu("Event/Physics Raycaster")]
[RequireComponent(typeof(Camera))]
public class PhysicsRaycaster : BaseRaycaster
{
    /// Default value of the event mask
    protected const int kNoEventMaskSet = -1;
    [SerializeField] protected LayerMask m_EventMask = kNoEventMaskSet;

    /// Maximum number of ray hits; 0 means unlimited and allocates in the unmanaged (C++) heap, any other positive value allocates in the managed (C#) heap
    [SerializeField] protected int m_MaxRayIntersections = 0;
    protected int m_LastMaxRayIntersections = 0;

    /// Hit results
    RaycastHit[] m_Hits;

    /// The event mask bitwise-ANDed with the camera's culling mask
    public int finalEventMask
    {
        get { return (eventCamera != null) ? eventCamera.cullingMask & m_EventMask : kNoEventMaskSet; }
    }

    /// eventMask property
    public LayerMask eventMask
    {
        get { return m_EventMask; }
        set { m_EventMask = value; }
    }

    /// maxRayIntersections property
    public int maxRayIntersections
    {
        get { return m_MaxRayIntersections; }
        set { m_MaxRayIntersections = value; }
    }
    
    /// The camera: used with the mask to decide which objects are detectable, to cast the ray, and to compute the distance from the ray origin to the clip plane (clipPlane)
    protected Camera m_EventCamera;
    public override Camera eventCamera
    {
        get
        {
            if (m_EventCamera == null)
                m_EventCamera = GetComponent<Camera>();
            return m_EventCamera ?? Camera.main;
        }
    }
}

Ray detection

The general idea is to cast a ray from the camera and compute the distance from the ray origin to the clip plane, then hand both to the physics module (Physics) for the actual ray detection.

Depending on the maximum number of ray hits, different physics-module interfaces are called to return the hit results.

Here is the relevant code, the C# part:

// Cast the ray and compute the distance. Note that the camera matters a lot here: different camera views can produce different hit results
protected void ComputeRayAndDistance(PointerEventData eventData, out Ray ray, out float distanceToClipPlane)
{
    ray = eventCamera.ScreenPointToRay(eventData.position);
    // compensate far plane distance - see MouseEvents.cs
    float projectionDirection = ray.direction.z;
    
    // If the ray direction's z component is (almost) zero, treat the distance to the clip plane as infinite
    distanceToClipPlane = Mathf.Approximately(0.0f, projectionDirection)
        ? Mathf.Infinity
        : Mathf.Abs((eventCamera.farClipPlane - eventCamera.nearClipPlane) / projectionDirection);
}

public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
{
    // Discard positions outside the camera's view rect
    // Cull ray casts that are outside of the view rect. (case 636595)
    if (eventCamera == null || !eventCamera.pixelRect.Contains(eventData.position))
        return;

    Ray ray;
    float distanceToClipPlane;
    ComputeRayAndDistance(eventData, out ray, out distanceToClipPlane);

    int hitCount = 0;

    // ====================================================
    // -- Call different physics-module interfaces depending on the maximum hit count and return the hit results
    if (m_MaxRayIntersections == 0)
    { // Equal to 0: accept an unlimited number of hit objects
        
        // Detect all objects through the physics module's raycast-all interface
        // The underlying call is PhysicsManager.RaycastAll
        if (ReflectionMethodsCache.Singleton.raycast3DAll == null)
            return;

        // Return the hit results
        m_Hits = ReflectionMethodsCache.Singleton.raycast3DAll(ray, distanceToClipPlane, finalEventMask);
        hitCount = m_Hits.Length;
    }
    else
    { // Non-zero: accept a limited number of hit objects
        
        // Detect objects through the physics module's non-allocating interface
        // The underlying call is PhysicsManager.Raycast
        if (ReflectionMethodsCache.Singleton.getRaycastNonAlloc == null)
            return;

        // With a limited hit count, the result array is pre-allocated at the maximum size; if the count is unchanged since last time, no re-allocation is needed
        if (m_LastMaxRayIntersections != m_MaxRayIntersections)
        {
            m_Hits = new RaycastHit[m_MaxRayIntersections];
            m_LastMaxRayIntersections = m_MaxRayIntersections;
        }

        // Returns the actual number of hits
        hitCount = ReflectionMethodsCache.Singleton.getRaycastNonAlloc(ray, m_Hits, distanceToClipPlane, finalEventMask);
    }
    // ====================================================

    // Sort by distance, ascending (near to far)
    if (hitCount > 1)
        System.Array.Sort(m_Hits, (r1, r2) => r1.distance.CompareTo(r2.distance));

    // Return the detection results
    if (hitCount != 0)
    {
        for (int b = 0, bmax = hitCount; b < bmax; ++b)
        {
            var result = new RaycastResult
            {
                gameObject = m_Hits[b].collider.gameObject,
                module = this,
                distance = m_Hits[b].distance,
                worldPosition = m_Hits[b].point,
                worldNormal = m_Hits[b].normal,
                screenPosition = eventData.position,
                index = resultAppendList.Count,
                sortingLayer = 0,
                sortingOrder = 0
            };
            resultAppendList.Add(result);
        }
    }
}

C++ part (for copyright reasons, only part of the code is posted):

// Unlimited-count version
const PhysicsManager::RaycastHits& PhysicsManager::RaycastAll (const Ray& ray, float distance, int mask)
{
	// ....
    // A static vector is created here; it lives in the unmanaged heap and is never released
	static vector<RaycastHit> hits;
	// ....
    
    RaycastCollector collector;
	collector.hits = &hits;
    GetDynamicsScene ().raycastAllShapes ((NxRay&)ray, collector, NX_ALL_SHAPES, mask, distance);

	return hits;
}

// Limited-count version (I could not find the implementation; this is my own guess)
int PhysicsManager::Raycast (const Ray& ray, RaycastHit* outHits, int outHitsSize, float distance, int mask)
{
	// ....
	vector<RaycastHit> hits;

	RaycastCollector collector;
	collector.hits = &hits;
	GetDynamicsScene().raycastAllShapes((NxRay&)ray, collector, NX_ALL_SHAPES, mask, distance);

	int resultCount = hits.size();
	const int allowedResultCount = std::min(resultCount, outHitsSize);
	for (int index = 0; index < allowedResultCount; ++index)
		*(outHits++) = hits[index];

	return allowedResultCount;
}

Physics2DRaycaster

Physics2DRaycaster inherits from PhysicsRaycaster and is largely identical; the difference is that it talks to the 2D physics module. Only the key code is shown here.

[AddComponentMenu("Event/Physics 2D Raycaster")]
[RequireComponent(typeof(Camera))]
public class Physics2DRaycaster : PhysicsRaycaster
{
    public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
    {
 		// ...
        if (maxRayIntersections == 0)
        {
            if (ReflectionMethodsCache.Singleton.getRayIntersectionAll == null)
                return;

            // A different interface is used here
            m_Hits = ReflectionMethodsCache.Singleton.getRayIntersectionAll(ray, distanceToClipPlane, finalEventMask);
            hitCount = m_Hits.Length;
        }
        else
        {
            if (ReflectionMethodsCache.Singleton.getRayIntersectionAllNonAlloc == null)
                return;

            if (m_LastMaxRayIntersections != m_MaxRayIntersections)
            {
                m_Hits = new RaycastHit2D[maxRayIntersections];
                m_LastMaxRayIntersections = m_MaxRayIntersections;
            }

            // A different interface is used here
            hitCount = ReflectionMethodsCache.Singleton.getRayIntersectionAllNonAlloc(ray, m_Hits, distanceToClipPlane, finalEventMask);
        }
        
        // ...
    }
}
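
For completeness, the 2D side mirrors the 3D setup: a Physics2DRaycaster on the camera and a Collider2D on the sprite. A minimal sketch (names are only for illustration):

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: a sprite with a Collider2D receives clicks via Physics2DRaycaster.
public class SpriteClickHandler : MonoBehaviour, IPointerClickHandler
{
    void Start()
    {
        if (Camera.main != null && Camera.main.GetComponent<Physics2DRaycaster>() == null)
            Camera.main.gameObject.AddComponent<Physics2DRaycaster>();
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log(name + " (2D) was clicked.");
    }
}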

Summary

Today I introduced Unity's raycasters: GraphicRaycaster, PhysicsRaycaster, and Physics2DRaycaster.

It is worth noting that the two physics-based raycasters require the target object to carry the corresponding collider (Collider or Collider2D); no parent-child relationship is needed, and any such object visible to the scene camera can be hit. GraphicRaycaster, on the other hand, relies mainly on the Canvas rather than the camera, and the target object must be the Canvas itself or one of its children.

During the analysis I also found that Unity's 3D physics engine uses NVIDIA PhysX, while the 2D physics engine uses Box2D. I had always assumed Unity wrote them itself.

Perhaps because the recent articles go fairly deep, I have noticed that many readers are not too keen on them. I had suspected that developers who start out with Unity seldom feel a strong urge to dig into the lower layers, and that impression seems to be correct.

Personally, I still recommend setting aside some time to study the underlying code while happily building games with Unity: understanding the principles often saves us from detours and, in the end, lets us go further.

Next up is the last part of the event system: the core input modules. I will cover them in detail over several articles.

In short, take whatever you need. That's it for today; I hope it helps a little.

Origin blog.csdn.net/woodengm/article/details/123742357