[Unity] Virtual Reality VRTK Plug-in Tutorial (6): VR UI (the Event Handling Mechanisms of UGUI and VRTK)


UI

UI generally falls into three categories: 2D UI, 3D UI, and model UI.

  • 2D UI: fixed to the screen; it does not move as the character moves.
  • 3D UI: placed in the scene world; it is only visible when the camera moves in front of it.
  • Model UI: made from models placed in the scene, such as a name plate or health bar attached to a character model.

2D UI

Canvas settings

To use a 2D UI, first create a Canvas. A canvas has three render modes:

  1. Screen Space - Overlay (2D): do not select this in VR, otherwise the UI will not be displayed (it shows up in the simulator, but not in the actual headset).
  2. Screen Space - Camera (2D): use this mode for 2D UI in VR.
  3. World Space (3D): the canvas is placed in the 3D world.

In a VR game, a 2D UI must use Camera rendering. The Overlay mode is only displayed in the simulator; it will not be displayed on the real device.

Camera settings

After the canvas is created, create a Camera and drag it onto the canvas's Render Camera property.

Clear Flags: determines how the camera renders the empty parts of its view. This property is usually set to Depth Only, so the empty area is rendered according to depth, i.e. only what the player actually sees is shown. (Note: this setting only affects the area outside the UI, not the UI itself.)

Depth: the camera depth. This value must be greater than the depth of the VR camera, otherwise the UI cannot be displayed.

Culling Mask: the camera's layer mask. Set it to the UI layer so the UI camera renders only the UI and does no redundant rendering.
Note that the main camera paired with the UI camera (usually under the eye object) should exclude the UI layer, so the UI is not rendered twice.

Since VR uses both 2D UI and 3D UI, the built-in UI layer is usually not used; instead two extra layers are created, 2DUI and 3DUI, and the UI camera's Culling Mask here selects only the 2DUI layer.
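Below is a minimal configuration sketch of the camera settings described above, assuming a layer named 2DUI has been created in the project (the class and field names are illustrative, not part of VRTK):

```csharp
using UnityEngine;

// Sketch: apply the UI camera settings described above.
// Assumes a layer named "2DUI" exists in the project.
public class UICameraSetup : MonoBehaviour
{
    public Camera vrCamera; // the main (eye) camera

    void Awake()
    {
        Camera uiCamera = GetComponent<Camera>();
        uiCamera.clearFlags = CameraClearFlags.Depth;              // "Depth Only"
        uiCamera.depth = vrCamera.depth + 1;                       // must exceed the VR camera's depth
        uiCamera.cullingMask = 1 << LayerMask.NameToLayer("2DUI"); // render only the 2D UI layer

        // Exclude the 2DUI layer from the main camera to avoid double rendering.
        vrCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("2DUI"));
    }
}
```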

Usage

Unlike ordinary 3D games, VR games rarely use a traditional 2D UI for things like player status or operation buttons. On the one hand, such elements usually sit at the edge of the screen, where the player's view cone cannot see them clearly. It is like staring at a landscape painting: you can tell there is an author's inscription at the edge, but you cannot read it without turning your gaze toward it; yet a 2D UI is fixed to the screen edge, so no matter how you turn your view, the UI stays at the edge. On the other hand, VR games emphasize immersion, and such UIs tend to break it. Given all these drawbacks, what is a 2D UI actually good for?

Scenario 1:
Scene switching: stretch a 2D UI over the entire camera view, make its content black, and fade in and out by animating the UI's transparency (see the sketch after this list).

Scenario 2:
Player injury: stretch a 2D UI over the entire camera view, make its content red, or use an image with blood stains; the effect is achieved by adjusting the UI's transparency.

Scenario 3:
Night-vision goggles: cover the entire camera view with a 2D UI, make its content green, and adjust its transparency. This needs to be combined with the scene's lighting.

Scenario 4:
Simulated lens: when the player's view looks outside through a glass window, a 2D UI can be used to show the glass shattering.

Scenario 5:
Simulated telescope: when the player raises a telescope to look into the distance, display a telescope-shaped UI mask and combine it with zooming the VR view.
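As referenced in Scenario 1, here is a minimal fade sketch that animates the alpha of a full-screen CanvasGroup; the overlay object and field names are assumptions for illustration:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: fade the view to black (or back) by animating a full-screen
// CanvasGroup sitting under the 2D UI canvas.
public class ScreenFader : MonoBehaviour
{
    public CanvasGroup blackOverlay; // full-screen black Image with a CanvasGroup
    public float duration = 1f;

    // Usage: StartCoroutine(fader.FadeTo(1f)) to fade out before switching
    // scenes, then StartCoroutine(fader.FadeTo(0f)) to fade back in.
    public IEnumerator FadeTo(float targetAlpha)
    {
        float startAlpha = blackOverlay.alpha;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            blackOverlay.alpha = Mathf.Lerp(startAlpha, targetAlpha, t / duration);
            yield return null;
        }
        blackOverlay.alpha = targetAlpha;
    }
}
```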

In short, 2D UI is applied differently in VR than in ordinary games, and its requirements should be decided according to the actual situation.

3D UI

Canvas settings

A 3D UI also needs its own canvas, and its name should be distinguished from the 2D UI canvas. Usually all UIs are placed under the same parent node.
The rendering mode of the 3D UI canvas is set to World Space.

Note: canvas sizes are measured in pixels, while 3D units are meters. To make the UI fit the scene, the usual approach is to set the canvas's Scale to 0.01, so that 100 pixels equal one meter and the UI content is not distorted.
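A small sketch of this setup in code, assuming the script sits on the canvas object:

```csharp
using UnityEngine;

// Sketch: configure a world-space canvas so that 100 px = 1 m.
[RequireComponent(typeof(Canvas))]
public class WorldSpaceCanvasSetup : MonoBehaviour
{
    void Awake()
    {
        Canvas canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        // 0.01 scale: a 100-pixel-wide element becomes one meter wide.
        transform.localScale = Vector3.one * 0.01f;
    }
}
```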

Interaction

VR UIs usually have to be built yourself, because UI plug-ins such as NGUI and UGUI do not support VR out of the box. The common approach is to extend an existing UI system; since UGUI is Unity's native one, most vendors extend UGUI. If resources permit, you can also develop your own.

To enable interaction, first add a VRTK_UICanvas component to the MainCanvas, then add a VRTK_UIPointer component to the controller that needs to interact. If desired, also add a VRTK_StraightPointerRenderer component to draw a ray, which makes aiming more intuitive.

In a shooting game, the default muzzle direction is usually off. In that case, create a new child node under the hand controller and put the VRTK_UIPointer and VRTK_StraightPointerRenderer components on it, setting the VRTK_UIPointer's Controller to the hand controller (newer versions of VRTK do not require this). Then drag the new node onto the controller's Pointer Renderer property to assign the renderer. Finally, rotate the new node to change the default pointing direction. The purpose of this node is to operate the UI.

Tip: after tuning a component in Play mode, copy the component, stop the game, and then paste the component values, so the adjustments made in Play mode carry over to Edit mode.

In this setup, the VRTK_UIPointer component performs the actual interaction; the VRTK_StraightPointerRenderer merely draws a ray to make operation easier.

The controller can now interact. To test it, create a UI Button directly under MainCanvas, adjust its size, and place it in the scene; finally, adjust the button's highlight color, and the preparation is done. Run the game, press the controller's activation button (Touchpad Press by default), point the controller at the button, and see whether it changes color. If it does, the interaction works.
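To confirm the click actually fires, you can hang a plain UGUI listener on the test button; because VRTK routes its pointer through the standard UGUI event system, a normal onClick handler is triggered by the controller (the class and field names here are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: log clicks on the test button created under MainCanvas.
public class TestButtonHandler : MonoBehaviour
{
    public Button testButton;

    void OnEnable()  { testButton.onClick.AddListener(OnClicked); }
    void OnDisable() { testButton.onClick.RemoveListener(OnClicked); }

    void OnClicked()
    {
        Debug.Log("Button clicked by the VR pointer");
    }
}
```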

UGUI event processing flow

  1. Every frame, EventSystem calls the Process method of BaseInputModule.
    (implementation classes: StandaloneInputModule / TouchInputModule)

  2. Calculate the object (Graphic) touched by the cursor.

    • The Process method calls the Raycast method of BaseRaycaster to obtain all Graphics.
      (implementation classes: GraphicRaycaster / PhysicsRaycaster / Physics2DRaycaster)
    • The Graphic selected by the cursor is determined by Graphic's IsRaycastLocationValid method.
  3. Raise the related events of the object through ExecuteEvents (a minimal example follows this list).

    • The Execute method obtains the object's components of the relevant interface type and calls their interface methods.
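As referenced in step 3, a minimal example of raising an event through ExecuteEvents; the helper class is illustrative:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: manually raise a pointer-click event on a target object.
public static class ClickRaiser
{
    public static void RaiseClick(GameObject target)
    {
        var eventData = new PointerEventData(EventSystem.current);
        // Execute finds components on 'target' that implement
        // IPointerClickHandler and invokes their OnPointerClick method.
        ExecuteEvents.Execute(target, eventData, ExecuteEvents.pointerClickHandler);
    }
}
```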

What is EventSystem? Whenever a UGUI element is created, the editor automatically creates an EventSystem node carrying an EventSystem component and a Standalone Input Module component. The EventSystem class is the source of UGUI events, and Standalone Input Module is a subclass of BaseInputModule; when the system calls EventSystem's Update, it reaches the Standalone Input Module's methods through polymorphism.

What is Graphic? Anything that has to be rendered to the screen, such as Image and Text, is a subclass of Graphic. Notably, Button is not a subclass of Graphic: Unity only considers the objects actually rendered to the screen when computing input, so what actually receives the input is the Image and Text on the Button.

Graphic Raycaster: a canvas carries a Graphic Raycaster (graphic ray detector) component by default, which lets the objects inside the canvas be hit by rays (the hit test is done by the IsRaycastLocationValid method). Conversely, an object placed outside a canvas cannot be hit by the raycast.

IsRaycastLocationValid determines whether a ray hit selects an object; since it is a virtual method, it can be overridden. For overriding and applying this method, see another article: [Unity] UGUI advanced (1) custom irregular buttons.
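For a flavor of what such a hit test looks like, here is a sketch using the ICanvasRaycastFilter interface, which the UGUI raycast consults: clicks on transparent pixels of an Image are rejected. The class name and threshold are assumptions, the sprite is assumed to be a Simple (non-sliced) sprite, and its texture must be Read/Write enabled:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: reject raycast hits on transparent pixels of an Image.
[RequireComponent(typeof(Image))]
public class AlphaRaycastFilter : MonoBehaviour, ICanvasRaycastFilter
{
    [Range(0f, 1f)] public float alphaThreshold = 0.1f;

    public bool IsRaycastLocationValid(Vector2 screenPoint, Camera eventCamera)
    {
        Image image = GetComponent<Image>();
        RectTransform rectTransform = image.rectTransform;
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(
                rectTransform, screenPoint, eventCamera, out Vector2 local))
        {
            return false;
        }
        // Normalize the local point to 0..1 coordinates inside the rect.
        Rect r = rectTransform.rect;
        float u = (local.x - r.x) / r.width;
        float v = (local.y - r.y) / r.height;
        if (u < 0f || u > 1f || v < 0f || v > 1f)
        {
            return false;
        }
        // Sample the sprite texture; accept the hit only on opaque pixels.
        Color pixel = image.sprite.texture.GetPixelBilinear(u, v);
        return pixel.a >= alphaThreshold;
    }
}
```

For simple cases, Unity's Image also exposes alphaHitTestMinimumThreshold, which achieves the same effect without custom code.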

As the flow above shows, UGUI only provides input events for the keyboard, mouse, and touch screen; it provides no VR-related input events. So to implement a VR UI, we need to customize a set of UI input components.

VRTK event processing flow

  1. VRTK_EventSystem creates a VRTK_VRInputModule object and calls its Process method every frame.

  2. Calculate the object (Graphic) touched by the cursor.

    • The Process method calls the Raycast method of BaseRaycaster to obtain all Graphics.
      (implementation class: VRTK_UIGraphicRaycaster)
    • The Graphic selected by the cursor is determined by Graphic's IsRaycastLocationValid method.
      (a ray is cast from the position of the object holding the VRTK_UIPointer along its forward direction; see the sketch after this list)
  3. Raise the related events of the object through ExecuteEvents.

    • The Execute method obtains the object's components of the relevant interface type and calls their interface methods.
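A conceptual sketch of step 2 (not VRTK's actual code): cast a ray from the pointer object along its forward direction and test it against the plane of a world-space RectTransform; the class and field names are illustrative:

```csharp
using UnityEngine;

// Sketch: does the pointer's forward ray hit a world-space UI rect?
public class PointerRectTest : MonoBehaviour
{
    public Transform pointer;    // object holding the VRTK_UIPointer
    public RectTransform target; // element on a world-space canvas

    public bool PointerHitsTarget()
    {
        var ray = new Ray(pointer.position, pointer.forward);
        // Intersect the ray with the plane the rect lies in.
        var plane = new Plane(target.forward, target.position);
        if (!plane.Raycast(ray, out float distance))
        {
            return false;
        }
        // Convert the world-space hit point into the rect's local space
        // and check whether it falls inside the rect.
        Vector2 local = target.InverseTransformPoint(ray.GetPoint(distance));
        return target.rect.Contains(local);
    }
}
```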

VRTK source code analysis

The OnEnable method of the VRTK_UIPointer class calls the ConfigureEventSystem method. This method caches the EventSystem object automatically created by Unity and passes it to the SetEventSystem method. In SetEventSystem, a VRTK_EventSystem component is added to the original EventSystem object, thereby extending the stock EventSystem.
Therefore, as soon as a VRTK_UIPointer component is added, the VRTK_EventSystem component is created automatically, and the VRTK event processing flow described above officially begins.
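A simplified sketch of this flow, assuming VRTK 3.x (an illustration of the behavior just described, not the verbatim VRTK source):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using VRTK;

// Sketch: cache the EventSystem Unity created and extend it in place
// by attaching a VRTK_EventSystem component to it.
public class UIPointerSketch : MonoBehaviour
{
    private EventSystem cachedEventSystem;

    protected virtual void OnEnable()
    {
        ConfigureEventSystem();
    }

    protected virtual void ConfigureEventSystem()
    {
        if (cachedEventSystem == null)
        {
            cachedEventSystem = FindObjectOfType<EventSystem>();
        }
        SetEventSystem(cachedEventSystem);
    }

    protected virtual void SetEventSystem(EventSystem eventSystem)
    {
        if (eventSystem != null && eventSystem.GetComponent<VRTK_EventSystem>() == null)
        {
            // Adding VRTK_EventSystem extends the stock EventSystem,
            // which kicks off the VRTK event flow described above.
            eventSystem.gameObject.AddComponent<VRTK_EventSystem>();
        }
    }
}
```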


For more information, see the master catalog: [Unity] Unity study notes catalog.


Source: blog.csdn.net/xiaoyaoACi/article/details/120512490