The Road to HoloLens 2 - Configuration Files (3)

Copyright statement: This is Davidwang's original article. Commercial use is strictly prohibited, and it may be reproduced only with authorization.

2.2.5 Controller mapping configuration

  After input actions have been created, they need to be mapped to a specific controller to bind them to a specific input source. All controllers supported by MRTK are listed in the controller mapping configuration. Select the required controller (the selected controller must match the hardware the MR application runs on; selecting the wrong controller will make input invalid or may even crash the application), and a dialog window appears listing all of that controller's inputs. Here you can assign an action to each input (make sure the input's data type matches the action's data type), so that each input action is associated with the actual underlying hardware controller and the underlying controller events are mapped to input actions, as shown in Figure 1.

Figure 1 Setting input actions for the corresponding controller

  In addition, input actions can also be mapped to specific runtime events, not just to physical hardware controllers. For example, the tracking-lost event that occurs while the application is running can be mapped to an input action.
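Once an input action is mapped, application code can respond to it without caring which controller or event raised it. Below is a minimal sketch, assuming MRTK v2's input system API; the action name "Select" is illustrative and must match an action defined in your input actions profile.

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Listens globally for a mapped input action. Attach to any scene object.
public class SelectActionListener : MonoBehaviour, IMixedRealityInputActionHandler
{
    private void OnEnable()
    {
        // Register for global input action events.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealityInputActionHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealityInputActionHandler>(this);
    }

    public void OnActionStarted(BaseInputEventData eventData)
    {
        // The action description matches the name defined in the input actions profile.
        if (eventData.MixedRealityInputAction.Description == "Select")
        {
            Debug.Log("Select action started");
        }
    }

    public void OnActionEnded(BaseInputEventData eventData) { }
}
```

Because the handler keys off the action rather than the device, the same script works whether "Select" was raised by a hand gesture, a controller button, or a voice command.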

Tip:
MRTK currently supports the following controllers/systems: Mouse (including mouse in 3D space), Touch Screen, Xbox controllers, Windows Mixed Reality (WMR) controllers, HoloLens Gestures, HTC Vive wand controllers, Oculus Touch controllers, Oculus Remote controller, and Generic OpenVR devices (for advanced users only). All of these supported controllers can be used directly. To use a controller that MRTK does not currently support, developers need to implement the corresponding support themselves; thanks to MRTK's modular design, developing your own controller is not difficult. Since this series targets only HoloLens 2 devices, only HoloLens Gestures is selected.

  In the controller configuration panel, we can also configure the visual representation of the controllers, such as the prefabs and materials used for the left-hand and right-hand models.
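When verifying that the selected controller mapping is actually active on the target hardware, it can help to list the controllers the input system has detected. A minimal sketch, assuming MRTK v2's `CoreServices` API:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Logs the currently detected controllers on demand, which helps confirm
// that the expected controller type (e.g. articulated hands) is recognized.
public class ControllerLogger : MonoBehaviour
{
    public void LogDetectedControllers()
    {
        var inputSystem = CoreServices.InputSystem;
        if (inputSystem == null) { return; }

        foreach (IMixedRealityController controller in inputSystem.DetectedControllers)
        {
            Debug.Log($"{controller.GetType().Name} ({controller.ControllerHandedness})");
        }
    }
}
```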

2.2.6 Gestures configuration

On HoloLens 2 devices, gesture operation is a very important means of interaction; gestures are detected, recognized, and processed by the device's HPU. The HoloLens 2 gesture controller produces several gesture recognition results, such as tap, grab, and manipulation, and MRTK maps these results to default input actions. In this section of the configuration, we can also map them to custom input actions, as shown in Figure 2.

Figure 2 Configuring HoloLens2 device gesture input
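The gesture results mapped in this profile can be consumed in code through a gesture handler. Below is a minimal sketch, assuming MRTK v2's input system API; the class name is illustrative.

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Receives the gesture events (e.g. tap, manipulation) that the
// gestures profile maps to input actions.
public class GestureListener : MonoBehaviour, IMixedRealityGestureHandler
{
    private void OnEnable() =>
        CoreServices.InputSystem?.RegisterHandler<IMixedRealityGestureHandler>(this);

    private void OnDisable() =>
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealityGestureHandler>(this);

    public void OnGestureStarted(InputEventData eventData) =>
        Debug.Log($"Gesture started: {eventData.MixedRealityInputAction.Description}");

    public void OnGestureUpdated(InputEventData eventData) { }

    public void OnGestureCompleted(InputEventData eventData) =>
        Debug.Log($"Gesture completed: {eventData.MixedRealityInputAction.Description}");

    public void OnGestureCanceled(InputEventData eventData) { }
}
```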
  

2.2.7 Speech commands

  On the HoloLens 2 platform, the system provides speech recognition for predefined text, so voice commands can be used to control object behavior. Using predefined voice commands is very simple: define the keyword for the voice command and then associate it with an input action, as shown in Figure 3.

Figure 3 Configuring HoloLens2 device voice command input
  In the voice command configuration panel, the Start Behavior property can be set to Auto Start or Manual Start. With Auto Start, the keyword recognizer starts automatically when the application launches (voice commands can then be used immediately); with Manual Start, the keyword recognizer must be started from a script after the application launches. The Recognition Confidence Level property can be set to one of High, Medium, Low, or Rejected. Because speech recognition is affected by factors such as the user's pronunciation and accent and the environment's background noise, recognition quality varies greatly; this parameter specifies the confidence required for a voice command to be accepted.
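A keyword configured in this profile can be handled in code through a speech handler. Below is a minimal sketch, assuming MRTK v2's input system API; "Change Color" is a hypothetical keyword and must match one defined in your speech commands profile.

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Responds to a keyword defined in the speech commands profile.
public class SpeechListener : MonoBehaviour, IMixedRealitySpeechHandler
{
    private void OnEnable() =>
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);

    private void OnDisable() =>
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        // "Change Color" is illustrative; use a keyword from your own profile.
        if (eventData.Command.Keyword == "Change Color")
        {
            Debug.Log("Voice command recognized: Change Color");
        }
    }
}
```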

3. Spatial Awareness Profile

  The spatial awareness system of the HoloLens 2 device uses the ToF depth camera to perceive the real environment and reconstruct the scene geometry. This profile provides properties related to the reconstructed scene mesh: you can configure the startup behavior (Startup Behavior), update period (Update Interval), physics simulation settings (Physics Settings), mesh level of detail (Level of Detail Settings), display settings (Display Settings), and so on, as shown in Figure 4.

Figure 4 Spatial awareness system configuration interface
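Beyond the profile settings, spatial mesh observation can also be paused and resumed at runtime. A minimal sketch, assuming MRTK v2's `CoreServices.SpatialAwarenessSystem` API:

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;

// Suspends and resumes spatial mesh observation at runtime,
// e.g. to stop mesh updates once the environment has been scanned.
public class SpatialMeshToggle : MonoBehaviour
{
    public void StopScanning()
    {
        CoreServices.SpatialAwarenessSystem?.SuspendObservers();
    }

    public void StartScanning()
    {
        CoreServices.SpatialAwarenessSystem?.ResumeObservers();
    }
}
```

Suspending observers once scanning is complete can save the per-update cost of mesh reconstruction described above.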
  

4. Diagnostics Profile

  The diagnostics tool is a simple tool provided by MRTK for monitoring performance while the application is running; it is very convenient for troubleshooting performance problems during development. In the profile you can set whether the tool is displayed (Show Diagnostics), its display position (Window Anchor), the frame sample rate (Frame Sample Rate), and so on, as shown in Figure 5.

Figure 5 Diagnostic system configuration interface
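The diagnostics overlay can also be toggled from code at runtime, which is handy for hiding it in demos while keeping it available during development. A minimal sketch, assuming MRTK v2's `CoreServices.DiagnosticsSystem` API; the key binding is illustrative.

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;

// Toggles the diagnostics overlay at runtime (here bound to the D key
// for in-editor testing; on device this could be a voice command instead).
public class DiagnosticsToggle : MonoBehaviour
{
    private void Update()
    {
        if (UnityEngine.Input.GetKeyDown(KeyCode.D) && CoreServices.DiagnosticsSystem != null)
        {
            CoreServices.DiagnosticsSystem.ShowDiagnostics =
                !CoreServices.DiagnosticsSystem.ShowDiagnostics;
        }
    }
}
```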
  

5. Extensions Profile

  MRTK uses services to decouple the dependencies between components. In addition to the services it provides itself, MRTK also allows developers to create their own services, called extension services. An extension service must be configured in the profile before it can be used, so that MRTK can manage it while the application is running. MRTK uses the Service Locator pattern to manage and locate services; combined with the Factory pattern and/or the Dependency Injection pattern to create service instances, this makes it very convenient to extend MRTK by registering custom services.
  An extension service behaves the same as the system services MRTK provides: it can receive and process all Unity event messages without the performance cost of inheriting from MonoBehaviour or using the singleton pattern, and it allows C# code without scene objects to run in the foreground or background (such as build systems or application logic). Custom services can be registered and configured in the extension service profile, as shown in Figure 6.

Figure 6 Extended service configuration interface
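The shape of an extension service can be sketched as below. This assumes MRTK v2's extension service API; the `IScoreService` interface and its members are entirely hypothetical, and the base-class constructor signature has varied across MRTK versions, so treat this as a sketch rather than a drop-in implementation.

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Utilities;

// Hypothetical extension service contract.
public interface IScoreService : IMixedRealityExtensionService
{
    int Score { get; }
    void AddPoints(int points);
}

[MixedRealityExtensionService(SupportedPlatforms.WindowsUniversal | SupportedPlatforms.WindowsEditor)]
public class ScoreService : BaseExtensionService, IScoreService
{
    public ScoreService(string name, uint priority, BaseMixedRealityProfile profile)
        : base(name, priority, profile) { }

    public int Score { get; private set; }

    public void AddPoints(int points) => Score += points;

    public override void Update()
    {
        // Receives Unity update ticks without being a MonoBehaviour.
    }
}
```

Once registered in the extensions profile, the service can be retrieved from anywhere with `MixedRealityToolkit.Instance.GetService<IScoreService>()`.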
  

6. Editor Profile

  The editor profile only configures features that work inside the Unity Editor. These features help developers check whether the corresponding functions are enabled and working. Currently it includes two options: Use Service Inspectors and Render Depth Buffer.
Use Service Inspectors: when this feature is turned on, selecting an object in the Unity Hierarchy window displays the services it uses, along with links to the services' documentation, editor visual controls, and detailed information about the services' status.

  Render Depth Buffer: sharing the depth buffer on HoloLens 2 devices can improve the stability of holograms. When this feature is turned on, the depth values in the depth buffer are rendered from the perspective of the current scene's main camera, which helps developers verify that the depth buffer is working properly.

  Under the MRTK main profile there are also the Boundary Profile (for VR), Teleport Profile (for VR), and Scene System Profile (mainly for VR). Because they are not very relevant to MR application development, we will not cover them in detail.

  
  To be continued


Origin blog.csdn.net/yolon3000/article/details/118095442