Mixed Reality Toolkit-Unity Development Series—Input Module

In "Mixed Reality Toolkit-Unity Development Series - Sharing Module", we first talked about the HoloLens development artifact - Microsoft's native development kit Mixed Reality Toolkit-Unity ("MRTK" for short), and introduced the relevant functions and principles of the Sharing module. MixedRealityToolkit is an open source development tool that helps developers quickly build applications for Microsoft HoloLens and Windows Mixed Reality headsets. With the version upgrade of Unity, MRTK has also released a corresponding version for adaptation. MRTK includes nine modules including Sharing, Input, and Spatial Mapping. Today we continue to study the Input module.

Using the Input module

Input behaves differently on different platforms, and the Mixed Reality platform supports several input methods. Let us start with how the Input module is used. It mainly involves two prefabs (prefab path: HoloToolkit-Input-Prefabs):

(1) MixedRealityCameraParent.prefab: the parent object of MixedRealityCamera.prefab. In MixedRealityCamera.prefab, the camera parameters are set by default through the MixedRealityCameraManager.cs script: for example, Clear Flags is set to Solid Color, Background Color is set to Clear (0, 0, 0, 0), Near Clip is set to 0.85, and Quality Setting is set to Fastest.

(2) InputManager.prefab: manages the various input methods on HoloLens, such as gestures and Xbox controllers, and simulates click gestures in the editor through the Shift and Space keys.

We can use these two Prefabs to implement the functions provided by Input. Of course, you can also add DefaultCursor to control the cursor.

Implementation principle of Input module

Input integrates all currently supported user interaction methods, including Gaze, Gesture, Voice, and Motion controllers, through which users can manipulate objects in the scene. For users, accurate input is an important criterion for a good experience. Below we use a few examples to explain the principles (the tools used are Unity 2017 and Visual Studio 2017).

Gaze

  • Scene: GazeEvents

The scene implements the IFocusable interface to respond to gaze entering and exiting.

Figure 1 IFocusable interface

As shown in the figure, OnFocusEnter() is called when gaze moves onto the object, and OnFocusExit() is called when gaze moves off it. In this example, the OnFocusEvent class implements the IFocusable interface.

Figure 2 OnFocusEvent class

In Unity, check the Inspector panel of the object the script is attached to, and you can see that both the OnFocusEnter and OnFocusExit events are bound to a method that controls the light intensity.
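For reference, here is a minimal sketch of such a focus responder, assuming the HoloToolkit.Unity.InputModule namespace; the UnityEvent field names are illustrative assumptions, not the scene's exact script:

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using UnityEngine.Events;

// Minimal focus-responder sketch in the spirit of the OnFocusEvent class.
// The UnityEvent fields are assumptions; in the sample scene they are bound
// in the Inspector to a method that changes the light intensity.
public class FocusEventSketch : MonoBehaviour, IFocusable
{
    public UnityEvent FocusEntered;  // invoked when gaze moves onto the object
    public UnityEvent FocusExited;   // invoked when gaze moves off the object

    public void OnFocusEnter()
    {
        FocusEntered.Invoke();
    }

    public void OnFocusExit()
    {
        FocusExited.Invoke();
    }
}
```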

Figure 3 RightSphere's Inspector panel

Gesture

  • Scene: InputTapTest

The scene implements the IInputClickHandler interface to respond to click gestures.

Figure 4 IInputClickHandler interface

Note: in the Unity editor, hold down the left Shift key and click the mouse to simulate a left-hand click, or hold down the space bar and click the mouse to simulate a right-hand click.

Figure 5 TapResponder class

In this example, clicking the cube calls the OnInputClicked method; inside it, the cube's scale is increased by modifying the cube's localScale. Note that calling the Use method on eventData marks the click event as handled, which prevents the event from being forwarded or responded to again.
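A minimal sketch of such a tap responder, assuming the HoloToolkit.Unity.InputModule namespace (the scale increment is an illustrative value):

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Minimal tap-responder sketch in the spirit of the TapResponder class.
public class TapResponderSketch : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Grow the cube a little on every air-tap (increment is an assumed value).
        transform.localScale += Vector3.one * 0.05f;

        // Mark the event as handled so it is not forwarded to other handlers.
        eventData.Use();
    }
}
```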

  • Scene: InputNavigationRotateTest

The scene implements the INavigationHandler interface to respond to navigation gestures.

 

Figure 6 INavigationHandler interface

The methods shown in the figure above are:

(1) OnNavigationStarted: the start method of the navigation gesture, corresponding to pinching the thumb and index finger together;

(2) OnNavigationUpdated: the update method of the navigation gesture, corresponding to hand movement while the pinch is held; zooming, moving, and rotating can be implemented in this method, as in the sketch after this list;

(3) OnNavigationCompleted: the completion method of the navigation gesture, corresponding to releasing the thumb and index finger;

(4) OnNavigationCanceled: the cancellation method of the navigation gesture, corresponding to the gesture being cancelled;
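A minimal sketch of a navigation handler that rotates the object, assuming the HoloToolkit.Unity.InputModule namespace and the NormalizedOffset property of NavigationEventData (the sensitivity value is illustrative):

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Minimal navigation-gesture sketch in the spirit of InputNavigationRotateTest:
// rotate the object while the pinch is held and the hand moves sideways.
public class NavigationRotateSketch : MonoBehaviour, INavigationHandler
{
    [SerializeField] private float rotationSensitivity = 10.0f;  // assumed tuning value

    public void OnNavigationStarted(NavigationEventData eventData) { }

    public void OnNavigationUpdated(NavigationEventData eventData)
    {
        // NormalizedOffset.x is the horizontal hand offset since the gesture started.
        float rotationFactor = eventData.NormalizedOffset.x * rotationSensitivity;
        transform.Rotate(new Vector3(0f, -rotationFactor, 0f));
    }

    public void OnNavigationCompleted(NavigationEventData eventData) { }

    public void OnNavigationCanceled(NavigationEventData eventData) { }
}
```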

  • Scene: TwoHandManipulationTest

A recent update of the module also added support for two-handed manipulation. Compared with moving, rotating, and scaling a model with one hand, two-handed manipulation is more flexible and convenient. It is also very simple to use: just add the TwoHandManipulatable.cs script to the model you want to manipulate.

The script includes five configuration options:

(1) Host Transform: the object to be manipulated; by default it is the model the script is attached to;

(2) Bounding Box Prefab: the bounding box displayed around the model during manipulation; it can be customized or left empty;

(3) Manipulation Mode: the manipulation mode, one of Scale, Rotate, Move Scale, Rotate Scale, and Move Rotate Scale;

(4) Constraint On Rotation: for rotation, restricts rotation to a single axis or allows all axes;

(5) One Handed Movement: whether one-handed movement of the model is also supported;

Once added, it can be operated with both hands.
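If you prefer, the component can also be attached from code. The sketch below is only illustrative: the namespace containing TwoHandManipulatable differs between HoloToolkit releases, so the using directive is an assumption and may need adjusting for your version.

```csharp
using UnityEngine;
// The namespace of TwoHandManipulatable differs between HoloToolkit releases;
// the one below is an assumption and may need adjusting for your version.
using HoloToolkit.Unity.InputModule.Utilities.Interactions;

// Attach two-hand manipulation to this object at runtime instead of via the Inspector.
public class AttachTwoHandManipulation : MonoBehaviour
{
    private void Start()
    {
        // The default configuration is used; options such as Manipulation Mode
        // can still be changed on the returned component or in the Inspector.
        gameObject.AddComponent<TwoHandManipulatable>();
    }
}
```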

Figure 7 Schematic diagram of two-handed operation

Voice

  •  Scene: SpeechInputSource

The scene implements the ISpeechHandler interface for voice control.

Figure 8 ISpeechHandler interface

The eventData parameter belongs to the SpeechEventData type, which inherits from BaseInputEventData and includes two properties:

(1) PhraseDuration: the duration of the spoken keyword;

(2) RecognizedText: the text result of speech recognition; only this property is used in this scene;

Figure 9 Change color by voice recognition result

As can be seen from the figure above, the recognition result is passed to the ChangeColor method through the RecognizedText property of eventData; in ChangeColor, the result is checked and the object's color is changed accordingly.
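A minimal sketch of such a speech handler, assuming the HoloToolkit.Unity.InputModule namespace (the keywords and colors are illustrative assumptions):

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Minimal speech-handler sketch: change the object's color according to the
// recognized keyword (keywords and colors are assumed examples).
public class SpeechColorSketch : MonoBehaviour, ISpeechHandler
{
    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        ChangeColor(eventData.RecognizedText);
    }

    private void ChangeColor(string keyword)
    {
        switch (keyword.ToLower())
        {
            case "red":
                GetComponent<Renderer>().material.color = Color.red;
                break;
            case "green":
                GetComponent<Renderer>().material.color = Color.green;
                break;
            case "blue":
                GetComponent<Renderer>().material.color = Color.blue;
                break;
        }
    }
}
```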

Unlike the scenes above, in this scene InputManager and SpeechInputSource are both child objects of Managers. The SpeechInputSource.cs script on the SpeechInputSource object includes several properties:

(1) PersistentKeywords: controls whether the speech recognition instance is destroyed when a new scene is loaded (if enabled, it persists across scene loads);

(2) RecognizerStart: an enumeration of type RecognizerStartBehavior { AutoStart, ManualStart }, which controls whether speech recognition starts automatically or is started from code;

(3) Keywords: an array holding the keywords to recognize, each with an optional keyboard shortcut for triggering it during testing;

  • Scene: DictationTest

The scene implements the IDictationHandler interface for dictation (continuous speech recognition).

Figure 10 IDictationHandler interface

The four methods correspond to different stages of recognition:

(1) OnDictationHypothesis: an intermediate hypothesis produced while speech is being recognized;

(2) OnDictationResult: a recognized phrase;

(3) OnDictationComplete: recognition has completed;

(4) OnDictationError: a recognition error occurred;

All four methods write the result to the output text: speechToTextOutput.text = eventData.DictationResult;

The eventData parameter in these methods is of type DictationEventData, which includes two properties:

(1) DictationResult: a string holding the speech recognition result;

(2) DictationAudioClip: an AudioClip holding the audio captured during dictation;
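Putting this together, a minimal sketch of the dictation handler might look like the following, assuming the HoloToolkit.Unity.InputModule namespace; the speechToTextOutput field stands in for the UI Text used in the scene:

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using UnityEngine.UI;

// Minimal dictation-handler sketch in the spirit of the DictationTest scene.
public class DictationSketch : MonoBehaviour, IDictationHandler
{
    [SerializeField] private Text speechToTextOutput;  // assumed UI Text for showing results

    public void OnDictationHypothesis(DictationEventData eventData)
    {
        speechToTextOutput.text = eventData.DictationResult;  // intermediate hypothesis
    }

    public void OnDictationResult(DictationEventData eventData)
    {
        speechToTextOutput.text = eventData.DictationResult;  // recognized phrase
    }

    public void OnDictationComplete(DictationEventData eventData)
    {
        speechToTextOutput.text = eventData.DictationResult;  // full session transcript
    }

    public void OnDictationError(DictationEventData eventData)
    {
        speechToTextOutput.text = eventData.DictationResult;  // error description
    }
}
```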

Another thing to note is that specific capabilities must be enabled before speech recognition can be used. To enable them, check the Microphone capability under Edit -> Project Settings -> Player -> Settings for Windows Store -> Publishing Settings -> Capabilities (dictation also requires an internet connection).

Motion Controllers

  • Scene: XboxControllerExample

This scene implements the IXboxControllerHandler interface to control the cube with an Xbox controller.

Figure 11 IXboxControllerHandler interface

The eventData parameter, of type XboxControllerEventData, includes multiple properties:

(1) GamePadName: the name of the connected controller;

(2) XboxA_Pressed: whether the A button is pressed; similar properties expose the pressed state of the other buttons;

In this scene, the position of the cube is controlled by the left joystick of the Xbox.

Figure 12 Modify the position

Use the right joystick of the Xbox to control the rotation of the cube:

Figure 13 Modify the angle

Use the Xbox controller's Y button to reset the cube's position and rotation:

Figure 14 Reset position and angle
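Putting the three behaviours together, a minimal sketch might look like this, assuming the HoloToolkit.Unity.InputModule namespace; the axis and button property names and the speed values are assumptions and should be checked against the XboxControllerEventData definition in your MRTK version:

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Minimal Xbox-controller sketch in the spirit of the XboxControllerExample scene.
public class XboxControllerSketch : MonoBehaviour, IXboxControllerHandler
{
    [SerializeField] private float moveSpeed = 0.01f;   // assumed tuning value
    [SerializeField] private float rotateSpeed = 1.0f;  // assumed tuning value

    private Vector3 initialPosition;
    private Quaternion initialRotation;

    private void Start()
    {
        initialPosition = transform.position;
        initialRotation = transform.rotation;
    }

    public void OnXboxInputUpdate(XboxControllerEventData eventData)
    {
        // Left stick moves the cube (axis property names are assumptions).
        transform.position += new Vector3(
            eventData.XboxLeftStickHorizontalAxis,
            eventData.XboxLeftStickVerticalAxis,
            0f) * moveSpeed;

        // Right stick rotates the cube.
        transform.Rotate(
            eventData.XboxRightStickVerticalAxis * rotateSpeed,
            eventData.XboxRightStickHorizontalAxis * rotateSpeed,
            0f);

        // Y button resets the position and rotation.
        if (eventData.XboxY_Pressed)
        {
            transform.position = initialPosition;
            transform.rotation = initialRotation;
        }
    }
}
```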

Original article: http://www.ahololens.com/?p=960


Origin: blog.csdn.net/Abel02/article/details/94390433