Snapdragon Spaces Development Guide (13)


6.3.2.2 Hand Tracking Interaction

This section describes the different actors and components required to interact with hand tracking.

WARNING
Hand tracking interaction is used not only in the hand tracking sample, but in all samples of the project. This allows users to interact with widget actors and with 3D actors that support hand tracking interaction.

6.3.2.2.1 Gesture Input Manager Actor

This actor is essential for hand tracking interaction. To set hand tracking as the input mode, this actor must be added to the level. It is recommended to use the Spawn Actor from Class node to spawn this actor.


Warning
Be sure to check that hand tracking is available before adding or spawning this actor into a level.

The Gesture Input Manager actor is responsible for listening to gestures in real time and reporting actions based on those gestures using delegates. It spawns and holds two instances of the ASpacesHandInteraction class, one for each hand, which are needed to perform the interaction. The following sections list the different variables and functions that can be accessed and used in this class.

6.3.2.2.1.1 Variables
  • FOnSpacesHandPinch OnSpacesHandPinchLeft and FOnSpacesHandPinch OnSpacesHandPinchRight: Delegates fired to notify about the pinch gesture state.
  • FOnSpacesHandOpen OnSpacesHandOpenLeft and FOnSpacesHandOpen OnSpacesHandOpenRight: Delegates fired to notify about the open-hand gesture state.
  • FOnSpacesHandGrab OnSpacesHandGrabLeft and FOnSpacesHandGrab OnSpacesHandGrabRight: Delegates fired to notify about the grab gesture state.
  • FOnSpacesHandInteractionStatusUpdated OnSpacesHandInteractionStatusUpdated: Delegate triggered when the hand interaction status is updated; it reports the status with a boolean value.
  • TSubclassOf<ASpacesHandInteraction> HandInteractionClass: The subclass of ASpacesHandInteraction that is spawned as the hand interaction actor.
  • FVector LeftRayPositionOffset and FVector RightRayPositionOffset: Ray position offsets for the two hand interaction actors.
  • float RayDistance: Ray distance of the two hand interaction actors.
6.3.2.2.1.2 Functions
  • ASpacesHandInteraction* GetHandLeftInteraction() const and ASpacesHandInteraction* GetHandRightInteraction() const: Used to get the hand interaction actors.
  • void SetHandInteractionState(bool active): Used to enable or disable hand interaction; this function affects the whole hand interaction system. Hand interaction is disabled by default.
  • bool GetHandInteractionState() const: Returns the state of the hand interaction system (a usage sketch follows this list).
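
The sample project drives this from Blueprints (see the TIP below). Purely as an illustration, the same flow could look roughly like the following C++ sketch. The class name ASpacesGesturesInputManager and the pawn class are assumptions, not names confirmed by this guide; check the plugin source for the actual class and delegate signatures.

// Minimal, hypothetical sketch: "ASpacesGesturesInputManager" is an assumed
// C++ name for the Gesture Input Manager actor described above.
void AMyHandTrackingPawn::StartHandInteraction()
{
	// Check that hand tracking is available before spawning the manager (see the
	// warning above); the availability query is plugin-specific and omitted here.
	ASpacesGesturesInputManager* Manager =
		GetWorld()->SpawnActor<ASpacesGesturesInputManager>(
			ASpacesGesturesInputManager::StaticClass(),
			FVector::ZeroVector, FRotator::ZeroRotator);

	if (Manager != nullptr)
	{
		// Hand interaction is disabled by default.
		Manager->SetHandInteractionState(true);

		// The manager spawns and owns one hand interaction actor per hand.
		ASpacesHandInteraction* LeftHand = Manager->GetHandLeftInteraction();
		ASpacesHandInteraction* RightHand = Manager->GetHandRightInteraction();

		// Gesture delegates such as OnSpacesHandPinchLeft can be bound here with
		// AddDynamic(); their exact signatures are declared by the plugin.
	}
}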

TIP
See BP_HandTrackingControllerComponent (under SnapdragonSpacesSamples Content > SnapdragonSpaces > Common > Core > Components) to learn how to use the Gesture Input Manager actor.

6.3.2.2.2 Spaces Hand Interaction Actor

Spaces Hand Interaction actors are responsible for interacting with the different hand-interactable elements present in the level, such as 3D widgets and other types of actors. Spaces hand interaction includes far hand interaction (which performs raycasts against scene elements) and near hand interaction (which is activated when a hand touches an interactable actor). The Spaces Hand Interaction actor is represented by a ray that starts from the hand specified by the user. This ray is always visible unless near interaction is being used. The ray has two interaction modes: active (when the ray hits any hand-interactable actor in the scene) and inactive. The color and length of the ray change according to these two modes. To interact with the UI, use far interaction with the pinch gesture.

TIP
In the Snapdragon Spaces sample project for Unreal Engine, the Gesture Input Manager spawns the two Spaces Hand Interaction actors, one for each hand, so adding the Gesture Input Manager to the scene is enough.

The following variables and functions can be used to customize the Spaces Hand Interaction actor.

6.3.2.2.2.1 Variables
  • float LerpFactor: Adjusts the translation and rotation speed of the actor.
  • float LerpFactor: Adjusts the scaling speed of the actor.
  • float MinimumScaleFactor: The minimum scale factor that can be applied to the actor.
  • float MaximumScaleFactor: The maximum scale factor that can be applied to the actor.
  • bool bApplyTranslation: Indicates whether to translate the actor.
  • bool bApplyRotation: Indicates whether to rotate the actor.
  • bool bApplyScale: Indicates whether to scale the actor.
  • FOnSpacesHandInteractableStateChanged OnSpacesHandInteractableStateChanged: Delegate/dispatcher that communicates the hand interaction state of the actor, using the ESpacesHandInteractableState enum to report the state (see the enum below and the handler sketch that follows it).
UENUM(BlueprintType, Category = "Snapdragon Spaces Hand Interaction")
enum class ESpacesHandInteractableState : uint8
{
    Focused = 0,
    Grabbed = 1,
    UnFocused = 2
};
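
As an illustration of how this enum might be consumed (the sample does this in Blueprints), a handler bound to OnSpacesHandInteractableStateChanged could switch on the reported state as sketched below. The handler name is illustrative and the delegate is assumed to pass the new state as its only parameter; match the signature to the declaration in the plugin.

// Assumed handler for OnSpacesHandInteractableStateChanged, declared as a
// UFUNCTION() in the owning class so it can be bound with AddDynamic().
void AMyInteractableActor::HandleHandInteractableStateChanged(ESpacesHandInteractableState NewState)
{
	switch (NewState)
	{
	case ESpacesHandInteractableState::Focused:
		// The hand ray is hovering over the actor, e.g. show a highlight.
		break;
	case ESpacesHandInteractableState::Grabbed:
		// The actor is grabbed or pinched, e.g. start following the hand.
		break;
	case ESpacesHandInteractableState::UnFocused:
		// The hand ray left the actor, e.g. remove the highlight.
		break;
	}
}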

An example of using this component can be found in the BP_SpacesHandInteractableCube Blueprint actor, located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > HandTracking > Placeable > InteractableObjects.


6.3.2.2.3 Spaces Snapping Volume Component

The Spaces Snapping Volume component inherits from the Unreal Engine Box component (opens new window). It is mainly used to snap the ray end of the Spaces Hand Interaction actor to a desired position within another actor. This is especially useful for interactive 3D widget components such as buttons, checkboxes, or sliders, but it can be used with any type of 3D actor. The component can be added inside an actor and must be placed manually where desired, for example, on top of a button in a 3D widget.


Warning
When using this component with UI, it is important to follow these guidelines:

  • The position and size of the box must match the shape of the 3D widget component; otherwise unwanted effects, such as flickering when the raycast falls outside the 3D widget actor, may appear.
  • The X-axis orientation of the component must match the orientation of the 3D widget component. This is required for proper interaction with widget components.

To customize the behavior of this component, different variables and functions are available; a usage sketch follows the lists below.

6.3.2.2.3.1 Variables
  • bool bSnap: Determines whether the component will be used for snapping. A possible scenario for using the component without snapping enabled is a 3D widget slider: in that case it is not possible to snap the end of the ray to the slider handle, but visualizing the ray over the hand is still desirable for the user experience.
  • bool bIsUI: Must be enabled when the component is used for interaction with 3D widget UI components. It can be disabled for any other type of actor.
  • bool bIsDisabled: Determines whether the component's collision is disabled from the start. It is false by default.
6.3.2.2.3.2 Functions
  • void SetCollisionDisabledState(bool disabled): Used to set the value of bIsDisabled.
  • void UpdateCollisionStatus(bool active): Updates the collision state of the component. If bIsDisabled is true, collision remains disabled regardless of the value passed to this function.
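
For illustration only, the sketch below shows how this component might be set up from C++ in an actor that hosts a 3D widget; in the samples this is done in the Blueprint editor. The component class name USpacesSnappingVolume and the actor class are assumptions, not names confirmed by this guide.

#include "Components/WidgetComponent.h"

// Hypothetical setup: "USpacesSnappingVolume" is an assumed C++ name for the
// Spaces Snapping Volume component described above.
AMyWidgetActor::AMyWidgetActor()
{
	WidgetComponent = CreateDefaultSubobject<UWidgetComponent>(TEXT("Widget"));
	RootComponent = WidgetComponent;

	SnappingVolume = CreateDefaultSubobject<USpacesSnappingVolume>(TEXT("SnappingVolume"));
	SnappingVolume->SetupAttachment(WidgetComponent);

	// Place and size the box so it matches the widget button it covers (see the
	// warning above about matching position, size, and X-axis orientation).
	SnappingVolume->SetBoxExtent(FVector(1.0f, 6.0f, 3.0f));

	SnappingVolume->bSnap = true;   // snap the ray end to this volume
	SnappingVolume->bIsUI = true;   // the volume covers a 3D widget UI component
}

void AMyWidgetActor::SetButtonCollisionEnabled(bool bEnabled)
{
	// Collision stays disabled while bIsDisabled is true, regardless of this call.
	SnappingVolume->UpdateCollisionStatus(bEnabled);
}
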
6.3.2.2.4 Spaces Distal Interaction Box Actor

BP_SpacesDistalInteractionBox allows rotating and scaling actors using far interaction. To use it, it must be added as a Child Actor component inside the actor to manipulate. BP_SpacesDistalInteractionBox can be found under SnapdragonSpaces Content > Hand Tracking > Actors. For an example of how to add a BP_SpacesDistalInteractionBox to another actor, see the BP_PandaInteractable actor located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > HandTracking > Placeable > InteractableObjects. The Distal Interaction Box consists of different Spaces Distal Manipulator Actor instances, each of which can be called a manipulation point. There are two types of Spaces Distal Manipulator Actor: Spaces Distal Scale Point and Spaces Distal Rotation Point. A manipulation point behaves differently depending on its location on the box: the manipulation points at the edges of the box change the scale of the actor, while the remaining manipulation points perform rotations in the X, Y, or Z plane depending on their position on the box. Use the pinch gesture to interact with the interaction box. A sketch of attaching the box from C++ follows.
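
The samples add the box in the Blueprint editor; purely as an illustration, an equivalent C++ setup using a standard Child Actor component could look like the sketch below. The actor class and the DistalInteractionBoxClass property are illustrative and not part of the plugin.

#include "Components/ChildActorComponent.h"

// Illustrative actor that embeds BP_SpacesDistalInteractionBox as a child actor.
AMyManipulableActor::AMyManipulableActor()
{
	RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

	DistalBoxComponent = CreateDefaultSubobject<UChildActorComponent>(TEXT("DistalInteractionBox"));
	DistalBoxComponent->SetupAttachment(RootComponent);
}

void AMyManipulableActor::OnConstruction(const FTransform& Transform)
{
	Super::OnConstruction(Transform);

	// DistalInteractionBoxClass is an editor-exposed TSubclassOf<AActor> property
	// that should point at BP_SpacesDistalInteractionBox.
	if (DistalInteractionBoxClass != nullptr)
	{
		DistalBoxComponent->SetChildActorClass(DistalInteractionBoxClass);
	}
}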


6.3.3 Hit testing

6.3.3.1 Hit Test Example

This example demonstrates how to hit test against points and planes found in the real world. For basic information on hit testing and the capabilities of Unreal Engine's Line Trace Tracked Objects 3D node, see the Unreal Engine documentation (opens new window). In order to use this feature, it must be enabled in the OpenXR plugin settings under Project Settings > Snapdragon Spaces plugin. The Plane Detection feature must also be enabled for hit testing to work properly. For more accurate hits, enable the Use Convex Hull Detection option of the Plane Detection feature in the AR Session Config.


6.3.3.1.1 How the example works

When the sample is opened, the gizmo stays in front of the user and a raycast is started on every update. If the raycast returns a successful hit, the gizmo moves to the hit pose and is displayed in cyan, yellow, and magenta. If no hit is detected, the gizmo moves one meter ahead of the head pose and turns red.

6.3.3.1.2 Hit Manager

The sample uses the BP_HitManager Blueprint asset (found under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > HitTesting > Placeable) to handle hit testing in the sample map. To enable and disable hit testing, the Toggle Spaces Feature method must be used with Hit Testing as the feature. Developers have several options to customize the ray used for hit testing (a C++ sketch of the equivalent line trace follows the list):

  • Distance Ray Cast: The length of the ray.
  • Gizmo Tag: The name of the tag defined in the scene component of BP_Pawn used for placement (the white gizmo in the sample).
  • Distance Gizmo: The distance from the head pose at which the object is placed.
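
The sample performs this raycast in the BP_HitManager Blueprint. The rough C++ equivalent below assumes Unreal Engine's UARBlueprintLibrary::LineTraceTrackedObjects3D (the C++ counterpart of the Line Trace Tracked Objects 3D node); the manager class and the DistanceRayCast, DistanceGizmo, and GizmoComponent members are illustrative.

#include "ARBlueprintLibrary.h"
#include "Kismet/GameplayStatics.h"

void AMyHitManager::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);

	// Start the ray at the head pose and shoot it forward by DistanceRayCast.
	APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(this, 0);
	const FVector Start = Camera->GetCameraLocation();
	const FVector Forward = Camera->GetCameraRotation().Vector();
	const FVector End = Start + Forward * DistanceRayCast;

	// Trace against the tracked planes provided by the AR system.
	const TArray<FARTraceResult> Hits = UARBlueprintLibrary::LineTraceTrackedObjects3D(
		Start, End,
		/*bTestFeaturePoints=*/ false,
		/*bTestGroundPlane=*/ false,
		/*bTestPlaneExtents=*/ true,
		/*bTestPlaneBoundaryPolygon=*/ true);

	if (Hits.Num() > 0)
	{
		// Successful hit: move the gizmo to the hit pose (cyan/yellow/magenta state).
		GizmoComponent->SetWorldTransform(Hits[0].GetLocalToWorldTransform());
	}
	else
	{
		// No hit: place the gizmo Distance Gizmo ahead of the head pose (red state).
		GizmoComponent->SetWorldLocationAndRotation(Start + Forward * DistanceGizmo,
		                                            Camera->GetCameraRotation());
	}
}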

6.3.4 Image Tracking

6.3.4.1 Image tracking example

This sample demonstrates how to detect and augment image objects found in the real world.

For basic information on customizing trackable object updates and the functionality of Unreal Engine's AR Trackable Notify component, see the Unreal Engine documentation (opens new window). In order to use this feature, it must be enabled in the OpenXR plugin settings under Project Settings > Snapdragon Spaces plugin.

6.3.4.1.1 How the example works

By default, when the example runs and recognizes an image, it spawns a gizmo on the physical target. The sample currently only recognizes one image and displays its world position in a UI panel included in the map.


6.3.4.1.2 Image AR Manager

The BP_ImageTrackingManager Blueprint asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > ImageTracking > Placeable) handles the creation and destruction of BP_Gizmo_AugmentedImage actors through the event system. It binds events from the AR Trackable Notify component (opens new window) to react to changes to AR Trackable Images. When the system detects an image, it calls the Add/Update/Remove Tracked Image events. In the sample Blueprint, Toggle AR Capture must be set to ON to start detection, and to OFF to stop detecting objects and destroy all spawned AR images. The Toggle Spaces Feature method can be used as an alternative to enable this feature. Additionally, Scene Understanding must be set as the capture type of the Toggle AR Capture node. A rough C++ counterpart of this Blueprint logic is sketched below.
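
This sketch assumes Unreal Engine's AR Trackable Notify component delegates and UARBlueprintLibrary::ToggleARCapture; the manager class and the TrackableNotify and GizmoClass members are illustrative, and the sample itself implements this logic in Blueprints.

#include "ARBlueprintLibrary.h"
#include "ARTrackableNotifyComponent.h"
#include "ARTrackable.h"

void AMyImageTrackingManager::BeginPlay()
{
	Super::BeginPlay();

	// React to AR Trackable Image changes reported by the AR Trackable Notify component.
	TrackableNotify->OnAddTrackedImage.AddDynamic(this, &AMyImageTrackingManager::OnImageAdded);
	TrackableNotify->OnRemoveTrackedImage.AddDynamic(this, &AMyImageTrackingManager::OnImageRemoved);

	// Start detection; Scene Understanding is the capture type used for image tracking.
	UARBlueprintLibrary::ToggleARCapture(true, EARCaptureType::SceneUnderstanding);
}

void AMyImageTrackingManager::OnImageAdded(UARTrackedImage* Image)
{
	// Spawn the augmentation (BP_Gizmo_AugmentedImage in the sample) at the image pose.
	GetWorld()->SpawnActor<AActor>(GizmoClass, Image->GetLocalToWorldTransform());
}

void AMyImageTrackingManager::OnImageRemoved(UARTrackedImage* Image)
{
	// Destroy the augmentation that was spawned for this image.
}

void AMyImageTrackingManager::EndPlay(const EEndPlayReason::Type Reason)
{
	// Stop detection; the manager should also destroy all spawned AR images.
	UARBlueprintLibrary::ToggleARCapture(false, EARCaptureType::SceneUnderstanding);
	Super::EndPlay(Reason);
}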

6.3.4.1.3 Image AR session configuration

The system starts detecting images using the D_SpacesSessionConfig_ImageTracking asset (found under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > ImageTracking > Core). This asset is a data asset derived from the SpacesSessionConfig class.

The session configuration asset provides three fields: one to define the image size, one to specify the maximum number of images to track simultaneously, and one to reference the candidate images to track.

Creating the image tracker happens in an asynchronous thread to avoid freezes when the number of images to track is very large. As a result, image tracking may sometimes start with a delay. Listen to the On Spaces Image Tracking Is Ready delegate event to determine when images can start being tracked.


6.3.4.1.4 AR Candidate Images

Unreal Engine uses a dedicated asset type called AR Candidate Image (opens new window) to create references to the images that the XR system should track. Developers can add as many AR Candidate Images as they want and assign them to the array indicated in the AR Session Config.

To create an AR Candidate Image, the image to be tracked must first be imported into the project's Content folder as a texture asset. The created texture asset must have UserInterface2D (RGBA) set in its compression settings, and it is recommended to turn off mipmaps.


TIP
The reference images used can be found in the Image Targets for Testing section.

The next step is to create the AR Candidate Image asset and reference the created texture asset in its Candidate Texture field. Each AR Candidate Image should have a unique identifier, which can be set in the Friendly Name field. Otherwise, identical names across different candidates used in the same AR Session Config will cause a hash code collision.

The final step is to define the physical size of the image in centimeters via the width/height fields. Correct measurements are critical for accurate pose estimation and the subsequent placement of augmentations. This data is populated automatically, taking into account the scale of the image and following the orientation defined in the Orientation field. Unfortunately, Unreal Engine currently has this orientation reversed, so developers must use Landscape for portrait images and Portrait for landscape images.

The Snapdragon Spaces plugin can choose between different tracking modes for an AR Candidate Image if the asset's parent class is Spaces AR Candidate Image.

  • Dynamic mode: Updates the position of the tracked image every frame; suitable for moving and static targets. If the tracked image cannot be found, no position or pose is reported. Used by default.
  • Adaptive mode: Updates the position of static images periodically (approximately every 5 frames) if they move slightly. This strikes a balance between power consumption and accuracy for static images.
  • Static mode: Suited for tracking images that are known to be static. Images tracked in this mode are fixed in position when first detected and are never updated. This reduces power consumption and improves performance, but if the tracked image drifts, its position will not be updated.

The tracking mode can be changed while the application is running without stopping or restarting the AR session using the following nodes:

  • Set Image Target Tracking Mode by Friendly Name.
  • Set Image Target Tracking Mode by Candidate Image.
  • Set Image Targets Tracking Mode by Friendly Name.
  • Set Image Targets Tracking Mode by Candidate Image.

SetImageTrackedModeByID is deprecated as of version 0.15.0.

This sample uses the D_SpacesARCandidateImage_SpaceTown Blueprint asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > ImageTracking > Placeable). The image target measures 26 cm in height (when printed on DIN A4 or US Letter). The BP_Gizmo_AugmentedImage Blueprint asset (located under the same folder) renders a gizmo on top of the physical image target that indicates its orientation while it is recognized and tracked.


6.3.5 Plane detection

6.3.5.1 Example of plane detection

This example demonstrates how to visualize the tracked planes found in the real world. For basic information on customizing trackable object updates and the functionality of Unreal Engine's AR Trackable Notify component, see the Unreal Engine documentation (opens new window).

6.3.5.1.1 How the example works

By default, when the example is opened, it generates simple shapes for detected planes. When the Use Convex Hull Detection option is enabled, the sample generates complex shapes using the convex hull of the detected planes.

Wireframes of these geometries can also be displayed.


6.3.5.1.2 Plane AR Manager

The BP_PlaneARManager Blueprint asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > PlaneDetection > Placeable) centralizes the creation and destruction of planes as augmented geometry through the event system. This Blueprint binds events from the AR Trackable Notify component (opens new window) to react to changes to AR Trackable Planes. The following events are related to plane detection:

  • Add/Update/Remove Tracked Plane: These events are called when the system uses simple plane geometry.
  • Add/Update/Remove Tracked Geometry: These events are called when the system uses complex plane geometry.
    • Contrary to the other cases, different types of objects can be registered as UARTrackedGeometry. To verify that the geometry corresponds to a convex hull, its object classification should be EARObjectClassification::NotApplicable. Use the GetObjectClassification function to confirm this (see the sketch below).
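
For illustration (the sample does this check in the BP_PlaneARManager Blueprint), a C++ handler for the tracked geometry events could look like the sketch below. The handler name and its owning class are illustrative, and the delegate is assumed to pass the UARTrackedGeometry pointer, as Unreal Engine's AR Trackable Notify component does.

#include "ARTrackable.h"

// Assumed handler for the Add/Update Tracked Geometry events.
void AMyPlaneARManager::OnTrackedGeometryAdded(UARTrackedGeometry* Geometry)
{
	// Only geometry classified as NotApplicable corresponds to the convex hulls
	// produced by complex plane detection.
	if (Geometry != nullptr &&
	    Geometry->GetObjectClassification() == EARObjectClassification::NotApplicable)
	{
		// Build or update the augmented geometry for this convex hull here.
	}
}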

WARNING
Remember to change the state of Toggle AR Capture to control detection: set it to ON when the actor's behavior starts in order to (re)start detection, and to OFF when it finishes in order to stop detection and destroy all generated AR geometry.

6.3.5.1.3 Complex AR session configuration

When the user enables the convex hull option, the system starts detecting complex planes using the D_ConvexHullSessionConfig asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > PlaneDetection > Core).

Options related to plane detection are:

  • Use Convex Hull Detection
  • Horizontal plane detection
  • Vertical plane detection
6.3.5.1.4 Feature Settings

The Plane Detection feature settings can be found by clicking the gear icon next to the feature in the OpenXR project settings.

  • Use Scene Understanding:
    • Enabling or disabling this setting produces different results in terms of the shape and number of detected planes.
    • Enabling this setting detects planes using Scene Understanding, which uses the same technique as the Spatial Meshing (Experimental) feature.

Differences between default plane detection and Scene Understanding based plane detection:

  • Detection speed and first detection: Default is normal; Scene Understanding is fast.
  • False positives: Default has a very low false positive rate; Scene Understanding is more prone to false positives.
  • Plane accuracy: High for both.
  • Number of planes: Default detects few planes; Scene Understanding detects many.
  • Plane updates and movement: Default planes are stable and unlikely to be updated; Scene Understanding planes are dynamic, with more plane updates.
  • Plane orientation filter: Default offers horizontal and vertical filter options; Scene Understanding has no filter options.
  • Hit testing: Default operates against planes as expected; Scene Understanding hit testing operates against meshes.

6.3.6 Camera Frame Access (Experimental)

6.3.6.1 Camera frame access example

WARNING
The Camera Frame Access feature is marked as experimental because ongoing optimizations in the plugin and the Snapdragon Spaces services currently break backwards compatibility between releases.

This sample demonstrates how to access relevant camera information, in this case the camera image and intrinsics, from a supported device. Currently this feature is limited to RGB cameras.

6.3.6.1.1 How the example works

By default, when the sample runs, the UI displays the RGB image captured by the device and the intrinsic values associated with it. Users can pause and resume frame capture with the corresponding buttons.

If the device has not been granted camera access permission, the image, buttons, and camera information are replaced with a warning message advising the user to enable it.

6.3.6.1.1.1 Camera Frame Access AR Manager

The sample uses the BP_CameraFrameAccessARManager Blueprint asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > CameraFrameAccess > Placeable) to start and stop camera capture via Toggle AR Capture, which must be set to ON to start the capture and to OFF to stop it. Additionally, Camera must be set as the capture type for this node. This feature works with any SpacesSessionConfig asset. It can also be enabled or disabled using the Toggle Spaces Feature method. A minimal C++ equivalent of the capture toggle is sketched below.
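
This sketch assumes Unreal Engine's UARBlueprintLibrary::ToggleARCapture, the C++ counterpart of the Toggle AR Capture node; the helper function is illustrative.

#include "ARBlueprintLibrary.h"

// Illustrative helper toggling the RGB camera capture on or off.
void SetCameraCaptureEnabled(bool bEnable)
{
	// Camera must be the capture type for camera frame access.
	UARBlueprintLibrary::ToggleARCapture(bEnable, EARCaptureType::Camera);
}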

6.3.6.1.1.2 Camera capture library

The Unreal AR interface provides the ability to get information from the camera:

  • Get AR Texture
    • Returns the camera frame. The Snapdragon Spaces plugin extends this to provide the RGB frame as a 2D texture. Cast the result to the Spaces AR Camera Image Texture class to get the RGB texture of the frame. Also, Camera Image must be set as the texture type for this node.
  • Get Camera Intrinsics
    • Returns the camera's image resolution, focal length, and principal point.

The Snapdragon Spaces plugin provides additional functionality to help manage camera capture from Blueprints:

  • Set Camera Frame Access State
    • To pause camera capture, set Active to FALSE. To resume camera capture, set Active to TRUE. Deprecated in version 0.15.0; pause the AR session or the Spaces feature instead.
  • Is Camera Frame Access Supported
    • Returns TRUE if capture is available, FALSE otherwise. If the application uses capture, it is recommended to check this result on Tick. Deprecated in version 0.15.0; check whether the feature is available instead (a usage sketch follows this list).
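
For illustration, the corresponding C++ calls through Unreal Engine's AR Blueprint Library might look like the sketch below. The cast target class name USpacesARCameraImageTexture is an assumption based on the Spaces AR Camera Image Texture type named above, so the cast is left commented out.

#include "ARBlueprintLibrary.h"
#include "ARTextures.h"

void LogCameraFrameInfo()
{
	// Get the latest camera frame; Camera Image is the required texture type.
	UARTexture* Texture = UARBlueprintLibrary::GetARTexture(EARTextureType::CameraImage);
	if (Texture != nullptr)
	{
		// The Snapdragon Spaces plugin returns the RGB frame through its own texture
		// class ("Spaces AR Camera Image Texture"); the C++ class name is assumed,
		// so the cast is only indicated here:
		// USpacesARCameraImageTexture* CameraImage = Cast<USpacesARCameraImageTexture>(Texture);
	}

	// Query the camera intrinsics: image resolution, focal length, principal point.
	FARCameraIntrinsics Intrinsics;
	if (UARBlueprintLibrary::GetCameraIntrinsics(Intrinsics))
	{
		UE_LOG(LogTemp, Log, TEXT("Resolution %d x %d, focal length (%f, %f)"),
			Intrinsics.ImageResolution.X, Intrinsics.ImageResolution.Y,
			Intrinsics.FocalLength.X, Intrinsics.FocalLength.Y);
	}
}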

TIP
The sample's behavior is implemented in the WBP_CameraFrameAccess Blueprint asset (located under SnapdragonSpacesSamples Content > SnapdragonSpaces > Samples > CameraFrameAccess > UI).

6.3.6.2 Advanced

This section describes how to access the raw YUV camera frame data without extracting it from the generated texture, and how to obtain additional data that is not accessible from Blueprints. All data types and structures involved in this procedure are described below; the functions are part of USpacesRuntimeBlueprintLibrary and are declared in SpacesRuntimeBlueprintLibrary.h.

Tip
These data and functions are only accessible via C++.

6.3.6.2.1 Data types

ESpacesPlaneCameraFrameType: Enum describing the type of a frame plane.

UENUM()
enum class ESpacesPlaneCameraFrameType : uint8
{
	Y = 0,   // luma plane
	U = 1,   // chroma U plane
	V = 2,   // chroma V plane
	UV = 3   // interleaved chroma plane (e.g. for NV12/NV21)
};

ESpacesDistortionCameraFrameModel: Enum describing the different lens distortion models for camera calibration.

UENUM()
enum class ESpacesDistortionCameraFrameModel : uint8
{
	Linear = 0,
	Radial_2 = 1,
	Radial_3 = 2,
	Radial_6 = 3,
	FishEye_1 = 4,
	FishEye_4 = 5
};

ESpacesCameraFrameFormat: Enum describing the different formats of the camera frame.

UENUM()
enum class ESpacesCameraFrameFormat : uint8
{
	Unknown = 0,
	Yuv420_NV12 = 1,
	Yuv420_NV21 = 2,
	Mjpeg = 3,
	Size = 4,
};

FFrameDataOffset: Structure describing the offset of the sensor image data within the frame buffer data.

USTRUCT()
struct FFrameDataOffset
{
	GENERATED_BODY()

	int32 X;
	int32 Y;
};

FSpacesPlaneCameraFrameData: Structure describing a plane of the frame buffer.

USTRUCT()
struct FSpacesPlaneCameraFrameData
{
	GENERATED_BODY()

	uint32 PlaneOffset;
	uint32 PlaneStride;
	ESpacesPlaneCameraFrameType PlaneType;
};

  • PlaneOffset: The offset from the beginning of the buffer to the beginning of the plane data.
  • PlaneStride: The byte distance from one row of the plane to the next.
  • PlaneType: The type of the frame plane.

FSpacesSensorCameraFrameData: Structure containing extended camera intrinsic data.

USTRUCT()
struct FSpacesSensorCameraFrameData
{
	GENERATED_BODY()

	FARCameraIntrinsics SensorCameraIntrinsics;
	FFrameDataOffset SensorImageOffset;
	TArray<float> SensorRadialDistortion;
	TArray<float> SensorTangentialDistortion;
	ESpacesDistortionCameraFrameModel DistortionCameraFrameModel;
};

  • SensorCameraIntrinsics: Contains the camera's image resolution, principal point, and focal length.
  • SensorImageOffset: The offset of the sensor image data within the frame buffer data.
  • SensorRadialDistortion: An array of floats describing the radial distortion coefficients.
  • SensorTangentialDistortion: An array of floats describing the tangential distortion coefficients.
  • DistortionCameraFrameModel: The lens distortion model used for camera calibration.

FSpacesCameraFrameData: Structure containing the frame data and camera data.

USTRUCT()
struct FSpacesCameraFrameData
{
	GENERATED_BODY()

	uint32 BufferSize;
	uint8* Buffer;
	ESpacesCameraFrameFormat FrameFormat;
	TArray<FSpacesPlaneCameraFrameData> Planes;
	FSpacesSensorCameraFrameData SensorData;
};

  • BufferSize: The size of the buffer containing the data.
  • Buffer: A pointer to the frame data.
  • FrameFormat: The format of the camera frame.
  • Planes: An array containing the frame planes.
  • SensorData: The extended intrinsic data of the camera that captured the frame.
6.3.6.2.2 Functions
  • static FSpacesCameraFrameData GetCameraYUVFrameData(): Accesses and returns the latest camera frame data in FSpacesCameraFrameData format.
  • static bool ReleaseCameraFrameData(): Releases a previously accessed frame. It must be called after using the data of the previous frame in order to access another frame (a usage sketch follows).
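
Putting these pieces together, a minimal usage sketch could look like the following; the free function, the logging, and the handling of only the first plane are illustrative.

#include "SpacesRuntimeBlueprintLibrary.h"

void ReadLatestCameraFrame()
{
	// Access the most recent camera frame (raw YUV buffer plus sensor data).
	FSpacesCameraFrameData FrameData = USpacesRuntimeBlueprintLibrary::GetCameraYUVFrameData();

	if (FrameData.Buffer != nullptr && FrameData.Planes.Num() > 0)
	{
		// Locate the first plane (e.g. the Y plane of a Yuv420_NV12 frame).
		const FSpacesPlaneCameraFrameData& Plane = FrameData.Planes[0];
		const uint8* PlaneData = FrameData.Buffer + Plane.PlaneOffset;

		// Rows are PlaneStride bytes apart; copy or process them here.
		UE_LOG(LogTemp, Log, TEXT("Frame buffer %u bytes, first plane stride %u, first byte %u"),
			FrameData.BufferSize, Plane.PlaneStride, static_cast<uint32>(PlaneData[0]));
	}

	// The frame must be released before another frame can be accessed.
	USpacesRuntimeBlueprintLibrary::ReleaseCameraFrameData();
}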
