Snapdragon Spaces Development Guide (3)


3.4 Composite layers

When presenting a rendered image to the user, it is sometimes necessary to separate the elements of the scene into different layers. Each of these layers can be rendered individually and then composited together to form the final scene. This helps to improve the stability and quality of each rendered layer at the expense of some performance.

These layers are called composition layers. The Snapdragon Spaces SDK gives developers access to several types of composition layers, which are briefly summarized here. Additional details can be found in the engine-specific documentation.

Supported renderers

Not all rendering APIs (OpenGL ES, Vulkan, etc.) support every layer type described here. See the engine-specific documentation for more details.

Layer Types

Projection Layers
Projection layers represent planar projected images rendered from each eye using perspective projection. They are most commonly used to render the virtual world from the user's perspective. A projection layer is automatically created from the main camera's perspective and rendered to the XR headset; no action is required to render this automatically generated projection layer. There is currently no way to submit additional projection layers.

Quad Layers
Quad layers are useful for displaying user interface elements or 2D content rendered into the virtual world. Quad layers are only visible from the front and have no thickness.

  • Unity Gaze Pointer composition layer example

Depth Layers
Depth layers contain the data needed to match depth information to the color information of the final image. Depth images can only be submitted together with projection layers.

VR only

Depth submission is only supported on selected VR hardware. In Unity, depth layers can be enabled by opening Project Settings > XR Plug-in Management > OpenXR and setting Depth Submission Mode to Depth 16 Bit or Depth 24 Bit. A depth image corresponding to the default projection layer is then generated and submitted automatically, without further action from the developer.
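For reference, the same setting can also be applied from an editor script. The following is a minimal sketch, assuming the Unity OpenXR Plugin (com.unity.xr.openxr) is installed, the Android build target is used, and the plugin exposes OpenXRSettings.depthSubmissionMode as in recent versions; the menu path and class name are illustrative only.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;
using UnityEngine.XR.OpenXR;

// Editor utility mirroring Project Settings > XR Plug-in Management > OpenXR > Depth Submission Mode.
public static class DepthSubmissionSetup
{
    [MenuItem("Tools/Enable OpenXR Depth Submission (Android)")]
    public static void EnableDepthSubmission()
    {
        OpenXRSettings settings =
            OpenXRSettings.GetSettingsForBuildTargetGroup(BuildTargetGroup.Android);
        if (settings == null)
        {
            Debug.LogWarning("OpenXR settings not found for the Android build target.");
            return;
        }

        // Select 24-bit depth submission; Depth16Bit is the other supported option.
        settings.depthSubmissionMode = OpenXRSettings.DepthSubmissionMode.Depth24Bit;
        EditorUtility.SetDirty(settings);
        Debug.Log("Depth submission mode set to Depth 24 Bit.");
    }
}
#endif
```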

Sort Order
All composition layers are sorted and rendered using the "painter's algorithm", where lower layers are rendered first, and higher layers are rendered on top of them.

Number of Layers
Up to 16 composition layers can be used. This total includes the default projection layer responsible for rendering the main camera content, so the actual budget is 1 default layer plus 15 additional composition layers. Each layer used incurs additional performance overhead. If the maximum number of layers is exceeded, the highest layers are not rendered.
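The sorting and budgeting rules above can be illustrated with a small sketch. The types below (SpacesLayer, SortOrder) are hypothetical and only demonstrate the painter's algorithm and the 16-layer cap; they are not part of the SDK API.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical description of a composition layer; not an SDK type.
public class SpacesLayer
{
    public string Name;
    public int SortOrder; // lower values are rendered first (painter's algorithm)
}

public static class LayerBudget
{
    // 1 default projection layer + 15 additional composition layers.
    public const int MaxLayers = 16;

    // Returns the layers in the order they would be composited,
    // dropping the highest layers if the budget is exceeded.
    public static List<SpacesLayer> Compose(IEnumerable<SpacesLayer> layers)
    {
        var sorted = layers.OrderBy(l => l.SortOrder).ToList();
        if (sorted.Count > MaxLayers)
        {
            // Layers beyond the limit (the highest ones) are not rendered.
            sorted = sorted.Take(MaxLayers).ToList();
        }
        return sorted;
    }
}
```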

Hint

Use as few layers as possible for best performance.

Layer Size

The maximum size allowed for a composition layer is determined by the hardware, but texture sizes up to 16K are typically supported.

4 Design and User Experience

4.1 Hand Tracking

4.1.1 Best Practices

When designing XR applications, user comfort, utility, and usability are key to creating a high-quality experience.

The purpose of this section is to provide you with best practices for designing the most comfortable, immersive, and engaging experiences possible.

The best practices section covers gestures and interaction with virtual objects and interfaces, ergonomics, usability, and troubleshooting limitations.

#VirtualInteraction
Hand tracking technology enables users to interact with virtual content in two ways:

  • Gestures are specific hand poses or movements that can be used to trigger actions or interact with items.
  • Object interaction allows the hands to interact directly with virtual objects through far or near interaction.

#Gestures
Gestures as a means of interaction usually require some level of tutorial or instruction. Even though gestures are one of the most ubiquitous and natural languages, different people may interpret a suggested gesture differently.

Gestures work best when:

  • They are not introduced all at once, but over the course of a tutorial
  • They are limited in number and consistent across the application
  • Each gesture is distinct and maps to a specific function
  • Tutorials are presented to the user and can be accessed at any time

Successful use of virtual objects and gestures at the right time and place helps create a positive user experience.

#Usability
Usability is key to creating a truly immersive user experience. To achieve it, several design factors need to be carefully considered: spatial context, user interaction, object placement, and the user journey.

#User Pose


  • Maintain a neutral body position
  • Keep the arms close to the sides of the body and down
  • Avoid strenuous and repetitive movements

#Accessibility
Designing for accessible experiences in XR means everyone can access the user interface including tutorials, fonts, colors, and methods of interaction.

Remember, in XR, the user interface is not tied to the phone screen or laptop. It is important to use cues to guide the user through the 360° 3D environment.

In general, it is recommended that users be able to perform interactions at close range with a natural bend in the arm, rather than at longer range with the arm extended, which is more tiring for the user.

#User Journey

  • Use hints to guide the user experience
  • Notify the user with visual or audio feedback whether an action can be performed and whether an interaction has been detected
  • Give the user a way to exit unwanted states - all actions should be reversible

#Context
To further improve user experience, context can be used to simplify content delivery and related interactions. By keeping track of what the user is looking at and adjusting available elements, visual clutter can be reduced and the experience will be simpler.

For example, a context can be used in a drawing presentation, where there are many elements available for customization (pencil thickness, color, etc.). The default menu provides the ability to select any of these elements. Once the user selects one of these elements, the menu changes to only offer options related to that element. In the case where the user selects a color, the subsequent menu narrows down to only the different color options available for the drawing presentation.
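As a rough illustration of this pattern, the sketch below narrows a menu to the options relevant to the element the user just selected. All type names and option strings are hypothetical and only demonstrate the context switch.

```csharp
using System.Collections.Generic;

// Hypothetical contexts for a drawing demo menu.
public enum MenuContext { Default, PencilThickness, Color }

public class ContextualMenu
{
    private static readonly Dictionary<MenuContext, string[]> Options =
        new Dictionary<MenuContext, string[]>
        {
            { MenuContext.Default,         new[] { "Pencil thickness", "Color", "Eraser" } },
            { MenuContext.PencilThickness, new[] { "Thin", "Medium", "Thick" } },
            { MenuContext.Color,           new[] { "Red", "Green", "Blue" } },
        };

    public MenuContext Current { get; private set; } = MenuContext.Default;

    // Called when the user selects an element; the menu then only offers
    // options related to that element, reducing visual clutter.
    public string[] Select(MenuContext selected)
    {
        Current = selected;
        return Options[selected];
    }
}
```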

#Object Placement

  • Group similar objects to make them easier to find (i.e. Gestalt principles)
  • Place objects in clear view without blocking the user's view
  • If near interactions (direct collision) are expected, make sure the object is within the user's reach
  • Interactive objects need to be sized appropriately for the interaction mode

Limitations
#Field of View (FOV)
FOV is usually measured in degrees and is the open observable area that a person can see through their eyes or an AR/VR device.

In AR/VR, FOV depends on the device, especially on:

  • Camera FOV is the field of view of the device's cameras (usually monochrome or RGB).
  • Display FOV is the field of view of the internal display - this is what the user actually sees.

#Occlusion

Occlusion occurs when one object in 3D space blocks the view of another object. This can happen in situations where one hand is in the way of the other, preventing accurate hand tracking.

To minimize occlusion:

  • Avoid gestures and interactions that may prompt users to overlap their hands
  • Use one-handed interaction mode whenever possible
  • For cases where occlusion does occur, inform the user to move their hands so that optimal hand tracking can resume

#Outdoor Environments

For the best experience, use hand tracking indoors. Applications intended for outdoor use may not function as expected due to limitations of the currently supported hardware.

4.1.2 Interactive Gestures

Main Interaction Gestures

Pinch

Perform the pinch gesture by touching the tip of the thumb to the tip of the index finger while extending the remaining fingers. The pinch is best recognized when the hand is viewed from the side.

Pinch is used to select at the moment the fingertips touch, or to manipulate an object by holding the gesture and moving the hand.

The pinch gesture is an efficient and immersive confirmation gesture because touching the fingertips together provides natural haptic feedback.
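A simple way to approximate pinch detection is to compare the distance between the thumb tip and the index fingertip against a threshold. The sketch below assumes the joint positions are supplied by whichever hand-tracking provider the application uses; the threshold values and method names are illustrative.

```csharp
using UnityEngine;

public static class PinchDetector
{
    // Distance (in meters) below which the fingertips are treated as touching.
    // Illustrative value; tune for the target device.
    private const float PinchThreshold = 0.02f;

    // True when the thumb tip and index tip are close enough to count as a pinch.
    // Joint positions are expected in world space from the hand-tracking provider.
    public static bool IsPinching(Vector3 thumbTip, Vector3 indexTip)
    {
        return Vector3.Distance(thumbTip, indexTip) < PinchThreshold;
    }

    // Optional: normalized pinch strength (0 = fully open, 1 = touching),
    // useful for driving visual feedback while the gesture is being performed.
    public static float PinchStrength(Vector3 thumbTip, Vector3 indexTip, float openDistance = 0.08f)
    {
        float d = Vector3.Distance(thumbTip, indexTip);
        return Mathf.Clamp01(1f - (d - PinchThreshold) / (openDistance - PinchThreshold));
    }
}
```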

Grab

The grab gesture is performed by placing the hand in front of the camera and making a fist.

This gesture is used to grab and manipulate volumetric objects in near interaction.

Like pinch, the grab gesture provides natural haptic feedback.

Open Hand

The open hand pose is a neutral pose. It is performed by holding the hand outstretched, fingers spread, with the palm facing away from the camera.

The open hand is often used to attach a raycast for interacting with distant elements.

This gesture is used to show raycasts or as a release gesture.
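To illustrate how the open hand is typically tied to a raycast for far interaction, here is a minimal sketch. The palm pose is assumed to come from the hand-tracking provider, and the ray simply uses Unity physics; the component and field names are illustrative.

```csharp
using UnityEngine;

public class OpenHandRaycaster : MonoBehaviour
{
    [Tooltip("Transform driven by the hand-tracking provider (e.g. the palm pose).")]
    public Transform palm;

    [Tooltip("Maximum distance for far interaction.")]
    public float maxDistance = 10f;

    [Tooltip("Reticle object moved to the hit point when something is targeted.")]
    public Transform reticle;

    void Update()
    {
        if (palm == null) return;

        // Cast a ray from the palm in its forward direction.
        Ray ray = new Ray(palm.position, palm.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            // Show the reticle on the targeted element (hover state).
            if (reticle != null)
            {
                reticle.gameObject.SetActive(true);
                reticle.position = hit.point;
            }
        }
        else if (reticle != null)
        {
            reticle.gameObject.SetActive(false);
        }
    }
}
```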

#Point (deprecated)

Starting with Snapdragon Spaces version 0.12.1, the point gesture has been deprecated. Use this gesture only when developing with Snapdragon Spaces 0.11.1 and earlier.

The point gesture is performed by extending the index finger while keeping the rest of the fingers closed.

This gesture is typically used for near interaction with a GUI, such as pressing a button.

Hand Representation

#Augmented Reality
In an AR environment, it is not recommended to display a virtual hand over a real hand. It is recommended to focus on feedback from virtual elements rather than hand avatars.

#Virtual Reality
3D models utilizing inverse kinematics are best suited for VR applications. The 3D representation of the hand is overlaid on the hand in digital space, creating a more immersive experience for the user. It is important to adapt the hand representation to the context of the presentation.

Additionally, the 3D hand should provide visual feedback to let the user know that he/she is interacting.

Here are 2 examples of 3D hand avatars:

Alpha hand

Harlequin hand

Feedback, hints, and affordances

Since hand tracking cannot provide haptic feedback the way other input devices can, this must be compensated for with visual and audio feedback when the user interacts with 3D objects. It is important to design distinct sounds and visual changes to confirm when the user interacts with a component.

To improve user experience, consider incorporating real-world feedback into interaction design. This will help to confirm through visual or audio cues that an interaction with an object or gesture is or has been successfully performed.

In general, the main states of virtual elements are as follows:

| State    | Visual feedback | Audio feedback |
| -------- | --------------- | -------------- |
| Idle     | None            | None           |
| Hover    | Yes             | Yes            |
| Selected | Yes             | Yes            |

Visual feedback

Objects

The behavior of the object can change, or the object can be highlighted when successfully engaged. Objects can also change shape or size based on interaction or gestures.

Object states from left to right: idle, hover, selected.
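The idle/hover/selected feedback described above can be sketched as a small Unity component. The highlight colors, audio clips, and the way the interaction system calls SetState are assumptions; this is only one possible way to wire visual and audio feedback to the three states.

```csharp
using UnityEngine;

public enum InteractionState { Idle, Hover, Selected }

[RequireComponent(typeof(AudioSource))]
public class InteractableFeedback : MonoBehaviour
{
    public Color idleColor = Color.white;
    public Color hoverColor = Color.yellow;
    public Color selectedColor = Color.green;

    public AudioClip hoverSound;    // played when the element is targeted
    public AudioClip selectSound;   // played when the element is selected

    private Renderer _renderer;
    private AudioSource _audio;

    void Awake()
    {
        _renderer = GetComponent<Renderer>();
        _audio = GetComponent<AudioSource>();
    }

    // Called by the interaction system whenever the element changes state.
    public void SetState(InteractionState state)
    {
        switch (state)
        {
            case InteractionState.Idle:
                _renderer.material.color = idleColor;   // no audio feedback when idle
                break;
            case InteractionState.Hover:
                _renderer.material.color = hoverColor;
                if (hoverSound != null) _audio.PlayOneShot(hoverSound);
                break;
            case InteractionState.Selected:
                _renderer.material.color = selectedColor;
                if (selectSound != null) _audio.PlayOneShot(selectSound);
                break;
        }
    }
}
```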

Hands

Depending on the situation, visual feedback on the 3D hand avatar is recommended in addition to the interactive objects.

In general, the hand goes through three states: no interaction, collision with an interactive element, and interaction. The following is a representation of the types of effects a 3D hand can produce:
Hand Feedback: Idle, Hover, Selection

cross hair

To indicate which element the user is about to interact with, especially in the context of remote interaction, it is important to provide different kinds of feedback on the pointer. Traditionally, in HMI interfaces, the pointer's appearance changes according to its interaction state, so these behaviors are very intuitive for users. It is crucial to consider, during the design phase, the different behaviors that raycasts and their reticles may have.
Reticle feedback: idle, hover, selected

#Audio Feedback
It is preferable to implement audio feedback only on virtual objects, since interaction sounds are context sensitive. If a user interacts with a 2D menu, the audio feedback will be different than if he/she interacts in a video game.

Audio can also be used for ambient sound, but it still depends on the context.

#Interaction hint

An interaction hint is a hand animation that is triggered when the user's hand is not detected. This component guides users when they don't know how to interact with virtual elements.

If the user has no interaction for x seconds (defined when designing the demo), the component animates in a loop until the system detects an interaction.
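A minimal sketch of such a component is shown below, assuming an Animator (or similar object) that loops the hint animation and an interaction system that calls NotifyInteraction when the user interacts; the field names and timeout value are illustrative.

```csharp
using UnityEngine;

public class InteractionHint : MonoBehaviour
{
    [Tooltip("Seconds without interaction before the hint starts (the 'x' defined at design time).")]
    public float idleTimeout = 10f;

    [Tooltip("Animator that plays the looping hand-animation hint.")]
    public Animator hintAnimator;

    private float _idleTimer;

    // Called by the interaction system whenever an interaction is detected.
    public void NotifyInteraction()
    {
        _idleTimer = 0f;
        if (hintAnimator != null) hintAnimator.gameObject.SetActive(false);
    }

    void Update()
    {
        _idleTimer += Time.deltaTime;

        // After the timeout, show the looping hint until an interaction is detected.
        if (_idleTimer >= idleTimeout && hintAnimator != null && !hintAnimator.gameObject.activeSelf)
        {
            hintAnimator.gameObject.SetActive(true); // the animation itself is set to loop
        }
    }
}
```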

Cheat Sheet

Gestures and feedback best suited for each interaction state:

| State           | Near                    | Far                 | Feedback                     |
| --------------- | ----------------------- | ------------------- | ---------------------------- |
| Targeting       | Collision               | Open hand (raycast) | Visual, audio                |
| Selection       | Pinch, grab             | Pinch               | Visual, audio                |
| Manipulation    | Pinch, grab             | Pinch               | Visual, depending on context |
| UI targeting    | Collision               | Open hand (raycast) | Visual, audio                |
| UI selection    | Pinch, point, open hand | Pinch               | Visual, audio                |
| UI manipulation | Pinch, point, open hand | Pinch               | Visual, audio                |

4.2 Custom Controller Project

The developer package includes an Android Studio project for building a custom controller archive that can be used later instead of the controller archive included by default in the Snapdragon Spaces Unity package or the Unreal Engine plugin.

To change the appearance of the controller, open the project with Android Studio (2020.3 or later recommended), then open SpacesController > res > layout > custom_input_companion_controller.xml. The class located under SpacesController > java > com.qualcomm.snapdragon.spaces.spacescontroller.SpacesCustomInputContentViewFactory overrides the controller inside the Snapdragon Spaces Services application. If IDs or other values declared in the layout file or under the SpacesController > res > values path are changed, the corresponding counterparts in the aforementioned class should also be updated accordingly to avoid any linking errors.

To build a custom controller archive that can be used in the Snapdragon Spaces Unity package or the Unreal Engine plugin, run the assemble task of the SpacesController module, either from the Gradle window (open it via View > Tool Windows > Gradle, then run SpacesController > Tasks > build > assemble) or from the command line in the project root directory (gradle assemble on Windows, ./gradle assemble on macOS/Linux).

If the build was successful, one of the generated archives (release or debug) located under SpacesController > build > outputs > aar can be used in one of the following steps:

  • Using custom controllers in Unity
  • Using custom controllers in Unreal

4.3 Image targets for testing

Here you can find all images for testing the XR experience in the sample applications included in the SDK.

Print

Make sure to print the PDF at 100% scale so that the physical size of the image target is correct.


Origin blog.csdn.net/weixin_38498942/article/details/132339066