2017 3DUI paper: Evaluating Gesture-Based Augmented Reality Annotation

ABSTRACT
Drawing annotations with 3D hand gestures in augmented reality
is useful for creating visual and spatial references in the real world,
especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an
approximate 3D scene model (e.g., as provided by the Microsoft
HoloLens). However, little is known about user preference and
performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform:
Surface-Drawing and Air-Drawing, with either raw but smoothed or
interpreted and beautified gesture input. For the Surface-Drawing
method, users control a cursor that is projected onto the world
model, allowing gesture input to occur directly on the surfaces of
real-world objects. For the Air-Drawing method, gesture drawing occurs at the user’s fingertip and is projected onto the world
model on release. The methods differ in the vergence switches they necessitate and in the degree of cursor control they afford.
We performed an experiment in which users drew on two different
real-world objects at different distances using each method. Results indicate that Surface-Drawing is more accurate than
Air-Drawing and that Beautified annotations are drawn faster than Non-Beautified ones; participants also preferred Surface-Drawing and Beautified annotations.
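
The key technical difference between the two methods is when a stroke point is projected onto the environment model. The following Python sketch illustrates that difference; it is not the paper's HoloLens implementation, and the triangle-mesh environment, the head-origin ray, and all function names are assumptions made purely for illustration:

```python
# Conceptual sketch (not the paper's HoloLens code) of the two projection
# strategies, assuming the environment model is a triangle mesh and hand
# tracking yields a head position and fingertip position per frame.
import numpy as np

def ray_mesh_hit(origin, direction, triangles, eps=1e-9):
    """Return the nearest ray/triangle intersection (Moller-Trumbore),
    or None if the ray misses the mesh."""
    best_t, best_p = None, None
    d = direction / np.linalg.norm(direction)
    for v0, v1, v2 in triangles:
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(d, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:          # ray parallel to triangle plane
            continue
        f = 1.0 / a
        s = origin - v0
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:
            continue
        t = f * np.dot(e2, q)
        if t > eps and (best_t is None or t < best_t):
            best_t, best_p = t, origin + t * d
    return best_p

def surface_draw_point(head, fingertip, mesh):
    """Surface-Drawing: cast a ray from the head through the fingertip and
    place the stroke point where it hits the environment mesh (per frame)."""
    return ray_mesh_hit(head, fingertip - head, mesh)

def air_draw_release(head, fingertip_samples, mesh):
    """Air-Drawing: the stroke is first recorded at the fingertip positions,
    then every sample is projected onto the mesh when the gesture ends."""
    pts = [ray_mesh_hit(head, p - head, mesh) for p in fingertip_samples]
    return [p for p in pts if p is not None]
```

In this framing, Surface-Drawing gives continuous on-surface feedback while the stroke is drawn, whereas Air-Drawing defers every projection to the moment of release, which is one source of the vergence and cursor-control differences noted above.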
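The Beautified conditions turn a raw, smoothed stroke into an idealized shape such as a circle or arrow. The paper does not detail its beautification algorithm; one minimal, common approach for the circle case is an algebraic least-squares fit, sketched below with illustrative names:

```python
# Hedged illustration of circle beautification: fit a circle to the 2D
# stroke points (e.g., in the plane of the drawing surface) with an
# algebraic least-squares (Kasa) fit. Only one plausible approach.
import numpy as np

def fit_circle(points):
    """Fit a circle to Nx2 stroke points; returns (center_xy, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve  x^2 + y^2 + D*x + E*y + F = 0  for D, E, F in least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius
```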

6 CONCLUSION
We presented an evaluation of two 3D drawing gesture annotation
methods—Surface-Drawing and Air-Drawing—for spatial referencing of real-world objects in augmented reality. Surface-Drawing
directly draws onto real-world surfaces, while Air-Drawing draws
at the user’s fingertip and is projected into the world upon release. Experimental results indicate that Surface-Drawing is more
accurate than Air-Drawing and that Beautified annotations are drawn
faster than Non-Beautified ones; participants also preferred Surface-Drawing over Air-Drawing and generally appreciated Beautification. Note that our findings generalize beyond the HoloLens to any AR
or VR device that can detect hand gestures and has an environment model. Future work will investigate different gestures, target
objects, and additional distances for drawing 3D annotations in AR.
Future work will also explore ways to handle the vergence problem
for the Air-Drawing method (e.g., using and projecting from only
the dominant eye for issuing annotations [7]). While drawing is a
thoroughly explored concept for traditional user interfaces, 3D gesture drawing for AR annotation needs further exploration, and this
work contributes results in that direction.
 
