Abstract
Augmented reality (AR) is widely used to guide users through complex tasks, for example, in education or industry. Sometimes, these tasks are a succession of subtasks, possibly distant from each other. This can happen, for instance, in inspection operations, where AR devices give instructions about subtasks to perform in several rooms. In this case, AR guidance is needed both to indicate where to head to perform the subtasks and to instruct the user on how to perform them. In this paper, we propose an approach based on user activity detection: an AR device displays wayfinding guidance when the current user activity suggests it is needed. We designed a first prototype on a head-mounted display, using a neural network for user activity detection, and compared it with two other guidance temporality strategies in terms of efficiency and user preferences. Our results show that the most efficient guidance temporality depends on user familiarity with the AR display. While our proposed guidance has not proven more efficient than the other two, our experiment hints at several improvements to our prototype, which is a first step toward efficient guidance for both wayfinding and complex task completion.
Highlights
Augmented reality (AR) devices have the potential to help people learn or perform complex tasks by displaying virtual content spatially registered over real objects or points of interest (POIs)
We conducted a within-subjects study with the three types of guidance as the independent variable, and the total scenario completion time and participants' ratings as dependent variables
From the confusion matrix (Table 1), one can see that some misclassifications are likely due to unbalanced data during training
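To illustrate the kind of analysis mentioned above, here is a minimal sketch (not the paper's code) of building a confusion matrix for an activity classifier; the activity labels and counts are purely hypothetical, chosen to show how a majority class can absorb errors from a rare class when training data is unbalanced.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Return matrix[i][j] = number of samples whose true label is
    labels[i] and whose predicted label is labels[j]."""
    index = {label: k for k, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

# Hypothetical data: "walking" dominates the training set, so the rare
# "inspecting" activity tends to be misclassified as the majority class.
labels = ["walking", "inspecting"]
y_true = ["walking"] * 8 + ["inspecting"] * 2
y_pred = ["walking"] * 8 + ["inspecting", "walking"]

cm = confusion_matrix(y_true, y_pred, labels)
# Rows are true classes, columns are predicted classes; the off-diagonal
# cell cm[1][0] shows an "inspecting" sample absorbed by "walking".
print(cm)
```

Reading the matrix row by row makes class imbalance visible at a glance: a rare class with few diagonal counts and errors concentrated in the majority-class column is the pattern the quoted sentence describes.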
Summary
Augmented reality (AR) devices have the potential to help people learn or perform complex tasks by displaying virtual content spatially registered over real objects or points of interest (POIs). This AR guidance for complex task completion has been widely studied in the literature. Sometimes, several spatially distant tasks are combined to form a sequence of operations in a large space. This is what happens when operators need to perform assembly, disassembly, or maintenance of massive machines, such as plane motors or trucks. It also happens when operators need to inspect large spaces, such as powerplants. Users then need both help from the AR device to perform their local tasks and guidance from the AR device to locate these tasks in space.