Abstract

Contemporary assistance systems support a broad variety of tasks. When they provide information or instruction, the way they do so has an implicit and often not immediately apparent impact on the user. System design often forces static roles onto the user, which can have negative side effects when system errors occur or unique and previously unknown situations need to be tackled. We propose an adjustable augmented reality-based assistance infrastructure that adapts to the user’s individual cognitive task proficiency and dynamically reduces its active intervention in a subtle, not consciously noticeable way over time to spare attentional resources and facilitate independent task execution. We also introduce multi-modal mechanisms to provide context-sensitive assistance and argue why system architectures that provide explainability of concealed automated processes can improve user trust and acceptance.
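
As a concrete illustration of the fading mechanism described above, the following Python sketch shows how an intervention level could be nudged toward a target derived from an estimated task proficiency in small, capped steps so that the reduction itself remains barely noticeable. The 0-to-1 scales, the step size, and all names are illustrative assumptions, not the AVIKOM implementation.

    # Hypothetical sketch of gradually fading assistance (not the AVIKOM code).
    # Assumption: proficiency and intervention level are both on a 0..1 scale.

    def update_assistance_level(current_level: float,
                                estimated_proficiency: float,
                                max_step: float = 0.02) -> float:
        """Move the intervention level toward (1 - proficiency), capped at a
        small per-update step so the change stays subtle; the cap value is an
        assumption, not taken from the paper."""
        target = 1.0 - estimated_proficiency
        delta = max(-max_step, min(max_step, target - current_level))
        return min(1.0, max(0.0, current_level + delta))

    # Example: for a fairly proficient user (0.8) assistance fades slowly.
    level = 1.0
    for _ in range(5):
        level = update_assistance_level(level, estimated_proficiency=0.8)
    print(round(level, 2))  # 0.9 after five small steps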

Highlights

  • Available assistance systems, be they prototypes or commercially available products, can be characterized as ranging from advisory systems that ease information access for decision-making to artificial instructors that, in their most extreme form, do not require any decision by the user during operation

  • The theoretical foundations of structural-dimensional analysis of mental representations (SDA-M) relate to the hierarchically organized cognitive action architecture, which assumes that conscious mental functions of action control emerged evolutionarily from more elementary functions [12]

  • We have proposed a dynamic process for subtle and not consciously noticeable modulation of assistance allocation based on analyses of task-related memory structures in combination with peripheral displays to minimize unnecessary cognitive load, provide a seamless and pleasant assistance experience, and facilitate learning processes for self-dependent task execution
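
As a rough illustration of how task-related memory structures could feed into assistance allocation, the sketch below clusters a user's SDA-M-style pairwise distances between task steps and compares the resulting grouping to an expert reference partition. The clustering step follows the general SDA-M idea of hierarchical clustering over action concepts, but the comparison metric (adjusted Rand index), the toy numbers, and the function name are assumptions made for illustration rather than the procedure used in the paper.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform
    from sklearn.metrics import adjusted_rand_score

    def proficiency_proxy(user_distances: np.ndarray,
                          expert_partition: list,
                          n_clusters: int) -> float:
        """Cluster the user's task-step distances and compare the partition to
        an expert reference; 1.0 means an identical grouping, values near 0
        mean an unrelated grouping."""
        condensed = squareform(user_distances, checks=False)
        tree = linkage(condensed, method="average")
        user_partition = fcluster(tree, t=n_clusters, criterion="maxclust")
        return adjusted_rand_score(expert_partition, user_partition)

    # Toy example with four task steps: the user groups steps (0, 1) and (2, 3)
    # exactly like the expert reference, so the proxy comes out as 1.0.
    d = np.array([[0.00, 0.10, 0.90, 0.80],
                  [0.10, 0.00, 0.85, 0.90],
                  [0.90, 0.85, 0.00, 0.15],
                  [0.80, 0.90, 0.15, 0.00]])
    print(proficiency_proxy(d, expert_partition=[1, 1, 2, 2], n_clusters=2))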

Summary

Introduction

Available assistance systems, be they prototypes or commercially available products, can be characterized as ranging from advisory systems that ease information access for decision-making to artificial instructors that, in their most extreme form, do not require any decision by the user during operation. Devices acting as personal assistants have been introduced as advisory systems: they react and provide information when queried, e.g., when the user has requested prompts by setting an alarm or a custom workflow. While the vast majority of assistance systems are designed to provide feedback that is meant to be perceived consciously and/or to guide the user’s attention, active support or guidance involves several situations in which unconscious information processing affects user experience and behaviour. We discuss several unconscious information processing phenomena that potentially influence user experience and acceptance, and how we address these issues in the currently developed cognitive assistance system AVIKOM. We argue that exposing decision-making processes and embracing explainability as a system feature can ease system introduction and adaptation.
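
One way to expose otherwise concealed automated decisions is sketched below under the assumption of a simple decision log: each intervention is recorded together with the context trigger and the proficiency estimate that justified it, so the user can ask why the system acted. The record fields and wording are illustrative assumptions, not the AVIKOM data model.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class AssistanceDecision:
        """One automated intervention plus the evidence behind it."""
        action: str                  # e.g. "highlight the next assembly step"
        trigger: str                 # the observed context that fired the rule
        proficiency_estimate: float  # value that justified the intensity
        timestamp: datetime = field(default_factory=datetime.now)

        def explain(self) -> str:
            """Human-readable justification the user can request at any time."""
            return (f"{self.action} because {self.trigger} "
                    f"(estimated proficiency {self.proficiency_estimate:.2f}).")

    decision_log = []
    decision_log.append(AssistanceDecision(
        action="highlighted the next assembly step",
        trigger="no task progress was detected for 20 seconds",
        proficiency_estimate=0.4,
    ))
    print(decision_log[-1].explain())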

AVIKOM—A Cognitive Assistance System
Skill Assessment and Preemptive Adjustments
I: Sensorimotor control
Context-Sensitive Prompting
System Transparency and Explainability
Conclusions