Abstract
With rapid technological development, humans are increasingly likely to work cooperatively with intelligent systems in everyday life and work. As in interpersonal teamwork, the effectiveness of human-machine teams is affected by conflicts. Some human-machine conflicts arise when neither the human nor the system is at fault, for example, when the human and the system formulate different but equally effective plans to achieve the same goal. In this study, we conducted two experiments to explore the effects of human-machine plan conflict and of three conflict resolution approaches (the human adapting to the system, the system adapting to the human, and transparency design) in a computer-aided visual search task. The results of the first experiment showed that when conflicts occurred, participants reported higher mental load during the task, performed worse, and gave lower subjective evaluations of the aid. The second experiment showed that all three conflict resolution approaches were effective in maintaining task performance; however, only the transparency design and the human-adapting-to-the-system approaches were effective in reducing mental load and improving subjective evaluations. These results highlight the need to design appropriate human-machine conflict resolution strategies to optimize system performance and user experience.