Abstract

The introduction of increasingly intelligent and autonomous systems raises novel human factors challenges for human-machine teaming. People draw on differing mental models to understand the functioning of complex systems that may be capable of social agency. Operators may perceive the machine as either a complex tool or a humanlike teammate. When the "advanced tool" mental model is adopted, operator trust may reflect individual differences in expectations of automation. By contrast, when the "teammate" mental model is activated, trust may depend on evaluative attitudes toward robots. This article investigates predictors of trust in an autonomous robot that detects threats on either a physics-based or a psychological basis. Distinct dimensions of physics-based and psychological trust are identified, corresponding to the advanced tool and teammate mental models, respectively. Dispositional perceptions of automation, measured with the perfect automation schema scale, are associated with both aspects of trust. By contrast, the negative attitudes toward robots scale is specifically associated with lower psychological trust. The findings suggest that transparency information should be designed for compatibility with the operator's mental model in order to support accurate trust calibration and situation awareness. Transparency may be personalized to emphasize either the machine's data-analytic capabilities (advanced tool) or its humanlike social functioning (teammate).
