Abstract

Advancements in autonomy are leading to an increased need for machines capable of collaborating with humans to achieve team goals. One way of enhancing these human-autonomous system work arrangements leverages the concept of a shared mental model: when the human and the autonomous teammate hold aligned models, the team is more productive because of increased trust, predictability, and apparent understanding. An open issue is how autonomous teammates can learn a user-aligned mental model. This research presents a dual-process learning model that leverages multivariate normal probability density functions (DPL-MN) to extrapolate state-responses into system 2. By leveraging dual-process learning concepts, an autonomous teammate is able to rapidly align with a user and extrapolate the user's consistent behaviors into longer-term memory. Evaluation of DPL-MN with user responses from a game called Space Navigator shows that it accurately responds to situations similarly to each unique user.
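The abstract does not detail the model, but the core ingredient it names, scoring situations with multivariate normal probability density functions, can be illustrated with a minimal sketch. The memory contents, feature dimensions, and response names below are hypothetical, not taken from the paper: each learned response is summarized by the mean and covariance of the states in which the user chose it, and the agent picks the response whose distribution best explains the current state.

```python
import numpy as np

def mvn_pdf(x, mean, cov):
    """Density of a multivariate normal distribution evaluated at x."""
    k = len(mean)
    diff = x - mean
    norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
    return float(norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff))

# Hypothetical longer-term ("system 2") memory: each response is
# summarized by the mean and covariance of states it was observed in.
memory = {
    "turn_left":  (np.array([0.2, 0.8]), np.eye(2) * 0.05),
    "turn_right": (np.array([0.9, 0.1]), np.eye(2) * 0.05),
}

def respond(state):
    """Choose the stored response whose state distribution is most
    likely to have generated the current state."""
    return max(memory, key=lambda r: mvn_pdf(state, *memory[r]))
```

For example, `respond(np.array([0.25, 0.75]))` selects `"turn_left"`, since that state is far more probable under the left-turn distribution than the right-turn one.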
