Abstract
When we interact with others, we make inferences about their internal states (e.g., intentions, emotions) and use this information to understand and predict their behavior. Reasoning about the internal states of others is referred to as mentalizing and presupposes that our social partners are believed to have a mind. Seeing mind in others increases trust, prosocial behavior, and feelings of social connection, and leads to improved joint performance. However, while human agents trigger mind perception by default, artificial agents are not automatically treated as intentional entities and must be designed to elicit mind attribution. The panel addresses this issue by discussing how mind attribution to robots and other automated agents can be elicited by design, what effects mind perception has on attitudes and performance in human-robot and human-machine interaction, and what behavioral and neuroscientific paradigms can be used to investigate these questions. Application areas covered include social robotics, automation, driver-vehicle interfaces, and others.