Abstract
Robots and virtual agents need to adapt existing behavior and learn novel behavior to function autonomously in our society. Robot learning often takes place in interaction with, or in the vicinity of, humans. As a result, the learning process needs to be transparent to those humans. Reinforcement Learning (RL) has been used successfully for robot task learning, but the learning process is often not transparent to users, leaving them without an understanding of what the robot is trying to do and why. This lack of transparency directly impacts robot learning. Humans and other animals use the expression of emotion to signal information about an individual's internal state in a language-independent, and even species-independent, way, including during learning and exploration. In this article we argue that the simulation and subsequent expression of emotion should be used to make the learning process of robots more transparent. We propose that the TDRL Theory of Emotion provides sufficient structure for developing such an emotionally expressive learning robot. Finally, we argue that, next to such a generic model of RL-based emotion simulation, robots need personalized emotion interpretation to better cope with individual expressive differences of users.
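To make the idea of emotion-based transparency concrete, the sketch below shows a minimal tabular Q-learning agent that exposes its temporal-difference (TD) error as an emotion-like signal after each update. This is an illustrative assumption in the spirit of TDRL-based emotion models, not the paper's actual model: the class name, the sign-based mapping, and the labels "joy", "distress", and "neutral" are all hypothetical choices made here for demonstration.

```python
class ExpressiveQAgent:
    """Tabular Q-learning agent that reports an emotion-like label
    derived from its TD error (illustrative sketch, not the paper's model)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha = alpha  # learning rate
        self.gamma = gamma  # discount factor

    def update(self, s, a, r, s_next):
        # One-step Q-learning target and TD error.
        target = r + self.gamma * max(self.q[s_next])
        td_error = target - self.q[s][a]
        self.q[s][a] += self.alpha * td_error
        return self.express(td_error)

    def express(self, td_error):
        # Hypothetical mapping: the sign of the learning signal is
        # surfaced to the user as a simple emotional expression.
        if td_error > 0:
            return "joy"       # outcome better than expected
        if td_error < 0:
            return "distress"  # outcome worse than expected
        return "neutral"       # outcome exactly as expected


agent = ExpressiveQAgent(n_states=2, n_actions=2)
print(agent.update(0, 0, 1.0, 1))   # positive TD error → "joy"
print(agent.update(0, 1, -1.0, 1))  # negative TD error → "distress"
```

A user watching this agent would see "joy" whenever the robot is positively surprised and "distress" when it is disappointed, giving a language-independent window into the otherwise hidden learning process.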