Abstract

Robots in multi-user environments require adaptation to produce personalized interactions. In these scenarios, user feedback allows robots to learn from experience and use this knowledge to generate activities adapted to each user's preferences. However, preferences are user-specific and may vary over time, so learning is required to personalize the robot's actions to each user. In Human–Robot Interaction, robots can obtain feedback by asking users their opinion about an activity (explicit feedback) or by estimating it from the interaction itself (implicit feedback). This paper presents a Reinforcement Learning framework for social robots to personalize activity selection using the preferences and feedback obtained from users. The paper also studies the role of user feedback in learning and asks whether combining explicit and implicit user feedback produces better robot adaptive behavior than considering them separately. We evaluated the system in a long-term experiment with 24 participants divided into three conditions: (i) adapting activity selection using explicit feedback obtained by asking users how much they liked each activity; (ii) using implicit feedback obtained from interaction metrics generated by the user's actions during each activity; and (iii) combining explicit and implicit feedback. As we hypothesized, the results show that combining both types of feedback yields better adaptation when correlating initial and final activity scores, outperforming explicit or implicit feedback used alone. We also found that the kind of user feedback does not affect the user's engagement or the number of activities carried out during the experiment.
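
To make the idea of blending the two feedback signals concrete, the sketch below shows one way such a scheme could be realized as an epsilon-greedy bandit over activities, where the reward is a weighted combination of an explicit rating and an implicit interaction metric. This is a minimal illustration under assumptions of our own: the abstract does not specify the algorithm, the activity set, the 0-to-1 scaling of the signals, or the blending weight, so every name and constant here (ACTIVITIES, ALPHA, EPSILON, LEARNING_RATE) is hypothetical rather than taken from the paper.

import random

# Hypothetical sketch of personalized activity selection.
# All names and constants below are illustrative assumptions,
# not the authors' implementation.

ACTIVITIES = ["quiz", "music", "storytelling"]  # assumed activity set
ALPHA = 0.5          # assumed weight between explicit and implicit feedback
EPSILON = 0.1        # exploration rate
LEARNING_RATE = 0.2  # step size toward the latest reward

# Learned preference estimate per activity for one user.
q_values = {a: 0.0 for a in ACTIVITIES}

def combined_reward(explicit_rating, implicit_metric):
    """Blend explicit feedback (e.g. a rating rescaled to 0-1)
    with an implicit metric (e.g. normalized engagement time)."""
    return ALPHA * explicit_rating + (1.0 - ALPHA) * implicit_metric

def select_activity():
    """Epsilon-greedy selection over the current estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIVITIES)
    return max(q_values, key=q_values.get)

def update(activity, explicit_rating, implicit_metric):
    """Move the activity's estimate toward the combined reward."""
    r = combined_reward(explicit_rating, implicit_metric)
    q_values[activity] += LEARNING_RATE * (r - q_values[activity])

# Example interaction: the user rated "music" 0.8 (explicit) and
# showed an engagement metric of 0.6 (implicit) during the activity.
update("music", explicit_rating=0.8, implicit_metric=0.6)
print(select_activity(), q_values)

Setting ALPHA to 1.0 or 0.0 recovers the explicit-only and implicit-only conditions of the experiment, which is why a single blending weight is a convenient way to frame the three conditions, though the actual combination rule used in the paper may differ.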
