Abstract

Developing dialogue services for robots has attracted growing attention as a way to provide natural human–robot interactions and enhance user experiences, since conversation is a key instrument for creating and maintaining mutual relationships. In this work, we present a trainable framework for modeling context-aware human–robot dialogues and apply it to a live streaming application to demonstrate the need for social robots. Our social chatting robot framework takes both emotional context and topic-focused content into account to generate appropriate responses. The framework has several distinctive features. It adopts a multimodal deep learning model to recognize participants' emotions. It also includes a topic-aware neural model that enables the social robot to follow specific conversation topics, fulfilling dialogue goals and enhancing chatting coherence. Most importantly, we design a strategy in which emotions drive and control the neural utterance-generation process. To evaluate the proposed approach, we conducted several sets of qualitative and quantitative experiments. The results highlight the importance of multimodal emotion recognition and topic awareness in dialogue and confirm the feasibility and effectiveness of our framework for human–robot interaction.
