Abstract

As social robots become more prevalent, they often employ non-speech sounds, alongside other modes of communication, to convey emotion and intention in increasingly complex visual and auditory environments. These non-speech sounds are usually tailor-made, and research into generating non-speech sounds that convey emotion has been limited. To enable social robots to use a large number of non-speech sounds in a natural and dynamic way while expressing a wide range of emotions effectively, this work proposes an automatic sound-generation method based on a genetic algorithm, coupled with a random forest model trained on representative non-speech sounds to validate each generated sound's ability to express emotion. The sounds were tested in an experiment in which subjects rated their perceived valence and arousal. Statistically significant clusters of sounds in the valence-arousal space corresponded to different emotions, showing that the proposed method generates sounds that can readily be used in social robots.
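
The generate-and-validate loop described above pairs a genetic algorithm with a learned fitness function. Below is a minimal sketch of that idea, assuming scikit-learn's RandomForestClassifier as the validator and a fixed-length synthesis parameter vector as the genome; the population size, genetic operators, feature dimensions, and training data are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch: evolve sound-synthesis parameter vectors with a GA, scoring each
# candidate by a random forest trained on representative non-speech sounds.
# All dimensions and hyperparameters below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_PARAMS = 8          # assumed length of a synthesis parameter vector
POP_SIZE = 40
N_GENERATIONS = 50
MUTATION_STD = 0.1

# Stand-in training data: feature vectors of representative sounds,
# labelled 1 if they express the target emotion, else 0.
X_train = rng.normal(size=(200, N_PARAMS))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

def fitness(population):
    # Probability, per candidate, that the forest judges the sound
    # to express the target emotion.
    return forest.predict_proba(population)[:, 1]

population = rng.normal(size=(POP_SIZE, N_PARAMS))
for generation in range(N_GENERATIONS):
    scores = fitness(population)
    # Truncation selection: keep the top half as parents.
    parents = population[np.argsort(scores)[-POP_SIZE // 2:]]
    # Uniform crossover between randomly paired parents.
    a = parents[rng.integers(len(parents), size=POP_SIZE)]
    b = parents[rng.integers(len(parents), size=POP_SIZE)]
    mask = rng.random((POP_SIZE, N_PARAMS)) < 0.5
    children = np.where(mask, a, b)
    # Gaussian mutation keeps the search exploring new sounds.
    population = children + rng.normal(scale=MUTATION_STD,
                                       size=children.shape)

best = population[np.argmax(fitness(population))]
print("best candidate score:", fitness(population).max())
```

Using the forest's predicted probability as the fitness function lets the genetic algorithm optimize candidates directly against the learned notion of emotional expressiveness, rather than against a hand-crafted acoustic heuristic.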
