Abstract

Emotion recognition can draw on many sources, including text, speech, hand gestures, body language, and facial expressions. Most current systems, however, rely on only one of these sources. Because people's feelings change from moment to moment, a single modality may not reflect their emotional state accurately. This research therefore argues for understanding and exploring people's emotions through complementary channels, namely speech and facial expression. We use audio and video inputs to develop an ensemble model that gathers information from both sources and presents it in a clear and interpretable way. The proposed framework can detect emotions from speech, from facial expressions, or from both together, across a range of emotional states. By improving emotion recognition accuracy, the proposed multisensory emotion recognition system can help improve the naturalness of human-computer interaction.
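The abstract does not specify how the ensemble combines the two modalities. The sketch below is a minimal illustration, assuming a late-fusion strategy in which each modality produces a probability distribution over emotion classes and the distributions are averaged with modality weights. The classifier functions, the label set, and the feature shapes are all hypothetical placeholders, not the authors' actual models.

```python
# Minimal late-fusion sketch for audio + video emotion recognition.
# The per-modality classifiers are placeholders; in practice they would be
# a trained speech-emotion model and a facial-expression model.
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed label set


def audio_emotion_probs(audio_features: np.ndarray) -> np.ndarray:
    """Placeholder speech-emotion classifier: returns a probability
    distribution over EMOTIONS. Replace with a real model."""
    logits = np.random.default_rng(0).normal(size=len(EMOTIONS))
    return np.exp(logits) / np.exp(logits).sum()


def video_emotion_probs(video_frames: np.ndarray) -> np.ndarray:
    """Placeholder facial-expression classifier (same output format)."""
    logits = np.random.default_rng(1).normal(size=len(EMOTIONS))
    return np.exp(logits) / np.exp(logits).sum()


def fuse(audio_p: np.ndarray, video_p: np.ndarray,
         w_audio: float = 0.5, w_video: float = 0.5) -> np.ndarray:
    """Weighted-average (late) fusion of the two modality distributions."""
    fused = w_audio * audio_p + w_video * video_p
    return fused / fused.sum()


if __name__ == "__main__":
    audio_p = audio_emotion_probs(np.zeros(40))             # e.g. MFCC features
    video_p = video_emotion_probs(np.zeros((16, 48, 48)))   # e.g. face crops
    fused = fuse(audio_p, video_p)
    print("fused distribution:", dict(zip(EMOTIONS, fused.round(3))))
    print("predicted emotion:", EMOTIONS[int(np.argmax(fused))])
```

Weighted averaging is only one possible fusion choice; the same interface would accommodate feature-level fusion or a learned meta-classifier over the two modality outputs.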
