Abstract
Multi-sensor information fusion is a rapidly developing research area that forms the backbone of numerous essential technologies, such as intelligent robotic control, sensor networks, and video and image processing. In this paper, we develop a novel technique to analyze and correlate human emotions expressed in voice tone and facial expression. Audio and video streams are captured to populate bimodal data sets for sensing the emotions expressed in voice tone and facial expression, respectively. An energy-based mapping is performed to overcome the inherent heterogeneity of the recorded bimodal signals. The fusion process uses the sampled and mapped energy signals of both modalities' data streams and recognizes the overall emotional component using a Support Vector Machine (SVM) classifier, achieving an accuracy of 93.06%.
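To make the energy-based mapping concrete, the sketch below reduces each modality to a one-dimensional energy signal on a common time base: short-term energy for audio frames and frame-difference (motion) energy for video frames, followed by linear resampling so the two heterogeneous streams align for fusion. This is a minimal illustration under assumed names and parameters (audio_energy, video_energy, frame_len), not the paper's actual implementation.

import numpy as np

def audio_energy(samples, frame_len=512):
    # Short-term energy per audio frame: sum of squared samples.
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sum(frames ** 2, axis=1)

def video_energy(frames):
    # Motion energy per video frame: mean squared difference between
    # consecutive grayscale frames, with frames shaped (n, height, width).
    diffs = np.diff(frames.astype(np.float64), axis=0)
    return np.mean(diffs ** 2, axis=(1, 2))

def resample_to(signal, n_points):
    # Linearly interpolate an energy signal onto a common sampling grid so
    # both modalities share one time base before fusion.
    old_x = np.linspace(0.0, 1.0, len(signal))
    new_x = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_x, old_x, signal)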
Highlights
Multi-sensor information fusion is a rapidly developing area of research and development which forms the foundation of intelligent robotic control.
We propose a feature-level linear weighted fusion model based on a human-inspired brain energy mapping concept.
We further develop a machine model using a C-SVC Support Vector Machine (SVM) and train it using the feature sets obtained in Stage III; a minimal sketch follows this list.
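Taken together, the last two highlights describe a pipeline of linear weighted feature-level fusion followed by C-SVC training. The sketch below assumes per-sample audio and video feature vectors of equal length and uses scikit-learn's SVC, which implements the C-SVC formulation; the weights, kernel, and training data here are illustrative placeholders, not values from the paper.

import numpy as np
from sklearn.svm import SVC

def fuse_features(audio_feats, video_feats, w_audio=0.5, w_video=0.5):
    # Linear weighted fusion: scale each modality's feature block by its
    # weight and concatenate, yielding one fused vector per sample.
    return np.hstack([w_audio * audio_feats, w_video * video_feats])

# Placeholder data: one row of features per labelled utterance/clip.
X_audio = np.random.rand(100, 20)   # assumed audio energy features
X_video = np.random.rand(100, 20)   # assumed video energy features
y = np.random.randint(0, 4, 100)    # assumed emotion class labels

X_fused = fuse_features(X_audio, X_video)
clf = SVC(C=1.0, kernel="rbf")      # sklearn's SVC is a C-SVC
clf.fit(X_fused, y)
predictions = clf.predict(X_fused)

Weighted concatenation keeps both modalities' features visible to the classifier; an element-wise weighted sum is an alternative when the two feature vectors live in the same space.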
Summary
Multi-sensor information fusion is a rapidly developing area of research and development which forms the foundation of intelligent robotic control. It comprises methods and techniques that collect input from multiple similar or dissimilar sources and sensors, extract the required information, and fuse it to achieve better inference accuracy than any single data source could achieve alone. Beyond problem solving, reasoning, perception, and cognitive tasks, emotion recognition plays a pivotal role in functions essential for artificial intelligence. Considering these two aspects of human behavior, we have designed and developed a technique to analyze and correlate bimodal data sets and to recognize the emotional component from these fused data sets.