Abstract
We present a computer vision-based system named Anubhav (a Hindi word meaning feeling), which recognizes emotional facial expressions from streaming face videos. Our system runs at 10 frames per second (fps) on a 3.2-GHz desktop and at 3 fps on an Android mobile device. Using entropy- and correlation-based analysis, we show that certain salient regions of the face carry most of the expression-related information compared with other face regions. We also show that spatially close features within a salient face region carry correlated information about expression. Therefore, only a few features from each salient face region are enough to represent an expression, and extracting only these few features considerably reduces response time. Exploiting expression information along both the spatial and temporal dimensions yields good recognition accuracy. We have performed extensive experiments on two publicly available data sets as well as on live video streams. The recognition accuracies of our system on the benchmark CK+ data set and on live video streams are at least 13% and 20% better, respectively, than those of competing approaches.
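The abstract's entropy-based saliency idea can be illustrated with a minimal sketch: score candidate face regions by the Shannon entropy of their intensity histograms and keep the highest-scoring ones. The region names, bin count, and ranking function below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def region_entropy(region, bins=32):
    """Shannon entropy (in bits) of a region's intensity histogram."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking log
    return float(-(p * np.log2(p)).sum())

def rank_salient_regions(face, regions):
    """Rank named regions by entropy, highest (most informative) first.

    `face` is a grayscale image array; `regions` maps a name to a
    (row_slice, col_slice) pair. Both are hypothetical interfaces.
    """
    scores = {name: region_entropy(face[rs, cs])
              for name, (rs, cs) in regions.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: a textured "mouth" patch versus a flat "cheek" patch.
rng = np.random.default_rng(0)
face = np.full((96, 96), 128, dtype=np.uint8)
face[60:80, 24:72] = rng.integers(0, 256, (20, 48))  # high-variance area
regions = {"mouth": (slice(60, 80), slice(24, 72)),
           "cheek": (slice(30, 50), slice(4, 24))}
print(rank_salient_regions(face, regions))  # "mouth" ranks first
```

A high-entropy region (here, the textured mouth patch) is the kind of area the paper's analysis identifies as carrying most expression information; flat, low-entropy regions contribute little and can be skipped to save computation.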