Abstract

An emotion recognition method based on multispectral imaging technology and tissue oxygen saturation (StO2) is proposed in this study. The method is called the spatial-spectral-temporal adjustment convolutional neural network (SACNN). First, we extract the StO2 content of the emotionally sensitive nose area through real-time multispectral imaging. Compared with facial expression data, StO2 data are more objective and cannot be consciously controlled or altered. Second, we construct a clustering algorithm based on emotional state by extracting the spectral, StO2, and spatial features of the nose image to obtain accurate signals from emotionally sensitive areas. To exploit the correlation between spectral and spatial signals, we propose an adjustment-based CNN module that reorganizes the relationships among all preceding feature-map layers, coupling the layers closely and quantitatively; the features extracted in this way remain consistent with the spatial-spectral features. Third, we feed the extracted temporal feature signal into a long short-term memory (LSTM) module, completing the correlation among the spatial, spectral, and temporal features. Experimental results show that the SACNN algorithm reaches 90% accuracy in emotion recognition and is more competitive than state-of-the-art approaches. To the best of our knowledge, this study is the first to use time-series StO2 signals for emotion recognition.
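
To make the pipeline concrete, the minimal sketch below illustrates the spatial-spectral-temporal flow described above, assuming PyTorch. The dense "adjustment" connectivity, the layer sizes, the number of spectral bands and frames, and the four-class output are illustrative assumptions for this sketch, not the authors' exact architecture or hyperparameters.

# Minimal sketch of a spatial-spectral-temporal pipeline in the spirit of SACNN.
# Assumes PyTorch; band count, frame count, growth rate, and class count are
# illustrative placeholders, not the published configuration.
import torch
import torch.nn as nn

class AdjustmentBlock(nn.Module):
    """Conv block whose output is concatenated with all previous feature maps,
    so later layers can re-weight ("adjust") earlier spatial-spectral features."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)  # keep all earlier maps

class SACNNSketch(nn.Module):
    def __init__(self, bands=16, growth_rate=12, num_blocks=3,
                 hidden=64, num_emotions=4):
        super().__init__()
        channels = bands
        blocks = []
        for _ in range(num_blocks):
            blocks.append(AdjustmentBlock(channels, growth_rate))
            channels += growth_rate
        self.spatial_spectral = nn.Sequential(*blocks)   # spatial-spectral stage
        self.pool = nn.AdaptiveAvgPool2d(1)              # per-frame descriptor
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)  # temporal stage
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, x):
        # x: (batch, T frames, B spectral/StO2 channels, H, W) nose-region clip
        b, t, c, h, w = x.shape
        feats = self.spatial_spectral(x.reshape(b * t, c, h, w))
        feats = self.pool(feats).flatten(1).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])               # label from last step

# Toy usage: 2 clips, 8 frames, 16 bands, 32x32 nose crops (placeholder data).
logits = SACNNSketch()(torch.randn(2, 8, 16, 32, 32))

The AdjustmentBlock mirrors the idea of reusing all earlier feature maps so that later layers can re-weight earlier spatial-spectral features, while the LSTM models the temporal StO2 signal; the random tensor at the end merely stands in for a nose-region multispectral clip.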

Highlights

  • As the basis of human–computer interaction (HCI), emotion recognition affects the continuous development of machine intelligence

  • We developed an emotion recognition algorithm called spatial–spectral–temporal adjustment convolutional neural network (SACNN) based on nose tissue oxygen saturation (StO2) information and multispectral signals

  • We evaluated our model with different depths and growth rates on the emotion recognition task and compared it with state-of-the-art DenseNet architectures



Introduction

As the basis of human–computer interaction (HCI), emotion recognition affects the continuous development of machine intelligence. Many mental diseases are related to emotions [1], [2]. Research on emotion recognition technology therefore has strong development prospects and academic value. Emotion recognition is essentially pattern recognition, and increasing attention has been devoted to developing emotional artificial intelligence in HCI. Existing emotion recognition methods have achieved notable performance, but improvements are still needed. (1) Researchers have attempted to use spectral signals to build models for assessing a single emotional state, such as stress; using spectral imaging technology to recognize multiple emotions remains an undeveloped area.
