Abstract
Determining the genuineness of human behavior and emotion is an important research topic in affective and human-centered computing. This paper applies feature-level fusion to three peripheral physiological signals recorded from observers: pupillary response (PR), blood volume pulse (BVP), and galvanic skin response (GSR). The observers' task is to distinguish real from posed smiles while watching videos of twenty smilers (half showing real smiles, half posed). After several processing steps, a number of temporal features are extracted from the recorded physiological signals and fused before classification performance is computed with k-nearest neighbor (KNN), support vector machine (SVM), and neural network (NN) classifiers. Many factors can affect smile-classification results, depending on the classifier architecture; in this study we vary the K value of KNN, the scaling factor of SVM, and the number of hidden nodes of NN while holding all other parameters fixed. Our final experimental results from a robust leave-one-everything-out process indicate that parameter tuning is a vital factor in achieving high classification accuracy, and that feature-level fusion can indicate when more parameter tuning is needed.
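The pipeline sketched in the abstract (concatenating per-signal feature vectors, then sweeping a classifier parameter under leave-one-out evaluation) can be illustrated with a minimal sketch. This is not the paper's implementation: the feature values below are synthetic toy data, the helper names (`fuse_features`, `loo_accuracy`) are hypothetical, and only the KNN branch with its K sweep is shown.

```python
import math
from collections import Counter

def fuse_features(pr, bvp, gsr):
    """Feature-level fusion: concatenate the PR, BVP, and GSR feature vectors."""
    return pr + bvp + gsr

def knn_predict(train_X, train_y, x, k):
    """Classify x by majority vote among its k nearest training samples."""
    dists = sorted((math.dist(x, t), label) for t, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def loo_accuracy(X, y, k):
    """Leave-one-out evaluation: hold out each sample, train on the rest."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += knn_predict(train_X, train_y, X[i], k) == y[i]
    return correct / len(X)

# Toy fused vectors (one PR, BVP, and GSR feature each); labels:
# 1 = real smile perceived, 0 = posed smile perceived.
X = [fuse_features([0.90], [0.80], [0.70]),
     fuse_features([0.85], [0.90], [0.75]),
     fuse_features([0.10], [0.20], [0.15]),
     fuse_features([0.20], [0.10], [0.25])]
y = [1, 1, 0, 0]

# Sweep K, as the study varies KNN's K while other settings stay fixed.
for k in (1, 3):
    print(k, loo_accuracy(X, y, k))
```

On this toy data K = 1 classifies every held-out sample correctly, while K = 3 fails on all of them (each leave-one-out fold leaves a 2-to-1 majority for the wrong class), which mirrors the abstract's point that classifier parameters strongly affect the measured accuracy.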