Abstract

Public speaking is a common type of social evaluative situation, and a significant portion of the population feels uneasy about it. Detecting public speaking stress is therefore important so that appropriate action can be taken to minimize its impact on human health. In this study, a multimodal human stress classification scheme in response to a real-life public speaking activity is proposed. Electroencephalography (EEG), galvanic skin response (GSR), and photoplethysmography (PPG) signals of forty participants are acquired in the rest state and during the public speaking activity, and the data are divided into stressed and non-stressed groups. Frequency-domain features are extracted from the EEG signals, and time-domain features are extracted from the GSR and PPG signals. The selected features from all modalities are fused to classify stress into two classes. Classification is performed with five different classifiers under a leave-one-out cross-validation scheme. The highest accuracy of 96.25% is achieved using a support vector machine classifier with a radial basis function kernel. Statistical analysis is performed to examine differences in the EEG, GSR, and PPG signals between the two phases of the experiment. Statistically significant differences are found in certain EEG frequency bands, as well as in the GSR and PPG data recorded before and after public speaking, supporting the view that brain activity, skin conductance, and blood volumetric flow are credible measures of human stress during public speaking activity.
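
The evaluation described above, leave-one-out cross-validation of an RBF-kernel support vector machine on fused multimodal features, can be illustrated with a minimal sketch. This is not the authors' code; the feature matrix and labels below are placeholders, and feature extraction from the EEG, GSR, and PPG signals is assumed to have already been performed.

```python
# Minimal sketch: leave-one-out cross-validation of an RBF-kernel SVM
# on a fused feature matrix (illustrative data only).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical fused features: one row per participant, columns combining
# EEG band-power features with GSR/PPG time-domain features.
X = rng.normal(size=(40, 12))       # 40 participants, 12 fused features (placeholder)
y = rng.integers(0, 2, size=40)     # 0 = non-stressed, 1 = stressed (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2%}")
```

With real extracted features in place of the random placeholders, the same pipeline structure would yield the per-participant accuracy reported in the abstract.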
