Abstract

Videotape recordings obtained during a conventional initial psychiatric interview were used to assess possible differences in emotional facial expressions and acoustic parameters of the voice between female patients with Borderline Personality Disorder (BPD) and matched controls. The incidence of seven basic emotion expressions, emotional valence, heart rate, and the vocal fundamental frequency (f0) and intensity (dB) of discourse adjectives and interjections were determined by applying computational software to the visual (FaceReader) and audio (PRAAT) tracks of the recordings. The extensive data obtained were analyzed by three statistical strategies: linear multilevel modeling, correlation matrices, and exploratory network analysis. In comparison with healthy controls, BPD patients expressed one-third less sadness and showed a higher number of positive correlations (14 vs. 8), as well as a cluster of related nodes linking the prosodic parameters with the facial expressions of anger, disgust, and contempt. In contrast, control subjects showed negative or null correlations between these facial expressions and the prosodic parameters. It seems plausible that BPD patients restrain the facial expression of specific emotions in an attempt to achieve social acceptance. Moreover, the confluence of prosodic and facial expressions of negative emotions reflects a sympathetic activation that opposes the social engagement system. This imbalance in BPD reflects an emotional alteration and a dysfunctional behavioral strategy that may constitute a useful biobehavioral indicator of the severity and clinical course of the disorder. This face/voice/heart-rate emotional expression assessment (EMEX) may be used in the search for reliable biobehavioral correlates of other psychopathological conditions.
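To illustrate the kind of acoustic measurement described above, the following is a minimal sketch (not the authors' pipeline) of extracting f0 and intensity contours from an interview audio track with praat-parselmouth, a Python interface to Praat. The file name is hypothetical, and the 0.04 s frame step is an illustrative assumption borrowed from the video sampling grid reported in the highlights.

```python
import parselmouth

snd = parselmouth.Sound("interview_audio.wav")  # hypothetical file name

# Pitch (f0) contour in Hz; unvoiced frames are returned as 0 Hz.
pitch = snd.to_pitch(time_step=0.04)
f0 = pitch.selected_array["frequency"]

# Intensity contour in dB on the same frame step.
intensity = snd.to_intensity(time_step=0.04)
db = intensity.values[0]

voiced = f0[f0 > 0]
print(f"{len(f0)} pitch frames; mean voiced f0 = {voiced.mean():.1f} Hz; "
      f"mean intensity = {db.mean():.1f} dB")
```

In an analysis like the one the abstract describes, such frame-level contours would then be aligned with the time windows of the discourse adjectives and interjections before statistical modeling.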

Highlights

  • Expressing, detecting, and evaluating emotions are crucial social and cognitive skills for behavioral adaptation

  • The main difference between the two networks is a set of interconnections among acoustic parameters and the facial expressions of repulsion-related emotions (anger, disgust, and contempt) in the patient group; this cluster is absent in controls, where the relation between acoustic and facial expressions is weak and does not include contempt (see the sketch after this list)

  • Each subject was recorded by the FaceReader software every 0.04 s for an average period of 11.4 min, yielding approximately 17,000 data points per subject (11.4 min × 60 s/min ÷ 0.04 s ≈ 17,100)
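As a companion to the network finding above, here is a minimal sketch (an assumed workflow, not the authors' code) of the exploratory-network step: compute a correlation matrix over facial and prosodic measures and keep as edges the pairs whose correlation exceeds a cutoff. The column names, the random sample data, and the 0.3 threshold are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
import networkx as nx

# Hypothetical per-subject means: columns = facial and prosodic measures.
df = pd.DataFrame(
    np.random.default_rng(0).normal(size=(30, 5)),
    columns=["anger", "disgust", "contempt", "f0", "intensity_db"],
)

corr = df.corr(method="pearson")  # correlation matrix across measures

# Build a graph whose edges are correlations above the cutoff.
G = nx.Graph()
G.add_nodes_from(corr.columns)
threshold = 0.3  # illustrative cutoff
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        r = corr.loc[a, b]
        if abs(r) >= threshold:
            G.add_edge(a, b, weight=r)

print(f"{G.number_of_edges()} edges with |r| >= {threshold}")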

Introduction

Expressing, detecting, and evaluating emotions are crucial social and cognitive skills for behavioral adaptation. Human emotions are mainly expressed through facial, postural, verbal, or vocal behaviors and usually involve physiological correlates such as heart rate. These expressions manifest a variety of subjective states and constitute objective behaviors that can be recorded and closely scrutinized. There is extensive evidence of the salient role that facial expressions and verbal and vocal parameters play in the communication of basic emotions among human beings [1]. In face-to-face encounters, the simultaneous emission of facial and vocal expressions expands the information available for recognizing and attributing emotional states, a capacity that played an important role in human evolution. It has been stressed that most currently available methods for assessing emotional expression have not been validated in clinical settings, a condition desirable for ascertaining their diagnostic value.

