Non-speech emotion recognition involves identifying emotions conveyed through non-verbal vocalizations such as laughter, crying, and other sound signals, which play a crucial role in emotional expression and transmission. This paper employs a nine-category discrete emotion model encompassing happy, sad, angry, peaceful, fearful, loving, hateful, brave, and neutral. A self-constructed non-speech dataset comprising 2337 instances was used, from which 384-dimensional feature vectors were extracted. The traditional Backpropagation Neural Network (BPNN) algorithm achieved a recognition rate of 87.7% on this dataset, whereas the proposed Whale Optimization Algorithm-Backpropagation Neural Network (WOA-BPNN) algorithm reached an accuracy of 98.6%. Notably, even without facial emotional cues, non-speech sounds effectively convey dynamic emotional information, and the proposed algorithm excels at recognizing them. The study underscores the importance of non-speech emotional signals in communication, particularly as artificial intelligence technology continues to advance, and demonstrates the potential of AI algorithms for high-precision non-speech emotion recognition.
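As a rough illustration of how a WOA-BPNN pipeline can be structured, the sketch below uses the standard Whale Optimization Algorithm update rules to search for good initial weights of a small feed-forward network, which backpropagation would then refine. The toy data, layer sizes, and hyperparameters are illustrative assumptions, not the paper's 2337-instance dataset, 384-dimensional features, or nine-class model.

```python
# Minimal sketch of WOA-initialised BPNN training (assumed setup, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 8 features, 3 classes (placeholders for the real dataset).
X = rng.normal(size=(200, 8))
y = rng.integers(0, 3, size=200)
Y = np.eye(3)[y]                                   # one-hot targets

n_in, n_hid, n_out = 8, 12, 3
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # total number of weights and biases

def unpack(w):
    """Split a flat position vector into the network's weight matrices and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid outputs

def fitness(w):
    """Mean squared error of the network encoded by position vector w."""
    return np.mean((forward(w, X) - Y) ** 2)

# --- Whale Optimization Algorithm over the flat weight vector ---
n_whales, max_iter, b_spiral = 20, 100, 1.0
pos = rng.uniform(-1, 1, size=(n_whales, dim))      # whale positions = candidate weight vectors
best = min(pos, key=fitness).copy()

for t in range(max_iter):
    a = 2 - 2 * t / max_iter                        # a decreases linearly from 2 to 0
    for i in range(n_whales):
        A = 2 * a * rng.random() - a
        C = 2 * rng.random()
        if rng.random() < 0.5:
            if abs(A) < 1:                          # exploitation: encircle the best whale
                D = np.abs(C * best - pos[i])
                pos[i] = best - A * D
            else:                                   # exploration: move relative to a random whale
                rand = pos[rng.integers(n_whales)]
                D = np.abs(C * rand - pos[i])
                pos[i] = rand - A * D
        else:                                       # spiral (bubble-net) update around the best whale
            l = rng.uniform(-1, 1)
            D = np.abs(best - pos[i])
            pos[i] = D * np.exp(b_spiral * l) * np.cos(2 * np.pi * l) + best
        if fitness(pos[i]) < fitness(best):
            best = pos[i].copy()

print("best WOA fitness (MSE):", fitness(best))
# `best` would then seed the BPNN weights for ordinary gradient-based training.
```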