Over the past decade, autism spectrum disorder (ASD) has been used as an umbrella term for a range of conditions, including autistic disorder, Asperger's disorder, and pervasive developmental disorder. The condition manifests as a reduced ability to share emotions and difficulty understanding the feelings of others, which leads to greater social communication difficulties. To assist individuals with ASD, particularly children, we propose an approach that incorporates speech emotion recognition technologies, which are widely used in the field of human-computer interaction. This article implements an algorithm based on a novel method for classifying the speech emotions of normal and autistic children. After all features have been extracted, the training data set is processed with the proposed method. The technique discussed in this study is a hybrid algorithm that serves as a classifier for the speech emotions of normal and autistic children, recognizing emotions accurately and with a lower error rate. The data set includes speech samples from 200 normal and 250 autistic children in four emotional states (Angry, Happy, Neutral, and Sad). Experimental results show that the implemented hybrid algorithm for Speech Emotion Recognition of Normal and Autistic Children (SERNAC) outperforms existing classifiers in accuracy.
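The abstract does not specify SERNAC's internals, so the following is only a minimal sketch of a generic four-class speech emotion recognition pipeline of the kind described: acoustic features are extracted from each utterance and fed to a classifier. The choice of MFCC features (via librosa), an SVM back-end (via scikit-learn), and the helper names below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a four-class speech emotion classifier.
# Assumptions: MFCC statistics as features, an RBF-kernel SVM as the
# classifier, and caller-supplied lists of wav paths and emotion labels.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

EMOTIONS = ["Angry", "Happy", "Neutral", "Sad"]  # classes used in the study


def extract_features(wav_path, sr=16000, n_mfcc=13):
    """Return a fixed-length vector: mean and std of each MFCC coefficient."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_and_evaluate(wav_paths, labels):
    """Train an SVM on extracted features and report held-out accuracy."""
    X = np.vstack([extract_features(p) for p in wav_paths])
    y = np.array([EMOTIONS.index(label) for label in labels])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

In such a pipeline, the reported comparison against "existing classifiers" would amount to swapping the classifier stage (here an SVM) while keeping the feature extraction and evaluation protocol fixed.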