The classification of speech disorders (SDs) is crucial for treating children with speech impairment (SI). Automated SD classification can assist speech therapists in serving children with SI in rural areas. Automated techniques for detecting SDs provide objective assessments of speech attributes, including articulation, fluency, and prosody. Clinical examinations and quantitative assessments offer an in-depth understanding of a patient's speaking abilities and limitations. Existing deep learning (DL) models for SD detection often lack generalization across diverse populations and speech variations, leading to suboptimal performance for individuals with different linguistic backgrounds or dialects. This study introduces a DL-based model for classifying normal and abnormal speech using voice samples. To mitigate overfitting and bias, the authors construct convolutional neural network models initialized with the weights of MobileNet V3 and EfficientNet B7 for feature extraction (FE). To improve performance, they integrate a squeeze-and-excitation block into the MobileNet V3-based FE model. Similarly, the EfficientNet B7-based FE model is refined using structured pruning. An enhanced CatBoost model then differentiates normal from abnormal speech using the extracted features. The experimental analysis uses a public dataset containing 4620 utterances from healthy children and 2178 utterances from children with SI. The comparative study reveals the exceptional performance of the proposed SD classification model, which outperforms current SD classification models and can be employed in clinical settings to support speech therapists. Further training with diverse voice samples could improve the model's generalizability.
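To make the squeeze-and-excitation mechanism concrete, the following is a minimal NumPy sketch of an SE block applied to a convolutional feature map. It is an illustration of the general technique only, not the authors' implementation; the function name `squeeze_excite`, the randomly initialized weights, and the reduction-ratio choice are all hypothetical.

```python
import numpy as np

def squeeze_excite(feature_map, w1, b1, w2, b2):
    """Apply a squeeze-and-excitation block to a (C, H, W) feature map.

    w1/b1: bottleneck FC layer (C -> C/r), w2/b2: expansion FC layer (C/r -> C).
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feature_map.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: bottleneck FC + ReLU, then FC + sigmoid yields per-channel
    # weights in (0, 1) that encode channel importance.
    h = np.maximum(0.0, w1 @ z + b1)                  # shape (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))          # shape (C,)
    # Recalibrate: rescale every channel by its learned weight.
    return feature_map * s[:, None, None]

# Toy usage with 4 channels and a reduction ratio of 2 (hypothetical sizes).
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
out = squeeze_excite(fmap, w1, b1, w2, b2)
```

The reduction ratio in the bottleneck layer trades off capacity against parameter count; in a pretrained backbone such as MobileNet V3 the same recalibration is applied inside selected inverted-residual blocks.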