Abstract

The subject matter of this paper is the use of Deep Learning techniques for the identification of Autism Spectrum Disorder (ASD) through facial expression analysis. The goal is to assess the performance of various Deep Learning architectures in this context, supporting the evaluation of AI-based ASD identification technologies within medical imaging standards. The tasks undertaken include conducting a comprehensive performance analysis of different Deep Learning models, examining the significance of data augmentation techniques, and evaluating the convergence ability of these models. The methods employed involve a simulation setup for evaluating Deep Learning architectures using facial expression images of children with ASD. The research utilizes secondary data from open-source sharing platforms comprising 2,840 optical images. The evaluation is conducted with consideration of data ratio settings and data augmentation procedures. Results indicate that data augmentation significantly improves recall performance, with the ResNet-101 architecture demonstrating superior accuracy, precision, and convergence ability compared to ResNet-50 and VGG-16. The conclusion drawn from this analysis highlights the efficacy of ResNet-101 with augmented data: it stands out as the most suitable model for ASD identification based on facial expressions, emphasizing its potential for early intervention and increased awareness. The scientific novelty of the results obtained lies in their contribution to advancing the state of the art in AI-driven ASD identification, adhering to medical standards, enhancing model performance through data augmentation, and facilitating early intervention strategies for improved patient outcomes.
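To make the described evaluation setup concrete, the following is a minimal sketch of a transfer-learning pipeline of the kind compared in the paper: a ResNet-101 backbone with light data augmentation and a binary classification head. The framework (Keras), input size, augmentation settings, and hyperparameters are assumptions for illustration; the abstract does not specify them.

```python
# Minimal sketch (assumed Keras setup): ResNet-101 backbone fine-tuned on
# facial-expression images with basic data augmentation. Input size,
# augmentation parameters, and optimizer settings are placeholders, not
# values reported by the paper.
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed input resolution

# Light augmentation applied only to the training stream
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.ResNet101(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # train only the classification head first

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.resnet.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # ASD vs. non-ASD

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets not shown
```

The same skeleton can be reused for the ResNet-50 and VGG-16 comparisons by swapping the backbone, and recall is tracked explicitly since the reported benefit of augmentation concerns that metric.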
