Abstract

Atypical visual attention is a hallmark of autism spectrum disorder (ASD). Identifying attention features that accurately discriminate between individuals with ASD and typically developing (TD) individuals at the single-subject level remains a challenge. In this study, we developed a new systematic framework combining high-accuracy deep learning classification, deep learning segmentation, image ablation, and a direct measurement of classification ability to identify the discriminative features for autism identification. Our two-stream model achieved state-of-the-art performance, with a classification accuracy of 0.95. Using this framework, two new categories of features, Food & drink and Outdoor-objects, were identified as discriminative attention features, in addition to previously reported features such as Center-object and Human-faces. Altered attention to these new categories helps explain related atypical behaviors in ASD. Importantly, the area under the curve (AUC) based on the combined top-9 features identified in this study was 0.92, enabling accurate classification at the individual level. We also obtained a small but informative set of 12 images with an AUC of 0.86, suggesting a potentially efficient approach to the clinical diagnosis of ASD. Together, our deep learning framework based on VGG-16 provides a novel and powerful tool for recognizing and understanding atypical visual attention in ASD, which will, in turn, facilitate the identification of biomarkers for ASD.
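To make the architecture described above concrete, the following is a minimal sketch of a two-stream VGG-16 classifier in PyTorch. It assumes one stream takes the stimulus image and the other takes the corresponding attention (fixation/saliency) map replicated to three channels, with the two feature vectors concatenated before a binary ASD/TD head; the class name, input convention, and fusion strategy are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamVGG(nn.Module):
    """Hypothetical two-stream ASD/TD classifier.

    One VGG-16 stream encodes the stimulus image, the other encodes the
    attention (fixation/saliency) map; pooled features are concatenated
    and passed to a small fully connected classification head.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two independent VGG-16 feature extractors (ImageNet weights assumed;
        # requires torchvision >= 0.13 for the `weights` argument).
        self.image_stream = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.attention_stream = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Fused classifier over the concatenated stream features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * 512 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, num_classes),
        )

    def forward(self, image: torch.Tensor, attention_map: torch.Tensor) -> torch.Tensor:
        # Both inputs are expected as (N, 3, 224, 224) tensors.
        img_feat = self.pool(self.image_stream(image)).flatten(1)
        att_feat = self.pool(self.attention_stream(attention_map)).flatten(1)
        return self.classifier(torch.cat([img_feat, att_feat], dim=1))


# Example usage:
# model = TwoStreamVGG()
# logits = model(images, attention_maps)  # both shaped (N, 3, 224, 224)
```

Concatenating pooled features from the two streams is one common fusion choice; other designs (e.g., late score fusion) would also fit the "two-stream" description in the abstract.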
