Abstract

Computer-aided diagnosis using eye-tracking data has classically been based on regions of interest in the image. In recent years, however, modeling visual attention with saliency maps has shown better results. Wang et al., using a three-layered saliency model that incorporated pixel-level, object-level, and semantic-level attributes, showed differences in eye-tracking performance in autism spectrum disorder (ASD) and better characterized these differences by examining which attributes were used, providing clinically meaningful insights into the disorder. Our hypothesis is that context interpretation worsens with the severity of ASD; consequently, eye-tracking data processed through a visual attention model (VAM) could be used to classify patients with ASD by severity. In this context, the present work proposes: 1) using image processing and artificial intelligence, based on a VAM, to learn a model for each group (severe and non-severe) from eye-tracking data; and 2) a supervised classifier that, based on the learned models, performs the severity diagnosis. The classifier using the saliency maps was able to identify and separate the groups with an average accuracy of 88%. The most important features were the presence of a face and skin color, in other words, semantic features.
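As a minimal sketch of the classification step described above (not the authors' implementation), a supervised classifier could assign a subject to a severity group by comparing saliency-derived semantic features against per-group prototypes learned from training data. The nearest-centroid rule, the two feature names, and all numeric values below are illustrative assumptions, not data or methods from the paper.

```python
# Hypothetical sketch: nearest-centroid severity classifier over
# saliency-derived semantic features (e.g., face-fixation rate and
# skin-color fixation rate). All values are synthetic illustrations.
from math import dist  # Euclidean distance, Python 3.8+


def fit_centroids(samples):
    """samples: {label: [feature_vector, ...]} -> {label: centroid}."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        # Average each feature column to get the group prototype.
        centroids[label] = [sum(col) / n for col in zip(*vecs)]
    return centroids


def predict(centroids, x):
    """Assign x to the label of the nearest group centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], x))


# Synthetic feature vectors: [face_fixation_rate, skin_color_rate]
train = {
    "severe":     [[0.10, 0.15], [0.12, 0.20], [0.08, 0.10]],
    "non-severe": [[0.55, 0.60], [0.60, 0.50], [0.50, 0.65]],
}
model = fit_centroids(train)
print(predict(model, [0.11, 0.18]))  # lands near the "severe" centroid
```

In practice the paper's approach learns one model per group from the VAM output and then classifies new subjects against those models; the sketch above only mirrors that two-stage shape (fit per-group prototypes, then predict).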
