Abstract

Compared with typically developing individuals, patients with autism spectrum disorder (ASD) exhibit many differences, including in visual attention. Recently, researchers have employed saliency prediction for the early diagnosis of ASD by observing whether patients' fixation maps are consistent with the predicted results. Although numerous saliency models have been designed, significantly advancing the performance of saliency prediction, efforts devoted to atypical attention prediction remain scarce. Therefore, we propose a novel atypical saliency model, namely the attention-guided dual-branch integration network, to perform atypical visual saliency prediction. Overall, our model adopts a U-shaped architecture comprising an encoder and a decoder. In the encoder, we first deploy a dual-branch fusion module to extract multiscale deep features, where the dual branches consist of a coarse branch and a fine branch; in particular, the multiscale features of each branch are integrated in a bilateral manner. We then employ a multiattention module, consisting of dilated convolution, channel attention, and spatial attention, to further enhance the high-level semantic cues; the attention units are constructed in a cascaded manner. In addition, to effectively represent the differences between atypical and typical saliency estimation, we design a new discriminative mask to compute the loss function. Extensive experiments on challenging datasets demonstrate the superiority and effectiveness of our atypical saliency model compared with state-of-the-art saliency prediction models.
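For concreteness, the following is a minimal PyTorch sketch of how a cascaded multiattention module of this kind (dilated convolution, then channel attention, then spatial attention applied in sequence) could be assembled. It is an illustration under assumptions, not the authors' exact design: the channel attention follows the common SE pattern, the spatial attention the CBAM pattern, and the module names, reduction ratio, and dilation rate are hypothetical.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # SE-style channel attention: global pooling -> bottleneck MLP -> sigmoid gate.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: channel-wise avg/max maps -> conv -> sigmoid gate.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate

class MultiAttention(nn.Module):
    # Cascade: a dilated conv enlarges the receptive field over high-level
    # features, then channel and spatial attention refine them in sequence.
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.dilated = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.dilated(x)
        x = self.ca(x)      # attention units applied in a cascaded way
        return self.sa(x)

# Example: refine a batch of high-level encoder features.
feats = torch.randn(2, 512, 14, 14)
out = MultiAttention(512)(feats)
print(out.shape)  # torch.Size([2, 512, 14, 14])

Cascading the two attention units, rather than running them in parallel, lets the spatial gate operate on features already reweighted per channel, which is the usual motivation for this ordering.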
