Abstract

Brain 18F-FDG PET images are widely used for predicting Alzheimer's disease (AD). However, the available volume of PET data is usually insufficient for training an accurate AD prediction network. Furthermore, PET images are noisy, with a low signal-to-noise ratio, and the feature used for predicting AD (metabolic abnormality) is not always obvious. These characteristics of 18F-FDG PET images hinder existing deep learning networks from effectively learning the lesion feature (i.e., glucose metabolism abnormality), which leads to unsatisfactory classification performance and poor robustness. In this paper, a contrastive learning method is proposed to address the challenges inherent in PET images. Firstly, the slices of a 3D PET image are augmented by cropping anchor images (i.e., augmented versions of the same image) to generate extended training data. Meanwhile, a contrastive loss is adopted to enlarge inter-class feature distances and reduce intra-class feature differences, using subject fuzzy labels as supervision. Secondly, we construct a double convolutional hybrid attention module, in which two convolutional layers with different kernel sizes (7 × 7 and 5 × 5) enable the network to learn from different perceptual domains. Moreover, we propose a diagnosis mechanism that analyzes the consistency of the predicted results for PET slices together with clinical neuropsychological assessment to achieve better AD diagnosis. Experimental results show that the proposed method outperforms state-of-the-art approaches on brain 18F-FDG PET images while maintaining satisfactory computational performance, demonstrating its advantage in effectively predicting AD.
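The following is a minimal sketch of the two components named in the abstract: a dual-kernel (7 × 7 / 5 × 5) convolutional hybrid attention block and a label-supervised contrastive loss. It assumes a PyTorch implementation; the module structure, names (DualKernelHybridAttention, supervised_contrastive_loss), and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: dual-kernel hybrid attention + supervised contrastive loss.
# Layer arrangement and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualKernelHybridAttention(nn.Module):
    """Fuses features from two perceptual domains (7x7 and 5x5 convolutions),
    then applies channel and spatial attention (hypothetical structure)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3, groups=channels)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2, groups=channels)
        # Channel attention (squeeze-and-excitation style) on the fused features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention from average- and max-pooled channel descriptors.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.conv7(x) + self.conv5(x)            # two perceptual domains
        fused = fused * self.channel_gate(fused)          # channel re-weighting
        avg = fused.mean(dim=1, keepdim=True)
        mx, _ = fused.max(dim=1, keepdim=True)
        fused = fused * self.spatial_gate(torch.cat([avg, mx], dim=1))
        return x + fused                                  # residual connection


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pulls same-label embeddings together and pushes different-label
    embeddings apart; `labels` stands in for the subject (fuzzy) labels."""
    z = F.normalize(features, dim=1)                      # (N, D) unit vectors
    sim = z @ z.t() / temperature                         # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    # Log-softmax over all other samples, averaged over positive pairs.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss[pos_mask.any(dim=1)].mean()
```

As a usage sketch, the attention block would be inserted after a convolutional stage of the backbone, and the contrastive loss would be computed on the batch of slice embeddings alongside the usual classification loss.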
