Abstract

Background
Brain PET imaging techniques provide in-vivo information about brain metabolism and the density/distribution of amyloid and tau proteins, the two hallmarks of Alzheimer's disease (AD). Combining such imaging biomarkers has improved the performance of deep-learning models designed for disease classification and prediction of disease progression. However, training such networks is often a challenge, especially when subjects have missing imaging markers at given time-points; such incomplete data are often excluded from the training process. We propose a "multibranch" convolutional-neural-network (CNN) architecture to cope with this issue and make use of all available data for better AD vs. normal-controls (NC) classification performance.

Method
We obtained multi-time-point ADNI FDG- and AV45-PET scans corresponding to 257 NC and 222 AD subjects. N = 124 subjects had only one PET scan available at different time-points. We designed a CNN with three training "branches", taking as input either FDG, AV45, or a combination of the two when available (see Fig. 1). In total, 1832 scans (786 singles and 523 pairs) were used as input for training. Branches are weighted differently (with 0 or 1) to control how training weights are updated. When both imaging biomarkers are available, each branch is fed with the appropriate input and contributes to the overall training. The overall network architecture is shown in Fig. 1. For each branch, the calculated loss is multiplied by the associated weight and backpropagated through the network to update its training parameters (a minimal sketch of this weighted-branch scheme is given after the abstract). For comparison purposes, we independently trained a CNN with the same image pairs as above.

Result
For validation, we only used cases with both scans available, so that the performance of each branch was assessed on the same number of inputs. Classification sensitivity, specificity, accuracy, and area under the curve, averaged over 10 folds, were: (i) FDG branch: 93.11%, 88.88%, 91.35%, and 0.954; (ii) AV45 branch: 96.53%, 74.87%, 90.27%, and 0.941; (iii) multimodal branch: 94.43%, 86.50%, 91.56%, and 0.961; (iv) independently trained multi-input CNN: 93.71%, 71.80%, 88.60%, and 0.959.

Conclusion
We designed a multibranch CNN to handle missing data when training a multimodal classification CNN. Better classification accuracy was achieved with the multi-input branch than with the independently trained multimodal CNN. We will test our network on other types and/or numbers of modalities.
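The following is a minimal PyTorch sketch of how the 0/1 branch-weighted loss described in the Method section could be implemented. The encoder layers, feature sizes, and all names (Encoder3D, MultiBranchCNN, training_step) are illustrative assumptions for exposition, not the authors' published architecture; only the idea of three branches (FDG, AV45, fused) whose losses are gated so that incomplete scans still update the shared encoders is taken from the abstract.

# Hypothetical sketch, assuming PyTorch; layer sizes and names are illustrative,
# not the authors' exact network.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Small 3D CNN feature extractor for a single PET volume (illustrative)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class MultiBranchCNN(nn.Module):
    """Three classification branches over two modality encoders:
    FDG-only, AV45-only, and a fused (FDG + AV45) branch."""
    def __init__(self, feat_dim=64, n_classes=2):
        super().__init__()
        self.fdg_enc, self.av45_enc = Encoder3D(feat_dim), Encoder3D(feat_dim)
        self.fdg_head = nn.Linear(feat_dim, n_classes)
        self.av45_head = nn.Linear(feat_dim, n_classes)
        self.fused_head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, fdg=None, av45=None):
        out = {}
        f = self.fdg_enc(fdg) if fdg is not None else None
        a = self.av45_enc(av45) if av45 is not None else None
        if f is not None:
            out["fdg"] = self.fdg_head(f)
        if a is not None:
            out["av45"] = self.av45_head(a)
        if f is not None and a is not None:
            out["fused"] = self.fused_head(torch.cat([f, a], dim=1))
        return out

def training_step(model, optimizer, batch, criterion=nn.CrossEntropyLoss()):
    """One update: available branches get weight 1, missing branches weight 0,
    so subjects with a single scan still contribute to the shared encoders."""
    optimizer.zero_grad()
    logits = model(fdg=batch.get("fdg"), av45=batch.get("av45"))
    weights = {"fdg": 1.0, "av45": 1.0, "fused": 1.0}  # 0/1 gating per the abstract
    loss = sum(weights[k] * criterion(v, batch["label"]) for k, v in logits.items())
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a dummy FDG-only mini-batch (AV45 missing); shapes are illustrative:
# model = MultiBranchCNN()
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# batch = {"fdg": torch.randn(2, 1, 32, 32, 32), "label": torch.tensor([0, 1])}
# training_step(model, opt, batch)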
