Abstract
Deep learning (DL) algorithms have demonstrated remarkable advances in medical image analysis, particularly in the classification of Alzheimer's Disease (AD). Despite this progress, acquiring the extensively annotated image datasets needed to train DL models effectively remains a significant challenge. Attention mechanisms have emerged as powerful tools in DL, enabling models to prioritize critical regions within the data and extract essential features; this focus improves both training efficiency and overall classification performance. In this study, convolutional neural networks (CNNs) serve as the foundational architecture for AD classification. To further enhance their performance, the Convolutional Block Attention Module (CBAM) is integrated into the CNN framework. CBAM is a lightweight, versatile attention mechanism that can be incorporated into any CNN architecture at minimal computational cost; by emphasizing important spatial and channel-wise features, it significantly improves the feature-extraction capability of CNNs. Building on this concept, an enhanced version of CBAM, referred to as enCBAM, is proposed. enCBAM optimizes the generation of the output feature maps, further improving the discriminative power of CNN architectures. In this work, the pre-trained VGG-16 network is employed as the base CNN model; combined with enCBAM, the resulting architecture, referred to as EnCNN, achieves a substantial boost in classification performance. Specifically, EnCNN attains a classification accuracy of 95.06%, outperforming its standalone counterpart and demonstrating the effectiveness of the enhanced attention mechanism.
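Standard CBAM, as described above, refines a feature map by applying channel attention followed by spatial attention. The following is a minimal NumPy sketch of that sequential scheme, not of the enCBAM variant proposed in this work; the weights `w1`, `w2` (the shared MLP of the channel branch) and the 7×7 kernel `k` (the spatial branch's convolution) are illustrative stand-ins for parameters that would be learned during training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Channel branch: global avg- and max-pooled descriptors pass
    through a shared two-layer MLP (w1, w2); their sum is gated by a
    sigmoid and rescales each channel. fmap has shape (C, H, W)."""
    avg = fmap.mean(axis=(1, 2))                      # (C,)
    mx = fmap.max(axis=(1, 2))                        # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))    # (C,)
    return fmap * att[:, None, None]

def conv2d_same(x, k):
    """Naive stride-1, zero-padded cross-correlation of a (2, H, W)
    input with a (2, kH, kW) kernel, yielding an (H, W) map."""
    kh, kw = k.shape[1:]
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    H, W = x.shape[1:]
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[:, i:i + kh, j:j + kw] * k)
    return out

def spatial_attention(fmap, k):
    """Spatial branch: avg- and max-pool along the channel axis,
    convolve the stacked maps with a 7x7 kernel, and gate each
    spatial location with a sigmoid."""
    avg = fmap.mean(axis=0, keepdims=True)            # (1, H, W)
    mx = fmap.max(axis=0, keepdims=True)              # (1, H, W)
    att = sigmoid(conv2d_same(np.concatenate([avg, mx]), k))  # (H, W)
    return fmap * att                                 # broadcasts over C

def cbam(fmap, w1, w2, k):
    """Sequential CBAM: channel attention, then spatial attention."""
    return spatial_attention(channel_attention(fmap, w1, w2), k)
```

Because both attention maps lie in (0, 1), the module only rescales activations; the output keeps the input's (C, H, W) shape, which is why CBAM can be dropped between any two convolutional stages of a network such as VGG-16 without altering the surrounding architecture.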