Breast cancer is among the most common malignancies in women and one of the most serious threats to their lives; reducing its burden is closely aligned with the United Nations Sustainable Development Goal 3 (SDG 3) of ensuring healthy lives and promoting well-being for all. Breast cancer accounts for the largest share of cancer mortality in women, and early diagnosis is key to reducing both disease-specific and overall mortality. Despite considerable advances in artificial intelligence and deep learning, current methods struggle to accurately localize diagnostically important regions, model sequential dependencies, and combine complementary features. These limitations prevent diagnostic frameworks from being reliable and scalable, especially in low-resource healthcare settings. This study proposes BreastHybridNet, a novel hybrid deep learning framework for mammogram images that addresses these interrelated challenges. The framework combines a pre-trained CNN backbone for feature extraction, a spatial attention mechanism that automatically highlights image regions carrying diagnostic patterns, a BiLSTM layer that captures sequential dependencies among diagnostic features, and a feature fusion strategy that integrates complementary representations. Experimental results show that the proposed model achieves 98.30% accuracy, quantitatively outperforming the state-of-the-art methods LMHistNet, BreastMultiNet, and DOTNet 2.0. BreastHybridNet advances the interpretability and scalability of existing systems while contributing to worldwide efforts to reduce cancer-related mortality through cost-efficient diagnostics. This study underscores the role of AI-enabled solutions in widening access to reliable breast cancer screening technologies.
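To make the spatial attention component concrete, the following is a minimal NumPy sketch of one plausible formulation: channel-wise pooled descriptors scored per location and normalized with a softmax, then used to reweight the CNN feature map. The pooling-and-sum scoring here is an illustrative stand-in (the paper's exact attention mechanism, and any learned parameters, are not specified in the abstract); the function name `spatial_attention` and all shapes are assumptions.

```python
import numpy as np

def spatial_attention(features):
    """Reweight a (H, W, C) CNN feature map by a softmax spatial attention map.

    This is an illustrative sketch: a learned scoring layer would normally
    replace the simple avg+max descriptor sum used below.
    """
    # Channel-wise average and max pooling give one descriptor per location
    avg_pool = features.mean(axis=-1)          # (H, W)
    max_pool = features.max(axis=-1)           # (H, W)
    scores = avg_pool + max_pool               # stand-in for a learned conv score

    # Softmax over all spatial positions so the attention map sums to 1
    flat = scores.ravel()
    weights = np.exp(flat - flat.max())        # subtract max for numerical stability
    weights /= weights.sum()
    attn = weights.reshape(scores.shape)       # (H, W), non-negative, sums to 1

    # Broadcast the attention map over channels to emphasize salient regions
    return features * attn[..., None], attn

# Example with a random feature map standing in for CNN backbone output
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 32))
weighted, attn = spatial_attention(feats)
```

The softmax normalization makes the attention map directly visualizable as a heatmap over the mammogram, which is one common route to the interpretability the framework targets.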