Abstract

Breast cancer is a worldwide medical challenge that requires early diagnosis. While numerous diagnostic methods for breast cancer exist, many focus primarily on network structure and neglect the guidance of professional medical knowledge. Moreover, these methods limit their analysis to two-dimensional B-mode ultrasound images and rarely consider the insights offered by Contrast-Enhanced Ultrasound (CEUS) videos, which provide more detailed dynamic pathological information. How to effectively utilize prior medical knowledge for precise breast cancer diagnosis from CEUS videos has therefore emerged as a pressing issue. To address this challenge, we propose a multimodal breast cancer diagnostic method based on Knowledge-Augmented Deep Learning, named KAMnet. This method integrates three types of prior knowledge into deep neural networks through different integration strategies. First, we devise a temporal segment selection strategy guided by Gaussian sampling as data-level integration, guiding the model to focus on keyframes. Second, we construct a feature fusion network for architecture-level integration and achieve collaborative inference through decision-level integration, facilitating multimodal information exchange. Finally, a spatial attention-guided loss function, applied as training-level integration, helps the model target lesion regions. We validate our model on our breast cancer video dataset of 332 cases. The results show that our model achieves a sensitivity of 90.91% and an accuracy of 88.238%. Extensive ablation experiments demonstrate the effectiveness of our knowledge-enhancement modules. The code is released at https://github.com/tobenan/BCCAD_torch.
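To illustrate the data-level integration step, the sketch below shows one way Gaussian-guided temporal sampling could select CEUS frames around a clinically informative keyframe. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name, parameters, and the choice of keyframe center are hypothetical.

import numpy as np

def gaussian_sample_frames(num_frames, num_samples=16, center=None,
                           sigma_frac=0.15, seed=None):
    # Hypothetical sketch: draw frame indices from N(center, sigma) so that
    # frames near a presumed keyframe (e.g., peak contrast enhancement in a
    # CEUS clip) are selected more often than temporally distant frames.
    rng = np.random.default_rng(seed)
    if center is None:
        center = num_frames / 2           # assumption: keyframe mid-video
    sigma = sigma_frac * num_frames       # spread as a fraction of clip length
    idx = rng.normal(loc=center, scale=sigma, size=num_samples)
    idx = np.clip(np.round(idx), 0, num_frames - 1).astype(int)
    return np.sort(idx)

# Example: pick 16 frames from a 200-frame clip, keyframe assumed near frame 80
print(gaussian_sample_frames(200, num_samples=16, center=80, seed=0))

In this sketch the sampled segment concentrates around the assumed keyframe while still covering its temporal neighborhood, which matches the stated goal of guiding the model toward keyframes without discarding surrounding dynamics.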
