Abstract

Breast cancer is one of the most prevalent cancers among women worldwide. Breast ultrasound (US) video is a common means of clinical diagnosis and contains rich information about lesions. However, relying on a single view can lead to a higher rate of misdiagnosis. Moreover, the size, spatial location, and time of appearance of lesions vary widely between patients, and US videos contain abundant redundant information. To address these issues, we propose a novel dual-branch classification model based on US videos. The model extracts lesion information in both the transverse and longitudinal planes and combines the diagnostic results of the two branches to diagnose a lesion. In addition, we design a region-guided module (RGM) and a time-guided module (TGM) to mitigate the impact of redundant information on the network. The TGM simulates radiologists’ temporal attention during diagnosis as a guiding signal, helping the network focus on the frames most useful for diagnosis. The RGM uses the roughly localized lesion region as prior information to guide the network’s attention to the lesion, which helps extract and strengthen texture features for breast cancer classification. We validate the effectiveness of our model on our collected US video dataset of 1000 cases from 888 patients. The results show that the model achieves an accuracy (ACC) of 86.40%, an area under the ROC curve (AUC) of 92.08%, and an F1-score of 82.39%, outperforming other state-of-the-art models.
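As a rough illustration of the design described above, the following PyTorch-style sketch shows one way such a dual-branch pipeline could be wired together. The module names mirror the abstract (TGM, RGM), but every shape, layer choice, the residual-gating form of the RGM, and the logit-averaging fusion are assumptions made for illustration; the paper's actual implementation is not specified here.

```python
# Minimal sketch of a dual-branch US-video classifier with hypothetical
# TGM/RGM modules. All architectural details are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeGuidedModule(nn.Module):
    """Hypothetical TGM: per-frame attention weights, so diagnostically
    useful frames contribute more to the aggregated video feature."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frame_feats):                              # (B, T, D)
        weights = torch.softmax(self.score(frame_feats), dim=1)  # (B, T, 1)
        return (weights * frame_feats).sum(dim=1)                # (B, D)


class RegionGuidedModule(nn.Module):
    """Hypothetical RGM: a rough lesion-region mask acts as a spatial
    prior that emphasizes lesion features before pooling."""
    def forward(self, feat_maps, region_mask):  # (N, D, h, w), (N, 1, h, w)
        guided = feat_maps * (1.0 + region_mask)      # residual spatial gating
        return F.adaptive_avg_pool2d(guided, 1).flatten(1)  # (N, D)


class Branch(nn.Module):
    """One view branch (transverse or longitudinal)."""
    def __init__(self, dim=64, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in frame encoder
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.rgm = RegionGuidedModule()
        self.tgm = TimeGuidedModule(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video, mask):  # both (B, T, 1, H, W)
        B, T = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1))        # (B*T, D, h, w)
        mask_small = F.interpolate(mask.flatten(0, 1), size=feats.shape[-2:])
        frame_feats = self.rgm(feats, mask_small).view(B, T, -1)
        return self.head(self.tgm(frame_feats))           # (B, num_classes)


class DualBranchClassifier(nn.Module):
    """Averages the two branches' logits, mirroring the idea of combining
    the transverse- and longitudinal-plane diagnoses."""
    def __init__(self):
        super().__init__()
        self.transverse = Branch()
        self.longitudinal = Branch()

    def forward(self, vid_t, mask_t, vid_l, mask_l):
        return 0.5 * (self.transverse(vid_t, mask_t)
                      + self.longitudinal(vid_l, mask_l))


if __name__ == "__main__":
    model = DualBranchClassifier()
    v = torch.randn(2, 8, 1, 64, 64)  # batch of 2 clips, 8 frames each
    m = torch.rand(2, 8, 1, 64, 64)   # rough lesion masks in [0, 1]
    print(model(v, m, v, m).shape)    # torch.Size([2, 2])
```

In this sketch the TGM plays the role of the radiologist's temporal attention (weighting frames) and the RGM plays the role of the region prior (weighting spatial locations); averaging the two branches' logits is just one simple way to combine the per-plane diagnoses.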
