Abstract

Breast cancer is one of the most common cancers among women worldwide. Breast Imaging Reporting and Data System (BI-RADS) features effectively improve the accuracy and sensitivity of breast tumor diagnosis. Building on the sign descriptions in BI-RADS, a quantitative scoring scheme has been proposed for ultrasound (US) data. This scheme includes the extraction of high-level semantic features, an intermediate step that makes the subsequent diagnosis interpretable. However, the scheme requires doctors to score the features of breast data, which is labor-intensive. To reduce the burden on doctors, we design a multi-task learning (MTL) framework that directly outputs the scores of different BI-RADS features from raw US images. The MTL framework consists of a shared network that learns global features and K soft attention networks, one for each BI-RADS feature. This enables the network not only to learn the potential correlations among different BI-RADS features, but also to learn the specificity of each feature, so that the tasks assist each other and jointly improve scoring accuracy. In addition, we group the BI-RADS features according to the correlations among tasks and build a joint multi-task/single-task framework. Experimental results on a US breast tumor dataset collected from 1859 patients with 4458 US images show that the proposed BI-RADS feature scoring framework achieves an average scoring accuracy of 84.91% for 11 BI-RADS features on the test dataset, which is helpful for the subsequent diagnosis of breast tumors.
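The architecture described above (a shared backbone producing global features, followed by K per-task soft-attention heads that reweight those features before scoring) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the linear stand-in for the CNN backbone, and the number of score levels per feature are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not taken from the paper):
D = 64   # shared feature dimension
K = 11   # number of BI-RADS features scored in the paper
C = 4    # number of score levels per feature (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Shared backbone: a single linear projection of a flattened image
# stands in for the CNN that learns global features.
W_shared = rng.normal(size=(D, 32 * 32)) * 0.01

# Per-task parameters: a soft-attention matrix and a scoring head
# for each of the K BI-RADS features.
W_att  = rng.normal(size=(K, D, D)) * 0.01
W_head = rng.normal(size=(K, C, D)) * 0.01

def score_birads(image):
    """Return K per-feature score distributions for one US image."""
    shared = np.tanh(W_shared @ image.ravel())        # global features
    scores = []
    for k in range(K):
        attn = softmax(W_att[k] @ shared)             # soft attention weights
        attended = attn * shared                      # task-specific reweighting
        scores.append(softmax(W_head[k] @ attended))  # score distribution
    return np.stack(scores)                           # shape (K, C)

probs = score_birads(rng.normal(size=(32, 32)))
```

Because every task head reads from the same shared representation, gradients from all K scoring losses would flow into `W_shared` during training, which is the mechanism by which correlated tasks assist each other, while each `W_att[k]` lets a task emphasize the feature channels specific to it.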
