Abstract

Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by hand-crafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained on whole mammographic images, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women, and a dataset of 73,128 images was used for model selection. Two case-control sets were used to evaluate performance on breast cancer prediction: contralateral mammograms of screen-detected cancers, and prior images of women whose cancers were detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi-square test). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.
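The two evaluation statistics quoted above have simple definitions. As a minimal sketch (the counts and scores below are invented for illustration, not taken from the study), the odds ratio for the top versus bottom density quintile with a Wald 95% confidence interval, and the matched concordance index over case-control pairs, can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI.

    a: cases in top quintile,    b: controls in top quintile
    c: cases in bottom quintile, d: controls in bottom quintile
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def matched_concordance(case_scores, control_scores):
    """Fraction of matched case-control pairs in which the case's
    density score exceeds its matched control's (ties count as 0.5)."""
    wins = sum(1.0 if cs > ct else 0.5 if cs == ct else 0.0
               for cs, ct in zip(case_scores, control_scores))
    return wins / len(case_scores)

# Illustrative (hypothetical) data:
or_, lo, hi = odds_ratio_ci(60, 30, 20, 25)   # or_ == 2.5
c_index = matched_concordance([40.0, 25.0, 10.0], [20.0, 25.0, 30.0])
```

A matched concordance of 0.5 corresponds to chance-level discrimination, which is why the reported values of 0.587 and 0.616 represent a modest but real signal.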

Highlights

  • Mammographic density (MD) is one of the most important independent risk factors for breast cancer and can be defined as the relative proportion of radio-dense fibroglandular tissue to radiolucent fatty tissue in the breast, as visualized in mammograms

  • As percent density recorded on visual analogue scales (VAS) has been shown to be a better predictor of cancer than existing automated methods, we developed a method of breast density estimation that predicts VAS scores using a supervised deep learning approach, learning image features associated with breast cancer risk

  • We propose an automated method for assessing breast cancer risk based on whole-image full-field digital mammograms (FFDM) using reader VAS scores as a measure of breast density

Introduction

Mammographic density (MD) is one of the most important independent risk factors for breast cancer and can be defined as the relative proportion of radio-dense fibroglandular tissue to radiolucent fatty tissue in the breast, as visualized in mammograms. A number of methods have been used to measure MD. These include visual area-based methods, for example, BI-RADS breast composition categories,[7] Boyd categories,[8] percent density recorded on visual analogue scales (VAS),[9] and semiautomated thresholding (Cumulus).[10] The automated Densitas software[11] operates in an area-based fashion on processed (for presentation) full-field digital mammograms (FFDM), while methods including Volpara[12] and Quantra[13] use raw (for processing) mammograms to estimate volumes of dense fibroglandular and fatty tissue in the breast. Recent studies have investigated the relationship between breast density and the risk of breast cancer and found differences depending on the density method used.[14,15]
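For area-based measures such as Cumulus, percent density is the dense-tissue area expressed as a share of the total breast area. The following is a hedged sketch of that idea, assuming intensity thresholds have already been chosen; the thresholds and pixel values are illustrative, and real pipelines first segment the breast from the background and pectoral muscle:

```python
def percent_density(pixels, breast_thr, dense_thr):
    """Area-based percent density: the share of breast-area pixels
    whose intensity exceeds a dense-tissue threshold.

    pixels:     flat list of image intensities
    breast_thr: intensity above which a pixel counts as breast tissue
    dense_thr:  intensity above which a breast pixel counts as dense
    """
    breast = [p for p in pixels if p > breast_thr]
    dense = [p for p in breast if p > dense_thr]
    return 100.0 * len(dense) / len(breast)

# Toy image: two background pixels, four breast pixels, two of them dense.
pd = percent_density([0, 0, 10, 20, 30, 40], breast_thr=5, dense_thr=25)
# pd == 50.0
```

The sensitivity of this measure to the choice of `dense_thr` is one reason thresholding methods like Cumulus remain semiautomated, and why fully automated and visual measures can disagree.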
