Abstract

This study presents an integrated deep learning architecture combining an object-detection algorithm and a convolutional neural network (CNN) for breast mass detection and visualization. Mammograms are analyzed to identify and localize breast mass lesions to aid clinician review. Two complementary forms of deep learning are used to identify regions of interest (ROIs). An object-detection algorithm, YOLOv5, analyzes the entire mammogram to identify discrete image regions likely to represent masses. These detections exhibit high precision, but the object-detection stage alone has insufficient overall accuracy for clinical application. A CNN independently analyzes the mammogram after it has been decomposed into subregion tiles and is trained to emphasize sensitivity (recall). The ROIs identified by each analysis are highlighted in different colors to facilitate an efficient staged review. The CNN stage nearly always detects tumor masses when present, but its ROIs typically occupy a larger area of the image. By inspecting the high-precision regions first and the high-sensitivity regions second, clinicians can quickly identify likely lesions before completing the review of the full mammogram. On average, the ROIs occupy less than 20% of the tissue in the mammograms, even without removing the pectoral muscle from the analysis. As a result, the proposed system helps clinicians review mammograms with greater accuracy and efficiency.
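The staged review described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes ROIs are axis-aligned pixel boxes `(x0, y0, x1, y1)`, and the image dimensions and boxes below are invented for demonstration.

```python
def roi_area_fraction(boxes, image_w, image_h):
    """Fraction of the image covered by the union of ROI boxes.

    A brute-force pixel-set union; fine for a small sketch,
    though a real system would use a raster mask instead.
    """
    covered = set()
    for x0, y0, x1, y1 in boxes:
        for x in range(x0, x1):
            for y in range(y0, y1):
                covered.add((x, y))
    return len(covered) / (image_w * image_h)


def staged_review(precision_rois, recall_rois):
    """Order ROIs for clinician review: high-precision detections
    (YOLOv5-style) first, then high-sensitivity regions (tile-CNN-style).
    Each entry is tagged so the two sets can be colored differently."""
    return ([("high-precision", b) for b in precision_rois]
            + [("high-sensitivity", b) for b in recall_rois])


# Illustrative boxes on a hypothetical 100x100 image.
yolo_boxes = [(0, 0, 10, 10)]
cnn_boxes = [(5, 5, 15, 15)]
queue = staged_review(yolo_boxes, cnn_boxes)
fraction = roi_area_fraction(yolo_boxes + cnn_boxes, 100, 100)
```

Here the two boxes overlap on a 5×5 patch, so the union covers 175 of 10,000 pixels (1.75%), well under the sub-20% coverage the paper reports for real mammograms.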
