Abstract

Background

Quantizing the Breast Imaging Reporting and Data System (BI-RADS) criteria into different categories with the single ultrasound modality has always been a challenge. To address this, we propose a two-stage grading system based on convolutional neural networks (CNNs) that automatically evaluates breast tumors in ultrasound images into five categories.

Methods

The newly developed automatic grading system consists of two stages: tumor identification and tumor grading. The identification network, denoted ROI-CNN, locates the region containing the tumor in the original breast ultrasound image. The subsequent categorization network, denoted G-CNN, generates effective features for differentiating the identified regions of interest (ROIs) into five categories: Category “3”, Category “4A”, Category “4B”, Category “4C”, and Category “5”. In particular, to make the predictions of the ROI-CNN better tailored to the tumor, a level-set-based refinement procedure is applied as a bridge between the identification stage and the grading stage.

Results

We tested the proposed two-stage grading system on 2238 cases of breast tumors in ultrasound images. Using accuracy as the indicator, our automatic computerized grading of breast tumors exhibited performance comparable to that of the subjective categories determined by physicians. Experimental results show that the two-stage framework achieves an accuracy of 0.998 on Category “3”, 0.940 on Category “4A”, 0.734 on Category “4B”, 0.922 on Category “4C”, and 0.876 on Category “5”.

Conclusion

By decoupling identification features and classification features into different CNNs, the proposed scheme extracts effective features from breast ultrasound images for the final classification of breast tumors. Moreover, it extends the diagnosis of breast tumors in ultrasound images to five sub-categories according to BI-RADS, rather than merely distinguishing malignant tumors from benign ones.
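The two-stage flow described in the abstract (identify the ROI, refine it, then grade it) can be sketched end to end as follows. This is a minimal illustrative sketch, not the authors' code: every function name here is an assumption, and the CNN stages and the level-set refinement are replaced by simple deterministic heuristics so the pipeline is runnable.

```python
import numpy as np

CATEGORIES = ["3", "4A", "4B", "4C", "5"]  # BI-RADS sub-categories used in the paper

def identify_roi(image):
    """Stage-1 stand-in for ROI-CNN: return a coarse bounding box
    (row0, row1, col0, col1) around the brightest pixel."""
    r, c = np.unravel_index(np.argmax(image), image.shape)
    h, w = image.shape
    return (max(r - 8, 0), min(r + 8, h), max(c - 8, 0), min(c + 8, w))

def refine_roi(image, box):
    """Stand-in for the level-set refinement: shrink the box to the
    tight extent of above-mean pixels inside it."""
    r0, r1, c0, c1 = box
    patch = image[r0:r1, c0:c1]
    rows, cols = np.where(patch > patch.mean())
    if rows.size == 0:
        return box
    return (r0 + rows.min(), r0 + rows.max() + 1,
            c0 + cols.min(), c0 + cols.max() + 1)

def grade_roi(image, box):
    """Stage-2 stand-in for G-CNN: map the mean intensity of the
    refined ROI (assumed in [0, 1]) to one of the five categories."""
    r0, r1, c0, c1 = box
    score = float(image[r0:r1, c0:c1].mean())
    idx = min(int(score * len(CATEGORIES)), len(CATEGORIES) - 1)
    return CATEGORIES[idx]

def grade_breast_ultrasound(image):
    """Full two-stage pipeline: identify -> refine -> grade."""
    box = identify_roi(image)
    box = refine_roi(image, box)
    return grade_roi(image, box)

# Synthetic example: a bright blob on a dark background
img = np.zeros((64, 64))
img[20:30, 20:30] = 0.9
print(grade_breast_ultrasound(img))
```

The point of the sketch is the decoupling the conclusion emphasizes: the identification step and the grading step are separate components joined by a refinement pass, so either stage can be swapped out (here, for trivial heuristics) without touching the other.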

Highlights

  • Quantizing the Breast Imaging Reporting and Data System (BI-RADS) criteria into different categories with the single ultrasound modality has always been a challenge

  • Effect of the identification on final grading accuracy: using the Dice similarity coefficient (DSC), the average distance between two boundaries (AvgDist), and the Hausdorff distance between two boundaries (HDist), Table 2 compares the similarity of the tumor areas generated in the experimental settings “No ROI-CNN”, “No Refined ROI-CNN”, and “Refined ROI-CNN”

  • We discovered that the ROI-CNN can effectively recognize tumors in breast ultrasound (BUS) images
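The Dice similarity coefficient used in the comparison above is a standard overlap measure between a predicted region and a reference region. A minimal implementation for binary masks might look like this (a generic sketch of the metric, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, ref, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred & ref| / (|pred| + |ref|), ranging from 0 to 1."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Two 4x4 masks of 4 pixels each, overlapping in exactly 1 pixel
a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True
print(round(dice_coefficient(a, b), 3))  # 2*1/(4+4) = 0.25
```

AvgDist and HDist summarize boundary disagreement instead of area overlap, so the three metrics together capture both how much the regions overlap and how far their contours deviate.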



Introduction

Quantizing the Breast Imaging Reporting and Data System (BI-RADS) criteria into different categories with the single ultrasound modality has always been a challenge. To prevent needless biopsies and reduce unnecessary expense and anxiety for thousands of women each year [4, 5], screening ultrasound is widely used in routine examination and clinical diagnosis [6,7,8,9]. The Breast Imaging Reporting and Data System (BI-RADS) [10] provides guidance and criteria for physicians to determine the category of a breast tumor based on medical images. Considering the concerns of physicians, an automatic breast tumor grading scheme should at least cover an objective diagnosis from Category 3 to Category 5, including the subcategories of Category 4 (refer to Table 1). An automatic categorization system can relieve the burden of manual diagnosis and reduce individual bias.

