Abstract
Mammography is a breast imaging technique widely used in breast cancer diagnosis and screening. The Breast Imaging Reporting and Data System (BI-RADS) defines a six-point overall cancer risk scale, from negative to highly suggestive of malignancy, based on mammography, as well as a four-point breast-density-based cancer risk scale. Automatic BI-RADS density classification of mammogram images remains a challenge; the current state of the art is about 80% accuracy on the MIAS (Mammographic Image Analysis Society) database. In this paper we present a deep learning study of BI-RADS density classification on MIAS, based on a lightweight Convolutional Neural Network (CNN) architecture. This is a small-data problem, as MIAS contains only 322 images with ground truth, so we use image pre-processing and augmentation to address it. Five-fold cross validation is used to evaluate the proposed approach, which achieves an average test accuracy of 83.6%. This suggests that deep learning has the potential to address the small-data problem in mammography, which is prevalent in many medical image analysis tasks. Our experience, especially in optimizing the deep learning architecture, will benefit other researchers and medical practitioners.
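The abstract does not spell out the network or the evaluation loop, but the described pipeline (a lightweight CNN, augmentation to offset the 322-image dataset, and five-fold cross validation) can be sketched as follows. This is a minimal illustration only: the layer sizes, input resolution, augmentation choices, and training settings below are assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a lightweight CNN with augmentation and five-fold
# cross validation, mirroring the pipeline outlined in the abstract.
# All architectural and training details here are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 4            # four-point BI-RADS density scale (per the abstract)
IMG_SHAPE = (224, 224, 1)  # assumed input size for the grayscale mammograms


def build_lightweight_cnn():
    """A small CNN; the paper's exact architecture may differ."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=IMG_SHAPE),
        # On-the-fly augmentation to compensate for the small dataset.
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


def cross_validate(images, labels, epochs=50):
    """Five-fold cross validation; returns the mean test accuracy."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accuracies = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_lightweight_cnn()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(images[train_idx], labels[train_idx],
                  epochs=epochs, batch_size=16, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        accuracies.append(acc)
    return float(np.mean(accuracies))
```

In this sketch, `cross_validate` would be called with pre-processed MIAS images and integer density labels; the reported 83.6% figure corresponds to the mean of the per-fold test accuracies.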