Abstract
Breast cancer is the most common form of cancer among women in both developed and developing countries. Early detection and diagnosis of this disease are important because they may reduce the number of deaths caused by breast cancer and improve the quality of life of those affected. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) methods have shown promise in recent years for assisting expert readers and improving the accuracy and reproducibility of pathology results. One significant application of CADe and CADx is breast cancer screening using mammograms. In image processing and machine learning research, sparse analysis methods have produced relevant results for representing and recognizing imaging patterns. However, applying sparse analysis techniques to the biomedical field is challenging, as the objects of interest may be obscured by contrast limitations or background tissues, and their appearance may change because of anatomical variability. We introduce label-specific and label-consistent dictionary learning methods to improve the separation of benign from malignant breast masses in mammograms. We integrated these approaches into our Spatially Localized Ensemble Sparse Analysis (SLESA) methodology. We performed 10- and 30-fold cross-validation (CV) experiments on multiple mammography datasets to measure the classification performance of our methodology, and compared it to that of deep learning models and conventional sparse representation. The results of these experiments show the potential of this methodology for separating malignant from benign masses as part of a breast cancer screening workflow.
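For context, label-consistent dictionary learning methods of this kind typically build on an LC-KSVD-style objective (Jiang et al.); the formulation below uses the notation of that literature as a reference point and is not necessarily the exact objective optimized in this work.

% LC-KSVD-style label-consistent dictionary learning objective.
% Y: training patches; D: dictionary; X: sparse codes; Q: label-consistency
% targets (nonzero where an atom and a sample share a class); A: linear
% transform; H: one-hot class labels; W: linear classifier; T: sparsity level.
\begin{equation}
\min_{D,\,A,\,W,\,X} \;\; \|Y - DX\|_F^2 \;+\; \alpha\,\|Q - AX\|_F^2 \;+\; \beta\,\|H - WX\|_F^2
\quad \text{subject to} \;\; \|x_i\|_0 \le T \;\; \text{for all } i
\end{equation}

The first term asks the dictionary to reconstruct the training data, the second encourages samples of the same class to use the same atoms (label consistency), and the third trains a linear classifier jointly with the sparse codes.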
Highlights
The topic of this work is automated classification of breast masses as benign or malignant using mammograms
We evaluate the performance of our framework and compare it to straightforward sparse representation classification (SRC) and the well-known Convolutional Neural Network (CNN) architectures of AlexNet [16], GoogLeNet [17], ResNet50 [33], and InceptionV3 [34], after applying transfer learning and data augmentation techniques
Our Spatially Localized Ensemble Sparse Analysis (SLESA) methods significantly outperform the best-performing CNN on the Mammographic Image Analysis Society (MIAS) dataset
Summary
The topic of this work is automated classification of breast masses as benign or malignant using mammograms. The development of CADe and CADx techniques for breast cancer using mammograms has attracted significant interest. Among these techniques, conventional classification models use specific procedures to craft features for representing and classifying imaging patterns. This research concentrates on the diagnosis (CADx) of breast masses as benign or malignant using sparse representation and dictionary learning techniques. The objective of sparse representation methods is to use sparse linear approximations of patterns, or atoms, from a dictionary of signals to represent a specific signal. These sparse approximations can be used for applications such as compression and denoising of signals/images, classification, object recognition, and other areas. Our premise is that optimized spatially localized dictionaries, trained using label separation or label consistency constraints, will improve the classification accuracy of our spatially localized sparse analysis. We employ this system for the diagnosis of breast cancer in mammograms. We evaluate the performance of our framework and compare it to straightforward sparse representation classification (SRC) and the well-known CNN architectures of AlexNet [16], GoogLeNet [17], ResNet50 [33], and InceptionV3 [34], after applying transfer learning and data augmentation techniques.
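To make the sparse representation step concrete, the following minimal sketch illustrates standard SRC in the style of Wright et al.: a query patch is sparsely coded over a dictionary whose atoms are labeled training patches, and the predicted class is the one whose atoms reconstruct the patch with the smallest residual. All names, shapes, and the choice of orthogonal matching pursuit as the sparse solver are illustrative assumptions rather than the authors' implementation.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(y, D, atom_labels, n_nonzero=10):
    """Classify a vectorized patch y against a labeled dictionary D.

    y           : (m,) query patch, e.g., a flattened mammogram ROI
    D           : (m, k) dictionary whose columns are training patches (atoms)
    atom_labels : (k,) class label of each atom (0 = benign, 1 = malignant)
    """
    # Sparse coding step: approximate y as D @ x with at most n_nonzero
    # nonzero coefficients in x.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, y)
    x = omp.coef_

    # Class-wise residuals: keep only each class's coefficients and pick
    # the class whose atoms reconstruct y best.
    classes = np.unique(atom_labels)
    residuals = []
    for c in classes:
        x_c = np.where(atom_labels == c, x, 0.0)
        residuals.append(np.linalg.norm(y - D @ x_c))
    return int(classes[np.argmin(residuals)])

A spatially localized ensemble such as SLESA can be thought of as applying a per-region decision of this kind over multiple image locations and aggregating the results; the localization and aggregation schemes themselves are specific to the paper.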