Abstract

Breast cancer is one of the most common cancers among women worldwide, and mammography is the primary method for its early diagnosis. Mammograms can be obtained in two-dimensional (2D) or three-dimensional (3D) form. 3D images contain more information than their 2D counterparts; however, they are more computationally and temporally demanding and more challenging to produce. This study proposes a simple method for converting mammograms into artificial 3D forms. First, the gray-level values of a 2D image are treated as an artificial third dimension for that image; the resulting surface is then rendered from a different viewpoint and saved again as a 2D image, so that it encodes more information than the original. Both the original mammograms and their converted forms are classified as mass or normal using a ResNet. The method is tested on mass and normal mammograms taken from the MIAS, INbreast, and CBIS-DDSM databases, which are frequently used in the literature. The results show that classification on the original 2D forms achieved 82.4% accuracy for MIAS, 86.7% for INbreast, and 96.7% for CBIS-DDSM, whereas the transformed forms achieved 97.8% for MIAS, 100% for INbreast, and 100% for CBIS-DDSM. According to these results, the transformed images provided an average of 10.67% higher accuracy across the three databases compared with their original forms.
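The conversion step described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the "different viewpoint" is a simple rotation of the point cloud (x, y, gray level) about the image's x-axis, followed by rasterizing the rotated points back onto a 2D grid. The function name and the `elev_deg` parameter are illustrative assumptions.

```python
import numpy as np

def to_artificial_3d(img, elev_deg=30.0):
    """Sketch of the abstract's idea (names are assumptions): treat each
    pixel's gray level as a height z, rotate the resulting point cloud,
    and rasterize it back into a 2D image seen from the new viewpoint."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    zs = img.astype(float)  # gray level as the artificial third dimension

    # Rotate the (y, z) coordinates about the x-axis by elev_deg degrees.
    t = np.deg2rad(elev_deg)
    y_rot = ys * np.cos(t) - zs * np.sin(t)
    y_rot -= y_rot.min()  # shift so all row coordinates are non-negative

    # Rasterize: keep the brightest value when points overlap a pixel.
    out_h = int(np.ceil(y_rot.max())) + 1
    out = np.zeros((out_h, w))
    np.maximum.at(out, (y_rot.round().astype(int), xs.astype(int)), zs)
    return out

# Toy 4x4 "mammogram" with gray levels 0..15.
demo = np.arange(16, dtype=float).reshape(4, 4)
view = to_artificial_3d(demo, elev_deg=30.0)
```

The output `view` is again a plain 2D array, so it can be fed to the same ResNet pipeline as the original mammograms.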
