Abstract

Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, and deep learning has only recently been applied to this task. Consequently, the robustness of deep learning models across different image processing types has not yet been reported. We investigated the segmentation accuracy of the U-net on mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground-truth mammary gland regions delineated by mammary experts were used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare breast density in the segmented mammary gland regions with that of the ground truths. Shape agreement between the ground truth and the mammary gland region segmented by the U-net was assessed for Types 1–4 using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. Bland-Altman analysis confirmed the equivalence of breast density between the ground truth and the U-net for Types 1 and 2, and its compatibility for Types 3 and 4. We conclude that the U-net is robust for segmenting the mammary gland region across different image processing types.
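The two evaluation measures used in the abstract are standard and easy to state concretely. As a minimal sketch (not the authors' actual evaluation code), the Dice coefficient between two binary masks and the Bland-Altman bias with 95% limits of agreement could be computed as follows; the function names and the use of NumPy arrays are illustrative assumptions:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks (1 = mammary gland pixel)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); define as 1.0 when both masks are empty.
    return 2.0 * intersection / total if total else 1.0

def bland_altman_limits(measure_1, measure_2):
    """Bias (mean difference) and 95% limits of agreement between two
    paired sets of measurements, e.g. breast densities from ground truth
    vs. U-net segmentations."""
    diff = np.asarray(measure_1, dtype=float) - np.asarray(measure_2, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)  # 95% limits assume approx. normal differences
    return bias, bias - spread, bias + spread
```

For example, a ground-truth mask and a U-net mask that agree on every pixel give a Dice coefficient of 1.0, and paired density measurements whose differences center on zero give a bias near zero with symmetric limits of agreement.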
