Abstract
Artificial intelligence‐based medicine plays an important role in uncovering correlations that are not apparent to humans. Manual segmentation of organs at risk is a tedious and time‐consuming procedure, yet segmentation of these organs and tissues is widely used in early diagnosis and treatment planning. In this study, we trained deep learning‐based semantic segmentation networks to segment healthy parotid glands. The dataset, obtained from Recep Tayyip Erdogan University Training and Research Hospital, contained 72 T2‐weighted magnetic resonance (MR) images. Experts manually segmented these images, binary masks were derived from the segmentations, and all images were cropped. The cropped images and masks were then rotated by 45°, 120°, and 210°, quadrupling the number of images. Using these datasets, we trained ResNet‐18‐ and MobileNetV2‐based DeepLab v3+ networks both without and with augmentation. For all architectures, the training and testing sets comprised 80% and 20% of the data, respectively. We also designed two graphical user interface (GUI) applications so that users can easily segment parotid glands with any of these deep learning‐based semantic segmentation networks. The mean and weighted dice values of the MobileNetV2‐based DeepLab v3+ without augmentation and the ResNet‐18‐based DeepLab v3+ with augmentation were 0.90845–0.93931 and 0.93237–0.96960, respectively; the corresponding sensitivity (%), specificity (%), and F1 score (%) values were 83.21, 96.65, 85.04 and 89.81, 97.84, 87.80. These models were found to be clinically successful, and the user‐friendly GUI applications of the proposed systems can be used by clinicians.
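The reported dice, sensitivity, specificity, and F1 values follow standard confusion-matrix definitions over predicted and ground-truth mask pixels. The sketch below is an illustration only, not the authors' code; the function names are our own, and it operates on flat binary masks. Note that for a single binary mask the dice coefficient coincides with the F1 score, whereas the mean and weighted dice reported above are averaged across classes and images.

```python
def confusion_counts(pred, truth):
    """Count TP, FP, FN, TN between two flat binary (0/1) masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def dice(pred, truth):
    """Dice coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def sensitivity(pred, truth):
    """Sensitivity (recall): TP / (TP + FN)."""
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn)

def specificity(pred, truth):
    """Specificity: TN / (TN + FP)."""
    _, fp, _, tn = confusion_counts(pred, truth)
    return tn / (tn + fp)

def f1(pred, truth):
    """F1 score: harmonic mean of precision and recall."""
    tp, fp, fn, _ = confusion_counts(pred, truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with `pred = [1, 1, 0, 0]` and `truth = [1, 0, 1, 0]`, all four metrics evaluate to 0.5, and `dice` equals `f1` exactly, as expected for the binary case.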
This study is competitive in that it uses MR images, automatically segments both parotid glands, produces results that are meaningful with respect to the literature, and provides a software application.
Journal: Concurrency and Computation: Practice and Experience