Abstract

Deep learning-based models have been used extensively in computer vision and image analysis to automatically segment the region of interest (ROI) in an image. Optical coherence tomography (OCT) is used to obtain images of the kidney's proximal convoluted tubules (PCTs), from which morphometric parameters such as tubular density and diameter can be quantified. However, the large image dataset and patient movement during the scan make pattern recognition and deep learning difficult. Another challenge is the large number of non-ROI pixels relative to ROI pixels, which causes data imbalance and degrades network performance. This paper develops a soft attention-based UNET model for automatic segmentation of tubule lumens in kidney images. Attention-UNET extracts features guided by the ground-truth structure, so irrelevant feature maps do not contribute during training. The performance of the soft Attention-UNET is compared with the standard UNET, Residual UNET (Res-UNET), and a fully convolutional network (FCN). The dataset contains 14,403 OCT images from 169 transplant kidneys for training and testing. The results show that the soft Attention-UNET achieves a Dice score of 0.78±0.08 and an intersection over union (IoU) of 0.83, which is as accurate as manual segmentation (Dice score = 0.835±0.05) and the best among the Res-UNET, standard UNET, and FCN networks. The results also show that CLAHE contrast enhancement significantly improves the segmentation metrics of all models (p < 0.05). These experimental results demonstrate that the soft attention-based UNET is highly effective for tubule lumen identification and localization and can support fast, accurate clinical decision-making on a new transplant kidney.
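As a minimal sketch of the two evaluation metrics reported above (Dice score and IoU), assuming the prediction and ground truth are binary NumPy masks; the function names and the epsilon smoothing term are illustrative, not from the paper:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B|, also called the Jaccard index."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For example, a predicted mask `[[1, 1], [0, 0]]` against a ground truth `[[1, 0], [0, 0]]` has one overlapping pixel, giving Dice = 2/3 and IoU = 1/2.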
