Abstract

Semantic nuclei segmentation is a challenging computer vision task. Accurate, automatic nuclei segmentation can support clinicians in diagnosing many diseases, such as cancer, by enabling automated tissue analysis. Deep learning algorithms allow automatic feature extraction from medical images; however, hematoxylin and eosin (H&E) stained images remain difficult due to variability in staining and texture. Using pre-trained models speeds up development and improves performance. This paper compares the DeepLabV3+ and U-Net deep learning methods with the pre-trained models ResNet-50 and EfficientNetB4 embedded in their architectures. In addition, different L2 regularization and dropout parameters are applied to prevent overfitting. The experiments were conducted on the PanNuke dataset, which consists of nearly 8,000 histological images with annotated nuclei. The ResNet-50-based DeepLabV3+ model with L2 regularization of 0.02 and dropout of 0.7 proved the most effective, with a Dice similarity coefficient (DSC) of 0.8356, an intersection over union (IoU) of 0.7280, and a loss of 0.3212 on the test set.
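To make the reported metrics concrete, the sketch below computes the Dice similarity coefficient and intersection over union for binary nucleus masks. It is a minimal illustration in NumPy, not the authors' evaluation code; the function names and the `eps` smoothing term are assumptions introduced here for this example.

```python
# Minimal sketch (not the authors' code) of the two overlap metrics reported above,
# computed on binary nucleus masks with NumPy. The `eps` smoothing term is an
# assumption added to avoid division by zero on empty masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 example: the prediction covers 4 pixels, the ground truth 6, overlapping on 4.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f"DSC: {dice_coefficient(pred, target):.4f}")  # 2*4 / (4 + 6) = 0.8000
print(f"IoU: {iou(pred, target):.4f}")               # 4 / 6 ≈ 0.6667
```

In a typical training setup for such models, the L2 regularization strength (0.02 here) is applied to the convolutional weights and the dropout rate (0.7 here) to layers in the decoder, though the exact placement in the authors' networks is not specified in the abstract.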
