Abstract

This research proposes a dilated depthwise separable convolutional network for human tissue identification using three-dimensional (3D) optical coherence tomography (OCT) images. Automatic human tissue identification enables fast pathological tissue analysis, detection of tissue changes over time, and efficient construction of precise treatment plans. 3D medical image classification is challenging because of indistinct tissue characteristics and high computational cost. To address these challenges, a deep dilated depthwise separable convolutional network (DDSCN) is proposed. The depthwise separable architecture improves parameter utilization efficiency, while dilated convolutions systematically aggregate multiscale contextual information and provide a large receptive field with a small number of trainable weights, yielding a computational benefit. 2D convolutions are used throughout the model to further enhance computational efficiency. The constructed model is evaluated on a multiclass human thyroid tissue classification task using 3D OCT images, and experimental results are compared against texture-feature-based shallow learning models and typical deep learning classification models. The results show that the proposed DDSCN outperforms these state-of-the-art models, improving accuracy by 3.2% over the best texture-based model and by 2.27% over the best CNN model. The proposed deep model demonstrates significant potential for applying deep learning to medical images of human tissue while advancing the next generation of OCT-based real-time surgical image guidance.
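
To make the architectural idea concrete, the sketch below shows one dilated depthwise separable 2D convolution block in PyTorch. This is a minimal illustration assuming standard torch.nn building blocks; the channel counts, kernel size, dilation rate, and the treatment of OCT slices as input channels are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a dilated depthwise separable 2D convolution block.
# Assumptions (not from the paper): PyTorch, kernel_size=3, dilation=2,
# and 3D OCT depth slices fed to the network as input channels.
import torch
import torch.nn as nn

class DilatedDepthwiseSeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, dilation=2):
        super().__init__()
        # Padding chosen so the spatial size is preserved for odd kernels.
        padding = dilation * (kernel_size - 1) // 2
        # Depthwise stage: one filter per input channel (groups=in_channels);
        # dilation enlarges the receptive field without adding weights.
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=padding, dilation=dilation,
            groups=in_channels, bias=False)
        # Pointwise stage: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    # Hypothetical usage: a 64-slice OCT volume treated as 64 input channels.
    block = DilatedDepthwiseSeparableConv2d(in_channels=64, out_channels=128)
    x = torch.randn(1, 64, 224, 224)  # (batch, slices-as-channels, H, W)
    print(block(x).shape)             # torch.Size([1, 128, 224, 224])

Factoring a standard convolution this way reduces the weight count from in_channels * out_channels * k^2 to in_channels * k^2 + in_channels * out_channels, which is the source of the parameter-efficiency benefit the abstract describes; the dilation then widens the receptive field at no additional parameter cost.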
