Abstract
Medical image segmentation plays a crucial role in diagnosing and staging diseases. It facilitates image analysis and quantification in many applications, but building an appropriate solution is essential and depends heavily on the characteristics of the dataset and the available computational resources. Most existing approaches segment a single anatomical region of interest and are difficult to apply across multiple imaging modalities in clinical settings because of limited generalizability and high computational requirements. To mitigate these issues, we propose MISegNet, a robust and lightweight deep learning network for real-time segmentation of multi-modality medical images. We apply a discrete wavelet transform (DWT) to the input to extract salient features in the frequency domain; this mechanism enlarges the receptive field of the neurons within the network. We propose a self-attention-based global context-aware (SGCA) module with varying dilation rates that enlarges the field of view and weights the importance of each scale, enhancing the network's ability to discriminate features. We build a residual shuffle attention (RSA) mechanism to improve the feature representation of the proposed model, and we formulate a new boundary-aware loss function, the Farid End Point Error (FEPE), which helps correctly segment regions with ambiguous boundaries by emphasizing edges. We confirm the versatility of the proposed model through experiments against eleven state-of-the-art segmentation methods on four datasets of different organs: two publicly available datasets (ISBI2017 and COVID-19 CT) and two private datasets (ovary and liver ultrasound images). Experimental results show that MISegNet, with 1.5M parameters, outperforms the state-of-the-art methods by 1.5%–7% in Dice coefficient score while requiring roughly 23× fewer parameters and multiply-accumulate operations than U-Net.
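To make the DWT front end described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: a single-level 2-D Haar transform that decomposes an image into one low-frequency (LL) and three high-frequency (LH, HL, HH) sub-bands at half the spatial resolution, which is one way a wavelet input stage can enlarge the effective receptive field. The function name haar_dwt2 and the 256×256 input are hypothetical.

```python
import numpy as np

def haar_dwt2(x: np.ndarray):
    """Single-level 2-D Haar DWT of an image with even height and width."""
    # Pairwise averages (low-pass) and differences (high-pass) along rows.
    lo_r = (x[0::2, :] + x[1::2, :]) / 2.0
    hi_r = (x[0::2, :] - x[1::2, :]) / 2.0
    # Repeat along columns, yielding four half-resolution sub-bands.
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0  # approximation (low frequency)
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0  # horizontal detail
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0  # vertical detail
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# Hypothetical stand-in for a single-channel medical image slice.
img = np.random.rand(256, 256).astype(np.float32)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (128, 128): each sub-band halves the resolution
```

Because each sub-band covers the full image at half the resolution, a convolution applied to the sub-bands sees twice the spatial extent per pixel, which matches the receptive-field enlargement the abstract attributes to the DWT stage.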