Abstract
Medical image segmentation is a prominent task in medical image processing and a fundamental aspect of medical image analysis. However, accurate segmentation is exceedingly challenging due to variations in the size, shape, and location of lesions. U-Net is well established in the domain of medical image segmentation, but it neither fully exploits channel-wise features nor leverages contextual information. This article presents a novel, improved MultiResU-Net framework, called DCA-MultiResU-Net, that minimizes the loss of crucial features and enhances the efficiency of end-to-end image segmentation by combining the strengths of U-Net, Res2Net, channel attention, dilated convolution layers, and the MultiResUNet model. In contrast to the typical convolutions in the U-Net encoder path, dilated convolution modules are inserted into the MultiRes blocks to extract and concatenate multi-scale features. Furthermore, a channel attention mechanism is employed at the bottom of the "U"-shaped network and in the decoder path to fuse information from receptive fields of varying sizes. Afterward, virtual reality visualization technology was implemented for 3-D visualization of the segmented RD lesions. The proposed segmentation model was evaluated on four online databases: Retinal Image Bank, RIADD, Kaggle, and the Cataract Image Dataset (GitHub). Performance was measured using standard evaluation metrics, achieving a Dice similarity coefficient of 90.76%, Intersection over Union of 88.42%, F1-score of 93.28%, and a loss of 0.1641. The experimental findings indicate that the DCA-MultiResU-Net framework achieved superior performance and generalization capabilities while using 6.76% fewer parameters than the U-Net model.
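The two mechanisms named in the abstract, dilated convolutions for multi-scale feature extraction and channel attention for reweighting the fused feature maps, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the averaging kernel, the dilation rates (1, 2, 3), and the parameter-free sigmoid gating are hypothetical simplifications standing in for the network's learned layers.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """2-D convolution with a dilated kernel (zero padding, stride 1).

    A dilation rate d samples the kernel taps d pixels apart, enlarging
    the receptive field without adding parameters.
    """
    kh, kw = kernel.shape
    pad = dilation * (kh // 2)
    padded = np.pad(img, pad)
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Sample a kh x kw grid of pixels spaced `dilation` apart.
            patch = padded[i:i + kh * dilation:dilation,
                           j:j + kw * dilation:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multiscale_features(img, kernel, dilations=(1, 2, 3)):
    """Concatenate responses at several dilation rates, as in the
    MultiRes-block idea: output shape (len(dilations), H, W)."""
    return np.stack([dilated_conv2d(img, kernel, d) for d in dilations])

def channel_attention(feats):
    """Reweight each channel by a sigmoid of its global average.

    Simplified squeeze-and-excitation-style gating: a real module would
    pass the pooled vector through small learned fully connected layers.
    """
    scores = feats.mean(axis=(1, 2))            # squeeze: (C,)
    weights = 1.0 / (1.0 + np.exp(-scores))     # excitation: sigmoid gate
    return feats * weights[:, None, None]       # scale each channel

img = np.random.default_rng(0).random((16, 16))
kernel = np.full((3, 3), 1.0 / 9.0)  # simple averaging kernel
feats = multiscale_features(img, kernel)
out = channel_attention(feats)
print(out.shape)  # (3, 16, 16)
```

The concatenated multi-scale responses here play the role of the features the MultiRes blocks extract, and the sigmoid gate plays the role of the channel attention that fuses receptive fields of varying sizes.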