Abstract
Medical image segmentation is a key research area in medical image processing and analysis. Accurate and efficient segmentation provides a solid foundation for physicians' diagnoses and treatment plans. Conventional approaches rely on manual feature extraction, which makes segmentation complex, consumes physicians' time and energy, and introduces subjective judgments that are prone to diagnostic error. Motivated by the impressive advances and successes of convolutional neural networks in computer vision, researchers have applied deep learning to medical image segmentation. The work described here builds on fully convolutional network (FCN) research and exploits the U-Net network's strong feature-learning capability and end-to-end processing mode for lung CT image segmentation. However, the U-Net network struggles to focus on the most valuable, critical information. Inspired by attention mechanisms, this study adds multilevel attention to the U-Net network to improve segmentation accuracy on lung CT images. The new model embeds a self-attention module in front of each upsampling layer of the U-Net: the module concatenates self-attention features computed from the original image to provide more detailed information, while the feature extraction performed by the upsampling layer suppresses irrelevant and redundant information. Comparative experiments were conducted on the 2019nCoVR dataset. The results demonstrate the effectiveness of the optimized model and show that it improves segmentation quality on lung CT images.
Additionally, the new model shows clear advantages over typical existing medical image segmentation approaches, reflecting its higher level of lung CT image segmentation performance.
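The abstract describes embedding a self-attention module in front of each upsampling layer, with a residual-style combination of the attended features and the incoming feature map. The following is a minimal NumPy sketch of one plausible form of such a spatial self-attention block; the projection matrices `Wq`, `Wk`, `Wv`, the scaling factor, and the residual addition are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(feat, Wq, Wk, Wv):
    """Self-attention over the spatial positions of a (C, H, W) feature map.

    feat: (C, H, W) feature map entering an upsampling stage.
    Wq, Wk, Wv: (C, C) projection matrices (hypothetical learned weights).
    Returns a feature map of the same shape, with attention output added
    residually, as one plausible reading of the module described above.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                       # flatten spatial dims: (C, N)
    q, k, v = Wq @ x, Wk @ x, Wv @ x                 # query/key/value projections
    attn = softmax((q.T @ k) / np.sqrt(C), axis=-1)  # (N, N) spatial affinities
    out = v @ attn.T                                 # aggregate values: (C, N)
    return feat + out.reshape(C, H, W)               # residual connection
```

In a full U-Net, one such block would sit before each upsampling layer, and its output would be concatenated with the corresponding encoder skip features in the usual U-Net fashion.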