Abstract

Segmenting COVID-19 lesions from CT images remains challenging due to the structural characteristics of infected regions, including their complex and diverse shapes, high inter-class similarity, and intra-class variability, which make it difficult to effectively capture detailed semantic information such as edges and textures. This study introduces a novel three-stage methodology for segmenting COVID-19 CT images, built on the margin adaptive deep supervised feature fusion network (MADFNet), which adopts an encoder-decoder framework. In this strategy, the segmentation results of each stage serve as supervision terms that guide semantic segmentation in the subsequent stage. Within MADFNet, the channel edge detection and spatial edge detection modules first capture rich edge-context information along different dimensions, extracting fine texture details and semantic dependencies, respectively. The encoder feature fusion module then aggregates multiscale feature representations at the same level, integrating information across channels and enabling adaptive detection of structural features such as edges and textures. Finally, the decoder feature fusion module integrates multiscale feature dependencies across levels, alleviating the semantic discrepancies between features at different levels and accurately localizing lesion regions. MADFNet achieves single-class segmentation Dice scores of 0.831, 0.825, and 0.821 on the three experimental datasets, and its single-class and multi-class lesion segmentation results on COVID-19 CT images validate the feasibility of our approach.
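
The abstract does not give implementation details, so the following is a minimal PyTorch sketch of how such an encoder-decoder with channel/spatial edge modules and same-level/cross-level fusion might be wired. Every internal choice here is an illustrative assumption rather than the paper's actual design: the SE-style channel attention, the convolutional spatial attention, the 1x1-conv fusion, the two-level depth, and the names (MADFNetSketch, EncoderFeatureFusion, DecoderFeatureFusion) are all hypothetical stand-ins for the modules the abstract names.

    # Illustrative sketch only; all module internals are assumptions, not MADFNet's design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    class ChannelEdgeDetection(nn.Module):
        # Re-weights channels to emphasize edge-bearing feature maps
        # (SE-style attention; an assumption standing in for the paper's module).
        def __init__(self, ch, r=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
                nn.Linear(ch // r, ch), nn.Sigmoid())

        def forward(self, x):
            w = self.fc(x.mean(dim=(2, 3)))   # global pooling -> channel weights
            return x * w[:, :, None, None]

    class SpatialEdgeDetection(nn.Module):
        # Highlights spatial positions with strong edge responses
        # (conv attention over mean/max maps; also an assumption).
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, 7, padding=3)

        def forward(self, x):
            s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.conv(s))

    class EncoderFeatureFusion(nn.Module):
        # Fuses same-level features from the two edge branches across channels.
        def __init__(self, ch):
            super().__init__()
            self.ced, self.sed = ChannelEdgeDetection(ch), SpatialEdgeDetection()
            self.fuse = nn.Conv2d(2 * ch, ch, 1)

        def forward(self, x):
            return self.fuse(torch.cat([self.ced(x), self.sed(x)], dim=1))

    class DecoderFeatureFusion(nn.Module):
        # Merges upsampled deep features with shallower skip features to
        # reduce the cross-level semantic gap before prediction.
        def __init__(self, deep_ch, skip_ch, out_ch):
            super().__init__()
            self.block = conv_block(deep_ch + skip_ch, out_ch)

        def forward(self, deep, skip):
            deep = F.interpolate(deep, size=skip.shape[2:],
                                 mode="bilinear", align_corners=False)
            return self.block(torch.cat([deep, skip], dim=1))

    class MADFNetSketch(nn.Module):
        # Two-level encoder-decoder for brevity; the real network is deeper.
        def __init__(self, in_ch=1, base=32, n_classes=1):
            super().__init__()
            self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
            self.eff1, self.eff2 = EncoderFeatureFusion(base), EncoderFeatureFusion(base * 2)
            self.dff = DecoderFeatureFusion(base * 2, base, base)
            self.head = nn.Conv2d(base, n_classes, 1)

        def forward(self, x):
            e1 = self.eff1(self.enc1(x))
            e2 = self.eff2(self.enc2(F.max_pool2d(e1, 2)))
            return self.head(self.dff(e2, e1))

One plausible reading of the stage-wise supervision described in the abstract is to feed the previous stage's mask as an extra input channel, e.g.:

    net = MADFNetSketch(in_ch=2)   # CT slice + previous-stage mask (assumed coupling)
    x = torch.cat([torch.randn(1, 1, 128, 128), torch.zeros(1, 1, 128, 128)], dim=1)
    logits = net(x)                # shape: (1, 1, 128, 128)
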
