Abstract

Background and objective: COVID-19 poses a serious threat to human health. Traditional convolutional neural networks (CNNs) can perform medical image segmentation, while transformers, which capture long-range dependencies better than CNNs, are increasingly applied to machine vision tasks. Combining CNNs and transformers for semantic segmentation has therefore attracted intense research interest. However, segmenting medical images from limited datasets, such as those available for COVID-19, remains challenging.

Methods: This study proposes a lightweight transformer+CNN model whose encoder sub-network follows a two-path design, allowing both the global dependencies of image features and low-level spatial details to be captured effectively. Jointly extracting image features with a CNN and MobileViT reduces the model's computation and complexity while improving segmentation performance; the model is therefore named Mini-MobileViT-Seg (MMViT-Seg). In addition, a multi-query attention (MQA) module is proposed to fuse multi-scale features from different levels of the decoder sub-network, further improving performance. MQA simultaneously fuses multiple low-level and high-level feature maps of different scales and supports end-to-end supervised learning guided by the ground truth.

Results: Two-class infection labeling experiments were conducted on three datasets. The results show that the proposed model achieves the best performance with the fewest parameters among five popular semantic segmentation algorithms. In multi-class infection labeling, the proposed model also achieves competitive performance.

Conclusions: MMViT-Seg was evaluated on three COVID-19 segmentation datasets and outperformed the compared models. In addition, the proposed MQA module effectively fuses multi-scale features from different decoder levels and further improves segmentation accuracy.
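To make the multi-scale fusion idea concrete, the following is a minimal sketch, not the authors' released code: it fuses decoder feature maps of different resolutions by projecting them to a common width, upsampling, and applying self-attention before a per-pixel classification head. The class name, layer sizes, and the use of standard multi-head attention (in place of the paper's MQA variant) are assumptions for illustration only.

```python
# Hypothetical illustration of multi-scale decoder-feature fusion (PyTorch).
# This is NOT the MMViT-Seg or MQA implementation; it only shows the general
# pattern of fusing low-level and high-level feature maps for segmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleFusion(nn.Module):
    """Project, upsample, and attend over decoder features from several levels."""

    def __init__(self, in_channels, common_channels=64, num_heads=4, num_classes=2):
        super().__init__()
        # 1x1 convolutions bring every input level to a shared channel width.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, common_channels, kernel_size=1) for c in in_channels
        )
        # Standard multi-head self-attention as a stand-in for the MQA module.
        self.attn = nn.MultiheadAttention(common_channels, num_heads, batch_first=True)
        # Per-pixel classification head producing segmentation logits.
        self.head = nn.Conv2d(common_channels, num_classes, kernel_size=1)

    def forward(self, features):
        # features: list of tensors [B, C_i, H_i, W_i], highest resolution first.
        target_size = features[0].shape[-2:]
        fused = 0
        for f, proj in zip(features, self.proj):
            f = proj(f)
            # Upsample every level to the highest resolution and sum.
            fused = fused + F.interpolate(
                f, size=target_size, mode="bilinear", align_corners=False
            )
        b, c, h, w = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)      # [B, H*W, C]
        tokens, _ = self.attn(tokens, tokens, tokens)  # self-attention over pixels
        fused = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(fused)                        # [B, num_classes, H, W] logits


if __name__ == "__main__":
    feats = [
        torch.randn(1, 32, 64, 64),
        torch.randn(1, 64, 32, 32),
        torch.randn(1, 96, 16, 16),
    ]
    out = MultiScaleFusion([32, 64, 96])(feats)
    print(out.shape)  # torch.Size([1, 2, 64, 64])
```

In such a design, the fused logits can be supervised directly against the ground-truth mask (e.g., with a cross-entropy loss), which is consistent with the end-to-end supervised fusion described in the abstract.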
