Abstract
The robustness and generalization of medical image segmentation models are challenged by variation across disease types, imaging modalities, and individual cases. Deep-learning-based semantic segmentation methods have delivered state-of-the-art performance in recent years, and one architecture, U-Net, has become the most popular choice for medical image segmentation. Despite its strong overall performance, U-Net still suffers from limited feature-expression ability and inaccurate segmentation. To this end, we propose DTA-UNet, based on Dynamic Convolution Decomposition (DCD) and Triple Attention (TA). First, taking Attention U-Net as the baseline network, the model replaces all conventional convolutions in the encoding-decoding process with DCD to enhance its feature extraction capability. Second, we combine TA with the Attention Gate (AG) in the skip connections to highlight lesion regions by removing redundant information in both the spatial and channel dimensions. The proposed model is evaluated on two public datasets, the COVID-SemiSeg dataset and the ISIC 2018 dataset, as well as a clinical stroke segmentation dataset from a cooperating hospital. Ablation experiments on the clinical stroke dataset demonstrate the effectiveness of DCD and TA, with an increase of only 0.7628 M parameters over the baseline model. DTA-UNet is further evaluated on all three datasets, which cover different image types, to verify its generality. Extensive experimental results show superior performance on a range of segmentation metrics compared with eight state-of-the-art methods. Our code is available at https://github.com/shuaihou1234/DTA-UNet.
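To make the skip-connection design concrete, the sketch below shows one plausible way to combine a triplet-style attention module with the additive attention gate of Attention U-Net in PyTorch. This is a minimal illustrative sketch, not the authors' released implementation: the module names (`ZPool`, `AttentionBranch`, `TripletAttention`, `AttentionGate`, `TAGateSkip`), the three-branch structure, and the assumption that the gating signal has already been upsampled to the skip features' resolution are all our assumptions; consult the linked repository for the actual DTA-UNet code.

```python
# Minimal sketch (assumed, not the authors' code) of a skip connection that
# first refines encoder features with a triplet-style attention module and
# then gates them with the decoder signal, as the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the channel axis."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)


class AttentionBranch(nn.Module):
    """Z-pool followed by a conv that produces a sigmoid attention map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))


class TripletAttention(nn.Module):
    """Three rotated branches capture (C,H), (C,W) and (H,W) interactions."""
    def __init__(self):
        super().__init__()
        self.branch_hc = AttentionBranch()  # H acts as the channel axis
        self.branch_wc = AttentionBranch()  # W acts as the channel axis
        self.branch_hw = AttentionBranch()  # plain spatial attention

    def forward(self, x):  # x: (N, C, H, W)
        y1 = self.branch_hc(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        y2 = self.branch_wc(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        y3 = self.branch_hw(x)
        return (y1 + y2 + y3) / 3.0


class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net; assumes the
    gating signal g was already upsampled to the skip features' size."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1, bias=False)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1, bias=False)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, g, x):
        a = F.relu(self.w_g(g) + self.w_x(x))
        return x * torch.sigmoid(self.psi(a))


class TAGateSkip(nn.Module):
    """Hypothetical TA + AG skip connection (composition is our guess)."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.ta = TripletAttention()
        self.ag = AttentionGate(gate_ch, skip_ch, inter_ch)

    def forward(self, g, x):
        return self.ag(g, self.ta(x))


if __name__ == "__main__":
    g = torch.randn(1, 128, 32, 32)  # decoder features (already upsampled)
    x = torch.randn(1, 64, 32, 32)   # encoder skip features
    skip = TAGateSkip(gate_ch=128, skip_ch=64, inter_ch=32)
    print(skip(g, x).shape)          # torch.Size([1, 64, 32, 32])
```

Refining the encoder features with TA before gating matches the abstract's stated goal of suppressing redundant information in both the spatial and channel dimensions before the AG selects lesion-relevant regions; the exact ordering and fusion inside DTA-UNet may differ.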