Abstract
Image feature extraction based on the attention mechanism has contributed significantly to the accuracy of medical image segmentation. However, current attention mechanisms extract features from single-view information, which limits their ability to capture effective features. In this study, we adopt the encoder-decoder structure of U-Net as the base network and construct a medical image segmentation method based on a multi-view attention mechanism and an adaptive fusion strategy. We refer to this new network as CFNet. The first component of CFNet is a cross-scale feature fusion method (CFF) that employs a new multi-view attention mechanism (MAM) for feature extraction. It effectively extracts features across multiple receptive fields and produces more effective cross-scale fusion features in the skip connections. The second component is a fusion weight adaptive allocation strategy (FAS), which guides the cross-scale fusion features to connect effectively to the decoder features, narrowing the semantic gap. We evaluated CFNet on two publicly available medical image segmentation datasets: MoNuSeg and LGG. The experimental results show that CFNet achieves better performance than current state-of-the-art methods for medical image segmentation. We also performed extensive ablation studies to validate our method.
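The abstract does not specify how MAM or FAS are implemented. The following is a minimal sketch of the two ideas as named above, assuming a PyTorch-style formulation; the module names, kernel sizes, and gating choices are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: multi-view (multi-receptive-field) attention for
# skip features, plus an adaptively weighted fusion with decoder features.
import torch
import torch.nn as nn


class MultiViewAttention(nn.Module):
    """Attend over features extracted at several receptive fields (views)."""

    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.views = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        # One attention logit per view, predicted from globally pooled features.
        self.gate = nn.Linear(channels, len(kernel_sizes))

    def forward(self, x):
        views = [v(x) for v in self.views]               # each (B, C, H, W)
        pooled = x.mean(dim=(2, 3))                      # (B, C)
        weights = torch.softmax(self.gate(pooled), dim=1)  # (B, V)
        stacked = torch.stack(views, dim=1)              # (B, V, C, H, W)
        return (weights[:, :, None, None, None] * stacked).sum(dim=1)


class AdaptiveFusion(nn.Module):
    """Learn per-pixel weights balancing skip features against decoder features."""

    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, skip, decoder):
        w = self.weight(torch.cat([skip, decoder], dim=1))  # (B, 1, H, W)
        return w * skip + (1.0 - w) * decoder
```

In this reading, the attention weights select among receptive-field views of the encoder features, and the learned fusion weight decides, per pixel, how much of the cross-scale skip feature versus the decoder feature to pass on, which is one plausible way to address the semantic gap the abstract describes.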