Abstract

In this paper, we embed two types of attention modules in a dilated fully convolutional network (FCN) to solve biomedical image segmentation tasks efficiently and accurately. In contrast to previous work on image segmentation through multiscale feature fusion, we propose the fully convolutional attention network (FCANet) to aggregate contextual information over both long-range and short-range distances. Specifically, we add two types of attention modules, a spatial attention module and a channel attention module, to the Res2Net backbone, which employs a dilation strategy. The spatial attention module aggregates the features at every location, so that similar features reinforce each other regardless of their spatial distance. At the same time, the channel attention module treats each channel of the feature map as a feature detector and models the dependency between any two channel maps. Finally, we fuse the outputs of the two attention modules with a weighted sum, retaining both long-range and short-range feature information, which further improves the feature representation and makes biomedical image segmentation more accurate. In particular, we verify that the proposed attention modules can be seamlessly attached to any end-to-end network with minimal overhead. We perform comprehensive experiments on three public biomedical image segmentation datasets, i.e., the Chest X-ray collection, the Kaggle 2018 Data Science Bowl and the Herlev dataset. The experimental results show that FCANet improves the segmentation of biomedical images. The source code and models are available at https://github.com/luhongchun/FCANet.
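For concreteness, the following is a minimal PyTorch sketch of the two attention modules and the weighted-sum fusion described above. It is an illustration under assumptions, not FCANet's exact implementation (see the linked repository for that): the class names, the `channels // 8` projection width, and the learnable fusion weight `alpha` are hypothetical choices.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Aggregates the feature at every position as a weighted sum over all
    positions, so similar features reinforce each other at any distance."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 projections; the channels // 8 reduction is an assumed choice.
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.key(x).flatten(2)                    # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW affinity
        v = self.value(x).flatten(2)                  # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Treats each channel map as a feature detector and models the
    dependency between any two channels via a C x C affinity matrix."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                    # B x C x HW
        attn = torch.softmax(f @ f.transpose(1, 2), dim=-1) # B x C x C
        out = (attn @ f).view(b, c, h, w)
        return self.gamma * out + x


class DualAttentionFusion(nn.Module):
    """Weighted sum of the two attention outputs; attaches to any backbone
    feature map, e.g. the output of a dilated Res2Net stage."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # assumed fusion weight

    def forward(self, x):
        return self.alpha * self.spatial(x) + (1 - self.alpha) * self.channel(x)


# Usage: the module preserves the feature-map shape, so it can be inserted
# into an end-to-end segmentation network with minimal overhead.
x = torch.randn(2, 64, 32, 32)
fused = DualAttentionFusion(64)(x)  # shape: 2 x 64 x 32 x 32
```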
