Abstract

Automatic classification and segmentation of medical images play essential roles in computer-aided diagnosis. Deep convolutional neural networks (DCNNs) have shown clear advantages in image classification and segmentation. However, they have not achieved the same success on medical images as they have on natural images. In this paper, two challenges for DCNNs on medical images are identified: 1) lack of feature diversity and 2) neglect of small lesions. These two issues heavily degrade classification and segmentation performance. To improve the performance of DCNNs on medical images, a similarity-aware attention (simi-attention) module is proposed, consisting of a Feature-similarity-aware Channel Attention (FCA) and a Region-similarity-aware Spatial Attention (RSA). Our simi-attention provides three advantages: 1) higher accuracy, since it extracts both diverse and discriminant features from medical images via the FCA and RSA; 2) precise focusing on and localization of lesions, even for data with low intensity contrast and small lesions; 3) no increase in the complexity of the backbone models, because the module contains no trainable parameters. Experiments are conducted on both classification and segmentation tasks using four public medical classification datasets and two public medical segmentation datasets. The visualization results show that our simi-attention accurately focuses on the lesions for classification and generates fine segmentation results even for small objects. The overall results show that our simi-attention significantly improves the performance of backbone models and outperforms the compared attention models on most datasets for both classification and segmentation.
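The abstract does not give the exact formulation of the FCA and RSA, but the key property it states is that both are parameter-free attention maps derived from feature similarity. The following is a minimal illustrative sketch of what such a parameter-free, similarity-driven channel and spatial attention could look like, assuming a cosine-similarity-based weighting against mean channel and spatial descriptors; the function names and the specific re-weighting scheme are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F


def feature_similarity_channel_attention(x):
    """Sketch of a parameter-free channel attention: channels that are less
    similar to the mean feature map get larger weights, which loosely
    reflects the goal of preserving feature diversity. Illustrative only."""
    b, c, h, w = x.shape
    flat = x.view(b, c, -1)                              # (B, C, H*W)
    mean_feat = flat.mean(dim=1, keepdim=True)           # (B, 1, H*W)
    sim = F.cosine_similarity(flat, mean_feat, dim=2)    # (B, C)
    weights = torch.softmax(1.0 - sim, dim=1)            # emphasize dissimilar channels
    return x * weights.unsqueeze(-1).unsqueeze(-1) * c   # rescale to keep magnitude


def region_similarity_spatial_attention(x):
    """Sketch of a parameter-free spatial attention: locations that differ
    from the global average descriptor get larger weights, which loosely
    reflects highlighting small, atypical regions such as lesions."""
    b, c, h, w = x.shape
    flat = x.view(b, c, -1)                              # (B, C, H*W)
    global_desc = flat.mean(dim=2, keepdim=True)         # (B, C, 1)
    sim = F.cosine_similarity(flat, global_desc, dim=1)  # (B, H*W)
    weights = torch.softmax(1.0 - sim, dim=1).view(b, 1, h, w)
    return x * weights * (h * w)                         # rescale to keep magnitude


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)                    # e.g. a backbone feature map
    out = region_similarity_spatial_attention(
        feature_similarity_channel_attention(feat))
    print(out.shape)                                     # torch.Size([2, 64, 32, 32])
```

Note that neither function introduces learnable weights, which is the property the abstract emphasizes: the module can be dropped into a backbone without increasing its parameter count.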
