Abstract

Recently, multimodal image-based methods have shown strong performance in skin cancer diagnosis. These methods typically use convolutional neural networks (CNNs) to extract features from two modalities (i.e., dermoscopy and clinical images) and fuse these features for classification. However, they commonly have two shortcomings: 1) the feature extraction processes of the two modalities are independent and lack cooperation, which may limit the representation ability of the extracted features, and 2) the multimodal fusion operation is a simple concatenation followed by convolutions, which produces coarse fusion features. To address these two issues, we propose a co-attention fusion network (CAFNet), which uses two branches to extract the features of dermoscopy and clinical images and a hyper-branch to refine and fuse these features at all stages of the network. Specifically, the hyper-branch is composed of multiple co-attention fusion (CAF) modules. In each CAF module, we first design a co-attention (CA) block with a cross-modal attention mechanism that achieves cooperation between the two modalities, enhancing the representation ability of the extracted features through mutual guidance. Following the CA block, we further propose an attention fusion (AF) block that dynamically selects appropriate fusion ratios to conduct pixel-wise multimodal fusion, generating fine-grained fusion features. In addition, we propose a deep-supervised loss and a combined prediction method to obtain a more robust prediction result. The results show that CAFNet achieves an average accuracy of 76.8% on the seven-point checklist dataset and outperforms state-of-the-art methods.
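
To make the two key ideas concrete, the following is a minimal sketch, not the authors' implementation, of a single fusion module combining a co-attention (CA) step and an attention fusion (AF) step. The layer widths, the 1x1-convolution attention form, and the sigmoid-based pixel-wise fusion ratio are illustrative assumptions; the paper's actual CAF module may differ.

```python
# Hypothetical sketch of a CAF-style module (assumed design, not the paper's code).
import torch
import torch.nn as nn


class CoAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # CA step: the attention map for one modality is derived from the other
        # modality's features, providing mutual guidance between the two branches.
        self.attn_from_clinical = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid()
        )
        self.attn_from_dermoscopy = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid()
        )
        # AF step: predict a per-pixel fusion ratio in [0, 1] from the refined features.
        self.fusion_ratio = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, derm: torch.Tensor, clin: torch.Tensor) -> torch.Tensor:
        # Refine each modality with attention computed from the other modality.
        derm_refined = derm * self.attn_from_clinical(clin)
        clin_refined = clin * self.attn_from_dermoscopy(derm)
        # Pixel-wise weighted blend instead of a plain concatenation.
        ratio = self.fusion_ratio(torch.cat([derm_refined, clin_refined], dim=1))
        return ratio * derm_refined + (1.0 - ratio) * clin_refined


if __name__ == "__main__":
    caf = CoAttentionFusion(channels=64)
    derm = torch.randn(2, 64, 56, 56)   # dermoscopy feature map
    clin = torch.randn(2, 64, 56, 56)   # clinical-image feature map
    print(caf(derm, clin).shape)        # torch.Size([2, 64, 56, 56])
```

In the full network described by the abstract, one such module would sit in the hyper-branch at each stage, taking the stage's dermoscopy and clinical feature maps as inputs and passing the fused result forward.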
