Abstract

Objective: Clinical and dermoscopy images (multi-modality image pairs) are routinely used in sequence to assess skin lesions. Clinical images characterize a lesion's geometry and color; dermoscopy depicts sub-surface structures such as vascularity, dots, and globules. Together, these modalities provide the labels used to characterize a skin lesion. Recently, convolutional neural networks (CNNs), owing to their ability to learn low-level features and high-level semantic information in an end-to-end architecture, have become the state of the art in skin lesion classification. Most CNN methods, however, rely on dermoscopy alone. The few published methods that support multiple modalities use ‘late fusion’, integrating separately extracted clinical and dermoscopy image features. These late-fusion methods tend to ignore the complementary image features available between the paired images at the early stages of the CNN architecture.

Methods: We propose a hyper-connected CNN (HcCNN) to classify skin lesions. Compared with existing multi-modality CNNs, our HcCNN has an additional hyper-branch that integrates intermediate image features in a hierarchical manner. The hyper-branch enables the network to learn more complex combinations of the two modalities at all stages of the network, both early and late. We also couple the HcCNN with a multi-scale attention block (MsA) that prioritizes semantically important, subtle regions in the two modalities across multiple image scales.

Results: Our HcCNN achieved an average accuracy of 74.9% for multi-label classification on the 7-point Checklist dataset, a well-benchmarked public dataset.

Conclusions: Our method is more accurate than state-of-the-art methods and, in particular, achieved the best and most consistent results on datasets with imbalanced label distributions.
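The abstract only outlines the architecture, but the hyper-connection idea can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering: the class names (HcCNN, MsABlock), layer widths, stage-wise fusion rule, dilation-based attention, and the 7-label output head are all illustrative assumptions, not the authors' actual implementation.

```python
# Minimal PyTorch sketch of the hyper-connected idea described in the abstract.
# All names, widths, and fusion details are hypothetical; the paper's actual
# architecture, attention design, and training setup may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_stage(in_ch: int, out_ch: int) -> nn.Sequential:
    """One downsampling stage of a small CNN branch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class MsABlock(nn.Module):
    """Hypothetical multi-scale attention: spatial attention maps computed at
    several dilation rates are averaged and used to re-weight the features."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.stack([torch.sigmoid(b(x)) for b in self.branches]).mean(0)
        return x * attn  # emphasize subtle regions across scales


class HcCNN(nn.Module):
    """Two modality branches plus a hyper-branch that fuses their intermediate
    features stage by stage (early *and* late), not only at the end."""

    def __init__(self, widths=(32, 64, 128), num_labels=7):
        super().__init__()
        in_chs = (3,) + widths[:-1]
        self.clin = nn.ModuleList([conv_stage(c, w) for c, w in zip(in_chs, widths)])
        self.derm = nn.ModuleList([conv_stage(c, w) for c, w in zip(in_chs, widths)])
        # Hyper-branch: at stage i, fuse [clinical, dermoscopy, previous hyper].
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * w + (0 if i == 0 else widths[i - 1]), w, kernel_size=1)
             for i, w in enumerate(widths)]
        )
        self.attn = nn.ModuleList([MsABlock(w) for w in widths])
        self.head = nn.Linear(widths[-1] * 3, num_labels)

    def forward(self, clinical: torch.Tensor, dermoscopy: torch.Tensor) -> torch.Tensor:
        c, d, h = clinical, dermoscopy, None
        for stage_c, stage_d, fuse, attn in zip(self.clin, self.derm, self.fuse, self.attn):
            c, d = stage_c(c), stage_d(d)
            parts = [c, d] if h is None else [c, d, F.max_pool2d(h, 2)]
            h = attn(fuse(torch.cat(parts, dim=1)))  # hierarchical fusion + attention
        # Global-average-pool each stream, then predict multi-label logits
        # (paired with BCEWithLogitsLoss for multi-label training).
        pooled = torch.cat([t.mean(dim=(2, 3)) for t in (c, d, h)], dim=1)
        return self.head(pooled)


# Usage with paired 224x224 clinical / dermoscopy images.
model = HcCNN()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```

The key design point the sketch tries to capture is that the hyper-branch consumes features from both branches at every stage, so complementary clinical/dermoscopy cues interact early rather than only after separate feature extraction, which is the late-fusion weakness the abstract identifies.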
