Abstract

Medical image annotation aims to automatically describe the content of medical images. It helps doctors understand the content of medical images and make better-informed decisions, such as diagnoses. Existing methods mainly follow the approach used for natural images and fail to emphasize object abnormalities, which are the essence of medical image annotation. In light of this, we propose to transform medical image annotation into a multi-label classification problem, in which object abnormalities are focused on directly. However, existing multi-label classification studies either rely on arduous feature engineering or do not handle label correlation in medical images well. To solve these problems, we propose a novel deep learning model that introduces a frequent pattern mining component and an adversarial-based denoising autoencoder component. Extensive experiments are conducted on a real retinal image dataset to evaluate the performance of the proposed model. Results indicate that the proposed model significantly outperforms image captioning baselines and multi-label classification baselines.
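To make the multi-label formulation concrete, the sketch below shows a generic multi-label classifier for retinal images: a CNN backbone producing one sigmoid output per abnormality label, trained with binary cross-entropy. This is a minimal illustrative baseline under assumed label names and counts, not the model described in the abstract; it omits the frequent pattern mining and adversarial denoising autoencoder components.

```python
# Illustrative multi-label classification baseline for medical image annotation.
# NOT the paper's model: the frequent pattern mining and adversarial denoising
# autoencoder components are omitted; the number of labels is an assumption.
import torch
import torch.nn as nn
from torchvision import models

NUM_ABNORMALITIES = 20  # hypothetical number of abnormality labels

class MultiLabelAnnotator(nn.Module):
    def __init__(self, num_labels: int = NUM_ABNORMALITIES):
        super().__init__()
        # CNN backbone; the final layer outputs one logit per abnormality label.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_labels)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)  # raw logits, one per label

model = MultiLabelAnnotator()
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy over all labels

# Dummy batch: 4 RGB retinal images with multi-hot abnormality targets.
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4, NUM_ABNORMALITIES)).float()

logits = model(images)
loss = criterion(logits, targets)
loss.backward()

# At inference time, threshold the sigmoid probabilities to obtain the
# predicted set of abnormalities for each image.
predicted = (torch.sigmoid(logits) > 0.5).int()
```

Because each label gets an independent sigmoid, this baseline ignores correlations between abnormalities, which is exactly the limitation the abstract says its label-correlation handling is designed to address.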
