Abstract

Radiology involves the use of medical images to detect and diagnose diseases and to guide further interventions. Chest X-rays are among the most common radiological examinations and help identify thoracic abnormalities and diseases, particularly lung diseases. However, reporting chest X-rays requires experienced radiologists, who are in short supply in many regions of the world. In this paper, we first develop an automatic radiology report generation system. Because large annotated radiology report datasets are scarce and generated reports are difficult to evaluate, the clinical value of such systems is often limited. To address this, we train our report generation network on the small IU Chest X-ray dataset, transfer the learned visual features to classification networks trained on the large ChestX-ray14 dataset, and use a novel attention-guided feature fusion strategy to improve the detection of 14 common thoracic diseases. By learning correspondences between the different types of feature representations, features learned by both the report generation model and the classification model are assigned higher attention weights, and the weighted visual features boost the performance of state-of-the-art baseline thoracic disease classification networks without altering any learned features. Our work not only offers a new way to evaluate the effectiveness of the learned radiology report generation network, but also demonstrates that visual representations learned on a small dataset for one task can be transferred to complement features learned on a large dataset for a different task and improve model performance.
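To make the attention-guided fusion idea concrete, below is a minimal PyTorch sketch of one plausible reading: transferred report-generation features are scored for correspondence with the classifier's own features, re-weighted by softmax attention, and concatenated with the original (unaltered) classification features. The module name, projection layers, and scoring scheme are hypothetical illustrations for this abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AttentionGuidedFusion(nn.Module):
    """Hypothetical sketch: re-weight transferred report-generation
    features by their agreement with the classification network's
    features, then fuse the two without modifying either encoder."""

    def __init__(self, dim_report: int, dim_cls: int):
        super().__init__()
        # Project transferred features into the classifier's feature
        # space so correspondence can be scored (assumed design).
        self.proj_report = nn.Linear(dim_report, dim_cls)
        self.score = nn.Linear(dim_cls, 1)

    def forward(self, f_report: torch.Tensor, f_cls: torch.Tensor) -> torch.Tensor:
        # f_report: (batch, n, dim_report) visual features transferred
        #           from the report generation network
        # f_cls:    (batch, dim_cls) classification-network features
        projected = self.proj_report(f_report)            # (batch, n, dim_cls)
        # Elementwise correspondence: features shared by both models
        # produce larger scores and thus higher attention weights.
        corr = projected * f_cls.unsqueeze(1)             # (batch, n, dim_cls)
        weights = torch.softmax(self.score(corr), dim=1)  # (batch, n, 1)
        fused = (weights * projected).sum(dim=1)          # (batch, dim_cls)
        # Concatenate so the fused features complement, rather than
        # replace, the classifier's learned representation.
        return torch.cat([fused, f_cls], dim=-1)          # (batch, 2*dim_cls)

# Example usage with illustrative dimensions:
fusion = AttentionGuidedFusion(dim_report=512, dim_cls=1024)
f_report = torch.randn(8, 49, 512)   # e.g. 7x7 spatial features, flattened
f_cls = torch.randn(8, 1024)
fused = fusion(f_report, f_cls)      # (8, 2048), fed to the disease classifier
```

Keeping both encoders frozen and only learning the projection and scoring layers is consistent with the abstract's claim that performance improves "without altering any learned features."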
