Abstract

In clinical diagnosis, radiology reports are essential for guiding a patient’s treatment. However, writing radiology reports is a demanding and time-consuming task for radiologists. Existing deep learning methods often ignore the interplay between medical findings, which can be a bottleneck limiting the quality of generated radiology reports. This paper focuses on the automatic generation of medical reports from input chest X-ray images. We mine the associations between medical findings in the given texts and construct a knowledge graph based on these associations. The patient’s chest X-ray image and clinical history document are used as input to extract hybrid image–text features. These hybrid features are then combined with the adjacency matrix of the knowledge graph, and a graph neural network aggregates and propagates information between nodes to produce contextual disease representations enriched with prior knowledge. These representations are fed into a generator trained with self-supervised learning to produce radiology reports. We evaluate the proposed method on two public datasets using both natural language generation and clinical efficacy metrics. Experiments show that, aided by a knowledge graph built from the patient’s prior knowledge, our method outperforms state-of-the-art approaches.
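To make the described pipeline concrete, the following is a minimal sketch of the graph-encoding step the abstract outlines: hybrid image–text features attached to the nodes of a disease knowledge graph, with a graph neural network propagating information along the prior-knowledge adjacency matrix to produce contextual disease representations. This is not the authors’ implementation; the class name, the single GCN-style layer, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeGraphEncoder(nn.Module):
    """Hypothetical sketch: propagate hybrid image-text node features
    over a fixed adjacency matrix mined from finding co-occurrences."""

    def __init__(self, num_findings: int, feat_dim: int,
                 hidden_dim: int, adjacency: torch.Tensor):
        super().__init__()
        # Add self-loops and row-normalize so each node averages over
        # itself and its neighbors (standard GCN-style preprocessing).
        adj = adjacency + torch.eye(num_findings)
        self.register_buffer("adj", adj / adj.sum(-1, keepdim=True))
        self.proj = nn.Linear(feat_dim, hidden_dim)      # per-node projection
        self.update = nn.Linear(hidden_dim, hidden_dim)  # post-aggregation update

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_findings, feat_dim) hybrid image-text features.
        h = torch.relu(self.proj(node_feats))
        h = self.adj @ h                  # aggregate messages from neighbors
        return torch.relu(self.update(h)) # contextual disease representations
```

In this sketch the adjacency matrix carries the mined prior knowledge (which findings tend to co-occur), while the node features carry the per-patient image–text evidence; the output node states would then condition the report generator.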
