Abstract

Radiology report generation through chest radiography interpretation is a time-consuming task that requires expert radiologists. Fatigue-induced diagnostic errors are common, and the problem is especially acute in regions of the world where radiologists are unavailable or lack diagnostic expertise. In this research, we propose a multi-objective deep learning model called CT2Rep (Computed Tomography to Report) for generating lung radiology reports by extracting semantic features from lung CT scans. A total of 458 CT scans were used in this research, from which 107 radiomics features and 6 slices of segmentation-related nodule features were extracted as input to our model. CT2Rep simultaneously predicts position, margin, and texture, three important indicators of lung cancer, and achieves remarkable performance with an F1-score of 87.29%. We conducted a satisfaction survey to estimate the practicality of CT2Rep; the results show that 95% of the reports received satisfactory ratings. These results demonstrate the model's great potential for producing robust and reliable quantitative lung diagnosis reports. Medical personnel can obtain important indicators simply by providing a lung CT scan to the system, which can enable widespread application of the proposed framework.
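The multi-objective design described above, a shared feature encoder feeding parallel classification heads for position, margin, and texture, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hidden size and the number of classes per head are assumptions for demonstration, while the 107-dimensional radiomics input matches the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 107 radiomics features per scan (as in the paper); the class counts per
# head below are illustrative assumptions, not taken from the paper.
N_FEATURES = 107
HIDDEN = 64
N_CLASSES = {"position": 6, "margin": 3, "texture": 4}

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MultiObjectiveModel:
    """Shared encoder followed by one classification head per lung-cancer indicator."""

    def __init__(self):
        # Small random weights stand in for trained parameters.
        self.W_shared = rng.standard_normal((N_FEATURES, HIDDEN)) * 0.05
        self.heads = {name: rng.standard_normal((HIDDEN, k)) * 0.05
                      for name, k in N_CLASSES.items()}

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W_shared)  # ReLU-activated shared encoder
        # Each head emits a probability distribution over its own classes.
        return {name: softmax(h @ W) for name, W in self.heads.items()}

model = MultiObjectiveModel()
batch = rng.standard_normal((2, N_FEATURES))  # radiomics vectors for two scans
probs = model.forward(batch)
for name, p in probs.items():
    print(name, p.shape)  # one distribution per indicator, per scan
```

Because the three heads share one encoder, a single forward pass yields all three indicators at once, which is the property that lets the system populate a full report from one CT scan.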

Highlights

  • We aimed to develop a comprehensive system for automatic lung radiology report generation

  • We formulated database management methods and application procedures to standardize database management, access, usage rules, and security protocols, and to enable health data to be used for biomedical research

  • As for neural network-based methods, since recurrent neural networks (RNNs) are proven applicable to sequential processing and convolutional neural networks (CNNs) are well suited to visual feature extraction [45,46], these neural models (RNN, multilayer perceptron (MLP), and CNN) further improve performance to about 75%, 84%, and 85%, respectively

Introduction

The global burden of cancer morbidity and mortality is increasing rapidly. Lung cancer is the most common cancer, the leading cause of cancer deaths in men, and the second leading cause of cancer deaths in women [1]. Among patients diagnosed between 2010 and 2014, the 5-year survival rate for lung cancer in most countries was only 10% to 20% [2]. Low-dose computed tomography (LDCT) is used to screen high-risk groups and can help diagnose cancer early, when it is more likely to be successfully treated. The efficacy of annual low-dose CT screening in reducing lung cancer mortality has been confirmed in many independent international randomized controlled clinical trials [3,4]. Applying a Lung-RADS report after clinical CT scans can increase the positive predictive value and assist decision making [5]. However, extracting information from LDCT scans and converting it into medical reports can be time-consuming. We aim to interpret LDCT scans and generate a text report with Lung-RADS categories by utilizing machine learning methods.
