Abstract

e13569 Background: The generation of databases in oncology is a necessity when evaluating the data produced by medical activity. However, it is a long process that incurs significant costs, not only in dedicated hours but also in the risk of errors. In addition, tools that assist in the interpretation and management of clinical images in different clinical scenarios can also support routine clinical practice. Our objective was to use Integrated Artificial Intelligence Systems (IAIs), first, for automatic dataset generation from unstructured clinical information with manual validation, and second, to validate the applicability of Microsoft's "InnerEye®" tool for automating medical image analysis and to estimate the effort required to reach a model with satisfactory accuracy. Methods: Our study was based on an anonymized database of 130 patients with localized rectal cancer (LRC) enrolled in clinical studies. Data from the first 100 patients were used to extract and tag pertinent medical information from unstructured text such as medical notes, discharge summaries, medical records, and electronic health records. The following 10 patients were used to implement an end-user application to review and refine the dataset. A comparative practical test measured time and precision, comparing dataset generation by two treating physicians following the traditional procedure vs. using the new application built on Microsoft's "Text Analytics for Health" artificial intelligence service. Regarding AI applied to image processing, we trained a model based on a convolutional neural network (CNN) for automatic segmentation of structures in CT images from the Department of Radiation Oncology used in treatment planning for patients diagnosed with LRC, using existing images, with the bladder, left femoral head, right femoral head, and small intestine as target structures. Results: In the comparative test, the raw times used by the physicians were 4400 seconds without assistance (NAI) vs. 3878 seconds with assistance (AI) in the first session, and 4387 seconds (NAI) vs. 4568 seconds (AI) in a second session in which the order was reversed, for a total of 8787 seconds (NAI) vs. 8446 seconds (AI) invested in generating the dataset (p = 0.49). Regarding the InnerEye results, the concordance (Dice similarity coefficient, DSC) between the automatic CT segmentations produced by the AI system, once trained with new images, and the traditional method was 93% for the bladder, 88% for the left femoral head, 94% for the right femoral head, and 42% for the small intestine. Conclusions: AI tools represent an advance in the exploitation of clinical data as well as in the management of radiological images, and they have the potential to change the clinical-care paradigm in oncology.
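As an illustration of the dataset-generation step, the sketch below shows how unstructured clinical text could be processed with Microsoft's Text Analytics for Health service. It is a minimal sketch assuming the azure-ai-textanalytics Python SDK; the endpoint, key, and sample note are placeholders, not details from the study.

    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient

    # Placeholders: substitute a real Language/Text Analytics resource endpoint and key.
    client = TextAnalyticsClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    # Hypothetical fragment of an unstructured clinical note.
    documents = [
        "Patient with localized rectal cancer treated with neoadjuvant chemoradiotherapy."
    ]

    # Text Analytics for Health is exposed as a long-running operation that
    # extracts and categorizes medical entities from free text.
    poller = client.begin_analyze_healthcare_entities(documents)
    for doc in poller.result():
        if not doc.is_error:
            for entity in doc.entities:
                # e.g. entity text, its category (Diagnosis, MedicationName, ...), and a confidence score
                print(entity.text, entity.category, entity.confidence_score)

In practice, the extracted entities would still require the manual validation step described above before being committed to the dataset.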
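For reference, the concordance metric reported in the results is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for two segmentation masks A and B. A minimal sketch of its computation on binary masks follows; the NumPy implementation is an assumption for illustration, as the abstract does not specify how the metric was computed.

    import numpy as np

    def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
        # Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|)
        pred = pred_mask.astype(bool)
        ref = ref_mask.astype(bool)
        overlap = np.logical_and(pred, ref).sum()
        total = pred.sum() + ref.sum()
        # Convention: two empty masks are treated as perfectly concordant.
        return 2.0 * overlap / total if total > 0 else 1.0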
