Abstract

The purpose of this study was to build a software tool to test the feasibility of machine-learning (ML) assisted analysis of in vivo 3D images in oncology research studies. Conventionally, in vivo image segmentations are performed by hand and are a bottleneck for data analysis, taking time and introducing both inter- and intra-user variability. Recently, advances in ML have shown promise in accelerating similar clinical image analysis tasks. Our objective was to work toward making this technology accessible to preclinical cancer researchers. 3D image data across multiple mouse models, organs, modalities, and imaging systems were used. ML segmentation models were trained with user-defined masks on images of mouse bladders (N=134), kidneys (N=131), livers (N=65), and lungs (N=26) from either ultrasound or micro-computed tomography (uCT) scanners. Models were developed using PyTorch/MONAI, hosted in an AWS cloud environment, and served to users via a custom extension for 3D Slicer (www.slicer.org). Performance was assessed as the difference between the human vs. ML organ volumes on a test set unseen by the models during training. Lastly, the functionality of on-the-fly segmentation improvement tools driven by iterative user feedback was assessed. Results demonstrated that generating segmentations with the prototype ML software took 2-6 seconds per image running on a modern GPU (NVIDIA RTX 3090), compared to 3-7 minutes if performed manually by an expert user (>30x improvement). The Dice scores between the ML-generated and human segmentations ranged from 0.81 (liver ultrasound) to 0.84 (bladder ultrasound) but could be improved using ML-based on-the-fly mask improvement tools. Using data from this pilot study, we estimate that the threshold to achieve 90% accuracy on first-pass segmentations is between 200-300 training images, depending on the variance observed in the image data.
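The Dice scores reported above can be illustrated with a minimal sketch of the Dice similarity coefficient, 2|A∩B| / (|A| + |B|), computed between two binary 3D masks. This is an illustrative example only, not the authors' evaluation code; the toy volumes and the function name `dice_score` are assumptions for demonstration.

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy 3D volumes standing in for a human vs. ML organ segmentation
human = np.zeros((4, 4, 4), dtype=np.uint8)
ml = np.zeros((4, 4, 4), dtype=np.uint8)
human[1:3, 1:3, 1:3] = 1   # 8 voxels
ml[1:3, 1:3, 1:4] = 1      # 12 voxels, 8 of them overlapping
print(round(dice_score(human, ml), 2))  # → 0.8
```

A Dice of 1.0 means the two masks agree voxel-for-voxel; the 0.81-0.84 range reported here indicates substantial but imperfect overlap between first-pass ML and expert segmentations.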
In summary, the prototype software was able to decrease the manual analysis time across a variety of preclinical in vivo image modalities, which is broadly relevant to preclinical oncology research. The next steps for this project will be to extend the platform to additional orthotopic tumor models (e.g. pancreatic, liver, prostate, etc.) and deploy it to the field to enable access to this toolset by preclinical imaging scientists across the world.

Citation Format: Adam M. Aji, Thomas M. Kierski, Juan D. Rojas, Hannah Morilak, Olivia J. Kelada, Kathryn H. Gessner, Andrew S. Gdowski, Lucia Kim, William Y. Kim, Ryan C. Gessner, Tomek J. Czernuszewicz. A prototype for a machine-learning assisted image analysis tool for preclinical in vivo studies [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 2336.
