Abstract
The Response Assessment in Neuro-Oncology (RANO) criteria and requirements for a uniform protocol have been introduced to standardise assessment of MRI scans in both clinical trials and clinical practice. However, these criteria mainly rely on manual two-dimensional measurements of contrast-enhancing (CE) target lesions and thus restrict both reliability and accurate assessment of tumour burden and treatment response. We aimed to develop a framework relying on artificial neural networks (ANNs) for fully automated quantitative analysis of MRI in neuro-oncology to overcome the inherent limitations of manual assessment of tumour burden. In this retrospective study, we compiled a single-institution dataset of MRI data from patients with brain tumours being treated at Heidelberg University Hospital (Heidelberg, Germany; Heidelberg training dataset) to develop and train an ANN for automated identification and volumetric segmentation of CE tumours and non-enhancing T2-signal abnormalities (NEs) on MRI. Independent testing and large-scale application of the ANN for tumour segmentation was done in a single-institution longitudinal testing dataset from the Heidelberg University Hospital and in a multi-institutional longitudinal testing dataset from the prospective randomised phase 2 and 3 European Organisation for Research and Treatment of Cancer (EORTC)-26101 trial (NCT01290939), acquired at 38 institutions across Europe. In both longitudinal datasets, spatial and temporal tumour volume dynamics were automatically quantified to calculate time to progression, which was compared with time to progression determined by RANO, both in terms of reliability and as a surrogate endpoint for predicting overall survival. 
We integrated this approach for fully automated quantitative analysis of MRI in neuro-oncology within an application-ready software infrastructure and applied it in a simulated clinical environment of patients with brain tumours from the Heidelberg University Hospital (Heidelberg simulation dataset). For training of the ANN, MRI data were collected from 455 patients with brain tumours (one MRI per patient) being treated at Heidelberg University Hospital between July 29, 2009, and March 17, 2017 (Heidelberg training dataset). For testing of the ANN, an independent longitudinal dataset of 40 patients, with data from 239 MRI scans, was collected at Heidelberg University Hospital in parallel with the training dataset (Heidelberg test dataset), and 2034 MRI scans from 532 patients at 34 institutions collected between Oct 26, 2011, and Dec 3, 2015, in the EORTC-26101 study were of sufficient quality to be included in the EORTC-26101 test dataset. The ANN yielded excellent performance for accurate detection and segmentation of CE tumours and NE volumes in both longitudinal test datasets (median Dice coefficient for CE tumours 0·89 [95% CI 0·86-0·90], and for NEs 0·93 [0·92-0·94] in the Heidelberg test dataset; CE tumours 0·91 [0·90-0·92], NEs 0·93 [0·93-0·94] in the EORTC-26101 test dataset). Time to progression from quantitative ANN-based assessment of tumour response was a significantly better surrogate endpoint than central RANO assessment for predicting overall survival in the EORTC-26101 test dataset (hazard ratios ANN 2·59 [95% CI 1·86-3·60] vs central RANO 2·07 [1·46-2·92]; p<0·0001) and also yielded a 36% margin over RANO (p<0·0001) when comparing reliability: agreement in volumetrically defined time to progression (radiologist ground truth vs automated assessment with ANN) was 87% (266 of 306 patients with sufficient data), compared with 51% (155 of 306) for local vs independent central RANO assessment.
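The median Dice coefficients reported above measure voxel-wise overlap between the ANN's segmentation and a reference mask: twice the intersection divided by the summed mask sizes, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch in Python/NumPy; the toy masks are illustrative, not from the study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1 = tumour voxel)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2-D example: 4-voxel ground truth, prediction misses one voxel
# and adds one false positive, so Dice = 2*3 / (4+4) = 0.75
truth = np.zeros((5, 5), dtype=int)
truth[1:3, 1:3] = 1
pred = truth.copy()
pred[1, 1] = 0   # missed voxel
pred[4, 4] = 1   # false positive
print(dice_coefficient(pred, truth))  # → 0.75
```

In 3-D volumetric segmentation the same formula applies unchanged; the arrays simply gain a third axis.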
In the Heidelberg simulation dataset, which comprised 466 patients with brain tumours, with 595 MRI scans obtained between April 27 and Sept 17, 2018, automated on-demand processing of MRI scans and quantitative tumour response assessment within the simulated clinical environment required an average of 10 min of computation time per scan. Overall, we found that the ANN enabled objective and automated assessment of tumour response in neuro-oncology at high throughput and could ultimately serve as a blueprint for the application of ANNs in radiology to improve clinical decision making. Future research should focus on prospective validation within clinical trials, application for automated high-throughput imaging biomarker discovery, and extension to other diseases. Funding: Medical Faculty Heidelberg Postdoc-Program and the Else Kröner-Fresenius Foundation.
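The abstract derives time to progression from automatically quantified tumour volume dynamics, but the exact volumetric progression criterion is not stated in this excerpt. The sketch below therefore uses a hypothetical rule, flagging progression when contrast-enhancing volume exceeds the smallest volume observed so far (the nadir) by more than an assumed 40%; both the threshold and the example trajectory are illustrative assumptions, not the study's definition:

```python
from typing import List, Optional, Tuple

def time_to_progression(
    scans: List[Tuple[float, float]],   # (months since baseline, CE volume in cm^3)
    threshold: float = 0.40,            # hypothetical 40% increase-over-nadir rule
) -> Optional[float]:
    """Walk the volume series in temporal order and return the first time
    point at which volume exceeds the running nadir by more than
    `threshold`; return None if no progression is detected."""
    if not scans:
        return None
    nadir = scans[0][1]
    for months, volume in scans[1:]:
        if nadir > 0 and volume > nadir * (1.0 + threshold):
            return months
        nadir = min(nadir, volume)
    return None

# Illustrative course: response to therapy, then regrowth past the 40% margin
course = [(0.0, 12.0), (2.0, 8.0), (4.0, 9.0), (6.0, 12.5)]
print(time_to_progression(course))  # → 6.0 (12.5 cm^3 > 8.0 cm^3 * 1.4)
```

Keying progression to the nadir rather than baseline mirrors how response criteria generally account for an initial treatment response before regrowth.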
Highlights
Whenever we stand, the downward pull of gravity requires that we make continuous motor corrections to remain upright.
During Experiment 1, mean whole-body sway angle was comparable across the 1F, 1.5F and 2F loading conditions [F(1.37,20.59) = 0.411, p = 0.592], whereas the mean-removed root mean square (RMS) sway angle varied depending on the specific load [F(2,30) = 7.650, p = 0.002].
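The mean-removed RMS sway angle reported for Experiment 1 is the root mean square of the sway trace after subtracting its mean, i.e. the variability of sway about the average lean rather than the lean itself. A minimal sketch on a synthetic trace; the sampling rate, lean and oscillation parameters are made up for illustration:

```python
import numpy as np

def mean_removed_rms(sway_angle_deg: np.ndarray) -> float:
    """RMS of a sway-angle trace after removing its mean, which equals the
    standard deviation of sway about the average body lean."""
    centred = sway_angle_deg - sway_angle_deg.mean()
    return float(np.sqrt(np.mean(centred ** 2)))

# Synthetic 10 s trace: constant 2-degree forward lean plus a 0.4 Hz,
# 0.5-degree-amplitude oscillation. Mean removal discards the lean, so the
# result is close to 0.5 / sqrt(2) ≈ 0.354 degrees.
t = np.linspace(0.0, 10.0, 1000)
sway = 2.0 + 0.5 * np.sin(2.0 * np.pi * 0.4 * t)
print(round(mean_removed_rms(sway), 2))
```

This separation is why mean sway angle and mean-removed RMS can dissociate across loading conditions: the former reflects postural set, the latter the ongoing corrective activity.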
Summary
The downward pull of gravity requires that we make continuous motor corrections to remain upright. Critical to this process is the ability to accurately estimate our orientation relative to gravity. The somatosensory system encodes the gravitational load (i.e., forces) throughout the body within the local reference frame of the support surface, whereas the vestibular system's otolith organs, together with the semicircular canal organs, encode head orientation within a fixed gravito-inertial reference frame. These sensory signals shape the corrective motor commands that maintain balance, such that any change in them alters the postural responses necessary to stand. Similar effects can be observed when subjects adopt asymmetric standing postures (Marsden et al., 2002), suggesting that load-related afferent feedback of gravity influences the processing of vestibular signals for the control of balance (Marsden et al., 2003).