Functional lumen imaging probe (FLIP) Panometry is performed at the time of sedated endoscopy and evaluates esophageal motility in response to distension. This study aimed to develop and test an automated artificial intelligence (AI) platform to interpret FLIP Panometry studies. The study cohort included 678 consecutive patients and 35 asymptomatic controls who completed FLIP Panometry during endoscopy and high-resolution manometry (HRM). "True" study labels for model training and testing were assigned by experienced esophagologists per a hierarchical classification scheme. The supervised deep-learning AI model generated FLIP Panometry heatmaps from raw FLIP data and assigned esophageal motility labels using a two-stage prediction model based on convolutional neural networks. Model performance was tested on a 15% held-out test set (n=103); the remaining studies were used for model training (n=610). "True" FLIP labels across the entire cohort included 190 (27%) "normal," 265 (37%) "not normal/not achalasia," and 258 (36%) "achalasia." On the test set, both the normal/not normal and the achalasia/not achalasia models achieved 89% accuracy (recall 89% and 88%, precision 90% and 89%, respectively). Of 28 patients with achalasia (per HRM) in the test set, none were predicted as "normal" and 93% were predicted as "achalasia" by the AI model. The AI platform provided accurate interpretation of FLIP Panometry esophageal motility studies from a single center compared with the impression of experienced FLIP Panometry interpreters. This platform may provide useful clinical decision support for esophageal motility diagnosis from FLIP Panometry studies performed at the time of endoscopy.
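To illustrate the two-stage hierarchical prediction described above, the following is a minimal sketch in PyTorch. It assumes FLIP Panometry heatmaps are rendered as single-channel 2D arrays and that the first stage separates normal from not normal while the second separates achalasia from other abnormal patterns; the architecture, the staging order, and the names `HeatmapCNN` and `predict_motility` are illustrative assumptions, not the authors' published model.

```python
# Hedged sketch of a two-stage hierarchical classifier over FLIP heatmaps.
# All shapes and architecture details are illustrative assumptions.
import torch
import torch.nn as nn


class HeatmapCNN(nn.Module):
    """Small convolutional binary classifier over a FLIP Panometry heatmap."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # pool to 32 global features
        )
        self.classifier = nn.Linear(32, 2)  # two-class logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def predict_motility(heatmap, normal_model, achalasia_model):
    """Two-stage hierarchical prediction: stage 1 decides normal vs. not
    normal; stage 2 decides achalasia vs. not achalasia among abnormals."""
    with torch.no_grad():
        if normal_model(heatmap).argmax(1).item() == 0:
            return "normal"
        if achalasia_model(heatmap).argmax(1).item() == 1:
            return "achalasia"
        return "not normal/not achalasia"


# Example with one hypothetical 64x128 heatmap (batch, channel, H, W);
# untrained models are used here purely to exercise the prediction path.
hm = torch.rand(1, 1, 64, 128)
print(predict_motility(hm, HeatmapCNN().eval(), HeatmapCNN().eval()))
```

The hierarchical structure mirrors the three "true" labels in the cohort: a study is first screened as normal or not, and only abnormal studies proceed to the achalasia decision, yielding the three possible outputs "normal," "achalasia," and "not normal/not achalasia."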