Abstract
Deep learning algorithms have gained attention for the detection (computer-aided detection [CADe]) of biliary tract cancer in digital single-operator cholangioscopy (dSOC). We developed a multimodal convolutional neural network (CNN) for detection (CADe) as well as characterization and discrimination (computer-aided diagnosis [CADx]) between malignant, inflammatory, and normal biliary tissue in raw dSOC videos. In addition, clinical metadata were included in the CNN algorithm to overcome limitations of image-only models. Based on dSOC videos and images of 111 patients (total of 15,158 still frames), a real-time CNN-based algorithm for CADe and CADx was developed and validated. We established both an image-only model and a metadata injection approach. In addition, frame-wise and case-based predictions on complete dSOC video sequences were validated. Model embeddings were visualized, and class activation maps highlighted relevant image regions. The concatenation-based CADx approach achieved a per-frame area under the receiver-operating characteristic curve of .871, sensitivity of .809 (95% CI, .784-.832), specificity of .773 (95% CI, .761-.785), positive predictive value of .450 (95% CI, .423-.467), and negative predictive value of .946 (95% CI, .940-.954) with respect to malignancy on 5715 test frames from complete videos of 20 patients. For case-based diagnosis using average prediction scores, 6 of 8 malignant cases and all 12 benign cases were identified correctly. Our algorithm distinguishes malignant and inflammatory bile duct lesions in dSOC videos, indicating the potential of CNN-based diagnostic support systems for both CADe and CADx. The integration of non-image data can improve CNN-based support systems, targeting current challenges in the assessment of biliary strictures.
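The abstract describes a concatenation-based fusion of CNN image features with clinical metadata, frame-wise three-class prediction, and case-level diagnosis by averaging per-frame scores over a full dSOC video. The following is a minimal illustrative sketch of that general design, not the authors' implementation; the backbone, metadata dimensionality, layer sizes, and class ordering are assumptions for illustration only.

```python
# Illustrative sketch (assumed architecture, not the published model):
# a CNN image encoder whose pooled features are concatenated with clinical
# metadata before classification into three classes (malignant, inflammatory,
# normal). Per-frame malignancy probabilities are averaged into one case score.
import torch
import torch.nn as nn
from torchvision import models


class MultimodalCholangioscopyNet(nn.Module):
    def __init__(self, num_metadata_features: int = 8, num_classes: int = 3):
        super().__init__()
        # Image branch: standard CNN backbone with its classifier head removed.
        backbone = models.resnet50(weights=None)
        feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.image_encoder = backbone
        # Metadata branch: small MLP over tabular clinical variables.
        self.metadata_encoder = nn.Sequential(
            nn.Linear(num_metadata_features, 32),
            nn.ReLU(),
        )
        # Concatenation-based fusion followed by the classification head.
        self.classifier = nn.Sequential(
            nn.Linear(feature_dim + 32, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, frames: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        image_features = self.image_encoder(frames)          # (B, feature_dim)
        metadata_features = self.metadata_encoder(metadata)  # (B, 32)
        fused = torch.cat([image_features, metadata_features], dim=1)
        return self.classifier(fused)                        # per-frame logits


def case_level_malignancy_score(model: nn.Module,
                                frames: torch.Tensor,
                                metadata: torch.Tensor,
                                malignant_class: int = 0) -> float:
    """Average per-frame malignancy probabilities across a complete dSOC video."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frames, metadata), dim=1)
    return probs[:, malignant_class].mean().item()


if __name__ == "__main__":
    model = MultimodalCholangioscopyNet()
    video_frames = torch.randn(16, 3, 224, 224)  # 16 frames from one case
    clinical = torch.randn(16, 8)                # per-frame copy of case metadata
    print(case_level_malignancy_score(model, video_frames, clinical))
```

In such a setup, case-based classification would simply threshold the averaged malignancy probability, mirroring the case-level evaluation reported in the abstract.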