Abstract

Accurate intraoperative diagnosis is crucial for decision-making during tumor surgery and stereotactic-guided biopsies. Differentiating primary CNS lymphoma (PCNSL) from non-PCNSL entities such as gliomas and metastatic cancers is challenging because of overlapping histomorphological features, time constraints, and the divergent subsequent treatment strategies each diagnosis entails. Our study addressed this challenge with deep learning and stimulated Raman histology (SRH). We imaged unprocessed, label-free tissue samples intraoperatively with a portable Raman scattering microscope, generating virtual H&E-like images in under five minutes. Our training set comprised over 54,000 SRH tumor patch images spanning various brain tumors. We developed a deep learning pipeline that combines a ResNet-50 feature extractor with the BYOL (Bootstrap Your Own Latent) self-supervised learning algorithm, and trained a binary linear classifier to distinguish PCNSL from non-PCNSL entities. Data were collected from New York University (NYU), the University of Cologne (UKK), and the University of Michigan (UM), comprising 263 SRH whole-slide images from 177 patients undergoing surgical resection or stereotactic-guided biopsy. On an international bicentric test cohort (N = 100, NYU and UKK), the model achieved an overall accuracy of 91.1%, an area under the receiver operating characteristic curve (AUROC) of 0.983, a sensitivity of 89.3%, and a specificity of 100%, measured against the final formalin-fixed, paraffin-embedded (FFPE) tissue diagnosis. The algorithm accurately extracts the characteristic histomorphological features that differentiate PCNSL from non-PCNSL tumors. These results underscore the potential of deep learning models to improve the accuracy of intraoperative diagnosis of PCNSL and non-PCNSL entities, supporting neuropathologists and surgeons in intraoperative decision-making and in planning further treatment.
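The pipeline outlined above — BYOL-pretrained ResNet-50 embeddings followed by a binary linear classifier — can be sketched in miniature. This is an illustrative assumption, not the authors' code: the feature extractor is replaced by stand-in NumPy arrays, and `ema_update`, `linear_probe_predict`, the embedding dimension, and all weights are hypothetical names chosen for the sketch.

```python
import numpy as np

def ema_update(target, online, tau=0.99):
    """BYOL's core mechanism: the target network's weights are an
    exponential moving average of the online network's weights,
    which lets the method learn without negative pairs."""
    return tau * target + (1.0 - tau) * online

def linear_probe_predict(features, w, b):
    """Binary linear classifier on frozen SRH patch embeddings:
    sigmoid(w . x + b) > 0.5 -> PCNSL (1), else non-PCNSL (0)."""
    logits = features @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > 0.5).astype(int)

# Toy demo: four fake 3-dimensional patch embeddings stand in for
# ResNet-50 features of SRH image patches.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
w, b = np.array([1.0, -0.5, 0.25]), 0.0
print(linear_probe_predict(feats, w, b))
```

In the actual study the linear classifier operates on high-dimensional frozen embeddings, and slide-level calls aggregate patch-level predictions; the sketch only shows the shape of the two-stage design (self-supervised pretraining, then a lightweight supervised probe).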