Background: Deep learning algorithms can help analyze whole-slide images (WSI) in lymphoma pathology, identifying deeper features and patterns that may not be easily discernible to human observers. This pilot project focuses on diffuse large B-cell lymphoma (DLBCL), a heterogeneous disease with diverse genetic alterations. By leveraging self-attention-trained clusters and transformers, the project aims to identify patterns and associations between mutational status and overall survival, potentially enhancing personalized treatment strategies and improving data reliability.

Methods: This study employed a computer vision deep learning pipeline to classify DLBCL subtypes through self-discovery of discriminatory features from scanned WSI. The workflow is shown in Figure 1a. First, we segmented tiles (patches) from the gigapixel-sized WSI of 223 lymphoma biopsy slides sourced from The Cancer Genome Atlas (TCGA) DLBCL and Stanford DLBCL-Morph datasets to optimize extraction of relevant features. For feature extraction, we used self-supervised pretraining of a Vision Transformer (ViT) network on a dataset of 1,515,000 patches. These patches were grouped into morphologically similar clusters using K-means, representing various lymphoma proliferation patterns. Dimensionality reduction with UMAP provided computational efficiency and feature visualization. Extracted features were used to predict overall survival via a Bag-of-Words (BoW) representation. We further enhanced the model by incorporating geometric information, including nuclear characteristics from HoverNet. Model performance was assessed with key metrics such as area under the curve (AUC) and accuracy. Additionally, genomic mutations with a frequency of 10% or higher in the TCGA cohort were incorporated to enhance predictive capability, including PIM1, SGK1, CARD11, KMT2D, SOCS1, BTG1, MUC16, and FAT4. Correlation analysis and hierarchical clustering were performed on mutations, patches, and outcomes (Figure 1b).

Results: Using the self-trained ViT encoder as the backbone with a random forest classifier, we demonstrate an accuracy of 0.88 on a weakly supervised task with all samples, a significant improvement over a ResNet-50 trained on ImageNet. In addition, saliency maps from the multiple attention heads provide interpretable visualizations of morphological characteristics, including tumor stroma, cell location, and necrosis. From the ViT feature embeddings, 10 morphologically distinct clusters were identified, separating morphologically similar and dissimilar tiles that reflect variation in the distribution of lymphoma cells. Clusters 1 and 4 correlated strongly with the mutated genes and with an increased probability of poor outcomes; on pathologist review, cluster 4 showed a high level of necrosis in the WSI. Finally, using the segmented WSI cortex with the cluster distribution, our method achieves an AUC of 0.77 for predicting vital status (alive/dead) as an outcome.

Conclusions: Our pipeline effectively leverages WSI with machine learning and deep learning tools for disease classification and outcome prediction in DLBCL. By computationally analyzing the entire tumor landscape, it captures tumor heterogeneity and disease risk, establishing correlations between patch-level characteristics, genomic mutations, and overall outcomes.
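As an illustration of the Methods, the sketch below walks through the patch-embedding and clustering stage. It is a minimal sketch under stated assumptions, not the study's implementation: a generic timm ViT stands in for the self-supervised encoder pretrained in this work, the input batch is a random placeholder for real WSI tiles, and the 10-cluster K-means plus UMAP settings simply mirror the numbers given in the abstract.

```python
# Minimal sketch: embed patches with a ViT, cluster them, and project with UMAP.
# The timm model below is a generic stand-in for the study's self-trained encoder.
import numpy as np
import timm
import torch
import umap
from sklearn.cluster import KMeans

encoder = timm.create_model("vit_small_patch16_224", pretrained=True, num_classes=0)
encoder.eval()

@torch.no_grad()
def embed(patches: torch.Tensor) -> np.ndarray:
    """Map a batch of (N, 3, 224, 224) patch tensors to pooled feature vectors."""
    return encoder(patches).cpu().numpy()

# Placeholder batch; in practice these would be tiles segmented from the WSI.
features = embed(torch.randn(64, 3, 224, 224))

# Group patches into 10 morphological clusters, as reported in the abstract.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

# 2-D UMAP projection of the embedding space for visualization.
coords = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
```

In the same spirit, here is a minimal sketch of the slide-level outcome model: each slide's BoW feature is the normalized histogram of its patches' cluster labels, and a random forest predicts vital status. All slide counts, labels, and hyperparameters below are synthetic placeholders, not the study's data or settings.

```python
# Minimal sketch: bag-of-clusters features per slide -> random forest -> AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def bag_of_clusters(patch_clusters: np.ndarray, n_clusters: int = 10) -> np.ndarray:
    """Normalized histogram of a slide's patch cluster labels (BoW feature)."""
    counts = np.bincount(patch_clusters, minlength=n_clusters).astype(float)
    return counts / max(counts.sum(), 1.0)

# Synthetic stand-ins: 223 slides, 500 patches each, binary vital status.
rng = np.random.default_rng(0)
X = np.stack([bag_of_clusters(rng.integers(0, 10, size=500)) for _ in range(223)])
y = rng.integers(0, 2, size=223)  # placeholder outcomes, 0 = alive, 1 = dead

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```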
The Vision Transformer (ViT) plays a pivotal role in this process: its self-attention mechanism identifies specific features in histopathology tissue, enabling precise feature extraction and accurate analysis for DLBCL classification. Notably, the ViT model achieves high performance across tasks without the need for external labeled data, owing to self-supervised learning that extracts meaningful information independently. Building on this discovery cohort, we will update the analysis using a larger, independent external cohort from institutional archives and present the results at the annual meeting.
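To make the attention-based saliency idea concrete, the sketch below computes the CLS-token attention of one generic ViT layer explicitly, from which a per-head patch saliency map can be read off and overlaid on the tile. The layer, shapes, and weights are hypothetical illustrations of standard ViT attention, not the study's model.

```python
# Minimal sketch: per-head CLS-token attention of a standard ViT layer,
# usable as a patch-level saliency map. All shapes/weights are hypothetical.
import torch

def cls_attention_map(x: torch.Tensor, w_qkv: torch.Tensor, num_heads: int):
    """x: (B, 1+P, D) tokens with CLS first; returns (heads, P) attention weights."""
    B, N, D = x.shape
    head_dim = D // num_heads
    qkv = x @ w_qkv.T                                    # (B, N, 3*D)
    q, k, _ = qkv.reshape(B, N, 3, num_heads, head_dim).permute(2, 0, 3, 1, 4)
    attn = (q @ k.transpose(-2, -1)) * head_dim ** -0.5  # (B, heads, N, N)
    attn = attn.softmax(dim=-1)
    return attn[0, :, 0, 1:]                             # CLS attention to patches

# Hypothetical sizes: 196 patches (14x14 grid), 384-dim tokens, 6 heads.
tokens = torch.randn(1, 197, 384)
w_qkv = torch.randn(3 * 384, 384)
saliency = cls_attention_map(tokens, w_qkv, num_heads=6).reshape(6, 14, 14)
```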