Deep Learning-Enabled Virtual Multiplexed Immunostaining of Label-Free Tissue for Vascular Invasion Assessment.
Objective: We report the development and validation of a deep learning-based virtual multiplexed immunostaining method for label-free tissue, enabling the simultaneous generation of ERG (ETS-related gene), PanCK (pan-cytokeratin), and hematoxylin and eosin (H&E) images for vascular invasion assessment. Impact Statement: This work delivers routine laboratory-compatible virtual multiplexed immunohistochemistry (mIHC) that reproduces ERG, PanCK, and H&E on the same tissue section without chemical staining. It addresses the cost, labor, tissue loss, and section-to-section variability of conventional IHC, as well as the practical unavailability of mIHC in most pathology laboratories, thereby improving accuracy and efficiency in assessing vascular invasion. Introduction: Traditional IHC requires one tissue section per stain, exhibits section-to-section variability, and incurs high costs and laborious staining procedures. While mIHC techniques enable simultaneous staining with multiple antibodies on a single slide, they are more tedious to perform and are currently unavailable in routine pathology laboratories. Here, we present a deep learning-based virtual multiplexed immunostaining framework that simultaneously generates ERG and PanCK, in addition to H&E virtual staining, enabling the accurate localization and interpretation of vascular invasion in thyroid cancers. Methods: This virtual mIHC technique is based on the autofluorescence microscopy images of label-free tissue sections, and its output images closely match the histochemical staining counterparts (ERG, PanCK, and H&E) of the same tissue sections. Results: Blind evaluation by board-certified pathologists demonstrated that virtual mIHC staining achieved high concordance with the histochemical staining results, accurately highlighting epithelial and endothelial cells. Virtual mIHC conducted on the same tissue section also allowed the identification and localization of small vessel invasion. 
Conclusion: This virtual mIHC approach can substantially improve diagnostic accuracy and efficiency in the histopathological evaluation of vascular invasion, potentially eliminating the need for traditional staining protocols and mitigating issues related to tissue loss and heterogeneity.
- Abstract
- Citations: 2
- 10.1136/jitc-2023-sitc2023.0072
- Nov 1, 2023
- Journal for ImmunoTherapy of Cancer
Background: Lung cancer, the worldwide leading cause of cancer-related deaths, is expected to account for over 127,000 deaths in the United States in 2023. Advances in the management of NSCLC are...
- Research Article
- Citations: 30
- 10.1016/j.labinv.2023.100070
- Jan 25, 2023
- Laboratory Investigation
Unstained Tissue Imaging and Virtual Hematoxylin and Eosin Staining of Histologic Whole Slide Images
- Research Article
- Citations: 141
- 10.1038/modpathol.2011.56
- Aug 1, 2011
- Modern Pathology
ERG–TMPRSS2 rearrangement is shared by concurrent prostatic adenocarcinoma and prostatic small cell carcinoma and absent in small cell carcinoma of the urinary bladder: evidence supporting monoclonal origin
- Research Article
- Citations: 3
- 10.34133/bmef.0151
- Jan 1, 2025
- BME frontiers
Objective and Impact Statement: We present a panel of virtual staining neural networks for lung and heart transplant biopsies, providing rapid and high-quality histological staining results while bypassing the traditional histochemical staining process. Introduction: Allograft rejection is a common complication of organ transplantation, which can lead to life-threatening outcomes if not promptly managed. Histological examination is the gold standard method for evaluating organ transplant rejection status, as it provides detailed insights into rejection signatures at the cellular level. Nevertheless, the traditional histochemical staining process is time-consuming, costly, and labor-intensive since transplant biopsy evaluations typically necessitate multiple stains. Furthermore, once these tissue slides are stained, they cannot be reused for other ancillary tests. More importantly, suboptimal handling of very small tissue fragments from transplant biopsies may impede their effective histochemical staining, and color variations across different laboratories or batches can hinder efficient histological analysis by pathologists. Methods: To mitigate these challenges, we developed a panel of virtual staining neural networks for lung and heart transplant biopsies, which digitally convert autofluorescence microscopic images of label-free tissue sections into their bright-field histologically stained counterparts, bypassing the traditional histochemical staining process. Specifically, we virtually generated hematoxylin and eosin (H&E), Masson's Trichrome (MT), and elastic Verhoeff-Van Gieson stains for label-free transplant lung tissue, along with H&E and MT stains for label-free transplant heart tissue.
Results: Blind evaluations conducted by 3 board-certified pathologists confirmed that the virtual staining networks consistently produce high-quality histology images with high color uniformity, closely resembling their well-stained histochemical counterparts across various tissue features. The use of virtually stained images for the evaluation of transplant biopsies achieved comparable diagnostic outcomes to those obtained via traditional histochemical staining, with a concordance rate of 82.4% for lung samples and 91.7% for heart samples. Moreover, virtual staining models create multiple stains from the same autofluorescence input, eliminating structural mismatches observed between adjacent sections stained in the traditional workflow, while also saving tissue, expert time, and staining costs. Conclusion: The presented virtual staining panels provide an effective alternative to conventional histochemical staining for transplant biopsy evaluation. These virtual staining panels have the potential to enhance the clinical diagnostic workflow for organ transplant rejection and improve the performance of downstream automated models for the analysis of transplant biopsies.
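The concordance rates reported above (82.4% for lung, 91.7% for heart) are simple agreement fractions over paired diagnoses. A minimal sketch of that computation, with invented rejection-grade labels purely for illustration:

```python
def concordance_rate(virtual_calls, chemical_calls):
    """Fraction of cases where the diagnosis from the virtual stain
    matches the diagnosis from the histochemical stain."""
    if len(virtual_calls) != len(chemical_calls):
        raise ValueError("paired case lists must have equal length")
    agree = sum(v == c for v, c in zip(virtual_calls, chemical_calls))
    return agree / len(virtual_calls)

# Hypothetical rejection grades for 12 paired heart biopsies (not study data)
virtual  = ["0R", "1R", "0R", "2R", "1R", "0R", "0R", "1R", "0R", "0R", "1R", "0R"]
chemical = ["0R", "1R", "0R", "2R", "1R", "0R", "1R", "1R", "0R", "0R", "1R", "0R"]
print(round(concordance_rate(virtual, chemical), 3))  # 11/12 agree -> 0.917
```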
- Research Article
- Citations: 151
- 10.1038/s41377-020-0315-y
- May 6, 2020
- Light, Science & Applications
Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a “digital staining matrix”, which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones’ silver stain, and Masson’s trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
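The "digital staining matrix" described above can be viewed as a per-pixel one-hot map of the desired stain, concatenated with the autofluorescence channels before being fed to the single network. A minimal NumPy sketch of how such an input might be assembled; the channel counts and stain indices are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def build_network_input(autofluorescence, stain_map, num_stains):
    """Concatenate label-free image channels with a per-pixel one-hot
    'digital staining matrix' selecting the desired stain at each pixel.

    autofluorescence : (H, W, C) float array of label-free channels
    stain_map        : (H, W) int array of stain indices per pixel,
                       e.g. 0 = H&E, 1 = Jones' silver, 2 = Masson's trichrome
    """
    h, w, _ = autofluorescence.shape
    one_hot = np.zeros((h, w, num_stains), dtype=autofluorescence.dtype)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], stain_map] = 1.0
    return np.concatenate([autofluorescence, one_hot], axis=-1)

# Toy example: 4x4 field, 2 autofluorescence channels,
# left half requests H&E (0), right half Masson's trichrome (2)
af = np.random.rand(4, 4, 2).astype(np.float32)
stains = np.zeros((4, 4), dtype=int)
stains[:, 2:] = 2
x = build_network_input(af, stains, num_stains=3)
print(x.shape)  # (4, 4, 5): 2 autofluorescence + 3 one-hot stain channels
```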
- Research Article
- 10.1016/j.modpat.2025.100885
- Dec 1, 2025
- Modern pathology : an official journal of the United States and Canadian Academy of Pathology, Inc
Lymphovascular Space Invasion in Endometrial Cancer: Does it Matter Where and How Much to Sample? A Macroscopic Study of 208 Hysterectomies.
- Research Article
- 10.1158/1538-7445.am2024-914
- Mar 22, 2024
- Cancer Research
An emerging predictive parameter of immunotherapy response is the patient’s tumor immune status, generally classified as “inflamed”, “immune excluded”, and “desert”. Historically, classification was performed semi-quantitatively with somewhat subjective parameters as evidenced by the recent Delphi Workshop consensus. There are a variety of approaches to classifying immune status, most based upon analysis of H&E images (counting tumor infiltrating lymphocytes, TILs), or counting subpopulations of immune cells using immunologic staining. However, use of multiplexed or multiple stains has recently been explored to better quantify the counts and types of immune cells in individual tissue compartments. While the immune microenvironment represents an important parameter in predicting patient response, other regions such as the extracellular matrix (ECM) may yield complementary information, as collagen-rich ECMs may present a barrier to drug diffusion and the orientation of fibers may direct the migration of malignant cells. Here we present a proof-of-concept virtual staining enhanced image analysis pipeline, which converts autofluorescence signals from a single tissue section into virtual H&E and Masson’s Trichrome along with virtual detection of pan cytokeratin (AE1/AE3/PCK26) and CD45 (LCA) positive cells. We apply image analysis to the panel of virtual stains to study the spatial and case-by-case heterogeneity of tumor collagen frameworks and immune phenotype. Using a standard slide scanner (Axio Scan Z1, Zeiss), multiple autofluorescence images were captured from unstained sections (4-6 µm thick) of lung tissue. The virtual staining was performed by four deep neural networks trained in a supervised learning fashion. Select chemical stains were performed on previously scanned tissues and reviewed by pathologists side-by-side with virtual stains to ensure consistency and quality control.
Four perfectly registered WSI virtual stain images were generated from each tissue section. The multi-stain results were rendered from a variety of lung cancers, including tissue microarray slides. Image analysis was performed using HALO (Indica Labs) and custom Python scripts. We identified unique areas of tumor and adjacent stroma with heterogeneous immune phenotype and collagen characteristics, suggesting that an interplay might exist that could be utilized to improve patient selection or prognosis when deploying advanced AI tools. Future work includes applying this virtual stain technique retrospectively to cases with known immunotherapy treatment responses to determine the prognostic significance of the combined immune/ECM phenotype. Citation Format: Serge Alexanian, Yair Rivenson, Ning Xuan, Brian Cone, Zihang Fang, Sean Meyering, Luis A. Carvalho, David Palacios, Raymond Kozikowski. Combination analysis of tumor-associated collagen frameworks and tumor immune phenotype of lung carcinomas using virtual staining [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 914.
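As a flavor of the downstream measurements such custom analysis scripts might perform, a collagen area fraction in a virtual trichrome channel can be as simple as thresholding; the threshold and toy data below are hypothetical, not taken from the study:

```python
import numpy as np

def positive_area_fraction(stain_channel, threshold):
    """Fraction of pixels whose stain intensity exceeds a threshold,
    e.g. collagen area fraction in a virtual Masson's trichrome channel."""
    mask = stain_channel > threshold
    return mask.mean()

# Hypothetical 8x8 virtual-trichrome collagen channel: one quadrant collagen-rich
channel = np.zeros((8, 8))
channel[:4, :4] = 0.9
print(positive_area_fraction(channel, threshold=0.5))  # 16/64 = 0.25
```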
- Research Article
- Citations: 26
- 10.34133/2022/9818965
- Jan 1, 2022
- Intelligent Computing
Deep learning-based virtual staining was developed to introduce image contrast to label-free tissue sections, digitally matching the histological staining, which is time-consuming, labor-intensive, and destructive to tissue. Standard virtual staining requires high autofocusing precision during the whole slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can lead to tissue photodamage. Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving equivalent performance to virtual staining of in-focus label-free images, also saving significant imaging time by lowering the microscope’s autofocusing precision. This framework incorporates a virtual autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images using a successive network. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual autofocusing network through a style loss during the training. To demonstrate the efficacy of this framework, we trained and blindly tested these networks using human lung tissue. Using 4× fewer focus points with 2× lower focusing precision, we successfully transformed the coarsely focused autofluorescence images into high-quality virtually stained H&E images, matching the standard virtual staining framework that used finely focused autofluorescence input images. Without sacrificing the staining quality, this framework decreases the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology.
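The reported savings are mutually consistent: if only the autofocusing step is accelerated, a ~89% reduction in autofocusing time and a ~32% reduction in total acquisition time together imply that autofocusing originally occupied roughly a third of the total. A quick back-of-the-envelope check; the time split is inferred here, not stated in the abstract:

```python
# Back-of-the-envelope check of the reported timing numbers.
autofocus_saving = 0.89  # reported reduction in autofocusing time
total_saving = 0.32      # reported reduction in total acquisition time

# If only the autofocusing step shrinks, the fraction of total time it
# originally occupied must satisfy: total_saving = fraction * autofocus_saving
implied_autofocus_fraction = total_saving / autofocus_saving
print(f"{implied_autofocus_fraction:.2f}")  # ~0.36: autofocusing was ~36% of total
```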
- Research Article
- 10.1158/1538-7445.am2021-2780
- Jul 1, 2021
- Cancer Research
Introduction: In recent years, the ability to detect several proteins on a single slide in multiplex immunofluorescence (mIF) experiments has become commonplace. This is possible thanks to factors such as the development of high-quality antibodies, improvements in fluorescence technology, and the automation of IHC staining. One of the advantages of multiplexing protein detection is gaining maximal data per tissue section, which is critical when samples are limited. Understanding co-expression and spatial organization of multiple targets within preserved tissue architecture is also important, especially in tumor microenvironment (TME) analysis. mIF plays a key role in research fields like tumor immunology, where it is needed to catalog subsets of immune and cancer cells within the TME. Ultimately, this will enable the development of personalized, combinatorial therapeutic interventions. Materials and Methods: The selected antibody panels were as follows. Tonsil Panel 1: p40, CD3, CD8, CD20 & CD68. Tonsil Panel 2: LAG3 [BC40], LAG3 [CAL26], CD57, CD19, FOXP3, T-bet, PD-L1, PD-1 & GATA-3. Colon Cancer Panel: Granzyme B, CD8, CDX2, CD3 & Pan Cytokeratin Plus. Melanoma Panel: Melan A, CD3, SOX10, CD8 & Granzyme B. Detection was performed with Biocare MACH 2 Universal HRP Polymer, using the fluorochromes Opal 480, Opal 520, Opal 570, Opal 620, and Opal 650. The VALENT (Biocare Medical) fully automated staining platform was utilized to stain slides. Image acquisition and analysis were performed on an Olympus BX61 microscope coupled with the ASI FISH imaging system. In the workflow, a proprietary room temperature (RT) stripping reagent was utilized instead of the heat stripping step commonly used between antibody detection incubations. Results and Discussion: The antibodies used have been previously optimized for standard IHC protocols on VALENT using DAB.
However, the sensitivity of tyramide detection is very high, and for most of the antibodies, the incubation times had to be drastically reduced to minimize non-specific staining and potential bleed-through to other filters for neighboring fluors. Additionally, Opal fluorochromes were diluted between 1:200 and 1:300. The RT stripping reagent helps preserve tissue integrity even after the sequential detection of 5 antibodies. Additionally, by using this reagent, a rigorous order of primary antibody application is no longer needed. Finally, this reagent dimmed some autofluorescence coming from red blood cells. Following some brief optimization steps, we obtained strong and specific staining patterns similar to those obtained with individual IHC experiments visualized with DAB. Conclusion: We have demonstrated the use of an RT stripping solution on an automated platform. Additionally, this reagent can be used with different antibodies in order to effectively characterize the TME in many cancers. Citation Format: Julio S. Masabanda, Sherry Wang, Joseph Vargas, Jason Ramos. Development of automated multiplex immunofluorescence protocols for tumor microenvironment evaluation [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 2780.
- Research Article
- Citations: 73
- 10.1007/s11307-020-01508-6
- Jun 8, 2020
- Molecular Imaging and Biology
Purpose: Histological analysis of artery tissue samples is a widely used method for diagnosis and quantification of cardiovascular diseases. However, the variable and labor-intensive tissue staining procedures hinder efficient and informative histological image analysis. Procedures: In this study, we developed a deep learning-based method to transfer bright-field microscopic images of unlabeled tissue sections into equivalent bright-field images of histologically stained versions of the same samples. We trained a convolutional neural network to build maps between the unstained images and histologically stained images using a conditional generative adversarial network model. Results: The results of a blind evaluation by board-certified pathologists illustrate that the virtual staining and standard histological staining images of rat carotid artery tissue sections, including those involving different types of stains, showed no major differences. Quantification of virtual and histological H&E staining in carotid artery tissue sections showed that the relative errors of intima thickness, intima area, and media area were lower than 1.6%, 5.6%, and 12.7%, respectively. The training time of the deep learning network was 12.857 h with 1800 training patches and 200 epochs. Conclusions: This virtual staining method significantly mitigates the typically laborious and time-consuming histological staining procedures and could be augmented with other label-free microscopic imaging modalities.
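The relative-error figures quoted above follow the usual definition: the absolute deviation of the virtual-stain measurement from the histochemical reference, normalized by the reference. A minimal sketch; the thickness values below are hypothetical:

```python
def relative_error(measured, reference):
    """Relative error (%) of a morphometric measurement from the virtual
    stain against the same measurement from the histochemical stain."""
    return abs(measured - reference) / reference * 100.0

# Hypothetical intima thickness: 49.3 um (virtual) vs 50.0 um (histochemical)
print(round(relative_error(49.3, 50.0), 1))  # 1.4 %, within the reported 1.6 % bound
```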
- Research Article
- 10.1158/1538-7445.am2024-6185
- Mar 22, 2024
- Cancer Research
Introduction Hematoxylin and eosin (HE) is the standard stain used in histology to make tissue visible to the human eye by highlighting certain cellular and tissue structures, and it is a technique that is widely used in the diagnosis of cancer and other pathologies. Chemical staining, however, is irreversible, making the tissue unusable for subsequent measurements, such as spatial transcriptomics. Here we utilize a generative AI method based on pix2pix image-to-image translation to generate virtual HE staining for whole slide images (WSIs) acquired of unstained tissue with brightfield microscopy and perform a thorough histological evaluation of its feasibility in breast cancer diagnostics. Materials and Methods We optimized the sample preparation and imaging setup for virtual staining purposes, and developed a custom generative adversarial network architecture for learning the virtual staining from paired samples of unstained and H&E stained tissue. Here, we focus on the utility of virtual staining in breast cancer diagnostics, using a set of breast cancer samples for acquiring whole slide images from unstained tissue before H&E staining and from reference H&E stained tissue after chemical staining. A hold-out set of sample pairs is left for validation, allowing us to evaluate the virtual staining performance for a vast array of tissue components and to examine the potential shortcomings in staining reproduction. We use a comprehensive set of quantitative metrics at both the pixel and object level to evaluate virtual staining quality. In addition, we perform a thorough visual evaluation of histological feasibility by histology experts to examine the computational staining.
Results We demonstrate that, by careful optimization of both the sample preparation and imaging workflow, as well as the computational methods, generative adversarial networks can be used for virtual staining of whole slide images of breast cancer tissue acquired from unstained tissue using regular bright field microscopy. We analyzed the virtual staining performance quantitatively and visually, highlighting the potential of the method through successful cases, demonstrating its applicability in breast cancer diagnostics, and discussing the challenges and shortcomings of virtual staining for clinical samples. Conclusions We demonstrate the potential of virtual HE staining for clinical histopathology of breast cancer tissue. Notably, our virtual staining based on generative AI shows promise towards a more sustainable and streamlined sample processing and staining workflow in digital pathology. Citation Format: Leena Latonen, Umair Khan, Sonja Koivukoski, Johan Hartman, Pekka Ruusuvuori. Generative AI for virtual HE-staining of whole slide images of unstained breast cancer tissue [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 6185.
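A common pixel-level metric for this kind of evaluation is the peak signal-to-noise ratio (PSNR); whether this exact metric was in the authors' set is not stated, so the sketch below is purely illustrative:

```python
import math

def psnr(virtual_pixels, reference_pixels, max_value=255.0):
    """Peak signal-to-noise ratio, a common pixel-level metric for comparing
    a virtually stained image against its chemically stained reference."""
    n = len(virtual_pixels)
    mse = sum((v - r) ** 2 for v, r in zip(virtual_pixels, reference_pixels)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy grayscale values from a virtual vs. reference H&E patch
virtual   = [200, 180, 150, 120]
reference = [202, 178, 151, 119]
print(round(psnr(virtual, reference), 1))  # 44.2 (dB)
```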
- Research Article
- Citations: 19
- 10.1007/s11307-021-01641-w
- Oct 7, 2021
- Molecular Imaging and Biology
Image-to-Images Translation for Multiple Virtual Histological Staining of Unlabeled Human Carotid Atherosclerotic Tissue.
- Research Article
- 10.3390/data8020040
- Feb 15, 2023
- Data
In recent years, there has been an increased effort to digitise whole-slide images of cancer tissue. This effort has opened up a range of new avenues for the application of deep learning in oncology. One such avenue is virtual staining, where a deep learning model is tasked with reproducing the appearance of stained tissue sections, conditioned on a different, often less expensive, input stain. However, data to train such models in a supervised manner, where the input and output stains are aligned on the same tissue sections, are scarce. In this work, we introduce a dataset of ten whole-slide images of clear cell renal cell carcinoma tissue sections counterstained with Hoechst 33342, CD3, and CD8 using multiplexed immunofluorescence. We also provide a set of over 600,000 patches of size 256 × 256 pixels extracted from these images, together with cell segmentation masks, in a format amenable to training deep learning models. It is our hope that this dataset will further the development of deep learning methods for digital pathology by serving as a benchmark for comparing and evaluating virtual staining models.
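The patch inventory described above (256 × 256 pixel tiles cut from whole-slide images) corresponds to a standard non-overlapping tiling; a minimal sketch, with array sizes chosen only for illustration:

```python
import numpy as np

def extract_patches(wsi, patch_size=256, stride=256):
    """Tile a whole-slide image array into square patches,
    dropping partial tiles at the right/bottom edges."""
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(wsi[y:y + patch_size, x:x + patch_size])
    return patches

# Toy "slide": a 768x1024 single-channel image yields a 3x4 grid of 256x256 patches
wsi = np.zeros((768, 1024), dtype=np.uint8)
patches = extract_patches(wsi)
print(len(patches), patches[0].shape)  # 12 (256, 256)
```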
- Research Article
- Citations: 114
- 10.1016/j.copbio.2006.06.002
- Jun 16, 2006
- Current Opinion in Biotechnology
Molecular imaging of thin mammalian tissue sections by mass spectrometry
- Research Article
- Citations: 1
- 10.1016/j.media.2025.103865
- Feb 1, 2026
- Medical image analysis
Deep learning based label-free virtual staining and classification of human tissues using digital slide scanner.