Abstract

Quantitative oblique back-illumination microscopy (qOBM) is a novel microscopy technology that enables real-time, label-free quantitative phase imaging (QPI) of thick and intact tissue specimens. This approach has the potential to address a number of important biomedical challenges. In particular, qOBM could enable in-situ/in-vivo imaging of tissue during surgery for intraoperative guidance, as opposed to the technically challenging and often unsatisfactory ex-vivo approach of frozen-section-based histology. However, the greyscale phase contrast provided by qOBM differs from the colorized histological contrast most familiar to pathologists and clinicians, limiting potential adoption in the medical field. Here, we demonstrate the use of a CycleGAN (cycle-consistent generative adversarial network), an unsupervised deep learning framework, to transform qOBM images into virtual H&E. We trained CycleGAN models on a collection of qOBM and H&E images of excised brain tissue from a 9L gliosarcoma rat tumor model. We observed successful mode conversion of both healthy and tumor specimens, faithfully replicating features of the qOBM images in the style of traditional H&E. Some limitations were observed, however, including attention-based constraints in the CycleGAN framework that occasionally allowed the model to ‘hallucinate’ features not actually present in the source qOBM images. Strategies for preventing these hallucinations, comprising both improved hardware capabilities and more stringent software constraints, will be discussed. Our results indicate that deep learning could potentially bridge the gap between qOBM and traditional histology, an outcome that could be transformative for image-guided therapy.
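
For readers less familiar with CycleGAN, the sketch below illustrates the kind of unpaired image-to-image translation objective the abstract describes: two generators map between a greyscale qOBM domain and an RGB H&E domain, and a cycle-consistency loss allows training without pixel-registered qOBM/H&E pairs. This is a minimal PyTorch sketch under assumed defaults (toy network architectures, least-squares adversarial loss, a cycle weight of 10); it is not the architecture or training configuration used in this work.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the ResNet-style generators typically used in CycleGAN."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in for the PatchGAN discriminators typically used in CycleGAN."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Domain A: greyscale qOBM phase images (1 channel); domain B: RGB H&E images (3 channels).
G_ab, G_ba = TinyGenerator(1, 3), TinyGenerator(3, 1)      # qOBM -> H&E and H&E -> qOBM
D_a, D_b = TinyDiscriminator(1), TinyDiscriminator(3)

adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()             # least-squares GAN + L1 cycle loss
opt_G = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(list(D_a.parameters()) + list(D_b.parameters()), lr=2e-4)
lambda_cyc = 10.0                                          # cycle-consistency weight (assumed)

def train_step(real_a, real_b):
    """One optimization step on a batch of *unpaired* qOBM (A) and H&E (B) patches."""
    # --- generators: fool both discriminators while preserving cycle consistency ---
    opt_G.zero_grad()
    fake_b, fake_a = G_ab(real_a), G_ba(real_b)
    pred_fb, pred_fa = D_b(fake_b), D_a(fake_a)
    loss_adv = adv_loss(pred_fb, torch.ones_like(pred_fb)) + \
               adv_loss(pred_fa, torch.ones_like(pred_fa))
    # A -> B -> A and B -> A -> B should reconstruct the original inputs
    loss_cyc = cyc_loss(G_ba(fake_b), real_a) + cyc_loss(G_ab(fake_a), real_b)
    (loss_adv + lambda_cyc * loss_cyc).backward()
    opt_G.step()

    # --- discriminators: push real patches toward 1, generated patches toward 0 ---
    opt_D.zero_grad()
    loss_d = 0.0
    for D, real, fake in ((D_a, real_a, fake_a), (D_b, real_b, fake_b)):
        pred_real, pred_fake = D(real), D(fake.detach())
        loss_d = loss_d + adv_loss(pred_real, torch.ones_like(pred_real)) + \
                          adv_loss(pred_fake, torch.zeros_like(pred_fake))
    loss_d.backward()
    opt_D.step()

# Example call with random tensors standing in for unpaired qOBM / H&E patches.
train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 3, 64, 64))
```

The cycle-consistency term is what makes unpaired training possible, but it is also a relatively weak constraint: as noted in the abstract, it does not by itself prevent the generators from hallucinating plausible-looking H&E features absent from the source images.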
