Malignant glioma resection is often the first line of treatment in neuro-oncology. During glioma surgery, discriminating tumor edges at the infiltration zone can be challenging, even when using surgical adjuncts such as fluorescence guidance (e.g., with 5-aminolevulinic acid). Challenging cases in which there is no visible fluorescence include lower-grade gliomas, tumor cells infiltrating beyond the margin as visualized on pre- and/or intraoperative MRI, and even some high-grade tumors. One field of research aiming to address this problem involves inspecting in detail the light emission spectra from different tissues (e.g., tumor vs. normal brain vs. brain parenchyma infiltrated by tumor cells). Hyperspectral imaging measures the emission spectrum at every image pixel, thus combining spatial and spectral information. Assuming that different tissue types have different "spectral footprints," potentially related to higher or lower abundances of fluorescent dyes or auto-fluorescing molecules, the tissue can then be segmented according to type, providing surgeons with a detailed spatial map of what they see. However, the processing pipeline from raw hyperspectral data cubes to maps or overlays of tissue labels, and potentially further molecular information, is complex. This chapter will explore some of the classical methods for the various steps of this process and examine how they can be improved with machine learning approaches. While preliminary work on machine learning in hyperspectral imaging has had relatively limited success in brain tumor surgery, more recent research combines it with fluorescence guidance to obtain promising results. 
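To make the data structure concrete: a hyperspectral cube is commonly represented as a three-dimensional array indexed by two spatial coordinates and one spectral coordinate, so each pixel carries a full emission spectrum. The sketch below uses illustrative shapes and synthetic data, not values from the chapter.

```python
import numpy as np

# Toy hyperspectral cube: (height, width, bands). Real acquisitions would
# load measured data; dimensions here are purely illustrative.
cube = np.random.default_rng(1).random((128, 128, 60))

# Spatial slice: a single-band grayscale image of the scene.
band_image = cube[:, :, 30]          # shape (128, 128)

# Spectral slice: the emission spectrum measured at one pixel.
pixel_spectrum = cube[64, 64, :]     # shape (60,)

# Flatten to (pixels, bands) for per-pixel steps such as preprocessing
# or spectral unmixing.
spectra = cube.reshape(-1, cube.shape[-1])   # shape (16384, 60)
```

This "cube as stacked spectra" view is what makes per-pixel classification and abundance estimation straightforward to express as array operations.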
In particular, this chapter describes a pipeline that isolates biopsies in ex vivo hyperspectral fluorescence images for efficient labeling, extracts all the relevant emission spectra, preprocesses them to correct for various optical properties, and determines the fluorophore abundances in each pixel, which correspond directly to the presence of cancerous tissue. Each step combines classical and deep learning-based methods. Furthermore, the fluorophore abundances are then used in four machine learning models to classify tumor type, WHO grade, margin tissue type, and isocitrate dehydrogenase (IDH) mutation status in brain tumors. The classifiers achieved average test accuracies of 87%, 96.1%, 86%, and 93%, respectively, thus greatly outperforming prior work both with and without fluorescence. This field is new, but these early results show great promise for the feasibility of data-driven hyperspectral imaging for intraoperative classification of brain tumors during fluorescence-guided surgery.
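The abundance-estimation step can be illustrated with classical linear spectral unmixing: each pixel's spectrum is modeled as a non-negative combination of reference fluorophore spectra (endmembers), solved here with non-negative least squares. This is a minimal sketch, not the chapter's actual method; the Gaussian endmembers standing in for PpIX and tissue autofluorescence are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical reference emission spectra (endmembers), one column per
# fluorophore, sampled at the camera's spectral bands. In practice these
# would be measured basis spectra, not toy Gaussians.
wavelengths = np.linspace(600, 720, 60)                   # nm
ppix = np.exp(-0.5 * ((wavelengths - 634) / 10) ** 2)     # PpIX-like peak
autofl = np.exp(-0.5 * ((wavelengths - 650) / 40) ** 2)   # broad background
E = np.stack([ppix, autofl], axis=1)                      # (bands, fluorophores)

def unmix_pixel(spectrum, endmembers):
    """Estimate non-negative fluorophore abundances for one pixel."""
    abundances, _residual = nnls(endmembers, spectrum)
    return abundances

# Simulate one pixel as a known mixture plus noise, then recover it.
true_abundances = np.array([0.7, 0.3])
rng = np.random.default_rng(0)
pixel = E @ true_abundances + 0.01 * rng.normal(size=len(wavelengths))
estimated = unmix_pixel(pixel, E)   # close to [0.7, 0.3]
```

The recovered abundance vector per pixel is exactly the kind of low-dimensional feature the abstract describes feeding into downstream tumor-type, grade, margin, and IDH classifiers.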