Spatially Resolved Transcriptomics (SRT) offers unprecedented opportunities to elucidate cellular arrangements within tissues. Nevertheless, the absence of deconvolution methods that simultaneously model multi-modal features has impeded progress in understanding cellular heterogeneity in spatial contexts. To address this issue, SpaDA, a novel spatially aware domain adaptation method, is developed to integrate multi-modal data (i.e., transcriptomics, histological images, and spatial locations) from SRT and accurately estimate the spatial distribution of cell types. SpaDA utilizes a self-expressive variational autoencoder, coupled with deep spatial distribution alignment, to learn and align spatial and graph representations from spatial multi-modal SRT data and single-cell RNA sequencing (scRNA-seq) data. This strategy facilitates the transfer of cell type annotation information between the two similarity graphs, thereby enhancing the prediction accuracy of cell type composition. The results demonstrate that SpaDA surpasses existing methods in cell type deconvolution and in the identification of cell types and spatial domains across diverse platforms. Moreover, SpaDA excels at identifying spatially colocalized cell types and key marker genes in regions of low-quality measurements, as exemplified by high-resolution mouse cerebellum SRT data. In conclusion, SpaDA offers a powerful and flexible framework for the analysis of multi-modal SRT datasets, advancing the understanding of complex biological systems.
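To make the described architecture more concrete, the following is a minimal illustrative sketch, not the authors' released SpaDA code: it assumes a PyTorch implementation of a variational autoencoder with a self-expressive layer on the latent codes, and uses an RBF-kernel maximum mean discrepancy term as a simple stand-in for the deep spatial distribution alignment between scRNA-seq and SRT embeddings. All class and function names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfExpressiveVAE(nn.Module):
    """Illustrative self-expressive VAE (assumed structure, not the SpaDA release)."""

    def __init__(self, n_genes: int, latent_dim: int, n_samples: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_genes)
        )
        # Self-expression coefficients: each sample's latent code is reconstructed
        # as a combination of the others (assumes full-batch training).
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        C_off = self.C - torch.diag(torch.diag(self.C))           # zero the diagonal
        z_se = C_off @ z                                           # self-expression
        return self.decoder(z_se), mu, logvar, z, z_se


def mmd(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """RBF-kernel MMD between two latent batches (alignment surrogate)."""
    def k(x, y):
        d = torch.cdist(x, y) ** 2
        return torch.exp(-d / d.mean().clamp(min=1e-8))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()


def loss_fn(x, recon, mu, logvar, z, z_se, C, z_other,
            lam=1.0, beta=1e-3, gamma=1.0):
    rec = F.mse_loss(recon, x)                                      # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE prior term
    se = F.mse_loss(z_se, z) + beta * C.abs().sum()                 # self-expression
    align = mmd(z, z_other)                                         # domain alignment
    return rec + kld + lam * se + gamma * align
```

In such a setup, one encoder branch would embed scRNA-seq profiles and another the multi-modal SRT spots; the alignment term pulls the two latent distributions together so that cell type labels learned on the single-cell side can inform the per-spot composition estimates. The specific losses, weights, and graph construction used by SpaDA are described in the full paper.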