Abstract

Spatially Resolved Transcriptomics (SRT) offers unprecedented opportunities to elucidate cellular arrangements within tissues. Nevertheless, the absence of deconvolution methods that simultaneously model multi-modal features has impeded progress in understanding cellular heterogeneity in spatial contexts. To address this issue, SpaDA, a novel spatially aware domain adaptation method, is developed to integrate multi-modal data (i.e., transcriptomics, histological images, and spatial locations) from SRT and accurately estimate the spatial distribution of cell types. SpaDA utilizes a self-expressive variational autoencoder, coupled with deep spatial distribution alignment, to learn and align spatial and graph representations from spatial multi-modal SRT data and single-cell RNA sequencing (scRNA-seq) data. This strategy facilitates the transfer of cell type annotation information across these two similarity graphs, thereby enhancing the prediction accuracy of cell type composition. The results demonstrate that SpaDA surpasses existing methods in cell type deconvolution and in the identification of cell types and spatial domains across diverse platforms. Moreover, SpaDA excels at identifying spatially colocalized cell types and key marker genes in regions of low-quality measurements, as exemplified by high-resolution mouse cerebellum SRT data. In conclusion, SpaDA offers a powerful and flexible framework for the analysis of multi-modal SRT datasets, advancing the understanding of complex biological systems.
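The abstract names two core components: a self-expressive variational autoencoder and a deep distribution alignment between scRNA-seq and SRT representations. As a rough illustration only, not SpaDA's actual implementation (which the abstract does not specify), the PyTorch sketch below shows what these pieces typically look like; the class `SelfExpressiveVAE`, its coefficient matrix `C`, and the choice of a Gaussian-kernel MMD as the alignment term are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class SelfExpressiveVAE(nn.Module):
    """Sketch of a self-expressive VAE: a standard VAE encoder/decoder plus a
    learned coefficient matrix C that reconstructs each cell's latent code as a
    combination of the others; C doubles as a similarity graph over cells.
    Assumes full-batch training so that C matches the batch size."""

    def __init__(self, n_genes: int, n_latent: int, n_cells: int):
        super().__init__()
        self.enc = nn.Linear(n_genes, 2 * n_latent)  # produces mu and log-variance
        self.dec = nn.Linear(n_latent, n_genes)
        # Self-expression coefficients; the loss would penalize diag(C) and ||C||.
        self.C = nn.Parameter(1e-4 * torch.randn(n_cells, n_cells))

    def forward(self, x: torch.Tensor):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_se = self.C @ z  # self-expressive reconstruction of the latent codes
        return self.dec(z), mu, logvar, z, z_se

def mmd(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Gaussian-kernel maximum mean discrepancy, one common way to align two
    latent distributions (here: scRNA-seq vs. SRT embeddings)."""
    def k(x, y):
        d2 = torch.cdist(x, y) ** 2
        return torch.exp(-d2 / d2.mean().clamp(min=1e-8))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

# Hypothetical training objective combining the pieces named in the abstract:
#   loss = reconstruction + KL
#        + ||z - C @ z||^2 + reg(C)         # self-expression / similarity graph
#        + lambda_align * mmd(z_sc, z_srt)  # distribution alignment across domains
```

Under this reading, the learned coefficient matrices for the SRT data and the scRNA-seq data play the role of the two similarity graphs across which cell type annotation information is transferred.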

