Abstract
Spatially resolved transcriptomics (SRT) technologies provide additional spatial position information and tissue images that can be used to better infer spatial cell-cell interactions (CCIs) in processes such as tissue homeostasis, development, and disease progression. However, methods for effectively integrating spatial multimodal data to infer CCIs are still lacking. Here, we propose a deep learning method, called SpaGraphCCI, that integrates features through co-convolution to effectively combine data from different SRT modalities by projecting gene expression and image features into a low-dimensional space. SpaGraphCCI achieves strong performance on datasets from multiple platforms, including single-cell resolution datasets (AUC of 0.860-0.907) and spot resolution datasets (AUC of 0.880-0.965), and outperforms existing deep learning-based spatial cell communication inference methods. SpaGraphCCI is robust to high noise and can effectively improve the inference of CCIs. We test SpaGraphCCI on a human breast cancer dataset and show that it can not only identify proximal cell communication but also infer new distal interactions. In summary, SpaGraphCCI provides a practical tool that enables researchers to decipher spatially resolved cell-cell communication from spatial transcriptome data.
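The abstract describes projecting gene expression and image features into a shared low-dimensional space and propagating them over a spatial graph. The sketch below illustrates that general idea only; it is not the authors' implementation, and all names (DualModalityEncoder, n_genes, n_img, latent_dim) and design choices (a simple linear projection per modality plus one normalized graph-convolution step) are hypothetical assumptions.

```python
# Minimal illustrative sketch, NOT SpaGraphCCI's code: fuse gene expression and
# image-derived features in a shared low-dimensional space, then aggregate over
# a spatial neighbor graph. All names and dimensions are placeholders.
import torch
import torch.nn as nn


class DualModalityEncoder(nn.Module):
    def __init__(self, n_genes: int, n_img: int, latent_dim: int = 64):
        super().__init__()
        self.expr_proj = nn.Linear(n_genes, latent_dim)   # gene-expression branch
        self.img_proj = nn.Linear(n_img, latent_dim)      # image-feature branch
        self.graph_weight = nn.Linear(latent_dim, latent_dim)

    def forward(self, expr, img, adj_norm):
        # Project each modality into the shared low-dimensional space and fuse.
        z = torch.relu(self.expr_proj(expr)) + torch.relu(self.img_proj(img))
        # One graph-convolution step over a row-normalized spatial neighbor graph:
        # each cell aggregates the fused embeddings of its spatial neighbors.
        return torch.relu(self.graph_weight(adj_norm @ z))


# Toy usage with random data: 100 cells, 2000 genes, 128 image features.
n_cells = 100
expr = torch.randn(n_cells, 2000)
img = torch.randn(n_cells, 128)
adj = (torch.rand(n_cells, n_cells) < 0.05).float()       # stand-in spatial graph
adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)

embeddings = DualModalityEncoder(2000, 128)(expr, img, adj_norm)
print(embeddings.shape)  # torch.Size([100, 64])
```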