Abstract

Recent developments in single-cell multiomics analysis have enabled simultaneous detection of multiple traits at the single-cell level, providing deeper insights into cellular phenotypes and functions in diverse tissues. However, it remains challenging to infer joint representations and learn relationships among multiple modalities from complex multimodal single-cell data. Herein, we present scMM, a novel deep generative model-based framework for the extraction of interpretable joint representations and cross-modal generation. scMM addresses the complexity of the data by leveraging a mixture-of-experts multimodal variational autoencoder. The pseudocell generation strategy of scMM compensates for the limited interpretability of deep learning models and enables the discovery of multimodal regulatory programs associated with latent dimensions. Analyses of recently produced datasets validated that scMM facilitates high-resolution clustering with rich interpretability. Furthermore, we show that cross-modal generation by scMM leads to more precise prediction and data integration compared with state-of-the-art and conventional approaches.
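
To illustrate the core idea named above, the following is a minimal sketch, not the authors' implementation, of how a mixture-of-experts multimodal variational autoencoder can combine per-modality posteriors: each modality has its own encoder producing a Gaussian posterior over a shared latent space, and the joint posterior is a uniform mixture of these experts. The encoder architecture, dimensions, and modality names (RNA and ADT counts) here are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Encodes one modality into mean and log-variance of a latent Gaussian."""
    def __init__(self, input_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def moe_sample(mus, logvars):
    """Sample from a uniform mixture of per-modality Gaussian experts.

    mus, logvars: lists of (batch, latent_dim) tensors, one per modality.
    Each cell picks one expert uniformly at random, then reparameterizes.
    """
    batch, _ = mus[0].shape
    k = torch.randint(len(mus), (batch,))               # expert index per cell
    mu = torch.stack(mus)[k, torch.arange(batch)]       # (batch, latent_dim)
    logvar = torch.stack(logvars)[k, torch.arange(batch)]
    eps = torch.randn_like(mu)
    return mu + eps * torch.exp(0.5 * logvar)

# Toy usage with hypothetical dimensions: 2,000 genes and 100 surface proteins
# encoded into a shared 10-dimensional latent space for a batch of 8 cells.
rna_enc, adt_enc = GaussianEncoder(2000, 10), GaussianEncoder(100, 10)
rna, adt = torch.randn(8, 2000), torch.randn(8, 100)
(mu_r, lv_r), (mu_a, lv_a) = rna_enc(rna), adt_enc(adt)
z = moe_sample([mu_r, mu_a], [lv_r, lv_a])               # joint latent sample
```

In such a model, the joint latent sample feeds modality-specific decoders, which is what makes cross-modal generation (e.g., predicting one modality from the other) possible; the full scMM model and training objective are described in the published version.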
