Abstract

The goal of causal representation learning is to map low-level observations to high-level causal concepts in order to learn interpretable and robust representations for various downstream tasks. Latent variable models such as the variational autoencoder (VAE) are frequently leveraged to learn disentangled representations. However, there are often complex non-linear causal relationships underlying the observed data that cannot be captured through disentangled representations or linear dependence assumptions. Further, an independent conditional prior assumption can make learning causal dependencies in the latent space more challenging. We propose a framework, coined SCM-VAE, which uses a priori causal knowledge, a structural causal prior, and a non-linear additive noise structural causal model (SCM) to learn independent causal mechanisms and identifiable causal representations. We conduct theoretical analysis and perform experiments on synthetic and real-world datasets to show the improved quality of learned causal representations and robustness under interventions.
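To make the non-linear additive noise SCM concrete: each latent causal variable is generated as a non-linear function of its parents in the causal graph plus an exogenous noise term, z_i = f_i(pa(z_i)) + eps_i. The sketch below is illustrative only and not the paper's implementation; the adjacency matrix, the choice of tanh as the non-linearity, and the random weights are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical causal graph over 3 latent variables.
# A[i, j] = 1 means z_i is a parent of z_j; upper-triangular => acyclic,
# so index order is already a valid topological order.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

# Illustrative edge weights, masked by the causal graph.
W = rng.normal(size=A.shape) * A

def scm_forward(eps):
    """Map exogenous noise eps to causal variables z.

    Implements the additive noise mechanism z_j = f_j(pa(z_j)) + eps_j,
    here with f_j(x) = tanh(w . x) as a stand-in non-linearity.
    """
    z = np.zeros_like(eps)
    for j in range(len(eps)):
        z[j] = np.tanh(z @ W[:, j]) + eps[j]
    return z

eps = rng.normal(size=3)
z = scm_forward(eps)
```

Note that a root variable (no parents) reduces to z_i = eps_i, since the non-linear term vanishes; interventions can be simulated by overwriting z_j and recomputing its descendants.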
