Abstract

Exploratory factor analysis is a widely used framework in the social and behavioral sciences. Because measurement error is always present in human behavior data, it is important to identify the latent factors that generate the observed data. While most factor analysis methods assume linear relationships in the data-generating process, deep learning models offer more flexible modeling approaches. However, two problems must be addressed. First, interpretation requires scaling assumptions, which can be (at least) cumbersome for deep generative models. Second, deep generative models are typically not identifiable, yet identifiability is required to recover the underlying latent constructs. We developed a model that uses a variational autoencoder, trained with importance-weighted variational inference, as an estimator for a complex factor analysis model. To obtain interpretable results and an identified model, we use a linear factor model with identification constraints in the measurement model. To retain the model's flexibility, we use normalizing-flow latent priors. In an evaluation of performance measures from a talent development program in soccer, our models separated the identified underlying latent dimensions more clearly than traditional PCA analyses.
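The two core ingredients named above, a linear measurement model with identification constraints and an importance-weighted bound on the marginal likelihood, can be illustrated with a minimal sketch. All dimensions, constraint choices (echelon-style zero and unit-loading constraints), and the use of the standard-normal prior as the proposal are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 6 observed items, 2 latent factors.
n_items, n_factors, n_samples = 6, 2, 500

# Loading matrix with echelon-style identification constraints:
# entries above the diagonal fixed to zero, diagonal fixed to one.
Lam = rng.normal(size=(n_items, n_factors)) * 0.8
Lam[0, 1] = 0.0          # zero constraint above the diagonal
Lam[0, 0] = 1.0          # unit-loading constraints on the diagonal
Lam[1, 1] = 1.0
sigma = 0.5              # residual (measurement-error) standard deviation

# Linear measurement model: x = Lambda z + eps
z = rng.normal(size=(n_samples, n_factors))
x = z @ Lam.T + sigma * rng.normal(size=(n_samples, n_items))

def iw_bound(x_i, K=50):
    """K-sample importance-weighted lower bound on log p(x_i).

    For simplicity the proposal q(z|x) here is the prior N(0, I),
    so the importance weights reduce to the likelihood p(x_i | z_k).
    """
    zs = rng.normal(size=(K, n_factors))             # z_k ~ q
    mu = zs @ Lam.T                                  # decoder (measurement) mean
    # log p(x_i | z_k) under a diagonal Gaussian likelihood
    log_lik = -0.5 * np.sum(((x_i - mu) / sigma) ** 2
                            + np.log(2 * np.pi * sigma ** 2), axis=1)
    # numerically stable log-mean-exp over the K samples
    m = log_lik.max()
    return np.log(np.mean(np.exp(log_lik - m))) + m

bound = iw_bound(x[0])
```

Increasing `K` tightens the bound toward `log p(x_i)`; in the full model the decoder stays linear (preserving interpretability) while a learned encoder and a normalizing-flow prior supply the flexibility.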
