Abstract

In recent years, graph learning for smooth signals under Laplacian constraints has attracted increasing attention owing to the wide application of the graph Laplacian matrix in spectral graph theory, machine learning, and graph signal processing. Standard graph learning methods usually assume that graphs are sparse, but the correlations between real-world entities are not always sparse because of common underlying effects. In this paper, we model these common effects as latent variables and assume that the Gaussian graphical model (GGM) under Laplacian constraints is conditionally sparse given the latent variables but marginally non-sparse. Based on this assumption, the graph learning problem is formulated in a regularized maximum marginal likelihood (MML) framework with a sparse-plus-low-rank decomposition. A specialized algorithm is developed to solve the proposed problem by incorporating the Laplacian constraints into a multi-block alternating direction method of multipliers (ADMM) with proximal regularization terms. Experiments on synthetic and real-world data sets demonstrate that the proposed method outperforms the standard method in inferring the sparsity pattern of the conditional graphical model of the observed variables in the presence of latent variables.
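The core modeling assumption, "conditionally sparse given latent variables but marginally non-sparse," can be illustrated with a toy example. The sketch below (a hypothetical illustration, not the paper's code) builds a sparse conditional precision matrix S for a chain graph, subtracts a rank-1 term L induced by a single latent common effect, and shows that the resulting marginal precision S - L loses the sparsity pattern:

```python
# Toy illustration of the sparse-plus-low-rank assumption
# (hypothetical example, not the paper's implementation).

def outer(u, v):
    """Outer product u v^T as a list of lists."""
    return [[ui * vj for vj in v] for ui in u]

def subtract(A, B):
    """Elementwise matrix subtraction A - B."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Sparse conditional precision for a chain graph 0-1-2-3:
# off-diagonal zeros encode missing edges.
S = [
    [ 2, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  2],
]

# Rank-1 low-rank part from one latent variable loading on every node.
h = [0.3, 0.3, 0.3, 0.3]
L = outer(h, h)

# Marginal precision of the observed variables: sparse plus low-rank.
M = subtract(S, L)

zeros_S = sum(1 for row in S for x in row if x == 0)
zeros_M = sum(1 for row in M for x in row if x == 0)
print(zeros_S, zeros_M)  # → 6 0: S is sparse, the marginal M is dense
```

Recovering the zeros of S from samples whose precision is the dense M is exactly what the regularized MML formulation with a sparse-plus-low-rank decomposition targets.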
