Abstract

This paper considers the problem of learning the underlying graph topology of Gaussian Graphical Models (GGMs) from observations. In high-dimensional settings, many existing graph topology learning algorithms achieve low sample complexity by assuming that structural constraints, such as sparsity, hold. Without prior knowledge of graph sparsity, the correctness of their results is difficult to verify. In this paper, we aim to do away with these assumptions by developing algorithms for learning degree-bounded GGMs and separable GGMs without any sparsity priors. The proposed algorithms, which are based only on the conditional independence relations in the data distribution, require minimal structural assumptions while still achieving low sample complexity, and hence are ‘applicable’. Specifically, for any user-defined sparsity parameter $k$, we prove that the proposed algorithms can consistently identify whether a $p$-dimensional GGM is degree-bounded by $k$ (or strongly $k$-separable) with $\Omega(k \log p)$ sample complexity. Moreover, our algorithms exhibit ‘partial’ learning properties whenever the overall graph is not entirely sparse, that is, when not all nodes are degree-bounded (or strongly separable). In this case, we can still learn the sparse portions of the graph, with theoretical guarantees. Numerical results show that existing algorithms fail even in some simple settings where the sparsity assumptions do not hold, whereas our algorithms do not.
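To illustrate the conditional-independence-based idea in spirit only (this is not the paper's algorithm), consider the following minimal Python sketch. In a GGM, two variables are conditionally independent given a set $S$ exactly when their partial correlation given $S$ is zero; a node has degree at most $k$ when some conditioning set of size at most $k$ separates it from all remaining nodes. The helper names `partial_corr` and `is_degree_bounded`, the hard threshold `thresh`, and the exhaustive subset search are illustrative assumptions.

```python
import itertools
import numpy as np

def partial_corr(cov, i, j, S):
    """Partial correlation of variables i and j given the set S,
    read off the inverse of the covariance submatrix on {i, j} | S."""
    idx = [i, j] + list(S)
    prec = np.linalg.inv(cov[np.ix_(idx, idx)])
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def is_degree_bounded(cov, k, thresh=0.1):
    """Brute-force CI-based check (illustrative sketch, not the
    paper's algorithm): every node i must admit a conditioning set
    S of size min(k, p - 1) that makes its partial correlation with
    every remaining node (approximately) zero."""
    p = cov.shape[0]
    for i in range(p):
        others = [j for j in range(p) if j != i]
        separated = False
        for S in itertools.combinations(others, min(k, p - 1)):
            rest = [j for j in others if j not in S]
            if all(abs(partial_corr(cov, i, j, S)) < thresh for j in rest):
                separated = True
                break
        if not separated:
            return False
    return True
```

In practice, `cov` would be a sample covariance estimated from $n$ observations, and the hard threshold would be replaced by a proper statistical test of zero partial correlation; the exhaustive search over conditioning sets is shown only to make the conditional-independence criterion concrete.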
