Abstract

We provide finite sample guarantees for the classical Chow–Liu algorithm [Chow and Liu, IEEE Trans. Inform. Theory, 14 (1968), pp. 462–467] to learn a tree-structured graphical model of a distribution. For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\|Q)$ is at most $\varepsilon$ more than the best possible tree-structured distribution for $P$. We show that if $P$ itself is tree-structured, then the Chow–Liu algorithm with the plug-in estimator for mutual information with $\widetilde{O}(|\Sigma|^3 n \varepsilon^{-1})$ independent and identically distributed samples outputs an $\varepsilon$-approximate tree for $P$ with constant probability. In contrast, for a general $P$ (which may not be tree-structured), $\Omega(n^2 \varepsilon^{-2})$ samples are necessary to find an $\varepsilon$-approximate tree. Our upper bound is based on a new conditional independence tester that addresses an open problem posed by Canonne et al. [Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, ACM, 2018, pp. 735–748]: we prove that for three random variables $X, Y, Z$ each over $\Sigma$, testing if $I(X; Y \mid Z)$ is $0$ or $\geq \varepsilon$ is possible with $\widetilde{O}(|\Sigma|^3 \varepsilon^{-1})$ samples. Finally, we show that for a specific tree $T$, with $\widetilde{O}(|\Sigma|^2 n \varepsilon^{-1})$ samples from a distribution $P$ over $\Sigma^n$, one can efficiently learn the closest $T$-structured distribution in KL divergence by applying the add-1 estimator at each node.

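To make the pipeline in the abstract concrete, the sketch below implements the classical Chow–Liu recipe in Python: estimate the mutual information of every pair of coordinates with the plug-in estimator, take a maximum-weight spanning tree under those weights, and fit each node's conditional distribution with the add-1 (Laplace) estimator. For reference, the conditional mutual information used by the tester is $I(X; Y \mid Z) = \sum_z \Pr[Z = z]\, D\big(p_{XY \mid z} \,\|\, p_{X \mid z}\, p_{Y \mid z}\big)$. This is a minimal illustration rather than the paper's analysis: the function names, the $(m \times n)$ integer sample layout over $\{0, \dots, k-1\}$, and the demo chain at the bottom are assumptions made for this sketch.

```python
import numpy as np

def plugin_mutual_information(x, y, k):
    """Plug-in (empirical) mutual information of two coordinates.

    x, y: length-m integer arrays with values in {0, ..., k-1}.
    """
    m = len(x)
    joint = np.zeros((k, k))
    np.add.at(joint, (x, y), 1.0)          # empirical joint distribution
    joint /= m
    px = joint.sum(axis=1, keepdims=True)  # empirical marginals
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0                         # 0 log 0 = 0 convention
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_tree(samples, k):
    """Chow-Liu: maximum-weight spanning tree under plug-in mutual information.

    samples: (m, n) integer array, one row per i.i.d. sample from {0,...,k-1}^n.
    Returns (parent, child) edges of a tree rooted at coordinate 0.
    """
    n = samples.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = plugin_mutual_information(
                samples[:, i], samples[:, j], k)
    # Prim's algorithm on the complete graph with weights mi (all >= 0).
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = mi[0].copy()          # best weight linking each node to the tree so far
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        v = int(np.argmax(np.where(visited, -np.inf, best)))
        visited[v] = True
        edges.append((int(parent[v]), v))
        improve = (~visited) & (mi[v] > best)
        parent[improve] = v
        best[improve] = mi[v][improve]
    return edges

def add_one_estimator(samples, edges, k):
    """Fit a T-structured distribution with add-1 (Laplace) smoothing per node.

    Returns the smoothed root marginal and, for each edge (u, v), a (k, k) matrix
    whose row b is the smoothed conditional distribution of X_v given X_u = b.
    """
    m = samples.shape[0]
    root_counts = np.bincount(samples[:, 0], minlength=k).astype(float)
    root = (root_counts + 1.0) / (m + k)
    conditionals = {}
    for u, v in edges:
        joint = np.zeros((k, k))
        np.add.at(joint, (samples[:, u], samples[:, v]), 1.0)
        conditionals[(u, v)] = (joint + 1.0) / (joint.sum(axis=1, keepdims=True) + k)
    return root, conditionals

if __name__ == "__main__":
    # Hypothetical demo: a binary Markov chain X0 -> X1 -> X2 with 10% flip noise.
    rng = np.random.default_rng(0)
    m, k = 5000, 2
    x0 = rng.integers(0, k, size=m)
    x1 = x0 ^ (rng.random(m) < 0.1)
    x2 = x1 ^ (rng.random(m) < 0.1)
    samples = np.column_stack([x0, x1, x2]).astype(int)
    edges = chow_liu_tree(samples, k)
    root, cond = add_one_estimator(samples, edges, k)
    print("learned edges:", edges)         # expect the chain edges (0,1) and (1,2)
```

When the tree $T$ is known in advance, skipping `chow_liu_tree` and running only `add_one_estimator` on the fixed edge set corresponds to the last result stated in the abstract.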