Abstract

Directed acyclic graph (DAG) learning plays a key role in causal discovery and many machine learning tasks. Learning a DAG from high-dimensional data poses serious scalability problems. A local-to-global DAG learning approach can scale to high-dimensional data; however, existing local-to-global algorithms employ either the AND-rule or the OR-rule to construct the DAG skeleton. Relying on only one of these rules, they may learn an inaccurate skeleton, leading to unsatisfactory DAG learning performance. To tackle this problem, in this paper we propose an Adaptive DAG Learning (ADL) algorithm. The novel contribution of ADL is that it can simultaneously and adaptively use the AND-rule and the OR-rule to construct an accurate global DAG skeleton. We conduct extensive experiments on both benchmark and real-world datasets, and the experimental results show that ADL significantly outperforms several existing local-to-global and global DAG learning algorithms.
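Since the abstract centers on the AND-rule and OR-rule for assembling locally learned neighbor sets into a global skeleton, the following minimal Python sketch illustrates the distinction between the two rules. It is not the authors' ADL implementation; the `build_skeleton` function and the toy `local` neighbor sets are hypothetical and only show how the rules treat asymmetric local results.

```python
from itertools import combinations

def build_skeleton(neighbors, rule="and"):
    """Combine per-variable neighbor sets into a set of undirected skeleton edges.

    neighbors: dict mapping each variable to the set of variables its local
               learner returned as parent/child candidates.
    rule:      "and" keeps edge (x, y) only if y is in neighbors[x] AND
               x is in neighbors[y]; "or" keeps it if either direction holds.
    """
    edges = set()
    for x, y in combinations(list(neighbors), 2):
        in_x = y in neighbors[x]
        in_y = x in neighbors[y]
        keep = (in_x and in_y) if rule == "and" else (in_x or in_y)
        if keep:
            edges.add(frozenset((x, y)))
    return edges

# Toy example with asymmetric local results, where the two rules disagree:
local = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": set(),   # C's local learner missed the A-C edge
}
print(build_skeleton(local, rule="and"))  # {frozenset({'A', 'B'})}
print(build_skeleton(local, rule="or"))   # adds frozenset({'A', 'C'}) as well
```

As the toy example suggests, the AND-rule tends to drop true edges when one local learner misses them, while the OR-rule tends to admit spurious edges; the adaptive combination described in the abstract is motivated by this trade-off.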

