Abstract

Learning Bayesian networks is known to be an NP-hard problem, which, combined with the growing interest in learning models from high-dimensional domains, makes it necessary to find more efficient learning algorithms. Recent papers have proposed constrained versions of successful and widely used local search algorithms, such as Hill Climbing. One of these algorithm families, constrained Hill Climbing (CHC), greatly improves upon the efficiency of the original approach, obtaining models of slightly lower quality while maintaining its theoretical properties. In this paper, we propose three modifications to the most scalable member of this family, fast constrained Hill Climbing (FastCHC), to improve the quality of its output by relaxing the constraints it imposes, thereby introducing some diversification into the search process. These new approaches aim to adjust the trade-off between the efficiency and the accuracy of the algorithm: they do not modify its complexity and only require a few more search iterations. We perform an intensive experimental evaluation of the proposed modifications, with an extensive comparison between the original algorithms and the new variants covering several scenarios with quite large data sets. Code and data for replicating the experiments and for further use of the algorithms presented in this paper are available at http://simd.albacete.org/supplements/FastCHC.html.
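The abstract describes the CHC idea only at a high level. As an illustration, the following Python sketch shows the general flavour of a constrained Hill Climbing search over arc additions: candidate arcs whose evaluation fails to improve the score are forbidden from the neighborhoods of later iterations, which is the kind of constraint the proposed modifications relax to regain diversification. All names here (`constrained_hill_climbing`, `score_delta`) are hypothetical; a real CHC/FastCHC implementation also handles arc deletion and reversal, acyclicity checks, and efficient decomposable score updates.

```python
import itertools

def constrained_hill_climbing(variables, score_delta, max_iters=1000):
    """Minimal sketch of the constrained Hill Climbing idea (illustrative only).

    `score_delta(dag, arc)` is a hypothetical callback returning the score
    improvement obtained by adding `arc` to the current structure `dag`.
    Cycle checks and the deletion/reversal operators are omitted.
    """
    dag = set()        # current structure: set of (parent, child) arcs
    forbidden = set()  # arcs pruned from the search space by the constraints
    for _ in range(max_iters):
        best_arc, best_delta = None, 0.0
        for arc in itertools.permutations(variables, 2):
            if arc in dag or arc in forbidden:
                continue
            delta = score_delta(dag, arc)
            if delta <= 0:
                # CHC-style constraint: a non-improving arc is never
                # reconsidered, shrinking all future neighborhoods.
                forbidden.add(arc)
            elif delta > best_delta:
                best_arc, best_delta = arc, delta
        if best_arc is None:  # local optimum reached
            break
        dag.add(best_arc)
    return dag
```

The `forbidden` set is what makes the search fast but can also trap it in poorer local optima; relaxing when, and for how long, arcs stay in that set is the knob the modifications in this paper turn.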
