Abstract

A Bayesian network is a graphical model that is widely used to perform probabilistic reasoning. However, learning the structure of a Bayesian network is a complex task. In this paper, we propose a hybrid structure learning algorithm that has two phases: a constraint-based phase that reduces the search space, and a score-and-search phase that employs case-injected genetic algorithms to determine the optimal structure from the reduced space of structures. We adopt a case-injected genetic algorithm-based hybrid approach for structure learning in order to improve learning accuracy across similar problems. A case-injected genetic algorithm augments the Genetic Algorithm (GA) with a case-based memory, and thereby finds near-optimal solutions in fewer generations than GA. Our method stores relevant or partial solutions in a case base while solving problems and reuses those stored solutions on new, similar problems. We use small-to-very-large networks to assess the viability of our approach. A series of experiments is conducted on datasets generated from four benchmark Bayesian networks. We compare our method against a GA-based hybrid approach and a state-of-the-art algorithm, Max-Min Hill Climbing (MMHC). The presented results indicate an improvement of our approach over GA and MMHC in learning Bayesian network structures.
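To make the two-phase idea concrete, the following is a minimal sketch of a case-injected GA searching over network structures restricted to a skeleton produced by a constraint-based phase. All names (`skeleton`, `score`, `case_base`, `inject_period`, and so on) are illustrative assumptions, not the authors' implementation; acyclicity checks and the scoring function are omitted for brevity.

```python
import random

def case_injected_ga(skeleton, score, case_base, pop_size=50,
                     generations=100, inject_period=10, inject_frac=0.1):
    """Search over structures built from `skeleton` (edges allowed by the
    constraint-based phase), periodically injecting stored cases."""
    # Each individual is a set of directed edges drawn from the skeleton.
    def random_individual():
        return {e for e in skeleton if random.random() < 0.5}

    population = [random_individual() for _ in range(pop_size)]

    for gen in range(generations):
        population.sort(key=score, reverse=True)

        # Case injection: replace the worst individuals with stored cases
        # most similar to the current best (similarity = shared edges).
        if case_base and gen % inject_period == 0:
            best = population[0]
            k = max(1, int(inject_frac * pop_size))
            cases = sorted(case_base, key=lambda c: len(c & best),
                           reverse=True)[:k]
            population[-len(cases):] = cases

        # Standard GA step: elitist selection, uniform crossover, mutation.
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {e for e in skeleton
                     if (e in a if random.random() < 0.5 else e in b)}
            if random.random() < 0.1:          # mutation: flip one edge
                e = random.choice(list(skeleton))
                child ^= {e}
            children.append(child)
        population = parents + children

    best = max(population, key=score)
    case_base.append(best)   # store the solution for future, similar problems
    return best
```

In this sketch, the case base grows as problems are solved, so later runs on similar networks start from better building blocks, which is the mechanism the abstract credits for reaching near-optimal solutions in fewer generations than a plain GA.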
