Abstract

Learning is one of the most powerful concepts in artificial intelligence research. It allows a system to observe its environment and automatically adapt its behavior accordingly. The world's best computer backgammon player, on par with human champions, is a program that learns by playing against itself. Machine learning is limited by the learning algorithm and the available data set. Several techniques are available to perform machine learning; decision trees, the naïve Bayes approach, and the more general Bayes net approach are a few choices, with naïve Bayes being an instance of the more general Bayes nets. This paper examines and analyzes the naïve Bayes and decision tree approaches to learning. Various techniques to avoid over-fitting, such as ensemble construction and cross-validation, are also implemented and analyzed. A novel hybrid of the naïve Bayes approach and the decision tree method is presented. By merely changing parameter values, the hybrid system produces a spectrum of learners: at one end lies the naïve Bayes approach, while at the other lies the decision tree technique. The proposed hybrid scheme addresses both the poor performance of naïve Bayes in domains with dependent attributes and the memory consumption problem of decision trees. We analyze this idea and present encouraging experimental results that support the need for such a solution.
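As a minimal illustration of the naïve Bayes approach discussed above (not code from the paper), the sketch below trains a categorical naïve Bayes classifier from scratch: it estimates a class prior and per-attribute conditional probabilities under the conditional-independence assumption, with Laplace smoothing. The data set and all function names are hypothetical.

```python
# Hypothetical sketch of categorical naive Bayes with Laplace smoothing.
# Not the paper's implementation; for illustration only.
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """rows: list of attribute tuples; labels: parallel list of class labels."""
    class_counts = Counter(labels)
    value_counts = Counter()          # (attr_index, value, label) -> count
    values_per_attr = defaultdict(set)
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            value_counts[(i, v, y)] += 1
            values_per_attr[i].add(v)
    return class_counts, value_counts, values_per_attr, len(labels)

def predict(model, row):
    class_counts, value_counts, values_per_attr, n = model
    best, best_lp = None, -math.inf
    for y, cy in class_counts.items():
        lp = math.log(cy / n)         # log prior P(y)
        for i, v in enumerate(row):
            k = len(values_per_attr[i])   # Laplace smoothing over attr values
            lp += math.log((value_counts[(i, v, y)] + 1) / (cy + k))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy weather data (hypothetical): attributes are (outlook, temperature).
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
model = train_naive_bayes(rows, labels)
print(predict(model, ("rain", "hot")))  # -> yes
```

The independence assumption lets each attribute contribute a separate factor to the class score, which is exactly the property that fails in domains with dependent attributes, motivating the hybrid scheme the abstract describes.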
