Abstract

An efficient algorithm exists for learning disjunctive normal form (DNF) expressions in the uniform-distribution PAC learning model with membership queries (J. Comput. System Sci. 55 (1997) 414), but in practice the algorithm can only be applied to small problems. We present several modifications to the algorithm that substantially improve its asymptotic efficiency. First, we show how to significantly improve the time and sample complexity of a key subprogram, which yields corresponding improvements in the bounds on the overall DNF algorithm. We also apply known methods to convert the resulting algorithm into an attribute-efficient algorithm. Furthermore, we develop a technique for lower bounding the sample size required for PAC learning with membership queries under a fixed distribution, and we apply this technique to produce a lower bound on the number of membership queries needed for the uniform-distribution DNF learning problem. Finally, we present a learning algorithm for DNF that is attribute efficient in its use of random bits.
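The cited algorithm (Jackson's Harmonic Sieve) relies on locating heavy Fourier coefficients of the target function using membership queries under the uniform distribution. As a hedged illustration of the kind of subroutine involved, the sketch below estimates a single Fourier coefficient f̂(S) = E_x[f(x)·χ_S(x)] by drawing uniform random inputs and querying the function; the function names and parameters here are illustrative, not the paper's actual subprogram.

```python
import random

def estimate_fourier_coefficient(membership_query, n, S, num_samples=10000):
    """Estimate the Fourier coefficient f^(S) = E_x[f(x) * chi_S(x)] of a
    Boolean function f: {0,1}^n -> {-1,+1}, using uniform random samples
    and a membership-query oracle.

    membership_query: callable returning f(x) in {-1, +1} for a 0/1 list x.
    S: list of variable indices defining the parity character chi_S.
    """
    total = 0.0
    for _ in range(num_samples):
        # Draw x uniformly at random from {0,1}^n.
        x = [random.randint(0, 1) for _ in range(n)]
        # chi_S(x) = (-1)^(sum of x_i for i in S)
        chi = -1 if sum(x[i] for i in S) % 2 == 1 else 1
        total += membership_query(x) * chi
    return total / num_samples
```

By a standard Hoeffding-bound argument, O(log(1/δ)/ε²) samples suffice to estimate the coefficient to within ε with probability 1 − δ; the improvements described in the abstract concern exactly this kind of time/sample cost in the heavy-coefficient search.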
