Abstract

k-decision lists and decision trees play important roles in learning theory as well as in practical learning systems. k-decision lists generalize classes such as monomials, k-DNF, and k-CNF, and like these subclasses they are polynomially PAC-learnable [19]. This leaves open the question of whether k-decision lists can be learned as efficiently as k-DNF. We answer this question negatively in a certain sense, thus disproving a claim in a popular textbook [2]. Decision trees, on the other hand, are not even known to be polynomially PAC-learnable, despite their widespread practical application. We will show that decision trees are not likely to be efficiently PAC-learnable. We summarize our specific results. The following problems cannot be approximated in polynomial time within a factor of $$2^{\log^{\delta} n}$$ for any $$\delta < 1$$, unless $$NP \subseteq DTIME[2^{\mathrm{polylog}\, n}]$$: a generalized set cover, k-decision lists, k-decision lists by monotone decision lists, and decision trees. Decision lists cannot be approximated in polynomial time within a factor of $$n^{\delta}$$, for some constant $$\delta > 0$$, unless NP = P. Also, k-decision lists with $$l$$ 0–1 alternations cannot be approximated within a factor $$\log^{l} n$$ unless $$NP \subseteq DTIME[n^{O(\log \log n)}]$$ (providing an interesting comparison to the upper bound recently obtained in [1]).
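
For readers unfamiliar with the representation class discussed above, the following minimal sketch (illustrative only; the encoding and function names below are assumptions, not the paper's notation) shows how a k-decision list classifies an input: it is a sequence of (term, bit) pairs, each term a conjunction of at most k literals, and the output on an input is the bit attached to the first term the input satisfies.

```python
# Minimal sketch of k-decision list evaluation (illustrative assumption,
# not taken from the paper). A term is a list of literals; the literal
# (i, True) requires x[i] == 1 and (i, False) requires x[i] == 0.
# A k-decision list ends with an empty (always-true) default term.

def satisfies(term, x):
    """Return True if assignment x satisfies every literal in the term."""
    return all(x[i] == want for i, want in term)

def evaluate_decision_list(dlist, x):
    """Return the output bit of the first pair whose term x satisfies."""
    for term, bit in dlist:
        if satisfies(term, x):
            return bit
    raise ValueError("a decision list should end with an always-true term")

# Example: a 2-decision list over x = (x0, x1, x2).
dlist = [
    ([(0, True), (1, False)], 1),  # if x0 and not x1 -> 1
    ([(2, True)], 0),              # elif x2 -> 0
    ([], 1),                       # else -> 1 (default rule)
]

print(evaluate_decision_list(dlist, (1, 0, 1)))  # -> 1 (first rule fires)
print(evaluate_decision_list(dlist, (0, 0, 1)))  # -> 0 (second rule fires)
print(evaluate_decision_list(dlist, (0, 0, 0)))  # -> 1 (default rule)
```

Setting every output bit except the default to 1 recovers a k-DNF, which is one way to see that k-decision lists generalize that class.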
