Abstract

We give an algorithm that with high probability properly learns random monotone DNF with t(n) terms of length ≈ log t(n) under the uniform distribution on the Boolean cube {0,1}^n. For any function t(n) ≤ poly(n) the algorithm runs in time poly(n, 1/ε) and with high probability outputs an ε-accurate monotone DNF hypothesis. This is the first algorithm that can learn monotone DNF of arbitrary polynomial size in a reasonable average-case model of learning from random examples only. Our approach relies on the discovery and application of new Fourier properties of monotone functions which may be of independent interest.
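The following is a minimal sketch, in Python, of the average-case data model the abstract describes: a random monotone DNF target with t(n) terms of length ≈ log t(n), labeled examples drawn uniformly from {0,1}^n. It illustrates only the learning setup, not the paper's algorithm; the function names and the exact term-length convention are assumptions made for illustration.

```python
import math
import random

def random_monotone_dnf(n, t, seed=None):
    """Sample a monotone DNF over n variables with t terms, each term a
    conjunction of ~log2(t) distinct variables chosen uniformly at random
    (an illustrative reading of the abstract's random-DNF model)."""
    rng = random.Random(seed)
    k = max(1, round(math.log2(t)))  # term length ≈ log t(n)
    return [frozenset(rng.sample(range(n), k)) for _ in range(t)]

def evaluate_dnf(terms, x):
    """Return True iff some term has all of its variables set to 1 in x."""
    return any(all(x[i] for i in term) for term in terms)

def uniform_labeled_examples(terms, n, m, seed=None):
    """Draw m labeled examples (x, f(x)) with x uniform on {0,1}^n."""
    rng = random.Random(seed)
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]
        yield x, evaluate_dnf(terms, x)

# Example: a random target with t(n) = n terms and a small uniform sample.
n, t = 50, 50
target = random_monotone_dnf(n, t, seed=0)
sample = list(uniform_labeled_examples(target, n, m=5, seed=1))
```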
