Abstract

We describe a new approach for understanding the difficulty of designing efficient learning algorithms. We prove that the existence of an efficient learning algorithm for a circuit class C in Angluin's model of exact learning from membership and equivalence queries, or in Valiant's PAC model, yields a lower bound against C. More specifically, we prove that any subexponential-time, deterministic exact learning algorithm for C (from membership and equivalence queries) implies the existence of a function f ∈ EXP^NP such that f ∉ C. If C is PAC learnable with membership queries under the uniform distribution, or exactly learnable in randomized polynomial time, we prove that there exists a function f ∈ BPEXP (the exponential-time analog of BPP) such that f ∉ C. For C equal to polynomial-size, depth-two threshold circuits (i.e., neural networks with a polynomial number of hidden nodes), our result shows that efficient learning algorithms for this class would solve one of the most challenging open problems in computational complexity theory: proving the existence of a function in EXP^NP or BPEXP that cannot be computed by circuits from C. We are not aware of any representation-independent hardness results for learning depth-two, polynomial-size neural networks with respect to the uniform distribution. Our approach uses the framework of the breakthrough result of Kabanets and Impagliazzo showing that derandomizing BPP yields non-trivial circuit lower bounds.
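As a rough sketch, the two implications stated above can be written compactly as follows (C denotes the circuit class in question; EXP^NP and BPEXP are as above):

\[
\text{C has a deterministic subexponential-time exact learner (MQ + EQ)} \;\Longrightarrow\; \mathrm{EXP}^{\mathrm{NP}} \not\subseteq \mathrm{C}
\]
\[
\text{C is PAC learnable with MQ under the uniform distribution, or exactly learnable in randomized polynomial time} \;\Longrightarrow\; \mathrm{BPEXP} \not\subseteq \mathrm{C}
\]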
