Abstract

We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. We introduce a neural net learning model and classify several neural net design and optimization problems within the polynomial-time hierarchy. We also investigate how much easier the problems become if the class of concepts to be learned is known a priori. We show that the training problem for 2-cascade neural nets (which have only one hidden unit) is NP-complete, which implies that finding an optimum net to load a set of examples is also NP-complete. We conjecture that training a k-cascade neural net, which is a classical threshold network training problem, is also NP-complete, for each k ≥ 2.
