Abstract

Probably Approximately Correct (PAC) learning theory provides a framework for assessing the learning properties of static models for which the data are assumed to be independently and identically distributed (i.i.d.). One important family of dynamic models to which conventional PAC learning cannot be applied is nonlinear Finite Impulse Response (FIR) models. Using an extension of PAC learning that covers learning with m‐dependent data, the present article evaluates the learning properties of FIR modeling with sigmoid neural networks. The results include upper bounds on the size of the data set required to train FIR sigmoid neural networks, provided that the input data are uniformly distributed. © 2001 John Wiley & Sons, Inc.
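To make the model class concrete, the following is a minimal sketch (not taken from the article) of a nonlinear FIR model realized by a one-hidden-layer sigmoid network: the output at time t depends only on the last few input samples, so regressor windows separated by more than the memory length share no samples, which is the m-dependence structure the extended PAC framework exploits. All names, dimensions, and the uniform input distribution below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def fir_sigmoid_net(u, W, b, c, memory):
    """Nonlinear FIR model: a one-hidden-layer sigmoid network
    applied to a sliding window of the last `memory` + 1 input
    samples. The output y[t] depends only on u[t-memory .. t],
    never on past outputs (finite impulse response)."""
    T = len(u)
    y = np.zeros(T)
    for t in range(memory, T):
        window = u[t - memory : t + 1]      # FIR regressor vector
        hidden = sigmoid(W @ window + b)    # sigmoid hidden layer
        y[t] = c @ hidden                   # linear output layer
    return y

# Uniformly distributed i.i.d. input, as in the article's setting;
# windows more than `memory` steps apart are then independent,
# making the regressor sequence m-dependent.
rng = np.random.default_rng(0)
memory, n_hidden = 3, 5
u = rng.uniform(-1.0, 1.0, size=200)
W = rng.normal(size=(n_hidden, memory + 1))
b = rng.normal(size=n_hidden)
c = rng.normal(size=n_hidden)
y = fir_sigmoid_net(u, W, b, c, memory)
```

Note the FIR property in action: perturbing an input sample more than `memory` steps in the past leaves the current output unchanged, which is what breaks the data into nearly independent blocks for the sample-complexity analysis.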
