Abstract

The probably approximately correct (PAC) learning theory was originally introduced for static models in which the input data are assumed to be i.i.d. In many real applications, however, the datasets and the systems to be modeled are dynamic. This motivates extending the conventional PAC learning theory to typical dynamic models such as finite impulse response (FIR) and autoregressive exogenous (ARX) models. This paper presents such extensions of PAC learning theory and uses the resulting framework to evaluate the learning properties of several families of FIR and ARX neural networks. For ARX models, in addition to the learning properties of the neural models, the stochastic stability of the models is also evaluated.
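For orientation, the two linear model classes named above have the following standard forms (a minimal sketch using generic notation; the symbols $u_t$ for the input, $y_t$ for the output, $e_t$ for the noise, and the orders $n$, $n_a$, $n_b$ are assumed here and need not match the paper's parameterization, whose neural variants replace the linear maps with neural networks):

$$y_t = \sum_{i=1}^{n} b_i\, u_{t-i} + e_t \qquad \text{(FIR)}$$

$$y_t = \sum_{i=1}^{n_a} a_i\, y_{t-i} + \sum_{j=1}^{n_b} b_j\, u_{t-j} + e_t \qquad \text{(ARX)}$$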
