Abstract

The internal dynamics approach realizes a nonlinear state space model without information about the true process states. Consequently, the model’s internal states can be seen as a somewhat artificial tool for realizing the desired dynamic input/output behavior. Models with internal dynamics are most frequently based on an MLP network architecture. A systematic overview of internal dynamics neural networks can be found in [382]. Basically, four types can be distinguished: fully recurrent networks (Sect. 21.1) [406]; partially recurrent networks (Sect. 21.2), with the particularly well-known Elman [76] and Jordan [195] architectures; nonlinear state space networks (Sect. 21.3) proposed by Schenker [342]; and locally recurrent globally feed-forward networks (Sect. 21.4) systematized by Tsoi and Back [382]. Finally, the major differences between internal and external dynamics are analyzed in Sect. 21.5. An overview of and a comparison between the external and internal dynamics approaches on the basis of their fundamental properties is given in [80, 273, 274]. For additional case studies refer to [175].
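To make the idea concrete, the following is a minimal sketch of such an internal-dynamics model in the spirit of a simple recurrent (Elman-type) network: the internal state x(k) is an artificial quantity, updated by a nonlinear state transition and never matched against true process states; only the input/output behavior matters. All dimensions, weight shapes, and the random initialization are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Internal-dynamics model (illustrative sketch):
#   x(k+1) = f(x(k), u(k))   nonlinear state transition
#   y(k)   = g(x(k+1))       output readout
# The state x is an artificial construct realizing the desired
# input/output dynamics; it is not identified with process states.

rng = np.random.default_rng(0)
n_u, n_x, n_y = 1, 4, 1          # input, internal-state, output dimensions

W_xu = rng.standard_normal((n_x, n_u)) * 0.3   # input -> state weights
W_xx = rng.standard_normal((n_x, n_x)) * 0.3   # state feedback (internal dynamics)
W_yx = rng.standard_normal((n_y, n_x)) * 0.3   # state -> output weights

def step(x, u):
    """One time step: update the internal state, then read out the output."""
    x_next = np.tanh(W_xx @ x + W_xu @ u)   # nonlinear transition f (MLP-like layer)
    y = W_yx @ x_next                        # linear output map g
    return x_next, y

# Simulate the input/output behavior for a short input sequence.
x = np.zeros(n_x)
outputs = []
for k in range(5):
    u = np.array([np.sin(0.5 * k)])
    x, y = step(x, u)
    outputs.append(y[0])
```

In training, the weights would be adapted so that the simulated outputs match measured process outputs; the state trajectory itself remains a free by-product of the optimization.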
