Abstract
In feedforward networks, signals flow in only one direction, without feedback. Applications in forecasting, signal processing, and control require explicit treatment of dynamics. Feedforward networks can accommodate dynamics by including past input and target values in an augmented set of inputs, but a much richer dynamic representation results from also allowing internal network feedback. Such models are called recurrent networks; they are used by Jordan (1986) for controlling and learning smooth robot movements and by Elman (1990) for learning and representing temporal structure in linguistics. In Jordan's network, past values of the network output feed back into the hidden units; in Elman's network, past values of the hidden units feed back into themselves. The main focus of this study is the relative forecast performance of Elman-type recurrent networks compared with feedforward networks on deterministic and noisy data. The salient property of the Elman-type recurrent architecture is that the hidden-unit activations (internal states) are fed back at every time step to provide an additional input. This recurrence gives the network dynamical properties that allow it to maintain an internal memory. Exactly how this memory is represented in the internal states is not determined in advance; instead, the network must discover the underlying temporal structure of the task and learn to encode that structure internally. The simulation results indicate that recurrent networks filter noise more successfully than feedforward networks in small as well as large samples.
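The abstract does not include code, but the Elman recurrence it describes, past hidden-unit activations fed back as an additional "context" input at each time step, can be sketched in a few lines. The following is a minimal illustrative sketch in NumPy; the weight names, layer sizes, and the untrained one-step-ahead forecasting example are assumptions for illustration, not the authors' implementation or training procedure.

```python
# Minimal sketch of an Elman-type recurrent network, assuming a single
# tanh hidden layer and a linear output; all names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 1, 8, 1
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context (past hidden) -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

def elman_forward(x_seq):
    """Run a sequence through the network; the hidden state at t-1 is
    fed back as an extra input at t (the Elman 'context' units)."""
    h = np.zeros(n_hidden)            # internal memory, initially empty
    outputs = []
    for x_t in x_seq:
        h = np.tanh(W_xh @ np.atleast_1d(x_t) + W_hh @ h)  # update internal state
        outputs.append(W_hy @ h)                            # one-step-ahead output
    return np.array(outputs)

# Example: push a noisy sine series through the (untrained) network.
# This only illustrates how the recurrence carries state across time steps.
t = np.linspace(0, 4 * np.pi, 100)
series = np.sin(t) + rng.normal(scale=0.1, size=t.shape)
preds = elman_forward(series[:-1])   # candidate forecasts for series[1:]
```

In a forecasting setting such as the one studied here, the weights would be fit by minimizing the one-step-ahead prediction error; the point of the sketch is only the feedback path through W_hh, which is what distinguishes the Elman architecture from a feedforward network with lagged inputs.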