Abstract

Neuro-evolution and neural architecture search algorithms have gained significant interest due to the challenges of designing optimal artificial neural networks (ANNs). While these algorithms have the potential to outperform the best human-crafted architectures, a less common use of them is as a tool for analyzing ANN topologies and structural components. By running these techniques while varying the allowable components, the best-performing architectures for each set of components can be found and compared, allowing a best-case comparison of component capabilities that is more rigorous than simply applying those components within some standard fixed topologies. In this work, we utilize the Evolutionary eXploration of Augmenting Memory Models (EXAMM) algorithm to perform a rigorous examination and comparison of recurrent neural networks (RNNs) applied to time series prediction. Specifically, EXAMM is used to investigate the capabilities of recurrent memory cells as well as various complex recurrent connectivity patterns that span one or more steps in time, i.e., deep recurrent connections. Over 10.56 million RNNs were evolved and trained in 5,280 repeated experiments with varying components. Many modern hand-crafted RNNs rely on complex memory cells (which have internal recurrent connections that only span a single time step) under the assumption that these sufficiently latch information and handle long-term dependencies. However, our results show that networks evolved with deep recurrent connections perform significantly better than those without. More importantly, in some cases the best-performing RNNs consisted of only simple neurons and deep time-skip connections, without any memory cells.
These results strongly suggest that utilizing deep time-skip connections in RNNs for time series prediction not only deserves further, dedicated study, but also demonstrates the potential of neuro-evolution as a means to better study, understand, and train effective RNNs.
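To make the central idea concrete, the sketch below (an illustration only, not EXAMM's actual implementation; all weights and the skip depth `k` are hypothetical) shows a single recurrent neuron whose hidden state receives both the standard recurrence from the previous time step and a deep recurrent connection that skips back `k` steps in time:

```python
import numpy as np

def run_simple_rnn(inputs, k=3, seed=0):
    """Run a single recurrent neuron with a deep time-skip connection.

    The hidden state at step t depends on the input x_t, the previous
    state h_{t-1} (standard recurrence), and the state h_{t-k} reached
    via a connection spanning k time steps (deep recurrence).
    """
    rng = np.random.default_rng(seed)
    # Randomly initialized scalar weights (illustrative, untrained).
    w_in, w_rec, w_skip = rng.normal(size=3) * 0.5
    history = [0.0] * k  # zero-padded buffer of past hidden states
    h = 0.0
    outputs = []
    for x in inputs:
        h_skip = history[-k]  # hidden state from k steps back
        h = np.tanh(w_in * x + w_rec * h + w_skip * h_skip)
        history.append(h)
        outputs.append(h)
    return outputs
```

In the networks the abstract describes, such connections are not fixed by hand; the evolutionary search decides where skip connections of varying depth appear, alongside (or instead of) memory cells.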
