Abstract

Neural architecture search poses one of the most difficult problems in statistical learning, given the vast space of possible architectures. The problem is further compounded for recurrent neural networks (RNNs), where every node in an architecture can be connected to any other node via recurrent connections, which carry information from previous passes through the RNN over weighted connections. Most modern RNNs focus on recurrent connections that pass information from the immediately preceding pass, using gated constructs known as memory cells; however, connections reaching farther back in time, or deep recurrent connections, are also possible. A novel neuro-evolutionary metaheuristic called EXAMM is utilized to conduct extensive experiments evolving RNNs consisting of a suite of memory cells and simple neurons, with and without deep recurrent connections. These experiments evolved and trained 10.56 million RNNs, with results showing that networks with deep recurrent connections perform significantly better than those without; in some cases, the best evolved RNNs consist of only simple neurons and deep recurrent connections. These results strongly suggest that complex recurrent connectivity patterns in RNNs deserve further study, and they also showcase the strong potential of neuro-evolutionary metaheuristic algorithms as tools for understanding and training effective RNNs.
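
To make the distinction concrete, the minimal NumPy sketch below contrasts a standard recurrent connection (reaching back one timestep) with a deep recurrent connection (reaching back several timesteps) on a single simple, non-gated neuron. This is an illustration only, not the authors' EXAMM implementation; the function name, weights, and depths are hypothetical values chosen for demonstration.

```python
import numpy as np

def run_simple_rnn(inputs, w_in, w_rec, recurrent_depths):
    """Run a single simple (non-gated) recurrent neuron over a sequence.

    Each entry in `recurrent_depths` pairs a recurrent weight in `w_rec`
    with how many timesteps back it reaches: depth 1 is the standard
    recurrent connection; depth > 1 is a "deep" recurrent connection.
    """
    T = len(inputs)
    outputs = np.zeros(T)
    for t in range(T):
        pre_activation = w_in * inputs[t]
        for w, depth in zip(w_rec, recurrent_depths):
            if t - depth >= 0:
                # Weighted signal from an earlier pass through the network.
                pre_activation += w * outputs[t - depth]
        outputs[t] = np.tanh(pre_activation)
    return outputs

# Standard recurrence only (depth 1) vs. the same neuron with an added
# deep recurrent connection reaching 5 timesteps back (illustrative values).
x = np.sin(np.linspace(0, 6, 50))
shallow = run_simple_rnn(x, w_in=0.8, w_rec=[0.5], recurrent_depths=[1])
deep = run_simple_rnn(x, w_in=0.8, w_rec=[0.5, 0.3], recurrent_depths=[1, 5])
```

The deep connection gives the neuron direct access to its own output from several passes earlier, rather than forcing that information to survive repeated one-step transitions; this is the connectivity pattern the evolved networks exploit.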
