Abstract

An overview of neural network architectures is presented. Some of these architectures have been created in recent years, whereas others originate from many decades ago. Apart from providing a practical tool for comparing deep learning models, the Neural Network Zoo also uncovers a taxonomy of network architectures, their chronology, and traces back lineages and inspirations for these neural information processing systems.

Highlights

  • The past decade has witnessed a spectacular rise of interest in artificial intelligence, driven by large volumes of data being available for machine learning, decreasing costs for data storage and graphics processing units, and a technical and commercial infrastructure that allows for the commodification of intelligent applications

  • In order to gain insight into the interdependencies between these neural network models, and to support the discovery of new types, we decided to create a taxonomy of neural networks, uncovering some of the inspirations and underlying lineages of network architectures

  • We speculate that this trend is caused by the field of neural information processing systems becoming increasingly embraced by the engineering community, leading to a continued emphasis on practical applicability over biological inspiration and plausibility

Introduction

The past decade has witnessed a spectacular rise of interest in artificial intelligence, driven by large volumes of data being available for machine learning, decreasing costs of data storage and graphics processing units, and a technical and commercial infrastructure that allows for the commodification of intelligent applications. Deep learning, a particular branch of artificial intelligence that involves machine learning using multi-layered neural network models, is generally considered a key technology behind this recent success. To gain insight into the interdependencies between these neural network models, and to support the discovery of new types, we decided to create a taxonomy of neural networks, uncovering some of the inspirations and underlying lineages of network architectures. This effort has resulted in the Neural Network Zoo. For each of the models depicted, we wrote a brief description that includes a reference to the original publication.
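As a minimal illustration of the multi-layered neural network models that such a taxonomy covers, the sketch below implements a forward pass through a two-layer feed-forward network. The layer sizes, weights, and biases are illustrative assumptions chosen for this example, not values taken from the article:

```python
# Minimal sketch of a multi-layered feed-forward neural network.
# All weights and biases are illustrative, hand-picked assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Hidden layer: 2 inputs -> 2 neurons; output layer: 2 -> 1 neuron.
    h = layer(x, weights=[[2.0, -1.0], [-1.5, 2.5]], biases=[0.1, -0.2])
    y = layer(h, weights=[[1.0, 1.0]], biases=[-0.5])
    return y[0]

y = forward([1.0, 0.0])  # a single output in (0, 1)
```

Stacking more calls to `layer` yields the deeper architectures discussed in the sections that follow; the recurrent, convolutional, and other variants differ mainly in how such layers are wired together.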

2.1. Feed Forward Neural Networks
2.2. Recurrent Neural Networks
2.3. Long Short-Term Memory
2.4. Autoencoders
2.5. Hopfield Networks and Boltzmann Machines
2.6. Convolutional Networks
2.7. Generative Adversarial Networks
2.8. Liquid State Machines and Echo State Networks
2.9. Deep Residual Networks
2.10. Neural Turing Machines and Differentiable Neural Computers
2.11. Attention Networks
2.12. Kohonen Networks
2.13. Capsule Networks
Conclusions
