Abstract

The scale of modern neural networks is growing rapidly, with direct hardware implementations providing significant speed and energy improvements over their software counterparts. However, these hardware implementations frequently assume global connectivity between neurons and thus suffer from communication bottlenecks. Such issues are not found in biological neural networks, so it should be possible to develop new architectures that reduce the dependence on global communication by considering the connectivity of biological networks. This paper introduces two reconfigurable locally connected architectures for implementing biologically inspired neural networks in real time. Both proposed architectures are validated using a segmented locomotion model of C. elegans, demonstrating forwards and backwards serpentine motion as well as coiling behaviours. Local connectivity is found to offer up to a 17.5× speed improvement over hybrid systems that combine local and global communication infrastructure. Furthermore, the concept of locality of connections is examined in more detail, highlighting the importance of dimensionality when designing neuromorphic architectures. Convolutional Neural Networks are shown to map poorly onto locally connected architectures despite their apparently local structure, and both the locality and the dimensionality of new neural processing systems are shown to be critical for matching the function and efficiency seen in biological networks.
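To make the notion of local connectivity concrete, the sketch below models a one-dimensional chain of body segments in which each unit exchanges state only with its immediate neighbours, the kind of nearest-neighbour communication pattern a locally connected architecture supports without any global interconnect. The segment count, coupling weights, and update rule are illustrative assumptions and are not taken from the paper's model.

```python
import numpy as np

# Illustrative sketch: a 1-D chain of segments where each unit only
# communicates with its immediate neighbours (no global interconnect).
# Segment count, weights, and the update rule are assumptions made for
# demonstration, not the paper's locomotion model.

N_SEGMENTS = 12                   # hypothetical number of body segments
W_SELF, W_NEIGHBOUR = 0.6, 0.2    # illustrative coupling weights

def local_update(state: np.ndarray, drive: np.ndarray) -> np.ndarray:
    """One time step in which each segment reads only its neighbours."""
    new_state = np.empty_like(state)
    for i in range(len(state)):
        left = state[i - 1] if i > 0 else 0.0
        right = state[i + 1] if i < len(state) - 1 else 0.0
        new_state[i] = np.tanh(W_SELF * state[i]
                               + W_NEIGHBOUR * (left + right)
                               + drive[i])
    return new_state

# A travelling-wave drive propagates activity along the chain,
# loosely analogous to a serpentine locomotion pattern.
state = np.zeros(N_SEGMENTS)
for t in range(50):
    drive = 0.5 * np.sin(0.4 * t - 0.8 * np.arange(N_SEGMENTS))
    state = local_update(state, drive)
```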

Highlights

  • Neural networks are ubiquitous as tools for data analysis and processing, finding applications in many research fields and commercial applications including classification problems and image recognition tasks [1]

  • The concept of locality of connections is considered in more detail, highlighting the importance of dimensionality when designing neuromorphic architectures

  • This paper presents two reconfigurable architectures that are designed with differing levels of communication locality in a manner that is technology independent and inherent to each architecture


Introduction

Neural networks are ubiquitous as tools for data analysis and processing, finding applications in many research fields and commercial settings, including classification problems and image recognition tasks [1]. The scale of these artificial neural networks has grown substantially in recent years, driven largely by increases in processing power and memory capacity; some of the largest networks comprise several thousand neurons, with some research using millions of connections. To date, this rapid advancement in processor speed and scale, which has previously driven significant progress in neural networks, has been achieved mainly through the continued scaling of transistors. The slowing of Moore's Law and the fast-approaching physical limits of transistor scaling mean that new technologies and design paradigms must be developed to allow the continued improvement of processor technologies. Computationally efficient but somewhat approximate models (such as binary or integrate-and-fire models) have enabled significant advances in large-scale neural network performance due to their simplicity.
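The integrate-and-fire models mentioned above can be expressed in a few lines. The following minimal sketch shows a leaky integrate-and-fire neuron that accumulates input current and emits a spike when its membrane potential crosses a threshold; the time step, membrane parameters, and input current are illustrative assumptions rather than values from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameter values (tau, threshold, dt, input current) are illustrative
# assumptions, not taken from the paper.

TAU = 20.0        # membrane time constant (ms)
V_THRESH = 1.0    # spike threshold
V_RESET = 0.0     # reset potential after a spike
DT = 1.0          # simulation time step (ms)

def lif_step(v: float, i_in: float) -> tuple[float, bool]:
    """Advance the membrane potential by one step; return (v, spiked)."""
    v += (DT / TAU) * (-v + i_in)   # leaky integration of the input current
    if v >= V_THRESH:
        return V_RESET, True        # spike and reset
    return v, False

# Drive the neuron with a constant current and count the spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, spiked = lif_step(v, i_in=1.5)
    spikes += spiked
print(f"spikes emitted: {spikes}")
```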
