Abstract

Neuromorphic engineering aims to build (autonomous) systems by mimicking biological systems. It is motivated by the observation that biological organisms—from algae to primates—excel at sensing their environment and reacting promptly to its perils and opportunities. Furthermore, they do so more resiliently than our most advanced machines, at a fraction of the power consumption. It follows that the performance of neuromorphic systems should be evaluated in terms of real-time operation, power consumption, and resiliency to real-world perturbations and noise, using task-relevant evaluation metrics. Yet, following in the footsteps of conventional machine learning, most neuromorphic benchmarks rely on recorded datasets that foster sensing accuracy as the primary measure of performance. Sensing accuracy is but an arbitrary proxy for the system's actual goal—making a good decision in a timely manner. Moreover, static datasets hinder our ability to study and compare closed-loop sensing and control strategies that are central to survival for biological organisms. This article makes the case for a renewed focus on closed-loop benchmarks involving real-world tasks. Such benchmarks will be crucial in developing and progressing neuromorphic intelligence. The shift towards dynamic real-world benchmarking tasks should usher in richer, more resilient, and more robust artificially intelligent systems in the future.
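To make "task-relevant evaluation metrics" concrete, the following minimal Python sketch scores a single closed-loop episode on task outcome, timeliness, and energy rather than on sensing accuracy alone. It is an illustration only: the field names (success, decision_latency_s, energy_j), the deadline and energy budget, and the weighting are placeholder assumptions, not values taken from the article.

    from dataclasses import dataclass

    @dataclass
    class EpisodeResult:
        success: bool              # did the agent achieve the task goal?
        decision_latency_s: float  # mean time from stimulus to action, in seconds
        energy_j: float            # energy consumed over the episode, in joules

    def task_relevant_score(result, deadline_s=0.05, energy_budget_j=1.0):
        # Each term is clipped to [0, 1]; the weights are arbitrary placeholders
        # that a real benchmark would need to set per task.
        outcome = 1.0 if result.success else 0.0
        timeliness = max(0.0, min(1.0, deadline_s / max(result.decision_latency_s, 1e-9)))
        efficiency = max(0.0, min(1.0, energy_budget_j / max(result.energy_j, 1e-9)))
        return 0.5 * outcome + 0.25 * timeliness + 0.25 * efficiency

    # A fast, frugal agent that completes the task scores close to 1.0.
    print(task_relevant_score(EpisodeResult(True, 0.02, 0.4)))

An open-loop accuracy benchmark collapses all of this to the first term; the point of a closed-loop benchmark is that the other terms can only be measured while the system is actually sensing and acting.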

Highlights

  • Despite the significant strides made in neuromorphic engineering in recent years, the field has not yet seen widespread industrial or commercial adoption

  • Building on the needs and requirements identified for neuromorphic benchmarking systems, we present a set of characteristics that are essential for creating benchmarking tasks that properly assess and quantify the performance of neuromorphic systems

  • Conventional Reinforcement Learning (RL) approaches introduce the requirement for operational real-time performance in inference, but not in training, nor do they address the issue of power consumption in their evaluation metrics (see the measurement sketch after this list)

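As a counterpoint to the last highlight, the sketch below shows how a conventional RL-style evaluation loop could be extended to report per-decision latency against a real-time deadline and a crude energy proxy alongside the episodic return. It assumes the gymnasium package is installed and uses a random policy as a stand-in for a trained agent; CONTROL_PERIOD_S and POWER_W are placeholder values, not figures from the article.

    import time
    import gymnasium as gym  # assumed dependency, not used by the article itself

    CONTROL_PERIOD_S = 0.02  # assumed real-time budget per decision (50 Hz control)
    POWER_W = 0.5            # placeholder average power draw of the controller

    env = gym.make("CartPole-v1")
    obs, _ = env.reset(seed=0)

    total_reward, latencies, deadline_misses, done = 0.0, [], 0, False
    while not done:
        t0 = time.perf_counter()
        action = env.action_space.sample()  # stand-in for a trained policy
        latency = time.perf_counter() - t0
        latencies.append(latency)
        deadline_misses += latency > CONTROL_PERIOD_S

        obs, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        done = terminated or truncated

    energy_j = POWER_W * sum(latencies)  # crude proxy: average power x compute time
    print(f"return={total_reward:.0f}, mean latency={1e3 * sum(latencies) / len(latencies):.3f} ms, "
          f"deadline misses={deadline_misses}, energy ~ {energy_j:.5f} J")

A standard RL report would stop at the return; the cost of training and the power drawn by the controller, as the highlight notes, are usually not measured at all.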

Summary

INTRODUCTION

Despite the significant strides made in neuromorphic engineering in recent years, the field has not yet seen widespread industrial or commercial adoption. The most significant strides in computer vision and deep neural networks were spurred by the ImageNet moment (Krizhevsky et al., 2017) and the rise of data-driven systems (Torralba and Efros, 2011), leading to some truly astonishing capabilities, from the ability to achieve human-like (and even super-human) levels of performance under ideal viewing conditions on certain vision tasks (He et al., 2016; Geirhos et al., 2017), to the unsettling ability to realistically replace faces and people in high-definition video (Wang et al., 2021). Such cutting-edge data-driven systems require unprecedentedly large datasets that have only become feasible in terms of size and required computation starting with the release of ImageNet in 2012 and the advent of high-performance computing centres. We finish with concluding remarks for future developments of closed-loop benchmarks to bootstrap the generation of artificial and neuromorphic intelligence.

History of the Analysis of Neuromorphic Benchmarks
Promises of Neuromorphic Systems
DIFFERENT STYLES OF SENSING
EXISTING BENCHMARKS
Neuromorphic Open-Loop Datasets
Conventional Closed-Loop
Simulators
NOVEL NEUROMORPHIC
Looking Beyond Accuracy as a Single
A Case Study for Event-Based
CONCLUDING REMARKS
