Abstract

In this paper, a novel hardware architecture for neuroevolution is presented, aiming to enable the continuous adaptation of systems working in dynamic environments by embedding the training stage directly at the computing edge. It is based on the block-based neural network model, integrated with an evolutionary algorithm that simultaneously optimizes the weights and the topology of the network. Unlike state-of-the-art implementations, the proposed one makes use of advanced dynamic and partial reconfiguration features to reconfigure the network during evolution and, if required, to adapt its size dynamically. This way, the number of logic resources occupied by the network can be adapted by the evolutionary algorithm to the complexity of the problem, the expected quality of the results, or other performance indicators. The proposed architecture, implemented on a Xilinx Zynq-7020 System-on-a-Chip (SoC) FPGA device, reduces the usage of DSPs and BRAMs while introducing a novel synchronization scheme that controls the latency of the circuit. The proposed neuroevolvable architecture has been integrated with the OpenAI toolkit to show how it can be efficiently applied to control problems of variable complexity and dynamic behavior. The versatility of the solution is also assessed by targeting classification problems.

Highlights

  • Artificial Neural Networks (ANN) are computational models inspired by the structure and physiology of the human brain, aiming to mimic its natural learning capabilities

  • We propose using a System-on-a-Chip (SoC) FPGA, in which a dual-core ARM processor and reconfigurable logic are combined in the same chip

  • Once the Block-based Neural Network (BbNN) generates a set of valid outputs, the GSFM asserts an interrupt signal to indicate that the processor can read the output values and generate a new input signal (a processor-side sketch of this handshake is given after these highlights)
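
A minimal processor-side sketch of this handshake is shown below. It is written in C against a hypothetical memory-mapped register layout: the base address, register offsets and control bit are placeholders, not the actual BbNN IP interface. Registration of the interrupt service routine with the SoC interrupt controller is omitted, and the busy-wait on the completion flag merely stands in for whatever synchronization the application uses.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical memory-mapped register layout for the BbNN IP core.
     * The base address and offsets are placeholders, not the actual IP map. */
    #define BBNN_BASE        0x43C00000u
    #define BBNN_CTRL        (*(volatile uint32_t *)(BBNN_BASE + 0x00))
    #define BBNN_INPUT(i)    (*(volatile uint32_t *)(BBNN_BASE + 0x10 + 4u * (i)))
    #define BBNN_OUTPUT(i)   (*(volatile uint32_t *)(BBNN_BASE + 0x40 + 4u * (i)))

    #define BBNN_CTRL_START  0x1u

    static volatile bool bbnn_done = false;

    /* Interrupt service routine attached to the GSFM "outputs valid" line
     * (registration with the platform interrupt controller is omitted). */
    void bbnn_isr(void)
    {
        bbnn_done = true;
    }

    /* Drive one input vector through the network and collect the outputs. */
    void bbnn_run(const uint32_t *in, uint32_t *out, unsigned n_in, unsigned n_out)
    {
        for (unsigned i = 0; i < n_in; i++)
            BBNN_INPUT(i) = in[i];          /* load the new input signals      */

        bbnn_done = false;
        BBNN_CTRL = BBNN_CTRL_START;        /* let the GSFM start propagation  */

        while (!bbnn_done)                  /* wait for the interrupt          */
            ;

        for (unsigned i = 0; i < n_out; i++)
            out[i] = BBNN_OUTPUT(i);        /* read back the valid outputs     */
    }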


Summary

Introduction

Artificial Neural Networks (ANN) are computational models inspired by the structure and physiology of the human brain, aiming to mimic its natural learning capabilities. In addition to the design automation benefits inherent to neuroevolution and the expected acceleration produced by hardware, implementing a neuroevolvable hardware architecture allows the neural network to be trained (and re-trained) on an edge computing device throughout its whole lifetime. This approach enables the continuous adaptation of systems working in dynamic environments. When a network is applied to different problems during different stages of system operation, its size is expected to change. For these reasons, the BbNN implementation we propose in this paper is dynamically scalable.
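
To illustrate how a neuroevolution loop can exploit such a dynamically scalable network, the sketch below evolves both the weights and the grid size of a population of candidate networks. It is only a conceptual outline: the genome layout, mutation rates and the evaluate() stub are assumptions made for this example (in the target system, evaluation would configure the BbNN hardware, partially reconfiguring it if the grid size changed, and run the task), and it is not the evolutionary algorithm described later in the paper.

    #include <stdlib.h>

    /* Illustrative neuroevolution loop for a variable-size block-based network.
     * Sizes, rates and the fitness function are placeholders for this sketch. */

    #define POP_SIZE          16
    #define MAX_ROWS          4
    #define MAX_COLS          8
    #define WEIGHTS_PER_BLOCK 4

    typedef struct {
        int rows, cols;                               /* current grid size (topology)      */
        unsigned char block_cfg[MAX_ROWS][MAX_COLS];  /* per-block data-flow configuration */
        float weights[MAX_ROWS][MAX_COLS][WEIGHTS_PER_BLOCK];
    } genome_t;

    static float frand(void) { return (float)rand() / RAND_MAX; }

    /* Placeholder fitness: a real evaluation would program the BbNN hardware
     * and measure task performance; here it just returns a random score. */
    static float evaluate(const genome_t *g) { (void)g; return frand(); }

    static void mutate(genome_t *g)
    {
        /* Weight mutation: small perturbation of a few weights. */
        for (int r = 0; r < g->rows; r++)
            for (int c = 0; c < g->cols; c++)
                for (int w = 0; w < WEIGHTS_PER_BLOCK; w++)
                    if (frand() < 0.05f)
                        g->weights[r][c][w] += 0.2f * (frand() - 0.5f);

        /* Topology mutation: change the configuration of one block. */
        if (frand() < 0.10f)
            g->block_cfg[rand() % g->rows][rand() % g->cols] = rand() % 4;

        /* Structural mutation: occasionally grow or shrink the grid, which is
         * what makes dynamic scalability of the hardware worthwhile. */
        if (frand() < 0.02f && g->cols < MAX_COLS) g->cols++;
        else if (frand() < 0.02f && g->cols > 1)   g->cols--;
    }

    static void evolve(genome_t pop[POP_SIZE], int generations)
    {
        for (int gen = 0; gen < generations; gen++) {
            float fit[POP_SIZE];
            int best = 0;
            for (int i = 0; i < POP_SIZE; i++) {
                fit[i] = evaluate(&pop[i]);
                if (fit[i] > fit[best]) best = i;
            }
            /* Elitist replacement: clone the best genome and mutate the clones. */
            for (int i = 0; i < POP_SIZE; i++) {
                if (i == best) continue;
                pop[i] = pop[best];
                mutate(&pop[i]);
            }
        }
    }

    int main(void)
    {
        static genome_t pop[POP_SIZE];    /* zero-initialized genomes */
        for (int i = 0; i < POP_SIZE; i++) { pop[i].rows = 2; pop[i].cols = 2; }
        evolve(pop, 100);
        return 0;
    }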

Basic Principles
Related Works
Existing Approaches to Scalability
Proposed BbNN Architecture
Numerical Range for Inputs and Parameters
Fixed-Point Representation Scheme
Approximation of the Activation Function
Proposed Processing Element Architecture
From the Basic PE to the Block-Based Neural Network IP
Management of Latency and Datapath Imbalance
Proposed Evolutionary Algorithm
A New Approach to Build a Scalable BbNN
Logic Resource Utilization and Reconfiguration Times
Case Studies
Classification Domain: The XOR Problem
Control Domain
Online Adaptation for Control in Dynamic Environments
Conclusions and Future Work