Abstract

Extreme Learning Machine (ELM) proposes a non-iterative training method for Single Layer Feedforward Neural Networks that provides an effective solution for classification and prediction problems. Its hardware implementation is an important step towards fast, accurate and reconfigurable embedded systems based on neural networks, extending the range of applications where neural networks can be used, especially where frequent and fast training, or even real-time training, is required. This work proposes three hardware architectures for on-chip ELM training computation, one sequential and two parallel. All three are implemented as parameterizable FPGA IP (Intellectual Property) cores. Results report performance, accuracy, resource usage and power consumption. The analysis is conducted by parametrically varying the number of hidden neurons, the number of training patterns and the internal bit-length, providing a guideline on the resources required and the level of performance that FPGA-based ELM training can provide.
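
For reference, the following is a minimal software sketch of the standard ELM training step the abstract refers to (random, fixed hidden-layer weights and a one-shot pseudoinverse solve for the output weights). It is not the paper's FPGA architectures; the sigmoid activation and uniform weight initialization are assumptions made for illustration.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Non-iterative ELM training for a single-hidden-layer network.

    X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets.
    Hidden weights are random and fixed; only the output weights are solved.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random input weights and biases (never updated, per the ELM scheme).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights in one step via the Moore-Penrose pseudoinverse: beta = H^+ T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The pseudoinverse solve is the computationally dominant step, which is why the paper's sequential and parallel hardware architectures target the on-chip training computation.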
