Abstract

Chaotic time series prediction can be performed by applying different architectures of artificial neural networks (ANNs) that can be implemented on field-programmable gate arrays (FPGAs). The main challenges, however, are reducing hardware resources to obtain faster ANNs and extending the prediction capability to large horizons. Accordingly, one contribution of this work is the introduction of pipeline architectures in which registers are placed between combinational blocks to divide the logic into shorter stages that can run at a faster clock. The case studies are the multilayer perceptron (MLP), the nonlinear autoregressive network with exogenous input (NARX), and the echo state network (ESN). A further contribution is the application of the decimation technique to extend the prediction horizon of the ANNs from 12 to 600 steps ahead. The prediction capabilities of the MLP, NARX, and ESN are compared using eight chaotic time series with different maximum Lyapunov exponents. The pipelined FPGA-based implementations show that the ESN, with a reservoir of at least 30 neurons, guarantees a large prediction horizon of 600 steps ahead. Another important advantage of the ESN is that its FPGA-based implementation can be performed by reusing a single neuron, thus requiring the fewest hardware resources.
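To make the ESN approach summarized above more concrete, the following is a minimal floating-point sketch of an echo state network performing free-running multi-step prediction of a chaotic series. It is not the paper's fixed-point FPGA design: the logistic map as the driving series, the spectral radius of 0.9, the ridge parameter, and the washout length are all illustrative assumptions; only the reservoir size of 30 neurons and the 600-step horizon are taken from the abstract.

```python
import numpy as np

# Illustrative stand-in for a chaotic series (an assumption, not one of the
# paper's eight benchmark series): the logistic map x_{k+1} = 4 x_k (1 - x_k).
def logistic_map(n, x0=0.3):
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
    return x

rng = np.random.default_rng(0)
N = 30                       # reservoir size reported as sufficient in the abstract
series = logistic_map(2000)

# Random input and reservoir weights; the reservoir matrix is rescaled to a
# spectral radius below 1 (0.9 here, an assumed value) for the echo state property.
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the series and collect its states.
states = np.zeros((len(series), N))
x = np.zeros(N)
for k in range(len(series) - 1):
    x = np.tanh(W_in[:, 0] * series[k] + W @ x)
    states[k + 1] = x

# Ridge-regression readout trained for one-step-ahead prediction
# (washout of 100 samples and ridge 1e-6 are assumed values).
washout, ridge = 100, 1e-6
X = states[washout:]          # state at step k has seen inputs up to step k-1
y = series[washout:]          # so mapping states[k] -> series[k] is one step ahead
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

# Free-running prediction over a 600-step horizon: each prediction is fed
# back as the next input, so no future samples of the series are used.
horizon = 600
u = series[-1]
x_run = states[-1].copy()
preds = []
for _ in range(horizon):
    x_run = np.tanh(W_in[:, 0] * u + W @ x_run)
    u = x_run @ W_out
    preds.append(u)
```

One possible reading of the decimation technique (an interpretation, not a detail given in the abstract) is to train the network on a downsampled series, e.g. `series[::D]`, so that a 12-step-ahead prediction at the decimated rate spans 12*D samples of the original series; with D = 50 this corresponds to the 600-step horizon reported above.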
