Abstract

In the era of big data, large-scale classification and prediction problems pose new challenges that the traditional von Neumann architecture struggles to address, calling for new computational paradigms. Photonic reservoir computing is a promising candidate for such problems. Reservoir Computing (RC) has three layers: the input layer, the reservoir layer, and the output layer. The reservoir layer is a randomly interconnected network of neurons that is independent of the task being performed, so a single reservoir can serve multiple tasks, as only the output layer needs to be trained. This task independence also provides an opportunity to process multiple tasks in parallel. Unfortunately, the optoelectronic RC architectures in the literature do not exploit this capability. Therefore, in this paper, we propose a multi-layer optoelectronic hardware architecture for parallel RC. Our architecture employs time-division multiplexing to execute jobs in parallel, and its reservoir implementation is based on the Delayed Feedback Reservoir (DFR) model. In our experiments, we study the performance of different configurations of the proposed architecture on the NARMA task and an analog speech recognition task. We show that our architecture can outperform some of the leading single-layer architectures by up to 90% on the NARMA task while performing analog speech recognition in parallel, and that it closely matches the performance of leading multi-layer photonic RC architectures, with an 8% increase in error due to parallel processing. The proposed high-speed architecture has a power consumption of ~50 W for a 4-layer network.
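The RC pipeline summarized above, a fixed random reservoir whose output layer alone is trained, can be illustrated in software. The following NumPy sketch models a discrete-time DFR: a single nonlinear node with a delay line, whose taps act as time-multiplexed "virtual nodes". All parameter values (node count, mask, feedback strength) and the one-step-memory toy task are illustrative assumptions, not details of the paper's optoelectronic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal discrete-time Delayed Feedback Reservoir (DFR) sketch.
# Parameter values below are illustrative assumptions.
N_VIRTUAL = 50          # virtual nodes per delay period
ETA, GAMMA = 0.5, 0.8   # input scaling and feedback strength (assumed)

def run_dfr(u, mask):
    """Drive the DFR with a 1-D input sequence u; return the state matrix."""
    x = np.zeros(N_VIRTUAL)
    states = np.empty((len(u), N_VIRTUAL))
    for k, uk in enumerate(u):
        prev = x[-1]                      # feedback from the previous period
        for j in range(N_VIRTUAL):
            # each virtual node sees the masked input plus its neighbor's
            # state, emulating propagation along the delay line
            x[j] = np.tanh(ETA * mask[j] * uk + GAMMA * prev)
            prev = x[j]
        states[k] = x
    return states

def train_readout(states, target, ridge=1e-6):
    """Only the linear output layer is trained (ridge regression)."""
    X = np.hstack([states, np.ones((len(states), 1))])   # add bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)

mask = rng.uniform(-1, 1, N_VIRTUAL)      # fixed random input mask
u = rng.uniform(0, 0.5, 500)              # random drive signal
y = np.roll(u, 1)                         # toy target: one-step-delayed input
S = run_dfr(u, mask)
W = train_readout(S[50:], y[50:])         # discard warm-up transient
pred = np.hstack([S[50:], np.ones((450, 1))]) @ W
print("correlation with target:", round(float(np.corrcoef(pred, y[50:])[0, 1]), 3))
```

Because the reservoir states `S` do not depend on the target, the same state matrix could feed several independently trained readouts, which is the property that makes parallel multi-task RC attractive.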
