Abstract

We propose a scalable photonic architecture for the implementation of feedforward and recurrent neural networks that perform classification of handwritten digits from the MNIST database. Our experiment exploits off-the-shelf optical and electronic components and currently achieves a network size of 16,384 nodes. Both network types are designed within the reservoir computing paradigm, with randomly weighted input and hidden layers. Using various feature extraction techniques (e.g., histograms of oriented gradients, zoning, and Gabor filters) and a simple training procedure consisting of linear regression and a winner-takes-all decision strategy, we demonstrate numerically and experimentally that a feedforward network achieves a classification error rate of 1%, which is state of the art among experimental implementations and remains competitive with more advanced algorithmic approaches. We also investigate recurrent networks in numerical simulations by explicitly activating the temporal dynamics, and predict a performance improvement over the feedforward configuration.
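The training procedure described above — a fixed, randomly weighted hidden layer whose linear readout is trained by regression, with a winner-takes-all decision — can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic data, network sizes, tanh nonlinearity, and ridge regularization term are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_nodes, n_classes = 200, 64, 256, 10

# Synthetic stand-in for extracted feature vectors (e.g., HOG, zoning,
# or Gabor-filter features of MNIST digits).
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, n_classes, n_samples)

# Fixed random input weights: in reservoir computing these are never trained.
W_in = rng.standard_normal((n_features, n_nodes))

# Nonlinear node activations (tanh chosen here; the physical nonlinearity
# of a photonic implementation would differ).
H = np.tanh(X @ W_in)

# Train only the linear readout, via regularized least squares
# on one-hot class targets.
T = np.eye(n_classes)[y]
ridge = 1e-3
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_nodes), H.T @ T)

# Winner-takes-all decision: the class whose readout is largest wins.
pred = np.argmax(H @ W_out, axis=1)
train_acc = np.mean(pred == y)
```

Because the hidden layer is untrained, the entire learning problem reduces to one linear solve, which is what makes the approach attractive for hardware implementations.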
