Abstract

This manuscript presents the design of a deep differential neural network (DDNN) for pattern classification. First, we propose a three-layer DDNN topology whose learning laws are derived from a Lyapunov analysis, justifying local asymptotic convergence of the classification error and of the DDNN weights. Then, an extension of the DDNN to an arbitrary number of hidden layers is analyzed. The learning laws for this general form of the DDNN contribute to the deep learning framework a signal classifier with a biologically inspired, dynamic structure. The DDNN is used to classify electroencephalographic signals from volunteers performing a graphical identification test. The classification results show a marked increase in accuracy, from 82 percent with one hidden layer to 100 percent with three hidden layers. Working with the DDNN instead of a static deep neural network (SDNN) offers several advantages: processing time and training period are reduced by up to almost 100 times, and classification accuracy is higher with fewer hidden layers than an SDNN requires, since SDNN performance depends strongly on the topology and the number of neurons in each layer. The DDNN also employs fewer neurons, owing to its induced feedback characteristic.
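As a point of reference, a minimal single-layer sketch in the style of classical differential neural network identifiers may help fix ideas; the paper's actual three-layer topology, gains, and exact learning laws are not given in this abstract, so the matrices $A$, $K$, $P$, $Q$ and the activation $\sigma$ below are assumptions drawn from the standard differential neural network literature:

\[
\dot{\hat{x}}(t) = A\,\hat{x}(t) + W(t)\,\sigma\big(\hat{x}(t)\big), \qquad
\dot{W}(t) = -K\,P\,\Delta(t)\,\sigma\big(\hat{x}(t)\big)^{\top}, \qquad
\Delta(t) := \hat{x}(t) - x(t),
\]

with $A$ Hurwitz, $K > 0$ a learning gain, and $P = P^{\top} > 0$ solving the Lyapunov equation $A^{\top}P + PA = -Q$ for some $Q > 0$. A candidate function $V = \Delta^{\top} P \Delta + \operatorname{tr}\!\big(\tilde{W}^{\top} K^{-1} \tilde{W}\big)$, where $\tilde{W}$ is the deviation of $W$ from its ideal value, then yields $\dot{V} \le 0$ along trajectories, because the weight law cancels the cross term $2\Delta^{\top} P \tilde{W} \sigma(\hat{x})$; this is the kind of Lyapunov argument behind the local asymptotic convergence claimed above.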
