Abstract
Training deep neural networks with the backpropagation algorithm is considered implausible from a biological point of view. Many recent publications propose elaborate models for biologically plausible variants of deep learning, typically defining success as reaching a test accuracy of around 98% on the MNIST dataset. Here we examine how far we can go in classifying handwritten digits (MNIST) with biologically plausible learning rules in a network with one hidden layer and one readout layer. The hidden-layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (principal/independent component analysis or sparse coding) that can be implemented by local learning rules. We show that high dimensionality of the hidden layer is more important for high performance than the global features extracted by PCA, ICA, or SC. Experiments on the CIFAR10 object recognition task lead to the same conclusion, indicating that this observation is not entirely problem specific. Unlike biologically plausible deep learning algorithms derived from approximations of backpropagation, we focus here on shallow networks with only one hidden layer. For large hidden layers, fixed, randomly initialized weights or random Gabor filters (RP/RG) yield better classification performance than hidden weights trained with unsupervised methods such as principal/independent component analysis (PCA/ICA) or sparse coding (SC). We therefore conclude that, for large hidden layers, unsupervised training does not lead to better performance than fixed random projections or Gabor filters.
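The architecture described above, a fixed random-projection hidden layer followed by a trained readout, can be sketched as follows. This is a minimal illustration on synthetic stand-in data (not the paper's actual experimental setup); the dimensions, the ridge-regression readout fit, and all variable names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MNIST-like data: 784-dim inputs, 10 classes.
n, d, n_hidden, n_classes = 500, 784, 1000, 10
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)

# Fixed random projection (RP): hidden weights are drawn once and never trained.
W_hidden = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n_hidden))
H = np.maximum(X @ W_hidden, 0.0)  # ReLU hidden activations

# Only the readout layer is trained; here a ridge-regression fit to
# one-hot targets serves as a simple stand-in for a local learning rule.
T = np.eye(n_classes)[y]
reg = 1e-3
W_out = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)

pred = np.argmax(H @ W_out, axis=1)
train_acc = np.mean(pred == y)
```

Because only `W_out` is learned while `W_hidden` stays fixed, enlarging `n_hidden` is the single knob that raises capacity, which is the regime the abstract's conclusion concerns.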
More From: Vestnik komp'iuternykh i informatsionnykh tekhnologii