Abstract

We show that simple linear classification of pairwise products of convolutional features achieves near state-of-the-art performance on some standard labelled image databases. Specifically, we found test classification error rates of under 0.5% on the MNIST handwritten-digit image database, and error rates of under 19% and under 44% on the CIFAR-10 and CIFAR-100 RGB image databases respectively. Since the number of weights in such a classifier grows with the square of the number of features, we discuss how a pairwise-products classifier can be implemented in a single-hidden-layer feedforward network (SLFN) architecture in which the hidden-unit function is a simple quadratic nonlinearity: we call this a Quadratic Neural Network (QNN). We compare this method with setting the input weights of a QNN randomly, and find that optimal performance can still be achieved provided the hidden layer is sufficiently large. This analysis provides insight into why ‘extreme learning machines’ can achieve classification performance equal to or better than that obtained with backpropagation training.
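The random-input-weight variant described above can be sketched compactly. The key identity is that a quadratic hidden unit h = (w·x)² = Σᵢⱼ wᵢwⱼxᵢxⱼ is a fixed linear combination of the pairwise products xᵢxⱼ, so a linear readout over many such units spans the pairwise-products classifier without storing O(d²) weights explicitly. Below is a minimal sketch, not the authors' implementation: the hidden width `n_hidden`, the ridge parameter `reg`, and the Gaussian weight initialisation are illustrative assumptions, and `X` is assumed to hold precomputed convolutional features with `Y` as one-hot labels.

```python
import numpy as np

def qnn_fit(X, Y, n_hidden=2000, reg=1e-3, seed=0):
    """Fit a QNN with random, fixed input weights.

    Hidden unit j computes h_j = (w_j . x)^2 (the quadratic nonlinearity);
    only the linear output layer is trained, via ridge-regularised
    least squares, as in extreme-learning-machine training.
    Shapes: X is (n_samples, n_features), Y is (n_samples, n_classes).
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random input weights, fixed (never trained).
    W = rng.standard_normal((n_features, n_hidden)) / np.sqrt(n_features)
    H = (X @ W) ** 2  # hidden activations: simple quadratic nonlinearity
    # Output weights: beta = (H^T H + reg * I)^{-1} H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, beta

def qnn_predict(X, W, beta):
    """Predict class labels: linear readout of quadratic hidden units."""
    return np.argmax(((X @ W) ** 2) @ beta, axis=1)
```

Consistent with the abstract's observation, such a network should approach the performance of a directly trained pairwise-products classifier only as `n_hidden` grows large, since each random unit samples just one linear combination of the pairwise products.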
