Abstract

We propose a parallel training framework for convolutional neural networks (CNNs) aimed at small-sample learning. In this framework, we model the feature filtering process and show that a Sadowsky energy distribution arises in the model. Based on this distribution, the weights of the convolutional kernels can be rearranged after each update according to special cases. With this rearrangement, each CNN in the framework produces a different predicted probability, especially for easily misclassified samples, which avoids the low predicted probabilities that a traditional CNN may produce. The class with the maximum predicted probability across the CNNs is chosen as the final prediction. Our framework achieves better handwritten digit classification on small samples than a one-stage CNN and converges faster than multi-stage CNNs.
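Below is a minimal sketch of the max-probability selection rule described in the abstract, assuming a toy PyTorch setup. The architecture, the names `SmallCNN` and `ensemble_predict`, and the ensemble size are illustrative assumptions, and the Sadowsky-based kernel rearrangement itself is not reproduced, since the abstract does not specify it.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A toy CNN for 28x28 grayscale digits (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def ensemble_predict(models, x):
    """For each sample, pick the class whose predicted probability is
    highest across all member networks (the max-probability rule)."""
    with torch.no_grad():
        # Stack softmax outputs: (n_models, batch, n_classes)
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    # Maximum probability over models for each class, then argmax over classes.
    best_over_models, _ = probs.max(dim=0)   # (batch, n_classes)
    return best_over_models.argmax(dim=1)    # (batch,)

# Usage: three parallel networks; in the paper each member would have its
# kernel weights rearranged differently after every update (omitted here).
models = [SmallCNN().eval() for _ in range(3)]
batch = torch.randn(4, 1, 28, 28)
print(ensemble_predict(models, batch))
```

Taking the maximum over member networks, rather than averaging, matches the abstract's claim that at least one member assigns a high probability to easily misclassified samples.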
