Abstract

Background. Because the preceding approaches to improving the MNIST image dataset error rate lack a clear structure that would allow them to be reproduced and strengthened, a formalization of the performance improvement is considered. Objective. The goal is to strictly formalize a strategy for reducing the MNIST dataset error rate. Methods. An algorithm for achieving better performance by expanding the training data and boosting with ensembles is suggested. The algorithm uses the designed concept of training data expansion. Coordinating the concept with the algorithm defines a strategy of error rate reduction. Results. In relative terms, the performance of a single convolutional neural network on the MNIST dataset has been improved by almost 30 %. With boosting, the performance reaches a 0.21 % error rate, meaning that only 21 of 10,000 handwritten digits are not recognized. Conclusions. Training data expansion is crucial for reducing the MNIST dataset error rate; boosting is ineffective without it. Applying the stated approach has an impressive impact on reducing the MNIST dataset error rate, using only 5 or 6 convolutional neural networks against the 35 used in the benchmark work.

Highlights

  • The MNIST (Modified National Institute of Standards and Technology) database is widely used for training and testing in the field of machine learning [1, 2]

  • None of the preceding approaches for improving the MNIST image dataset error rate has a clear structure that would allow it to be reproduced and strengthened

  • The goal is to strictly formalize a strategy of the reduction. This goal is reached by fulfilling the following tasks: 1. Construction of a single convolutional neural network (CNN) whose performance on the MNIST dataset is better than a 0.35 % error rate


Summary

Background

Because the preceding approaches to improving the MNIST image dataset error rate lack a clear structure that would allow them to be reproduced and strengthened, a formalization of the performance improvement is considered. The goal is to strictly formalize a strategy for reducing the MNIST dataset error rate. An algorithm for achieving better performance by expanding the training data and boosting with ensembles is suggested. The algorithm uses the designed concept of training data expansion. The performance of a single convolutional neural network on the MNIST dataset has been improved by almost 30 %. Training data expansion is crucial for reducing the MNIST dataset error rate. Applying the stated approach has an impressive impact on reducing the MNIST dataset error rate, using only 5 or 6 convolutional neural networks against the 35 used in the benchmark work
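The two ingredients of the strategy, training data expansion and ensemble prediction, can be sketched as follows. This is a minimal illustration, not the paper's exact method: its specific distortions are not described here, so simple one-pixel translations of the 28×28 images stand in for the expansion, and plain softmax averaging stands in for the ensemble step. The function names are hypothetical.

```python
import numpy as np

def expand_with_shifts(images, labels,
                       shifts=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Expand a batch of 28x28 images with one-pixel translations.

    Stand-in for the paper's training data expansion concept: each
    shift produces a relabeled copy of the whole batch, so four shifts
    grow the training set fivefold (original + 4 shifted copies).
    """
    out_imgs, out_lbls = [images], [labels]
    for dy, dx in shifts:
        shifted = np.roll(np.roll(images, dy, axis=1), dx, axis=2)
        out_imgs.append(shifted)
        out_lbls.append(labels)
    return np.concatenate(out_imgs), np.concatenate(out_lbls)

def ensemble_predict(per_network_probs):
    """Average per-network class probabilities, then take the argmax.

    per_network_probs: list of (n_samples, n_classes) arrays, one per
    CNN in the ensemble (e.g. the 5 or 6 networks mentioned above).
    """
    return np.mean(per_network_probs, axis=0).argmax(axis=1)

# Toy demo: 10 blank "digits" expanded to 50 training samples.
imgs = np.zeros((10, 28, 28))
lbls = np.arange(10)
big_imgs, big_lbls = expand_with_shifts(imgs, lbls)
print(big_imgs.shape)  # (50, 28, 28)
```

In practice each of the 5 or 6 CNNs would be trained on the expanded set, and `ensemble_predict` would combine their softmax outputs on the 10,000 test digits.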

