Abstract

The back-propagation (BP) algorithm is usually used to train convolutional neural networks (CNNs) and has made great progress in image classification. It updates the weights by gradient descent, so the farther a sample is from its target, the greater its contribution to the weight update. As a result, the influence of samples that are classified correctly but lie close to the classification boundary is diminished. This paper defines the classification confidence as the degree to which a sample belongs to its correct category, and divides the samples of each category into dangerous and safe sets according to a dynamic classification confidence threshold. A new learning algorithm is then presented that penalizes the loss function with the dangerous samples rather than all samples, so that the CNN pays more attention to dangerous samples and learns effective information more accurately. Experiments carried out on the MNIST dataset and three sub-datasets of CIFAR-10 show that for MNIST the accuracy of the unimproved CNN reached 99.246%, while that of PCNN reached 99.3%; for the three sub-datasets of CIFAR-10, the accuracies of the unimproved CNN are 96.15%, 88.93%, and 94.92%, respectively, while those of PCNN are 96.44%, 89.37%, and 95.22%, respectively.
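The penalty mechanism described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the classification confidence is taken to be the softmax probability of the true class, a fixed threshold stands in for the paper's dynamic per-category threshold, and the penalty weight is an assumed hyperparameter.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def penalized_loss(logits, labels, threshold=0.9, penalty=0.5):
    """Cross-entropy plus an extra term computed only over 'dangerous'
    samples, i.e. those whose confidence in their true class falls below
    the threshold. Both threshold and penalty are illustrative values."""
    probs = softmax(logits)
    idx = np.arange(len(labels))
    conf = probs[idx, labels]            # classification confidence per sample
    ce = -np.log(conf + 1e-12)           # per-sample cross-entropy
    danger = conf < threshold            # dangerous / safe split
    loss = ce.mean()
    if danger.any():                     # penalize only the dangerous samples
        loss += penalty * ce[danger].mean()
    return loss
```

Under this sketch, confidently classified batches reduce to the ordinary cross-entropy, while batches containing low-confidence (dangerous) samples incur an additional loss, steering gradient descent toward the samples near the decision boundary.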

Highlights

  • With the improvement of the computational processing capabilities and the growth of data, deep learning approaches [1] have attracted extensive attention

  • The effectiveness of the new learning algorithm is verified by comparing the classification accuracy of PCNN and convolutional neural networks (CNNs), where PCNN and CNN denote the convolutional neural network trained by the new algorithm and by the traditional BP algorithm, respectively

  • The experiments are conducted on CIFAR-10 [24] and MNIST [25] datasets, which have been divided into the training and test sets

Introduction

With the improvement of computational processing capabilities and the growth of data, deep learning approaches [1] have attracted extensive attention. CNNs [2] have made huge contributions to object detection [3], image classification [4], semantic segmentation [5], and so on. This is attributed to their ability to construct high-level features from low-level ones, and thereby learn the feature hierarchy of images. To boost the performance of CNNs, existing strategies usually proceed along two lines. Sun et al [6]

