Abstract

This paper investigates image classification with limited or no annotations but abundant unlabelled data. We propose DBP (Differential-weighted Global Optimum of BP Neural Network) to make the performance of the BP neural network more stable. Specifically, the best weights found during training are saved as a potential global optimum, and, for the first time, the BP neural network is combined with these potential global weights to adjust the parameters in the backward feedback process. When the model falls into a local optimum, we replace the current parameters with the saved potential global optimal weights and continue optimization. In addition, we consider EP, CNN, and SIFT image features and conduct experiments on eight standard datasets. The results show that DBP in most cases outperforms state-of-the-art supervised and semi-supervised learning methods.
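The abstract describes the core mechanism only at a high level. The following is a minimal Python/NumPy sketch of one plausible reading of it: keep a copy of the best weights seen so far as the "potential global optimum", and when training stalls (taken here as a sign of a local optimum) restore those weights before continuing. The toy network, the plateau-detection threshold, and all variable names are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of the DBP idea from the abstract (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a one-hidden-layer network; the paper instead uses a BP
# neural network on image features such as EP, CNN, and SIFT.
X = rng.normal(size=(200, 16))
y = (X[:, :2].sum(axis=1) > 0).astype(float).reshape(-1, 1)
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def forward(W1, W2):
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))
    return h, out

def loss(out):
    return float(np.mean((out - y) ** 2))

lr = 0.5
best_loss = np.inf
best_W = None                 # the saved "potential global optimum" weights
patience, stall = 20, 0       # plateau threshold (assumption)

for step in range(2000):
    h, out = forward(W1, W2)
    cur = loss(out)

    # Save the best weights seen so far as the potential global optimum.
    if cur < best_loss - 1e-6:
        best_loss, best_W, stall = cur, (W1.copy(), W2.copy()), 0
    else:
        stall += 1

    # If progress stalls, treat it as a local optimum and replace the
    # current parameters with the saved potential global optimum.
    if stall >= patience and best_W is not None:
        W1, W2 = best_W[0].copy(), best_W[1].copy()
        stall = 0

    # Standard backpropagation updates for the toy network.
    grad_out = 2 * (out - y) * out * (1 - out) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final loss:", loss(forward(W1, W2)[1]), "best loss:", best_loss)

Under this reading, the saved weights act as a checkpoint that the optimizer can fall back to; how the paper "combines" them with the running parameters (e.g., direct replacement versus a weighted blend) is not specified in the abstract.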
