Abstract

The residual convolutional neural network (R-CNN) has become a promising method for image recognition in deep learning applications. Application accuracy, as a key indicator, is closely related to the filter weights in trained R-CNN models. To make filters work at full capacity, we find that lower relevancy between filters in the same layer promotes higher accuracy in R-CNN applications. Building on this observation, we propose an improved R-CNN training method that achieves higher accuracy and better generalization ability. The main focus of this paper is controlling the update of filter weights during model training. The key mechanism computes the relevancy between filters in the same layer, quantified by a correlation coefficient, e.g., the Pearson Correlation Coefficient (PCC). The mechanism accepts updated filter weights with higher probability when their correlation coefficient is lower, and vice versa. To validate our proposal, we conduct experiments using PCC on residual networks. The experiments demonstrate that the improved training method is a promising means of achieving better generalization ability and higher recognition accuracy (by 0.52%-1.83%) for residual networks.
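
To illustrate the update mechanism described above, the following is a minimal sketch, not the paper's implementation: it flattens the filters of one convolutional layer, computes the pairwise PCC between them, and keeps a proposed weight update with a probability that decreases as the mean absolute correlation increases. The function names (`mean_abs_pcc`, `accept_update`) and the PyTorch-based formulation are illustrative assumptions.

```python
# Sketch of PCC-guided acceptance of filter-weight updates (assumed details).
import torch

def mean_abs_pcc(weights: torch.Tensor) -> torch.Tensor:
    """Mean absolute PCC over all filter pairs in one conv layer.

    `weights` has shape (out_channels, in_channels, kH, kW); each filter is
    flattened to a vector before computing pairwise correlations.
    """
    flat = weights.flatten(start_dim=1)           # (out_channels, D)
    pcc = torch.corrcoef(flat)                    # pairwise PCC matrix
    n = pcc.shape[0]
    off_diag = pcc - torch.eye(n)                 # drop self-correlations
    return off_diag.abs().sum() / (n * (n - 1))

def accept_update(old_w: torch.Tensor, new_w: torch.Tensor) -> torch.Tensor:
    """Keep `new_w` with probability (1 - mean |PCC|), otherwise keep `old_w`."""
    p_accept = 1.0 - mean_abs_pcc(new_w)
    return new_w if torch.rand(()) < p_accept else old_w

# Usage: a proposed update for a layer with 16 filters of shape 3x3x3.
old = torch.randn(16, 3, 3, 3)
new = old + 0.01 * torch.randn_like(old)
updated = accept_update(old, new)
```

The design intent is that strongly correlated (redundant) filters make a proposed update less likely to be adopted, nudging training toward more diverse filters in each layer.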
