Abstract

Deep learning has attracted much attention in research and industry in recent years. Behind its success, there is still room for improvement. In particular, it is difficult to determine whether a test sample can be represented effectively by a deep network before examining the final result. In this paper, we propose a dynamic boosting strategy based on the reconstruction error of deep networks. We use the reconstruction error to determine whether a result is reliable. From the perspective of prediction intervals, we demonstrate that as the reconstruction error increases, the prediction interval widens; the classification result is therefore unreliable when the reconstruction error exceeds a predetermined threshold. Since we can record both the reconstruction error and the classification error for all samples in the training set, we can learn an additional boosting model alongside the deep network to improve its performance. An important factor in learning the boosting model is choosing an appropriate threshold for selecting training samples. At test time, we first check whether the reconstruction error of a test sample exceeds the threshold to decide whether the boosting model should be used. If it is used, the final result is the average of the outputs of the deep network and the boosting model. We conducted experiments on two widely used classification datasets and an air quality dataset. The experiments show that our boosting strategy is effective in improving classification performance. We tested several boosting models, all of which reduce the test error to some extent under appropriate parameter settings.
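The test-time decision rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names (`deep_net`, `boost_model`, `recon_error_fn`) and the averaging of class-probability vectors are assumptions based on the abstract's description.

```python
import numpy as np

def predict_with_boosting(x, deep_net, boost_model, recon_error_fn, threshold):
    """Hypothetical sketch of the test-time rule from the abstract:
    if a sample's reconstruction error exceeds the threshold, the deep
    network's output is deemed unreliable and is averaged with the
    output of a separately trained boosting model."""
    probs = deep_net(x)                # class-probability vector from the deep network
    if recon_error_fn(x) > threshold:  # reconstruction error too large: result unreliable
        probs = (probs + boost_model(x)) / 2.0  # average the two models' outputs
    return int(np.argmax(probs))       # predicted class label
```

When the reconstruction error stays below the threshold, the deep network's prediction is used unchanged; only borderline samples pay the cost of evaluating the second model.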
