Abstract

This article proposes an Enhanced Stochastically Robust and Optimized Bag-of-Words (ESRO-BoW) modeling technique that simultaneously addresses two problems in BoW modeling: robustness against random initialization and optimal model-order selection. To address these problems, the classification performance of multiple executions of the BoW technique is treated as a discrete random variable, and the ESRO-BoW is developed such that convergence in mean is guaranteed for the resulting sequence of random variables. The BoW model order is tuned so that the expected value of the limit of the classification-performance random variable is maximized. Hence, the ESRO-BoW both achieves robustness against random initializations and selects the optimal BoW model order. To evaluate its efficiency, the ESRO-BoW is applied to the classification of the Caltech 101 image set, where strong performance is obtained. Comparison with state-of-the-art approaches for classifying the Caltech 101 image set demonstrates the superiority of the suggested ESRO-BoW modeling technique.
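To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the mechanism the abstract describes: repeating a randomly initialized BoW pipeline until the running mean of its classification accuracy stabilizes (an empirical stand-in for convergence in mean), and then selecting the vocabulary size with the highest converged mean. The synthetic descriptors, the k-means codebook, the linear SVM classifier, and the stopping tolerance are all assumptions introduced for illustration.

```python
# Sketch of the repeated-run / model-order-selection idea from the abstract.
# Assumptions: synthetic local descriptors, k-means codebook, linear SVM,
# and a running-mean stopping rule approximating "convergence in mean".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_synthetic_images(n_images=120, n_classes=3, descs_per_image=40, dim=16):
    """Simulate per-image sets of local descriptors with class-dependent means."""
    class_means = rng.normal(scale=3.0, size=(n_classes, dim))
    images, labels = [], []
    for i in range(n_images):
        c = i % n_classes
        images.append(class_means[c] + rng.normal(size=(descs_per_image, dim)))
        labels.append(c)
    return images, np.array(labels)

def bow_accuracy(images, labels, vocab_size, seed):
    """One BoW run: k-means codebook -> histogram features -> linear SVM accuracy."""
    all_descs = np.vstack(images)
    km = KMeans(n_clusters=vocab_size, n_init=1, random_state=seed).fit(all_descs)
    hists = np.array([
        np.bincount(km.predict(d), minlength=vocab_size) / len(d) for d in images
    ])
    Xtr, Xte, ytr, yte = train_test_split(hists, labels, test_size=0.3, random_state=seed)
    return LinearSVC().fit(Xtr, ytr).score(Xte, yte)

def mean_accuracy_until_stable(images, labels, vocab_size, tol=1e-3, max_runs=30):
    """Repeat runs with new random seeds until the running mean accuracy stabilizes."""
    accs, prev_mean = [], None
    for seed in range(max_runs):
        accs.append(bow_accuracy(images, labels, vocab_size, seed))
        mean = float(np.mean(accs))
        if prev_mean is not None and abs(mean - prev_mean) < tol:
            break
        prev_mean = mean
    return mean, len(accs)

if __name__ == "__main__":
    images, labels = make_synthetic_images()
    best_k, best_mean = None, -1.0
    for k in (8, 16, 32, 64):  # candidate BoW model orders (vocabulary sizes)
        mean_acc, runs = mean_accuracy_until_stable(images, labels, k)
        print(f"vocab={k:3d}  mean accuracy={mean_acc:.3f}  (runs={runs})")
        if mean_acc > best_mean:
            best_k, best_mean = k, mean_acc
    print(f"selected model order: {best_k} (mean accuracy {best_mean:.3f})")
```

In this sketch, the model order maximizing the stabilized mean accuracy plays the role of the optimal BoW model order described in the abstract; the paper's actual procedure and convergence guarantees are more formal than this empirical stopping rule.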
