An important issue in statistical modeling is determining model complexity from the scale of the data so as to effectively mitigate overfitting when big data are unavailable. We adopt a data-driven approach that automatically determines the number of components in the model. To extract more robust features, we propose a data-driven two-layer structure visual dictionary learning (DTSVDL) framework. It divides visual dictionary learning into two layers: an attribute layer and a detail layer. In the attribute layer, the attributes of the image dataset are learned by a data-driven Bayesian nonparametric model. In the detail layer, the detailed information over the attributes is further explored and refined, and each attribute is weighted by the number of effective observations associated with it. Our approach has three main advantages: (1) the two-layer structure makes the learned visual dictionary more expressive; (2) the number of components in the attribute layer is determined automatically from the data; (3) because the components are determined by the scale of the visual words, our model mitigates the overfitting problem well. In addition, compared with stacked autoencoders, stacked denoising autoencoders, LeNet-5, speeded-up robust features, and the pretrained ImageNet-VGG-F deep learning model, our approach achieves satisfactory image categorization results on two benchmark datasets; specifically, it attains higher categorization performance than these classical approaches on the 15 scene categories and action datasets. We conclude that DTSVDL possesses good generality, derived from the attribute information, as well as excellent discrimination, derived from the detailed information. In other words, the visual dictionary learned by our algorithm is more expressive and discriminative.
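The sketch below illustrates the data-driven component selection described above, assuming a Dirichlet process mixture as the attribute-layer Bayesian nonparametric model (the abstract does not name the specific model). It uses scikit-learn's `BayesianGaussianMixture` with a Dirichlet process prior and synthetic descriptors; the descriptor data, truncation level, and weight threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: inferring the number of attribute-layer components
# from data with a Dirichlet process mixture. Illustrative stand-in
# for the paper's Bayesian nonparametric model; descriptor extraction
# is simplified to synthetic data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for local descriptors (e.g., SURF) pooled
# from a training image set: three latent clusters in 64-D space.
descriptors = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(200, 64))
    for c in (-2.0, 0.0, 2.0)
])

# A Dirichlet process prior lets the truncated mixture switch off
# unneeded components, so the effective count is data-driven rather
# than fixed in advance.
dpm = BayesianGaussianMixture(
    n_components=20,  # generous truncation level, pruned by the prior
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(descriptors)

# Components with non-negligible posterior weight play the role of the
# learned "attributes"; their effective observation counts correspond
# to the attribute weights passed to the detail layer.
weights = dpm.weights_
active = weights > 1e-2  # assumed pruning threshold
print(f"effective components: {active.sum()} of {len(weights)}")
print("effective observations per attribute:",
      np.round(weights[active] * len(descriptors)).astype(int))
```

Run on the synthetic data above, the mixture typically retains about three components, matching the latent structure; this mirrors how the attribute layer's size adapts to the scale of the visual words instead of being hand-tuned.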