Abstract
Multi-feature-space representation is common practice in computer vision applications. Traditional features such as HOG, SIFT, and SURF individually encapsulate certain discriminative cues for visual classification. Likewise, each layer of a deep neural network generates representations of a different order. In this paper we present a novel approach for such multi-feature representation learning using Adaptive Boosting (AdaBoost). The general practice in AdaBoost [8] is to concatenate the components of the feature spaces and train base learners, marking each training example simply as correctly or incorrectly classified. We posit that multi-feature-space learning should instead be viewed as a form of cooperative multi-agent learning. To this end, we propose a mathematical framework that leverages the performance of the base learners over each feature space, gauges the difficulty of the training space, and finally makes soft weight updates rather than the strict binary weight updates prevalent in regular AdaBoost. This is made possible by the periodic sharing of response states among our learner agents within the boosting framework. Theoretically, such a soft weight update policy allows infinitely many combinations of weight updates on the training space, compared with only two possibilities in AdaBoost. This opens up the opportunity to distinguish 'more difficult' examples from 'less difficult' ones. We test our model on traditional multi-feature representations of the MNIST handwritten character dataset and the 100-Leaves classification challenge. We consistently outperform traditional boosting and variants of multi-view boosting in terms of accuracy, while margin analysis reveals that the proposed method fosters a more confident ensemble of learner agents. As an application of our model in conjunction with a deep neural network, we tackle the challenging task of retinal blood vessel segmentation from fundus images of the DRIVE dataset, using kernel dictionaries drawn from the layers of an unsupervised, stacked autoencoder network. Our work opens a new avenue of research for combining a popular statistical machine learning paradigm with deep network architectures.
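To make the soft-update idea concrete, the sketch below illustrates one plausible reading of the abstract, assuming decision stumps as base learners, one learner agent per feature view, and a per-example "difficulty" equal to the fraction of views that misclassify the example (a value in [0, 1] replacing the binary correct/incorrect flag of standard AdaBoost). The function names, the exact update rule, and the choice of base learner are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of multi-view boosting with soft weight updates.
# Assumptions (not from the paper): decision stumps as base learners,
# difficulty = fraction of views misclassifying an example, and a
# standard AdaBoost-style confidence term alpha per round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def soft_multiview_boost(views, y, n_rounds=10):
    """views: list of (n_samples, d_v) feature matrices, one per feature space; y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # shared sample weights over the training space
    ensemble = []
    for _ in range(n_rounds):
        learners, preds, w_errs = [], [], []
        for Xv in views:                  # one "learner agent" per feature view
            clf = DecisionTreeClassifier(max_depth=1)
            clf.fit(Xv, y, sample_weight=w)
            p = clf.predict(Xv)
            learners.append(clf)
            preds.append(p)
            w_errs.append(np.sum(w * (p != y)))
        preds = np.asarray(preds)         # shared "response states" of all agents
        # Per-example difficulty: fraction of views that misclassify it (soft, in [0, 1]).
        difficulty = (preds != y).mean(axis=0)
        eps = float(np.clip(np.mean(w_errs), 1e-10, 1.0 - 1e-10))
        alpha = 0.5 * np.log((1.0 - eps) / eps)   # round confidence, AdaBoost-style
        # Soft update: the weight change scales with difficulty instead of a binary flag.
        w *= np.exp(alpha * (2.0 * difficulty - 1.0))
        w /= w.sum()
        ensemble.append((alpha, learners))
    return ensemble


def predict(ensemble, views):
    """Sign of the alpha-weighted vote of every learner agent across all rounds."""
    score = np.zeros(views[0].shape[0])
    for alpha, learners in ensemble:
        for clf, Xv in zip(learners, views):
            score += alpha * clf.predict(Xv)
    return np.sign(score)
```

Because the multiplicative factor exp(alpha * (2*difficulty - 1)) varies continuously with the number of agreeing views, an example missed by every view is up-weighted more strongly than one missed by a single view, which is one way to read the abstract's claim of "infinitely many combinations of weight updates" versus AdaBoost's two.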