Abstract

Object-class recognition is one of the most challenging fields of pattern recognition and computer vision. Currently, most authors represent an object as a collection of parts and their mutual spatial relations. Therefore, two types of information are extracted: local information describing each part, and contextual information describing the (spatial) context of the part, i.e., the spatial relations between the remaining parts and the current one. We define a generalized correlogram descriptor and represent the object as a constellation of such generalized correlograms. Using this representation, both local and contextual information are gathered into the same feature space. We take advantage of this representation in the learning stage by using feature selection with boosting, which learns both types of information simultaneously and very efficiently. Simultaneously learning both types of information proves to be a faster approach than dealing with them separately. Our method is compared with state-of-the-art object-class recognition systems by evaluating both the accuracy and the cost of the methods.

Keywords: Feature Selection, Contextual Information, Local Property, Spatial Relation, Model Part
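To make the descriptor concrete, the sketch below shows one plausible form of a generalized correlogram: for a chosen reference part, it accumulates a joint histogram over spatial relations (log-style distance and angle bins relative to the reference) and the local-feature label of every other part, so local and contextual information land in a single feature vector. The bin counts, the uniform distance binning, and the discrete part labels are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def generalized_correlogram(points, labels, ref_idx,
                            n_dist_bins=4, n_angle_bins=8,
                            n_label_bins=None, max_dist=1.0):
    """Joint histogram of (distance bin, angle bin, local-feature bin)
    of all parts relative to the reference part `ref_idx`.

    points : (N, 2) array of part positions
    labels : (N,) array of discrete local-feature labels per part
    Returns a normalized histogram of shape
    (n_dist_bins, n_angle_bins, n_label_bins).
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels, dtype=int)
    if n_label_bins is None:
        n_label_bins = int(labels.max()) + 1

    hist = np.zeros((n_dist_bins, n_angle_bins, n_label_bins))
    ref = points[ref_idx]
    for i, (p, lab) in enumerate(zip(points, labels)):
        if i == ref_idx:
            continue  # the reference part describes, it is not counted
        d = np.linalg.norm(p - ref)
        ang = np.arctan2(p[1] - ref[1], p[0] - ref[0]) % (2 * np.pi)
        db = min(int(d / max_dist * n_dist_bins), n_dist_bins - 1)
        ab = int(ang / (2 * np.pi) * n_angle_bins) % n_angle_bins
        hist[db, ab, lab] += 1

    total = hist.sum()
    if total > 0:
        hist /= total  # normalize so the descriptor is a distribution
    return hist
```

An object would then be represented as a constellation of such histograms, one per reference part; because each bin mixes a spatial relation with a local-feature label, a boosting-based feature selector choosing individual bins picks up local and contextual cues in the same pass.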
