Abstract

Robust long-term positioning for autonomous mobile robots is essential for many applications. Key to a successful visual SLAM system is correctly recognizing objects and labeling where the robot is. Local image features are widely used for constructing object recognition systems because they are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine distortion. In this paper, we propose an object recognition method based on the bag-of-words model. The main idea comprises three steps. First, a set of local image patches is sampled with a keypoint detector, and each patch is described by a scale-invariant feature transform (SIFT) descriptor. Then, outliers are removed with the RANSAC algorithm, and the resulting distribution of descriptors is quantized against a pre-specified codebook, converting it into a histogram of votes for codebook centers. Finally, a KNN classifier labels images using the resulting global descriptor vector. Experimental results show that the proposed method performs better than previous methods.
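The following is a minimal sketch of the bag-of-visual-words pipeline described above, assuming OpenCV for SIFT and scikit-learn for the codebook (k-means) and KNN classifier; the function names, codebook size, and placeholder image paths are illustrative and not taken from the paper, and the RANSAC outlier-removal step is omitted since the abstract does not specify how matches are established.

```python
# Sketch of a SIFT + bag-of-words + KNN pipeline (assumed libraries: OpenCV, scikit-learn).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier


def extract_sift_descriptors(image_path):
    """Detect keypoints and compute SIFT descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(img, None)
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)


def build_codebook(descriptor_sets, n_words=50):
    """Cluster the pooled training descriptors into a visual-word codebook."""
    return KMeans(n_clusters=n_words, random_state=0).fit(np.vstack(descriptor_sets))


def bow_histogram(descriptors, codebook):
    """Quantize descriptors against the codebook; return a normalized histogram of votes."""
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)


if __name__ == "__main__":
    # Placeholder training data: replace with real image paths and labels.
    train_paths, train_labels = ["scene_a.jpg", "scene_b.jpg"], [0, 1]

    # Training: pool descriptors, build the codebook, encode images, fit KNN.
    train_desc = [extract_sift_descriptors(p) for p in train_paths]
    codebook = build_codebook(train_desc)
    X_train = np.array([bow_histogram(d, codebook) for d in train_desc])
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)

    # Classification: encode a query image as a global histogram and predict its label.
    query_hist = bow_histogram(extract_sift_descriptors("query.jpg"), codebook)
    print(knn.predict([query_hist]))
```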
