Abstract

This paper demonstrates the implementation of Content-Based Image Retrieval (CBIR) algorithms on a large image set. The implementation is used to match query images against a previously stored geotagged image database for the purpose of vision-based indoor navigation. Feature extraction and matching are demonstrated using two well-known key-point detection CBIR algorithms: Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The key-point matching results obtained with Brute Force and FLANN (Fast Library for Approximate Nearest Neighbors) matchers at various KNN levels are compared for both SIFT and SURF. The algorithms are implemented on the Hadoop MapReduce framework integrated with the Hadoop Image Processing Interface (HIPI) and the Open Source Computer Vision Library (OpenCV). The experiments show that SIFT with KNN levels 4, 5, and 6 gives the highest matching accuracy in comparison to the other methods.
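
The feature-extraction and matching pipeline summarized above can be illustrated with a minimal single-image-pair sketch in Python using OpenCV. The file names, FLANN index parameters, and ratio-test threshold below are illustrative assumptions, and the distributed Hadoop/HIPI MapReduce layer used in the paper is not shown here.

```python
import cv2

# Load a query image and one geotagged database image in grayscale
# (file names are placeholders, not from the paper).
query_img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
db_img = cv2.imread("db_geotagged.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT key-points and compute descriptors for both images.
sift = cv2.SIFT_create()
kp_q, des_q = sift.detectAndCompute(query_img, None)
kp_d, des_d = sift.detectAndCompute(db_img, None)

# FLANN matcher with a KD-tree index, suitable for SIFT's float descriptors
# (parameter values here are common defaults, not the paper's settings).
index_params = dict(algorithm=1, trees=5)  # algorithm=1 -> FLANN_INDEX_KDTREE
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# k-nearest-neighbour matching; k corresponds to the "KNN level" varied
# in the experiments (e.g. 4, 5, or 6).
k = 4
matches = flann.knnMatch(des_q, des_d, k=k)

# Keep only distinctive matches via a Lowe-style ratio test between the
# best and second-best neighbour (0.75 is an assumed threshold).
good = [m[0] for m in matches
        if len(m) > 1 and m[0].distance < 0.75 * m[1].distance]
print(f"{len(good)} good matches out of {len(matches)} query key-points")
```

A Brute Force variant of the same sketch would simply replace the FLANN matcher with `cv2.BFMatcher(cv2.NORM_L2)`; ranking database images by the number of good matches yields the retrieval result used for localization.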
