Abstract

Scene image matching is often used for positioning a visually navigated autonomous robot. The robot memorizes the scene as an image at each navigation point in the teaching mode, and recognizes that it has returned to the same position when, in the playback mode, the current outside scene matches the memorized image. The scene matching is usually accomplished by feature-based image matching methods such as SIFT or SURF. However, the matching results of such methods are greatly affected by changes in illumination, so it is important to know which method is robust to illumination change. Several performance evaluations of these matching methods have been reported, but they do not focus on the illumination change problem. In this paper, we present a performance comparison of feature-based image matching methods under illumination change in outdoor scenes, assuming their use for visual navigation. We also encounter another problem when conducting such a comparison for visual navigation: the matching score gradually increases as the robot approaches the matching point and gradually decreases as it moves away, which makes it difficult to define the correct matching response (ground truth). We present a method for providing this correct response.
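To make the notion of a "matching score" concrete, the following is a minimal sketch of the usual feature-based pipeline the abstract refers to: local descriptors (such as SIFT's 128-D vectors) from the memorized and current images are paired by nearest-neighbor search, and candidate pairs are filtered with Lowe's ratio test; the surviving match count serves as the score. This is a generic illustration with synthetic descriptors, not the paper's own comparison procedure or ground-truth method.

```python
import numpy as np

def ratio_test_score(desc_a, desc_b, ratio=0.8):
    """Count descriptor matches that pass Lowe's ratio test.

    desc_a, desc_b: (N, D) arrays of local feature descriptors
    (e.g. SIFT's 128-D vectors). A higher count suggests the two
    scenes are more likely the same navigation point.
    """
    matches = 0
    for d in desc_a:
        # Euclidean distance from this descriptor to all in the other image
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        # Accept only if the best match clearly beats the runner-up
        if nearest < ratio * second:
            matches += 1
    return matches

# Synthetic example (stand-in for real SIFT output): desc_b is a
# slightly perturbed copy of desc_a, so nearly all descriptors
# should find their counterpart.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(50, 128))
desc_b = desc_a + rng.normal(scale=0.01, size=desc_a.shape)
score = ratio_test_score(desc_a, desc_b)
print(score)
```

In a real system the descriptors would come from a detector/descriptor such as SIFT or SURF applied to the memorized and live camera images; as the abstract notes, this score varies smoothly with the robot's distance from the taught point rather than switching cleanly between "match" and "no match".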
