Abstract

Image registration is a crucial and fundamental problem in image processing and computer vision that aims to align two or more images of the same scene acquired from different views or at different times. Because different keypoints (e.g., corners) or similarity measures can lead to different registration results, the choice of keypoint detection algorithm or similarity measure introduces uncertainty. These keypoint detectors and similarity measures have their own pros and cons and can be used jointly in the hope of obtaining a better registration result. In this paper, the uncertainty caused by the selection of keypoint detector or similarity measure is addressed using the theory of belief functions, and image information at different levels is used jointly to achieve a more accurate image registration. Experimental results and related analyses show that our proposed algorithm achieves more precise image registration results than several prevailing algorithms.

Highlights

  • Image registration is a fundamental problem encountered in image processing, e.g., image fusion [1] and image change detection [2]

  • The belief functions introduced in Dempster–Shafer Theory (DST) of evidence [19] offer a powerful theoretical tool for uncertainty modeling and reasoning; we propose a fusion-based image registration method using belief functions

  • We propose an evidential reasoning [19] based image registration algorithm to generate a combined transformation from T1, T2, . . . , TQ thanks to the ability of belief functions for uncertainty modeling and reasoning
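The highlights above rest on combining evidence from multiple sources via belief functions. As background, the sketch below shows Dempster's rule of combination for two mass functions over a small frame of discernment; the frame and mass values are illustrative assumptions, not taken from the paper, which applies the combination to candidate transformations T1, …, TQ.

```python
# Minimal sketch of Dempster's rule of combination (DST) for two
# mass functions over a hypothetical frame {A, B}. Values are
# illustrative only, not from the paper.
from itertools import product


def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts: frozenset -> mass."""
    combined = {}
    conflict = 0.0
    for s1, s2 in product(m1, m2):
        inter = s1 & s2              # intersection of focal elements
        w = m1[s1] * m2[s2]
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w
        else:
            conflict += w            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    # Normalize by 1 - K (K = total conflicting mass)
    return {s: v / (1.0 - conflict) for s, v in combined.items()}


A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B  # ignorance: mass on the whole frame
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.3, AB: 0.7}
print(dempster_combine(m1, m2))
```

Note how the mass each source leaves on the full frame (ignorance) lets the other source's evidence dominate where they conflict; this is the property that makes belief functions attractive for fusing unreliable keypoint detectors or similarity measures.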


Introduction

Image registration is a fundamental problem encountered in image processing, e.g., image fusion [1] and image change detection [2]. It refers to the alignment of two or more images of the same scene taken at different times, from different sensors, or from different viewpoints. Image registration aligns each sensed image to the reference image by finding the correspondence between pixels in the image pair and estimating the spatial transformation from the sensed image to the reference image. Here we consider registration between two images only, i.e., a single sensed image together with a given reference image.
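The transformation-estimation step described above can be illustrated with a minimal example: given matched point pairs between the sensed and reference images, a 2×3 affine transform is fit by least squares. The correspondences here are synthetic and the affine model is one common choice among several; neither is specific to the paper's method.

```python
# Hedged sketch: estimating a 2x3 affine transform from matched
# keypoints via least squares. The point correspondences below are
# synthetic, for illustration only.
import numpy as np


def estimate_affine(src, dst):
    """Fit A such that dst ~= [x, y, 1] @ A.T, for A of shape (2, 3)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3) homogeneous coords
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) least-squares fit
    return sol.T                                 # (2, 3) affine matrix


# Synthetic correspondences: a pure translation by (2, -1)
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([2.0, -1.0])
A = estimate_affine(src, dst)
print(np.round(A, 6))  # identity rotation part, translation (2, -1)
```

In practice each keypoint detector or similarity measure yields its own estimated transform of this kind, and the paper's contribution is fusing those candidate transforms under uncertainty rather than picking a single one.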
