Abstract

Image fusion is a powerful and necessary tool for incorporating the relevant visual information provided by multiple sensors simultaneously. The quality of the results, however, is bounded not only by the quality of the fusion algorithm but also by the outcome of the image registration step it requires. Despite this dependency, the input images are typically assumed to be pre-aligned. Moreover, with third-generation surveillance systems, centralized computations are shifted to distributed visual nodes with limited computational and power resources. This article presents a combined approach, dubbed MIRF, that is able to register and fuse multimodal images. Performing both algorithms in a single image domain not only reduces complexity, making the approach a better fit for resource-constrained embedded platforms, but also improves the response time of the system. Two algorithms are proposed, one for area-based image registration and one for object-based image fusion, both based on the Dual-Tree Complex Wavelet Transform (DT-CWT). Qualitative and quantitative experimental results show that the proposed registration approach achieves accuracy comparable to its counterparts at lower complexity, while the developed fusion scheme exhibits higher accuracy and proves its immunity to minor errors in registration.
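A common wavelet-domain fusion strategy, of which DT-CWT schemes are a more shift-invariant refinement, is to average the low-pass coefficients of the two inputs and keep whichever detail coefficient has the larger magnitude. The sketch below illustrates this generic rule with a plain one-level Haar transform in NumPy; the transform choice, function names, and fusion rules are illustrative assumptions, not the MIRF algorithm itself.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 4  # approximation (low-pass)
    LH = (a + b - c - d) / 4  # horizontal detail
    HL = (a - b + c - d) / 4  # vertical detail
    HH = (a - b - c + d) / 4  # diagonal detail
    return LL, (LH, HL, HH)

def haar_reconstruct(LL, details):
    """Invert haar_decompose exactly."""
    LH, HL, HH = details
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def wavelet_fuse(img1, img2):
    """Fuse two pre-registered, same-size images in the wavelet domain:
    average the low-pass bands, take the max-magnitude detail coefficient."""
    LL1, (LH1, HL1, HH1) = haar_decompose(img1)
    LL2, (LH2, HL2, HH2) = haar_decompose(img2)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused_details = (pick(LH1, LH2), pick(HL1, HL2), pick(HH1, HH2))
    return haar_reconstruct((LL1 + LL2) / 2, fused_details)
```

Because the Haar transform here is perfectly invertible, fusing an image with itself returns the image unchanged, which is a convenient sanity check for any coefficient-selection rule.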
