Abstract

This paper studies scene matching navigation based on multisensor image fusion. Pixel-level and low feature-level fusion images of optical and infrared (IR) images serve as real-time images, which are matched against optical satellite images used as base images. Linear superposition, nonlinear operators, and multiresolution image fusion approaches are adopted to obtain the fused gray and edge-strength images. Gray-level and low feature-level scene matching schemes are then employed in scene matching simulation experiments on real flight image data, using the general cross-correlation function (CCF) and mean absolute difference (MAD) matching algorithms. The experimental results compare the matching performance achieved with different fusion images as real-time images under each matching scheme. Scene matching results based on single-sensor images are also given for comparison with those based on multisensor fusion images.
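As an illustrative sketch only (not the paper's implementation), the two building blocks named above can be outlined as follows: pixel-level fusion by linear superposition of co-registered optical and IR images, and exhaustive MAD template matching of a real-time image against a base image. Function names, the weight `alpha`, and the toy arrays are assumptions for illustration.

```python
import numpy as np

def fuse_linear(optical, ir, alpha=0.5):
    """Pixel-level fusion by linear superposition of two
    co-registered, same-size grayscale images (illustrative)."""
    return alpha * optical.astype(float) + (1.0 - alpha) * ir.astype(float)

def mad_match(base, template):
    """Exhaustive MAD (mean absolute difference) matching:
    slide `template` over `base` and return the (row, col)
    offset with the smallest mean absolute difference."""
    bh, bw = base.shape
    th, tw = template.shape
    best, best_score = (0, 0), np.inf
    for r in range(bh - th + 1):
        for c in range(bw - tw + 1):
            window = base[r:r + th, c:c + tw].astype(float)
            score = np.mean(np.abs(window - template.astype(float)))
            if score < best_score:
                best_score, best = score, (r, c)
    return best

# Toy usage: cut a patch from a synthetic base image and relocate it.
base = np.arange(100, dtype=float).reshape(10, 10)
template = base[3:6, 4:7].copy()
print(mad_match(base, template))  # expected offset: (3, 4)
```

CCF-based matching follows the same sliding-window structure, with the correlation coefficient (to be maximized) replacing the MAD score (minimized).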
