Abstract

Automatic registration remains a challenging problem for multimodal remote sensing images, including optical, light detection and ranging (LiDAR), and synthetic aperture radar (SAR) images. Because of differences in imaging principles, the gray values, texture, and landscape characteristics of these images differ in local areas, which makes it difficult for conventional image registration methods to obtain satisfactory results. To register multimodal images and obtain complementary information, we apply a transfer algorithm based on deep image analogy as a preprocessing step for image registration. The transfer eliminates the appearance differences between multimodal remote sensing images by blending the structure and texture of the original images. A conventional local feature-based method is then applied to match the original and generated images, which increases the number of correspondences and reduces the registration error. Experiments demonstrate that our method can effectively handle multimodal data and produce more accurate results. Because the algorithm builds on joint deep semantic features of the images, it indirectly achieves matching of the original image pair and provides a new solution to the problem of multimodal image registration.
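A minimal sketch of the matching stage described above, assuming the style-transferred (generated) image has already been produced by the deep image analogy step and saved to disk; SIFT with a RANSAC homography stands in here for the unspecified "conventional local feature-based method," and the file names are hypothetical:

```python
import cv2
import numpy as np

def match_and_register(reference_path, transferred_path):
    """Match local features between the reference image and the
    style-transferred image, then estimate a geometric transform."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    gen = cv2.imread(transferred_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_gen, des_gen = sift.detectAndCompute(gen, None)

    # Match descriptors and keep correspondences passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_ref, des_gen, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate a homography from the correspondences with RANSAC.
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_gen[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    n_inliers = int(inliers.sum()) if inliers is not None else 0
    return H, n_inliers

# Example usage (hypothetical file names):
# H, n = match_and_register("optical_reference.png", "sar_transferred.png")
```

Because the transfer step changes appearance but not geometry, a transform estimated against the generated image applies equally to the original sensed image, which is how the matching of the original image pair is achieved indirectly.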
