Abstract

Infrared–visible image registration and fusion are closely related processes, and implementing coordinated registration and fusion in a unified framework is an attractive problem. The registration accuracy of existing methods fails to satisfy the fusion requirements in some scenarios, which degrades the visual quality of the fused results. In addition, as an image preprocessing step, a network formed by cascading registration and fusion is too slow for more advanced tasks, which restricts the usability of these methods. To solve these problems, we propose a network in which semantics lead all stages, termed SemLA, capable of unifying the registration and fusion processes in an efficient and robust way. Our key idea is to explicitly embed semantic information at every stage of the network. In particular, SemLA jointly trains the registration and semantic features to ensure efficient network operation. The calibration of the semantic-aware maps and the description of their spatial structure information mutually reinforce each other, yielding more accurate registration. Additionally, the semantic-guided fusion process enhances the representation of complementary information within semantic objects while effectively suppressing visual interference caused by the demarcation lines of overlapping regions in the aligned images. Results across different experiments show that SemLA achieves a better tradeoff between performance and efficiency than state-of-the-art methods and adapts to the semantic needs of advanced vision tasks. The source code is publicly available at https://github.com/xiehousheng/SemLA.
