Abstract
Infrared and visible image fusion combines multispectral modalities to compensate for the weaknesses of single-sensor observation in complex environments, enabling all-weather, full-time perception. The core idea of pixel-level fusion is to retain complementary features while removing redundant ones. Existing enhancement-based methods aim to improve the detail of the raw inputs, but most of them amplify noise during enhancement because they lack well-designed feature selection. To further suppress noise and bridge latent cues between the infrared and visible modalities, we design a two-stage enhancement (TSE) framework that uses an attention mechanism and a feature-linking model (FLM). First, we construct a novel decomposition scheme, called SATV-l1, that combines structure-adaptive total variation (SATV) with an l1 sparsity term to extract two detail layers and a base layer. The l1 sparsity term is then applied to the base layer to enforce its piecewise-smoothness property, since the initial base layer still contains texture features. In the final stage, the TSE refines the detail layers and reconstructs the fused image. Extensive experiments on public datasets demonstrate the robustness and effectiveness of our approach.
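The decompose-then-fuse idea described above can be sketched in a minimal form. The sketch below is illustrative only: it uses a plain gradient-descent total-variation smoother in place of the paper's SATV-l1 decomposition, a simple average rule for base layers, and a max-absolute rule for detail layers; the function names, parameter values, and fusion rules are assumptions, not the authors' method.

```python
import numpy as np

def tv_smooth(img, lam=0.1, iters=100, step=0.1):
    """Approximate TV smoothing: minimize |grad u| + (lam/2)(u - img)^2
    by gradient descent. Stands in for the paper's SATV-l1 scheme."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # Forward differences of u (periodic boundary for simplicity).
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + 1e-8)
        px, py = ux / norm, uy / norm
        # Divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + step * (div - lam * (u - img))
    return u

def decompose(img):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = tv_smooth(img)
    return base, img - base

def fuse(ir, vis):
    """Pixel-level fusion: average the base layers, keep the stronger detail."""
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    base = 0.5 * (b_ir + b_vis)
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return base + detail
```

A usage example: given registered infrared and visible images as same-shaped float arrays, `fuse(ir, vis)` returns a fused array of the same shape. The max-absolute detail rule is one common heuristic for keeping complementary features; the paper's TSE stage replaces it with attention- and FLM-guided selection.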