Abstract

This paper presents an adaptive framework for fusing infrared and visible images using saliency detection and an improved dual-channel pulse-coupled neural network (ID-PCNN) in the local non-subsampled shearlet transform (LNSST) domain. First, the LNSST, an extension of the non-subsampled shearlet transform, performs multi-scale analysis to decompose the source images into low-pass and high-pass sub-images. Because the fusion rule for the low-pass component largely determines the final fusion quality, an improved frequency-tuned saliency extraction algorithm is adopted to guide the adaptive weighted fusion of the low-pass sub-images. An ID-PCNN model serves as the fusion rule for the high-pass sub-images: a sum of directional gradients acts as the linking strength to characterize texture detail, and a modified spatial frequency that reflects image gradient features is used to motivate the neurons. Fusion experiments on images from diverse scenes, evaluated both subjectively and objectively, show that the proposed algorithm outperforms typical fusion techniques.
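The abstract does not give the exact formulations, but the low-pass fusion step can be illustrated with a minimal sketch. It assumes the standard grayscale variant of frequency-tuned saliency (distance between the image mean and a Gaussian-blurred copy) and a per-pixel convex weighting derived from the two saliency maps; a generic diagonal-extended spatial frequency is also sketched, since the paper's modified definition is not spelled out here. All function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def _gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur using NumPy only (no SciPy dependency)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img.astype(float), r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def ft_saliency(img):
    """Frequency-tuned saliency (grayscale form, an assumption here):
    per-pixel distance between the global mean and a blurred copy."""
    return np.abs(img.astype(float).mean() - _gaussian_blur(img))

def fuse_lowpass(lp_ir, lp_vis):
    """Saliency-guided adaptive weighted fusion of two low-pass sub-images.
    The weight map is a per-pixel convex combination driven by saliency."""
    s_ir, s_vis = ft_saliency(lp_ir), ft_saliency(lp_vis)
    w = s_ir / (s_ir + s_vis + 1e-12)   # eps guards flat (zero-saliency) regions
    return w * lp_ir + (1.0 - w) * lp_vis

def modified_spatial_frequency(img):
    """Spatial frequency extended with main- and anti-diagonal gradients
    (one common 'modified' variant; the paper's exact form may differ)."""
    img = img.astype(float)
    rf = np.diff(img, axis=1)            # row (horizontal) differences
    cf = np.diff(img, axis=0)            # column (vertical) differences
    d1 = img[1:, 1:] - img[:-1, :-1]     # main-diagonal differences
    d2 = img[1:, :-1] - img[:-1, 1:]     # anti-diagonal differences
    terms = [rf, cf, d1 / np.sqrt(2), d2 / np.sqrt(2)]
    return float(np.sqrt(sum((t**2).mean() for t in terms)))
```

In the full method this fused low-pass band would be recombined with the ID-PCNN-fused high-pass bands via the inverse LNSST; that transform is outside the scope of this sketch.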
