Abstract

In this paper, we present a novel algorithm for video fusion of multi-sensor sequences applicable to real-time night-vision systems. We employ Laplacian pyramid fusion of a block of successive frames to add temporal robustness to the fused result. For the fusion rule, we first group the high- and low-frequency levels of the decomposed frames in the block from both input sensor sequences. Then, we define a local space-time energy measure to guide the selection-based fusion process in a manner that achieves spatio-temporal stability. We demonstrate our approach on several well-known multi-sensor video fusion examples with varying content and target appearance and show its advantage over conventional video fusion approaches. The computational complexity of the proposed method is kept low by the use of simple linear filtering that can be easily parallelised for implementation on general-purpose graphics processing units (GPUs).
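The selection rule outlined above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's exact implementation: the 3×3×3 box neighbourhood for the local space-time energy and the averaging of the low-frequency residual are illustrative assumptions, and the arrays are assumed to hold one pyramid level of a (frames, rows, cols) block from each sensor.

```python
import numpy as np

def local_spacetime_energy(coeffs, radius=1):
    """Sum of squared high-frequency coefficients over a (2r+1)^3
    space-time box (hypothetical window choice)."""
    sq = np.pad(coeffs ** 2, radius, mode="edge")
    t, r, c = coeffs.shape
    energy = np.zeros(coeffs.shape, dtype=float)
    k = 2 * radius + 1
    for dt in range(k):           # shift-and-add box filter over time,
        for dr in range(k):       # rows and columns
            for dc in range(k):
                energy += sq[dt:dt + t, dr:dr + r, dc:dc + c]
    return energy

def fuse_block(high_a, high_b, low_a, low_b, radius=1):
    """Fuse one pyramid level of a frame block from two sensors:
    select the coefficient from whichever sensor has the larger local
    space-time energy; average the low-frequency residual (a common
    default, assumed here)."""
    ea = local_spacetime_energy(high_a, radius)
    eb = local_spacetime_energy(high_b, radius)
    fused_high = np.where(ea >= eb, high_a, high_b)
    fused_low = 0.5 * (low_a + low_b)
    return fused_high, fused_low
```

Because the energy is pooled over a space-time neighbourhood rather than a single pixel, the selection map varies smoothly between neighbouring frames, which is what gives the fused sequence its temporal stability.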

Highlights

  • Multi-sensor night-vision systems use multiple sensors based on different physical phenomena to monitor the same scene

  • In this paper, we present a novel algorithm for video fusion of multi-sensor sequences applicable to real-time night vision systems

  • The static (left) and proposed dynamic (right) results for the frames shown in Figure 3 above show a significantly greater consistency for the proposed dynamic method

Introduction

Multi-sensor night-vision systems use multiple sensors based on different physical phenomena to monitor the same scene. Some form of coordination of all data sources is therefore necessary. This problem can be solved by using multi-sensor data fusion methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57], which combine multiple image or video signals into a single, fused output signal. The most common multi-resolution techniques are the Laplacian pyramid (LAP) [25,27], the ROLP or contrast pyramid [26,45], the discrete wavelet transform (DWT) [46,47,48], the shift-invariant discrete wavelet transform (SIDWT) [21], the bilateral filter [11], the guided filter [12,13], the shearlet transform [3], the nonsubsampled contourlet transform [14], etc.
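For reference, the Laplacian pyramid that underpins the proposed method can be built and inverted with simple separable linear filtering. The sketch below assumes the standard 5-tap binomial kernel and reflected borders; these are common defaults, not necessarily the exact choices made in the paper.

```python
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # 5-tap binomial low-pass

def blur(img):
    """Separable low-pass filtering with reflected borders."""
    for axis in (0, 1):
        pad = [(2, 2) if a == axis else (0, 0) for a in (0, 1)]
        p = np.pad(img, pad, mode="reflect")
        out = np.zeros(img.shape, dtype=float)
        sl = [slice(None), slice(None)]
        for k in range(5):
            sl[axis] = slice(k, k + img.shape[axis])
            out += KERNEL[k] * p[tuple(sl)]
        img = out
    return img

def expand(img, shape):
    """Upsample to `shape` by zero insertion followed by interpolation."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * blur(up)  # factor 4 restores the mean lost to the zeros

def laplacian_pyramid(img, levels=3):
    """Decompose into `levels` band-pass images plus a coarse residual."""
    pyr = []
    for _ in range(levels):
        low = blur(img)[::2, ::2]                 # REDUCE step
        pyr.append(img - expand(low, img.shape))  # band-pass detail
        img = low
    pyr.append(img)                               # low-frequency residual
    return pyr

def collapse(pyr):
    """Invert the decomposition; exact by construction."""
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = expand(img, band.shape) + band
    return img
```

The same REDUCE/EXPAND operations are available as `cv2.pyrDown` and `cv2.pyrUp` in OpenCV; the pure-NumPy version is shown here only to make the filtering explicit.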

