Abstract

Video analytics and computer vision applications face challenges when processing video sequences with low visibility. The visibility of a video sequence degrades when the sequence is affected by atmospheric interference such as rain. Many approaches have been proposed to remove rain streaks from video sequences; some are based on physical features, and others on data-driven (i.e., deep-learning) models. Although physical-feature-based approaches offer better rain interpretability, extracting appropriate features and fusing them for meaningful rain removal is challenging, as rain streaks and moving objects have dynamic physical characteristics and are difficult to distinguish. Data-driven models, in turn, depend largely on the variations covered by the training dataset, and it is impractical to include all possible variations in model training. This paper addresses both issues and proposes a hybrid technique that extracts novel physical features and data-driven features and combines them into an effective rain-streak removal strategy. The performance of the proposed algorithm has been evaluated against several relevant and contemporary methods on benchmark datasets. The experimental results show that the proposed method outperforms the other methods in subjective, objective, and object-detection comparisons for both synthetic and real rain scenarios, removing rain streaks while retaining moving objects more effectively.

Highlights

  • We compare the proposed method against four existing methods: three model-based video deraining methods, PMOG [21], MS-CSC [32], and temporal appearance (TA) [10], and one network-architecture-based image deraining method, CGAN [57].

  • We compute the true positives (TP), false positives (FP), and false negatives (FN) from the object-detection results in Figure 15 for all three frames and for each method.

  • The precision and recall values (standard definitions given below) show that the proposed method outperforms the state-of-the-art methods for every frame.
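For reference, the metrics follow the standard definitions (stated here for completeness, not specific to this paper):

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}$$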


Introduction

Improving the visibility of video sequences by removing rain streaks has become an obligatory processing step for object detection and tracking [7], scene analysis [8], and person re-identification [9]. These tasks have extensive applications such as driverless cars, advanced driver-assistance systems, intelligent traffic surveillance systems, and security surveillance systems. In a video scene captured by a static camera, the background remains the same over all frames, except for the interference of moving objects and changes in lighting. Recovering this background layer can be formulated as recovering a low-dimensional subspace [49,50,51,52,53]. A common approach to this subspace learning is low-rank matrix factorisation (LRMF).
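In its standard form (a sketch under the usual notation; this excerpt does not reproduce the paper's own equation), LRMF recovers low-dimensional factors $U$ and $V$ by solving

$$\min_{U,V}\ \left\| \mathcal{W} \odot \left( \mathcal{X} - U V^{\top} \right) \right\|_{F}^{2},$$

where $\mathcal{X}$ stacks the vectorised frames as columns, $UV^{\top}$ models the low-rank background layer, $\mathcal{W}$ is a weight (or indicator) matrix that downweights rain- and motion-affected pixels, $\odot$ denotes element-wise multiplication, and $\|\cdot\|_{F}$ is the Frobenius norm. The following minimal Python sketch illustrates the unweighted special case ($\mathcal{W}$ all ones), where the dominant singular subspace of the stacked frames yields the static background; it illustrates the idea only and is not the paper's method:

```python
# Minimal sketch (illustration only, not the paper's algorithm):
# estimate the static background of a fixed-camera video as a
# low-rank subspace, i.e. the unweighted special case of LRMF.
import numpy as np

def lowrank_background(frames: np.ndarray, rank: int = 1) -> np.ndarray:
    """frames: (T, H, W) grayscale stack; returns a (T, H, W) background."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1).T                       # pixels-by-frames matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD
    # The leading singular vectors span the subspace shared by all frames
    # (the background); the residual contains moving objects and rain streaks.
    B = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return B.T.reshape(T, H, W)

# Usage: residual = frames - lowrank_background(frames, rank=1)
# isolates the dynamic layer (moving objects plus rain) for further processing.
```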

