Video deraining is an actively studied problem, with numerous techniques proposed to improve the visual quality of restored videos. The goal is to remove rain streaks from video frames so that the overall quality of the video is enhanced. Existing frameworks remove rain streaks with reasonable accuracy, but there remains room for improvement in preserving temporal consistency and the intrinsic properties of rain streaks. This work addresses these issues through a combination of handcrafted and deep priors that characterize rain streaks across multiple dimensions. The proposed method comprises three main steps: prior extraction, derain modelling, and optimization. Four priors are extracted from the frames: the gradient prior (GP) and sparse prior (SP) are extracted from the rain streaks, while the smooth temporal prior (STP) and deep prior (DP) are extracted from the clean video. Unidirectional total variation (UTV) is applied to extract the GP, and L1-norm regularization is used to extract the SP and STP. The DP is extracted from the clean frames using a deep-learning-based residual gated recurrent deraining network (Res-GRRN). Derain modelling is then carried out on the extracted priors, and the stochastic alternating direction method of multipliers (SADMM) algorithm is used to solve the resulting optimization problem. The proposed approach is implemented in Python and evaluated on a real-world dataset, achieving an overall PSNR of 39.193 dB, which surpasses the existing methods.
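The handcrafted priors in the abstract can be illustrated concretely. The sketch below is a minimal, assumed interpretation (not the paper's exact formulation): a unidirectional total variation term that penalizes gradients along the streak direction, and an L1-norm sparsity measure on the rain-streak layer. Function names and the vertical-streak assumption are illustrative only.

```python
import numpy as np

def unidirectional_tv(frame: np.ndarray) -> float:
    """Unidirectional total variation: penalize gradients along one
    axis only. Rain streaks are roughly vertical, so they vary little
    along the vertical axis; a UTV term along that axis stays small
    for streak layers. (Illustrative assumption, not the paper's
    exact GP formulation.)"""
    # absolute differences between consecutive rows (along the streak)
    dv = np.abs(np.diff(frame, axis=0))
    return float(dv.sum())

def sparse_prior(layer: np.ndarray) -> float:
    """L1-norm sparsity prior: rain streaks cover only a small
    fraction of pixels, so the streak layer should be sparse."""
    return float(np.abs(layer).sum())

# toy example: a frame containing a single bright vertical streak
frame = np.zeros((8, 8))
frame[:, 3] = 1.0                 # one vertical rain streak
print(unidirectional_tv(frame))   # 0.0: no variation along the streak
print(sparse_prior(frame))        # 8.0: only 8 nonzero pixels
```

The toy example shows why these terms discriminate: the streak layer has zero variation along its own direction and a small L1 norm, so minimizing these priors over the decomposition encourages a clean separation of streaks from background.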
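ADMM-type solvers such as the SADMM used here typically reduce the sparse subproblem to soft-thresholding. A hedged sketch of one such update, under the assumption of a simple additive model (observed frame = background + rain layer) with an L1 penalty on the rain layer; the variable names and the single-step form are illustrative, not the paper's full SADMM iteration:

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of the L1 norm, the standard building block
    of ADMM-style solvers for sparse subproblems."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rain_update(observed: np.ndarray, background: np.ndarray,
                lam: float) -> np.ndarray:
    """With the background fixed, the subproblem
        argmin_R 0.5 * ||O - B - R||^2 + lam * ||R||_1
    is solved in closed form by soft-thresholding the residual O - B.
    A full (S)ADMM solver interleaves such updates with background and
    temporal-prior updates; this is only a one-step sketch."""
    return soft_threshold(observed - background, lam)

O = np.array([0.2, 0.9, 0.1, 0.8])   # observed pixel intensities
B = np.array([0.2, 0.1, 0.1, 0.1])   # current background estimate
print(rain_update(O, B, lam=0.3))    # residuals below 0.3 are zeroed
```

Small residuals (sensor noise, slight background mismatch) fall below the threshold and are discarded, while large residuals are retained, shrunk by `lam`, as rain-streak pixels.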