Recent advancements in the residual sparsity strategy have garnered widespread attention in video compressive sensing (CS) reconstruction. However, most existing residual sparsity-based video CS reconstruction methods suffer from limitations that lead to undesired visual artifacts. Firstly, these methods rely only on a patch-level sparsity scheme, which captures the local structures of each video frame but neglects the nonlocal self-similarity (NSS) property inherent in each frame. Secondly, these methods concentrate on utilizing the NSS property of external reference frames for multi-hypothesis (MH) prediction while disregarding the internal NSS property of the current frame. In this paper, we propose a new structured residual sparsity (SRS) approach for video CS reconstruction, which jointly exploits the NSS properties of the current frame and its reference frames. Specifically, since the original video frames are unavailable, we first devise an effective intraframe CS (EICS) reconstruction method that leverages the internal NSS property of each frame. This yields initial recovery frames, which then enable MH prediction. Following this, we generate a residual frame for the current frame via MH prediction. We then propose a novel SRS model that jointly uses the NSS properties of the current frame and its reference frames to exploit both intraframe and interframe correlations when reconstructing the current frame. Furthermore, to make the optimization tractable, we develop an effective alternating direction method of multipliers (ADMM) algorithm to solve the resulting objective function. Our experimental findings reveal that the proposed SRS not only yields superior quantitative results, but also recovers finer details and produces fewer visual artifacts than many popular and state-of-the-art video CS reconstruction approaches.
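The MH prediction and residual-generation steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a random Gaussian measurement matrix, a fixed set of candidate patches gathered from the reference frames, and a Tikhonov-regularized least-squares fit of the hypothesis weights in the measurement domain (as is common in MH-prediction video CS); all variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, K = 64, 32, 8          # patch length, measurements per patch, number of hypotheses
Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # random CS measurement matrix (assumed)
H = rng.standard_normal((d, K))                  # candidate patches from recovered reference frames
x = 0.6 * H[:, 0] + 0.4 * H[:, 1]                # unknown current patch (here, near the span of H)
y = Phi @ x                                      # observed CS measurements of the current patch

# Fit hypothesis weights in the measurement domain:
#   w = argmin_w ||y - Phi @ H @ w||^2 + lam * ||w||^2
A = Phi @ H
lam = 1e-3
w = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ y)

p = H @ w            # MH prediction of the current patch
r = y - Phi @ p      # residual measurement: the (sparser) signal the residual model reconstructs

print(np.linalg.norm(r) < np.linalg.norm(y))     # prediction removes most of the measurement energy
```

The point of the residual strategy is visible in the last line: after subtracting the prediction, the remaining residual is much easier to reconstruct than the raw measurements, and the proposed SRS model then regularizes that residual with the joint intraframe/interframe NSS prior.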