A Dual-Domain Collaboration Network for VCS Reconstruction
Efficient exploration of the temporal correlation between frames is essential for improving the quality of video compressive sensing (VCS) reconstruction. Most methods deepen or widen the network to strengthen the perception of contextual knowledge, which inevitably exacerbates the computational burden on devices. This paper proposes a dual-domain collaboration network (dubbed CollabNet), in which a pixel-domain restoration stage performs preliminary alignment and aggregation with low parameter consumption, and a subsequent feature-domain enhancement stage precisely reuses inter-frame information for further improvement. In the first stage, we enrich the aligned prediction and conduct an interactive update to recover abundant additional content; in the second, restricted offsets eliminate undesirable interference and attentional fusion produces the final reconstructed frames. Experiments show that our network performs favorably against other state-of-the-art models, achieving the best trade-off between parameters and performance. The code is available at https://github.com/VideoRecons/CollabNet.
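For readers unfamiliar with the VCS setting the abstract assumes, the sketch below shows a toy snapshot measurement model and the standard mask-normalized pixel-domain initialization that deep reconstruction networks typically refine. This is an illustrative assumption on our part, not CollabNet's actual pipeline; all variable names (`frames`, `masks`, `x0`) are hypothetical.

```python
import numpy as np

# Toy VCS forward model: B video frames are modulated by binary masks
# and summed into a single 2-D snapshot measurement.
# (Illustrative sketch only -- not the CollabNet architecture.)
rng = np.random.default_rng(0)
B, H, W = 4, 8, 8                       # frames per measurement, height, width
frames = rng.random((B, H, W))          # ground-truth video block
masks = rng.integers(0, 2, (B, H, W))   # per-frame binary modulation masks

y = (masks * frames).sum(axis=0)        # single compressed snapshot

# Standard pixel-domain initialization: redistribute the measurement to
# each frame through its mask, normalized by the per-pixel mask overlap.
denom = (masks ** 2).sum(axis=0)
x0 = masks * (y / np.maximum(denom, 1))[None]   # (B, H, W) rough estimate
```

Re-measuring `x0` reproduces `y` exactly, so this cheap initialization is measurement-consistent; a learned network such as the one described above is then responsible for recovering the temporal detail it cannot resolve.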