Abstract

In recent years, various methods have been proposed to tackle the problem of compressed video quality enhancement, which aims to restore the distorted information in low-quality target frames from high-quality reference frames in the compressed video. Most of these methods comprise two key stages, i.e., synchronization and fusion. The synchronization stage aligns the input frames by compensating the reference frames with estimated motion vectors; the fusion stage then reconstructs each frame from the compensated frames. However, the synchronization stage in previous works estimates only the motion vectors between each reference frame and the target frame, so the missing details cannot be adequately replenished when frame quality fluctuates or object regions are occluded. To make full use of the temporal motion between input frames, we propose a motion approximation scheme that exploits the motion vectors between the reference frames themselves, generating additional compensated frames that further refine the missing details in the target frame. For the fusion stage, we propose a deep neural network that extracts frame features with blended attention to texture details and to the quality discrepancies across time. Experimental results demonstrate the effectiveness and robustness of our method.
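
As a rough illustration of the motion approximation idea, the sketch below (in PyTorch, with hypothetical names such as `backward_warp` and `flow_prev_to_next`, none of which come from the paper) assumes approximately linear motion: the flow estimated between the two reference frames is halved to approximate the flow from the target frame to one reference, and that approximated flow drives an extra motion-compensated frame for the fusion stage. It shows only the warping arithmetic, not the paper's actual pipeline.

```python
# A rough, self-contained sketch of the motion-approximation idea described
# in the abstract. All names here (backward_warp, flow_prev_to_next, ...)
# are illustrative, not the paper's; it assumes roughly linear motion
# between the two reference frames surrounding the target frame.
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp `frame` (N, C, H, W) with `flow` (N, 2, H, W).

    flow[:, 0] is the horizontal (x) and flow[:, 1] the vertical (y)
    displacement, in pixels, from each output location into `frame`.
    """
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame)    # (2, H, W)
    coords = base.unsqueeze(0) + flow                        # (N, 2, H, W)
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                     # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

# ref_next: a high-quality reference frame; flow_prev_to_next: motion
# estimated between the two reference frames only (no target involved).
ref_next = torch.rand(1, 3, 64, 64)
flow_prev_to_next = torch.rand(1, 2, 64, 64)

# Linear-motion assumption: the target sits midway between the references,
# so half the reference-to-reference flow approximates the flow from the
# target to ref_next, giving one additional compensated frame for fusion.
approx_flow = 0.5 * flow_prev_to_next
extra_compensated = backward_warp(ref_next, approx_flow)
```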
