Abstract

This paper presents an adaptable decoder-like model for video error concealment through optical flow prediction using deep neural networks. The horizontal and vertical motion fields of previous optical flows are separated and passed through two parallel pipelines of convolutional and long short-term memory (LSTM) layers. The combined output of the two pipelines, the predicted flow, is then used to reconstruct the degraded portion of the future video frame. Unlike current methods that operate on pixel or voxel information, the proposed architecture takes as input three previous optical flows obtained through a flow-generation step. The generator portion of the network can easily be interchanged with other methods, increasing the model's adaptability. The network is trained in a supervised manner, and its performance is evaluated with standard video quality metrics by comparing the frames reconstructed from our predictions against the generated ground truth.
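To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of the two-pipeline design, written in PyTorch. It assumes each pipeline applies per-frame convolutional feature extraction followed by an LSTM over the three past flows; all layer sizes, class names (`FlowPipeline`, `FlowPredictor`), and the 64x64 flow resolution are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of the paper's two-pipeline flow predictor.
# Assumptions: 3 past flows as input, one pipeline per motion component
# (horizontal u, vertical v), conv features + LSTM, outputs fused into
# the predicted flow. Layer sizes are arbitrary placeholders.
import torch
import torch.nn as nn


class FlowPipeline(nn.Module):
    """One pipeline: per-frame conv features, LSTM across time, linear head."""

    def __init__(self, hidden=256, h=64, w=64):
        super().__init__()
        self.h, self.w = h, w
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=8 * h * w, hidden_size=hidden,
                            batch_first=True)
        self.decoder = nn.Linear(hidden, h * w)

    def forward(self, flows):                      # flows: (B, T=3, 1, H, W)
        b, t = flows.shape[:2]
        feats = self.encoder(flows.flatten(0, 1))  # (B*T, 8, H, W)
        feats = feats.view(b, t, -1)               # (B, T, 8*H*W)
        out, _ = self.lstm(feats)                  # (B, T, hidden)
        pred = self.decoder(out[:, -1])            # last time step
        return pred.view(b, 1, self.h, self.w)     # one motion component


class FlowPredictor(nn.Module):
    """Parallel pipelines for the horizontal and vertical motion fields."""

    def __init__(self, h=64, w=64):
        super().__init__()
        self.horizontal = FlowPipeline(h=h, w=w)
        self.vertical = FlowPipeline(h=h, w=w)

    def forward(self, flows):                      # flows: (B, 3, 2, H, W)
        u = self.horizontal(flows[:, :, 0:1])      # horizontal component
        v = self.vertical(flows[:, :, 1:2])        # vertical component
        return torch.cat([u, v], dim=1)            # predicted flow (B, 2, H, W)


# Usage: three past 2-channel flows for one 64x64 sequence.
pred = FlowPredictor()(torch.randn(1, 3, 2, 64, 64))
print(pred.shape)  # torch.Size([1, 2, 64, 64])
```

The predicted flow from this sketch would then be warped against a reference frame to fill the degraded region, which is the reconstruction step the abstract refers to; supervised training would compare this output against ground-truth flows or frames.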
