Abstract

Video conference communication can be seriously degraded by dropped frames or reduced frame rates caused by network or hardware limitations. Video frame interpolation techniques can reconstruct the dropped frames and produce smoother videos. However, existing methods cannot generate plausible results for conference video because of the large motions of the eyes, mouth, and head. To address this issue, we propose a Spatial-Temporal Deformable Convolution Network (STDC-Net) for conference video frame interpolation. The STDC-Net first extracts shallow spatial-temporal features with an embedding layer. It then extracts multi-scale deep spatial-temporal features through a Spatial-Temporal Representation Learning (STRL) module, which contains several Spatial-Temporal Feature Extracting (STFE) blocks and downsampling layers. To extract temporal features, each STFE block splits the feature maps along the temporal pathway and processes them with a Multi-Layer Perceptron (MLP). Similarly, the STFE block splits the temporal features along horizontal and vertical pathways and processes them with two additional MLPs to obtain spatial features. By splitting the feature maps into segments of varying lengths at different scales, the STDC-Net captures both local details and global spatial features, allowing it to handle large motions effectively. Finally, a Frame Synthesis (FS) module predicts weights, offsets, and masks from the spatial-temporal features, which are used in deformable convolution to generate the intermediate frames. Experimental results demonstrate that the STDC-Net outperforms state-of-the-art methods in both quantitative and qualitative evaluations. Compared to the baseline, the proposed method achieves PSNR improvements of 0.13 dB and 0.17 dB on the VoxCeleb2 and HDTF datasets, respectively.
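
The abstract describes the STFE block only at a high level (splitting a feature map along one axis into segments and mixing each segment with an MLP), so the snippet below is a minimal PyTorch sketch of that idea under assumed tensor shapes and layer sizes; the names `AxisSplitMLP`, `segment_len`, and `hidden` are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class AxisSplitMLP(nn.Module):
    """Toy illustration of splitting a feature map along one axis into
    fixed-length segments and mixing each segment with an MLP
    (hypothetical layer sizes, not the paper's exact design)."""
    def __init__(self, axis_len, segment_len, channels, hidden=64):
        super().__init__()
        assert axis_len % segment_len == 0
        self.segment_len = segment_len
        # The MLP mixes all channel features within one segment.
        self.mlp = nn.Sequential(
            nn.Linear(channels * segment_len, hidden),
            nn.GELU(),
            nn.Linear(hidden, channels * segment_len),
        )

    def forward(self, x):  # x: (B, L, C), L is the axis being split
        b, l, c = x.shape
        s = self.segment_len
        x = x.view(b, l // s, s * c)   # group the axis into segments
        x = x + self.mlp(x)            # residual MLP mixing per segment
        return x.view(b, l, c)

# Temporal pathway: treat every spatial location as a token sequence over frames.
feat = torch.randn(2, 4, 32, 16, 16)   # (B, T, C, H, W), toy sizes
b, t, c, h, w = feat.shape
tokens = feat.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
tokens = AxisSplitMLP(axis_len=t, segment_len=2, channels=c)(tokens)
temporal_feat = tokens.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
print(temporal_feat.shape)              # torch.Size([2, 4, 32, 16, 16])
```

The same pattern could be repeated along the horizontal and vertical axes, with different segment lengths at each scale, to approximate the multi-scale spatial pathways described above; the predicted offsets and masks of the FS module could then drive a standard deformable convolution, for example via `torchvision.ops.deform_conv2d`.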
