Abstract

Video retargeting is a technique for transforming a given video to a target aspect ratio. Current methods often cause severe visual distortion because of frequent temporal incoherence during retargeting. In this study, we propose a new extrapolation-based video retargeting method that uses an image-to-warping-vector generation network to maintain temporal coherence and prevent deformation of the input frame by extending its side area. Backward warping-based extrapolation is performed using a displacement vector (DV) generated by a proposed convolutional neural network (CNN). The DV is defined as the displacement between the current hole to be filled in the extended area and the pixel in the input frame used to fill that hole. We also propose a technique for efficiently training the CNN, including a method for ground-truth DV generation. For the stage after extrapolation, we propose a technique for maintaining temporal coherence of the extended region and a distortion suppression scheme (DSC) for minimizing visual artifacts. Simulation results demonstrated that the proposed method improved bidirectional similarity (BDS), a measure of video retargeting quality, by up to 3.69 compared with existing video retargeting methods.
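The backward-warping step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes the DV field for a right-side extension is already available (in the paper it is produced by the proposed CNN), and the function name and array layout are hypothetical.

```python
import numpy as np

def extrapolate_side(frame, dv):
    """Fill a right-side extension region by backward warping.

    frame: (H, W, 3) input frame.
    dv:    (H, W_ext, 2) displacement vectors (dy, dx) mapping each
           hole pixel in the extension back to a source pixel in `frame`.
           (In the paper these come from a CNN; here they are given.)
    Returns the (H, W_ext, 3) extension region.
    """
    H, W, _ = frame.shape
    _, W_ext, _ = dv.shape
    ext = np.zeros((H, W_ext, 3), dtype=frame.dtype)
    for y in range(H):
        for x in range(W_ext):
            # Hole position in full-canvas coordinates (extension starts at column W).
            hx = W + x
            # Backward warp: sample the input frame at hole position + DV,
            # clamped to the valid input-frame area.
            sy = int(np.clip(y + dv[y, x, 0], 0, H - 1))
            sx = int(np.clip(hx + dv[y, x, 1], 0, W - 1))
            ext[y, x] = frame[sy, sx]
    return ext
```

For example, a DV field whose horizontal components always point back to the last input column reproduces simple edge replication, whereas the learned DVs can reach further into the frame to continue its structure.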

