Abstract

Because the defect-filling methods commonly used in the video stabilization field produce poor results, the video still appears unstable after stabilization, which seriously degrades the visual effect. To address this problem, a video stabilization method based on time-series network prediction and pyramid fusion restoration is proposed to optimize the visual quality after stabilization. The method proceeds as follows. First, it adaptively determines whether the defect region of the frame at the current time requires inpainting. Then, for a frame that requires inpainting, the frames generated before the current moment are fed into a model combining convolutional neural networks (CNN) with a gated recurrent unit (GRU) to predict the content of the region to be filled. Next, the current defective frame and the completed fill image are passed to Laplacian pyramid reconstruction, and an improved weighted optimal seam is introduced for stitching during fusion. Finally, the reconstructed video frame is cropped. The method is tested on a dataset composed of videos commonly used in the video stabilization field. The experimental results show that the average peak signal-to-noise ratio (PSNR) of the method is 2 to 5 dB higher than that of the comparison algorithms, and the average structural similarity index (SSIM) is improved by about 2% to 7% over the comparison algorithms.
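The abstract gives no implementation details, so the two sketches below are illustrative assumptions rather than the authors' code. The first shows, in PyTorch, the general shape of a CNN+GRU predictor: a convolutional encoder summarizes each previously generated frame, a GRU models the temporal sequence, and a decoder emits the patch used to fill the defect region. All layer sizes, the patch resolution, and the name ConvGRUPredictor are invented for the sketch.

import torch
import torch.nn as nn

class ConvGRUPredictor(nn.Module):
    """Minimal sketch of a CNN+GRU predictor: a CNN encodes each past
    frame, a GRU models the sequence over time, and a decoder produces
    the patch used to fill the defect region of the current frame.
    Layer sizes are illustrative, not taken from the paper."""
    def __init__(self, channels=3, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.gru = nn.GRU(64 * 8 * 8, hidden, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden, 64 * 8 * 8), nn.ReLU())
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):
        # frames: (batch, time, channels, H, W) of past stabilized frames
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))  # (b*t, 64, 8, 8)
        feats = feats.reshape(b, t, -1)                       # (b, t, 4096)
        _, last = self.gru(feats)                             # last hidden state
        x = self.decoder(last.squeeze(0)).reshape(b, 64, 8, 8)
        return self.upsample(x)                               # predicted 32x32 patch

# usage: predict a fill patch from the 8 most recent frames
model = ConvGRUPredictor()
past = torch.rand(1, 8, 3, 64, 64)
patch = model(past)  # (1, 3, 32, 32)

The second sketch shows Laplacian pyramid fusion with OpenCV: the defective frame, the fill image, and a weight mask are each decomposed into pyramids, blended level by level, and collapsed back into one frame. In the paper, the improved weighted optimal seam would determine the mask's transition band; the sketch accepts any weight map, and the function name laplacian_pyramid_blend is hypothetical.

import cv2
import numpy as np

def laplacian_pyramid_blend(defect_img, fill_img, mask, levels=4):
    """Sketch of Laplacian-pyramid fusion: blend the predicted fill into
    the defective frame. `mask` is a float32 weight map in [0, 1] with
    the same shape as the images (3-channel); its boundary would come
    from the weighted optimal seam, but here any mask works."""
    # Gaussian pyramids of both images and the mask
    gp_a, gp_b, gp_m = [defect_img], [fill_img], [mask]
    for _ in range(levels):
        gp_a.append(cv2.pyrDown(gp_a[-1]))
        gp_b.append(cv2.pyrDown(gp_b[-1]))
        gp_m.append(cv2.pyrDown(gp_m[-1]))

    # Laplacian pyramids: level i = Gaussian i minus upsampled level i+1
    def laplacian(gp):
        lp = []
        for i in range(levels):
            size = (gp[i].shape[1], gp[i].shape[0])
            lp.append(gp[i] - cv2.pyrUp(gp[i + 1], dstsize=size))
        lp.append(gp[-1])  # coarsest level kept as-is
        return lp

    lp_a, lp_b = laplacian(gp_a), laplacian(gp_b)

    # Blend each level with the matching-resolution mask, then collapse
    blended = [m * b + (1 - m) * a for a, b, m in zip(lp_a, lp_b, gp_m)]
    out = blended[-1]
    for i in range(levels - 1, -1, -1):
        size = (blended[i].shape[1], blended[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + blended[i]
    return np.clip(out, 0.0, 1.0)

Blending in the Laplacian domain mixes low frequencies over a wide band and high frequencies over a narrow one, which is what hides the seam between the original frame and the predicted fill.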
