Abstract

This paper describes a novel inpainting approach for removing marked dynamic objects from videos captured with a camera, provided the objects occlude parts of a scene with a static background. The proposed approach removes objects and restores missing or tainted regions in a video sequence by exploiting spatial and temporal information from neighboring frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by objects marked for removal using the background model. We extend an image inpainting algorithm based on texture and structure reconstruction with an improved strategy for video. An image inpainting approach is presented that constructs a composite curve to restore the edges of objects in a frame, using the concepts of parametric and geometric continuity. We show that this approach restores curved edges and provides more flexibility for curve design in the damaged frame by interpolating object boundaries with cubic splines. After the edge restoration stage, texture is reconstructed with a patch-based method. We demonstrate the performance of the new approach on several examples, showing the effectiveness of our algorithm in comparison with state-of-the-art video inpainting methods.
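
The loop below is a minimal sketch of the per-frame cycle described above, not the authors' implementation: a running-average background stands in for the scene model, a caller-supplied get_object_mask function (hypothetical) marks the objects to remove, and a plain copy from the background model takes the place of the spline-and-patch reconstruction. Python with NumPy and OpenCV is assumed, and "video.mp4" is a placeholder input.

```python
import numpy as np
import cv2  # assumed available; used only for video I/O


def remove_marked_objects(video_path, get_object_mask, alpha=0.05):
    """Yield frames with the marked dynamic objects replaced by the background model."""
    cap = cv2.VideoCapture(video_path)
    background = None
    while True:
        ok, frame = cap.read()              # 1. acquire the next frame
        if not ok:
            break
        frame = frame.astype(np.float32)
        if background is None:
            background = frame.copy()       # initialize the scene (background) model

        mask = get_object_mask(frame)       # 2. boolean (H, W) mask of moving objects (hypothetical)
        static = ~mask                      # pixels currently showing the static background

        # 3. update the scene model only where no object is present (running average)
        background[static] = (1 - alpha) * background[static] + alpha * frame[static]

        # 4. replace pixels of objects marked for removal with the background model
        restored = frame.copy()
        restored[mask] = background[mask]
        yield restored.astype(np.uint8)
    cap.release()


# Example usage (placeholder inputs):
# for frame in remove_marked_objects("video.mp4", my_mask_function):
#     cv2.imshow("restored", frame)
```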
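For the edge-restoration step, the fragment below only illustrates the general idea of bridging a damaged section of an object boundary with a cubic spline; the sample boundary points and the gap location are invented for illustration, and SciPy is assumed. It is a sketch of the continuity argument, not the paper's composite-curve construction.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical (x, y) samples of an object edge surviving on both sides of a
# damaged region; the interval 4 <= x <= 8 is the part of the boundary to restore.
x_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 8.0, 9.0, 10.0, 11.0, 12.0])
y_known = np.array([0.0, 0.5, 1.2, 2.0, 2.6, 3.1, 2.9, 2.4, 1.6, 0.9])

# The interpolating cubic spline is twice continuously differentiable at its knots,
# so the restored segment joins the surviving boundary with matching tangents
# and curvature, which is the kind of smooth continuation the edge step needs.
spline = CubicSpline(x_known, y_known)
x_gap = np.linspace(4.0, 8.0, 50)
y_restored = spline(x_gap)
```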
