Abstract

Reconstructing and repairing corrupted or missing regions after object removal in digital video is an important task in artwork restoration. Video inpainting, the recovery of such corrupted or missing data, is an active topic in video processing. Most previous video inpainting approaches spend considerable time on an extensive search for the best patch with which to restore damaged frames. In addition, most of them cannot handle gradual and sudden illumination changes, dynamic backgrounds, full object occlusion, or changes in object scale. In this paper, we present a complete video inpainting framework that avoids the extensive search process. The proposed framework consists of a segmentation stage based on a low-resolution version of each frame and background subtraction; a background inpainting stage that restores the damaged background regions after static or moving object removal using the gray-level co-occurrence matrix (GLCM); and a foreground inpainting stage based on an objects repository, in which the GLCM is also used to complete moving objects while they are occluded. The proposed method reduces the inpainting time from hours to a few seconds while maintaining spatial and temporal consistency. It works well when the background contains clutter or fake motion, and it can handle changes in object size and posture. Moreover, it handles full occlusion and illumination changes.
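The background inpainting stage described above selects replacement texture by comparing GLCM statistics rather than performing an exhaustive patch search. The snippet below is a minimal sketch, in plain NumPy, of that general idea: it computes a normalized GLCM for horizontally adjacent pixel pairs, derives a few Haralick-style features (contrast, homogeneity, energy), and picks the candidate source patch whose texture signature is closest to the region surrounding the hole. The function names, the quantization to 16 gray levels, the single (0, 1) offset, and the Euclidean feature distance are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def glcm(patch, levels=16):
    """Normalized GLCM for horizontally adjacent pixel pairs (offset (0, 1)).

    Illustrative sketch only; the paper's offsets and quantization may differ.
    """
    q = (patch.astype(np.int64) * levels) // 256   # quantize 8-bit gray values to `levels` bins
    pairs = levels * q[:, :-1] + q[:, 1:]          # encode each (left, right) gray-level pair
    counts = np.bincount(pairs.ravel(), minlength=levels * levels)
    m = counts.reshape(levels, levels).astype(np.float64)
    return m / m.sum()                             # joint distribution of co-occurring levels

def texture_features(patch, levels=16):
    """Contrast, homogeneity, and energy derived from the GLCM."""
    m = glcm(patch, levels)
    i, j = np.indices((levels, levels))
    contrast = np.sum(m * (i - j) ** 2)
    homogeneity = np.sum(m / (1.0 + np.abs(i - j)))
    energy = np.sqrt(np.sum(m ** 2))
    return np.array([contrast, homogeneity, energy])

def best_source_patch(surround, candidates, levels=16):
    """Pick the candidate patch whose texture signature best matches `surround`."""
    target = texture_features(surround, levels)
    dists = [np.linalg.norm(texture_features(c, levels) - target) for c in candidates]
    return candidates[int(np.argmin(dists))]

# Hypothetical usage: choose the candidate whose texture best matches the hole's surroundings.
rng = np.random.default_rng(0)
surround = rng.integers(0, 256, (32, 32), dtype=np.uint8)                 # region bordering the hole
candidates = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(5)]
fill = best_source_patch(surround, candidates)
```

Because the comparison reduces each patch to a handful of GLCM features, candidate selection is far cheaper than a pixel-wise exhaustive search, which is consistent with the speed-up the abstract reports.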
