Abstract

The advent of inexpensive smartphones, tablets, and phablets equipped with cameras means that the average person now captures cherished moments as images and videos and shares them on the internet. In many situations, however, an amateur photographer may be frustrated with the captured images; for example, the object of interest may be occluded by a fence. Image de-fencing methods currently available in the literature are limited by non-robust fence detection and can handle only static occluded scenes whose video is captured under constrained camera motion. In this work, we propose an algorithm to obtain a de-fenced image using a few frames from a video of the occluded static or dynamic scene. We also present a new fenced image database captured under challenging conditions such as clutter, poor lighting, and viewpoint distortion. We first propose a supervised learning-based approach to detect fence pixels and validate its performance with both qualitative and quantitative results. We rely on the idea that freehand panning of the fenced scene is likely to render pixels hidden in the reference frame visible in other frames of the captured video. Our approach requires the solution of three problems: (i) detection of the spatial locations of fences/occlusions in the frames of the video, (ii) estimation of the relative motion between the observations, and (iii) data fusion to fill in the occluded pixels of the reference image. We model the de-fenced image as a Markov random field and obtain its maximum a posteriori estimate by solving the corresponding inverse problem. Several experiments on synthetic and real-world data demonstrate the effectiveness of the proposed approach.
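The three subproblems above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the fence masks and the inter-frame motion are assumed to be given (steps (i) and (ii)), the motion is simplified to integer translations, and the fusion step (iii) is posed as a MAP estimate under a quadratic (Gaussian) MRF smoothness prior, solved by gradient descent. All function and parameter names are illustrative.

```python
import numpy as np

def defence_map(frames, fence_masks, shifts, lam=0.5, step=0.1, iters=200):
    """Toy MAP de-fencing: fuse unoccluded pixels from motion-compensated
    frames and regularize with a quadratic MRF (discrete Laplacian) prior.

    frames      : list of HxW float arrays (reference frame first)
    fence_masks : list of HxW arrays, 1 where a fence pixel occludes the scene
    shifts      : list of integer (dy, dx) translations of each frame
                  relative to the reference (a stand-in for real motion
                  estimation, which would use dense optical flow)
    """
    H, W = frames[0].shape
    data = np.zeros((H, W))    # sum of visible, warped observations
    weight = np.zeros((H, W))  # number of frames observing each pixel
    for f, m, (dy, dx) in zip(frames, fence_masks, shifts):
        # Warp the observation and its visibility map back to the reference grid
        fw = np.roll(f, (-dy, -dx), axis=(0, 1))
        vw = np.roll(1.0 - m, (-dy, -dx), axis=(0, 1))
        data += vw * fw
        weight += vw
    # Initialize with the per-pixel average of visible observations
    x = np.where(weight > 0, data / np.maximum(weight, 1e-9), 0.0)
    # Gradient descent on: sum_k v_k (x - f_k)^2 + lam * ||grad x||^2
    for _ in range(iters):
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        grad = (weight * x - data) - lam * lap
        x -= step * grad
    return x
```

In this sketch the data term pulls each pixel toward the observations in which it is visible, while the Laplacian term encodes the MRF assumption that neighboring pixels of the de-fenced image are similar, filling in pixels occluded in every frame. The actual method in the paper uses learned fence detection and estimated scene motion rather than the given masks and shifts assumed here.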
