Abstract

One of the most fundamental problems in image processing and computer vision is the inherent ambiguity between texture edges and object boundaries in real-world images and video. Despite this ambiguity, many applications in computer vision and image processing use image edge strength under the assumption that these edges approximate object depth boundaries. However, this assumption is often invalidated by real-world data, and the discrepancy is a significant limitation in many of today's image processing methods. We address this issue by introducing a simple, low-level, patch-consistency assumption that leverages the extra information present in video data to resolve this ambiguity. By analyzing how well patches can be modeled by simple transformations over time, we obtain an indication of which image edges correspond to texture edges versus object boundaries. Our approach is simple to implement and has the potential to improve a wide range of image- and video-based applications by suppressing the detrimental effects of strong texture edges on regularization terms. We validate our approach by presenting results on a variety of scene types and by directly incorporating our augmented edge map into existing image segmentation and optical flow applications, showing results that better correspond to object boundaries.
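The core intuition can be sketched in a few lines: if a patch in one frame is well explained by a simple transformation into the next frame, any strong edge inside it is likely a texture edge on a single surface; a patch straddling an object boundary resists such modeling because its two sides move independently or occlude one another. The sketch below is illustrative only, assuming pure translation as the "simple transformation" and SSD residual as the consistency measure; the function name, patch size, and search radius are our own placeholders, not the paper's implementation.

```python
import numpy as np

def patch_consistency_score(frame_t, frame_t1, y, x, half=4, search=3):
    """Residual of the best single-translation model for the patch at (y, x).

    Low residual -> the patch is consistent over time (edges inside it are
    likely texture edges); high residual -> no one translation explains the
    patch, suggesting an object/depth boundary. Translation stands in for
    the paper's broader class of simple transformations (an assumption).
    """
    p = frame_t[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best = np.inf
    # Exhaustively test small integer translations and keep the lowest SSD.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = frame_t1[y + dy - half:y + dy + half + 1,
                         x + dx - half:x + dx + half + 1].astype(np.float64)
            if q.shape != p.shape:  # skip windows clipped by the image border
                continue
            best = min(best, np.mean((p - q) ** 2))
    return best
```

In a full pipeline, this per-pixel score would be aggregated over several frames and combined with the raw edge map, so that strong texture edges with low residual are suppressed before the map feeds a segmentation or optical-flow regularizer.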
