Abstract

The problem of segmenting image sequences based on 2D motion has been studied for many years. Most early approaches were either region-based, performing some form of robust motion estimation, or boundary-based, instead tracking the bounding contours of the moving image region. In this paper, we explore an approach based on a synergy between these two: while motion constraints often violate their underlying assumptions at region boundaries, image edges are a rich source of information precisely there. The approach we propose uses feed-forward of region-based information to propagate boundary estimates, feedback of the boundaries to improve motion estimation, and motion-based warping to compare image appearance between frames, which provides additional information for the boundary estimation process. We show results from an implementation in which hierarchical, layered-motion estimation using parametric models is coupled with a distance-transform based active contour. The system is shown to provide stable and accurate segmentation in sequences with background motion and multiple moving objects, and quantitative measures are proposed and reported for these sequences. Finally, we detail a modification that allows the system to incorporate a Condensation algorithm tracker without requiring prior off-line learning.
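
To illustrate the motion-based warping step described above, the following is a minimal sketch, not the paper's implementation: it warps one frame toward its neighbour under a hypothetical affine parametric motion for a single layer and uses the residual image as extra evidence for boundary estimation. The helper name `warp_residual`, the parameter matrix `theta`, and the use of OpenCV are assumptions made for illustration only.

```python
# Sketch (assumed, not the authors' code): warp frame t+1 back toward frame t
# under an affine parametric motion hypothesis for one layer, then use the
# per-pixel residual as evidence for where that layer's boundary lies.
import cv2
import numpy as np

def warp_residual(frame_t, frame_t1, theta):
    """Warp frame_t1 by the 2x3 affine matrix `theta` and return |frame_t - warped|.

    A high residual inside a layer's support suggests the pixel does not follow
    that layer's motion, i.e. it likely lies near or beyond the region boundary.
    """
    h, w = frame_t.shape[:2]
    warped = cv2.warpAffine(frame_t1, theta, (w, h), flags=cv2.INTER_LINEAR)
    return cv2.absdiff(frame_t.astype(np.float32), warped.astype(np.float32))

# Toy usage: frame t+1 is frame t shifted right by 2 pixels, so the backward
# warp (tx = -2) realigns it and the residual is small away from the borders.
frame_t = np.random.rand(120, 160).astype(np.float32)
frame_t1 = np.roll(frame_t, 2, axis=1)
theta = np.array([[1, 0, -2], [0, 1, 0]], dtype=np.float32)
residual = warp_residual(frame_t, frame_t1, theta)
print(residual.mean())
```

In a full system the affine parameters would come from the region-based motion estimator for each layer, and the residual map would feed the active-contour boundary update rather than being inspected directly.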
