Abstract

Summary form only given. Video segmentation is usually termed foreground (moving-object) segmentation in the fixed-camera scenario and independent-motion segmentation in the moving-platform scenario. It is a fundamental step in many vision systems, including video surveillance, human-machine interfaces, and very low-bandwidth telecommunications. Accurate foreground segmentation is difficult because of factors such as illumination variation, occlusion, background movement, and noise. We present a transform-domain approach that employs a set of DCT-based features to exploit the spatial and temporal correlation in video sequences. The approach is shown to be insensitive to illumination change and noise, and it overcomes common segmentation difficulties such as foreground aperture and moved background objects. The algorithm runs in real time. If time allows, I will briefly show some results for the moving-camera case, where estimation of the camera's ego-motion is the key issue. Our approach solves an optimization problem for the FOE (focus of expansion); the flow field defined by the FOE can then be used to detect independent motions on the road.
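The abstract does not specify which DCT coefficients are used or how the background is modeled, so the following is only a rough illustration of the general idea: per-block DCT features whose AC terms are unaffected by uniform brightness shifts (the DC coefficient absorbs them and is dropped). The block size, number of coefficients, and threshold below are all illustrative assumptions, not the paper's values.

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II of a square block, built from the DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row of the orthonormal DCT matrix
    return C @ block @ C.T

def block_features(frame, bsize=8, ncoef=4):
    """Per-block DCT features: keep a few AC coefficients (raster order here;
    a real system would typically use a zigzag scan) and skip the DC term,
    so a uniform illumination change leaves the features unchanged."""
    h, w = frame.shape
    feats = {}
    for y in range(0, h - bsize + 1, bsize):
        for x in range(0, w - bsize + 1, bsize):
            c = dct2(frame[y:y + bsize, x:x + bsize].astype(float))
            feats[(y, x)] = np.abs(c).ravel()[1:1 + ncoef]  # drop DC at index 0
    return feats

def foreground_mask(frame, bg_feats, bsize=8, thresh=20.0):
    """Flag a block as foreground when its feature distance to the
    background model exceeds a threshold (illustrative value)."""
    cur = block_features(frame, bsize)
    return {pos: float(np.linalg.norm(cur[pos] - bg_feats[pos])) > thresh
            for pos in cur}
```

Because the AC coefficients of an 8x8 DCT sum each spatial row against zero-mean cosines, adding a constant brightness offset to a frame changes only the DC term, which is exactly the property the abstract's illumination-insensitivity claim relies on.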

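The abstract states only that the FOE is found by solving an optimization problem; one standard formulation (assumed here, not necessarily the paper's objective) is a least-squares fit: under pure camera translation every flow vector lies on a line through its image point and the FOE, so the FOE can be taken as the point minimizing the summed squared perpendicular distance to those lines. Flow vectors that disagree with the resulting radial field are candidate independent motions. All function names and the angular threshold are hypothetical.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares FOE: minimize sum_i (n_i . (e - p_i))^2, where n_i is
    the unit normal of flow vector v_i at point p_i. Reduces to a 2x2
    linear system (sketch of one common formulation)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, v in zip(points, flows):
        n = np.array([-v[1], v[0]], dtype=float)  # perpendicular to the flow
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # ignore (near-)zero flow vectors
        n /= norm
        A += np.outer(n, n)
        b += n * (n @ np.asarray(p, dtype=float))
    return np.linalg.solve(A, b)

def residual_motions(points, flows, foe, ang_thresh_deg=15.0):
    """Flag flow vectors whose direction deviates from the radial field
    defined by the FOE; these are candidate independently moving objects."""
    flags = []
    for p, v in zip(points, flows):
        r = np.asarray(p, dtype=float) - foe
        c = (r @ v) / (np.linalg.norm(r) * np.linalg.norm(v) + 1e-12)
        flags.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))) > ang_thresh_deg)
    return flags
```

In practice the flow estimates are noisy and contaminated by the independent motions themselves, so a robust variant (RANSAC or iterative reweighting over the same objective) would normally replace the plain least-squares solve.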