Abstract

We propose a method for accurate subject tracking that selects only the tracked subject's boundary edges in a video stream with a changing background and a moving camera. Boundary edge selection is done in two steps: 1) remove background edges using edge motion, and 2) from the remaining edges, select boundary edges using the derivative along the normal direction of the tracked contour. Accurate tracking results from reducing the influence of irrelevant edges by selecting boundary edge pixels only. To remove background edges using edge motion, we compute the tracked subject's motion and the motion of each edge; edges whose motion direction differs from the subject's motion are removed. To select boundary edges using the contour normal direction, we compute image gradient values at every edge pixel and select edge pixels with large gradient values. We use multi-level Canny edge maps to obtain an appropriate level of scene detail. Multi-level edge maps allow robust tracking even when the tracked object's boundary is not clear, because we can adjust the detail level of the edge map for the scene. The computed contour is refined by checking it against a strong (simple) Canny edge map and incorporating strong Canny edge pixels around the computed contour using Dijkstra's minimum-cost routing. Our experimental results show that the proposed tracking approach is robust enough to handle complex-textured scenes.
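
The two-step boundary edge selection described above could be sketched roughly as follows. This is a minimal illustration under assumed details, not the paper's implementation: it uses Farnebäck dense optical flow as a stand-in for the edge-motion estimate, the mean flow inside the contour as the subject motion, gradient magnitude in place of the normal-direction derivative, and a single Canny level rather than multi-level edge maps. All function names and thresholds are hypothetical.

```python
# Hypothetical sketch of the two-step boundary edge selection; names and
# parameter values are illustrative, not taken from the paper.
import numpy as np
import cv2

def select_boundary_edges(frame_gray, prev_gray, contour,
                          canny_lo=50, canny_hi=150,
                          motion_tol_deg=45.0, grad_thresh=40.0):
    """Keep only edge pixels that (1) move with the tracked subject and
    (2) show a strong intensity gradient, approximating boundary edges."""
    # Edge map at one detail level (the paper uses multi-level Canny maps).
    edges = cv2.Canny(frame_gray, canny_lo, canny_hi)
    ys, xs = np.nonzero(edges)

    # Dense optical flow as a stand-in for per-edge motion estimation.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, frame_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    edge_flow = flow[ys, xs]                      # (N, 2) motion per edge pixel

    # Subject motion: mean flow inside the current contour (assumption).
    mask = np.zeros_like(frame_gray)
    cv2.fillPoly(mask, [contour.astype(np.int32)], 255)
    subj_flow = flow[mask > 0].mean(axis=0)

    # Step 1: drop edges whose motion direction differs from the subject's.
    ang = (np.arctan2(edge_flow[:, 1], edge_flow[:, 0])
           - np.arctan2(subj_flow[1], subj_flow[0]))
    ang = np.abs((ang + np.pi) % (2 * np.pi) - np.pi)   # wrap to [0, pi]
    keep = ang < np.deg2rad(motion_tol_deg)

    # Step 2: keep only edge pixels with a large image gradient
    # (the paper measures the derivative along the contour normal).
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.hypot(gx[ys, xs], gy[ys, xs])
    keep &= grad_mag > grad_thresh

    return np.stack([xs[keep], ys[keep]], axis=1)   # (M, 2) boundary edge pixels
```

The surviving pixels would then feed the contour refinement step, where strong Canny edge pixels near the computed contour are linked with Dijkstra's minimum-cost routing.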
