Abstract
Two-dimensional mesh-based motion tracking preserves neighboring relations (through the connectivity of the mesh) and allows warping transformations between pairs of frames; thus, it effectively eliminates the blocking artifacts that are common in motion compensation by block matching. However, the available uniform 2-D mesh model enforces connectivity everywhere within a frame, which is clearly not suitable across occlusion boundaries. To overcome this limitation, BTBC (background to be covered) detection and MF (model failure) detection algorithms are used. In this algorithm, the connectivity of the mesh elements (patches) across covered and uncovered region boundaries is broken. This is achieved by allowing no node points within the background to be covered and by refining the mesh structure within the model failure region at each frame. We modify the occlusion-adaptive, content-based mesh design and forward-tracking algorithm used by Yucel Altunbasak for the selection of points for triangular 2-D mesh design. We then propose a new triangulation procedure for the mesh structure and a new algorithm to verify the connectivity of the mesh structure after motion-vector estimation of the mesh points. The modified content-based mesh is adaptive, which eliminates the need to transmit all node locations at each frame.
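To make the core idea of mesh-based warping concrete, the following is a minimal sketch (not the paper's implementation) of how one triangular mesh patch can be motion-compensated with an affine warp derived from the motion vectors of its three nodes. All function names and the synthetic node positions are illustrative assumptions.

```python
import numpy as np

def affine_from_triangle(src_pts, dst_pts):
    """Solve the 6-parameter affine map that sends the 3 source nodes
    (src_pts, shape (3, 2)) onto the 3 destination nodes (dst_pts)."""
    A = np.hstack([src_pts, np.ones((3, 1))])   # rows: [x, y, 1]
    # Exact solve for 3 non-collinear points, one system per output coordinate.
    params_x = np.linalg.solve(A, dst_pts[:, 0])
    params_y = np.linalg.solve(A, dst_pts[:, 1])
    return params_x, params_y

def warp_point(p, params_x, params_y):
    """Map a single (x, y) point with the affine parameters."""
    v = np.array([p[0], p[1], 1.0])
    return np.array([v @ params_x, v @ params_y])

if __name__ == "__main__":
    # Node positions of one patch in frame k and their tracked positions in
    # frame k+1 (hypothetical motion vectors).
    src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    dst = src + np.array([[1.0, 0.5], [1.2, 0.4], [0.8, 0.9]])
    px, py = affine_from_triangle(src, dst)
    # Interior points are warped consistently with the patch nodes, which is
    # what avoids the discontinuities seen at block-matching boundaries.
    print(warp_point((3.0, 3.0), px, py))
```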
Highlights
Motion estimation is an important part of any video processing system and is divided into 2-D motion estimation and 3-D motion estimation. 2-D motion estimation has a wide range of applications, including video compression, motion forward tracking, sampling-rate conversion, filtering, and so on.
Two-dimensional mesh-based motion tracking preserves neighboring relations and allows warping transformations between pairs of frames; it effectively eliminates the blocking artifacts that are common in motion compensation by block matching.
We present an adaptive forward-tracking mesh procedure in which no node points are allowed to lie in the background regions that will be covered (BTBC regions), and the mesh within the model failure region(s) is redefined for subsequent tracking of these regions [21]; a sketch of the node-exclusion step follows below.
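The following is a minimal sketch, under the assumption that the BTBC regions are available as a binary mask: candidate mesh nodes falling inside the background to be covered are simply discarded before triangulation, so no mesh element straddles the occlusion boundary. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def drop_btbc_nodes(candidate_nodes, btbc_mask):
    """Keep only candidate nodes (array of (row, col) integer pairs) that do
    not fall inside the BTBC mask (True = background to be covered)."""
    keep = [not btbc_mask[r, c] for r, c in candidate_nodes]
    return candidate_nodes[np.array(keep)]

if __name__ == "__main__":
    mask = np.zeros((8, 8), dtype=bool)
    mask[:, 5:] = True                        # hypothetical BTBC strip
    nodes = np.array([[1, 1], [2, 6], [4, 3], [6, 7]])
    print(drop_btbc_nodes(nodes, mask))       # nodes inside the strip removed
```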
Summary
Motion estimation is an important part of any video processing system and is divided into 2-D motion estimation and 3-D motion estimation. We define optical flow and its equation, which imposes a constraint between the image gradients and the flow vectors [4]. This is a fundamental equality on which many motion estimation algorithms are based. A global parameterized model, which assumes a single motion for the whole scene and estimates one set of motion parameters per frame, is usually inadequate; it is suitable only if the camera alone is moving or the scene contains a single moving object with a planar surface. For scenes containing multiple moving objects, it is more appropriate to divide an image frame into multiple regions so that the motion within each region can be characterized well by a parameterized model; this is known as region-based motion representation. As a next step, this new algorithm must be tested on real sequences containing deformations or multiple moving objects, and the results must be compared to those of other object-tracking algorithms, especially in the presence of occlusion.
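As an illustration of the constraint between image gradients and flow vectors mentioned above, the sketch below stacks the optical-flow constraint I_x*u + I_y*v + I_t = 0 over a small window and solves it in the least-squares sense (the classic Lucas-Kanade formulation, used here only as an example; the synthetic gradients are made up and this is not the paper's estimator).

```python
import numpy as np

def flow_from_constraints(Ix, Iy, It):
    """Least-squares flow vector (u, v) for a window, given the spatial
    gradients Ix, Iy and the temporal gradient It (flattened 1-D arrays)."""
    A = np.stack([Ix, Iy], axis=1)            # each row: [I_x, I_y]
    b = -It                                   # move I_t to the right-hand side
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Ix = rng.normal(size=25)
    Iy = rng.normal(size=25)
    true_uv = np.array([0.5, -0.25])
    It = -(Ix * true_uv[0] + Iy * true_uv[1])   # consistent with the constraint
    print(flow_from_constraints(Ix, Iy, It))    # recovers roughly (0.5, -0.25)
```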