Abstract

This paper proposes a low-power very large-scale integration (VLSI) architecture for motion tracking. It uses a hierarchical adaptive structured mesh that generates a content-based video representation. The proposed mesh is a coarse-to-fine hierarchical two-dimensional mesh formed by recursive triangulation of an initial coarse mesh geometry. The structured mesh offers a significant reduction in the number of bits needed to describe the mesh topology, and the motion of the mesh nodes represents the deformation of the video object. The architecture consists of motion estimation and motion compensation units. The motion estimation architecture generates a progressive mesh code together with the motion vectors of the mesh nodes. It reduces power consumption, uses a simple mesh construction scheme, approximates the motion vectors of the mesh nodes with the three-step search algorithm, and evaluates those vectors on a parallel motion estimation core. Moreover, it maximizes the lifetime of the internal buffers. The motion compensation architecture uses a multiplication-free algorithm for the affine transformation, which significantly reduces its complexity; its pipelined affine units contribute further power savings. The motion compensation architecture processes a reference frame, the mesh nodes, and their motion vectors to predict a video frame. It runs parallel threads, each implementing a pipelined chain of scalable affine units. This motion compensation algorithm allows a single simple warping unit to map the hierarchical structure: the affine unit warps the texture of a patch at any level of the hierarchical mesh independently. The processor uses a memory serialization unit that interfaces the memory to the parallel units. The architecture has been prototyped using a top-down low-power design methodology. Performance analysis shows that the processor is suitable for online object-based video applications such as MPEG and VRML.
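The abstract does not spell out the refinement rule, but a standard way to build such a coarse-to-fine structured mesh is to split each triangle into four children at its edge midpoints; in an adaptive scheme, only the triangles flagged by the progressive mesh code would actually split. The sketch below illustrates the regular four-way split in C. The types and the triangulate routine are hypothetical illustrations, not the paper's construction.

```c
#include <stdio.h>

/* A 2-D mesh node and triangle; the names and the midpoint-split rule
 * are illustrative assumptions, not the paper's exact scheme. */
typedef struct { float x, y; } Node;
typedef struct { Node a, b, c; } Triangle;

static Node midpoint(Node p, Node q)
{
    Node m = { (p.x + q.x) / 2.0f, (p.y + q.y) / 2.0f };
    return m;
}

/* Recursively split a triangle into four by joining edge midpoints,
 * producing `level` tiers of a coarse-to-fine hierarchy. At the finest
 * level the triangle is emitted (here: printed; a real implementation
 * would append it to the mesh of that hierarchy level). */
void triangulate(Triangle t, int level)
{
    if (level == 0) {
        printf("(%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f)\n",
               t.a.x, t.a.y, t.b.x, t.b.y, t.c.x, t.c.y);
        return;
    }
    Node ab = midpoint(t.a, t.b);
    Node bc = midpoint(t.b, t.c);
    Node ca = midpoint(t.c, t.a);
    Triangle kids[4] = {
        { t.a, ab, ca }, { ab, t.b, bc },
        { ca, bc, t.c }, { ab, bc, ca }
    };
    for (int i = 0; i < 4; i++)
        triangulate(kids[i], level - 1);
}
```

Because every split follows the same fixed rule, the topology of the refined mesh is implied by which triangles split, which is why a structured mesh of this kind can be described with very few bits.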
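The three-step search named in the abstract is the classic block-matching scheme: starting from a step size of four pixels, the eight neighbours of the current best displacement are tested, the step is halved, and the process repeats, covering a ±7-pixel window with at most 25 cost evaluations instead of the 225 of an exhaustive search. A minimal C sketch follows; the block size, the SAD cost function, and the function names are illustrative assumptions, and a real implementation would centre one such search on each mesh node.

```c
#include <stdlib.h>
#include <limits.h>

#define BLOCK 8   /* block size; an illustrative choice */

/* Sum of absolute differences between the BLOCK x BLOCK block of the
 * current frame anchored at (cx, cy) and the reference-frame block
 * displaced by (dx, dy). Frames are row-major; the (cx, cy) block is
 * assumed to lie fully inside the frame. */
static long sad(const unsigned char *cur, const unsigned char *ref,
                int width, int height,
                int cx, int cy, int dx, int dy)
{
    long acc = 0;
    for (int y = 0; y < BLOCK; y++) {
        for (int x = 0; x < BLOCK; x++) {
            int rx = cx + dx + x, ry = cy + dy + y;
            if (rx < 0 || ry < 0 || rx >= width || ry >= height)
                return LONG_MAX;        /* reject out-of-frame candidates */
            acc += abs(cur[(cy + y) * width + (cx + x)]
                     - ref[ry * width + rx]);
        }
    }
    return acc;
}

/* Three-step search: test the 8 neighbours of the current best
 * displacement at step 4, then 2, then 1, keeping the best candidate. */
void three_step_search(const unsigned char *cur, const unsigned char *ref,
                       int width, int height, int cx, int cy,
                       int *best_dx, int *best_dy)
{
    int bx = 0, by = 0;
    long best = sad(cur, ref, width, height, cx, cy, 0, 0);
    for (int step = 4; step >= 1; step /= 2) {
        int ox = bx, oy = by;
        for (int sy = -1; sy <= 1; sy++) {
            for (int sx = -1; sx <= 1; sx++) {
                if (sx == 0 && sy == 0) continue;
                int dx = ox + sx * step, dy = oy + sy * step;
                long cost = sad(cur, ref, width, height, cx, cy, dx, dy);
                if (cost < best) { best = cost; bx = dx; by = dy; }
            }
        }
    }
    *best_dx = bx;
    *best_dy = by;
}
```

The fixed, data-independent candidate pattern is what makes the algorithm attractive for a parallel hardware core: the nine SAD evaluations of each step can be computed concurrently.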
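The abstract does not give the multiplication-free affine algorithm itself. One common way to remove per-pixel multiplications from an affine warp is forward differencing: since u = a11*x + a12*y + tx is linear in x and y, stepping one pixel right adds a11 to u and stepping one row down adds a12, so after a per-patch setup the inner loop uses only fixed-point additions and shifts. The C sketch below shows this under that assumption; it is not necessarily the authors' exact scheme, and nearest-neighbour sampling is used to keep it short.

```c
#include <stdint.h>

/* Fixed-point format: 16 fractional bits (Q16). The affine coefficients
 * of (u, v) = A*(x, y) + t are supplied by the caller in Q16. */
#define FRAC 16

/* Warp a W x H destination patch from src by forward differencing.
 * After the per-patch setup, each pixel costs two additions and two
 * shifts -- no multiplications -- which is one plausible basis for a
 * multiplication-free affine unit. The caller must guarantee that the
 * mapped coordinates stay inside src (no bounds checks here). */
void affine_warp_patch(const uint8_t *src, int stride,
                       uint8_t *dst, int W, int H,
                       int32_t a11, int32_t a12, int32_t tx,
                       int32_t a21, int32_t a22, int32_t ty)
{
    int32_t u_row = tx, v_row = ty;     /* source coords at (0, y), Q16 */
    for (int y = 0; y < H; y++) {
        int32_t u = u_row, v = v_row;
        for (int x = 0; x < W; x++) {
            dst[y * W + x] = src[(v >> FRAC) * stride + (u >> FRAC)];
            u += a11;                   /* step one pixel right */
            v += a21;
        }
        u_row += a12;                   /* step one row down */
        v_row += a22;
    }
}
```

An adder-and-shift inner loop of this kind maps directly onto a pipelined datapath, which is consistent with the abstract's claim that pipelined affine units reduce both complexity and power. Because each triangular patch is warped independently, the same unit can serve any level of the mesh hierarchy.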
