Abstract
In this paper we outline a fully parallel, locally connected computational model for segmenting motion events in video sequences based on spatial and motion information. Extracting motion information from video sequences is very time consuming: most of the computing effort goes into estimating motion vector fields, defining objects, and determining the exact boundaries of those objects. The split-and-merge segmentation of the small regions produced by oversegmentation requires an optimization process. Our proposed algorithm starts from an oversegmented image and merges segments using spatial and temporal auxiliary data: motion fields and motion history computed from consecutive image frames. This grouping process is driven by a similarity measure between neighboring segments based on intensity, speed, and the time depth of the motion history. A feedback step checks each merge, so the cancellation of a segment border can be accepted or refused. Our parallel approach is independent of the number of segments and objects: instead of a graph representation of these components, image features are defined at the pixel level. We use simple, VLSI-implementable functions such as arithmetic and logical operators, local memory transfers, and convolution. These elementary instructions build up the basic routines: motion displacement field detection, disocclusion removal, anisotropic diffusion, and grouping by stochastic optimization. This relaxation-based motion segmentation can serve as a basic step in the efficient coding of image sequences and in automatic motion-tracking systems. The proposed system is ready to be implemented on a Cellular Nonlinear Network chip-set architecture.
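The merging step described above can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the similarity measure here combines only mean intensity and mean motion (omitting motion-history depth), the weights and threshold are made-up parameters, and a greedy union-find pass stands in for the stochastic optimization and the feedback check on border cancellation.

```python
import numpy as np

def segment_dissimilarity(seg_a, seg_b, w_int=1.0, w_mot=1.0):
    """Dissimilarity between two segments from mean intensity and mean motion.
    The dict fields 'intensity' and 'motion' and the weights are illustrative
    assumptions, not taken from the paper."""
    d_int = abs(seg_a["intensity"] - seg_b["intensity"])
    d_mot = np.linalg.norm(np.asarray(seg_a["motion"]) - np.asarray(seg_b["motion"]))
    return w_int * d_int + w_mot * d_mot

def merge_pass(segments, adjacency, threshold):
    """One greedy merge pass: union neighboring segments whose dissimilarity
    falls below the threshold. A stand-in for the relaxation-based grouping."""
    parent = list(range(len(segments)))

    def find(i):
        # Union-find root lookup with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in adjacency:
        ra, rb = find(a), find(b)
        if ra != rb and segment_dissimilarity(segments[a], segments[b]) < threshold:
            parent[rb] = ra  # cancel the border between the two segments
    return [find(i) for i in range(len(segments))]

# Toy example: segments 0 and 1 are similar in intensity and motion, 2 is not.
segments = [
    {"intensity": 0.50, "motion": (1.0, 0.0)},
    {"intensity": 0.52, "motion": (1.0, 0.1)},
    {"intensity": 0.90, "motion": (-2.0, 0.0)},
]
labels = merge_pass(segments, adjacency=[(0, 1), (1, 2)], threshold=0.5)
# segments 0 and 1 receive the same label; segment 2 keeps its own
```

In the full system this decision would be made in parallel on the pixel level via local operators and convolution, not over an explicit segment list as sketched here.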