Abstract

A robust system for the extraction of temporally coherent masks of physically meaningful objects in video sequences is proposed. The system is based on six modules combining three-dimensional nonlinear diffusion filters, morphological operators, and a region merging technique that exploits information from previous frames and motion compensation. An initial mask is produced using a semiautomatic segmentation tool for still images. The video sequence is then filtered to remove high-frequency components due to noise and texture in the spatial domain and to smooth abrupt changes in the temporal domain. The prefiltered sequence is passed to the main processing unit, which initializes the segmentation using morphological tools; here, information from the YUV and HSV color spaces is used. Highly accurate segmentation areas defined by watershed lines are fused to build larger regions representing objects in the scene. This process is carried out by a region merger module that uses information from the previously segmented frame and motion compensation. Several computer simulations were conducted to assess the performance of the proposed system.
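To make the pipeline described above more concrete, the following is a minimal sketch of the per-frame spatial stage, assuming NumPy and scikit-image. It is not the authors' implementation: the three-dimensional spatio-temporal diffusion filter is approximated by a simple 2D Perona-Malik diffusion on the luminance channel, and the motion-compensated region merger is left as a placeholder. Function names and parameters (`perona_malik`, `segment_frame`, `marker_quantile`) are illustrative assumptions, not the paper's terminology.

```python
import numpy as np
from skimage import color, filters, segmentation, measure


def perona_malik(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Simplified 2D nonlinear (Perona-Malik) diffusion on a grayscale frame.

    The real system uses a 3D filter that also smooths along the temporal
    axis; only the spatial part is sketched here.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences toward the four axis-aligned neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function: diffusion is suppressed across strong edges,
        # so noise and texture are smoothed while object boundaries survive.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u


def segment_frame(rgb_frame, marker_quantile=0.3):
    """Watershed segmentation of one frame using luminance and HSV information."""
    # Luminance (the Y of YUV, approximated here by rgb2gray) and HSV channels.
    y = color.rgb2gray(rgb_frame)
    hsv = color.rgb2hsv(rgb_frame)

    # Spatial prefiltering to remove high-frequency noise and texture.
    y_smooth = perona_malik(y)

    # Gradient magnitude drives the watershed: flat areas become basins,
    # strong edges become watershed lines.
    grad = filters.sobel(y_smooth)

    # Markers: connected components of low-gradient pixels.
    markers = measure.label(grad < np.quantile(grad, marker_quantile))
    labels = segmentation.watershed(grad, markers)

    # Placeholder for the region merger: in the full system, watershed regions
    # would be fused using HSV color statistics and the labels of the
    # motion-compensated previously segmented frame.
    return labels, hsv
```

The over-segmented watershed labels returned here would then be passed, frame by frame, to the merging stage that groups regions into object masks.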
