Abstract

A robust system for extracting temporally coherent masks of physically meaningful objects in video sequences is proposed. The system is based on six modules combining three-dimensional nonlinear diffusion filters, morphological operators, and a region-merging technique that exploits information from previous frames and motion compensation. An initial mask is produced using a semiautomatic segmentation tool for still images. The video sequence is then filtered to remove high-frequency components caused by noise and texture in the spatial domain and to smooth abrupt changes in the temporal domain. The prefiltered sequence is passed to the main processing unit, which initializes the segmentation using morphological tools; information from both the YUV and HSV color spaces is used at this stage. Highly accurate segmentation areas delimited by watershed lines are fused to build larger regions representing objects in the scene. This process is carried out by a region-merging module that uses information from the previously segmented frame together with motion compensation. Several computer simulations were conducted to assess the performance of the proposed system.
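The per-frame core of such a pipeline (smoothing, watershed oversegmentation, region merging) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-image (version 0.19 or later), approximates the three-dimensional nonlinear diffusion filter with a simple Gaussian blur, omits the temporal filtering and motion-compensated merging, and uses hypothetical names and parameters (segment_frame, smoothing_sigma, n_markers, merge_threshold).

```python
# Minimal per-frame sketch: spatial smoothing, watershed oversegmentation,
# and color-based region merging. The paper's 3-D nonlinear diffusion and
# motion-compensated merging are NOT reproduced here; a Gaussian blur and a
# mean-color region adjacency graph stand in for them.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed
from skimage.util import img_as_float
from skimage import graph


def segment_frame(frame_rgb, smoothing_sigma=1.5, n_markers=250,
                  merge_threshold=0.08):
    """Return an integer label map of merged regions for one RGB frame."""
    frame = img_as_float(frame_rgb)
    gray = rgb2gray(frame)

    # Spatial prefiltering (stand-in for the nonlinear diffusion step).
    smoothed = gaussian(gray, sigma=smoothing_sigma)

    # Oversegmentation: watershed lines on the gradient magnitude.
    gradient = sobel(smoothed)
    labels = watershed(gradient, markers=n_markers, compactness=0.001)

    # Region merging: fuse adjacent basins with similar mean color.
    rag = graph.rag_mean_color(frame, labels)
    merged = graph.cut_threshold(labels, rag, merge_threshold)
    return merged
```

In a full system along the lines described above, the label map produced for frame t would additionally be reconciled with the (motion-compensated) labels of frame t-1 so that object masks stay temporally coherent; that step is omitted from this sketch.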
