Abstract

A novel parametric deformable model of a goal object controlled by shape and appearance priors learned from co-aligned training images is introduced. The shape prior is built in a linear space of vectors of distances from the common centroid to the training boundaries. The appearance prior is modeled with a spatially homogeneous 2nd-order Markov-Gibbs random field (MGRF) of gray levels within each training boundary; the geometric structure of the MGRF and its Gibbs potentials are estimated analytically from the training data. To accurately separate goal objects from an arbitrary background, the deformable model evolves by solving an Eikonal partial differential equation with a speed function that combines the shape and appearance priors with the current appearance model. The latter represents the empirical gray-level marginals inside and outside the evolving boundary with adaptive linear combinations of discrete Gaussians (LCDG). The analytical shape and appearance priors, together with a simple Expectation-Maximization procedure for estimating the object and background LCDGs, make our segmentation considerably faster than most known counterparts. Experiments with various images confirm the robustness, accuracy, and speed of our approach.
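
To illustrate the shape prior described above (a linear space of centroid-to-boundary distance vectors built from co-aligned training masks), the following NumPy sketch samples the distance from the centroid to the boundary at equally spaced angles and spans a linear shape space from the training descriptors. The function names, the angular sampling scheme, and the SVD-based basis construction are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

def radial_shape_descriptor(mask, n_angles=128):
    """Vector of distances from the mask centroid to its boundary,
    sampled at n_angles equally spaced directions (assumes a roughly
    star-shaped object, as a centroid-distance description implies)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    h, w = mask.shape
    max_r = int(np.hypot(h, w))
    dists = np.zeros(n_angles)
    for i, theta in enumerate(np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)):
        dy, dx = np.sin(theta), np.cos(theta)
        # March outward from the centroid; the last in-mask radius
        # along this ray is the centroid-to-boundary distance.
        for r in range(max_r):
            y = int(round(cy + r * dy))
            x = int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break
            dists[i] = r
    return dists

def linear_shape_space(training_masks, n_angles=128):
    """Stack the training descriptors; their mean and principal axes
    span a linear shape space serving as the prior."""
    D = np.stack([radial_shape_descriptor(m, n_angles) for m in training_masks])
    mean = D.mean(axis=0)
    # Orthonormal basis of shape variation around the mean descriptor.
    _, _, basis = np.linalg.svd(D - mean, full_matrices=False)
    return mean, basis
```

A candidate boundary can then be described by the same radial sampling and projected onto the returned basis to measure how well it agrees with the learned shape space.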
