Abstract

Challenging biomedical segmentation problems can be addressed by combining top-down information based on the known anatomy with bottom-up models of the image data. Anatomical priors can be provided by probabilistic atlases; in many cases, however, the available atlases are inadequate. We present a novel method for the co-segmentation of multiple images into multiple regions when only very few annotated examples exist. The underlying, unknown anatomy is learned through an interleaved process in which the segmentation of each region is supported both by the segmentation of the neighboring regions that share common boundaries and by the segmentation of the corresponding regions in the other jointly segmented images. The method is applied to a mouse brain MRI dataset for the segmentation of five anatomical structures. Experimental results demonstrate the segmentation accuracy achieved with respect to the complexity of the data.
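To make the interleaved scheme concrete, the sketch below (Python, using only NumPy) shows one possible iteration structure, not the authors' actual model: each image's labeling is revisited in turn, and every pixel is reassigned to the region whose toy score combines an intensity term with agreement with the corresponding region in the other jointly segmented images. All data, names, and scoring terms here are illustrative assumptions, and the coupling between neighboring regions across shared boundaries is omitted for brevity.

```python
# Illustrative sketch of an interleaved multi-image, multi-region refinement loop.
# This is NOT the paper's algorithm; all quantities below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

n_images, n_regions, h, w = 3, 5, 32, 32
images = rng.random((n_images, h, w))                   # toy intensity images
labels = rng.integers(0, n_regions, (n_images, h, w))   # rough initial segmentations

def region_score(image, own_labels, region, other_labels):
    """Toy per-pixel score for `region`: closeness of intensity to the region's
    current mean, plus agreement with the same region in the other images."""
    mask = own_labels == region
    mean = image[mask].mean() if mask.any() else image.mean()
    intensity_term = -np.abs(image - mean)
    consensus_term = np.mean([ol == region for ol in other_labels], axis=0)
    return intensity_term + consensus_term

n_iters = 5
for _ in range(n_iters):                  # interleaved refinement over all images
    for i in range(n_images):             # revisit each image in turn
        others = [labels[j] for j in range(n_images) if j != i]
        scores = np.stack([region_score(images[i], labels[i], r, others)
                           for r in range(n_regions)])
        labels[i] = scores.argmax(axis=0)  # each pixel takes the best-scoring region
```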
