Abstract

Contextual information has been widely used as a rich cue for segmenting multiple objects in an image: a contextual model exploits the relationships between the objects in a scene to facilitate their detection and segmentation. Using contextual information from different objects effectively for object segmentation, however, remains a difficult problem. In this paper, we introduce a novel framework, called the multiclass multiscale (MCMS) series contextual model, which uses contextual information from multiple objects at different scales to learn discriminative models in a supervised setting. The MCMS model incorporates inter-object and intra-object information into one probabilistic framework and is thus able to capture geometric relationships and dependencies among multiple objects, in addition to local information from each single object present in an image. We demonstrate that our MCMS model improves object segmentation performance in electron microscopy images and provides a coherent segmentation of multiple objects. By speeding up the segmentation process, the proposed method will allow neurobiologists to move beyond individual specimens and analyze populations, paving the way for understanding neurodegenerative diseases at the microscopic level.
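A series contextual model of the kind summarized above repeatedly feeds a classifier's probability map back as input features, sampled at multiple scales so that coarser scales summarize wider spatial context. The sketch below illustrates that multiscale-context idea only; the function name, the dyadic scales, and the mean-pooling choice are illustrative assumptions, not the paper's actual feature set or implementation.

```python
import numpy as np

def multiscale_context_features(prob_map, scales=(1, 2, 4, 8)):
    """Illustrative sketch (not the paper's exact features): for each
    scale s, average the probability map over (2s+1) x (2s+1) windows,
    so larger s captures context from a wider neighborhood. The stacked
    result would be concatenated with local image features and passed
    to the next classifier stage in a series model."""
    h, w = prob_map.shape
    feats = []
    for s in scales:
        pooled = np.empty_like(prob_map)
        for i in range(h):
            for j in range(w):
                # Clip the window at the image border before averaging.
                i0, i1 = max(0, i - s), min(h, i + s + 1)
                j0, j1 = max(0, j - s), min(w, j + s + 1)
                pooled[i, j] = prob_map[i0:i1, j0:j1].mean()
        feats.append(pooled)
    return np.stack(feats, axis=-1)  # shape (h, w, len(scales))
```

In a multiclass setting, one such feature stack per class probability map would let each stage see the spatial layout of *all* object classes, which is how inter-object dependencies enter the model.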
