Abstract

Concurrent Self-Organizing Maps (CSOMs) address the pattern classification problem in a parallel-processing way, aiming to minimize a suitable objective function. Similarly, Active Contour Models (ACMs) (e.g., the Chan-Vese (CV) model) treat image segmentation as an optimization problem, minimizing a suitable energy functional. Achieving effective ACM-based segmentation remains a challenge in many computer vision applications. In this paper, we propose a novel regional ACM, which relies on a CSOM to approximate the foreground and background image intensity distributions in a supervised way, and to drive the active-contour evolution accordingly. We term our model the Concurrent Self-Organizing Map-based Chan-Vese (CSOM-CV) model. Its main idea is to integrate the global information extracted by a CSOM from a few supervised pixels into the level-set framework of the CV model, so as to build an effective ACM. Experimental results show the effectiveness of CSOM-CV in segmenting synthetic and real images, when compared with the stand-alone CV and CSOM models.
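For context, a minimal sketch of the energy being minimized may help. The standard CV functional, in its level-set formulation (with the area term omitted, as it is commonly set to zero), is

\[
E_{CV}(\phi, c_1, c_2) = \mu \int_{\Omega} \delta(\phi)\,|\nabla \phi|\,dx
+ \lambda_1 \int_{\Omega} |I(x) - c_1|^2\, H(\phi)\,dx
+ \lambda_2 \int_{\Omega} |I(x) - c_2|^2\, \bigl(1 - H(\phi)\bigr)\,dx,
\]

where \(H\) is the Heaviside function, \(\delta\) its derivative, and \(c_1\), \(c_2\) the mean intensities inside and outside the zero level set of \(\phi\). One plausible reading of the CSOM-CV idea described above (an illustrative sketch with assumed notation \(w_{FG}\), \(w_{BG}\), not the paper's exact formulation) is that the constant averages \(c_1\) and \(c_2\) are replaced by the prototypes learned by the foreground and background SOMs from the supervised pixels:

\[
E_{CSOM\text{-}CV}(\phi) \approx \mu \int_{\Omega} \delta(\phi)\,|\nabla \phi|\,dx
+ \lambda_1 \int_{\Omega} \bigl|I(x) - w_{FG}(x)\bigr|^2 H(\phi)\,dx
+ \lambda_2 \int_{\Omega} \bigl|I(x) - w_{BG}(x)\bigr|^2 \bigl(1 - H(\phi)\bigr)\,dx,
\]

where \(w_{FG}(x)\) and \(w_{BG}(x)\) denote the weights of the best-matching units of the foreground and background SOMs for the intensity \(I(x)\).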
