Abstract

The heavy use of camera phones and other mobile devices all over the world has produced a market for mobile image analysis, including image segmentation to separate out objects of interest. Automatic image segmentation algorithms, when employed by many different users for multiple applications, cannot guarantee high-quality results, yet interactive algorithms require human effort that may become quite tedious. To reduce human effort and achieve better results, it is worthwhile to know in advance which images are difficult to segment and may require further user interaction or alternate processing. For this purpose, we introduce a new research problem: how to estimate the difficulty level of image segmentation without actually performing it. We formulate it as an estimation problem and develop a linear regression model that uses image features to predict segmentation difficulty. Different image features, including graytone, color, gradient, and texture features, are tested as the predictive variables, and the segmentation algorithm's performance measure is the response variable. We use the benchmark images of the Berkeley segmentation dataset with corresponding F-measures to fit, test, and choose the optimal model. Additional standard image datasets are used to further verify the model's applicability to a variety of images. A new feature that combines information from the log histogram of log gradient and the local binary pattern histogram is a good predictor and provides the best balance of predictive performance and model complexity.
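The core idea above — regressing a segmentation-quality score (F-measure) onto a per-image feature — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature values below are hypothetical stand-ins for a difficulty score that would, in the paper, be derived from the log histogram of log gradient and the local binary pattern histogram.

```python
# Sketch of the estimation setup described in the abstract: fit a simple
# linear regression of F-measure on one image feature, then use the
# fitted line to predict difficulty for a new image. Hypothetical data.

def fit_linear(xs, ys):
    """Closed-form simple linear regression: y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var          # slope
    b = my - a * mx        # intercept (line passes through the means)
    return a, b

# Hypothetical training data: one feature value per image (predictor)
# paired with the segmentation F-measure on that image (response).
features = [0.10, 0.25, 0.40, 0.55, 0.70]
f_measures = [0.85, 0.78, 0.70, 0.62, 0.55]

a, b = fit_linear(features, f_measures)

def predict_f_measure(x):
    return a * x + b

# An image whose predicted F-measure falls below a chosen threshold
# would be flagged as hard to segment (candidate for user interaction).
is_hard = predict_f_measure(0.80) < 0.60
```

In practice one would fit such a model on the Berkeley segmentation dataset's F-measures, as the abstract describes, and select among candidate feature sets by predictive performance versus model complexity.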
