Abstract

Developers of image processing routines rely on benchmark data sets for quantitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, i.e. the quality of an algorithm as a function of the amount of distortion, is often important. However, with the available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible, because ground truth data about object margins and classes, as well as information about the distortion, are missing. We present a new framework for robustness evaluation. Its key component is an image benchmark containing 9 object classes together with the ground truth required for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, for segmentation and classification success, and for robustness. We place a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects.

Highlights

  • Image processing is a means for automatically extracting image contents, often used in science (e.g. biological readouts, surveying and mapping [4], particle accelerators [5] and man-machine interaction [6, 7]) or industry

  • Image processing routines need to be developed for image data sets containing a set of similar images

  • An image processing routine consists of the elements preprocessing and filtering, segmentation, interpretation and quantification, each consisting of further sub-units or operators building a so-called pipeline

Summary

Introduction

Image processing routines need to be developed for image data sets containing a set of similar images. This is the case in many real-time acquisition systems (e.g. surveillance cameras) and in lab equipment (e.g. high-throughput microscopic imaging) (PLOS ONE, DOI:10.1371/journal.pone.0131098). An image processing routine consists of the elements preprocessing and filtering, segmentation, interpretation and quantification, each of which consists of further sub-units or operators, together building a so-called pipeline. The segmentation seeks to assign each pixel a property (e.g. being part of a structure or not), the interpretation assigns pixels with the same property to the same object, and the quantification assigns each object a feature vector of numbers describing its properties. Classification algorithms are then applied to assign a label to each found object based on its feature vector.
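The pipeline elements described above can be sketched as a short Python function. This is a minimal illustration, not the paper's implementation: the function name `run_pipeline`, the smoothing filter, the fixed threshold, and the two example features (area, mean intensity) are all illustrative choices; scipy's `ndimage` module is used for filtering and connected-component labeling.

```python
import numpy as np
from scipy import ndimage

def run_pipeline(image, threshold=0.5):
    """Minimal pipeline sketch: preprocessing -> segmentation ->
    interpretation -> quantification."""
    # Preprocessing and filtering: smooth the image to suppress noise.
    smoothed = ndimage.uniform_filter(image, size=3)
    # Segmentation: assign each pixel a property (foreground or background).
    mask = smoothed > threshold
    # Interpretation: group pixels with the same property into objects.
    labels, n_objects = ndimage.label(mask)
    # Quantification: one feature vector per object
    # (here: area in pixels and mean intensity).
    features = []
    for obj_id in range(1, n_objects + 1):
        pixels = image[labels == obj_id]
        features.append((int(pixels.size), float(pixels.mean())))
    # A classifier would now map each feature vector to an object class label.
    return labels, features

# Usage: a synthetic 10x10 image containing one bright 3x3 object.
image = np.zeros((10, 10))
image[3:6, 3:6] = 1.0
labels, features = run_pipeline(image)
```

Each stage can be swapped for a different operator (e.g. another filter or an adaptive threshold) without changing the overall structure, which is the point of the pipeline decomposition.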

Methods
Results
Conclusion

