Abstract

We examine crowdsourcing for subjective image quality evaluation using real image stimuli with nonsimulated distortions. Our aim is to scale the task of subjectively rating images while ensuring maximal data validity and accuracy. While previous work has begun to explore crowdsourcing for quality assessment, it has either used images that are not representative of popular consumer scenarios or collected crowdsourced data without comparison to experiments in a controlled environment. Here, we address the challenges imposed by the highly variable online environment, using stimuli that are subtler and more complex than those traditionally used in quality assessment experiments. In a series of experiments, we vary different design parameters and demonstrate how they impact the subjective responses obtained. The parameters examined include stimulus display mode, study length, stimulus habituation, and content homogeneity/heterogeneity. Our method was validated against a database previously rated in a laboratory study. Once the design parameters were chosen, we rated a database of consumer photographs and are making these data available to the research community.
