Abstract

By gathering distributed crowdsensing photos, the pervasive view of the mobile crowd bridges real-world scenes with people's perceptions. To deliver informative visuals to viewers, existing techniques treat photo selection as an essential step in crowdsensing. Yet the aesthetic preference of viewers, which lies at the heart of their experience in various crowdsensing contexts (e.g., travel planning), is seldom considered and hardly guaranteed. We propose CrowdPicker, a novel photo selection framework with adaptive aesthetic awareness for crowdsensing. Motivated by observations of aesthetic uncertainty and bias across crowdsensing contexts, we combine mobile crowdsourcing with domain adaptation to actively learn contextual knowledge and dynamically tailor the aesthetic predictor. Concretely, an aesthetic utility measure, built on a probabilistic-balance formalization, quantifies how much a photo can improve adaptation performance. We prove the NP-hardness of sampling the best-utility photos for crowdsourced annotation and present a (1-1/e)-approximate solution. Furthermore, a two-stage distillation-based adaptation architecture is designed to fuse contextual and common aesthetic preferences. Extensive experiments on three datasets and four raw models demonstrate the superiority of CrowdPicker over four photo selection baselines and four typical sampling strategies. Cross-dataset evaluation illustrates the impact of aesthetic bias on selection.
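The (1-1/e) guarantee cited above is the hallmark of greedy maximization of a monotone submodular utility. As a minimal sketch of that selection scheme, the code below greedily picks photos by marginal utility gain; the coverage-style utility and the photo/context data are hypothetical stand-ins, since the abstract does not specify the paper's probabilistic-balance measure.

```python
def greedy_select(candidates, utility, budget):
    """Pick up to `budget` items, each round adding the candidate with the
    largest marginal utility gain. For a monotone submodular `utility`,
    this greedy scheme achieves at least (1 - 1/e) of the optimum."""
    selected = []
    for _ in range(budget):
        base = utility(selected)
        best, best_gain = None, 0.0
        for c in candidates:
            if c in selected:
                continue
            gain = utility(selected + [c]) - base
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no candidate improves the utility further
            break
        selected.append(best)
    return selected

# Toy illustration (hypothetical data): each photo covers some contexts,
# and the utility is the number of distinct contexts covered (submodular).
photos = {"p1": {"travel"}, "p2": {"travel", "food"}, "p3": {"food", "night"}}

def coverage(sel):
    return len(set().union(*(photos[p] for p in sel))) if sel else 0

chosen = greedy_select(list(photos), coverage, budget=2)
# With this toy data the greedy picks p2 first (2 new contexts),
# then p3 (adds "night"), covering all three contexts.
```

The same greedy loop applies to any monotone submodular utility; only the `utility` function would change when plugging in a measure like the paper's aesthetic utility.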
