Abstract

In participatory mobile crowdsensing (MCS), users repeatedly make choices among a finite set of alternatives, i.e., whether to contribute to a task or not and, if so, which task to contribute to. The platform coordinating the MCS campaigns often engineers these choices by selecting MCS tasks to recommend to users and offering monetary or in-kind rewards to motivate their contributions. In this paper, we revisit the well-investigated question of how to optimize the contributions of mobile end users to MCS tasks. However, we depart from the bulk of the related literature by explicitly accounting for the bounded rationality evident in human decision making. Bounded rationality is a consequence of cognitive and other constraints, e.g., time pressure, and has been studied extensively in behavioral science. We first draw on work in the field of cognitive psychology to model the way boundedly rational users respond to MCS task offers as Fast-and-Frugal Trees (FFTs). With each MCS task modeled as a vector of feature values, the decision process in an FFT sequentially parses lexicographically ordered features, yielding choices that are satisficing but not necessarily optimal. We then formulate, analyze, and solve the novel optimization problems that emerge for both nonprofit and for-profit MCS platforms in this context.
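To make the FFT decision process concrete, the following is a minimal illustrative sketch: task features are inspected in a fixed lexicographic order, and each cue except the last can trigger an immediate exit (accept or reject). The feature names, thresholds, and cue order here are hypothetical assumptions for illustration, not values from the paper.

```python
def fft_accept_task(task):
    """Sketch of a Fast-and-Frugal Tree: cues are checked in a fixed
    lexicographic order; each non-final cue has one immediate exit.
    All thresholds below are illustrative assumptions."""
    # Cue 1: reward below a minimum acceptable level -> reject immediately.
    if task["reward"] < 0.50:
        return False
    # Cue 2: task location too far from the user -> reject immediately.
    if task["distance_km"] > 2.0:
        return False
    # Final cue: accept only if the required effort is acceptable.
    return task["effort_min"] <= 10

offer = {"reward": 1.20, "distance_km": 0.8, "effort_min": 5}
print(fft_accept_task(offer))  # True: every cue passes
```

Note that the user never weighs all features jointly: a single unfavorable cue early in the order ends the deliberation, which is what makes the resulting choice satisficing rather than optimal.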
The evaluation of our optimization approach highlights significant gains in both platform revenue and quality of task contributions when compared to heuristic rules that do not account for the lexicographic structure of human decision making. We show how this modeling framework readily extends to platforms that present multiple task offers to users. Finally, we discuss how these models can be trained, examine their assumptions, and point to their implications for applications beyond MCS, where end users make choices through the mediation of mobile/online platforms.
