Abstract

Task assignment is a fundamental research problem in mobile crowdsensing (MCS) since it directly determines an MCS system’s practicality and economic value. Due to the complex dynamics of tasks and workers, task assignment problems are usually NP-hard, so approximation-based methods are preferred to impractical optimal methods. In prior work, Xu and Song (2022) proposed a graph neural network-based deep reinforcement learning (GDRL) method to solve routing problems in MCS, demonstrating high performance and time efficiency. However, as a centralized method, GDRL faces limited scalability and challenges in protecting workers’ privacy. In this paper, we propose a multi-agent deep reinforcement learning-based method named CQDRL to solve a task assignment problem in a decentralized fashion. The CQDRL method not only inherits the merits of GDRL over traditional heuristic and metaheuristic methods but also exploits the computation potential of mobile devices and protects workers’ privacy through a decentralized decision-making scheme. Our extensive experiments show that the CQDRL method achieves significantly better performance than traditional methods and performs close to the centralized GDRL method.
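
To make the decentralized decision-making scheme concrete, the sketch below shows one way a per-worker Q-network could score and select candidate tasks from local observations only, so that raw worker data never leaves the device. This is an illustrative, assumption-laden PyTorch example, not the paper's actual CQDRL implementation; all class names, feature dimensions, and the observation layout are hypothetical.

# Minimal sketch of decentralized task selection (not the authors' code).
# Each worker agent holds its own Q-network and chooses a task locally,
# without sending its private observation to a central scheduler.
import torch
import torch.nn as nn


class WorkerQNetwork(nn.Module):
    """Per-worker Q-network: scores each candidate task given a local observation."""

    def __init__(self, obs_dim: int, task_feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + task_feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # Q-value for one (observation, task) pair
        )

    def forward(self, obs: torch.Tensor, task_feats: torch.Tensor) -> torch.Tensor:
        # obs: (obs_dim,); task_feats: (num_tasks, task_feat_dim)
        obs_rep = obs.unsqueeze(0).expand(task_feats.size(0), -1)
        return self.net(torch.cat([obs_rep, task_feats], dim=-1)).squeeze(-1)


def choose_task(q_net: WorkerQNetwork, obs: torch.Tensor,
                task_feats: torch.Tensor, epsilon: float = 0.05) -> int:
    """Epsilon-greedy task selection executed locally on the worker's device."""
    if torch.rand(()) < epsilon:
        return int(torch.randint(task_feats.size(0), ()))
    with torch.no_grad():
        return int(q_net(obs, task_feats).argmax())


if __name__ == "__main__":
    # Toy usage: one worker, 5 advertised tasks, an 8-dim private observation
    # (e.g., position, remaining budget) and 4-dim public task features.
    q_net = WorkerQNetwork(obs_dim=8, task_feat_dim=4)
    obs = torch.randn(8)
    tasks = torch.randn(5, 4)
    print("selected task:", choose_task(q_net, obs, tasks))

In a full multi-agent setup, each worker would run this selection step independently while training is coordinated only through aggregated signals, which is what allows the decentralized scheme to keep individual observations private.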
