Abstract

Unmanned aerial vehicles (UAVs) have been widely deployed for efficient data collection in the Internet of Things (IoT). A UAV can act not only as a relay but also as an energy source, providing both information and energy transmission to ground sensor nodes (SNs). This paper studies the efficient multi-UAV-assisted data collection problem in wireless powered IoT. Specifically, multiple UAVs wirelessly charge SNs using radio frequency (RF) energy transfer, and the SNs then use the harvested energy to upload updates of the sensed information to the UAVs, thus improving the freshness of the collected data and extending the service time of the SNs. The problem is modeled as a partially observable Markov decision process (POMDP) with large observation and action spaces, in which each UAV acts as an intelligent agent that learns the environment and makes decisions independently. The value-decomposition network (VDN) algorithm is employed to find the optimal strategy within a multi-agent deep reinforcement learning framework. Simulation results validate the effectiveness of the proposed data collection approach compared to two baseline policies.
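The core idea of VDN referenced above is that the joint action-value function is approximated as the sum of per-agent utilities, each conditioned only on that agent's local observation, so every agent can act greedily on its own term while still maximizing the team value. The following is a minimal illustrative sketch of that decomposition using tabular Q-values; the agent counts, observation/action sizes, and function names are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Minimal sketch of the value-decomposition idea behind VDN:
# the joint Q is the SUM of per-agent Q_i(o_i, a_i), each based
# only on that agent's local observation. Sizes below are
# illustrative placeholders, not values from the paper.

rng = np.random.default_rng(0)

N_AGENTS = 3      # e.g. three UAVs
N_OBS = 4         # discretized local observations per agent
N_ACTIONS = 5     # e.g. hover-and-charge or move in four directions

# Per-agent Q tables: Q_i(o_i, a_i)
q_tables = [rng.normal(size=(N_OBS, N_ACTIONS)) for _ in range(N_AGENTS)]

def joint_q(observations, actions):
    """Joint Q(s, a) = sum_i Q_i(o_i, a_i) -- the VDN decomposition."""
    return sum(q[o, a] for q, o, a in zip(q_tables, observations, actions))

def greedy_joint_action(observations):
    """Because the joint Q is additive, each agent can maximize its
    own Q_i independently, and the result maximizes the sum."""
    return [int(np.argmax(q[o])) for q, o in zip(q_tables, observations)]

obs = [0, 1, 2]
acts = greedy_joint_action(obs)

# Check: the decentralized greedy choice attains the joint maximum
# found by exhaustive search over all joint actions.
best = max(
    joint_q(obs, (a0, a1, a2))
    for a0 in range(N_ACTIONS)
    for a1 in range(N_ACTIONS)
    for a2 in range(N_ACTIONS)
)
assert np.isclose(joint_q(obs, acts), best)
```

The additivity is what makes decentralized execution possible: during training the summed Q is fit against a shared team reward, while at execution time each UAV only needs its own Q-values and local observation.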
