Abstract

With the rapid development of unmanned aerial vehicles (UAVs), extensive attention has been paid to UAV-aided data collection in wireless sensor networks. However, it is very challenging to maintain the information freshness of the sensor nodes (SNs) subject to the UAV's limited energy capacity and/or the large network scale. This chapter introduces two modes of UAV-aided data collection: single data collection and continuous data collection. In the former case, the UAVs are dispatched to gather sensing data from each SN just once according to a preplanned data collection strategy. To keep the information fresh, a multistage approach is proposed to find a set of data collection points at which the UAVs hover to collect data, together with the age-optimal flight trajectory of each UAV. In the latter case, the UAVs perform data collection continuously and make real-time decisions on the uploading SN and the flight direction at each step. A deep reinforcement learning (DRL) framework incorporating the deep Q-network (DQN) algorithm is proposed to find the age-optimal data collection solution subject to the maximum flight velocity and energy capacity of each UAV. Numerical results are presented to show the effectiveness of the proposed methods in different scenarios.
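The abstract does not detail the DQN controller, but the following minimal sketch illustrates the general idea it describes: a Q-network scores joint actions that encode both the UAV's flight direction and the SN selected for uploading, with epsilon-greedy action selection and experience replay driving the updates. The state layout, action encoding, reward, network sizes, and hyperparameters here are illustrative assumptions, not taken from the chapter.

```python
# Hedged sketch of a DQN agent for joint (flight direction, uploading SN) decisions.
# All dimensions and hyperparameters below are assumptions for illustration.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_DIRECTIONS = 4                       # assumed discrete flight directions
NUM_SNS = 10                             # assumed number of sensor nodes
NUM_ACTIONS = NUM_DIRECTIONS * NUM_SNS   # joint action index
STATE_DIM = 2 + NUM_SNS                  # assumed state: UAV (x, y) + per-SN age of information

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork(STATE_DIM, NUM_ACTIONS)
target_net = QNetwork(STATE_DIM, NUM_ACTIONS)
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)            # transitions (s, a, r, s', done) appended during flight
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice over the joint (direction, SN) action space."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def decode_action(a: int):
    """Split the joint index back into (flight direction, uploading SN)."""
    return a // NUM_SNS, a % NUM_SNS

def train_step(batch_size: int = 64):
    """One DQN update from replayed transitions; reward would reflect AoI reduction."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (
        torch.stack([torch.as_tensor(x[i], dtype=torch.float32) for x in batch])
        for i in range(5)
    )
    q = q_net(s).gather(1, a.long().view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup the reward would typically penalize the current ages of information and any constraint violations (maximum velocity, remaining energy), and the target network would be refreshed periodically; those design choices are likewise assumptions here.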
