Recent advances in Unmanned Aerial Vehicle (UAV) technology have made UAVs effective platforms for data capture in applications such as environmental monitoring. Acting as mobile data ferries, UAVs can significantly improve ground network performance by enlisting designated ground nodes in data collection; these nodes communicate opportunistically with UAVs that come within reach. Emerging technologies such as Software Defined Wireless Sensor Networks (SDWSNs), in which the roles and functions of sensor nodes are defined via software, offer the flexible operation such UAV data-gathering approaches require. In this paper, we introduce the “UAV Fuzzy Travel Path”, a novel approach that applies Deep Reinforcement Learning (DRL), a subfield of machine learning, to optimal UAV trajectory planning. The approach also integrates the UAV with an SDWSN in which nodes acting as gateways (GWs) receive data from group members whose membership is flexibly formulated through software definition. A UAV is then dispatched to collect data from the GWs along a trajectory planned within a fuzzy span. Our dual objectives are to minimize the total energy consumed by the UAV system during each data-collection round and to enhance the bit rate of the UAV-to-ground links. We formulate this problem as a constrained combinatorial optimization problem that jointly plans the UAV path and improves communication performance. To tackle its NP-hard nature, we propose a novel DRL technique based on Deep Q-Learning. By learning from UAV path-policy experiences, our approach efficiently reduces energy consumption while maximizing packet delivery.
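The abstract does not give the authors' model details, but the core idea of Q-learning-based path planning with an energy-aware reward can be illustrated with a toy sketch. The following minimal example uses tabular Q-learning (a simplification of the paper's Deep Q-Learning) on a hypothetical 1-D corridor where a UAV must visit gateway cells while paying an energy cost per move; all names, cell positions, rewards, and hyperparameters here are illustrative assumptions, not the authors' formulation.

```python
import random

# Toy setting (assumed, not from the paper): a UAV moves along a
# discretized 1-D corridor, pays an energy cost per move, and earns a
# bonus the first time it reaches each gateway (GW) cell.
N_CELLS = 6          # number of discretized path positions (assumption)
GW_CELLS = {2, 4}    # cells containing gateways (assumption)
ACTIONS = (-1, +1)   # move left / right along the corridor

def step(pos, collected, action):
    """One environment transition: returns (pos, collected, reward, done)."""
    new_pos = max(0, min(N_CELLS - 1, pos + action))
    reward = -1.0                          # energy cost per move
    new_collected = set(collected)
    if new_pos in GW_CELLS and new_pos not in collected:
        new_collected.add(new_pos)
        reward += 10.0                     # data-collection bonus
    done = len(new_collected) == len(GW_CELLS)
    return new_pos, frozenset(new_collected), reward, done

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        pos, collected, done = 0, frozenset(), False
        for _ in range(50):                # episode step cap
            if done:
                break
            s = (pos, collected)
            if rng.random() < eps:         # explore
                a = rng.choice(ACTIONS)
            else:                          # exploit current estimate
                a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
            pos, collected, r, done = step(pos, collected, a)
            best_next = max(Q.get(((pos, collected), x), 0.0) for x in ACTIONS)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q

def greedy_rollout(Q):
    """Follow the learned greedy policy; return (moves, gateways visited)."""
    pos, collected, done, moves = 0, frozenset(), False, 0
    while not done and moves < 50:
        a = max(ACTIONS, key=lambda x: Q.get(((pos, collected), x), 0.0))
        pos, collected, _, done = step(pos, collected, a)
        moves += 1
    return moves, collected
```

In this sketch, the learned greedy policy moves straight from cell 0 to cell 4, collecting both gateways in four moves, which is the minimum-energy tour for this toy layout. The paper's DRL approach replaces the Q-table with a deep network so the same principle scales to continuous trajectories and joint energy/bit-rate objectives.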