Abstract

Mobile charging is a feasible way to address the energy-constraint problem in wireless rechargeable sensor networks (WRSNs). Mobile chargers (MCs) are usually employed to charge the sensors sequentially according to a charging scheme. Existing studies assume that each sensor must be charged to its maximum energy capacity, or to a fixed upper threshold, before the next one can be charged; they neglect to adapt the charging time of each sensor to its charging demand. In this paper, we therefore allow the charging time of each sensor to be controlled and study the joint optimization of the charging sequence and charging time (JCSCT) problem. Correspondingly, we propose a novel deep reinforcement learning approach with a hybrid action space for JCSCT (DRLH-JCSCT), which uses a deep Q-network (DQN) to generate the charging sequence and deep deterministic policy gradient (DDPG) to determine the charging time. An attention-based encoder–decoder model is integrated into the actor network of DDPG, with a modified bi-directional gated recurrent unit network (MBGRU) as the decoder. We also design a novel reward function to evaluate the quality of the charging actions. Simulations demonstrate that the proposed approach improves charging performance, achieving a longer network lifetime and fewer failed sensors than existing mobile charging scheduling approaches.
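To illustrate the hybrid action space described above, the following is a minimal sketch of how a discrete DQN head (which sensor to charge next) can be paired with a DDPG-style actor (how long to charge it). It assumes a simple flat state vector and uses a plain MLP in place of the paper's attention-based encoder–decoder and MBGRU decoder; all class names, dimensions, and parameters below are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of hybrid (discrete + continuous) action selection for JCSCT.
# Assumed components only; the paper's encoder-decoder/MBGRU actor is replaced
# by a simple MLP for brevity.
import torch
import torch.nn as nn

STATE_DIM = 16      # assumed per-step state features (e.g., MC position, residual energies)
NUM_SENSORS = 10    # assumed number of rechargeable sensors

class QNetwork(nn.Module):
    """DQN head: scores each candidate sensor as the next one to charge."""
    def __init__(self, state_dim, num_sensors):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_sensors),
        )

    def forward(self, state):
        return self.net(state)  # Q-values over the discrete sensor choices

class TimeActor(nn.Module):
    """DDPG-style actor: maps state + chosen sensor to a normalized charging time in (0, 1)."""
    def __init__(self, state_dim, num_sensors):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_sensors, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # continuous charging-time action
        )

    def forward(self, state, sensor_one_hot):
        return self.net(torch.cat([state, sensor_one_hot], dim=-1))

def select_hybrid_action(q_net, actor, state):
    """Pick (which sensor to charge next, how long to charge it) for one step."""
    with torch.no_grad():
        q_values = q_net(state)                  # discrete part (DQN)
        sensor = torch.argmax(q_values, dim=-1)  # greedy sensor choice
        one_hot = nn.functional.one_hot(sensor, NUM_SENSORS).float()
        charge_time = actor(state, one_hot)      # continuous part (DDPG actor)
    return sensor.item(), charge_time.item()

# Example usage with a random state
state = torch.randn(1, STATE_DIM)
q_net, actor = QNetwork(STATE_DIM, NUM_SENSORS), TimeActor(STATE_DIM, NUM_SENSORS)
sensor_idx, t_norm = select_hybrid_action(q_net, actor, state)
print(f"charge sensor {sensor_idx} for normalized time {t_norm:.3f}")
```

In this kind of setup, the normalized time output would be rescaled to a physical charging duration and both heads would be trained with their respective losses (temporal-difference loss for the DQN, deterministic policy gradient for the actor), using the reward function the paper designs to score charging actions.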
