Abstract

Dear Editor, This letter concerns energy-aware multi-sensor co-scheduling for bearing-only target tracking in underwater wireless sensor networks (UWSNs). Since traditional methods suffer from strong environment dependence and a lack of flexibility, a novel sensor scheduling algorithm based on deep reinforcement learning is proposed. First, the sensor co-scheduling strategy in UWSNs is formulated as a Markov decision process (MDP). Then, a dueling double deep Q-network (D3QN) is developed to solve the MDP in a scalable, model-free manner. In addition, the prioritized experience replay (PER) method is utilized to accelerate network convergence. Finally, the effectiveness and superiority of the proposed algorithm are confirmed by experimental results.
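The three ingredients named above (dueling aggregation, double-Q targets, and TD-error-based priorities for PER) combine in a standard way. The following is a minimal illustrative sketch of those update rules, not the authors' implementation; the toy sizes (3 states in a batch, 4 candidate sensors as actions), the random value/advantage outputs, and the hyperparameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(value, advantage):
    # Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a);
    # subtracting the mean makes the V/A decomposition identifiable.
    return value + advantage - advantage.mean(axis=-1, keepdims=True)

# Toy setup: batch of 3 states, 4 schedulable sensors (actions).
# In a real D3QN these come from the online and target networks.
value_online, adv_online = rng.normal(size=(3, 1)), rng.normal(size=(3, 4))
value_target, adv_target = rng.normal(size=(3, 1)), rng.normal(size=(3, 4))

q_online = dueling_q(value_online, adv_online)   # Q-values at next state (online net)
q_target = dueling_q(value_target, adv_target)   # Q-values at next state (target net)

# Double-DQN target: the online net SELECTS the action,
# the target net EVALUATES it, reducing overestimation bias.
rewards, gamma = rng.normal(size=3), 0.99
best_actions = q_online.argmax(axis=1)
td_target = rewards + gamma * q_target[np.arange(3), best_actions]

# PER priority: proportional to |TD error|, so informative
# transitions are replayed more often (epsilon keeps priorities > 0).
q_predicted = q_online[np.arange(3), best_actions]
priorities = np.abs(td_target - q_predicted) + 1e-6
```

With a zero value stream and advantages `[1, 3]`, the dueling aggregation yields Q-values `[-1, 1]`, i.e. the advantage stream is centered before being added to the state value.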

