Abstract

Underwater wireless sensor networks (UWSNs) have received extensive attention and have become common tools for passive tracking of underwater targets. To address the energy constraints of battery-powered sensors in UWSNs, efficient sensor scheduling methods that balance tracking accuracy and energy consumption are essential. Since common sensor scheduling methods suffer from strong environment dependence and a lack of flexibility, an end-to-end sensor scheduling algorithm based on deep reinforcement learning is proposed. In particular, we formulate the sensor scheduling strategy in UWSNs as a Markov decision process (MDP) within the reinforcement learning framework. Moreover, a dueling double deep Q-network (D3QN) is introduced to solve the MDP in a scalable, model-free manner and obtain a suitable sensor scheduling policy for UWSNs, and prioritized experience replay (PER) is utilized to improve the performance of the D3QN. In addition, to ensure practical applicability, a mock data method is introduced to train the algorithm without relying on accurate trajectory information of non-cooperative targets. Thus, an end-to-end sensor scheduling method can be built. Experimental results demonstrate the effectiveness and superiority of the proposed algorithm. In large-scale UWSNs for underwater passive tracking, the proposed method achieves at least a 20% improvement in tracking accuracy while reducing system energy consumption by 10% compared with traditional methods.
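To make the two D3QN ingredients named above concrete, the following minimal PyTorch sketch shows a dueling Q-network (separate state-value and advantage streams) together with the double-DQN target computation; transitions would be sampled from a PER buffer in practice. The layer sizes and names (`state_dim`, `n_actions`, `hidden`) are illustrative assumptions, not the authors' reported architecture.

```python
import torch
import torch.nn as nn


class DuelingQNetwork(nn.Module):
    """Dueling architecture: a shared trunk splits into a state-value head V(s)
    and a per-action advantage head A(s, a); Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden, 1)              # V(s)
        self.advantage_head = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        value = self.value_head(h)
        advantage = self.advantage_head(h)
        # Subtract the mean advantage so V and A are identifiable.
        return value + advantage - advantage.mean(dim=1, keepdim=True)


def double_dqn_target(online: DuelingQNetwork, target: DuelingQNetwork,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online network selects the greedy next action,
    while the target network evaluates it, reducing Q-value overestimation."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

With PER, the temporal-difference error between `double_dqn_target(...)` and the online network's Q-value for the taken action would also set each transition's sampling priority.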
