Abstract
In this work we examine a specific class of wireless sensor network (WSN), which we call a peer-to-peer WSN, in which both source and destination are dynamic and each is subject to the constraints of low bandwidth, limited energy storage, and limited computational resources. Unlike a standard WSN, which can rely on an unconstrained sink to transform raw sensor data into usable information, a peer-to-peer WSN must treat data computation time as a limiting constraint on information availability. To manage and improve routing in peer-to-peer WSNs, and in WSNs generally, we present a deep reinforcement learning algorithm, distributed cooperative reinforcement learning for routing (DCRL-R), which uses a neural network and an expanded set of state-space parameters to learn WSN routing policies. DCRL-R also incorporates an enlarged action space for deciding when and where to perform in-network computation of raw sensor data. We test DCRL-R on a physical network using measured node-state parametric data and demonstrate its viability for future WSN applications against a baseline routing algorithm that uses shortest-path decisions with no computational offloading.
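To make the expanded state and action spaces concrete, the following is a minimal illustrative sketch, not the authors' DCRL-R implementation: it uses a linear Q-function (in place of the paper's neural network) over a hypothetical node state, and an action space that pairs each candidate next hop with a decision about whether to compute the raw data in-network. All state features, neighbor IDs, and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

NEIGHBORS = [0, 1, 2]      # candidate next hops (hypothetical node IDs)
COMPUTE = [False, True]    # whether to process raw data at this node
ACTIONS = [(n, c) for n in NEIGHBORS for c in COMPUTE]

STATE_DIM = 4              # e.g. residual energy, queue length,
                           # hop distance, raw-vs-computed flag (assumed)
W = rng.normal(scale=0.1, size=(len(ACTIONS), STATE_DIM))  # Q weights

def q_values(state):
    """Linear approximation: Q(s, a) = w_a . s for every action a."""
    return W @ state

def select_action(state, epsilon=0.1):
    """Epsilon-greedy choice over (next_hop, compute_here) pairs."""
    if rng.random() < epsilon:
        return ACTIONS[rng.integers(len(ACTIONS))]
    return ACTIONS[int(np.argmax(q_values(state)))]

def td_update(state, action, reward, next_state, alpha=0.01, gamma=0.9):
    """One temporal-difference step on the chosen action's weights."""
    a = ACTIONS.index(action)
    target = reward + gamma * np.max(q_values(next_state))
    W[a] += alpha * (target - q_values(state)[a]) * state

# One made-up decision step: observe state, pick action, update.
state = np.array([0.8, 0.2, 3.0, 1.0])
action = select_action(state)
td_update(state, action, reward=-1.0, next_state=state)
```

The joint (next hop, compute-in-place) action is the key departure from a standard routing agent: a negative reward proportional to delay and energy cost would let the learner trade off forwarding raw data against computing it locally before transmission.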