Abstract
Energy harvesting (EH)-powered sensor nodes can achieve theoretically unlimited lifetime by scavenging energy from ambient power sources, such as radio-frequency (RF) signals and kinetic energy. The nodes can collect and transmit data wirelessly with the harvested energy. However, a transmission between two sensor nodes succeeds only when both nodes have enough energy at the same time: while the receiver can listen actively, it may deplete its energy long before the sender has accumulated enough to transmit. Given the scarce, unpredictable, and unevenly distributed energy among sensor nodes, ensuring efficient data transmission between them is therefore challenging. To address this challenge, we propose a sensor node architecture with multiple radios, each with different energy consumption at the sender and the receiver. A node can be put to sleep while charging and wake up for communication when it infers, from its own observations, that both nodes have enough energy. Moreover, two nodes can cooperatively and dynamically select among the radios according to their stored energy and historical information so as to maximize data throughput. To achieve this cooperative communication adaptively, the communication procedure is modeled as a cooperative Markov game in which each node has only partial observability, and multiagent reinforcement learning (MARL) is employed to learn the radio-selection policy. Experimental results on a hardware prototype and in simulation show that the proposed approaches achieve up to 89.1% of the optimal throughput and significantly outperform other online algorithms.
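The abstract does not give implementation details, but the following minimal sketch illustrates the kind of formulation it describes: two independent tabular Q-learning agents (a simple stand-in for MARL), each observing only its own energy level, jointly choosing a radio or sleeping, with a shared throughput reward paid only when sender and receiver pick the same radio and both can afford its cost. All names, costs, and dynamics here (`RADIOS`, `E_MAX`, the harvesting process, the reward proxy) are illustrative assumptions, not the paper's model.

```python
import random
from collections import defaultdict

# Hypothetical radio set: (name, sender energy cost, receiver energy cost).
# Costs are illustrative placeholders, not values from the paper.
RADIOS = [("low_power", 2, 1), ("high_rate", 5, 3)]
SLEEP = len(RADIOS)                    # extra action: stay asleep and charge
ACTIONS = list(range(len(RADIOS))) + [SLEEP]
E_MAX = 10                             # energy buffer capacity (assumed)

class Agent:
    """Independent tabular Q-learner; each node sees only its own energy."""
    def __init__(self, eps=0.1, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, obs):
        if random.random() < self.eps:           # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(obs, a)])

    def learn(self, obs, a, reward, next_obs):
        best_next = max(self.q[(next_obs, a2)] for a2 in ACTIONS)
        td = reward + self.gamma * best_next - self.q[(obs, a)]
        self.q[(obs, a)] += self.alpha * td

def step(energies, actions):
    """Shared reward: data flows only when sender (node 0) and receiver
    (node 1) pick the same radio and both can afford its cost."""
    reward = 0.0
    a_tx, a_rx = actions
    if a_tx != SLEEP and a_tx == a_rx:
        _, tx_cost, rx_cost = RADIOS[a_tx]
        if energies[0] >= tx_cost and energies[1] >= rx_cost:
            energies[0] -= tx_cost
            energies[1] -= rx_cost
            reward = float(tx_cost)    # crude proxy for data throughput
    # Stochastic ambient harvesting on both nodes, capped at buffer size.
    for i in range(2):
        energies[i] = min(E_MAX, energies[i] + random.randint(0, 2))
    return reward

agents = [Agent(), Agent()]
energies = [E_MAX // 2, E_MAX // 2]
for t in range(20_000):
    obs = tuple(energies)              # agent i only reads its own obs[i]
    acts = [agents[i].act(obs[i]) for i in range(2)]
    r = step(energies, acts)           # both agents receive the same reward
    for i in range(2):
        agents[i].learn(obs[i], acts[i], r, energies[i])
```

Because the reward is shared and each agent conditions only on its local energy state, this captures the cooperative, partially observable character of the problem; the paper's actual MARL method and system model may differ substantially.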