Underwater optical wireless sensor networks (UOWSNs) have attracted considerable interest owing to their high transmission rate, ultrawide bandwidth, and low latency. However, limited energy resources and the highly dynamic topology caused by water flow make it challenging to provide low-consumption, reliable routing in UOWSNs. To tackle this issue, in this article we propose an efficient routing protocol for UOWSNs based on multiagent reinforcement learning, termed DMARL. The network is first modeled as a distributed multiagent system, and residual energy and link quality are incorporated into the routing protocol design to improve adaptation to the dynamic environment and to help prolong network lifetime. Additionally, two optimization strategies are proposed to accelerate the convergence of the reinforcement learning algorithm. On this basis, a reward mechanism is designed for the distributed system. Simulation results show that the DMARL-based routing protocol achieves low energy consumption and a high packet delivery ratio (over 90%), and that it is suitable for networks in which the average number of neighbor nodes is less than 14.
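To make the per-node agent idea concrete, the sketch below shows one plausible way such a distributed routing agent could be structured. The abstract does not specify DMARL's exact reward function, update rule, or exploration policy; the tabular Q-learning update, the epsilon-greedy next-hop selection, and the reward weights `w_e` and `w_l` (mixing normalized residual energy and link quality) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a per-node routing agent, assuming a tabular Q-learning
# formulation whose reward mixes residual energy and link quality.
# All names, weights, and hyperparameters here are illustrative assumptions.
import random


class NodeAgent:
    def __init__(self, node_id, neighbors, alpha=0.5, gamma=0.8, epsilon=0.1):
        self.node_id = node_id
        self.q = {n: 0.0 for n in neighbors}  # Q-value per candidate next hop
        self.alpha = alpha                    # learning rate (assumed)
        self.gamma = gamma                    # discount factor (assumed)
        self.epsilon = epsilon                # exploration rate (assumed)

    def select_next_hop(self):
        """Epsilon-greedy choice of the next hop among neighbor nodes."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def reward(self, residual_energy, link_quality, w_e=0.5, w_l=0.5):
        """Illustrative reward: weighted sum of the chosen neighbor's
        normalized residual energy and link quality (both in [0, 1])."""
        return w_e * residual_energy + w_l * link_quality

    def update(self, next_hop, residual_energy, link_quality, max_q_next_hop):
        """Standard Q-learning update applied after the neighbor reports back."""
        r = self.reward(residual_energy, link_quality)
        old = self.q[next_hop]
        self.q[next_hop] = old + self.alpha * (r + self.gamma * max_q_next_hop - old)


# Toy usage: a node with three neighbors learns to prefer the neighbor that
# reports high residual energy and a good optical link (here, "n3").
agent = NodeAgent("n1", neighbors=["n2", "n3", "n4"])
for _ in range(100):
    hop = agent.select_next_hop()
    energy = {"n2": 0.4, "n3": 0.9, "n4": 0.6}[hop]
    link = {"n2": 0.5, "n3": 0.8, "n4": 0.3}[hop]
    agent.update(hop, energy, link, max_q_next_hop=0.0)
print(agent.select_next_hop())  # typically "n3" after training
```

Because each agent keeps only a Q-table over its own neighbors and learns from local feedback, the scheme stays fully distributed, which matches the abstract's framing of the network as a distributed multiagent system.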