Abstract

Underwater acoustic sensor networks (UASNs) are challenged by the dynamic nature of the underwater environment, large propagation delays, and global positioning system (GPS) signal unavailability, which make traditional medium access control (MAC) protocols less effective. These factors limit the channel utilization and performance of UASNs, making it difficult to achieve high data rates and handle congestion. To address these challenges, we propose a reinforcement learning (RL) MAC protocol that supports asynchronous network operation and leverages large propagation delays to improve the network throughput. The protocol is based on framed ALOHA and enables nodes to learn an optimal transmission strategy in a fully distributed manner without requiring detailed information about the external environment. The transmission strategy of sensor nodes is defined as a combination of time-slot and transmission-offset selection. By relying on the concept of learning through interaction with the environment, the proposed protocol enhances network resilience and adaptability. In both static and mobile network scenarios, it has been compared with the state-of-the-art framed ALOHA for the underwater environment (UW-ALOHA-Q), carrier-sensing ALOHA (CS-ALOHA), and delay-aware opportunistic transmission scheduling (DOTS) protocols. The simulation results show that the proposed solution leads to significant channel utilization gains, ranging from 13% to 106% in static network scenarios and from 23% to 126% in mobile network scenarios. Moreover, using a more efficient learning strategy, it significantly reduces convergence time compared to UW-ALOHA-Q in larger networks, despite the increased action space.
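The abstract describes each node learning a transmission strategy as a joint choice of time slot and transmission offset, rewarded through interaction with the channel. As a rough illustration only, a minimal sketch of such a distributed learner is shown below; the class name, epsilon-greedy exploration, learning rate, and binary ACK reward are all assumptions for illustration, not the paper's actual algorithm.

```python
import random

# Hypothetical sketch: a stateless Q-learner that picks a (time-slot,
# transmission-offset) action per frame and updates its value estimate
# from a binary acknowledgement reward. Parameters are illustrative.
class SlotOffsetAgent:
    def __init__(self, n_slots, n_offsets, alpha=0.1, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        self.actions = [(s, o) for s in range(n_slots)
                        for o in range(n_offsets)]
        self.q = {a: 0.0 for a in self.actions}

    def choose(self):
        # Epsilon-greedy: mostly exploit the best-known (slot, offset) pair,
        # occasionally explore a random one.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def update(self, action, acked):
        # Reward 1 for an acknowledged frame, 0 for a collision or loss.
        reward = 1.0 if acked else 0.0
        self.q[action] += self.alpha * (reward - self.q[action])
```

Under this sketch, nodes converge toward non-colliding (slot, offset) pairs purely from local ACK feedback, which mirrors the fully distributed, environment-agnostic learning the abstract describes.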
