In Wireless Sensor Network (WSN) communication protocols, rule-based approaches have traditionally been used to manage caching and congestion control. These approaches rely on explicitly defined, static models. Recently, there has been a trend toward adaptive methods that leverage machine learning (ML), including its subset deep learning (DL), to handle network congestion conditions. However, an adaptive cache-aware congestion control mechanism using Deep Reinforcement Learning (DRL) in WSNs has not yet been explored. Therefore, this study developed a DRL-based adaptive cache-aware congestion control mechanism, called DRL-CaCC, to alleviate congestion in WSNs. DRL-CaCC uses intermediate caching parameters as its state space and adaptive congestion window movements as its action space, driven by the Rapid Start and DRL algorithms. The mechanism aims to find the optimal congestion window movement that avoids further network congestion while ensuring maximum cache utilization. Results show that DRL-CaCC achieved an average improvement gain of 20% to 40% over its baseline protocol, RT-CaCC. Finally, DRL-CaCC outperformed other caching-based and DRL-based congestion control protocols in terms of cache utilization, throughput, end-to-end delay, and packet loss, with improvement gains of 10% to 30% across various congestion scenarios in WSNs.
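As a rough illustration of the kind of state/action/reward formulation the abstract describes, the sketch below shows one plausible encoding in Python. The specific state features, the three-way congestion window actions, and the reward weights are assumptions for illustration only; they are not the parameters or reward function defined in the DRL-CaCC paper.

```python
# Illustrative sketch only: feature names, actions, and reward weights are
# assumptions, not the actual DRL-CaCC design from the paper.
from dataclasses import dataclass
import random

@dataclass
class CacheState:
    cache_utilization: float   # fraction of intermediate cache in use (assumed feature)
    buffer_occupancy: float    # fraction of node buffer occupied (assumed feature)
    packet_loss_rate: float    # recent packet loss ratio (assumed feature)

    def as_vector(self):
        return [self.cache_utilization, self.buffer_occupancy, self.packet_loss_rate]

# Assumed discrete action space: move the congestion window down, hold it, or move it up.
ACTIONS = ("decrease_cwnd", "hold_cwnd", "increase_cwnd")

def reward(state: CacheState, congested: bool) -> float:
    """Toy reward: favor high cache utilization, penalize loss and congestion.
    The weighting here is a placeholder, not the paper's reward function."""
    r = state.cache_utilization - 2.0 * state.packet_loss_rate
    if congested:
        r -= 1.0
    return r

def select_action(q_values: dict, state_key: tuple, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection over a tabular Q approximation, standing in
    for the learned DRL policy referenced in the abstract."""
    if random.random() < epsilon or state_key not in q_values:
        return random.choice(ACTIONS)
    return max(q_values[state_key], key=q_values[state_key].get)
```

In a full DRL agent, the tabular lookup above would be replaced by a neural network mapping the state vector to action values, trained from the rewards observed as the congestion window is adjusted.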