Abstract

Underwater Wireless Sensor Networks (UWSNs) have recently attracted increasing interest from researchers in industry, the military, commerce and academia. Due to the harsh underwater environment, energy efficiency is a critical consideration for routing in UWSNs. Underwater positioning is also a particularly tricky task because of the high attenuation of radio-frequency signals under water. In this paper, we propose an energy-efficient depth-based opportunistic routing algorithm with Q-learning (EDORQ) for UWSNs to guarantee energy-saving and reliable data transmission. It combines the respective advantages of the Q-learning technique and opportunistic routing (OR) without requiring full-dimensional location information, improving network performance in terms of energy consumption, average network overhead and packet delivery ratio. In EDORQ, the void detection factor, residual energy and depth information of candidate nodes are jointly considered when defining the Q-value function, which helps to detect void nodes proactively while reducing energy consumption. In addition, a simple and scalable void node recovery mode is proposed for candidate set selection so as to rescue packets that are trapped at void nodes. Furthermore, we design a novel method that sets the holding time for scheduling packet forwarding based on Q-values, so as to alleviate packet collisions and redundant transmissions. We conduct extensive simulations to evaluate the performance of the proposed algorithm and compare it with three other routing algorithms on the Aqua-Sim platform (NS-2). The results show that the proposed algorithm significantly improves performance in terms of energy efficiency, packet delivery ratio and average network overhead without sacrificing too much average packet delay.
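The abstract names three candidate-node factors that feed the Q-value (void detection factor, residual energy, depth) and a holding time derived from the Q-value. The following is a minimal, illustrative sketch of how such a score and timer could be combined; the function names, weights and the linear combination are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: combines the factors named in the abstract
# (void-detection, residual energy, depth advance) into a single score.
# Weights and the linear form are assumed, not taken from the paper.

def q_value(residual_energy, initial_energy, depth_sender, depth_candidate,
            transmission_range, is_void, w_energy=0.4, w_depth=0.4, w_void=0.2):
    """Score a candidate forwarder in [0, 1]: fresher batteries, larger
    depth advance toward the surface, and non-void status all raise it."""
    energy_term = residual_energy / initial_energy            # normalized energy
    depth_term = max(depth_sender - depth_candidate, 0.0) / transmission_range
    void_term = 0.0 if is_void else 1.0                       # penalize void nodes
    return w_energy * energy_term + w_depth * depth_term + w_void * void_term

def holding_time(q, max_delay=2.0):
    """Higher-Q candidates wait less, so the best candidate forwards first
    and lower-ranked candidates can suppress their duplicate copies."""
    return max_delay * (1.0 - q)
```

The inverse relation between score and delay is what lets overhearing candidates cancel redundant transmissions, which is the collision-avoidance effect the abstract attributes to the Q-value-based holding time.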

Highlights

  • With great application prospects in marine environmental protection, underwater exploration, marine disaster monitoring, offshore operations and marine military activities, Underwater Wireless Sensor Networks (UWSNs) have drawn great attention from governments, industry and academia over the past few years [1,2]

  • The simulation results demonstrate that our proposal can significantly improve performance in terms of energy efficiency, packet delivery ratio and average network overhead

  • We evaluate the performance of the proposed energy-efficient depth-based opportunistic routing with Q-learning (EDORQ) algorithm and compare it with vector-based forwarding (VBF) [31], depth-based routing (DBR) [22] and QELAR [26]



Introduction

With great application prospects in marine environmental protection, underwater exploration, marine disaster monitoring, offshore operations and marine military activities, Underwater Wireless Sensor Networks (UWSNs) have drawn great attention from governments, industry and academia over the past few years. Although the existing routing algorithms based on reinforcement learning for UWSNs can improve network performance in some respects by observing and learning from the environment, some of them cannot control the routing overhead well when exchanging information about Q-values, and others require position information, which is tricky to obtain accurately under water. Motivated by these considerations, in this paper we propose an energy-efficient depth-based OR algorithm with Q-learning (EDORQ) to further reduce energy consumption and improve robustness for UWSNs. Instead of depending on full-dimensional position coordinates for packet delivery, EDORQ needs only local depth information, which can be obtained via an inexpensive pressure sensor.
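Forwarding on local depth alone means a node keeps only the neighbors that sit measurably closer to the surface (where the sinks are) as candidates. A minimal sketch of such depth-based candidate filtering, in the style of DBR-like schemes, is shown below; the function name and the depth-threshold parameter are illustrative assumptions, not EDORQ's exact selection rule.

```python
# Illustrative depth-based candidate filtering (DBR-style), using only the
# depth readings a pressure sensor provides. The threshold parameter is an
# assumption here, not the paper's exact rule.

def candidate_set(my_depth, neighbor_depths, depth_threshold=0.0):
    """Return IDs of neighbors shallower than this node by more than the
    threshold; shallower neighbors are closer to the surface sinks."""
    return [nid for nid, depth in neighbor_depths.items()
            if my_depth - depth > depth_threshold]
```

A positive threshold trades a shorter candidate list (less duplicate forwarding) against the risk of leaving a node with no candidates at all, which is exactly the void-node situation the recovery mode in EDORQ is meant to handle.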

Related Work
Q-Learning Model
Overview of EDORQ
Void-Detection Based Candidate Set Selection
Q-Learning Based Candidate Set Coordination
Summary
Simulation Setup
Simulation Metrics
Performance Comparison
Impact of Sink Number
Impact of
Conclusions
