Abstract

In the Internet of Underwater Things (IoUT), data collection is assisted by an autonomous underwater vehicle (AUV) to enhance transmission reliability. The AUV acts as a mobile collector and transmits the collected data to the station via relay nodes. However, the high mobility of the AUV calls for an adaptive and efficient relay selection scheme to achieve good capacity performance. In this article, we propose a new contextual multiarmed bandit with evolving relay set (CMAB-ERS) learning framework, which addresses two crucial issues: dynamic environmental conditions and an evolving relay set. To handle the evolving relay set, CMAB-ERS incorporates collaborative effects into both the inference and learning processes: new relays acquire prior knowledge from observations shared by experienced nodes, which significantly reduces the learning time. To cope with the uncertainty of environmental information, we exploit contextual environment factors to assist relay reward estimation and perform a time-sensitive parameter update after every transmit-receive cycle, minimizing the potential loss caused by the time-varying channel. Accordingly, we design the collaboration-aware online contextual bandit learning (COCBL) algorithm, which enables the AUV to switch to the optimal relay adaptively and promises high-capacity transmission. Furthermore, we rigorously prove the convergence of the COCBL algorithm under the evolving relay set and derive an upper bound on its cumulative regret. Finally, extensive simulation results demonstrate the effectiveness of the proposed COCBL algorithm.
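To make the abstract's ideas concrete, the sketch below shows a minimal contextual bandit (LinUCB-style) relay selector in which a newly joined relay is warm-started from the averaged sufficient statistics of experienced relays, mirroring the collaborative knowledge sharing and per-cycle parameter updates described above. This is an illustrative assumption of how such a scheme could be structured, not the authors' COCBL algorithm; the class name, feature design, warm-start rule, and alpha parameter are all hypothetical.

```python
import numpy as np

class CollaborativeLinUCB:
    """Illustrative collaboration-aware contextual bandit for relay selection.
    Loosely inspired by the CMAB-ERS/COCBL description; not the paper's method."""

    def __init__(self, dim, alpha=1.0):
        self.dim = dim      # context dimension (e.g., depth, SNR, AUV-relay distance)
        self.alpha = alpha  # exploration strength (assumed value)
        self.A = {}         # per-relay Gram matrix  A_k = I + sum of x x^T
        self.b = {}         # per-relay reward vector b_k = sum of r x

    def add_relay(self, relay_id):
        """A new relay joins the evolving set: warm-start its statistics from
        the average of the experienced relays' statistics (collaborative prior)."""
        if self.A:
            self.A[relay_id] = sum(self.A.values()) / len(self.A)
            self.b[relay_id] = sum(self.b.values()) / len(self.b)
        else:
            self.A[relay_id] = np.eye(self.dim)
            self.b[relay_id] = np.zeros(self.dim)

    def select(self, context):
        """Pick the relay maximizing estimated reward plus a UCB bonus."""
        best, best_score = None, -np.inf
        for k in self.A:
            A_inv = np.linalg.inv(self.A[k])
            theta = A_inv @ self.b[k]                        # ridge-regression estimate
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            score = context @ theta + bonus
            if score > best_score:
                best, best_score = k, score
        return best

    def update(self, relay_id, context, reward):
        """Time-sensitive update after every transmit-receive cycle."""
        self.A[relay_id] += np.outer(context, context)
        self.b[relay_id] += reward * context
```

In use, the AUV would call select() with the current environmental context before each transmission and update() with the observed capacity afterwards; add_relay() is invoked whenever the relay set evolves, so newcomers start from shared experience rather than from scratch.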
