Abstract

Drive-thru Internet has been considered an effective Internet access method for the Internet of Vehicles (IoV). Through opportunistic vehicle-to-roadside WiFi connections, it can provide high throughput at low communication cost for IoV applications such as intelligent transportation systems and automotive infotainment. However, its usability is strongly affected by a fundamental issue called rate adaptation (RA), i.e., adjusting the modulation and coding rate to match the dynamic wireless channel between the vehicle and the roadside access point (AP). Conventional WiFi RA schemes are designed for indoor or quasi-static scenarios and do not account for the channel variations in drive-thru Internet. In this article, we study the limitations of applying existing RA schemes in drive-thru Internet and propose a reinforcement learning (RL)-based RA scheme that captures potential channel variation patterns and efficiently selects the rate for every egress frame of a vehicle. Simulation results demonstrate that the proposed RA scheme outperforms existing schemes in network throughput and that the learning model generalizes well under various conditions. The proposed RA method can offer useful insights for designing robust and scalable link adaptation protocols in IoV.
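For illustration only, the sketch below shows how a learning-based agent could select a modulation-and-coding rate per outgoing frame. It uses a simple epsilon-greedy bandit and a toy SNR-based channel model; the rate set, reward definition, and channel profile are our own assumptions and do not reproduce the scheme proposed in the paper.

# Minimal, hypothetical sketch of learning-based rate adaptation: an
# epsilon-greedy bandit choosing among candidate 802.11 PHY rates per frame.
# The rate set, reward model, and channel simulation are illustrative
# assumptions, not the paper's actual algorithm.
import random

RATES_MBPS = [6, 12, 24, 36, 48, 54]   # candidate PHY rates (assumed set)
EPSILON = 0.1                          # exploration probability

q_values = [0.0] * len(RATES_MBPS)     # estimated throughput per rate
counts = [0] * len(RATES_MBPS)         # number of times each rate was tried


def select_rate():
    """Epsilon-greedy choice over the candidate rates."""
    if random.random() < EPSILON:
        return random.randrange(len(RATES_MBPS))
    return max(range(len(RATES_MBPS)), key=lambda i: q_values[i])


def frame_success(rate_idx, snr_db):
    """Toy channel model: higher rates need higher SNR to succeed (assumption)."""
    required_snr = 5 + 4 * rate_idx    # arbitrary per-rate SNR threshold
    return random.random() < (1.0 if snr_db >= required_snr else 0.2)


def update(rate_idx, reward):
    """Incremental-mean update of the throughput estimate for the chosen rate."""
    counts[rate_idx] += 1
    q_values[rate_idx] += (reward - q_values[rate_idx]) / counts[rate_idx]


# Drive past the AP: SNR rises as the vehicle approaches, then falls again.
for t in range(2000):
    snr = 30 - abs(t - 1000) * 0.025   # simple triangular SNR profile
    idx = select_rate()
    reward = RATES_MBPS[idx] if frame_success(idx, snr) else 0.0
    update(idx, reward)

print("Learned throughput estimates (Mb/s):",
      [round(q, 1) for q in q_values])

A stateless bandit like this ignores the position-dependent channel state that a full RL formulation would exploit; it is included only to make the per-frame rate-selection loop concrete.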
