Abstract

IoT stream processing is an emerging technology that plays a crucial role in enabling time-critical IoT applications, which often demand high accuracy and low latency. For example, Augmented Reality (AR) applications require high object detection and object localization precision as well as low latency. Note that the exact definition of accuracy varies depending on the specific application. Existing stream processing engines are insufficient to meet these requirements, since they cannot integrate and respond in a timely manner to variable network conditions in dynamic wireless environments. Recent efforts on adaptive streaming support user-specified policies to adapt to variable network conditions. However, existing works rely on manual policies or simplified models of the deployment environment, which limits their ability to achieve optimal performance across a broad range of network conditions and quality of experience (QoE) objectives. In this paper, we present RL-Adapt, a Reinforcement Learning-based Adaptive streaming system that generates adaptation policies using an RL strategy and provides declarative APIs for efficient development. RL-Adapt trains a neural network model that automatically selects the optimal policy based on the observed network conditions. RL-Adapt does not rely on pre-defined models or assumptions about the environment; instead, it learns to make decisions solely from observations of the performance of past decisions. We implemented RL-Adapt and evaluated it extensively on three representative real-world IoT applications. Our results show that RL-Adapt outperforms state-of-the-art schemes in terms of QoE metrics.
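The core idea the abstract describes, learning a policy purely from the observed QoE of past decisions under varying network conditions, can be sketched with a minimal contextual bandit in place of the paper's neural network. All names here (`PolicySelector`, the policy labels, `simulated_qoe`) are hypothetical illustrations, not the RL-Adapt implementation:

```python
import random

# Hypothetical adaptation policies an IoT streaming pipeline might choose among.
POLICIES = ["high_res_low_fps", "low_res_high_fps", "offload_to_edge"]

class PolicySelector:
    """Epsilon-greedy contextual bandit: no model of the environment,
    only running mean QoE observed for each (network condition, policy) pair."""

    def __init__(self, policies, epsilon=0.1, seed=0):
        self.policies = policies
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.value = {}   # (context, policy) -> mean observed QoE
        self.count = {}   # (context, policy) -> number of observations

    def select(self, context):
        # Explore with probability epsilon, otherwise exploit the best estimate
        # for the currently observed network condition.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.policies)
        return max(self.policies, key=lambda p: self.value.get((context, p), 0.0))

    def update(self, context, policy, qoe_reward):
        # Incremental mean update from the QoE resulting from a past decision.
        key = (context, policy)
        self.count[key] = self.count.get(key, 0) + 1
        v = self.value.get(key, 0.0)
        self.value[key] = v + (qoe_reward - self.value[key] if False else qoe_reward - v) / self.count[key]

def simulated_qoe(policy, context):
    # Stand-in environment: which policy yields the best QoE depends on bandwidth.
    if context == "high_bw":
        return 1.0 if policy == "high_res_low_fps" else 0.5
    return 1.0 if policy == "offload_to_edge" else 0.3

selector = PolicySelector(POLICIES, seed=42)
for step in range(2000):
    ctx = "high_bw" if step % 2 == 0 else "low_bw"   # alternating conditions
    chosen = selector.select(ctx)
    selector.update(ctx, chosen, simulated_qoe(chosen, ctx))
```

After training, the selector exploits a different policy per observed network condition, mirroring (in toy form) the abstract's claim that good adaptation decisions can be learned without a pre-defined environment model.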
