Abstract

The demand for real-time data processing has grown rapidly in Cyber-Physical Systems (CPSs), especially for data-intensive embedded real-time applications. To perceive and respond to environmental changes in a timely manner, the basic design requirement in such systems is to provide data services with high freshness. As modern CPSs become more complex, they exhibit a broad set of system mode-switch behaviors, some unforeseen, in a dynamic computational environment. Conventional control algorithms can hardly handle such new scenarios, since most of them assume that the operational behavior is fixed. In this paper, we study the problem of maximizing the freshness of data in multi-modal systems. We first use a recently proposed concept, Age of Information (AoI), to quantify the freshness of data, combining the AoI metric with real-time constraints. Then, we propose, to our knowledge, the first freshness-aware scheduling solution to this problem based on deep reinforcement learning (RL). Specifically, we develop an RL framework, FAS-DQN, that continuously updates its scheduling strategy to maximize the freshness of data in the long term. Extensive simulation experiments demonstrate that the proposed FAS-DQN outperforms traditional state-of-the-art methods in terms of data freshness.
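For illustration only, the following is a minimal Python sketch of the standard AoI definition used in this line of work: the age of a data item at time t is t minus the generation time of the most recently delivered update. The class and method names are hypothetical and do not come from the paper.

    import time
    from typing import Optional

    class AoITracker:
        """Tracks the Age of Information (AoI) of a single data item.

        AoI at time t is t - g, where g is the generation timestamp of the
        most recently delivered update of this item.
        """

        def __init__(self) -> None:
            # Generation timestamp of the freshest update delivered so far.
            self.last_generation_time: Optional[float] = None

        def deliver_update(self, generation_time: float) -> None:
            """Record that an update generated at `generation_time` was delivered."""
            if self.last_generation_time is None or generation_time > self.last_generation_time:
                self.last_generation_time = generation_time

        def age(self, now: Optional[float] = None) -> float:
            """Current AoI: elapsed time since the freshest delivered update was generated."""
            if self.last_generation_time is None:
                return float("inf")  # no update delivered yet, data is maximally stale
            if now is None:
                now = time.time()
            return now - self.last_generation_time

A scheduler that is freshness-aware in this sense would, at each decision point, prefer actions that keep such per-item ages low over the long run, which is the objective the paper formulates for its RL agent.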
