Abstract

Markov Decision Processes (MDPs) provide important capabilities for facilitating the dynamic adaptation and self-optimization of cyber-physical systems (CPS) at runtime. In recent years, this has primarily taken the form of Reinforcement Learning (RL) techniques that eliminate some MDP components in order to reduce computational requirements. In this work, we show that recent advancements in Compact MDP Models (CMMs) provide sufficient cause to question this trend when designing wireless sensor network nodes. We present a novel CMM-based approach to designing self-aware wireless sensor nodes and compare it to Q-Learning, a popular RL technique. We show that a certain class of CPS nodes is not well served by RL methods and contrast RL and CMM methods in this context. Through both simulation and a prototype implementation, we demonstrate that CMM methods can provide significantly better runtime adaptation performance than Q-Learning, with comparable resource requirements.
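For context on the Q-Learning baseline referenced above: tabular Q-Learning maintains a table of state-action values that is updated from observed transitions, and the storage and update cost of that table is what the resource comparison concerns. The sketch below is not drawn from the paper; it is a minimal illustration of the standard tabular Q-Learning update, where the state/action dimensions, learning rate, and discount factor are hypothetical values chosen for a small sensor-node policy.

```c
#include <stdio.h>

/* Hypothetical problem size for a resource-constrained sensor node. */
#define N_STATES  8   /* e.g., discretized battery/traffic levels (assumed) */
#define N_ACTIONS 4   /* e.g., duty-cycle settings (assumed) */

#define ALPHA 0.1f    /* learning rate (illustrative) */
#define GAMMA 0.9f    /* discount factor (illustrative) */

static float Q[N_STATES][N_ACTIONS];  /* tabular Q-values, zero-initialized */

/* Greedy action with respect to the current Q-table. */
static int best_action(int s)
{
    int a, best = 0;
    for (a = 1; a < N_ACTIONS; a++)
        if (Q[s][a] > Q[s][best])
            best = a;
    return best;
}

/* One Q-Learning update after observing transition (s, a, r, s'). */
static void q_update(int s, int a, float r, int s_next)
{
    float target = r + GAMMA * Q[s_next][best_action(s_next)];
    Q[s][a] += ALPHA * (target - Q[s][a]);
}

int main(void)
{
    /* Example transition: state 2, action 1, reward 0.5, next state 3. */
    q_update(2, 1, 0.5f, 3);
    printf("Q[2][1] = %f\n", Q[2][1]);
    return 0;
}
```

Even in this minimal form, the Q-table requires N_STATES x N_ACTIONS entries and learns only from sampled experience; a CMM-based design instead retains an explicit (compact) model of the MDP, which is the trade-off the paper evaluates.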
