Abstract

Real-time applications increasingly rely on context information to provide relevant and dependable features. Context queries require large-scale retrieval, inferencing, aggregation, and delivery of context using only limited computing resources, especially in distributed environments. If accessing context information is slow, inconsistent, or too expensive, real-time applications may lose both dependability and relevance. This paper argues that the transiency of context (i.e., its limited validity period), variations in the features of context query loads (e.g., request rates and differing Quality of Service (QoS) and Quality of Context (QoC) requirements), and the lack of prior knowledge about context needed to make near real-time adaptations are fundamental challenges that must be addressed to overcome these shortcomings. Hence, we propose a performance-metric-driven, reinforcement-learning-based adaptive context caching approach that aims to maximize both cost- and performance-efficiency for middleware-based Context Management Systems (CMSs). Although context-aware caching has been thoroughly investigated in the literature, our approach is novel because existing techniques are not fully applicable to caching context, owing to (i) the fundamental challenges above and (ii) their failure to address the limitations that hinder the dependability and consistency of context. Unlike previously tested modes of CMS operation and traditional data caching techniques, our approach provides real-time pervasive applications with cheaper, faster, and fresher high-quality context information. Compared to existing context-aware data caching algorithms, our technique is bespoke to caching context information, which differs from traditional data.
We also show that our full-cycle, context-lifecycle-based approach can maximize both cost- and performance-efficiency while maintaining adequate QoC, relying solely on real-time performance metrics and our heuristic techniques, without depending on any prior knowledge about the context, variations in query features, or quality demands, unlike previous work. Using a real-world-inspired scenario and a prototype middleware-based CMS integrated with our adaptive context caching approach, we demonstrate how real-time applications that are 85% faster can be more relevant and dependable to users, while costing 60.22% less to access context information than with existing techniques. Our model is also at least twice as fast to adapt, and more flexible, than existing benchmarks, even under uncertainty and a lack of prior knowledge about context, transiency, and variable context query loads.
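The abstract names a performance-metric-driven, reinforcement-learning-based caching decision over transient (limited-validity) context, but does not spell out the algorithm. The following is only a generic illustrative sketch of that idea, not the paper's actual method: an epsilon-greedy agent that learns, per context item, whether caching pays off from observed performance rewards (e.g., latency or cost savings), with a TTL modelling context transiency. The class and item names, reward model, and TTL value are all hypothetical.

```python
import random


class AdaptiveContextCache:
    """Illustrative sketch: epsilon-greedy cache-admission agent for
    transient context. Not the paper's algorithm; names are hypothetical."""

    def __init__(self, epsilon=0.1, ttl=5.0):
        self.epsilon = epsilon  # exploration rate
        self.ttl = ttl          # validity period: context expires (transiency)
        self.q = {}             # item -> estimated reward of caching it
        self.n = {}             # item -> number of reward observations
        self.cache = {}         # item -> (value, expiry_time)

    def should_cache(self, item):
        """Decide cache/no-cache using only learned performance estimates."""
        if random.random() < self.epsilon:
            return random.choice([True, False])   # explore
        return self.q.get(item, 0.0) >= 0.0       # exploit; unknown items default to caching

    def update(self, item, reward):
        """Incremental mean update from an observed reward (e.g., saved latency/cost)."""
        k = self.n.get(item, 0) + 1
        old = self.q.get(item, 0.0)
        self.n[item] = k
        self.q[item] = old + (reward - old) / k

    def put(self, item, value, now):
        self.cache[item] = (value, now + self.ttl)

    def get(self, item, now):
        """Return a cached value only while it is still valid."""
        entry = self.cache.get(item)
        if entry is not None and entry[1] > now:
            return entry[0]
        self.cache.pop(item, None)  # expired: transient context must be refreshed
        return None
```

With exploration disabled (`epsilon=0.0`), items whose observed rewards are positive keep being cached, items with negative rewards are evicted from consideration, and cached values silently expire once their validity period passes.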
