Abstract

Decentralized self-adaptive systems consist of multiple control loops, each managing a local system or component while pursuing both local goals and system-level global goals. Because the components operate together in a decentralized environment, no control loop can make adaptation decisions in isolation; the control loops must exchange their adaptation decisions to build global knowledge of the system. Existing decentralized self-adaptation approaches use this global knowledge to make decisions that optimize both local and global goals, but coordinating with an unbounded number of peers impairs scalability. This paper proposes a decentralized self-adaptation technique based on reinforcement learning that works with partial knowledge in order to reduce coordination overhead. A Q-learning algorithm based on Interaction-Driven Markov Games is used to make adaptation decisions, as it enables coordination only when coordination is beneficial. Rather than coordinating with an unbounded number of peers, each adaptation control loop coordinates with a single peer control loop. The proposed approach was evaluated on a service-based Tele Assistance System and compared to random, independent, and multiagent learners that assume global knowledge. In all cases, the proposed approach conformed to both local and global goals while incurring comparatively lower coordination overhead.
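The abstract describes Q-learning over Interaction-Driven Markov Games in which coordination with a single peer is itself a learned choice. The following is a minimal, illustrative Python sketch of that general idea only; the class name, the COORDINATE pseudo-action, and all parameters are assumptions made for illustration and do not reproduce the paper's actual algorithm or implementation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
COORDINATE = "COORDINATE"  # pseudo-action: observe the single peer's state before acting

class SparseCoordinationLearner:
    """Illustrative Q-learner in the spirit of Interaction-Driven Markov Games:
    acts independently by default and consults one peer only when the learned
    value of coordinating is higher (hypothetical sketch, not the paper's code)."""

    def __init__(self, local_actions):
        self.local_actions = list(local_actions)
        self.q_local = defaultdict(float)  # Q over the local state only
        self.q_joint = defaultdict(float)  # Q over the joint (own, peer) state

    def choose(self, state, peer_state):
        # epsilon-greedy over local actions plus the COORDINATE pseudo-action
        options = self.local_actions + [COORDINATE]
        if random.random() < EPSILON:
            action = random.choice(options)
        else:
            action = max(options, key=lambda a: self.q_local[(state, a)])
        if action == COORDINATE:
            # coordination cost is paid only here: the peer's state is queried
            joint = (state, peer_state)
            action = max(self.local_actions,
                         key=lambda a: self.q_joint[(joint, a)])
            return action, joint
        return action, None

    def update(self, state, joint, action, reward, next_state):
        best_next = max(self.q_local[(next_state, a)]
                        for a in self.local_actions + [COORDINATE])
        target = reward + GAMMA * best_next
        if joint is not None:
            # credit both the joint-state entry and the decision to coordinate
            self.q_joint[(joint, action)] += ALPHA * (target - self.q_joint[(joint, action)])
            self.q_local[(state, COORDINATE)] += ALPHA * (target - self.q_local[(state, COORDINATE)])
        else:
            self.q_local[(state, action)] += ALPHA * (target - self.q_local[(state, action)])
```

In this sketch, whether to coordinate is learned through the value of the COORDINATE entry, so the overhead of querying the single peer is incurred only in states where the joint view has proven beneficial.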
