Abstract

The use of learning, in particular reinforcement learning, has been explored in the context of policy-driven autonomic management as a means of aiding decision making. In this context, the autonomic manager "learns" a model of, for example, which actions to take in particular situations. However, when the set of policies changes, the learned model is typically discarded or, if retained, may yield misleading information. In contrast, this paper presents an approach for reusing past knowledge: a model learned under one set of active policies is transformed into a new model when those policies change, so that some of the learned knowledge can be utilized in the new environment. This is possible because our approach to modeling learning and adaptation depends only on the structure of the policies; consequently, changes to policies can be mapped onto transformations specific to the model derived from the use of those policies. In this paper, we describe the model construction and the kinds of policy modification, and we elaborate, with a detailed case study, on how such changes alter the currently learned model. Our analysis of the different kinds of policy modification also suggests that, in most cases, most of the learned model can still be reused. This can significantly accelerate the learning process and thereby improve the overall quality of service, as the results presented in this paper demonstrate.
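
To make the idea of reusing learned knowledge concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of transforming a table of learned values when the active policy set changes: entries unaffected by the change are carried over, entries tied to removed policies are dropped, and newly added policies start from an unlearned default. The function name, states, and policy names are illustrative assumptions only.

    # Hypothetical sketch: reuse a learned value table across a policy change.
    from collections import defaultdict

    def transform_q_table(q_table, removed_policies, added_policies, default_value=0.0):
        """q_table maps (state, policy_action) pairs to learned values.
        Entries for unchanged policies are kept; entries for removed policies
        are dropped; added policies start from the default (unlearned) value."""
        new_q = defaultdict(lambda: default_value)
        states = set()
        for (state, policy_action), value in q_table.items():
            states.add(state)
            if policy_action in removed_policies:
                continue                               # knowledge tied to a removed policy is discarded
            new_q[(state, policy_action)] = value      # reused knowledge
        for state in states:
            for policy_action in added_policies:
                new_q[(state, policy_action)] = default_value  # must be learned afresh
        return new_q

    # Illustrative use: three load-management policies, one replaced by a new one.
    q = {("high_load", "scale_out"): 0.8,
         ("high_load", "throttle"): 0.2,
         ("low_load", "scale_in"): 0.6}
    q_new = transform_q_table(q, removed_policies={"throttle"}, added_policies={"defer_batch"})

Under this sketch, only the values associated with the modified policies are lost, which mirrors the paper's observation that most of the learned model can be retained and the learning process accelerated.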
