Abstract

QoS-aware service composition is challenging due to the large number of QoS attributes, component services, and candidate services. Realistic service composition applications operate in uncertain environments where QoS values may change dynamically. Moreover, user requirements on QoS attributes should be considered, and their differing nature can make them difficult to express through relative weights. Reinforcement learning has been proposed as a viable approach to deal with the complexity and variability of the environment. In this paper, we propose a novel approach that integrates traditional reinforcement learning with a norm-based paradigm to handle cases where component services may have different numbers and types of QoS attributes. In this way, additional local requirements that hold only for specific component services can be taken into account while still pursuing global optimization. Norms provide a uniform formalism for expressing qualitative and quantitative, as well as hard and soft, user requirements. The approach has been tested on a real dataset of 2500 web services, demonstrating its performance, scalability, and adaptability.
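To make the idea concrete, the sketch below shows one possible way (not the authors' implementation) to combine Q-learning with norm-style requirements for service composition: states are workflow steps, actions are candidate concrete services, hard norms exclude non-compliant candidates, and soft norms subtract a penalty from the QoS-based reward. All service names, QoS values, and thresholds are made up for illustration.

```python
# Hypothetical sketch, assuming a sequential workflow where one candidate
# service is selected per step. Not the paper's algorithm or dataset.
import random
from collections import defaultdict

# Illustrative candidate services per workflow step, with QoS values
# normalized to [0, 1] (higher is better) -- fabricated example data.
CANDIDATES = {
    0: [{"name": "s0a", "availability": 0.95, "response": 0.7},
        {"name": "s0b", "availability": 0.80, "response": 0.9}],
    1: [{"name": "s1a", "availability": 0.99, "response": 0.5},
        {"name": "s1b", "availability": 0.85, "response": 0.8}],
}

def hard_norm_ok(step, svc):
    """Hard local requirement: e.g. step 1 must use a service with
    availability >= 0.9. Violating candidates are excluded outright."""
    return svc["availability"] >= 0.9 if step == 1 else True

def soft_norm_penalty(step, svc):
    """Soft requirement: penalize (but do not exclude) slow services."""
    return 0.2 if svc["response"] < 0.6 else 0.0

def reward(step, svc):
    # Aggregate QoS minus the soft-norm penalty.
    qos = 0.5 * svc["availability"] + 0.5 * svc["response"]
    return qos - soft_norm_penalty(step, svc)

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)  # (step, service name) -> estimated value
    for _ in range(episodes):
        for step in sorted(CANDIDATES):
            allowed = [s for s in CANDIDATES[step] if hard_norm_ok(step, s)]
            # Epsilon-greedy choice among norm-compliant candidates.
            if random.random() < eps:
                svc = random.choice(allowed)
            else:
                svc = max(allowed, key=lambda s: q[(step, s["name"])])
            r = reward(step, svc)
            nxt = max((q[(step + 1, s["name"])]
                       for s in CANDIDATES.get(step + 1, [])), default=0.0)
            key = (step, svc["name"])
            q[key] += alpha * (r + gamma * nxt - q[key])
    return q

if __name__ == "__main__":
    q = q_learning()
    for step in sorted(CANDIDATES):
        allowed = [s for s in CANDIDATES[step] if hard_norm_ok(step, s)]
        best = max(allowed, key=lambda s: q[(step, s["name"])])
        print(f"step {step}: {best['name']}")
```

In this toy setup, the hard norm at step 1 rules out the faster but less available service, while the soft norm merely lowers the reward of slow services, which mirrors the abstract's distinction between hard and soft user requirements applied locally to specific components.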
