QoS-aware service composition is challenging due to the large number of QoS attributes, component services, and candidate services. Realistic service composition applications operate in uncertain environments where QoS values may change dynamically. Moreover, user requirements on QoS attributes must be taken into account, and their heterogeneous nature makes it difficult to express them through relative weights. Reinforcement learning is a viable approach for dealing with the complexity and variability of the environment. In this paper, we propose a novel approach that integrates traditional reinforcement learning with a norm-based paradigm to handle cases where component services may expose different numbers and types of QoS attributes. In this way, additional local requirements that hold only for specific component services can be taken into account while still pursuing global optimization. Norms provide a uniform formalism for expressing qualitative and quantitative, as well as hard and soft, user requirements. The approach has been evaluated on a real dataset of 2,500 web services, demonstrating its performance, scalability, and adaptability.
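To make the idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of how tabular Q-learning can drive service selection under norms: states are workflow tasks, actions are candidate concrete services, hard norms exclude candidates outright, and soft norms subtract a penalty from the QoS-based reward. All task counts, attribute names, thresholds, and hyperparameters below are hypothetical, and the sketch also shows candidates exposing different QoS attribute sets.

```python
import random

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000  # hypothetical settings

# Hypothetical workflow of 3 abstract tasks; each candidate service exposes
# its own (possibly different) set of QoS attributes, normalized to [0, 1].
candidates = [
    [{"resp": 0.9, "avail": 0.8}, {"resp": 0.5, "avail": 0.95}],
    [{"resp": 0.7, "cost": 0.6},  {"resp": 0.4, "cost": 0.9}],
    [{"avail": 0.85},             {"avail": 0.6}],
]

def hard_norm_ok(task, qos):
    # Hard local requirement (example): task 0 demands availability >= 0.9.
    return not (task == 0 and qos.get("avail", 1.0) < 0.9)

def soft_norm_penalty(task, qos):
    # Soft requirement (example): prefer a response-time score >= 0.6.
    return 0.3 if qos.get("resp", 1.0) < 0.6 else 0.0

def reward(task, qos):
    base = sum(qos.values()) / len(qos)  # aggregate whatever attributes exist
    return base - soft_norm_penalty(task, qos)

Q = [[0.0] * len(c) for c in candidates]

random.seed(0)
for _ in range(EPISODES):
    for task, cands in enumerate(candidates):
        # Hard norms restrict the action space before selection.
        allowed = [a for a, q in enumerate(cands) if hard_norm_ok(task, q)]
        if random.random() < EPSILON:
            a = random.choice(allowed)              # explore
        else:
            a = max(allowed, key=lambda i: Q[task][i])  # exploit
        nxt = max(Q[task + 1]) if task + 1 < len(candidates) else 0.0
        Q[task][a] += ALPHA * (reward(task, cands[a]) + GAMMA * nxt - Q[task][a])

policy = [max(range(len(c)), key=lambda i: Q[t][i]) for t, c in enumerate(candidates)]
print(policy)  # one selected candidate index per task
```

Note how the hard norm forces task 0 away from the candidate with the best aggregate QoS, while the soft-norm penalty only biases the reward, so the learned policy still optimizes the composition globally.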