Abstract
Over the last few years, big data has emerged as a paradigm for processing and analyzing large volumes of data. Coupled with other paradigms, such as cloud computing, service computing, and the Internet of Things, big data processing takes advantage of the underlying cloud infrastructure, which allows hosting and managing massive amounts of data, while service computing allows various data sources to be processed and delivered as on-demand services. This synergy between multiple paradigms has led to the emergence of big services, a cross-domain, large-scale, big data-centric service model. Apart from the adaptation issues (e.g., the need to react quickly to changes) inherited from other service models, the massiveness and heterogeneity of big services add a new layer of complexity to the way such a large-scale service ecosystem is managed when execution deviates. Indeed, big services are often subject to frequent deviations at both the functional (e.g., service failure, QoS degradation, and IoT resource unavailability) and data (e.g., data source unavailability or access restrictions) levels. Handling these execution problems is beyond the capacity of traditional web/cloud service management tools, and the majority of big service approaches have targeted only specific management operations, such as selection and composition. To maintain a stable state and high quality of their cross-domain execution, big services should be continuously monitored and managed in a scalable and autonomous way. To cope with the absence of self-management frameworks for large-scale services, the goal of this work is to design an autonomic management solution that takes full control of big services through an autonomous and distributed lifecycle process. We combine autonomic computing and big data processing paradigms to endow big services with self-* and parallel processing capabilities. The proposed management framework leverages the well-known MapReduce programming model and Apache Spark, and manages big service-related data using knowledge graph technology. We also define a scalable embedding model that processes and learns latent big service knowledge in a distributed manner. Finally, a cooperative decision mechanism is defined to trigger non-conflicting management policies in response to captured deviations of the running big service. The big service management tasks (monitoring, embedding, and decision), as well as the core modules (autonomic managers' controller, embedding module, and coordinator), are implemented on top of Apache Spark as MapReduce jobs, while the processed data are represented as resilient distributed dataset (RDD) structures. To exploit the shared information exchanged between the workers and the master node (coordinator), and to further resolve conflicts between management policies, we endow the proposed framework with a lightweight communication mechanism that transfers useful knowledge between the running map-reduce tasks and filters out inappropriate intermediate data (e.g., conflicting actions). The experimental results demonstrate the improved quality of embeddings and the high performance of autonomic managers in a parallel, cooperative setting, thanks to the shared knowledge.
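To illustrate the kind of map-reduce monitoring and decision pass the abstract describes, the following minimal PySpark sketch maps observed component states to corrective actions and reduces them to one non-conflicting action per component, with a broadcast variable standing in for the shared knowledge pushed from the coordinator to the workers. The component names, QoS threshold, and policy table are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (assumed names/values, not the paper's code): a map-reduce style
# monitoring pass over big-service components on Apache Spark.
from pyspark import SparkContext

sc = SparkContext(appName="big-service-monitoring-sketch")

# Shared knowledge broadcast from the coordinator to all workers,
# standing in for the paper's lightweight communication mechanism.
policy_table = sc.broadcast({"qos_degradation": "substitute_service",
                             "data_unavailable": "switch_data_source"})

# Each record: (component_id, observed_latency_ms, data_source_reachable)
observations = sc.parallelize([
    ("svc-payment", 120, True),
    ("svc-traffic", 900, True),    # latency above threshold -> deviation
    ("svc-weather", 80,  False),   # data source unreachable -> deviation
])

def detect(record):
    """Map task of an autonomic manager: emit (component, corrective action) pairs."""
    component, latency, reachable = record
    actions = []
    if latency > 500:  # hypothetical QoS threshold
        actions.append((component, policy_table.value["qos_degradation"]))
    if not reachable:
        actions.append((component, policy_table.value["data_unavailable"]))
    return actions

# Reduce step: keep a single action per component, discarding duplicates
# (a naive stand-in for the coordinator's conflict-filtering role).
decisions = (observations.flatMap(detect)
                         .reduceByKey(lambda a, b: a)
                         .collect())
print(decisions)
```

In the actual framework the reduce step would apply the cooperative decision mechanism rather than simply keeping the first action, but the RDD-based data flow and the broadcast of shared knowledge follow the same pattern.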