• In workflow performance prediction, DAG structure matters.
• DAG-Transformer effectively embeds the DAG information and outperforms mainstream ML, DL, and GCN methods.
• A new dataset for cloud workflow performance prediction is released, together with the source code.

With the rapid growth of cloud computing, efficient operational optimization and resource scheduling of complex cloud business processes rely on real-time and accurate performance prediction. Previous research on cloud computing performance prediction has focused on qualitative (heuristic-rule), model-driven, or coarse-grained time-series prediction, which ignores the historical performance, resource allocation status, and service sequence relationships of workflow services. There are even fewer studies on prediction for workflow graph data, owing to the lack of available public datasets. In this study, we extract nearly one billion offline task instance records from Alibaba Cloud's Cluster-trace-v2018 into a new dataset, which contains approximately one million workflows and their corresponding directed acyclic graph (DAG) matrices. We propose a novel workflow performance prediction model, DAG-Transformer, to address these challenges. In DAG-Transformer, we design a customized position encoding matrix and an attention mask for workflows, which exploit a workflow's sequential and graph relations to improve the embedding representation and perception ability of the deep neural network. The experiments validate the necessity of integrating graph-structure information in workflow prediction. Compared with mainstream deep learning (DL) methods and several classic machine learning (ML) algorithms, DAG-Transformer achieves the highest accuracy: 85-92% for CPU prediction and 94-98% for memory prediction, while maintaining high efficiency and low overhead. This study establishes a new paradigm and baseline for workflow performance prediction and offers a new way to facilitate workflow scheduling.
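The abstract does not give the exact construction of the position encoding matrix or the attention mask; the following is an illustrative sketch only. One plausible reading is that each task may attend along DAG reachability (itself and its descendants), and that positions are indexed by topological depth in the DAG rather than by sequence order. All function names and design choices below are assumptions, not the paper's implementation.

```python
import numpy as np

def dag_attention_mask(adj: np.ndarray) -> np.ndarray:
    """Boolean attention mask derived from a DAG adjacency matrix.

    adj[i, j] == 1 means task i directly precedes task j. Each task is
    allowed to attend to itself and to every task reachable from it
    (transitive closure), so attention follows workflow dependencies.
    Hypothetical design; the paper's mask may differ.
    """
    n = adj.shape[0]
    reach = adj.astype(bool) | np.eye(n, dtype=bool)
    # Vectorized Floyd-Warshall-style transitive closure, O(n^3)
    for k in range(n):
        reach |= reach[:, k:k + 1] & reach[k:k + 1, :]
    return reach  # True = attention allowed

def topological_depth_encoding(adj: np.ndarray, d_model: int) -> np.ndarray:
    """Sinusoidal encoding indexed by each task's DAG depth (longest
    path from any source node) instead of its sequence position.
    Assumes an even d_model for simplicity."""
    n = adj.shape[0]
    depth = np.zeros(n, dtype=int)
    indeg = adj.sum(axis=0)
    queue = [i for i in range(n) if indeg[i] == 0]
    # Kahn's algorithm: a node is processed only after all predecessors,
    # so its longest-path depth is final when dequeued.
    while queue:
        u = queue.pop()
        for v in np.nonzero(adj[u])[0]:
            depth[v] = max(depth[v], depth[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    pos = depth[:, None].astype(float)
    div = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    enc = np.zeros((n, d_model))
    enc[:, 0::2] = np.sin(pos * div)
    enc[:, 1::2] = np.cos(pos * div)
    return enc
```

Under this reading, tasks at the same DAG depth share a position encoding even if they appear at different sequence positions, and the mask prevents attention across unrelated branches; both are ways of injecting graph structure into a standard Transformer, which is the general idea the abstract describes.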